Advancing Fall Detection Utilizing Skeletal Joint Image Representation and Deformable Layers

Authors

  • Hamza Ergüder, Yildiz Technical University
  • Tuncay Uzun, Yildiz Technical University
  • Murat Baday, Stanford University School of Medicine

DOI:

https://doi.org/10.5566/ias.3087

Keywords:

Computer Vision, Deep Learning, Fall Detection, Pose Estimation

Abstract

Falls are a significant concern among the elderly population, with 25% of individuals over 65 years old experiencing a fall severe enough to require a visit to the emergency department each year. Early detection of falls can prevent serious injuries and complications, making fall detection an important problem to address. There are various methods for detecting falls that utilize different types of sensor input data; however, when considering factors such as ease of setup, accessibility, and accuracy, camera-based fall detection is a highly effective approach. In this study, a novel video-based fall detection algorithm that relies on skeleton joints is introduced. The results of pose estimation are preprocessed into an image representation, and a ShuffleNet V2 model with the addition of a Deformable Layer is employed for classification. Experiments were carried out on four distinct datasets: URFD, UP-Fall Detection, Le2i, and NTU RGB+D 60, which encompass individuals engaged in various activities, including falls. The results show strong performance across all these datasets, affirming the efficacy of the approach in accurately detecting falls in video footage.
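The abstract does not specify where the Deformable Layer sits in the network or how the skeleton-joint image is sized, so the following is only a minimal PyTorch sketch of the general idea: a deformable convolution block (torchvision's DeformConv2d) placed in front of a ShuffleNet V2 backbone whose final layer is replaced by a binary fall / no-fall head. The module names, the 224×224 input size, and the placement of the deformable block are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' exact architecture): a deformable
# convolution block feeding a ShuffleNet V2 backbone that classifies a
# skeleton-joint image as fall / no-fall. Layer placement and sizes are
# assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """3x3 deformable convolution whose sampling offsets are predicted
    from the input by a plain convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (x, y) per kernel position: 2 * 3 * 3 = 18 channels.
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)
        return self.act(self.bn(self.deform_conv(x, offsets)))


class FallClassifier(nn.Module):
    """Deformable block followed by a ShuffleNet V2 backbone and a
    binary head (fall vs. other activity)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.deform = DeformableBlock(3, 3)  # keep 3 channels for the backbone
        self.backbone = shufflenet_v2_x1_0(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.deform(x))


if __name__ == "__main__":
    # A batch of 8 skeleton-joint images rendered at 224x224 (assumed size).
    dummy = torch.randn(8, 3, 224, 224)
    logits = FallClassifier()(dummy)
    print(logits.shape)  # torch.Size([8, 2])
```

The deformable block lets the kernel sample off-grid locations around the drawn joints, which is the stated motivation for adding it to a standard lightweight backbone; training would proceed with an ordinary cross-entropy loss over the fall / no-fall labels.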

References

Aharon N, Orfaig R, Bobrovsky BZ (2022). Bot-sort: Robust associations multi-pedestrian tracking. arXiv:2206.14651.

Bisong E (2019). Google colaboratory. Building machine learning and deep learning models on Google Cloud Platform: a comprehensive guide for beginners, 59–64.

Charfi I, Miteran J, Dubois J, Atri M, Tourki R (2013). Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and adaboost-based classification. Journal of Electronic Imaging 22:041106.

Dai J, Qi H, Xiong Y, Li Y, Zhang G, Hu H, Wei Y (2017). Deformable convolutional networks. arXiv:1703.06211.

Delahoz Y, Labrador M (2014). Survey on fall detection and fall prevention using wearable and external sensors. Sensors 14:19806–42.

Dentamaro V, Impedovo D, Pirlo G (2021). Fall detection by human pose estimation and kinematic theory. In: 2020 25th International Conference on Pattern Recognition (ICPR), IEEE.

Duan H, Wang J, Chen K, Lin D (2022). Dg-stgcn: Dynamic spatial-temporal modeling for skeleton-based action recognition. arXiv:2210.05895.

Espinosa R, Ponce H, Gutiérrez S, Martínez-Villaseñor L, Brieva J, Moya-Albor E (2019). A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-fall detection dataset. Computers in Biology and Medicine 115:103520.

Fang HS, Xie S, Tai YW, Lu C (2016). Rmpe: Regional multi-person pose estimation. arXiv:1612.00137.

Galvao YM, Portela L, Ferreira J, Barros P, De Araujo Fagundes OA, Fernandes BJT (2021). A framework for anomaly identification applied on fall detection. IEEE Access 9:77264–74.

Gutiérrez J, Martin S, Rodriguez V (2023). Human stability assessment and fall detection based on dynamic descriptors. IET Image Processing 17:3177–95.

He K, Zhang X, Ren S, Sun J (2015). Deep residual learning for image recognition. arXiv:1512.03385.

Jager TE, Weiss HB, Coben JH, Pepe PE (2000). Traumatic brain injuries evaluated in U.S. emergency departments, 1992–1994. Academic Emergency Medicine 7:134–40.

Jocher G, Chaurasia A, Qiu J (2023). YOLO by Ultralytics. https://github.com/ultralytics/ultralytics.

Juraev S, Ghimire A, Alikhanov J, Kakani V, Kim H (2022). Exploring human pose estimation and the usage of synthetic data for elderly fall detection in real-world surveillance. IEEE Access 10:94249–61.

Kingma DP, Ba J (2014). Adam: A method for stochastic optimization. arXiv:1412.6980.

Kwolek B, Kepski M (2014). Human fall detection on embedded platform using depth maps and wireless accelerometer. Computer Methods and Programs in Biomedicine 117:489–501.

Li J, Gao M, Li B, Zhou D, Zhi Y, Zhang Y (2022). Kamtfenet: a fall detection algorithm based on keypoint attention module and temporal feature extraction. International Journal of Machine Learning and Cybernetics 14:1831–44.

Lin TY, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Zitnick CL, Dollár P (2014). Microsoft coco: Common objects in context. arXiv:1405.0312.

Ma N, Zhang X, Zheng HT, Sun J (2018). Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv:1807.11164.

Martínez-Villaseñor L, Ponce H, Brieva J, Moya-Albor E, Núñez-Martínez J, Peñafort-Asturiano C (2019). UP-fall detection dataset: A multimodal approach. Sensors 19:1988.

Moreland B, Kakara R, Henry A (2020). Trends in nonfatal falls and fall-related injuries among adults aged ≥65 years — United States, 2012–2018. MMWR Morbidity and Mortality Weekly Report 69:875–81.

Nooruddin S, Islam MM, Sharna FA, Alhetari H, Kabir MN (2021). Sensor-based fall detection systems: a review. Journal of Ambient Intelligence and Humanized Computing 13:2735–51.

Ramirez H, Velastin SA, Cuellar S, Fabregas E, Farias G (2023). Bert for activity recognition using sequences of skeleton features and data augmentation with gan. Sensors 23:1400.

Ramirez H, Velastin SA, Meza I, Fabregas E, Makris D, Farias G (2021). Fall detection and activity recognition using human skeleton features. IEEE Access 9:33532–42.

Serpa YR, Nogueira MB, Neto PPM, Rodrigues MAF (2020). Evaluating pose estimation as a solution to the fall detection problem. In: 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH).

Shahroudy A, Liu J, Ng TT, Wang G (2016). NTU RGB+D: A large scale dataset for 3D human activity analysis. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE.

Singh T, Vishwakarma DK (2018). Human activity recognition in video benchmarks: A survey. In: Lecture Notes in Electrical Engineering. Springer Singapore, 247–59.

Sterling DA, O’Connor JA, Bonadies J (2001). Geriatric falls: Injury severity is high and disproportionate to mechanism. The Journal of Trauma Injury Infection and Critical Care 50:116–19.

Taufeeque M, Koita S, Spicher N, Deserno TM (2021). Multi-camera, multi-person, and real-time fall detection using long short term memory. In: Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications.

Tsai TH, Hsu CW (2019). Implementation of fall detection system based on 3d skeleton for deep learning technique. IEEE Access 7:153049–59.

Wang BH, Yu J, Wang K, Bao XY, Mao KM (2020). Fall detection based on dual-channel feature integration. IEEE Access 8:103443–53.

Weytjens H, De Weerdt J (2020). Process Outcome Prediction: CNN vs. LSTM (with Attention), vol. 397. Cham: Springer International Publishing, 321–33.

Yadav SK, Luthra A, Tiwari K, Pandey HM, Akbar SA (2022). ARFDNet: An efficient activity recognition & fall detection system using latent feature pooling. Knowledge-Based Systems 239:107948.

Yuan J, Liu C, Liu C, Wang L, Chen Q (2022). Real-time human falling recognition via spatial and temporal self-attention augmented graph convolutional network. In: 2022 IEEE International Conference on Real-time Computing and Robotics (RCAR).

Zahan S, Hassan GM, Mian A (2023). Sdfa: Structure-aware discriminative feature aggregation for efficient human fall detection in video. IEEE Transactions on Industrial Informatics 19:8713–21.

Zhao Z, Zhang L, Shang H (2022). A lightweight subgraph-based deep learning approach for fall recognition. Sensors 22:5482.

Published

2024-03-27

How to Cite

Ergüder, H., Uzun, T., & Baday, M. (2024). Advancing Fall Detection Utilizing Skeletal Joint Image Representation and Deformable Layers. Image Analysis and Stereology, 43(1), 97–107. https://doi.org/10.5566/ias.3087

Issue

Vol. 43 No. 1 (2024)

Section

Original Research Paper