Hierarchical multi-view aggregation network for sensor-based human activity recognition
Authors:
Xiheng Zhang (aff001); Yongkang Wong (aff002); Mohan S. Kankanhalli (aff002); Weidong Geng (aff001)
Author affiliations:
aff001: State Key Laboratory of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang Province, China
aff002: School of Computing, National University of Singapore, Singapore, Singapore
Published in:
PLoS ONE 14(9)
Category:
Research Article
DOI:
https://doi.org/10.1371/journal.pone.0221390
Abstract
Sensor-based human activity recognition aims to detect various physical activities performed by people wearing ubiquitous sensors. Unlike existing deep learning-based methods, which mainly extract black-box features from raw sensor data, we propose a hierarchical multi-view aggregation network built on multi-view feature spaces. Specifically, we first construct multiple views of the feature space for each individual sensor, comprising both white-box and black-box features. Our model then learns a unified representation of the multi-view features by aggregating the views hierarchically at the feature level, the position level, and the modality level, with a dedicated aggregation module designed for each level. Building on the ideas of non-local operations and attention, our fusion method captures correlations between features and leverages relationships across different sensor positions and modalities. We comprehensively evaluate our method on 12 human activity benchmark datasets, and the resulting accuracy outperforms state-of-the-art approaches.
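The attention-based view aggregation described in the abstract can be illustrated with a minimal NumPy sketch. This is a toy illustration of the general idea, not the authors' implementation; the function name `attention_aggregate`, the projection vector `w`, and all shapes are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(views, w):
    """Fuse V view features of shape (V, D) into one (D,) vector.

    Each view (e.g., a white-box time-domain feature vector or a
    black-box CNN feature vector for one sensor) is scored by a
    learned projection w, and the views are combined as a
    softmax-weighted sum. All names here are illustrative.
    """
    scores = views @ w        # one relevance score per view, shape (V,)
    alpha = softmax(scores)   # attention weights summing to 1
    return alpha @ views      # weighted sum of views, shape (D,)

rng = np.random.default_rng(0)
views = rng.normal(size=(3, 8))  # e.g., three views of one sensor's window
w = rng.normal(size=8)
fused = attention_aggregate(views, w)
```

In the paper's hierarchy, a fusion step of this kind would be applied repeatedly: first across feature views of one sensor, then across sensor positions, then across modalities.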
Keywords:
Engineering and technology – Electronics – Accelerometers – Equipment – Measurement equipment – Magnetometers – Computer and information sciences – Recurrent neural networks – Artificial intelligence – Machine learning – Deep learning – Biology and life sciences – Neuroscience – Neural networks – Sensory perception – Vision – Psychology – Research and analysis methods – Mathematical and statistical techniques – Mathematical functions – Time domain analysis – Social sciences – Physical sciences – Physics – Thermodynamics – Entropy