Learning Human Activity From Visual Data Using Deep Learning


Bibliographic Details
Main Author: Taha Alhersh (author)
Other Authors: Heiner Stuckenschmidt (author), Atiq Ur Rehman (author), Samir Brahim Belhaouari (author)
Published: 2021
Description
Summary:<p>Advances in wearable technologies have the potential to revolutionize and improve people's lives. The gains extend beyond the personal sphere to business and, by extension, the global economy. These technologies are embedded in electronic devices that collect data from consumers' bodies and their immediate surroundings. Human activity recognition, which uses various body sensors and modalities either separately or simultaneously, is one of the most important areas of wearable-technology development. In real-life scenarios, the number of deployed sensors is dictated by practical and financial considerations. In the research for this article, we built on our earlier work and accordingly reduced the number of required sensors, restricting ourselves to first-person vision data for activity recognition. Nonetheless, our results surpass the state of the art by more than 4% in F1 score.</p><h2>Other Information</h2><p>Published in: IEEE Access<br>License: <a href="https://creativecommons.org/licenses/by/4.0/legalcode" target="_blank">https://creativecommons.org/licenses/by/4.0/</a><br>See article on publisher's website: <a href="https://dx.doi.org/10.1109/access.2021.3099567" target="_blank">https://dx.doi.org/10.1109/access.2021.3099567</a></p>