
dc.contributor.author: de Lange, Sindre Eik
dc.contributor.author: Heilund, Stian Amland
dc.date.accessioned: 2019-09-18T06:31:52Z
dc.date.available: 2019-09-18T06:31:52Z
dc.date.issued: 2019-06-28
dc.date.submitted: 2019-06-27T22:00:07Z
dc.identifier.uri: https://hdl.handle.net/1956/20845
dc.description.abstract: Many Western countries, including Norway, face demographic challenges caused by a growing population of advanced age and the consequent large expense of care facilities (in Norwegian, eldrebølgen). A common denominator for the health conditions faced by the elderly is that they can be improved through physical therapy. By combining state-of-the-art methods in deep learning and robotics, one can potentially develop systems for assisting in rehabilitation training for patients suffering from various diseases, such as stroke. Such systems can be made to operate without physical contact, i.e. as socially assistive robots. As of this writing, the state of the art for action recognition is presented in the paper "Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition", which introduces a deep learning model called the spatial temporal graph convolutional network (ST-GCN), trained on DeepMind's Kinetics dataset. We combine the ST-GCN model with the Robot Operating System (ROS) into a system deployed on a TurtleBot 3 Waffle Pi equipped with an NVIDIA Jetson AGX Xavier and a web camera mounted on top. This results in a completely self-contained system, able to interact with people, both interpreting input and producing relevant responses. Furthermore, we achieve a substantial decrease in inference time compared to the original ST-GCN pipeline, making it about 150 times faster and achieving close to real-time processing of video input. We also run multiple experiments to increase the model's accuracy, such as transfer learning, layer freezing, and hyperparameter tuning, focusing on batch size, learning rate, and weight decay. (en_US)
dc.language.iso: nob (eng)
dc.publisher: The University of Bergen (en_US)
dc.rights: Copyright the Author. All rights reserved (eng)
dc.title: Autonomous mobile robots - Giving a robot the ability to interpret human movement patterns, and output a relevant response. (en_US)
dc.type: Master thesis
dc.date.updated: 2019-06-27T22:00:07Z
dc.rights.holder: Copyright the Author. All rights reserved (en_US)
dc.description.degree: Masteroppgave i informatikk (en_US)
dc.description.localcode: INF399
dc.description.localcode: MAMN-PROG
dc.description.localcode: MAMN-INF
dc.subject.nus: 754199
fs.subjectcode: INF399
fs.unitcode: 12-12-0

