Promote workers’ safety awareness during human-robot collaboration
In recent years, human-robot collaboration has emerged as a thriving work configuration in which human workers and collaborative robots share the same workspace and work side by side. In this project, we seek to understand the profile of workers’ safety awareness and mental stress in response to co-robot appearance and actions, and to develop and evaluate an intervention method that promotes workers’ safety awareness and mental health through co-robot actions. This research is supported by NSF 2024688.
Enhance the cross-support of data science and biomechanics in undergraduate education
Biomechanics is the study of the movement of living things using motion data and the science of mechanics. Data science is a field that uses scientific methodologies, processes, and algorithms to extract insights from structured and unstructured data. This project will 1) enable students to become the source of the body movement data, and 2) help students analyze these personally relevant datasets using data science methods. This research is supported by NSF 2013451.
Visualize biomechanical exposure in augmented reality
The goal of this study is to design and develop an immersive augmented reality (AR) approach that delivers knowledge of biomechanical exposures to workers and encourages them to perform occupational tasks with appropriate body movements, reducing the risk of musculoskeletal disorders (MSDs). This research is supported by NSF 1822477.
Vision-based ergonomic assessment through automated pose detection
Current assessment of musculoskeletal disorder (MSD) risk exposure predominantly relies on pen-and-paper observational methods, which are time-consuming and highly subjective, as results depend on observers’ experience. In this research, we seek to apply computer vision algorithms to automatically infer a worker’s exposure level (i.e., RULA score) directly from images. This research is supported by the NC Occupational Safety & Health Education & Research Center (NC OSHERC) [Video]. In this project, we have also created a multimodal benchmark dataset that includes human kinematics data as well as synchronized videos of 12 participants performing 25 different occupational tasks. This dataset is available for public access. Please click MOPED25 for further details.
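To illustrate the idea of scoring posture from detected keypoints, the sketch below computes the RULA upper-arm sub-score from two 2D joint locations. It is a minimal, simplified illustration, not the project's actual pipeline: the keypoints would come from a pose-detection model, the coordinate convention (image y-axis pointing down) is assumed, and only the first RULA step (upper-arm flexion angle, scored against the 20/45/90-degree bands of the RULA worksheet) is shown.

```python
import math

def upper_arm_angle(shoulder, elbow):
    """Angle (degrees) of the upper arm away from the downward vertical,
    from 2D keypoints (x, y) in image coordinates (y increases downward).
    0 degrees = arm hanging straight down; 90 = raised to horizontal."""
    dx = elbow[0] - shoulder[0]
    dy = elbow[1] - shoulder[1]
    # Angle between the shoulder->elbow vector and the "straight down" direction.
    return math.degrees(math.atan2(abs(dx), dy))

def rula_upper_arm_score(angle_deg):
    """Simplified RULA step 1: score upper-arm flexion.
    Band thresholds (20/45/90 degrees) follow the RULA worksheet;
    adjustments (shoulder raise, abduction, support) are omitted here."""
    if angle_deg < 20:
        return 1
    if angle_deg < 45:
        return 2
    if angle_deg < 90:
        return 3
    return 4
```

In the full method, a pose-detection network would supply all the joints needed for the remaining RULA steps (lower arm, wrist, neck, trunk), and the sub-scores would be combined through the standard RULA tables.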
Inferring driving distraction through driver’s body motion
Driver distraction is a critical issue in transportation safety. In this study, we seek to apply deep neural networks to human kinematics data and images to infer drivers’ distracted driving behaviors. The results can help in designing adequate distraction mitigation strategies for intelligent vehicles. This research is supported by NCSU Research and Innovation Seed Funding (RISF).
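As a rough illustration of inferring behavior from body motion, the sketch below summarizes a window of a single hypothetical kinematic channel (right-wrist horizontal position) and labels it with a nearest-centroid classifier. This is a deliberately lightweight stand-in for the deep neural networks the project uses; the feature choice, class names, and centroid values are all invented for illustration.

```python
import statistics

def window_features(wrist_xs):
    """Summarize a window of right-wrist x-positions (hypothetical,
    normalized 0-1 channel): mean position and movement variability."""
    return (statistics.mean(wrist_xs), statistics.pstdev(wrist_xs))

def nearest_centroid(features, centroids):
    """Assign the label whose class centroid is closest in feature space
    (a simple stand-in for a trained deep network classifier)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Hypothetical centroids, as if estimated from labeled driving data:
CENTROIDS = {
    "attentive": (0.2, 0.01),  # hand near the wheel, little movement
    "texting":   (0.6, 0.05),  # hand away from the wheel, frequent motion
}
```

A real system would replace the hand-picked features and centroids with representations learned end-to-end from kinematics sequences or video frames.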