In a Human-Robot Collaboration (HRC) manufacturing system, human operators and robots team up to complete complex tasks across a diverse range of scenarios in highly dynamic and uncertain shop-floor environments. The robots are expected both to perform tasks independently and to assist humans. The goals of the collaboration are to: 1) ensure safety in the collaborative workspace and 2) increase production efficiency. To this end, the robots should accurately capture the human operator’s actions and understand their intentions, while accounting for the variability and heterogeneity among human operators performing the same tasks.
Human body motions associated with certain tasks may be similar regardless of the tasks’ context. For example, there may be no distinct difference between the body motions used to grasp a part and those used to grasp a tool (e.g., a screwdriver). In an HRC system, human actions are therefore first recognized as generic body motions (e.g., standing, grasping, holding). Once motion recognition is performed, the context of the actions is recognized to help identify the operator’s intention. This allows the robot to understand which specific action the human operator intends to perform so that it can assist accordingly. For example, when the robot observes a human holding a screwdriver, it would infer that the human intends to drive screws; in response, it would fetch a screw and pass it to the human operator.
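The two-stage idea above (generic motion recognition first, then context-based intention inference) can be sketched as a minimal rule-based mapping. This is only an illustration of the control flow, not the system's actual method; all names here (`INTENT_RULES`, `infer_intent`, the motion and object labels) are hypothetical, and a real system would replace the lookup with learned recognition models.

```python
# Hypothetical sketch of the two-stage pipeline: stage 1 yields a generic
# motion label (e.g., "holding"); stage 2 combines it with scene context
# (the object involved) to infer the operator's intent and pick a robot
# response. All labels and rules below are illustrative assumptions.

INTENT_RULES = {
    # (generic motion, object context) -> (inferred intent, robot action)
    ("holding", "screwdriver"): ("drive_screws", "fetch_and_pass_screw"),
    ("grasping", "part"): ("assemble_part", "hold_fixture_steady"),
}


def infer_intent(motion: str, context_object: str) -> tuple[str, str]:
    """Map a recognized motion plus its context to an intent and response.

    Falls back to ("unknown", "wait") so the robot stays passive when the
    scenario is not covered by the rules.
    """
    return INTENT_RULES.get((motion, context_object), ("unknown", "wait"))


# The screwdriver scenario from the text:
print(infer_intent("holding", "screwdriver"))
# An uncovered scenario leaves the robot waiting:
print(infer_intent("standing", "nothing"))
```

In practice, both stages would be probabilistic classifiers rather than a dictionary, but the structure, motion label in, context-conditioned intent and robot action out, is the same.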
Demo of Human Motion Recognition in HRC