Securing robust communication between operators and robots and/or machines inside the workplace, even under noise and poor lighting conditions, is essential for building robust collaborative systems. Humans can control their tasks by making use of speech and gestures, while machines and robots can adapt to human beings by regulating movements, space accessibility and contingencies. This acts on the quality and productivity of processes in different contexts (robotic surgery, dis/assembly tasks, virtual control, augmented training) and in unstructured jobs (e.g., maintenance actions). Safety and reliability benefit from robust communication, especially in flexible working environments where there are no physical barriers between humans and robots. Due to the dynamic interactions, the complex workplace faces increased uncertainty, which in turn increases the probability of human-system error. In this work, we explore the use - with an integrable, non-invasive, configurable and open-access solution - of Machine Learning algorithms paired with Computer Vision to investigate human operators' fatigue and its possible use in understanding reliability inside the workplace. We propose a solution to integrate gesture recognition in collaborative robotics by controlling robot actions remotely, acting on the bi-directional (stopping robots and/or warning humans) perceived adaptability level, with the intent of further investigating reliability in the workplace. (C) 2022 The Authors. Published by Elsevier B.V.
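The bi-directional adaptability loop described in the abstract (stopping robots and/or warning humans) can be sketched as a simple decision function. This is a minimal illustrative sketch, not the authors' implementation: the gesture labels, the normalized fatigue score, and the threshold are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the bi-directional decision logic: a recognized
# gesture (e.g., from a Computer Vision pipeline) can stop the robot, while
# an estimated operator fatigue level can trigger a warning to the human.
# All labels and thresholds below are illustrative assumptions.

from dataclasses import dataclass

STOP_GESTURES = {"open_palm", "crossed_arms"}  # assumed gesture labels
FATIGUE_WARN_THRESHOLD = 0.7                    # assumed normalized fatigue score in [0, 1]


@dataclass
class Decision:
    stop_robot: bool   # command sent to the robot controller
    warn_human: bool   # signal raised toward the operator


def decide(gesture_label: str, fatigue_score: float) -> Decision:
    """Map a perceived gesture and fatigue estimate to bi-directional actions."""
    return Decision(
        stop_robot=gesture_label in STOP_GESTURES,
        warn_human=fatigue_score >= FATIGUE_WARN_THRESHOLD,
    )
```

In a real deployment, `gesture_label` and `fatigue_score` would come from the vision and ML models, and the two outputs would drive the robot controller and the operator-warning channel respectively.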