
Project Description
This project aims to implement a safety-sensor prototype that recognizes a human and estimates their distance through 3D vision. The program is based on point cloud segmentation performed with Deep Learning techniques. The vision-based algorithm will therefore be trained on samples available online and fine-tuned with data collected in the laboratory. The resulting human distance in the monitored scene will then be used as a trigger for the collaborative robot to switch to reduced mode or to a safety-monitored stop. To achieve this goal, a digital I/O device between the workstation and the cobot controller will also be integrated and developed for the use case.
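As a rough illustration of the trigger logic described above, the sketch below assumes the segmentation stage has already isolated the human's points and maps the minimum human-to-robot distance to a safety state. The threshold values and function names are hypothetical placeholders; real limits would come from the cobot's risk assessment.

```python
import math

# Hypothetical distance thresholds in metres (placeholders, not values
# from the project): below the first, the cobot slows down; below the
# second, it performs a safety-monitored stop.
REDUCED_MODE_DIST = 1.5
MONITORED_STOP_DIST = 0.5

def min_human_distance(human_points, robot_base=(0.0, 0.0, 0.0)):
    """Minimum Euclidean distance from the robot base to the segmented
    human points (each point is an (x, y, z) tuple in metres)."""
    return min(math.dist(robot_base, p) for p in human_points)

def safety_state(human_points):
    """Map the measured distance to the state signalled over digital I/O."""
    d = min_human_distance(human_points)
    if d < MONITORED_STOP_DIST:
        return "safety_monitored_stop"
    if d < REDUCED_MODE_DIST:
        return "reduced_mode"
    return "normal"
```

In the full system, the returned state would be written to the digital I/O device that connects the workstation to the cobot controller.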
Prerequisites
- Sound knowledge of algebra
- Good C/C++ programming skills
- Basic knowledge of electrotechnics
- Strong initiative and a problem-solving attitude
- Strong will to cope with demanding challenges
- Keen interest in studying complex new theory
Not required but good to have:
- Knowledge of Python programming
- Knowledge of ROS
- Foundations of Computer Vision
Target Objectives
- Foundations of Point Cloud (PC) manipulation and analysis
- Foundations of Deep Learning (DL)
- Improved programming skills and ability to use third-party software (ROS)
- Communication and wiring management
- How to interface with a collaborative robot
Expected Results
- Study and analysis of PC generation and manipulation techniques
- Study and analysis of DL (with a particular focus on PC labelling and clustering)
- Integration of a digital I/O device to interface the workstation with the cobot control box
- Development of a ROS-based package to train a DL model that can cluster and recognize a human in a fixed scene
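To give a flavour of the "PC manipulation" work listed above, here is a minimal, dependency-free sketch of voxel-grid downsampling, a standard preprocessing step before feeding a raw cloud to a Deep Learning model. The function name and interface are illustrative only; in practice a library such as PCL or Open3D would be used.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce point density by replacing all points that fall into the
    same cubic voxel with their centroid. `points` is an iterable of
    (x, y, z) tuples; `voxel_size` is the voxel edge length."""
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel index of the point along each axis.
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # One centroid per occupied voxel, in a deterministic order.
    return [
        tuple(sum(coords) / len(pts) for coords in zip(*pts))
        for pts in (buckets[k] for k in sorted(buckets))
    ]
```

A cloud of thousands of points per person can thus be reduced to a manageable size while preserving the overall shape, which speeds up both training and inference.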