You’ve surely seen videos of robots opening and walking through doors. The dirty little secret is that most, if not all, of them involve a good deal of human hand-holding. That can come in the form of manual remote guidance, in which a user controls the process in real time, or guided training, in which the robot is walked through the process once so it can mimic the activity exactly the next time.
New research from ETH Zurich, however, points to a model that requires “minimal manual guidance.” It’s effectively a three-step process: first, the user describes the scene and action; second, the system plans a somewhat convoluted route; and third, it refines that route into a minimum viable path.
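The plan-then-refine idea behind steps two and three can be sketched generically. The toy example below is an illustrative assumption, not the ETH Zurich system’s actual method: it plans a possibly roundabout route on a 2D occupancy grid with breadth-first search, then shortcuts redundant waypoints wherever a straight, obstacle-free segment exists. All function names (`plan_coarse`, `refine`) are invented for this sketch.

```python
# Hedged sketch of a "plan coarse, then refine" pipeline. The grid, BFS, and
# shortcutting approach are illustrative assumptions, not the paper's method.
from collections import deque

def plan_coarse(grid, start, goal):
    """Step two analogy: BFS over a 2D occupancy grid (0 = free, 1 = blocked).
    Returns a valid but possibly roundabout list of (row, col) waypoints."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in prev:
        return None  # no route exists
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

def refine(path, grid):
    """Step three analogy: drop intermediate waypoints whenever a straight,
    axis-aligned segment between two waypoints stays in free space."""
    def clear(a, b):
        (r1, c1), (r2, c2) = a, b
        if r1 != r2 and c1 != c2:
            return False  # only axis-aligned shortcuts in this toy example
        if r1 == r2:
            return all(grid[r1][c] == 0 for c in range(min(c1, c2), max(c1, c2) + 1))
        return all(grid[r][c1] == 0 for r in range(min(r1, r2), max(r1, r2) + 1))

    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not clear(path[i], path[j]):
            j -= 1  # greedily take the farthest reachable waypoint
        out.append(path[j])
        i = j
    return out

# A 3x3 grid with one obstacle in the middle.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
coarse = plan_coarse(grid, (0, 0), (2, 2))   # 5 waypoints around the obstacle
short = refine(coarse, grid)                 # shortcut to 3 waypoints
```

Here the refinement step keeps only the corners the route actually needs, which mirrors the article’s description of trimming a convoluted plan down to a minimal path.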
“Given high-level descriptions of the robot and object,”