An interactive program from computer scientists helps with motion planning in environments with obstacles

Just like us, robots cannot see through walls. Sometimes they need a little help to get where they’re going.

Engineers at Rice University have developed a method that allows humans to help robots “see” their environment and perform tasks.

The strategy, called Bayesian Learning IN the Dark (BLIND for short), is a new solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

The peer-reviewed study by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation (ICRA) in late May.

The algorithm, developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment the robot’s perception and, importantly, prevent the execution of unsafe movements,” according to the study.

To do this, they combined Bayesian inverse reinforcement learning (whereby a system learns from continuously updated information and experience) with established motion-planning techniques to assist robots that have “high degrees of freedom,” that is, a lot of moving parts.
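To give a rough sense of how such a combination can work, here is a minimal, hypothetical sketch of Bayesian inverse reinforcement learning driven by binary human feedback. The feature names, the logistic approval model, and all numbers are illustrative assumptions, not the paper’s actual implementation.

```python
import numpy as np

# Hypothetical sketch: each hypothesis is a candidate weight vector for
# a reward over trajectory features, and each human approve/reject label
# reweights a posterior over those hypotheses (Bayesian IRL in spirit).

rng = np.random.default_rng(seed=0)

# 50 candidate reward-weight vectors over 3 made-up trajectory features
# (say: obstacle clearance, path length, joint effort).
hypotheses = rng.normal(size=(50, 3))
posterior = np.full(len(hypotheses), 1.0 / len(hypotheses))  # uniform prior

def label_likelihood(weights, features, approved, beta=5.0):
    """P(label | weights) under an assumed logistic model of approval."""
    p_approve = 1.0 / (1.0 + np.exp(-beta * (weights @ features)))
    return p_approve if approved else 1.0 - p_approve

def bayes_update(features, approved):
    """Bayes' rule: reweight every hypothesis by the new label's likelihood."""
    global posterior
    likes = np.array([label_likelihood(w, features, approved)
                      for w in hypotheses])
    posterior = posterior * likes
    posterior /= posterior.sum()

# The human rejects a segment with low clearance (feature values made up).
bayes_update(features=np.array([0.1, -0.3, 0.2]), approved=False)

# A planner could then score new candidate trajectories under the
# posterior-mean reward weights before asking for further labels.
posterior_mean_weights = posterior @ hypotheses
```

The point of the Bayesian framing is that every approve/reject label narrows down which reward function the human has in mind, without the human ever having to write that function out.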

To test BLIND, the Rice lab directed a Fetch robot, an articulated arm with seven joints, to grab a small cylinder from one table and move it to another, but in doing so it had to move past a barrier.

“If you have more joints, the instructions to the robot are complicated,” Quintero-Peña said. “If you’re directing a human, you can just say, ‘Lift your hand.’”

But a robot’s programmers need to be precise about the movement of every joint at every point in its path, especially when obstacles block the machine’s “view” of its target.

Rather than programming a trajectory in advance, BLIND inserts a human into the middle of the process to refine the choreographed options (or best guesses) suggested by the robot’s algorithm. “BLIND allows us to take information from the human’s head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña said.

“We use a specific mode of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory,” he said.

These labels appear as connected green dots that represent possible paths. As BLIND moves from point to point, the human approves or rejects each move to refine the path, avoiding obstacles as efficiently as possible.
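As a rough illustration of this critique-style loop, consider the hypothetical sketch below; the waypoint structure and the `human_approves` callback are assumptions for illustration, not the BLIND code.

```python
# Hypothetical sketch of critique feedback: the planner proposes
# successive moves between waypoints, and a human gives a binary
# approve/reject label on each one.

def refine_path(candidate_waypoints, human_approves):
    """Keep only the moves the human approves; rejected moves would be
    handed back to the motion planner for an alternative segment."""
    approved_path = [candidate_waypoints[0]]
    rejected_moves = []
    for next_point in candidate_waypoints[1:]:
        move = (approved_path[-1], next_point)
        if human_approves(move):         # binary label: like / dislike
            approved_path.append(next_point)
        else:
            rejected_moves.append(move)  # replan around these segments
    return approved_path, rejected_moves

# Example usage, with a trivial rule standing in for the human:
path, redo = refine_path(
    candidate_waypoints=[(0, 0), (1, 2), (2, 2), (3, 0)],
    human_approves=lambda move: move[1] != (1, 2),  # reject one waypoint
)
```

The appeal of binary labels is exactly what the researchers describe next: a yes/no judgment asks almost nothing of the human, yet steadily steers the planner.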

“It’s an easy-to-use interface for people, because we can say ‘I like this’ or ‘I don’t like that,’ and the robot uses this information to plan,” Chamzas said. Once rewarded with a set of approved moves, the robot can carry out its task, he said.

“One of the biggest things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

“This work is a great example of how small, but focused, human intervention can dramatically improve robots’ abilities to perform complex tasks in environments where some parts are completely unknown to the robot but known to humans,” said Kavraki, a robotics pioneer whose CV includes advanced programming for NASA’s humanoid Robonaut aboard the International Space Station.

“It shows how methods for human-robot interaction, the research topic of my colleague Professor Unhelkar, and automated planning developed over the years in my lab can combine to deliver reliable solutions that also respect human preferences.”

Former Rice undergraduate student Zhanyi Sun and Unhelkar, assistant professor of computer science, are co-authors of the paper. Kavraki is the Noah Harding Professor of Computer Science and Professor of Bioengineering, Electrical and Computer Engineering, and Mechanical Engineering, and Director of the Ken Kennedy Institute.

The National Science Foundation (2008720, 1718487) and an NSF Graduate Research Fellowship Program grant (1842494) supported the research.

Video: https://youtu.be/RbDDiApQhNo
