Draper Teaches Robots to Build Trust with Humans – new research

New study shows methods robots can use to self-assess their own performance

CAMBRIDGE, MASS. (PRWEB) MARCH 08, 2022

Establishing human-robot trust isn’t always easy. Beyond the fear of automation going rogue, robots simply don’t communicate how they are doing. Without that feedback, it can be difficult for humans to establish a basis for trusting robots.

Now, research is shedding light on how autonomous systems can foster human confidence in robots. Largely, the research suggests that humans have an easier time trusting a robot that offers some kind of self-assessment as it goes about its tasks, according to Aastha Acharya, a Draper Scholar and Ph.D. candidate at the University of Colorado Boulder.

Acharya said we need to start considering what communications are useful, particularly if we want to have humans trust and rely on their automated co-workers. “We can take cues from any effective workplace relationship, where the key to establishing trust is understanding co-workers’ capabilities and limitations,” she said. A gap in understanding can lead to improper tasking of the robot, and subsequent misuse, abuse or disuse of its autonomy.

To understand the problem, Acharya joined researchers from Draper and the University of Colorado Boulder to study how autonomous robots that use learned probabilistic world models can compute and express self-assessed competencies in the form of machine self-confidence. Probabilistic world models take into account the impact of uncertainties in events or actions in predicting the potential occurrence of future outcomes.

In the study, the world models were designed to enable the robots to forecast their behavior and report their own perspective about their tasking prior to task execution. With this information, a human can better judge whether a robot is sufficiently capable of completing a task, and adjust expectations to suit the situation.
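The paper pairs learned probabilistic world models with deep reinforcement learning; as a minimal, hypothetical sketch of the underlying idea (the `ToyWorldModel`, `rollout`, and `self_assessed_competency` names here are illustrative, not from the paper), a robot can forecast many simulated executions under its probabilistic model before acting, and report the fraction that succeed as its self-confidence:

```python
import random

class ToyWorldModel:
    """Hypothetical stand-in for a learned probabilistic world model:
    each step toward the tower drains the battery by an uncertain amount;
    the task succeeds if the UAV reaches the tower with charge remaining."""
    def initial_state(self):
        return {"battery": 0.5, "steps_to_tower": 5}

    def step(self, state, action, rng):
        drain = rng.uniform(0.05, 0.15)  # uncertain per-step battery drain
        next_state = {"battery": state["battery"] - drain,
                      "steps_to_tower": state["steps_to_tower"] - 1}
        # Reward 1.0 only on arrival at the tower with battery left.
        reward = 1.0 if (next_state["steps_to_tower"] == 0
                         and next_state["battery"] > 0) else 0.0
        return next_state, reward

def rollout(model, policy, horizon, rng):
    """Forecast one task execution under the model; return cumulative reward."""
    state, total = model.initial_state(), 0.0
    for _ in range(horizon):
        state, reward = model.step(state, policy(state), rng)
        total += reward
    return total

def self_assessed_competency(model, policy, horizon, threshold,
                             n_rollouts=1000, seed=0):
    """Monte Carlo estimate of the probability that the task outcome meets
    the success threshold -- the robot's self-confidence for this tasking,
    reported to the human before execution."""
    rng = random.Random(seed)
    outcomes = [rollout(model, policy, horizon, rng) for _ in range(n_rollouts)]
    return sum(1 for r in outcomes if r >= threshold) / n_rollouts

confidence = self_assessed_competency(ToyWorldModel(),
                                      policy=lambda s: "fly",
                                      horizon=5, threshold=1.0)
print(f"Self-assessed probability of task success: {confidence:.2f}")
```

In this toy setup, the expected total battery drain roughly equals the starting charge, so the reported confidence lands near 0.5, signaling to the operator that the tasking is marginal rather than safe.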

To demonstrate their method, researchers developed and tested a probabilistic world model on a simulated intelligence, surveillance and reconnaissance mission for an autonomous uncrewed aerial vehicle (UAV). The UAV flew over a field populated by a radio tower, an airstrip and mountains. The mission was designed to collect data from the tower while avoiding detection by an adversary. The UAV was asked to consider factors such as detections, collections, battery life and environmental conditions to understand its task competency.

Findings were reported in the article “Generalizing Competency Self-Assessment for Autonomous Vehicles Using Deep Reinforcement Learning,” where the team addressed several important questions. How do we encourage appropriate human trust in an autonomous system? How do we know that self-assessed capabilities of the autonomous system are accurate?

Human-machine collaboration lies at the core of a wide spectrum of algorithmic strategies for generating soft assurances, which are collectively aimed at trust management, according to the paper. “Humans must be able to establish a basis for correctly using and relying on robotic autonomy for success,” the authors said. The team behind the paper includes Acharya’s advisors Rebecca Russell, Ph.D., from Draper and Nisar Ahmed, Ph.D., from the University of Colorado Boulder.

The research into autonomous self-assessment is based upon work supported by DARPA’s Competency-Aware Machine Learning (CAML) program.

In addition, funds for this study were provided by the Draper Scholar Program. The program gives graduate students the opportunity to conduct their thesis research under the supervision of both a faculty adviser and a member of Draper’s technical staff, in an area of mutual interest. Draper Scholars’ graduate degree tuition and stipends are funded by Draper.

Since 1973, the Draper Scholar Program, formerly known as the Draper Fellow Program, has supported more than 1,000 graduate students pursuing advanced degrees in engineering and the sciences. Draper Scholars are from both civilian and military backgrounds, and Draper Scholar alumni excel worldwide in the technical, corporate, government, academic, and entrepreneurship sectors.

Draper

At Draper, we believe exciting things happen when new capabilities are imagined and created. Whether formulating a concept and developing each component to achieve a field-ready prototype, or combining existing technologies in new ways, Draper engineers apply multidisciplinary approaches that deliver new capabilities to customers. As a nonprofit engineering innovation company, Draper focuses on the design, development and deployment of advanced technological solutions for the world’s most challenging and important problems. We provide engineering solutions directly to government, industry and academia; work on teams as prime contractor or subcontractor; and participate as a collaborator in consortia. We provide unbiased assessments of technology or systems designed or recommended by other organizations—custom designed, as well as commercial-off-the-shelf. Visit Draper at http://www.draper.com.

First Day of Safety, Security and Rescue Robots 2010 (SSRR-2010)

Currently I am participating in the Safety, Security and Rescue Robots 2010 (SSRR-2010) workshop in Bremen.

The first day is now over, and a lot of interesting talks were given:

Tetsuya Kinugasa presented a flexible displacement sensor in his talk “Measurement of Flexed Posture for Mono-tread Mobile Track Using New Flexible Displacement Sensor”. His group develops and uses this sensor to control the posture of a robot that combines features of a snake, a worm, and a tank.

Jimmy Tran presented his work on “Canine Assisted Robot Deployment for Urban Search and Rescue”. The basic idea is as simple as it is brilliant: use a suitably equipped dog to find victims and to inform operators about them. Dogs are already widely used in rescue work and are highly mobile: they can easily traverse large piles of rubble and are able to carry video cameras or rescue material. His approach is therefore to use the dogs to deploy a small robot next to a victim, which then allows responders to investigate the medical status of the person. The idea is ingenious.

“Development of leg-track hybrid locomotion to traverse loose slopes and irregular terrain” is so far the most interesting technical approach of this workshop. It shows how a tracked vehicle can be combined with a semi-walking mechanism.

Donny Kurnia Sutantyo presented his work on “Multi-Robot Searching Algorithm Using Levy Flight and Artificial Potential Field”, while Julian de Hoog showed a solution for team exploration in “Dynamic Team Hierarchies in Communication-Limited Multi-Robot Exploration”.

The invited speaker Bernardo Wagner presented the work of his department at the Leibniz University of Hannover, which has worked intensively in the field of “Perception and Navigation with 3D Laser Range Data in Challenging Environments”.

“Potential Field based Approach for Coordinate Exploration with a Multi-Robot Team” was the topic of Alessandro Renzaglia’s talk.

Bin Li showed another nice shape-shifting robot. His robot is able to reshape itself by rearranging its three motion segments, as described in “Cooperative Reconfiguration between Two Specific Configurations for A Shape-shifting Robot”.

Jorge Bruno Silva presented an approach to trajectory planning that respects time constraints in “Generating Trajectories With Temporal Constraints for an Autonomous Robot”.
Noritaka Sato closed the day by presenting a novel HMI approach for teleoperation. Instead of showing only the live camera image, his group uses temporally shifted past images to generate an artificial bird’s-eye view, similar to the chase view in racing video games, as described in “Teleoperation System Using Past Image Records Considering Moving Objects”.

I am looking forward to the next talks.

Interesting designs for Rescue Robots – Part 2

Professor Dr. Satoshi Tadokoro from Tohoku University presents his ASC, a search camera for use in emergency situations; ASC stands for Active Scope Camera. Essentially, it is a flexible endoscope that is able to move by itself. With the help of vibrating, inclined cilia, the endoscope can crawl like a caterpillar into the smallest voids (>30 mm). Its maximum speed is 47 mm/s and its operating range is 8 m. This allows rescue workers to search rubble for victims or to check its structure.

The following video shows Professor Dr. Satoshi Tadokoro at the Tokyo International Fire and Safety Exhibition 2008 presenting the ASC.

During the collapse of the Historical Archive of the City of Cologne (March 2009), Professor Dr. Satoshi Tadokoro, Professor Dr. Robin R. Murphy (Texas A&M University), Clint Arnett (Project Coordinator for Urban Search and Rescue at TEEX), and members of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) tried to support the local fire department. There I was able to test the ASC, which was in use during this disaster.

The ASC performs extremely well. It can crawl into the rubble at a reasonable speed and is (after a little training) easy to use. The biggest problem, however, is the user interface. The ASC camera system does not compensate for tilting or turning when the “robot” flips or rolls over, which happens quite often. Hence, it is hard for the operator to keep track of the orientation. In addition, the camera’s opening angle is extremely small, which further handicaps situational awareness.

Johnny Chung Lee’s HMI Projects

Johnny Chung Lee is a researcher currently working for Microsoft Applied Sciences in Redmond. He earned a Ph.D. in Human-Computer Interaction at Carnegie Mellon University with his thesis “Projector-Based Location Discovery and Tracking” [website].

On his website he has some great projects related to HMI that could also be helpful in robotics.

For example, his experiments on using the Wii controllers, or his projects on projector calibration and RFID usage.