Robots are now being deployed for a broad range of tasks using advanced touch, vision, and other sensing technology.
Most humans, however, rely heavily on sound to get around. So giving robots the power to hear could be a game-changer in the world of AI and robotics.
Now researchers at Carnegie Mellon University's (CMU) Robotics Institute have found that sound can help a robot tell one object from another.
Adding hearing to a robot's repertoire would let it distinguish between different kinds of objects and determine what action produced a sound.
It could also help robots predict the physical properties of objects from the sounds they make.
"A lot of preliminary work in other fields indicated that sound could be useful, but it wasn't clear how useful it would be in robotics," explains CMU researcher Lerrel Pinto in a recent interview.
Methodology used by ‘hearing’ robots
The researchers attached a square tray to the arm of a Sawyer robot (previously developed by Rethink Robotics) to create an apparatus called Tilt-Bot, which captured interactions.
An object was placed in the tray, and Sawyer spent several hours with it, tilting the tray to slide the object in different directions.
The resulting dataset covers 60 different objects, such as balls and toys, and catalogues 15,000 interactions, each recorded with cameras and microphones for use by other researchers.
Additional datasets were collected by having the robot push objects across a surface.
A detailed report on the findings was presented last month at the virtual Robotics: Science and Systems conference. The researchers reported that performance was considerably high: the robot was able to distinguish between objects using sound alone in 76% of cases.
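To give a flavor of how a robot might tell objects apart by ear, here is a minimal, hypothetical sketch: synthetic "object sounds" are reduced to normalized magnitude spectra and matched against per-object average spectra. The object names, feature choice, and nearest-centroid rule are illustrative assumptions, not the CMU team's actual pipeline.

```python
import numpy as np

# Hypothetical sketch, not the CMU method: classify objects by the
# spectrum of the sound they make when moved around a tray.

rng = np.random.default_rng(0)
SR = 8000          # sample rate in Hz (assumed)
DURATION = 0.5     # seconds of audio per interaction clip

def synth_clip(base_freq):
    """Generate a stand-in 'object sound': a noisy tone at base_freq."""
    t = np.arange(int(SR * DURATION)) / SR
    return np.sin(2 * np.pi * base_freq * t) + 0.3 * rng.standard_normal(t.size)

def spectral_feature(clip):
    """Normalized magnitude spectrum -- a crude stand-in for a spectrogram."""
    mag = np.abs(np.fft.rfft(clip))
    return mag / np.linalg.norm(mag)

# Tiny training set: two invented 'objects', each with a characteristic
# frequency, averaged over 20 simulated interactions.
objects = {"metal_ball": 1200.0, "soft_toy": 300.0}
centroids = {
    name: np.mean([spectral_feature(synth_clip(f)) for _ in range(20)], axis=0)
    for name, f in objects.items()
}

def classify(clip):
    """Assign a clip to the object whose average spectrum is closest."""
    feat = spectral_feature(clip)
    return min(centroids, key=lambda n: np.linalg.norm(feat - centroids[n]))

print(classify(synth_clip(1200.0)))  # metal_ball
print(classify(synth_clip(300.0)))   # soft_toy
```

The real system would replace the synthetic tones with the Tilt-Bot audio recordings and the nearest-centroid rule with a learned model, but the idea is the same: distinct objects leave distinct acoustic signatures.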
DARPA, the US Department of Defense's research agency, supported the latest research, along with the Office of Naval Research. Both are major investors in automation technologies and research.