
Here’s cheers: A toast to QUT’s Carlie robot

QUT’s latest robot, Carlie – a mini-autonomous car – is getting a taste of some of Australia’s best wine country as part of a new research project to help growers improve yields.

Carlie is a mobile robotic platform, a miniature version of a full-size autonomous car, with range and camera sensors and on-board compute capability.

Researchers at QUT’s University of Adelaide node are collaborating with the Australian Institute for Machine Learning (AIML) to put their mini autonomous vehicle to the test.

As part of this, QUT researcher Mohammad Mahdi Kazemi Moghaddam is leading an “off-road” navigation project, taking the mobile platform to a local vineyard.

The research also includes “auto-park” training and “follow me” scenarios in the lab.

These novel projects are being conducted as part of a wider group project under the supervision of QUT Associate Investigator Qinfeng “Javen” Shi.

“In the off-road project, we aim to help Australian vineyards use AI and robotics to improve their final yield,” says Moghaddam.

“As part of this, we've developed a GPS-based navigation method using the mini car to autonomously navigate through rows in a vineyard.

“While navigating, the car captures footage of the vines and grapes. Using computer vision techniques, we then detect grape counts, canopy size and other key information in the footage to feed to our yield estimation models.”
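To give a rough sense of how GPS-based row navigation like this can work, the sketch below steers a vehicle toward the next GPS waypoint using a simple proportional controller. It is only an illustration under assumed conventions; the Waypoint class, the gain value and the example coordinates are made up and are not part of the actual QUT/AIML code.

```python
# Illustrative sketch of GPS waypoint following between vineyard rows.
# Waypoint, steering_command and the example coordinates are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float   # latitude in degrees
    lon: float   # longitude in degrees

def bearing_deg(a: Waypoint, b: Waypoint) -> float:
    """Initial bearing from a to b in degrees (0 = north, clockwise)."""
    lat1, lat2 = math.radians(a.lat), math.radians(b.lat)
    dlon = math.radians(b.lon - a.lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def steering_command(heading_deg: float, pose: Waypoint, target: Waypoint,
                     gain: float = 0.02) -> float:
    """Proportional steering toward the target waypoint, clipped to [-1, 1]."""
    error = (bearing_deg(pose, target) - heading_deg + 180.0) % 360.0 - 180.0
    return max(-1.0, min(1.0, gain * error))

# Example: drive down a single row defined by two GPS waypoints.
row = [Waypoint(-34.9285, 138.6007), Waypoint(-34.9290, 138.6007)]
print(steering_command(heading_deg=180.0, pose=row[0], target=row[1]))
```

In practice the detected grape counts and canopy measurements gathered along each row would then be aggregated and fed into the yield estimation models Moghaddam describes.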

In the auto-park project, the final goal is to enable the mini car platform to park autonomously in various scenarios using simple RGB camera inputs.

This differs from currently available systems in that no other input sensors, such as proximity, LiDAR or depth sensors, are used. This avoids a lot of design and engineering work and also reduces the computational load.


“In order to do this, we used the idea of Imitation Learning (IL),” says Moghaddam.

“In IL we try to learn a policy that maps the input (RGB image) directly to a distribution over possible control actions. The actions here are the steering wheel angle and speed.”

The training is based on samples from a human expert demonstrating the desired actions under different scenarios. 
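The snippet below is a minimal behavioural-cloning sketch of that idea, assuming a PyTorch setup: a small convolutional network maps an RGB frame to a Gaussian distribution over (steering angle, speed) and is trained to maximise the likelihood of the expert's recorded actions. The network shape, names and placeholder data are illustrative, not the actual project model.

```python
# Hedged sketch of imitation learning (behavioural cloning) for parking:
# RGB image -> distribution over (steering angle, speed). Illustrative only.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mean = nn.Linear(32, 2)             # means of (steering, speed)
        self.log_std = nn.Parameter(torch.zeros(2))

    def forward(self, image: torch.Tensor) -> torch.distributions.Normal:
        features = self.backbone(image)
        return torch.distributions.Normal(self.mean(features), self.log_std.exp())

# One training step on a batch of expert demonstrations (placeholder tensors).
policy = DrivingPolicy()
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-4)

images = torch.rand(8, 3, 120, 160)      # camera frames
expert_actions = torch.rand(8, 2)        # recorded (steering, speed) labels

dist = policy(images)
loss = -dist.log_prob(expert_actions).sum(dim=1).mean()   # negative log-likelihood
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

Repeating such updates over many demonstrations is what lets the policy imitate the expert's parking behaviour across different scenarios.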

As a first step, the car learns to park in the nearest available spot while keeping to its lane and without hitting other cars.

“This could be the future of valet-parking,” says Moghaddam.
