Robots are beginning to deliver pizza and parcels, drive cars and help the kids with their homework. How will they communicate with us? They can download a dictionary and start talking, but a dictionary doesn’t understand its own words, and neither would the robots. Language is not that simple. So, could a robot ever say-what-it-means and mean-what-it-says? This talk is about how robots, called Lingodroids, understand language by inventing their own words to describe their own experiences. Such robots roam the world, asking each other questions like “Where are we?” and “What time is it?” Through these conversations, they invent words for places and times, linking the meanings to their own experiences. Linking a place in the world to its name is called “grounding”, and it is the first step in having robots truly understand what they say. Human languages can refer not only to previous experiences, but also to possible experiences, and even impossible ones. The talk will describe how the Lingodroids invent robot languages and generalise their words to refer to places they have never been, and even places they can reach only in their imagination. The talk will finish with a glimpse into how conversations with people can help robots learn the meanings of words in human languages.
Bio: Janet Wiles, Professor of Complex and Intelligent Systems, University of Queensland. Janet Wiles’ research involves bio-inspired computation in complex systems, with applications in cognitive science and biorobotics. She completed a PhD in Computer Science at the University of Sydney and a postdoctoral fellowship in Psychology at the University of Queensland, and served as faculty in the Cognitive Science program for 12 years. In 2003 she formed the Complex and Intelligent Systems research group at the University of Queensland, where she has been Professor since 2006. She currently coordinates the UQ node of the ARC Centre of Excellence for the Dynamics of Language, where her research focuses on social robots and language.