A while ago I decided to rewrite my research statement every couple of years to evaluate/refocus my directions, but it took until now to actually complete one…
I am a person with many interests. Lacking focus and depth, you may say, and the evidence is quite clear. I spent many years designing avionics, UAVs, robots, and telescopes; I worked on flight controls, sensor fusion, fault tolerance, and decision-making; I often wondered about the future on Earth and dreamed of mission concepts for exploring other planets and moons; I am also interested in humans’ interactions with robots, from safe co-existence and collaboration to influencing robots’ decisions and behaviors. When I started my lab in 2012, I called it the “Interactive Robotics Laboratory,” a catch-all name to cover anything I might want to explore.
With this trait and experience, I found myself a natural at envisioning new robot systems, which earned me a small reputation. I have gone from being introduced as the guy “who worked on formation flight,” to the person “who won a NASA challenge,” to someone “who is trying to make a robot pollinator.”
I have asked myself numerous times: if I were to make one small difference to the world, what would it be? The answer has converged slowly over the years. I want to find innovative robotic solutions to some of the world’s big challenges (e.g., hunger, inequality, access to education, and exploring the unknown). Every time I tried, there always seemed to be one puzzle piece missing: making robots resilient in the real world. That is, robots that actually work in new environments and under dynamic situations (e.g., around people), instead of just producing cool demos on YouTube. I think something important is lacking in the field of robotics.
I call this “soft autonomy,” a term we made up and started using in 2014 to contrast with the rigid decision rules governing Cataglyphis, our most prized robot (pun intended). As a “probabilistic robot,” Cataglyphis was built to handle a variety of uncertainties (e.g., localization, object recognition, terrain, and manipulation). Despite its success, Cataglyphis is a robot that runs on its designers’/programmers’ predictions of what a situation may become. Deviations from these predictions, or any surprises, often leave it confused.
Most roboticists working on the topic of “autonomy” today make a living dealing with known unknowns, and so do I. This means we spend a significant amount of time and effort identifying, modeling, and propagating the uncertainties in a problem as probability distributions (or beliefs), and making decisions that factor in these uncertainties. Many problems have been solved this way, and we have tried (or are trying) a few: attitude estimation for UAVs (not for humans…), terrain-aware navigation on Mars, active perception for flower pollination, and cooperative localization of space/underwater vehicles, among others. As we contentedly solve one problem of this type after another (i.e., getting publishable results and YouTube demos), we are knowingly ignoring a bigger problem. Every time we craft a detailed model for a source of uncertainty, the solution becomes more specialized and less flexible. Whenever the uncertainty assumptions are violated (call it the uncertainty of uncertainty?), there is no mechanism for the robot to make productive decisions. In other words, today’s robots are overly confident, but not truly autonomous.
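The “known unknowns” recipe can be made concrete with a minimal sketch: represent uncertainty as a belief (a probability distribution over states), push it through a motion model, and fold in a measurement with Bayes’ rule. This is a generic discrete Bayes filter with made-up numbers, not code from any of the projects above; the two-state “on path / off path” example and all values are hypothetical.

```python
# Minimal discrete Bayes filter: beliefs in, beliefs out.
# States and numbers below are hypothetical, for illustration only.

def predict(belief, transition):
    """Propagate the belief through a state-transition model P(next | current)."""
    n = len(belief)
    return [sum(transition[i][j] * belief[i] for i in range(n)) for j in range(n)]

def update(belief, likelihood):
    """Fold in a measurement via Bayes' rule, then normalize."""
    posterior = [l * b for l, b in zip(likelihood, belief)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two states: 0 = "on path", 1 = "off path"; start maximally uncertain.
belief = [0.5, 0.5]
transition = [[0.9, 0.1],   # from state 0
              [0.2, 0.8]]   # from state 1
belief = predict(belief, transition)
belief = update(belief, [0.8, 0.3])  # sensor weakly suggests "on path"
print(belief)  # belief sharpens toward state 0
```

The limitation described above lives in the fixed `transition` and `likelihood` models: as long as reality matches them, the belief is useful; when reality violates them, the filter still produces a confident-looking distribution with no signal that its assumptions have broken.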
I can think of many “simple” creatures that are autonomous: earthworms, ants, bees, trees, and the list goes on. Intelligent? Maybe not so much, especially when they are alone; autonomous? Yes! So, what are some of the differences between our robots and, say, a worm? A robot has sensors and actuators, while a worm has a lot more of them. A robot has powerful computers; the worm? Maybe not (e.g., Caenorhabditis elegans, a simple kind of worm, has only 302 neurons and about 7,500 synapses), but it has a soft body that directly interacts with the environment for morphological computing. For a robot, we often design the hardware and then write the software; for a worm, there is probably no distinction between the two (and everything is soft!). A robot comes mostly from the top-down decisions of its designers/programmers, while the worm has gone through a fierce bottom-up competition for as long as we humans have.
It seems that a way, and there may be more than one, toward achieving “autonomy” has yet to be found. In addition to working on perception and decision-making under uncertainty (the known unknowns), we are exploring two directions for dealing with the unknown unknowns: bottom-up design and problem solving (e.g., swarm intelligence), and decision-making under the uncertainty of uncertainty. Spread thin again? Yes, but hopefully not without a direction.