Problem-Solving in Robotics?

One of the primary obstacles impeding the widespread application of robotics lies in their lack of problem-solving capabilities. Enabling robots to independently resolve unforeseen situations could facilitate the adoption of autonomous robots in a broad range of applications such as agriculture, healthcare, exploration, and environmental protection.

As robot designers or programmers, we have a tendency to break down and solve problems for the robots from our own perspectives. When developing a robot, we often translate our understanding of a problem (e.g., through a ‘problem statement’) into a set of structured ‘decision making’ or ‘control’ algorithms, which are then programmed into the robot. As a result, the ‘autonomy’ of today’s robots is limited, and the resulting robot behaviors frequently turn out to be brittle and unnatural. Therefore, a great challenge in robotics research is allowing the robots themselves to play a bigger role in solving problems, so that when new problems arise, the robots are better equipped to address them.

I would like to propose the following research and development areas associated with problem solving in robotics:

  • Define problem solving in the robotics context. Unlike research on decision making, which has well-defined problem formulations such as POMDPs (Partially Observable Markov Decision Processes), problem solving is currently not a well-recognized research topic in robotics.
  • Learning from nature. Most examples of problem solving come from nature, exhibited by people, insects, plants, and even individual cells interacting with their environments. The mechanisms behind these creatures’ problem-solving abilities are not clear.
  • Swarm robotics. Another place we may find examples of problem solving is the collective intelligence of natural swarms (e.g., transportation, foraging, construction) and robotic swarms. The emergence of sometimes unexpected global behaviors through local interaction rules can be interpreted as solutions to environmental challenges that no single agent in the swarm fully comprehends.
  • Case-based reasoning, imitation learning, and transfer learning. How could past experiences offer guidance for solving new problems?
  • Large Language Models (LLMs). Can a pretrained LLM provide “common sense” to a robot’s problem-solving mechanisms?
  • Benchmark problems. How do we evaluate the progress made? Can we identify benchmark problems that are experimentally simple to set up yet challenging to solve?
  • Ethical Robotics and Safety. What are the implications when robots can solve their own problems?
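The swarm point above — global behavior emerging from purely local interaction rules — can be illustrated with a minimal toy sketch (my own example, not any particular swarm system): agents on a ring repeatedly average their values with their immediate neighbors, and the whole swarm converges to the global mean even though no single agent ever sees the global state.

```python
# Toy illustration of emergence from local rules (a sketch, not a real
# swarm controller). Each agent only talks to its two ring neighbors;
# the swarm nonetheless reaches global agreement on the mean value.

def local_consensus(values, steps=200):
    n = len(values)
    for _ in range(steps):
        # Every agent replaces its value with the average of itself
        # and its two neighbors -- a purely local rule.
        values = [
            (values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3.0
            for i in range(n)
        ]
    return values

start = [0.0, 10.0, 4.0, 6.0, 2.0, 8.0]
end = local_consensus(start)
# The rule preserves the global mean and shrinks disagreement, even
# though no agent "comprehends" the whole problem.
```

The same flavor of argument applies to the transportation and foraging examples: the global outcome is a property of the interaction rules, not of any individual.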

Depending on whether the research is inspired by human-like or primitive, creature-level problem-solving abilities, there could be very different pathways toward problem solving in robotics.

AI Neuroscience & AI Psychology?

When people on Earth don’t understand something, they create new disciplines. A lot of what we don’t understand comes from emergent behaviors, that is, when the whole is drastically different from its building blocks. We may know the blocks reasonably well, but not the complex things built with them. For example, economics is a field that helps us understand the collective outcomes of many interacting people. With ChatGPT and other large language models starting to show abilities that we don’t quite know how to explain, other than that they emerged from tools we (thought we) knew, maybe it’s time to create new fields? What about splitting the AI research field into “AI Neuroscience” and “AI Psychology”?

On Time and Research Productivity

What is a good heuristic linking the time spent and the research progress made?

If we use bean counting as an example, the progress made would be a function of talent, skill, and effective time spent. This is not as simple a relationship as it may look at first glance. Talent (e.g., good hand-eye coordination and dexterity) separates us to some degree, but skill can make up for most of the differences. Skill is also developed over time, as a function of the effective time spent on bean counting in the past. Notice the word effective here. Three people (A, B, and C) can spend the same amount of time counting beans; but,

  1. A spends 30% of the time wondering: is B more talented than I am?
  2. B spends 20% of the time managing the group.
  3. C is not thinking at all, just counting.

Who may be the one that has counted more beans and gained more bean-counting skill over time?

However, research is not bean counting. If, instead, we use mountain climbing as an example, we may consider progress as a function of talent, skill, time spent, and the direction we take. If we make the poor choice of taking the longer route, we could be climbing fast and hard but still arrive late.

However, research is not mountain climbing. If, instead, we use mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, and luck. If we are lucky, we will find many (and good) mushrooms along the way with less time and effort. But luck is not something we have direct control over. The only thing we can do is increase the number of trials, e.g., explore more, which is also a function of the time spent. Also, it’s reassuring to think that no one can always be unlucky on a time scale of, say, 40 years.

However, research is not mushroom foraging by one person. That would ignore the bigger picture, e.g., the role played by others in one’s progress. If, instead, we use group mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, luck, and interaction. The time spent on teaching, making friends, brainstorming, and cooperating may pay off in ways we can’t anticipate.

How do we know if we have been managing our time productively? We probably won’t know for sure. It usually takes years to develop a skill (e.g., language, playing a musical instrument, writing) and the process is not linear. Your skill could stagnate for a long time before making the next jump. I would suggest following a simple heuristic: research progress, along with many of its contributing factors, has a positive correlation with the effective time spent. There is also a simple test. Just think back over the last week (while you can still remember): how much time did you spend on research-related activities such as reading, learning, thinking, doing, writing, teaching, and debating? Ok, for me: I worked quite hard last week (Oct 30 – Nov 05, 2022), but a good chunk of my time was spent attending meetings, replying to emails, grading homework, and dealing with logistics/paperwork. I also spent a good amount of time teaching and thinking (which were productive), but had little time to read and write, and even less time to use my hands for anything other than typing …

Why am I not a better researcher than I am now? That was probably the main reason. I can blame it on not having enough time, but that is probably just an excuse. Maybe it’s because there are too many things (other than research) that I chose not to give up? Or maybe the time I spent was just not effective enough. One bit of progress I have made: I learned, after a long while, not to waste time thinking that I am not good enough as a researcher.

RootBots: Sprawling Robot Networks for Lunar Resource Extraction

Lunar Helium-3 (3He) mining has the potential to address the energy and environmental challenges on Earth. As an essential fuel for future nuclear fusion reactors, 3He is extremely rare on Earth, with current US production of less than 2 kg/year, but it is much more abundant on the lunar surface, with an estimated reserve of over one million tons. Just 40 tons of 3He, combined with deuterium, a resource available on Earth, could meet the current annual energy needs of the U.S.

Mining 3He on the Moon would be quite different from traditional mining processes on Earth. Over time, 3He, along with other volatiles, was implanted into the lunar regolith by the solar wind. This means that surface mining operations would need to cover vast areas at shallow depths. Agitating the regolith leads to a significant release of the volatiles, which presents both a challenge and an opportunity. Other challenges associated with large-scale space industrialization, such as the limited human presence and the high reliability/autonomy requirements, mean that radically different mining concepts need to be developed.

Inspiration: Vascular plants use roots to anchor themselves, to explore and extract resources from the earth, to store nutrients, and to compete with others. The roots grow by producing new cells, regulated by both extrinsic and intrinsic stimuli. Sensing of extrinsic factors, such as gravity, barriers, light, air, water, and nutrients, directs roots to grow in favorable directions. Hormones (i.e., intrinsic stimuli) such as cytokinins and auxin (which often have opposite effects), as part of both local and long-distance signaling, also help coordinate root development. Plant roots are time-tested, real-world-proven designs for functionality, intelligence, and resilience.

Vision: Enter rootbots, sprawling networks of robot organisms for performing planetary resource extraction tasks at an unprecedented scale in the second half of the 21st century. A rootbot will be made of modular, interlocking smart components called cells. Each cell interacts with the environment and makes its own decisions, influenced by local stimuli and the decisions of nearby cells. One important decision is whether a cell should stay in place or relocate. If a cell decides to move, it will wait for a mobile agent, called a transporter, to come by and pick it up. Traveling only on top of the existing “root” network of cells, the transporters carry cells and deploy them where the cells want to go. With two decentralized and interacting robot swarms (i.e., the cells and the transporters), a rootbot can grow like a plant root to explore and exploit favorable conditions.
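As one hypothetical illustration of the stay-or-relocate decision, a cell might compare its own local resource reading against the average of its neighbors’ readings. The rule, margin, and names below are my own invention for the sketch, not part of the proposal:

```python
# Hypothetical local decision rule for a rootbot cell (illustrative only).
# A cell with a resource reading clearly worse than its neighborhood
# marks itself for relocation and waits for a transporter.

def decide(cell_reading, neighbor_readings, margin=0.1):
    """Return 'stay' if the local resource reading is competitive with
    the neighborhood average, else 'relocate'."""
    if not neighbor_readings:
        return "stay"  # an isolated cell has no basis for comparison
    avg = sum(neighbor_readings) / len(neighbor_readings)
    return "stay" if cell_reading >= avg - margin else "relocate"

decide(0.5, [0.4, 0.6])   # competitive with neighbors -> 'stay'
decide(0.1, [0.6, 0.8])   # clearly worse than neighbors -> 'relocate'
```

Because each cell evaluates only local readings, the same rule run by every cell lets the network as a whole drift toward resource-rich regions, in the spirit of root growth.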

Ok, the discussion above was modified from an unsuccessful NASA NIAC proposal. Feedback from NASA? “Overall, while there were many interesting aspects suggested, the proposal overreached without enough justification for any one concept, especially given the scale proposed.”

A Trip to Mars… Desert Research Station (MDRS)

A couple of weeks ago, a team of WVU students and I traveled to Utah to compete in the University Rover Challenge (URC) for the first time. It had been five years since I was last at a robot competition. This time, we had a new group of passionate and talented students, which brought back a lot of memories and excitement. We ended up doing well for a first-time team, but not without struggles and some luck.

Going to a robot competition means getting out of one’s normal life routine. In a few short days, unexpected events rapidly unfold in front of everyone’s eyes, followed by rapid and intense problem solving by the team members. In this post, I will mention just a few of these surprises.

Imagine you are sending a rover to Mars for a science mission. Your rover needs to be in other people’s hands for transportation and payload integration. It has to survive the rocket launch, months of interplanetary travel, and the short but horrifying landing process. It may not end up in the exact location on Mars as you hoped. Once it’s there, there is only so much you can do about the rover, and things start to break as the rover moves from one place to another …

URC was a bit like that. As any good robot challenge should, it has many elements of surprise. Some of these surprises are imposed by the physical world, like in a real Mars mission, and some are exclusively for first-timers like us.

Our launch vehicle was a brown UPS truck. We packed everything into five wooden crates and a cardboard box, almost 300 kg of gear in total. After traveling over the Earth’s surface for three days, the shipment arrived in Denver. A two-person team picked it up with a van and completed the remaining 7-hour journey.

Several parts broke during this trip, mostly 3D-printed ones. Luckily, we brought backups. Our 3D printer also had a broken motor mount. A team member (Tyler) zip-tied the motor in place so the printer could print a new part to fix the problem, practically creating a self-repairing 3D printer. To our surprise, all the steel bolts on the rover were heavily rusted, as if the UPS truck had taken a sea route.

Getting the robot ready for the first two missions (Equipment Servicing and Autonomy) on the first competition day took a long time. Some testing was pushed to after dark. At close to 11pm (1am Eastern time), things started to look really good, with everything working. While we were powering down the system, an (unpowered) GPS cable fell into the electronics box and came close to (but did not quite touch) the power distribution board. After a small flash under one of the darkest night skies in the US, everything went quiet.

The night of excitement renewed after the incident, and sleep was no longer important. Close inspection of the power board revealed that an inductor had melted down. The inline fuse was still intact, and there was no way to tell whether the electronics downstream (e.g., the computer) were still ok. Swapping out the power board with a backup took some careful deliberation and planning. Luckily, everything worked, and there were still over two hours left to sleep before we needed to get on the road.

It was a small miracle that the robot worked for the Equipment Servicing task without a chance for full system testing after putting everything back together. We probably wouldn’t have done much better than we did without a more in-depth understanding of the tasks, which could only be acquired by being there.

The Autonomy task was more … dramatic, for lack of a better word. The robot held its position (like in the picture below) for almost the entire duration of the 30-minute mission. At the very last moment, it took off and reached its first waypoint. For those of us outside the command station trailer, a motionless robot can trigger many emotions and speculations. The members inside the trailer, meanwhile, were in a frantic problem-solving mode. Clearly, time went by at very different rates just a few meters apart.

What turned out to be happening was that the terrain map loaded on the rover was centered around the habitats of MDRS. For the actual URC competition, the organizers split the four missions across three locations about 1 km apart. The starting point of the autonomy mission was just outside our prior map. Knowing it was not on the map, the robot did not know what to do. It took the team members just a few minutes to diagnose the problem, and then many trials and errors to place a blank map in the right location so the robot could move. It worked! I have seen many “autonomous” robots make up their minds not to go anywhere during competitions …; this was the first time I saw a robot change its mind (with some human help, of course).

With a bit more time and experience, we were better prepared for the next two missions on the following days: Science, and Extreme Retrieval and Delivery. There was no shortage of surprises and issues, but the team (and the rover) held up well.

An adventure like the URC trip teaches us the meaning of real-world engineering. To know a system works, we need to put it through the test of truly new environments and unexpected situations, out of the control of the robot designers. The thought that we shipped a rover thousands of kilometers away to one of the most uninhabitable deserts in the continental US and still managed to make it work in all four missions is quite satisfying. Many other teams, especially international ones, had to cope with even harder constraints, like designing the rover to fit in airline carry-on cases.

When a new problem arises during a competition, and it almost certainly will, it needs to be understood and solved quickly, either by the robot itself or by team members. Luckily, robot designers and programmers are trained problem solvers, although their performance can be further improved with more systematic approaches. For autonomous robots, problem solving is a much harder challenge and perhaps the greatest gap in current robotics research.

Here is a group photo of the team along with our judge (second from the left), taken in front of an MDRS habitat after the Extreme Retrieval and Delivery mission.

Mastering the Master’s Study at IRL

If, after spending 16+ years in school, you still feel like you need more schooling, then a master’s program may be suitable for you…

This is the “super-undergrad” view of master’s study. Another way of looking at your journey as a master’s student is to consider it a “pre-Ph.D.” Whether or not you want to get a Ph.D. later, you can build yourself into a capable engineer and an independent researcher in 2-2.5 years. If you take this latter view, the time spent on your master’s degree may become the most consequential period of your career. However, this only comes as the result of hard work and wanting to make a change.

The first challenge for a master’s student is the transition process. Life as an undergrad was quite structured. You come to classes, do homework, prepare for (one after another) exams, keep yourself fed, clean, and healthy, and squeeze in other things you want to do. Your schedule is largely dictated by the curriculum, the professors, and the computers. You don’t have to plan much; just by responding to endless deadlines and getting things done on time, you would probably do well in school. Grad school is different, though. Initially, about half of a master’s student’s time is spent on classes. The other half? Not so well defined (well, research is a pursuit of truth…). Just as you were given the responsibility of managing your free time when you went to college, now you are given the freedom (just another way to say responsibility) to manage half of your (potentially) productive time. Can you make it actually productive? Without the constant pressure of homework and exams, will you still learn as quickly and with as much focus as you could? You will be given guidance on research directions. The projects and papers do have deadlines. Beyond that, you will be responsible for managing your day-to-day research activities.

The second challenge for a master’s student is the transition process. When you were doing a class project, you could be pretty sure that the project was feasible and that the knowledge needed to complete it had mostly been discussed in class. Now, you will be swimming in the ocean of human knowledge and trying to solve open-ended problems. Do you have a sense of direction? Can you find the right tools for the right task? More experienced people will be there to help you, but you need to be venturous, diligent, and resilient.

The third challenge for a master’s student is the transition process. The projects are getting a bit bigger now. Big enough that a few all-nighters no longer matter (e.g., writing a thesis). You are going to have to learn the slow and steady way toward success. Finding milestones (e.g., finishing a literature review) and base camps (e.g., submitting a paper) becomes important. Not kicking the things you tend not to like doing (a synonym for not being good at) down the road is also important. You have to fight the principle of least effort with willpower!

The fourth challenge for a master’s student is the transition process. You wanted to be surrounded by smart, thoughtful, and knowledgeable people? Now you have your wish. When you hear people talking about things that are way over your head, what will be your response? Join the discussion, ask them to explain, and let people help you! When you feel ignorant, it’s probably when you are learning.

A bit of structure for IRL master’s students (an experiment starting summer 2022):

  • 1st 6 months – identify a paper topic and complete the literature review for the paper. Present it to the lab.
  • 2nd 6 months – identify the thesis topic and complete the literature review for the thesis. Present it to the lab.
  • 3rd 6 months – complete and submit a conference paper. Present it to the lab.
  • 4th 6 months – complete and defend the thesis. Work on a second paper if possible.
  • 5th 6 months – fall back if needed.

A few heuristics for IRL master’s students:

  • Aligning the thesis topic with an ongoing project and IRL’s general research vision makes life simpler.
  • The thesis writing must be an individual effort, but the research is not. Good collaboration makes everyone better off.
  • Converting your written paper(s) into chapters of your thesis makes thesis writing a lot less stressful.

P.S. I have wanted to go to grad school since I was a kid. This was partially because I admired scientists and partially due to my uncle’s influence. He told me that when I became a grad student, I would get to work in a lab, and only on things I liked to do. This turned out to be not entirely true. I did manage to get into a good grad school for my master’s study, and it made a big impact on me. I met many smart people and learned (collectively with them) that none of us can outsmart the rest of us. Each of us has talent distributed in very different ways. Each of us also has serious flaws distributed in very different ways. I guess understanding that was a part of me growing up. After I got my master’s degree, I felt liberated, for two reasons. First, I was no longer so confused about what I could do (unlike when I graduated from college). I felt I could be useful in some ways. Second, I was confident I could always find an engineering job with a salary that could keep me alive, but I didn’t have to. This gave me the freedom and the power to try things that may not pay back. I became a life tourist.

Building Precision Pollination Robots

We have been working on the topic of robotic precision pollination for a few years now and will continue down this path for the foreseeable future. By “precision,” we mean treating crops as individual plants, recognizing their individual differences and needs, much like how we would interact with people. If our robots can touch and precisely maneuver small, delicate flowers, they could be used to take care of plants in many different ways.

But why would anyone want to take over bees’ pollination job with robots? No, we don’t, and I would much rather see bees flying in and out of flowers. What we would like to have is a plan B in case there are not enough bees or other insects to support our food production. With a growing human population, the ongoing loss of bee colonies, and climate change, this could become a real threat. We also want to be able to pollinate flowers in places that bees either do not like or cannot survive in, such as confined indoor spaces (e.g., greenhouses, growth chambers, vertical agriculture settings, or on a different planet).

In our previous project, we designed BrambleBee to pollinate bramble (i.e., blackberry and raspberry) flowers. BrambleBee looks like a jumbo-sized bumblebee with a big arm, but it cannot fly. We did not want to mimic bees’ flying ability; instead, we learned from bees’ micro-hair structures and motions (thanks to our entomology team led by Dr. Yong-Lak Park) and used a custom-designed robotic hand to brush the flowers for precision pollen transfer.

BrambleBee served as a proof of concept, and it was fun to watch it work, but there are still many challenges. For example, each flower is unique, and there are many complex situations for a robot pollinator to handle (e.g., tightly clustered flowers, occlusion, deformable objects, plant motion, etc.). How to convert an experimental robot system into an effective agricultural machine that is accepted by growers is another major challenge. These are the research topics we will tackle with our next robot, StickBug.

Wait … I should say “robots,” because StickBug is not a single robot. It will be a multi-robot system with four agents (one mobile base and three two-armed robots moving on a vertical lift).

We have a talented, motivated, and diverse team that includes horticulturists (Dr. Nicole Waterland and her students), human-systems experts (Dr. Boyi Hu and his students from the University of Florida), and roboticists (Dr. Jason Gross and I, along with undergraduate and graduate students from #WVURobtics). This project will be open source, starting with sharing our proposal. If you have any suggestions on our approach or are interested in collaborating on the project, please feel free to contact us.

Attachment: NRI 2021 StickBug Proposal

Program: 2021 National Robotics Initiative (NRI) 3.0

NSF panel recommendation: Highly Competitive

Funding Agency: USDA/NIFA

Problem Solving: Getting Stuck?

Are you currently stuck on a hard problem? I am. I’m actually stuck on several problems, which I think is a good thing. Let me explain why.

Let’s first look at the relationship between people and problems. I can think of four possibilities:

  1. One person vs. one problem. Assuming this problem is solvable but very hard (i.e., there are only a few possible ways of solving it), what is the chance that this one person can find a solution? It’s possible, but it would take a lot of (relevant) skill, luck, and persistence. If a person is stuck on a problem, does this mean he or she is not a good problem solver? No; maybe just not a good fit, or not lucky enough, or they have not spent enough time on it (e.g., time was consumed by self-doubt instead). Also, some problems may not be a good fit for any one person; they would take a team with diverse backgrounds to solve.
  2. Multiple people vs. one problem. Things are a bit more promising here. Maybe someone will come up with something, another person can build on it, and so on. This is partially collective intelligence and partially just the tried-and-true brute-force method of increasing the number of trials. For example, if you invite enough people to a party, someone will bring a gift you actually wanted.
  3. One person vs. multiple problems. From the perspective of increasing trials, this is similar to the one above, without having to bother other people! If you keep several problems in (the back of) your mind, you may not be able to solve most of them most of the time, but you may get lucky occasionally on one of them. I often work in this mode, but I am aware of my limitations.
  4. Multiple people vs. multiple problems. I don’t fully understand this one yet, but to a limited degree this is how IRL operates.

Let’s now think about the problem of robot problem solving. Robots today are generally not good problem solvers, but we are trying to improve that.

  1. One robot vs. one problem. If the problem is what the robot was programmed for, it will work extremely well. Arguably, this is not robot problem solving, though, because the problem has already been solved. If the problem is not what the robot was prepared for, most likely it will get stuck. Yes, robots do get stuck quite often.
  2. Multiple robots vs. one problem. If each robot here makes its own decisions, this may become a robot swarm. A swarm is known to be able to solve problems not explicitly planned for, but we don’t fully understand it yet.
  3. One robot vs. multiple problems. If we stop optimizing around one particular cost/reward function and instead celebrate every meaningful thing a robot does, we may find novel ways for a robot to solve many different problems. Those problems and solutions may seem mostly irrelevant, but they could become useful at some point. There is much to be done to harvest these randomly gathered experiences.
  4. Multiple robots vs. multiple problems: If many robots operate like #3 above, I really have no clue what the implications would be. It would be fun to try, though.

Now back to us. You and I are particles in a giant Monte Carlo experiment called human society; that’s how I see it, at least. From a global perspective, it’s more important that someone solves society’s big problems than who solves them. If the system is well designed (that’s a big if, and itself one of the hardest problems), creative ideas will emerge one after another. This is the hope behind the multiple people vs. multiple problems argument. Individually, we don’t have control over when a good idea may come. But we can choose not to be totally stuck, and there may be ways to do that.

Venture into the Unknowns – My 2021 Research Statement

I decided a while ago to rewrite my research statement every couple of years to evaluate and refocus my directions, but it took until now to complete one…

I am a person with many interests. Lacking focus and depth, you may say, and the evidence is quite clear. I spent many years designing avionics, UAVs, robots, and telescopes; I worked on flight controls, sensor fusion, fault tolerance, and decision-making; I often wondered about the future on Earth and dreamed of mission concepts for exploring other planets and moons; I am also interested in humans’ interactions with robots, from safe co-existence and collaboration to influencing robots’ decisions and behaviors. When I started my lab in 2012, I called it the “Interactive Robotics Laboratory,” a catch-all name to cover things I might want to explore.

With this trait and experience, I found myself a natural at envisioning new robot systems, which earned me a small reputation. I have gone from being introduced as the guy “who worked on formation flight,” to the person “that won a NASA challenge,” to someone “who is trying to make a robot pollinator.”

I have asked myself numerous times: if I were to make one small difference to the world, what would it be? The answer has converged slowly over the years. I want to find innovative robotic solutions to some of the world’s big challenges (e.g., hunger, inequality, access to education, and exploring the unknowns). Every time I tried, there always seemed to be one puzzle piece missing: making robots resilient in the real world. That is, robots that actually work in new environments and under dynamic situations (e.g., around people), instead of just making cool demos on YouTube. I think something important is lacking in the field of robotics.

I call this “soft autonomy,” a term we made up and started using in 2014 to contrast with the rigid decision rules governing Cataglyphis, our most prized robot (pun intended). As a “probabilistic robot,” Cataglyphis was built to handle a variety of uncertainties (e.g., in localization, object recognition, terrain, and manipulation). Despite its success, Cataglyphis is a robot that runs on the designers’/programmers’ predictions of what the situation may become. Deviations from these predictions, or any surprises to Cataglyphis, often leave it confused.

Cataglyphis during the 2016 NASA Sample Return Robot Challenge. Photo credit: NASA/Joel Kowsky

Most roboticists working on the topic of “autonomy” today make a living dealing with known unknowns, and so do I. This means we spend a significant amount of time and effort identifying, modeling, and propagating the uncertainties in a problem as probability distributions (or beliefs), and try to make decisions that factor in these uncertainties. Many problems have been solved this way, and we have tried (or are trying) a few. These include attitude estimation for UAVs (not for humans…), terrain-aware navigation on Mars, active perception for flower pollination, and cooperative localization of space/underwater vehicles, among others. As we contentedly solve one problem of this type after another (e.g., getting publishable results and YouTube demos), we are knowingly ignoring a bigger problem. Every time we craft a detailed model for a source of uncertainty, the solution becomes more specialized and less flexible. Whenever the uncertainty assumptions are violated (call it the uncertainty of uncertainty?), there is no mechanism available for the robot to make productive decisions. In other words, the robots of today are overly confident, but are not truly autonomous.
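The “beliefs” mentioned above are easiest to see in a textbook-style example (a generic sketch in the spirit of standard probabilistic robotics, not our lab’s code): a 1D histogram filter that updates a robot’s position belief from a door-sensor measurement and then propagates that belief through a noisy motion step.

```python
# Textbook-style histogram filter: uncertainty is carried explicitly as a
# probability distribution (belief) over discrete positions in a cyclic world.

def normalize(b):
    s = sum(b)
    return [p / s for p in b]

def measure(belief, world, z, p_hit=0.6, p_miss=0.2):
    # Bayes update: weight each cell by how likely the measurement z is there.
    return normalize([b * (p_hit if cell == z else p_miss)
                      for b, cell in zip(belief, world)])

def move(belief, p_exact=0.8, p_under=0.1, p_over=0.1):
    # Propagate belief through a noisy one-step motion model (cyclic world):
    # the robot usually moves one cell, sometimes stays, sometimes overshoots.
    n = len(belief)
    return [p_exact * belief[(i - 1) % n]
            + p_over * belief[(i - 2) % n]
            + p_under * belief[i]
            for i in range(n)]

world = ["door", "wall", "door", "wall", "wall"]
belief = [0.2] * 5                       # start fully uncertain
belief = measure(belief, world, "door")  # sense a door: mass shifts to doors
belief = move(belief)                    # drive one cell: belief blurs right
```

Everything the robot “knows” lives in that belief vector; the weak spot the paragraph above points to is that the measurement and motion models (`p_hit`, `p_exact`, etc.) are themselves designer-chosen assumptions, and nothing in the filter tells the robot what to do when those assumptions are wrong.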

I can think of many “simple” creatures that are autonomous: earthworms, ants, bees, trees, and the list goes on. Intelligent? Maybe not so much, especially when they are alone; autonomous? Yes! So, what are some of the differences between our robots and, say, a worm? A robot has sensors and actuators, while a worm has a lot more of them. A robot has powerful computers; the worm? Maybe not (e.g., Caenorhabditis elegans, a simple kind of worm, has only 302 neurons and about 7,500 synapses), but it has a soft body that directly interacts with the environment for morphological computing. For a robot, we often design the hardware and then write the software; for a worm, there is probably no distinction between the two (and everything is soft!). A robot comes mostly from the top-down decisions of its designers/programmers, while the worm has gone through a fierce bottom-up competitive process for as long as we humans have.

It seems that a way, and there may be more than one way, toward achieving “autonomy” has yet to be found. In addition to working on perception and decision-making under uncertainty (the known unknowns), we are exploring two directions for dealing with the unknown unknowns: bottom-up designs and problem solving (e.g., swarm intelligence), and decision-making under the uncertainty of uncertainty. Sounds like spreading ourselves thin again? Yes; but hopefully not without a direction.

The Future of Remote Work? Maybe Humanoid Telepresence Robots Can Help

Tired of being stuck at home, working alone remotely? You are not alone in that sense! Since COVID-19 turned the world upside down, many of us have been forced to work from home. After a while, once we got used to it, working from home, or working remotely, is actually not all that bad. We spend less time in traffic and sometimes enjoy more flexibility. But there are important things missing, like face-to-face discussions and the ability to modify the environment (many jobs depend on these!). In short, we are missing out on the social and physical interactions.

Will remote work be the same as what we are doing today (e.g., meeting on Zoom), say, 20 years from now? Hopefully not…, OF COURSE NOT! So what may change?

Let’s envision a future hybrid workplace: for example, an office with local workers and a group of telepresence humanoid robots as “avatars” of remote workers. Whenever a remote worker needs to do something beyond a computer task, for example helping a customer or turning a knob, she/he may do so through one of the robots. The humanoids have articulated arms and bodies to support human-like interactions, e.g., during a “face-to-face” conversation. When an office worker or a customer puts on a pair of Augmented Reality glasses, the live image of the remote person would be overlaid on the robot. At home, the workers also feel they are physically experiencing the remote work environment, instead of feeling isolated.

My student Trevor Smith created this illustration in Gazebo using images of a VR treadmill and the robot Pepper.

Of course, a lot of research needs to be done for this dream to become a reality, but that’s what we roboticists are here for. Communication technology has allowed us to hear from a distance, then to see each other; maybe this time we will finally get to “travel, touch, feel, and experience” through the internet and robots? Sounds far-fetched, but not impossible.

A 2019 MIT report on the Work of the Future pointed out that “Ironically, digitalization has had the smallest impact on the tasks of workers in low-paid manual and service jobs. Those positions demand physical dexterity, visual recognition, face-to-face communications, and situational adaptability. Such abilities remain largely out of reach of current hardware and software but are readily accomplished by adults with moderate levels of education.” By focusing on labor-complementing instead of labor-substituting technology development, improving remote work may be a way of using robotics and AI to support middle-class workers (e.g., teachers, social workers, farmers, and factory workers) of the future.