The AI Echo Chamber

This is only the beginning…

Imagine a time when billions of people use large language models (LLMs) such as ChatGPT to help write everything from novels and academic articles to emails and social media posts. Many of these pieces will end up circulating on the internet and, over time, be used to train future LLMs. What might the consequences be?

Most people would probably agree that this creates an echo chamber. The feedback loop risks amplifying specific viewpoints while sidelining others, diminishing the diversity of thought and expression. Over time, the distinction between original human thought and AI-generated content could blur, making it harder to trace the origins of information, and of misinformation.

I think the greatest danger is that the change may happen on a vast scale without us detecting it. We are moving toward a parallel reality shaped significantly by AI, distinct from the reality in which these tools are absent. Such a shift would unfold on a societal scale, altering perceptions and biases without us noticing.

The questions then arise: can we detect and measure these shifts in reality? Can we set up probes, observatories, and experiments to quantify the impact of LLMs on our collective intelligence?

I don’t have answers to these questions, but I feel they might be important. Any suggestions?

Disclosure: This blog post was co-edited with ChatGPT4.

Problem-Solving in Robotics?

One of the primary obstacles impeding the widespread application of robots is their lack of problem-solving capabilities. Enabling robots to independently resolve unforeseen situations could facilitate the adoption of autonomous robots in a broad range of applications such as agriculture, healthcare, exploration, and environmental protection.

As robot designers and programmers, we have a tendency to break down and solve problems for the robots from our own perspectives. When developing a robot, we often translate our understanding of a problem (e.g., through a ‘problem statement’) into a set of structured ‘decision-making’ or ‘control’ algorithms, which are then programmed into the robot. As a result, the ‘autonomy’ of today’s robots is limited, and the resulting robot behaviors frequently turn out to be brittle and unnatural. A great challenge in robotics research is therefore to let the robots themselves play a bigger role in solving problems, so that when new problems arise, they are better equipped to address them.

I would like to propose the following research and development areas associated with problem solving in robotics:

  • Define problem solving in the robotics context. Unlike research on decision making, which has well-defined formulations such as the POMDP (Partially Observable Markov Decision Process), problem solving is currently not a well-recognized research topic in robotics.
  • Learning from nature. Most examples of problem solving come from nature, exhibited by people, insects, plants, and even individual cells interacting with their environments. The mechanisms that led to the problem-solving abilities of these creatures are not clear.
  • Swarm robotics. Another place where we may find examples of problem solving is the collective intelligence of natural swarms (e.g., in transportation, foraging, and construction) and robotic swarms. The emergence of sometimes unexpected global behaviors through local interaction rules can be interpreted as solutions to environmental challenges that no single agent in the swarm fully comprehends (see the sketch after this list).
  • Case-based reasoning, imitation learning, and transfer learning. How could past experiences offer guidance for solving new problems?
  • Large Language Models (LLMs). Can a pretrained LLM provide “common sense” to a robot’s problem-solving mechanisms?
  • Benchmark problems. How do we evaluate the progress made? Can we identify benchmark problems that are experimentally simple to set up yet challenging to resolve?
  • Ethical Robotics and Safety. What are the implications when robots can solve their own problems?
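
To make the swarm bullet concrete, here is a minimal toy sketch (my own illustration, with made-up parameters): each agent follows one local rule, moving slightly toward the average position of its neighbors, and a global behavior, aggregation, emerges that no individual agent was programmed to pursue.

```python
import random

# Toy swarm: each agent senses only neighbors within RADIUS and nudges
# itself toward their average position. No agent knows the global goal,
# yet clusters form. All parameters are made up.
NUM_AGENTS = 30
RADIUS = 2.0   # local sensing range
STEP = 0.1     # fraction of the gap to the neighbors' mean closed per step

positions = [random.uniform(0.0, 10.0) for _ in range(NUM_AGENTS)]

for _ in range(200):
    new_positions = []
    for p in positions:
        neighbors = [q for q in positions if abs(q - p) <= RADIUS]
        mean = sum(neighbors) / len(neighbors)  # includes self, never empty
        new_positions.append(p + STEP * (mean - p))
    positions = new_positions

print(f"final spread: {max(positions) - min(positions):.2f}")
# The agents typically collapse into a few tight clusters: a global
# "solution" that no individual rule ever mentioned.
```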

Depending on whether the research is inspired by human-like or primitive, creature-level problem-solving abilities, there could be very different pathways toward problem solving in robotics.

AI Neuroscience & AI Psychology?

When people on Earth don’t understand something, they create new disciplines. A lot of what we don’t understand comes from emergent behaviors, that is, when the whole is drastically different from its building blocks. We may know the blocks reasonably well, but not the complex things built with them. For example, economics is a field that helps us understand the collective outcomes of many interacting people. With ChatGPT and other large language models starting to show abilities we don’t quite know how to explain, other than that they emerged from tools we (thought we) knew, maybe it’s time for new fields? What about splitting AI research into “AI Neuroscience” and “AI Psychology”?

On Time and Research Productivity

What is a good heuristic linking the time spent and the research progress made?

If we use bean counting as an example, the progress made would be a function of talent, skill, and effective time spent. This is not as simple a relationship as it may look at first glance. Talent (e.g., good hand-eye coordination and dexterity) separates us to some degree, but skill can make up for most of the differences. Skill is also developed over time, as a function of the effective time spent on bean counting in the past. Notice the word effective here. Three people (A, B, and C) can spend the same amount of time counting beans, but:

  1. A spends 30% of the time wondering: is B more talented than I am?
  2. B spends 20% of the time managing the group.
  3. C is not thinking at all, just counting.

Who will have counted more beans, and gained more bean-counting skill, over that time?

However, research is not bean counting. If, instead, we use mountain climbing as an example, we may consider progress as a function of talent, skill, time spent, and the direction we take. If we make the poor choice of taking a longer route, we could be climbing fast and hard but still arrive late.

However, research is not mountain climbing. If, instead, we use mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, and luck. If we are lucky, we will find many (and good) mushrooms along the way with less time and effort. But luck is not something we have direct control over. The only thing we can do is increase the number of trials, e.g., explore more, which is also a function of the time spent. Also, it’s reassuring to think that no one can be unlucky all the time on a time scale of, say, 40 years.

However, research is not mushroom foraging by one person. That would ignore the bigger picture, e.g., the role played by others in one’s progress. If, instead, we use group mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, luck, and interaction. The time spent on teaching, making friends, brainstorming, and cooperating may pay off in ways we can’t anticipate.
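
If it helps to see the whole analogy in one place, here is a toy model (entirely made up; the numbers carry no empirical weight) that restates progress as a function of the factors above:

```python
def yearly_progress(talent, skill, effective_time, direction, luck_trials, interaction):
    """Toy model of the essay's factors. 'Luck' is modeled as the chance
    that at least one of several exploration trials pays off."""
    luck = 1.0 - (1.0 - 0.1) ** luck_trials  # more trials, better odds
    return talent * skill * effective_time * direction * interaction * (0.5 + luck)

# Skill itself compounds with past effective time spent:
skill = 1.0
for year in range(10):
    effective_time = 0.7              # fraction of time actually on research
    progress = yearly_progress(
        talent=1.0,
        skill=skill,
        effective_time=effective_time,
        direction=0.8,                # how well-chosen the route is
        luck_trials=5,                # exploration attempts this year
        interaction=1.2,              # boost from teaching/brainstorming/cooperation
    )
    skill += 0.1 * effective_time     # practice slowly builds skill
    print(f"year {year}: progress={progress:.2f}, skill={skill:.2f}")
```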

How do we know if we have been managing our time productively? We probably won’t know for sure. It usually takes years to develop a skill (e.g., a language, a musical instrument, writing), and the process is not linear. Your skill can stagnate for a long time before making the next jump. I would suggest following a simple heuristic: research progress, along with many of its contributing factors, has a positive correlation with the effective time spent. There is also a simple test. Just think back over the last week (while you can still remember): how much time did I spend on research-related activities such as reading, learning, thinking, doing, writing, teaching, and debating? Ok, for me, I worked quite hard last week (Oct 30 – Nov 05, 2022), but a good chunk of my time was spent attending meetings, replying to emails, grading homework, and dealing with logistics/paperwork. I also spent a good amount of time teaching and thinking (which were productive), but had little time to read and write, and even less time to use my hands for doing anything other than typing…

Why am I not a better researcher than I am now? That is probably the main reason. I can blame it on not having enough time, but that is probably just an excuse. Maybe it’s because there are too many things (other than research) that I chose not to give up? Or maybe the time I spent was just not effective enough. One piece of progress I have made: I learned, after a long while, not to waste time thinking that I am not good enough as a researcher.

RootBots: Sprawling Robot Networks for Lunar Resource Extraction

Lunar helium-3 (3He) mining has the potential to address the energy and environmental challenges on Earth. An essential fuel for future nuclear fusion reactors, 3He is extremely rare on Earth, with current US production below 2 kg/year, but is much more abundant on the lunar surface, with an estimated reserve of over one million tons. Just 40 tons of 3He, combined with deuterium, a resource available on Earth, could meet the current annual energy needs of the U.S.
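
As a rough sanity check on that last claim (my own back-of-the-envelope, not from the original post: I assume about 18.3 MeV released per D-3He fusion reaction and compare against roughly 4,000 TWh/yr of US electricity consumption; whether that counts as the full “energy need” depends on what is included):

```python
# Back-of-the-envelope: energy from 40 metric tons of helium-3,
# assuming D + 3He -> 4He + p releases about 18.3 MeV per reaction.
MEV_PER_REACTION = 18.3
J_PER_MEV = 1.602e-13
AVOGADRO = 6.022e23
MOLAR_MASS_HE3 = 3.016          # g/mol

mass_g = 40 * 1e6               # 40 tons in grams
atoms = mass_g / MOLAR_MASS_HE3 * AVOGADRO
energy_j = atoms * MEV_PER_REACTION * J_PER_MEV

us_electricity_j = 4e12 * 3.6e6 # ~4,000 TWh/yr (4e12 kWh, 3.6e6 J per kWh)
print(f"energy from 40 t of 3He: {energy_j:.2e} J")
print(f"that is {energy_j / us_electricity_j:.1f}x annual US electricity use")
```

This comes out to roughly 2e19 J, on the order of a year of US electricity, so the claim is in the right ballpark before accounting for conversion losses.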

Mining 3He on the Moon would be quite different from traditional mining on Earth. Over time, 3He, along with other volatiles, was implanted into the lunar regolith by the solar wind. This means that surface mining operations would need to cover vast areas at shallow depths. Agitating the regolith leads to a significant release of the volatiles, which presents both a challenge and an opportunity. Other challenges associated with large-scale space industrialization, such as the limited human presence and the high reliability/autonomy requirements, mean that radically different mining concepts need to be developed.

Inspiration: Vascular plants use roots to anchor themselves, to explore and extract resources from the soil, to store nutrients, and to compete with others. The roots grow by producing new cells, regulated by both extrinsic and intrinsic stimuli. Sensing of extrinsic factors, such as gravity, barriers, light, air, water, and nutrients, directs roots to grow in favorable directions. Hormones (i.e., intrinsic stimuli) such as cytokinins and auxin (often with opposite effects), as part of both local and long-distance signaling, also help coordinate root development. Plant roots are time-tested, real-world-proven designs for functionality, intelligence, and resilience.

Vision: Enter rootbots, sprawling networks of robot organisms for performing planetary resource extraction tasks at an unprecedented scale in the second half of the 21st century. A rootbot will be made of modular, interlocking smart components called cells. Each cell interacts with the environment and makes its own decisions, influenced by local stimuli and by the decisions of nearby cells. One important decision is whether a cell should stay in place or relocate. If a cell decides to move, it waits for a mobile agent, called a transporter, to come by and pick it up. Traveling only on top of the existing “root” network of cells, the transporters carry cells and deploy them where the cells want to go. With two decentralized, interacting robot swarms (i.e., the cells and the transporters), a rootbot can grow like a plant root, exploring and exploiting favorable conditions.
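
A minimal sketch of the cell-level stay-or-relocate rule described above (purely illustrative; the stimulus model, thresholds, and ring topology are hypothetical and were not part of the proposal):

```python
import random

class Cell:
    """A rootbot cell decides locally whether to stay or relocate, based
    on a local stimulus (e.g., sensed volatiles) and on what its
    neighbors decided. All thresholds here are hypothetical."""
    def __init__(self):
        self.stimulus = random.random()  # local resource signal, 0..1
        self.wants_to_move = False

    def decide(self, neighbor_moves):
        # Stay where resources are rich; move if the spot is poor, with a
        # nudge from neighbors' decisions (a crude local coupling).
        pressure = sum(neighbor_moves) / max(len(neighbor_moves), 1)
        self.wants_to_move = self.stimulus < 0.3 + 0.2 * pressure

cells = [Cell() for _ in range(20)]
for _ in range(5):  # a few rounds of local decision-making
    moves = [c.wants_to_move for c in cells]
    for i, c in enumerate(cells):
        # Neighbors: adjacent cells along the root (a ring, for simplicity).
        c.decide([moves[i - 1], moves[(i + 1) % len(cells)]])

# Transporters would then pick up the cells flagged for relocation.
print(sum(c.wants_to_move for c in cells), "cells waiting for a transporter")
```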

Ok, the discussion above was adapted from an unsuccessful NASA NIAC proposal. Feedback from NASA? “Overall, while there were many interesting aspects suggested, the proposal overreached without enough justification for any one concept, especially given the scale proposed.”

Problem Solving: Getting Stuck?

Are you currently stuck on a hard problem? I am. I’m actually stuck on several problems, which I think is a good thing. Let me explain why.

Let’s first look at the relationship between people and problems. I can think of four possibilities:

  1. One person vs. one problem. Assuming this problem is solvable but very hard (i.e., there are only a few possible ways of solving it), what is the chance that this one person can find a solution? It’s possible, but it would take a lot of (relevant) skill, luck, and persistence. If a person is stuck on a problem, does that mean he or she is not a good problem solver? No; maybe they are just not a good fit, not lucky enough, or have not spent enough time on it (e.g., the time was consumed by self-doubt instead). Also, some problems may not be a good fit for any one person; they would take a team with diverse backgrounds to solve.
  2. Multiple people vs. one problem. Things are a bit more promising here. Maybe someone comes up with something, another person builds on it, and so on. This is partially collective intelligence and partially just the tried-and-true brute-force method of increasing the number of trials. For example, if you invite enough people to a party, someone will bring a gift you actually wanted.
  3. One person vs. multiple problems. From the perspective of increasing trials, this is similar to the one above, without having to bother other people! If you keep several problems in (the back of) your mind, you may not be able to solve most of them most of the time, but you may occasionally get lucky on one of them (see the sketch after this list). I often work in this mode, but I am aware of my limitations.
  4. Multiple people vs. multiple problems. I don’t fully understand this one yet, but to a limited degree this is how IRL (our Interactive Robotics Laboratory) operates.
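
The “increase the number of trials” intuition behind items 2 and 3 can be made concrete with a one-line formula: if each independent attempt succeeds with probability p, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n. A quick illustration (the per-attempt probability is made up):

```python
# If each independent attempt at a hard problem succeeds with probability
# p, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n,
# whether the attempts come from many people on one problem or one person
# juggling many problems. The value of p below is made up.
p = 0.02
for n in (1, 10, 50, 100):
    print(f"n={n:3d} attempts -> P(at least one success) = {1 - (1 - p) ** n:.2f}")
```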

Let’s now think about the problem of robot problem solving. Robots today are generally not good problem solvers, but we are trying to improve that.

  1. One robot vs. one problem. If the problem is what the robot was programmed for, it will work extremely well. Arguably this is not robot problem solving, though, because the problem has already been solved. If the problem is not what the robot was prepared for, it will most likely get stuck. Yes, robots do get stuck quite often.
  2. Multiple robots vs. one problem. If each robot makes its own decisions, this may become a robot swarm. A swarm is known to be able to solve problems not explicitly planned for, but we don’t fully understand how yet.
  3. One robot vs. multiple problems. If we stop optimizing around one particular cost/reward function and instead celebrate every meaningful thing a robot does, we may find novel ways for a robot to solve many different problems. Most of those problems and solutions are irrelevant at the time but could become useful at some point (see the sketch after this list). There is much to be done to harvest these randomly gathered experiences.
  4. Multiple robots vs. multiple problems: If many robots operate like #3 above, I really have no clue what the implications would be. It would be fun to try, though.
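
Item 3 resembles what the evolutionary computation community calls novelty search: instead of scoring progress toward a single goal, keep anything that behaves differently from what has been seen before. A minimal sketch (my framing, with a random stand-in for real robot behaviors):

```python
import random

def behavior_distance(a, b):
    """Distance between two behavior descriptors (here, 2D points)."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

archive = []             # every "meaningful thing the robot did" so far
NOVELTY_THRESHOLD = 0.3  # made-up value

for trial in range(100):
    # Stand-in for a robot behavior; in practice this would be a
    # descriptor extracted from a real episode (e.g., final position).
    behavior = (random.random(), random.random())
    # Novelty = distance to the nearest behavior already in the archive.
    novelty = min((behavior_distance(behavior, b) for b in archive),
                  default=float("inf"))
    if novelty > NOVELTY_THRESHOLD:
        archive.append(behavior)  # celebrate and keep anything new

print(f"collected {len(archive)} distinct experiences to harvest later")
```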

Now back to us. You and I are participants in a giant Monte Carlo experiment called human society; that’s how I see it, at least. From a global perspective, it’s more important that someone solves society’s big problems than who solves them. If the system is well designed (that’s a big if, and itself one of the hardest problems), creative ideas will emerge one after another. This is the hope behind the multiple people vs. multiple problems argument. Individually, we don’t control when a good idea may come. But we can choose not to be totally stuck, and there may be ways to do that.

Turning Plants into Animals? Plantimals?

Do plants have feelings? Do they have desires? Do they have friends?

They probably do, but we cannot tell easily, because plants are quiet and don’t travel very often (other than through their offspring). There is little the plants can do because they are “landlocked” …

Can we free the plants? What if we give plants mobility so they can go wherever they want to, like animals?

Let’s say when a plant needs water, it goes to get water; when it needs sunlight, it moves out of the shade; when it needs pollinators, it gets close to bee hives; and when it needs friends, it goes to hang out with other plants…

It’s conceptually quite simple: first, we put motors and wheels on the plant pots; second, we plug sensors into the plants and the soil to detect what the plants need/want; finally, we connect the collected signals to the motors to control the pots. Ta da!
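
Here is a minimal sketch of that sense-to-motor loop (the sensor and motor functions are hypothetical stubs; real hardware drivers would replace them, and the thresholds are made up):

```python
import random
import time

def read_soil_moisture():  # stub: a real driver would query a soil probe
    return random.uniform(0.0, 1.0)

def read_light_level():    # stub: a real driver would query a light sensor
    return random.uniform(0.0, 1.0)

def drive_toward(target):  # stub: a real driver would command the pot's motors
    print(f"rolling toward the {target}...")

# The whole "plantimal" loop: sense what the plant might need, then move
# the pot accordingly. The thresholds are made up.
for _ in range(3):
    if read_soil_moisture() < 0.3:
        drive_toward("watering station")
    elif read_light_level() < 0.4:
        drive_toward("sunny spot")
    else:
        drive_toward("other plants")  # time to socialize
    time.sleep(1)
```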

It’s of course not that easy, but not entirely impossible either (I actually know how to do the first and third steps above 😊). Wouldn’t it be fun to watch plants make their own decisions, learn to control themselves, compete and cooperate with each other, and have more control over their own fate? Would you like a plant more if it came to you for a treat once in a while? Would your garden be more beautiful if its layout changed every minute, all decided by the plants themselves?

Now, if only someone could help me figure out what these plants want…

Getting Tossed Around on the Moon?

Well, I still haven’t gotten over the throwing-and-catching idea…

Have you seen the astronaut bunny-hopping videos from the Moon? Gravity is a lot less there (0.166 g), so things can fly higher with less effort. There is no air there either, so objects fly more predictably and are easier to catch.

Let’s say we have a network of robotic catching and tossing stations on the Moon; objects would just hop up and down to get places. By objects, I mean equipment, construction materials, and astronauts.
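
Some quick vacuum ballistics show why the Moon is friendly to this idea (standard projectile formula; the launch speed is an arbitrary example):

```python
import math

def hop_range(speed, angle_deg, g):
    """Vacuum projectile range: R = v^2 * sin(2*theta) / g."""
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / g

v = 20.0                      # launch speed in m/s (arbitrary example)
g_earth, g_moon = 9.81, 1.62  # surface gravity in m/s^2

print(f"Earth hop: {hop_range(v, 45, g_earth):6.1f} m (before air drag)")
print(f"Lunar hop: {hop_range(v, 45, g_moon):6.1f} m (no air, exactly ballistic)")
# The same toss carries about 6x farther on the Moon, and with no
# atmosphere the trajectory is fully predictable, which is what makes
# robotic catching plausible.
```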

Astronauts? Yes, especially space tourists. Wouldn’t it be fun to be thrown off a cliff, and get (gently) caught by a robot later?

A Swarm of One Robot?

Lately I’ve been looking at things a bit differently. Instead of seeing the big picture, I started to notice the trees in a forest…

Take this happy squirrel as an example: many of us see it as an autonomous, smart, and cute creature, capable of doing things we hope our robots could do. If we were to copy the design of a squirrel and build a squirrel-inspired robot, we would put together a design with similar biomechanics and program it to mimic the observed squirrel behaviors. If this didn’t work, we might blame it on not having good/similar materials or actuators, or not having good enough models of a squirrel’s body and mind.

This is a top-down way of copying nature’s designs, but we can look from the opposite direction as well. A squirrel is made of billions of cells, and the interaction of these cells shapes almost everything a squirrel does, theoretically speaking. Maybe a squirrel is not a good example here, as it’s too complicated for us to understand its behaviors from a bottom-up perspective. If we look at plants instead, a quick YouTube search leads to many cool time-lapse videos of growing vines and roots, and it would be hard to argue that these are not intelligent creatures making their own decisions. These high-level behaviors emerge from the local interactions of numerous cells. Trying to copy the behaviors without going through the bottom-up process (i.e., swarm #1) may not always be fruitful. I think people making animated movies understood this a long time ago. It’s like drawing clouds, snowflakes, and shadows by an artist’s hand vs. using a physics engine.

Even if we don’t look at a squirrel at the cellular level, we can still see one as multiples. Have you watched the movie Groundhog Day? In the movie, every day is the same day for everyone, except the main character, Phil. For a squirrel, every day is a new day, but more or less like the previous days. A squirrel can try a different strategy each day, or copy a previous one and make adjustments based on lessons learned. In that sense, a month for a squirrel may be considered a swarm (#2) of 30 interacting squirrels, each with one day of experience.

Now, at a particular time, when a squirrel needs to make a particular decision (e.g., where to go foraging?), could the decision also be made by a swarm instead of a single decision-maker in the squirrel’s mind? It’s possible. There may be many rival thoughts (swarm #3, with some thoughts perhaps coming from swarm #2), and the one that wins out at a given moment takes control of the squirrel’s body. At a different time, another opinion may prevail. This is like several drivers fighting for control of a bus, or political parties competing for influence over a country. Chaotic, maybe, but not without merit.
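
Swarm #3 maps nicely onto classic behavior arbitration in robotics: candidate behaviors bid for the body, and the strongest bid at each moment wins. A toy sketch (my illustration; the behaviors and urgency signals are made up):

```python
import random

# Each "thought" proposes an action with an urgency score; at every tick
# the most urgent thought takes control of the body. The behaviors and
# urgency signals are made up.
thoughts = {
    "forage_at_oak":    lambda: random.uniform(0.0, 1.0),
    "forage_at_feeder": lambda: random.uniform(0.0, 1.0),
    "hide_from_hawk":   lambda: random.uniform(0.0, 1.2),  # fear can bid higher
}

for tick in range(5):
    bids = {name: urgency() for name, urgency in thoughts.items()}
    winner = max(bids, key=bids.get)  # winner-take-all arbitration
    print(f"tick {tick}: '{winner}' controls the body (bid {bids[winner]:.2f})")
```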

So, can we make a swarm of one robot? I think so. We may actually be able to make three (or more) swarms out of one robot!

Squirrel drawing credit: Shutterstock.com 228696961

Venture into the Unknowns – My 2021 Research Statement

A while ago I decided to rewrite my research statement every couple of years to evaluate and refocus my directions, but it took until now to complete one…

I am a person with many interests. Lacking focus and depth, you may say, and the evidence is quite clear. I spent many years designing avionics, UAVs, robots, and telescopes; I worked on flight control, sensor fusion, fault tolerance, and decision-making; I often wondered about the future on Earth and dreamed up mission concepts for exploring other planets and moons; I am also interested in humans’ interactions with robots, from safe coexistence and collaboration to influencing robots’ decisions and behaviors. When I started my lab in 2012, I called it the “Interactive Robotics Laboratory,” a catch-all name to cover anything I might want to explore.

With this trait and experience, I found myself a natural at envisioning new robot systems, which earned me a small reputation. I have gone from being introduced as the guy “who worked on formation flight,” to the person “who won a NASA challenge,” to someone “who is trying to make a robot pollinator.”

I have asked myself numerous times: if I were to make one small difference to the world, what would it be? The answer has converged slowly over the years. I want to find innovative robotic solutions to some of the world’s big challenges (e.g., hunger, inequality, access to education, and exploring the unknowns). Every time I tried, there always seemed to be one puzzle piece missing: making robots resilient in the real world. That is, robots that actually work in new environments and under dynamic situations (e.g., around people), instead of just producing cool demos for YouTube. I think something important is lacking in the field of robotics.

I call this “soft autonomy,” a term we made up and have used since 2014 to contrast with the rigid decision rules governing Cataglyphis, our most prized robot (pun intended). As a “probabilistic robot,” Cataglyphis was built to handle a variety of uncertainties (e.g., in localization, object recognition, terrain, and manipulation). Despite its success, Cataglyphis is a robot that runs on its designers’/programmers’ predictions of what situations may arise. Deviations from these predictions, or any surprises, often leave it confused.

Cataglyphis during the 2016 NASA Sample Return Robot Challenge. Photo credit: NASA/Joel Kowsky

Most roboticists working on the topic of “autonomy” today make a living dealing with known unknowns, and so do I. This means we spend a significant amount of time and effort identifying, modeling, and propagating the uncertainties in a problem as probability distributions (or beliefs), and try to make decisions that factor in these uncertainties. Many problems have been solved this way, and we have tried (or are trying) a few: attitude estimation for UAVs (not for humans…), terrain-aware navigation on Mars, active perception for flower pollination, and cooperative localization of space/underwater vehicles, among others. As we contentedly solve one problem of this type after another (e.g., getting publishable results and YouTube demos), we are knowingly ignoring a bigger problem. Every time we craft a detailed model for a source of uncertainty, the solution becomes more specialized and less flexible. Whenever the uncertainty assumptions are violated (call it the uncertainty of uncertainty?), there is no mechanism for the robot to make productive decisions. In other words, today’s robots are overly confident, but not truly autonomous.
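
For readers unfamiliar with the “beliefs” mentioned above, this is the basic machinery: maintain a probability distribution over states and update it with each noisy measurement. A generic textbook sketch (not Cataglyphis’s actual code; the sensor model is hypothetical):

```python
# Discrete Bayes filter measurement update: maintain a probability
# (a "belief") over states and sharpen it with each noisy observation.
belief = {"at_rock": 0.5, "at_crater": 0.5}      # prior over robot location

# Hypothetical sensor model: P(camera reports "rock" | true state)
likelihood = {"at_rock": 0.8, "at_crater": 0.3}

# Update after the camera reports "rock":
unnormalized = {s: belief[s] * likelihood[s] for s in belief}
total = sum(unnormalized.values())
belief = {s: p / total for s, p in unnormalized.items()}

print(belief)  # at_rock ~0.73, at_crater ~0.27: the belief has sharpened
```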

I can think of many “simple” creatures that are autonomous: earthworms, ants, bees, trees; the list goes on. Intelligent? Maybe not so much, especially when they are alone. Autonomous? Yes! So, what are some of the differences between our robots and, say, a worm? A robot has sensors and actuators, while a worm has a lot more of them. A robot has powerful computers; the worm? Maybe not (e.g., Caenorhabditis elegans, a simple kind of worm, has only 302 neurons and about 7,500 synapses), but it has a soft body that directly interacts with the environment for morphological computing. For a robot, we often design the hardware and then write the software; for a worm, there is probably no distinction between the two (and everything is soft!). A robot comes mostly from the top-down decisions of designers/programmers, while the worm has gone through a fierce bottom-up competition for as long as we humans have.

It seems that a way (and there may be more than one) toward achieving “autonomy” has not yet been found. In addition to working on perception and decision-making under uncertainty (the known unknowns), we are exploring two directions for dealing with the unknown unknowns: bottom-up designs and problem solving (e.g., swarm intelligence), and decision-making under the uncertainty of uncertainty. Sounds like being spread thin again? Yes, but hopefully not without a direction.