The AI Echo Chamber

This is only the beginning…

Imagine a time when billions of people use large language models (LLMs) such as ChatGPT to help write everything from novels and academic articles to emails and social media posts. Many of these pieces will end up circulating on the internet and, over time, be used to train the LLMs. What might the consequences be?

Most people would probably agree that this creates an echo chamber. The feedback loop risks amplifying specific viewpoints while sidelining others, diminishing the diversity of thought and expression. Over time, the distinction between original human thought and AI-generated content could blur, making it harder to trace the origins of information—and misinformation.

I think the greatest danger is that the change may happen on a vast scale without us detecting it. We are moving toward a parallel reality shaped significantly by AI, distinct from a reality where these tools are absent. Such a shift would emerge on a societal scale, altering perceptions and biases without us noticing.

The questions then arise: Can we detect and measure these shifts in reality? Can we set up probes, observatories, and experiments to quantify the impact of LLMs on our collective intelligence?

I don't have an answer to these questions, but I feel they might be important. Any suggestions?
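
To make the question a bit more concrete, here is one naive "probe" one could imagine: tracking a crude lexical-diversity statistic of public text over time and watching for drift. This is only a toy sketch; the yearly samples below are made-up placeholders, and a serious study would need careful corpus construction and controls.

```python
# A toy "probe" for homogenization: does a simple diversity statistic of
# public text drift over time? All data here are invented placeholders.

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude diversity measure."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical yearly samples of online text (placeholders, not real data).
corpus_by_year = {
    2020: "the quick brown fox jumps over the lazy dog",
    2025: "the quick brown fox jumps over the quick brown fox",
}

for year, sample in sorted(corpus_by_year.items()):
    print(year, round(type_token_ratio(sample), 3))  # a falling ratio might hint at homogenization
```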

Disclosure: This blog post was co-edited with ChatGPT4.

AI Neuroscience & AI Psychology?

When people on Earth don't understand something, they create new disciplines. A lot of what we don't understand comes from emergent behaviors, that is, the whole is drastically different from the building blocks. We may know the blocks reasonably well, but not the complex things built with them. For example, economics is a field that helps us understand the collective outcomes of many interacting people. Now that ChatGPT and other large language models have started showing abilities we don't quite know how to explain, other than that they emerged from tools we (thought we) knew, maybe it's time for new fields? What about splitting AI research into "AI Neuroscience" and "AI Psychology"?

RootBots: Sprawling Robot Networks for Lunar Resource Extraction

Lunar helium-3 (3He) mining has the potential to address the energy and environmental challenges on Earth. An essential fuel for future nuclear fusion reactors, 3He is extremely rare on Earth, with current US production below 2 kg/year, but it is much more abundant on the lunar surface, with an estimated reserve of over one million tons. Just 40 tons of 3He, combined with deuterium, a resource available on Earth, could meet the current annual energy needs of the U.S.

Mining 3He on the Moon would be quite different from traditional mining on Earth. Over time, 3He, along with other volatiles, was implanted into the lunar regolith by the solar wind. This means that surface mining operations would need to cover vast areas at shallow depths. Agitating the regolith leads to a significant release of the volatiles, which presents both a challenge and an opportunity. Other challenges associated with large-scale space industrialization, such as the limited human presence and the high reliability/autonomy requirements, mean that radically different mining concepts need to be developed.

Inspiration: Vascular plants use roots to anchor themselves, to explore and extract resources from the soil, to store nutrients, and to compete with others. The roots grow by producing new cells, regulated by both extrinsic and intrinsic stimuli. Sensing of extrinsic factors, such as gravity, barriers, light, air, water, and nutrients, directs roots to grow in favorable directions. Hormones (i.e., intrinsic stimuli) such as cytokinins and auxin (with often opposite effects), as part of both local and long-distance signaling, also help coordinate root development. Plant roots are time-tested, real-world-proven designs for functionality, intelligence, and resilience.

Vision: Enter rootbots, sprawling networks of robot organisms that perform planetary resource extraction at an unprecedented scale in the second half of the 21st century. A rootbot would be made of modular, interlocking smart components called cells. Each cell interacts with the environment and makes its own decisions, influenced by local stimuli and the decisions of nearby cells. One important decision is whether a cell should stay in place or relocate. If a cell decides to move, it waits for a mobile agent, called a transporter, to come by and pick it up. Traveling only on top of the existing "root" network of cells, the transporters carry cells and deploy them where the cells want to go. With two decentralized, interacting robot swarms (i.e., the cells and the transporters), a rootbot can grow like a plant root, exploring and exploiting favorable conditions.
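
For the curious, here is a minimal sketch of what the stay-or-relocate rule could look like in code. It is an illustration in the spirit of the concept, not the proposal's actual algorithm: the thresholds and the "neighbor pressure" scheme below are assumptions.

```python
import random

# A hypothetical stay-or-relocate rule for a rootbot cell. Thresholds,
# weights, and the voting scheme are invented for illustration only.

class Cell:
    def __init__(self, resource_level):
        self.resource_level = resource_level  # local stimulus, e.g., sensed volatile richness
        self.neighbors = []                   # nearby cells whose decisions influence ours
        self.wants_to_move = False

    def decide(self, stay_threshold=0.5, neighbor_weight=0.3):
        """Stay if the local site looks productive; otherwise flag for pickup.
        Neighbor decisions nudge the choice, mimicking local signaling."""
        pressure = 0.0
        if self.neighbors:
            moving = sum(1 for n in self.neighbors if n.wants_to_move)
            pressure = neighbor_weight * moving / len(self.neighbors)
        self.wants_to_move = (self.resource_level - pressure) < stay_threshold
        return "relocate" if self.wants_to_move else "stay"

def transporter_pass(cells):
    """A transporter only services cells that have flagged themselves."""
    return [c for c in cells if c.wants_to_move]

if __name__ == "__main__":
    random.seed(1)
    cells = [Cell(random.random()) for _ in range(10)]
    for c in cells:
        c.neighbors = random.sample([x for x in cells if x is not c], 3)
    for c in cells:
        c.decide()
    print(f"{len(transporter_pass(cells))} of {len(cells)} cells requested transport")
```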

OK, the discussion above was adapted from an unsuccessful NASA NIAC proposal. Feedback from NASA? "Overall, while there were many interesting aspects suggested, the proposal overreached without enough justification for any one concept, especially given the scale proposed."

Building Precision Pollination Robots

We have been working on the topic of robotic precision pollination for a few years now and will continue down this path for the foreseeable future. By "precision," we mean treating crops as individual plants, recognizing their individual differences and needs, much like how we interact with people. If our robots can touch and precisely maneuver small, delicate flowers, they could be used to take care of plants in many different ways.

But why would anyone want to take over bees' pollination job with robots? We don't, and I would much rather see bees flying in and out of flowers. What we would like to have is a plan B in case there are not enough bees or other insects to support our food production. With a growing human population, ongoing bee colony losses, and climate change, this could become a real threat. We also want to be able to pollinate flowers in places that bees either dislike or cannot survive in, such as confined indoor spaces (e.g., greenhouses, growth chambers, and vertical agriculture settings) or on a different planet.

In our previous project, we designed BrambleBee to pollinate bramble (i.e., blackberry and raspberry) flowers. BrambleBee looks like a jumbo-sized bumblebee with a big arm, but it cannot fly. We did not want to mimic bees' flying ability; instead, we learned from bees' micro-hair structures and motions (thanks to our entomology team led by Dr. Yong-Lak Park) and used a custom-designed robotic hand to brush the flowers for precision pollen transfer.

BrambleBee served as a proof of concept, and it was fun to watch it work, but many challenges remain. For example, each flower is unique, and there are many complex situations for a robot pollinator to handle (e.g., tightly clustered flowers, occlusion, deformable objects, plant motion, etc.). How to convert an experimental robot system into an effective agricultural machine that growers will accept is another major challenge. These are the research topics we will tackle with our next robot, StickBug.

Wait… I should say "robots," because StickBug is not a single robot. It will be a multi-robot system with four agents (one mobile base and three two-armed robots moving on a vertical lift).

We have a talented, motivated, and diverse team that includes horticulturists (Dr. Nicole Waterland and her students), human-systems experts (Dr. Boyi Hu and his students from the University of Florida), and roboticists (Dr. Jason Gross and me, along with undergraduate and graduate students from #WVURobotics). This project will be open sourced, starting with sharing our proposal. If you have any suggestions on our approach or are interested in collaborating on the project, please feel free to contact us.

Attachment: NRI 2021 StickBug Proposal

Program: 2021 National Robotics Initiative (NRI) 3.0

NSF panel recommendation: Highly Competitive

Funding Agency: USDA/NIFA

Problem Solving: Getting Stuck?

Are you currently stuck on a hard problem? I am. I’m actually stuck on several problems, which I think is a good thing. Let me explain why.

Let’s first look at the relationship between people and problems. I can think of four possibilities:

  1. One person vs. one problem. Assuming this problem is solvable but very hard (i.e., there are only a few possible ways of solving it), what would be the chance that this one person finds a solution? It's possible, but it would take a lot of (relevant) skill, luck, and persistence. If a person is stuck on a problem, does this mean he or she is not a good problem solver? No; maybe they are just not a good fit for it, or were not lucky enough, or have not spent enough time on it (e.g., time was consumed by self-doubt instead). Also, some problems may not be a good fit for any one person; they would take a team with diverse backgrounds to solve.
  2. Multiple people vs. one problem. Things are a bit more promising here. Maybe someone will come up with something, another person can build on it, and so on. This is partially collective intelligence and partially just the tried-and-true brute-force method of increasing the number of trials (see the little simulation after this list). For example, if you invite enough people to a party, someone will bring a gift you actually wanted.
  3. One person vs. multiple problems. From the perspective of increasing trials, this is similar to the one above, without having to bother other people! If you keep several problems in (the back of your) mind, you may not be able to solve most of them most of the time, but you may occasionally get lucky on one of them. I often work in this mode, but I am aware of my limitations.
  4. Multiple people vs. multiple problems. I don’t fully understand this one yet, but to a limited degree this is how IRL operates.
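
Before moving on to robots, here is a tiny numerical check of the "more trials" intuition from cases 2 and 3. The per-attempt success probability is made up purely for illustration.

```python
import random

# How much does adding independent attempts (people or problems) help?
# Compare a Monte Carlo estimate against the closed form 1 - (1 - p)^n.

def at_least_one_success(num_trials, p_single, runs=50_000):
    """Estimate P(at least one of num_trials independent attempts succeeds)."""
    hits = sum(
        any(random.random() < p_single for _ in range(num_trials))
        for _ in range(runs)
    )
    return hits / runs

p = 0.05  # a hard problem: each attempt has a 5% chance (made-up number)
for n in (1, 5, 20):
    estimate = at_least_one_success(n, p)
    exact = 1 - (1 - p) ** n
    print(f"{n:2d} attempts: simulated {estimate:.3f}, exact {exact:.3f}")
```

With twenty independent attempts, the chance that someone solves the problem climbs from 5% to about 64%, which is the whole point of inviting more people to the party.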

Let’s now think about the problem of robot problem solving. Robots today are generally not good problem solvers, but we are trying to improve that.

  1. One robot vs. one problem. If the problem is what the robot was programmed for, it would work extremely well. Arguably, this is not robot problem solving, though, because the problem has already been solved. If the problem is not what the robot was prepared for, it would most likely get stuck. Yes, robots do get stuck quite often.
  2. Multiple robots vs. one problem. If each robot makes its own decisions, this may become a robot swarm. A swarm is known to be able to solve problems it was not explicitly planned for, but we don't fully understand how yet.
  3. One robot vs. multiple problems. If we stop optimizing around one particular cost/reward function and instead celebrate every meaningful thing a robot does, we may find novel ways for a robot to solve many different problems. Most of those problems and solutions are irrelevant at the time, but they could become useful at some point. There is much to be done to harvest these randomly gathered experiences.
  4. Multiple robots vs. multiple problems: If many robots operate like #3 above, I really have no clue what the implications would be. It would be fun to try, though.

Now back to us. You and I are particles in a giant Monte Carlo experiment called human society; that's how I see it, at least. From a global perspective, it's more important that someone solves society's big problems than who solves them. If the system is well designed (that's a big if, and itself one of the hardest problems), creative ideas would emerge one after another. This is the hope behind the multiple people vs. multiple problems argument. Individually, we don't have control over when a good idea may come. But we can choose not to be totally stuck, and there may be ways to do that.

Turning Plants into Animals? Plantimals?

Do plants have feelings? Do they have desires? Do they have friends?

They probably do, but we cannot tell easily, because plants are quiet and don’t travel very often (other than through their offspring). There is little the plants can do because they are “landlocked” …

Can we free the plants? What if we give plants mobility so they can go wherever they want to, like animals?

Let’s say when a plant needs water, it goes to get water; when it needs sunlight, it moves out of the shade; when it needs pollinators, it gets close to bee hives; and when it needs friends, it goes to hang out with other plants…

It's conceptually quite simple: first, we put motors and wheels on the plant pots; second, we plug sensors into the plants and the soil to detect what the plants need/want; finally, we connect the collected signals to the motors to control the pots. Ta-da!
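
Here is a minimal sketch of that third step, assuming hypothetical sensor and motor interfaces. The thresholds and function names are invented for illustration, not tied to any real hardware.

```python
# One decision cycle for a "plantimal" pot: sense, pick the most urgent
# need, act. All interfaces here are hypothetical placeholders.

DRY_THRESHOLD = 0.25    # soil moisture fraction; below this, the plant "wants" water
DIM_THRESHOLD = 200.0   # lux; below this, the plant "wants" more sun

def control_step(read_soil_moisture, read_light, drive):
    """Map plant/soil signals to a pot motion command."""
    moisture = read_soil_moisture()
    light = read_light()
    if moisture < DRY_THRESHOLD:
        drive("toward_water")
    elif light < DIM_THRESHOLD:
        drive("out_of_shade")
    else:
        drive("stay")  # a content plant has no need to move

if __name__ == "__main__":
    # Fake sensors and actuator standing in for a dry, shaded plant.
    control_step(lambda: 0.1, lambda: 150.0, lambda cmd: print("Pot action:", cmd))
```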

It's of course not that easy, but not entirely impossible either (I actually know how to do the first and third steps above 😊). Wouldn't it be fun to watch plants make their own decisions, learn to control themselves, compete and cooperate with each other, and have more control over their own fate? Would you like a plant more if it came to you for a treat once in a while? Would your garden be more beautiful if its layout changed every minute, all decided by the plants themselves?

Now, if only someone could help me figure out what these plants want…

Getting Tossed Around on the Moon?

Well, I still haven't gotten over the throwing and catching idea…

Have you seen the astronaut bunny-hopping videos from the moon? Gravity there is a lot weaker (0.166 g), so things can fly higher with less effort. There is no air either, so objects would fly more predictably and be easier to catch.

Let's say we have a network of robotic catching and tossing stations on the moon; objects would just hop up and down to get from place to place. By objects, I mean equipment, construction materials, and astronauts.
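
Some back-of-the-envelope ballistics for such a hop, assuming a vacuum, flat ground, and a made-up 500 m spacing between stations:

```python
import math

# Ballistic "hop" between two lunar stations, vacuum and flat ground assumed.
# The 500 m spacing is a made-up number for illustration.

G_MOON = 1.62             # m/s^2, lunar surface gravity (~0.166 g)
RANGE = 500.0             # m, hypothetical distance between stations
ANGLE = math.radians(45)  # launch angle that minimizes launch speed

# Projectile range R = v^2 * sin(2*angle) / g  =>  v = sqrt(R * g / sin(2*angle))
v_launch = math.sqrt(RANGE * G_MOON / math.sin(2 * ANGLE))
t_flight = 2 * v_launch * math.sin(ANGLE) / G_MOON
h_peak = (v_launch * math.sin(ANGLE)) ** 2 / (2 * G_MOON)

print(f"Launch speed: {v_launch:.1f} m/s")  # ~28.5 m/s
print(f"Flight time:  {t_flight:.1f} s")    # ~24.8 s
print(f"Peak height:  {h_peak:.1f} m")      # ~125 m
```

At these speeds (about 28 m/s, or roughly 100 km/h), the catching part clearly deserves as much attention as the tossing part.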

Astronauts? Yes, especially space tourists. Wouldn't it be fun to be thrown off a cliff and get (gently) caught by a robot later?

Making Robotics a Popular Hobby?

Photography, electronics, ham radio, model airplanes, 3D printing, astronomy… there are many science- and engineering-related hobbies. Most of these hobbies have their own dedicated forums, magazines, trade shows, and competitions. Behind each of them is an ecosystem of companies, large and small, making general or specialized equipment and software.

Robotics as a hobby? It's starting to become a thing, but it's not yet comparable to the established ones. There are several organized robotics events, like FIRST, involving tens of thousands of kids. But a hobby is something more personal and spontaneous… Where are the robotics hobbyist forums? Magazines? Organizations? Trade shows? Competitions? To be fair, some of them are popping up, but their reach has been limited.

What makes a hobby popular? In my opinion, based on experiences with my two hobbies, amateur astronomy and photography, several factors are important. First, it needs to be intriguing. Second, it needs a low entry barrier: e.g., a kid with no one around to help can get started and accomplish something, enough to sustain the interest. Third, it needs to have no upper bound in terms of what can be achieved, e.g., room for fiddling, imagination, and creativity. Anyone in a hobby knows that it's an endless pursuit of complexity and perfection. After all, a hobby is a form of obsession. Finally, the connections between the easy (entry) parts and the hard (advanced) parts need to be there.

I think robotics is intriguing to enough people. It has room for people to show their talent and creativity in limitless ways. Getting a low-end robot kit is also not much more expensive than a low-end camera/radio/telescope, though maybe a bit harder to use at first. So what is the problem? Maybe it's in the connections. What would you do once you are ready to move up from your Lego Mindstorms? Do you have to discard the whole kit and start over with a new system? Is the knowledge you gained with a Lego robot transferable to a robot based on a Raspberry Pi? What if you want to add a soft manipulator, mapping, and natural language processing to your hobby robot? There are open-source ways to do all of that, but they are not obvious to most people.

Of course, a hobby does not have to be easy, and we don't want it to be easy, either. Most hobbies involve hard problems. For example, few amateur astronomers know how to grind a mirror (although that was how astronomy became a popular hobby during the Great Depression), and even fewer know how to make a lens. Yet, by imaging near telescopes' diffraction limits and discovering exoplanets, amateur astronomers are making great contributions to both engineering and science.

Amateur roboticists can be this successful too. Of course, the growth of a hobby is an emergent behavior, depending on many factors such as people's influence on each other. Still, I have a few ideas that may help improve the connections between "entry-level" and "advanced" activities. First, standardizing key robot components (e.g., hardware and software interfaces) so that multiple companies and amateurs can contribute to their development. Second, making ROS (Robot Operating System), a middleware platform already successful among researchers, accessible to high-school students and hobbyists through easier interfaces and readable documentation, with demos on common platforms (e.g., robots developed for FIRST and VEX competitions). Third, leveraging 3D simulation capabilities and people's interest in gaming to develop open-source robot simulations for the hobbyist community.
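
To give a taste of today's entry barrier, here is roughly the canonical "hello world" of ROS 1 (the classic talker node from the ROS tutorials). Note that even this minimal example assumes a full ROS installation and a running roscore, which is itself part of the barrier.

```python
#!/usr/bin/env python
# The classic ROS 1 "talker": publish a string message once per second.

import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        msg = "hello from a hobby robot at %s" % rospy.get_time()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```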

Once robotics becomes a popular hobby, companies would make more money, hobbyists would have more toys, researchers would have more helpers, the acceptance of robotics would improve, and the field of robotics would advance faster!

P.S. A key difference between professionals and hobbyists: professionals get paychecks to do certain things; hobbyists spend their paychecks to do the same things. You can probably tell that the hobbyists are often more motivated…

The Future of Remote Work? Maybe Humanoid Telepresence Robots Can Help

Tired of being stuck at home, working alone remotely? You are not alone in that sense! Since COVID-19 turned the world upside down, many of us have been forced to work from home. After a while, once we got used to it, working from home, or working remotely, is actually not all that bad. We spend less time in traffic and sometimes enjoy more flexibility. But important things are missing, like face-to-face discussions and the ability to modify the environment (many jobs depend on these!). In short, we are missing out on the social and physical interactions.

Will remote work be the same as what we are doing today (e.g., meeting on Zoom), say, 20 years from now? Hopefully not…, OF COURSE NOT! So what may change?

Let's envision a future hybrid workplace: for example, an office with local workers and a group of humanoid telepresence robots serving as "avatars" of remote workers. Whenever a remote worker needs to do something beyond a computer task, such as helping a customer or turning a knob, she/he may do so through one of the robots. The humanoids have articulated arms and bodies to support human-like interactions, e.g., during a "face-to-face" conversation. When an office worker or a customer puts on a pair of Augmented Reality glasses, a live image of the remote person would be overlaid on the robot. At home, the workers would also feel they are physically experiencing the remote work environment, instead of feeling isolated.

My student Trevor Smith created this illustration in Gazebo using images of a VR treadmill and the robot Pepper.
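
As a thought experiment, the core control loop behind such an avatar might look something like the toy sketch below. Every type and function here is invented for illustration; a real system would involve middleware such as ROS, latency compensation, and careful safety limits.

```python
from dataclasses import dataclass

# Toy motion retargeting for a telepresence avatar: stream the remote
# worker's tracked motion to the humanoid, which mirrors it. All fields
# and names here are hypothetical.

@dataclass
class OperatorState:          # what the VR rig at home measures
    head_yaw: float           # radians
    right_hand_xyz: tuple     # meters, in the operator's frame

@dataclass
class RobotCommand:           # what gets sent to the humanoid
    neck_yaw: float
    arm_target_xyz: tuple

def mirror(op: OperatorState, scale: float = 1.0) -> RobotCommand:
    """Map operator motion to robot motion (the simplest possible retarget)."""
    return RobotCommand(
        neck_yaw=op.head_yaw,
        arm_target_xyz=tuple(scale * c for c in op.right_hand_xyz),
    )

if __name__ == "__main__":
    state = OperatorState(head_yaw=0.2, right_hand_xyz=(0.4, -0.1, 1.1))
    print(mirror(state))
```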

Of course, a lot of research needs to be done for this dream to become a reality, but that's what we roboticists are here for. Communication technology has allowed us to hear from a distance, then to see each other; maybe this time we will finally get to "travel, touch, feel, and experience" through the internet and robots? It sounds farfetched, but it's not impossible.

A 2019 MIT report on the Work of the Future pointed out that “Ironically, digitalization has had the smallest impact on the tasks of workers in low-paid manual and service jobs. Those positions demand physical dexterity, visual recognition, face-to-face communications, and situational adaptability. Such abilities remain largely out of reach of current hardware and software but are readily accomplished by adults with moderate levels of education.” By focusing on labor-complementing instead of labor-substituting technology development, improving remote work may be a way of using robotics and AI to support middle-class workers (e.g., teachers, social workers, farmers, and factory workers) of the future.

Shake Your Camera to Take Sharper Photos

Computational photography is changing the way photos are taken. More and more cell phones are using computation to offset the small lenses they can accommodate, and the progress has been amazing. What I am still waiting for is a way for shaky cameras to take sharper photos. Arguably, a shaky camera can provide more information about a scene than a steady camera. It seems like our brain-eye (and inertial?) system can process such input, giving us a stable (and sharp!) perception of the world while moving. Since most phone users are not so good at holding their cameras steady, why not take advantage of the shaking? Even better would be allowing a shaky telescope to provide a sharper view of the planets! Has this been done before? Can someone point me to a product or a paper using this approach?
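
In case it helps the discussion, here is a toy "shift-and-add" sketch of the basic idea: register several shaky, noisy frames of the same scene and average them. This only handles whole-pixel shifts on synthetic data; exploiting sub-pixel motion for true super-resolution is much subtler.

```python
import numpy as np

# Toy shift-and-add: align shaky frames via phase correlation, then average
# to cut noise. Synthetic data, whole-pixel shifts only.

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) translation of `frame` relative to `ref`
    from the peak of their phase correlation."""
    cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:   # wrap large shifts into negative offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def shift_and_add(frames):
    """Align every frame to the first one, then average to reduce noise."""
    ref = frames[0].astype(float)
    stack = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        stack.append(np.roll(f.astype(float), (-dy, -dx), axis=(0, 1)))
    return np.mean(stack, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))
    shifts = [tuple(rng.integers(-3, 4, size=2)) for _ in range(8)]
    # Simulate eight shaky, noisy exposures of the same scene.
    frames = [np.roll(scene, s, axis=(0, 1)) + 0.2 * rng.standard_normal((64, 64))
              for s in shifts]
    stacked = shift_and_add(frames)
    truth = np.roll(scene, shifts[0], axis=(0, 1))  # everything is aligned to frame 0
    print("single-frame noise:", (frames[0] - truth).std())  # ~0.2
    print("stacked noise:     ", (stacked - truth).std())    # ~0.2 / sqrt(8)
```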