Building Precision Pollination Robots

We have been working on the topic of robotic precision pollination for a few years now and will continue down this path for the foreseeable future. By “precision,” we mean treating crops as individual plants, recognizing their individual differences and needs, much as we would when interacting with people. If our robots can touch and precisely maneuver small, delicate flowers, they could be used to take care of plants in many different ways.

But why would anyone want to take over bees’ pollination job with robots? No, we don’t want to, and I would much rather see bees flying in and out of flowers. What we would like to have is a plan B in case there are not enough bees or other insects to support our food production. With a growing human population, the ongoing rate of bee colony loss, and climate change, this could become a real threat. We also want to be able to pollinate flowers in places where bees either do not like to be or cannot survive, such as confined indoor spaces (e.g., greenhouses, growth chambers, vertical agriculture settings, or even on a different planet).

In our previous project, we designed BrambleBee to pollinate bramble (i.e., blackberry and raspberry) flowers. BrambleBee looks like a jumbo-sized bumblebee with a big arm, but it cannot fly. We did not want to mimic bees’ flying ability; instead, we learned from bees’ micro-hair structures and motions (thanks to our entomology team led by Dr. Yong-Lak Park) and used a custom-designed robotic hand to brush the flowers for precision pollen transfer.

BrambleBee served as a proof of concept, and it was fun to watch it work, but many challenges remain. For example, each flower is unique, and there are many complex situations for a robot pollinator to handle (e.g., tightly clustered flowers, occlusion, deformable objects, plant motion, etc.). How to convert an experimental robot system into an effective agricultural machine that growers will accept is another major challenge. These are the research topics we will tackle with our next robot, StickBug.

Wait…, I should say “robots,” because StickBug is not a single robot. It will be a multi-robot system with four agents (one mobile base and three two-armed robots moving on a vertical lift).

We have a talented, motivated, and diverse team that includes horticulturists (Dr. Nicole Waterland and her students), human-systems experts (Dr. Boyi Hu and his students from the University of Florida), and roboticists (Dr. Jason Gross and I, along with undergraduate and graduate students from #WVURobotics). This project will be open source, starting with sharing our proposal. If you have any suggestions on our approach or are interested in collaborating on the project, please feel free to contact us.

Attachment: NRI 2021 StickBug Proposal

Program: 2021 National Robotics Initiative (NRI) 3.0

NSF panel recommendation: Highly Competitive

Funding Agency: USDA/NIFA

Problem Solving: Getting Stuck?

Are you currently stuck on a hard problem? I am. I’m actually stuck on several problems, which I think is a good thing. Let me explain why.

Let’s first look at the relationship between people and problems. I can think of four possibilities:

  1. One person vs. one problem. Assuming this problem is solvable but very hard (i.e., there are only a few possible ways of solving it), what would be the chance that this one person finds a solution? It’s possible, but it would take a lot of (relevant) skill, luck, and persistence. If a person is stuck on a problem, does this mean he or she is not a good problem solver? No; maybe just not a good fit, or not lucky enough, or did not spend enough time on it (e.g., the time was consumed by self-doubt instead). Also, some problems may not be a good fit for any one person; they would take a team with diverse backgrounds to solve.
  2. Multiple people vs. one problem. Things are a bit more promising here. Maybe someone will come up with something, another person can build on it, and so on. This is partially collective intelligence and partially just the tried-and-true brute-force method of increasing the number of trials. For example, if you invite enough people to a party, someone will bring a gift you actually wanted.
  3. One person vs. multiple problems. From the perspective of increasing trials, this is similar to the one above, without having to bother other people! If you keep several problems in (the back of) your mind, you may not be able to solve most of them most of the time, but you may get lucky occasionally on one of them. I often work in this mode, but I am aware of my limitations.
  4. Multiple people vs. multiple problems. I don’t fully understand this one yet, but to a limited degree this is how IRL operates.
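The brute-force side of this argument can be put in numbers. Here is a toy sketch (with a made-up success probability, purely for illustration): if each independent attempt at a hard problem succeeds with probability p, the chance that at least one of n attempts succeeds grows quickly with n.

```python
# Toy model of "increasing the number of trials": each independent
# attempt solves the problem with probability p, so the chance that at
# least one of n attempts succeeds is 1 - (1 - p)**n.
# The value p = 0.01 below is invented purely for illustration.

def chance_solved(p: float, n: int) -> float:
    """Probability that at least one of n independent trials succeeds."""
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100):
    print(f"{n:3d} attempts -> {chance_solved(0.01, n):.3f}")
```

Even with a 1% chance per attempt, a hundred independent attempts succeed more often than not, which is roughly the party-gift argument above.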

Let’s now think about the problem of robot problem solving. Robots today are generally not good problem solvers, but we are trying to improve that.

  1. One robot vs. one problem. If the problem is what the robot was programmed for, it would work extremely well. Arguably this is not robot problem solving, though, because the problem has already been solved. If the problem is not what the robot was prepared for, it would most likely get stuck. Yes, robots do get stuck quite often.
  2. Multiple robots vs. one problem. If each robot here makes its own decisions, this may become a robot swarm. A swarm is known to be able to solve problems not explicitly planned for, but we don’t fully understand it yet.
  3. One robot vs. multiple problems. If we stop optimizing around one particular cost/reward function and instead celebrate every meaningful thing a robot does, we may find novel ways for a robot to solve many different problems. Those problems and solutions are mostly irrelevant, but could become useful at some point. There is much to be done to harvest these randomly gathered experiences.
  4. Multiple robots vs. multiple problems: If many robots operate like #3 above, I really have no clue what the implications would be. It would be fun to try, though.

Now back to us. You and I are particles in a giant Monte Carlo experiment called human society; that’s how I see it, at least. From a global perspective, it’s more important that someone solves society’s big problems than who solves them. If the system is well designed (that’s a big if, and itself one of the hardest problems), creative ideas would emerge one after another. This is the hope behind the multiple people vs. multiple problems argument. Individually, we don’t have control over when a good idea may come. But we can choose not to be totally stuck, and there may be ways to do that.

Turning Plants into Animals? Plantimals?

Do plants have feelings? Do they have desires? Do they have friends?

They probably do, but we cannot tell easily, because plants are quiet and don’t travel very often (other than through their offspring). There is little the plants can do because they are “landlocked” …

Can we free the plants? What if we give plants mobility so they can go wherever they want to, like animals?

Let’s say when a plant needs water, it goes to get water; when it needs sunlight, it moves out of the shade; when it needs pollinators, it gets close to bee hives; and when it needs friends, it goes to hang out with other plants…

It’s conceptually quite simple: first, we put motors and wheels on the plant pots; second, we plug sensors into the plants and the soil to detect what the plants need/want; finally, we connect the collected signals to the motors to control the pots. Ta da!
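Just for fun, the three steps above can be sketched in a few lines of hypothetical code. The sensor readings, thresholds, and movement goals here are all made-up stand-ins; a real pot would need actual soil/light sensors and motor drivers.

```python
# A sketch of the sense -> decide -> actuate loop for a mobile plant pot.
# All numbers and goal names below are hypothetical placeholders.

def decide(soil_moisture: float, sunlight: float) -> str:
    """Map what the plant needs/wants to a movement goal for the pot."""
    if soil_moisture < 0.2:   # too dry: go find water
        return "go_to_water"
    if sunlight < 0.3:        # too dark: move out of the shade
        return "move_to_sun"
    return "stay_put"         # content: hang out where it is

# Simulated sensor readings (moisture, sunlight), each in [0, 1].
for moisture, light in [(0.1, 0.8), (0.5, 0.1), (0.6, 0.7)]:
    goal = decide(moisture, light)
    print(goal)  # a real system would command the pot's motors here
```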

It’s of course not that easy, but not entirely impossible either (I actually know how to do the first and third steps above 😊). Wouldn’t it be fun to watch plants make their own decisions, learn to control themselves, compete and cooperate with each other, and have more control over their own fate? Would you like a plant more if it came to you for a treat once in a while? Would your garden be more beautiful if its layout changed every minute, all decided by the plants themselves?

Now, if only someone could help me figure out what these plants want…

Getting Tossed Around on the Moon?

Well, I still haven’t gotten over the throwing and catching idea…

Have you seen the astronaut bunny-hopping videos from the Moon? Gravity is a lot weaker there (0.166 g), so things can fly higher with less effort. There is no air there either, so objects fly more predictably and are easier to catch.
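A quick back-of-the-envelope check of those numbers (plain ballistic formulas; the 3 m/s launch speed is just an illustrative guess): with no air, a hop launched at vertical speed v peaks at height v²/(2g), so at 0.166 g the same effort carries you about six times higher.

```python
# Ballistic hop height h = v^2 / (2*g) for the same vertical launch speed
# on Earth vs. the Moon. With no atmosphere, the trajectory is exactly
# parabolic, which is what makes catching easier to predict.

G_EARTH = 9.81            # m/s^2
G_MOON = 0.166 * G_EARTH  # ~1.63 m/s^2, per the 0.166 g figure above

def peak_height(v: float, g: float) -> float:
    """Peak height (m) of a vertical hop launched at speed v (m/s)."""
    return v * v / (2.0 * g)

v = 3.0  # m/s, a modest vertical jump (illustrative number)
print(f"Earth: {peak_height(v, G_EARTH):.2f} m")
print(f"Moon:  {peak_height(v, G_MOON):.2f} m")  # ~6x higher
```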

Let’s say we have a network of robotic catching and tossing stations on the Moon; objects would just hop up and down to get places. By objects, I mean equipment, construction materials, and astronauts.

Astronauts? Yes, especially space tourists. Wouldn’t it be fun to be thrown off a cliff, and then get (gently) caught by a robot?

A Swarm of One Robot?

Lately I’ve been looking at things a bit differently. Instead of seeing the big picture, I started to notice the trees in a forest…

Take this happy squirrel as an example: many of us see it as an autonomous, smart, and cute creature, capable of doing things we hope our robots could do. If we were to copy the design of a squirrel and build a squirrel-inspired robot, we would put together a design with similar biomechanics and program it to mimic the observed squirrel behaviors. If this didn’t work, we might blame it on not having good/similar materials or actuators, or not having good enough models of a squirrel’s body and mind.

This is a top-down way of copying nature’s designs, but we can look from the opposite direction as well. A squirrel is made of billions of cells, and the interaction of these cells shapes almost everything a squirrel does, theoretically speaking. Maybe a squirrel is not a good example here, as it’s too complicated for us to understand its behaviors from a bottom-up perspective. If we look at plants instead, a quick YouTube search would lead to many cool time-lapse videos of growing vines and roots, and it would be hard to argue that these are not intelligent creatures making their own decisions. These high-level behaviors emerge from the local interactions of numerous cells. Trying to copy the behaviors without going through the bottom-up process (i.e., swarm #1) may not always be fruitful. I think people making animated movies understood this a long time ago. It’s like drawing clouds, snowflakes, and shadows by the hands of artists vs. using a physics engine.

Even if we don’t look at a squirrel at the cellular level, we can still see one as multiples. Have you watched the movie Groundhog Day? In the movie, every day is the same day…, for everyone else, but not for the main character, Phil. For a squirrel, every day is a new day…, but more or less like the previous days. A squirrel could try different strategies each day, or copy a previous one and make adjustments based on lessons learned. In that sense, a month for a squirrel may be considered a swarm (#2) of 30 interacting squirrels, each with one day of experience.

Now, at a particular time, when a squirrel needs to make a particular decision (e.g., where to go foraging?), could the decision also be made by a swarm instead of relying on a single decision-maker in the squirrel’s mind? It’s possible. There may be many rivaling thoughts (swarm #3, with some thoughts maybe coming from swarm #2), and the one that wins out at a given moment would take control of the squirrel’s body. At a different time, another opinion may prevail. This is like several drivers fighting for control of a bus, or political parties competing for influence over a country. Chaotic, maybe, but not without its merit.

So, can we make a swarm of one robot? I think so. We may actually be able to make three (or more) swarms out of one robot!

Squirrel drawing credit: Shutterstock.com 228696961

Venture into the Unknowns – My 2021 Research Statement

I decided a while ago to rewrite my research statement every couple of years to evaluate/refocus my directions, but it took until now to complete one…

I am a person with many interests. Lacking focus and depth, you may say, and the evidence is quite clear. I spent many years designing avionics, UAVs, robots, and telescopes; I worked on flight controls, sensor fusion, fault tolerance, and decision-making; I often wondered about possible futures on Earth and dreamed up mission concepts for exploring other planets and moons; I am also interested in humans’ interactions with robots, from safe co-existence and collaboration to influencing robots’ decisions and behaviors. When I started my lab in 2012, I called it the “Interactive Robotics Laboratory,” a catch-all name to cover the things I might want to explore.

With this trait and experience, I found myself a natural at envisioning new robot systems, which earned me a small reputation. I have gone from being introduced by others as the guy “who worked on formation flight,” to the person “that won a NASA challenge,” to someone “who is trying to make a robot pollinator.”

I have asked myself numerous times: if I were to make one small difference in the world, what would it be? The answer has converged slowly over the years. I want to find innovative robotic solutions to some of the world’s big challenges (e.g., hunger, inequality, access to education, and exploring the unknowns). Every time I have tried, there always seems to be one puzzle piece missing: making robots resilient in the real world. That is, making robots actually work in new environments and under dynamic situations (e.g., around people), instead of just producing cool demos on YouTube. I think something important is lacking in the field of robotics.

I call this “soft autonomy,” a term we made up and started using in 2014 to contrast with the rigid decision rules governing Cataglyphis, our most prized robot (pun intended). As a “probabilistic robot,” Cataglyphis was built to handle a variety of uncertainties (e.g., in localization, object recognition, terrain, and manipulation). Despite its success, Cataglyphis is a robot that runs on its designers’/programmers’ predictions of what a situation may become. Deviations from these predictions, or any surprises, often leave it confused.

Cataglyphis during the 2016 NASA Sample Return Robot Challenge. Photo Credit: (NASA/Joel Kowsky)

Most roboticists working on the topic of “autonomy” today make a living dealing with known unknowns, and so do I. This means we spend a significant amount of time and effort identifying, modeling, and propagating the uncertainties in a problem as probability distributions (or beliefs), and try to make decisions that factor in these uncertainties. Many problems have been solved this way, and we have tried (or are trying) a few. These include attitude estimation for UAVs (not for humans…), terrain-aware navigation on Mars, active perception for flower pollination, and cooperative localization of space/underwater vehicles, among others. As we contentedly solve one problem of this type after another (e.g., getting publishable results and YouTube demos), we knowingly ignore a bigger problem. Every time we craft detailed models for a source of uncertainty, the solution becomes more specialized and less flexible. Whenever the uncertainty assumptions are violated (call it the uncertainty of uncertainty?), there is no mechanism available for the robot to make productive decisions. In other words, today’s robots are overly confident, but not truly autonomous.
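As a concrete (toy) illustration of this known-unknowns workflow, here is the kind of belief bookkeeping such robots do all day: a Bayes update of the belief that a flower is in view, given repeated noisy detections. The sensor-model probabilities are invented for illustration; the point is that the whole scheme rests on those numbers being right.

```python
# A toy belief update in the "known unknowns" style: the robot maintains
# P(flower in view) and fuses in noisy detector readings via Bayes' rule.
# The sensor model numbers below are made up; if the real sensor violates
# them, the belief converges confidently to the wrong answer -- the
# overconfidence problem described above.

def bayes_update(prior: float, p_det_if_flower: float,
                 p_det_if_none: float) -> float:
    """Posterior P(flower | positive detection)."""
    evidence = p_det_if_flower * prior + p_det_if_none * (1.0 - prior)
    return p_det_if_flower * prior / evidence

belief = 0.5                   # prior: no idea whether a flower is there
for _ in range(3):             # three consecutive positive detections
    belief = bayes_update(belief, p_det_if_flower=0.9, p_det_if_none=0.2)
print(f"{belief:.3f}")         # belief climbs toward certainty
```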

I can think of many “simple” creatures that are autonomous: earthworms, ants, bees, trees, and the list goes on. Intelligent? Maybe not so much, especially when they are alone; autonomous? Yes! So, what are some of the differences between our robots and, say, a worm? A robot has sensors and actuators, while a worm has a lot more of them. A robot has powerful computers; the worm? Maybe not (e.g., Caenorhabditis elegans, a simple kind of worm, has only 302 neurons and about 7,500 synapses), but it has a soft body that directly interacts with the environment for morphological computing. For a robot, we often design the hardware and then write the software; for a worm, there is probably no distinction between the two (and everything is soft!). A robot came mostly from the top-down decisions of its designers/programmers, while the worm has gone through a fierce bottom-up competition process for as long as we humans have.

It seems that a way (and there may be more than one) toward achieving “autonomy” has not yet been found. In addition to working on perception and decision-making under uncertainty (the known unknowns), we are exploring two directions for dealing with the unknown unknowns: bottom-up designs and problem solving (e.g., swarm intelligence), and decision-making under the uncertainty of uncertainty. Sounds like spreading thin again? Yes; but hopefully not without a direction.

A Story of Unknown Unknowns

“…there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” – Donald Rumsfeld

If you work with robots, you know that there are lots of unknowns. Most roboticists, including me, make a living solving known-unknown problems. However, robots still get stuck from time to time. It actually doesn’t take much to surprise a robot. The trouble is, it’s hard to say what the unknown unknowns are, and even harder to give examples. By definition, if we can describe it, it’s not an unknown unknown… Helping robots deal with surprises is what I like/want/hope to do (can’t depend on it for funding, though). I need to find examples to tell people what I am working on…

Here is a personal travel story from over a year ago. My wife and I were taking the kids to China to see their grandparents. Many unexpected things happened along the way. I will let you be the judge of which were the known unknowns and which were the unknown unknowns.

Day One, Dec 24, 2019, Christmas Eve, Morgantown, Pittsburgh, Houston

This was the day to fly to China. The flight was scheduled to leave at 4:24 pm. We left home at 12:59, and the weather was perfect: sunny, 50-plus degrees, with very little wind. As we drove past Washington, PA, the air became foggy. It was almost like a dust storm coming at us, but it was fog. An alert appeared on the phone saying our flight to Houston (connecting to Beijing) was canceled. We called United Airlines (famous for dragging an old man off a plane) while still driving toward the airport. The first lady (not the First Lady) from United was not very helpful, and it was a little difficult to understand her at times. She said the reason for the cancellation was “severe weather conditions,” which sounded bogus to us at the time. She was not able to find any alternative (e.g., all other flights were full) and offered to refund the tickets. I told her that was not an option we would consider and asked to speak to her manager. The manager lady had many more options. She offered to check other airlines for solutions and suggested we could go through other cities such as Singapore or Hong Kong. That sounded a bit more exciting. We pulled over at the airport entrance, waiting for her to find something for us. Unfortunately, the phone cut off about 35 minutes into the call, and we didn’t have a way to reach the manager lady again…

We went ahead, parked in the short-term lot, and went to the United counter. The screen was now showing several canceled flights, including two flights to Houston. While waiting in the long line, I dialed the United number again, hoping to connect back to the manager lady. This time it was a guy, who was quite helpful. I also gave him my number, just in case. He helped us find a flight to Houston at 9 pm, which would give us just about a 1-hour layover in Houston. It sounded like a feasible option, and the best chance we had, so I took the suggestion. The kids and I went to move the car from short-term to extended parking. I found out for the first time that you can drive to extended from short-term without having to pay for the latter… Looks like a loophole in the system. The fog was getting denser by then, and visibility was ~100 m. It became clear that the weather posed a risk to flights. The road ahead of us would be full of unknowns.

Passing through airport security was uneventful. We had dinner and started to wait at the gate. Many more flights were canceled. The fog outside looked very dense. Our airplane was supposed to come from San Francisco, but it had to land in Chicago first to wait for an opportunity to come over to Pittsburgh. The plane kept getting delayed (e.g., the phone showed that it taxied for 30-40 minutes in Chicago), and the passengers waiting at the gate became more and more worried. There were about 15 Chinese passengers hoping to get on the same Houston flight to Beijing. As the window of opportunity got narrower, people started to discuss the distance between gates C11 and D12 in Houston, how fast they could run, who could run the fastest, whether the airplane would wait for so many passengers, etc. I went to talk to the lady at the counter; she searched around and offered me a Plan B: if we missed the flight in Houston, we would automatically be put on the next flight to San Francisco and connect there to Beijing. I never knew they could do that (i.e., a prebooked contingency plan) and happily accepted the offer. Of course, there was another flight to San Francisco from Pittsburgh at 9 pm (after many hours of delay as well) that we could get on, but I wanted to try our luck in Houston first.

The plane for the San Francisco flight came from Houston (sounds confusing, right?) and was supposed to land at 8:22, and it did. People cheered. The United staff were also happy, knowing that a plane could safely land in that kind of condition. I got a brief moment to chat with the pilots of the San Francisco flight. Apparently, flying in that kind of weather requires airplanes with special equipment and specially trained pilots. They had to use the autopilot during landing, as there was almost no visibility for the pilots to do anything meaningful. Taking off was not so much of an issue.

Our plane (from Chicago) was not so lucky. It was first predicted to land at 8:23 pm but kept getting delayed. Eventually it landed at 8:56, and the scheduled takeoff time was pushed from 9:00 to 9:46. This would leave about 19 minutes in Houston to catch the Beijing flight. The Chinese passengers got into more vivid discussions of the possible options, but there was still a slim slice of hope. A plan was formulated: everyone needed to call Air China to hold their plane on the ground a little longer, everyone needed to tell the flight attendants to let people under time pressure exit the plane first, the first (fastest) person to get to the gate would tell the airline to keep the door open a little longer, and so on…

Boarding was relatively fast. People patiently waited for the door to close. After the lights flashed a few times, the pilot announced that there was a maintenance-related issue. A smoke alarm needed to be reset but could not be reset through software; someone had to physically get down under the plane to check it out. Some passengers started to get impatient. The procedure took 20-30 minutes, wearing out the remaining hope of catching the flight in Houston. Finally, the airplane started moving, but instead of heading for the runway, it was asked to go through a de-icing procedure. That sealed the deal. By this time, nobody believed we could get to Houston on time. It actually felt more relaxing this way. We wouldn’t need to run, and we had a backup plan in our pocket…

It was about 12:30 am when we landed in Houston. The flight to China was long gone (12:04 am). We started to line up at the counter. The lady at United was already prepared (!). She had our boarding passes to San Francisco waiting for us. The boarding passes from San Francisco to Beijing were a different story. She could only print one (my son Anderson’s), and we would need to get the remaining tickets at the gate in San Francisco.

Day Two, Dec 25, Christmas Day, Houston, San Francisco

We had about 4 hours to spend in the San Francisco airport, but the first thing we wanted to do was get our boarding passes. First, we went to the gate where another flight to China was boarding. However, the people there told us that they worked for United and only people from Air China could help us with the tickets. With nobody from Air China to be found in the airport, I called their number. The guy on the phone was 1. not very patient, 2. claimed this had to be addressed by United since they reserved the tickets, and 3. offered to refund the tickets… He also told me that our names were in the system but there was no guarantee we would be allowed to board the plane. We had to walk to another terminal to find the United office. The lady there suggested we exit airport security and talk directly to Air China at ticketing to get the boarding passes (the reason being that we were required to show them the passports, and only the Air China people could issue the tickets). So we did that. The people at Air China were neither patient nor helpful. They told us that 1. the plane was full and there was no room for more people (meanwhile, the lady on the phone in the background offered tickets to four “important visitors” for the same flight…), 2. the United people had not done their job right when giving us the seats, and 3. it could only be addressed by United. While listening, I was making vacation plans for San Francisco in case we could not make it to China. Someone suggested a “black uncle” at the United counter as a person who might be willing to help. So we were in the United line and talking to the “black uncle” a few minutes later. Without much trouble, the nice guy replaced our tickets and told us that the Air China people would be waiting for us. We had to wait in the long Air China line, again, but finally we got our boarding passes.
BTW, through this process, I learned that airline tickets and boarding passes are two different things… I also asked the lady at the counter to double-check our checked baggage, and she confirmed it. Soon, we were on a Boeing 747 to Beijing.

Day Three, Dec 26, Beijing

This was a short day with only about 15-20 hours, most of them in the air. We were chasing the sun the whole time. At first the sun was faster, and it finally set behind the horizon. As we got closer to the North Pole, the airplane was able to gain some ground.

The kids were surprised that we had to go through Customs (they were a little tired). The line was not too long, and the process was smooth. Finding our luggage was not so easy, though. There were just a few bags left, and none of them was ours. Once again, we were at customer service, and they couldn’t locate the bags in the computer system. A big problem was that all our heavy winter jackets were in the checked bags, and it was freezing outside. Another problem was that we would be heading to Baotou (a very cold city in northern China) in a day, so it was not clear where the bags should be delivered. The nice lady there gave each of us a red blanket so we wouldn’t be too cold waiting for the taxi outside. That made the four of us look like Tibetan monks on the Beijing streets.

Day Four-Seventeen, Dec 27-Jan 09, 2020, Beijing, Baotou

Many interesting things happened but I am going to skip this part.

About the bags: they were eventually found in Houston and delivered to Baotou directly. Before that, we borrowed clothes from relatives. We then had to drag four large bags back from Baotou to Beijing on the train.

On Jan 7, there were a few seconds on the TV news about a new virus found in Wuhan that made some people sick.

Day Eighteen, Jan 10, Beijing

It was time to think about going home. I checked the tickets online, and only Anderson’s showed up. I was not surprised. I picked up the phone to call Air China and was told to “talk to United.” I talked to United and was assured that “everything is solved, just not showing up on the website.”

Day Nineteen, Jan 11, Beijing, Washington DC, Pittsburgh, Morgantown

It was going to be a long day of over 30 hours. We arrived at the Beijing airport in the morning. Still, no tickets could be found in the system… Expecting to have to pick a fight, I walked to the Air China customer service. A nice lady there was happy to help. She even walked out of her station to help us get the bags checked…

We got to DC with no trouble, then we jumped on the wrong airport shuttle bus. It was not until everyone else had gotten their bags that we found out ours were in a different part of the terminal… That wasted a few hours, but we just barely made the flight to Pittsburgh that night.

After Came Back Home

A day after we came back, my wife had to fly to Dallas to help her sister with the birth of twin babies (a known unknown). No one in the US was thinking much about the coronavirus during that first week. People heard about it in the news, but it seemed to be a distant thing happening in China. By the end of the second week, people (mostly Asians) started to be cautious. Things went progressively (exponentially) bad after that…

Some Random Thoughts

Unlike robots, we rarely get totally stuck in normal life, probably because we have accumulated a lot of different experiences growing up. But that doesn’t mean we can’t get stuck. Sometimes it just takes a couple of steps outside our normal routines to find such examples.

Some examples of us getting stuck are taking a math exam or trying to solve hard problems (e.g., when doing research). I also feel that we sometimes get stuck when facing a system built on procedures (you cannot move to step B if step A was not completed). Is this a problem with our decision-making process or with how the system was designed?

Some profound events that would have a huge impact on our life are happening somewhere (far away, close to us, or in plain sight) when we are busy worrying about other things…

 

Photos from the Mars 2020 Opposition

Here is a summary of my 2020 Mars imaging season.

After a 16-year break from astrophotography, I am back in the game again. Compared to the 2003 opposition, Mars has changed quite noticeably. There have been two global dust storms (2007, 2018) and many smaller storms that altered the planet’s albedo features. I tried to label some of the potential changes here (please click on the photo to see a higher-resolution version).

I also produced my first Mars map from my deck in Morgantown, WV. The region around the North Pole is still missing (just like Earth’s, Mars’s rotation axis is tilted). I will have to wait for another opposition to complete the map.

The weather on Mars was mostly clear, with hardly any clouds compared to 2003. This was great for map making, though. I did catch the beginning of a dust storm, which went on to spread over almost half of the globe in just a few days. Well, those were the few days that Mars turned its back on me, so I did not get to see it.

Sometimes, it’s hard to imagine that the orange-colored blob in the telescope is actually another planet; smaller, but not that much smaller, than our own planet Earth, and it’s changing over time. Maybe one day we will get to visit Mars so we can point a scope at Earth and see it as a blue-colored blob.

Making Robotics a Popular Hobby?

Photography, electronics, ham radio, model airplanes, 3D printing, astronomy…, there are many science- and engineering-related hobbies. Most of these hobbies have their own dedicated forums, magazines, trade shows, and competitions. Behind each of these hobbies, there is an ecosystem of companies, large and small, making general or specialized equipment and software.

Robotics as a hobby? It’s starting to be a thing now, but not yet comparable to the established ones. There are several organized robotics events, like FIRST, involving tens of thousands of kids. But a hobby is something more personal and spontaneous… Where are the robotics hobbyist forums? Magazines? Organizations? Trade shows? Competitions? To be fair, some of them are popping up, but their reach has been limited.

What makes a hobby popular? In my opinion, based on experiences with my two hobbies, amateur astronomy and photography, several factors are important. First, it needs to be intriguing. Second, it needs to have a low entry barrier: e.g., a kid with no one around to help can get started and accomplish something, enough to sustain the interest. Third, it needs to have no upper bound in terms of what can be achieved; e.g., room for fiddling, imagination, and creativity. Anyone in a hobby would know that it’s an endless endeavor toward complexity and perfection. After all, a hobby is a form of obsession. Finally, the connections between the easy (entry) parts and the hard (advanced) parts need to be there.

I think robotics is intriguing to enough people. It has room for people to show their talent and creativity in limitless ways. A low-end robot kit is also not much more expensive than a low-end camera/radio/telescope; maybe a bit harder to use at first. So what is the problem? Maybe it’s in the connections. What would you do once you are ready to move up from your Lego Mindstorms? Do you have to discard the whole kit and start over with a new system? Is the knowledge you gained with a Lego robot transferable to a robot based on a Raspberry Pi? What if you want to add a soft manipulator, mapping, or natural language processing to your hobby robot? There are open-source ways to do all of that, but they are not obvious to most people.

Of course, a hobby does not have to be easy, and we don’t want it to be easy either. Most hobbies involve hard problems. For example, few amateur astronomers know how to grind a mirror (although that was how astronomy became popular during the Great Depression), and even fewer know how to make a lens. Yet, from imaging near telescopes’ diffraction limits to discovering exoplanets, amateur astronomers are making great contributions to both engineering and science.

Amateur roboticists can be this successful too. Of course, the growth of a hobby is an emergent behavior, depending on many factors such as people’s influence on each other. Here, I have a few ideas that may help improve the connections between “entry-level” and “advanced” activities. First, standardize key robot components (e.g., hardware and software interfaces) so that multiple companies and amateurs can contribute to their development. Second, make ROS (Robot Operating System), a middleware platform already successful among researchers, accessible to high-school students and hobbyists through easier interfaces and readable documentation, and provide demos on common platforms (e.g., robots developed for FIRST and VEX competitions). Third, leverage 3D simulation capabilities and people’s interest in gaming to develop open-source robot simulations for the hobbyist community.

Once robotics becomes a popular hobby, companies would make more money, hobbyists would have more toys, researchers would have more helpers, the acceptance of robotics would improve, and the field of robotics would advance faster!

P.S. A key difference between professionals and hobbyists: professionals get paychecks to do certain things; hobbyists spend their paychecks to do the same things. You can probably tell that the hobbyists are often more motivated…

The Future of Remote Work? Maybe Humanoid Telepresence Robots Can Help

Tired of being stuck at home, working alone remotely? You are not alone in that sense! Since COVID-19 turned the world upside down, many of us have been forced to work from home. After a while, once we got used to it, working from home, or working remotely, turned out to be not all that bad. We can spend less time in traffic and sometimes enjoy more flexibility. But important things are missing, like face-to-face discussions and the ability to modify the environment (many jobs depend on these!). In short, we are missing out on the social and physical interactions.

Will remote work be the same as what we are doing today (e.g., meeting on Zoom), say, 20 years from now? Hopefully not…, OF COURSE NOT! So what may change?

Let’s envision a future hybrid workplace, for example, an office with local workers and a group of telepresence humanoid robots as “avatars” of remote workers. Whenever a remote worker needs to do something beyond a computer task, for example helping a customer or turning a knob, she/he may do so through one of the robots. The humanoids have articulated arms and bodies to support human-like interactions, e.g., during a “face-to-face” conversation. When an office worker or a customer puts on a pair of Augmented Reality glasses, the live image of the remote person would be overlaid on the robot. At home, the workers would also feel they are physically experiencing the remote work environment, instead of feeling isolated.

My student Trevor Smith created this illustration in Gazebo using images of a VR treadmill and the robot Pepper.

Of course, a lot of research needs to be done for this dream to become a reality, but that’s what we roboticists are here for. Communication technology has allowed us to hear from a distance, then to see each other; maybe this time we will finally get to “travel, touch, feel, and experience” through the internet and robots? Sounds far-fetched, but not impossible.

A 2019 MIT report on the Work of the Future pointed out that “Ironically, digitalization has had the smallest impact on the tasks of workers in low-paid manual and service jobs. Those positions demand physical dexterity, visual recognition, face-to-face communications, and situational adaptability. Such abilities remain largely out of reach of current hardware and software but are readily accomplished by adults with moderate levels of education.” By focusing on labor-complementing instead of labor-substituting technology development, improving remote work may be a way of using robotics and AI to support middle-class workers (e.g., teachers, social workers, farmers, and factory workers) of the future.