Blog

The AI Echo Chamber

This is only the beginning…

Imagine a time when billions of people use large language models (LLMs) such as ChatGPT to help write everything from novels and academic articles to emails and social media posts. Many of these pieces will end up circulating on the internet and, over time, be used to train new LLMs. What might the consequences be?

Most people would probably agree that this creates an echo chamber. The feedback loop risks amplifying specific viewpoints while sidelining others, diminishing the diversity of thought and expression. Over time, the distinction between original human thought and AI-generated content could blur, making it harder to trace the origins of information—and misinformation.

I think the greatest danger is that the change may happen on a vast scale without us detecting it. We are moving toward a parallel reality significantly shaped by AI, distinct from a reality in which these tools are absent. Such a shift would emerge on a societal scale, altering perceptions and biases without our noticing.

The questions then arise: Can we detect and measure these shifts in reality? Can we set up probes, observatories, and experiments to quantify the impact of LLMs on our collective intelligence?

I don’t have an answer to these questions, but I feel they might be important. Any suggestions?
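For what it’s worth, here is a toy sketch (in Python, with entirely made-up parameters) of the feedback loop itself: a “model,” here just a Gaussian, repeatedly refit on its own synthetic output. Under these simplified assumptions, the diversity (variance) of the generated content tends to drift and shrink over generations. Real LLM training is vastly more complex, but a probe might start from similarly simple measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 0.0, 1.0, 50   # "human" writing distribution; samples per generation

    for gen in range(20):
        data = rng.normal(mu, sigma, n)      # content generated by the current model
        mu, sigma = data.mean(), data.std()  # the next model is fit on that content
        print(f"generation {gen:2d}: mean={mu:+.3f}, diversity={sigma:.3f}")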

Disclosure: This blog post was co-edited with ChatGPT (GPT-4).

A 20-Year Project

August 27, 2003: the day when Mars reached its closest point to Earth in 60,000 years.

There was a lot of hype (and hoaxes) around this Mars opposition. I was also preparing for it by making a 12.5” F6 Newtonian. Still a Ph.D. student at the time, I got help from several friends to make this happen.

The Mars season was a success! With a modified webcam (I sourced a monochrome CCD chip from Italy to replace the color sensor in the webcam!), I captured some really good Mars images.

The success fueled a dream: a larger scope for planetary imaging. I started to plan a 16” F7.2 Newtonian on a tracking Dobsonian mount. To an engineering student, nothing seemed impossible at the time.

The design used a mixture of steel, aluminum, and carbon fiber. While the mirror was on order, I built a 6” finder scope first as a testing piece. It was completed in 2004:

By summer 2007, the main scope was taking shape, again with the help of friends, and it was quite impressive 😊

First light happened in March 2008, I believe (I have lost a few pictures from that time…), and I remember the view of the moon was incredible.

If you look carefully, you may see motors on the Dob mount in the picture above. Building the tracking system for the scope took a long time and was not successful. Around the same time, I entered the market for a tenure-track position and started realizing how inadequate my CV was. The work on the scope slowed, and then stopped.

Many things happened between 2008 and 2020… The scope spent most of this period in the darkness of a garage, as evidence of “I once had a dream.” I still had the dream, just not the time and energy to pursue it. In the meantime, I found a 16” F5.85 mirror to replace the F7.2 mirror and make the scope more practical.

In 2020, I got a call from Mars again and resumed my astrophotography journey after a 16-year break. With improved cameras and image processing tools, I was able to take better Mars images with an 11” SCT.

But what I really wanted was to complete the 16” scope. So, I started working on it again. I bought an equatorial platform to allow tracking, but was disappointed to discover the poor quality of my new F5.85 mirror.

After an 18-month wait, I now have another mirror, a 16” F5.25 made by Zambuto. I also modified the scope to be mounted on an equatorial mount. It looks great, but it was a bit too tall on a pier… A shorter pier (see the photo at the beginning) solved the problem, so I can take the scope down by myself at night.

My spaceship is finally ready to go, and I have been enjoying the ride since. Should I dare to dream bigger?

(Click on the photos to see larger sizes, and check out more photos in the Gallery.)

Problem-Solving in Robotics?

One of the primary obstacles impeding the widespread application of robots lies in their lack of problem-solving capabilities. Enabling robots to independently resolve unforeseen situations could facilitate the adoption of autonomous robots in a broad range of applications, such as agriculture, healthcare, exploration, and environmental protection.

As robot designers or programmers, we have a tendency to break down and solve problems for the robots from our own perspectives. When developing a robot, we often translate our understanding of a problem (e.g., through a ‘problem statement’) into a set of structured ‘decision making’ or ‘control’ algorithms, which are then programmed into the robot. As a result, the ‘autonomy’ of today’s robots is limited, and the resulting robot behaviors frequently turn out to be brittle and unnatural. A great challenge in robotics research is therefore to let the robots themselves play a bigger role in solving problems, so that when new problems arise, the robots are better equipped to address them.

I would like to propose the following research and development areas associated with problem solving in robotics:

  • Define problem solving in the robotics context. Unlike research on decision making, which has well-defined problem formulations such as POMDPs (Partially Observable Markov Decision Processes; see the sketch after this list), problem solving is currently not a well-recognized research topic in robotics.
  • Learning from nature. Most examples of problem solving come from nature, exhibited by people, insects, plants, and even individual cells interacting with their environments. The mechanisms that led to the problem-solving abilities of these creatures are not clear.
  • Swarm robotics. Another place where we may find examples of problem solving is the collective intelligence of natural swarms (e.g., transportation, foraging, construction) and robotic swarms. The emergence of sometimes unexpected global behaviors through local interaction rules can be interpreted as solutions to environmental challenges that no single agent in the swarm fully comprehends.
  • Case-based reasoning, imitation learning, and transfer learning. How could past experiences offer guidance for solving new problems?
  • Large Language Models (LLMs). Can a pretrained LLM provide “common sense” to a robot’s problem-solving mechanisms?
  • Benchmark problems. How do we evaluate the progress made? Can we identify benchmark problems that are experimentally simple to set up yet challenging to resolve?
  • Ethical Robotics and Safety. What are the implications when robots can solve their own problems?
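To make the first bullet concrete, here is a minimal sketch (in Python; the numbers and names are illustrative, not from any specific robot) of the one piece of a POMDP that is crisply defined: the Bayes belief update. The point is the contrast: decision making has clean formal objects like this, while “problem solving” in robotics has nothing comparable yet.

    import numpy as np

    def belief_update(b, a, o, T, Z):
        """b'(s') is proportional to Z(o|s',a) * sum_s T(s'|s,a) b(s)."""
        predicted = b @ T[a]              # predict the next-state distribution
        updated = predicted * Z[a][:, o]  # weight by the observation likelihood
        return updated / updated.sum()    # renormalize into a belief

    # Tiny two-state example (hypothetical numbers):
    T = {0: np.array([[0.9, 0.1],        # T[a][s, s']: transition probabilities
                      [0.1, 0.9]])}
    Z = {0: np.array([[0.85, 0.15],      # Z[a][s', o]: observation probabilities
                      [0.15, 0.85]])}
    b = np.array([0.5, 0.5])             # initial belief over the two states
    print(belief_update(b, a=0, o=0, T=T, Z=Z))  # -> [0.85, 0.15]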

Depending on whether the research is inspired by human-like or primitive, creature-level problem-solving abilities, there could be very different pathways toward problem solving in robotics.

AI Neuroscience and AI Psychology: A New Era in Understanding Artificial Intelligence

As humans, when we encounter concepts or phenomena that we don’t fully understand, we create new disciplines to study and comprehend them. Emergent behaviors, in particular, often baffle us – these are instances where the whole is drastically different from its individual building blocks. We might comprehend the blocks fairly well, but the complex constructs they form can be mystifying. Economics, as an example, is a field established to decipher the collective outcomes of numerous interacting individuals.

Today, we are witnessing similar emergent behaviors in the realm of artificial intelligence. Large language models like ChatGPT are demonstrating capabilities that we can’t entirely explain, other than acknowledging that they’ve emerged from tools we thought we knew well. As AI systems become more complex and demonstrate capabilities resembling cognition, we may need to extend our methods of understanding and investigating them. In this context, it could be beneficial to consider the creation of new fields: “AI Neuroscience” and “AI Psychology.”

AI Neuroscience

AI Neuroscience would be a discipline focused on understanding the ‘mechanics’ of AI – the intricate layers of artificial neural networks, how they interact, and how they produce the outputs that they do. This field would delve into the structure and interconnections of AI models, much like how neuroscience studies the brain’s physical structure and the neural networks within it.

AI neuroscientists would investigate the detailed workings of AI models, looking into how information flows through the network, how weights and biases change during learning, and how different architectures impact the model’s behavior. They would strive to map the AI’s “connectome” and understand how various components contribute to the overall functionality.

AI Psychology

On the other hand, AI Psychology would be more concerned with the ‘behavior’ of AI – its outputs, interactions, and ‘decisions.’ Rather than focusing on the AI’s internal structure, AI psychology would look at how AI perceives its inputs, responds to different scenarios, and changes its behavior over time.

AI psychologists would develop tests and experiments to probe AI behavior, much like how psychologists use various tests to study human cognition, personality, and behavior. They would analyze how AI systems learn over time, how they generalize from past experiences, and how they respond to novel situations.

Why do we need them?

Splitting AI research into these two fields could provide a more nuanced understanding of AI systems. AI Neuroscience would help us understand what’s happening ‘under the hood’ of AI systems, while AI Psychology would give us insights into their behavior and interactions with the world.

This division mirrors the dichotomy in human cognition research, where neuroscientists study the physical brain, and psychologists study behavior and mental processes. Both perspectives are crucial for a full understanding of cognition, whether in humans or AI.

As AI systems continue to grow in complexity and importance in our lives, developing a more sophisticated understanding of how they work and how they behave becomes increasingly important. The creation of AI Neuroscience and AI Psychology could be a significant step in that direction, fostering a more nuanced understanding of AI and enabling us to use, regulate, and improve these systems more effectively.

——————-

PS from Gu: this was written by GPT-4, using my previous blog post as a prompt. I largely agreed with the points here and admired GPT-4’s ability to organize language.

AI Neuroscience & AI Psychology?

When people on Earth don’t understand something, they create new disciplines. A lot of what we don’t understand comes from emergent behaviors; that is, the whole is drastically different from the building blocks. We may know the blocks reasonably well, but not the complex things built with them. For example, economics is a field that helps us understand the collective outcomes of many interacting people. With ChatGPT and other large language models starting to show abilities that we don’t quite know how to explain, other than that they have emerged from tools we (thought we) knew, maybe it’s time for new fields? What about splitting the AI research field into “AI Neuroscience” and “AI Psychology”?

On Time and Research Productivity

What is a good heuristic linking the time spent and the research progress made?

If we use bean counting as an example, the progress made would be a function of talent, skill, and effective time spent. This is not as simple a relationship as it may look at first glance. Talent (e.g., good hand-eye coordination and dexterity) separates us to some degree, but skill can make up for most of the differences. Skill is also developed over time, as a function of the effective time spent on bean counting in the past. Notice the word effective here. Three people (A, B, and C) can spend the same amount of time counting beans, but:

  1. A spends 30% of the time wondering: is B more talented than I am?
  2. B spends 20% of the time managing the group.
  3. C is not thinking at all, just counting.

Who will have counted more beans and gained more bean-counting skill over that time?
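A toy simulation of this compounding effect (all parameters invented for illustration):

    # Toy model: beans counted grow with skill x effective time, and skill
    # itself grows with effective time. Numbers are arbitrary.
    def simulate(effective_fraction, days=365, hours_per_day=4.0):
        skill, beans = 1.0, 0.0
        for _ in range(days):
            t = effective_fraction * hours_per_day
            beans += skill * t      # today's output
            skill += 0.001 * t      # skill compounds with effective practice
        return beans, skill

    for name, frac in [("A", 0.7), ("B", 0.8), ("C", 1.0)]:
        beans, skill = simulate(frac)
        print(f"{name}: beans ~ {beans:,.0f}, final skill {skill:.2f}")

C ends up ahead on both counts, and the gap widens with time because skill compounds.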

However, research is not bean counting. If, instead, we use mountain climbing as an example, we may consider progress as a function of talent, skill, time spent, and the direction we take. If we make a poor choice by taking the longer route, we could be climbing fast and hard but still arrive late.

However, research is not mountain climbing. If, instead, we use mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, and luck. If we are lucky, we will find many (and good) mushrooms along the way with less time and effort. But luck is not something we have direct control over. The only thing we can do is increase the number of trials, e.g., explore more, which is also a function of the time spent. Also, it’s reassuring to think that no one can always be unlucky on a time scale of, say, 40 years.
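That reassurance can be quantified with a back-of-the-envelope calculation (the per-trial odds here are made up):

    # If each month brings one independent shot at a "lucky find" with only a
    # 5% chance (an arbitrary assumption), then over 40 years:
    p, trials = 0.05, 40 * 12
    print((1 - p) ** trials)   # ~2e-11: staying unlucky forever is implausible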

However, research is not mushroom foraging by one person. That would be ignoring the bigger picture, e.g., the role played by others in one’s progress. If, instead, we use group mushroom foraging as an example, we may consider progress as a function of talent, skill, time spent, direction, luck, and interaction. The time spent on teaching, making friends, brainstorming, and cooperating may pay off in ways we can’t anticipate.

How do we know if we have been managing our time productively? We probably won’t know for sure. It usually takes years to develop a skill (e.g., a language, a musical instrument, writing), and the process is not linear. Your skill could stagnate for a long time before making the next jump. I would suggest following a simple heuristic: research progress, along with many of its contributing factors, has a positive correlation with the effective time spent. There is also a simple test. Just think back over the last week (while you can still remember): how much time did I spend on research-related activities, such as reading, learning, thinking, doing, writing, teaching, and debating? OK, for me, I worked quite hard last week (Oct 30 – Nov 05, 2022), but a good chunk of my time was spent attending meetings, replying to emails, grading homework, and dealing with logistics/paperwork. I also spent a good amount of time teaching and thinking (which were productive), but had little time to read and write, and even less time to use my hands for anything other than typing…

Why am I not a better researcher than I am now? That was probably the main reason. I can blame it on not having enough time, but that is probably just an excuse. Maybe it’s because there are too many things (other than research) that I chose not to give up? Or maybe the time I spent was just not effective enough. One bit of progress I have made is learning, after a long while, not to waste time thinking that I am not good enough as a researcher.

RootBots: Sprawling Robot Networks for Lunar Resource Extraction

Lunar Helium-3 (3He) mining has the potential to address the energy and environmental challenges on Earth. As an essential fuel for future nuclear fusion reactors, 3He is extremely rare on Earth, with current production of less than 2 kg/year in the US, but it is much more abundant on the lunar surface, with an estimated reserve of over one million tons. Just 40 tons of 3He, combined with deuterium (a resource available on Earth), could meet the current annual energy needs of the U.S.

Mining 3He on the Moon would be quite different from traditional mining processes on Earth. Over time, 3He, along with other volatiles, was implanted into the lunar regolith by the solar wind. This means that surface mining operations would need to cover vast areas at shallow depths. Agitating the regolith leads to a significant release of the volatiles, which presents both a challenge and an opportunity. Other challenges associated with large-scale space industrialization, such as the limited human presence and the high reliability/autonomy requirements, mean that radically different mining concepts need to be developed.

Inspiration: Vascular plants use roots to anchor themselves, to explore and extract resources from the soil, to store nutrients, and to compete with others. The roots grow by producing new cells, regulated by both extrinsic and intrinsic stimuli. Sensing of extrinsic factors, such as gravity, barriers, light, air, water, and nutrients, directs roots to grow in favorable directions. Hormones (i.e., intrinsic stimuli) such as cytokinins and auxin (often with opposite effects), as part of both local and long-distance signaling, also help coordinate root development. Plant roots are time-tested, real-world designs for functionality, intelligence, and resilience.

Vision: Enter rootbots, sprawling networks of robot organisms for performing planetary resource extraction tasks at an unprecedented scale in the second half of the 21st century. A rootbot would be made of modular, interlocking smart components called cells. Each cell interacts with the environment and makes its own decisions, influenced by local stimuli and the decisions of nearby cells. One important decision is whether a cell should stay in place or relocate (a toy sketch of such a rule follows below). If a cell decides to move, it waits for a mobile agent, called a transporter, to come by and pick it up. Traveling only on top of the existing “root” network of cells, the transporters carry cells and deploy them where needed. With two decentralized, interacting robot swarms (i.e., the cells and the transporters), a rootbot can grow like a plant root, exploring and exploiting favorable conditions.
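As a hypothetical illustration (not part of the proposal itself), one cell’s stay-or-relocate rule might look something like this, with invented names and thresholds:

    import random

    def cell_decide(local_resource, neighbor_signals, threshold=0.5):
        """Return 'stay' or 'relocate' from local sensing plus neighbor signals."""
        # Blend the cell's own reading with the mean signal of nearby cells,
        # loosely mimicking hormone-like local coordination
        consensus = (0.5 * local_resource
                     + 0.5 * sum(neighbor_signals) / len(neighbor_signals))
        if consensus >= threshold:
            return "stay"            # favorable spot: anchor and extract
        # Unfavorable spot: request a transporter pickup, but only
        # probabilistically, so noisy readings don't make the network thrash
        return "relocate" if random.random() < (threshold - consensus) else "stay"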

OK, the discussion above was adapted from an unsuccessful NASA NIAC proposal. Feedback from NASA? “Overall, while there were many interesting aspects suggested, the proposal overreached without enough justification for any one concept, especially given the scale proposed.”

A Trip to Mars… Desert Research Station (MDRS)

A couple of weeks ago, a team of WVU students and I traveled to Utah to compete in the University Rover Challenge (URC) for the first time. It had been 5 years since I was last at a robot competition. This time, we had a new group of passionate and talented students, which brought back a lot of memories and excitement. We ended up doing well for a first-time team, but not without struggles and some luck.

Going to a robot competition means getting out of one’s normal life routine. In a short few days, unexpected events unfold rapidly in front of everyone’s eyes, followed by rapid and intense problem solving by the team members. In this post, I will mention just a few of these surprises.

Imagine you are sending a rover to Mars for a science mission. Your rover needs to be in other people’s hands for transportation and payload integration. It has to survive the rocket launch, months of interplanetary travel, and the short but horrifying landing process. It may not end up in the exact location on Mars as you hoped. Once it’s there, there is only so much you can do about the rover, and things start to break as the rover moves from one place to another …

URC was a bit like that. As any good robot challenge should, it had many elements of surprise. Some of these surprises were imposed by the physical world, like a real Mars mission, and some were exclusively for first-timers like us.

Our launch vehicle was a brown UPS truck. We packed everything into five wooden crates and a cardboard box, almost 300 kg of gear in total. After traveling over the Earth’s surface for three days, the shipment arrived in Denver. A two-person team picked it up with a van and completed the remaining 7-hour journey.

Several parts broke during this trip, mostly 3D-printed ones. Luckily, we brought backups. Our 3D printer also had a broken motor mount. A team member (Tyler) zip-tied the motor in place so the printer could print a new part to fix itself, practically creating a self-repairing 3D printer. To our surprise, all the steel bolts on the rover were heavily rusted, as if the UPS truck had taken a sea route.

Getting the robot ready for the first two missions (Equipment Servicing and Autonomy) on the first competition day took a long time. Some testing was pushed to after dark. At close to 11 pm (1 am Eastern time), things started to look really good, with everything working. While we were powering down the system, an (unpowered) GPS cable fell into the electronics box and came close to (but did not quite touch) the power distribution board. After a small flash under one of the darkest night skies in the US, everything went quiet.

The incident renewed the night’s excitement, and sleep was no longer important. Close inspection of the power board revealed that an inductor had melted down. The inline fuse was still intact, and there was no way to tell if the electronics downstream (e.g., the computer) were still OK. Swapping out the power board for a backup took some careful deliberation and planning. Luckily, everything worked, and there were still over 2 hours left to sleep before we needed to get on the road.

It was a small miracle that the robot worked for the Equipment Servicing task without a chance to do full system testing after putting everything back together. We probably couldn’t have done much better than we did without a more in-depth understanding of the tasks, which could only be acquired by being there.

The Autonomy task was more… dramatic, for lack of a better word. The robot held its position (like the picture below) for almost the entire duration of the 30-minute mission. At the very last moment, it took off and reached its first waypoint. For those of us outside the command station trailer, a motionless robot can trigger many emotions and speculations. For the team members inside the trailer, it was a frantic problem-solving session. Clearly, time went by at very different rates just a few meters apart.

What turned out to have happened was that the terrain map loaded on the rover was centered around the habitats of MDRS. For the actual URC competition, the organizers split the four missions across three locations about 1 km apart. The starting point of the Autonomy mission was just outside our prior map. Finding itself off the map, the robot did not know what to do. It took the team members just a few minutes to diagnose the problem, and then many trials and errors to place a blank map in the right location so the robot could move. It worked! I have seen many “autonomous” robots make up their minds not to go anywhere during competitions…; this was the first time I saw a robot change its mind (with some human help, of course).
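In hindsight, the fix we improvised amounts to a simple guard that could have been coded in advance. A sketch (the map format and names here are illustrative, not our actual rover code):

    import numpy as np

    def ensure_map(pose_xy, grid, origin_xy, res, blank_cells=500):
        """Return a (grid, origin) that contains pose_xy, swapping in a blank
        map centered on the rover when the pose falls off the loaded map."""
        ix = int((pose_xy[0] - origin_xy[0]) / res)
        iy = int((pose_xy[1] - origin_xy[1]) / res)
        if 0 <= ix < grid.shape[1] and 0 <= iy < grid.shape[0]:
            return grid, origin_xy                    # pose is on the map
        blank = np.zeros((blank_cells, blank_cells))  # unknown, assumed traversable
        half = blank_cells * res / 2.0
        return blank, (pose_xy[0] - half, pose_xy[1] - half)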

With a bit more time and experience, we were better prepared for the next two missions on the following days: Science, and Extreme Retrieval and Delivery. There was no shortage of surprises and issues, but the team (and the rover) held up well.

An adventure like the URC trip teaches us the meaning of real-world engineering. To know that a system works, we need to put it through the test of truly new environments and unexpected situations, outside the control of the robot designers. The thought that we shipped a rover thousands of kilometers to one of the most uninhabitable deserts in the continental US and still managed to make it work in all four missions is quite satisfying. Many other teams, especially international ones, had to cope with even harder constraints, like designing the rover to fit in airline carry-on cases.

When a new problem arises during a competition, and it almost certainly will, it needs to be understood and solved quickly, either by the robot itself or by the team members. Luckily, robot designers and programmers are trained problem solvers, although their performance can be further improved with more systematic approaches. For autonomous robots, problem solving is a much harder challenge and perhaps the greatest gap in current robotics research.

Here is a group photo of the team along with our judge (second from the left), taken in front of an MDRS habitat after the Extreme Retrieval and Delivery mission.

Mastering the Master’s Study at IRL

If, after spending 16+ years in school, you still feel like you need more schooling, then a master’s program may be suitable for you…

This is the “super-undergrad” view of master’s study. Another way of looking at your journey as a master’s student is to consider it a “pre-Ph.D.” Whether you want to get a Ph.D. later or not, you can build yourself into a capable engineer and an independent researcher in 2-2.5 years. If you take this latter view, the time spent on your master’s degree may become the most consequential period of your career. However, this will only come as the result of hard work and wanting to make a change.

The first challenge for a master’s student is the transition process. Life as an undergrad was quite structured. You come to classes, do homework, prepare for one exam after another, keep yourself fed, clean, and healthy, and squeeze in other things you want to do. Your schedule is largely dictated by the curriculum, the professors, and the computers. You don’t have to plan much; just by responding to endless deadlines and getting things done on time, you would probably do well in school. Grad school is different, though. Initially, about half of a master’s student’s time is spent on classes. The other half? Not so well defined (well, research is a pursuit of truth…). Just as you were given the responsibility of managing your free time when you went to college, now you are given the freedom (just another way of saying responsibility) to manage half of your (potentially) productive time. Can you make it actually productive? Without the constant pressure of homework and exams, will you still learn as fast and with as much focus as you could? You will be given guidance on research directions, and the projects and papers do have deadlines. Beyond that, you are responsible for managing your day-to-day research activities.

The second challenge for a master’s student is the transition process. When you were doing a class project, you could be pretty sure the project was feasible and that the knowledge needed to complete it had mostly been discussed in class. Now, you will be swimming in the ocean of human knowledge, trying to solve open-ended problems. Do you have a sense of direction? Can you find the right tools for the right task? More experienced people will be there to help you, but you need to be venturous, diligent, and resilient.

The third challenge for a master’s student is the transition process. The projects are getting a bit bigger now. Big enough that a few all-nighters no longer matter (e.g., writing a thesis). You are going to have to learn the slow and steady way toward success. Finding milestones (e.g., finishing a literature review) and base camps (e.g., submitting a paper) becomes important. Not kicking the things you tend not to like doing (a synonym for “not good at”) down the road is also important. You have to fight the principle of least effort with willpower!

The fourth challenge for a master’s student is the transition process. You wanted to be surrounded by smart, thoughtful, and knowledgeable people? Now you have your wish. When you hear people talking about things that are way over your head, what should your response be? Join the discussion, ask them to explain, and let people help you! When you feel ignorant, that’s probably when you are learning.

A bit of structure for IRL master’s students (an experiment starting summer 2022):

  • 1st 6 months – identify a paper topic and complete the literature review for the paper. Present it to the lab.
  • 2nd 6 months – identify the thesis topic and complete the literature review for the thesis. Present it to the lab.
  • 3rd 6 months – complete and submit a conference paper. Present it to the lab.
  • 4th 6 months – complete and defend the thesis. Work on a second paper if possible.
  • 5th 6 months – fallback time, if needed.

A few heuristics for IRL master’s students:

  • Aligning the thesis topic with an ongoing project and IRL’s general research vision makes life simpler.
  • The thesis writing must be an individual effort, but the research is not. Good collaboration makes everyone better off.
  • Converting your written paper(s) into chapters of your thesis makes thesis writing a lot less stressful.

P.S. I have wanted to go to grad school since I was a kid. This was partially because I admired scientists and partially because of my uncle’s influence. He told me that when I became a grad student, I would get to work in a lab, and only on things I liked to do. This turned out to be not entirely true. I did manage to get into a good grad school for my master’s study, and it made a big impact on me. I met many smart people and learned (collectively with them) that none of us can outsmart the rest of us. Each of us has some talent, distributed in very different ways. Each of us has some serious flaws, distributed in very different ways as well. I guess understanding that was part of growing up. After I got my master’s degree, I felt liberated, for two reasons. First, I was no longer so confused about what I could do (unlike when I graduated from college); I felt I could be useful in some way. Second, I was confident I could always find an engineering job with a salary that could keep me alive, but I didn’t have to. This gave me the freedom and the power to try things that might not pay back. I became a life tourist.

Reprocessing of Mars Images from 2003

The Mars opposition in 2003 was a big deal. The media hype was that the two planets (Mars and Earth) would be at their closest in almost 60,000 years. A hoax claimed that Mars would look as large as the full Moon… That was a bit exaggerated, of course. Mars reached 25.1 arcsec on Aug 27, 2003, about 1/70 the apparent diameter of the moon.
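(A quick sanity check of that ratio, using the Moon’s mean apparent diameter of roughly 31 arcmin:)

    # Rough check of the "1/70" figure
    moon_arcsec = 31 * 60    # Moon's mean apparent diameter, ~31 arcmin
    mars_arcsec = 25.1       # Mars at the 2003 opposition
    print(moon_arcsec / mars_arcsec)   # ~74, so "about 1/70" holds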

I had to see all that myself; I made some plans.

First, I needed to upgrade the scope. I had been using a Celestron C9.25, and it gave good results on Jupiter and Saturn, but I wanted something bigger. I was lucky to be approached by TEC’s Yuri Petrunin, who offered to loan me (for free!) a 10” Maksutov-Cassegrain and a mount. That would have been a great setup; but at over $10k in cost, it was too much for a poor grad student without a car to handle (what if it was damaged or lost?). Plus, I had my own scope in the making…

It was a 12.6” F6 Newtonian on an equatorial mount. The primary mirror was made by Pegasus Optics. It was a very good mirror, but it was thick (2.1” Pyrex) and heavy, and it took a long time to cool down. With a lot of help from friends, the scope was finished on time.

Another challenge was that Mars would be low in the sky. In 2003, Mars would not rise more than 30 degrees above the horizon from my location in Morgantown. This meant the telescope had to see through a lot more atmosphere (compared to higher angles), so the seeing would be poorer. There would also be more atmospheric dispersion. I had tried to make my own dispersion correctors using prisms, but with no luck. Another way of battling this was to image with a monochrome camera and color filters. However, specialized mono cameras were expensive at the time. The popular camera among planetary imagers was a webcam made by Philips, called the ToUcam, which had a color sensor. I was able to find (from someone in Italy) a mono replacement for the ToUcam CCD. After a small surgery, I had a working mono camera. I also made filter sliders that could be controlled by an RC transmitter to change filters remotely (not pretty, but they worked). Finally, I was all set for imaging Mars.

I planned to observe Mars every clear night starting in April (I was young and ambitious at the time), but it rained the whole month. The weather between August and October, when Mars was at its best, ended up being great. I remember how proud I was of this set of photos (click to enlarge):

This animation shows the rotation of a cloudy planet. The images on the right were taken with a blue filter; they show orographic clouds over the volcanoes on the left and morning fog on the right side.

Fast forward 18 years. After completing a successful Mars imaging season in 2020, I became curious about the older images. I thought the original raw data were lost, but I was fortunate to find some on an old hard drive. Here are the reprocessed Aug 21 and Sep 17 images. What had changed since 2003 was mainly the ability to de-rotate the planet and stack images collected over a longer capture time (with the WinJUPOS software). It was really fun to play with the older data and bring back those memories.
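For the curious: the core idea behind de-rotation is simple, even though WinJUPOS implements it far more carefully (projecting the disk to a map, shifting, and re-projecting). A toy version on an equirectangular longitude-latitude map, with made-up inputs:

    import numpy as np

    MARS_DEG_PER_MIN = 360.0 / (24.62 * 60)   # Mars rotates ~0.244 deg/min

    def derotate(map_img, minutes_elapsed):
        """Shift a lon x lat map back by the longitude Mars rotated during
        the elapsed time, so frames can be stacked at a common epoch."""
        width = map_img.shape[1]   # pixels spanning 360 deg of longitude
        shift = int(round(minutes_elapsed * MARS_DEG_PER_MIN * width / 360.0))
        return np.roll(map_img, -shift, axis=1)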

I also made a new animation of the blue-channel images from the Aug 21 data. On that night, there were hardly any clouds on Mars, but the rotation of the south polar cap was fun to see.

Mars will not look any bigger than it did in 2003 for a very long time. But what’s more important is not to lose that passion.