Computational photography is changing the way photos are taken. More and more cell phones are using computation to offset the small lenses they can carry, and the progress has been amazing. What I am still waiting for is a way to let shaky cameras take sharper photos. Arguably, a shaky camera can provide more information about a scene than a steady one. Our brain-eye (and inertial?) system seems able to process such input, giving us a stable (and sharp!) perception of the world while we move. Since most phone users are not good at holding their cameras steady, why not take advantage of the shaking? Even better would be letting a shaky telescope provide a sharper view of the planets! Has this been done before? Can someone point me to a product or a paper using this approach?
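To make the idea a bit more concrete: the usual way to exploit small, random camera motion is to register a burst of frames against one another and then merge them, which is roughly what frame stacking in astrophotography and multi-frame super-resolution pipelines do. Here is a minimal Python sketch of that idea using OpenCV and NumPy. The function name and the global-translation motion model are illustrative assumptions on my part; a real pipeline would estimate local, sub-pixel motion and merge onto a finer sampling grid.

```python
import cv2
import numpy as np

def stack_shaky_frames(frames):
    """Crude burst stacking (illustrative only): align each frame to the
    first one by a global translation estimated with phase correlation,
    then average. Real multi-frame super-resolution would estimate local,
    sub-pixel motion and merge onto a finer sampling grid."""
    ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = ref_gray.shape
    acc = frames[0].astype(np.float32)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Sub-pixel shift of this frame relative to the reference frame.
        (dx, dy), _ = cv2.phaseCorrelate(ref_gray, gray)
        # Translate the frame back onto the reference grid and accumulate.
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(frame.astype(np.float32), M, (w, h))
    return np.clip(acc / len(frames), 0, 255).astype(np.uint8)

# Hypothetical usage with a handheld burst saved as burst_000.jpg, burst_001.jpg, ...
# frames = [cv2.imread(f"burst_{i:03d}.jpg") for i in range(8)]
# cv2.imwrite("stacked.jpg", stack_shaky_frames(frames))
```

Averaging registered frames mostly buys noise reduction; recovering detail beyond a single frame requires merging onto a finer grid, but the registration step above is where the shake actually helps.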
So, What Do We Do with It?
I am more of a telescope collector than a sky watcher. I have about a dozen telescopes of different designs: achromatic and apochromatic refractors (doublets and triplets), Newtonians, a Maksutov-Cassegrain, a Maksutov-Newtonian, an H-alpha solar scope, Dobsonian mounts, German equatorial mounts, roof-prism binoculars, porro-prism binoculars… you name it. I know way more about telescope designs than about the constellations in the sky or the features on Mars. Most of my telescopes spend years collecting photons in a very dark place: my closet.
I am more of a camera lover than a photographer. I have several cameras, from the film era to the mirrorless age. I have a couple dozen lenses with focal lengths ranging from 14mm to 500mm, not counting telescopes.
I have learned to accept this. There is nothing wrong with being obsessed with equipment, I told myself; the hobby is supposed to be fun!
I also like robots. My lab, IRL, has about two dozen robots, plus a 50-robot swarm. The UAV lab I worked in before had about a dozen UAVs. Most (but not all) of these robots and UAVs were custom developed. As someone who has always liked toys, I had a hand in the design of most of these systems.
So now, what happens when we have all the hardware we ever wanted? Of course, we can only get close, never all the way there. There is a pretty big difference between “wants” and “needs”, and we often rationalize “wants” as “needs”. As engineers and perfectionists, seeing small issues with the current setup makes us itch. We are constantly dreaming up the next design iteration. We tell ourselves that better robots will make our research better.
Is that true? Do we really need more/better robots to do better research? Maybe to some degree. If we don’t have the appropriate tools, we can’t do certain experiments. If we don’t have high-quality equipment, some work may be very hard to do (e.g., mapping without a 3D lidar, or robotic pollination without a precision manipulator). I think another important reason for having the best robots, like having the best telescopes/cameras, is that we have no one but ourselves to blame for the underperformance…
So, let me ask again: what happens when we have all the hardware we ever wanted? What do we do with it? The answer is simple: let’s focus on research. Instead of rushing to start on the next-generation design and letting the existing robots collect dust, let’s make them do things nobody else can dream of or believe!
What Makes a Good Grand Challenge?
I am a big fan of Grand Challenges.
I was super motivated when reading about John Harrison, a self-taught engineer (carpenter) in the 18th century, who won the longitude reward through decades of perfecting his clockmaking skills. I also watched several DARPA Challenges with great interest. For three years, I participated in NASA’s Centennial Challenge on Sample Return Robot. Those three years left me with countless memorable moments to enjoy for the rest of my life.
What I like most about Grand Challenges is that they give people excitement and hope. Grand Challenges allow someone who would otherwise remain unknown, such as John Harrison or Charles Lindbergh, to shine through courage, dedication, and talent. They can also accelerate technology development by bringing together a broader range of conventional and unconventional innovators and solutions.
However, many Grand Challenges have failed to achieve these effects, for a variety of reasons. Here are a few factors I think may be worth considering when designing a new Challenge.
- It needs to be relevant. If a Challenge addresses one of humanity’s most urgent needs, more people are likely to follow and participate.
- It must be a Challenge. A Grand Challenge needs to be hard. It should be a jump beyond our known abilities. It may sound impossible at first, but it should be so cool that it makes people imagine. The Challenge should also not be too big a jump; otherwise everyone would fail (an acceptable but not desirable outcome).
- The Challenge description must be clear, rigorous, and stable. Like any game, there should be no ambiguity or room for interpretation. The actual tests must also precisely match the description. Unfortunately, quite often the organizers did not fully think through all the issues at the beginning. They would come up with a set of rules that causes confusion (and potentially unfairness), and then dumb down the Challenge after most participants failed (this happens more often than you may want to believe!).
- Human factors must be kept to a minimum. One of a Grand Challenge’s greatest strengths is that it gives everyone a fair chance. You do not have to be a world-renowned thinker/scientist/engineer, you do not have to be rich, you do not even need to have a stable job; as long as you have a good idea, the skills, and the will (easier said than done), you have a fair chance of winning. The success of a Grand Challenge should be defined by beating the problem, not anyone or anything else. If we allowed humans (e.g., the Challenge organizers) to pick winners based on their preconceived ways of solving the problem, John Harrison would have had no chance against the big-name astronomers of his time (note: the Board of Longitude’s preference, shared by Newton, for an astronomy-based solution did cause Harrison hardship for many years…). Let the results speak for themselves!
- Teams shall come up with their own resources, at least initially. This one may sound strange to you. Would it not reward people with deeper pockets and leave the poorer guys out of the fight? It might, but let’s consider the alternatives for a moment. What if the organizer picks a few promising teams and gives each of them a few $M, so they don’t have to be sidetracked by fundraising and other resource constraints? The question would then be: based on what criteria? Prestige? Track record? How likely a team’s idea is to work? If you read the history of Grand Challenges, you will know that none of these are reliable indicators of success. What this funding approach does is effectively disincentivize the selected teams from pushing the envelope hard (they already have the cake; the final Challenge prize is just the icing) and block out all other competitors. In my opinion, just like any startup, each team needs to fight for its own survival the entire time. If you think you have a good idea, try to convince someone to fund you, or join another team with adequate resources. I think the phased approach used by the NASA Centennial Challenges is very good: let teams compete in the initial phases (e.g., a simplified Challenge with a low barrier to entry) on their own dime, then provide some funds to the teams that pass the initial phase. This record of success also helps teams raise more funds from other sources.
- Give it a longer time frame. Most government funding mechanisms have a short time horizon, but that is not necessarily good for getting the best outcomes. If a problem is of such importance to society (e.g., determining longitude), why not keep the challenge alive for decades until it’s solved (luckily, it was!)? A short-term focus leads to more applied solutions, discourages risky/crazy ideas, and more likely leads to picking the lower-hanging fruit. Grand Challenges for picking low-hanging fruit? That does not sound good!
- Follow up after the Challenge. Don’t let the whole thing end the moment a victory is declared. Each participant has probably developed something unique and valuable; creating mechanisms (with funding) to support them working together for a little while may spark more innovation.
Of course, we all live within real-world constraints. I will continue to be excited whenever a new Challenge is announced!
My Anti-CV
I have been wanting to do this for several years: create an anti-CV that documents the major academic/career failures I have had. I used to keep a folder for rejection letters, but it soon ran out of space, and I stopped collecting them a long time ago…
Time to find a different way to remember my failures! So here it is, an incomplete version of my anti-CV. I will keep it updated from now on.
My goal has always been to receive no fewer than 10 rejections a year.
Updated June 2024: Anti-CV
Presenting Your “Whole Package” During Faculty Interviews
It looks easy. All you have to do is pretend to be someone better than yourself for half an hour (on the phone/Skype/Zoom), and then for a day (on campus), to get that dream faculty job offer.
I tried that a few times as a candidate but was not very successful. I didn’t know what the problems were until years later, after serving as the chair of or a member on several faculty search committees.
A faculty interview is like speed dating. By the time you make it to the interview, we (the search committee) have been impressed by your achievements (on paper), but we haven’t gotten to know you as a person. We are afraid of picking someone we may regret and be stuck with for years… That scary thought motivates us to do a careful job.
So what do we care about? First and foremost, we want to know whether you are a person we would want to work with as a colleague. Are you an open and frank person? Would you see our institution as your future home? Are you the type who makes the people around you better? Do you hold a balanced view of different matters?
Second, we try to predict whether you would become a star (not just meet the tenure requirements) with our yet-to-be-proven fortune-telling skills. Do you have a solid grasp of the fundamentals in your area? Are you passionate about something? Can you think critically and independently? Are you aware of ongoing trends in research and in society? Are you an ambitious person with big dreams and a strong vision? Can you communicate well with different audiences, make sound arguments, and be persuasive? Can you be an effective teacher? Can you handle pressure, stress, and setbacks?
Finally, we also want to know whether your success would matter to other people. Do you bring complementary skills (teaching, research) to the institution? Are you a team player? Are you more interested in yourself, the community, or society? Can you lead a team to build something bigger than any of us could build individually?
In a nutshell, we are looking for someone who is way closer to perfection than ourselves …
Of course, we don’t know how to evaluate all that… In robotics terms, we are facing a decision-making-under-uncertainty problem, an active perception problem, and a bounded rationality problem (e.g., making decisions with incomplete information and limited time). Each of us on the committee tries to observe you and to probe you with questions. We fall victim to our cognitive biases and jump to conclusions with insufficient data, while trying hard not to fill in the missing pieces with stereotypes, imagination, and the random thoughts/mood of the day.
So how do you survive this complicated, stressful, inherently stochastic, and often biased process? There is a lot you can do before coming to an interview. Preparation and experience help. Iterating on the answers to common questions leads to better, more focused answers. Knowing what may come helps you prepare and know when to relax.
However, a skillful search committee can/may see through some of the facade. There are things that can be coached by a good advisor in a matter of hours (e.g., which funding programs to target), but we are not hiring your advisor. Interview experience can be learned (someone who has had many interviews in the past is not necessarily preferred over someone on their first trip). Always doing your homework is a good quality, but it is only one of many that we are looking for. Having worked on a cool project, like a NASA mission, only means you were on a large team. There are also things that any intelligent person can learn on the job later, without much risk.
What we really want to get to know better is you. We try to focus on things that take real effort and experience to understand. For example, someone who has never taught a class before would likely not understand the true challenges of teaching and learning. Someone who only did what the advisor told him/her to do may not have deep insight into what the next step is, the step after that, and why. You are unlikely to be a good leader without appreciating the meaning of compromise and sacrifice. Your strong desire to help the community needs to be backed up by a purpose and a track record. You may pretend well for a minute, but if you don’t have the real experience, the act may not survive a few rounds of probing.
Faculty candidates often don’t know why they failed (or succeeded). Almost no search committee can provide frank and detailed feedback, for a variety of reasons. We won’t/can’t tell you that you’ve been acting like a teenager; that your accent was not a problem; that name-dropping and talking down other researchers did not serve your interest; that your honesty and willingness to expose your vulnerability were appreciated; that you didn’t seem prepared to write a proposal/teach a class/run a lab; etc. I have seen candidates who apparently interviewed at many different places and still didn’t quite understand why they hadn’t landed a job. I was in that boat for a few years as well.
So, pretending to be a better, more desirable version of yourself during the interview may not work out. I think a better strategy is to act like that version now, to identify and build up those experiences, and to collect honest feedback. It is never too late to start. Once you are on this path, you can follow what many people suggest you do during interviews: “just be yourself”. You no longer have to pretend to be someone better; you are a better faculty candidate.