MIT Researchers Working on Better 'Vision' for Autonomous Driving

The research involves superhuman sensors that let machines gauge the distance to what is in front of them to within three micrometers, improving the accuracy of autonomous driving systems.

Autonomous vehicle development is in full swing, but if such vehicles are really to succeed in the marketplace and society, they will need computerized controls that are even more accurate than the ones they use today.

One of the biggest challenges for autonomous driving systems is imperfect weather, according to Achuta Kadambi, an MIT PhD student and first author on a research paper and project aimed at improving "vision" for autonomous vehicles so they can operate safely in any conditions.

The paper, "Rethinking Machine Vision Time of Flight with GHz Heterodyning," looks at how distance can be gauged by measuring the time it takes light projected into a scene to bounce back to a sensor. That information, Kadambi told ITPro Today, can be used to guide autonomous vehicles with far greater precision.
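
The underlying relationship is simple. As a minimal sketch of the principle (an illustration, not code from the paper): the sensor measures the round-trip time t of the light, and the depth follows as d = c * t / 2.

```python
# Minimal sketch of the basic time-of-flight relationship (illustrative only,
# not the researchers' implementation). Depth is half the round trip:
# d = c * t / 2, where t is the measured round-trip time of the light.

C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Depth implied by a measured round-trip time of flight."""
    return C * t_seconds / 2.0

# Light returning after ~13.34 nanoseconds bounced off an object ~2 m away.
print(distance_from_round_trip(13.34e-9))  # ~2.0 meters
```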

"The way we drive a car, we see photons that hit our eyes and make decisions based on that information," he said. "A computer is not as smart as a human brain in terms of driving. To compensate for this, we want to build superhuman sensors that can help with driving."

The researchers envision depth sensors that can capture a piece of a light wave, or photon, and measure exactly how long it has traveled, the so-called "time of flight," he said.

"That means that for every object in the world, I could tell you exactly how far it is away from the car with extreme precision," said Kadambi. "Right now this is an active topic of work, to be able to tell us how far away objects are. It is clear we need this capability. It is one of those things that are still being refined to get autonomous cars to work safely."

Kadambi and the rest of the research team are working on a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold, which has promising uses for self-driving cars. At a range of two meters, existing time-of-flight systems have a depth resolution of about a centimeter. While that's good enough for the assisted-parking and collision-detection systems on today's cars, much better accuracy is needed for autonomous vehicles, particularly when conditions degrade due to rain, fog, snow and other hazards, said Kadambi. Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group at MIT, is the thesis adviser for the project.
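
Those figures imply severe timing requirements. As a back-of-the-envelope illustration (my arithmetic from d = c * t / 2, not a calculation from the paper), a target depth resolution dz demands resolving round-trip times of 2 * dz / c:

```python
# Timing precision implied by a target depth resolution (illustrative only).
C = 299_792_458.0  # speed of light in m/s

def timing_resolution(depth_resolution_m: float) -> float:
    """Round-trip timing precision needed for a given depth resolution."""
    return 2.0 * depth_resolution_m / C

print(timing_resolution(1e-2))  # ~67 picoseconds for 1 cm (today's systems)
print(timing_resolution(1e-5))  # ~67 femtoseconds for 10 um (1,000x finer)
print(timing_resolution(3e-6))  # ~20 femtoseconds for 3 um
```

Directly timing femtosecond-scale delays is far beyond ordinary camera electronics, which is the problem the heterodyning approach described below is meant to sidestep.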

A time-of-flight depth sensor is like a 3D camera that not only forms a picture of an object but also tells you how far away it is.

"We want to be able to tell you exactly where an object is in the world down to micrometer or micron measurements," he said. "The sensors that exist today, one could argue that they are not sensitive enough, which is why we are not yet seeing these cars in widespread use on the road yet."

So far, the team's work has made its way through the concept stage, he said. Experiments have been completed and reported, patents have been filed, and the researchers are talking to car companies about including the work in future autonomous driving research and designs.

"Like any research that comes out of a university, it's going to require development and support from industry partners to really see this through to fruition," said Kadambi. "I certainly think we are on to something."

Interestingly, doctors at Harvard Medical School heard of the team's research and are looking at how it might also be used to see internal structures within the human body to aid in surgeries, according to MIT.

"I am excited about medical applications of this technique," Rajiv Gupta, director of the Advanced X-ray Imaging Sciences Center at Massachusetts General Hospital and an associate professor at Harvard Medical School, said in a statement. "I was so impressed by the potential of this work to transform medical imaging that we took the rare step of recruiting a graduate student directly to the faculty in our department to continue this work."

Gupta called the research "a significant milestone in development of time-of-flight techniques" because it removes the need for very fast cameras. "The beauty of Achuta and Ramesh's work is that by creating beats between lights of two different frequencies, they are able to use ordinary cameras to record time of flight."
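
Gupta's description points at why this works: mixing two signals at nearby frequencies produces a slow "beat" at their difference frequency, and a time-of-flight delay shows up as a phase shift of that beat, which a slow, ordinary sensor can read out. The numpy sketch below simulates that downconversion with assumed values (the two tones near 1 GHz, the 2 m target); it illustrates the beat-frequency idea, not the authors' actual optical setup.

```python
import numpy as np

# Two tones near 1 GHz (assumed values); their product contains a slow
# beat at |f1 - f2| = 1 MHz that a much slower sensor could sample.
f1, f2 = 1.000e9, 1.001e9
fs = 10e9                                # simulate on a fast time grid
t = np.arange(0, 10e-6, 1 / fs)          # 10 microseconds of signal

delay = 13.34e-9                         # round trip for an object ~2 m away
returned = np.cos(2 * np.pi * f1 * (t - delay))  # delayed return signal
reference = np.cos(2 * np.pi * f2 * t)           # local reference tone
mixed = returned * reference             # sum (~2 GHz) and beat (1 MHz) terms

# Crude low-pass filter: averaging over 10 ns windows suppresses the ~2 GHz
# sum term while leaving the 1 MHz beat essentially untouched.
window = 100                             # 100 samples = 10 ns at 10 GS/s
beat = mixed[: mixed.size // window * window].reshape(-1, window).mean(axis=1)

# The depth is encoded in the beat's phase, 2*pi*f1*delay (mod 2*pi), so a
# GHz-scale delay can be read out from a MHz-scale signal.
print(f"expected beat phase: {(2 * np.pi * f1 * delay) % (2 * np.pi):.2f} rad")
```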
