New Computer Program Transforms Blurry Photos Into Sharp Ones

SANTA CRUZ, CA–The conventional wisdom that a machine is only as good as its parts may no longer apply to cameras and other imaging devices. An engineer at the University of California, Santa Cruz, has found a fast way to take blurry, imprecise pictures and turn them into crisp, clear images by using software to "guess" what the picture really should be. His work may have a wide range of applications, from improving surveillance cameras to building quality control monitors for tiny electronic devices. Eventually, this "superresolution" technique may enable ordinary, inexpensive video cameras to produce clean, high-quality images.

"If you tell people that you can take video from their lousy, low-resolution video camera and turn it into high-resolution video, they just don’t believe it," said Peyman Milanfar, an assistant professor of electrical engineering at UCSC.

Milanfar is working with Nhat Nguyen at KLA-Tencor Corporation in Milpitas and Gene Golub of Stanford University to develop the technique. The researchers reported their latest results in the April 2001 issue of IEEE Transactions on Image Processing.

"Soon everyone and their cousin will have a webcam on top of their PC," Milanfar predicted. "It will cost just $100, and will produce video at a resolution that’s now impossible."

A digital camera is equipped with a grid of "charge-coupled devices" (CCDs) that detect light passing through the camera lens–the more elements in the grid, the finer the detail it can record (and the more expensive the camera). Each CCD element corresponds to a single pixel, or dot, of the image. So a camera’s resolution is limited by the density of its CCD grid.

But Milanfar has found a way to stretch that limit. The idea is to start by taking not one photo of the object you want to capture, but several, each from a slightly different vantage point. Then collect all the pixels of information from these different shots, and use a computer program to figure out what single, sharper image must have produced them.
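
As a rough illustration of that idea, the Python sketch below combines several low-resolution frames, each offset by a known sub-pixel amount, onto a single finer grid by simple "shift and add." The function name, the assumption that the shifts are already known, and the nearest-neighbor placement are illustrative simplifications, not the researchers' actual method.

```python
# Minimal shift-and-add superresolution sketch (illustrative only, not the
# published algorithm). Assumes the sub-pixel shift of each low-resolution
# frame relative to the first is already known.
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Place pixels from several shifted low-res frames onto one finer grid.

    frames : list of 2-D arrays, the low-resolution shots
    shifts : list of (dy, dx) sub-pixel offsets, in low-res pixel units
    scale  : integer upsampling factor (2 doubles the width and height)
    """
    h, w = frames[0].shape
    high = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(high)

    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                # Map each low-res pixel to the nearest cell of the fine grid.
                Y = int(round((y + dy) * scale))
                X = int(round((x + dx) * scale))
                if 0 <= Y < h * scale and 0 <= X < w * scale:
                    high[Y, X] += frame[y, x]
                    hits[Y, X] += 1

    # Average the cells that several frames landed on; cells never hit stay
    # zero here and would be filled by interpolation in a full implementation.
    return np.divide(high, hits, out=high, where=hits > 0)
```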

That’s easier said than done. If a computer were to do the calculation directly, then improving the resolution of a typical photo by a factor of four would entail solving a system of equations with more than a million unknowns.
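
To put that figure in perspective, a quick back-of-the-envelope calculation (assuming, purely for illustration, a 640-by-480 video frame doubled in each direction, which quadruples the pixel count) bears it out:

```python
# Back-of-the-envelope problem size (illustrative numbers, not from the paper).
low_w, low_h = 640, 480          # a typical low-resolution video frame
scale = 2                        # doubling width and height = 4x the pixels

unknowns = (low_w * scale) * (low_h * scale)
print(f"{unknowns:,} unknown pixel values")           # 1,228,800

# A dense matrix relating those unknowns to the measured pixels would have
# roughly unknowns**2 entries -- far too large to build or invert directly.
print(f"{unknowns**2:,} entries in a dense system matrix")
```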

To get around this difficulty, Milanfar uses interpolation techniques to shortcut the calculation. Guessing what a photo should look like is a bit like trying to make a topographical map of unknown territory. You can start by taking elevation measurements at a few different spots on the land–this corresponds to taking a single photograph of an object. To get a more refined map, you could next take measurements, say, a single pace away from the spots you’ve already measured.

This second ‘shot’ will give you additional elevations, but it will also give you something more valuable: since you already know the elevation at nearby points, comparing the different measurements will tell you whether a particular point is on a hill, and how steep the hill is. If you take a few more sets of nearby measurements, you can tell whether the hills are convex or concave. And using your knowledge of the shapes of the hills, you can make educated guesses about the elevations of points farther and farther away, gradually filling in the picture.
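
In one dimension, the map-making analogy can be sketched as follows. The terrain function, the spacing of the measurement spots, and the slope-based guess are stand-ins chosen for illustration, not details of the published method.

```python
# 1-D sketch of the map-making analogy (illustrative only).
import numpy as np

def terrain(x):
    """The unknown 'landscape' being mapped (a made-up stand-in)."""
    return np.sin(x / 3.0) + 0.1 * x

coarse = np.arange(0.0, 30.0, 3.0)   # first set of measurement spots
paced = coarse + 1.0                 # second set, one "pace" away

# Comparing the two nearby measurements gives the local steepness.
slope = (terrain(paced) - terrain(coarse)) / 1.0

# Educated guess at spots never measured directly: walk from a known spot
# along the estimated slope (a first-order step).
query = coarse + 2.0
guess = terrain(coarse) + slope * 2.0

print("worst error of the slope-based guesses:",
      round(float(np.max(np.abs(guess - terrain(query)))), 3))
```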

Techniques for achieving superresolution of photos have been around for about a decade, but their extreme slowness has prevented them from being used in practical applications. Milanfar’s new method is about 10 times faster than its fastest competitor, he said, but it is still short of the speed necessary for real-time applications. He is working to improve his technique’s performance even further.

"From a practical point of view, we want to have a device that can do this on the fly," Milanfar said. "So you’ll buy your cheap camera, plug it into a circuit that has our algorithm on a chip, and on the other end you’ll get high-resolution images."
