In the future, cameras may not need lenses. Bell Labs, in Murray Hill, NJ, has already built a prototype of a lensless camera that could topple the 150-year reliance on “glass” to create images.
The Bell Labs concept, reported by MIT’s Technology Review, relies on a method for assembling images called compressive sensing (or compressive imaging). The basic principle is that any data set built from many similar measurements contains significant overlapping, and therefore redundant, information. The trick is knowing which measurements to take and how to reassemble them.
Gang Huang and his team from Bell Labs said the new device is simple, calling it a “lensless compressive imaging architecture.”
“The architecture consists of two components, an aperture assembly and a sensor,” Huang said. “No lens is used.”
The device consists of an LCD panel that acts as an array of apertures, each of which can allow light to pass through, and a single sensor capable of detecting light in three colors.
The Technology Review said that each aperture in the LCD array is individually addressable, so it can be opened to let light through or closed to block it. The pattern of open and closed apertures must be random for the technique to work properly.
The process begins with the sensor recording the light from the scene that has passed through one random pattern of open apertures in the LCD panel. It then records the light through a different random pattern, then another, and so on.
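This snapshot process can be sketched in a few lines of NumPy. The sketch below is illustrative only: the names (`scene`, `masks`, `readings`) and the sizes are assumptions for demonstration, not Bell Labs’ actual code. Each snapshot opens a random subset of apertures, and the single sensor records one number: the total light that got through.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical scene, flattened into 64 pixel intensities.
n_pixels = 64
scene = rng.uniform(0.0, 1.0, n_pixels)

# Take fewer snapshots than there are pixels.
n_snapshots = 32

# One row per snapshot: 1 = aperture open, 0 = aperture closed,
# chosen at random, as the article describes.
masks = rng.integers(0, 2, size=(n_snapshots, n_pixels)).astype(float)

# For each mask, the single sensor sees the sum of the light
# passing through the open apertures.
readings = masks @ scene

print(readings.shape)  # one scalar reading per snapshot
```

The point is that the raw data is just one number per snapshot, far less than one value per pixel; the image itself only exists after reconstruction.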
Although each aperture pattern is random, the snapshots are correlated because they all record the same scene, just in different ways. This correlation is the key the team uses to reassemble an image: the compressive-sensing process analyzes the data, finds the correlation, and uses it to recreate the image.
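A self-contained sketch of that reconstruction step follows. The solver here (ISTA, a standard iterative soft-thresholding method for sparse recovery) and all names are textbook choices assumed for illustration, not the team’s published algorithm. One extra all-apertures-open snapshot measures the total light, which lets us center the 0/1 masks, a common trick that makes the recovery numerically well behaved.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64   # pixels in the flattened scene
m = 32   # snapshots -- only half as many measurements as pixels
k = 4    # the scene is sparse: just 4 bright points

# Ground-truth sparse scene (hypothetical).
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = rng.integers(0, 2, size=(m, n)).astype(float)  # random 0/1 masks
y = A @ x_true                                     # single-sensor readings

# One all-open snapshot measures the total light; subtracting half of it
# centers the masks to +/-0.5, which recovers much more reliably.
y_total = x_true.sum()
B = A - 0.5
y_c = y - 0.5 * y_total

# ISTA: minimize 0.5*||B x - y_c||^2 + lam*||x||_1
L = np.linalg.norm(B, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(20000):
    x -= B.T @ (B @ x - y_c) / L                            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error from {m} of {n} measurements: {err:.3f}")
```

With the scene sparse enough, the reconstruction comes out close to the original despite having only half as many measurements as pixels, which is exactly the redundancy argument the article describes.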
This lensless camera has a number of advantages over conventional cameras. The first is the tiny amount of data required to create an image. Because there is no lens, the images suffer none of the aberrations and focusing problems that lenses introduce. The scene is entirely in focus, and the resolution of the image depends on the size and number of the apertures and on the point-like nature of the light sensor.
By using two sensors behind the same aperture array, it is possible to create two different images of the scene at the same time. Indeed, multiple sensors produce multiple images.
The chief disadvantage, for now, is that it takes time to acquire the data for each image, so the prototype only creates images of still scenes. Moving images will come when the processing time is decreased.