By ANNE EISENBERG
Published: July 29, 2004
POETS, lovers, and amateur psychologists have long looked deep into people's eyes to read their thoughts and feelings. But the reflections in full view on the surface of the eye rarely receive much attention.
Now two Columbia University scientists have come up with a computer-based way to extract detailed information from the fleeting images of the world mirrored on the curved surface of the eye.
Shree K. Nayar, a professor of computer science and co-director of the Columbia Vision and Graphics Center, took high-resolution photographs of people that include their eyes and, in particular, the transparent part of the eye called the cornea. Then, with a postdoctoral researcher, Ko Nishino, he devised computer algorithms that analyze the images reflected in these natural mirrors, revealing a wealth of information.
The system can automatically recover wide-angle views of what people are looking at, including panoramic details to the left, right and even slightly behind them. It can also calculate where people are gazing -- for instance, at a single smiling face in a crowd.
Because the algorithms can track exactly where a person is looking, the system may one day find use in surveillance cameras that spot suspicious behavior or in interfaces for quadriplegics who use their gaze to operate a computer.
Dr. Nishino and Dr. Nayar plan to try their corneal imaging system with archival photographs. "It will be fascinating to go back and look at photographs of important people like John Kennedy," Dr. Nayar said. "From a single image of the eye, we may be able to figure out what was around him and what he was looking at."
The image-processing system works both with high-resolution digital photographs and with conventional film that can be scanned and enlarged for high resolution in the area of the eyes.
Analyzing such images might reveal not only new information about a subject's surroundings at the moment the photograph was taken, but also the precise object or person the subject was observing.
Jitendra Malik, a professor of electrical engineering and computer science at the University of California at Berkeley, described Dr. Nayar's system as a "simple but elegant idea."
"Professor Nayar treats the cornea as a mirror, so if you look at what's imaged in the mirror, you can see many details of what a person is looking at," he said.
Dr. Nayar, the inventor of a 360-degree omnidirectional camera called the Omnicam, has long been interested in imaging systems that combine lenses and mirrors, called catadioptric systems.
About a year ago, he and Dr. Nishino realized that the cornea could be viewed as the mirror in such a system, and the lenses as the camera.
The detailed wide-angle information recovered by the new system is possible because the image reflected by the cornea is broader than that captured on the retina. The retinal field of view is considerably less than a hemisphere -- 160 degrees horizontally and 130 degrees vertically. But the corneal image spans roughly a hemisphere or more, permitting objects to the side of and behind the person to be seen, so long as the person is not looking away from the camera at an extreme angle.
"My hunch is that the best applications of this work will be with human-computer interactions," like using one's gaze to start a computer, Dr. Malik said. "The advantage of the technique is that it's passive," and does not direct additional energy at the eye, he added. (With some common eye-tracking methods, infrared light is projected into the user's eyes.)
The crucial algorithm in the system automatically computes the relative position and orientation of the cornea in relation to the camera, using the elliptical shape of the limbus, or border, between the cornea and the white of the eye. "The shape of the limbus tells you where the eye is in the three-dimensional scene and which direction the eyeball is pointing," Dr. Nayar said. The wide-angle image can then be created from this information.
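The underlying geometry can be illustrated with a simple calculation. The limbus is nearly circular and roughly the same size in all adults, so the ellipse it projects in a photograph encodes the eye's distance (from the ellipse's size) and its orientation (from how foreshortened the ellipse is). The sketch below is an illustrative weak-perspective approximation, not the Columbia group's actual algorithm; the function name, the assumed limbus radius, and the calibrated focal length are all assumptions for the example.

```python
import math

# Approximate anatomical constant: the adult human limbus is nearly
# circular, with a radius of about 5.85 mm (diameter ~11.7 mm).
LIMBUS_RADIUS_MM = 5.85

def eye_pose_from_limbus(major_px, minor_px, tilt_deg, focal_px):
    """Estimate corneal distance and gaze tilt from a fitted limbus ellipse.

    major_px, minor_px : semi-axes of the ellipse fitted to the limbus, in pixels
    tilt_deg           : image-plane rotation of the ellipse's major axis
    focal_px           : camera focal length in pixels (assumed known calibration)
    Returns (distance_mm, gaze_angle_deg, tilt_deg).
    """
    # Distance: a circle of known radius whose image spans major_px pixels
    # lies at depth f * R / r under a weak-perspective camera model.
    distance_mm = focal_px * LIMBUS_RADIUS_MM / major_px
    # Foreshortening: the ratio minor/major equals the cosine of the angle
    # between the eye's optical axis and the camera's viewing direction.
    gaze_angle_deg = math.degrees(math.acos(min(1.0, minor_px / major_px)))
    return distance_mm, gaze_angle_deg, tilt_deg

# Example: a limbus imaged as a 60 x 30 pixel (semi-axis) ellipse by a
# camera with a 3000-pixel focal length.
distance, gaze, tilt = eye_pose_from_limbus(60.0, 30.0, 0.0, 3000.0)
print(distance, gaze, tilt)  # 292.5 mm away, gazing 60 degrees off-axis
```

In practice an ellipse would first be fitted to limbus edge points detected in the photograph; the two-fold ambiguity in tilt direction and the cornea's own curvature make the real problem harder than this toy model suggests.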
Dr. Malik said the technique was particularly timely because of the advent of high-resolution cameras. "Now it makes sense to exploit the information we can get from them."
The system may be a boon to marketers who use cameras to track what people are looking at in a room or in a store. It may also prove important to journalists, said John V. Pavlik, a professor and chairman of the department of journalism and media studies at Rutgers University. "One problem with eyewitness accounts that journalists and others rely on is that these accounts are limited," he said, by people's ability to recall accurately what they have seen.
"This technique could reveal things that were in front of them that they weren't aware of seeing so that we can understand the truth of what happened, and advance the veracity of eyewitness accounts," he said.
Dr. Nayar suggested that the system could also be applied to security cameras, although the picture would have to be of high quality and the eye would have to be in focus. The Columbia group shot high-resolution pictures, typically of 3,000 by 2,000 pixels, with the eye taking up a circle of about 120 pixels by 120 pixels.
The algorithms may also play a part in computer graphics, for example, to recover the original lighting in old movies from information reflected in actors' eyes. Then virtual objects could be inserted in the films that blended in realistically with the original images.
Takeo Kanade, a professor of computer science and robotics at Carnegie Mellon University, said the new system would have many other applications, too. "It's really intriguing to use the eye as a natural mirror to reflect the world," he said. "You observe this every day, and yet no one had ever thought to use it for computer image processing."