Project Learning Features (N1)
The goal of the Learning Features project (N1) is to make computers smarter, both auditorily and visually. The team focuses on extracting concepts from multimedia information. In this way, the project has already succeeded in letting computers analyze human emotions: the emotions detected behind Mona Lisa’s centuries-old mysterious smile made it to the world press in 2005.
To find the right fragment in the digital haystack, the video search program developed by this project collects knowledge at the concept level. A concept can be an object (a person, a house, or a car), but also an event (an explosion or a sunset). The search program learns by being shown many example images and videos of the concept concerned, and keeps doing this until it can identify by itself the features that are typical of that concept.
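As a rough illustration of this example-driven learning, the sketch below (in Python, using NumPy and scikit-learn, neither of which the article mentions) trains a linear classifier for one concept from positive and negative example images, reduced here to simple color histograms, and then ranks unseen fragments by how concept-like they look. It is a minimal sketch of the general idea, not the project's actual search program.

```python
# Minimal sketch of concept learning from labeled examples: show the
# program many positive and negative examples of a concept until it
# can pick out the characteristic features on its own. The feature
# (a color histogram) and classifier choice are illustrative
# assumptions, not the project's pipeline.
import numpy as np
from sklearn.svm import LinearSVC

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histograms of an RGB image (H, W, 3)."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / (h.sum() + 1e-9)  # normalize so image size does not matter

def train_concept(pos_images, neg_images):
    """Learn a detector for one concept from example images."""
    X = np.array([color_histogram(im) for im in pos_images + neg_images])
    y = np.array([1] * len(pos_images) + [0] * len(neg_images))
    return LinearSVC().fit(X, y)

def rank_fragments(model, fragments):
    """Score unseen frames/images; indices of best matches come first."""
    X = np.array([color_histogram(im) for im in fragments])
    scores = model.decision_function(X)
    return np.argsort(scores)[::-1]
```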
Local descriptors are important for recognizing concepts; together, such descriptors characterize the concept. A descriptor can describe, for example, a local color or a distribution of shape. Descriptors are calculated at points where something interesting happens, such as a point where the color changes or a corner point, and around these points information is gathered: the descriptor. Because this information depends on the recording conditions (fluorescent light, bright sunlight), the search program must correct for them. Within the Learning Features project, some successes have already been achieved: the search system is able to find concepts under different lighting conditions and camera positions, although some concepts are more difficult to recognize than others.
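The descriptor pipeline described above might look roughly like the following sketch. It uses OpenCV's SIFT detector as a stand-in for the project's own (color-invariant) descriptors, which the article does not specify, and a plain L2 normalization as a crude correction for recording conditions.

```python
# Minimal sketch: find interest points (corners, strong local
# changes), gather a local descriptor around each, and normalize it
# so that lighting conditions matter less. SIFT is an assumption
# here, not the project's actual descriptor.
import cv2
import numpy as np

def local_descriptors(image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    # The detector fires where "something interesting happens":
    # corners and blobs, i.e. strong local intensity changes.
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return np.empty((0, 128))
    # L2-normalize each descriptor: a crude illumination correction,
    # so brightness/contrast shifts change the vectors less.
    norms = np.linalg.norm(descriptors, axis=1, keepdims=True)
    return descriptors / np.maximum(norms, 1e-9)
```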
For example, it’s easier to recognize sports fragments than abstract concepts such as a romantic scene in a movie. Fragments in news broadcasts are also easier to trace than specific images in a home video collection, because the search system cleverly exploits the fixed pattern with which news broadcasts are recorded. Home videos lack this fixed pattern and layout.
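To make the "fixed pattern" idea concrete: a recurring studio shot in a news broadcast keeps almost the same color composition, so even a simple histogram comparison against a known anchor-desk frame can separate studio shots from report footage. The template frame, threshold, and histogram settings below are illustrative assumptions, not the project's method.

```python
# Minimal sketch of exploiting a fixed broadcast layout: compare each
# frame's color histogram to a template taken from a known studio
# shot; frames above the threshold are classified as studio shots.
import cv2
import numpy as np

def frame_histogram(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_anchor_shot(frame_bgr, template_hist, threshold=0.8) -> bool:
    """True when the frame's color layout matches the studio template."""
    score = cv2.compareHist(frame_histogram(frame_bgr), template_hist,
                            cv2.HISTCMP_CORREL)
    return score > threshold

# Usage: template_hist = frame_histogram(known_anchor_frame), then
# test each frame of the broadcast with is_anchor_shot().
```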
Applications include tracing background images, video, and music in large multimedia collections. Scientists from the University of Utrecht support the Meertens Institute in the concept-based search of music fragments. Delft University of Technology does similar work together with Philips. Image and Sound cooperates with the University of Amsterdam (UvA) in finding image and sound fragments from our cultural heritage. Within the framework of Learning Features, the UvA also shares a worktable with Ilse Media, which concerns itself with finding video fragments on the internet.
Project leader: Theo Gevers (Universiteit van Amsterdam)