Neuroscientist to Take People Under the Hood of Image Search at Infonortics Search Meeting in Boston April 27-28

Dr. Naveen Agnihotri to Demonstrate Computer Image Recognition Method That Mimics the Human Brain

At the Infonortics Search Engine Meeting, April 27-28 in Boston, Dr. Naveen Agnihotri, co-founder and chief technology officer of Milabra, will present a method that enables computers to identify and understand images much as people do.

The method, known as parts-based representation, promises to significantly improve image indexing and search software, which typically relies on more basic techniques such as indexing text labels that people add to images or comparing images pixel by pixel to known images.
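
For contrast, the following is a minimal, hypothetical sketch of the pixel-by-pixel comparison baseline the release mentions, not the workings of any particular product. The pixel_match function, its tolerance value, and the same-sized grayscale-array assumption are illustrative choices; the sketch only shows why such matching works for near-duplicates but not for new scenes.

```python
import numpy as np

def pixel_match(query, known_images, tolerance=10.0):
    """Return the index of the closest known image by mean absolute pixel
    difference, or None if nothing is close enough. Images are assumed to be
    same-sized grayscale arrays (a simplifying assumption for illustration)."""
    best_idx, best_diff = None, float("inf")
    for i, img in enumerate(known_images):
        diff = float(np.mean(np.abs(query.astype(float) - img.astype(float))))
        if diff < best_diff:
            best_idx, best_diff = i, diff
    return best_idx if best_diff <= tolerance else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    known = [rng.integers(0, 256, size=(8, 8)) for _ in range(3)]
    # A slightly altered copy of a known image matches; a brand-new scene does not.
    print(pixel_match(known[1] + 2, known))                   # -> 1
    print(pixel_match(rng.integers(0, 256, (8, 8)), known))   # -> likely None
```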

Automated image recognition is the foundation of the next generation of search and web applications. Images - both still and video - are the fastest-growing type of content on the web. When images on the web can be classified and indexed as readily as text is today, the foundation will be in place for the kind of explosive creativity that the text-based web fostered with Web 2.0 applications. Dr. Agnihotri calls this next generation of image-based search and applications the "Visual Web."

"Visual-media classification and indexing is what provides developers of Visual Web applications with the base for innovating a whole new generation of Internet applications," says Dr. Agnihotri.

Using the parts-based representation method, developers train software to recognize features common to an entire class of images - for example, noses, which are common to faces, or sand, which is common to beaches. Once trained, these software-based classifiers can recognize any image that contains those parts, much as humans do. For instance, the software can understand that a photo or video depicts a beach with people on it even if it has never seen that particular beach or those people before.
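
The release does not describe Milabra's implementation, so the sketch below is only a minimal, hypothetical illustration of the general parts-based idea: score an image's regions with detectors for individual parts, then aggregate that evidence into a scene-level decision. The PartDetector and PartsBasedClassifier names, the toy feature vectors, and the threshold are assumptions made for illustration, not Milabra's API.

```python
import numpy as np

class PartDetector:
    """A toy "part" detector: a linear scorer over a feature vector extracted
    from an image region (random toy features stand in for learned ones)."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = np.asarray(weights, dtype=float)

    def score(self, region_features):
        # Higher score = stronger evidence that this part is present.
        return float(np.dot(self.weights, region_features))

class PartsBasedClassifier:
    """Aggregates part evidence: e.g. "beach" is likely when "sand" and
    "water" parts are detected somewhere in the image."""
    def __init__(self, scene_name, detectors, threshold=1.0):
        self.scene_name = scene_name
        self.detectors = detectors
        self.threshold = threshold

    def classify(self, regions):
        # Take each detector's best response over all regions, then sum.
        evidence = sum(max(d.score(r) for r in regions) for d in self.detectors)
        return evidence >= self.threshold, evidence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 4-dimensional "features" for ten image regions.
    regions = [rng.normal(size=4) for _ in range(10)]

    sand = PartDetector("sand", weights=[1.0, 0.2, 0.0, 0.0])
    water = PartDetector("water", weights=[0.0, 0.0, 1.0, 0.3])
    beach = PartsBasedClassifier("beach", [sand, water], threshold=1.0)

    is_beach, evidence = beach.classify(regions)
    print(f"beach? {is_beach} (evidence={evidence:.2f})")
```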

The advantages of parts-based representation over common image search techniques include:

  • scalable image processing
  • greater accuracy in identifying classes of images
  • lower processing costs
  • better indexing to enable application development

A neuroscientist and a computer scientist, Dr. Agnihotri applies his neural network research to Milabra's software development. He holds an M.S. in biological engineering from the University of Georgia and a Ph.D. in neuroscience from Columbia University, where he worked with Nobel laureate Eric Kandel on brain network processes. He completed a postdoctoral fellowship in computational neuroscience at MIT and has taught neuroscience at Columbia University.

Dr. Agnihotri's presentation is part of a broader panel discussion: "Non-Text Search Technologies: Speech, Images, Video" on Monday, April 27 at 2 p.m. The panel is chaired by Sue Feldman, IDC's vice president for search and discovery technologies. Additional panelists include Tom Wilde, chief executive officer of EveryZing, and Michael Phillips, co-founder and chief technology officer of vlingo.

The Infonortics Search Engine Meeting, currently in its 14th year, is an in-depth exploration of search and content processing.

For more information on Dr. Agnihotri's session, please contact Milabra.

# # #

Media Contacts:

Lynda Radosevich
Milabra
917-922-7020

Darra Langman
Milabra
646.519.4499