I think Google's efforts to automate image search are admirable, but I doubt that object recognition will produce much improvement in educational searches, even if it succeeds in commercial ones. It would help, though, if search engines let you specify a "thing" parameter such as person, animal, plant, or building. Advanced image recognition software should be able to make these distinctions. With that capability, I could get many more relevant hits when I search for "Roman emperor" and not have to sift through pictures of somebody's cat!
Another aspect of a successful image search, though, at least to an education researcher, is time frame. With my passion for history, I am constantly writing about historical people, places, and things, but only within a particular time period. Unfortunately, the time dimension is not something image analysis can determine. This is why I usually include a century reference, such as "1st century BCE," in my Flickr image tags.
Speaking of Flickr, Google should also reconsider some of its own biased search parameters. Millions of Flickr images are carefully tagged and represent a wealth of information, yet few turn up in the top results of a Google search. I can only assume this is because Flickr is owned by Yahoo, and that Google either purposely chooses not to search Flickr or makes no effort to optimize its search facilities to accommodate Flickr's database requirements.
"Although image search has become popular on commercial search engines, results are usually generated today by using cues from the text that is associated with each image.
Despite decades of effort, image analysis remains a largely unsolved problem in computer science, the researchers said. For example, while progress has been made in automatic face detection in images, finding other objects such as mountains or teapots, which are instantly recognizable to humans, has lagged.
“We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework,” said Shumeet Baluja, a senior staff researcher at Google, who made the presentation with Yushi Jing, another Google researcher. The company’s expertise in creating vast graphs that weigh “nodes,” or Web pages, based on their “authority” can be applied to images that are the most representative of a particular query, he said.
The research paper, “PageRank for Product Image Search,” is focused on a subset of the images that the giant search engine has cataloged because of the tremendous computing costs required to analyze and compare digital images. To do this for all of the images indexed by the search engine would be impractical, the researchers said. Google does not disclose how many images it has cataloged, but it asserts that its Google Image Search is the “most comprehensive image search on the Web.”
The company said that in its research it had concentrated on the 2,000 most popular product queries on Google's product search, words such as iPod, Xbox and Zune. It then sorted the top 10 images from both its new ranking system and the standard Google Image Search results. With a team of 150 Google employees, it created a scoring system for image "relevance." The researchers said the new retrieval method returned 83 percent fewer irrelevant images."
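The "graph of nodes weighted by authority" idea the quoted researchers describe can be sketched in a few lines. This is not Google's implementation, just a minimal illustration of the general technique: treat each image as a node, connect nodes by visual similarity, and run a PageRank-style power iteration so that images resembling many other results rise to the top. The similarity scores below are invented for illustration; the actual paper derives them from image features.

```python
def visual_rank(similarity, damping=0.85, iterations=50):
    """PageRank-style ranking over a visual-similarity graph (illustrative sketch)."""
    n = len(similarity)
    # Column-normalize so each image distributes its "vote"
    # proportionally among the images it resembles.
    col_sums = [sum(similarity[i][j] for i in range(n)) for j in range(n)]
    ranks = [1.0 / n] * n
    for _ in range(iterations):
        new_ranks = []
        for i in range(n):
            inflow = sum(
                similarity[i][j] * ranks[j] / col_sums[j]
                for j in range(n) if col_sums[j] > 0
            )
            # Standard damped update: mostly follow similarity links,
            # with a small uniform "teleport" term.
            new_ranks.append((1 - damping) / n + damping * inflow)
        ranks = new_ranks
    return ranks

# Toy example: image 0 strongly resembles both other images,
# so it emerges as the most "representative" result.
sim = [
    [0.0, 0.9, 0.8],
    [0.9, 0.0, 0.1],
    [0.8, 0.1, 0.0],
]
scores = visual_rank(sim)
best = max(range(len(scores)), key=lambda i: scores[i])
```

The appeal of this approach, as the article notes, is that the expensive part (computing pairwise similarities) only has to be done for the candidate images of a single query, not across the entire index.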