RIRES: Russian Information Retrieval Evaluation Seminar


Content-Based Retrieval Track

Overview

The purpose of this track is to evaluate content-based image retrieval (CBIR) from a generic color photo collection with heterogeneous content.

Heterogeneous content means that the image collection has no common subject. It consists of everyday photos of the kind found in private photo collections. The photos were taken by non-professional photographers, so some are of poor quality (for example, too dark or too light). This makes the task harder and closer to real-world retrieval tasks.

No additional information about the images is provided (no annotations, keywords, or contextual information), so pure content-based retrieval methods are evaluated. This kind of retrieval is known to be a hard task: various low-level features and similarity measures have to be applied to get satisfactory results.
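As a minimal sketch of what such a low-level feature and similarity measure might look like, the example below builds a quantized RGB color histogram and compares two histograms with histogram intersection. The feature choice and the 4-bins-per-channel quantization are illustrative assumptions, not part of the track specification.

```python
def color_histogram(pixels, bins=4):
    """Build a normalized color histogram from (r, g, b) pixels in 0..255."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1 means identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy data: a uniformly dark image and a uniformly bright one.
dark = [(10, 10, 10)] * 100
bright = [(250, 250, 250)] * 100
print(histogram_intersection(color_histogram(dark), color_histogram(dark)))    # 1.0
print(histogram_intersection(color_histogram(dark), color_histogram(bright)))  # 0.0
```

A global color histogram captures overall scene appearance (useful for "global match" queries such as night shots) but ignores spatial layout, which is why practical systems combine several complementary features.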

The objective of the Content-Based Retrieval Track is to identify the images in the collection that match the query image globally or locally in terms of visual and semantic concepts. Two images match globally when they depict similar scenes (for example, two night urban shots). Images match locally when they contain similar objects, possibly against different backgrounds.

There are very few near-duplicate images in the test dataset, so the task is to find images with similar, but not identical, visual concepts. Searching for near-duplicates is usually an easier problem. We feel this task is important because it tests the capability of pure content-based methods on real-world data. Until now, most evaluations of content-based methods have been performed on small datasets of a few thousand, or even a few hundred, images. Most existing search engines that work with larger collections still retrieve images by their textual annotations using text-retrieval methods; some combine text- and content-based approaches. This track will evaluate the contribution that content-based methods can make to search performance.

The standard evaluation procedure of the seminar is used for this track.

Test Collection

A subset of the Flickr photo collection is used as the benchmark dataset for this task. The test dataset consists of 20,000 still natural images taken by Flickr users all around the world. It includes indoor and outdoor scenes, landscapes and urban views, portraits and pictures of groups of people, as well as images with no particular subject, where it is hard to recognize what is depicted. Image size does not exceed 500 pixels in the largest dimension (the typical size is 500x375 pixels).

Most of the photos were taken by ordinary people, so they vary in quality. The rate of near-duplicates is low.

We provide a list of 1,000 images randomly selected from the same collection. These images serve as queries for content-based query-by-example searches.

Task Description for Participating Systems

Each participant is granted access to the photo collection and the list of queries.

Participants are required to submit runs for every query. The result of a run is, for each query, a list of the top 50 images ranked in descending order of similarity to that query.

Participants are allowed to submit more than one result per query.
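The run structure described above can be sketched as follows. The similarity function, the exclusion of the query image itself (queries are drawn from the collection), and the plain-text line layout `query_id image_id rank score tag` are all assumptions for illustration; the actual submission format is defined in the Data Formats section below.

```python
def make_run(queries, collection, similarity, top_n=50):
    """Return {query_id: [(image_id, score), ...]} sorted by descending score."""
    run = {}
    for qid, qfeat in queries.items():
        scored = [(img_id, similarity(qfeat, feat))
                  for img_id, feat in collection.items()
                  if img_id != qid]                    # skip the query image itself
        scored.sort(key=lambda x: x[1], reverse=True)  # most similar first
        run[qid] = scored[:top_n]
    return run

def write_run(run, tag="myCBIR"):
    """Serialize the run as plain-text lines (hypothetical layout)."""
    lines = []
    for qid, ranked in run.items():
        for rank, (img_id, score) in enumerate(ranked, start=1):
            lines.append(f"{qid} {img_id} {rank} {score:.4f} {tag}")
    return "\n".join(lines)

# Toy example with scalar "features" and a simple similarity measure:
collection = {"img1": 0.1, "img2": 0.5, "img3": 0.9}
queries = {"q1": 0.45}
sim = lambda a, b: 1.0 - abs(a - b)
run = make_run(queries, collection, sim, top_n=2)
print(write_run(run))
```

In a real submission the features would be image descriptors and `similarity` a proper image-similarity measure; the ranking and serialization logic stays the same.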

Evaluation Methodology

Evaluation will be performed by human assessors. Because image similarity judgments for this kind of task are highly subjective, several independent assessors will take part in the evaluation process.

We will evaluate the results for 100 queries randomly selected from the given query set. A pooling approach will be used: image pools will be created from the submissions for the selected queries, and these pools will be judged by assessors on a three-level relevance scale (relevant / partially relevant / not relevant). Relevance assessments are based entirely on the visual content of the images.
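The pooling step can be sketched as follows: for each selected query, the pool of images to be judged is the union of the top-k results across all submitted runs, so each image is judged at most once regardless of how many runs retrieved it. The pool depth k is an assumption here, not a value fixed by the track.

```python
def build_pool(runs, query_id, depth=50):
    """runs: list of {query_id: [(image_id, score), ...]} dictionaries.
    Returns the sorted union of the top-`depth` images over all runs."""
    pool = set()
    for run in runs:
        for image_id, _score in run.get(query_id, [])[:depth]:
            pool.add(image_id)   # duplicates across runs collapse into one judgment
    return sorted(pool)

# Two runs with overlapping results for query "q1":
run_a = {"q1": [("img3", 0.9), ("img7", 0.8)]}
run_b = {"q1": [("img7", 0.95), ("img2", 0.6)]}
print(build_pool([run_a, run_b], "q1"))  # ['img2', 'img3', 'img7']
```

Pooling keeps the assessment workload proportional to the number of distinct retrieved images rather than to the full 20,000-image collection.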

Data Formats

  • collections
  • tasks
  • results