
Wednesday, 7 February 2018

Web Image Re-Ranking Using Query-Specific Semantic Signatures


Abstract— Image re-ranking, an effective way to improve the results of web-based image search, has been adopted by commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. The user then selects a query image from the pool, and the remaining images are re-ranked by their visual similarity to it. A major challenge is that similarity in visual features does not correlate well with the semantic meanings of images, which reflect the user's search intention. Recent work has proposed matching images in a semantic space whose basis consists of attributes or reference classes closely related to the semantic meanings of images. However, learning a single universal visual-semantic space to characterize the highly diverse images on the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework that automatically learns, offline, a different semantic space for each query keyword. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures in the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and the efficiency of image re-ranking: original visual features of thousands of dimensions can be projected down to semantic signatures as short as 25 dimensions. Experimental results show a 25-40 percent relative improvement in re-ranking precision over state-of-the-art methods.
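The pipeline described above can be illustrated with a minimal sketch. Everything here is a stand-in: the matrix `W` plays the role of the classifiers for `K` reference classes learned offline for one query keyword, the features are random vectors, and the score scaling is an assumption made to keep the softmax smooth — the paper's actual classifiers and features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline stage: for one query keyword, K reference classes
# have been learned; each row of W stands in for a trained linear
# classifier over D-dimensional visual features.
D, K = 1000, 25                          # feature dimension; signature length
W = rng.normal(size=(K, D))

def semantic_signature(feature):
    """Project a visual feature into the query-specific semantic space:
    classifier scores normalized into a probability-like K-dim signature."""
    scores = W @ feature / np.sqrt(D)    # scaling keeps the softmax smooth
    scores -= scores.max()               # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def rerank(query_feature, pool_features):
    """Online stage: order pool images by L1 distance between signatures."""
    q = semantic_signature(query_feature)
    dists = [np.abs(q - semantic_signature(f)).sum() for f in pool_features]
    return np.argsort(dists)             # most similar first

# Usage: the user-selected query image plus a small retrieved pool.
query = rng.normal(size=D)
pool = [query + 0.01 * rng.normal(size=D),   # near-duplicate of the query
        rng.normal(size=D),
        rng.normal(size=D)]
print(rerank(query, pool)[0])            # the near-duplicate ranks first
```

The key point the sketch shows is the cost saving: the online comparison operates on 25-dimensional signatures rather than the 1000-dimensional visual features, which is where the claimed efficiency gain comes from.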
We propose a novel framework that learns query-specific semantic spaces to significantly improve the effectiveness and efficiency of online image re-ranking. The visual features of images are projected into related semantic spaces automatically learned through keyword expansion offline. The extracted semantic signatures can be 70 times shorter than the original visual features, while achieving a 25-40 percent relative improvement in re-ranking precision over state-of-the-art methods. In future work, our framework can be improved along several directions. Finding the keyword expansions used to define reference classes could incorporate metadata and log data beyond textual and visual features; for example, the co-occurrence of keywords in user queries is informative and can be mined from query logs. To update the reference classes efficiently over time, how to adopt incremental learning [72] within our framework needs further investigation. Finally, although the semantic signatures are already small, they could be made more compact, and their matching made faster, using techniques such as hashing [76].
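The hashing direction mentioned above could look like the following sketch. Random-hyperplane hashing is one common technique chosen here purely for illustration (the text cites hashing only generically): it compresses a K-dimensional semantic signature into a short binary code compared by Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative random-hyperplane hashing: compress a K-dim semantic
# signature into B bits. Similar signatures tend to fall on the same
# side of most hyperplanes, so their codes differ in few bits.
K, B = 25, 16
H = rng.normal(size=(B, K))              # B random hyperplanes

def hash_signature(sig):
    """B-bit binary code: the sign of the projection onto each hyperplane."""
    return (H @ sig > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# Two near-identical signatures and one unrelated signature.
s1 = rng.random(K); s1 /= s1.sum()
s2 = s1 + 0.001 * rng.normal(size=K)     # small perturbation of s1
s3 = rng.random(K); s3 /= s3.sum()       # unrelated signature
print(hamming(hash_signature(s1), hash_signature(s2)))  # small
```

Comparing 16-bit codes by Hamming distance is a single XOR and popcount per pair, which is why hashing is attractive when the image pool is large.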
