Facilitating Image Search with a Scalable and Compact Semantic Mapping

Meng Wang, Weisheng Li, Dong Liu, Bingbing Ni, Jialie Shen, Shuicheng Yan

Research output: Contribution to journal › Article › peer-review


This paper introduces a novel approach to facilitating image search based on a compact semantic embedding. A novel method is developed to explicitly map concepts and image contents into a unified latent semantic space, where each concept is represented by a semantic concept prototype. A linear embedding matrix is then learned that maps images into this space such that each image lies closer to its relevant concept prototype than to any other prototype. At query time, the query keywords are matched to semantic concepts, and the images mapped into the vicinity of the corresponding prototypes are retrieved. In addition, a computationally efficient method is introduced to incorporate new semantic concept prototypes into the semantic space by updating the embedding matrix, which improves the scalability of the method and allows it to be applied to dynamic image repositories. The proposed approach therefore not only narrows the semantic gap but also supports an efficient image search process. We have carried out extensive experiments on various cross-modality image search tasks over three widely used benchmark image datasets. Results demonstrate the superior effectiveness, efficiency, and scalability of our proposed approach.
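The retrieval scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' exact formulation: the embedding matrix `W`, the concept prototypes, and all dimensions here are hypothetical stand-ins, and `W` is simply drawn at random rather than learned with the paper's objective. The sketch shows only the search step: map images into the semantic space with `W` and rank them by distance to the queried concept's prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6-dim image features, 3-dim semantic space, 5 images.
d_img, d_sem, n_images = 6, 3, 5

# Hypothetical concept prototypes in the shared latent semantic space.
prototypes = {
    "cat": np.array([1.0, 0.0, 0.0]),
    "dog": np.array([0.0, 1.0, 0.0]),
}

# Stand-in for the learned linear embedding matrix (random here, not trained).
W = rng.normal(size=(d_sem, d_img))
images = rng.normal(size=(n_images, d_img))

def search(keyword, images, W, prototypes, top_k=3):
    """Embed images into the semantic space and rank them by
    Euclidean distance to the queried concept's prototype."""
    proto = prototypes[keyword]
    embedded = images @ W.T                      # shape: (n_images, d_sem)
    dists = np.linalg.norm(embedded - proto, axis=1)
    return np.argsort(dists)[:top_k]             # indices of the nearest images

ranked = search("cat", images, W, prototypes)
```

Incorporating a new concept, as the abstract describes, would amount to adding a prototype and efficiently updating `W`; the update rule itself is part of the paper's contribution and is not reproduced here.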

Original language: English
Article number: 15292361
Pages (from-to): 1561-1574
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Issue number: 8
Early online date: 17 Sep 2014
Publication status: Published - 1 Aug 2015
Externally published: Yes
