Learning compact visual representation with canonical views for robust mobile landmark search

Lei Zhu, Jialie Shen, Xiaobai Liu, Liang Xie, Liqiang Nie

Research output: Contribution to journal › Conference article › peer-review

Abstract

Mobile Landmark Search (MLS) has recently received increasing attention. However, it remains unsolved due to two important issues: the high bandwidth consumption of query transmission, and the large visual variations among query images. This paper proposes a Canonical View based Compact Visual Representation (2CVR) to handle these problems via a novel three-stage learning scheme. First, a submodular function is designed to measure the visual representativeness and redundancy of a view set. With it, canonical views, which capture the key visual appearances of a landmark with limited redundancy, are efficiently discovered using an iterative mining strategy. Second, multimodal sparse coding is applied to transform multiple visual features into an intermediate representation that can robustly characterize the visual content of varied landmark images using only a fixed set of canonical views. Finally, compact binary codes are learned on the intermediate representation within a tailored binary embedding model that preserves the visual relations of images measured with canonical views and removes noise. With 2CVR, robust visual query processing, low-cost query transmission, and fast search are supported simultaneously. Experiments demonstrate the superior performance of 2CVR over several state-of-the-art methods.
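The first stage described in the abstract — iterative mining of canonical views under a submodular representativeness/redundancy trade-off — can be sketched as a generic greedy maximization. The similarity matrix, the coverage-minus-redundancy objective, and the `lam` trade-off weight below are illustrative assumptions for a minimal sketch, not the paper's exact submodular function.

```python
import numpy as np

def greedy_canonical_views(sim, k, lam=0.5):
    """Greedily select k canonical views from n candidate images.

    sim: (n, n) pairwise visual-similarity matrix with values in [0, 1].
    Objective (an assumption, not the paper's exact formulation):
    coverage of all images by the selected set, minus lam times the
    average similarity (redundancy) among the selected views.
    """
    n = sim.shape[0]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for v in range(n):
            if v in selected:
                continue
            cand = selected + [v]
            # representativeness: each image is covered by its most
            # similar selected view; average over all images
            cover = np.mean(np.max(sim[cand], axis=0))
            # redundancy: mean pairwise similarity within the set
            pairs = [(a, b) for i, a in enumerate(cand) for b in cand[i + 1:]]
            red = np.mean([sim[a, b] for a, b in pairs]) if pairs else 0.0
            gain = cover - lam * red
            if gain > best_gain:
                best, best_gain = v, gain
        selected.append(best)
    return selected
```

Because the coverage term is monotone submodular, this kind of greedy selection enjoys the classical (1 - 1/e) approximation guarantee for the monotone part of the objective, which is presumably why a submodular measure makes the iterative mining in the paper efficient.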

Original language: English
Pages (from-to): 3959-3965
Number of pages: 7
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2016-January
Publication status: Published - Jul 2016
Externally published: Yes
Event: 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States
Duration: 9 Jul 2016 - 15 Jul 2016
