Unsupervised deep video hashing with balanced rotation

Gengshen Wu, Li Liu, Yuchen Guo, Guiguang Ding, Jungong Han, Jialie Shen, Ling Shao

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper published in Proceedings › peer-review

Abstract

Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two real-world datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publicly available.
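For orientation, the core idea described in the abstract (rotating real-valued video features so that variance is spread more evenly across dimensions before sign binarization) can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's actual algorithm: UDVH learns its balanced rotation jointly with the hash function, whereas the sketch below substitutes a random orthogonal matrix, and the function name `balanced_sign_codes` is invented for the example.

```python
import numpy as np

def balanced_sign_codes(features, seed=0):
    """Illustrative variance-balancing rotation followed by sign binarization.

    features: (n_samples, n_bits) real-valued descriptors.
    NOTE: this is a sketch of the general idea, not UDVH itself; the
    paper learns its rotation, while here a random orthogonal matrix
    stands in for it.
    """
    # Center each dimension so the sign threshold sits near its mean.
    x = features - features.mean(axis=0)

    # PCA rotation: decorrelate the dimensions. After this step the
    # variance is typically concentrated in a few leading dimensions.
    cov = x.T @ x / len(x)
    _, vecs = np.linalg.eigh(cov)
    x_pca = x @ vecs

    # An orthogonal rotation redistributes variance across dimensions
    # (the role a learned balanced rotation would play), so each bit
    # carries comparable information after thresholding.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((x.shape[1], x.shape[1])))
    x_rot = x_pca @ q

    # Binarize: each hash bit is the sign of one rotated dimension.
    return (x_rot > 0).astype(np.uint8)
```

Because the rotation is orthogonal, pairwise distances between the rotated features are preserved, which is why such schemes can balance bit variance without distorting the neighborhood structure the codes are meant to capture.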

Original language: English
Title of host publication: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
Place of publication: Melbourne, Australia
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 3076-3082
Number of pages: 7
ISBN (electronic): 9780999241103
DOI: 10.24963/ijcai.2017/429
Publication status: Published - 1 Jan 2017
Externally published: Yes
Event: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia
Duration: 19 Aug 2017 - 25 Aug 2017



Cite this

Wu, G., Liu, L., Guo, Y., Ding, G., Han, J., Shen, J., & Shao, L. (2017). Unsupervised deep video hashing with balanced rotation. In 26th International Joint Conference on Artificial Intelligence, IJCAI 2017 (pp. 3076-3082). Melbourne; Australia: International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/429


Scopus record: http://www.scopus.com/inward/record.url?scp=85031923652&partnerID=8YFLogxK
