A novel topic feature for image scene classification

Mujun Zang, Dunwei Wen, Ke Wang, Tong Liu, Weiwei Song

Research output: Contribution to journal › Journal Article › peer-review

28 Citations (Scopus)

Abstract

We propose a novel topic feature for image scene classification. The feature is defined based on the thematic representation of images constructed from topics, i.e., the latent variables of LDA (latent Dirichlet allocation) and their learning algorithms. Unlike related works, the feature defined in this paper shares topics across different classes and does not require class labels before classification, so it avoids coupling between features and labels. To represent a new image, our approach directly extracts its topic feature by a linear mapping of codewords instead of inferring latent variables. We compared our method with three other topic models under similar experimental conditions, as well as with pooling methods, on the 15 Scenes dataset. The results show that our approach classifies the scene classes with higher accuracy than the other topic models and pooling methods, without using spatial information. We also observe that the performance improvement is due to the proposed feature and our algorithm rather than other factors such as additional low-level image features or stronger preprocessing.

Highlights:
  • We propose a novel topic feature for image scene classification.
  • The feature is extracted from an established LDA topic model for images.
  • The feature can reflect abundant scene environment information of the images.
  • This method can increase the accuracy of image scene classification.
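The abstract describes learning shared topics over visual codewords with LDA and then representing a new image by a linear mapping of its codeword histogram through the learned topic-word matrix, rather than running per-image latent-variable inference. The sketch below illustrates that idea only; it is not the authors' implementation, and the use of scikit-learn's LDA, the variable names, and the linear SVM classifier are all assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): topics are learned
# over visual codewords without class labels, and a new image's topic
# feature is obtained by a linear mapping of its codeword histogram.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy bag-of-visual-words data: rows are images, columns are codeword counts.
n_images, n_codewords, n_topics = 200, 500, 30
X_train = rng.poisson(1.0, size=(n_images, n_codewords))
y_train = rng.integers(0, 15, size=n_images)          # 15 scene classes

# Topics are shared across all classes: LDA is fit without class labels.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
lda.fit(X_train)

# Topic-word matrix phi (topics x codewords), normalized per topic.
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def topic_feature(hist, phi):
    """Map a codeword histogram into topic space by a linear mapping,
    avoiding iterative latent-variable inference for new images."""
    f = hist @ phi.T                      # accumulate topic mass per codeword
    return f / max(f.sum(), 1e-12)        # normalize to topic proportions

F_train = np.array([topic_feature(h, phi) for h in X_train])
clf = LinearSVC().fit(F_train, y_train)

# A new image is represented and classified the same way.
x_new = rng.poisson(1.0, size=n_codewords)
print(clf.predict(topic_feature(x_new, phi)[None, :]))
```

The point of the mapping step is that test-time representation reduces to one matrix product over the codeword histogram, instead of the sampling or variational inference normally needed to estimate an image's topic proportions.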

Original language: English
Pages (from-to): 467-476
Number of pages: 10
Journal: Neurocomputing
Volume: 148
DOIs
Publication status: Published - 19 Jan. 2015

Keywords

  • Gibbs sampler
  • Image scene classification
  • LDA model
  • Topic features
