Learn-Dictionary (Index of Posts):

This category of tools aims to extract a dictionary of sub-patterns from data.
This includes:


Learning the Dependency Structure of Latent Factors

Paper: @NeurIPS12

  • Yunlong He, Yanjun Qi, Koray Kavukcuoglu, Haesun Park

GitHub

Poster PDF

Abstract:

In this paper, we study latent factor models with a dependency structure in the latent space. We propose a general learning framework which induces sparsity on the undirected graphical model imposed on the vector of latent factors. A novel latent factor model, SLFA, is then proposed as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. The main benefit (novelty) of the model is that we can simultaneously learn the lower-dimensional representation for data and model the pairwise relationships between latent factors explicitly. An online learning algorithm is devised to make the model feasible for large-scale learning problems. Experimental results on two synthetic and two real-world data sets demonstrate that the pairwise relationships and latent factors learned by our model provide a more structured way of exploring high-dimensional data, and the learned representations achieve state-of-the-art classification performance.
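The core idea above — factorize the data while penalizing pairwise relationships between latent factors so their dependency graph stays sparse — can be illustrated with a toy NumPy sketch. This is not the authors' code: the variable names, the plain alternating-gradient scheme (the paper uses an online algorithm), and the specific penalty form are all assumptions made for illustration.

```python
import numpy as np

def slfa_sketch(X, k, lam=0.1, gamma=0.1, n_iter=200, lr=1e-2, seed=0):
    """Toy alternating-gradient sketch of a graph-regularized factorization.

    X (d x n) is approximated by B @ S. An L1-style penalty on the
    off-diagonal of Psi = S @ S.T / n stands in for the sparsity imposed
    on pairwise latent-factor relationships (illustrative, not the paper's
    exact regularizer).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = rng.standard_normal((d, k)) * 0.1   # dictionary / basis
    S = rng.standard_normal((k, n)) * 0.1   # latent factors
    for _ in range(n_iter):
        R = B @ S - X                       # reconstruction residual
        # gradient step on the codes, including the pairwise-relationship term
        Psi = S @ S.T / n
        off = Psi - np.diag(np.diag(Psi))
        grad_S = B.T @ R + lam * np.sign(S) + gamma * (np.sign(off) @ S) / n
        S -= lr * grad_S
        # gradient step on the dictionary, then renormalize its columns
        B -= lr * (R @ S.T)
        B /= np.maximum(np.linalg.norm(B, axis=0, keepdims=True), 1e-8)
    return B, S
```

After fitting, the off-diagonal entries of `S @ S.T` give a rough view of which latent factors co-activate — the "pairwise relationships" the abstract refers to.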


Citations

@inproceedings{he2012learning,
  title={Learning the dependency structure of latent factors},
  author={He, Yunlong and Qi, Yanjun and Kavukcuoglu, Koray and Park, Haesun},
  booktitle={Advances in Neural Information Processing Systems},
  pages={2366--2374},
  year={2012}
}

Support or Contact

Having trouble with our tools? Please contact Yanjun Qi and we’ll help you sort it out.


Unsupervised Feature Learning by Deep Sparse Coding

Paper: @Arxiv

  • Y He, K Kavukcuoglu, Y Wang, A Szlam, Y Qi

Talk PDF

Abstract:

In this paper, we propose a new unsupervised feature learning framework, namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer architecture for visual object recognition tasks. The main innovation of the framework is that it connects the sparse-encoders from different layers by a sparse-to-dense module. The sparse-to-dense module is a composition of a local spatial pooling step and a low-dimensional embedding process, which takes advantage of the spatial smoothness information in the image. As a result, the new method is able to learn several levels of sparse representation of the image which capture features at a variety of abstraction levels and simultaneously preserve the spatial smoothness between the neighboring image patches. Combining the feature representations from multiple layers, DeepSC achieves the state-of-the-art performance on multiple object recognition tasks.
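The sparse-to-dense module described above — local spatial pooling followed by a low-dimensional embedding, sitting between sparse-coding layers — can be sketched as follows. This is an illustrative toy, not the DeepSC implementation: ISTA is used as a generic sparse-coding solver, max-pooling over neighboring patch columns stands in for the spatial pooling step, and an SVD/PCA projection stands in for the embedding; all sizes and names are assumptions.

```python
import numpy as np

def ista_codes(X, D, lam=0.1, n_iter=50):
    """Sparse codes for the columns of X under dictionary D via ISTA
    (a standard solver; the paper's sparse-coding routine may differ)."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8        # Lipschitz constant of the gradient
    S = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = S - (D.T @ (D @ S - X)) / L         # gradient step
        S = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft-threshold
    return S

def sparse_to_dense(S, pool=2, out_dim=8):
    """Sketch of a sparse-to-dense module: local pooling over neighboring
    patches, then a PCA-style low-dimensional embedding."""
    k, n = S.shape
    # local max-pooling over groups of `pool` neighboring patches (columns)
    n_p = n // pool
    P = S[:, :n_p * pool].reshape(k, n_p, pool).max(axis=2)
    # low-dimensional embedding via SVD of the centered pooled responses
    Pc = P - P.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Pc, full_matrices=False)
    return U[:, :out_dim].T @ Pc                # dense features for the next layer
```

Stacking these two steps — sparse coding, then `sparse_to_dense` — yields one layer per level of abstraction, and the layer outputs can be concatenated for classification, mirroring the multi-layer architecture the abstract describes.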


Citations

@misc{he2013unsupervised,
    title={Unsupervised Feature Learning by Deep Sparse Coding},
    author={Yunlong He and Koray Kavukcuoglu and Yun Wang and Arthur Szlam and Yanjun Qi},
    year={2013},
    eprint={1312.5783},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
