Abstract
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Rényi's axiomatic definition of entropy and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids plug-in estimation of densities and brings along the representation power of reproducing kernel Hilbert spaces. As an application example, we derive a supervised metric learning algorithm using a matrix-based analogue of conditional entropy, achieving results comparable with the state of the art.
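For concreteness, the entropy-like functional the abstract refers to can be sketched numerically. The following is a minimal NumPy illustration, assuming the matrix-based Rényi α-entropy S_α(A) = (1/(1−α)) log₂ tr(A^α) on a trace-normalized Gram matrix A, with the joint quantity formed through the Hadamard product; the Gaussian kernel choice, parameter values, and function names are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def normalized_gram(X, sigma=1.0):
    """Trace-normalized Gram matrix of a Gaussian (infinitely divisible) kernel."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)  # eigenvalues now sum to 1, like a probability vector

def matrix_renyi_entropy(A, alpha=2.0):
    """Entropy-like functional: S_alpha(A) = log2(sum_i lam_i^alpha) / (1 - alpha)."""
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 0.0, None)  # clip tiny negative eigenvalues from round-off
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def matrix_conditional_entropy(A, B, alpha=2.0):
    """Matrix-based analogue of conditional entropy:
    S(A | B) = S(A o B / tr(A o B)) - S(B), with o the Hadamard product."""
    AB = A * B
    AB /= np.trace(AB)
    return matrix_renyi_entropy(AB, alpha) - matrix_renyi_entropy(B, alpha)

# Toy usage (hypothetical data): features X and a label Gram matrix B
# (1 if two samples share a class, else 0), trace-normalized.
X = np.random.randn(60, 4)
y = np.random.randint(0, 3, size=60)
A = normalized_gram(X)
B = (y[:, None] == y[None, :]).astype(float)
B /= np.trace(B)
print(matrix_conditional_entropy(A, B))  # the kind of objective a supervised metric learner could minimize
```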
| Original language | English |
|---|---|
| Status | Published - 2013 |
| Event | 1st International Conference on Learning Representations, ICLR 2013 - Scottsdale, United States. Duration: May 2, 2013 → May 4, 2013 |
Conference
| Conference | 1st International Conference on Learning Representations, ICLR 2013 |
|---|---|
| Country/Territory | United States |
| City | Scottsdale |
| Period | May 2, 2013 → May 4, 2013 |
Bibliographical note
Publisher Copyright: © 2013 International Conference on Learning Representations, ICLR. All rights reserved.
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics
Fingerprint
Dive into the research topics of 'Information theoretic learning with infinitely divisible kernels'. Together they form a unique fingerprint.