It has been standard practice to transform an image matrix into a vector before kernel-based subspace learning. In this paper, we take the kernel discriminant analysis (KDA) algorithm as an example and perform kernel analysis on 2D image matrices directly. First, each image matrix is decomposed into the product of two orthogonal matrices and a diagonal one using singular value decomposition; then the image matrix is expanded to higher, or even infinite, dimensions by applying the kernel trick to the column vectors of the two orthogonal matrices; finally, two coupled discriminative kernel subspaces are learned iteratively for dimensionality reduction by optimizing the Fisher criterion measured by the Frobenius norm. The resulting algorithm, called coupled kernel discriminant analysis (CKDA), effectively exploits the underlying spatial structure of objects, and the discriminating information is encoded in the two coupled kernel subspaces respectively. Experiments on real face databases, with comparisons against KDA and Fisherface, validate the effectiveness of CKDA.
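Since the abstract describes the CKDA procedure only at a high level, the following sketch illustrates the linear skeleton it builds on: an alternating, two-sided Fisher optimization over left and right projections of image matrices (essentially two-dimensional LDA), which CKDA extends by applying the kernel trick to the column vectors of the per-image SVD factors. The function two_sided_lda, the ridge regularizer reg, and the toy data are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.linalg import eigh

def two_sided_lda(images, labels, r1=5, r2=5, iters=4, reg=1e-6):
    # Alternating coupled Fisher optimization on 2D image matrices:
    # fix the right projection R and solve a generalized eigenproblem
    # for the left projection L, then swap roles. This is the linear
    # core that CKDA kernelizes (shown here only for illustration).
    images = np.asarray(images, dtype=float)   # shape (N, m, n)
    labels = np.asarray(labels)
    _, m, n = images.shape
    classes = np.unique(labels)
    mean_all = images.mean(axis=0)
    class_means = {c: images[labels == c].mean(axis=0) for c in classes}

    L = np.eye(m)[:, :r1]
    R = np.eye(n)[:, :r2]
    for _ in range(iters):
        # Fix R: between/within scatter of the right-projected images.
        Sb = sum((labels == c).sum()
                 * (class_means[c] - mean_all) @ R @ R.T
                 @ (class_means[c] - mean_all).T for c in classes)
        Sw = sum((X - class_means[c]) @ R @ R.T @ (X - class_means[c]).T
                 for X, c in zip(images, labels))
        w, V = eigh(Sb, Sw + reg * np.eye(m))   # eigenvalues ascending
        L = V[:, np.argsort(w)[::-1][:r1]]      # keep top-r1 directions

        # Fix L: the symmetric step on the transposed problem.
        Sb = sum((labels == c).sum()
                 * (class_means[c] - mean_all).T @ L @ L.T
                 @ (class_means[c] - mean_all) for c in classes)
        Sw = sum((X - class_means[c]).T @ L @ L.T @ (X - class_means[c])
                 for X, c in zip(images, labels))
        w, V = eigh(Sb, Sw + reg * np.eye(n))
        R = V[:, np.argsort(w)[::-1][:r2]]
    return L, R

# Usage: reduce each image to a small r1 x r2 feature matrix.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 32, 28))
imgs[10:] += 1.0                               # shift class 1 for separability
lbls = np.repeat([0, 1], 10)
L, R = two_sided_lda(imgs, lbls, r1=3, r2=3)
feats = [L.T @ X @ R for X in imgs]            # per-image 3 x 3 features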
Coupled kernel-based subspace learning
2005-01-01
151651 bytes
Article (Conference)
Electronic Resource
English
Multiple Similarities Based Kernel Subspace Learning for Image Classification
British Library Conference Proceedings | 2006
Multiple Similarities Based Kernel Subspace Learning for Image Classification
Springer Verlag | 2006
Visual Tracking via Efficient Kernel Discriminant Subspace Learning
British Library Conference Proceedings | 2005
Kernel-Based Adaptive-Subspace Self-Organizing Map As A Nonlinear Subspace Pattern Recognition
British Library Conference Proceedings | 2004