The inhomogeneous Gibbs model (IGM) (Liu et al., 2001) is an effective maximum entropy model for characterizing complex high-dimensional distributions. However, its training is so slow that its applicability has been greatly restricted. In this paper, we propose an approach to fast parameter learning for the IGM. In IGM learning, features are incrementally constructed to constrain the learned distribution, and each time a new feature is added, Markov chain Monte Carlo (MCMC) sampling must be repeated to draw samples for parameter learning. In contrast, our approach constructs a closed-form reference distribution using an approximate information-gain criterion. Because this reference distribution is very close to the optimal one, importance sampling can be used to accelerate the parameter optimization. For problems with high-dimensional distributions, our approach typically achieves a speedup of two orders of magnitude over the original IGM. We further demonstrate its efficiency by learning a high-dimensional joint distribution of face images and their corresponding caricatures.
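To make the reweighting idea concrete, below is a minimal Python sketch (not the authors' implementation) of maximum entropy parameter learning for an exponential-family model p(x) ∝ exp(λ·f(x)), where the model expectation in the log-likelihood gradient is estimated by self-normalized importance sampling from a fixed reference distribution q. This avoids rerunning MCMC as λ changes, which is the source of the speedup the abstract describes. The function names, the plain gradient-ascent loop, the learning rate, and the toy Gaussian setup are all illustrative assumptions.

```python
import numpy as np

def learn_gibbs_importance_sampling(feature_fn, data, ref_samples,
                                    ref_log_density, n_iters=500, lr=0.1):
    """Fit lambda in p(x) ∝ exp(lambda · f(x)) by matching expected feature
    statistics; the model expectation E_p[f] is estimated by self-normalized
    importance sampling from a fixed reference distribution q."""
    f_data = feature_fn(data)            # (n_data, k) observed feature values
    target = f_data.mean(axis=0)         # E_data[f], the statistics to match
    f_ref = feature_fn(ref_samples)      # (n_ref, k) features of q-samples
    lam = np.zeros(f_ref.shape[1])

    for _ in range(n_iters):
        # Log of unnormalized importance weights: log p~(x) - log q(x).
        log_w = f_ref @ lam - ref_log_density
        log_w -= log_w.max()             # stabilize before exponentiating
        w = np.exp(log_w)
        w /= w.sum()                     # self-normalization absorbs Z(lambda)
        model_mean = w @ f_ref           # importance-sampling estimate of E_p[f]
        lam += lr * (target - model_mean)  # ascend the log-likelihood gradient
    return lam

# Toy usage: features (x, x^2) recover a 1-D Gaussian; q is a wider Gaussian.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def feats(x):
        return np.column_stack([x, x ** 2])

    data = rng.normal(0.5, 1.0, size=5000)    # samples from the unknown target
    ref = rng.normal(0.0, 1.5, size=20000)    # one-time draw from q
    log_q = -0.5 * (ref / 1.5) ** 2 - np.log(1.5 * np.sqrt(2.0 * np.pi))
    # For the target N(0.5, 1), lambda should converge near (0.5, -0.5).
    print(learn_gibbs_importance_sampling(feats, data, ref, log_q))
```

The sketch inherits the abstract's key premise: the estimate is only reliable while the importance weights stay well behaved, which is why it matters that the paper's closed-form reference distribution is close to the optimum. If λ drifts far from q, the effective sample size collapses and fresh samples would be needed.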
An efficient approach to learning inhomogeneous Gibbs model
2003-01-01
745042 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2003
Reference: Liu et al., "Learning Inhomogeneous Gibbs Model of Faces by Minimax Entropy," British Library Conference Proceedings, 2001.