Abstract:
Information theoretic learning (ITL) is an important research area in signal processing and machine learning. It uses concepts of entropies and divergences from information theory in place of the conventional statistical descriptors of variances and covariances. The empirical minimum error entropy (MEE) algorithm is a typical approach within this framework and has been successfully used in both regression and classification problems. In this talk, I will discuss the consistency analysis of the MEE algorithm. For this purpose, we introduce two types of consistency. Error entropy consistency requires the error entropy of the learned function to approximate the minimum error entropy; it holds when the bandwidth parameter tends to 0 at an appropriate rate. Regression consistency requires the learned function to approximate the regression function. We prove that error entropy consistency implies regression consistency for homoskedastic models, where the noise is independent of the input variable. For heteroskedastic models, however, a counterexample is constructed to show that the two types of consistency do not necessarily coincide. A surprising result is that regression consistency holds when the bandwidth parameter is sufficiently large. Regression consistency is also shown to hold, with a fixed bandwidth parameter, for two classes of special models. These results illustrate the subtlety of the MEE algorithm.
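As background, one common formulation of the empirical MEE objective uses Rényi's quadratic entropy with a Parzen window estimate of the error density; the sketch below is illustrative only, and the notation (sample size n, kernel G, bandwidth h, hypothesis space \mathcal{H}) is not taken from the talk itself. Given a sample \(\{(x_i, y_i)\}_{i=1}^{n}\) and errors \(e_i = y_i - f(x_i)\),
\[
\widehat{H}_h(f) \;=\; -\log\!\Big( \frac{1}{n^2 h} \sum_{i=1}^{n} \sum_{j=1}^{n} G\!\Big(\frac{e_i - e_j}{h}\Big) \Big),
\qquad
f_{\mathbf{z}} \;=\; \arg\min_{f \in \mathcal{H}} \widehat{H}_h(f),
\]
where \(G\) is a windowing kernel (typically Gaussian) and \(h > 0\) is the bandwidth parameter whose scaling, as discussed in the talk, governs whether error entropy consistency or regression consistency is obtained.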