In many of these cases, time or cost constraints prohibit sampling more than a few attributes. Moreover, many application domains use linear regression for prediction, and evaluate the quality of a set of attributes by its R^2 fit with the quantity to be predicted. This motivates the following formal problem definition: "Given the covariances among observable variables X_i, and between them and a target variable Z, select k of the variables X_i such that the selected set has the best possible R^2 fit with Z."
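As a concrete illustration (not taken from the talk), the R^2 objective can be computed from covariances alone. The following minimal Python sketch assumes all variables are normalized to unit variance; the names r_squared, C, b, and S are illustrative:

    import numpy as np

    def r_squared(C, b, S):
        # Illustrative sketch: R^2 of the best linear predictor of Z from
        # the variables indexed by S, assuming unit variances throughout.
        # C: covariance matrix among the X_i; b: covariances Cov(X_i, Z).
        S = list(S)
        C_S = C[np.ix_(S, S)]   # covariances among the selected variables
        b_S = b[S]              # their covariances with Z
        # With unit variances, R^2(S) = b_S^T C_S^{-1} b_S.
        return float(b_S @ np.linalg.solve(C_S, b_S))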
The main result presented in this talk is that, so long as the covariance matrix among the X_i variables is far from singular, the greedy algorithms frequently used in practice are provably constant-factor approximations. The proof is based on extending the widely used concept of submodularity to a notion of approximate submodularity, and relating it to the spectrum of the covariance matrix. Furthermore, we will investigate various graph-theoretic properties of covariance matrices that allow for efficient exact or approximate algorithms.
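For illustration only, the greedy strategy the abstract alludes to can be sketched as forward selection on this objective. The sketch below reuses r_squared from above; greedy_select is a hypothetical name and not the exact algorithm analyzed in the papers:

    def greedy_select(C, b, k):
        # Forward greedy selection: repeatedly add the variable whose
        # inclusion yields the largest R^2, stopping after k picks.
        selected = []
        for _ in range(k):
            remaining = (i for i in range(len(b)) if i not in selected)
            best = max(remaining, key=lambda i: r_squared(C, b, selected + [i]))
            selected.append(best)
        return selected

    # Tiny synthetic check: correlations give unit variances for free.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 6))
    z = X @ rng.standard_normal(6) + rng.standard_normal(500)
    R = np.corrcoef(np.column_stack([X, z]), rowvar=False)
    C, b = R[:6, :6], R[:6, 6]
    print(greedy_select(C, b, k=3))   # indices of 3 greedily chosen variables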
We conclude with several exciting open questions.
[This talk is based on joint work with Abhimanyu Das, appearing in ICML 2011 and STOC 2008.]
Bio: David Kempe received his Ph.D. from Cornell University in 2003, and has been on the faculty in the computer science department at USC since the Fall of 2004, where he is currently an Associate Professor.
His primary research interests are in computer science theory and the design and analysis of algorithms, with a particular emphasis on social networks, algorithms for feature selection, and game-theoretic and pricing questions. He is a recipient of the NSF CAREER award, the VSoE Junior Research Award, the ONR Young Investigator Award, and a Sloan Fellowship.
Refreshments will be provided at 10:45 AM.