Monday, August 06, 2012

Thoughts on ICML

This year's ICML conference was quite the blast. With a record number of attendees and published papers, there were many exciting papers and talks.

David MacKay gave an interesting invited talk based on his book, Sustainable Energy – without the hot air, which you can download for free. The goal of his book is to take the issues surrounding sustainable energy and break them down to first principles. The book provides simple and informative ways to compare different options in energy generation and consumption. For instance, the amount of energy saved by unplugging your phone charger for an entire day is roughly equivalent to the energy expended by driving your car for one second, or as David MacKay puts it, "Obsessively switching off the phone-charger is like bailing the Titanic with a teaspoon." So while unplugging your charger does help, it perhaps should not belong on the moral pedestal that some people have placed it on.
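
To give a flavor of the book's back-of-the-envelope style, here is a rough version of that comparison. The wattage, fuel, and speed figures below are my own ballpark assumptions rather than MacKay's exact numbers, but they land in the same place:

```python
# Back-of-the-envelope comparison; all figures are rough assumptions, not MacKay's exact numbers.
charger_power_w = 0.5                                   # idle phone charger draw (~0.5 W)
charger_day_kwh = charger_power_w * 24 / 1000.0         # ~0.012 kWh saved per day

car_kwh_per_100km = 80.0                                # chemical energy in petrol for a typical car
speed_kmh = 50.0                                        # typical driving speed
car_kwh_per_sec = car_kwh_per_100km / 100.0 * speed_kmh / 3600.0   # ~0.011 kWh per second of driving

print(f"charger off for a day: {charger_day_kwh:.3f} kWh")
print(f"driving for one second: {car_kwh_per_sec:.3f} kWh")
```

Both come out to roughly a hundredth of a kilowatt-hour, which is exactly the teaspoon point.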

Another interesting talk was given by Kiri Wagstaff on her contributed paper, Machine Learning that Matters. I'm not sure that I fully accept some of her premises or solutions, but it's definitely food for thought. It's certainly the case that many existing benchmarks have become rather outdated, in the sense that improved performance on these benchmarks no longer implies improved performance on the motivating practical problem.

ICML also had an invited applications track, where researchers from other fields were invited to talk about how they used machine learning as part of solutions they devised for their technical challenges. I particularly enjoyed the talk on Data-driven Web Design by Ranjitha Kumar (my old friend Jerry Talton collaborated on the project). I think there is a great opportunity for machine learning to aid in developing intelligent systems for computer-assisted content generation. Another example of this is the work on Metro Maps by Dafna Shahaf (although that system is fully automated).

I'll conclude by highlighting some technical papers that I found particularly interesting.

Safe Exploration in Markov Decision Processes -- a Markov Decision Process (MDP) is a way of modeling how an autonomous agent interacts with an environment. An example application is modeling a robotic vehicle (such as a Mars rover) exploring hazardous terrain. One of the limitations of many MDP approaches is that their performance guarantees depend on the "cost" of the worst possible outcome. In cases where conditions are hazardous, the worst possible outcome could have infinite cost, which makes these types of guarantees vacuous (i.e., no useful exploration trajectories are discovered). This paper proposes a new, flexible way to model the avoidance of hazardous outcomes that still yields useful exploration trajectories.
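
To make the flavor of this concrete, here is a toy sketch of my own (not the paper's algorithm) of exploration restricted to actions whose immediate outcomes are known to be non-hazardous. The paper's actual criterion is stronger, roughly requiring that the agent can always get back to safety, but the spirit of filtering the exploration policy by a safety constraint is similar:

```python
import random

# Toy grid world: '.' is safe ground, 'H' is a hazard (e.g., a cliff), 'S' is the start.
GRID = ["S...",
        ".H..",
        "..H.",
        "...."]
ROWS, COLS = len(GRID), len(GRID[0])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def is_safe(state):
    r, c = state
    return 0 <= r < ROWS and 0 <= c < COLS and GRID[r][c] != "H"

def safe_actions(state):
    # Only allow moves whose outcome is a non-hazard cell.  The paper's criterion is
    # stronger (the agent must be able to return to safety), but the idea of
    # constraining the exploration policy is the same.
    return [a for a in ACTIONS if is_safe((state[0] + a[0], state[1] + a[1]))]

state, trajectory = (0, 0), [(0, 0)]
for _ in range(20):                       # random exploration restricted to safe actions
    dr, dc = random.choice(safe_actions(state))
    state = (state[0] + dr, state[1] + dc)
    trajectory.append(state)
print(trajectory)
```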

Online Structured Prediction via Coactive Learning -- most approaches for modeling personalized recommender systems assume that users provide feedback on the quality of an individual recommendation or a pair of recommendations (e.g., news article A is interesting, or article A is more interesting than article B). However, most recommender systems actually present a structured set of recommendations (e.g., a top-down ranking in web search, or a 2-dimensional layout in image search), which can be much more complicated to model than individual recommendations. This paper proposes a framework that explicitly models the entire structured set of recommendations. The proposed approach is able to infer a preferable structured set of recommendations (e.g., image layout set A is preferable to image layout set B) from user feedback, and can quickly learn to personalize using such feedback.
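
The core algorithm is a simple preference-perceptron-style update: present the best structured object under the current weights, receive a slightly better one from the user, and move the weights toward the difference of their joint features. Below is a minimal sketch of that loop; the toy ranking feature map and the simulated user feedback are my own stand-ins, not the paper's:

```python
import numpy as np

def feature_map(x, ranking):
    # Toy joint feature map for a ranking: weight each ranked item's feature vector
    # by its position (higher positions count more).  A stand-in, not the paper's map.
    weights = 1.0 / np.log2(np.arange(2, len(ranking) + 2))
    return np.sum(weights[:, None] * x[ranking], axis=0)

def present(w, x):
    # Present the ranking that sorts items by their current score w . x_i.
    return np.argsort(-(x @ w))

rng = np.random.default_rng(0)
d, n_items, true_w = 5, 10, np.array([1.0, -0.5, 0.3, 0.0, 2.0])
w = np.zeros(d)
for t in range(100):
    x = rng.normal(size=(n_items, d))          # candidate items for this round
    y = present(w, x)                          # ranking shown to the user
    y_bar = np.argsort(-(x @ true_w))          # simulated "improved" user feedback
    w += feature_map(x, y_bar) - feature_map(x, y)   # coactive / preference-perceptron update
```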

Active Learning for Matching Problems -- active learning is the setting where a system asks (a human) for feedback or labels with respect to a prediction instance. One example is a movie service asking you to rate a movie. The goal of active learning is to quickly (w.r.t. the number of labels elicited) learn a reliable prediction model. This paper considers the setting where the final goal is to predict a good matching. This has applications in the scientific community, where reviewers must be matched to submitted papers. Each paper (even the unpopular ones) must be assigned to a minimum number of reviewers, and each reviewer cannot be assigned more than some maximum number of papers. The active learning component chooses a few papers and asks each reviewer to rate them based on interest level and expertise. The challenge, which this paper addresses, is to develop active learning approaches that focus on learning which feasible matchings are good, rather than (as in normal active learning) what each reviewer likes.
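
Here is a rough sketch of the setup, not the paper's algorithm: the uncertainty-sampling query rule, the greedy matcher, and the constraint numbers below are all stand-ins, but they show how label queries and the final constrained matching interact:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reviewers, n_papers = 4, 6
paper_min, reviewer_max = 2, 4            # each paper needs >= 2 reviews, each reviewer <= 4 papers

true_affinity = rng.uniform(size=(n_reviewers, n_papers))           # unknown ground truth
est = 0.5 + 0.05 * rng.standard_normal((n_reviewers, n_papers))     # noisy prior affinity estimates
queried = np.zeros_like(est, dtype=bool)

def greedy_matching(scores):
    """Greedy feasible matching: add the highest-scoring (reviewer, paper) pairs
    while respecting each paper's minimum and each reviewer's maximum load."""
    load, need, match = np.zeros(n_reviewers, int), np.full(n_papers, paper_min), []
    pairs = sorted(((scores[r, p], r, p) for r in range(n_reviewers) for p in range(n_papers)),
                   reverse=True)
    for s, r, p in pairs:
        if need[p] > 0 and load[r] < reviewer_max:
            match.append((r, p)); need[p] -= 1; load[r] += 1
    return match

# Active learning loop (uncertainty-style sampling -- a stand-in for the paper's criterion):
for _ in range(8):
    uncertainty = np.where(queried, -np.inf, -np.abs(est - 0.5))   # least-certain unqueried pair
    r, p = np.unravel_index(np.argmax(uncertainty), est.shape)
    est[r, p] = true_affinity[r, p]          # simulate asking the reviewer about this paper
    queried[r, p] = True

print(greedy_matching(est))
```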

Latent Collaborative Retrieval -- the conventional approach to modeling web search is to assume a fixed representation of query-document compatibility, and then learn a model that scores relevant documents higher than non-relevant ones for any given query. In order to have a compact representation, such approaches utilize features that generalize across queries and documents (e.g., # of query words appearing in the title of the document). However, such features often mean different things for different queries (e.g., some queries care more about title-matching than others), but having an individual model per query is not feasible. This paper proposes an approach that allows for such a flexible model by actually learning a compact representation that best captures the variations in a per-query model. This is similar to collaborative filtering approaches used for things like movie recommendation, and it seems to work really well for web search.
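
A toy sketch of the latent-factor idea (the actual paper's model and training objective differ, and the data and hyperparameters here are made up): each query and each document gets a learned low-dimensional vector, and relevant pairs are pushed to outscore random non-relevant ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n_queries, n_docs, k = 50, 200, 8              # toy sizes; k = latent dimension
Q = 0.1 * rng.standard_normal((n_queries, k))  # one latent vector per query
D = 0.1 * rng.standard_normal((n_docs, k))     # one latent vector per document

# Synthetic "click" data: (query, relevant document) pairs.
clicks = [(rng.integers(n_queries), rng.integers(n_docs)) for _ in range(2000)]

def score(q, d):
    return Q[q] @ D[d]

# SGD on a pairwise hinge loss: a clicked document should outscore a random one by a margin.
lr, margin = 0.05, 1.0
for epoch in range(5):
    for q, d_pos in clicks:
        d_neg = rng.integers(n_docs)           # random stand-in for a non-relevant document
        if margin - score(q, d_pos) + score(q, d_neg) > 0:
            q_vec = Q[q].copy()
            Q[q]     += lr * (D[d_pos] - D[d_neg])
            D[d_pos] += lr * q_vec
            D[d_neg] -= lr * q_vec
```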

Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process Bandit Optimization -- imagine a laboratory system that automatically plans and conducts experiments. This system can run multiple experiments in parallel (a batch) and also adapt its plan based on the results of finished experiments. To intelligently select the next batch of experiments, the system must be able to model the distribution of possible outcomes of each possible batch (an exponentially large search space). Note that the underlying model is quite general and can be applied to many other settings (such as a recommender system servicing a population of users and then integrating all collected user feedback once per day). This paper proposes an efficient approach for choosing such a batch within the Gaussian process framework (which is a probabilistic framework for modeling the distribution of possible experimental outcomes).
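
As I understand it, the key trick for choosing a batch is to "hallucinate" the outcomes of experiments already placed in the batch, so the posterior variance shrinks and the next pick is sent somewhere genuinely new. Here is a rough sketch of that idea using scikit-learn's Gaussian process regressor; the kernel, the exploration parameter, and the toy objective are my own choices:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
candidates = np.linspace(0, 10, 200).reshape(-1, 1)   # possible experiment settings (toy 1-D space)

# Pretend we already ran a few experiments on a synthetic objective.
X = rng.uniform(0, 10, size=(5, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=5)

def select_batch(X, y, batch_size=4, beta=2.0):
    """Greedy UCB batch selection with 'hallucinated' observations: after picking a
    point, pretend we observed its posterior mean so the variance there shrinks and
    the next pick is pushed elsewhere."""
    X_aug, y_aug, batch = X.copy(), y.copy(), []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
        gp.fit(X_aug, y_aug)
        mean, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(mean + beta * std)]
        batch.append(x_next)
        X_aug = np.vstack([X_aug, x_next])
        y_aug = np.append(y_aug, gp.predict(x_next.reshape(1, -1))[0])  # hallucinated outcome
    return np.array(batch)

print(select_batch(X, y))
```

All experiments in the returned batch can then be run in parallel, and the real outcomes folded in before the next batch is chosen.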

Learning Object Arrangements in 3D Scenes using Human Context -- as the title suggests, this paper proposes an approach to learning how objects in a scene should be arranged using human context. For example, by understanding how humans might use a living room, the model can learn that the TV should be placed opposite the sofa at some comfortable distance. Alternatively, one can model every pair of relationships (e.g., TV and couch, TV and coffee table, TV and lamp, couch and remote control, etc.), but that quickly leads to an intractable model whose computational (running time) and statistical (size of training data) requirements scale quadratically with respect to the number (or number of types) of objects in the scene. The common way to scale this back down to linear is to model how all objects interact with a single register model, and the selection of a good register model is crucial. This approach registers all objects in the scene to a model of human poses, which is quite clever.
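
A toy illustration of the register idea (not the paper's actual model): rather than scoring all O(n^2) object-object pairs, each object's placement is scored only against a handful of sampled human poses, so the number of modeled relations grows linearly with the number of objects. The poses, preferred distances, and penalty below are made up:

```python
import numpy as np

# Score an object's placement against sampled human poses instead of against every other object.
human_poses = np.array([[2.0, 1.0], [2.5, 1.2]])          # e.g., sampled sitting positions on a sofa
preferred_dist = {"tv": 3.0, "remote": 0.5, "lamp": 1.5}  # comfortable distance from a person (meters)

def placement_score(obj, location):
    d = np.linalg.norm(human_poses - location, axis=1).min()
    return -(d - preferred_dist[obj]) ** 2                # best when at the preferred distance

print(placement_score("tv", np.array([5.0, 1.0])))        # score a candidate TV location
```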

Building High-level Features Using Large Scale Unsupervised Learning -- one dirty secret of most machine learning approaches is their reliance on a useful representation of the learning problem. For example, if you use a linear model, you're hoping that there exists a linear combination of the features you're using that leads to a useful predictor. This has led to a lot of feature engineering by domain experts so that standard machine learning algorithms can work well on different applications. This paper proposes an approach to actually do automated feature engineering, and do it well. For example, they were able to learn a cat filter feature when training on 10 million web images (what a surprise).
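
The paper trains a very large, sparse, multi-layer network on a cluster of machines, which is well beyond a blog snippet, but the basic idea of learning features by reconstructing the input can be sketched with a tiny numpy autoencoder (made-up data and hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # stand-in for image patches (e.g., 8x8, flattened)
n_hidden, lr = 16, 0.01

# Single-layer autoencoder: learn features h = sigmoid(XW + b) that can reconstruct the input.
W  = 0.1 * rng.normal(size=(64, n_hidden)); b  = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, 64)); b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    H = sigmoid(X @ W + b)               # learned features (each column of W is a "filter")
    X_hat = H @ W2 + b2                  # reconstruction of the input
    err = X_hat - X                      # gradient of squared reconstruction error
    dW2 = H.T @ err / len(X);  db2 = err.mean(axis=0)
    dH  = err @ W2.T * H * (1 - H)
    dW  = X.T @ dH / len(X);   db  = dH.mean(axis=0)
    W -= lr * dW; b -= lr * db; W2 -= lr * dW2; b2 -= lr * db2
```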

