Tuesday, July 16, 2013

Interpretable Predictive Models

I recently attended a workshop on media at the NYU Center for Urban Science and Progress. Many thanks to Arun Sundararajan and Maria Liakata for planning the workshop, which facilitated a very interesting exchange of ideas among people from very different areas.

One issue that was discussed extensively during the workshop is the need for interpretable predictive models. Since the actual definition of "interpretable" is up for debate, I concluded that (at least for now) it's more useful to talk about when and why a model isn't interpretable and what problems that might cause.

The primary overarching use case is when an end user relies on the model not only for superior predictive power but also for deriving insight from a large dataset or problem domain.

Examples include:
  • Investigative analysis that aims to use a predictive model for policy making.
  • Debugging/building a complicated predictive model for commercial purposes.

In both examples, the decision-making process of the model needs to be somehow understandable (at least at a high level), and thus trustworthy. Unfortunately, effective predictive models are typically very complex, with potentially billions of parameters, which makes them difficult to interpret or understand using conventional means (i.e., inspecting the individual parameters).

For many settings, one effective approach could be to explain the behavior of the model on specific problem instances, rather than the model as a whole. There appear to be (at least) two types of approaches for this.

Sensitivity Analysis

One approach that Foster Provost mentioned at the workshop is to perform a form of sensitivity analysis to understand which alternative scenarios would cause the model to predict something different.

For example, weather patterns such as tropical storms are notoriously difficult to model, and any prediction of the future behavior of such a storm comes with a so-called cone of uncertainty. One useful type of analysis would be to understand which factors might cause the hurricane to fall within different regions of the cone of uncertainty (assuming the model isn't just using a simple context-free stochastic process to explain the uncertainty).

This type of sensitivity analysis can be done by varying the different input attributes fed into the model and then evaluating and summarizing the likely outcomes. Typically, analysts do a significant chunk of this work manually, which limits the scalability of such techniques. It would be interesting to develop automated "meta-analysis" algorithms that can perform large-scale sensitivity analysis and summarization for large classes of predictive models.
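To make this concrete, below is a minimal sketch of one-attribute-at-a-time sensitivity analysis for a single problem instance. It assumes a generic classifier exposing a scikit-learn-style predict_proba interface; the feature names and perturbation sizes are placeholders, and a real analysis would likely also consider joint perturbations and domain-specific constraints.

```python
# Minimal sketch of per-instance sensitivity analysis.
# Assumes `model` is any fitted classifier with a predict_proba method;
# `feature_names` and `deltas` are placeholder inputs chosen by the analyst.
import numpy as np

def sensitivity_report(model, x, feature_names, deltas):
    """Perturb each input attribute of a single instance x and report
    how much the predicted probability of the original class shifts."""
    x = np.asarray(x, dtype=float)
    base_probs = model.predict_proba(x.reshape(1, -1))[0]
    base_class = int(np.argmax(base_probs))
    report = []
    for i, name in enumerate(feature_names):
        for delta in deltas:
            x_alt = x.copy()
            x_alt[i] += delta
            alt_probs = model.predict_proba(x_alt.reshape(1, -1))[0]
            report.append({
                "feature": name,
                "delta": delta,
                "prob_change": alt_probs[base_class] - base_probs[base_class],
                "prediction_flips": int(np.argmax(alt_probs)) != base_class,
            })
    # Summarize: rank attributes by how much their perturbation moves the prediction.
    return sorted(report, key=lambda r: abs(r["prob_change"]), reverse=True)
```

The summary step here is just a ranking by the magnitude of the probability shift; automating richer summaries at scale is exactly the open question above.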

Structured Models

Another way for models to explain, or justify, their predictions is to actually build such capabilities into the predictive model. Many prediction domains require structured models that can make complex predictions.

For example, recent work by Yun Jiang uses a model of hallucinated humans to understand a scene (such as a room) and to predict where objects should be placed. In this case, the hallucinated human serves as a way for the model to explain that, e.g., a sofa should be placed opposite a television. When analyzing any particular scene, it would be interesting to develop useful ways of exposing salient aspects of the hallucinated humans (which are typically referred to as a hidden or latent part of the model) to the end user.

Another example is my previous work on sentiment analysis with Ainur Yessenalina, where we built a model to predict the sentiment of a movie review or congressional speech. The model justifies its predictions by also extracting the sentences that best explain the overall prediction.

These models make the (sometimes implicit) assumption that, for any particular problem instance, a small set of factors contributes the bulk of the reasoning behind the model's prediction for that instance. Note that the set of contributing factors can vary for different problem instances. Such "structured" models are essentially modeling the data at a level of granularity that is more expressive than a simple prediction (e.g., the likely human poses or the most opinionated sentences), but also less complex than the raw data. It would be interesting to develop these types of models to be more amenable to human inspection and modification.
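As a rough illustration (and not the actual model from either line of work above), here is a minimal sketch of a structured sentiment predictor whose document-level prediction is driven by a few high-scoring sentences, which it then exposes as its justification. The bag-of-words vectorizer and per-term weight vector are assumed to be given and are placeholders.

```python
# Sketch of a structured predictor that exposes the sentences driving its prediction.
# `vectorizer` is assumed to be a fitted bag-of-words vectorizer (e.g., CountVectorizer)
# and `weights` a vector of per-term sentiment weights; both are placeholders.
import numpy as np

class TopKSentenceSentiment:
    def __init__(self, vectorizer, weights, k=3):
        self.vectorizer = vectorizer
        self.weights = np.asarray(weights, dtype=float)
        self.k = k

    def _sentence_scores(self, sentences):
        X = self.vectorizer.transform(sentences)
        return np.asarray(X @ self.weights).ravel()  # one sentiment score per sentence

    def predict_with_justification(self, sentences):
        scores = self._sentence_scores(sentences)
        # The document prediction is driven by the k highest-magnitude sentences.
        top = np.argsort(-np.abs(scores))[: self.k]
        doc_score = scores[top].mean()
        label = "positive" if doc_score >= 0 else "negative"
        justification = [(sentences[i], float(scores[i])) for i in top]
        return label, justification
```

Because only the top-k sentences enter the document score, those same sentences double as a per-instance explanation that a person can inspect, and in principle override.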