Wednesday, July 28, 2010

Towards Interactive Information Systems and Automated User Interface Design

Attending conferences like SIGIR provides a good opportunity to reflect on the major (perceived) trends in the field. One current trend that I find particularly exciting is progress towards more adaptive and interactive systems.

The ten blue links metaphor has dominated the design of information retrieval systems for about twenty years now. This is really a shame given that displays are large enough and computers are fast enough to accommodate richer forms of content display. Rich interactive frameworks are particularly useful for information discovery or vaguely targeted browsing tasks, where most of the useful information is not concentrated in just a few results.

For example, suppose I wanted to learn more about the history of research progress regarding a certain family of diseases. This is very much an exploratory information need, where I don't even know what I'd find interesting (until I see it), and being able to interactively re-organize and visualize the data (e.g., via different clusterings or histograms) might prove immensely useful.
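To make the idea concrete, here is a minimal sketch of what "re-organizing" a result set might look like under the hood: the same results faceted two different ways, as a histogram over one field and as groups over another. The result records and field names here are entirely made up for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical metadata for results of an exploratory medical-history query.
# The records and field names are illustrative assumptions, not a real API.
results = [
    {"title": "Early insulin trials",        "year": 1922, "topic": "treatment"},
    {"title": "Genetic markers identified",  "year": 1994, "topic": "genetics"},
    {"title": "Longitudinal cohort study",   "year": 2001, "topic": "epidemiology"},
    {"title": "Gene therapy outcomes",       "year": 2008, "topic": "genetics"},
]

def histogram(results, key):
    """Count results per facet value -- the data behind a histogram view."""
    return Counter(key(r) for r in results)

def regroup(results, key):
    """Re-organize the same results under a different facet (a cluster view)."""
    groups = defaultdict(list)
    for r in results:
        groups[key(r)].append(r["title"])
    return dict(groups)

# Two views over the same underlying data, switchable interactively:
by_topic = histogram(results, lambda r: r["topic"])
by_decade = regroup(results, lambda r: (r["year"] // 10) * 10)
```

An interactive interface would let the user flip between such views instantly, rather than committing to a single ranked list.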

One challenge is, of course, finding the right user interface. Microsoft's Gary Flake gave a very impressive demonstration of an experimental ZUI (zoomable user interface) called Microsoft Pivot during an invited talk at SIGIR. During the demo, Flake showed how one can seamlessly navigate and re-organize a large data collection to better understand its content.


Such interfaces should become easier to develop in the future as more SDKs become available.

Flake's demonstration did come with limitations. Most obviously, many things were specially engineered for a particular type of data and application, and things obviously won't work so smoothly for other settings and tasks (though this is no criticism of his talk, since he was presenting only the user interface, not relevance models).

This leads me to consider the two-pronged problem of figuring out not only how to find the most useful information, but also how best to present it to the user. It makes me wonder if we might be headed towards an era where machine learning approaches are used to automatically test out different ways to organize and display content (i.e., automatically learning to optimize the user interface). To some extent, people are already doing this, such as for nexus pages with lots of mash-ups (I believe the folks at Yahoo! call this the page optimization problem).
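One standard way to frame this kind of layout optimization is as a multi-armed bandit: treat each candidate layout as an arm, observe a reward such as a click, and gradually shift traffic to the better-performing layout. Below is a minimal epsilon-greedy sketch under that framing; the layout names, reward model, and simulated click rates are all assumptions for illustration, not how any production system actually works.

```python
import random

class LayoutBandit:
    """Epsilon-greedy choice among candidate page layouts (illustrative sketch)."""

    def __init__(self, layouts, epsilon=0.1, seed=0):
        self.layouts = list(layouts)
        self.epsilon = epsilon                      # exploration probability
        self.counts = {l: 0 for l in self.layouts}   # times each layout shown
        self.rewards = {l: 0.0 for l in self.layouts}  # total reward per layout
        self.rng = random.Random(seed)

    def choose(self):
        # Try every layout at least once before comparing means.
        untried = [l for l in self.layouts if self.counts[l] == 0]
        if untried:
            return untried[0]
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.layouts)     # explore
        # Exploit: layout with the highest observed mean reward.
        return max(self.layouts, key=lambda l: self.rewards[l] / self.counts[l])

    def update(self, layout, reward):
        self.counts[layout] += 1
        self.rewards[layout] += reward

# Simulated experiment with made-up per-layout click-through rates.
true_ctr = {"grid": 0.10, "list": 0.30}
bandit = LayoutBandit(["grid", "list"], epsilon=0.1, seed=0)
sim = random.Random(42)
for _ in range(2000):
    layout = bandit.choose()
    clicked = 1.0 if sim.random() < true_ctr[layout] else 0.0
    bandit.update(layout, clicked)
```

After enough impressions, the bandit serves the higher-CTR layout far more often while still occasionally exploring the alternative. Real page-optimization systems are of course more sophisticated (contextual features, position effects, interaction between modules), but the core learning loop looks much like this.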

It remains to be seen to what extent the two aspects (content/relevance and display/interface) can be addressed separately. Given the trend towards interactive systems such as Pivot, I suspect that significant improvements to the user experience will be realized when we have effective methods that can model both content and display simultaneously (i.e., what to show and how to show it).
