Friday, December 19, 2014

Thoughts on NIPS 2014

NIPS 2014 happened last week, and what a great conference it was: lots of strong papers, workshops, and invited talks.
The NIPS Experiment
In contrast to previous years, the most talked-about thing at NIPS this year was not any new machine learning approach, but rather a reviewing experiment called the NIPS Experiment.

In a nutshell, about 10% of submissions were reviewed independently by two sets of reviewers (including two different Area Chairs). The goal of the NIPS Experiment was to assess to what extent reviewers agreed on accept/reject decisions. The outcome of the experiment has been a challenge to interpret properly.

The provocative and thought-provoking blog post by Eric Price has garnered the most attention from the broader scientific community. Basically, one reasonable way of interpreting the NIPS Experiment results is that, of the papers accepted for publication at NIPS 2014, roughly half would have been rejected if they were reviewed again by a different set of reviewers. This, of course, highlights the degree of subjectivity and randomness (likely exacerbated by sub-optimal reviewing) inherent in reviewing for such a broad field as machine learning.
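
To make the "roughly half" figure concrete, here is the back-of-the-envelope arithmetic as a small Python sketch. The counts are the approximate figures reported for the experiment at the time, so treat them as illustrative rather than exact:

```python
# Rough version of the calculation popularized by Eric Price's post,
# using approximate figures reported for the NIPS Experiment.
n_dup = 166          # submissions reviewed by both committees
n_disagree = 43      # papers where the two committees disagreed (~26%)
accept_rate = 0.225  # overall acceptance rate

accepted_by_A = accept_rate * n_dup  # ~37 papers accepted by committee A
flipped = n_disagree / 2             # disagreements split roughly evenly
print(flipped / accepted_by_A)       # ~0.57 of A's accepted papers would
                                     # have been rejected by committee B
```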

The most common way to analyze this is from the viewpoint of fairness: if we had a budget of K papers, did the top K submissions get published? From that standpoint, the answer seems to be a resounding no, no matter how you slice it. One can argue about the degree of unfairness, which is a much murkier subject.
Alternative Viewpoint via Regret Minimization
However, as echoed in a blog post by Bert Huang, NIPS was AWESOME this year. The poster sessions had lots of great papers, and the oral presentations were good.

So I'd like to offer a different viewpoint about NIPS, one based on regret minimization. Let's assume that the accepted papers that were more likely to be rejected in a second review are "borderline" papers (seems like a reasonable assumption, but perhaps there are arguments against it). Then, had we swapped out a bunch of borderline papers with other borderline papers that got rejected, would the quality of the conference have been that much better?

In other words, given a budget of K papers to accept, how does the collective quality of the K papers actually accepted compare with the quality of the "optimal" set of K papers we should have accepted? It's conceivable that the regret in quality could be quite low even when the overlap between the two sets of papers is far from complete.
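
A toy simulation makes this concrete: generate papers with latent quality, have two committees score them with independent noise, accept the top K each time, and compare the overlap against the quality regret. All numbers below (paper count, budget, noise level) are made up for illustration, not NIPS's actual figures:

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, K, noise = 1000, 225, 1.0      # illustrative numbers only

quality = rng.normal(size=n_papers)      # latent "true" quality per paper

def committee(q):
    """One committee = noisy scores; accept the top K."""
    return np.argsort(q + noise * rng.normal(size=q.size))[-K:]

a, b = committee(quality), committee(quality)
overlap = len(np.intersect1d(a, b)) / K
ideal = np.sort(quality)[-K:].mean()     # best achievable average quality
regret = ideal - quality[a].mean()       # per-paper quality regret

print(f"overlap between committees: {overlap:.0%}")
print(f"per-paper quality regret:   {regret:.3f}")
```

At plausible noise levels the committees' overlap drops well below 100% while the average-quality regret stays small, because the papers being swapped in and out all sit near the same quality threshold.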

One might even argue, as alluded to here, that long-term regret minimization (i.e., reviewing for NIPS over many years) requires some amount of randomness and/or disagreement between reviewers. Otherwise, there could be a more serious risk of group-think or intellectual inbreeding that can cause the field to stagnate.

I'm not sure to what extent this viewpoint is appropriate. For instance, NIPS is also a venue by which junior researchers become established in the field. Having a significant amount of randomness in the reviewing process can definitely be detrimental to the morale and career prospects of junior researchers.
On to the Actual Papers
There were many great papers at NIPS this year. Here are a few that caught my eye:

Sequence to Sequence Learning with Neural Networks
by Ilya Sutskever, Oriol Vinyals & Quoc Le.
Ilya gave, hands down, the best talk at NIPS this year. Ever since it started becoming popular, Deep Learning has carried with it the idea that only Geoff Hinton & company could make deep networks work well. Ilya spent most of his talk describing how this is no longer the case. He also showed how to use a recurrent architecture called Long Short-Term Memory (LSTM) to do sequence-to-sequence prediction with deep neural networks.
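
The basic architecture is simple to state: one LSTM reads the input sequence into a fixed-size state, and a second LSTM unrolls from that state to emit the output sequence. Here is a minimal sketch in (anachronistically modern) PyTorch; there is no attention, it is single-layer, and all dimensions are illustrative, so it conveys the structure rather than the paper's exact setup:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder-decoder: compress the source sequence into the encoder's
    final LSTM state, then decode the target sequence from that state."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))       # summarize input
        dec, _ = self.decoder(self.tgt_emb(tgt), state)  # condition on it
        return self.out(dec)                             # per-step logits
```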

Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics
by Sergey Levine & Pieter Abbeel.
This paper combined reinforcement learning and neural networks in order to do policy search. What's striking about this approach is how few training examples were needed to train a neural network policy. Granted, the network wasn't very deep, but the small amount of training data is still quite surprising.

Learning to Optimize via Information-Directed Sampling
by Dan Russo & Benjamin Van Roy.
Dan Russo has been doing some great work recently on analyzing bandit/MDP algorithms and proposing new ones. This paper proposes the first (mostly) fundamentally new design philosophy for bandit algorithms that I've seen in a while. It's not clear yet how to make the algorithm practical in a wide range of complex domains, but it's definitely exciting to think about.
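
The design philosophy in question is to pick the action minimizing the "information ratio": squared expected instantaneous regret divided by information gain. Below is a Monte Carlo sketch of a variance-based variant for a Bernoulli bandit; this is my own simplification for illustration, not the paper's exact algorithm:

```python
import numpy as np

def ids_action(alpha, beta, n_samples=2000):
    """One step of variance-based information-directed sampling for a
    Bernoulli bandit with Beta(alpha[a], beta[a]) posteriors."""
    K = len(alpha)
    theta = np.random.beta(alpha, beta, size=(n_samples, K))  # posterior draws
    best = theta.argmax(axis=1)                  # optimal arm in each draw
    p_star = np.bincount(best, minlength=K) / n_samples

    # Expected instantaneous regret of pulling each arm.
    delta = theta.max(axis=1).mean() - theta.mean(axis=0)

    # Variance-based information gain: how much each arm's posterior mean
    # moves across hypotheses about which arm is optimal.
    mean = theta.mean(axis=0)
    cond = np.array([theta[best == a].mean(axis=0) if p_star[a] > 0 else mean
                     for a in range(K)])
    g = (p_star[:, None] * (cond - mean) ** 2).sum(axis=0)

    # Pick the arm minimizing the information ratio.
    return int(np.argmin(delta ** 2 / np.maximum(g, 1e-12)))
```

In a bandit loop one would call ids_action on the current Beta posterior each round, then increment alpha[a] or beta[a] depending on the observed reward.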

Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
by Adarsh Prasad, Stefanie Jegelka & Dhruv Batra.
This paper deals with how to do submodular maximization when the ground set is exponentially large. It exploits specific structure in the ground set, e.g., instances that can be solved via cooperative cuts, in order to arrive at an efficient solution. It would be interesting to try to learn the diversity/submodular objective function rather than hand-craft a relatively simple one (from a modeling perspective).
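
For context, the classical greedy algorithm for monotone submodular maximization enumerates the whole ground set at every step, which is exactly what becomes infeasible when the ground set is exponentially large; the paper's structural assumptions are what sidestep that enumeration. A toy sketch of the explicit baseline, with a made-up coverage objective:

```python
def greedy(ground_set, f, budget):
    """Classical greedy for monotone submodular f: repeatedly add the
    element with the largest marginal gain. Requires enumerating the
    ground set, which is infeasible when it is exponentially large."""
    chosen = []
    for _ in range(budget):
        gain, best = max((f(chosen + [x]) - f(chosen), x)
                         for x in ground_set if x not in chosen)
        if gain <= 0:
            break
        chosen.append(best)
    return chosen

# Toy coverage objective: f(S) = number of distinct elements covered by S.
items = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3}}
f = lambda S: len(set().union(*(items[i] for i in S)))
print(greedy(list(items), f, budget=2))  # -> [3, 2]
```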

From MAP to Marginals: Variational Inference in Bayesian Submodular Models
by Josip Djolonga & Andreas Krause.
Log-submodular models are a new family of probabilistic models that generalize things like associative Markov random fields. This paper shows how to perform variational marginal inference in log-submodular models, which can be wildly intractable when viewed through the lens of conventional graphical models (e.g., very large factors that obey a submodular structure). Very cool stuff.

Non-convex Robust PCA
by Praneeth Netrapalli, Niranjan U N, Sujay Sanghavi, Animashree Anandkumar & Prateek Jain.
This paper gives a very efficient and provably optimal approach for robust PCA, where a matrix is assumed to be low-rank except for a few sparse corruptions. This optimization problem is non-convex, and convex relaxations can often give sub-optimal results. They also have a cool demo.
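
The underlying model is M ≈ L + S, with L low-rank and S sparse. Here is a minimal numpy sketch of the alternating idea; the actual algorithm uses carefully scheduled thresholds and stagewise rank increases to obtain its guarantees, so this only conveys the alternating structure:

```python
import numpy as np

def robust_pca(M, rank, thresh=3.0, n_iters=30):
    """Alternate: project M - S onto rank-r matrices via truncated SVD,
    then re-estimate the sparse part S by hard-thresholding the residual."""
    S = np.zeros_like(M)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # low-rank step
        R = M - L
        cut = thresh * R.std()                       # crude, fixed threshold
        S = np.where(np.abs(R) > cut, R, 0.0)        # sparse step
    return L, S
```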

How transferable are features in deep neural networks?
by Jason Yosinski, Jeff Clune, Yoshua Bengio & Hod Lipson.
Along with a scientific study on the transferability of neural network features, Jason Yosinski also developed a cool demo that can visualize the various hidden layers of a deep neural network.
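
The transfer protocol studied in the paper is easy to sketch: train a network on a source task, copy its first n layers into a target-task network, optionally freeze them, and train the remaining layers on the target task. A minimal PyTorch sketch with a made-up toy network:

```python
import torch
import torch.nn as nn

# Hypothetical small convnet; layer sizes are illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
)
# (In the transfer setting, the early weights would be copied from a
# model trained on a different source task.)

n_frozen = 4  # freeze the first two conv/ReLU pairs
for p in model[:n_frozen].parameters():
    p.requires_grad = False

# Fine-tune only the unfrozen layers on the target task.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01)
```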

Conditional Random Field Autoencoders for Unsupervised Structured Prediction
by Waleed Ammar, Chris Dyer & Noah A. Smith.
This paper gives a surprisingly efficient approach for unsupervised structured prediction with autoencoders that avoids making overly restrictive independence assumptions. The approach is based on CRFs. I wonder if one can do this with a more expressive model class such as structured decision trees.

A* Sampling
by Chris J. Maddison, Daniel Tarlow & Tom Minka.
I admit that I don't really understand what's going on in this paper. But it seems to be doing something quite new, so there are perhaps many interesting connections to be made here. This paper also won one of the Outstanding Paper Awards at NIPS this year.
