Monday, July 20, 2009

Pondering Intelligence

What is intelligence? It's a question that can be approached from numerous angles. Many (most?) people treat intelligence as a uniquely human property, one that is intricately bound to notions of tremendous intrinsic value, such as having souls, inalienable rights, self-awareness, and so on. Chimps might be pretty smart (for non-humans), but they're still just chimps.

The AI community, having recovered from the failures of the 1960s and the ensuing AI winter, has developed many techniques that have greatly improved (and in some cases enabled) a wide range of applications. Our most successful systems today use very narrow AI (e.g., chess, web search, spam filters, fraud detection, the Mars rovers), but the holy grail of artificial general intelligence (AGI) still lurks in our thoughts.

Most discussions I have regarding AGI are met with stiff resistance. One of the strongest biases I've noticed is a deeply anthropocentric view that only humans can be truly intelligent or sentient. Moreover, many seem to believe, at some fundamental level, that certain properties -- our sentience, for instance -- extend only to other beings who are like us. After all, I only experience my own thoughts. I don't know for certain that you are sentient; I can only infer it from your behavior and from my own mental model of how sentient beings (i.e., myself) should behave.

So what does that imply? For one, we accept other humans as sentient, intelligent beings despite being given very little evidence. In many cases, we're simply projecting (or empathizing, if you prefer). Does it also imply that, at least in principle, we could devise a comprehensive suite of tests or measurements to characterize the general notions of sentience and intelligence?

Currently, I'm inclined to consider intelligence in very practical terms. Suppose, for the sake of argument, that there does indeed exist a suite of tests measuring all levels of intelligence (at least up to the human level). It's clear that our AI systems are becoming more general every year, although they're still a far cry from human-level intelligence. However, if we dare imagine that the exponential growth curves achieved by other pervasive technologies (e.g., the internet, mobile) might also apply to AI development, then we might quickly see systems of far greater complexity and generality. Some nearer-term milestones include a general-purpose secretary program that learns to handle personalized appointment and travel management, and a general household robot that learns to perform various chores across different home and lifestyle configurations.
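To make that thought experiment slightly more concrete, here is a minimal sketch (in Python) of what such a suite might look like, assuming a toy battery of tasks and an agent that is just a function from prompts to answers. The task names and placeholder scores are purely illustrative; nothing like an established measure exists.

    # Hypothetical sketch: scoring an agent against a battery of tasks.
    # The tasks, scores, and Agent interface are illustrative assumptions.
    from typing import Callable, Dict

    Agent = Callable[[str], str]  # maps a task prompt to an answer

    # Each task scores an agent in [0, 1], where 1.0 is human-level.
    # The constant scores below are placeholders for real evaluations.
    def chess_puzzles(agent: Agent) -> float:
        return 0.9   # narrow chess engines already do well here

    def reading_comprehension(agent: Agent) -> float:
        return 0.2

    def household_chores(agent: Agent) -> float:
        return 0.05

    BATTERY: Dict[str, Callable[[Agent], float]] = {
        "chess": chess_puzzles,
        "reading": reading_comprehension,
        "chores": household_chores,
    }

    def generality_score(agent: Agent) -> float:
        """Average normalized performance across the whole battery."""
        return sum(task(agent) for task in BATTERY.values()) / len(BATTERY)

    dummy = lambda prompt: ""  # stands in for a real system
    print(f"generality: {generality_score(dummy):.2f}")  # 0.38

The point is only that "narrow" versus "general" becomes a measurable distinction: a chess engine would spike on one row of the battery and flatline on the rest, while a more general system would score respectably across the board.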

One of the greatest conceptual difficulties in contemplating intelligence arises from the fact that no other extant being (natural or artificial) even comes close to rivaling human intelligence. The situation might be much more intriguing had many of our pre-modern cousins and ancestors survived to the present day. I think it's reasonable to assume that different hominid species would display varying degrees of intelligence. Given that kind of evidence, it might be more reasonable to believe that AI could one day also climb the general intelligence ladder.

2 comments:

Omar said...

A quick one, but one of the things we do as humans is make assumptions. To design a successful algorithm, we need to build in suitable assumptions (see the sketch below).

I wonder when we can expect a computer to make sensible assumptions prior to solving a problem, and what that might involve. I think it borders on the whole 'common sense' debate.
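To make Omar's point concrete, here is a minimal sketch of a naive Bayes spam filter, assuming a toy vocabulary and invented training data. The filter only works because of an assumption the designer bakes in up front: that words occur independently given the class.

    # Minimal, illustrative sketch: the designer's assumption (word
    # independence given the class) is what makes the algorithm work.
    import math
    from collections import Counter

    spam = ["win money now", "free money offer"]
    ham = ["meeting at noon", "lunch at noon tomorrow"]

    def word_counts(docs):
        return Counter(w for d in docs for w in d.split())

    spam_counts, ham_counts = word_counts(spam), word_counts(ham)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab = set(spam_counts) | set(ham_counts)

    def log_likelihood(msg, counts, total):
        # The "naive" assumption: P(w1, w2, ... | class) is the product
        # of the per-word P(wi | class). Laplace smoothing (+1) keeps
        # unseen words from zeroing out the product.
        return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in msg.split())

    def is_spam(msg):
        return (log_likelihood(msg, spam_counts, spam_total)
                > log_likelihood(msg, ham_counts, ham_total))

    print(is_spam("free money"))     # True
    print(is_spam("lunch at noon"))  # False

Take away the independence assumption and the data alone tells the program nothing about how to combine word evidence; deciding which assumptions to build in is still the human's job, which is exactly the 'common sense' gap.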

Anonymous said...

Here is one way in which "human intelligence" differs from AI:

An AI cannot question/modify its own programming.

Perhaps it can to some extent, but it still cannot change its last program -- the fixed layer that does the modifying...
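A minimal sketch of one reading of this claim, assuming we model the AI as a fixed outer loop interpreting a modifiable policy: the program can rewrite its own rules (data), but the layer that performs the rewriting stays out of its reach.

    # Illustrative sketch only: the "policy" is self-modifiable data,
    # but the outer loop -- the "last program" -- is fixed code.
    policy = {"greeting": "hello"}

    def act():
        return policy["greeting"]

    def self_modify():
        # The system can question and rewrite its policy...
        policy["greeting"] += "!"

    for step in range(3):
        print(act())  # hello, hello!, hello!!
        self_modify()
        # ...but nothing executed here can rewrite self_modify() itself
        # or this loop; those sit at a level the policy cannot touch.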