Could AI be easy?

A million man-years have been spent on Artificial Intelligence. (1)

Since the beginning in the early 1950s, we’ve gone through dozens of paradigms and thousands of approaches trying to build software systems and robots that would behave “more intelligently”, more like humans do. This research has given us many advanced programming tricks and a few amazing consumer products, but nothing has come of all this effort that truly deserves the label “intelligence”.

But what if all these failed paradigms and approaches shared a common fatal flaw? Could it be that AI was actually a much easier problem than most people working in the field believe it is?

How would we go about discovering such a flaw? We could look at what all these failing attempts and paradigms have had in common and scrutinize that. Many approaches map to other approaches, which simplifies the task, and we’re left with a very small kernel that’s common to the majority of the failing approaches:

Reductionism.

Reductionism is the basis of Western science, the most outstanding performer in problem solving since 1650, by a wide margin. It’s the method that says “Simplify the problem, and then solve the simpler problem”, in many different guises. It is aptly named, since most of the simplifications imply a reduction of something:

  • Reduce complex systems to straightforward collections of simpler components. A frog can be split into a skeleton, a circulatory system, a nervous system, a digestive system, etc., and if you understand all of these, then you understand the frog.
  • Reduce the number of free variables by separating a core of the problem away from its open environment into a closed and controlled laboratory environment. In science textbooks this is expressed as “all else being constant”. The simplified view of the original problem is called “a model” and is, in the best cases, formalized into some number of manageable equations that describe the most important aspects of the original problem.
  • Reduce the problem to a more fundamental discipline. Problems in Biology should be attacked at the level of Biochemistry; problems of Biochemistry are merely problems of Chemistry, which are really just problems involving atoms, which is what we deal with in Physics.
  • Reduce the number of types of matter by finding even lower level components. Chemicals are combinations of atoms which are made of electrons, protons, etc. which are made of quarks, which may be “made of strings”.
  • Reduce the number of models, equations and formulas to a single great Theory of Everything that explains everything in the entire universe.
  • Reduce the complexity of a description of a system by only considering interactions from components to wholes (only consider upward causation).
  • Reduce the number of grad student hours wasted by only attacking problems that we already almost know how to solve (a form of hill climbing; a minimal sketch follows this list).
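
To make the hill-climbing metaphor in the last bullet concrete, here is a minimal sketch of my own of the algorithm itself (the objective function and step size are arbitrary choices of mine):

    import random

    def hill_climb(f, x, step=0.1, iterations=1000):
        """Greedy local search: accept a random neighbor only if it scores better."""
        for _ in range(iterations):
            candidate = x + random.uniform(-step, step)
            if f(candidate) > f(x):
                x = candidate  # only ever move uphill
        return x

    # Maximize a toy function with a single peak at x = 2.
    print(hill_climb(lambda x: -(x - 2.0) ** 2, x=0.0))  # converges near 2.0

Like the grad students in the last bullet, the search only takes steps that immediately improve matters, which is exactly why it gets stuck on the nearest local peak.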

The so-called “Reductionist Stance” and these meta-methods have served us extremely well ever since Newton, Bacon, Descartes, and later John Stuart Mill and others collectively established the efficiency of these methods, beginning in the seventeenth century. But much of it has origins in antiquity. Aristotle said “The whole equals the sum of its parts”, implying that if you understood the parts, then you fully understood the whole.

If this is the way Science is done, do we really have an alternative? Wouldn’t any alternative to Science have to be UNSCIENTIFIC?

To many people’s surprise, the answer is NO. This was established decades ago (and will be the topic of a future blog post). There are several Non-Reductionist ways to attack problems, including problems that “require intelligence”. The Life Sciences (such as Biology, Genetics, and Psychology) have always dealt with problem domains that resist Reductionist methods and have had to find other ways to make progress.

The criticisms targeting these reductions have always been variations on the theme that “this reduction discards something important so the answer you get is incomplete or incorrect”. The alternative to reduction is to consider wholes rather than parts:

Holism

The opposite stance is called “The Holistic Stance” and here the battle-cry is “The whole is more than the sum of its parts”. This stance also goes back to antiquity, which makes the Reductionism/Holism debate thousands of years old. From 1650 to the 1920s the Holistic viewpoint was largely suppressed by the barrage of successes produced by people using Reductionist methods. The hapless Holists supposedly had no methods. A Holistic stance was useful for getting an overview, for seeing what problems existed and which ones would be appropriate to attack, but once you were working on a problem, the only tools available were the tools of Reductionism. “Holists had the superior ontology while the Reductionists were the masters of method” (2). In this manner, we solved thousands of scientific and engineering problems over the past three centuries.

The simpler problems.

Certain problem domains have refused to yield. Some problems were simple, and some were complicated, which meant they would yield eventually, but many problems exhibited a “deep complexity” that went beyond the “merely complicated”. Other kinds of difficulties were identified as well. An amazing number of problems were found to be at once deeply complex, irreducible, riddled with ambiguities, and prone to emergent effects. We call these kinds of problem domains “Bizarre”; they are described elsewhere in quite some detail, but I will summarize the highlights:

The thing that makes certain problem domains deserve the “Bizarre Domain” label is that no useful models can be built of them:

  • The complexity and nonlinearity of the domain prevent accurate and precise longer-term predictions (Chaotic systems; a small demonstration follows this list).
  • Any simplification you attempt to make obviously discards something of vital importance (Irreducibility).
  • The input data available is incomplete, ambiguous, and inconsistent. (Ambiguity)
  • Emergent behavior makes the whole behave differently from a collection of its parts (Emergence or Downward Causation).
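
The chaos bullet can be demonstrated in a few lines. This is a minimal sketch of my own (not from the post) using the logistic map, a textbook chaotic system: two trajectories that start one millionth apart diverge completely within a few dozen steps, even though the governing rule is trivially simple.

    # Logistic map x' = r * x * (1 - x) with r = 4.0, a fully chaotic regime.
    def logistic(x, r=4.0):
        return r * x * (1 - x)

    a, b = 0.200000, 0.200001  # two nearly identical starting points
    for step in range(50):
        a, b = logistic(a), logistic(b)
        if (step + 1) % 10 == 0:
            print(f"step {step + 1}: a={a:.6f} b={b:.6f} gap={abs(a - b):.6f}")
    # Within a few dozen steps the gap is of order 1: the tiny initial
    # difference has swamped everything, so no finite-precision model
    # of such a system can predict far ahead.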

Some examples: The laws of Thermodynamics only apply to closed systems, but the world is open and time-variant, and hence “irreducible”, which simply means “Reductionist models cannot be used”. Many Scientific methods require correct input data for correct results; in the real world, input may be ambiguous, incomplete, and self-contradictory. Chaos and emergence can be found all around us, once you learn to recognize them. Intelligence in humans emerges from unintelligent neurons. The meaning of language emerges from mere words and letters.
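
Emergence, too, can be made tangible in code. Here is a minimal Game of Life sketch (my example, not the author's): the rules mention only a single cell and its eight neighbors, yet a “glider”, a shape that travels diagonally across the grid, emerges. Nothing in the rules refers to motion or to shapes.

    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is alive next generation if it has exactly 3 live
        # neighbors, or 2 live neighbors and is alive already.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: five cells that reappear one step diagonally displaced
    # every four generations. The motion exists only at the level of the
    # whole pattern, never at the level of any single rule or cell.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):
        cells = step(cells)
    print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True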

After solving so many relatively simple problems using the Reductionist stance, we’re left with a number of really hard ones. They are all problems in Bizarre Domains:

  • The world is Bizarre. Any attempt to model the world, completely or in significant part, will fail. Models cannot be made of the global economy or stock markets. (3) Any partial model will bleed at the edges where it was cut loose from its context.
  • Life is Bizarre. All life sciences deal with the complexity of life. Organisms, human physiology, drug design and drug interactions cannot be completely modeled. If you take a frog apart, it is no longer alive.
  • The Mind is Bizarre. The Brain is too complex to be modeled. Intelligence is Bizarre.
  • Language is Bizarre. All attempts to date to model human languages using grammars and the like have failed and will continue to fail. The meaning of language cannot be retrieved from a grammatical analysis (a toy illustration follows this list).
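
To see the grammar problem in miniature, here is a toy sketch of my own (it assumes the NLTK library is installed, pip install nltk): a tiny context-free grammar licenses two parses for the classic sentence “time flies like an arrow”, and nothing in the syntax can say which meaning was intended.

    import nltk

    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> N | N N | Det N
    VP -> V PP | V NP
    PP -> P NP
    Det -> 'an'
    N -> 'time' | 'flies' | 'arrow'
    V -> 'flies' | 'like'
    P -> 'like'
    """)

    # Prints two trees: "time moves swiftly, as an arrow does" and
    # "time-flies are fond of an arrow". The choice between them is a
    # matter of meaning, and the grammar is silent about meaning.
    for tree in nltk.ChartParser(grammar).parse("time flies like an arrow".split()):
        print(tree)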

It is possible that certain domains are regarded as Bizarre today, but that with advances in theory, computing power, etc., we might someday find ways to build models and to attack those domains using Reductionist methods. And it is possible that certain sub-problems in Bizarre Domains can be partially solved, for instance using statistical methods; a small example follows. Recognizing you have such a borderline case is harder than recognizing you are in a Bizarre Domain to start with; we will leave it to those who insist on using Reductionist methods to identify and tackle these borderline cases.
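
As a concrete example of such a statistical foothold, consider next-word prediction, a sub-problem of language that yields partially to mere counting. This is a minimal sketch of my own, not a method from the post: it builds bigram counts from a toy corpus and predicts a plausible next word with no grammar and no model of meaning at all.

    from collections import Counter, defaultdict

    corpus = ("the whole is more than the sum of its parts "
              "the whole equals the sum of its parts").split()

    # Count how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict(word):
        """Most frequent follower of `word` in the corpus, or None."""
        followers = bigrams.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict("its"))  # 'parts'
    print(predict("the"))  # 'whole' (tied with 'sum'; first seen wins)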

But real progress at the core of any Bizarre Domain requires adopting Holistic Methods. Yes, they actually do exist. They are also called “Model Free Methods” and deserve to be discussed in a separate post.

(1) This is my own estimate. Some large AI conferences have had attendance in the 10,000-50,000 range, but while working in an AI department at a university it was clear to me that most researchers who labeled their work as “AI” would attend only a small fraction of the conferences that might have been useful to them. Add to this all “AI related” (by some suitable definition) work in industry. Multiply this fan-out factor by the number of subdisciplines, and over 60 years of history a million man-years starts to look like a clear underestimate.

(2) Attributed to Robert Brandom, but I have not been able to verify this.

(3) Friedrich Hayek received the Nobel Prize for telling us things like this.
