The Minority Viewpoint of AGI
It seems like there are hundreds of ideas about how to create an Artificial Intelligence. But if we examine the foundations of AI, and look at the tacit assumptions of historical AI projects, we discover something interesting.
Before we even get into the AGI (Artificial General Intelligence) field, there are two major decisions that have been made for us. First is the question of whether AGI is possible at all. Those who claim it is not have made arguments that are basically variations of Dualist thinking, claiming that there is a soul, or something much like it, that computers just cannot have. Everyone in the AGI field is, by actively working on the problem, demonstrating that they believe the opposite.
Then there is what I would call “Weak AI” or “Practical AI”. This is the kind of AI that works today. It’s largely the legacy of the AI research of the seventies and early eighties, and consists to a large extent of a bag of tricks, a catalog of programming techniques, that can be used to make computers exhibit surface behaviors that mimic what humans would do in specific circumstances. Examples would be expert systems that approve bank loans or detect credit card fraud, and AI in games that computes how a non-player character should behave in a battle.
But the real contest in AI is in the field of "Strong" AI, in the quest for Artificial General Intelligence. This is what I'll be talking about henceforth.
The Dichotomies that Define AGI
Philosophy is full of dichotomies, which is why philosophers never run out of things to talk about; for every major idea, there seems to exist an equal and opposite idea. Philosophers have given some of these ideas names, and in each case they form pairs that define the distinction under debate. If you want to work in AI, then you need to make at least a half-dozen of these important decisions.
The most important dichotomies are the Reductionist / Holist split, the Symbolic / Subsymbolic split, the Essentialist / Nominalist split, the Instructionist / Selectionist split, the Infallible / Fallible split, and the Logic / Intuition split (which could also be called the Reasoning / Understanding split). We'll examine some of these in more detail in later blog entries. Let's pick one of them, say the Essentialist / Nominalist dichotomy, to use as an example.
If you are an Essentialist, you believe that there is such a thing as a dog, and that most dogs share the properties of "dogness", if you will. Nominalists in turn say there is no such thing as a dog; it's just a label that we apply to things that look enough like dogs. Essentialists get in trouble at the borders - is a statue of a dog a dog? How about a wolf-dog hybrid? A dead dog? Nominalists on the other hand cannot really say much about dogs, since they don't believe in a prototype dog with specific properties that you could use for logical reasoning about what dogs are or what they can do.
For now, let's treat these dichotomies as opaque labels for our binary dimensions of choice. If we take one dichotomy we can imagine two squares side by side. You can stand in one or the other - you can be an Essentialist or a Nominalist. If we add another dichotomy, we can make a two-by-two checkerboard of four combinations.
If we want to add a third dichotomy we need two copies of that entire two-by-two checkerboard, which gives us a 4 x 2 board. As we add more dimensions, each a binary choice, we can illustrate those with larger and larger boards. At 6 dichotomies we have a checkerboard of 64 squares. Let’s say that this is enough for now.
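The checkerboard arithmetic above can be sketched in a few lines of code. This is just an illustration of the counting argument: each dichotomy is a binary choice, so six dichotomies give 2^6 = 64 squares, with the two "pure" corners at opposite ends of the enumeration.

```python
# Enumerating the "checkerboard" of philosophical stances.
# Each dichotomy is a binary choice; six of them yield 2**6 = 64 squares.
from itertools import product

dichotomies = [
    ("Reductionist", "Holist"),
    ("Symbolic", "Subsymbolic"),
    ("Essentialist", "Nominalist"),
    ("Instructionist", "Selectionist"),
    ("Infallible", "Fallible"),
    ("Logic", "Intuition"),
]

squares = list(product(*dichotomies))
print(len(squares))    # 64 squares on the board
print(squares[0])      # one corner: all first choices
print(squares[-1])     # the diagonally opposite corner: all second choices
```

Note that adding a seventh dichotomy would double the board again; the number of possible stances grows exponentially while, as we'll see, almost everyone crowds into one square.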
These decisions are very important. If you believe one thing and then change your mind halfway through an AI project, you would have to throw away what you had created so far and start over.
When we examine the largest existing AI theories, and the largest active AI projects in the world, we get a minor surprise. Most popular theories and projects are standing in the same square. Almost all of them are Reductionist, Symbolic, Essentialist, Instructionist, Infallible, and Logic-based, and they agree on many of the other existing dichotomies that I have decided not to discuss here.
This should not have been a surprise. If you select any one of the above, then you pretty much have to select the others, since doing otherwise would give rise to internal conflicts - both in your mind and in the AI systems you are creating. If you want to create a symbolic world model, say an Ontology, then you'd better be an Essentialist, since Nominalists wouldn't want to attach properties - essences - to symbols. And if you believe your symbol for "dog" captures all aspects of dogness, then you'd view a borderline or corner case as a problem requiring attention and refinement of the concept.
There may well have been AGI efforts that chose differently in one or two of these dimensions. I imagine working on such projects involved repeated and heated design discussions as the conflicts inherent in the mixed-mode theory manifested themselves as impossible choices at the code level.
So most AI theories and most popular AI projects start from the same handful of assumptions, and therefore stand on the same square of our checkerboard. Fair enough; there's room for all. We can safely call this the Majority View of AI.
But what is this? Over in the very opposite corner, there are some other theories and a few projects. Granted, not that many.
It turns out that selecting the opposite answer for each dichotomy gives you a second set of theories that are internally just as consistent as the majority view AI theories are. These theories and projects are almost all Holistic, Subsymbolic, Nominalist, Selectionist, Fallible, and Intuition-based. This is the minority view corner. As long as we've had AI research, we've had activity in both corners.
But why is there so much more activity in the Majority View corner? There are several reasons. One is that if you are a programmer, then this is where you start out. In order to build, say, ontology-based systems you need to learn very little; you can immediately start programming. And the mere fact that you are a programmer makes it likely that you prefer Reductionist methods. In order to move to the other corner you'd have to study quite a lot of Philosophy and Epistemology, some Neuroscience, etc., which constitutes a barrier that keeps most people from shifting viewpoint.
People in the Minority View corner may have started as Biologists, Psychologists or even Philosophers. They are less inclined to start AI projects since they may not have sufficient background and interest in programming and computer science. Proponents of the Minority View can claim that their theories are more Biologically Plausible than the Majority view theories. If you build a system according to the Minority View, it will have more similarities with brains than Majority View systems.
When I discuss my minority view AI theory with people from the majority camp, I have to explain everything down to the basics of holistic thinking, and I never seem to get through to them; they are often very skeptical. My "unproven theory of AI" is about as consistent as any Reductionist, majority view "unproven theory of AI", so I rarely get any outright arguments against my ideas; a typical response is "show me".
In contrast, when I discuss my theories with competent people from other fields, such as Biology, they will nod their heads vigorously and say, “Of course, all along I’ve been thinking it has to be something like this”. This is also true for many people without any science education, since these theories are quite congruent with naive ideas of how the mind works.
I think that aversion to Holistic thought and Model Free Methods is an occupational hazard for people working in AI. Holistic thinking is regarded as unscientific fluff and it is more difficult to get funded; better to toe the party line. You need to get the Holistic Thinking meme early in your AI career, since once the Reductionist meme package takes hold it will very effectively block these competing ideas.
It is time for this to change. Systems based on Model Free Methods and Holistic Patterns are starting to show results. The proof of the Four Color Theorem and the breakthrough performance of Google's machine translation systems are early signs of this. I'll talk more about this in another blog entry.