003-Artificial Intelligence and the importance of philosophy

How can we develop an artificial consciousness, or an artificial (general) intelligence (AI)?  * [*I use artificial intelligence, AI, throughout many of these essays.  AI is interchangeable with machine consciousness, computer consciousness, computer intelligence, or artificial mind; they all refer to the same kind of thing.  The way I think of an AI is as a computer that runs programs and interacts with its environment.  The magical creature story (#002) illustrates what an AI might look like - a stand-alone computer box running software.]

Can we develop an artificial intelligence without confronting the problems and conundrums of philosophy that underlie concepts of mind and consciousness?  I don't believe we can.

We must confront, and RESOLVE, problems that have been the purview of philosophy.  We must deal with those problems directly.  The problems of making an artificial intelligence are problems of physics, math, computer science, and biology.  But we cannot ignore the philosophical issues that arise when thinking about intelligence, mind, and consciousness.  We must consider those problems first, because underneath physics, math, computer science, and biology lurks philosophy.

If we begin working on an artificial mind and our viewpoint of the world is a materialist conception, that viewpoint will guide us down particular paths of development.  A materialist point of view is natural when pursuing a computer-based artificial intelligence.  But a materialist starting point cuts us off from non-materialist problems of experience.  For instance, what is the materialist explanation for the paranormal?  Or for "make believe"?  Or for imaginary numbers?  Or the color magenta?  Or doubt?

If we view the world as following mathematical laws, that view will direct us toward certain kinds of problems, approaches, and solutions for developing an artificial mind.  Mathematical solutions may not be actual solutions to any philosophical issues of mind.  A mathematics-based approach will produce mathematical outcomes, such as algorithms, for problems that may be best treated in other modes of expression.  For instance, what is a mathematical explanation for pretending, or for pink unicorns, or love, or obstinacy?

A computer-based simulation of a biological brain is going to produce problems too.  Even if we simulated a brain, where are the experiences and memories and personality kept in that brain?  Certainly, experiences must be manifesting from our brains and bodies, but how does a simulation instantiate a mind without a body?  What would cause a mind to emerge from a bodiless brain simulation?  Even if a mind would emerge from a simulated brain, we don't know HOW it emerges.  Consciousness and minds would still appear as some kind of magic or alchemy.  In fact, this idea that something emerges from some set of conditions is a fancy way of saying "And then something magic happens."

What we must do is go back to first principles.  We must start at the beginning of knowledge so that we can explain how knowledge itself can even exist; because it is only minds that know.  Mathematics, physics, chemistry, computer science: these are not fields about knowing.  They are fields about things that are known.  Mathematics is ignorant of how mathematics itself exists.  We must deal with the concepts of knowing and ignorance directly.

Every idea we have about how experience works, about what living means, about what consciousness and the self are, about the way the universe works, is tested in the crucible of creating an artificial mind.  To create an artificial mind, we must have a theory of intelligence, mind, and consciousness that answers all the questions we can throw at it.  If our theories do not answer the wide variety of questions we have about life, consciousness, meaning, and existence, then the theories must at least provide a path to understanding how these questions exist, how the questions have meaning, and how the questions play out in artificial and non-artificial consciousness.  We must know how questions exist.

As an example: How can we construct an artificial mind that has a moral dimension?  What would morality mean to an alien life form like a computer mind?  How would an artificial mind learn morality?  What is happening in our brains when we learn morality?

Or, how does an artificial mind abandon ideas as it learns and develops?  Human beings do this somewhat easily, and so must an artificial intelligence.  How does an AI come to discern something that might be "good" but is also "false"?  Many people do this very thing with their mythical, religious, and cultural ideas.  For example, Santa is good, but Santa is also false.

In some discussions of artificial intelligence, there is said to be a problem of nuance, or of understanding common sense.  But there is no theory that delineates some kinds of learning into categories of nuance and "common sense".  Creating an artificial mind will require explaining how an intelligence GENERATES common sense.

The notion that there is "common sense" is a statement that means nothing more than "people believe things".  The problem with an artificial intelligence is how to generate "believing" at all.  Our current crop of artificial intelligences do not engage in acts of belief.  And they do not engage in acts of disbelief.  So how could an AI suspend disbelief to actually experience a story or a movie?  What are the neurons in our brains doing when we suspend disbelief?

To make an artificial intelligence requires a top-to-bottom model of mind that can be reproduced with hardware and software.  Gaps in our ability to describe functions, behavior, and experience with a coherent model will lead to failures to develop an artificial mind.  Gaps in our models of consciousness or intelligence force us into guessing how a behavior is produced through some lower-level function.  What we find in all the research is that guesses do not lead to workable implementations.  Guesses lead to more questions, not to answers.

We do not guess at our experiences at all.  We have experiences.  There is no guessing whether an experience happens or not.  We might question what an experience means, or whether what we see and feel is "real".  But there is no doubt that we see, that we smell, that we feel, that we think.  An artificial mind must be equally aware; it must experience for itself.

Most of the approaches to date come from the idea that we do not need to solve the same problems the body and evolution have solved to create consciousness; we just need to solve the functional problems of consciousness.  This is most often expressed with the airplane argument.  Human beings achieved mechanical flight not by imitating birds, but by expanding on the notion of gliding.  The Wright brothers invented powered gliding.  And powered gliding is functionally the same as flight.  This idea that if we understand the functions of something, we can then reproduce those functions mechanically, is an argument by analogy.

Reasoning by analogy can be useful for pointing us in new directions, but it is not a substitute for understanding and discovery from first principles.

The belief that a function which models some behavior or human action is sufficient for producing an artificial intelligence is not grounded in fact.  Functional models, particularly physical functional models, are not explanations of experience.  Functional models describe what happens in experience.  Functional models are representations of what happens in experience, but they are not experiences.  Turning to functional descriptions of behavior as explanations of experience is a kind of Cargo Cult view of mind: if we just clear a long, wide path, set up some lights, and speak into a box, metal birds will land and bring us riches.

Physical functional descriptions are not experiential descriptions.  It is our experiences that happen.  We are affected by our bodies and our environments in our experiences; our bodies and the environment are not merely inputs to functions.  Our bodies and environments are part of experience.  The functional duplication of behaviors is not experience.  A functional substitute for an experience is not the experience.

Robotic cars that "drive themselves" are not in fact driving.  Robot cars perform functions necessary for driving.  Robot cars cannot recognize when they are driving or when they are crashing.  It is the engineers who are the recognizers, and who then program conditions into software to produce functional changes.

Robot cars do not know what a road is.  Robotic cars do not know how a road differs from a path, or what the difference is between a road to nirvana and a path to enlightenment.  This is not a problem of linguistics and the definition of words.  This is a problem of experience.  Unless a computer system can make its own associations from its own experiences, it can never KNOW anything; it can only do; it is only ever functional.*  [*This is where we would make the philosophical zombie argument, but David Chalmers does it so much better than we do.]

Functional achievements in AI will never get us to AIs that have experiences, unless we have a function that describes experience itself.  A function which describes experience itself must deal with the layers, and the meaningfulness of the layers, of experience.  The idea of meaning must be dealt with directly.  To have an understanding of what meaning is, we must have a functional description of how meaning happens.

What does a function of meaning look like?  How can we describe meaning without first describing experience?  If we do not know what the function of meaning is, how could we make an AI that has meaningful experiences? 

For instance, will we have fast-driving robotic cars and slow-driving robotic cars?  Will our current approach to robotic cars produce cars that drive too fast for their own good?  Will robotic cars be able to drive Matchbox cars?  Am I going to start driving this discussion of robot cars into the ground?  Can robotic cars "drive a discussion"?  An artificial mind must understand meaning, and it must have experience.  How else will it be able to drive anything of its own into the ground?  How else will it learn to stop driving around in circles?

A physically oriented functional approach never gets us to robots that like to dance.  "Joy from dancing" is not the result of a physical function but is the result of experience.  The idea that one day I may like dancing, and the next day I may not want to dance ever again, is a believable experience.  Could we create a function for that?

A real mind, artificial or not, must have experiences.  And through experience, a mind builds up knowledge, abilities, and more EXPERIENCES.  This is a recursive proposition.  Experience itself is the problem that we must confront in creating an artificial mind.  Otherwise our artificial intelligences will always be functional automatons that mimic experiences but never have them (i.e., zombies).  That is okay for many problems, but it does not solve the deeper problem of understanding consciousness, intelligence, and how to create a mind.

What we need is a functional description of experience itself.  Assuming that physical functions come first never gets us over the gap of what experience is, or how it occurs.  We must understand what experience is first, in functional terms, and then we can start looking at what kinds of physical processes instantiate experience functions.  * [*This is another requirement for a theory and model of consciousness: a theory must describe experience functionally.]

There is a corollary to this assertion: experience functions can then be treated in a physically agnostic way.  We know that biological organisms instantiate consciousness.  Thus, if there is a function of experience, that function gives us the possibility that other kinds of physical processes, such as computational ones, can also instantiate experience functions and produce consciousness.  That is, any functional description of experience should be agnostic to the physical processes or functions which instantiate it.  Experience should be describable apart from a reliance upon biological or machine processes, but a good theory should show how both kinds of physical processes can instantiate consciousness.