A series of problems and misconceptions creeps into any discussion of consciousness and mind when we begin from somewhere other than the critical fact of representation and how representations must be instantiated from a non-representational substrate.

First, we accept that representations are real.  That is demonstrated.
Second, we accept that representations, whatever kinds of representation there may be, are representations and not something else.  Truth and falsity are representations; they are not physical phenomena.  There are no "truth" particles or "falsity" wavelengths.  True and false are non-iconic representations, whereas the red blood of a battle-scene painting is an iconic representation.
Third, we accept that there is a physical substrate which exists, and which by obvious evidence seems to produce representational experience in physical organisms.

How representational instantiation happens is dealt with elsewhere.  Here we want to show how certain approaches cannot produce workable models for how representations can be instantiated in physical substrates. 

-----

Materialism fails to explain the fact of representations, of ideas and the nature of experience.  

The simple argument is that it is not necessary for physical processes to have ideas.  There is no causal purpose requiring that molecules form structures which have ideas.  Except that the ideas already exist, so there must be some mechanism by which ideas are instantiated in non-representational objects, in physical material.


The Earth rotates, and this is what CAUSES night and day.  In fact, night is not a thing at all.  It is merely the blocking of sunlight by the Earth itself.  At night, we are in the Earth's shadow with respect to the nearest star, the sun.  But we are not in the Earth's shadow with respect to farther stars.  And in a description of planetary rotation and radiation emitted from the sun, it is never necessary to speak about night or the concepts of night and day.

Night is a unicorn. 

But day and night are part of our history as organisms.  Day and night affect our chemistry.  Who has not had nighttime thoughts, when the world and life seem hopeless?  Or the flip side, when spectacular ideas fill our minds?

But "night" is, at best, a kind of side effect of planetary rotation. 

This is the mystery of representation.  We intrinsically experience the horrors and pleasures of nighttime, just as we experience the joys of sunrises, and morning, and waking.  These are primeval experiences.  ALL of our representations about night and day are the result of our planet's rotation.  There is no mystery in it.  But we experience a mystery.  Our biology and chemistry give us a different experience in daytime and in nighttime.  The experience is not chemistry or planetary rotation, so is it an illusion?

When we improve our representations, we find that our new representations are better than our old representations.  A spherical earth is a better, more accurate, and more consistent representation than a flat earth.  The argument of materialism is that the material world, the scientific explanations of the world, and the scientific process used to explain the world provide better explanations than non-materialist and non-scientific explanations.  But these explanations are not categorically better; they are different, with some portions better explained.

There is no materialist explanation of night that is meaningful beyond that it is the result of planetary rotation in the orbit of a single star.  Circadian rhythms, sleeping, our sense of night and day, our emotional responses to night and day, are all side-effects of this physical process.  The associations and representations that we make about day and night, the poetry, and emotional experiences, the fears, and the joys are fabrications from a material fact.  Fabrications are made up; fabrications are not illusions. 

Materialism demands that experience be reduced to material facts.  And when emotional and dramatic experiences are reduced to material facts, the meaning of those experiences vanishes.  Nighttime, reduced to a material fact, is just planetary rotation.  Why should we engage in magical thinking about planetary rotation (nighttime) events?  Nighttime events occur at some point during a planet's rotational cycle.  There is nothing meaningful or magical about planetary rotation.  One position on a spinning earth is not somehow more menacing or less menacing, more exciting or less exciting, more enlightening or less enlightening than any other position.

In this context, "night" doesn't mean anything at all.  But we experience "night time" differently.  Night is something we experience in much the same way we can experience a unicorn. 

As a general rule, representations we create to describe our experiences resulting from material facts are more true than mystical or magical explanations or the stories we tell.  Ghost stories are nighttime stories.  Is there any evidence that planetary rotation should CAUSE ghosts to appear more often?  Does the absence of sunlight, because of planetary rotation, CAUSE ghosts to appear?  This is a nonsensical question.  Ghosts come out at night... ghosts do not come out because of planetary rotation.  These ways of representing experience show us how powerfully a materialist viewpoint can dispel magical thinking.  It is magical thinking to treat nighttime as a thing in itself, to treat night as some state of being, when it is not a state of being but a result of planetary rotation.  The planet is not in a state of day or night at all.

It is this conundrum of representation and our experience and emotion which is contradicted by the materialist perspective.  Materialism gives us more accurate physical models and representations.  That is its power.  But it does not give us a way to understand representation itself.  That is the failure of materialism.  Materialism fails to explain how nighttime can be meaningful.  That is its weakness.





-----
Computational models and logical approaches   

The basic computational model for brain and mind, which includes such theories as the global workspace and algorithmic and mathematical approaches, has serious flaws because these approaches are all post-representational.  That is, these models rely on pre-existing representational processes for the models to work.  But we know that the physics contains no pre-existing representations.  Representation is something that gets instantiated by physics and is not a feature of physics.

Programming and computational approaches rely on logic and, either explicitly or implicitly, on grammars.  The power of context-free grammars is well proven and established [Introduction to Automata Theory, Languages, and Computation. John Hopcroft and Jeffrey Ullman, 1979].
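To make the dependency concrete, here is a minimal sketch in Python of a recognizer for a toy context-free language (the grammar and the sample language are hypothetical illustrations, not from the cited text).  The structural point is that GRAMMAR must exist, fully formed and external to the recognizer, before the system can recognize anything at all, which is exactly the pre-existence problem raised below.

    # A toy grammar for the language a^n b^n, supplied from outside.
    # The recognizer cannot run, or mean anything, without it.
    GRAMMAR = {
        "S": [["a", "S", "b"], []],   # S -> 'a' S 'b' | (empty)
    }

    def match(symbol, text, pos):
        """Match one symbol at pos; return the new position or None.
        (Simplified recursive descent; sufficient for this grammar.)"""
        if symbol in GRAMMAR:                  # nonterminal: try each rule
            for rule in GRAMMAR[symbol]:
                p = pos
                for sym in rule:
                    p = match(sym, text, p)
                    if p is None:
                        break
                else:
                    return p
            return None
        if pos < len(text) and text[pos] == symbol:
            return pos + 1                     # terminal matched literally
        return None

    def recognizes(text):
        return match("S", text, 0) == len(text)

    print(recognizes("aabb"))   # True
    print(recognizes("aab"))    # False

The rules live in the programmer's head and in the GRAMMAR table; nothing in the running system generates or owns them.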

All language-related approaches to understanding how representation occurs are necessarily flawed.  For some system to use a language or grammar to achieve representational ability requires that the language or grammar pre-exist the system.  It also requires that the automata rules pre-exist the system.  This presents us with a basic problem: where are the grammars or languages and rules?

In biological organisms, there is no location where the rules of functioning are stored.  Certainly there is DNA, which encodes information for protein expression, but where are the rules for DNA formation?  There simply are no rules or grammars for biology.  The rules and grammar for biology are the underlying chemistry, which means the underlying physics.  At best we could say molecular binding processes are the rules of biology.  But where are the rules for the binding of atoms together to form molecules?  Indeed, there are no such "rules".  The formation of molecules is a function of the underlying particle physics of electrons.  Are there rules for electron behavior?  No.  The electromagnetic force is not a rule that controls the behavior of electrons; the electromagnetic force is a description of how electrons act.

I'm saying this very poorly, so let me be clear.  The classical view of physics and the universe is as a system that follows certain rules, that if we were to discover them, we could use those rules to describe and predict events in the universe.   This view is wrong.  It implicitly requires that there is some set of rules that are magically affecting the behavior of particles and atoms in the universe and forcing them to behave in certain ways.  Which leads us to the question, where are these rules that force physical phenomena to behave the way they do?   And the answer is, there are no such rules.  Particles and atoms behave the way they do because of their intrinsic nature, not because of extrinsic rules.  

All computational models make exactly the same kind of error.  The view in computational approaches is that by discovering what the extrinsic rules of thought and mind are, we can then simulate or model those same rules with a context-free grammar and automata and thereby create minds which think.  The problem with this approach is: where are the rules?

We discover so-called rules or forces of physics or physical phenomena and think the phenomena follow the rules.  But in fact, the rule is just and only a description of the phenomena.  The rule cannot force the phenomena to happen.  The rules for the "forces" of physics in fact have no causal power, because they are just ideas.  The rules may in fact be deeply flawed, and thus not true.  The rules do not go away because of that.  The idea of a universal ether in which light travels is still an idea in physics; it is just an idea that is wrong.  The idea continues to exist; its status has merely changed.  That idea does not determine how phenomena happened in the past, or happen now.  The same is true of our modern descriptions of physical phenomena.

The modern crop of computational and logical models has this same flaw.  The assumption is that the universe is a kind of simulation that follows certain rules, and that if we copy that simulation we will be doing the same thing the physical phenomena are doing and therefore will create artificial intelligences.  The universe is not a simulation.  We simulate the universe.  How can we develop a machine consciousness that simulates the universe if it is constrained by pre-existing simulation rules?  The answer is, we can't.


This computational and logical approach also suffers from the practical effect of Gödel's theorem.  An artificial intelligence cannot have the axioms of its functioning kept in the minds of programmers, because it will then be limited by the constraints of those axioms.  But what we know about representational ability is that we are not limited to logical rules and axioms.  We can prove theorems, which means we exist outside the bounds of Gödel's theorem.  We are not constrained to the axioms we are programmed with.  Any approach which relies on grammars will be so constrained and will therefore be purely a machine.


Next, the algorithmic models of computation exclude the kinds of representational phenomena we see in organisms.  Specifically, computation excludes the asemic and the stigmergic.  More specifically, computations have end points.  In some sense, computations are stories: computations are series of actions that produce some result.  And computational systems, such as Turing machines, adhere to this basic principle.

But what we find with organisms is that there are no endpoints.  From the cell on up to human beings, organisms are not linear at all.  Rather, organisms are deeply ecological and homeostatic.  The interactions, whether the network-like interactions in the cell as shown in systems biology or the network interactions in the brain, are like circular webs.  And the purpose these webs serve is to maintain their own functioning.

With logic and computation, by contrast, the purpose of computation is to achieve an endpoint, either a cyclic endpoint or a linear endpoint of computation.  This model is easily illustrated by the tree structure of computations and computational development using grammars.  Trees do not loop back on themselves and change the meanings of their grammars.  But this is certainly a feature of human intelligence and consciousness and even of cellular functioning.  (Salamander regeneration shows this phenomenon, as does human development.  GABA, for instance, changes its effects in brains as human beings mature [The role and the mechanism of γ-aminobutyric acid during central nervous system development. Ke Li, En Xu. Neuroscience Bulletin, June 2008, Volume 24, Issue 3, pp 195-200].)  While it is possible to create automata models that have changing values, current models are not automata with mutating rules.

The endpoint, conclusion, or result form of computational models raises the question: where is the endpoint?  And this is the problem with global workspace theory.  Where do the results from computations in the global workspace go?  When a computation completes, what does that mean?  In terms of a computer system, the results of a computation are meaningless.  The results of a computation are meaningful to a programmer or to a user, but the computational results are meaningless to the computer.  Which leads us to ask: where are the results of the computation reported?

Just as the rules in a computation are extrinsic features of the computation meaningful to the programmer, the results of a computation are extrinsic to the computation and are meaningful to the programmer.  It is a disconnect between the rules and the results which informs a programmer that a program has a bug, or that a logical argument has a flaw, or that a proof is erroneous.  A computer which relies on computation in the traditional sense can never know this.

Lastly, a computational process cannot, categorically, have bugs.  The idea of a bug is extrinsic to the computational process.  The nature of bugs and errors is not contained in either a logical argument or any algorithm.  The correctness of a computation is a feature that is extrinsic to the computation itself.  And the natural response to an algorithm that does not produce the desired result is to start developing exception rules, and then more exception rules.  But this process is not one the computer or computational process engages in; it is one the programmer engages in.

These extrinsic rules, answers, and correctness issues are invisible to the computational and algorithmic development processes.  But these problems are fundamental to representation making and the process of representation making itself.  The simple game of "Simon says" can illustrate just how easy it is to move into a world of instructions, when we are actually operating in a world of much richer representations that contain instructions, grammars, algorithms, and computations.

As we will see later, the solution to the computational problems listed above is to auto-generate automata that care about the outputs of other automata in a self-sustaining, homeostatic system.


-----


This extrinsic-rules problem echoes the ideas of Cartesian duality and the argument Descartes made about God's existence.  If there are extrinsic rules of physical phenomena, then where are these rules?  And the obvious answer is that they must be God's rules.  But even having extrinsic rules of phenomena leaves us with the deeper problem of how life and consciousness arise and exist.  Appealing to extrinsic rules for the functions of life and of consciousness presents us with a deeply unresolvable problem: if the rules of functioning for physics and consciousness are in fact extrinsic, that implies that we cannot produce the phenomenon of machine consciousness, because what produces consciousness is an extrinsic rule that is outside of the simulation God has made called our Universe.  However, we have demonstrated that Descartes's initial conclusions are deeply flawed, and thus so are his further conclusions based on the Cogito argument.


-----


Emergence.  

Emergence is an argument that also implicitly relies upon the idea that physical phenomena are subject to extrinsic rules, and that given some condition, a special phenomenon will emerge based on those extrinsic rules.  The question to ask about this is: where are the rules that govern the behavior of emergent phenomena?  And how do the extrinsic rules "force" or cause the emergent phenomena to occur?

The conclusion is that there are no extrinsic rules which "force" emergent phenomena to occur in certain physical conditions.  Rather, the emergent phenomenon is not emergent at all, but is a natural outcome of the underlying atomic interactions.

The emergent argument for machine consciousness is that once the computational simulations (for brain simulations) or algorithmic approaches are produced, consciousness in a machine will automatically emerge.  Or rather, consciousness will magically emerge. Emergent arguments simply do not explain either the physical or computational processes that are necessary to give rise to consciousness.  

In fact, emergent arguments do not explain the more fundamental problem of how ideas or representations come about.  And this is the key problem: how do representations exist at all?

Emergent machine consciousness simply ignores this question by proposing that if the right representations interact on a computer in a specific representational way, representation making and representations themselves will come into existence, as if by some magic loopback process the representations made by programmers in computers will auto-generate representations of their own and become conscious.  Meaning, if we do X, then something magic happens and we get Y.

To produce machine consciousness, we must adhere to the principle of "no magical happenings".


-----
Bit representation and computational neural networks


We have already shown that neural structures (built up of neurons and glial cells, for example) must form the representations of objects and actions that we experience.

And it must be at the level of the individual cell or node that connections are grown, that signaling between cells is modified, and so on, and this process is stigmergic.  (There is no "signal"; it is all interactions of molecules at that level.  Stigmergy is not signaling; it is itself representational processing.)

In a deep-learning computational "neural net", what do we see?  A cell connected to other cells.  At each layer, each cell in that layer is connected to all the cells in another layer.  The cells in layer 1 can be treated as a string of bits, or values.  The cells in layer 2 read a string or value and produce an output from it.  E.g. 1010100110010001101111010110111001011100111 ; 1

This is the simplest kind of representation, but how does this representation get created?  It starts like this:  _____________________________ ; 1     and then the input values get modified to only produce outputs in certain states.


E.g. as a string:  __11001______1__00_____1___1 ; 1  where each _ is a value that can be either 0 or 1.  In deep-learning neural networks (which are not really networks at all, but processed data sets or matrices) an algorithm is used to determine what values to respond to, that is, how to assign the values to the responses.

As a corollary, each cell may keep a set of strings or vectors for the input layer and have responses to each set.  Arguably, using a generic like "_" for either value is more efficient at reducing the number of sets a cell will respond to.  In this way, each cell behaves as a simple automaton.  The nodes of these computational neural nets can be thought of as cellular-automaton structures with a complex rule, where the neighbors are not spatially near but are the cells of the neural-net matrix connected to that cell.
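A minimal sketch of this node-as-automaton picture, assuming the wildcard encoding just described (the pattern string is the one from the example above; the function names are hypothetical):

    def node_fires(patterns, input_bits):
        """Fire (return 1) if the input string matches any stored
        pattern, where '_' accepts either '0' or '1'."""
        for pattern in patterns:
            if len(pattern) == len(input_bits) and all(
                p == "_" or p == b for p, b in zip(pattern, input_bits)
            ):
                return 1
        return 0

    cell_patterns = ["__11001______1__00_____1___1"]
    probe = cell_patterns[0].replace("_", "0")   # one input this cell accepts
    print(node_fires(cell_patterns, probe))              # 1
    print(node_fires(cell_patterns, "0" * len(probe)))   # 0: fixed 1-bits unmet

Each such node is a fixed-rule automaton: the set of strings it answers to is its entire behavior.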

If a cell in layer 2 can receive two signal values from the preceding layer, one for the signal and one for "importance", which is like a decaying chemical signal in systems biology, then the cell could bias its responses.  It also means that cells at every layer could receive side messages that affect the processing of each.  This behavior would look very stigmergic.  Such stigmergic message propagation may be able to produce "pathways" through the cellular levels.
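As a sketch of that two-channel idea, assuming a per-connection "importance" trace that decays each step like a diffusing chemical (the class, names, and decay constant are hypothetical):

    DECAY = 0.9  # assumed per-step decay of the importance trace

    class Connection:
        def __init__(self):
            self.importance = 0.0

        def deliver(self, signal, boost=0.0):
            """Carry a signal alongside a decaying importance mark;
            the mark biases how strongly this input is weighed."""
            self.importance = self.importance * DECAY + boost
            return signal * (1.0 + self.importance)

    conn = Connection()
    print(conn.deliver(1.0, boost=0.5))  # 1.5: freshly marked input counts extra
    print(conn.deliver(1.0))             # 1.45: the mark fades on its own

Nothing here is a message about anything; it is deposited residue that later activity happens to run through, which is what makes it stigmergic rather than symbolic.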

In a human, the whole neural network is not accessed by each cell, but is grown or trimmed, so that cells create structures of signaling, e.g. 10101110111010110101 ; 1, but each cell has a different group of cells it connects to, without regard to "levels".  Thus we capture three facts of representation: connected with signal, connected with no signal, and not connected.  The not-connected representation is critical in understanding representation, but it is not obvious when looking at networks.  Not being connected can be very meaningful (representational) but has no meaning structurally.

Actual neural cells modify their receptivity and expressivity all the time.  By transporting more vesicles of neurotransmitters to be released at the cell surface, the cell increases its "signal" (there is no literal 1.0 signal).  Also, by modifying the cellular membrane, the cell can create more protein receptors for various neurotransmitters, which have various effects.  It is not that the cell runs an algorithm.  The cell modifies its membrane structure stigmergically in response to systems-biology-level chemical changes.

But the core issue is learning.  Learning is creating a representation, at the cell-to-cell level, and within the cell.  What are deep-learning computational neural net structures doing?  Is it learning?  Is it representation?  (It is certainly not representation, because it cannot be arbitrary and it cannot deviate from the algorithm employed; no wrong answers are allowed.)

In an actual cell, the action potential is probably regulated by a variety of factors.  The cell regulates itself, and then neurotransmitters affect when the action potential reaches a threshold.

So instead of 1101010101001010010011111 ; 1, where each connection between cells receives a value and one value is expressed, we would have something like:    summation[91761610002001001893474] ; [expression]183931013044039837278393

Here each digit represents a value that affects the action potential, and once the action potential changes the membrane, the axon synapses release a varying quantity of neurotransmitters at the other end, represented by each of the expression digits.  Of course, at each firing of the potential, the expressing synapses may release different quantities of neurotransmitters.  But for practical purposes, each digit on the output is a value that expresses a release at each synapse, or at a group of synapses, or is a received value at whatever connected receiving neuron synapses.

For instance, a neuron in level 1 may have 10 axon synapses (expressing), but 1 "connects" to one receiving neuron's synapse and the other 9 "connect" to another neuron with 9 receiving synapses.  So for these three neurons, where the level 1 neuron produces an output like 1234567891, the receiving level 2 neurons receive inputs like 9 and 123456781, and then the receiving neurons respond to these received neurotransmitters in different ways.  The first level 2 neuron, which receives the 9, may only fire on values less than 5 (neurons can signal when there is no synaptic signal, and the synaptic signal can actually suppress neurotransmission), while the other may only fire on summation values greater than 15.
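A sketch of this summation/expression picture, with the thresholds taken from the example above (the digit values and function names are hypothetical illustrations):

    def neuron(received_digits, fire_rule, expression_digits):
        """Sum the per-synapse neurotransmitter values; if the cell's
        own rule holds, express a quantity at each output synapse."""
        total = sum(received_digits)
        return expression_digits if fire_rule(total) else []

    # First level-2 neuron: one synapse carrying 9; it fires only on
    # totals below 5 (signal as suppression), so here it stays silent.
    print(neuron([9], lambda s: s < 5, [3, 1, 4]))                        # []

    # Second level-2 neuron: nine synapses (123456781); fires on totals > 15.
    print(neuron([1, 2, 3, 4, 5, 6, 7, 8, 1], lambda s: s > 15, [2, 2]))  # [2, 2]

Note that the "output" is not a single bit but a per-synapse pattern of release, and the firing rule belongs to the receiving cell, not to the network.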



What does this mean?  For one, this multi-connected cell-to-cell network is the kind of structure that would be selected for in a randomized connection and disconnection process.  A biological network would optimize its connections between cells as a side-effect of cellular homeostasis, versus optimizing the values of hard-coded connections between cells.  Moreover, the development of connections is deeply representational.  Biological neurons are creating structures, in their membranes, that are at least as important as the actual signals or neurotransmitters propagating from neuron to neuron.  A neuron that "fires" when there is no signal is powerfully different from one that fires when there is some signaling going on.

If this fact of biological neurons were modeled in a deep-learning neural network, we should expect to see nodes of the network producing outputs when there is no input at all.  And isn't that what representation is?  Something from nothing?  One thing (the absence of a signal) as another thing (a signal)?  Nothing as something.  And we deeply care about absences.  We deeply care when the kids get suspiciously quiet, and this tells us we must have some signal-producing mechanism that activates in the absence of signals, a node which signals when it receives no signal.

For a deep-learning neural network to work this way, it must be massively more connected, and have many cases where connected neurons respond to strings like this:  000000000000000000 -> 1.  But the sheer volume of connections and processing will overwhelm the development of such networks.  The more optimal strategy is to create network structures.  Then "learning algorithms" are not about managing the values of fixed connections but about creating the right connections.  The neurons themselves may behave in much more predictable ways, i.e., there are neurons like 00000000000 ; 1 and 10101110101010101 ; 1 and 1111111111 ; 0, and how they wire up determines what the representation is.  We put the representational problem into building connections instead of into balancing or biasing the neuron-to-neuron signals to get representational values.
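A sketch of wiring-as-representation, assuming a few fixed node types, including one that fires on silence (all the rules and names here are hypothetical illustrations):

    def absence_detector(bits):      # "00000000000 ; 1": fires on no signal
        return 1 if not any(bits) else 0

    def pattern_detector(bits):      # fires on one specific mixed pattern
        return 1 if bits == [1, 0, 1, 0, 1] else 0

    def saturation_detector(bits):   # "1111111111 ; 0": goes quiet when flooded
        return 0 if all(bits) else 1

    # The representation lives in which detector is wired to which source,
    # not in tuned weights:
    wiring = {"quiet_room": absence_detector, "doorbell": pattern_detector}
    print(wiring["quiet_room"]([0, 0, 0, 0, 0]))   # 1: the absence itself signals
    print(wiring["doorbell"]([1, 0, 1, 0, 1]))     # 1

Learning, on this view, would mean rewiring the dictionary, not re-weighting the arithmetic inside the nodes.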

If we balance the neuron-to-neuron signals, where does the learning occur?  Where do the representations take place?  The representations must be in a kind of algorithm if we balance, or reinforce, or back-propagate the neuron-to-neuron signals.  But who decides what the algorithm is?  Not the neurons.  So the representational problem is moved off to the "selector" of algorithms and of which results are valued.  That isn't really learning or representation by a computer at all.

For any algorithmic approach to work, the algorithms must be created from simple first conditions… algorithms must be selected for because they are valuable to the learning system itself, and not because they are valued by us.  You can't learn to see if you can't fiddle around with what you are seeing.  It's the inverse of the awareness-as-power principle: for a system to have awareness it must be able to change its representations.

If a deep-learning network is to have awareness, it must be able to change its algorithms and what they mean, or have so many networks as to capture a variety of algorithms and meanings from all of them.  So how does a deep-learning network make up its algorithms?  That is the basic problem: computational neural networks do not create their own algorithms.  The representations must be made by the computer and not by human beings.

Which means that deep-learning neural networks, or any other kind of computational neural network, will simply not be able to instantiate consciousness, because they cannot instantiate representations of their own.  These computational approaches produce outcomes whose values mean something to human beings, but are meaningless to the computational networks themselves.

Something that might appear to be a better approach is to add another layer in each node that develops what its algorithms are, i.e. a proto-algorithmic level.  But all these algorithmic approaches just push the homunculus further down, when the actual homunculus is the programmer.

If computational neural nets auto-generated themselves, then whatever results they came up with would be auto-meaningful.  But that isn't how algorithms work.  There is no "evaluation" algorithm these approaches can take whereby "wrong" answers in an auto-generated computational neural net produce "meaningful" outcomes.  Meaning is extrinsic to deep-learning computational neural networks, and indeed to any computational neural network.

The second issue with computational neural networks is that the node connections are fixed.  In reality, the node connections of an actual brain are not fixed, but are grown and vary over time.  This growth and modification of network connections is a key feature of representation.  The neural network approach, by contrast, is not about representation itself, but about the values of representations, and for that computational neural networks can be perfectly useful.  Valuation is not the problem of consciousness, or more precisely, of representation making.  Computational neural networks are incapable of making representations.  But they are a valuable computational tool for assigning values algorithmically to representations which already exist.

The fixed-nodes problem of artificial neural networks constrains the network.  Adding and subtracting nodes would destroy an ANN's "learning".  However, adaptive learning is the critical role of neurons.  That is why ANNs fail as a model for representation making: they do not build new connections to each other.  The algorithm that controls learning is not about embodiment or homeostasis or new connective structures, but about finding the right balance between existing nodes to correlate with an externally verified representation.

Actual neurons build new connections and drop connections because of their homeostatic impulses.  The stigmergic interactive process and the internal molecular interactions of the cell and membrane affect the behavior of the cell.  These processes "choose" how a cell develops new connections and what its "signal-sending algorithm" may be.  Factually, the cell does not choose in a classical sense, nor does it have any algorithms.  The whole cell is a complex of molecular interactions and steady-state-driven networks of systems biology.  [Introduction to Systems Biology, Uri Alon, 2007]

This may be one reason we have limited neurogenesis.  Neurons coming into and going out of existence would affect representation formation and instantiation to a degree far greater than adding connections between neurons does.


-----

The problems of mathematics and computation:  bits from nothing


By following the rule of "no magical happenings", we encounter a particular problem when we rely on any mathematical approach to creating a computational or machine consciousness.

Consider these two inputs of binary values:

0001 and 0011.  If we "add" these numbers together, what do we get?

0100 is the result of integer addition.  00010011 is the result of string addition.  Which way should representation happen?  Relying on a cellular stigmergic model to instantiate a representational simulation environment, which approach is best?  Or more simply, how should a machine consciousness do addition?
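Both readings of "add", as a short sketch in Python:

    a, b = "0001", "0011"

    # Integer addition: the inputs are consumed; 1 + 3 = 4, and the
    # original eight bits are gone.
    print(format(int(a, 2) + int(b, 2), "04b"))   # '0100'

    # String addition: nothing is consumed; both inputs survive,
    # intact and in order, inside the result.
    print(a + b)                                  # '00010011'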


How is cell-to-cell interaction done in biology?  The primary answer is that molecules are split apart and put together, mostly by proteins in cells, to produce the interactions and functions of the cells: from growing microtubules to extend dendrites, to producing neurotransmitters and ionic gates in the cell membranes.  The cell is engaged in a complex interaction of homeostasis that produces phenomena and leads to interactions between cells.  All of these interactions can be thought of as molecules being traded between cells, and the molecules have different stigmergic effects in the cells.

The cell is a physical object, and thus adheres to the conservation of mass.  The cell does not create any molecules out of nothing, nor are molecules destroyed.  In everything the cell does, all the atoms in the cell and that are part of the cell itself are conserved.  In the cell, two molecules do not come together and, once combined, lose mass.  When an enzyme such as enolase, in the process of glycolysis, catalyzes 2-phosphoglycerate (2PG) to phosphoenolpyruvate (PEP), what happens?  The 2PG molecule binds to enolase, and this changes the enolase enzyme to enolase+2PG.  Enolase+2PG changes shape from its enolase structure and in the process alters the binding relationships at the 2PG site.  These alterations produce a release of water, producing the molecule enolase+2PG-H2O, which is unstable.  Enolase+2PG-H2O is the same as enolase+PEP, and PEP is released from enolase because of its instability, returning enolase to its original structure.  [Structural and mechanistic studies of enolase. George H. Reed, Russell R. Poyner, Todd M. Larsen, Joseph E. Wedekind, Ivan Rayment] [http://en.wikipedia.org/wiki/Enolase] [The Machinery of Life, David Goodsell, 1992]



Thus: enolase + 2PG -> enolase+2PG -> enolase+PEP + H2O -> enolase + PEP + H2O.  The mass of enolase plus 2PG is the same as the mass of enolase plus PEP plus H2O.
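The conservation claim can be checked by counting atoms, using the standard molecular formulas for 2-phosphoglycerate (C3H7O7P), phosphoenolpyruvate (C3H5O6P), and water; the enzyme appears unchanged on both sides, so only the substrate and products need counting (a small sketch, not from the cited sources):

    from collections import Counter

    TWO_PG = Counter({"C": 3, "H": 7, "O": 7, "P": 1})   # 2-phosphoglycerate
    PEP    = Counter({"C": 3, "H": 5, "O": 6, "P": 1})   # phosphoenolpyruvate
    WATER  = Counter({"H": 2, "O": 1})

    # Every atom entering the reaction leaves it: 2PG == PEP + H2O, atom by atom.
    print(TWO_PG == PEP + WATER)   # True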





All the molecular interactions of a cell follow the same basic model we see in enolase's role in glycolysis: a process of interaction that produces outputs with the same total substance as the inputs.  In chemistry, the atoms themselves stay the same; it is just the arrangements which change.  So not only is mass conserved, but the atomic elements are conserved as well.

In a computational process, especially one which relies on cell like structures, any process of the cells which causes inputs and outputs to appear and disappear looks like magic.  

0001 + 0011 ->  0100  is a loss of bits.   The bits 0001 and 0011 and "+"  all exist at the beginning of the function, but only 0100 exists at the end.   Where did the bits go?   The bits magically disappeared in the mathematical function.  This is exactly why mathematics is representational.  

The neurons in a brain are not making bits appear and disappear by performing functions.  There are no bits in the neurons or in any of the cellular and molecular structures of the brain.  The mass of the brain is conserved.  Molecules do leave: the respiration of the cells produces carbon dioxide, which is removed from the cell and eventually exhaled by the body.  But nowhere in the biological process do mathematical or computational bits disappear.  The mathematics is a set of functions completely separate from, but dependent on, the cellular functioning, which explicitly conserves mass and is not mathematical.

For computation to be used to create representations from non-representational starting points requires that we maintain all the bits (or the bytes) that are used to produce the representations, in the same way that cells do not make molecules appear and disappear to create networks, signal each other, and instantiate representations and experience.

When you drink a cup of coffee, those caffeine molecules interact with molecules on and in cells and produce changes to neural cell functioning.  But this is not a change where the "value" of a cell was altered to produce a different behavior.  The cell itself actually interacts with caffeine molecules that change its functioning.  That is, molecules interact and go through changes, and that is the totality of the cell's functioning.

Mathematical functions which cause bits to appear and disappear create all kinds of hidden problems and force us into a point of view which requires an extrinsic model of functioning to explain when, where, and why bits appear and disappear.  And these extrinsic approaches always get us back to the problem of duality and homunculi.

For this reason, any computational machine consciousness must be bit- or byte-conserving.  And therefore non-conserving mathematical approaches to producing machine consciousness will not work.  The problem of bit or byte conservation is a major obstacle that mathematical approaches all seem to overlook in the problem of creating representations.

The mathematical approaches create representations out of nothing, and cause us to ask: why are these the right or wrong or useful representations?  Why do these representations exist?  The answer invariably requires invoking some extrinsic purpose.  But we know there cannot be an extrinsic purpose to the functioning of organic cells and organisms, because the functioning is all driven by the atoms and the physics.


-----
How do you get to numbers and to logic?

Physical processes do not create numbers and logic; there is no causal reason to.
Logical and mathematical approaches cannot produce logic and numbers either, because that violates Gödel's theorem and is a circular enterprise: we can't make the axioms from the mathematics itself.
Also, any instance of creating axioms with mathematics relies on a computational or physical substrate, which violates the above rule.  That is, the substrate is a substitution for the axiom, but not a progenitor of it.

The only solution is that representations such as numbers and logic exist independent of the instances of their expressions.  Which means the problem is: how do we instantiate numbers and logic from a non-representational substrate?

The reverse is also true: math, logic, and numbers cannot be used to instantiate physical phenomena, because they find expression symbolically, and the expression is already a physical thing.  This shows that the deeper issue is understanding representation itself, not relying on one class of representations, like mathematics or logic, to somehow produce all kinds of representations, functions, actions, and ideas.

Said another way, computation is a subset of representation.  Representation is not a subset of computation.


-----

Logic is only one class of representational objects.  There are many others that must exist, and some may even be symbolic.  Syntax, for instance, cannot be derived from logic.  Syntax precedes logic.  The non-symbolic forms of communication and function production are all going to be extra-logical: a logic can be used to describe them, but it has no functional power over the non-symbolic communication process.

This is easily demonstrated with improvisation and improvisation exercises, where meaning can change.  But it is also demonstrated by many kinds of art making, and by other asemic experiences and activities.  [Impro: Improvisation and the Theatre, Keith Johnstone, 1987]




-----

The problem of state and states:

States are illusions.  Focusing on states removes the focus from the elements, and in physics the elements themselves are what matter.  E.g., protein folding is driven by the electromagnetic structure of atoms; see VSEPR theory as a model of the arrangement of atoms in molecules.  The electron structure produces folding through electron orbital change or the introduction or loss of atoms in a molecule: (a b) -> c.  In computer theory, states are representations.  The state is not a computer-generated structure but a human-generated idea.  States are illusions.  All computations are atomic.  The structuring of computation (transformations) looks like state (the state of a deterministic finite automaton, a DFA, for instance), but it is not.  The halting problem is not resolved via state approaches but via homeostatic development of computations.

This approach is forced by the conservation of mass in physics.  And it must be attained by a conservation of bytes in computation: no magic transformations of bytes as we could do with mathematics.  E.g. 8 + 3 = 11 is wrong; (0100 + 0011) = 0111 is wrong.  Instead:  (0100 0011) -> 01000011

Higher-order representations such as addition must be achieved not through computation directly, but through the development of representational structures that do mathematics, the reason being that the rule for addition is not an atomic computation.
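A sketch of what a byte-conserving primitive could look like, with "addition" left to higher-order structure (the function names and the design are assumptions, not an established method):

    def combine(a: str, b: str) -> str:
        """The only primitive: rearrangement. Every input bit survives."""
        return a + b

    def conserved(inputs, output) -> bool:
        """Audit that no bits appeared or vanished in a transformation."""
        return sum(len(s) for s in inputs) == len(output)

    result = combine("0100", "0011")
    print(result)                                # '01000011'
    print(conserved(["0100", "0011"], result))   # True

Anything that deserves the name "addition" would then have to be built as a structure over such conserved material, rather than as a primitive that destroys its inputs.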

States are explicitly ideas.  States are representations of phenomena that are descriptive.  But states are not functions.  A state describes what happens; it is not functionally causative.  Functional causation appears to always be the result of fundamental objects and functions (e.g. physical forces: EM, gravity, etc.), whereas states are extrinsic conditions that we assert cause physical and computational phenomena when they actually don't.

The idea of the sun moving across the sky, of night and day, is deeply illustrative of how state thinking is embedded in our language and viewpoints.  But none of the state notions of the moving sun, day, night etc are actual things; they are all unicorns.  

State ideas are unicorns. 

When a state is referenced as a physical phenomenon, the reference actually reflects the subjective perception of that phenomenon.  Subjective perception is limited compared to the physical facts.  We accept that limit as definitive when referring to a state, but the details of physical phenomena never reduce to a state.  The conceptual reduction of phenomena to a state is just that, conceptual.  State descriptions are an idea of how physical phenomena occur that is a matter of convenience, or the result of the constraints on our conceptual and representational abilities.

-----

Why machines cannot be in states: because there is no such physical thing as a machine.  A machine is a concept.  The concept is not in a state.  The concept refers to a set of objects and functions, and it is the objects and functions that matter.  From a physics or computation point of view, we can't get to representation if we assume states, because states are already representations.  To get to representations and representation making, state ideas must be abandoned in favor of a lower-level, non-representational, combinatorial process of transformation and reflection of objects and functions.  We cannot have a perspective that comes out of set theory.  We have to look at representation making as a process that occurs from element-driven regulation, from atomized computation.  Only from this starting point can these combinations of molecules or computational atoms develop representations of their own, including representations of states.

States are not properties intrinsic to non-representational phenomena.  States are a way that we describe such phenomena.  Thus machines cannot be in states in a self-generative process, because states are some of the representational objects the machine must generate.  Asserting states as initial conditions puts us into the problem of infinite recursion: the first state arises from what?  No, the solution is that physical and atomic computational processes instantiate representational functions and, later, representations as objects.  States are one class of those objects.


-----

The brain as pattern recognizer:

This is a lay viewpoint: that what makes us conscious or representational is that the brain is a pattern-recognition machine, and thus pattern-recognition machines are the way to achieve representation making and consciousness.  This view ignores the many and varied kinds of brain activity and experience which are clearly not pattern recognition.  A machine consciousness is not a pattern-recognition process but must be a constant representation-making/dreaming process, into which interruptions to the constant representation making must be incorporated.

The problem with the pattern view is: what is the pattern?  Where is the pattern?   We run down a rabbit hole of patterns. 

Instead we should think of the system as a constant simulation/representation/dreaming that gets embodied, where inputs are incorporated into that constant representation-making process and simulation.  Pattern recognition is thus a side-show of the homeostatic representation-making process that is going on.  It is important to survival, and thus inputs get matched to existing representational processes, and the whole system looks to be homeostatic and sensible.

One of the fascinating things about dreams is that even though they may be bizarre, they are internally sensible.  This "sensibility" is the key feature of a representation-making system.  Dreams adjust when they don't make sense.  Insensibility distorts dreaming, just as insensibility distorts daily life.

It is only on waking that the insensibility of dreams becomes apparent, and then we start to make sense of those experiences, to make them sensible by representing them as dreams for one thing.  

The key realization about the problems with the pattern-recognizer idea is that our consciousness, our representational simulation, is not a thing that is on or off, but is always happening.  Representation making is always occurring.  We are always dreaming.  The difference when we are awake is that we have inputs and outputs that interrupt and create conflicts with our dreaming, and this requires us to make sense of, build structures for, and make representations of these inputs and outputs so that the whole representation-making process continues.  Because that is what is happening: representations are always going on, and thus, for the representation-making organism to survive, the representations must tend toward sensibility in the large.



-----

Will, Purpose, and "Soul"

These are illusions.  Like state, these make appeals to an extrinsic and causal force that inserts itself in the functioning of physical phenomena and produces effects.   There is simply no way to talk about such things meaningfully, let alone how to take that "force" and apply it to the problem of machine consciousness.  Will, Purpose, and "Soul" as motive forces are not descriptive of their method of action, or how they function.  

Instead, what we need is a model that shows how representations arise at all, and how those representations produce causal changes to the physical substrate in which the representations are instantiated.   Once this process is understood, then we can look at how the representations of Will and Purpose are instantiated to create the kind of causal effects that Will and Purpose clearly do have in the physical world. 

I am not suggesting that Will or Purpose or "Soul" do not exist, but what I am saying is that they are representations.  The problem is how do those representations produce causal effects?   Asserting that Will and Purpose and "Soul" have causal power in a pre-explanatory way puts the cart before the horse, and tells us nothing about how Will actually works.    It is an argument along the lines that something must be moving the sun across the sky, and we call that something Will or Purpose, or "Soul".  

This argument is also a subtle variation on Descartes's Cogito argument: something moves, therefore there is a mover, and that mover is Will, Purpose, or "Soul".  Rather, we need to pay attention only to the objects moving and the details and facts of that movement to determine how the phenomena we observe, which include the human-produced physical phenomena that arise from Will and Purpose, occur.


 
-----


Representations qua representations have no causal power.  Representations, as representations only, exert no force on physical phenomena.

This is true because causation and force are themselves ideas, without any superiority over some other idea.  What is the mechanism that makes one idea more important than another idea, but itself just an idea?  Thus representations themselves are inert.

The other reason is that the path of effect for non-representational phenomena, such as molecules and atoms, must be from other molecules and atoms.  Therefore, for representations to have any effect on the physical, it must be because some physical structure itself instantiates the representation, and this instantiation also instantiates the physical effect that would give the representation the ability to force physical phenomena to occur.

The assertion that ideas somehow affect or cause non-representational processes to occur is itself just an idea.  There is no demonstrable basis for the fact, apart from the embodiment of ideas by organisms which act upon those ideas.  

-----

Searle's Chinese Room:

Searle's Chinese Room doesn't work in practice because the person inside would engage in embodied anticipation.

Anticipation of whatever kind starts off as asemic representation.  An input means something, but the person inside doesn't know what.  Over time the inputs and the outputs become associated together: the inputs induce anticipation of outputs.  These anticipations become more specific over time, less asemic.

Where are these anticipations?  They are embodied in the translator: Aw:x = x;y

This relationship is embodied and shows attention.  When the translator sees x, x is y in the new language.  This is the development of asemic anticipation that leads to actual translation.
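A sketch of such anticipation-building, assuming a simple count-based association structure (the words, counts, and novelty test are hypothetical illustrations):

    from collections import defaultdict, Counter

    pairings = defaultdict(Counter)   # the embodied structure: input -> outputs seen

    def experience(x, y):
        pairings[x][y] += 1           # each input/output pairing firms the association

    def anticipate(x):
        """Novel inputs anticipate nothing; familiar inputs anticipate
        their most-reinforced output: less and less asemic over time."""
        if x not in pairings:
            return None               # novel: no anticipation yet
        return pairings[x].most_common(1)[0][0]

    experience("ma", "horse")
    experience("ma", "horse")
    print(anticipate("ma"))    # 'horse': anticipation before any rulebook lookup
    print(anticipate("shu"))   # None: novelty, measured against the structure

This also gives the novelty/familiarity measure discussed below a concrete home: a word is novel exactly where the structure has nothing for it.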

Stigmergic action must be the method of producing embodied anticipations.  The signals of the input and output words the translator experiences produce structure-building and signal-forming actions in the experiencer, in the organism.

Stigmergic phenomena are not representational to the organism.  Stigmergic phenomena are the continuous functional interactions of an organism's fundamental parts.  For a cell, it is the fluid and membrane molecules continually interacting.  For multicellular organisms, it is the interaction between and by the cells, driven by the intracellular processes and affected by the structure of cell formations, which are side-effect outcomes of the stigmergic systems-biology processes of the cell.

Lots of embodied structures can interact and thus inter-affect the structure-building processes stigmergically.  Anticipation is inter-structure signaling that affects inter-structure signaling stigmergically.  It is building circuits to achieve "meaning".  Meaning is just better anticipation.

The measure of correctness of an anticipation is whether a word is novel or familiar.  Then where it is novel and where it is familiar can be found in the structure, the embodied circuit/network representations of that word.

The stigmergic phenomena are intercellular and intracellular.  The asemic anticipation is embodied as the organism.  For the organism to know it is doing translation, it must do the same asemic/stigmergic process on itself and its own behaviors.  It must embody representations of itself and its actions, its inputs and its outputs, its environment.

All algorithmic approaches fail without embodiment.  Embodiment fails without asemia.  The learning process is asemic.  Embodiment also fails without stigmergy.  Homeostasis of the body is the primary function of stigmergy.  Learning is a side-effect outcome of better homeostasis.  Homeostasis itself is not a goal but is the selective outcome for enduring cells.  Multicellular organisms are a homeostatic outcome that increases cellular survival (evolutionarily selected).  Structures of cells are the result of stigmergic processes that increase homeostasis.  Structures of cells, of neural cells, are the embodied learning (representations), the embodied anticipations.

Algorithms fail because of where the representation happens.  Where are the representation and the representation making happening?  For algorithmic approaches, the answer is "not in the box".  Therefore the box argument shows a failure of consciousness for algorithmic approaches, because the thing in the box following the algorithm is not aware, is not making representations, and is not embodying representations.  (Which would not be the case if a person were in the box.)

-----

A maxim:  try to see the thing itself and not our representations of it - Dasein.