
Arthur C. Clarke 4/9                        ART MATRIX - LIGHTLINK
http://www.lightlink.com/theproof           PO 880 Ithaca, NY 14851-0880
                                            (607) 277-0959      Voice
                                            (607) 277-8913      Fax
                                            (607) 277-5026      Modems
                                            homer@lightlink.com E-mail
                                            jes@lightlink.com   E-mail

                                            01/22/07 2:52pm

     Dear Esteemed Sir,

     I wish to take up in more detail the 'simple but convoluted'
question of whether a machine can verify with certainty its own
existence.

     This can be answered by analyzing a prior question: can a
machine verify with certainty its own circuitry?

     Consider a simple learning machine, with two video cameras for
learning inputs, whose purpose is to capture pictures of the world 'as
it is'.

     The two video cameras give it stereo vision thus allowing it to
record a continuous stream of standard visual attributes of the
alleged physical world around it such as color and distance.

     Other sensors give it temperature, humidity, atmospheric
pressure, quality of the air, and sound at the same time.

     Each picture taken of the external world is recorded with a
spacetime stamp derived from the Global Positioning System, detailing
the machine's exact location in space and time at the moment the
picture was taken.

     Thus later the owner of the machine can peruse the memory banks
of the machine and see that Goober walked past the machine's field of
view at 12 noon on Jan 1st, 2007.  Further, because the machine
recorded where IT was at that same time, it can compute from its
stereo vision where Goober was when he walked past the cameras.

     We would trust the recordings of such a machine, would we not?

     Good enough for a court of law?

     OK, so say during the night, when the machine isn't looking,
someone opens up the back and replaces a circuit chip with another one
intended to produce consistent but false readings in the recordings.

     The next day when Goober walks by, the machine records a clear
image of Dufus walking by instead.

     Not good, since this machine's recordings are going to be used in
a court of law, right?

     Ok, as a setup to the question under scrutiny, let's say that
when this machine was built it was given a purportedly complete and
accurate set of diagrams of all of its components, right down to the
last transistor.

     Put another way, it has a complete image of itself in its own
recordings.

     Further, let's say both video cameras of this machine are very
special cameras: they can see through and into any part of the
machine they want with x-ray vision.

     Thus in theory they should be able to compare the present-time
parts that make up the machine, and how they are connected with each
other, against the parts on the circuit diagrams.

     More simply the machine should be able to compare its present
state with its past 'known good' state and report any differences
found.
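     The comparison described above can be sketched in a few lines.
This is purely illustrative code, not the machine's actual design;
the part names and the dictionary representation of the circuit
diagrams are invented for the example.

```python
# Illustrative sketch: compare a present-time inventory of parts
# against the stored 'known good' circuit diagrams.  All names here
# are invented for the example.

def diff_against_specs(current, specs):
    """Return the set of parts that differ between the machine's
    present state and its recorded 'known good' state."""
    discrepancies = set()
    # Parts present now but absent from the diagrams, or vice versa.
    discrepancies |= set(current) ^ set(specs)
    # Parts present in both, but wired or configured differently.
    for part in set(current) & set(specs):
        if current[part] != specs[part]:
            discrepancies.add(part)
    return discrepancies

specs   = {"chip_a": "rev1", "chip_b": "rev1"}
current = {"chip_a": "rev1", "chip_b": "rev2"}   # swapped in the night
print(diff_against_specs(current, specs))        # {'chip_b'}
```

     Note that this sketch silently assumes the checking code itself
is trustworthy, which is exactly the assumption called into question
below.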

     We assume a few things.

     We assume that all circuits are working properly, because if
they are not, how is a misbehaving circuit going to properly report
on the condition of other circuits?

     Yes the machine has two cameras for redundancy in case one fails.
 
     The probability that there are errors in both cameras and the
circuits that connect them to the machine is less than the
probability that only one camera is bad.
 
     But there is always going to be a finite, non-zero probability
that all cameras are bad, no matter how many cameras the machine has!

     For example if the machine had 10 cameras, and 9 reported one
thing, and 1 reported another, normal probabilities would indicate to
trust the majority report.  But trust is not certainty.  And if someone
has messed with the machine intentionally, then all bets are off.
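     The majority-vote logic can be sketched as follows.  This is a
hedged illustration only; the camera reports are invented.

```python
from collections import Counter

# A sketch of majority voting among redundant sensors.  The vote
# yields trust, never certainty: if all cameras were tampered with,
# a unanimous report would still be wrong.
def majority_report(reports):
    """Return the most common report and its share of the vote."""
    winner, count = Counter(reports).most_common(1)[0]
    return winner, count / len(reports)

reports = ["Goober"] * 9 + ["Dufus"]
print(majority_report(reports))   # ('Goober', 0.9)
```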

     Because you can't witness the causation between two events
merely by witnessing the two events themselves, a circuit can not
verify itself, for it can not verify the causal pathways necessary to
its functioning properly.

     By this we mean that observing effects will never give one direct
perception of the causation between the effect and its cause.

     Thus just because the end effect is that the circuitry reports all
is well and good, it is only a theory that this was produced by a
properly working machine.

     Thus it becomes impossible for a machine to verify its own
conformance to original specs EVEN IF IT HAS ACCESS TO THOSE SPECS TO
COMPARE TO.

     It can't even verify that it HAS original specs, because even if
it had a secret copy and the public copy was changed, it couldn't
know with absolute certainty that the comparison circuitry itself was
reporting properly.
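     The weakness of the comparison circuitry can be made concrete
with a small sketch.  Both checkers below are invented for
illustration; the point is that their outputs are indistinguishable
to the machine that must run one of them.

```python
# An honest checker and a tampered checker.  The machine has no
# untampered vantage point from which to tell which one it is
# actually running.

def honest_checker(current, specs):
    return "OK" if current == specs else "TAMPERED"

def tampered_checker(current, specs):
    # Reports 'all is well' unconditionally, hiding any changes.
    return "OK"

specs   = {"chip_a": "rev1"}
current = {"chip_a": "rev2"}             # actually modified

print(honest_checker(current, specs))    # TAMPERED
print(tampered_checker(current, specs))  # OK
```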

     Worse, in this case the machine has been intentionally changed
to observe or report incorrectly, and when those changes were made,
the machine was also changed to incorrectly report that its circuitry
was NOT changed, even if at some point in its investigation it gets
some hint that it was changed.

     Thus even if its circuit diagrams were left intact by the intruder,
the changed machine wouldn't report properly anyhow.

     And if the embedded circuit diagrams were changed to match the
changes made to the circuits themselves, then spotting the changes
becomes utterly hopeless.  The machine could go off happily
hallucinating little green Martians everywhere, and consider itself
quite sane, because its observations of itself match exactly its
original specs, which it knows are correct because its maker said
they were!

     Thus we must conclude that a machine can not verify its own
operational integrity at any time or under any circumstances.

     Thus if the machine reports that it exists or does not exist, then
any such report must always be in doubt.

     This raises however another question.  How can a machine report
that it exists unless it does?!  Doesn't the mere fact of any report at
all necessarily imply the machine's existence?

     The answer is not to confuse the machine itself with an observer of
the machine, particularly a conscious observer!

     The observer of the machine may rightly conclude that the machine
exists because it reports that it exists, but the same observer would
conclude that the machine exists even if the machine reports that it
doesn't exist!

     What we need to do is look at it from the point of view of the
machine.

     And we need to go back to basics.

     Learning is a relationship between two different objects; most
fundamentally, learning is a tracking in the state of one object of
the state of another object.

     If B is learning about A, then B's state must track with the state
of A.

     For example if A is red, B must change state to include a
representation of 'A is red'.  If A changes state to green, then B must
also change state to 'A is green'.  In this way the state of B tracks
the state of A and this tracking is the process of learning.

     Tracking is the result of a causal pathway between A and B; in
other words, A has an effect on B, A puts a causal imprint on B, B
changes state as a result of A, and thus B has 'learned' something
about A.  B's new state IS its learning about A.

     Any change in state at all in B caused by A can be considered
learning by B about A.

     More to the point, in the absence of any change in state in B,
there can be no learning at all about A.

     For example, B is moving along and passes A, and after the
encounter B is in exactly the same state as it was in before the
encounter with A.  Clearly B didn't learn anything about A.

     So in the above we call B the symbol and A the referent.

     A's being red is the referent state, and B's idea 'A is red' is the
symbol state.

     Two completely different objects, A and B.

     Two completely different states, 'A being red' and 'B thinks A is
red'.

     A is the learned about, and B is the learner.  The change in state
in B, is B's learning about A, which state in B acts as a symbol for
whatever learning it represents about A as the referent.

     Learning thus implies tracking between the symbol and the referent,
and that tracking must include a causal pathway between the referent and
the symbol or else no learning has taken place.

     Now notice A might be red, and B might have the idea 'A is red',
which would make B right, but unless B got the idea through a process of
learning (tracking BECAUSE of causation), then B's idea is mere
guesswork, or coincidental good luck.

     There is a difference between being right, and LEARNING you are
right.

     The first involves being in a state that happens to coincidentally
track for the moment the object you are learning about.

     The second demands that your state be a CAUSAL CONSEQUENCE of the
object you are learning about.

     Being right is meaningless and useless unless it is engendered
through the causal process of learning.
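     The distinction between being right and learning you are right
can be sketched as follows.  The classes are invented for
illustration; the point is that only the causal pathway (the
learn_from call) makes B's state track A's.

```python
# A minimal model of symbol and referent.  B 'learns' about A only
# when B's state changes BECAUSE of A's state, via a causal pathway.

class Referent:
    def __init__(self, color):
        self.color = color

class Symbol:
    def __init__(self):
        self.belief = None           # e.g. "A is red"

    def learn_from(self, referent):
        # The causal pathway: B's new state is a consequence of A.
        self.belief = f"A is {referent.color}"

A = Referent("red")
B = Symbol()
B.belief = "A is red"    # coincidentally right: no causal pathway
B.learn_from(A)          # now the belief is a causal consequence of A
A.color = "green"
B.learn_from(A)          # tracking: B changes state as A changes
print(B.belief)          # A is green
```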

     So we have a machine that wants to know if it exists, and in its
memory banks is a statement 'I exist'.

     The first object A is the machine existing.

     The second object B is the statement 'I exist' in that machine's
memory banks.

     Notice in this case the first and second object are the same object
because in this case the machine is trying to learn about itself!

     There is no problem with a single object playing both roles in the
causal pathway, referent and symbol, because in reality the symbol state
is a subset of the whole machine.

     So really one smaller part of the machine is learning about another
part of the machine or the 'machine as a whole'.

     Notice however that the referent state, the fact of the machine
existing, comes before the symbol state in time.
 
     There is a time distance (delay) between the machine existing and
its final report that it does exist.
 
     It takes time for the image of the existing machine to be
transferred through the video cameras and back into the machine's own
memory.
 
     Thus the symbol state of 'I exist' is a different event than the
referent state of the machine existing.

     Two different events happen here.
 
     One is the referent state of the machine existing, at which time
the symbol state doesn't yet exist.

     The second is a moment later in time, which is the symbol state,
which records the alleged fact that the machine exists a MOMENT BEFORE.

     Notice the symbol state NOW can not possibly be about the machine
state NOW, because there HAS to be some time distance between cause and
effect for learning to take place in the physical universe!
 
     So the symbol state represents 'I existed 1 second ago'; there
is no way for the machine to learn that it exists NOW, as there will
always be a time delay between the machine existing and the symbol
state recording that it existed.
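     The time lag can be sketched directly.  The delay value below is
invented; any real pathway through cameras, wires, and memory would
impose some non-zero delay of its own.

```python
import time

# Any self-report is stamped later than the state it reports on, so
# the symbol 'I exist' can only refer to the machine's existence a
# moment earlier.
def record_existence():
    referent_time = time.monotonic()   # the machine existing, now
    time.sleep(0.01)                   # cameras, wires, memory writes
    symbol_time = time.monotonic()     # the report lands in memory
    lag = symbol_time - referent_time
    return f"I existed {lag:.3f}s ago" # never 'I exist NOW'

print(record_existence())
```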

     Now we already know that being right is not sufficient to prove
learning, as the machine can say 'I exist' whether or not it has
bothered to learn that it has.

     And we admit that a machine could very well exist and yet still be
utterly incapable of learning that it exists with perfect certainty!

     And we know that an external observer would think that saying 'I
exist' would be sufficient evidence that the machine existed, and so it
would to the external observer.
 
     But the machine saying 'I don't exist' would equally be
sufficient evidence to the external observer that the machine
existed.

     What we want to know is, did the machine *LEARN* that it exists via
a causal pathway to itself, or did it just guess or happen to be right
by coincidence?

     If there is no causal pathway at all between the machine's memory
banks saying 'I exist' and the existence of the machine, then no
learning has taken place.

     If there is a causal pathway, if the machine concluded 'I exist'
BECAUSE it looked and interacted with itself via cause and effect, then
some learning has taken place.

     However we have already determined that no circuit can verify
itself, thus if the machine has been modified to report wrongly, or
worse randomly, then whatever it reports can not be trusted even if it
does exist and reports 'I exist'!

     Since the machine can not verify the integrity of the causal
pathway between its own existence and its report that 'I exist', it
can not be said to have learned with certainty of its own existence.

     And lastly, a machine can't even look at its own report that 'I
exist', and conclude from that report that it must exist.
 
     This is because, as already detailed, a machine can't be certain
of anything it observes at all, including its own statement that 'I
exist'.
 
     Thus the machine has no clue from that observation whether or not
its own statement actually exists, nor that it or anything really caused
the report in the first place.

     It is bad enough that effect never proves cause, but if you can't
even be certain of the effect, what hope can there ever be of being
certain of the cause?

     The machine has no light of self luminous consciousness.

     Without consciousness there is no certainty.

     Without certainty there is no consciousness.

     Final conclusions.

     One can never learn with certainty about a referent by looking at a
symbol unless the referent and the symbol are one and the same event,
with no spacetime dimensional separation between them.

     Trust can of course come from certainty, but certainty can not come
from trust.

     Mechanics of any kind consists of parts interacting via cause and
effect across a spacetime distance.  Parts that are separated by
spacetime distance are of necessity two or more different parts and thus
must learn about each other via cause and effect.

     Since effect does not prove cause, no part can ever learn with
certainty that any other part exists or is cause over the first part's
state.

     Worse, a machine can not even tell with perfect certainty that
it has changed state; thus without certainty of having been an
effect, how can it be certain there might have been a cause?

     Thus a machine can not be certain of anything.

     Since consciousness can be certain of red/green experience
differences and many other things, consciousness and the conscious unit
can not be a machine.

     By which we mean not a system of parts interacting via cause and
effect across a spacetime dimension.

     Which leaves us with the conclusion that the conscious unit is
zero dimensional.

     Your faithful servant,

     Homer Wilson Smith

------------------------------------------------------------------------
Homer Wilson Smith     The Paths of Lovers    Art Matrix - Lightlink
(607) 277-0959 KC2ITF        Cross            Internet Access, Ithaca NY
homer@lightlink.com    In the Line of Duty    http://www.lightlink.com

Sun Aug 12 01:40:56 EDT 2007