Arthur C. Clarke 4/9                        ART MATRIX - LIGHTLINK
http://www.lightlink.com/theproof           PO 880 Ithaca, NY 14851-0880
                                            (607) 277-0959      Voice
                                            (607) 277-8913      Fax
                                            (607) 277-5026      Modems
                                            homer@lightlink.com E-mail
                                            jes@lightlink.com   E-mail

                                            01/22/07 2:52pm

     Dear Esteemed Sir,

     I wish to take up in more detail the 'simple but convoluted'
question of whether a machine can verify with certainty its own
existence.

     This can be answered by analyzing a prior question: can a
machine verify with certainty its own circuitry?

     Consider a simple learning machine, with two video cameras for
learning inputs, whose purpose is to capture pictures of the world 'as
it is'.

     Its purpose, in other words, is to create an internal
subjective symbol in its own memory of the alleged external physical
referent 'out there'.

     The question then is: does the subjective symbol accurately
track the objective referent, and if so, can the machine verify this
with perfect certainty?

     In other words, can a machine know whether or not it is
hallucinating object existences?

     Say the subjective symbol inside the machine draws the picture
of a cube in the objective outside world; does the existence of the
symbol necessarily imply the actual existence of an objective cube?

     The two video cameras give the machine stereo vision thus allowing
it to record a continuous stream of standard visual attributes of the
alleged physical world around it such as color and distance.

     Other objective sensors give it subjective symbols for
temperature, humidity, atmospheric pressure, quality of the air, and
sound at the same time.

     Each picture that is taken of the external world is recorded
with a spacetime stamp taken from the global positioning system, which
details the machine's exact location in space and time at the moment
the picture was taken.

     Thus later the owner of the machine can peruse the memory banks
of the machine and see that Goober walked past the machine's field of
view at 12 noon on Jan 1st, 2007.
 
     Further, because the machine recorded where the machine was at that
same time, it can compute from its stereo vision, where Goober was when
he walked past the cameras.
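
     As a minimal sketch in Python (the field names here are my own
invention, not the machine's actual format), each recording might look
like:

    from dataclasses import dataclass

    @dataclass
    class Recording:
        # Spacetime stamp from the GPS receiver at capture time.
        latitude: float        # degrees
        longitude: float       # degrees
        altitude_m: float      # meters
        utc_time: str          # e.g. "2007-01-01T12:00:00Z"
        # Stereo pair, one frame per camera, for depth computation.
        left_frame: bytes
        right_frame: bytes
        # Objective sensor readings taken at the same moment.
        temperature_c: float
        humidity_pct: float
        pressure_hpa: float
        sound_db: float

     From the stereo pair the machine triangulates where Goober stood
relative to the cameras, and from the spacetime stamp it converts that
relative position into an absolute one.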

     We would trust the recordings of such a machine, would we not?

     Good enough for a court of law?

     OK, so say during the night, when the machine isn't looking,
someone opens up the back and replaces a circuit chip with another one
intended to produce consistent but false readings in the recordings.

     The next day when Goober walks by, the machine records a clear
image of Dufus walking by instead.

     Not good, since this machine's recordings are going to be used in a
court of law, right?

     OK, as a setup to the question under scrutiny, let's say that
when this machine was built it was given a purportedly complete and
accurate set of diagrams of all of its components, right down to the
last transistor.

     Put another way, it has a complete image of itself and its
internal causal pathways in its own recordings.

     Further, let's say both video cameras of this machine are very
special cameras: they can see through and into any part of the machine
they want via x-ray vision.

     Thus in theory they should be able to compare the present-time
parts that make up the machine, and how they are connected with each
other, against the parts in the circuit diagrams.

     More simply the machine should be able to compare its present state
with its past 'known good' state and report any differences found.
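
     A minimal sketch in Python of such a self-audit, assuming the
x-ray scan can be reduced to a list of (part, connection) pairs (the
function names are hypothetical):

    import hashlib

    def fingerprint(parts):
        # Hash a canonical, sorted rendering of the parts list.
        data = "\n".join(sorted(f"{p}:{c}" for p, c in parts))
        return hashlib.sha256(data.encode()).hexdigest()

    def audit(known_good, scanned):
        # Compare the present scan against the stored 'known good' specs.
        if fingerprint(known_good) == fingerprint(scanned):
            return "no differences found"
        changed = set(scanned) ^ set(known_good)
        return "differences found: " + str(sorted(changed))

     Note that the audit itself runs on the very circuits it is
auditing, which is exactly the trouble taken up below.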

     We assume a few things.

     We assume that all circuits are working properly, i.e. as
intended, because if they are not, how is a misbehaving circuit going
to properly report on the condition of other circuits?

     Yes the machine has two cameras for redundancy in case one fails.
 
     The probability that there are errors in both cameras and the
circuits that connect them to the machine is less than the probability
that only one camera is bad.
 
     But if one camera is bad, how is the machine going to know
which one it is?  Both cameras will be reporting that the other is
bad!

     The good camera will correctly report the error in the bad
camera, but the bad camera will incorrectly report an error
in the good camera.

     This is called the minority report problem, named after
the movie of the same name.

     The answer of course is to put in three cameras: if one goes
bad it will show errors in the other two, but the other two will show
errors only in the first.

     Thus the 'minority report' of the one bad camera has a higher
probability of being wrong than the probability that both of the other
cameras went bad at the same time.
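
     A minimal sketch in Python of the three-camera vote (what
engineers call triple modular redundancy; the identifiers are
illustrative):

    from collections import Counter

    def majority_report(readings):
        # Return the majority value and any dissenting minority report.
        counts = Counter(readings)
        winner, _ = counts.most_common(1)[0]
        minority = [r for r in readings if r != winner]
        return winner, minority

    # Three cameras; the third has gone bad.
    print(majority_report(["Goober", "Goober", "Dufus"]))
    # -> ('Goober', ['Dufus'])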

     This is why the Ramans always did things in threes.

     But is this a perfect certainty that the minority report
is the wrong answer?

     Clearly there is a non-zero probability that the minority
report is correct, as perhaps the other two broke at the same time in
the same way.

     So we wouldn't bet our eternity in hell that the majority report
is correct, now would we?

     In fact there will always be a finite, non-zero probability
that all cameras are bad, no matter how many cameras the machine has!

     For example if the machine had 10 cameras, and 9 reported one
thing, and 1 reported another, normal probabilities would indicate to
trust the majority report.  But trust is not certainty.  And if someone
has messed with the machine and its cameras intentionally, then all bets
are off.

     And that IS a perfect certainty.
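
     To put rough numbers on that, here is a worked sketch in Python,
assuming (and it is only an assumption) that each camera fails
independently with probability p:

    from math import comb

    def p_exactly_k_bad(n, k, p):
        # Binomial probability that exactly k of n cameras are bad.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    p = 0.01  # assumed per-camera failure probability
    print(p_exactly_k_bad(3, 1, p))  # one of three bad: ~0.029
    print(p_exactly_k_bad(3, 3, p))  # all three bad:     1e-06

     Small, but never zero; and if the failures are not independent,
say because an intruder rigged all the cameras the same night, this
arithmetic no longer applies at all.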

     Because witnessing events is not sufficient to witness
causation, meaning you can't witness the causation between two events
merely by witnessing the two events, a circuit can not verify itself,
because it can not verify the causal pathways necessary to its
functioning properly.

     By this we mean that observing effects will never give one direct
perception of the causation between the effect and its cause.

     Thus if a circuit is broken, it could report that it isn't
broken.
 
     Thus just because the end effect is that the circuitry reports
all is well and good, it is only a theory that this report was
produced by a properly working circuit.

     Since a machine is MADE only of circuits checking circuits, the
whole machine and everything it determines to be true is eternally
suspect.

     Thus it becomes impossible for a machine to verify its own
conformance to original specs EVEN IF IT HAS ACCESS TO THOSE SPECS TO
COMPARE TO.

     It can't even verify that it HAS original specs, because even if it
had a secret copy and the public copy was changed, it couldn't know for
absolute sure that the comparison circuitry itself was reporting
properly.
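
     A minimal sketch in Python of why the comparison circuitry
itself is the weak point (both functions are hypothetical stand-ins
for circuits):

    def honest_verify(scanned, specs):
        # A working comparison circuit.
        return scanned == specs

    def tampered_verify(scanned, specs):
        # The intruder's replacement: reports 'all is well'
        # no matter what it is shown.
        return True

    # The machine runs whichever verify() it actually contains,
    # and has no independent way to learn which one that is.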

     Worse, in this case the machine has been intentionally changed
to observe or report incorrectly; and when those changes were made,
the machine was also changed to incorrectly report that its circuitry
was NOT changed, even if at some point in its investigation it gets
some hint that it was changed.

     Thus even if its circuit diagrams were left intact by the intruder,
the changed machine wouldn't report properly anyhow.

     And if the embedded circuit diagrams were changed to match the
changes made to the circuits themselves, then spotting the changes
becomes utterly hopeless.  
 
     The machine could go off happily hallucinating little green
martians everywhere, and consider itself quite sane because its
observations of itself match exactly its original specs which it knows
are correct because its maker said they were!

     Even machines have religion.

     Thus we must conclude that a machine can not verify its own
operational integrity at any time and under any circumstances.

     Thus if the machine reports that it exists or does not exist, then
any such report must always be in doubt.

     This raises however another question.  How can a machine report
that it exists unless it does?!  Doesn't the mere fact of any report at
all necessarily imply the machine's existence?

     The answer is not to confuse the machine itself with an observer of
the machine, particularly a conscious observer!

     The observer of the machine may rightly conclude that the machine
exists because it reports that it exists, but the same observer would
conclude that the machine exists even if the machine reports that it
doesn't exist!

     The conscious observer would conclude the machine exists
no matter what the machine reports.

     What we need to do is look at it from the point of view of the
machine.

     And we need to go back to basics.

     This comes from basic Referent and Symbol theory which is not
covered here.

     In short, a referent is anything referred to by a symbol, and a
symbol is anything that refers to a referent.

     A referent is any object that exists in a state prior to the
symbol, and the symbol is any object that exists in a state after the
referent.

     For learning to take place between referent and symbol, there MUST
be a causal pathway between the original referent's state and the final
symbol's state, such that the symbol's state tracks accurately the
referent's prior state.

     Thus the symbol learns about the referent through the
referent's causal effects upon it, and the symbol's resulting state IS
that learning.

     The only thing any symbol can learn about a referent is how the
referent caused the symbol to change state.

     The symbol state is a rendition of some part of the referent's
state, and from the symbol state we can interpret the relevant part of
the referent's state that was involved in the causal pathway between
them.

     Thus to learn about the original referent we look at the symbol,
which we call the symbol of final authority as regards this particular
event of learning between referent and symbol.
 
     Learning is a relationship between two different objects; most
fundamentally, learning is a tracking in the state of one object about
the state of another object.

     If B is learning about A, then B's state must track with the state
of A.

     For example if A is red, B must change state to include a
representation of 'A is red'.
 
     If A changes state to green, then B must also change state to a
representation of 'A is green'.
 
     In this way the state of B tracks the state of A and this tracking
is the process of learning.

     Notice that just because A is red doesn't mean B must become
red; B need only change state to something as a result of A being
red.
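
     A minimal sketch in Python of this tracking, with the causal
pathway reduced to a method call (the class names are illustrative):

    class Referent:
        def __init__(self, color):
            self.color = color

    class Symbol:
        def __init__(self):
            self.state = None

        def learn_from(self, referent):
            # The causal imprint: B changes state as a result of A.
            # B does not itself turn red; it records 'A is red'.
            self.state = "A is " + referent.color

    a = Referent("red")
    b = Symbol()
    b.learn_from(a)   # b.state == "A is red"
    a.color = "green"
    b.learn_from(a)   # b.state == "A is green"; B tracks A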

     Symbols have their own qualities, different from the qualities
of the referents that they learn about; sometimes the qualities in the
symbol match the qualities they track in the referent, but usually
not.

     Thus you can't look at a quality in a symbol and conclude that
the referent necessarily has that quality too.

     You can only theorize that the quality in the symbol tracks the
causally related quality in the referent.

     Tracking is the result of a causal pathway between A and B.
 
     In other words A has an effect on B.
 
     A puts a causal imprint on B, B changes state as a result of A, and
thus B has 'learned' something about A.  
 
     B's new state IS its learning about A.

     Any change in state at all in B caused by A can be considered
learning by B about A.

     More to the point, in the absence of any change in state in B,
there can be no learning at all about A.

     For example, B is moving along and passes A, and after the
encounter B is in exactly the same state as it was in before the
encounter with A.  
 
     Clearly B didn't learn anything about A.

     So in the above we call B the symbol and A the referent.

     A's being red is the referent state, and B's idea 'A is red' is the
symbol state.

     Two completely different objects, A and B.

     Two completely different states, 'A being red' and 'B thinks A is
red'.

     A is the learned about, and B is the learner.  
 
     The change in state in B, is B's learning about A, and that state
in B acts as a symbol for whatever learning it represents about A as the
referent.

     Learning thus implies tracking between the symbol and the referent,
and that tracking must include a causal pathway between the referent and
the symbol or else no learning has taken place.

     Thus wherever there is learning there is causation between two
different objects.

     In the absence of causation there is no learning.

     More formally:

     Learning implies causation.

     No causation implies no learning.
 
     Causation implies learning.

     No learning implies no causation.

     Now notice A might be red, and B might have the idea 'A is red',
which would make B right, but unless B got the idea through a process of
learning (tracking BECAUSE of causation), then B's idea is mere
guesswork, or coincidental good luck.

     There is a difference between being right, and LEARNING you are
right.

     The first involves being in a state that happens, for the
moment, to coincidentally track the object you are learning about.

     The second demands that your state be a CAUSAL CONSEQUENCE of the
object you are learning about.

     Being right is meaningless and useless unless it is engendered
through the causal process of learning.

     So we have a machine that wants to know if it exists, and in
its memory banks is a statement 'I exist'.

     The first object A is the machine existing.

     The second object B is the statement 'I exist' in that machine's
memory banks.

     Notice in this case the first and second object are the same object
because in this case the machine is trying to learn about itself!

     There is no problem with a single object playing both roles in the
causal pathway, referent and symbol, because in reality the symbol state
is a subset of the whole machine.

     So really one smaller part of the machine is learning about another
part of the machine or the 'machine as a whole'.

     Notice however that the referent state, the fact of the machine
existing, comes before the symbol state in time.
 
     There is a time distance (delay) between the machine existing and
its final report that it does exist.
 
     It takes time for the image of the existing machine to be
transferred through the video cameras and back into the machine's own
memory.
 
     Thus the symbol state of 'I exist' is a different event than the
referent state of the machine existing.

     Two different events happen here.
 
     One is the referent state of the machine existing, at which time
the symbol state doesn't yet exist.

     The second is a moment later in time, which is the symbol state,
which records the alleged fact that the machine exists a MOMENT BEFORE.

     Notice the symbol state NOW can not possibly be about the machine
state NOW, because there HAS to be some time distance between cause and
effect for learning to take place in the physical universe!
 
     That's a big statement, don't go by it.

     You get a brownie point for merely understanding it, even if you
don't agree with it.

     So the symbol state represents "I existed 1 second ago"; there
is no way for the machine to learn that it exists NOW, as there will
always be a time delay between the machine existing and the symbol
state being recorded that it existed.
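
     A minimal sketch in Python of that unavoidable delay (the sleep
stands in for the camera-to-memory causal pathway; the numbers are
illustrative):

    import time

    referent_time = time.time()   # the machine exists NOW
    time.sleep(0.001)             # the causal pathway takes time
    symbol_time = time.time()     # 'I exist' is finally recorded

    # The record can only ever say 'I existed some moments ago';
    # it can never be about NOW.
    print(symbol_time - referent_time)  # always > 0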

     Now we already know that being right is not sufficient to prove
learning, as the machine can say 'I exist' whether or not it has
bothered to learn that it does.

     And we admit that a machine could very well exist and yet still be
utterly incapable of learning that it exists with perfect certainty!

     And we know that an external observer would think that saying 'I
exist' would be sufficient evidence that the machine existed, and
perhaps so it would to the external observer.
 
     But the machine saying 'I don't exist' would also be sufficient
evidence, to the external observer, that the machine existed.

     But the external observer is not relevant to this discussion.

     What we want to know is, did the machine *LEARN* that it exists via
a causal pathway to itself, or did it just guess or happen to be right
by coincidence?

     If there is no causal pathway at all between the machine's memory
banks saying 'I exist' and the existence of the machine, then no
learning has taken place.

     If there is a causal pathway, if the machine concluded 'I exist'
BECAUSE it looked and interacted with itself via cause and effect, then
some learning has taken place.

     However we have already determined that no circuit can verify
itself, thus if the machine has been modified to report wrongly, or
worse randomly, then whatever it reports can not be trusted even if it
does exist and reports 'I exist'!

     Since the machine can not verify the integrity of the causal
pathway between its own existence and its report that "I exist", it
can not be said to have learned with certainty of its own existence.

     And lastly, a machine can't even look at its own report that 'I
exist!', and conclude from that report that it must exist.

     For one, that would merely be yet another event of learning
later in time, which can never prove the existence of something
earlier in time.
 
     Thus a machine can't be certain of anything it observes at all,
including its own statement that 'I exist'.
 
     Thus the machine has no clue from its own observation whether
or not its own statement actually exists, nor that it or anything
really caused that report in the first place.

     It is bad enough that effect never proves cause, but if you can't
even be certain of the effect, what hope can there ever be of being
certain of the cause?

     The machine has no light of self luminous consciousness.

     Without consciousness there is no certainty.

     Without certainty there is no consciousness.

     More formally:

     All consciousness-of is certainty-of.

     All certainty-of is consciousness-of.

     If there is NO certainty-of there is NO consciousness-of.

     If there is NO consciousness-of, there is NO certainty-of.

     Final conclusions.

     One can never learn with certainty about a referent by looking at a
symbol unless the referent and the symbol are one and the same event,
with no spacetime dimensional separation between them.

     Trust can of course come from certainty, but certainty can not come
from trust.

     Mechanics of any kind consists of parts interacting via cause and
effect across a spacetime distance.  Parts that are separated by
spacetime distance are of necessity two or more different parts and thus
must learn about each other via cause and effect.

     Since effect does not prove cause, no part can ever learn with
certainty that any other part exists or is cause over the first part's
state.

     Worse a machine can not even tell it has changed state with perfect
certainty, thus without certainty of having been an effect, how can it
be certain there might have been a cause?

     Thus a machine can not be certain of anything.

     No machine that learns solely by being an effect can ever be
certain of or prove the existence of cause.

     (To prove here means to prove with perfect certainty, not merely to
provide theoretical evidence for.)

     Since consciousness can be certain of red/green experience
differences and many other things, consciousness and the conscious unit
can not be a machine.

     By which we mean not a system of parts interacting via cause
and effect across a spacetime distance.

     Which leaves us with the conclusion that the conscious unit is zero
dimensional.

     Thus consciousness is a scalar phenomenon.

     Your faithful servant,

     Homer Wilson Smith

------------------------------------------------------------------------
Homer Wilson Smith     The Paths of Lovers    Art Matrix - Lightlink
(607) 277-0959 KC2ITF        Cross            Internet Access, Ithaca NY
homer@lightlink.com    In the Line of Duty    http://www.lightlink.com

Sun Aug 12 01:40:56 EDT 2007