Re: [OPE-L] marx's conception of labour

From: Ian Wright (wrighti@ACM.ORG)
Date: Mon Nov 27 2006 - 16:23:25 EST


Hi Howard

I may have strayed from the point in this.

> We can think of reference in terms of following rules or in
> terms of interpreting meaning.  It is not clear to me that things without
> consciousness interpret meaning.

What do you mean by "interpret meaning"? A thermostat has a sub-part
that refers to temperature, but it does not interpret its own state.
It does not communicate with others, and its capacity for action is
limited: it does not, for example, evaluate choices and make
decisions. It is not a human mind.

If by "interpret meaning" you mean some of the causal powers I
mentioned above, then I agree -- the thermostat does not "interpret
meaning".

> John Searles for example made what he
> called a Chinese room argument.  Assume a person that does not understand
> Chinese.  You put her in a room.  In the room are Chinese characters and a
> rule book in a language she does understand.  Chinese characters come under
> the door and she looks in the rule book and does what she's supposed to and
> passes the results out under the door. The door is locked.  In fact the
> characters ask questions and the actions she performs in response to the
> characters provide answers.  From outside the room it looks as if a dialogue
> is going on.  But in fact the person in the room understands nothing.

The system as a whole understands Chinese, on condition that the
input/output behaviour of the room is indistinguishable from that of a
Chinese speaker.

Of course, Searle's Chinese Room could never produce behaviour
indistinguishable from a Chinese speaker: it would be far too slow.
But we can give Searle the benefit of the doubt.

> Suppose instead the questions came in in the language the person does
> understand.  Then she would understand the questions and her answers would
> reflect that understanding.

In this case, both the system as a whole, and a sub-part of the system
-- the person in the room -- understand the natural language.

> That is, nothing in your argument suggests anything more than rule
> following.  Rule following can be as complicated as you like and this can go
> forward without meaning.

In the first case, the person in the room is following rules. They
don't understand those rules. Nonetheless, when the rules are executed
by the person, and linked via sensors and actuators to a world (in
this case slips of paper pushed under the door), the system as a whole
produces behaviour that is indistinguishable from a Chinese speaker's.
Hence, the system understands Chinese. That "hence" is important here,
because it relies on the logic of the Turing Test.
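
A caricature of the room, purely for illustration (the rule table, the
phrases, and the function are all invented by me, not taken from
Searle): the operator blindly matches input symbols against a rule
book and emits the prescribed output symbols. Whatever understanding
there is belongs to the system as a whole, not to the operator's grasp
of the symbols.

    # Caricature of the Chinese Room: input symbols are matched against a
    # rule book and the prescribed output symbols are passed back out.
    # The operator follows the rules without understanding the symbols.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # invented example entries
        "今天天气如何？": "今天天气很好。",
    }

    def operator(slip_under_door: str) -> str:
        # The operator does not understand these strings; she only follows rules.
        return RULE_BOOK.get(slip_under_door, "对不起，我不明白。")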

Searle's Chinese Room is an attempted critique of the Turing Test.
Turing makes a simple point: if a machine produces behaviour
indistinguishable from a human then we must also apply our
folk-psychological concepts (e.g. "understanding", "consciousness") to
the machine. Searle tries to wriggle out of this by putting a person
inside the machine. We are ready to accept that the person has
"understanding", but less ready to accept that the machine has
"understanding". But Searle begs the question: why are we ready to
accept that the person has "understanding"? He assumes a solution to
the other minds problem. Yet the only solution to the other minds
problem that makes sense is essentially a Turing Test: I think other
humans have my kind of understanding and mentality because they behave
as I do, and exhibit the same causal powers.

In an interview with Geoffrey Hinton I've seen Searle revert to the
position that only "biological" things can have understanding. So it
seems that only things made out of meat can have mentality.

Other philosophical idealists try to wriggle out of Turing's logic by
positing that human consciousness has properties that make no causal
difference. A machine could then pass the Turing Test yet still lack
human consciousness (it would be a "Zombie": behaviourally equivalent
but lacking "consciousness"). But it is hard to understand what
properties they can be referring to if, by definition, those
properties cannot be detected.

But I think that what fuels this debate from the idealist side is a
fundamental belief in the mysteriousness of first-person
consciousness, which appears separate from the material world it
contemplates. But this is another matter.

> The robot can be
> exquisitely programmed without understanding.  In other words, following a
> syntax does not mean we have a semantics of reference.

The content of a data structure in an AIBO robot refers to properties
of the real ball that it tracks. The sub-part of a thermostat refers
to the ambient temperature of the room. A squirrel has beliefs about
where it hid its nuts. I know where I live. All of these are more or
less complex examples of parts of material reality referring to other
parts of material reality in virtue of causal connections between a
reference and a referent. But it goes without saying that the level of
"understanding" and the complexity of "reference" that a human mind
can achieve are much greater than those of current robots,
thermostats, and squirrels.
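
As a rough sketch of what "a data structure referring to a real ball"
might look like (the field names and update function are my invention,
not Sony's AIBO software): the structure's contents are kept in step
with the ball's actual position by the camera loop, and that causal
coupling is what makes them about the ball.

    # Illustrative sketch: a tracker state whose fields are kept in step
    # with a real ball's properties by a sensing loop. The reference
    # relation is carried by the causal coupling, not by interpretation.
    from dataclasses import dataclass

    @dataclass
    class BallEstimate:
        x: float        # estimated position in the image frame
        y: float
        radius: float   # apparent size, a proxy for distance
        visible: bool

    def update_estimate(estimate: BallEstimate, detection):
        """Refresh the estimate from the latest camera detection (if any)."""
        if detection is None:
            estimate.visible = False
        else:
            estimate.x, estimate.y, estimate.radius = detection
            estimate.visible = True
        return estimate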

> I want to insist, though, that we can interpret and understand the processes
> as they unfold,

I agree with you, but I don't think this has bearing on the issue of
the objectivity of semantics.

> and what we do with our interpretation does not reduce to rule following.

I disagree, if you mean that "breaking rules" or "creativity" cannot
be explained by the computational approach to mind.

The interesting thing about economics is that it is a virtual machine
with sub-parts that are sufficiently complex to decode the machine
they are part of. This is a condition of possibility of the critique
of political economy. Yes, we are not ants; we don't follow the rules
forever.

> For example, the operation of value's causal processes
> depends on a certain success in prohibiting theft.  Criminal penalties
> ensure that human subjectivities implement the causal processes of value in
> the right sort of way.  Suppose someone violates one of those rules without
> understanding what they are doing.  They do what looks like (say, to somone
> on the other side of a door) stealing.  But if they don't understand, you
> don't punish in the same way or not at all.  You punish a dog.  What sense
> does it make to punish a thermostat?

None. It wouldn't do any good: it doesn't have the causal power to
learn. The law of value, however, acts very much like a reinforcement
learning algorithm.
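
A minimal sketch of that analogy, under my own assumptions (the update
rule and names are illustrative, not a model from anything said
above): producers shift their allocation of labour in the direction of
realized profit and loss, much as a reinforcement learner adjusts
action propensities in the direction of reward.

    # Illustrative analogy: the law of value as a reinforcement-learning-
    # like update. Labour shifts towards sectors where realized prices
    # exceed costs (positive "reward") and away from loss-making ones.
    def reallocate(labour, profits, learning_rate=0.1):
        """labour, profits: dicts keyed by sector. Returns a new allocation."""
        adjusted = {s: max(0.0, labour[s] + learning_rate * profits[s])
                    for s in labour}
        total = sum(adjusted.values()) or 1.0
        # Renormalize so total social labour stays constant.
        return {s: v / total * sum(labour.values()) for s, v in adjusted.items()}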

Best wishes,

-Ian.
