Re: [OPE-L] marx's conception of labour

From: Ian Wright (wrighti@ACM.ORG)
Date: Fri Nov 24 2006 - 14:34:09 EST


Hi Howard

Sorry for delayed reply, have been away.

> Later you refer to absenting an absence, and I don't have any problem with
> that.  But the key idea is reference.  I don't think the thermostat
> "refers."  Reference takes an entity to interpret.   An interpreter is
> essential to sign making.  And interpretation takes consciousness or
> proto-consciousness.  What the thermostat gives us is a thing in process.

My use of the example of the thermostat is intended to attack the idea
that the route to understanding semantics and reference is primarily
via conceptual analysis of human language and consciousness. A better
approach to understanding such issues is to build robots. One insight
that immediately falls out of this approach is the idea that reference
is ultimately grounded in causal processes between a representing
state and its referent. Human consciousness isn't immediately relevant to this
issue. This is why Dennett, Sloman etc. employ the example of the
thermostat: it is the simplest example of a causal process that
sustains "intentional" descriptions. A thermostat has a sub-part that
represents the ambient temperature of the room (it has a "belief-like"
state), a sub-part that represents an absent temperature of the room
(it has a "desire-like" or "goal-like" state), and causal connections
that link the goal state to actions that change the state of the
world. These causal connections instantiate "loop-closing" semantics.
From a more Hegelian or Bhaskarian perspective, we can also use the
example of the thermostat to understand how absence and negativity are
real. The material world can be so constituted as to represent things
that do not exist, and to entail processes that cause non-existent
things to become existent. And this is not very mysterious because,
for example, natural and artificial thermostats do this every day.
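To make the belief-like state, the goal-like state, and the
loop-closing causal connection concrete, here is a minimal sketch in
Python (the class names and temperatures are my own invention, purely
for illustration):

```python
# A toy thermostat: a "belief-like" state (the sensed temperature),
# a "desire-like" state (the setpoint, an absent temperature), and
# actions that close the loop by changing the world.

class Room:
    def __init__(self, temperature):
        self.temperature = temperature  # the world's actual state

    def heat(self, amount):
        self.temperature += amount

    def cool(self, amount):
        self.temperature -= amount

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint  # "desire-like": represents an absent temperature
        self.reading = None       # "belief-like": represents the sensed temperature

    def step(self, room):
        self.reading = room.temperature   # sense: world -> belief-like state
        if self.reading < self.setpoint:  # compare belief to goal
            room.heat(1.0)                # act on the world: absent the absence
        elif self.reading > self.setpoint:
            room.cool(1.0)

room = Room(temperature=15.0)
stat = Thermostat(setpoint=20.0)
for _ in range(10):
    stat.step(room)
print(room.temperature)  # 20.0 -- the absent temperature has become existent
```

The non-existent state (20 degrees) is materially represented in the
setpoint, and the causal loop makes it existent.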

Another example. Consider a machine-code program that increments the
contents of a specified memory address. Let's assume it runs forever.
Does it make sense to deny that a sub-state of the machine's memory
refers to another sub-state of the machine's memory? My point is that
reference is natural and ubiquitous.
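The increment example can be sketched in a few lines of Python
standing in for machine code (the memory layout and cell addresses are
arbitrary choices of mine):

```python
# A toy machine: one memory cell holds the address of another cell,
# and the program repeatedly increments the cell it points to.
# Cell 0 "refers to" cell 5 in a purely causal, mechanical sense.

memory = [0] * 8
POINTER = 0          # cell 0 stores an address
memory[POINTER] = 5  # ... namely, the address of cell 5

def step(memory):
    target = memory[POINTER]  # dereference: follow the reference
    memory[target] += 1       # act on the referent

for _ in range(3):  # the original runs forever; three steps suffice here
    step(memory)
print(memory[5])  # 3
```

Nothing conscious interprets cell 0, yet its content systematically
determines which cell the process acts upon.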

I think this is important from another perspective. For example,
prices refer to labour-time due to the causal processes instantiated
by the law of value, which happens to be partially implemented via
human subjectivity but is not reducible to it. The semantics of money
are in this sense objective and do not require human interpretation or
consciousness. In fact, most of the time the human actors are not
aware of these higher-level semantics. So I think social structures
plus causal processes can instantiate semantic reference. More
specifically, the law of value, considered at a certain level of
abstraction, and in isolation from other mechanisms, is an equilibrium
mechanism: it implicitly represents a goal-state in which social
labour is allocated according to social need and prices are
proportional to labour values. The transfers of money are
labour-allocation control signals -- even if the human actors within
the economic system are hypothetical robot zombies that lack the
property "consciousness" -- because the semantics of the law of value
are objective.
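A highly stylised sketch of that equilibrium mechanism (my own
invention, not Marx's formalism -- the sectors, values, and adjustment
rule are assumptions for illustration only):

```python
# Law of value as a control loop, crudely: where market price exceeds
# labour value, labour flows into the sector, output rises, and price
# falls (and vice versa). Money transfers act as error-correcting
# labour-allocation signals, whether or not the actors are conscious
# of the higher-level semantics.

labour_values = [2.0, 5.0, 1.0]  # labour-time embodied per unit, by sector
prices = [4.0, 3.0, 1.5]         # initial market prices, out of proportion

def step(prices, labour_values, rate=0.5):
    # Each price moves a fraction of the way toward its labour value:
    # the deviation (v - p) is the implicit control signal.
    return [p + rate * (v - p) for p, v in zip(prices, labour_values)]

for _ in range(20):
    prices = step(prices, labour_values)

print([round(p, 3) for p in prices])  # converges toward [2.0, 5.0, 1.0]
```

The goal-state (prices proportional to labour values) is never written
down anywhere in the system; it is implicitly represented by the
error-correcting dynamics, just as the setpoint of a thermostat is.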

> A ball rolling down a hill is a thing in process.  Dominos falling.
> Mechanical things are processes.  We can interpret all such things as goal
> oriented, but I think this takes interpretation and interpretation takes
> consciousness.  I don't know anything about artificial intelligence, really,
> and have no judgment on whether or not machine consciousness is possible.

"Consciousness" is what minds do, so it needs to be unpacked into
claims about causal powers, for example the ability to self-reflect,
to attend to one's own actions, to self-categorise one's own mental
processes in terms of natural language, etc. etc.
Clearly a ball rolling down a hill is a process that does not have
these particular causal powers. But what about Sony's AIBO robot
(http://www.sony.net/Products/aibo/)? What cognitive powers does it
have? For example, it knows where you are, and can decide to approach
or avoid you. It has a representation of you. It seems very natural to
take the intentional stance to such an artifact: it has beliefs,
desires etc. My guess is that the engineers who built this robot have
a better understanding of the material implementation of semantics and
reference than most pen-and-paper philosophers.

> I do think that efforts to suggest that the activity of reference or
> representation is peculiar to humans are wrong.  It is clear that the
> capacity to refer exists in some at least crude form for many forms of life
> (all?) -- clyder's wonderful example of the spider makes this point very
> clearly.  It follows that by saying intentionality is characteristic of
> humans I do not mean to say that it is peculiar or exclusive to humans.   In
> general we should be very suspicious of anything that looks to cut us off
> from the rest of the natural world.

Yes, agreed.

Best wishes,
-Ian.


This archive was generated by hypermail 2.1.5 : Thu Nov 30 2006 - 00:00:06 EST