Re: Mind

From: Ian Wright (iwright@GMAIL.COM)
Date: Mon Jun 07 2004 - 13:29:23 EDT


Hi Andy

I think our philosophies of mind are not very far apart -- I don't
detect any differences of principle. Your last post clarified things,
but also raised a large number of issues -- too many to address in
full, so I'll try to be as brief as possible.

I was fortunate to be taught by the philosopher Aaron Sloman, who is
very much a realist, and who in 1978 wrote the book "The Computer
Revolution in Philosophy: Philosophy, science and models of mind",
which contains an extended discussion of philosophy of science that
he acknowledges shares a good deal of common ground with Bhaskar's
early work, although they were unaware of each other. If
you're interested, Sloman's book is available on his web-site:
http://www.cs.bham.ac.uk/research/cogaff/crp/ . He now operates mainly
in the philosophy of mind, and in particular has developed the concept
of "virtual machine". Here is a draft paper that discusses "virtual
machines": http://www.cs.bham.ac.uk/research/cogaff/sloman.virtual.slides.pdf
. Many other papers are available from his web-site, including
discussions on semantics.

Your stress on the inner relation between mental phenomena and
practical activity has surfaced under the rubric of the "symbol
grounding problem" in philosophy of mind and AI. In brief, if symbols
are not grounded via causal interaction with an external world then
those symbols do not have semantics. I think this is fine -- as far as
it goes. So forget about disembodied computers, and instead think of
embodied robots -- essentially computers with sensors and actuators.
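As a toy illustration of the grounding idea (my own sketch, not from any real robotics API -- all names here are hypothetical), a symbol counts as grounded when its use is causally driven by sensor contact with the world, not by the token itself:

```python
# Toy sketch of symbol grounding: the token "bright" means something
# only because its use covaries with a sensor reading, i.e. with a
# causal link to the external world. Illustrative names throughout.

def read_light_sensor(world):
    """Stand-in for a physical sensor: causal contact with the world."""
    return world["light_level"]

def grounded_symbol(world):
    """The token 'bright' is grounded because emitting it is caused
    by the sensor reading, not by a definition internal to the system."""
    return "bright" if read_light_sensor(world) > 0.5 else "dark"

world = {"light_level": 0.8}
print(grounded_symbol(world))  # prints "bright"
```

A disembodied system could shuffle the same tokens, but without the sensor link the tokens would have no semantics in this sense.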

But there are many counter-examples to the notion that semantics must
derive from causal relations between mind and matter, such as, say,
thinking of unicorns, forming plans prior to action, or a
mathematician forming and checking conjectures in an abstract domain.
These examples suggest that semantic relations can exist between mind
and mind.

Ultimately, I think semantics must be grounded, but there are many
open questions regarding derivative semantics, which involves
performing virtual actions in virtual worlds. Here again the notion
of a virtual machine implemented in the brain is crucial for
explaining semantics that are not directly related to (external)
bodily activity.
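One way to picture "virtual actions in a virtual world" is planning: a sketch of my own (not drawn from Sloman's formalism) in which an agent searches over simulated moves in an internal model of a one-dimensional world, and nothing moves externally until a plan is found:

```python
# Toy sketch of forming a plan prior to action: actions are applied
# to an *internal model* of the world, not to the world itself.
# Hypothetical example for illustration only.

def simulate(position, action):
    """Apply an action to an internal model of a 1-D world."""
    return position + {"left": -1, "right": +1}[action]

def plan(start, goal, actions=("left", "right"), max_depth=5):
    """Breadth-first search over virtual action sequences. The
    external world is untouched while the plan is being formed."""
    frontier = [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for pos, seq in frontier:
            if pos == goal:
                return seq
            for a in actions:
                next_frontier.append((simulate(pos, a), seq + [a]))
        frontier = next_frontier
    return None

print(plan(0, 2))  # prints ['right', 'right']
```

The semantic relation here is between one internal state (the model) and another (the goal), which is the mind-to-mind case mentioned above.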

The concept of the information processing level of description,
distinct both from the neurophysiological level of description
(neurons etc.) and the intentional level of description (semantics
etc.), is important. My talk of mechanisms of mind is primarily about
the information processing level of description, which is implemented
in neural activity and supports semantic states. For example, a data
structure such as an array is a type of information store that
supports certain transformations and operations. But an array is
distinct from its physical realisation, and distinct from its contents
and what those contents may refer to, and also how that reference is
maintained. The concept of an array belongs to the information
processing level of description.
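The array example can be made concrete. In this sketch (my own, purely illustrative), two different "physical" realisations -- one on a list, one on a dictionary -- support exactly the same operations, so the array-level description is identical while the implementation level differs:

```python
# Toy sketch: an "array" as an information-processing concept,
# distinct from its physical realisation. Both classes satisfy the
# same array-level description (get, set); only the substrate differs.

class ListArray:
    """Array realised on top of a Python list."""
    def __init__(self, size):
        self._cells = [0] * size
    def get(self, i):
        return self._cells[i]
    def set(self, i, value):
        self._cells[i] = value

class DictArray:
    """The same abstraction realised on a dictionary: a different
    'physical' substrate, the same information-level behaviour."""
    def __init__(self, size):
        self._cells = {i: 0 for i in range(size)}
    def get(self, i):
        return self._cells[i]
    def set(self, i, value):
        self._cells[i] = value

# Both realisations behave identically at the array level:
for impl in (ListArray(3), DictArray(3)):
    impl.set(1, 42)
    print(impl.get(1))  # prints 42 in both cases
```

Note that neither class says anything about what the stored values refer to; that belongs to the intentional level, a third description again.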

The kinds of information processing mechanisms implemented in the
brain both constrain and allow the types of semantic (and other)
states we may have. We inhabit our data structures. People are quite
comfortable talking about neurons because they can see them, and
about mental phenomena such as beliefs, desires, and so forth,
because they experience them. But there is an amazingly rich ontology
of information processing mechanisms hidden from both vision and
introspection.
It is important to get beyond the prejudices of the external eye and
the internal eye when developing theories of mind.

I agree with you that theories of mind must indeed explain how it is
possible that we can reflect, learn, adapt and form adequate concepts
of an objective world. The exciting field of machine learning is
relevant here. Sloman's work is also worth reading in this context --
he has spent many years formulating and refining a theory of the
information processing architecture of the human mind, which tries to
give an account of these kinds of abilities. He has recently gained
funding to build a robot, so he understands the importance of
practical activity in mind design, both in terms of trying to
construct intelligence rather than just theorise it, and also in terms
of building minds connected to an external world.

If you decide to work on the relation between Ilyenkov and western
philosophy of mind I'd be happy to help out in any way I can.

ATB,
-Ian.


This archive was generated by hypermail 2.1.5 : Tue Jun 08 2004 - 00:00:01 EDT