(OPE-L) Re: The Church-Turing thesis

From: gerald_a_levy (gerald_a_levy@MSN.COM)
Date: Mon Jan 19 2004 - 16:41:51 EST


Hi Ian.

> I think we are machines, ultimately built by a more encompassing process
> of induction, which is evolution. So I think machines are already
> discussing political economy.

Funny, I thought we were animals.  A particular type of animal -- Homo
sapiens -- sometimes called the "tool-making animal".  Looking at it from
that perspective, we _create_ machines -- which are basically _just_
tools -- but we are not ourselves machinery, even though we have mechanical
and systemic properties akin to machinery.

> The crux of the question is what is meant by "machine". I do not have a
> 19th century machine metaphor in mind, in which systems are composed of
> mechanical relations. The human mind is not like that. Instead I have a
> modern metaphor in mind, one inspired by the computer, which holds that
> minds are semantic information processors. The discipline of trying to
> build mind-like artifacts has taught us a lot about how it is possible
> that minds operate in material terms.

Are minds _only_ information processors?  If that were the case, then
human behavior would be far less contradictory and complex, and there
would be no need for the discipline of psychology.

> So by "machine" all I am really saying is that the mind is a natural
> object  that in principle can be  understood by science, and that some
> of our current artificial machines have mental properties, albeit of a
> simple kind.

The modern science of psychology (in all its variants, I think) would
reject the idea that human behavior can be comprehended merely
through computation.  I haven't heard much talk of computers replacing
psychoanalysts and therapists, have you?

> The best hypothesis of how minds work is the computational approach:
> unlike previous approaches, it is more explanatory and has practical
> consequences, such as the ability to automate mental operations.
> The Church-Turing thesis doesn't assume that minds work according to
> the principles of formal logic, or at least the field of AI does not
> make that assumption. Formal logical systems, of which there has been
> an enormous proliferation of types, do have their uses when building
> robots, but are useless for controlling effectors to move around,
> catch balls and the like.
> What engineers have found is that building mind-like control systems
> requires a whole range of mechanisms and representations, from quick,
> fast-acting feedback loops and statistical inference mechanisms to
> pattern-matching and, at levels less tightly coupled to the physical
> world, logical and symbolic deductive mechanisms, such as kinds of
> formal logic.

Engineers may be learning more and more about adaptive control and other
systems (e.g. machine vision, tactile sensing, etc.), but:

a) the 'learning curve' in robotics has been much longer and more
protracted than engineers, scientists, and designers believed.  Thus, what
was commonly believed in the early 1980s would be commercially available
at relatively low prices by the late 1980s and early 1990s still hasn't
been developed and marketed.

b) there have been great advances in the field of medicine and, with them,
in our comprehension of human anatomy and physiology ... yet most
scientists in those fields will admit that the more we learn, the more we
realize how little we really know about the human body, especially the
human brain.

c) individual human mental activity cannot be separated from social
conditions and from the individual's interaction with groups, institutions,
and classes.  That interaction is an essential _part_ of human mental
activity, and it cannot be precisely replicated by computers, since many
human *emotions* -- e.g. fear, guilt, desire -- are alien to computer
logic.  We are not Vulcans, like Mr. Spock, and we never will be.

> Sorry to get off topic. This is not exactly political economy! But to
> be a little cheeky, I'll claim that any theory of dialectical logic
> worth its salt should be able to be formalized and implemented on a
> computer. The problem in general with philosophies is that they don't
> run. So they're hard to test.

A computer couldn't 'run' -- or test anything -- without implicit
philosophical conceptions.

In solidarity, Jerry

