Just a thought I had, again. In the mathematical explanation of equilibrium
systems, the base data are random events. If they are truly random events,
they cannot be explained or predicted - because they are random. The very
meaning of randomness is unpredictability and uncertainty. What the analyst
then does is group the apparently random events into clusters according to
qualitative categories, and he discovers certain patterns which suggest an
order anyhow - perhaps a hidden order obscured by the surface appearance of
randomness. This is his "theoretical act". The patterns can be fitted to an
equation, and, Bob's your uncle, you have an explanation of events - which
seemed random, but really aren't; the fact that the events conform to an
equation with predictive power provides proof, or at least grounds, for
believing they are determinate after all, and not merely a random fluke.
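
To make that procedure concrete, here is a minimal sketch in Python (every
number in it is invented purely for illustration): scattered-looking events
are fitted to a linear equation, and the equation is then checked against
events held out of the fit.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 3.0 + rng.normal(0, 4, 200)   # events that look like scatter

# The analyst's "theoretical act": assume a linear form and fit it.
slope, intercept = np.polyfit(x[:150], y[:150], 1)

# Predictive power, tested on events not used in the fit.
pred = slope * x[150:] + intercept
print(f"fitted: y = {slope:.2f}x + {intercept:.2f}")
print(f"out-of-sample correlation: {np.corrcoef(pred, y[150:])[0, 1]:.2f}")

The fit "succeeds", and that success is what is then offered as grounds for
calling the events determinate.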
The question, however, is whether such an approach is really apposite in the
case of Marx's theory. After all, human activity, according to Marx, is
mostly not random. In the first instance, that is because the activity is
purposive; but secondly, the activity is also constrained by definite
parameters which cannot easily be escaped: the human species is compelled to
do certain things to survive and prosper. There are, therefore, definite
means-ends logics in human behaviour, even if several such logics operate at
the same time and may contradict each other. If that wasn't the case, one
might as well throw out all ideas of a rational politics and close down the
law courts.
The first problem is really that, in order to understand seemingly random
and arbitrary events as non-arbitrary, non-random events, we require
categorizations which define constants and variables. But what is it that
entitles the analyst to adopt those categorizations, if there is prima facie
nothing in the random distributions that would suggest any particular
categorization as preferable to any other? The analyst argues that if
certain categorizations are adopted, then events are predictable. In
principle, of course, one could succeed in predicting something without
knowing why the prediction succeeds. But where does he get his
categorizations from? In the last analysis, the "model" assumes the
hypothesis of some causal relationships, a causal theory which tells us
where to look for patterns and explanations. What makes the choice of
assumptions non-arbitrary?
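
The point can be illustrated with a toy clustering exercise (a naive
one-dimensional k-means, written here only as a sketch; the data are
deliberately pure noise): any number of categories will "discover" some
structure, so the data alone cannot tell the analyst which categorization
to prefer.

import numpy as np

rng = np.random.default_rng(1)
events = rng.normal(0, 1, 300)   # pure noise: there are no real categories

def within_var(data, k, iters=20):
    # naive 1-D k-means: sort the events into k categories
    centers = rng.choice(data, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        centers = np.array([data[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.mean((data - centers[labels]) ** 2)

for k in (2, 5):
    print(f"{k} categories: within-cluster variance {within_var(events, k):.3f}")

Both groupings reduce the within-cluster variance, and the finer one reduces
it more - yet by construction neither categorization reflects anything real.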
Problem number two is that, in this procedure, we might well be assuming
what we seek to explain. We set out with theoretical assumptions to find
empirical corroboration, but in reality we may not be explaining very much,
merely describing something. The fact that one variable can predict another
variable does not yet say anything about a possible causal relationship
between them.
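
A standard illustration, sketched in Python with invented numbers: two
variables driven by a hidden common cause predict each other very well,
though neither acts on the other.

import numpy as np

rng = np.random.default_rng(2)
cause = rng.normal(0, 1, 1000)             # hidden common cause
a = cause + rng.normal(0, 0.3, 1000)       # a does not act on b...
b = 2 * cause + rng.normal(0, 0.3, 1000)   # ...nor b on a

print(f"corr(a, b) = {np.corrcoef(a, b)[0, 1]:.2f}")   # close to 1

Here a is an excellent predictor of b, and a model fitted on a would
forecast b accurately - while saying nothing true about what brings b about.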
Problem number three is this: if the result of the analytical procedure is
supposed to demonstrate that random events are in truth not random events,
why should we assume at the outset that they are random events? Why should
we try to demonstrate the likelihood that a result is, or is not, due to
chance, when we know very well that it is determinate?
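
The standard procedure here is a significance test. A sketch (with invented
data, in which the difference between the two groups is built in by
construction): we then solemnly compute the probability that the built-in
difference is a random fluke.

import numpy as np

rng = np.random.default_rng(3)
group_a = rng.normal(0.0, 1.0, 50)
group_b = rng.normal(0.5, 1.0, 50)   # difference built in by construction
observed = group_b.mean() - group_a.mean()

# Permutation test: how often would mere relabelling of the pooled
# events produce a difference at least this large?
pooled = np.concatenate([group_a, group_b])
hits = 0
for _ in range(10000):
    rng.shuffle(pooled)
    if pooled[50:].mean() - pooled[:50].mean() >= observed:
        hits += 1
print(f"p-value for 'the difference is due to chance': {hits / 10000:.4f}")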
Problem number four is that a theory, if it is at all adequate, if it has
any depth, must explain the choice of its own assumptions. That is what
models typically do not do, or do only superficially. The model is an
isomorphism or "likeness" (an analogy) which is offered precisely in
advance of a comprehensive theory. It signals that we do not really know
how to theorize the phenomena yet.
The whole edifice of equilibrium theory is based on the simple idea
that, other things being equal, supply and demand will tend to adjust
to each other. But is this all that we can say about the capitalist economy?
Marx certainly thought not, and penned three fat volumes to provide
a causal explanation of its workings.
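
The adjustment idea itself can be written down in a few lines. A toy sketch
(the linear schedules and the adjustment speed are invented purely for
illustration): the price moves in proportion to excess demand until the two
schedules meet.

def demand(p): return 100 - 2 * p   # invented demand schedule
def supply(p): return 20 + 3 * p    # invented supply schedule

p = 5.0
for _ in range(25):
    p += 0.1 * (demand(p) - supply(p))   # excess demand pushes the price up
print(f"price settles near {p:.2f} (the schedules cross at p = 16)")

Nothing in this little mechanism explains where the schedules themselves
come from - which is just the point.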
There is a direct connection between the use of models and ideological
fabrications. That is, the model permits us to fathom an empirical
relationship without any more comprehensive knowledge of the relevant
phenomena - a convenient ploy for abstracting away from any knowledge or
assumption that would be highly uncongenial to the modeller. It is not just
that the model is often mooted in advance of a comprehensive theory, but
that the model is substituted for a comprehensive theory. Thus, the
proliferation of models is a neat way to obscure the interconnection, as a
whole, of the phenomena modelled - an interconnection which a comprehensive
theory would reveal.
Jurriaan