by Simon Smith
The flashbacks are finally
beginning to fade. That boiling sea of faces, at once horrified and fascinated,
witnesses to a pedagogic train crash, no longer haunts my dreams. That succubus
of sex and ethics, demon of a Thursday afternoon in late November, is banished
once again; and in its wake, our thoughts turn, once again, to the matter of
philosophical fragmentations.
Just before
that forest of wildly thrashing limbs, exercised frantically in an indescribable
series of arcane gesticulations, blotted out the sun, the point we had reached,
or almost reached, was that ethics is ethics, no matter the context or point of
application. This is not to say that context adds nothing at all to our moral
reasoning; only that, if our basic principle is, say, not to harm others, then
that ought to be upheld no matter what the circumstances. Of course, the
circumstances will, to some degree, dictate the manner in which our principle is upheld; the specific way in which
we avoid harming others will depend largely on what we are doing at the time.
One field of
doings, as it were, which always seems to face greater demands for moral
scrutiny is technology. And rightly so, one might say, given the faith we place
in it to solve all our problems and make us happy, despite the lack of evidence
that it can do any such thing. There is, as Susan Pinker shows in The Village Effect (Atlantic Books,
2014), very little evidence to support the commonly accepted dogma concerning
the value of exposing children to computer technology at the earliest possible
opportunity; quite the opposite, in fact. Given how easy touch-screens are to
use – easy enough for most adults to get to grips with – it’s not clear what
advantage there might be for a three-year-old to be able to do likewise rather
than, say, to engage in active reading with a real human being.
One especially
obvious, because potentially dramatic, field which seems to demand some hard
moral thinking is that of Artificial Intelligence. As it happens, I don’t hold
any special fears regarding the “rise of the machines”, either for them or us.
Assuming the Dotard in Chief hasn’t actually sucked the world into a nuclear
firestorm long before it becomes an issue, the application of intelligence,
artificial or otherwise, might well be a good idea. We’ve tried everything else
and look where we’ve ended up. All around the world, people live and die in
poverty; wars are fought over nothing very worthwhile and, wherever possible,
ignored; mendacity, corruption, and outright criminality remain exceedingly
popular; there’s every chance that we are polluting the world beyond
habitability; and the Americans have a game-show host for a president. Surely
now, more than at any other time in history, intelligence, any intelligence,
has to be worth a try.
That said, the
sheer temperamentality of much modern technology, its downright obtuseness on
occasion, does sometimes make me wonder.
Speaking
philosophically, which is to say, less cynically, it remains to be seen whether
the conception of intelligence which lies at the heart of AI research will ever
be rich enough to present any real moral concerns. At present, it seems rather
too nebulous to warrant the kind of alarm some people – including made-up
people like Elon Musk and the Pope – are keen to generate. Where specificity
has crept into the conceptualising, the analogical extension of “intelligence”
has been degraded to the point where it’s hardly recognisable. More worrying is
the way that degraded analogy is projected back upon its original source: i.e.
the persons who constructed it in the first place.
During the
latter part of the 20th Century, it became fashionable to
anthropomorphise computer technology, in direct contravention, nota bene, of IBM’s own code of conduct.
Despite the rampant anthropomorphism, there remains a world of difference
between “memory” and “data storage”, a difference which carelessness and
linguistic ineptitude have all but erased.
This marks a
dismal, though not particularly surprising, failure to understand how analogies
work. It is, sadly, a symptom of the naïve realism that continues to pollute so
much modern thought. (Naïve but never innocent, no matter what Peter Byrne
thinks, never innocent; realism is always, ultimately, pernicious.) At the root
of it is the assumption that, between analogue and analogate, there is a simple
one-to-one correspondence: x is like y, therefore y is like x to the same
degree and in the same way.
That’s
“analogate”, by the way, not, as I assumed for many years, “analgate”; that, I
suspect, would require a very different theory of language. And lubricant, lots
of lubricant.
So x is to y
as y is to x. Well yes, of course. Or rather, no, not really. That’s not how
analogies work. Analogies aren’t mirror-images; they refract when they, or
rather we, reflect. It is just possible that my love really is like a red, red
rose (prickly and covered in greenfly); and I’m sure you bear more than a
passing resemblance to a summer’s day (hot, sweaty, and buzzing with flies). Such
analogies cannot simply be reversed, however. It would be a different thing
entirely to suggest that a particular Tuesday afternoon in August was just like
you.
So much seems
perfectly obvious; except that it clearly isn’t, otherwise we wouldn’t be so
quick to forget it. What began as a way of describing computers – data storage,
it works a bit like our memory; so all the stuff you put in, the computer sort
of “remembers” it – has turned back upon us. It’s like the threefold law of
return in medieval witchcraft; language is a lot like magic in many ways. A
computer’s data storage works a bit like our memory, therefore memory works a
bit like data storage. Well, the logic is sound enough, but the idea is still
cobblers. (A prime example, here, of how logic can be sound without actually
being correct; demands for what necessarily must be so often are, though.)
Cobblers, it
may be, but there we are, stuck with the now widely held supposition that our
memories, and by extension our brains, work like computers. The next step is a
simple one: we just need to let those rather vague, allusive words slip out of
sight; “sort of” and “a bit” go first, of course, then “like”. No matter that
these are what made the analogy work in the first place. Once they’ve been properly
sublimated and repressed, we can conveniently forget that we were ever talking
analogically at all and pretend that we’re really being terribly precise and
scientific and technical. My, aren’t we clever?
Well, no, not
really, because along with forgetting that we started out talking analogically,
we’ve also, quite carefully, forgotten what we did to make the analogy work in
the first place. It is unlikely that the clever dick who came up with the
analogy between brains and computers actually meant it to be taken literally;
the idea that those who first heard it would have taken it that way seems
highly improbable.
In
order to make the analogy work at all, we had to strip it down to its bare
essentials, pull out all those bits that made it a genuine human experience.
Stuff like consciousness and agency, all the various ways in which we ourselves
are involved in our recollections; then there’s the whole web of connections,
interlaced ideas and images, not to mention the way all our senses can get involved;
sounds and smells can be especially evocative. And what about how we remember things? Not just as bits
of data, that’s for sure; more like sudden flashes, lightning in a
thunderstorm; or images of people, places, and events; sometimes a memory is a
whole narrative, rich with layers of interpretation and personal perspective,
wherever our attention, our consciousness came most sharply into focus.
Sometimes it’s just a feeling. Just? Oh hardly that. Do you remember the smell
of school dinners? Or the Sunday night feeling you used to get when you were a
child? What about the lead-up to Christmas in years gone by? Or when you had to
say “goodbye” to someone terribly, terribly important to you? Not “just”, oh
no.
The
point here is that what we remember and how we remember are vastly more complex
than what a computer does with its data. In order to apply the notion of
“memory” to a machine, or any other object for that matter, all the human, all
the personal, aspects of it have to
be stripped away.
Of
course, there’s nothing wrong with doing that; it can be a very informative
process, especially when it comes to diagrammatising the physical universe – oh
yes, and just where did you think all those images of energy and process and
function actually came from? Not from the world itself; Hume was absolutely spot on
there.
That
doesn’t change the fact that the extension of (severely watered-down) personal
analogies is an incredibly informative and therefore valuable process. Only, it
can become a problem if we forget that we were talking analogically in the
first place, if we mistake our analogies for literal truths. Obviously, doing
that in this case wouldn’t make the slightest bit of sense. Computers don’t do
all the things we do when we remember things. Of course they don’t. Unless we
convince ourselves that the stripped-down analogy of memory is all that memory
really is, in and of itself; unless, that is, we take that stripped-down
analogy and return it unto its source, re-apply it to the place from which it
came. Then, we might be making a serious mistake.
There,
if you like, is the real moral problem at the heart of modern technology: the
ease with which it allows us, even encourages us, to objectify ourselves and
others, to treat real human beings as machines. And what a very old problem
that is; Descartes’ legacy inviting us to step outside the world of real
experience and reduce everything – and everyone – left inside to automatons. As
observers and describers, we, of course, remain above such crude mechanics. But
here’s the new twist: we don’t just treat people as things, actual objects that
we might, just possibly, have to encounter in the real world and so think about
how we conceive of them. Now we treat them, and ourselves, as idealised
objects. We’ve become our own diagrammatic fictions. And all in the name of
science, of truth, of the advancement of human knowledge. How very clever of
us.
But
can that really be right? Those realities that can only be seen and understood,
which, frequently, can only be thought, by means of analogies and diagrams, can
they be more true, more real than the human beings we encounter every day? Can they be more real than the people
who taught us to speak and think and construct diagrams in the first place?
Mathematics
is the language of the sciences; it provides access, allegedly, to the world as it really is. Mathematics is a
logical system of signs and symbols. Those symbols have no natural or actual
correlate in the world; if they did, they’d be no good to science. What, I
wonder, makes us so sure that a world which can only be conceived of and made
sense of through those signs and symbols is more real than the people who sit
across from us at the dinner table?
Oh
dear, I seem to have digressed quite lamentably and given myself a case of the
vapours in doing so. We started out well enough, ruminating on the
fragmentation of philosophy, specifically ethics. Now here we are grumbling
about the apparently universal failure to understand how analogies work and the
objectification of people that follows. Quite evidently, our ruminations lack
proper focus and attention; a consequence and continued effect, no doubt, of
having taught classes on sex and ethics. Post-Traumatic Stress Disorder is, I
believe, the right phrase.
We
shall, once again, attempt to return to the matter in hand once all thoughts of
talking about sex in front of sixty first-year undergraduates have returned to
the dark and shadowy places of the human psyche.
…No! It’s in the trees! It’s coming!