Sunday, 28 April 2019

Philosophical Confusions Part III: Derrida, Difference, and, Quite Possibly, Da Point

by Simon Smith

Last time, you may recall, we had finally got to the point where Derrida – the very fellow this whole thing is meant to be about – stepped on to the stage. Then everything went a bit pear-shaped. The Illuminati kicked the door in and dragged everyone off for ‘an attitude adjustment’. I’ve spent the last three weeks in a grain silo in Gorslava with nothing to eat but Soylent Green and a dozen little impenetrable sachets of mayonnaise.
The SG isn’t bad as long as you mash it up properly so you can’t see the eyebrows and bits of ear.
However, assuming that conspiracy theories and feeble attempts to get a running gag off the ground aren’t your thing, let us resume our discussion. The point we were trying to make, you may recall, was that there may be some good reasons for Derrida’s suggestion that neither author nor reader is actually necessary for a message to be a message. And just what, I hear you ask, might those reasons be? Reasons want you to share ‘em; got their tongues hanging out, waiting to be said.
First, harness the message ontologically to author and reader and we seem to make the message a unique event, one never to be repeated or replicated. Obviously, everything I write is unique, uniquely unique even (as somebody famous once said); but I doubt if that’s true of anybody else’s communications. For one thing, people have the same conversations with one another all the time, often using exactly the same words and meaning exactly the same thing.


For the love of merciful Christ! 
And your own fragile body! 
Will you please! 
Please! 
Stop doing that?!



Whatever ‘that’ may be. Now, I fully realise that sentiment doesn’t entirely make sense in Derridean terms. Although it may be worth keeping in mind that, if we didn’t at least think it was possible for words to bear the self-same meaning at different times, such outbursts, with which I am sure we are all perfectly familiar, wouldn’t really get going in the first place. Given that, it might be worth recalling J. L. Austin’s wise words about the underlying assumptions of ordinary language.[1]
That, as it happens, may well be a topic for further consideration at some point: given the flexibility, or even fluidity, that Derrida seems to find in language-use, how do those ordinary language presuppositions arise in the first place? Why, that is, do we suppose that the things we say mean the same thing every time we say them? For that matter, do we suppose that the things we say mean the same thing every time we say them or is that just what we think we suppose we mean?

Another interminable ramble for another day. For the present, let us assume that, as Derrida suggests, différance remains perpetually at play in every single linguistic act – written or spoken – hurling the signified over a garden wall while smooching up a storm with the signifier. Meanwhile, there’s another reason, which has just occurred to me, for being suspicious of the idea that linguistic acts might be unique. In Science, Faith and Society, Michael Polanyi talks about the nature of objectivity and the ways in which we designate things as ‘real’. (I can’t for the life of me remember where it comes in, but it’s a short book – read it and you’ll be bound to come across it.) In essence, Polanyi argues that the objectively real, or rather our experience of the objectively real, is ‘future-oriented’; that’s my expression, not Polanyi’s, obviously. What he means, I think, is that, in our encounters with the world, it’s the things which turn up time and again which count as ‘real’; and this is because those repeated encounters first allow us to form theories and beliefs about the world – they are the material from which those theories and beliefs are constructed – and then either confirm or disconfirm those theories and beliefs.
That, by the way, is why the sciences are essentially pragmatic: crudely put, the proof of any particular theoretical pudding is in the future opportunities that reality affords to dig in and have a taste. And, of course, the fact that dessert comes with a number of spoons, so that everyone qualified and capable of doing so can also have a taste, is all part of it. Although these days, thanks to ‘publish-or-perish’ and publishers’ insatiable demand for novelty, most scientists are more like the person who orders dessert and then pulls a butter knife, threatening to gut anyone who comes near them with a spoon.
The point here is that, if a pudding, or indeed a phenomenon of any kind, occurs only once and with only a few witnesses, then we have no way to verify that it is, in fact, real, let alone what kind of reality it might be. A unique phenomenon, or pudding, cannot be checked or tested: it cannot be measured or assessed, or analysed in any way. We can’t even get someone else to come along and have a look to see if it really is what we think it is.
For example: is that stuff they serve in little plastic wine bottles on aeroplanes really the true, the blushful Hippocrene,
            With beaded bubbles winking at the brim,
Or merely evidence that there’s a sick donkey somewhere on your flight? Who can tell?
Unlike that sick donkey, a unique phenomenon must stand outside the causal network that is the universe as we know and understand it. What’s more, any phenomenon that’s encountered only once can hardly impact on the way we think or act in the future. So there goes our most basic epistemological principle.
In any case, the point is— 

Wait, who are you and what are you doing with our most basic epistemological principle? Put that back! I’ve got an axiom and I’m not afraid to use it! I’m not kidding! Stay back! Stay back or I’ll…
                        KAPOW! KAPOW! KAPOW!
                                                            Urrrrrrr. Thud.




[1] This was in that exceptional essay, without perusal of which no philosophical education can be considered complete, ‘A Plea for Excuses’, in Philosophical Papers, eds J. O. Urmson & G. J. Warnock (Oxford: Clarendon, 1961). There, Austin reminds philosophers that ‘[i]f a distinction works well for practical purposes in ordinary life (no mean feat, for even ordinary life is full of hard cases), then there is sure to be something in it, it will not mark nothing’ (p. 133; my emphasis). Ordinary language (whatever that means) quite obviously cannot and should not be the last word. ‘Only remember, it is the first word.’
