In a footnote to ‘The Meaning of Truth’ (published 100 years ago), William James suggested a thought experiment: suppose what appears to be a loving young woman is really an ‘automatic sweetheart’ (merely programmed, as we would now say). ‘Would anyone regard her as a full equivalent?’ asks James, and robustly answers ‘Certainly not’, in full expectation of his readers’ total, immediate agreement. How very different from the present consensus on consciousness, which assumes that it is quite unnecessary, indeed exorbitant, to regard humans as other than automata. In effect, the whole trend of behaviouristic theories of mind, and of the functionalist and cognitive science theories that have sprung from them, is to dismiss James’s (and humanity’s) response to the automatic sweetheart question as primitive and whimsical.
Jane O'Grady lectures in the philosophy of psychology at City University. On openDemocracy, she has written one of the most contested pieces of 2009, Can a machine change your mind?
Her Guardian obituary of Timothy Sprigge is here
According to the famous Turing Test, the criterion for anything (organism or machine) to count as a thinking thing is whether it is able to produce audible or visual symbols that persuade a human observer that it can think. This has the intended upshot that there is no reason why computers should not count as thinking things, as well as the often-added corollary that the human mind should itself be regarded as a sort of information-processor or computer. For what else, demand some advocates of this human-computer view, do we ever do, with humans any more than with computers, than observe output -- behaviour and speech? What we require from the computer is simply that it responds automatically and appropriately – which is all we get in human-computer interactions anyway. Rather than arguing the toss as to whether there is any conscious thought going on anywhere, said Turing, ‘it is usual to have the polite convention that everyone thinks’.
As to the ‘inner’ workings that actually produce the product, alongside the cognitive science trend goes the avid pursuit of neuroscience, and the flourishing of identity theories. And although more inward-looking, identity theories are ultimately just another form of the Turing-test-style, human-computer, output-is-all theories. Since they can at best establish only the correlation of a brain process with reports, or behavioural evidence, of certain mental processes, they can – in fact, must – disregard or even deny the qualia, the quiddities, the ‘what it is like’ to be or feel something.
However, even prior to contemporary physicalism, the whole trend of metaphysics was to pay insufficient attention to subjectivity and especially to inter-subjectivity, the intermeshing of different consciousnesses. Metaphysics has, until very recently, been in decline, and physicalists begin with the immediate-physical, dispensing with attempts at an account of reality. Yet when they need a standpoint from which to start out, they in effect start from the same standpoint as the Presocratics – looking out upon a world that is supposedly uncharted by the senses or by reason, and wanting to know what reality is like objectively, as if it could be known irrespective of us.
By asking, ‘What is the world made of? What is the One underlying the Many?’, Thales and his successors set up a distinction between appearance and reality; and this distinction involved an implicit dualism, a split between whatever does the appearing (and being real) and whatever it is (me, for example) that is appeared to. The question about the nature of reality inherently contained questions – which were articulated even before Plato – as to how far and how accurately, if at all, reality can be known. The early philosophers may have failed to notice the knower at first, but were soon forced to acknowledge a dichotomy between knower and known, and that there is a problem of what can count as knowledge. Wanting to know what the world is like objectively, independent of observation, soon led them to swing round and look at the knowing subject, which, in a strange reversal, became, with Descartes, the most certain, because most known, thing.
Problems about the nature of knowledge and the capacity to know still prevail in epistemology, but, when doing anything like metaphysics or philosophy of mind, the naturalistic way of doing philosophy tries to flatten the knower back into the known – just another part of the world to be seized.
Contemporary philosophy starts at the opposite end from Descartes – not with the lone (or collective human) knower, but with objective, physical things. The picture of reality as a whole is now rarely sought, and it is the immediate-physical that is now the starting-point and certainty, the mental that is debatable, with Descartes often fingered as the guilty party for having introduced it in the first place. Just as Descartes had a problem with the external world, contemporary philosophers have a problem with the ‘I’; in fact, mostly, they dispense with it. In the now-misnamed philosophy of mind, they concentrate instead on mental processes, which they proceed to argue are actually nothing but brain processes or behaviour.
But don’t repudiators of qualia and intentionality actually presuppose what they disparage and/or repudiate? Don’t they inadvertently begin from a Cartesian position? A frequent criticism of Descartes is the way he fails to notice that, just by using language, and inviting his readers to undertake a process of doubt similar to his own, he is presupposing from the outset not only other consciousnesses similar to his, but also a common conceptual system. He unwarrantedly assumes that reality is inherently mapped and cut along certain joints; if not pre-packaged for input by human senses, at least accessible by means of mathematical and other inferences. In other words, he doesn’t start from a level playing-field of the total doubt of a lone knower, but has already smuggled in other knowers and a public world of shared, or easily shareable, categories and concepts. Yet physicalism commits a similar sleight of hand. It inadvertently smuggles in an observer, an ‘I’ – something that perceives and organises the information, and has the context and key to interpret the symbols produced by the artefact or person, or inherent in the biological stuff.
The Turing Test criterion for whether something thinks is (as said above) whether an artefact or organism persuasively counts as producing what seem to be spontaneous, self-engendered responses. But this persuadability criterion assumes (to borrow one of Descartes’ formulations of the cogito argument) that there is, in principle anyway, something to be persuaded; the whole test is based on a model drawn from folk-psychological (i.e., common-sense) notions of human autonomy as being the marker of authentic thought. But the observer is also meant to be autonomous in being able freely to grant or withhold the mantle of consciousness on the basis of observation, interpretation and judgement.
The observer in the Turing Test is supposed to be a putative, dispensable counterfactual like Descartes’ evil demon (a prop in a hypothesis), but in fact is as ineluctable as Descartes’ ‘I’. Without an observing consciousness, how would the stuff of the physical world, or the computer-human output which is part of it, even be meaningful information? Surely physicalism assumes, because it needs, the subjective consciousness it denies.
So physicalists are being disingenuous. Yet the source of their being able to overlook the unacknowledged observer springs from the way metaphysics, from the Presocratics onwards, has always been done. The Presocratics assumed a single dichotomy, a uniformity of perception, even before they articulated the notion of perceiving. And so too did Descartes, whose lone ‘I’ was an Every-I, and yet whose Every-I was a lone ‘I’ – a single viewpoint, just as the Presocratics’ (implicit) viewpoint had been collective.
The individual outlook that became the starting-point for philosophising was so naked and uncluttered as to apply uniformly to all humans irrespective of gender, race and any of the other categories we now set up as so primary. Thus the stuff of reality devolved into a two-fold distinction, between I/we and It -- the stuff that the lone or collective knower peers into or at.
Descartes could make the peerer single because he did not yet face any of the difficulties for individual and cultural knowing that post-19th-century relativism (to which, in fact, his subjectivity would lead) would introduce. Whatever the truth of relativism, it is right to emphasise that there must be, in any assertion of truth or objectivity, somebody’s viewpoint on the whole.
How, though, do many viewpoints co-exist and how do they communicate? The whole trend of metaphysics has been to downplay inter-subjectivity; that is, the intermeshing of different consciousnesses. There are actually two splits, not just one – a split between the knower and the known, and between the knower and other knowers, who are only ever at best partly known. Once the knower/known distinction is set up, there is not only the problem of how to bridge the gap between consciousness and material stuff (I and it), but that of how to bridge the gap between my consciousness and other people’s (I and thou).
Our view of reality has to accommodate not just subjectivity and objectivity but also multiple subjectivities. If subjectivity is neither lone nor joint, how do different subjectivities exist? Where? And how do they interact? Are other knowers part of the outside stuff that I am trying to know about, or (like me, the lone knower) separate from it as knowers? Are other people just enclosed in my net of consciousness, opaque to me, static objects for me to be aware of? Are they warring world-containers – one solipsism clashing with another, as they are for the Romantic poet whose ‘all-feeling’ is violated by the human figure struggling up the mountain towards him, or as they are for Sartre, for whom ‘the Other’ is an affront? Or aren’t they rather, as Emmanuel Levinas and Timothy Sprigge see them, the basis of knowing?
It is not just into a physicalist view of reality that other minds fit uneasily, but into that of Descartes and representative realism. In the chronology of argument, they only make an appearance after the problem of the external world is sorted out, and they are then simply slotted in, usually as some sort of deduction from the physical, to be inferred from observable stuff and behaviour. They are treated as an afterthought, and, in post-Cartesian philosophy, one of the most boring of philosophical problems.
But they are a problem, and one that persists even with attempts by idealism to heal the mental/physical breach. Wittgenstein said (in the Tractatus) that if he wrote a book called The World as I Found It, he would have to include his body, and that the thing that ‘alone could not be mentioned in that book’ would be the subject ‘I’, for ‘in an important sense there is no subject … The subject does not belong to the world: rather, it is the limit of the world.’ In fact, says Wittgenstein, ‘… solipsism, when its implications are followed out strictly, coincides with pure realism. The self of solipsism shrinks to a point without extension, and there remains the reality co-ordinated with it’. It is as if individual consciousness swallows the world in a giant Möbius curve. The invisible, taken-for-granted ‘I’ is outside the picture looking in, and the world is what the ‘I’ observes.
But how do other people’s ‘I’s fit into a shared world? In this collective solipsism or this collective objectivity, what about the other limits of the world (or should that be ‘of other worlds’)? Subjective idealism is too subjective to accommodate the world and other minds. It wouldn’t work, any more than ‘The Matrix’ would. Supposedly such a brilliant portrayal of representative realism, the film’s virtual-reality premise is surely unfeasible, since it would be impossible for the sense-data of the various trapped brains to produce, share, and interact in, a communal world.
In the film, the evil aliens are able to manipulate earthlings’ brains to give each of them a virtual reality. But how could the aliens specify the exact nature of each of these virtual realities, which would surely depend on each brain-owner’s memories, associations and ways of perceiving? How could one brain-owner’s virtual reality enter the virtual reality of another’s, since (à la Wittgenstein) each subject really is at the limits of the virtual-reality world that balloons out of it, and it is as if each brain is singly creating the world? There would have to be a mass telepathising, a place where all consciousnesses meet; which of course is supposed to be provided by the ‘matrix’ (invoked, like the familiar portentous ‘portal’ between worlds and times, as the ultimate (non)explanation).
The matrix is an impossibly supernatural contrivance, on which ‘The Matrix’ relies, as Bishop Berkeley relied on his own impossible contrivance, God. ‘What do we perceive besides our own ideas or sensations?’, demanded Berkeley, and he denied that there could be any unperceived matter and substance underlying the primary and secondary qualities, which he first collapsed into one another, and then into the sense impressions (ideas) they give rise to. So that ultimately all that exists (for Berkeley) is ideas, plus the minds (‘spiritual substances’) that perceive them; God being another non-explanation, an infinite spiritual substance that is supplying the virtual reality.
But how, for Berkeley, do the minds that have the virtual reality interact with one another? His position involves a process similar to Descartes’ of inference from bodies and behaviour to minds, an inference gleaned from the physical. For presumably, what each mind actually perceives is a cluster of ideas (someone’s body and its behaviour). In addition each has what Berkeley called a ‘notion’ of someone else’s mind, which sounds pretty feeble and uninformative. Of course, in a sense, the mind, because it is a ‘spiritual substance’, is more substantial than the ideas that enclose it; but, insubstantial or not, these enclosing ideas surely need to exist in a shared, public space in order to be perceived by other minds.
Which in fact (for all the initial exciting fanfares of a mind-dependent reality) it ultimately turns out they do. Berkeley is, after all, an Anglican bishop, and he believes literally in the Genesis account of creation. God, it emerges, did indeed create the world and its objects, which are one sort of ‘idea’ (‘archetypes’) and have ‘a real existence’ as enduring, objective things. And then there is also another sort of idea – ‘ectypes’ – which are each mind’s individual subjective perceptions of the persisting archetypes.
Ultimately, then, Berkeley’s idealism does coincide with realism (as the early Wittgenstein puts it), but with a sort of dualistic realism (which the early Wittgenstein did not have in mind). And Berkeley unintentionally preserves the mind-thing gap that he was attempting to abolish, even if things, for him, are less thingy than in usual dualisms – archetypes in God’s mind of which human minds have ectypes. There are two types of thing in the world, perceiving stuff and perceived stuff; and a correspondence between them. And maybe there is a public sort of space, too, in which God keeps the archetype ideas in existence – not separately for each person, like a sort of virtual reality, but in a public sort of way. Where otherwise would the finite spirits be ‘present’?
But mental substances are still left cut off from one another, each bounded by (solid) ideas. Berkeley, like Descartes, neglected, as his physicalist successors neglect, the human aetiology of knowledge – that the way we become aware of the world and of the human conceptualisation of it is through the mediation of adults, primarily our mothers. Babies surely have a sense of other people, their emotions, etc., and interact personally and emotionally before they have a sense of things. They are introduced to the world, as they are humanised, by the adults around them. The first dichotomy is that of I-Thou, not I-It.
Ultimately, William James’s revulsion at the notion of an ‘automatic sweetheart’ is not that much of an advance on Turing. Of course no one could treat that notion seriously, James says, ‘[b]ecause, framed as we are, our egoism craves above all things inward sympathy and recognition, love and admiration. The outward treatment is valued mainly as an expression, as a manifestation of the accompanying consciousness believed in.’
This is still to treat the other merely in relation to oneself, the knower. So too are more modern attempts to champion subjectivity, qualia or intentionality against physicalist reductionism. Nagel’s ‘what it is like …’ formula, Searle’s Chinese Room, even Kripke’s a posteriori necessity, resemble the arguments of their opponents in reducing human awareness to a lone, discrete perceiver or set of perceivers – the single or joint observers of, and agents in, a passive reality. Physicalists are unintentionally Cartesian, and not just in the obvious way – being unable properly to dispense with the non-physicality of the mental – but in effectively assuming a unitary, rather than plural, starting-point. And this single or collective solipsism fails to do justice to the way human minds interact.