Home

Are we machine?

Advertising puts a lot of money behind machines that predict us better, but also behind social network services that make our behaviour more predictable. Do we run the risk of losing the Turing test by becoming less human?

Marius Brill
17 March 2014

Know someone well enough and they can be fairly easy to predict.  I can, for example, predict my wife’s reactions to me forgetting to put the rubbish out, forgetting to scrape my plate before it goes in the dish-washer or forgetting a pair of women’s knickers on the back seat of the car.

Mini machine man robot. Image: Wikimedia/D J Shin. Some rights reserved.

Predicting general human behaviour, though, is a lot harder. As a species we have relied on diversity to give evolution its best chance to improve, so whatever a majority of us may predictably do, a minority will almost always exist that defies prediction. Human reactions can only be anticipated in very generalised percentages. Once our basic survival needs, sustenance and shelter, are satisfied, there’s little that is truly universal about human experience, desires or behaviour.[1] But there are rich rewards for those who can come close to predicting people’s needs. Knowing what a potential customer wants before they want it puts a retailer way ahead of the competition.

A couple of years ago the aptly named ‘Target’ chain store in America was exposed as a master of predictive analytics when a man went into a Minneapolis branch to complain.[2] He angrily waved a coupon book at the manager.

‘My daughter got this in the mail! She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?’

The mailer did, indeed, contain advertisements for maternity wear, nursery items and photos of gurgling babies. The manager apologized profusely and then, apparently, called a few days later to apologize again.

On the phone, though, the father was a bit embarrassed. ‘I had a talk with my daughter,’ he said. ‘It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.’

What none of them knew was that Target’s computers crawl through the buying data harvested from loyalty cards. The company had identified about 25 products that, when analysed together, allowed it to assign a ‘pregnancy prediction’ score to every shopper. What’s more, it could estimate a due date to within a small window and send coupons timed to specific stages of pregnancy.

‘Take a fictional Target shopper named Jenny Ward,’ one Target employee told The New York Times, ‘she’s 23, lives in Atlanta and in March bought cocoa-butter lotion, a purse large enough to double as a diaper bag, zinc and magnesium supplements and a bright blue rug. There’s, say, an 87 percent chance that she’s pregnant and that her delivery date is sometime in late August.’
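Target has never published its model, but the general shape of such a score is well understood. Here is a minimal sketch, in Python, of a logistic ‘pregnancy prediction’ score over purchase-history features; the products, weights and bias below are invented for illustration and are not Target’s.

```python
# A hypothetical logistic score over loyalty-card purchases.
# The weights are invented; a real system would learn them
# from millions of shopping histories.
import math

WEIGHTS = {
    "cocoa_butter_lotion": 1.4,
    "oversized_purse": 0.9,
    "zinc_supplement": 0.7,
    "magnesium_supplement": 0.7,
    "bright_blue_rug": 0.3,
}
BIAS = -2.5  # most shoppers should score low by default


def pregnancy_score(basket):
    """Return a probability-like score from the items a shopper has bought."""
    z = BIAS + sum(w for item, w in WEIGHTS.items() if item in basket)
    return 1 / (1 + math.exp(-z))  # squash to the 0..1 range


# The article's fictional shopper, Jenny Ward.
jenny = {"cocoa_butter_lotion", "oversized_purse", "zinc_supplement",
         "magnesium_supplement", "bright_blue_rug"}
print(round(pregnancy_score(jenny), 2))  # a high score triggers the baby coupons
```

Scale that up to around 25 products and a few years of time-stamped baskets, and estimating a due date ‘to within a small window’ stops looking like magic.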

Now the social media behemoths, Facebook and Twitter, have begun to show off their abilities in this kind of predictive analytics. In a Valentine’s tie-in PR move, Facebook published figures showing that they could predict, from the content and frequency of users’ posts, when a couple would fall in love.[3]

Having a lot of friends in common is a prime predictor. Then, when two people appear in lots of pictures together, and start checking out each other’s online activity, they’re well on the way. Apparently, there is then a flurry of Facebook ‘interaction’ when two people are about to enter a relationship. But, 12 days before the official ‘In a relationship’ update, everything goes quiet, with both posting an average of 1.67 updates per day; presumably finding better things to do than sitting on Facebook.

Interestingly for Facebook-stalkers, if you want to know if a breakup is about to occur, a good sign is when couples stop appearing in pictures together or commenting on each other’s updates.
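Facebook has not published its method, but the raw ingredient it describes - counting the timeline interactions between two people per day - is easy to picture. A minimal sketch follows, with an invented log format, and the 12-day window and the 1.67-per-day figure treated purely as illustrative thresholds.

```python
# Sketch: count interactions between two users per day, then measure the
# average rate over a trailing window. The data format and thresholds are
# assumptions for illustration, not Facebook's.
from collections import Counter
from datetime import date, timedelta


def interactions_per_day(log):
    """log: iterable of (day, actor, target) records for one pair of users."""
    return Counter(day for day, _, _ in log)


def average_rate(counts, end, days=12):
    """Mean interactions per day over the `days` days ending at `end`."""
    window = [end - timedelta(d) for d in range(days)]
    return sum(counts.get(d, 0) for d in window) / days


log = [(date(2014, 2, 1 + d // 3), "a", "b") for d in range(42)]  # toy surge
counts = interactions_per_day(log)
print(average_rate(counts, date(2014, 2, 14)))  # compare with the 1.67/day lull
```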

More seriously, Twitter, it was claimed, had the potential to predict post-natal depression.[4] Dr Eric Horvitz, head of Microsoft Research, analysed the tweets of several hundred new mothers over the three months before and after giving birth. He studied the changes in the amount of time they spent on Twitter, the people they were in touch with, and the language they used. According to Horvitz, even an increase in the use of ‘I’ can suggest someone is becoming more introspective and self-focused: symptoms linked to the onset of depression. Though what Twitter is all about, if it’s not me, me, me (oh, and, yeah, sometimes you), I’m not sure.
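The full study combined many signals - posting volume, social graph, language - but the pronoun observation at least is easy to illustrate. A minimal sketch, with invented tweets, of measuring the rate of first-person singular words:

```python
# Sketch: the share of words in a set of tweets that are first-person
# singular pronouns. The tweets are invented; the real study used many
# more linguistic and behavioural features than this.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}


def first_person_rate(tweets):
    """Fraction of all words that are first-person singular pronouns."""
    words = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    return sum(w in FIRST_PERSON for w in words) / len(words) if words else 0.0


before = ["off to the park with the baby", "lovely day with friends"]
after = ["i just cannot sleep", "i feel like i am getting this all wrong"]
print(first_person_rate(before), first_person_rate(after))  # the rate climbs
```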

Still, in a market where the social media giants have struggled to meet their full ‘monetized’ potential, there is more to these displays of benign analytics than meets the eye.  Both, no doubt, hope to court more targeted and, therefore, lucrative business from advertisers who are still stuck randomly advertising their ‘Asian Babes’ on the BNP social media pages.

So ubiquitous has social networking become that we forget the medium is still a computer. The databanks of our emotional zeitgeist are being filed away as arrays of numbers and there’s nothing computers like doing more than crunching them like bowls of Frosties. Forget the vulnerability of your personal data; it’s the human meaning that can be extracted from our massed data that will leave us truly defenceless. Future targeted advertising won’t be about cosying up to you by knowing your name and what pets you have, it’ll be about anticipating why you went to that site, visited that pub, lingered longer on those images, and what that means you absolutely won’t be able to resist buying, based on the behaviour of millions of others. The nature of capitalism is to let money lead the way. If it pays to produce machines honed to predict human behaviour, then they will only get better and better at it.

However, is there something more insidious going on?  Are we, in our turn, making ourselves more predictable? Are we so delighted with our ‘app’y existence that, as nature’s paragon of adaptability, we are changing our behaviour to become more amenable to the devices we use? If Siri needs certain commands to understand us, we supply them; we’re ‘calling’ our friends, never ‘ringing’ them. We’re ceding the independence of our memory for information, happy to rely on instant access to the opinion of the collective disseminated by Google and Wikipedia. My handwriting is going to pot and slowly our daily lives, our attention, our time, our priorities, are being sucked in by the need to keep our eyes on our screens. It doesn’t take a Deep Blue or HAL to predict what you’ll be doing at, say, 9.30am on a weekday.  More than likely you’ll be staring at a computer.  Your audial interface holes - read ears - are probably plugged in to a machine whenever you’re in transit and, who knows if it’s true that if you’re a bloke you’re thinking about sex every two minutes, but one thing you’re almost sure to be doing is checking your phone screen every six.[5]

Originally, machines were developed to do tasks more predictably than humans. But the more predictable we are, the more like machines we become.  We are, like eager Frankenstein’s monsters, becoming an amalgamation of technology and tissue. What was Frankenstein, after all, but the Romantic vision of a logical end of the Industrial Revolution?  Our willingness to envision ourselves as machines is endemic. I cannot remember a secular analogy for how a body functions, or a brain operates, or how life works that wasn’t predicated on mechanical principles or computer systematics; that is despite the evidence, in our every waking moment, of the - so far - inexplicable, incomputable, mechanical-analogy-resistant human consciousness.  We seem eager to see ourselves as machines.  Even our most sophisticated brain-scanning equipment is based on measuring electrical currents. After centuries of this mechanistic perspective, which challenged the ancient ideas of there being a ‘divine spark’ or a ‘soul’, is it any wonder that, far from fearing our convergence with the machine, we’re welcoming it?

In 1958, recalling a conversation with the mathematician John von Neumann, Stanislaw Ulam described the ‘Singularity’: the moment when AI would surpass human intelligence, fundamentally changing civilization and human nature for ever.[6]  It all must have seemed a long way off, but how could von Neumann have anticipated the apparent willing self-dumbing of humans in the face of technology offering such low-hanging fruit as diary reminders, instant access to trivia, infuriated birds and crushed candy?

Perhaps, then, the real singularity is the convergence of smarter, less predictable machines and dumber, more predictable humans. There’s no doubt we feel it’s close. In November Google admitted that they no longer completely understood how their ‘deep learning’ decision-making computer systems had made themselves so good at recognizing things in photos. At the Machine Learning Conference in San Francisco, Google software engineer Quoc V. Le said he couldn’t actually work out why his software was better than the humans he had polled at telling that an image showed a paper shredder.[7]

It would seem that this is just good old human error. After millennia of making mistakes, we’re particularly good at it.  We make errors - that’s how we learn and, indeed, evolve. I’d suggest this simply shows that there are a significant number of humans who are too stupid, or not bothered enough, to recognise a paper shredder - or that some other humans have produced some really dumb designs for paper shredders.

Not Google, though: another giant corporation eager to attract advertisers and never shy of blowing its own trumpet, it appears to be claiming this ostensible failure as a victory.

"We had to rely on data to engineer the features for us, rather than engineer the features ourselves," Quoc explained.

In other words, Google’s AI is so powerful they have effectively brought us to the borders of the Singularity: the programming seems to think independently of its programmers, and its cognitive processes are so complex they are, apparently, inscrutable. At last - a machine that eludes the essence of a machine: predictability. And it’s got Google branded right across it.
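What ‘engineering the features for us’ means is easier to see on a toy problem than on photos of paper shredders. Below is a minimal sketch, in Python with NumPy, of a tiny two-layer network learning XOR: a problem a straight-line model cannot solve on the raw inputs unless a human hand-crafts the right feature, but which the network solves by inventing its own internal features. It only illustrates the principle of learned features; it is nothing like the scale of Google’s systems.

```python
# Sketch: a two-layer network learns XOR by building its own hidden
# features from data, rather than having a human engineer them.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: learned features
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):                          # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(2).ravel())  # should settle near [0, 1, 1, 0]
```

The point is not the arithmetic but the division of labour: no human wrote down what the hidden units should detect, and reading the trained weights afterwards tells you surprisingly little - a small-scale version of Google’s inscrutability.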

All this is far beyond the Artificial Intelligence posited by Alan Turing.[8] He envisioned a computer that could converse with such human-like qualities that it could deceive a person into thinking it too was human. Judging by the number of people apparently fooled by the digital come-on show-babes on dating websites, this is actually quite a low bar.  Turing’s was a sort of ‘fake it ‘til you make it’ paradigm; not intelligence as such, just apparently so, based on the predictability of most human conversation.  So as we act more predictably, is our own intelligence more artificial?

With all the information, the details of our lives, that we have uploaded into the machines of the social networks, we have become, to their analytics and analytics to come, as predictable as the vanishing of those ‘show-babes’ from the www.desperateforatouchofhumanwarmth.com website the moment payment clears.

And yet we seem to love playing out our lives through the eyes of the machine. We’ve instagrammed ourselves, we’ve filtered and recoloured, punched up, lomoized and retroized our memories. The result seems such a nicer place to have been than the boring ‘real world’.

And in this have we been coerced, or co-opted? Given Google and the social networks’ lust for commercial opportunity, isn’t it time to ask if all the updates and tweets, the stories from our souls that we have invested in them - and their machines - are simply mechanising us? Forget the singularity. Are we machine?

Screw their data, do something surprising today… and don’t tell Facebook.

 


[1] Maslow’s hierarchy of needs is a theory in psychology proposed by Abraham Maslow in his 1943 paper ‘A Theory of Human Motivation’, published in Psychological Review.

[2] ‘How Companies Learn Your Secrets’, Charles Duhigg, The New York Times, 16 February 2012.

[3] ‘Predicting Love And Breakups With Facebook Data’, Gregory Ferenstein, 14 February 2014.

[4] ‘Predicting Depression via Social Media’, Munmun De Choudhury, Michael Gamon, Scott Counts and Eric Horvitz.

[5] ‘Here’s The Cold, Hard Proof That We Can’t Stop Checking Our Phones’, Charlie Warzel, 7 October 2013: ‘According to new data, users are unlocking their phones an average of 110 times per day.’

[6] ‘The first use of the term "singularity" in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue"’ http://en.wikipedia.org/wiki/Technological_singularity

[7] ‘If this doesn’t terrify you... Google’s computers OUTWIT their humans - “Deep learning” clusters crack coding problems their top engineers can’t’, Jack Clark, 15 November 2013.

[8] ‘The Turing test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being.’ http://en.wikipedia.org/wiki/Turing_test
