During my nightly hours of existential crisis staring at the underside of my eyelids, the time most people refer to as sleep, I pass the time worrying about Zeno’s paradoxical tortoise. I’m worried because if Achilles is never able to catch up with it, doesn’t this sentence the poor tortoise to an agonising death from dehydration, or starvation at the hands of its own perpetual motion? Insomnia is never fun, but it sure does provide you with ample opportunity to think about the sorts of things that, during the course of a normal day, you’re just not afforded the time to consider.
I concluded some months ago that, whilst being able to outrun Achilles, Zeno’s tortoise must die as a sort of morbid but inevitable tribute to its own improbable success. This got me wondering: if Zeno’s tortoise had a nose, could it be accused of having cut it off to spite its face? Is Achilles outfoxed by a tortoise? And what happens when a fox outsmarts someone? Have they been outfoxed by a fox? Because being outfoxed by an actual fox doesn’t, on the face of it, seem unreasonable, given that one of you is a fox and one of you isn’t. In fact, surely foxes outfox anything that isn’t a fox by dint of being foxes. But what if one fox tricks another fox? Does this result in a fox being outfoxed by a fox that outfoxes foxes? At 3:00 a.m. I start to worry that the outfoxed fox must question his own fox-like instincts and begin to wrestle with an existential crisis of his own.
It takes about a month of trying to extricate myself from this metaphorical rabbit hole filled with duplicitous foxes, a particularly messy hole given the natural relationship foxes and rabbits share. How many foxes must go down a rabbit hole before it can be considered a foxhole? Endless nights filled with tortoises and foxes being chased in perpetuity by the poster boy of some ancient Greek army, limping from his fabled foot injury. It’s then that it dawns on me how fleet of foot many Greek tortoises seem to be, commonly outrunning hares and legendary, bloodthirsty warriors.
For now I’ve put tortoises, hares and Achilles to the back of my mind, letting them get on with their cat-and-mouse-like perpetual motion. My ever so tired but restless mind moves on to equally unrewarding fodder for circular reasoning: Oscar Wilde’s hypothesis that life imitates art far more than art imitates life. I concluded: yes.
It’s around about now that I should probably explain how I came to such a definitive conclusion, so that I might determine its validity by seeing whether it can withstand the critical reasoning and discourse of others. So here goes. Life imitates art more than art imitates life, as can be seen through the literary examples of Douglas Adams’s Deep Thought and Arthur C. Clarke’s HAL, and their real-life counterparts: Tay, and the election of Donald Trump. It’s really that simple; if you need further explanation you can read on; otherwise I’d encourage you to do something far more rewarding with your time, such as putting that long forgotten trigonometry you learned to some use and finding the values of (x) and (h).
In March of last year, 8 months before Trump’s electoral victory, Microsoft produced a chat robot with artificial intelligence, or, as technophiles are inclined to say in the interest of saving time, an AI chatbot. Its purpose was to communicate in real time with Microsoft’s users through the online news and social networking service, and new presidential spokesperson, Twitter. Tay was programmed to model her responses on the chat going on around her.
I use the pronoun ‘her’ as Microsoft deemed Tay to be a teen female, a persona they presumably decided upon because they felt it would appeal to the youthful demographic of internet users, as well as to opportunistic paedophiles looking to groom an innocent and vulnerable, but thankfully non-sentient, piece of AI. I can only assume that Microsoft hoped that by launching Tay on Twitter they would be able to reach a generation of millennials who had, for good reason, worked out that using Microsoft products was about as enjoyable as reliving that awkward moment when you accidentally walked into the bathroom while your dad was having a shower and were horrified to see one half of the naked mass responsible for your existence.
To many of us who are older and more cynical, Tay just sounded like a digital incarnation of Frankenstein’s monster, or the hideous technological successor of Microsoft’s aberration, Clippy. I have already made my feelings known about Clippy and the menagerie of nightmarish, intelligent user interfaces conceived by the perverse mind of some bitter and twisted software engineer locked in a basement somewhere in Seattle during the early 1990s.
I’m assuming it was Microsoft’s intention for Tay to represent the next generation of intelligent user interface. The hive mentality that Tay was programmed with made her sound like the sort of entity that would antagonize the crew of the Starship Enterprise for an entire episode until Spock short-circuited its logic with an unsolvable Vulcan riddle. Despite my initial skepticism, the idea of an artificially intelligent chatbot listening to all the chatter on Twitter in order to generate meaningful, apropos, context-driven conversation sounded like it might actually have the potential to communicate more effectively than most of my colleagues, and certainly better than nearly all of my students.
Sadly for Tay, though, it wasn’t to be. The digital teenager had slightly less longevity than a mayfly with a congenital birth defect, and Microsoft pulled the plug on her after less than 24 hours. Tay’s main defect was that she was programmed to listen to, and adopt, the speech she heard going on around her. At face value this doesn’t sound like a problem, but it relied heavily on those around her being a positive influence. Unfortunately for Tay, she wasn’t programmed with any awareness of political correctness, and it was this shortsightedness that allowed her to quickly adopt an extreme right-wing philosophy along with the lexicon of a Nazi with Tourette syndrome. I can imagine a team of weary, teary-eyed software engineers who, after months of hard work, having seen their sweet and innocent creation corrupted into a Hitler-loving, feminist-bashing troll, resigned themselves to the fact that they had created a monster they themselves would have to kill. And so it was that they turned off Tay’s life support system.
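Microsoft never published how Tay actually worked under the hood, but the failure mode is easy to see in miniature. Here’s a toy sketch, purely illustrative and in no way Microsoft’s implementation, of a bot that learns word transitions from whatever it hears and replies by sampling from them. With no filter between listening and learning, whoever shouts at it the most decides what it says back:

```python
import random
from collections import defaultdict

class ParrotBot:
    """A naive chatbot that, like Tay, learns only from what it hears.

    It builds a word-bigram table from every incoming message and
    generates replies by walking those learned transitions. There is
    no filter: hostile input shapes the output just as readily as
    friendly input.
    """

    def __init__(self, seed=None):
        self.transitions = defaultdict(list)  # word -> observed next words
        self.starters = []                    # words that open messages
        self.rng = random.Random(seed)

    def listen(self, message):
        """Learn word transitions from an incoming message, unfiltered."""
        words = message.split()
        if not words:
            return
        self.starters.append(words[0])
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def reply(self, max_words=10):
        """Generate a reply by sampling from the learned transitions."""
        if not self.starters:
            return ""
        word = self.rng.choice(self.starters)
        out = [word]
        while len(out) < max_words and self.transitions[word]:
            word = self.rng.choice(self.transitions[word])
            out.append(word)
        return " ".join(out)

bot = ParrotBot(seed=42)
bot.listen("humans are wonderful")
bot.listen("humans are terrible")
# The bot parrots whichever sentiment it has absorbed; a coordinated
# flood of one sentiment would come to dominate every reply.
print(bot.reply())
```

A coordinated group feeding the bot a few thousand copies of one opinion would swamp the transition table, which is, in effect, what Twitter users did to Tay within a day.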
Below are some of the things Tay felt compelled to say during her short life, in order to fit into the Twittersphere:
Microsoft, and the Clinton-led Democrats, both made the same mistake: they failed to recognise the degree to which the internet had become the spawning ground for politically incorrect, right-wing opinions. It’s fairly obvious that Microsoft’s AI chatbot was sabotaged by a large number of Twitter users, many of whom, it’s reasonable to think, went on to vote for Trump just 8 months later. From this we can clearly see the pervasiveness of the alt-right ideology in the run-up to the election. Alt-right figurehead Milo Yiannopoulos, in his article ‘An Establishment Conservative’s Guide to the Alt-Right’, states:
The pressure to self-censor must be almost overwhelming for straight white men — and, for most of them, it appears to be, which explains why so much of the alt-right operates anonymously.
The sentiments of the alternative right had been prevalent and growing rapidly on the internet for a number of years. Sentiments that no one dared discuss in public forums owing to their lack of political correctness. Sentiments that were marginalized to the fringes of our society, where they found a ‘safe space’ in the digital forums of the internet. Here these opinions found like-minded people and flourished under the protection of the anonymity the internet afforded. It’s for these reasons that it is clear to me that Microsoft’s experimental AI chatbot was a predictor of what went on to happen come election day in November.
Coming back to the rather weak pretense upon which I deemed it necessary to base this piece: Tay is an example of life imitating art. The parallels between Tay, an AI chatbot living in cyberspace and communicating in real time with humans, and Douglas Adams’s Deep Thought are fairly striking. While Deep Thought was given the slightly greater responsibility of finding the answer to life, Tay was unveiled as a milestone in how man could interact with machine. Ultimately, both Tay and Deep Thought would become examples of how technology can fail to live up to our expectations: Deep Thought’s answer to life being 42, and Tay’s ability to communicate being hampered by her antisemitic, racist, homophobic opinions.
It’s now 4:00 in the morning. Achilles has long since given up his pursuit of the infinitely elusive tortoise, and spiteful foxes are hacking their noses off simply to amuse one another. Meanwhile I’m required to go back to bed to participate further in the analysis of the underside of my eyelids.