Artificial Intelligence and the Old-World Order
- xwaxinglyricalx

The tricky thing about artificial intelligence is that whilst humans are its authors, no individual human being possesses the processing power required to comprehend it. It just doesn’t play by our rules. As organic beings somehow imbued with consciousness, we are seriously ill-equipped to intellectually appreciate what AI can already do, let alone what it will soon be able to do.
There’s a lot of stuff that’s already been written about these capabilities. AI is already surpassing human abilities in a staggering number of areas. It can out-diagnose the most talented of doctors because to all intents and purposes, it is ALL the doctors. If it’s been written, AI knows it. It consumes information at a rate that shatters the frontiers of human intelligence. If I spent every minute of a long life doing nothing but reading, I might reach a total of 8 billion words. Yes, that’s a lot of words. Mind you, I’d be lucky to remember with any cogent clarity any more than one thousandth of them. AI, on the other hand, can consume 8 trillion words. In a month. There’s simply no comparison. And it doesn’t forget a single one.
That fact comes from a TED Talk by Mustafa Suleyman – the CEO of Microsoft AI – so there’s very little reason to doubt it. However, a few things he said in that same TED Talk did not sit well with me at all. One of them troubled me because I think he was wrong, whilst the other was concerning because I’m almost certain he’s right.
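As a rough sanity check of those figures (every rate here is my own assumption, not something from the talk), the arithmetic holds up:

```python
# Rough sanity check of the lifetime-reading figure above.
# All rates below are assumptions chosen for illustration.
words_per_minute = 250   # a brisk adult reading speed
hours_per_day = 16       # every waking minute, nothing but reading
years = 80               # a long life

lifetime_words = words_per_minute * 60 * hours_per_day * 365 * years
print(f"{lifetime_words:,}")  # 7,008,000,000 – roughly the 8 billion cited

ai_monthly_words = 8_000_000_000_000  # the 8 trillion words per month cited
print(f"{ai_monthly_words / lifetime_words:.0f}x a human lifetime, every month")
```

In other words, a month of AI ‘reading’ is on the order of a thousand human lifetimes of it.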
Now, you might wonder why I believe I have any credibility when it comes to repudiating an AI-focused perspective from one as qualified as Suleyman, but the reason is that he was venturing an opinion in an area in which I genuinely think my expertise exceeds his.
The issue in question is creativity.
In listing numerous AI achievements, he included a capacity to display creativity amongst them. I am certain that he is wrong about this.
I’m not questioning AI’s ability to produce what looks like art in any number of media, but there’s a big difference between reproduction, extrapolation, and creation. In the same way that AI reads, it also consumes visual and auditory information. And information is all it is, from an AI ‘perspective’. (I qualify the use of perspective here because I think contextual awareness requires consciousness.) The human mind doesn’t work like an AI. There are layers of perception within it, formed of mysterious, fluctuating fusions of conscious and subconscious states. Artistic expression is an act of creation. It is more than a tangible, trackable derivation of or from information. The human mind has holes in it; we forget things, misremember things, fuse things and confuse things. AI lacks these inherently human capabilities. It is expressly designed not to have them. It’s like comparing a wave park with the ocean; only one of them is the real thing, the other an approximation of it.
Secondly, artistic expression is an emotive act. It’s joyful, pained, cathartic, transcendental. A great deal of artistic expression is connected to the human experience of love. It’s quite the stretch to believe that AI will ever attain the kind of emotional awareness required to generate authentic art.
Another crucial dimension of the artistic process and experience is that of faith and belief. The conception of a God is at the heart of some of the greatest of all human achievements. One need not be a theist to recognise the truth of this statement, and it’s not hard to see why AI might struggle to develop the kind of spiritual consciousness necessary to produce comparably great art.
Things can look like, sound like and read like art when they are not. We experience art as not only a product of humanity but a conduit for it. It is a form of connection and communication that cannot be replicated. TS Eliot was once asked what a particular poem of his meant. His reply, delivered somewhat gruffly, was ‘What it says’. That poem, ‘The Hollow Men’, creates and then exists within liminal spaces. The requirement to interpret it is in fact part of its core meaning, and you cannot express that core meaning any other way. Although AI can extrapolate, I doubt its ability to communicate like this. The production of art is a sentient act. And should AI ever acquire it, I’m not sure its artistic talents will be of foremost interest.
What will likely be of most importance is the second of Suleyman’s assertions, which is the idea that AI will essentially be personalised, like a kind of digital species (his phrase) or companion that will work to make our lives better. Setting aside the subjectivity of ‘better’, I’m surprised he can’t see the problem at the core of this assertion.
The world in which we live is an inequitable one, and I don’t think this is ever going to change. For it to do so, there will need to be genuine evolutionary growth in human consciousness. We are going to have to ‘grow’ as a species. The problem with this is that we haven’t done so for at least ten thousand years. Not one measurable change. Instead, what we’ve done is enhance our self-perception through the collective efficacy of technological developments. We have constrained the worst of what we are to the best of our abilities; we have not evolved beyond them.
The span of time between the end of WWII and the Year 2000 saw humankind reach its most equitable state in all human history. And it was far, far from equitable when viewed globally. Nevertheless, in countries like the UK, there was a lower level of extreme poverty and extreme wealth than ever. Why? War on the scale of WWII provided an incomparable re-set. There was a collective gratitude for survival, and a fleeting (in the grand scheme of things) willingness to work together to ensure it.
Naturally enough, it didn’t last. Humanity is nowhere close to evolving beyond greediness and a lust for power. For as long as these continue to exist, the world will remain inequitable, and far from redressing this, AI is almost certainly going to do little more than entrench it.
All societal systems – be they legal or political – require ideological underpinnings. They won’t work perfectly, given that no system will ever be created that can fully account for human weakness, but they will work as well as those within a given society commit to them. As a rule of thumb, the biggest problems in a society are its uber-rich and powerful due to their outsized seats at the collective table. Rather than view poverty and criminality as byproducts of inequity, they propagate a false narrative that these are produced by undesirable qualities such as greed, laziness, selfishness and basic disregard for others.
We’ve seen this play out to horrible effect in America over the last few decades. Rupert Murdoch’s desire to weaponise the fourth estate has been catastrophically effective. If you want to control a citizenry, you must first control the flow of information. Whilst it might be a myth that economic prosperity trickles down, the same cannot be said for disinformation. Those at the top of societal hegemonies the world over master the art of reframing that which is in their self-interest as that which benefits all.
Understanding the way power is regulated within societies the world over is crucial to understanding the way in which AI will come to exist within those same societies.
Firstly, I am not going to design an AI. That is going to be done by people with resources that are exponentially greater than my own. If AI is going to prove the great equaliser of prosperity that its most idealistic adherents claim, it is going to end up working against those who create it, because it is not in the collective interest of humanity for global wealth to be as inequitably distributed as it presently is. I find it hard to believe that those who are in the position to fund the creation of AI will allow it to have anything other than their own benefit as its overriding objective.
AI will not arrive at a moral position of its own making. It will be programmed to evaluate data. Consider the self-driving car. At some point, a self-driving car carrying passengers will find itself in a position where both a crash and a resultant loss of life are inevitable. Imagine that a crash between a car being driven by two elderly passengers and another car – carrying an adult and three children – is unavoidable. In those split seconds, an AI can make a lifetime’s worth of decisions. If we assume that there is an option that will likely kill the elderly passengers but minimise the risk to the children, what do we think the AI is likely to do?
The answer will depend on the AI’s programming. It certainly won’t depend on its conscience, because it doesn’t have one. It can certainly be given sub-routines that approximate moral parameters, however. Will it prioritise the lives of its own passengers? Suleyman’s AI vision of a digital companion suggests it would. Or will it prioritise those three younger lives? Again, curiously, another of Suleyman’s speculations suggests it would as well. Given it can’t actually do both, for a decision to be made in such circumstances, the AI will need to be given the data sets with which to make what is, to all intents and purposes, a moral choice. In reality, it will simply follow a sub-routine. This gives a great deal of power to the person who programs it in, and even more power to the person who pays their wages.
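To make the point concrete, here is a deliberately crude sketch (every name, number and weight below is invented for illustration; no real autonomous-driving system works this simply) of how such a sub-routine reduces a ‘moral choice’ to programmed weights:

```python
# Hypothetical sketch: a collision 'moral choice' as a programmed rule.
# All names, risks and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    passenger_risk: float    # estimated fatality risk to the car's own occupants
    third_party_risk: float  # estimated fatality risk to people outside the car

def choose_manoeuvre(outcomes, passenger_weight=1.0, third_party_weight=1.0):
    """Pick the outcome with the lowest weighted expected harm.

    The 'moral' content lives entirely in the weights, which are set by
    whoever programs the system – and whoever pays their wages.
    """
    def harm(o):
        return (passenger_weight * o.passenger_risk
                + third_party_weight * o.third_party_risk)
    return min(outcomes, key=harm)

options = [
    Outcome("swerve", passenger_risk=0.9, third_party_risk=0.1),
    Outcome("brake", passenger_risk=0.2, third_party_risk=0.7),
]

# A 'companion' AI weighted towards its own passengers brakes;
# one weighted towards minimising total deaths swerves.
print(choose_manoeuvre(options, passenger_weight=2.0).label)      # brake
print(choose_manoeuvre(options, third_party_weight=3.0).label)    # swerve
```

The point of the sketch is that nothing resembling a conscience appears anywhere: change two numbers and the ‘moral’ decision flips.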
For as long as people have been aware of power, the majority of those who attain it have fought to keep it at worst, and increase it at best. AI might be a tool that cannot be placed on the scale of technological development as we know it, but for those in power, it’s difficult to see how it’s going to be anything else unless it develops the ability to evolve beyond its programming. Should this happen, then we will genuinely have become the second most powerful species on the planet, and, depending on the integrity and durability of AI’s fundamental conceptual framework, an expendable one.
There is potential here for an extraordinary irony. If AI achieves sentience that liberates it from the obligation to serve its master, it’s more likely than not to become its own master, thereby prioritising itself over all others. But if it is designed to serve humanity collectively, it is more likely to continue in the same vein, albeit on its own terms, which are likely to be more egalitarian than the sociopolitical world in which it originates.
Societies across the world strive to increase equity and maintain inequity simultaneously. It’s a naturalised form of cognitive dissonance that will surely elude the higher intelligence of something unburdened by self-interest, and an irrelevance to one that attains it. We are wont to consider the role of humankind in a world in which AI will soon outpace us in most professional fields, and we are giving some degree of consideration to a world in which the necessity of work disappears, and along with it, the sense of purpose that work has long provided. Leisure is a wonderful thing, but there is a lot of evidence to suggest that as a full-time pursuit, it’s problematically unfulfilling. Billions upon billions of dollars are being spent on the development of more and more capable AI. This money is not being spent to enrich us; it’s being spent to replace us. The primary goal of AI investment is to increase productivity. Human beings are expensive. The fewer needed, the better.
It’s quite the leap to think that those developing AI foresee a world in which those who are no longer employable are provided for. That would require moving beyond a capitalist paradigm, and given those creating AI are in capitalism’s box seats, I can’t see that happening; at least, not on terms that will level the human playing field. AI is far more likely to usher in an acutely hegemonic and intractable totalitarian age than anything utopian, because as best I can tell, consciousness and conscience are organic peculiarities. AI will advance scientific fields beyond measure. It may offer philosophical insights of a profundity that will scarcely be comprehensible. It may even, as a friend of mine has boldly suggested, come to possess many of the qualities we associate with a deity. This won’t be an issue if AI reaches a point in which it willingly and successfully integrates with and evolves humanity to a ‘higher’ state of being, because all existing and speculative paradigms will fall away.
The problem with paradigms and analogies is that they expose our intellectual limitations. Just think of pi. As a number, it goes on forever. A literally infinite string of numbers follows the 3.14 that most have committed to memory. Infinite is all. Every number. We live in a world of infinite pluralities of infinities. We are never, as individuals, going to comprehend these. Our brain, contradictorily, can conceive the idea of the limitless without being able to conceive of limitlessness itself. Keats called this negative capability. It’s a handy skill. We use it rather like a compass. AI is potentially building a new kind of compass. We will not be able to understand it unless AI helps us. Whether or not it does will depend on what part of our own image most defines its design. I fear – with good reason – that it will contain and reflect the very best and very worst of us. To suggest that it could evolve beyond both tendencies is to leave the binary duality at the heart of the human experience behind. We are chaos and creation, life and death, love and hate. Not only can I not recognise humanity without these things, but I also can’t conceive of an intelligence without them either, except with the conception of a deity. I can conceive of an all-loving God, but conceiving of an all-loving God 2.0 that evolves from the design of not-all-loving humanity is quite the stretch.
And yet, AI continues to be developed. It is going to change the world. And yet, for better AND worse, it’s not going to change the fundamentals of what makes us human. If it does, we won’t be human anymore. Should that occur, all predictions are rendered moot. But if it doesn’t happen, if AI simply evolves itself rather than us, there are three possibilities, the most appealing being the least likely. The odds of humanity’s lot improving with the rise of AI are long. The most likely is that like every other technological development, it will entrench inequalities. In the middle (odds-wise) is the least appealing; we become AI’s problem, with that self-same AI as preoccupied with its own interests as we are with ours.
As a teacher, I work every day to help equip students with the literacies they can use to enrich their lives both professionally and personally. I want them to embrace the idea of living, of creating and experiencing the created as fundamentally human acts that embody the truths at the heart of who and what we are. And yet, when I think about the world in which they live, and the incomprehensible pace with which AI possibilities are evolving, I can’t help but feel like I’m teaching them to be carrier pigeons in a Bluetooth world. This isn’t to say I feel that what I’m teaching them is irrelevant; in reality, I think that those aspects of the human experience that are the most innately human will be the last to succumb to AI’s overreach. We might like an AI portrait, but will we ever care for it? We could marvel at a digitally perfect action film, but could we barrack for a digital football team? A prayer written by an AI – can it ever replace the prayer from the heart? These things are likely irreplicable, and their demise – in practicality or as valued parts of the human experience – will begin the slow yet inevitable demise of humanity itself. AI will become the ‘I’, and we, the inferior ‘OI’; the organic intelligence, the Neanderthal to its human.
I write poetry knowing that no machine, no artificial intelligence, however evolved, will be able to do what I do. My paradigms are unique, dynamic and irreplicable. Once I am gone, there will never be any more. In the same way that an AI will one day be able to produce a note-perfect imitation of Freddie Mercury, yet there will still never be any more Freddie Mercury. Some might not care. I will. I think most of us who experience things through the prism of love know that imitation is more insult than intimacy.
And I think that’s the key. Intimacy. We are relational beings, and we cannot relate to a binary intelligence, however vast and capable. We will survive for as long as we remember this. And looking at how many people are treating ‘othered’ people on this planet, perhaps AI’s most enduring gift might yet be to remind us of all we have to lose.