There is a lot of talk these days about the imminent danger posed to humans by AI. I think the principal worries fall into the following three categories:
- Job fear: The worry of getting replaced by robots or artificial agents.
- “Deep Fake” anxiety: The apprehension that it will become increasingly impossible to distinguish between truth and falsehood, reality and fantasy.
- Fear of existential threat: The angst of being extinguished at some point, either because some vile robots or artificial agents wish to eliminate us, or because some malfunctioning AI unleashes a nuclear catastrophe or war or the like, or, finally, because we surrender ourselves to some half-human, half-artificial humanoid existence and thus inadvertently stop being what we used to be: fully fledged humans.
There may be further types of anxiety related to AI that do not fit into any of these three categories: the loss of privacy through facial recognition, for instance. But I am pretty confident that more than 90 percent of the public AI frenzy does fit under one of these umbrellas.
So, how scared should we be? I think one might attach a label of justification to each of these three types of worry:
@1. Job Fear
The fear of losing one’s job is a perfectly justified worry: the evidence is already there, and it is acute not just for “stupid” jobs.
However, even though this is no consolation for a specific worker losing his or her job, it has correctly been stated over and over again that, from a purely economic point of view, for each job lost to AI, AI will create one or even more new jobs that cannot be executed by AI, because AI will not yet have been programmed to take them over. Often, it will not even be possible to replace them at all, because AI is incapable of handling trade-offs between efficacy and efficiency on the one hand and human well-being on the other. The latter target is far too complex to be comprehensively “understood” even by humans themselves. So, how on earth could they possibly design corresponding algorithms?
On the job front, what we do need is public and private investment in better education of the workforce. But we need not worry about our brains being left with nothing to do.
@2. Deep Fake
The apparent fading away of more or less evident criteria for what is real or true is the hallmark of our times. Here, though, the danger is already globally perceived: by governments, by public agencies and by civil society.
Deep fakes, “evil” bots and other “vile” virtual agents currently enjoy great public attention because they are fairly new phenomena. Yet it is also evident that nobody wants to trade real agents for fake ones, or a real reality for a virtual one. Not for good, that is. There will be plenty of fake realities, detected and undetected, but, leaving temporary excursions into fun and games aside, it is inconceivable that people will attribute greater value to a fake or virtual reality than to a real one.
The main reason the “virtual” is so extremely hyped these days is that it carries with it our utter astonishment at how easily our senses can be fooled. Once people get used to it, once the surprise is gone, the attraction will vanish and resistance will grow. The harder it becomes to distinguish the real from the virtual, the more people will actually cherish the real.
The clearest empirical evidence for this is that augmented reality, surprisingly for many a market connoisseur, has turned out to be a significantly larger market than virtual reality. Humans don’t want to live in an unreal world or lead an unreal life, but they are perfectly happy to see their limited human resources assisted by technical tools of their choice.
I am therefore confident that people will always be eager to retain the capacity to distinguish the true from the false and the real from the unreal, by whatever means available.
@3. Existential Threats
When it comes to the extinction of the human race, I think we should distinguish between
- an unintended, accidental event of this kind, due to mistakes or to evil artificial agents, both of which are perfectly conceivable and worrisome, and
- the creeping self-destruction of the human race by means of artificial agents mounted on or intertwined with human beings.
Events falling under the first bullet point would not be a novelty to us, would they? They would not pose a threat that was specific to AI only. We are e.g. most familiar with the risk of nuclear self-destruction and know that this risk may at best be contained politically and/or militarily.
As for the second bullet point: I think this worry belongs to the realm of science fiction and need not worry us at all.
We are so worried about the chip in our brain because we think it will alter our conscious self. We are scared of turning into some merely partly human tech clone.
Is that a justified anxiety? I don’t think so.
Nobody wants to subordinate him- or herself to a machine. But people have always been most ready to accept tools that make their lives easier. They invented the wheel, the book, glasses, hearing aids…. So what is so worrisome about letting AI augment natural human intelligence?
Would a human no longer be human once he has a chip implanted in his brain that helps him express himself in 20 foreign languages or multiply the cube of 0.398765 by the square root of 367.99981 within a split second?
People improve their physique by visiting cosmetic surgeons or taking muscle-building drugs. They change or align their sexual identities. And if they happen to suffer from some mental problem, they take drugs to minimize it. Would any of us seriously contend that people undergoing such self-alteration by means of chemistry or mechanical intervention stop being human in the traditional sense of the word? Come off it. So where is the difference when it comes to AI? There is none!
The mistake people commit when they fantasize about half-human, half-artificial hybrids is that they forget, or do not see, where the difference between humans and technology lies: tech has zero intentionality; it aims for nothing at all. Tech is invented and executed by humans with intentionality, and yes, at times tech accidentally lets things happen that were unintended and that humans do not actually want to happen (like letting a Boeing 737 MAX crash). But that is a risk falling under the first bullet point above.
Tech is by definition incapable of replacing, even partly, human intentionality, conscience or judgement. Tech will never replace those qualities, because to do so it would have to be programmed accordingly, and as long as humans do not understand even remotely why they make certain judgements, why they aim for x and not for y, and why they refrain from doing z, humans will remain humans. Will they ever fully understand any of that? Thank God, no! Humans will at best be able to describe their intentions, their decisions and their reasons. But description is not explanation.