Sarah Spiekermann-Hoff, who chairs the Institute for Information Systems and Society in Vienna, explains why machines are not intelligent in the human sense, how we are falling for the marketing ploy of the ICT industry, and why we should return to old virtues.
It’s the ideas from movies that people want to see become real: the “robot with emotions” from I, Robot, or the loving female robot from Ex Machina. These are the classic notions of modern machines, and artificially intelligent ones at that. But we have reached a point on our digital transformation journey where we expect more than these machines could ever deliver. Because, let’s be clear on this: machines are not intelligent in the human sense.
They are highly sophisticated and powerful automatons, no more, no less. They derive a largely predictable output from the input they are given. They perform brilliantly at evaluating data, for instance, or at identifying conspicuous patterns. But on other tasks their performance is decidedly shaky.
Is targeting a worthwhile exercise in the digital world?
They fall short of the mark even on supposedly simple tasks, such as in advertising. We believe that, thanks to all the data collected, machines can predict human behavior provided the targeting is right. But this is wishful thinking. In reality, targeting works only to a limited extent, and only for the few global big-data players that have amassed vast stores of data about their users. Perhaps between two and five percent of swing voters can be influenced in elections. Maybe four or five percent more people click on a personalized ad than in a control group where the ad is not personalized. But that’s about it.
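To put such lift figures in perspective, here is a minimal sketch of how an advertiser would compare a personalized ad against a control group. All impression and click counts below are invented purely for illustration; they come from no study.

```python
# Hypothetical A/B comparison: personalized ad vs. non-personalized control.
# All numbers are made up to illustrate what a "four or five percent" lift means.

control_impressions = 100_000   # users shown the generic ad
control_clicks = 2_000          # 2.00% click-through rate (CTR)

personalized_impressions = 100_000
personalized_clicks = 2_090     # 2.09% CTR

control_ctr = control_clicks / control_impressions
personalized_ctr = personalized_clicks / personalized_impressions

# "Four or five percent more people click" refers to the *relative* lift ...
relative_lift = (personalized_ctr - control_ctr) / control_ctr
# ... which corresponds to a tiny *absolute* difference per impression.
absolute_lift = personalized_ctr - control_ctr

print(f"Control CTR:      {control_ctr:.2%}")       # 2.00%
print(f"Personalized CTR: {personalized_ctr:.2%}")  # 2.09%
print(f"Relative lift:    {relative_lift:.1%}")     # 4.5%
print(f"Absolute lift:    {absolute_lift:.2%}")     # 0.09 percentage points
```

Read this way, a headline lift of four or five percent translates into fewer than one additional click per thousand impressions, which is precisely how limited targeting really is.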
Machines indubitably make processes more efficient, cheaper and faster. What is equally certain is that they will never develop a digital sense of ethics, of values such as courage or honesty.
AI is no different
The same goes for artificial intelligence. AI is falling well short of expectations, as evidenced by the first studies attempting to unpick how neural networks “think”. Having 20 layers connected in series won’t make a machine any more intelligent in the human sense. Ultimately, it is the input that determines the quality of the output, and the machine’s input will never come close to the human way of thinking. To take just one simple example: image recognition. Even very young children can recognize a ship when they see one. But a machine needs thousands upon thousands of training examples. And even then, it makes mistakes. It might identify only the water and a shape, and conclude that it must be a ship. But without the water, the image recognition gets nowhere.
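The ship-and-water failure described above is what machine-learning researchers call a spurious correlation. Here is a minimal sketch of the effect, using entirely invented toy features rather than real image data:

```python
# Toy illustration of a spurious correlation, as in the ship example above.
# The features are hand-made stand-ins for what a network might extract:
# [has_water, has_hull_shape]; label 1 = ship, 0 = not a ship.
from sklearn.linear_model import LogisticRegression

X_train = [
    [1, 1],  # ship on the open sea
    [1, 1],  # ship in a harbor
    [1, 0],  # distant ship, hull barely visible, but water everywhere
    [0, 0],  # meadow
    [0, 1],  # building with a vaguely hull-like silhouette
    [0, 0],  # city street
]
y_train = [1, 1, 1, 0, 0, 0]  # in this training set, water perfectly separates the classes

model = LogisticRegression().fit(X_train, y_train)

# A ship in dry dock: hull shape present, water absent.
print(model.predict([[0, 1]]))  # -> [0]: "not a ship", because the water cue is missing
```

A child who has once seen a toy ship in a bathtub does not make this mistake; the classifier does, because water and ship co-occurred in everything it was ever shown.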
So, if a machine is just an automaton, dependent on its input and unable to perceive meaning, it obviously cannot have ethics either. The first condition for ethics is meaning. Meaning arises in a non-linear, highly information-intensive context that can be permanently in flux. No machine in the world can truly map such a context in full; on this point, researchers and programmers run up against their limits.
This sense of “meaning” is of little consequence in decisions such as picking which ad to display. Advertising ultimately influences only decisions that are of little relevance, and the extent of this influence is limited. But as soon as a machine is allowed to make a court ruling, or to take decisions with far-reaching impact on people’s lives – on benefits entitlements, for example – the people who allow this use should, in my view, be prosecuted. They ought to know a computer cannot take decisions of this magnitude. And if, despite this knowledge, they insist on using computers for highly sensitive matters of human importance, this poses a danger.
Should people who allow machines to make court rulings be prosecuted? (Image: iStock/DNY59)
How we arrive at healthy digital ethics
We should each decide more deliberately where we want to use digital machines for reasons of cost and efficiency, and where they actually cause us harm. The design of these machines should not be underestimated, as Nir Eyal’s book “Hooked” describes so aptly: machines can deliberately get us addicted and can undermine social relationships. On very large platforms, they can manipulate buying behavior and distort elections and decisions. They can spread misinformation and fake news.
What I advocate is awareness of the risks and side effects of tech devices, digital platforms and social media. For medicines we already have this information: just about every one comes with a leaflet. We would never give our children alcohol. But what about a smartphone? Digital ethics is therefore about diet, about limiting our intake of the drug called “digitality”.
So far, we’ve played down the impacts, because as soon as a digital product arrives people say, “Oh, it’s digital, it has to be good, it makes our lives better, it’s progress.” This is one big marketing ploy, perfected over the years by tech companies, and I don’t buy it. Progress in technology – which the industry invariably celebrates, and which we invariably go along with – always follows the same pattern: New = Good. Automatically. This frames our way of thinking and creates the belief that the latest iPhone is better than the one before. It would, however, be better to measure progress by the changes that re-empower us as human beings to live by and develop human values. That is true progress, and we have to get back to achieving it.
Why we need role models for digital ethics
We must therefore ask ourselves as a society how we want to engage with digital channels and their supposed helpers. As a first step we need to ask: what values are truly needed? It’s an age-old question: even the inscription at the entrance to the oracle of Delphi read, “Know thyself” – in other words: who am I?
Everyone in society should ask themselves this question at some point, especially today’s elites. They should be role models. But today’s “aristocracy of wealth” often conducts itself like a modern version of absolutism. It has been forgotten that aristocracy does not just mean “power”: the word combines the Greek kratos, “power”, with aristos, “the best” – excellence, virtue. The rich of today should take up the old virtues, such as those of Aristotle’s Nicomachean Ethics. That would set an example and bring real progress.
The question of responsibility, however, does not only concern the elites. Everyone should ask themselves: which values are important to me? What do they actually mean for my life? And how can I live my values? We should all consciously ask ourselves such questions, and make no distinction between our private and professional lives. We tend to think we can keep them separate, but that is a fiction. The lines have long since blurred. We take our private lives into work, and work into our private lives.
Digital with moderation
That is why business leaders also have to ask themselves: how do I myself wish to engage with digital transformation and digital ethics – and what does this mean for my company and my staff? It is about working out what is truly important. Only in this way can we build companies that are valuable to society. And in this way, today’s data-collecting social networks might one day become valuable communities.
Until then, our task is to reflect, to consciously question our habits and ultimately to change them. We should strike the right balance in engaging with all things digital. Then we as a society have a real chance to move from a passive role to an active one and help shape the digital world.