Artificial Intelligence
Scientists are exploring the benefits and risks of a tool we are still learning how to use
Artificial intelligence – or AI – can already outclass specialists in some arenas, but in others it’s still easily outperformed by any toddler. It is particularly hard to beat, though, when it comes to recognizing complex patterns within large data sets. There are many areas where AI helps simplify our lives. But where it makes decisions that previously were only made by people, for people, there’s no getting around the ethical concerns.
A Brief Intro to AI
Artificial intelligence is becoming ever more prevalent in people’s daily lives, and it’s also seeing increased coverage in the media. A Bitkom survey of some 1,000 people showed that around 95 percent had already heard of artificial intelligence, but only half were confident enough to have a go at explaining what the term means. In our data-driven world, AI is set to play an ever more important role. So what capabilities does artificial intelligence have? And where does it fall short?
Looked at in human terms, AI has savant-like abilities: it has to be understood as a tool for special applications. In the work it’s been trained for, it’s usually quicker and more reliable than any human. Artificial intelligence’s mathematical algorithms can categorize images, sort online search results, and prioritize posts from the tide of social media messages. And AI is making its way into our daily lives, even in places we don’t notice it.
Scientists at the Max Planck Society have been working on developing machine-learning algorithms for new applications. They are looking into the constraints under which these algorithms might be used in contexts where AI’s decisions would have far-reaching consequences for human beings, and they are exploring how people can oversee the decisions that the algorithms make. So, as well as highlighting the possibilities that arise from AI at various levels, they’re also laying the groundwork for society to decide how it deals with the opportunities and risks that artificial intelligence brings with it as a tool. But the right place to start is, perhaps, with an overview of artificial intelligence and the many areas in which it is used.
AI and Climate Research
Science is one of the many fields making use of artificial intelligence. Data streams can very quickly become too massive for any one person, or even any one computer, to handle. Artificial intelligence, on the other hand, is able to quickly recognize patterns and correlations. One example is the huge data sets containing petabytes of climate and Earth-system data. Hidden within these data are answers to basic, existential questions concerning humanity. When and why do extreme weather events happen? What tipping points are there within the highly complex Earth system, and what triggers them?
AI and ChatGPT
Recent AI algorithms can translate texts and even, as with ChatGPT, converse with internet users. ChatGPT’s answers are based entirely on information and text that is freely available online. The power of the chat system lies in its nature as a tool, and like all tools, you have to learn how to use it. ChatGPT can summarize thousands of lines of text in a fraction of a second, far faster than any human could. But for tasks it isn’t trained for, an AI like ChatGPT is useless.
And it’s often unclear how these algorithms arrive at their evaluations and answers, not least because, unlike even a young child, they have no understanding of the connections they uncover or of the world around them. The term “intelligence,” then, should be treated with caution. In this context, it’s usually the product of what is called “machine learning,” a subdivision of artificial intelligence.
What Is Artificial Intelligence?
The human brain controls the body, processes sensory information, and links new information with existing information. This makes it possible for us to categorize events from our environment and to think ahead. Artificial intelligence is very different. It evaluates data to make a prediction or to provide a specific answer.
You have to resort to science fiction to find AI that’s capable of solving general problems at an intellectual level equal or superior to human beings. The artificial intelligences, machine-learning algorithms to be more precise, that media users are currently playing with, or that are still under development, are only capable of solving particular problems. There’s ChatGPT, for example, and smart assistants like Siri or Alexa. But there are also programs in medicine that can detect tumors in ultrasound images where even experienced doctors would have difficulty interpreting the information.
Sometimes, the computer simply has to be taught the rules for solving a particular problem. But outside of games like chess, this is fairly rare. Far more often, artificial intelligence is used precisely because it would be impossible to know and cover all the rules in advance. And this is why researchers have made such rapid progress in recent years with self-learning programs.
How Do Machines Learn?
Programs can learn in a variety of ways: supervised, unsupervised, or through reinforcement. With unsupervised learning, the computer searches for similarities and differences in data, but doesn’t have a clear target. With complex data sets, this helps researchers to find patterns that would otherwise remain hidden: climate data, soil samples from the foothills of the Alps, or satellite images of thawing permafrost soils in Siberia are all good examples. The goal is to understand how, and why, the changing climate affects ecosystems. During the COVID-19 pandemic, data centers filled with huge amounts of information on diverse and previously unknown disease pathways. A team at the Max Planck Institute for Intelligent Systems developed a machine-learning-based method and used this data to identify mortality risks associated with the novel virus. Unsupervised learning algorithms can also be used to discover more about the human psyche. Diagnosing depression is difficult, given the diversity of symptoms: listlessness, irritability, and so forth. At the Max Planck Institute of Psychiatry, researchers have used machine learning to mine patients’ medical data, with the goal of detecting mental illnesses at an early stage.
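To make the idea concrete, here is a minimal sketch of unsupervised learning in Python: a k-means clustering that groups unlabeled measurements purely by similarity. The two-column “climate” data and all parameter choices are invented for illustration and are not taken from the research described above.

```python
# Unsupervised learning in miniature: k-means groups unlabeled data points
# by similarity alone -- no target labels are ever provided.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Synthetic measurements with two hidden regimes (hypothetical example:
# two weather patterns, with columns for temperature and humidity).
regime_a = rng.normal(loc=[15.0, 40.0], scale=2.0, size=(100, 2))
regime_b = rng.normal(loc=[25.0, 80.0], scale=2.0, size=(100, 2))
data = np.vstack([regime_a, regime_b])

# The algorithm is told only HOW MANY groups to look for, not what they are.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # the two centers it discovered on its own
```

In real climate research the inputs are vastly larger and the methods more sophisticated, but the principle is the same: structure emerges from the data without anyone labeling it first.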
With supervised machine learning, the algorithm is given a clear goal, like recognizing which species of animal is shown in any particular picture. This is a classic example of what’s called “computer vision.” First, people have to train the algorithm by feeding it as many examples as possible, loading images one by one into the computer and telling it that this one shows a dog, this one shows a cat, and so on. Humans can immediately recognize an animal by its fur, eyes, ears, tail, or teeth; a fully trained algorithm, on the other hand, will recognize it based on features that are not straightforward for us humans to perceive.
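A minimal sketch of the supervised counterpart, again with invented data: the algorithm is shown labeled examples and learns a decision rule it can apply to new ones. The two “features” here are hypothetical stand-ins for what a real vision system would extract from raw pixels.

```python
# Supervised learning in miniature: labeled examples in, decision rule out.
from sklearn.tree import DecisionTreeClassifier

# Each example: [ear_pointiness, snout_length] -- hypothetical features
# standing in for what a real vision model would learn from pixels.
X_train = [[0.9, 0.2], [0.8, 0.3], [0.3, 0.9], [0.2, 0.8]]
y_train = ["cat", "cat", "dog", "dog"]  # the human-provided labels

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict([[0.85, 0.25]]))  # -> ['cat'] for a new, unseen example
```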
But AI can get even seemingly simple tasks wrong: in one photo that clearly showed a cat, the algorithm “saw” a dog. Perhaps the cat’s tail and head formed a shape that looked, to the algorithm, like the two protruding, pointed ears of a dog. The confusion was caused by an imbalance in the training data: it mainly comprised images of dogs whose pointy ears were clearly visible. If the algorithm had been fed a more balanced range of reference images, the error might not have happened; even just a larger number of photos would have improved accuracy. Translation programs and software for recognizing speech or handwriting work according to the same principle. So machine learning actually involves a huge amount of effort by researchers, alongside huge amounts of data and massive computing power. Whether it is worth the effort depends on the purpose.
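The imbalance problem can also be shown in a few lines. In this invented setup, 95 percent of the training labels are “dog,” and one common countermeasure, class weighting, is applied so the rare class isn’t drowned out:

```python
# Toy demonstration of an imbalanced training set and one standard remedy.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
# 950 "dog" examples but only 50 "cat" examples.
X = np.vstack([rng.normal(0.0, 1.0, size=(950, 2)),
               rng.normal(2.0, 1.0, size=(50, 2))])
y = np.array(["dog"] * 950 + ["cat"] * 50)
print(Counter(y))  # Counter({'dog': 950, 'cat': 50})

# class_weight="balanced" re-weights examples inversely to their frequency,
# so errors on the rare "cat" class count for more during training.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
```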
Where rules can be defined, so-called reinforcement learning helps. Take the example of a self-driving vehicle that drives through a red light at full speed: this is an impermissible action, and one the algorithm has to learn to avoid the next time. Scientists at the Max Planck Institute for Biological Cybernetics have been exploring reinforcement learning in a less critical field: they teach a robot how to perform the fluid, precise movements of a table tennis player. The robot is given a choice in how and where it positions the paddle; the main thing is that the paddle has to hit the ball.
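The trial-and-reward loop behind reinforcement learning can be sketched in its simplest, single-situation form. This toy agent, invented here to mirror the red-light example, tries actions, receives rewards, and gradually learns to prefer the action that pays off:

```python
# Reinforcement learning in miniature: try actions, observe rewards,
# and shift value estimates toward what actually worked.
import random

ACTIONS = ["stop", "go"]                 # choices at a red light
REWARD = {"stop": 1.0, "go": -10.0}      # running the light is penalized

q = {action: 0.0 for action in ACTIONS}  # the agent's value estimates
alpha, epsilon = 0.1, 0.2                # learning rate, exploration rate

random.seed(0)
for _ in range(1000):
    # Mostly exploit the best-known action, but sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the estimate for the chosen action toward the observed reward.
    q[action] += alpha * (REWARD[action] - q[action])

print(q)  # "stop" ends up valued near +1, "go" near -10
```

Real systems like the table-tennis robot deal with continuous movements and delayed rewards, but the underlying idea of learning from feedback is the same.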
AI: Brilliance on the One Hand, Hallucinations on the Other
Judgment is important where AI comes into contact with humans or where it’s involved in making decisions. Consider two examples of computer vision. Researchers at the Max Planck Institute for Informatics have developed video software that can adjust a person’s recorded lip movements and sync them to an audio track. It’s a valuable tool for dubbing movie dialogue into different languages, but at the same time, it’s a manipulation, so abuse has to be prevented. When it comes to self-driving cars, it’s a matter of life and death. Software from the Max Planck Institute for Intelligent Systems keeps on top of things even in unclear traffic situations, categorizing the objects that surround a vehicle while it’s in motion. Artificial intelligence has to be able to reliably distinguish a car from a person and a road from a bike path.
Computational linguistics has developed techniques for understanding language as comprehensively as possible. One application is the chatbot ChatGPT. While ChatGPT does seem to respond intelligently to questions, it repeats the mistakes found across the internet, because the tool was trained on texts that are freely available online. And when confronted with questions it has no good basis for answering, the algorithm behind ChatGPT can even “hallucinate,” producing confident-sounding but false answers. ChatGPT cannot outgrow itself or take on a life of its own, and AIs like ChatGPT have no concept of the rule of law or of human dignity.
When AI Discriminates
The use of artificial intelligence raises social, ethical, and legal questions. A computer program trained by machine learning was asked to assess the quality of job applications, for example, and the algorithm was found to systematically rate women’s applications lower. This wasn’t due to bad intentions on the part of the authors of the code: the computer was reproducing biases in previous application processes, which had been shown to involve structural disadvantages for women, especially when it came to more senior roles. At the Max Planck Institute for Intelligent Systems and the Max Planck Institute for Software Systems, teams led by Bernhard Schölkopf and Krishna Gummadi have been researching why some algorithms discriminate, and how this can be prevented.
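One simple way such discrimination can be detected is to compare a model’s selection rates across groups, a check in the spirit of “demographic parity.” The decision data below is invented purely to show the mechanics:

```python
# Toy fairness audit: compare selection rates across groups.
# (group, hired) pairs -- invented example decisions from some model.
decisions = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
]

def selection_rate(group):
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("female", "male"):
    print(f"{group}: {selection_rate(group):.2f}")
# female: 0.25 vs. male: 0.75 -- a gap this large is a red flag that the
# model may be reproducing historical bias from its training data.
```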
They’d be closer to the answer if they could determine how a program arrives at any one decision in the first place. Researchers at the Max Planck Institute for Intelligent Systems are also teaching artificial intelligence how to understand cause and effect—something that young children can grasp as a matter of course.
Human Values and Rules for AIs
Who is responsible if a self-driving car hits and injures a person? How did it happen? Was there an imbalance in the data used to train the AI that caused the car not to see the person? AIs have no sense of social values or justice, so in certain sensitive areas, legal parameters setting out how algorithms can and cannot be used are unavoidable. In what ways will robots be allowed to assist overworked care workers? This is a question that scientists at the Max Planck Institute for Innovation and Competition are working on.
There are, undoubtedly, many potential applications for machine learning, but social value isn’t always their first priority. AI can generate fake news in the blink of an eye and spread it in a highly targeted way on social media. At the Max Planck Institute for Software Systems, researchers are looking to achieve the precise opposite: they’ve developed an effective process for recognizing fake news, and scientists have developed similar systems for detecting hate speech online.