Two schematically drawn heads overlap, one of them filled with circuits: a symbolic image for artificial intelligence.

Artificial Intelligence

Scientists are exploring the benefits and risks of a tool we are still learning how to use

Artificial intelligence – or AI – can already outclass specialists in some arenas, but in others it’s still easily outperformed by any toddler. It is, on the other hand, particularly hard to beat when it comes to recognizing complex patterns within large data sets. There are many areas where AI helps simplify our lives. But where it makes decisions that were previously made only by people, for people, there’s no getting around the ethical concerns.

Can we scare AI?

The "Spook the Machine" experiment begins more

Multi-panel image analysis of a stained tissue section: circular microscope images (top row) are paired with two-dimensional UMAP plots (UMAP1 vs. UMAP2, middle row) that compress the many cell markers measured in the images into a readily visualizable space, clustering the cells by type and linking each cluster back to its location in the tissue.

A new method can be used to predict how a cancer will progress

Learning without feedback

Self-reinforcing learning can help you understand new things, but it can also reinforce false beliefs


A Brief Intro to AI

Artificial intelligence is becoming ever more prevalent in people’s daily lives—and it’s also seeing increased coverage in the media. A Bitkom survey of around 1,000 people showed that about 95 percent had already heard of artificial intelligence, but only half felt confident enough to explain what the term means. In our data-driven world, AI is set to play an ever more important role. So what capabilities does artificial intelligence have? And where does it fall short?

Looked at in human terms, AI has savant-like abilities: it has to be understood as a tool for special applications. In the work it has been trained for, it is usually quicker and more reliable than any human. Artificial intelligence’s mathematical algorithms can categorize images, sort online search results, and prioritize posts from the tide of social media messages. And AI is making its way into our daily lives—even in places we don’t notice it.

Scientists at the Max Planck Society have been working on developing machine-learning algorithms for new applications. They are looking into the constraints under which these algorithms might be used in contexts where AI’s decisions would have far-reaching consequences for human beings. They are also exploring how people can oversee the decisions that the algorithms make. So, as well as highlighting the possibilities—at various levels—that arise from AI, they are laying the groundwork for society to decide how it deals with the opportunities and risks that artificial intelligence brings with it as a tool. But the right place to start is, perhaps, with an overview of artificial intelligence and the many areas in which it is used.

Illustration of a street crossing with bus, car, cyclists and pedestrians, onto which a robot arm lowers a car.
We are increasingly encountering artificial intelligence (AI) in our everyday lives, from bots in call centers and robotic colleagues on assembly lines, to electronically controlled players in computer games. At the Max Planck Institute for Human Development in Berlin, Iyad Rahwan and his team are investigating how people behave when they interact with intelligent machines and what they expect from their artificial counterparts.

AI and Climate Research

Science is one of the many fields making use of artificial intelligence. Data streams can very quickly become too massive for any one person—or even any one computer—to handle. Artificial intelligence, on the other hand, is able to quickly recognize patterns and correlations. One example is the huge data sets containing petabytes of climate and Earth-system data. Hidden within these data are answers to basic, existential questions concerning humanity. When, and why, do extreme weather events happen? What tipping points are there within the highly complex Earth system—and what triggers them?

Close-up of a riverbed riddled with drying cracks against the backdrop of a city skyline and blue sky.
Droughts, heatwaves, and floods – climate change is likely to make extreme weather and climate events such as these more frequent and more intense. Markus Reichstein, Director at the Max Planck Institute for Biogeochemistry in Jena, and his team are working on predicting the impacts of such events. Reichstein uses large volumes of data in conjunction with artificial intelligence to carry out this research, which he hopes will make societies more resilient to extreme climate events.

AI and ChatGPT

Recent AI algorithms can translate texts and even—as ChatGPT demonstrates—converse with internet users. ChatGPT’s answers are based entirely on information and text that are freely available online. The power of the chat system lies in its nature as a tool—and like all tools, you have to learn how to use it. ChatGPT can summarize thousands of lines of text in a fraction of a second, far faster than a human could. But for tasks it wasn’t trained for, an AI like ChatGPT is useless.

And it’s often unclear how these algorithms arrive at their evaluations and answers—not least because, unlike even a young child, they have no understanding of the connections they uncover or of the world around them. The term “intelligence,” then, should be treated with caution. In this context, it is usually the product of what is called “machine learning”—a subdivision of artificial intelligence.

Artificial Intelligence from a psychologist’s point of view
Researchers test cognitive abilities of the language model GPT-3

What Is Artificial Intelligence?

The human brain controls the body, processes sensory information, and links new information with existing information. This makes it possible for us to categorize events from our environment and to think ahead. Artificial intelligence is very different. It evaluates data to make a prediction or to provide a specific answer.

You have to resort to science fiction to find AI that’s capable of solving more general problems at an intellectual level equal or superior to that of human beings. The artificial intelligence—or, more precisely, the machine learning algorithms—that media users are currently playing with, or that are still under development, can only solve particular problems. There’s ChatGPT, for example, and smart assistants such as Siri or Alexa. But there are also programs in medicine that can detect tumors on ultrasound images where even experienced doctors would have difficulty interpreting the information.

Sometimes the computer can simply be taught the rules for solving a particular problem. But outside of games like chess, this is fairly rare. Far more often, artificial intelligence is used precisely because it would be impossible to know and cover all the rules in advance. And this is why researchers have made such rapid progress in recent years with self-learning programs.

How Do Machines Learn?

Programs can learn in a variety of ways: supervised, unsupervised, or by reinforcement. With unsupervised learning, the computer searches for similarities and differences in data without being given a clear target. With complex data sets, this helps researchers find patterns that would otherwise remain hidden: climate data, soil samples from the foothills of the Alps, or satellite images of thawing permafrost soils in Siberia are all good examples. Here, the goal is to understand how—and why—the changing climate affects ecosystems.

During the COVID-19 pandemic, data centers filled with huge amounts of information on diverse and previously unknown disease pathways. A team at the Max Planck Institute for Intelligent Systems developed a machine learning-based method and used this data to identify mortality risks associated with the novel virus.

Unsupervised learning algorithms can also be used to discover more about the human psyche. Diagnosing depression is difficult, given the diversity of symptoms—listlessness, irritability, and so forth. At the Max Planck Institute of Psychiatry, researchers have used machine learning to mine patients’ medical data, with the goal of detecting mental illnesses at an early stage.
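To make this concrete, here is a minimal sketch of unsupervised learning in Python (hypothetical, and not drawn from any of the institutes' actual pipelines). It uses scikit-learn's k-means algorithm to cluster synthetic, unlabeled data in which two hidden "regimes" are buried:

```python
# A minimal sketch of unsupervised learning: k-means clustering groups
# unlabeled data points by similarity, without being given any target.
# The synthetic numbers are hypothetical stand-ins for real climate
# or patient records.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hidden "regimes" that the algorithm knows nothing about in advance
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),  # e.g. unremarkable years
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),  # e.g. drought years
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # the two centers the algorithm found on its own
```

No one told the program that there were two groups of years; given only the number of clusters to look for, it recovered the structure by itself.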

With supervised machine learning, the algorithm is given a clear goal, like recognizing which species of animal is shown in any particular picture. This is a classic example of what’s called “computer vision.” Previously, people had to manually train the algorithm by feeding it as many examples as possible, loading images one by one into the computer and telling it that this one shows a dog, this one shows a cat, and so on. Humans can immediately recognize an animal by way of its fur, eyes, ears, tail, or teeth—a fully trained algorithm, on the other hand, will recognize it based on features that are not straightforward for us humans to perceive.
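In code, the supervised recipe looks like the following hedged sketch: the features and numbers are invented (real systems learn from pixels, not two handpicked measurements), but the pattern of labeled examples going in and a reusable decision rule coming out is the same:

```python
# A minimal sketch of supervised learning: the algorithm sees labeled
# examples (features plus the correct answer) and learns a rule it can
# apply to new cases. Invented numeric features stand in for images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical features, say "ear pointiness" and "tail length"
cats = rng.normal(loc=[0.3, 0.8], scale=0.1, size=(200, 2))
dogs = rng.normal(loc=[0.7, 0.4], scale=0.1, size=(200, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 200 + [1] * 200)  # labels: 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"accuracy on unseen animals: {clf.score(X_test, y_test):.2f}")
```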

But AI can get even seemingly simple tasks wrong: in one photo that clearly showed a cat, the algorithm “saw” a dog. Perhaps the cat’s tail and head formed a shape that looked, to the algorithm, like the two protruding, pointed ears of a dog. The confusion was caused by an imbalance in the training data, which consisted mainly of images of dogs whose pointy ears were clearly visible. If the algorithm had been fed a more balanced range of reference images, the error might not have happened; even a simply larger number of photos would have improved its accuracy. Translation programs and software for recognizing speech or handwriting work according to the same principle. So machine learning actually involves a huge amount of effort by researchers—alongside huge amounts of data and massive computing power. Whether it is worth the effort depends on the purpose.
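The effect of such an imbalance can be reproduced in a few lines. In this hypothetical sketch, a model trained on 950 "dogs" but only 50 "cats" leans toward answering "dog" on a borderline case, while the same model with the rare class reweighted classifies it correctly:

```python
# A minimal sketch of the imbalance problem: a classifier trained on
# 950 dogs but only 50 cats shifts its decision boundary toward "dog";
# reweighting the rare class (class_weight="balanced") counteracts this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
dogs = rng.normal(loc=1.0, scale=1.0, size=(950, 2))   # abundant class
cats = rng.normal(loc=-1.0, scale=1.0, size=(50, 2))   # rare class
X = np.vstack([dogs, cats])
y = np.array([1] * 950 + [0] * 50)  # 1 = dog, 0 = cat

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

borderline = np.array([[-0.2, -0.2]])  # an ambiguous, slightly cat-like animal
print("plain model:   ", "dog" if plain.predict(borderline)[0] == 1 else "cat")
print("balanced model:", "dog" if balanced.predict(borderline)[0] == 1 else "cat")
```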

Where rules can be defined, so-called reinforcement learning helps. Take the example of a self-driving vehicle that drives through a red light at full speed: this is an impermissible action—and one the algorithm has to be penalized for, so that it corrects its behavior the next time. Scientists at the Max Planck Institute for Biological Cybernetics have been exploring reinforcement learning in a less critical field: they teach a robot how to perform the fluid, precise movements of a table tennis player. The robot is given a choice in how and where it positions the paddle—the main thing is that the paddle has to hit the ball.
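The core loop of reinforcement learning fits in a few lines. The following toy sketch (a deliberately simplified, one-step Q-learner, nothing like the institute's actual robot controller) shows an agent at a traffic light learning from penalties that driving through a red light is a bad action:

```python
# A minimal sketch of reinforcement learning: a one-step, tabular
# Q-learner. The agent tries actions, receives rewards or penalties,
# and nudges its stored value for each state-action pair accordingly.
import random

random.seed(0)
states = ["red", "green"]
actions = ["drive", "stop"]
Q = {(s, a): 0.0 for s in states for a in actions}  # learned action values
alpha, epsilon = 0.5, 0.1  # learning rate, exploration rate

def reward(state, action):
    if state == "red" and action == "drive":
        return -10.0  # the impermissible action is heavily penalized
    if state == "green" and action == "drive":
        return 1.0    # making progress is rewarded
    return -0.1       # stopping merely wastes a little time

for _ in range(1000):
    s = random.choice(states)
    if random.random() < epsilon:
        a = random.choice(actions)  # occasionally explore
    else:
        a = max(actions, key=lambda act: Q[(s, act)])  # exploit what was learned
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])  # nudge value toward reward

print({k: round(v, 2) for k, v in Q.items()})
# After training, "drive" scores far below "stop" in the "red" state.
```

Real systems, such as the table tennis robot, face continuous states and delayed rewards and need far richer methods, but the learn-from-penalty principle is the same.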

Puzzle pieces flying out of a mouth
New findings enable early diagnosis and individual therapy
Two hands in blue gloves: one holds a tool gripping a sample tube of blood plasma with an orange cap, the other a tray of additional sample tubes.
A new method combining a single infrared light measurement with machine learning can be used to detect metabolic disorders and high blood pressure
Robot arm with table tennis bat over table tennis table, controlled from a computer stand
In the world of science fiction, robots are intelligent and adaptive, but reality differs significantly. Robot programming is expensive manual labor, and the resulting programs are inflexible. A key step in making current robots more like their sci-fi counterparts requires endowing them with the capability to learn how to react appropriately and at the right time. Jan Peters is trying to teach exactly this skill to his machines. The computer scientist and mechanical and electrical engineer heads up a research group at the Max Planck Institute of Biological Cybernetics in Tübingen.

AI: Brilliance on the One Hand, Hallucinations on the Other

Judgment is important where AI comes into contact with humans or where it’s involved in making decisions. Two examples of “computer vision”: researchers at the Max Planck Institute for Informatics have developed video software that can adjust a person’s recorded lip movements and sync them to an audio track. It’s a valuable tool for dubbing movies into different languages. But at the same time, it’s a manipulation—so abuse has to be prevented. When it comes to self-driving cars, though, it’s a matter of life and death. Software from the Max Planck Institute for Intelligent Systems keeps on top of things even in unclear traffic situations, categorizing the objects that surround a vehicle while it is in motion. Artificial intelligence has to be able to reliably distinguish a car from a person and a road from a bike path.

Computational linguistics has developed techniques for understanding language as comprehensively as possible. One application is the chatbot ChatGPT. While ChatGPT does seem to respond intelligently to questions, it also repeats the mistakes found across the internet, because the tool was trained on texts that are freely available online. And when confronted with questions it has no good basis to answer, the algorithm behind ChatGPT can even hallucinate—confidently producing false information. ChatGPT cannot outgrow itself or take on a life of its own. And AIs like ChatGPT have no concept of the rule of law or of human dignity.

Lip-syncing thanks to artificial intelligence
A new piece of software adapts the facial expressions of people in videos to match an audio track dubbed over the film

When AI Discriminates

The use of artificial intelligence raises social, ethical, and legal questions. A computer program trained by machine learning was asked to assess the quality of job applications, for example: the algorithm was found to systematically rate women lower. This wasn’t due to bad intentions on the part of the authors of the code, however: the computer was reproducing biases from previous application processes, which had been shown to involve structural disadvantages for women—especially when it came to more senior roles. At the Max Planck Institute for Intelligent Systems and the Max Planck Institute for Software Systems, teams led by Bernhard Schölkopf and Krishna Gummadi have been researching why some algorithms discriminate—and how this can be prevented.
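One simple diagnostic for such discrimination, sketched below with invented numbers (this is not the institutes' actual method), is to compare a system's selection rates across groups, a criterion often called demographic parity:

```python
# A minimal, hypothetical sketch of one common bias check: comparing
# a system's selection rates across groups. The decisions below are
# invented for illustration, not real hiring data.
decisions = [
    {"group": "women", "selected": True},  {"group": "women", "selected": False},
    {"group": "women", "selected": False}, {"group": "women", "selected": False},
    {"group": "men",   "selected": True},  {"group": "men",   "selected": True},
    {"group": "men",   "selected": False}, {"group": "men",   "selected": True},
]

def selection_rate(records, group):
    outcomes = [r["selected"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

rate_women = selection_rate(decisions, "women")
rate_men = selection_rate(decisions, "men")
# "Demographic parity" asks these rates to be roughly equal; a large,
# persistent gap flags a model that reproduces historical bias.
print(f"women: {rate_women:.0%}, men: {rate_men:.0%}, gap: {rate_men - rate_women:.0%}")
```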

They’d be closer to the answer if they could determine how a program arrives at any one decision in the first place. Researchers at the Max Planck Institute for Intelligent Systems are also teaching artificial intelligence how to understand cause and effect—something that young children can grasp as a matter of course. 

Color illustration of a robot with a school cone, surrounded by children with school cones for the start of school

Not without reason

Artificial intelligence (AI) has long been able to recognize patterns much better than humans can. However, in order to be truly worthy of its name, it would also need to understand causal relationships. And that is precisely what researchers at the Max Planck Institute for Intelligent Systems in Tübingen are working on.
Visit to Krishna Gummadi
When he was still in primary school, Krishna Gummadi learned to play musical instruments and studied programming. He soon gave up on music, but programming turned out to be his calling. These days, as Director at the Max Planck Institute for Software Systems in Saarbrücken, he is researching, among other things, why artificial intelligence often makes decisions that are just as discriminatory as the ones humans make, and how this can be prevented.

Human Values and Rules for AIs

Who is responsible if a self-driving car hits and injures a person? How did it happen? Was there an imbalance in the data used to train the AI that caused the car not to see the person? AIs have no sense of social values or justice. In certain sensitive areas, legal parameters setting out how algorithms can and cannot be used are therefore unavoidable. In what ways will robots be allowed to assist overworked care workers? This is a question that scientists at the Max Planck Institute for Innovation and Competition are working on.

There are, undoubtedly, many potential applications for machine learning. But social value is not always the first priority. AI can generate fake news in the blink of an eye—and spread it in a highly targeted way on social media. At the Max Planck Institute for Software Systems, researchers are aiming for the precise opposite: they have developed an effective process for recognizing fake news. And scientists have developed similar systems for detecting hate speech online.
