Data science creates facts  

Researchers analyze disinformation in social media and develop ways to promote facts and fair dialogue online

Hate and fake news are on the rise online, especially on social media, and researchers are increasingly becoming targets themselves. At the Max Planck Institute for Security and Privacy, a team led by the new director Mia Cha is working to understand the dynamics behind this development and how it can be reversed in favour of real facts and fair interactions.

An ever-increasing part of our lives takes place online. Social media brings together people who would otherwise never have met in the real world and helps important information spread rapidly across the globe. But the brave new world of the internet has its downsides: all too often, platforms also provide a breeding ground for hatred and misinformation.

“These phenomena are really negative for society,” says Meeyoung (called Mia) Cha. She is Director of Data Science for Humanity at the Max Planck Institute for Security and Privacy. “That’s why we are looking at them from the data and social science perspectives.” An important part of Cha's work is therefore analyzing the spread of fake news on social media. The way information is shared already contains important clues about its content.

While confirmed facts from official sources or reputable media are often spread by so-called superspreaders, who reach a large number of followers, the spread of fake news is usually more fragmented. Although misinformation and rumors also attract a great deal of interest, people tend to read them in silence rather than pass them on. “They know that if they react, their reputation is at stake,” explains Cha.

To capture such propagation patterns, Mia Cha regularly works with social media platforms that release information about who communicates with whom for research purposes. After all, it is also important for the platforms to know what is going on in their networks. “These platforms don't have the research capacity to do that,” says Cha. “But to be successful, they have to be able to curb harmful information and illegal content.” Once the patterns of information spread are available, Cha and her colleagues rely on machine learning to determine with a high degree of probability whether a particular message is fake news or fact-based.
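To give a feel for this idea, here is a minimal sketch of a propagation-based classifier. It judges a message by the shape of its spreading cascade rather than by its text, in line with the superspreader-versus-fragmented pattern described above. The feature names, toy data, and model choice are illustrative assumptions, not the institute's actual method:

```python
# Hypothetical sketch: classify a message as rumor vs. fact from the
# shape of its spreading cascade, not from its text. All feature names
# and numbers are invented for illustration.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Cascade:
    n_spreaders: int            # unique accounts that shared the message
    max_audience: int           # followers of the single largest spreader
    depth: int                  # longest share chain from the original post
    silent_reader_ratio: float  # share of views that produced no reshare

def features(c: Cascade) -> list[float]:
    # Fact cascades tend to be dominated by a few superspreaders;
    # rumor cascades are more fragmented and read mostly in silence.
    return [c.n_spreaders, c.max_audience, c.depth, c.silent_reader_ratio]

# Toy training data: 0 = fact-based, 1 = likely rumor.
train = [
    (Cascade(12, 900_000, 3, 0.40), 0),   # broadcast by one big account
    (Cascade(15, 1_200_000, 2, 0.35), 0),
    (Cascade(480, 4_000, 9, 0.92), 1),    # fragmented, silent readers
    (Cascade(350, 2_500, 11, 0.95), 1),
]
X = [features(c) for c, _ in train]
y = [label for _, label in train]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_post = Cascade(n_spreaders=300, max_audience=3_000,
                   depth=10, silent_reader_ratio=0.9)
print(clf.predict_proba([features(new_post)])[0])  # [P(fact), P(rumor)]
```

The point of the sketch is the design choice, not the specific model: because the signal lives in how a message travels, such a classifier can flag probable misinformation even before anyone reads the content itself.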

But observing how such content spreads is only one aspect of Cha's work. Ultimately, the question also arises as to what can be done about disinformation. “In our research during the COVID-19 pandemic, we observed in real time how fake news was spreading from one country to the next along with the virus,” says Mia Cha. The researchers therefore quickly launched the “Facts before Rumors” campaign and distributed fact checks in 151 countries before the corresponding disinformation even reached them. “It was kind of like a vaccine against fake news,” reports Cha. “If you know the facts beforehand, disinformation and rumors lose their impact.”

“It was kind of like a vaccine against fake news.”

Of course, the content of problematic messages also plays a role in their automatic detection. While rumors often begin with phrases like “I'm not sure, but...”, hate speech is more difficult to detect because the style in which it is written changes quickly. “Hate speech used to be very blunt, and people would easily recognize it,” says Cha. “Now people use combinations of emojis and letters or deliberately spell words the wrong way. That makes it more difficult to detect.” Mia Cha and her team have therefore recently introduced a new machine-learning-based method for detecting offensive language that is not so easy to circumvent.
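The following toy example illustrates why such obfuscation defeats simple keyword filters, and shows one crude countermeasure. It is deliberately naive and is not the method Cha's team developed, which is learning-based; the substitution table and blocklist are invented for illustration:

```python
# Illustrative only: a naive text normalizer versus obfuscated slurs.
# Real detectors learn robust representations instead of fixed rules.
import unicodedata

# Hypothetical table of common digit/symbol-for-letter swaps.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalize(text: str) -> str:
    # Strip accents and compatibility forms, undo character swaps,
    # and collapse repeated letters ("haaate" -> "hate").
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(SUBSTITUTIONS)
    collapsed = []
    for ch in text:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)
    return "".join(collapsed)

BLOCKLIST = {"hate"}  # stand-in for a real offensive-language lexicon

def naive_filter(text: str) -> bool:
    return any(word in normalize(text) for word in BLOCKLIST)

print(naive_filter("I h4aate you"))  # True: normalization catches the swap
print(naive_filter("I h*te you"))    # False: trivially evaded again
```

Every rule added to such a filter invites a new spelling trick, which is exactly the arms race Cha describes; a learned model that generalizes over spelling variants is harder to sidestep than any fixed table.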

However, large AI language models can also be used to automatically analyze social media posts, for example to assess a user's basic mood. At the same time, these models allow an attacker to deliberately modify the emotional content of a message in order to achieve a greater effect on that user. “The emotional state of people will decide how they're going to react,” says Cha. “AI models are very good at paraphrasing. So you could easily add a little bit of anger or joy to a message to manipulate people.” Under the buzzword ‘cognitive warfare,’ people are repeatedly manipulated in this way over long periods of time to advance a political agenda.
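The benign side of this, scoring the emotional tone of posts, takes only a few lines with an off-the-shelf model. A minimal sketch, assuming the Hugging Face transformers library is installed (its default sentiment model is downloaded on first use); the example posts are invented:

```python
# Sketch: assessing the basic mood of posts with a pretrained model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

posts = [
    "The new policy finally fixes the problem. Great news!",
    "They are lying to us again. This makes me furious.",
]
for post, result in zip(posts, sentiment(posts)):
    # result is e.g. {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```

The same paraphrasing ability that makes such models useful analysts is what makes the manipulation Cha warns about so cheap: shifting a message's tone requires no more expertise than running it through a model with a different prompt.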

In the future, Cha would like to expand her fight against hate and fake news to include the dark web. “There's a lot more problematic content on the dark web, which we haven't even started looking at”, she says. But there is still a lot for her and her team to do on social media as well. “Those platforms could generate so much good as well as bad”, says Mia Cha. “The companies want to get more advertisement money. But ultimately we should force the platforms to use human well-being as the optimization goal.”
