David Nemer was perplexed by friends and family members in his native Brazil.
They were all well-educated, but there they were consuming and believing misinformation about the 2018 Brazilian presidential election that was being spread via the social media platform WhatsApp.
The “fake news” led them not only to vote for Jair Bolsonaro, now Brazil’s president, but to become assiduous supporters.
“I had a personal motivation to understand, analyze and theorize what was going on,” said Nemer, who lived in Brazil until he was 23 and is now a newly minted assistant professor of media studies at the University of Virginia.
UVA Today chatted with Nemer about his findings.
Q. How was WhatsApp used to spread misinformation in Brazil’s presidential election?
A. WhatsApp has been popular in Brazil since it entered the market in 2009. It was very cheap compared to SMS, which in Brazil could sometimes cost around 55 times what carriers charge in North America. Currently, there are around 120 million active WhatsApp users in Brazil, out of a total population of 210 million. About 96% of Brazilians with access to a smartphone use WhatsApp as one of their main methods of communication.
I began monitoring pro-Bolsonaro WhatsApp groups in March 2018, at the outset of the election the social media app eventually helped Bolsonaro win. I found that fake news spread in typical fashion, through a structure of groups that resembled a pyramid, as I detailed for The Guardian last year.
Small groups of “influencers” sat at the top of the pro-Bolsonaro WhatsApp ecosystem. They actively manipulated news stories and created misinformation meant to go viral. The influencers then pushed that misinformation down into larger groups made up of Bolsonaro’s most ardent supporters, who served as his troll army. Unified in their support of the right-winger, they pushed the influencers’ fake news to virality.
From there, the fake news spread into even larger groups of ordinary Brazilians, who used WhatsApp to go around traditional media outlets and receive news that reinforced their inclination to vote for Bolsonaro in discussions that acted as echo chambers for the cause.
Q. Are there similarities between what happened in the Brazilian elections and the ones in the U.S. in 2016?
A. Yes. In both cases, the Trump and Bolsonaro campaigns worked hard to delegitimize the mainstream media by calling it “fake media.” Their supporters lost trust in those outlets and shifted their information-consuming habits to where the actual misinformation was being produced and shared: Facebook (Trump) and WhatsApp (Bolsonaro).
Q. Should what happened with WhatsApp in Brazil cause concern as we head into our next elections here in the U.S.?
A. Absolutely. Here in the U.S., Facebook and Twitter have constantly been called out for being accomplices in the misinformation deluge that we all experienced in 2016.
Although these companies are far from deploying a solution that tackles the issue at its roots, they are still moving forward with incremental steps aimed at banning fake news, hate speech and violent speech from their platforms. Facebook and Twitter have banned figures like Milo Yiannopoulos and Alex Jones, limiting their influence. But those figures still have thousands of followers on Telegram.
As I’m observing in my research, fake news producers anticipate that Facebook and Twitter will start banning them and their content; thus, they have been inviting people to join their groups on messaging apps such as Telegram, WhatsApp and Signal. As these apps grow more and more popular in the U.S., they can be expected to play a major role in our next elections.
Q. How does the uniqueness of the WhatsApp platform potentially make it more dangerous than other social media platforms?
A. WhatsApp, Telegram and Signal use end-to-end encryption, which means even the company itself is unable to see or track content. There is no moderation by their parent companies of what circulates in these groups. In addition, the affordances of these apps make it very easy to share any content, including large video files.
Q. Are there any safeguards that can be put into place for WhatsApp and other platforms like it?
A. The problem with the safeguards currently in place is that tech companies think only in terms of technical solutions. But fake news, like hate speech and violent speech, is a social problem as well.
The solution requires a multifaceted response. Law enforcement and courts have to enforce the law; in many countries, including Brazil, Germany and France, violent hate speech is criminalized.
Radicalization can also start at an early age; thus, parents need to pay attention to what their children are doing on the internet and be ready to intervene. Tech companies also have to continue to deplatform these radicalizing spaces and figures.
Q. Google, Facebook and Twitter have attempted to crack down on hate speech and fake news. How good of a job do you think they have done? In your opinion, does there need to be more oversight?
A. These companies have been more reactive than proactive when it comes to tackling misinformation. They hide behind a discourse of being anti-censorship, free-speech platforms in order to justify their blasé attitude toward misinformation. However, as I’m observing in my research, fake news leads to defamation, which can be both a crime and a civil wrong, as well as to hate speech and violent speech.
Q. Anything else interesting about this topic you’d like to touch on?
A. Though WhatsApp has made changes in response to the election in Brazil, it still serves as a unique platform for the spread of dangerous misinformation in Brazil and elsewhere. Radicalization happens at high speed; combatting it requires an even faster response.