Automated accounts on internet platforms and social media sites are generating and sharing posts, often spreading information, misinformation and disinformation in hopes of influencing public opinion, sometimes on behalf of foreign governments.
Known as “bots,” these accounts may serve advertisers seeking to influence spending, political organizations seeking to influence voting, or foreign governments seeking to influence policy, or they may simply be used to sow social discord.
To find out more about bots and how they can affect everything from disaster relief to presidential elections, UVA Today talked with Ali Ünlü, a researcher and policy analyst with the University of Virginia School of Education and Human Development.
Q. What exactly are bots, and how can they influence public opinion?
A. On social media, “bots” generally refers to automated accounts, often used by organizations such as newspapers or businesses to share content like news or event updates automatically. Bots are popular because they reduce human labor and reach larger audiences quickly, often at lower cost.
We often don’t know who controls these accounts or what their goals are. Recent advancements in AI, especially generative AI, allow bots to blur the line between human and machine. This makes it challenging for even experts to determine whether they’re interacting with a person or a bot.
Bots can influence public opinion by shaping which topics are discussed, a practice known as “agenda setting.” By flooding social media with posts on specific issues, bots can create a false sense of urgency or grassroots consensus, leading people to believe that certain topics are more significant than they really are. This tactic, known as “astroturfing,” can deceive not only everyday users but also the media and government agencies.
Additionally, bots play a major role in spreading misinformation and disinformation. With generative AI, they can create content tailored to specific audiences using humor, memes and even sarcasm to make messages more compelling. This makes it easier for bots to sway public opinion or mislead people on important issues.
Q. Who is using bots, and what are their motives?
A. Bots are used to influence public opinion by a range of entities, including domestic interest groups, extremist organizations and even foreign governments. While some bots are openly declared as official accounts, the majority operate without transparency.
Malicious bots are often used to destabilize societies, increase public distrust, push specific issues onto the agenda, influence elections, or promote movements such as anti-vaccine campaigns or extremist ideologies. Without knowing who is behind these campaigns, it is difficult to determine their true intentions.
For example, misinformation related to FEMA is often designed to undermine trust between citizens and federal agencies. This becomes especially problematic during crises, such as major hurricanes, when people are more emotionally vulnerable. Various studies show that misinformation spikes during disasters and crises, such as earthquakes, the COVID-19 pandemic, or conflicts like Russia-Ukraine and Israel-Gaza.
Those behind bots exploit situations by pointing out government agencies’ failures in crisis communication, delays in responses, or gaps in handling the event. The scale of the crisis often gives them more material to work with. Similarly, election periods are fertile ground for these groups. The competitive nature of elections, combined with the rhetoric used by political candidates, can provide opportunities to deepen political divides, foster mistrust and erode social cohesion.
Q. How can the average person tell if something is bot-generated?
A. With bots becoming more sophisticated, it is increasingly difficult for the average person to recognize them. People should be cautious when reading content on polarized or publicly debated issues, keeping in mind that they may be exposed to bot-generated messages. Since identifying account authenticity is challenging, social media platforms need to take greater responsibility.