Nope, that’s not a lie. And that title was generated by AI. As the 2024 election approaches, media literacy is as important as ever. We often think of propaganda as those WWI cartoons depicting Germany as a gorilla headed for the streets of New York, but the growth of the internet has broadened the definition considerably. Attack ads, direct mail, rallies, debates, and catchphrases are familiar ways you may be drawn to one campaign over another, but a new contender has entered the ring. AI, which has grown rapidly over the past 10-15 years, has proven hugely effective at infecting our brains without us even realizing it.
In the wake of the recent tragedy of Hurricane Helene, AI-generated images have been used to damage the Biden administration by provoking emotional reactions to young children and animals. One picture shows a young girl in a life vest, crying in a boat and clutching a small puppy as houses flood around her. It is meant to provoke an emotional response, to make people feel for the hurricane’s victims.
“I don’t know where this photo came from and honestly, it doesn’t matter,” says Republican activist Amy Kremer, defending her decision to keep the AI-generated image of the "survivor" on her X feed.
And to be honest, she’s kind of right. Misinformation is everywhere, and while we shouldn’t be spreading it, people rarely change their minds once a visual has already shaped their opinion. Photos tend to evoke emotions in people before they have time to process what they are seeing.
A 2018 study published in Royal Society Open Science found that supporters of both parties generally didn’t change their viewpoints after learning that information about their candidate was fake or had been fact-checked. This is still relevant today: Trump supporters don’t care whether Harris is actually a communist, or whether she ever wore the outfit in the image; the image simply reinforced an idea they had already heard, that Harris is a communist.
Some of Trump’s claims over the years have been patently absurd, like the idea that illegal aliens are eating the dogs and cats in Springfield, or that Harris is a communist. And while these may seem like harmless jokes or extremes no one would believe, they can gradually warp your brain. The AI-generated images of Springfield pets holding up signs that read “Don’t Let Them Eat US! Vote for Trump!” may seem like satire, but many believe they are perpetuating a conspiracy theory about the immigrant community, one that has prompted threats against schools and government buildings.
These images may not be liked or posted by Trump himself, and it is unlikely he has even seen some of the propaganda. But there is no doubt that Elon Musk posting an image of Kamala Harris in a communist uniform to his audience of over 201 million people had an impact on the support Trump has been receiving as election day draws closer.
It is unclear whether these kinds of images have affected the polls this election, but since then, even with the generally positive response to the Biden administration’s handling of Hurricane Helene, Harris has fallen in the polls. More than public image, though, we should be concerned about how the media can manipulate public opinion. Even if people are literate enough to recognize what is fake and what is not, it is abundantly clear that many of them don’t care.
"The standard of live news has sagged deep into the entertainment department, several floors below." Ed Turner, Editor at large, CNN, 1998
https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
https://royalsocietypublishing.org/doi/full/10.1098/rsos.180593#RSOS180593C39
11 comments:
I think the issue of AI in the media has been painfully overlooked, and as of late this dilemma has been ramped up by people who seemingly don't care about the consequences of their actions. AI-generated images are easy to spread online and often receive millions of views within hours of being posted. This is an unprecedented level of misinformation, and it is tricky to counteract for companies such as Facebook or TikTok, which don't have the ability to regulate the thousands of posts per minute circulating through the web. AI-generated images are extremely damaging to political candidates in particular, as with the hate and criticism directed at Joe Biden over fake images of Hurricane Helene. Cracking down on AI-generated posts is no feeble task and will require serious oversight, as well as stronger security software, on the part of social media outlets.
As Silas mentioned, cracking down on AI will require a lot of security software measures. More importantly, though, I think the government needs to add some sort of regulation -- whether that be an embedded watermark in AI images that makes it harder for individuals to pass them off as real, or laws and penalties that deter individuals from posting fabricated pictures and claiming they are truth. Unfortunately, the cat truly is out of the bag on this one, and I believe there's no real solution to this ever-growing problem besides dealing with it and educating everyone to be skeptical of everything and everyone.
Sometimes, I find it difficult to believe that others get fooled by AI images such as the ones displayed here, but I think Bridgette gets it right when she says people readily ignore fact-checking when they see something they agree with. For this reason, I agree with Alex when he says there needs to be some sort of government regulation. I think enforced education on how to fact-check online, strengthening security software, and adding watermarks are all good solutions as mentioned by Silas and Alex above. I just hope to see some efforts to regulate this problem soon.
The rapid spread of AI-generated propaganda has deeply impacted public perception, exposing how fragile our media literacy is in the face of misinformation. AI manipulates narratives by exploiting emotional triggers, as seen with the fabricated images of Hurricane Helene, swaying public perception without regard for truth. People are often quick to accept visuals that align with their preconceived beliefs, making it easy for misleading content to flourish. As others have mentioned, addressing the issue will require more than just software advancements; governmental regulation, education, and heightened skepticism are essential.
In today's day and age, no content that circulates on the internet can be fully trusted. Almost everything posted online contains some sort of fabrication; before the eruption of AI, however, people could at least assume there was some reality to it. AI's ability to create images depicting complete falsehoods is a dangerous power that has already taken hold. Because people have become so accustomed to "fake news," a fabricated image is nothing new to them. If whatever is depicted supports their belief system or political party, they will take it and run. Just as the term "fake news" has been thrown around carelessly, claims that a source is AI-generated no longer carry the significant impact they should. This generated propaganda will only deepen the stark division in society as people prioritize supportive content over the truth of the matter.
I think it is ironic that political activist and avid Trump supporter Amy Kremer has called for sympathy over a faked image while her political platform has silenced the REAL people ACTUALLY suffering from the effects of these hurricanes. Following Hurricane Helene, former president Trump and supporters of his party spread many lies, including framing the disaster as a partisan issue and the fault of Democratic representatives; in reality, regardless of party, a national crisis calls for the nation to be as united as possible so that every available effort can be directed where it is needed. The most notable of his lies were the claims that the Biden-Harris administration had spent all the FEMA money on immigrants and that illegal migrants had stolen '1 billion dollars' from FEMA, especially in light of Republican calls to defund FEMA, the agency responsible for aiding victims of national disasters. These falsely generated images, contrasted with false statements obscuring the immediate needs of a suffering community, reflect the compounded harm mass media does in times of national suffering and fear, where in many instances it is harnessed to further political standing instead of drawing in help and support for victims.
The rise of AI is definitely a threat to democracy. With the 2016 and 2020 flood of fake news and misleading information, the onset of deepfakes and other AI content will only make this problem more severe. AI is a dangerous wild west, and if restrictions or methods of detecting AI content aren't developed, it will quickly become impossible to tell real headlines from fake ones. However, an aspect of AI that some may not realize is that it lets politicians deflect blame for poor statements by claiming they were deepfakes or AI-generated, effectively removing any way to fact-check politicians themselves.
AI has expanded so much over the past couple of years that it surprises me there isn't really any regulation on it, especially when we know AI's abilities and how harmful the false images and other things it creates can be. Fake news stories during an election can be so harmful to a candidate and can seriously decrease their chances of winning, regardless of the fact that the news is fake. Like Bridgette said, people easily believe things they see on the internet, especially when it comes to confirmation bias. People in general tend to believe things really easily, so AI will be a serious threat for the future if it is not regulated in some way.
The growing use of AI in the media today unfortunately creates false narratives and almost undermines the people who have actually endured these events. As AI grows more popular in our society, it is also becoming more “normal” to spread this kind of false information through propaganda. As said in the post, it is mainly used to confirm or exaggerate the beliefs of a political party and appeal to one side. It tricks people into believing what they see on the internet and makes more sources untrustworthy, since the pictures they use may be AI fabrications.
While AI can in some ways be used for good, I also agree that there need to be regulations and higher security to monitor the output of these images that manipulate people’s emotions. AI itself really scares me because of how much it has taken over, not just in propaganda but also in social media, education, and even the workforce. With programs like ChatGPT becoming a part of everyday life, for students especially, it’s concerning how seamlessly AI has begun to work its way into society.
It’s wild to think how far we've come from traditional campaign ads to AI-generated content that can completely twist people’s opinions. I see memes used all the time on X (Twitter), by both amateur users and figures with high authority like Elon Musk, and they can often convey a very strong message alongside just a few words. When a younger viewer with a lower level of media literacy sees these memes, that first impression can heavily influence their political viewpoints. People form strong opinions before even verifying whether the image is real, and it’s scary because once these impressions are formed, they are extremely hard to undo. The 2018 study you mentioned is a reminder that people rarely change their minds, even when presented with facts. AI-driven misinformation just feeds into pre-existing beliefs, making it harder for anyone to see through the manipulation. And while some of the claims, like the bizarre idea of "immigrants eating pets," sound outlandish, they still manage to shift public opinion subtly, even if most people recognize them as jokes or satire. As for Elon Musk, whether he’s endorsing these wild memes or not, the impact on public perception is there. It seems we’re moving further away from news as a source of reliable information and closer to pure entertainment, as Ed Turner pointed out decades ago. The real issue here isn't just the candidates or the policies, but how easy it’s become to manipulate public opinion with technology and misinformation.