Thursday, October 29, 2020

Misinformation and Its Circulation Before the Election

In a world saturated with technology and media, news and information are easily accessible from numerous outlets. According to a study conducted by Pew Research, 55% of adults get their news from social media either "often" or "sometimes." Today, we see that social media can have incredible political influence through popular sites such as Twitter and Instagram. While it makes information more widespread, it also allows misinformation to propagate easily, affecting citizens and voters who stumble across it.


This week, researchers at the University of Southern California uncovered Twitter bots being used to spread misinformation ahead of the election. They identified thousands of accounts posting about President Trump's and Joe Biden's campaigns, many trying to spread falsehoods through a flurry of incorrect statements and far-right conspiracy theories. About 13% of these accounts were found to be automated, and because of their high tweeting rates, they steadily amplify the volume of misinformation. Emilio Ferrara, who led the study, notes that the most concerning part is that "they are increasing the effect of the echo chamber." With few ways to stop these accounts, it becomes difficult to halt the stream of misinformation circulating across these sites.

This year has been steeped in controversial and provocative events. From President Trump's impeachment to COVID-19, these sensational events allow exaggerations, assumptions, and wild statements to circulate easily. It becomes difficult to fact-check every source, and as the amount of information grows, truths, falsehoods, and opinions blend together. As the line between information and misinformation blurs, individuals become more susceptible to having their opinions and decisions manipulated.

With the current pandemic, individuals have been more likely to turn to social media for their information, and major companies have made greater efforts to curb the influx of misinformation. Recently, Facebook removed ads from the Trump and Biden campaigns that could mislead voters in states where early voting has not started. If this kind of behavior becomes a consistent habit for popular media platforms, we can potentially curb some of the detrimental effects of misinformation.

New York Times

PBS

Forbes



12 comments:

Anonymous said...

In my opinion, all this action taken against misinformation is too late. By not securing their platforms earlier, many of these media companies let misinformation spread because they often profited from it. Many companies are only making changes because there is controversy and attention directed at them right now; I highly doubt they would be making changes without all this pressure. However, credit must be given where credit is due: these changes are better late than never. And as consumers, we must remember that these media companies are businesses; they exist to make money, not to educate us. As readers, it is our responsibility to sort through information. This brings up the question: should the government place more restrictions on businesses when dealing with misinformation, or should more be done to educate the public and teach them how to properly sort through sources?

Anonymous said...

Similar to what we discussed in class, these media companies are driven by profit and bear no direct responsibility for the misinformation. Targeting, narrowcasting, and ad algorithms are only a few of the many techniques used on everyday consumers of these sites. Political parties are able to exploit this business model by pouring their campaign funds into creating favorable narratives for themselves, which further polarizes the political scene. One's confirmation bias, combined with internet algorithms that curate ads, videos, and articles to one's personal taste, creates an echo chamber or infinite feedback loop. Controversy sells; truth normally doesn't (e.g., QAnon conspiracies). For example, YouTube has complex algorithms that promote viral videos and calculate your recommended video feed; viral videos are clicked on more, and more ads are run.

To return to Arnav's question, I ultimately think there is a fine line between censorship and the still-undiscovered ideal balance of free speech and truth. As citizens, we need to become more proficient at differentiating between truth and conspiracies by constantly checking our sources for bias and overall reliability, and by building a large, diverse set of sources that we can trust and corroborate against.

Anonymous said...

The use of bots across all types of social media has grown rapidly over the last few years, though the use of automated mechanisms to push a particular message goes back much further than Twitter bots. Robocalls and spam emails have been around for a long time, and it looks like Twitter bots are the next evolution of this archetype. To combat this, Twitter and other websites would have to come up with some way of automatically identifying bots and flagging them as such, although this method would probably result in a lot of false positives. However, with the advance of machine learning, this should be possible if the social media companies really want to make their sites better. Facebook's removal of political ads is a step in the right direction, though it remains to be seen how social media companies will combat misinformation in the future.
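As a rough illustration of what such flagging might look like (this is not Twitter's actual system; the feature names and thresholds below are invented for the example), a simple heuristic could score accounts on signals like posting rate and account age and only flag those above a strict cutoff to limit false positives:

# Toy illustration only: a hand-rolled "bot score," not any platform's real detector.
# Feature names and thresholds are made up for the example.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    account_age_days: int   # time since the account was created
    followers: int
    following: int

def bot_score(acct: Account) -> float:
    """Return a 0-1 score; higher means more bot-like under this toy heuristic."""
    score = 0.0
    if acct.tweets_per_day > 50:                        # round-the-clock posting
        score += 0.4
    if acct.account_age_days < 30:                      # very new account
        score += 0.3
    if acct.followers < 0.1 * max(acct.following, 1):   # follows many, followed by few
        score += 0.3
    return score

def flag_accounts(accounts: list[Account], threshold: float = 0.6) -> list[str]:
    """Flag accounts above the threshold; a strict cutoff reduces false positives."""
    return [a.handle for a in accounts if bot_score(a) >= threshold]

if __name__ == "__main__":
    sample = [
        Account("likely_human", 3, 2000, 500, 400),
        Account("possible_bot", 120, 10, 12, 4000),
    ]
    print(flag_accounts(sample))  # ['possible_bot']

A real system would presumably learn such weights from labeled data with machine learning and pair them with human review, which is exactly where the false-positive tradeoff mentioned above comes in.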

Harbani said...

I agree with Howard's points. To build off of them, social media platforms like Facebook were never meant to be places to get the news; they simply started as a way to connect with others online. Politicians have exploited this by reaching certain crowds on social media and spreading misinformation to users. The challenge becomes: how can this misinformation be controlled? I believe that social media users should be wary about what they read (e.g., fact-checking their sources) and that social media companies should be extremely proactive about removing misleading information. Unfortunately, politicians and other parties will continue to exploit the business models of social media, so both social media companies and users should be extremely careful regarding misinformation.

Anonymous said...

Traditionally, media has served as a gateway used by political parties or candidates to get their message to the public. It started with newspapers back in the 19th century, and as times have changed, the media has evolved into new forms, from the nightly news to social media. I believe that as media continues to transform, the protections, especially those against fake news, should continue to evolve. As Howard mentioned, most types of media are businesses, and thus we cannot always trust the media organizations themselves to be responsible for what ends up on their sites. Fake news, especially around recent election seasons, has become nothing less than an epidemic, often fueled by media organizations that encourage this type of reporting. Although I applaud corporations like Twitter for taking action within their own boundaries, I believe there should be a greater emphasis on governmental fact-checking and regulation of media sources. Although this could spark anger over possible violations of the First Amendment, the reality is that as media bias and fake news persist, rapid polarization will only continue.

Anonymous said...

In terms of COVID-19, we have seen the detrimental effects of this 'infodemic,' where, as Christina explained from the USC study, misinformation spreads through an "echo chamber effect." With so much false information dispersed so rapidly, it has become very difficult for people to draw individual conclusions, perhaps even after consulting various sources. As this effect heightens, it is also important to note that the mainstream media are part of the problem as well. President Trump's constant reference to COVID-19 as the "China virus" is broadcast on large platforms, allowing the dangerous effect of this sentiment to spread to millions of Americans.
I agree with many of the statements above, particularly with Arnav and Howard’s ideas regarding the importance of recognizing that media companies are businesses, while also reiterating the critical need for balance between government and citizens’ responsibilities in sorting through misinformation. I also agree with Harbani’s points on control, and I wonder if there are specific methods that could be implemented to engage the government and media companies in preventing and eliminating this misinformation.

Anonymous said...

With the rise of social media, many people, especially younger people, use it not only for personal purposes but also as a source of news. Since these sites rely on ads for revenue, it is very easy to create ads carrying fake news. What began with telemarketers and spam calls has now moved on to automated bots and ads. Fake news makes outrageous claims that attract people who don't know much about the topic, and those people are susceptible to believing it is true. But what is the purpose of doing this, of gaining these clicks? There is also extreme fake news, which I hope most people will recognize as so outlandish that it cannot be true. And there are people who use their campaign money to create favorable views of themselves and perhaps slander the opposition. Reading or watching controversies is much more interesting than reading the truth; every type of social media has algorithms, and the more popular content, which tends to be the controversial content, gets pushed to the top. We just need to make sure we can differentiate the truth from fake news by checking which sources we read and the biases they carry, and by always fact-checking our sources against others to make sure the facts we know are true and reliable.

Kayla Li said...

As many of the previous commenters mentioned, the rise of misinformation comes with the rise and polarization of social media. Social media prioritizes engagement in order to make money, meaning any attention is good attention for the company. Social media companies wouldn't be inclined to place restrictions because doing so would staunch their profit. Additionally, social media companies are not held to the same standards as professional publications. Journalists are held accountable by libel and privacy laws, and violations for the most part result in severe consequences, whether a lawsuit or termination from the company. Social media does not carry the same degree of consequences because the outflow of information far outpaces the robustness of the moderation algorithms in place. There has also been increased distrust in the media, particularly during Trump's presidency, meaning his followers are more likely to listen to what he has to say rather than to publications, because those followers view the media as biased.

Isabella Liu said...

With the rise of media and the internet, many businesses use this system as the main way to promote their ideals. The goal of many media sources is to function as a business transaction: selling pieces of information that tell the public what they want to hear in order to seem more "informative" and "entertaining," rather than to take a firm position. This mess of media and internet keeps many Americans from forming fair political beliefs, as their information is often manipulated for revenue rather than for educational purposes.

Anonymous said...

As Howard talked about, the pursuit of profits plays an important role. All the major social media platforms are businesses that prioritize profit, and they know that even if they do nothing about misinformation, most users will continue using their platforms. I think it's unlikely there will ever be a way to completely limit misinformation, and even if there were, I doubt the companies would be excited to use it, given that it would reduce engagement and in turn lower their profits. I think the best solution is to educate people (the younger generations most importantly) on how to sort through the information they see.

Anonymous said...

Along with others, I agree that the greed and false sense of success that come with presenting fake news add to the media's appetite for it.

Oftentimes fake news is so eye-catching that one is ready to spread it like wildfire to family, friends, and more. This results in less fact-checking and therefore makes it harder to limit the spread of misinformation.

In the new age of social media, where it is so easy to post something fake that may go viral, I think it's hard to stop the spread of fake news. Educating others to refrain from contributing to this spread is probably more effective.

$horyoung Gong said...

Misinformation is just an extension of entropy; it essentially becomes a given when the world's population is over seven billion. Especially in the information era, the spread of fake news, intentional and unintentional, occurs day to day. There is no real way to stop misinformation unless a 24/7 AI thoroughly combs through forums, articles, and everything else on the net. The best we can do is limit the spread of misinformation, and I think as of right now we are doing a pretty detestable job of it. The influx of misinformation is closely tied to free speech, and suppressing it risks suppressing a constitutional amendment. Social media platforms have done their best by sweeping through as private actors, but they cannot take down false information posted in droves. The best method, I believe, is internal action: schools should really hammer home the difference between fiction and fact. There isn't a fully realistic approach to this, but I think it is one of the better methods.