Monday, March 9, 2020

Twitter Uses New Manipulated Media Alert on Edited Video



On Saturday, Dan Scavino Jr., the White House Director of Social Media and Assistant to the President, tweeted an altered video of Joe Biden at a campaign event. The edited video shows Biden saying, “Excuse me. We can only re-elect Donald Trump,” cutting off the remainder of the sentence (full sentence: “Excuse me. We can only re-elect Donald Trump if in fact we get engaged in this circular firing squad here. It’s gotta be a positive campaign”). Trump later retweeted the tweet.

Twitter responded to the release of this video by marking it as “manipulated media,” a label intended to notify viewers when they are seeing altered media. This is part of Twitter’s new Synthetic and Manipulated Media Policy (policy details). Unfortunately, the tweet wasn’t labeled as manipulated until 18 hours after it was posted, and the label appears only when the tweet is viewed on a person’s timeline. However, Twitter only recently implemented this policy, so some of the details are still being worked out.

I personally support this new Twitter policy. Following the incidents of altered media during the 2016 presidential election, it is reassuring to see steps being taken to prevent voters from receiving false information on social media. However, when I looked at the tweet, the manipulated media alert was quite small, and many of the comments and related retweets did not seem to recognize that the video was edited. This may be because commenters did not see the label, or because the label does not always appear. Either way, there is still progress to be made in ensuring that viewers are informed about the accuracy of the media they view. I also found it interesting that this video went up on Facebook as well, but Facebook did not flag the video as falsified until after Biden’s campaign manager released a statement calling Facebook’s failure to notify viewers of false information unacceptable and unethical. I understand why the Biden team reacted this way, as voters may be influenced by the clip, and overall it just doesn’t create the best image for the Biden campaign.

  1. Do you think all social media platforms should have a shared policy or strategy to alert users when they are viewing false or edited videos/images?

  2. What is your opinion regarding the Trump administration creating an edited video of Biden and releasing it to the public?


5 comments:

Anonymous said...

I think that all social media platforms should definitely have some sort of alert to notify users when manipulated images/videos are being posted. The edited video of Biden is a clear example that posting such media can easily misinform others and greatly affect other areas of life. A video like this could change the direction of the electoral votes, should people actually believe it. As for the Trump administration editing the video and releasing it, I think that it is extremely immature and a ridiculous attempt to gain more votes. The fact that Trump actually thinks people may wholeheartedly change their votes in his favor, just because of a clearly edited video, is very naive. He may have been able to confuse some ill-informed individuals with the post, but it is still very unbelievable that one of the leading Democratic candidates would voice their support for Trump.

Anonymous said...

Every platform that disseminates information should endeavour to ensure that what it shares is factual. But this is easier said than done. A.I. is not yet up to this task, so companies like Twitter have to pay humans to sift through post after post, determining their truth. And these employees themselves don't know everything; it takes time to find out what is really true and false. This particular case would take lots of users reporting the video (as it came from a government official) and an employee with appropriate political knowledge checking the legitimacy of the reports and flagging the post. That is why companies like Facebook are reluctant to invest time, money, and manpower in this inefficient and costly process (and why it may take eighteen hours). Yet, at the same time, Facebook and its child companies have billions of users, and it is very important to protect them from false news, videos, propaganda, etc. This is especially important with the election just months away. Users and voters should be free to make up their minds without manipulation from outside candidates, parties, and nations (Russia!). Spreading only the truth should not be a job for companies alone; the people posting these falsehoods (like the President) should think, logically and morally, before they post (as they cannot really be stopped from posting, due to their 1st Amendment rights).

Shirleen Fang said...

While some people may not have seen the Twitter warning, I think other people may have simply chosen to ignore that the video had been altered. When a Trump opponent seems to admit defeat, Trump supporters rally behind Trump, emphasizing his "greatness." I think in this case, some supporters did just that, despite any warning signs they might have seen. This, combined with the fact that the Trump administration failed to verify the legitimacy of the video, could be another example of political partisanship in media clouding a person's judgement and further splitting Republicans and Democrats.

Anonymous said...

I think Shirleen brings up a great point. Whether there is a warning or not, some people choose to see and believe whatever fits in with their personal views and beliefs. When looking at the responses to the tweet, it was obvious that some people noticed that the video was altered, while others continued to support Trump. This shows how human beings are sometimes ignorant and blindly support whatever lines up with their views.

Anonymous said...

I think this is a really good idea that more social media sites should implement. Especially in an era where the phrase “fake news” is thrown around a lot, I think having a more objective alert like this will help distinguish true information from deception. Implementing this might also increase the amount of trust people have in media, since they would know that there is something to stop manipulated media from slipping through the cracks and into social media feeds. At the same time, however, similar to Shirleen’s point, there will always be people who ignore such alerts and only choose to believe or share media that confirms their beliefs, even when there is objective proof, such as scientific data/research or videos that have been proven to be altered.