Tuesday, February 27, 2024

US Supreme Court to Hear Landmark Social Media Cases

We’ve devoted numerous class periods to discussing the US Supreme Court’s role in social media and online free speech, and now the Court is poised to make a pivotal decision that may transform the internet as we know it. As expected, the controversy primarily centers on whether social media companies engage in First Amendment-protected speech when they moderate content, here more specifically misinformation and hate speech. The Court’s involvement stems from two laws passed by Florida and Texas after the 2021 Capitol riot, which are now being addressed as the justices hear two landmark cases on Feb 26, 2024.

The Republican-backed laws, passed by Florida and Texas, prohibit tech companies from removing certain political content from their social media platforms, even content the companies deem objectionable. At the time, the states claimed that such laws were necessary to prevent platforms from discriminating against conservatives. State officials further claim these restrictions on content moderation are constitutional because “they seek to regulate social media platforms’ business behavior, not their speech.” Yet this directly leads to what a group of political scientists call “dangerous and violent election-related speech” being treated as equal to innocuous posts.


(Conservative Supreme Court heard arguments Monday: https://www.latimes.com/politics/story/2024-02-26/supreme-court-hears-a-1st-amendment-clash-on-whether-texas-and-florida-can-regulate-social-media)

Now, the Supreme Court is considering arguments over whether Texas and Florida should be given such control over tech companies. The two cases heard on Feb 26, 2024, NetChoice v. Paxton and Moody v. NetChoice, will ultimately produce a pivotal ruling: whether states can forbid social media companies from blocking or removing user content that violates platform rules.

With the First Amendment protecting citizens’ freedom of speech and expression from government censorship, supporters of the state laws argue against tech companies, which they believe are left-leaning, stating that the laws “protect the First Amendment rights of conservative users from censorship.” After being removed from Facebook, Twitter, and YouTube for his inflammatory comments during the Capitol riot, Donald Trump voiced support for the state laws, arguing that a tech company’s right to “discriminate” against a user is not protected by the Constitution. Similarly, conservatives in the US have long attacked major companies’ moderation policies, believing them unfairly biased toward left-wing views, and Gov. Greg Abbott, who signed the Texas bill, claimed the law ensured that “conservative viewpoints in Texas cannot be banned on social media.” Florida’s solicitor general added that companies act with too much power when they attempt to moderate posts, treating the First Amendment as if it were designed to enable the suppression of speech rather than prevent it.


(Social Media: https://www.cnn.com/2024/02/26/tech/supreme-court-social-media/index.html)

While at this stage of the case it remains unclear how the justices will ultimately rule, the divide among some of the court’s conservatives is evident, and many strong arguments against the state laws have been voiced. As discussed in our lessons on numerous court cases, precedent carries enormous weight as justification for supporting or rejecting a new case. Paul Clement, the lawyer arguing on behalf of NetChoice, cited previous Supreme Court rulings which held that “private organisers could not be forced to carry messages they did not agree with.” Similarly, federal opposition to the state laws has pointed to prior rulings that emphasized editorial control as “fundamentally protected by the First Amendment.” Furthermore, a major justification for this opposition is that the platforms the state laws target are private parties, and thus are not bound by the First Amendment. Clement offered a humorous example in support of this argument, prompting laughter in the courtroom: a Catholic website could exclude a Protestant from participating in a discussion because it is a private forum, and the government cannot tell the website, as a private party, that it must let the Protestant into the Catholic party.

If the Court rules in favor of the states, decades of precedent against “compelled speech” could potentially be reversed, with consequences reaching far beyond social media. First, if companies were prevented from moderating content, they would effectively be forced to carry all content, regardless of how much antisemitism or pro-suicide material the posts contain, as Clement suggested. Moreover, the Florida law and argument are so broad that they raise the question of whether, if the law continues to be upheld, not only would social media change in a variety of ways, but platforms such as Gmail, Amazon’s web services, and even Google would lose all power of moderation. Despite such claims, the ruling remains uncertain for now, and some justices are even signaling a desire to send the cases back to lower courts, suggesting a ruling will not be made until the states’ laws’ provisions are reviewed further.

Sources:
https://www.bbc.com/news/world-us-canada-68407977
https://www.cnn.com/2024/02/25/tech/us-supreme-court-landmark-social-media-cases/index.html
https://www.cnn.com/2024/02/26/tech/supreme-court-social-media/index.html
https://abc7chicago.com/supreme-court-social-media-texas-florida/14469953/
https://www.nytimes.com/live/2024/02/26/us/supreme-court-arguments-social-media

7 comments:

Chris L said...

I understand both sides of the argument here. In general, I think the tech companies should try to remove speech that is intentionally hateful or dangerous. I don't think they should be blocking out views only because they disagree with them. In Donald Trump's case with being banned from Twitter after encouraging his supporters to fight at the Capitol, I think this should be regarded as dangerous speech, and rightfully removed.



Dayrin Camey said...

I think that people don't like to be called out when their views are very extreme or not accepted by all. I believe that platforms have the right to remove posts that promote hate/violence/etc. Although the First Amendment protects speech, I think we are in a time where we call out things we don't agree with, just like conservatives point out loudly how much they disagree with left-leaning views. As Paul Clement claimed, "private organisers could not be forced to carry messages they did not agree with," and Donald Trump is a perfect example of that. He supported the attack on the Capitol and was banned because of his posts on various social media platforms, and rightfully so; his posts were extreme and it was dangerous speech, as Chris L said.

Amit Shilon said...

Conservatives claiming that they are being “censored” on social media platforms and using Donald Trump as an example is rather ironic as Trump often had his posts flagged or even removed as they contained false information regarding the 2020 election. Claiming an election that was proved to be lawful is “rigged” is false information that should be censored or flagged. Instead of showing concern for false information being spread, conservatives want more opinions like Trump’s to be spread on social media platforms. While social media is not unproblematic, considering its tendency to create echo chambers and promote confirmation bias, state governments should not get control over social media companies so they can prevent hate speech from being censored.

Aurin Khanna said...

I think it is important to recognize that social media has such a big effect on people's beliefs, and millions of people have fallen into the trap of believing misinformation or information from non-credible people. So I do think social media apps should be able to flag information that is proven to be not credible or just false, but I also think there should be a line on what gets deleted; a social media app can't just delete one political side or idea, as that would make the app look biased. There has to be a line on what is allowed to be posted and what isn't. For instance, Twitter banned Trump because he violated their rules; Twitter set a precedent that if you break the rules you will be banned, so you can't pinpoint Twitter as biased, because they put a boundary on what is allowed and what isn't.

Rachel Ma said...

I agree with pretty much what the previous commenters have said, and just wanted to add that I thought there was an interesting connection to be made with the claim of “dangerous and violent election-related speech” to Brandenburg v. Ohio, the case that overturned Schenck v. US and established the standard of incitement of "imminent lawless action." In this case, I think what Trump did definitely applies.

Also, I think X currently is a great example of the consequences of a lack of moderation on social media platforms -- rampant misinformation, extreme, arguably harmful views, and a massive loss of users and general bad reputation. Just like how speech in real life can have consequences if it impedes on the rights of others, this should apply to social media too, in my opinion.

Alexandra Ding said...

If people want to express extreme or unsavory views, they can do so on a personal website. Social media platforms shouldn't be obligated to maintain everything that gets posted on them. I think it's a little ironic that conservatives are increasing regulations on companies to promote their opinions. If the roles were flipped and liberals were the ones who had passed those laws, conservatives would definitely be calling them out for infringing on companies' First Amendment rights.

Sarah Hu said...

I think finding the right balance between protecting free speech and allowing companies to manage their platforms according to their standards is crucial. I believe there's a need for some regulation on social media, especially when it comes to harmful or misleading content. It might also be time to rethink how the First Amendment applies in the context of free speech online, including setting boundaries around speech that could lead to threats or illegal activities.