Monday, February 19, 2024

OpenAI's Sora: Navigating the Realities and Risks of AI-Generated Videos

        Last Thursday marked a significant milestone in the world of artificial intelligence as OpenAI unveiled its latest creation, Sora, an AI video generator capable of crafting 60-second videos from minimal input, such as a short prompt or a still image. While the model still stumbles over what OpenAI calls "some spatial and cause-and-effect elements," Sora's creations are undeniably astonishing, boasting a level of realism that could easily be mistaken for genuine footage. However, this breakthrough raises critical questions about the authenticity of videos and the potential misuse of AI technology.



Experts were raising these very questions before the software had even been released to the public. Sora is one of many products built with AI video generation as the goal; competitors including Amazon, Meta, and Elon Musk's startup xAI have attempted the same feat. Sora, however, stands out as the most successful at creating realistic content, sparking a significant conversation about the ethical implications and potential risks of the technology. Hany Farid of the University of California, Berkeley notes, "This technology, if combined with AI-powered voice cloning, could open up an entirely new front when it comes to creating deepfakes of people saying and doing things they never did."

The main concern around AI-driven misinformation seems to center on the upcoming presidential election. Given past extreme reactions to political misinformation, such as the insurrection that followed accusations of a "stolen election," it is unnerving to think about the consequences of misinformation spread even more easily through AI videos. Oren Etzioni, the founder of a non-profit fighting AI-driven misinformation in politics, claims AI videos "[lead] to an Achilles heel in our democracy and it couldn't have happened at a worse time," arriving right before the 2024 presidential election. Other concerns have been raised as well around mental health, evidence in trials, and more. Fred Havemayer, head of US AI and software research at Macquarie, has predicted that AI videos are "a substantial issue that every business and every person will need to face this year."

While an abundance of worries comes with this new technology, OpenAI claims to be doing its best to minimize negative outcomes. For example, the company plans to take "several important safety steps ahead of making Sora available in OpenAI's products," including preventing users from creating violent, sexual, or hateful imagery and blocking the software from making videos of politicians or celebrities. Your next-door neighbor, on the other hand, might not be as safe. Other safety measures have been suggested, such as watermarks on videos; however, these watermarks could easily be edited out. While these measures reflect good intentions, it is an open question whether they will make a difference in curbing Sora's negative effects.
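To see why skepticism about watermarks is warranted: if the "made with AI" label lives in a file's metadata (as C2PA-style content credentials do) rather than in the pixels themselves, a trivial re-save discards it. The following minimal sketch, written in Python with the Pillow imaging library, illustrates that fragility in general; the file names are hypothetical, and this is not a depiction of OpenAI's actual labeling scheme.

    from PIL import Image  # pip install Pillow

    # Hypothetical frame exported from an AI-generated video, carrying a
    # metadata-based provenance tag (e.g., a text chunk naming the generator).
    original = Image.open("sora_frame.png")
    print(original.info)  # metadata attached to the file shows up here

    # Re-saving writes only the pixel data; Pillow does not carry PNG text
    # chunks forward unless they are explicitly passed back in.
    original.save("laundered.png")
    print(Image.open("laundered.png").info)  # the provenance tag is gone

Watermarks burned directly into the pixels are harder to strip, but even those can often be cropped, blurred, or edited out, which is exactly the weakness critics point to.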



This situation is interesting because no Supreme Court case has set a precedent directly addressing misdeeds involving AI. Artificial intelligence was not even a conceivable idea when the Constitution was written, which may hinder the federal government's ability to form laws around it. A case we learned about in class that could apply to false information from AI is New York Times Co. v. Sullivan, which held that defamatory statements fall outside full First Amendment protection but that public officials must prove "actual malice" to recover damages. Prior restraint is still unconstitutional, however, so suing for defamation over an AI video may not undo all the damage the video has done. And because Sullivan's standard concerns public figures, the case offers little help to private individuals targeted by fake videos.

As society grapples with the advent of AI-generated videos, the landscape remains uncertain, with numerous challenges and ethical considerations. OpenAI's efforts to mitigate negative outcomes are commendable, but the true impact of Sora on our society is yet to unfold. With potential ramifications for politics, mental health, legal proceedings, and beyond, navigating the path ahead requires a delicate balance between innovation, regulation, and safeguarding democratic principles. The future will undoubtedly test the wisdom of our government officials in finding a safe and constitutional solution to this evolving technological landscape.


Sources:
https://www.cbsnews.com/news/openai-sora-text-to-video-tool/
https://www.euronews.com/next/2024/02/18/sora-openais-new-text-to-video-tool-is-causing-excitement-and-fears-heres-what-we-know-abo
https://www.newscientist.com/article/2417639-realism-of-openais-sora-video-generator-raises-security-concerns/
https://www.forbes.com/sites/roberthart/2024/02/16/openais-sora-has-rivals-in-the-works-including-from-google-and-meta/?sh=37b69faf2843

14 comments:

Aurin Khanna said...

I feel that each day, as AI improves, our future just gets scarier. Because AI is so "easy to use" and cheap, it is going to take jobs away from humans; those jobs will instead be run by AI because it is more cost-effective for the company. In regard to what you said about Supreme Court cases, I think it will be interesting to see whether the government puts limitations on AI and how it will combat the eventual AI takeover of simple human jobs. For example, how many video editors' or video makers' jobs are put at risk by AI-generated videos?

Luke Phillips said...

This article does a great job laying out the countless ethical, moral, and societal risks that are no doubt arising as we continue to push the boundaries of artificial intelligence day after day. More specifically, with regard to image-generation AI, as you mentioned, I believe many risks will still arise even with the good intentions of the developers (in this case OpenAI). Additionally, I believe that if the government DOES attempt to regulate these AI models on an extreme scale, it will only lead to more negatives, as doing so will simply force AI onto a black market, making it useful only to criminals. The reality is that the AI is already out there, source-wise, so there is no true way to limit it on the internet, much as the government has tried and largely failed to limit other internet resources before. Lastly, the fact that this development instantly destroyed nearly 10 startups in similar fields is definitely alarming; in addition to the risks mentioned, a monopoly on such technologies would seem to pose an even worse threat... if one company held all the rights to cutting-edge AI, what would happen? It would no doubt be disastrous.

Alexandra Ding said...

I looked at examples of video generated by Sora, and while some close-up shots have the slightly uncanny AI look, in wide-angle shots with lots of movement that gets lost in the noise, and I'd find it basically impossible to tell whether something was AI-generated or not. It's scary. OpenAI might say that it will take steps to prevent users from generating violent content or depictions of famous people, but judging by ChatGPT, it's very easy to circumvent these safeguards.

The other thing that worries me about this is how it's going to affect animators and actors. The WGA and SAG-AFTRA agreements don't address AI-generated video or audio content much. Animators and actors won't disappear, but if companies choose to use AI more and more, there will be less need for them.

https://www.youtube.com/watch?v=HK6y8DAPN_0
https://www.akingump.com/en/insights/alerts/ai-concerns-of-wga-and-sag-aftra-what-is-allowed

Evan Li said...

As AI grows more and more powerful, it's becoming increasingly apparent that our social media platforms are ill-suited to handle this type of content. Back when sites such as Facebook and Instagram were being developed, the idea of generating such astonishingly hyperrealistic video footage could have been dismissed as science fiction. Nowadays, however, some people rely on social media for important news. Since many people don't watch the news or keep up with any sort of current-events medium, the danger of AI-generated deepfakes grows stronger and stronger.

Recently, I've been seeing an advertisement on YouTube in which the celebrity Robert Downey Jr. endorses products known as "beta-blockers." Although the endorsement was real, I couldn't help but immediately suspect that the advertisement was some sort of AI-generated footage.

On a more positive note, I happen to intern for an AI video commerce company, and here are some positive use cases for realistic AI video generation!
- Imagine shopping online and, instead of companies paying wages to apathetic and disinterested customer service employees, receiving real-time help with your shopping queries.
- AI video generation can become a massively helpful educational aid, especially in fields such as geometry or multivariable calculus, where visualization can sometimes be a tough task for students.
- And of course, entertainment. As this technology becomes more powerful, it also grows in functionality as a tool for content creators. Your social media feed may become more polished as creators step up their game, integrating much higher-quality footage into their content while decreasing the labor required.

Annie Saban said...

I thought it was really interesting how Olivia tied this to the issue of free speech, namely the case of New York Times Co. v. Sullivan. Will safe-for-work yet fake AI-generated videos of non-public individuals be considered legal and constitutional if no malice behind the videos can be proven? I think a fabricated video could be a lot more harmful than libel, and to me, this seems like a bit of a slippery slope.

I definitely agree with Evan's point about the dangers that Sora and other AI technologies pose regarding fake news. Already, we find that people on social media use old, unrelated, or decontextualized footage to falsely blame certain people or push certain agendas. Perfectly tailored AI-generated clips could be exponentially more dangerous than this.

Satvik Reddy said...

I think the capacity for misinformation using AI has not been fully realized and will only get worse from here. For the past few years, we have had access to deepfake tech, but it was fairly obvious when it was used. Now that OpenAI has combined its language model with a video generation model, it stands to reason that as its language comprehension capabilities increase, the complexity and detail of the videos will also increase, eventually approaching a state where doctored videos are indistinguishable from real ones. I fear for how this will affect things like video and photographic evidence in court, and how it can be used to completely ruin someone's life on social media (for example, by creating a fraudulent video of them saying or doing something terrible). I think that over time, access to AI models like these will slowly degrade the mutual trust that many societal institutions are built on. I don't know if strict government regulation would do much, because AI models going underground and into black markets would very likely be much worse for society.

Maya Pappas said...

I think a lot of people don't realize the widespread consequences of this new technology. Videos permeate almost every aspect of our daily lives, to the point that we don't even notice their value anymore. It's true that AI videos can benefit our daily lives (shoutout to Evan's comment), but fake videos that seem real are really just a recipe for disaster, and personally, I think the possible harm they could cause outweighs the possible good. We rely on videos so much because they portray what REALLY happened. Someone got into a school fight? You have the videos as evidence. Someone stole a package off your porch? The footage from your porch camera is solid proof. The point is, videos are used as reliable evidence for almost everything and, until now, have rarely been open to challenge. AI changes all this, and I can't help but think it's for the worse.

Katie Rau said...

Despite people trying to point out how this may be positive, I think the ability to easily create fake videos is honestly terrifying. Tying it back to fake news, these videos could lead people who are easily influenced by social media to keep spreading things that are fake. I agree with Maya that we rely on videos as proof for a lot of things, and if we can now create fake ones, who is to say what is real? The fact that AI is so easily accessible makes this even more terrifying to me. Overall, I agree with many other commenters that despite some potential positives, the ability to create fake videos in today's society is very dangerous and is for the worse.

VishalDandamudi said...

Dangers of the technology aside, I am interested in how it will affect the distribution of labor in our society. In the past, automation has always been feared (for example, in The Ballad of John Henry, about a contest between a steel-driving man and a steam-powered drill) but has ultimately allowed people to take on safer or more interesting work. For example, the human number crunchers of the 1900s (kind of) became programmers after calculators and computers became more widespread. I guess with AI the automation is a little different, though, since rather than augmenting our skills it has the capacity to completely replace them.

Taylor Martin said...

The incredibly fast progression of improvement in AI technology is terrifying to me. These videos obviously present a range of problems in terms of potentially harmful deepfakes. Additionally, with technology like this readily available, all videos will become less reliable because there will always be some possibility that they are deepfakes themselves or otherwise highly edited with AI. This technological advancement could actually set us back significantly, preventing us from relying on video evidence in certain situations.

Abigail Lee said...

The advancement of AI is definitely something to be worried about. Resources such as ChatGPT, AI voice generators, and now Sora are so accessible and easily abused. I remember seeing news articles about parents who were scammed into sending strangers money because those strangers created fake audio of what sounded like the parents' children screaming and crying for help. Cases like those, where people may have been traumatized and actual harm was done, are proof that AI is becoming a tool that takes crime to a whole new level. I recognize the regulations that OpenAI is putting on Sora to prevent harmful videos, but I genuinely think that, with time, people will find ways past these obstacles and use this tool for really bad and disturbing things. I've always found AI pretty terrifying, and the recent advancements have felt so fast and sudden that I worry about what AI could do in merely 5 or 10 years, all within our lifetime.
https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/
https://nypost.com/2023/09/30/nyc-scammers-use-ai-against-parents-by-mimicking-kids-voice/

Mia Sheng said...

I agree that the development of AI videos is an extremely scary reality. I also think politics will be one of the areas most affected by AI videos. Many people who run for any kind of office are extremely careful about their reputation and digital footprint, and leaked videos or photos could be detrimental to a candidate's campaign. Although there are efforts to curb defamation and videos concerning politics, I think it will prove difficult to actually restrict these, especially since it can be hard to restrict fake news even now. If fake videos of politicians start appearing, they will definitely create a lot of tension and unease among voters trying to differentiate between what is real and what is fake.

Chris L said...

As AI starts to find its way into more areas, I wonder how it will affect the job market and whether people's fears of AI taking over jobs will become real. The AI-generated videos got me thinking about how the technology will eventually influence the entertainment industry in particular. Will we eventually see entirely AI-generated movies and TV shows?

Sarah Hu said...

The ability of AI to create misleading images, videos, and voice imitations that present unreliable or incorrect information is definitely concerning. For example, I've seen AI art projects that mimic the styles of real-world artists. I believe AI gathers vast amounts of data from the internet to create these images. However, this is considered a copyright violation, as it does not seek consent from the original creators of these works. Copyright protection is rooted in the Constitution itself: Article I, Section 8 empowers Congress "To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." Additionally, there are AI tools that can transform one person's voice into another's, even across languages: a person can record their voice, select a different language, and produce native-sounding speech in that language. While this sounds amazing, it could potentially be used to spread lies and deceive people, leading to misinformation. So, I do believe AI is a very debatable topic that still needs more development before it is fully mature for people to use. And when creating these AIs, the owners should try as hard as possible to eliminate copyright issues.