In an era when the line between truth and deception is increasingly difficult to draw, a new danger has emerged – “deepfakes.” Deepfakes are images and videos manipulated with artificial intelligence, typically created to depict individuals doing things they never actually did.
In response to the limited safeguards offered by federal and state governments, New Jersey State Senator Jon Bramnick is examining existing state laws to determine whether any, in the words of Newser's Arden Dier, “criminalize” the creation and use of deepfakes. If no such legislation exists, Bramnick says he will draft a new bill to address the issue.
The Westfield incident underscores the power of media and the harm misinformation can inflict. Society's growing dependence on artificial intelligence has enabled misuse that can severely harm individuals, particularly children, as this case demonstrates. The deepfakes the male students created are textbook “fabricated” and “manipulated” content: the students exploited artificial intelligence to create convincing falsehoods, revealing the pressing need for protection from evolving digital threats. The incident not only violated privacy rights but also exposed a larger policy gap. Left unregulated, deepfakes could fuel political propaganda, deepening distrust in media and exposing more people to fabricated narratives.
Technology is here to stay, and in a world increasingly reliant on media, protection from misinformation and misuse is imperative. We rely on cars, but without traffic lights at intersections, accidents would multiply; media without guardrails invites the same kind of harm. The consequences of deepfake-driven misinformation should be more than enough reason for state and federal governments to implement digital protection policies.
19 comments:
It is extremely saddening to see AI abused like this. Congress hasn't taken any action yet, but thankfully a few states have begun to move.
Perhaps even more helpful in the fight against deepfakes (and in distinguishing AI-generated content from human-made content generally) will be the watermarks AI companies are currently implementing. Google DeepMind, an AI frontrunner, recently launched its SynthID watermark.
https://www.technologyreview.com/2023/08/29/1078620/google-deepmind-has-launched-a-watermarking-tool-for-ai-generated-images/
Obviously some deepfakes are poorly done and easy to identify, but many more are scarily realistic. Hopefully social media services can build filters that screen for watermarks the way they already do for explicit content. In the case of nonconsensual porn, maybe they could create a filter that checks whether content is both explicit AND AI-generated and then take action accordingly; a rough sketch of that idea is below.
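As a sketch only, such a two-signal filter might combine an explicit-content classifier with a watermark detector. Both detector functions here are hypothetical placeholders, not real platform or SynthID APIs:

```python
# Sketch of a moderation filter: act only on content that is
# both explicit AND AI-generated. Both detectors are placeholders.

def is_explicit(image_bytes: bytes) -> bool:
    """Placeholder for an explicit-content classifier."""
    raise NotImplementedError

def has_ai_watermark(image_bytes: bytes) -> bool:
    """Placeholder for an AI-watermark detector (SynthID-style)."""
    raise NotImplementedError

def should_take_action(image_bytes: bytes) -> bool:
    # Flag for removal/review only when both signals fire.
    return is_explicit(image_bytes) and has_ai_watermark(image_bytes)
```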
These privacy barriers that AI can break through must be defended. As Vishal said, watermarks are a good step toward detecting AI content; however, they are not foolproof. Watermarks can be easy to erase in image-editing software, and nothing stops people from building AI deepfake sites with no watermarks or limitations. The government needs to take action to protect its citizens against these internet attacks, as some states have been keen to do, such as Utah with its porn restrictions. While this case is one of many like it, it could set a precedent for AI use or force the government's hand in limiting severely questionable uses of AI.
Along with deepfakes, there are other significant and potentially dangerous AI technologies worth pointing out, one of which is voice cloning. Multiple companies currently offer services that let someone clone a person's voice and make it say whatever the user wants. For example, there was a brief trend across social media in which US presidents appeared to play video games while holding humorous conversations. While those posts are just for laughs, this technology, especially paired with deepfakes, can produce misleading and potentially dangerous videos that are difficult to tell apart from reality. As Vishal and Gabe suggested, AI companies need to start incorporating ways to differentiate content created with their technology from reality, and the government needs to regulate this technology and ensure people don't use it with ill intent.
It is truly revolting that AI is being used to invade people's privacy for sexual gratification. Posting images of your face online is becoming more dangerous as AI develops without privacy safeguards. Issues concerning AI and privacy will almost certainly become a prominent political issue in the coming years as more and more people have their privacy violated through AI and deepfakes. I think New Jersey is going in the right direction by criminalizing the creation of such images, and I hope those whose privacy has been violated will not have their opportunities ruined.
I agree that regulation in this case is absolutely necessary to protect citizens from the misuse of AI.
Women are most frequently the victims of deepfaked sexual content. For years already, AI has been used to create sexual content of women without their permission and publish it online. According to the MIT Technology Review, a research company that tracked deepfake videos online in 2018 found that 90% of the content was nonconsensual porn of women.
This content can ruin women's lives, taking a toll on their professional status and mental health. It is critical that the people creating these deepfakes are seriously punished to deter this type of violation.
https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/
The thought of AI at the fingertips of sadistic people like these high schoolers is terrifying. As AI continues to develop, there must be restrictions that are heavily enforced in order to prevent further abuse like this. While humans are hard to restrain, since even with rules they may feel empowered to break them, AI is likely easier to control. Strict laws should therefore be put in place, with harsh punishments for cases such as defamation.
AI is being abused, and it's very disheartening that advanced technology is being used for harm. For example, women are being dehumanized and disregarded through the use of AI: I have seen AI used to put female celebrities' faces on pornography without consent. The same technique shows up in politics; I have seen AI-generated videos of presidents, such as Trump, made to spread misinformation.
What makes it scarier is that AI is available at everyone's fingertips. Recently, my younger brother downloaded an app that recognizes your face and manipulates it to lip-sync things like a music video. It may seem harmless right now, but if AI manipulation is that easy to access, imagine what the future holds.
As AI continues to evolve, more people are at risk of manipulation and harm. In this instance, it's young women who are the victims of deepfakes, but they are not the only ones. It is crucial that policymakers take the initiative in restricting how AI is used, especially when it creates content that can harm an individual's reputation and potentially their loved ones too. Ultimately, the spread of deepfakes leads to the spread of misinformation, and people who abuse AI must face consequences.
The incident discussed in this blog is disappointing and disgusting. As people continue to use AI for harm, we need regulation. Deepfakes are especially concerning given what we have learned about the impact of fake news. People who are unaware of deepfakes and lack skepticism toward media will fall for fake news when they see someone they deem reliable talking. If someone follows a specific politician and can't tell what is real, it will cause confusion and discord. This blog's example is especially horrible: I have heard of deepfakes faking nudes or porn of celebrities, but for a classmate to do this to another is beyond concerning. This is exploitation, and it shows how far this has gone and why it needs to stop.
The use of deepfake technology to create nonconsensual pornography is revolting. Another consequence of improving deepfake technology is, of course, that people will use deepfakes to promote their political agendas. There are already extremely convincing videos on social media of politicians saying outlandish things, but what people should really worry about are believable deepfake videos that could incite conflict and violence. However, the government, as well as top universities such as MIT and Caltech, are developing deepfake-detection tools. Kaggle, a popular machine learning competition site, has offered millions of dollars in prizes for the best deepfake-detection technology. The rise of deepfakes may force people back to traditional national media to ensure the political content they consume is undoctored. This raises the question: will people be willing to put in that effort? And if so, will the media take advantage?
Link To MIT Article on Counteracting Deep Fake Technology
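To give a rough sense of how many of those detection tools work, a common approach scores a video frame by frame with a trained classifier and aggregates the scores. This is an illustrative sketch only; the per-frame classifier is a placeholder for the trained CNNs real detectors use:

```python
# Sketch of frame-level deepfake detection: score sampled frames
# with a binary "fake" classifier, then average the scores.
from statistics import mean
from typing import Callable, Iterable

Frame = bytes  # stand-in for a decoded video frame

def video_fake_score(frames: Iterable[Frame],
                     classify: Callable[[Frame], float]) -> float:
    """Mean per-frame probability (0.0-1.0) that the video is fake."""
    scores = [classify(frame) for frame in frames]
    return mean(scores) if scores else 0.0

def looks_like_deepfake(frames: Iterable[Frame],
                        classify: Callable[[Frame], float],
                        threshold: float = 0.5) -> bool:
    # Flag the video when the average fake score crosses the threshold.
    return video_fake_score(frames, classify) >= threshold
```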
Honestly, this just scares me so much. It's already difficult having social media and posting things online while managing the hundreds or thousands of people who might see any post. Deep insecurities are common, and constant technology use makes them larger obstacles. Seeing things like this makes me frightened for the next generation and all the new ways twisted people will hurt children's lives. It's hard to imagine any way of counteracting this wave of intrusive technology other than severely limiting access. I hope that a foolproof system of cybersecurity and protection will soon emerge, providing defenses against horrible situations like these.
This case in New Jersey is a prime example of how our generation, and those beyond it, will feel the detrimental and extremely adverse impacts of AI and deepfakes. What occurred seems like something out of a movie about the future, not our present. Because deepfakes are nearly indistinguishable from authentic pictures, this era of technology will have us asking, “What's real?” and “Can I trust anything anymore?” If the threat of misinformation is bad now, just imagine it in twenty or thirty years. According to a 2019 report, 96% of deepfake videos online were pornographic; deepfakes have generated hundreds of millions of views and are almost always nonconsensual images or videos. The Brookings Institution summed up the dire implications of deepfakes in this quote: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.” Only time will tell when something digital wreaks havoc on our world.
https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=6ae1437a7494
I agree that the malicious use of deepfakes, and of AI in general, is a problem that already exists and will continue to grow in the coming years. Updates to laws regarding security, privacy, and copyright, among other things, are definitely called for, especially since those laws were written before these technologies existed. This also highlights the need for government to be more proactive in general, since deepfakes aren't exactly new (they originated around 2017). Another issue that may flare up as election season draws near is the use of deepfakes to create fake news headlines. The problem of social media as a means of becoming informed will be exacerbated by deepfakes, since they open up a whole new way to create false narratives. Deepfakes can be made sensational and thus will likely circulate widely, and even if people aren't entirely convinced of their credibility, they can still sow confusion and detract from intellectual discourse online.
Hearing about how people are using AI to abuse and hurt others makes me extremely upset and scared for the future. The fact that this disgusting person used AI to make nudes of HIGH SCHOOLERS, MINORS, is diabolical, and he should face serious repercussions.
Knowing that technology can make such realistic imitations of humans is scary. It can be used to hurt the average Joe and the most powerful person alike. I've seen clips of people using AI to make celebrities such as Snoop Dogg and political figures like Barack Obama say random things. Most are relatively harmless, made more as jokes, and I found myself enjoying some of them. However, there are always a few who take it too far and make these figures say highly controversial and inappropriate things. I'm glad that malicious uses of AI have gotten more widespread attention and that precautions are being taken to hopefully reduce disgusting acts like this.
Relating to our Socratic seminar about political polarization and distrust of the media, I think AI makes it even easier to unknowingly spread disinformation because of how realistic it seems. Speaking from personal experience, I've scrolled through YouTube and Instagram and seen AI videos that seem too crazy to be true but look very realistic, with no watermarks.
While AI makes processes like data analysis and monitoring more efficient, I think its influence on social media is negative overall.
While the idea of deepfakes of private individuals being created for inappropriate purposes is very concerning and damaging to those it impacts, I'm particularly worried about how a deepfake and voice clone of a politician, for instance, could be used on social media platforms to spread disinformation. Given the strong majorities who believe social media has made people easier to manipulate with false information (over 80% of those surveyed by Pew Research), it seems plausible that the same people willing to trust someone who merely claims credibility in writing would trust the image and voice of a trustworthy political figure, especially when it is encountered while scrolling through a news feed, with the reader not primed to consider potential misinformation. Similarly, a politician might have their public image destroyed or gravely wounded by a deepfake image or video, in their appearance and voice, espousing controversial or offensive ideas. Even though software teams are working on ways to identify deepfakes, a study on misinformation published in Nature Human Behaviour shows that while fact-checking can work temporarily, the reduction in belief in misinformation does not persist over time. Thus a single damaging but viral video could color the public's perception of someone even after it is identified and shown to be false, and I'm concerned that such things could deceive the public on important matters.
https://www.pewresearch.org/global/2022/12/06/social-media-seen-as-mostly-good-for-democracy-across-many-nations-but-u-s-is-a-major-outlier/
https://www.nature.com/articles/s41562-021-01278-3#Abs1
I agree that there needs to be more monitoring of AI technology, especially deepfakes, which can be used to manipulate information. This matters for the upcoming 2024 presidential election, because a candidate could try to create a fake video or photo of an opponent. And it doesn't even have to be a candidate: as this post shows, high school students are able to access and use this extremely harmful technology. I've heard of a company that plans to start searching for fabricated deepfake media in response to the upcoming presidential election. Hopefully this will make it harder for people to spread false information and damage others' images. I think people will continue trying to pass legislation to combat the rise of deepfakes, as Senator Jon Bramnick is.
I think AI technology has become super prevalent, especially in recent years. From ChatGPT to deepfakes, the internet has really become a place of limitless possibility. But AI has also made the world more dangerous. The ability to impersonate someone from just one photo or video is extremely scary, because these deepfakes can become a cyber threat. Such an accessible, deceptive tool is a real threat in today's society, making it easier to steal people's information, post false information using a deepfake of an important person, and even commit crimes. As technology advances we discover amazing possibilities, but we also face many more dangers, and I often fear how much more advanced such technology can become. Where does it stop? Or will there ever be a limit? It's interesting but also very scary to think about the impact AI and technological advancements will have on the future.
To respond to Konstantinos' point, I'm glad you brought up voice cloning. In addition to deepfakes, voice cloning is a large concern for many media users. Since AI technology has become more popular and much more powerful, I think it's imperative that government policies be put in place to monitor the extent to which AI is used. If you think about it, there isn't really a morally good reason to use voice cloning or deepfakes, as both portray false narratives. I also agree that if AI tech companies wish to make their programs more powerful, it's important that they also make sure their users can differentiate between what's real and what's fake.