Sunday, January 21, 2024

AI Policy Shift: OpenAI Partners with U.S. Military

 

OpenAI, the artificial intelligence (AI) research company behind ChatGPT, removed the prohibition on using its technology for “military and warfare” from its usage policy on January 10. The eased restrictions on military uses propelled OpenAI into the debate surrounding AI’s growing military integration.

The change appears to have been prompted by OpenAI’s collaboration with the U.S. Defense Department. OpenAI spokesperson Niko Felix stated that OpenAI wanted to pursue “national security use cases that align with our mission,” specifically citing plans to develop “cybersecurity tools” with the Defense Advanced Research Projects Agency (DARPA). This progression aligns with retired Army General Mark Milley’s claim that the United States, its Defense Department, and its military need to “embrace” AI to keep pace or stay ahead globally.


When questioned about the January 10 policy changes, OpenAI representatives pointed to the policies that remain in place, such as the prohibition on using the service to “harm yourself or others.” In response, many critics have voiced concerns over the ambiguity of that language. Sarah Myers West, managing director of the AI Now Institute and a former Federal Trade Commission AI policy analyst, points out that OpenAI’s decision to drop “military and warfare” from its usage policy has questionable implications at a time when AI systems are being used in the Israeli military’s offensive in the Israel-Gaza conflict. She warns that OpenAI’s “permissible use policy [...] raises questions about how OpenAI intends to approach enforcement.”


Although OpenAI representatives maintain a firm stance against using AI for weapons development or to harm others, discussions of AI over the past year reveal that U.S. military leadership does not share the same vision. In 2023, the Pentagon developed an AI adoption strategy to optimize decision-making for “superior battlespace awareness” and “fast, precise and resilient kill chains.” On January 9, 2023, Michael Horowitz, the deputy assistant secretary of defense, reported a positive outlook: “We’ve launched initiatives designed to improve our [AI] adoption capacity, and I think we're really starting to see them pay off.”


This relates to the bureaucratic structure we learned about in class: the Department of Defense, an executive branch department of the U.S. federal government, relies on DARPA to carry out its assignment, under a 2023 Executive Order from the Biden Administration, to use AI models to “remediate vulnerabilities [...] in national security systems.” Under that responsibility, DARPA runs the cybersecurity pilot that OpenAI has joined. And it is not just OpenAI: Anthropic, Google, and Microsoft are also assisting DARPA in developing cybersecurity software that will automatically harden infrastructure against cyberattacks.


While the same Executive Order also sets forth federal efforts to develop structured guidelines for the responsible use of AI, I feel that both the technology and the surrounding policies still need more development to prevent intentional and unintentional harm. It is concerning that private AI companies have begun breaking down the strict barriers that were set up to protect against the dangers of this new technology, especially since we’ve seen multiple accounts of its unreliability. One example: users have been able to trick ChatGPT into bypassing its safeguards and explaining how to make a bomb. As much as we are excited about the seemingly infinite possibilities of AI applications, are we ready for its potential? How far should we allow AI to expand into military usage? Or has this already been a step in the wrong direction?



3 comments:

Alexandra Ding said...

I'm curious about whether OpenAI would develop software for the US military other than "cybersecurity tools," and although there's no definite answer, it might also be used to optimize supply lines. Perhaps military leaders envision using it to choose targets, which brings up a huge question of responsibility (would the developers, the human who hopefully reviews the AI's decision, or someone else be responsible?). I'm scared about what AI is currently being used for in warfare and what it will be used for next, but at the same time, it's a tool that militaries will try to adopt as quickly as possible, and I doubt there's any stopping it.

https://www.pbs.org/newshour/show/how-militaries-are-using-artificial-intelligence-on-and-off-the-battlefield

Lipika Goel said...

Building off of what Alex said, there are definitely some really useful applications of AI in supply chain optimization, both for the military and for private companies. If AI companies were going to seek profit and big-name (big-money) collaborations, I wish this were where their attention had gone first, rather than working with the military on actual combat. When the ethics of AI are already under question, collaborating with one of the most controversial institutions out there seems like a risky move. At the same time, though, the military is a well-established institution, and if AI companies make themselves useful to the government, the government is less likely to call for strict regulation of them.

VishalDandamudi said...

I don't really see much of a difference between supply chain optimization, logistics, and direct combat. Either way, the military is using AI to bolster its ability to kill as efficiently as possible. Like Alex said, there is probably no stopping the U.S. military's adoption of AI, because stopping means letting other militaries surpass it.

On the topic of OpenAI: after the very public overhaul of their board (specifically, the removal of the board members most cautious about the use of AI), their ostensible commitment to keeping AI away from weapons seems like virtue signaling more than anything else. All in all, it's a vain attempt to hold on to their position as "moral leader" of AI. Given that they edited their usage policy, they've already lost it.

Side note: if you are interested, perhaps check out Claude from Anthropic, a company founded by former OpenAI researchers who were disillusioned with OpenAI's approach to ethics.