OpenAI is collaborating with the US Department of Defense on several initiatives, such as enhancing cybersecurity measures. This marks a shift from the company’s previous policy of not supplying its artificial intelligence technologies to military organizations.
The maker of ChatGPT is working with the US Defense Department on open-source cybersecurity projects and is participating in DARPA's previously announced AI Cyber Challenge, said Anna Makanju, the company's vice president of global affairs.
Speaking at Bloomberg House during the World Economic Forum in Davos on Tuesday, she also said the company has held preliminary talks with the US government about how its technology could help reduce veteran suicide.
The company recently removed clauses from its terms of service that prohibited the use of its AI for military and combat purposes. Makanju said the change was part of a broader revision of the company's policies to accommodate new uses of ChatGPT and its other tools.
“Because there was previously what amounted to a blanket ban on military use, many people assumed it would rule out many scenarios that align closely with what we want to see in the world,” she said. OpenAI still prohibits the use of its technology to develop weapons, destroy property, or harm people, Makanju said.
Microsoft Corp., OpenAI's largest investor, provides a range of software contracts to the US military and other government agencies.
OpenAI, Anthropic, Google, and Microsoft are working with the US Defense Advanced Research Projects Agency on its AI Cyber Challenge, which aims to find software that can automatically fix vulnerabilities and defend infrastructure from cyberattacks.
The Intercept earlier reported the changes to OpenAI's terms of service.
OpenAI also said it is accelerating its work on election security, dedicating resources to ensure its generative AI tools are not used to spread political disinformation.
“Elections are extremely important,” OpenAI Chief Executive Officer Sam Altman said in the same discussion. “I think it's good for us to have a lot of anxiety about this.”