<https://www.washingtonpost.com/technology/2024/07/13/openai-safety-risks-whistleblower-sec/>

OpenAI whistleblowers have filed a complaint with the Securities and Exchange 
Commission alleging the artificial intelligence company illegally prohibited 
its employees from warning regulators about the grave risks its technology may 
pose to humanity, and they are calling for an investigation.

The whistleblowers said OpenAI issued its employees overly restrictive 
employment, severance and nondisclosure agreements that could have led to 
penalties against workers who raised concerns about OpenAI to federal 
regulators, according to a seven-page letter sent to the SEC commissioner 
earlier this month that referred to the formal complaint. The letter was 
obtained exclusively by The Washington Post.

OpenAI made staff sign employee agreements that required them to waive their 
federal rights to whistleblower compensation, the letter said. These agreements 
also required OpenAI staff to get prior consent from the company if they wished 
to disclose information to federal authorities. OpenAI did not create 
exemptions in its employee nondisparagement clauses for disclosing securities 
violations to the SEC.

These overly broad agreements violated long-standing federal laws and 
regulations meant to protect whistleblowers who wish to reveal damning 
information about their company anonymously and without fear of retaliation, 
the letter said.

“These contracts sent a message that ‘we don’t want … employees talking to 
federal regulators,’” said one of the whistleblowers, who spoke on the 
condition of anonymity for fear of retaliation. “I don’t think that AI 
companies can build technology that is safe and in the public interest if they 
shield themselves from scrutiny and dissent.”

In a statement, Hannah Wong, a spokesperson for OpenAI, said, “Our whistleblower 
policy protects employees’ rights to make protected disclosures. Additionally, 
we believe rigorous debate about this technology is essential and have already 
made important changes to our departure process to remove nondisparagement 
terms.”

The whistleblowers’ letter comes amid concerns that OpenAI, which started as a 
nonprofit with an altruistic mission, is putting profit before safety in 
creating its technology. The Post reported Friday that OpenAI rushed out its 
latest AI model that fuels ChatGPT to meet a May release date set by company 
leaders, despite employee concerns that the company “failed” to live up to its 
own security testing protocol that it said would keep its AI safe from 
catastrophic harms, like teaching users to build bioweapons or helping hackers 
develop new kinds of cyberattacks. In a statement, OpenAI spokesperson Lindsey 
Held said the company “didn’t cut corners on our safety process, though we 
recognize the launch was stressful for our teams.”

Tech companies’ strict confidentiality agreements have long vexed workers and 
regulators. During the #MeToo movement and national protests in response to the 
murder of George Floyd, workers warned that such legal agreements limited their 
ability to report sexual misconduct or racial discrimination. Regulators, 
meanwhile, have worried that the terms muzzle tech employees who could alert 
them to misconduct in the opaque tech sector, especially amid allegations that 
companies’ algorithms promote content that undermines elections, public health 
and children’s safety.

The rapid advance of artificial intelligence has sharpened policymakers’ concerns 
about the power of the tech industry, prompting a flood of calls for 
regulation. In the United States, AI companies are largely operating in a legal 
vacuum, and policymakers say they cannot effectively create new AI policies 
without the help of whistleblowers, who can help explain the potential threats 
posed by the fast-moving technology.

“OpenAI’s policies and practices appear to cast a chilling effect on 
whistleblowers’ right to speak up and receive due compensation for their 
protected disclosures,” said Sen. Chuck Grassley (R-Iowa) in a statement to The 
Post. “In order for the federal government to stay one step ahead of artificial 
intelligence, OpenAI’s nondisclosure agreements must change.”

A copy of the letter, addressed to SEC Chair Gary Gensler, was sent to 
Congress. The Post obtained the whistleblower letter from Grassley’s office.

The official complaints referred to in the letter were submitted to the SEC in 
June. Stephen Kohn, a lawyer representing the OpenAI whistleblowers, said the 
SEC has responded to the complaint.

It could not be determined whether the SEC has launched an investigation. The 
agency did not respond to a request for comment.

The SEC must take “swift and aggressive” steps to address these illegal 
agreements, the letter says, as they might be relevant to the wider AI sector 
and could violate the October White House executive order that demands AI 
companies develop the technology safely.

“At the heart of any such enforcement effort is the recognition that insiders … 
must be free to report concerns to federal authorities,” the letter said. 
“Employees are in the best position to detect and warn against the types of 
dangers referenced in the Executive Order and are also in the best position to 
help ensure that AI benefits humanity, instead of having the opposite effect.”

The agreements threatened employees with criminal prosecution under trade 
secret laws if they reported violations of law to federal authorities, Kohn 
said. Employees were instructed to keep company information confidential and 
threatened with “severe sanctions” without any recognition of their right to 
report such information to the government, he said.

“In terms of oversight of AI, we are at the very beginning,” Kohn said. “We 
need employees to step forward, and we need OpenAI to be open.”

The SEC should require OpenAI to produce every employment, severance and 
investor agreement that contains nondisclosure clauses to ensure they don’t 
violate federal laws, the letter said. Federal regulators should require OpenAI 
to notify all past and current employees of the violations the company 
committed as well as notify them that they have the right to confidentially and 
anonymously report any violations of law to the SEC. The SEC should issue fines 
to OpenAI for “each improper agreement” under SEC law and direct OpenAI to cure 
the “chilling effect” of its past practices, according to the whistleblowers’ 
letter.

Multiple tech employees, including Facebook whistleblower Frances Haugen, have 
filed complaints with the SEC, which established a whistleblower program in the 
wake of the 2008 financial crisis.

Fighting back against Silicon Valley’s use of NDAs to “monopolize information” 
has been a protracted battle, said Chris Baker, a San Francisco lawyer who won 
a $27 million settlement for Google employees in December over claims that the 
tech giant used onerous confidentiality agreements to block whistleblowing and 
other protected activity. Now, he said, tech companies are increasingly finding 
clever ways to deter speech.

“Employers have learned that the cost of leaks is sometimes way greater than 
the cost of litigation, so they are willing to take the risk,” Baker said.
