Balancing Confidentiality: The Right to Warn in AI

The ethical development and deployment of artificial intelligence (AI) are increasingly critical issues as AI technologies continue to advance and integrate into our daily lives. The “Right to Warn about Advanced Artificial Intelligence” open letter, authored by current and former AI employees, both named and anonymous, has underscored the need to address the potential risks associated with AI. This letter raises crucial questions about the balance between company confidentiality and the imperative to protect the public interest, with particular focus on non-disparagement clauses for risk-related criticisms and non-retaliation for breaches of confidentiality agreements.

The Importance of the Right to Warn

AI has the potential to transform various sectors, including healthcare, finance, transportation, and entertainment. With these advancements, however, come significant risks, such as ethical dilemmas, security vulnerabilities, and potential misuse, which must be addressed to ensure the safe and equitable deployment of these technologies. Algorithmic bias is a critical concern, as AI systems can perpetuate and amplify existing prejudices, leading to discriminatory outcomes in areas like hiring, lending, and law enforcement. This can result in unequal treatment based on race, gender, or other protected characteristics. Additionally, the security vulnerabilities of AI systems pose severe threats, with the potential for hacking and breaches that could compromise autonomous vehicles or critical infrastructure, leading to catastrophic outcomes.

Moreover, the misuse and abuse of AI technologies for malicious purposes, such as creating deepfakes, enhancing surveillance for oppressive regimes, or developing autonomous weapons, highlight the ethical and safety challenges inherent in AI development. As AI systems become more advanced, there is a growing risk that they may act in ways misaligned with human values, leading to unintended and potentially harmful consequences. Furthermore, the reliance of AI systems on vast amounts of data raises significant privacy concerns, necessitating careful regulation to protect individuals’ privacy rights and prevent intrusive surveillance. Addressing these risks is essential to ensure that AI benefits society without compromising safety, equity, or privacy.

The “Right to Warn” letter highlights such concerns, advocating for greater transparency and accountability in the AI industry. A framework that allows employees to freely raise alarms about AI risks without fear of retaliation or legal repercussions is essential for the safe and ethical development of AI technologies.

AI companies, particularly those working on artificial general intelligence (AGI), operate in a largely unregulated environment. This oversight gap means that the responsibility for identifying and addressing risks often falls on the employees themselves. The proposed “Right to Warn” is crucial to fill this void until comprehensive regulations are in place.

California is taking steps to protect AI whistleblowers with proposed legislation (SB 1047)[1] aimed at protecting workers at companies developing advanced AI systems. However, current whistleblower protections are fragmented and inadequate, covering only specific circumstances under federal or state laws. This patchwork leaves many whistleblowers vulnerable to retaliation, which is why broader federal protections are necessary. The letter, endorsed by AI luminaries such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, argues for a cohesive legal framework that allows employees to voice concerns without fear.

The signatories urge AI companies to:

  • Revoke Non-Disparagement Agreements: These agreements prevent employees from speaking out about risks. Companies like OpenAI have begun addressing this, but broader industry action is needed.
  • Create Anonymous Reporting Mechanisms: Establish processes for employees to report concerns to company boards, regulators, and independent expert organizations safely and anonymously.
  • Foster a Culture of Open Criticism: Encourage employees to discuss safety concerns openly, ensuring they do not compromise intellectual property or trade secrets.
  • Protect Employees Who Breach Confidentiality in Good Faith: Ensure that employees who share confidential information to raise risk-related concerns face no retaliation, provided they use established reporting channels.

Standstill on Non-Disparagement Clauses

Non-disparagement clauses, which legally bar employees from making negative public statements about their employer, often serve to protect a company’s reputation but can also stifle legitimate criticism and hinder the disclosure of significant risks. In the context of AI, these clauses could prevent employees from exposing critical issues such as algorithmic biases that lead to discriminatory practices in hiring, lending, and law enforcement. For example, an employee aware of an AI recruiting tool that disadvantages certain demographic groups might feel unable to speak out due to these restrictive clauses.

A standstill on non-disparagement clauses for risk-related criticisms is essential. Encouraging employees to voice concerns without fear of legal retaliation fosters the culture of openness and accountability on which effective whistleblowing depends. Whistleblowers have previously exposed significant issues in AI, such as biased facial recognition systems and surveillance technologies that infringe on privacy rights. Allowing employees to freely discuss AI risks ensures that potentially harmful technologies are scrutinized and necessary safety measures are implemented before widespread deployment, contributing to the safer and more ethical development of AI technologies.

Non-Retaliation for Breaches of Confidentiality Agreements

Confidentiality agreements in the tech industry, while essential for protecting proprietary information and maintaining competitive advantage, can also hinder whistleblowing on risks with public safety implications. To address this, non-retaliation policies should be established to protect employees who breach confidentiality agreements to report significant risk-related concerns. Such policies would ensure that employees do not face punitive actions like termination, demotion, or legal threats when acting in the public interest, empowering them to prioritize ethical considerations and public safety over corporate secrecy.

Balancing confidentiality with ethical responsibility, non-retaliation policies create a safe environment for employees to disclose potential risks without fear of retribution. This is crucial for early detection and resolution of issues that could escalate into larger problems. For example, an AI researcher might need to report inaccuracies in a machine learning model used in medical diagnostics that could pose health risks. Clear legal and ethical frameworks protecting whistleblowers would promote a culture of accountability and transparency, ensuring that employees can responsibly report concerns about AI technologies, such as deepfake misuse or invasive surveillance, without facing retaliation.

Employees should have the right to disclose confidential information about a company if all other avenues for addressing risk-related concerns have been exhausted. This step should be taken as a last resort, only after internal mechanisms have proven ineffective. Companies must establish robust, transparent, and efficient internal reporting mechanisms to ensure that employees’ concerns are taken seriously and acted upon promptly. For example, an AI ethics board within a company could evaluate concerns about the ethical implications of new AI deployments.

If internal channels fail to address concerns adequately, employees should be protected under whistleblower laws when disclosing information externally, provided the disclosure is made responsibly and in the public interest. Clear guidelines should be established to ensure that such disclosures are pertinent to the risk at hand and do not unnecessarily harm the company’s legitimate interests. Legal and ethical support should be provided to employees who follow these guidelines, ensuring, for example, that an AI specialist who reveals critical information about an algorithm’s potential misuse after internal remedies have failed is both protected and held to responsible disclosure standards.

Balancing Transparency and Corporate Confidentiality

Balancing the need for transparency with the protection of proprietary information is crucial when advocating for changes in whistleblower policies. Companies should develop and communicate clear policies that outline procedures for reporting concerns and the protections available to employees. These policies should emphasize the company’s commitment to ethical practices and public safety. For instance, an AI firm might publish a detailed whistleblower policy explaining the steps employees should take to report concerns about AI biases and the protections they will receive.

Regular training sessions are essential to ensure employees understand their rights and responsibilities regarding risk reporting. This training should cover the use of internal reporting mechanisms and protections against retaliation. AI companies could include training on recognizing and reporting issues like algorithmic discrimination or security vulnerabilities. Additionally, companies should collaborate with regulatory bodies to develop industry-wide standards and guidelines for risk reporting and whistleblower protection. This collaboration can help create a consistent approach across the industry, enhancing overall transparency and accountability. For example, AI firms could work with government agencies to establish protocols for reporting risks related to autonomous systems or data privacy concerns.

Conclusion

Employees have a front-row seat to the development of AI technologies and are often the first to notice potential risks. By empowering them to speak out, we can ensure that these technologies are developed responsibly. This is particularly important in a competitive industry where the race to innovate can sometimes overshadow safety considerations.

The “Right to Warn about Advanced Artificial Intelligence” open letter marks a significant step toward fostering a more transparent and accountable AI industry. By addressing the issues of non-disparagement, non-retaliation, and the responsible disclosure of confidential information, we can create an environment where employees feel empowered to report risks. This, in turn, will contribute to the ethical and safe development of AI technologies, ultimately benefiting society as a whole. It is time for AI companies to recognize the importance of these principles and take concrete steps to implement them, ensuring that the development of AI technologies prioritizes public safety and ethical considerations.

[1] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047



Looking for guidance on your AI implementation journey?

Connect with Ajay Mago or any member of EM3’s Artificial Intelligence practice for professional support. 


Ajay Mago, Managing Partner at Maxson Mago & Macaulay, LLP (EM3 Law LLP).


Disclaimer: This publication is for information purposes only and should not be construed as legal advice or a substitute for legal counsel. This information is not intended to create an attorney-client relationship. Do not send us any unsolicited confidential information unless and until a formal attorney-client relationship has been established. EM3 Law is under no duty of confidentiality to persons sending unsolicited messages, e-mails, mail, facsimiles and/or any other information by any other means to our firm or attorneys prior to the formal establishment of such relationship. The views and opinions expressed herein are those of the author(s) and do not necessarily reflect the views of the firm.  
