Legal AI Implementation Advice for Businesses (5 Step Guide)

This Article at a Glance: 

  • Before deploying Artificial Intelligence (AI), companies must take a systematic approach to addressing the legal, commercial, and ethical risks involved. This begins, prior to any implementation, with parallel assessments of the technology architecture, rights and access to data, and the intended use cases from both a technological perspective and a legal and regulatory perspective.
  • Companies should (1) review the terms and conditions of their existing technology agreements and any new technology agreements proposed for AI initiatives, (2) evaluate their datasets, (3) create or update internal policies and procedures addressing the usage of Generative AI (“GenAI”), and (4) establish robust procedures to ensure ongoing compliance with applicable laws and regulations.
  • A holistic approach to adopting AI includes:
    • Understanding the rules that apply to each of the company’s data sets;
    • Evaluating each vendor’s technology architecture for fitness and suitability for specific enterprise purposes; 
    • Assessing the vendors and negotiating their contracts; 
    • Assessing the use cases and legal requirements for each such use case; 
    • Establishing internal policies and procedures for the use of AI; and  
    • Maintaining ongoing compliance in a nascent, but rapidly evolving global legal and regulatory environment. 
  • Finally, enterprises should update their existing contracts throughout their supply chain and through to their end customers to support their AI initiatives now and in the future. 

Introduction

Artificial Intelligence (AI), and specifically Generative AI (GenAI), is being rapidly adopted in today’s technology-driven world. Recent statistics suggest that more than 60% of businesses believe that GenAI has the potential to increase business productivity and improve customer relationships.1

Currently, around 42% of companies are looking to implement GenAI into their businesses, while 35% of companies are already reaping GenAI’s benefits.2 It is evident that AI is rapidly transforming the business landscape, presenting massive opportunities for companies that can harness the right tools to unlock value in their data and gain competitive advantage.

There are numerous vendors in the marketplace (with many more new entrants daily), providing different technologies, large language models (LLMs), and solutions that allow businesses to drive successful outcomes by applying AI technologies to such data.  

While hyperbolic doomsday notions that AI will replace people are misplaced, what is not an overstatement is simply this: enterprises who use AI will replace enterprises who do not.

Enterprises that successfully adopt GenAI will have a significant competitive advantage in their respective fields, while those who choose to ignore GenAI do so at their own peril.

GenAI has the potential to revolutionize a wide range of industries, from retail to manufacturing to healthcare. It can be employed to create new products and services, expand existing offerings, automate tasks, improve productivity, and reduce costs.

However, simply adopting AI is not sufficient. To mitigate the legal, commercial, and ethical risks inherent in AI technologies, companies must take technical architecture and design into consideration before implementing AI in their enterprises.

A methodical approach to addressing these legal, commercial, and ethical challenges prior to implementation will allow companies to avoid costly implementation mistakes. 

“While hyperbolic doomsday notions that AI will replace people are misplaced, what is not an overstatement is simply this: enterprises who use AI will replace enterprises who do not.” 

How Does EM3 Law Help Businesses with AI Implementation?

At Maxson Mago & Macaulay, LLP (EM3), we help companies and legal departments stay ahead of the legal and regulatory challenges involved in integrating AI, keep pace with efforts to modernize enterprise operations, and stay ahead of the competition in this rapidly evolving AI-enabled world.

EM3’s team has a wide range of experience in advising both vendors and customers on the legal and regulatory issues that arise in the development and implementation of technology, artificial intelligence, and machine learning solutions for businesses.

We provide full-service legal support to clients throughout their AI implementation journey: from vendor evaluation and contract negotiations to implementation and post-production monitoring and maintenance.

If you have questions regarding managing and mitigating the legal or regulatory risks that accompany the implementation of AI and other emerging enterprise technologies, you can reach out to Ajay Mago or any member of EM3’s technology transactions and artificial intelligence practices.

In this article, we use the terms ‘business,’ ‘enterprise,’ or ‘company’ to refer to entities that intend to incorporate AI into their operations, and ‘vendor’ to refer to technology companies and solution providers that provide AI products or services. 

The following sections outline five steps that companies should take from a legal perspective to keep pace with their enterprises’ rapid adoption of AI.

As a first step, it is always prudent to have a robust non-disclosure agreement in place with all vendors prior to sharing any sensitive information. 

1. Understand the Contracts, Laws, and Regulations that Apply to the Use of Each of the Company’s Data Sets

Before a company shares sensitive enterprise data, it should establish a robust and methodical assessment process. This will allow it to engage the appropriate vendor partners and evaluate the suitability and efficacy of LLMs.

A robust assessment process will also prevent a company from inadvertently sharing trade secrets, confidential or privileged information, or other sensitive enterprise data, and from taking a piecemeal approach that results in inadequate solutions.

Businesses should take a comprehensive approach: understand how AI will help them achieve their goals, examine their data sets and any and all agreements related to those data sets, and assess their vendors to ensure compliance with the myriad applicable laws across the globe. 

Contractual Requirements

Make sure to evaluate your existing contracts governing the company’s data usage to answer questions such as: 

  • Whether the data may be used for AI;  
  • Who owns the data;  
  • Whether separate permission must be obtained before employing the data for AI;  
  • Whether data sharing with third parties is allowed, and what restrictions apply to such sharing;  
  • Who owns derivative works and who may use them; and  
  • What data security, privacy, or other obligations are associated with the data. 

For example, different regulations apply to data collected by the entity itself, data collected by third parties, and data collected via web scraping, each of which carries different requirements with which enterprises must comply. 

Regulatory Requirements

Companies need to adhere to different regulations depending on the type of data they use to develop and train their AI models. Data may be classified into various categories, such as personally identifiable information (PII) or personal health information (PHI).

Such data may be subject to specific rules, such as providing notice, in clear and concise language, to data subjects and obtaining consent from the subjects before employing their information to train AI models or to deliver services using AI.

For instance, the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule requires an entity to obtain the patient’s consent before using such patient’s data for AI models1 and requires an individual’s written authorization for any use or disclosure of PHI that does not fall under the category of treatment, payment, or health care operations, or that is not an otherwise permitted disclosure under the HIPAA Privacy Rule.2

Moreover, companies using AI to provide healthcare services are well advised to take into account the AI Playbooks issued by the Department of Health and Human Services (HHS)3 and the Centers for Medicare and Medicaid Services (CMS)4 that highlight each agency’s positions on certain aspects of AI technology usage. 

Enterprises using personal or sensitive data to train AI models should exercise diligence and understand the types of data they are using as this may trigger significant legal obligations. It is important to judiciously select the datasets, evaluate data licensing rights and responsibilities, and review the applicable legal directives, such as data privacy and intellectual property (IP) regulations.

To ensure compliance with such regulations, it is important to investigate several crucial aspects, including:  

  • Whether copyrighted data has been legally obtained;  
  • Whether adequate security measures are in place to protect the training datasets used in AI from infringement risk; and  
  • How the company’s trade secrets are being secured.  

For example, a company may be required to obtain a license from the copyright holder if it wishes to use a copyrighted dataset to train a GenAI model. 

A copyright infringement claim can disrupt your business and cause financial hardship. The risk is higher if the creator registered the copyright with the U.S. Copyright Office.5 In that case, the creator can recover statutory damages and attorneys’ fees.

Moreover, if an enterprise is aware that its training datasets include unlicensed works, or that a GenAI model may generate unauthorized derivative works not covered by fair use, it could be found liable for willful infringement and incur statutory damages of up to $150,000 per infringed work.

2. Conduct Assessments of Vendors and Negotiate Vendor Contracts

An enterprise should conduct third-party AI vendor due diligence, evaluate the vendor’s technology, and review the contract terms before selecting a vendor. Often, the representations, warranties, and disclaimers in a vendor’s contract provide good guidance on potential gaps in the vendor’s technology and business practices.

Well-negotiated vendor agreements can go a long way towards safeguarding an enterprise’s interests, properly allocating risks, defining internal procedures, and ensuring legal and regulatory compliance.

Additionally, such agreements need to be periodically updated to meet changing requirements during and after AI implementation, for instance, when renewing contracts with an enterprise’s vendors or rolling out AI to customers. At this stage, a company should consider evaluating several relevant factors, including: 

  • How the vendor’s technology works; 
  • How the data is intended to be used; 
  • Whether the data leaves the local system or is stored or processed in the cloud (and, if so, in what jurisdiction); 
  • Which aspects of the architecture are open source; 
  • How the technology will achieve interoperability;  
  • What training data has been used;  
  • How prompts are processed; and  
  • Whether the vendor’s technology architecture is fit to have access to the company’s data.  

When entering into technology agreements, a company should also assess the vendor’s security practices to ensure that they are adequate and aligned with the company’s values, goals, and objectives.

For instance, how the AI model will be deployed may affect its outputs and decisions. Common questions and issues that arise include: 

What ownership does the vendor retain or confer?

It is important for the company to assess ownership rights during and after implementation. This involves determining, among other things:  

  • Will the vendor retain any or all ownership of the AI model after it has been trained on the company’s data;  
  • Will any ownership rights be shared jointly;  
  • Will any licenses be granted on an exclusive or non-exclusive basis;  
  • Will the company retain ownership of any derivative works that the AI model may create;  
  • How does reinforcement learning from human feedback factor into ownership rights;  
  • Does the vendor have the necessary rights to deliver the services or products; and  
  • Does the vendor have legal title to the rights it seeks to confer on the company.   

Where will the data be processed?

A company should also check whether the data will be processed in the cloud or on-premises. Processing the data in the cloud would require the company to ensure that the vendor’s data centers are in jurisdictions with adequate data protection to prevent potential breaches.

In addition, it is pertinent to verify that the state laws governing the use and/or misuse of intellectual property are favorable to the owner. Favorable laws will likely deter potential infringers or give you adequate protection to recover any damages you may suffer because of an infringement.

Moreover, assessment of the data sovereignty laws of the jurisdictions where the data will be processed is essential, as some states and/or countries may have stricter regulations for the transfer of data outside their jurisdiction. 

What will the vendor use the data for?

Another important aspect a company needs to consider at this stage is evaluating the vendor’s purpose for using such data and assessing whether such data will be applied to internal or external use cases.

The company’s data may be used for training a GenAI model, solely for delivering services to a customer, or with the intention of using the data for training, refinement, and delivery of services.

By ensuring that data is used only for agreed-upon and authorized purposes, a company can avoid future disputes involving infringement, ownership, and breach of contract claims.  

What kind of data will be used?

While examining the datasets that will be used or shared with vendors, an enterprise needs to document the data lineage, consider the level of sensitivity accorded to such data, and evaluate the controls and guardrails governing such data.

If the data is considered personal or confidential information, the company needs to execute a separate data processing addendum addressing processing limitations, security standards, deletion, and vendor confidentiality.

Such agreements are necessitated by the California Consumer Privacy Act7 and the Virginia Consumer Data Protection Act,8 among other laws. 

3. Assess the Use Cases and Legal Requirements for Each Use Case

Before allotting datasets to particular use cases, companies should conduct data system discovery. Data discovery is the process of detecting outliers, patterns, and trends in your data that you would not discover otherwise. This is commonly referred to as Exploratory Data Analysis (EDA). 

When done properly, EDA will gather analytics from many sources, including databases, SaaS tools, and software applications, each holding its own personal data. Data discovery tools then find all the datapoints within each system, such as names, email addresses, and financial information, and classify each datapoint into a data type, such as financial data, personal data, or demographic data.  
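
To make the classification step concrete, below is a minimal Python sketch of rule-based datapoint classification. The regular expressions, category labels, and sample values are illustrative assumptions only; commercial data discovery tools are far more sophisticated.

    import re

    # A minimal, rule-based sketch of the classification step of data discovery.
    # The patterns and category labels are illustrative assumptions, not a
    # production-grade scanner.
    PATTERNS = {
        "personal data (email)": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
        "financial data (card number)": re.compile(r"^\d{4}-?\d{4}-?\d{4}-?\d{4}$"),
        "personal data (US SSN)": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    }

    def classify_datapoint(value: str) -> str:
        """Assign a datapoint to a data type using simple pattern matching."""
        for data_type, pattern in PATTERNS.items():
            if pattern.match(value.strip()):
                return data_type
        return "unclassified"

    # Example: values pulled from a database table or SaaS export
    for value in ["jane.doe@example.com", "4111-1111-1111-1111", "123-45-6789", "blue"]:
        print(f"{value!r} -> {classify_datapoint(value)}")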

After conducting the EDA, you should create unified data warehouses while simultaneously considering how the data will be cleaned to enhance AI’s performance. You should also assess how frequently new data systems and new data processing activities are added.

The answer to this inquiry will also dictate how often you should revise contracts governing AI’s usage, as well as inform new marketing strategies or new product features or services to roll out to your customers.  

Additionally, it is important to evaluate the ethical and legal implications of use cases. For example, a company using AI to provide healthcare services will be required to consider:  

  • Whether the training data comes from a fair and unbiased source;  
  • Whether it is representative of the entire target population; and,  
  • Whether the AI models are in harmony with HHS promulgations.9  

Below are some of the key legal requirements that often come up.

Notice Requirement

A company is required to provide notice to the data subject disclosing, among other things, the purposes for which AI will be used, the data that will be used for such AI, and the subject’s rights. Some state laws, such as those of California10 and Virginia,11 may also mandate that you disclose the sale or sharing of personal information with third parties.

Such notices may be necessary when collecting or using sensitive data, including health or biometric information. 

For many use cases, an enterprise should also seek informed consent from data subjects and/or provide a right to opt-in or opt-out before using their data for AI, as per the applicable state or federal regulations.

For instance, companies using AI to provide healthcare services are required to adhere to the consent requirements provided by governing laws such as HIPAA. To illustrate, an entity using mobile data to train and build an AI model relating to the symptoms of Parkinson’s disease and providing tailored recommendations to users first needs to obtain prior consent from patients.12

Certain states, like Virginia,13 Colorado,14 California,15 and Connecticut,16 explicitly provide their residents with the right to opt out and require prior consumer consent for secondary uses of personal information, for example, to build LLMs.

If you fail to comply with these provisions, you may incur hefty fines or be sued by private parties. For instance, the California Consumer Privacy Act (CCPA) permits a fine of up to $7,500 for every intentional violation and up to $2,500 for every unintentional violation.17 

Data Minimization and Purpose Limitation

Before sharing sensitive data, companies need to comply with the principles of data minimization and purpose limitation, for example by encrypting or anonymizing such data. They should also use personal data only for the intended purpose or the purpose for which consent was obtained from the data subject. Lastly, companies should minimize sharing sensitive data with third-party vendors. 
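
As one illustration of data minimization in practice, the following Python sketch pseudonymizes direct identifiers with a keyed hash before a record is shared. The field names and inline key are assumptions for illustration; a real deployment would load keys from a secrets manager and follow a documented de-identification standard.

    import hashlib
    import hmac

    # A minimal sketch of pseudonymizing direct identifiers before a record is
    # shared. The field list and inline key are illustrative assumptions; real
    # deployments would load keys from a secrets manager and follow a
    # documented de-identification standard.
    SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"
    DIRECT_IDENTIFIERS = {"name", "email", "phone"}

    def pseudonymize(record: dict) -> dict:
        """Replace direct identifiers with keyed hashes; pass other fields through."""
        out = {}
        for field, value in record.items():
            if field in DIRECT_IDENTIFIERS:
                digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
                out[field] = digest.hexdigest()[:16]  # stable token, not reversible without the key
            else:
                out[field] = value
        return out

    print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "zip": "10001"}))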

In sum, businesses of all kinds, including, but not limited to, those in the healthcare sector, must ensure that AI outputs are comprehensible to end users so that the business can take appropriate action, as recommended by the HHS AI Playbook and other relevant guidelines and playbooks.18

4. Establish Internal Policies and Procedures for the Use of AI

The next step for companies is to develop and implement internal policies and procedures for the proper use of AI. By doing so, companies not only protect their datasets, monitor AI use, and mitigate risks, but also protect their reputation and build trust with clients.  

Data Use Policies

These policies assist in mapping data assets, establishing how AI will be used within the company, what internal approvals are required for specific types of uses, what data can be used for AI and, more importantly, what data cannot be used for AI. 

Data Access Policies

These policies define and maintain a record of the persons who can view, access, modify, or retrieve certain data, in order to prevent any unauthorized access to or use of that data. A company not only has to protect the datasets but also implement technical measures to protect the AI system from any unauthorized or illegal access.

Further, to comply with applicable laws and regulations, enterprises need to define the limits of the data that is permitted to be used for AI. Adopting adequate mechanisms and tools to secure the data used to train and operate their AI models is a vital element of AI implementation. This may be achieved by encrypting data, restricting access to data on a need-to-know basis, and using strong passwords. 
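
As a simple illustration of need-to-know restrictions, the Python sketch below checks a role against a per-dataset allow list before the dataset can be used for AI. The roles, dataset labels, and policy table are hypothetical; a production system would integrate with the company’s identity and access management tooling.

    # A minimal sketch of need-to-know controls over datasets cleared for AI
    # use. The roles, dataset labels, and policy table are hypothetical.
    AI_USE_POLICY = {
        # dataset label -> roles permitted to use it for AI
        "public_marketing_copy": {"analyst", "ml_engineer"},
        "customer_support_logs": {"ml_engineer"},
        "patient_records": set(),  # never used for AI without separate approval
    }

    def may_use_for_ai(role: str, dataset_label: str) -> bool:
        """Return True only if the role is on the need-to-know list for the dataset."""
        return role in AI_USE_POLICY.get(dataset_label, set())

    assert may_use_for_ai("ml_engineer", "customer_support_logs")
    assert not may_use_for_ai("analyst", "patient_records")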

AI Usage Policies

These policies outline how AI models will be developed, classified, monitored, and maintained, how such models will be used in decision-making, and how such models will be explained to users. AI Usage Policies enable constant assessment of AI models, allow entities to retain versions of their AI so that an algorithm can be disgorged or deprecated by removing unauthorized data, and support AI event logs as a record-keeping system that helps defend against claims of intentional or negligent conduct. 
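
By way of example, here is a minimal Python sketch of the kind of append-only AI event log such a policy might require. The field names and file-based storage are assumptions; production systems would use centralized, tamper-evident logging.

    import json
    import time

    # A minimal sketch of an append-only AI event log of the kind an AI Usage
    # Policy might require. The field names and file-based storage are
    # assumptions; production systems would use centralized, tamper-evident logging.
    def log_ai_event(log_path: str, model_version: str, event_type: str, detail: dict) -> None:
        """Append one structured AI event (e.g., inference, retrain, data removal)."""
        entry = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "model_version": model_version,
            "event_type": event_type,
            "detail": detail,
        }
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_ai_event("ai_events.jsonl", "support-bot-v3", "data_removal",
                 {"reason": "dataset found to include unlicensed records"})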

Data Monitoring and Governance Policies

Businesses should also establish a data monitoring and governance model to regulate the use of data in AI, monitor AI outputs, and ensure that required approvals are obtained. Such policies help define the roles and responsibilities within the organization’s structure, eliminate inaccuracies and biases from datasets, define responsible AI metrics, and execute a system of incident reporting.  

Data Risk Management Policies

Such policies help companies evaluate how vulnerable their data and AI models are, the types of risks associated with such data and models (particularly the heightened risks that may arise when using sensitive personal data such as biometric or location information), and the potential impacts of those risks.

A company will be able to assess the legal risks an AI model may pose to its data by conducting data privacy impact assessments (DPIAs). Additionally, enterprises need to be mindful of the diverse types of risks associated with AI systems and how those risks may arise, including harm to an individual, society, or organization through damage to civil rights and liberties or business operations.

The NIST AI Risk Management Framework provides useful guidance to help companies identify AI-related risk types in the early stages of the AI lifecycle.19 For example, an initial AI developer of pre-trained models will identify risks at a different stage than an AI user who deploys such a pre-trained model in a specific use case, and it is important for both to be aware of what to look for.20 

See, for example, New York City’s Local Law 144, which prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.21
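
To illustrate the arithmetic at the heart of such a bias audit, the Python sketch below computes selection rates and impact ratios across candidate groups, in the spirit of the metrics Local Law 144 audits report. The group names and counts are fabricated for illustration, and a real audit involves many additional requirements.

    # A minimal sketch of the selection-rate and impact-ratio arithmetic at the
    # heart of a bias audit. The group names and counts are fabricated for
    # illustration; a real Local Law 144 audit has many additional requirements.
    selected = {"group_a": 80, "group_b": 45}    # candidates advanced by the tool
    assessed = {"group_a": 200, "group_b": 150}  # candidates assessed by the tool

    rates = {g: selected[g] / assessed[g] for g in assessed}
    highest_rate = max(rates.values())
    impact_ratios = {g: rate / highest_rate for g, rate in rates.items()}

    for group in rates:
        # Ratios well below 1.0 flag groups selected less often than the
        # most-selected group and warrant closer review.
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {impact_ratios[group]:.2f}")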

A company can mitigate these risks by classifying and prioritizing the level of risks, and establishing risk tolerance in accordance with reasoned internal policies and the applicable legal requirements.

Educating and Training Employees 

Of course, policies and procedures are of limited value without a properly trained workforce. It is vital for employees to be aware of how AI is used within the company; many incidents have been reported in which an employee inadvertently disclosed trade secrets or confidential or privileged information to publicly available models simply because the employee did not fully understand the implications of inputting the data into such models.

Employees should be educated on the internal policies and procedures governing AI and on their ethical and legal obligations, to avoid any accidental or intentional disclosures. A proper training program also involves multiple modalities, such as video training, handbooks, workshops, data literacy programs, and frequent reminders, all of which improve employees’ skills and knowledge relating to AI technologies.

A company may also need to build and train an interdisciplinary team of experts to manage its AI operations, ranging from AI specialists and IT professionals to internal and external counsel. 

5. Maintain an Ongoing Compliance Program

Novel legal issues will continue to emerge as AI technology rapidly advances, and the complexities surrounding the use and ownership of data will continue to evolve. Businesses need to stay abreast of these issues and maintain ongoing compliance with the rapidly evolving laws regulating AI.

As the AI sector becomes increasingly regulated, companies need to design thoughtful and defensible strategies to avoid innocent and unintended compliance failures. Companies are also advised to be cognizant of the increasing future liabilities that may arise from AI bias, such as biased decisions in hiring, housing, or lending produced by hastily and improperly implemented AI systems.

Taking measures to prevent such bias, which may include conducting AI bias assessments, using diverse data to train AI models, and putting proper policies and tools in place for regularly monitoring such models, will go a long way in support of successful and legally compliant implementations of AI not only for today but for the future as well. 

Conclusion

As AI becomes an integral part of businesses, it is undeniable that AI will help companies achieve tremendous success and growth. The reality is that businesses will use not one, but numerous AI technologies in their operations. This requires a cohesive strategic approach to engaging vendors and developing technologies while simultaneously being mindful of the legal and regulatory obligations when using such technologies.

However, before deploying AI into their operations, companies must be careful to evaluate and mitigate the legal, ethical, and regulatory risks that are associated with it. Enterprises should leverage their legal departments and outside counsel to create and/or evaluate their contracts with vendors, assess vendors, assess use cases, and implement internal policies governing the use of their data for AI throughout its lifecycle and through to their end customers.

As this article highlights, companies should be proactive and prepare, document, and regularly update their comprehensive AI strategy. They should begin with data and technology assessments and progress through agreement and vendor evaluations. This analysis should include contract updates, creation of data security and privacy frameworks, internal policy implementation, and ongoing compliance with the ever-evolving laws and regulations.

By carefully implementing these steps and making decisions that will benefit them in the long term, enterprises can maximize the benefits of AI while minimizing its risks. 

Furthermore, the rapid evolution of technology and AI, especially GenAI, is resulting in a swift expansion of the laws governing AI and privacy. Implementing GenAI in a business can be a challenging process.

Companies need to stay updated with the evolving regulations, maintain an ongoing compliance management system, and have a clear roadmap that navigates them through the issues that stem from their desire to implement GenAI in their businesses. 

Ajay Mago, Managing Partner at Maxson Mago & Macaulay, LLP (EM3 Law LLP). 

To receive articles like this directly in your inbox, please subscribe to our newsletter by clicking here.

About the Author

For more information, please contact Ajay Mago, Managing Partner, at amago@em3law.com.

Mr. Mago’s full bio is available here.

Disclaimer: This publication is for information purposes only and should not be construed as legal advice or a substitute for legal counsel. This information is not intended to create an attorney-client relationship. Do not send us any unsolicited confidential information unless and until a formal attorney-client relationship has been established. EM3 Law is under no duty of confidentiality to persons sending unsolicited messages, e-mails, mail, facsimiles and/or any other information by any other means to our firm or attorneys prior to the formal establishment of such relationship. The views and opinions expressed herein are those of the author(s) and do not necessarily reflect the views of the firm. 
