
While you were asking ChatGPT to create a three-course menu for the upcoming book club you’re hosting or to explain the Rule Against Perpetuities, several federal government agencies announced initiatives related to the use of artificial intelligence (AI) and automated systems, focusing on the potential threats stemming from the misuse of this powerful technology. As the development and use of AI become integrated into our daily lives and employee work routines, and as companies begin to leverage such technology in the solutions they provide to the government, it is important to understand the developing federal government compliance infrastructure and the potential risks that misuse of AI and automated systems can create.

Some of these issues are specific to doing business with the government; others are broader issues that all companies face when using AI, and generative AI (GAI) in particular. The rapid increase in the use of AI creates a number of legal issues, some of which are addressed below. Many employees are experimenting with AI in connection with their work. While such uses may be beneficial, it is important to set guardrails to avoid unwanted legal issues. As a result, many companies are developing corporate policies on employee use of AI. If you have not done so yet, now is a good time to get started.

Federal Government Initiatives

Recently announced federal government initiatives seek to leverage collective authorities to monitor the development and use of AI and automated systems. On April 21, 2023, the Secretary of Homeland Security, Alejandro N. Mayorkas, announced a new initiative to combat evolving threats, including those arising from the generative AI revolution. The Secretary announced the first-ever AI Task Force, which will drive specific applications of AI to advance critical homeland security missions, including:

  • Integrating AI to enhance the integrity of supply chains and the broader trade environment, such as deploying AI to improve screening of cargo and identifying the importation of goods produced with forced labor
  • Collaborating with government, industry, and academia partners to assess the impact of AI on DHS’s ability to secure critical infrastructure

Additionally, on April 25, 2023, officials from the Federal Trade Commission (FTC), the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement on “Enforcement Efforts Against Discrimination and Bias in Automated Systems.” The joint statement outlines each agency’s commitment to enforce its respective legal and regulatory authority to ensure responsible innovation in the AI space. Further, the joint statement reiterates that these agencies “take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.” Joint Statement at 2. The joint statement also describes recent efforts by these agencies, including:

  • The DOJ’s recent filing of a “statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services”
  • The CFPB’s publication of “a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used”
  • The FTC’s issuance of a report “evaluating the use and impact of AI in combatting online harms identified by Congress,” which “outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance”
  • The EEOC’s issuance of “a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees”

These recent efforts follow publications by the federal government last year acknowledging that artificial intelligence is here to stay and that, as such, we should be mindful of the risks and pitfalls associated with its continued use. In October 2022, the White House published a document titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” The AI Bill of Rights includes five principles:

  1. Safe and Effective Systems: The American people should have appropriate protection from unsafe or ineffective systems, and systems should be developed in consultation with diverse communities, stakeholders, and domain experts.
  2. Algorithmic Discrimination Protections: Individuals should not face discrimination by algorithms, and systems should be developed and utilized in an equitable manner.
  3. Data Privacy: Systems should be developed with built-in protections from abusive data practices for individuals and the ability for individuals to have agency over how their data is used.
  4. Notice and Explanation: Individuals should be notified when an automated system is being used and given information about how and why it contributes to outcomes that affect them.
  5. Human Alternatives, Consideration, and Fallback: Individuals should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

The AI Bill of Rights is a voluntary, non-binding framework, but federal agencies likely will consider it as they craft guidance and requirements regarding the development and use of artificial intelligence.

Additionally, on January 23, 2023, the National Institute of Standards and Technology (NIST) released the first version of its “Artificial Intelligence Risk Management Framework” (AI RMF 1.0). As with the AI Bill of Rights, compliance with this framework is voluntary, but the purpose of the AI RMF 1.0 is to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” AI RMF 1.0 at 2. The AI RMF 1.0 equips federal contractors with guidance on approaches for increasing the trustworthiness of AI systems and for fostering the development, deployment, and use of AI systems over time.

The potential threats stemming from the use of AI systems could affect cybersecurity, fair competition, consumer protection, equal opportunity, and civil rights. Therefore, it is crucial for companies, including federal contractors, to understand and keep in mind these federal government frameworks and initiatives when developing, deploying, and using AI systems.

Broader Issues with Generative AI

Some other issues companies face with employee use of AI relate to IP and open source. The “generative” aspect of GAI implies that something new is being created. New creations implicate IP issues, including the protection of what is created, potential infringement of preexisting IP, and ownership and licensing issues of the output. Both the U.S. Copyright Office and the U.S. Patent and Trademark Office have developed initiatives to focus on IP issues with AI.

Copyrights

On March 16, 2023, the Copyright Office issued Guidance on the examination and registration of works that contain material generated by AI technology. Key points of the Guidance include the following:

  • Copyright can protect only material that is the product of human creativity – the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans.
  • In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or are, instead, of an author’s “own original mental conception, to which [the author] gave visible form.” The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.
    • If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it (e.g., when AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response).
    • In cases where a work containing AI-generated material also contains sufficient human authorship to support a copyright claim, copyright will only protect the human-authored aspects of the work, which are “independent of” and do not affect the copyright status of the AI-generated material itself (e.g., a human may select or arrange AI-generated material in a sufficiently creative way that the resulting work as a whole constitutes an original work of authorship or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection).
  • This policy does not mean that technological tools cannot be part of the creative process – what matters is the extent to which the human had creative control over the work’s expression and “actually formed” the traditional elements of authorship.
  • Applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work. For pending applications and registrations that have already issued, the applicant must update them to meet the duty of disclosure.
  • AI-generated content that is more than de minimis should be explicitly excluded from the application.

There is no black-and-white test for the level of input a human must provide to be deemed an author of the output. From the Guidance, it is reasonable to conclude that merely influencing the output is not enough; rather, the human must “dictate or control” the output.

Key takeaway: If your employees are using AI to generate content that you would normally want to ensure is copyright protectable, you need to give them guidance and develop policies for such use cases.

Patents

On February 14, 2023, the U.S. Patent and Trademark Office published a Federal Register Notice (FRN) requesting comments regarding AI and inventorship. It also announced an AI Inventorship Listening Session – East Coast, which was held on April 25, 2023. The listening session sought stakeholder input on the current state of AI technologies and the inventorship issues that may arise as such technologies advance. A West Coast session will be held on May 8, 2023, at Stanford University.

There are significant questions about the ability to patent inventions that were conceived with the assistance of AI. It is important to consider this in your patent process and AI use policies.

Open Source

AI-based code generators (e.g., GitHub Copilot) are a powerful application of GAI. These tools assist developers by using AI models to auto-complete or suggest code based on developer inputs such as comments, function signatures, or tests; a simplified illustration follows the list below. These tools raise at least the following potential legal issues:

  • Does training AI models using open source code constitute infringement or, even if the use is licensed, does doing so require compliance with conditions or restrictions of the open source licenses?
  • Does using the output of an AI code generator subject the developer and/or user to infringement claims? Can the developer’s terms of service (TOS) effectively shift liability to the user as some try to do?
  • How must the compliance obligations of open source licenses be met in this context and by whom (developer or user)?
  • Does use of AI-generated code by developers creating a new software application require the application to be licensed under an open source license and its source code to be made available?
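To make this concrete, below is a minimal, hypothetical sketch of the workflow these tools automate. It does not reflect any particular tool’s actual interface, and the function and scenario are invented for illustration: the developer writes only the comment and the function signature, and the assistant proposes the body. The questions above attach to whatever suggestion the developer accepts.

    # Hypothetical illustration of AI-assisted code completion.
    # The developer supplies the signature and docstring below; an AI
    # assistant might suggest the body that follows.

    def levenshtein(a: str, b: str) -> int:
        """Return the minimum number of single-character edits
        (insertions, deletions, substitutions) to turn a into b."""
        # Suggested completion: a standard dynamic-programming solution.
        # If this body closely mirrors, say, a GPL-licensed implementation
        # seen in the model's training data, the license-compliance and
        # infringement questions listed above apply to the accepted code.
        previous = list(range(len(b) + 1))
        for i, char_a in enumerate(a, start=1):
            current = [i]
            for j, char_b in enumerate(b, start=1):
                current.append(min(
                    previous[j] + 1,                      # deletion
                    current[j - 1] + 1,                   # insertion
                    previous[j - 1] + (char_a != char_b)  # substitution
                ))
            previous = current
        return previous[-1]

    print(levenshtein("kitten", "sitting"))  # prints 3

The practical difficulty is that the developer generally cannot tell whether a suggestion like this was newly synthesized or closely reproduces licensed code from the training set, which is why the questions above are hard to answer at the point of use.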

If your employee developers are using GAI code generators, you need to develop or update open source policies to prevent unwanted issues. If you do not already have an open source policy, see here for why you need one and what it should include. If you have not already updated your open source policies to address AI, see here for some of the issues to consider and solutions to some of the common legal issues that arise from use of AI code generators.

IP Infringement

Various infringement lawsuits have been filed against AI tool providers alleging that the training of their AI models and/or the generated output are based on third-party IP-protected materials. One issue that companies need to consider is that the terms of service for many of these tools attempt to shift liability to users. Some require users to indemnify the tool providers for infringement. This is one of the reasons that some companies judiciously decide which GAI tools to approve for employee use.

Policies on Employee Use of AI

It is highly advisable to develop a policy on employee use of AI. Each company is developing its own criteria, but some of the key factors to consider are:

  • The TOS for these tools vary widely, and some companies are whitelisting the tools that the legal team approves after reviewing the TOS and prohibiting the use of all others. Often this is done on a per-version basis, as each version of the same tool (e.g., ChatGPT 3.5 vs. 4.0) may have different features and often a different TOS.
  • For some tools, there are different methods of access (e.g., browser-based vs. API-based) and paid vs. free versions, all of which can involve different features and different legal terms. For each version, the method of access and paid vs. free use therefore need to be considered as well when whitelisting a tool.
  • Which tools are permitted for which use cases (e.g., content for internal use only vs. external use)?
  • The types of AI-generated content that can be used. For example, there can be different considerations when the content is text vs. images. AI code generators create a distinct set of issues as mentioned above and some of the policies separately address use of code generators.
  • Some criteria are based on IP protection availability. For example, in many cases the GAI output may not be copyright protectable. So some companies are prohibiting use of GAI to produce works for which the company would traditionally want to obtain copyright protection.
  • Companies should address the need to comply with the FTC guidance regarding the use of AI content. Some companies are advising employees not to advertise their use of AI. There is a tricky balance between not advertising the use of AI and being transparent and truthful where necessary under the FTC guidance.
  • For companies that use third-party contractors, the policies need to address the contractors’ use of GAI. Companies need to make sure third parties do not use GAI to generate content for the company without the company’s prior knowledge and approval. Some companies’ AI policies prohibit vendors and contractors from using GAI to generate content for the company. Some companies require contractors to disclose whether they have used GAI in the past. This is important because if the company has filed any copyright registrations based on contractor work product that was generated with AI, the company may need to go back and disclose that to the Copyright Office or risk losing its copyright protection.

These are just some examples of the criteria that may be relevant. Often there are other company specific issues as well.

Conclusion

Many companies are scrambling to understand the scope of the legal issues arising from the use of AI and to develop policies consistent with guidance from the federal government. Often, a helpful first step is an in-house presentation on these issues by attorneys who have a deep understanding of AI legal issues and the associated business risks. This better enables companies and their in-house counsel to discuss what their policies on the use of AI should cover. If a presentation would be helpful to you, or if you have any questions, please contact us.