
[Legal and Business Risk] + AI

Partners and counsel from the law firm WilmerHale consider how legal and business risk intersects with AI, urging businesses to take the “early learnings from the explosive popularity of generative AI to develop guardrails to protect against their worst behavior and use cases before this technology pervades all facets of commerce. To that end, businesses should be aware of the following top 10 risks and how to address them.” (1)

Ten Legal and Business Risks of Chatbots and Generative AI

Introduction

It took just two months from its introduction in November 2022 for the artificial intelligence (AI)-powered chatbot ChatGPT to reach 100 million monthly active users—the fastest growth of a consumer application in history.

Chatbots like ChatGPT are built on Large Language Models (LLMs), a type of artificial intelligence known as “generative AI.” Generative AI refers to algorithms that, after training on massive amounts of input data, can create new outputs, be they text, audio, images or video. The same technology fuels applications like Midjourney and DALL-E 2 that produce synthetic digital imagery, including “deepfakes.”

Powered by the language model Generative Pretrained Transformer 3 (GPT-3), ChatGPT is one of today’s largest and most powerful LLMs. It was developed by San Francisco-based startup OpenAI—the brains behind DALL-E 2—with backing from Microsoft and other investors, and was trained on over 45 terabytes of text from multiple sources including Wikipedia, raw webpage data and books to produce human-like responses to natural language inputs.

LLMs like ChatGPT interact with users in a conversational manner, allowing the chatbot to answer follow-up questions, admit mistakes, and challenge premises and queries. Chatbots can write and improve code, summarize text, compose emails and engage in protracted colloquies with humans. The results can be eerie; in extended conversations in February 2023 with journalists, chatbots grew lovelorn and irascible and expressed dark fantasies of hacking computers and spreading misinformation.

The promise of these applications has spurred an “arms race” of investment into chatbots and other forms of generative AI. Microsoft recently announced a new $10 billion investment in OpenAI, and Google announced plans to launch an AI-powered chatbot called Bard later this year.

The technology is advancing at breakneck speed. As Axios put it, “The tech industry isn’t letting fears about unintended consequences slow the rush to deploy a new technology.” That approach is good for innovation, but it poses its own challenges. As generative AI advances, companies will face a number of legal and ethical risks, both from malicious actors leveraging this technology to harm businesses and from businesses’ own efforts to implement chatbots or other forms of AI into their functions.

This is a quickly developing area, and new legal and business dangers—and opportunities—will arise as the technology advances and use cases emerge. Government, business, and society can take the early learnings from the explosive popularity of generative AI to develop guardrails to protect against their worst behavior and use cases before this technology pervades all facets of commerce. To that end, businesses should be aware of the following top 10 risks and how to address them:

1. Contract Risks

2. Cybersecurity Risks

3. Data Privacy Risks

4. Deceptive Trade Practice Risks

5. Discrimination Risks

6. Disinformation Risks

7. Ethical Risks

8. Government Contract Risks

9. Intellectual Property Risks

10. Validation Risks

What Next?

The authors offer the following next steps regarding risks, threats, and opportunities, “as businesses will encounter both the potential for substantial benefits and the evolving risks associated with the use of these technologies. While specific facts and circumstances will determine particular counsel, businesses should consider these top-line suggestions:

  • Be circumspect in the adoption of chatbots and generative AI, especially in pursuing government contracts, or generating work required by government or commercial contracts;
  • Consider adopting policies governing how such technologies will be deployed in business products and utilized by employees;
  • Recognize that chatbots can often err, and instruct employees not to rely on them uncritically;
  • Carefully monitor the submission of business, client, or customer data into chatbots and similar AI tools to ensure such use comports with contractual obligations and data privacy rules;
  • If using generative AI tools, review privacy policies and disclosures, require consent from users before allowing them to enter personal information into prompts, and provide opt-out and deletion options;
  • If using AI tools, be transparent about it with customers, employees, and clients;
  • If using AI software or chatbots provided by a third party, seek contractual indemnification from the third party for harms that may arise from that tool’s use;
  • Bolster cybersecurity and social engineering defenses against AI-enabled threats;
  • Review AI outputs for prejudicial or discriminatory impacts;
  • Develop plans to counter AI-powered disinformation;
  • Ensure that AI use comports with ethical and applicable professional standards; and
  • Copyright original works and patent critical technologies to strengthen protection against unauthorized sourcing by AI models and, if deploying AI tools, work with IP counsel to ensure outputs are fair use.” (1)
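
By way of a concrete illustration of the data-handling suggestions above (monitoring what business, client, or customer data is submitted to chatbots, and keeping personal information out of prompts), here is a minimal Python sketch of a redaction step applied before a prompt is sent to a third-party AI service. The patterns, placeholder tags, and function name are illustrative assumptions rather than anything drawn from the WilmerHale guidance; a production deployment would use a vetted PII-detection or data-loss-prevention tool and rules tailored to the organization’s contractual and regulatory obligations.

```python
import re

# Illustrative patterns only (assumed for this sketch); real deployments would
# rely on a vetted PII/DLP library and organization-specific policies.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace common personal identifiers with placeholder tags before the
    prompt leaves the organization for an external chatbot or AI API."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a follow-up email to jane.doe@example.com, phone 555-867-5309."
    print(redact_prompt(raw))
    # Prints: Draft a follow-up email to [REDACTED_EMAIL], phone [REDACTED_PHONE].
```

Running a filter of this kind at the boundary where prompts leave the organization, and logging what was redacted, also supports the monitoring and transparency suggestions in the list above.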

About the OODA Loop [X]+AI Series

For posts in this series, go to [X]+AI | OODA Loop

Related posts:

https://oodaloop.com/archive/2023/02/13/innovation-in-xai-applications/

https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.