Partners and counsel from the law firm WilmerHale consider [Legal and Business Risk] + AI: drawing on early learnings from the explosive popularity of generative AI, they identify the top 10 risks businesses should be aware of and how to address them before this technology pervades all facets of commerce. (1)
It took just two months from its introduction in November 2022 for the artificial intelligence (AI)-powered chatbot ChatGPT to reach 100 million monthly active users—the fastest growth of a consumer application in history.
Chatbots like ChatGPT are Large Language Models (LLMs), a type of artificial intelligence known as “generative AI.” Generative AI refers to algorithms that, after training on massive amounts of input data, can create new outputs, be they text, audio, images or video. The same technology fuels applications like Midjourney and DALL-E 2 that produce synthetic digital imagery, including “deepfakes.”
Powered by the language model Generative Pre-trained Transformer 3 (GPT-3), ChatGPT is one of today’s largest and most powerful LLMs. It was developed by San Francisco-based startup OpenAI—the brains behind DALL-E 2—with backing from Microsoft and other investors, and was trained on over 45 terabytes of text from multiple sources, including Wikipedia, raw webpage data and books, to produce human-like responses to natural language inputs.
LLMs like ChatGPT interact with users in a conversational manner, allowing the chatbot to answer follow-up questions, admit mistakes, and challenge premises and queries. Chatbots can write and improve code, summarize text, compose emails and engage in protracted colloquies with humans. The results can be eerie; in extended conversations in February 2023 with journalists, chatbots grew lovelorn and irascible and expressed dark fantasies of hacking computers and spreading misinformation.
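To make the conversational pattern described above concrete, the sketch below shows how an application might carry a multi-turn exchange with a hosted LLM by resending the accumulated message history on each call, which is what allows the model to answer follow-up questions in context. This is a minimal illustration assuming the OpenAI Python client, an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model; the prompts and model name are illustrative and are not drawn from the WilmerHale analysis.

```python
# Minimal sketch of a multi-turn chat with a hosted LLM.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full message history is resent on every call; that accumulated
# context is what lets the chatbot answer follow-up questions.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the main legal risks of generative AI."},
]

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up question that depends on the prior turn for its meaning.
messages.append({"role": "user", "content": "Which of those apply to a software vendor?"})
follow_up = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up.choices[0].message.content)
```

Because the model itself is stateless between calls, the application, not the model, is responsible for deciding how much history to retain and resend, a design choice that also bears on the data privacy and contract risks discussed below.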
The promise of these applications has spurred an “arms race” of investment into chatbots and other forms of generative AI. Microsoft recently announced a new $10 billion investment in OpenAI, and Google announced plans to launch an AI-powered chatbot called Bard later this year.
The technology is advancing at breakneck speed. As Axios put it, “The tech industry isn’t letting fears about unintended consequences slow the rush to deploy a new technology.” That approach is good for innovation, but it poses its own challenges. As generative AI advances, companies will face a number of legal and ethical risks, both from malicious actors leveraging the technology to harm businesses and from businesses’ own efforts to implement chatbots or other forms of AI in their operations.
This is a quickly developing area, and new legal and business dangers—and opportunities—will arise as the technology advances and use cases emerge. Government, business, and society can take the early learnings from the explosive popularity of generative AI to develop guardrails to protect against their worst behavior and use cases before this technology pervades all facets of commerce. To that end, businesses should be aware of the following top 10 risks and how to address them:
1. Contract Risks
2. Cybersecurity Risks
3. Data Privacy Risks
4. Deceptive Trade Practice Risks
5. Discrimination Risks
6. Disinformation Risks
7. Ethical Risks
8. Government Contract Risks
9. Intellectual Property Risks
10. Validation Risks
The authors offer next steps regarding risks, threats, and opportunities, noting that “businesses will encounter both the potential for substantial benefits and the evolving risks associated with the use of these technologies. While specific facts and circumstances will determine particular counsel, businesses should consider these top-line suggestions:
Innovation in XAI Applications: https://oodaloop.com/archive/2023/02/13/innovation-in-xai-applications/
For posts in this series, go to [X]+AI | OODA Loop
Securing AI: Four Areas to Focus on Right Now: https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/