Everyone’s experienced the regret of telling a secret they should’ve kept. Once that information is shared, it can’t be taken back. It’s just part of the human experience.

Now it’s part of the AI experience, too. Whenever someone shares something with a generative AI tool — whether it’s a transcript they’re trying to turn into a paper or financial data they’re attempting to analyze — it cannot be taken back.

Generative AI solutions such as ChatGPT and Google’s Bard have been dominating headlines. These technologies show massive promise for a myriad of use cases and have already begun to change the way we work. But along with these big new opportunities come big risks.

The potential dangers of AI have been discussed at length — probably as much as the technology itself. What will an eventual artificial general intelligence (AGI) mean for humanity? And how will we account for things like the AI alignment problem — the risk that, as AI systems become more powerful, they may not do what humans want them to do?

Prior to AI, whenever humans developed a new technology or product, accompanying safety measures were put into place. Take cars, for example. The earliest versions didn’t feature seatbelts, and people were hurt in accidents — which led to seatbelts becoming standard and eventually required by law. Applying safety measures to AI is much more complicated because we’re developing an intangible intelligent entity; there are many unknowns and gray areas. AI has the potential to become a “runaway train” if we’re not careful, and there’s only so much we can do to mitigate its risks.
Full opinion: How Companies Can Cope With the Risks of Generative AI Tools.