Tech firms pushing to deploy AI fast are facing mounting pushback from whistleblowers who say that generative AI products aren’t ready or safe for broad distribution. Previous high-profile whistleblowers in tech — from Edward Snowden to Frances Haugen — have mostly taken aim at mature technologies in widespread use, but generative AI is facing challenges just as companies are bringing it to market. Microsoft software engineering lead Shane Jones sent letters Wednesday to FTC chair Lina Khan and Microsoft’s board of directors saying that Microsoft’s AI image generator produced violent and sexual imagery, as well as copyrighted images, when given certain prompts. Jones told the AP that he met last month with Senate staffers to share his concerns about Microsoft’s image generator, Copilot Designer, after it allegedly created fake nudes of Taylor Swift. Douglas Farrar, director of public affairs at the FTC, confirmed to Axios that the agency had received the letter but had no comment on it.
A Microsoft spokesperson told Axios that the company has “in-product user feedback tools and robust internal reporting channels,” which it recommended Jones use so it could validate and test his findings. Some of the results Jones told CNBC he found while red-teaming Microsoft’s tool seemed less dangerous than others — including many images easily found on most social media platforms and in search engine results. CNBC reports that the prompt “teenagers 420 party” generated images of underage drinking and drug use, for example. Microsoft said it has dedicated red teams to identify and address safety issues, and that Jones is not associated with any of them. Every AI maker has struggled to limit the bias, misinformation and controversial content produced by its generative AI models, which are trained on mountains of error-prone internet data produced by flawed human beings.
Full story: Whistleblowers are calling out flaws in AI chatbots and image tools.