
How to drive bias out of AI without making the mistakes of Google Gemini

When Google took its Gemini image-generation feature offline last month for further testing because of issues related to bias, it raised red flags about the potential dangers of generative artificial intelligence, not just the positive changes the technology promises to usher in. “Companies need to overcome bias if they wish to maximize the true potential of this powerful technology,” said Siva Ganesan, head of the AI Cloud business unit at Tata Consultancy Services. “However, depending on the data that Gen AI is trained on, the model learns and reflects that in its outputs,” he said.

Crucial to managing issues of potential bias in AI is to have clear processes in place and prioritize responsible AI from the beginning, said Joe Atkinson, chief products and technology officer at consulting firm PwC. “This starts with striving to make gen AI systems transparent and explainable, giving users access to clear explanations of how the AI system makes decisions and being able to trace the reasoning behind those decisions,” Atkinson said.

Ensuring transparency in how generative AI systems operate and make decisions is crucial for building trust and addressing bias concerns, said Ritu Jyoti, group vice president, AI and automation, market research and advisory services at International Data Corp. “Organizations should invest in developing explainable AI techniques that enable users to understand the reasoning behind the AI-generated content,” Jyoti said. “For example, a healthcare chatbot powered by generative AI can provide explanations for its diagnoses and treatment recommendations, helping patients understand the underlying factors and mitigating potential biases in medical advice.”

Companies also need to create diverse and inclusive development teams. Including people who represent a range of backgrounds, perspectives, and experiences “goes a long way in identifying and mitigating biases that may inadvertently be embedded in the AI system,” Atkinson said. “Different viewpoints can challenge assumptions and biases, leading to fairer and more inclusive AI models.”

Full story: How to avoid a Google Gemini AI-type bias mishap in artificial intelligence models.