Margaret Mitchell, researcher and chief ethics scientist at artificial intelligence developer and collaborative platform Hugging Face, is a pioneer in responsible and ethical AI. One of the most influential narratives around the promise of AI is that, one day, we will be able to build artificial general intelligence (AGI) systems that are at least as capable and intelligent as people. But the concept is ambiguous at best and poses many risks, argues Mitchell. She founded and co-led Google’s responsible AI team before being ousted in 2021.
Melissa Heikkilä: You’ve been a pioneer in AI ethics since 2017, when you founded Google’s responsible AI ethics team. In that time, we’ve gone through several different stages of AI and our understanding of responsible AI. Could you walk me through that?
Margaret Mitchell: With the increased potential that came out of deep learning — this is circa 2012-13 — a bunch of us who were working on machine learning, which is basically now called AI, were really seeing that there was a massive paradigm shift in what we were able to do. We went from not being able to recognise a bird to being able to tell you all about the bird. A very small set of us started seeing the issues that were emerging based on how the technology worked. For me, it was probably around 2015 when I saw the first glimmers of the future of AI, almost where we are now.