For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content? Until recently, these evaluations, known inside Meta as privacy and integrity reviews, were conducted almost entirely by human evaluators.

But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated. In practice, this means that critical updates to Meta's algorithms, new safety features and changes to how content can be shared across the company's platforms will be approved mostly by a system powered by artificial intelligence, no longer subject to scrutiny by staffers tasked with debating how a platform change could have unforeseen repercussions or be misused.

Inside Meta, the change is viewed as a win for product developers, who will be able to release app updates and features more quickly. But current and former Meta employees fear the automation push comes at the cost of letting AI make tricky determinations about how Meta's apps could lead to real-world harm.
Full report: Meta plans to replace humans with AI to assess privacy and societal risks.