Can trust and safety on the modern Internet be improved? This post reviews conclusions from the Digital Forensic Research Lab (DFRLab) that can lead to improvements in trust and safety at scale.
The future of trust is a broad theme here at OODA Loop, overlapping with topics like the future of money (i.e., the creation of new value-exchange mechanisms, value-creation and value-storage systems, and the role trust will play in the design of these new monetary systems), as well as the future of Generative AI, AI governance (i.e., Trustworthy AI), and the future of autonomous systems and exponential technologies generally.
The DFRLab recently released the final report of the Task Force for a Trustworthy Future Web, which offers a vital working definition of the growing area of specialization and practice known as “Trust and Safety”.
Overview: This comprehensive report was put together with input from over 40 experts at breakneck speed and charts an actionable path forward to support human dignity and innovation and to mitigate harms online. The report’s annexes feature further reporting and exploration of topics including: generative AI, children’s rights, open tooling, gaming, Trust & Safety, federated spaces, and more.
This framing of “Trust and Safety” is remarkably useful and especially interesting from an OODA Loop perspective, in that the authors differentiate the “area of specialty and practice” known as “Trust and Safety” from “cybersecurity” and “information security” in the following manner:
“For decades, an area of specialty and practice that is increasingly referred to as “Trust & Safety” (T&S) has developed inside US technology companies to diagnose and address the risks and harms that face individuals, companies, and now—increasingly—societies on any particular online platform.
No single definition of T&S holds across all audiences. Stated most generally, T&S anticipates, manages, and mitigates the risks and harms that may occur through using a platform, whereas “cybersecurity” and “information security” address attacks from an external actor against a platform.
A T&S construct may describe a range of different verticals or approaches. “Ethical” or “responsible” tech; information integrity; user safety; brand safety; privacy engineering—all of these could fall within a T&S umbrella. T&S practice is equally varied and can include a variety of cross-disciplinary elements ranging from defining policies, to rules enforcement and appeals, to law enforcement responses, community management, or product support.
The types of harms that T&S may take on (when considering online spaces) include coordinated inauthentic behavior, copyright infringement, counterfeiting, cross-platform abuse, child sexual abuse material (CSAM), denials of service (DOS) / distributed denials of service (DDOS), disinformation, doxing, fraud, gender-based violence, glorification of violence, harassment, hate speech, impersonation, incitement to violence or violent sentiment, misinformation, nonconsensual intimate imagery, spam, synthetic media (for example, deepfakes), trolling, terrorist and violent extremist content (TVEC), violent threats, and more.
These harms are specific to online spaces and are not meant to denote the range of harms that T&S considers as a field. While T&S is now expanding globally as a field, it is important to note that the standards, practices, and technology that scaffold T&S were constructed overwhelmingly from American value sets. This American understanding of harms, risks, rights, and cultural norms has informed decades of quiet decision-making inside platforms with regard to non-US cultures and communities. Because its roots are so culturally specific to the United States and to corporate priorities, the emerging T&S field only represents one element of a much broader universe of actors and experts who also play a critical role in identifying and mitigating harm—including activists, researchers, academics, lawyers, and journalists.”
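The harm taxonomy in the passage above is exactly the kind of label set that T&S engineering teams turn into controlled vocabularies inside moderation pipelines. As a minimal illustrative sketch, assuming a platform that routes flagged content by harm type (the enum values, routing table, and function below are hypothetical, not drawn from the DFRLab report):

```python
from enum import Enum

class HarmCategory(Enum):
    """Controlled vocabulary of online harms, adapted from the
    categories enumerated in the DFRLab report (illustrative subset)."""
    COORDINATED_INAUTHENTIC_BEHAVIOR = "coordinated_inauthentic_behavior"
    CSAM = "child_sexual_abuse_material"
    DISINFORMATION = "disinformation"
    DOXING = "doxing"
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    IMPERSONATION = "impersonation"
    NCII = "nonconsensual_intimate_imagery"
    SPAM = "spam"
    SYNTHETIC_MEDIA = "synthetic_media"
    TVEC = "terrorist_violent_extremist_content"

# Hypothetical routing table: which T&S function owns each harm type.
ROUTING = {
    HarmCategory.CSAM: "law_enforcement_response",
    HarmCategory.TVEC: "law_enforcement_response",
    HarmCategory.SPAM: "automated_enforcement",
    HarmCategory.DISINFORMATION: "information_integrity",
}

def route_report(category: HarmCategory) -> str:
    """Route a flagged item to the owning T&S function.

    Categories without a dedicated pipeline default to human
    policy review, reflecting the report's point that T&S practice
    mixes automated enforcement with policy and appeals work.
    """
    return ROUTING.get(category, "policy_review")

if __name__ == "__main__":
    print(route_report(HarmCategory.HARASSMENT))  # -> policy_review
    print(route_report(HarmCategory.SPAM))        # -> automated_enforcement
```

The design point is simply that a shared, enumerated taxonomy lets policy, enforcement, appeals, and community-management teams refer to the same harm definitions, which is part of what the report means when it describes T&S practice as spanning cross-disciplinary elements.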
In addition to broad points of consensus outlined in the report, the task force arrived at the following key findings. These findings reflect collective input gathered through task force processes, rather than the individual views of any particular member. Their order of presentation does not reflect any ranking:
Decades of effort from trust and safety practitioners, civil society actors, academics, policymakers, and other counterparts can now be consolidated to redefine how technology is developed in the twenty-first century. It is our hope that the insights captured in Scaling Trust on the Web galvanize investments in systems-level solutions that reflect the expanding communities dedicated to protecting trust and safety on the web, the trailblazers envisioning the next frontier of digital tools and systems, and the rights holders whose futures are at stake.
For the full DFRLab report, go to this link.
We pointed to the path forward in the 2023 OODA Almanac:
We continue to track several thematics around the disruption of social integrity in the U.S., including stress points like homelessness, crime, and under-reported risks like fentanyl deaths. Fentanyl is of particular interest given the increasing number of deaths and the drug’s strong ties to foreign illicit chemical supply chains, including origination in China.
Cognitive infrastructure degradation and associated misinformation and influence campaigns also continue to be issues we will closely monitor in 2023 and beyond. Rather than building models for cognitive resilience, including investment in education platforms, current initiatives are focused on platform banning, which creates a cat-and-mouse environment rather than addressing root causes.
Our world is becoming a house of mirrors as years of misinformation and disinformation, and attacks on the credibility of institutions, have eroded trust. Lines between fact and opinion are increasingly blurred in the media, and sponsored-content playbooks dominate what were previously technology-focused platforms. Distractions are prevalent, and new platforms, including the metaverse, will encourage withdrawal from reality anytime and anywhere. Even our best approaches to conversational AI demonstrate inherent tendencies to manufacture facts and create faux authority, including manufactured citations. This will create unprecedented challenges and require the development of new technologies and approaches.
Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.
AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.
Track Technology Driven Disruption: Businesses should examine technological drivers and future customer demands. A multi-disciplinary knowledge of tech domains is essential for effective foresight. See: Disruptive and Exponential Technologies.