
Can trust and safety in the modern Internet be improved? This post reviews conclusions from the Atlantic Council's Digital Forensic Research Lab (DFRLab) that can lead to improvements in trust and safety at scale.

The future of trust is a broad theme here at OODA Loop, overlapping with topics like the future of money (i.e., the creation of new value-exchange mechanisms and systems for value creation and storage, and the role trust will play in the design of these new monetary systems), as well as the future of Generative AI, AI governance (i.e., Trustworthy AI), and the future of autonomous systems and exponential technologies generally.

Scaling Trust on the Web 

The DFRLab recently released the final report from the Task Force for a Trustworthy Future Web, which offers a vital working definition of the growing specialization and practice known as “Trust and Safety”.

Overview: This comprehensive report, assembled at breakneck speed with input from over 40 experts, charts an actionable path forward to support human dignity and innovation and to mitigate harms online. The report’s annexes feature further reporting and exploration of topics including generative AI, children’s rights, open tooling, gaming, Trust & Safety, federated spaces, and more.

What is “Trust and Safety”?

This framing of “Trust and Safety” is remarkably useful, and especially interesting from an OODA Loop perspective, in that the authors differentiate the “area of specialty and practice” known as “Trust and Safety” from “cybersecurity” and “information security” in the following manner:

“For decades, an area of specialty and practice that is increasingly referred to as “Trust & Safety” (T&S) has developed inside US technology companies to diagnose and address the risks and harms that face individuals, companies, and now—increasingly—societies on any particular online platform.

No single definition of T&S holds across all audiences. Stated most generally, T&S anticipates, manages, and mitigates the risks and harms that may occur through using a platform, whereas “cybersecurity” and “information security” address attacks from an external actor against a platform. 

A T&S construct may describe a range of different verticals or approaches. “Ethical” or “responsible” tech; information integrity; user safety; brand safety; privacy engineering—all of these could fall within a T&S umbrella. T&S practice is equally varied and can include a variety of cross-disciplinary elements ranging from defining policies, to rules enforcement and appeals, to law enforcement responses, community management, or product support.

The types of harms that T&S may take on (when considering online spaces) include coordinated inauthentic behavior, copyright infringement, counterfeiting, cross-platform abuse, child sexual abuse material (CSAM), denials of service (DOS) / distributed denials of service (DDOS), disinformation, doxing, fraud, gender-based violence, glorification of violence, harassment, hate speech, impersonation, incitement to violence or violent sentiment, misinformation, nonconsensual intimate imagery, spam, synthetic media (for example, deepfakes), trolling, terrorist and violent extremist content (TVEC), violent threats, and more.

These harms are specific to online spaces and are not meant to denote the range of harms that T&S considers as a field. While T&S is now expanding globally as a field, it is important to note that the standards, practices, and technology that scaffold T&S were constructed overwhelmingly from American value sets. This American understanding of harms, risks, rights, and cultural norms has informed decades of quiet decision-making inside platforms with regard to non-US cultures and communities. Because its roots are so culturally specific to the United States and to corporate priorities, the emerging T&S field only represents one element of a much broader universe of actors and experts who also play a critical role in identifying and mitigating harm—including activists, researchers, academics, lawyers, and journalists.”
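To make concrete how a platform might operationalize such a taxonomy, here is a minimal, purely illustrative Python sketch. The harm categories are drawn from the report’s list above, but the review queues and routing rules are invented for this example; the report does not prescribe any particular implementation.

```python
# Illustrative only: a hypothetical encoding of a T&S harm taxonomy.
# Category names come from the report's list of online harms; the queue
# names and routing logic below are invented assumptions for this sketch.
from enum import Enum


class Harm(Enum):
    CSAM = "child sexual abuse material"
    TVEC = "terrorist and violent extremist content"
    DOXING = "doxing"
    HARASSMENT = "harassment"
    SPAM = "spam"
    MISINFORMATION = "misinformation"


# Hypothetical mapping from harm type to a review queue, echoing the
# cross-disciplinary responses the report describes (policy enforcement,
# law enforcement referral, community management).
ESCALATION = {
    Harm.CSAM: "law_enforcement_referral",
    Harm.TVEC: "law_enforcement_referral",
    Harm.DOXING: "priority_human_review",
    Harm.HARASSMENT: "human_review",
    Harm.SPAM: "automated_removal",
    Harm.MISINFORMATION: "labeling_and_review",
}


def route(reported_harm: Harm) -> str:
    """Return the review queue for a reported harm type."""
    # Default to human review for anything without an explicit rule.
    return ESCALATION.get(reported_harm, "human_review")


if __name__ == "__main__":
    print(route(Harm.DOXING))  # -> priority_human_review
```

The point of the sketch is the design choice the quote implies: a shared vocabulary of harms lets enforcement, appeals, and law-enforcement response be routed consistently rather than decided ad hoc per report.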

From the Report

Key Findings

In addition to broad points of consensus outlined in the report, the task force arrived at the following key findings. These findings reflect collective input gathered through task force processes, rather than the individual views of any particular member. Their order of presentation does not reflect any ranking: 

  1. An emerging T&S field creates important new opportunities for collaboration.
  2. Academia, media, and civil society bring crucial expertise to building better online spaces.
  3. Protecting healthy online spaces requires protecting the individuals who defend them.
  4. Learning from mature, adjacent fields will accelerate progress.
  5. The gaming industry offers unique potential for insights and innovation.
  6. Existing harms will evolve and new harms will arise as technologies advance.
  7. Systemic harm is exacerbated by market failures that must be addressed.
  8. Philanthropies and governments can shape incentives and fill gaps.

Key Recommendations

Decades of effort from trust and safety practitioners, civil society actors, academics, policymakers, and other counterparts can now be consolidated to redefine how technology is developed in the twenty-first century. It is our hope that the insights captured in Scaling Trust on the Web galvanize investments in systems-level solutions that reflect the expanding communities dedicated to protecting trust and safety on the web, the trailblazers envisioning the next frontier of digital tools and systems, and the rights holders whose futures are at stake.

The full DFRLab report is available on the DFRLab website.

What Next?

We pointed to the path forward in the 2023 OODA Almanac:

Disruption of Social Integrity and Cognitive Infrastructure Resiliency

We continue to track several themes around the disruption of social integrity in the U.S., including stress points like homelessness and crime, and under-reported risks like fentanyl deaths. Fentanyl is of particular interest given the increasing number of deaths and the drug’s strong ties to foreign illicit chemical supply chains, including origination in China.

Cognitive infrastructure degradation and the associated misinformation and influence campaigns also continue to be issues we will closely monitor in 2023 and beyond. Rather than building models for cognitive resilience, including investment in education platforms, current initiatives focus on platform banning, which creates a cat-and-mouse environment rather than addressing root causes.

Our world is becoming a house of mirrors as years of misinformation and disinformation, and attacks on the credibility of institutions, have eroded trust. Lines between fact and opinion are increasingly blurred in the media, and sponsored-content playbooks dominate what were previously technology-focused platforms. Distractions are prevalent, and new platforms, including the metaverse, will encourage withdrawal from reality anytime and anywhere. Even our best approaches to conversational AI demonstrate inherent tendencies to manufacture facts and create faux authority, including fabricated citations. This will create unprecedented challenges and require the development of new technologies and approaches.
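That fabricated-citation failure mode is at least partly machine-checkable. As a minimal, purely illustrative sketch (not a production tool, and not anything prescribed by the report or the Almanac), the following Python flags citations whose DOIs fail to resolve at doi.org. It assumes the third-party requests library is installed and glosses over real-world complications such as publishers that reject HEAD requests.

```python
# Illustrative sketch: flag possibly fabricated citations by checking
# whether each cited DOI actually resolves. A real verification pipeline
# would be far more involved (retries, metadata matching, rate limits).
import requests  # assumes the third-party 'requests' package is installed


def doi_resolves(doi: str, timeout: float = 5.0) -> bool:
    """Return True if the DOI resolves at doi.org, False otherwise."""
    try:
        resp = requests.head(
            f"https://doi.org/{doi}", allow_redirects=True, timeout=timeout
        )
        return resp.status_code < 400
    except requests.RequestException:
        # Treat network failures as unresolved rather than crashing.
        return False


def flag_suspect_citations(dois: list[str]) -> list[str]:
    """Return the subset of DOIs that fail to resolve (possible fabrications)."""
    return [doi for doi in dois if not doi_resolves(doi)]


if __name__ == "__main__":
    # Hypothetical inputs: an unremarkable-looking DOI and an obviously fake one.
    print(flag_suspect_citations(["10.1000/xyz123", "10.9999/fabricated.0000"]))
```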

Annexes from the DFRLab Report: 

1 – TRUST AND SAFETY

2 – OPEN TOOLING

3 – CHILDREN’S RIGHTS

4 – GAMING ECOSYSTEM

5 – FEDERATED SPACES

6 – CYBERSECURITY + GEN AI

Additional Resources 

Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.

AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.

Track Technology Driven Disruption: Businesses should examine technological drivers and future customer demands. A multi-disciplinary knowledge of tech domains is essential for effective foresight. See: Disruptive and Exponential Technologies.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.