The future of trust is a broad research theme at OODA Loop, overlapping with topics like the future of money (i.e., the creation of new value exchange mechanisms, value creation and value storage systems, and the role trust will play in the design of these new monetary systems). Likewise, notions of trust (or the lack thereof) will impact the future of Generative AI, AI governance (i.e., Trustworthy AI), and the future of autonomous systems and exponential technologies generally. In fact, two panels at OODAcon 2023 are concerned with the future of trust relationships and the design of trust into future systems.
Following is a compilation of OODA Loop Original Analysis and OODAcast conversations concerned with trust, zero trust and trustworthy AI:
The Future of the Internet, Trust and Web3: Data and Digital Sovereignty Versus Digital Self-Sovereignty: Charles Clancy, Chief Futurist at MITRE, and his co-authors of a recent report – “Democratizing Technology: Web3 and the Future of the Internet” – provide the best framing of a “robust and decentralized, democratized alternative to the existing technology stack” and “the establishment and advancement of alternative technological paradigms to protect the public interest by making authoritarian misuse difficult or impossible.”
By Predicting and Preventing Homelessness, AI-based Predictive Analytics are Restoring Trust in Systems in Los Angeles: AI use cases are starting to emerge (transcending the “AI for Enterprise” marketplace) that illuminate the promise of artificial intelligence to solve tough societal problems. The California Policy Lab and the University of Chicago Poverty Lab “have used County data to predict homelessness among single adults receiving mainstream County services.” This AI-based computer model predicts who will become homeless in L.A. How can your organization incorporate these insights into its AI strategy? This post is designed to highlight the “lessons learned” from the systems thinking behind an applied AI technology solution to the broken system that is the national homelessness crisis.
The Future of Money, Trust, Value Creation, and Blockchain Technologies: OODA CEO Matt Devost and Raymond Roberts, in a fireside chat on The Future of Money at OODAcon 2022, began with a discussion of the origins of their interest in bitcoin, the history of money, and the global fiat currency standard as a necessary introduction to a longer discussion of the future of money. Later in the discussion they cover the future of smart contracts, Decentralized Autonomous Organizations (DAOs), Digital Asset Exchanges (DAXs), and Non-Fungible Tokens (NFTs). Find the conversation here.
Exponential Technologies Will Require a “New Paradigm of Trust”: Trust – between individuals, governments and corporations, machines and technology platforms – is both under attack and undergoing a fundamental transformation. The Atlantic Council frames these issues as the “Economy of Trust” – which will rely on “innovation and invention, informed by data, to mitigate concerns surrounding the impacts and risks of emerging technologies.” Atlantic Council researcher Borja Prado explores the future of public trust and how a new paradigm of trust will be necessary to address the exponential growth of automation and quantum technologies.
Can Trust and Safety be Designed and Scaled into Future Systems?: Can trust and safety in the modern Internet be improved? This post reviews conclusions from the Digital Forensics Research Lab (DFRLab) that can lead to improvements in trust and safety at scale.
What’s 2023 Cybersecurity Look Like? Trust: While the cyber attack kill chain focuses on the step-by-step mechanics of hostile activity, the attackers’ main goal is to abuse the trust that is inherent throughout the model, because trust factors into all levels of a cyber-interconnected world. Through this prism, trust is a principle that may be as extensive and multi-faceted as cyber itself, as it is the very cornerstone of securing the digital environment. The savvier attackers understand that by successfully exploiting trust, they exponentially increase the chances of success in whatever type of attack they are executing. Consider the following attacks and how trust is targeted and manipulated in order to achieve operational success.
OODA Network Member Junaid Islam on Emerging AI-based Zero Trust for Smart Infrastructure: In a series of posts entitled Autonomous Everything, we are exploring automation in all its technological forms, including legacy working assumptions about the term itself. Autonomous Everything includes a broad autonomous future in areas such as Security Automation, Automation – or Augmentation – of the Workforce, and Automation of AI/Machine Learning Training Models and Industry Standardization. We checked in with Junaid Islam, a well-known cybersecurity expert, to discuss security automation tools and the increased cyber risks enterprises face. We now expand on Part 1 and Part 2 to look at emerging AI-based Zero Trust cybersecurity for Smart Energy, Transportation, and Manufacturing systems.
Zero Trust Architecture – An OODAcast conversation: Junaid is a senior partner at OODA. He has over 30 years of experience in secure communications and the design and operation of highly functional enterprise architectures. He founded Bivio Networks, maker of the first gigabit-speed general-purpose networking device, and Vidder, a pioneer of the Software Defined Perimeter concept. Vidder was acquired by Verizon to provide Zero Trust capability for their 5G network. Junaid has supported many US national security missions, from Operation Desert Shield to investigating state-sponsored cyberattacks. He has also led the development of many network protocols, including Multi-Level Precedence and Preemption (MLPP), MPLS priority queuing, Mobile IPv6 for Network Centric Warfare, and Software Defined Perimeter for Zero Trust. Recently Junaid developed the first interference-aware routing algorithm for NASA’s upcoming Lunar mission. He writes frequently on national security topics for OODAloop.com.
The New Enterprise Architecture Is Zero Trust: Enterprise technologists use the term “Zero Trust” to describe an evolving set of cybersecurity approaches that move defenses from static attempts to block adversaries to more comprehensive measures that improve enterprise performance while improving security. When the approaches of Zero Trust are applied to an enterprise infrastructure and workflows, the cost of security can be better managed and the delivery of functionality to end users increased. Security resources are matched to risk. Functionality, security and productivity all go up. But what is zero trust design? We articulate our approach in the form of principles itemized in this post.
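The Zero Trust idea of matching security resources to risk can be made concrete with a small sketch. The example below is illustrative only – the `Request` fields, scoring weights, and `max_risk` threshold are assumptions for demonstration, not part of any specific Zero Trust product or the principles itemized in the post. It shows the core shift: every request is evaluated on its own evidence (identity, device posture, resource sensitivity) rather than trusted because of its network location.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool      # endpoint meets posture policy
    mfa_passed: bool            # strong identity verification succeeded
    resource_sensitivity: int   # 1 (low) to 3 (high); hypothetical scale

def risk_score(req: Request) -> int:
    """Score each request on its own evidence -- never on network location."""
    score = 0
    if not req.device_compliant:
        score += 2
    if not req.mfa_passed:
        score += 2
    score += req.resource_sensitivity - 1  # sensitive resources raise the bar
    return score

def authorize(req: Request, max_risk: int = 2) -> bool:
    """Deny by default; allow only when per-request risk is acceptable."""
    return risk_score(req) <= max_risk
```

A compliant, MFA-verified user can reach a sensitive resource, while the same request from a non-compliant device is denied – security resources scale with measured risk instead of a static perimeter.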
NIST Makes Available the Voluntary Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the AI RMF Playbook: The AI RMF 1.0 is divided into two parts. The first part discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions – govern, map, measure, and manage – to help organizations address the risks of AI systems in practice. These functions can be applied in context-specific use cases and at any stage of the AI life cycle. “Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact,” the report cautions. “Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations…Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks and verify the efficacy of the metrics,” the report states. The AI RMF Playbook suggests ways to navigate and use the AI Risk Management Framework (AI RMF) to incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems. NIST has launched a Trustworthy and Responsible AI Resource Center to help organizations put the AI RMF 1.0 into practice.
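The four core functions can be sketched as a simple cycle applied to each AI system. The function names (govern, map, measure, manage) come from NIST AI RMF 1.0; the one-line activity descriptions, the data structure, and the `rmf_cycle` helper below are illustrative assumptions, not part of the framework itself.

```python
# The four AI RMF 1.0 core functions; activity summaries are illustrative.
AI_RMF_CORE = {
    "govern": "cultivate a risk-aware culture and assign accountability",
    "map": "establish context and identify risks across the AI life cycle",
    "measure": "track metrics for trustworthy characteristics and social impact",
    "manage": "prioritize and act on the measured risks",
}

def rmf_cycle(system_name: str) -> list[str]:
    """Walk one AI system through the four functions in order (illustrative)."""
    return [f"{system_name}: {fn} -> {activity}"
            for fn, activity in AI_RMF_CORE.items()]
```

Because the functions apply at any stage of the life cycle, the same cycle would be rerun as a system moves from design through deployment, with measurement metrics revisited each pass.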
From AI Principles to AI Practice at a Global Scale: The MIT AI Policy Forum (AIPF) is a global initiative at the MIT Schwarzman College of Computing, launched in 2018: “What sets the AIPF apart from all other organizations dedicated to AI research and policy is its commitment to global collaboration moving from AI principles to AI practice. Activities associated with this effort will be distinguished by their focus on tangible outcomes — their engagement with key government officials at the local, national, and international levels charged with designing those public policies, and their deep technical grounding in the latest advances in the science of AI. The measure of success will be whether these efforts have bridged the gap between these communities, translated principled agreement into actionable outcomes, and helped create the conditions for a deeper trust of humans in AI technology. This is a challenging and complex process that requires all hands on deck.” Hosted by the MIT AI Policy Forum, leaders from government, business, and academia convened in 2022 for a day-long dialogue on the global policy challenges surrounding the deployment of AI in key areas: truly trustworthy AI, making AI work for consumers in finance, and charting a viable path toward social media reform.
For OODA Loop News Briefs and Original Analysis on the topic of Trust, see OODA Loop | Trust
For more OODA Loop News Briefs and Original Analysis on Zero Trust, go to OODA Loop | Zero Trust
For more OODA Loop News Briefs and Original Analysis on Trustworthy AI, go to OODA Loop | Trustworthy AI