Recent OODA Loop research underscores one central theme: human capital is the decisive factor in cybersecurity and national tech competition. From board-level governance of human risk to the race for emerging tech talent, a “Talent Superpower Strategy” reframes people—not just technology—as the critical infrastructure of the future.
We provide an overall analysis of our findings, as well as an archive of OODA Loop Original Analysis posts for a deeper dive.
Why This Matters
- Boards must own human risk. Cybersecurity failures tied to social engineering or insider threats are governance issues, not just IT problems.
- Talent is national security. Retaining STEM graduates, building cyber/AI workforce capacity, and countering foreign human targeting efforts directly shape U.S. technological sovereignty.
- Geopolitical talent competition is accelerating. China, among others, leverages recruitment and repatriation programs to erode U.S. advantages in AI, biotech, semiconductors, and quantum.
- AI and quantum expand the battlefield. Workforce strategies must adapt to specialized skills in data science, adversarial AI, and next-gen quantum infrastructure.
Key Points
- Human Risk Management / Social & Human Engineering
  - Evolved from awareness training (2020) to board accountability (2023–2024).
  - Social engineering remains the “coin of the realm” for ransomware groups and advanced persistent threats.
  - Boards are urged to treat human risk as a cultural and strategic concern, not a siloed IT issue.
- Talent Strategy & Emerging Tech Talent
  - OODA launched the Talent Superpower Strategy series in 2023, highlighting domestic talent as a strategic resource.
  - Priority fields: AI, semiconductors, biotechnology, and quantum.
  - Adversaries actively target these talent pools through recruitment, espionage, and collaboration fronts.
- Cyber & AI Workforce Development
  - The DoD Cyber Workforce Strategy (2023–2027) emphasizes AI/data roles and four pillars of human capital.
  - Training innovations include cybergames, red teaming, and gamified skill retention.
  - AI workforce themes are increasingly integrated into cyber workforce planning.
- STEM Stay Rates
  - Retaining international STEM graduates in the U.S. remains a national security priority.
  - Attrition undermines long-term competitive advantage.
- Human Targeting
  - Case studies (e.g., Los Alamos recruitment) show foreign adversaries weaponizing talent pipelines.
  - Counter-strategies include awareness, resilience planning, and stronger counterintelligence.
- Quantum Talent Strategy (2025 expansion)
  - OODA’s research extends the Talent Superpower Strategy into quantum computing, stressing elite talent as essential infrastructure.
What Next?
- Expect greater board-level accountability for human risk management.
- Watch for policy initiatives linking STEM retention, immigration, and national competitiveness.
- Anticipate AI workforce specialization across both private and defense sectors.
- Quantum and biotech talent will increasingly be framed as strategic battlegrounds in U.S.–China competition.
Recommendations from OODA Loop Research
- Boards & Executives: Elevate human risk to the governance agenda; require regular reporting on insider threats and workforce resilience.
- Policymakers: Tie immigration and STEM retention policy directly to national security goals; expand cyber/AI workforce pipelines.
- Investors & Innovators: Fund startups and platforms that merge AI oversight with human-centric training, compliance, and risk mitigation.
- National Security Leaders: Harden defenses against human targeting, particularly in critical technology research hubs.
A Deeper Dive: From the OODA Loop Research Archive
Human Risk Management, Social Engineering, and Human Engineering
- Cyber Defense Insights and Resources for the Corporate Board (Human Risk Management, Social and Human Engineering): Frames human risk as a board-level strategic issue; emphasizes culture, governance, and oversight in mitigating insider threats and social engineering.
- What the Board Needs to Know About Exponential Disruption, Cybersecurity, Risk Management, and Strategy: Calls for boards to proactively integrate cybersecurity and human risk into their governance frameworks, rather than delegating solely to CISOs.
- Social Engineering Remains the Coin of the Realm for Ransomware Gangs (or APTs – Advanced Persistent Threats): Explains how ransomware and APT groups continue to rely on phishing, vishing, and human manipulation tactics.
- The Social Engineering Tactics of Ransomware-as-a-Service Operator Black Basta: Provides a case study of Black Basta’s methods, highlighting the evolution of professionalized social engineering.
- UK CERT Introduction to Social Engineering: Social engineering is one of the most prolific and effective means of gaining access to secure systems and obtaining sensitive information, yet it requires minimal technical knowledge. Attacks vary from bulk phishing emails with little sophistication through to highly targeted, multi-layered attacks which use a range of social engineering techniques. Social engineering works by manipulating normal human behavioural traits, and as such, there are only limited technical solutions to guard against it.
- Building Trust Into Blockchain: Discusses trust frameworks, including human factors, in blockchain adoption and security.
- What’s 2023 Cybersecurity Look Like? Trust: While the cyber attack kill chain focuses on the step-by-step mechanics of hostile activity, the attackers’ main goal is to abuse the trust inherent throughout that model, because trust factors into every level of a cyber-interconnected world. Viewed through this prism, trust is a principle as extensive and multifaceted as cyber itself, and the very cornerstone of securing the digital environment.
- Is Your Insider Threat Risk Management Program Ripe for Innovation? Part 1: Part 1 of this series looks at the Transportation Security Administration (TSA) Insider Threat Roadmap 2020 and advanced analytics, then profiles two more initiatives that are rethinking insider threat program implementation through innovative architectures, collective intelligence, advanced analytics, and the use of publicly available information (PAI). Community-based and partner collaborations up and down the supply chain are also a hallmark of these efforts, reflecting a growing acknowledgment that internal-facing, traditionally siloed insider threat efforts are part of the problem.
- Is Your Insider Threat Risk Management Program Ripe for Innovation? Part 2: In Part 2, we examine the approaches taken and the resources available at the Carnegie Mellon University Software Engineering Institute (SEI) and the MITRE Center for Threat-Informed Defense (CTID).
Human Targeting & Cyber Espionage
Cyber Workforce Development & AI Workforce
Talent Strategy & Talent Superpower Strategy
Emerging Tech and National Security Context
Broader Policy & Innovation Context
- USA to Host Global Cybersecurity Competition and Conference (IC3): Highlights international competitions as a tool for skill-building, workforce pipeline development, and talent identification.
- HHS Launches $50 Million ARPA-H Program to Improve Hospital Cybersecurity: Shows government investment in sector-specific workforce and security improvements, including human risk and resilience.
- The OODA Network on the 2023 National Cybersecurity Strategy: Community insights on the U.S. strategy, emphasizing workforce, human risk, and the integration of public-private efforts.
- OODA Weekly Dispatch: Rate of change in government, the new OODA AI, and update on NYC Deep Tech Week: Community perspective on the U.S. national strategy, emphasizing human factors, workforce, and public-private partnerships.
On Trust
The theme of trust runs directly through all of these OODA research topics: it’s an implicit backbone connecting human risk, talent strategy, and workforce development.
Trust is not just a “soft” issue — it’s a hard strategic factor:
- Cybersecurity collapses without trust between users, systems, and institutions.
- Talent strategies succeed or fail depending on whether workers and innovators trust the system they are part of.
- In AI, trust is the foundation of safe deployment and oversight.
Resources:
OODA Network Member Junaid Islam on Zero Trust Data
About the Author
Daniel Pereira
Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.