Cybersecurity - Atlantic Council https://www.atlanticcouncil.org/issue/cybersecurity/ Shaping the global future together Fri, 06 Jun 2025 17:10:45 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.2 https://www.atlanticcouncil.org/wp-content/uploads/2019/09/favicon-150x150.png Cybersecurity - Atlantic Council https://www.atlanticcouncil.org/issue/cybersecurity/ 32 32 G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. https://www.atlanticcouncil.org/blogs/geotech-cues/g7-leaders-have-the-opportunity-to-strengthen-digital-resilience-heres-how-they-can-seize-it/ Fri, 06 Jun 2025 17:10:35 +0000 https://www.atlanticcouncil.org/?p=852065 At the upcoming Group of Seven Leaders’ Summit in Canada, member state leaders should advance a coherent, shared framework for digital resilience policy.

The post G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. appeared first on Atlantic Council.

]]>
The 2025 Group of Seven (G7) Leaders’ Summit in Kananaskis, Alberta, Canada, on June 15-17 will take place amid a growing recognition of the importance of digital resilience. This is especially apparent in Canada, the summit’s host country and current G7 president. Following his election win, Canadian Prime Minister Mark Carney announced the creation of a new Ministry of Artificial Intelligence and Digital Innovation. This bold step positions Canada to champion a digital resilience agenda at the summit that unites security, economic growth, and technological competitiveness while strengthening the resilience of its partners and allies.

The G7 must seize this opportunity to advance a coherent, shared framework for digital policy, one that is grounded in trust, reinforced by standards, and aligned with democratic values. To do so, it can build on some of the insights from the Business Seven (B7), the official business engagement group of the G7. The theme of this year’s B7 Summit, which was held from May 14 to May 16, in Ottawa, Canada, was “Bolstering Economic Security and Resiliency.” The selection of this theme emphasized the importance of defending against threats and enhancing the ability of societies, governments, and businesses to adapt and recover.

In the spirit of that theme, the Atlantic Council’s GeoTech Center, in partnership with the Cyber Statecraft Initiative and the Europe Center, convened a private breakfast discussion alongside the B7 in Ottawa on May 15. The roundtable brought together government officials, business leaders, and civil society representatives to discuss how digital resilience can be strengthened within the G7 framework. The participants laid out foundational principles and practical approaches to building digital resilience that support economic security and long-term competitiveness. As G7 leaders gather for the summit in Kananaskis later this month, they should consider these insights on how its member states can work together to bolster their digital resilience.

1. Develop a common language for shared goals on digital sovereignty

When developing a common framework, definitions (or taxonomy) are critical. Participants emphasized that shared vocabulary is a prerequisite for meaningful cooperation. Discrepancies in how countries define concepts such as digital sovereignty can lead to fundamental misunderstandings in critical areas such as risk, which creates friction and confusion.

For example, a G7 country might frame sovereignty in terms of national control over infrastructure while another country, such as China, defines it as regulating the digital information environment. In that case, this misalignment will hinder cooperation from the outset. Specifying precise definitions of each government’s goals, including “trust,” “resilience,” and “digital sovereignty,” would enable governments and industry to align on priorities and respond more effectively to emerging standards. This definitional clarity is crucial for policymaking and a prerequisite for compliance, implementation, and interoperability across borders.

2. Build on existing multilateral and regional frameworks

Participants stressed the importance of building on existing progress toward digital resilience, both in and out of the G7, rather than discarding it in pursuit of novelty. The G7 and its partners already possess a strong foundation of digital policy initiatives. Key milestones such as the Hiroshima AI Process, launched under Japan’s 2023 G7 presidency, established International Guiding Principles and an International Code of Conduct for the development and use of artificial intelligence (AI) systems, which included frontier models. Prior to the Hiroshima AI Process, several consecutive G7 Summits committed to developing the data free flow with trust framework, which prioritizes enabling the free flow of data across borders while protecting privacy, national security, and intellectual property.

Beyond the G7, participants cited European Union (EU) partnerships as examples of forward-leaning policy environments that balance innovation with safeguards. These included the EU AI continent action plan, which aims to leverage the talent and research of European industries to strengthen digital competitiveness and bolster economic growth, as well as Horizon Europe, the EU’s primary financial program for research and innovation.

With these partnership frameworks already in place, G7 leaders should build on existing work and avoid seeking to design unique solutions that may become time-consuming—particularly when it comes to gaining political buy-in. Even in areas like AI and the use of data, where policymakers have observed rapid changes since last year’s summit, the B7 discussion participants emphasized that governments can leverage work they’ve already completed in designing and implementing existing standards. If prior technical standards and regulations are inapplicable or insufficient, policymakers can still learn lessons from an in-depth assessment, including by taking note of where they’ve fallen short of their goals.

3. Start new initiatives with small working groups and pilot projects  

Ensuring digital resilience requires managing inevitable trade-offs between national security, economic vitality, and open digital ecosystems. As one participant remarked, “the digital economy is the economy,” so policies shaping cyberspace must consider both national security and economic impacts. The G7 provides a platform for frank discussions among allies and partners about how to get these trade-offs right. But waiting for buy-in from all like-minded partners risks missed opportunities in the short term.

Participants noted that by starting with smaller forums, policymakers can build consensus that can lead to real progress. Pilot projects and working groups among smaller clusters of G7 countries could build momentum and inform scalable solutions. Participants emphasized that despite the contentious nature of some of the issues surrounding digital resilience, such as protectionism and market fragmentation, G7 governments are operating with a shared set of values. These values can motivate collaboration across the G7 on the many areas of common ground they already share, but they can also provide the basis for projects among smaller groups within the G7 to get new ideas off the ground.

A pivotal summit for digital resilience

As G7 leaders meet in Kananaskis and work toward a common framework that balances digital security and economic growth, a few key lessons can be garnered from this B7 meeting. G7 member states should prioritize developing a common taxonomy and building on the progress made on digital resilience both inside and outside the G7, all while remaining responsive to shifting geopolitical dynamics.

Disagreements among member states should be viewed not as a barrier, but as evidence of a maturing policy landscape. Constructive tension can drive refinement so long as partners are clear about their priorities. The G7’s unique value lies in its ability to forge alignment among diverse actors. False consensus only delays progress. It will take transparency, specificity, and trust to move the digital resilience agenda forward.


Sara Ann Brackett is an assistant director at the Atlantic Council’s Cyber Statecraft Initiative.

Coley Felt is an assistant director at the Atlantic Council’s GeoTech Center.

Raul Brens Jr. is the acting senior director of the Atlantic Council’s GeoTech Center.

Further Reading

The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

The post G7 leaders have the opportunity to strengthen digital resilience. Here’s how they can seize it. appeared first on Atlantic Council.

]]>
Cyberattacks are hurting US businesses. Here’s how Congress can upgrade cybersecurity information sharing. https://www.atlanticcouncil.org/blogs/new-atlanticist/cyberattacks-are-hurting-us-businesses-heres-how-congress-can-upgrade-cybersecurity-information-sharing/ Thu, 05 Jun 2025 14:11:42 +0000 https://www.atlanticcouncil.org/?p=851689 Hackers are targeting small and medium-sized businesses, and the existing framework for sharing important information is leaving these US companies out of the loop.

The post Cyberattacks are hurting US businesses. Here’s how Congress can upgrade cybersecurity information sharing. appeared first on Atlantic Council.

]]>
Cybersecurity is a team sport, yet small and medium-sized businesses (SMBs) have spent years on the sidelines, despite being the targets of an estimated 43 percent of cyberattacks in the United States. As Congress discusses renewing the United States’ cybersecurity information-sharing framework, it’s time to finally welcome SMBs into the cybersecurity community. 

On September 30, the framework for sharing important cybersecurity information between government and industry, the Cybersecurity Information Sharing Act of 2015 (CISA 2015), will expire unless Congress acts. This law—distinct from the similarly named Cybersecurity and Infrastructure Security Agency (also CISA)—provides essential legal protections that allow private companies to share cyber threat information among themselves and with the government.

There is already bipartisan support for renewing CISA 2015. Senators Gary Peters (D-MI) and Mike Rounds (R-SD) introduced legislation to extend the current law for another ten years without changes, an approach supported by major trade associations. The bill’s authors correctly emphasize the importance of preserving the established information-sharing environment. Yet, renewing CISA 2015 unchanged leaves the cybersecurity community blind to critical threat intelligence that SMBs uniquely hold.

As originally passed, CISA 2015 removed legal barriers and disincentives to sharing cyber threat data. It provides liability protections and exemptions from certain public disclosure requirements or regulatory penalties for companies that share threat indicators in good faith. These protections significantly reduce the risk of lawsuits or regulatory enforcement when organizations exchange information with the Department of Homeland Security (DHS) or other companies under the framework, provided the information was anonymized and used strictly for a “cybersecurity purpose.”

These protections dramatically enhanced cybersecurity information sharing. In the private sector, entities such as the Cyber Threat Alliance formed to facilitate voluntary company-to-company information sharing. Information Sharing and Analysis Centers (ISACs), organizations dedicated to collecting, analyzing, and disseminating sector-specific threat data, have also grown substantially. The National Council of ISACs now comprises twenty-seven sector-specific ISACs, while the Multi-State ISAC alone exceeded 18,000 members last year. These members share cyber threat information directly because of the protections offered by CISA 2015. Even government programs have evolved in response. DHS’s Automated Indicator Sharing (AIS) platform has significantly improved rapid information exchanges and threat awareness, aided by CISA 2015 protections.

SMBs are being left behind

Still missing from this list, however, are the large number of SMBs that operate across the United States. SMBs have largely been overlooked, are subject to a large number of attacks, and their employees face social engineering threats such as phishing and fraud 350 percent more than those at large companies. While platforms such as DHS’s AIS are beneficial to larger corporations, SMB participation remains limited due to high costs, technical complexity, and inadequate outreach. This exclusion leaves SMBs vulnerable and deprives the cybersecurity community of a significant source of threat intelligence.

Since 2015, the cyber threat landscape has evolved, with SMBs now frequent targets. Roughly one in three small businesses will suffer a cyberattack in the next year, with each incident costing an average of nearly $255,000, almost an order of magnitude greater than the 2014 average cost of $27,752. This changed threat landscape and lack of participation in information sharing leaves a gap. 

Any new CISA 2015 authorization should address this gap to benefit the entire cybersecurity ecosystem. SMBs represent a valuable source of threat data, and integrating their insights would significantly enhance predictive capabilities and resilience. Strengthening SMB defenses would also reduce opportunities for attackers to exploit smaller entities as gateways to larger networks. 

How Congress can update CISA 2015

To achieve this integration, Congress should ensure any reauthorization addresses four targeted reforms. 

First, clarify definitions. The term “cybersecurity purpose” should explicitly include protections against social engineering threats such as fraud and phishing, ensuring SMBs receive comprehensive coverage for the threats they face.

Second, incentivize more participation among SMBs. Congress should authorize a DHS-managed initiative specifically designed to provide smaller businesses with accessible, actionable threat intelligence and affordable cybersecurity resources. Federal support could take the form of grants, vouchers, or subsidized cybersecurity solutions. 

Third, codify successful operational models into law. This was attempted last year with a bill introduced by Representative Eric Swalwell (D-CA-14) that would codify CISA 2015’s Joint Cyber Defense Collaborative (JCDC). The JCDC has successfully united federal agencies and private companies to effectively respond to high-profile cyber incidents, including the exploitation of Ivanti gateway vulnerabilities and the July 2024 CrowdStrike outage. Currently, JCDC and many similar programs lack explicit statutory authority, making them vulnerable to termination by executive action, which is what happened to the Critical Infrastructure Partnership Advisory Council in March of this year. Codifying such programs ensures sustained and consistent cybersecurity collaboration irrespective of political shifts.

Fourth, rename the law to clearly distinguish it from the Cybersecurity and Infrastructure Security Agency. Cybersecurity acronyms are hard enough as it is. A new name, such as the Cyber Intelligence Sharing and Protection Act (CISPA), a name from an earlier version of CISA 2015, would eliminate the confusion caused by acronym duplication. 

Reauthorizing CISA 2015 with these targeted improvements—clearer definitions, SMB support, codification of proven programs, and a distinct identity—will ensure that SMBs play their part in and benefit from making the next decade of cybersecurity more resilient than the last.


Tanner Wilburn is a recent graduate of the Indiana University Maurer School of Law with an MS in cybersecurity risk management from the Luddy School of Informatics, Computing, and Engineering. 

Sara Ann Brackett is an assistant director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. 

Urmita Chowdhury is an assistant director for trainings and competitions at the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. 

The post Cyberattacks are hurting US businesses. Here’s how Congress can upgrade cybersecurity information sharing. appeared first on Atlantic Council.

]]>
The Pentagon’s software approval process is broken. Here’s how to fix it. https://www.atlanticcouncil.org/blogs/new-atlanticist/the-pentagons-software-approval-process-is-broken-heres-how-to-fix-it/ Wed, 04 Jun 2025 17:07:52 +0000 https://www.atlanticcouncil.org/?p=851037 To equip US military personnel with the tools they need, the Department of Defense must treat secure software delivery as a warfighting imperative.

The post The Pentagon’s software approval process is broken. Here’s how to fix it. appeared first on Atlantic Council.

]]>
In today’s rapidly evolving battlefields, the Department of Defense (DoD) faces a paradox: It is awash with advanced technologies, yet warfighters often wait months, even years, for approval to use the software they desperately need. Why? The bottleneck often lies in a well-intentioned but outdated process: the Risk Management Framework (RMF) and the painful path to achieving an Authority to Operate (ATO).

The ATO process, designed to safeguard national security systems, is rooted in sound principles. But in practice, it has become a procedural obstacle course—one that sidelines innovative software with lengthy, bureaucratic delays. Having gone through my fair share of ATOs across the Air Force, Army, and Marine Corps, I can attest that this process needs serious reforms. From mission planning tools to logistics dashboards, critical capabilities are too often stuck in limbo because of inconsistent, manual, and subjective risk determinations. For instance, this process has stalled the use of critical Identity Access Management software such as Okta. These software enable zero trust enforcement, rapid user authentication, and centralized access control across multi-domain, cloud, and on-premise environments without significant delays and bandwidth constraints into key warfighting systems.

To ensure US warfighters receive the tools they need in a timely fashion, the DoD should invest in updated technical training for cybersecurity professionals and implement automated, continuous security checks on software. But for these reforms to succeed, the DoD will need to institute a broader cultural shift among the cybersecurity and acquisitions workforces toward recognizing compliance as the crucial aspect of US national security policy that it is.

A subjective standard of risk

RMF is the US government’s structured approach to ensuring information systems are secure and resilient before they are allowed to operate within government networks. It was designed to replace checklist-style compliance with a risk-based decision-making process. Under RMF, systems go through several stages—categorization, control selection, implementation, assessment, authorization, and continuous monitoring. At the heart of the process is the ATO—a formal decision by an authorizing official that a system’s security posture is acceptable for use. To reach this decision, program teams must document security controls, undergo assessments by independent cybersecurity experts, and respond to findings. The intent is to ensure systems are secure before they are fielded—but in practice, the process often results in extended delays, overly cautious reviews, and inconsistent standards across organizations.

One of the most challenging aspects of the ATO process is the subjectivity of risk determination. What is deemed an acceptable risk by one authorizing official may be an unacceptable liability to another. With no shared standard of risk tolerance, system owners must often start from scratch depending on who sits in the approval seat. This variability leads to costly rework, long delays, and disillusioned program teams. Worse, it creates a culture where innovation is stifled not by bad technology, but by indecision and fear.

This is not just a bureaucratic issue; it’s a mission-impact issue. Delays of twelve to eighteen months for an ATO mean that a new targeting application, mission planning software, or AI-enabled intelligence tool never reaches the unit that needs it. When marines or soldiers are using outdated or spreadsheet-based tools while Silicon Valley technologies sit behind compliance gates, something is broken. Compliance activities do have their place. They provide a framework and a set of standards that system owners should utilize. But compliance activities make up only one facet of a resilient security posture.

When it comes to the documentation for this process, the only thing consistent about it is its inconsistency. Each security control assessor, information systems security manager, and authorizing official has their own preferences for how security controls, and security requirement guides should be documented. Even when software as a service systems have received accreditation in one military service, the ATO often does not carry over to other services, requiring the process to start over again at each service.

Across most systems in the DoD, ATOs are manual one-time reviews that only look at a snapshot in time rather than monitoring software continuously. What’s more, this inadequate review takes a significant amount of time, labor, and resources. It requires a team of cybersecurity professionals to manually review and analyze all ATO documentation to meet compliance thresholds. Because there are few security assessor teams across the DoD, there is often a delay in getting the third-party assessor on schedule to conduct the manual review.

These one-time ATO reviews, which often approve a software for one to three years, are not useful for tracking a system’s long-term security posture. In fact, leaving a system approved for this long without further review increases its security risk. Continuous monitoring is a key step in the RMF, but it is often haphazardly implemented, with security scans sometimes occurring only monthly or even quarterly. Moreover, authorizing officials ultimately accept the risk with critical or high vulnerabilities to keep systems available for users. Instead, ATO and security posture should be continually assessed through an agreed-upon standard for security guardrails and thresholds. This continual assessment should in no way be manual. Rather, it should be baked into the day-to-day software development lifecycle through automated regression, quality, and security testing with each delivery of code.

The talent gap in modern cybersecurity

Compounding the problems with the ATO process is a talent management challenge. Many cybersecurity professionals tasked with evaluating and authorizing systems are not trained in modern software development or cloud-native architectures. Developments such as the shifts to hybrid cloud, containerized applications, and infrastructure as code have dramatically outpaced cybersecurity workforce training.

Security professionals steeped in legacy systems may treat every cloud deployment as a threat, rather than an opportunity for enhanced resilience, scalability, and automation. As a result, the process designed to manage risk often ends up misunderstanding it—focusing on outdated indicators instead of real attack vectors. In one of the ATO renewals I supported, our cybersecurity assessor subject matter experts didn’t know about cloud-hosted Kubernetes technologies, which are widely implemented across DoD software organizations. They also did not understand how to implement the Kubernetes security technical implementation guide, even though they were supposed to be assessing our security compliance. As a result, the first few days of the assessment were spent teaching assessors about containers, Kubernetes, microservices, and ephemeral IP ranges before the ATO process could move forward.

The DoD can’t automate trust, but it can automate verification. And that’s where the changes to the process must begin.

Recommendations for reform

To speed up the delivery of secure software, the DoD must rethink how it defines and manages risk. The following actions would make the ATO process more efficient, ensuring that warfighters can use the software they need to meet mission success.

  • Invest in talent management and training. The DoD must invest in a new cadre of cyber professionals who understand development security and operations, continuous integration/continuous deployment pipelines, and cloud-native patterns. This starts with developing targeted training, incentives for continuous learning, and career pathways that reward technical skills over legacy tenure. It also requires an incentive structure that holds authorizing officials accountable for delayed ATO timelines, especially for software-as-a-service products that have already received ATOs in other organizations.
  • Automate guardrails and thresholds. To embrace a continuous ATO framework, programs should implement automated security checks that enforce zero trust principles, identity policies, and vulnerability scanning. They should also require logging standards directly in the pipeline. When software is built with these guardrails from the start, this reduces the need for manual reviews, bolstering confidence in the system. That way, when code is pushed and meets the predefined security guardrails, it can go straight into production environments.
  • Reduce redundant documentation. Much of the RMF burden is paperwork for paperwork’s sake. By adopting living documentation generated from automated pipelines—like real-time architecture diagrams, test coverage, and security telemetry—the Pentagon can save thousands of hours that are currently being wasted on static Word documents no one ever reads.

The SWFT strategy: A moment for culture change

The DoD’s new Software Fast Track (SWFT) methodology, announced on May 5, offers a hopeful roadmap. SWFT aims to make software development more agile by implementing regular software releases, modern and modular architectures, and outcomes-based measures that meet warfighter needs. But to be truly transformative, it must be paired with a culture shift across the acquisition and cybersecurity communities.

Acquisition and cybersecurity personnel must move away from compliance as a box-checking exercise and toward compliance as a byproduct of good engineering. The future lies in continuous ATOs, risk quantification tools, and AI-assisted cybersecurity—if the Pentagon is willing to invest in people and process changes.

If the DoD wants to outpace its adversaries and empower its warfighters with the tools they need, it must treat secure software delivery as a warfighting imperative—not a compliance chore. The ATO process, as it stands today, is a bottleneck the United States can no longer afford.

The call to action is clear: upgrade the workforce, automate security, and embrace a cultural change toward cybersecurity compliance. SWFT provides an opportunity—now it’s time to put it into practice.


Hannah Hunt is a nonresident senior fellow with the Atlantic Council’s Forward Defense program within the Scowcroft Center for Strategy and Security and a distinguished technical fellow at MetroStar Systems. She was previously the chief of product at the Army Software Factory under Army Futures Command and chief of staff at the US Air Force’s Kessel Run.

The post The Pentagon’s software approval process is broken. Here’s how to fix it. appeared first on Atlantic Council.

]]>
Unpacking Russia’s cyber nesting doll https://www.atlanticcouncil.org/content-series/russia-tomorrow/unpacking-russias-cyber-nesting-doll/ Tue, 20 May 2025 10:00:00 +0000 https://www.atlanticcouncil.org/?p=842605 The latest report in the Atlantic Council’s Russia Tomorrow series explores Russia’s wartime cyber operations and broader cyber web.

The post Unpacking Russia’s cyber nesting doll appeared first on Atlantic Council.

]]>

Russia’s full-scale invasion of Ukraine in February 2022 challenged much of the common Western understanding of Russia. How can the world better understand Russia? What are the steps forward for Western policy? The Eurasia Center’s new “Russia Tomorrow” series seeks to reevaluate conceptions of Russia today and better prepare for its future tomorrow.

Table of contents

When the Russian government launched its full-scale invasion of Ukraine on February 24, 2022, many Western observers braced for digital impact—expecting Russian military and security forces to unleash all-out cyberattacks on Ukraine. Weeks before Moscow’s full-scale war began, Politico wrote that the “Russian invasion of Ukraine could redefine cyber warfare.” The US Cybersecurity and Infrastructure Security Agency (CISA) worried that past Russian malware deployments, such as NotPetya and WannaCry, could find themselves mirrored in new wartime operations—where the impacts would spill quickly and globally across companies and infrastructure. Many other headlines and stories asked questions about how, exactly, Russia would use cyber operations in modern warfare to wreak havoc on Ukraine. Some of these questions were fair, others clearly leaned into the hype, and all were circulated online, in the press, and in the DC policy bubble ahead of that fateful February 24 invasion.

As the Putin regime’s illegal war unfolded, however, it quickly belied these hypotheses and collapsed many Western assumptions about Russia’s cyber power. Russia didn’t deliver the expected cyber “kill strike” (instantly plummeting Ukraine into darkness). Ukrainian and NATO defenses (insofar as NATO has spent considerable time and energy to support Ukraine on cyber defense over the years) were sufficient to (mainly) withstand the most disruptive Russian cyber operations, compared at least to pre-February 2022 expectations. And Moscow showed serious incompetencies in coordinating cyber activities with battlefield kinetic operations. Flurries of operational activity, nonetheless, continue to this day from all parties involved in the war—as Russia remains a persistent and serious cyber threat to the United States, Ukraine, and the West. Russia’s continued cyber activity and major gaps between wartime cyber expectations and reality demand a Western rethink of years-old assumptions about Russia and cyber power—and of outdated ways of confronting the threats ahead.

Russia is still very much a cyber threat. Patriotic hackers and state security agencies, cybercriminals and private military companies, and so on blend together with deliberate state decisions, Kremlin permissiveness, entrepreneurialism, competition, petty corruption, and incompetence to create the Russian cyber web that exists today. The multidirectional, murky, and dynamic nature of Russia’s cyber ecosystem—relying on a range of actors, with different incentives, with shifting relationships with the state and one another—is part of the reason that the Russian cyber threat is so complex.

Policymakers in the United States as well as allied and partner countries should take at least five steps to size up and confront Russia’s cyber threat in the years to come:

  • When assessing the expectations-versus-reality of Russia’s wartime cyber operations, distinguish between capabilities and wartime execution.
  • Widen the circle of analysis to include not just Russian state hackers but the broader Russian cyber web, including patriotic hackers and state-coerced criminals.
  • Avoid the trap of assuming Russia can separate out cyber and information issues from other bilateral, multilateral, and security-related topics—maintaining its hostility toward Ukraine while, say, softening up on cyber operations against the United States.
  • Continue cyber information sharing about Russia with allies and partners around the world.
  • Invest in cyber defense and in cyber offense where appropriate.

Russia’s cyber ecosystem

Russia is home to a complex ecosystem of cyber actors. These include military forces, security agencies, state-recruited cybercriminals, state-coerced technology developers, state-encouraged patriotic hackers, self-identified patriotic hackers acting of their own volition, and more. Even Russian private military companies offer cyber operations, signals intelligence (SIGINT), and other digital capabilities to their clients. Together, these actors form a large, complex, often opaque, and dynamic ecosystem. The Kremlin has substantial power over this ecosystem, both guiding its overall shape (such as permitting large amounts of cybercrime to be perpetuated from within Russia) and leveraging particular actors as needed (discussed more below). Simultaneously, decisions aren’t always top-down, as entrepreneurial cybercriminals and hackers—much like “violent entrepreneurs” in Russian business and crime, or the “adhocrats” vying for Putin’s ear to pitch ideas—take initiative, build their own capabilities, and sell them to the state as well.

The relationships that different security agencies, at different levels, in different parts of the country and world, have with Russian hackers also vary over time. A local security service office might provide legal cover to a group of criminal hackers one day (after the necessary payoffs change hands, of course), only for a Moscow-based team to recruit them for a state operation the next. While the Kremlin has a sort of “social contract” with hackers—focus mainly on foreign targets; don’t undermine the Kremlin’s geopolitical objectives; be responsive to Russian government requests—its tolerance for a specific cybercriminal group can change on a whim, too. Security officials might take a bribe from a cybercriminal, much as their colleagues do on the regular, and still find their patrons in prison and their own wrists in handcuffs.

On the Russian government side, the principal units involved in offensive cyber operations are the Federal Security Service (FSB), the military intelligence agency (GRU), and the Foreign Intelligence Service (SVR). Russia does not have a proper, centrally coordinating cyber command; it was never launched despite attempts in the 2010s. The Ministry of Defense’s initial efforts to make one happen by circa 2014 were, it came to be understood later, overtaken by the subsequent establishment of Information Operations Troops with seemingly some coordinating functions—though experts still debate its analogousness to a “cyber command” and its level of shot-calling compared to bodies like the Presidential Administration. So while it is possible for the Russian security agencies to coordinate their (cyber) operations with one another, their engagements are marked more by competition than cooperation.

The most prominent example of this potential overlap or inefficiency is when GRU-linked APT28 and SVR-linked APT29 both hacked the Democratic National Committee in 2016, making it unclear whether each knew the other was carrying out a similar campaign. This operational friction is exacerbated by the fact that the agencies’ general remits—SVR on human intelligence, for instance, and FSB mostly domestic—do not translate to the digital and online world. All three agencies hack military and civilian targets and, for example, the FSB actively targets and hacks organizations outside of Russia’s borders. Each agency approaches cyber operations differently, too, often in line with their overall institutional cultures—such as the GRU, known for its brazen kinetic operations including sabotage and assassination, carrying out the boldest and most destructive cyber operations, contrasted with the SVR, and its emphasis on secrecy, focusing on quiet cyber intelligence gathering like in the SolarWinds campaign. Still, the Russian state agencies with cyber operations remain active threats to the United States, Ukraine, the West, and plenty of others through intelligence-gathering efforts, disruptive operations, and efforts that meld both, such as hack-and-leak campaigns.

Beyond government units themselves, the state encourages patriotic hackers—sometimes just young, technically proficient Russians—to go after foreign targets through televised and online statements (such as disinformation about Ukraine). Different security organizations, such as the FSB, may hire cybercriminals for specific intelligence operations and pay them based on the targets they penetrate. Other private-sector companies pitch their own services to the state of their own volition, bid on government contracts, and support a range of offensive capability development, research and development, and talent cultivation efforts (including defensive activities and benign or even globally cybersecurity-positive activities beyond the scope of this paper). Russian private military companies increasingly offer capabilities related to cyber and SIGINT to their private and government clients around the world, too. All the while, the state retains the capability to target specific people and companies in Russia that otherwise have nothing to do with the state, apply the relevant pressure, and compel them to assist with state cyber objectives, which it can wield to extraordinary effect.

As the historian Stephen Kotkin notes, “The Russian state can confound analysts who truck in binaries.” While there are several core themes to this ecosystem—complexity; state corruption; overwhelming tolerance for and even tacit support of cybercrime; myriad offensive cyber actors in play—Russia’s cyber ecosystem neither fits into a neat box nor is a neatly run one at that.

For all the threats these actors pose to Ukraine and the West, assuming that the Putin regime controls all cyber activity emanating from within Russia’s borders is not just inaccurate (e.g., the country’s too big; there are too many players; it’s not all top down), but is the kind of assumption that serves as a “useful fiction” for the Kremlin. It makes the system appear ruthlessly efficient and coordinated, gives disconnected or tactically myopic actions a veneer of larger strategy, and puts Putin at the center of all cyber operation decision-making. Thinking as much can, intentionally or not, further feed into the idea that the Kremlin’s motives are clear and fixed or driven by some kind of “hybrid war” strategy. It also obscures the fact that—unlike many Western countries that do, in fact, publish official “cyber strategies”—Russia does not have a defined cyber strategy document, instead drawing on a range of documents and sweeping “information security” concepts to frame information, the internet, and cyber power.

On the contrary, it is the multidirectional, murky, and dynamic nature of Russia’s cyber ecosystem that makes cyber activity subject to sudden change, feeds opportunities for interagency rivalries, contributes to effects-corroding corruption and competition, and provides the Kremlin with a spectrum of talent, capabilities, and resources to tap, direct, and deny (plausibly or implausibly) as it needs. It is in part this dynamism and multidirectional nature that makes Russia’s cyber threat so complex—as mixes of deliberate state decisions, Kremlin permissiveness, entrepreneurialism, competition, petty corruption, and incompetence blend together to create the Russian cyber web that exists today. Relationships between the state proper, at different levels, in different organizations, with nonstate cyber affiliates are often shifting; ransomware groups persistently targeting Western critical infrastructure, for example, may be prolific for months before collapsing under internal conflict and reconstituting into new groups, with new combinations of the old tactics and talent. It is also the reason that what is known to date about cyber operations during Russia’s full-out war on Ukraine provides such a valuable case study in assessing the status quo of this ecosystem—and, coupled with lessons from past incidents (like Russian cyberattacks on Estonia in 2007, Georgia in 2008, and Ukraine in 2014), helps to better weigh the future threat.

What happened to Russia’s cyber might?

Cyber operations have played a substantial role in Russia’s full-on invasion of Ukraine in February 2022 and the ensuing war. These activities range from distributed denial of service (DDoS) attacks knocking Ukrainian websites offline and Ukrainian patriotic hackers’ attacks on Russian government sites (what Kyiv calls its “IT Army”) to Russia using countless malware variants to exfiltrate data and targeting Ukrainian Telegram chats and Android mobile devices. Without getting into a timeline of every major operation—neither this paper’s focus nor possible given limits on public information—it is clear that Russian and Ukrainian forces and their allies, partners, and proxies have made cyber operations part of the war’s military, intelligence, and information dimensions.

There are many ways to define cyber power, which is by no means limited to offensive capabilities. In Russia’s case, analysts could focus on anything from Russia’s national cyber threat defense system—the Monitoring and Administration Center for General Use Information Networks (GosSOPKA), which effectively brings together intrusion detection, vulnerability management, and other technologies for entities handling sensitive information—to the enormous IT brain drain problems the country suffered immediately following the full-on invasion of Ukraine. As explored in a study last year for the Atlantic Council, Russia’s growing digital tech isolationism—both a long-standing goal and increasing reality for the Kremlin—has driven more independence in some areas, like software, while heightening dependence and strategic vulnerability in others, such as dependence on Chinese hardware. This paper’s focus, though, will remain on Russia’s offensive capabilities.

Pre-February 2022 expectations in the United States and the West, as highlighted above, were dominated by those predicting extensive Russian disruptive and destructive cyber operations. In these scenarios, Russia would leverage its state, state-affiliated, state-encouraged, and other capabilities to cause serious damage to Ukrainian critical infrastructure (telecommunications, water systems, energy grids, and so forth) and cleanly augment its kinetic onslaught. Russia would “employ massive cyber and electronic warfare tools” to collapse Ukraine’s will to fight through digital means.

To be sure, some predictions were more measured. Some pointed to the 2008 Russo-Georgian War, as an illustration of Russian forces effectively using DDoS attacks (Moscow’s shatter-communications approach) in concert with disinformation and kinetic action to prepare the battlefield, and conjectured that Moscow would do the same if it moved troops further into Ukraine. Others highlighted Russia turning off Ukrainian power grids as a possible menu option for Moscow as it escalated. Cybersecurity scholars Lennart Maschmeyer and Nadiya Kostyuk, contrary to widely held positions, argued two weeks before Russia’s full-scale invasion that “cyber operations will remain of secondary importance and at best provide marginal gains to Russia,” incisively noting that press headlines talking of “cyber war” rest on “the implicit assumption that with the change in strategic context, the role of cyber operations will change as well.” The overwhelming sentiment, though, was worry and anticipation of what some considered true, cyber-enabled, twenty-first century warfare.

But the cyber operations that unfolded immediately before and after the February 2022 invasion defied what many Western (including American) commentators were predicting. Russia didn’t deliver the cyber kill strike expected (instantly plummeting Ukraine into darkness). Ukrainian and NATO defenses were sufficient to (mainly) withstand the most disruptive FSB and GRU cyber operations, compared at least to pre-February 2022 expectations. And Moscow showed serious incompetencies in coordinating cyber activities with battlefield kinetic operations. Many experts who did not expect cyber-Armageddon per se have still been surprised by the limited impact of Russian attacks, the focus on wiper attacks (that delete a system’s data via malware) and data gathering over critical infrastructure disruptions, and apparent poor coordination between cyber and kinetic moves made by the Russian Armed Forces and intelligence services.

What, then, explains the gulf between expectations—decisive moves, cleanly executed operations, and visible results—and reality, with some operations, certainly, but the overwhelming focus on kinetic activity and far less on destructive cyber movement than anticipated? Scholars and analysts have, since February 2022, put forward several buckets of hypotheses.

Various commentators argue, as National Defense University scholar Jackie Kerr compiles and breaks down, that Russia’s weak integration of cyber into offensive campaigns was symptomatic of broader problems with Russian military preparations for full-on war; that Western observers simply overestimated Russia’s cyber capabilities; that poor coordination and competition between Russian security agencies impeded operational success; or that Ukraine’s cyber defenses have been extraordinarily robust. Some have gone so far as to attribute Ukrainian cyber defenses, backed up by Western allies and partners, as the primary reason for Russian offensive failures. Russia cyber and information expert Gavin Wilde argues that Russia focused on countervalue operations (against civilian infrastructure, to demoralize political leaders and the public) more than counterforce operations (against Ukrainian military capabilities), to little effect, “a sign of highly sophisticated intelligence tradecraft being squandered in service of a deeply flawed military strategy.”

Professors Nadiya Kostyuk and Erik Gartzke write that Russia’s full-on war on Ukraine is about territory and physical control, making physical military activity far more important than cyber operations themselves. Cyber scholar Jon Bateman argues that traditional signals jamming and Russia’s cyberattack against the Viasat satellite communications system, coupled with a chaotic slew of data-deletion attacks, may have helped Russia initially—but that cyber operations from there had diminishing novelty and impact. Russia’s poor strategy, insufficient intelligence preparation, and interagency mistrust have been presented as causes for undermining Russia’s cyber-kinetic strike coordination, too. Others argue that Russians wanted to gather intelligence from Ukrainian systems more than disrupt them, that Russia’s information-focused troops have been more optimized for propaganda than cyber operations, and that cyber scholars’ and pundits’ expectations were plain wrong given that Russia wanted to inflict physical violence on Ukraine more than achieve cyber-related effects—necessitating bombs, missiles, and guns over malware, zero days, and DDoS attacks.

In reality, of course, many factors are likely in play at once. Plenty of the above scholars and commentators recognize this multifactorial situation and say it outright (although a few do push a single prevailing explanation for the war’s cyber outcomes). However, it’s worth explicitly stressing that many factors coexist, in light of occasional efforts to provide reductive explanations for complex wartime activities and effects. Concluding that Russia is no longer a cyber threat, for instance, is wrong. While Ukraine as a country has demonstrated extraordinary will and resilience, and while Ukrainian cyber defenses have been more than commendable, explanations that place the rationale solely on formidable Ukrainian cyber defenses are likewise reductive. Taking such explanations as fact simplifies the many factors involved and can veer analysis and debates away from the policy actions that are still needed, such as continued cyber threat information sharing between the United States and Ukraine.

The above, plausible, evidence-grounded explanations are not mutually exclusive. FSB officers, rife with paranoia, conspiratorialism, and a Putin-pleasing orientation, did indeed grossly misinterpret the situation on the ground in Ukraine in 2022 and fed that bad information to the Kremlin, potentially skewing assessments of cyber options as well.

Interagency competition may very well have undermined, once again, the ability of the FSB, GRU, and SVR to coordinate activities with one another, let alone with the Ministry of Defense and Russian proxies in Belarus, and therefore hampered more effective planning, coordination, and execution of cyber operations. For example, during the war’s initial stages, elements of the SVR may very well have sought to technically gather intelligence from targets that GRU- or FSB-tied criminal groups were indiscriminately trying to knock offline or wipe with malware, thrusting uncoordinated activities into tension.

Like in every other country on earth, Russian cyber operators are additionally subject to resource constraints: A hacker spending a day on breaking into a Ukrainian energy company is a hacker not spending time on spying on expats in Germany or setting up a collaboration with a ransomware group. Competition, therefore, not just between agencies—turf wars, budget fights, who gets the primary jurisdiction over Ukraine, and so forth—but within them, over who gets to spend what time and resources targeting which entities, sit within broader Russian government calculi over cyber, military, and intelligence operations. And, among others, Russia’s overall strategy did lead to bad moves, as Wilde and others have noted, with limited effect and burning away Russian capabilities (like exploits) in the process. Recognizing these many likely factors will facilitate better analysis of where Russia stands.

The gap between the imagined, all-out “cyber war” and the past three years’ reality also begs the question of whether the right metrics were considered in the first place. As much as cyber capabilities are inextricable from modern intelligence operations, and as much as cyber and information capabilities are embedded throughout militaries around the world, war is obviously about far more than cyber as a domain. But experts studying cyber all day, every day, may fall into the unintentional trap (as anyone can) of having their area of study become the focal point of analysis in a war with many moving pieces and considerations—hence, some of the commentary anticipated Russian destruction of Ukraine to happen through code, compared to a range of military weaponry. Academic theories, moreover, of how cyber conflict will unfold in political science-modeled simulations or think tank war games may similarly fail to map to battlefield realities, such as generalizing how cyber fits into warfare without adequately considering unique contexts in a country like Russia. Layered on top of all this—in the academies, in the media, in the data and artificial intelligence (AI) era—is a frequent desire to quantify everything, too, obscuring the fact that not everything can be effectively, quantifiably measured and that counting up the number of observed Russian cyber operations and scoring them may still not get to the heart of their inefficacy. Clearly, as US and Western perspectives on Russian cyber power shift with more information and time, it is worth rethinking Russia’s future cyber power—not just for how the West can recalibrate its assumptions and size up the threats, but in how the West can prepare to act and respond in the future.

Unpacking the (cyber) nesting doll

The takeaway from comparing predictions and reality shouldn’t be that pundits are always wrong or that Russia’s cyber operations are considerably less threatening in 2025. Nor should it be that Ukraine is propped up solely by Western government and private-sector cyber defenses, and that Russia is simply waiting to unleash a devastating cyber operation to end it all.

Russia remains a sophisticated, persistent, and well-resourced cyber threat to the United States, Ukraine, and the West generally. This is not going to change anytime soon. Kremlin-spun “crackdowns” on cybercrime (arrests that were little more than public relations stunts), frenetic talk of US-Russia rapprochement, and wishful thinking about Putin’s willingness to cease subversive activity against Ukraine do not portend, as some might suggest, that the United States can sideline Russia as a central cyber problem—and focus instead on China.

The Russian government views cyber and information capabilities as key to its military and intelligence operations, and the Kremlin still has one top enemy in its national security sights: the United States. Outside the Russian state per se, a range of ransomware gangs and other hackers in Russia will continue targeting companies, critical infrastructure, and other entities in the United States, Ukraine, and the West, too. There are at least five steps US policymakers and their allies and partners should take to size up this threat—against the full scope of Russia’s cyber web and integrating lessons learned so far from Russia’s full-out war on Ukraine—and confront it head-on in the coming years.

When assessing the expectations-versus-reality of Russia’s wartime cyber operations, distinguish between capabilities and wartime execution. Clearly, Russian offensive cyber activity during its full-on war against Ukraine has not matched up against Western assumptions that envisioned a cyber onslaught that turned off power grids, disrupted water treatment facilities, and blacked out communications. Evaluating how and why Russia did not make this happen is critical to understanding Russia’s operational motives, play-by-play planning and coordination between security agencies, targeting interests, and much more. But analysts and media must be careful to avoid thinking that Russia’s cyber capabilities themselves are weak. Clearly, when Russian hackers put the pedal to the metal, so to speak—ransomware gangs targeting American hospitals, or the GRU going after Ukrainian phones—they can deliver serious results. A better approach is policymakers and analysts in the United States, as well as in allied and partner countries, breaking out Russia’s continued cyber threats across ransomware, critical infrastructure targeting, mobile-device hacking, and so on while pairing the capabilities against where execution could fall short in practice. Doing so will give a better sense of Russia’s cyber strengths and weaknesses—and distinguish between the different components of carrying out a cyber operation.

Widen the circle of analysis to include not just Russian state hackers but the broader Russian cyber web, including patriotic hackers and state-coerced criminals. Focusing Western intelligence priorities, academic studies, and industry analysis mainly on Russian government agencies as the primary vector of Russian cyber power loses the importance of the overall Russian cyber web. Putting the focus mostly on Russian government agencies also loses, as my colleague Emma Schroeder has unpacked in detail, the role that public-private partnerships have played in cyber operations and defenses in the conflict, and the opportunity to assess similar public-private dynamics on the Russian side. Conversely, making sure to consider the roles of government contractors, military universities, patriotic hackers, state-tapped cybercriminals, and other actors as described above should help to fight the temptation to treat all Russian cyber operations as top-down—and illuminate the many ways in which Russia can build capabilities, source talent, and carry out operations against the West. Understanding these actors will allow for better tracking, threat preparation, defense, and, where needed, disruption.

Avoid the trap of assuming Russia can separate out cyber and information issues from other bilateral, multilateral, and security-related topics—maintaining its hostility toward Ukraine while, say, softening up on cyber operations against the United States. Whether the US government can or cannot separate out cyber issues vis-à-vis Russia from other elements of the US-Russia relationship (e.g., trade, nuclear security), Western policymakers should avoid the trap of assuming the Russian government is currently capable, let alone willing, of genuinely and seriously doing the same: separating out its cyber activities from other policy and security issues.

The Russian government has come to view the internet and digital technologies as both weapons that can be wielded against the state and weapons to use against Russia’s enemies. In this sense, cyber operations (as well as information operations) are core not just to Moscow’s approach to modern security, military activity, and intelligence operations but, perhaps more importantly, to the Kremlin’s conceptualization of regime security as well. Paranoia and propaganda about fifth columnists (with, sometimes, one feeding the other), persistent efforts to crack down on the internet in Russia, and a continued belief that Western tech companies and civil society groups are weaponizing the internet to undermine the Kremlin, mean that the regime will not truly believe it can put “information security” on the sidelines—and that includes not just internet control but cyber operations. Policymakers must go into diplomatic and other engagements with Russia with their eyes wide open.

Continue cyber information sharing about Russia with allies and partners around the world. For years, military and intelligence scholars and analysts have referred to Russia’s actions in Georgia, Ukraine, and other former Soviet republics as a “test bed” or “sandbox” for what Russia might do in other countries. It would be a strategic, operational, and tactical mistake to think that Russian cyber operations against Ukraine are just confined to Ukraine and that two-way information sharing with Ukraine about cyber threats is a waste of time and resources. Quite the opposite: Russia’s cyber and information activities against Ukraine today can give the United States and its allies and partners critical insights into the types of capabilities and operations that could, and very well might be, carried out against them at the same time or days or months later. Whether hack-and-leak operations designed to embarrass political figures, wiper attacks designed to destroy government databases, espionage operations, or anything in between, having real-time information about Russian cyber threats will only help the United States and its allies and partners better defend their own networks and systems against hacks and attacks.

Invest in cyber defense and in cyber offense where appropriate. Persistent, sophisticated Russian cyber threats to a range of key US and allied and partner systems—military networks, hospitals, financial institutions, critical infrastructure, advanced tech companies, civil society groups—demand continued investments in cyber defense. In addition to information-sharing, the United States and its allies and partners need to continue prioritizing market incentives for companies to enhance cyber defenses along with baseline requirements for essential measures such as multifactor authentication, detailed access controls, robust encryption, continuous monitoring, network segmentation, resourced and empowered cybersecurity decision-makers, and much more. Just as the Russians clearly possess a range of advanced cyber capabilities, any number of recent operations, including against Ukraine, show that Russian operations (like those carried out by many other powers) continue to succeed with basic moves such as phishing emails. The United States and its allies and partners need to continually increase cyber defenses. And, where appropriate, the United States and its allies and partners should ensure the right capabilities and posture to carry out cyber offensive operations—including to preemptively disrupt Russian attacks (the “defend forward” euphemism). As the Kremlin is more paranoid and conspiratorial, the notion of diplomatic talks and establishing cyber redlines is less and less realistic. Active mitigation and disruption of threats, rather than relying too heavily on diplomatic meetings or endless criminal indictments, are together a more feasible approach to protecting US and allied and partner interests against Russian cyber threats in the years to come.

Conclusion

Lessons from cyber operations—and about cyber operations and capabilities—from the Russian full-on war against Ukraine will continue to emerge in the coming years. This trickle of information may slowly dissipate some of the “fog of war” surrounding the back-and-forth hacks and shed much-needed light on issues such as coordination and conflict between Russian security agencies in cyberspace.

For now, however, the issue for the United States is clear: Russia remains a persistent, sophisticated, and well-resourced cyber threat to the United States and its allies and partners around the world. The threat stems from a range of Russian actors, and it stands to continue impacting a wide range of American government organizations, businesses, civil society groups, individuals, and national interests across the globe. As wonderful as the idea of cyber détente might be, Putin’s paranoia about Western technology, Russian officials’ insistence that the internet is a “CIA project” and Meta is a terrorist organization, and military and intelligence interest in conflict and subversion against the West will not evaporate with a wartime ceasefire or a newfound agreement with the United States. These are hardened beliefs and fairly cemented institutional postures that are not going to shift under the current regime.

Rather than dismissing Russia’s cyber prowess because of unmet expectations since February 2022, American and Western policymakers must size up the threat, unpack the complexity of Russia’s cyber web, and invest in the right proactive measures to enhance their security and resilience into the future.

Acknowledgements

The author would like to thank Brian Whitmore and Andrew D’Anieri for the invitation to write this paper and for their comments on an earlier draft. He also thanks Gavin Wilde, Trey Herr, Aleksander Cwalina, Ambassador John Herbst, and Nikita Shah for their comments on the draft.

About the author

Justin Sherman is a nonresident senior fellow with the Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; an incoming adjunct professor at Georgetown University’s School of Foreign Service; a contributing editor at Lawfare; and a columnist at Barron’s. He writes, researches, consults, and advises on Russia security and technology issues and is sanctioned by the Russian Ministry of Foreign Affairs.

Explore the programs

The Eurasia Center’s mission is to promote policies that strengthen stability, democratic values, and prosperity in Eurasia, from Eastern Europe in the West to the Caucasus, Russia, and Central Asia in the East.

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Related content

The post Unpacking Russia’s cyber nesting doll appeared first on Atlantic Council.

]]>
Counting the costs: A cybersecurity metrics framework for policy https://www.atlanticcouncil.org/in-depth-research-reports/report/counting-the-costs/ Tue, 06 May 2025 12:00:00 +0000 https://www.atlanticcouncil.org/?p=844324 Improved cybersecurity metrics can unlock more efficient policy and give policymakers a better sense of how they are faring at improving security.

The post Counting the costs: A cybersecurity metrics framework for policy appeared first on Atlantic Council.

]]>

Table of contents

Executive summary
Introduction
Two problems
Reframing cybersecurity metrics
The cyber metrics state of play
Reading the curves: Interpreting outcome data
Starting construction: Two changes
Conclusion
Acknowledgments

Executive summary

US cybersecurity policy has a critical blind spot: the absence of reliable outcome metrics that can inform policymakers about whether the digital ecosystem is becoming more secure and which interventions are driving progress most effectively. Despite years of strategies, regulations, and best-practices campaigns, the field of cybersecurity metrics has room to grow, and policymakers still lack answers to fundamental questions. How much harm are cybersecurity incidents causing? Are things getting better or worse? Which policies deliver the greatest return on investment for reducing realized harm and the risk of future harm?

This report identifies two core problems holding back progress: first, the unknown state of the system, meaning policymakers cannot empirically describe how secure or insecure the digital landscape currently is; and second, unmeasured policy efficacy, which prevents policymakers from comparing which interventions are most effective at improving security and reducing harm. The result is a policymaking environment heavily reliant on intuition, anecdote, incomplete data, and proxy measures—all unsustainable for a domain with such systemic and escalating risks and so much security investment. To address these challenges, the report proposes a reframing of cybersecurity metrics along two dimensions:

  1. Treating cybersecurity as a complex system—acknowledging that incident outcomes result from dynamic, probabilistic interactions between policies, technologies, adversaries, and users.
  2. Focusing on harm as the key outcome metric—shifting emphasis from internal system attributes (e.g., the number of vulnerabilities discovered) to the real-world impacts of cyber incidents, such as financial losses, operational disruptions, and physical damage.

The report then explores the current limitations of available metrics, illustrating how wide-ranging estimates of incident costs and inconsistent data collection methods hamstring policymakers. It outlines the difficulty of measuring and interpreting harm data at scale due to factors such as silent failures, complex indirect costs, and underreporting, but it argues that such challenges are not insurmountable and that a desire for perfect metrics cannot impede progress toward better ones. Finally, the paper offers two actionable recommendations for near-term progress:

  1. Strengthen existing reporting requirements (e.g., CIRCIA, SEC disclosures) to include consistent, updated measures of incident impact.
  2. Centralize responsibility under a single federal entity to aggregate, analyze, interpret, and publish cybersecurity harm data across sectors.

While perfection in cybersecurity metrics may be impossible, measuring harms is the most direct way to track progress and guide investment and the most critical metric to bolster policymakers’ toolkit. Without such measurement, the United States risks continuing to navigate a complex, evolving system with an incomplete map.

Introduction

A recurring theme in cybersecurity policy is the failure to quantitatively describe the end state toward which it aims, or even to enumerate what metrics should be measured to that end. How many incidents occur, how much damage do they cause, and to whom? If these are the metrics to consider, what is their desired level and by how much does cybersecurity need to improve to get there? And if not these metrics, then which?

In rare moments when policymakers clearly define cybersecurity outcomes, they tend toward absolutes of dubious achievability; for example, “prevent catastrophe” and “defeat ransomware.”1 Even complex legislation and national strategies,2 while attempting to alter the incentives around building and using technology, rarely offer more than a glancing, qualitative description of what they strive for—a far cry from the clear, numerical state measurements and milestones in other spheres of public policy, such as inflation and unemployment rates for the Federal Reserve.

Even though more empirically developed policy fields such as economics still face routine crisis, US cybersecurity policymakers must adapt to the dizzying complexity, rate of change, and potential impact of failure in today’s digital systems by taking exactly that step toward better measurement. It is critical to understand the current state of cybersecurity, set quantitative goals for its improvement, and assess the efficacy of government policies against those goals. “Intuition alone is insufficient to manage a complex system,” as former National Cyber Director Chris Inglis put it.3 Without specifying target outcomes, there is little incentive to establish critical baseline measures in the first place. Identifying the effectiveness of specific policies at improving security and the cost of their implementation is a step even farther, and the quantitative toolkit required for the US government to make that step has not yet been created. The novelty and dynamism of the digital domain mean that policy missteps will happen, but without that toolkit, identifying which remedies fall short and which succeed—let alone by how much—will remain extraordinarily difficult, if not impossible, all while the rapid integration of digital systems across all levels of society increases the impacts and risks of cyber incidents.

This paper aims to reboot and reorient a long-simmering debate around cybersecurity metrics for the policy community. It starts with context about the state of and need for better cybersecurity measurement by discussing two central and related problems created by the field’s empirical immaturity:

  • Insufficient cybersecurity metrics mean that government cannot empirically assess, across the digital domain, whether cybersecurity is good or bad, improving or deteriorating.
  • Insufficient cybersecurity metrics also complicate the task of evaluating and prioritizing security practices and policies based on their efficacy.

After discussing these two problems, this paper offers two framings for cybersecurity metrics critical to improving their usefulness to policymakers: treating cybersecurity as a complex system and measuring harms. The guiding thesis of this paper is that the harms, in the broadest sense, caused by cyber insecurity are the most important outcome metrics for policymakers. Harms here refers to the bad things caused by cybersecurity incidents, from direct loss of money to intellectual property theft, from the compromise of national security information to the erosion of competitive economic advantage. Metrics for those harms at the macro level are an essential tool for policymakers seeking to manage and improve cybersecurity. After all, cybersecurity policymakers’ driving mandate is to reduce realized harms and the risk of future harm as much as reasonably possible, whether through increasing economic competitiveness, securing critical infrastructure, imposing costs on adversary activities, managing strategic competition, or any number of methodological priorities.

This paper does not claim a lack of effort in policy or technical circles at quantifying security, and indeed elements in both communities have been trying admirably for quite some time.4 Moreover, even without a broad base of empirical data, policymakers make much use of threat intelligence, observed trends, risk assessments, and other sources of evidence. Instead, this paper suggests a starting point for identifying, measuring, and analyzing cybersecurity outcomes with the goal of reorienting and rebooting these debates rather than arriving at a final answer. After discussing cybersecurity as a complex system and outcomes in terms of harms, this paper analyzes different approaches to interpreting outcome data. Finally, this paper proposes initial policy steps toward improving cybersecurity outcome data.

Importantly, these recommendations do not aim at some final architecture for perfect cybersecurity statistics—such policy systems take time, trial, and error to create in any field. Instead, they combine practical changes and a broader policy reframing to move the needle of cybersecurity policy toward realistic empiricism, while recognizing the risks of both cynicism and perfectionism. Empirically characterizing cybersecurity at the macro level and the efficacy of specific security policies is difficult but not hopeless. And while no policy system for metrics is perfect—debates in more matured fields such as public health, law enforcement, and economics abound—that does not render them all useless.

Two problems

Unkown system state: What is “the problem”?

The first issue created by insufficient cybersecurity metrics is that they leave policymakers with no concrete way to describe the current degree of harm caused by insecurity. More than a decade ago, Dan Geer listed several fundamental cybersecurity questions offered in the context of a conversation with a firm’s chief information security officer (CISO): “How secure am I? Am I better off than this time last year? Am I spending the right amount of [money]? How do I compare to my peers?”5 These questions are as important for policymakers, and as difficult for them to answer, as when originally posed in 2003.6 The primary US cyber policy coordinator, the Office of the National Cyber Director (ONCD) argued in 2024 that they were not answerable at all. A Government Accountability Office (GAO) report on the 2023 National Cybersecurity Strategy (NCS) criticized the NCS for its lack of “outcome-oriented performance measures,” as well as ignoring “resources and estimated costs,” to which the ONCD responded that “such measures do not currently exist in the cybersecurity field in general,”7 and the claim rings true. Current cybersecurity metrics and the field’s state have, after at least two decades, failed to provide policymakers with ways to answer the foundational question “how are we doing at cybersecurity?” at the highest level.

And yet, a general intuition that the current state of US cybersecurity is suboptimal animates industry, government, and the public alike. Headlines dominated by costly cybersecurity incidents and predictions that things will deteriorate without drastic change feed this perception. For example, former US Deputy National Security Advisor Anne Neuberger summarized data from the International Monetary Fund (IMF) and Federal Bureau of Investigation (FBI) data as suggesting that “the average annual cost of cybercrime worldwide is expected to soar from $8.4 trillion in 2022 to more than $23 trillion in 2027.”8At appreciable fractions of global GDP, these are dire numbers that all but mandate extreme intervention. The hypothesis behind this metric is that the current amount of harm caused by cyber incidents could be reduced by interventions less costly than the consequences of their absence. But intervention against what, and how? Testing and refining that thesis with quantitative data is a critical first step too often overlooked—how much harm do cyber incidents cause? How much would it cost to implement recommended interventions? How much harm would they prevent? Is the cost of preventing security incidents actually lower than the costs that those incidents impose? And above all, if the current level of harms is deemed unacceptable, what would be considered acceptable? Current metrics are unable to provide answers at a scale useful to policymakers, leaving them with no baseline measures against which to judge policy efficacy.

In absence of this key outcome data, cyber policy conversations frame metrics as, at best, an after-action exercise for validating efficacy, rather than the first critical step in defining the problems they seek to solve. Even then, empirical impact assessments are rare. The NCS’s “Assessing Effectiveness” section underlines this, providing just one paragraph on the strategy’s final page, with a key progress report that failed to materialize before the change in administration.9 The document’s accompanying implementation plan (the National Cybersecurity Strategy Implementation Plan, or NCSIP) reduces assessment to determining whether proposed policies were enacted and whether a budget for them was created, and nothing more.10 These are useful measures of output for policymakers, but do little if anything to track empirically how implementing the NCS changes the cybersecurity landscape; the strategy largely forgoes assessing its external impact, focusing instead on implementation—a familiar state for cyber policy, which more often concerns itself with adoption rates and completion progress than tangible effect on security outcomes.11 If policymakers cannot, from the outset and at a high level, measure how they are doing at cybersecurity, all follow-on policy rests on a flawed foundation and it will be difficult to empirically demonstrate success.

Policymakers must use cybersecurity metrics as the foundation for characterizing the status quo, identifying specific problems with it, and shaping solutions. When the GAO asks what outcomes would demonstrate the success of the NCS, the ONCD should be able to respond by pointing to the very issues and data motivating the creation of NCS in the first place. The usefulness of measuring incident costs is relatively uncontroversial and has long frustrated policymakers—see for example a 2020 CISA study on just that problem and its associated challenges.12 However, both cybersecurity policymaking writ large and efforts to imbue it with better metrics would benefit greatly from approaching metrics as a step toward problem definition first, then as solution assessment. Otherwise, the logical chain of cyber policymaking is broken, producing unbounded solutions with no clear, quantified statement of the problems they hope to solve, and thus no clear outcomes to strive for and measure success against.

Policymakers and practitioners are right to lament the dearth of cybersecurity statistics to inform their work, but they cannot afford to wait for the empirical field to mature on that same decades-long trajectory—they must proactively work to define, gather, and respond to cybersecurity metrics. It is unlikely that government can avoid a central role in gathering macroscale metrics and wait for the data they need to be developed for them. Monetary policy is guided by and assessed against the Consumer Price Index (CPI) and the unemployment rate, both of which are measured by the US Bureau of Labor Statistics. National crime statistics are collated and analyzed through the FBI’s Uniform Crime Reporting Program. The Center for Disease Control’s National Center for Health Statistics gathers a variety of public health metrics from across the country, as well as globally. Each of these programs is the result of decades of iterative policymaking and partnerships with experts in industry, academia, state and local governments, and civil society. The federal government has the clearest incentives and best means to gather metrics on a scale sufficient to describe the full ecosystem and assess policy efforts to shape it. Policymakers do require better cybersecurity metrics to guide them, but they have an active role to play in creating those tools.

For cybersecurity, some nascent policies might provide useful insight on data gathering and starting points for more matured, coordinated programs: for example, the FBI’s Internet Crime Complaint Center (IC3) database,13 the upcoming implementation of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA),14 the Securities and Exchange Commission’s (SEC) material cyber incident reporting requirements,15 and so on. All either currently or soon will gather data on cybersecurity incidents, but there is little consensus about what to measure and how, and worryingly little progress toward data collection at the ecosystem scale.16 For a young field—cybersecurity dates back to the 1970s as a defined field at the earliest, whereas econometrics began developing in the early 1930s—that status quo is understandable, but untenable.17

Unmeasured efficacy: What interventions address “the problem” best?

Second, insufficient cybersecurity metrics leave policymakers without measures of how effective specific policies are, meaning they can do little to prioritize or update policy interventions based on metrics. Policymakers are in the business of battling with long-perceived market inefficiencies that lead firms to under- and mis-invest in cybersecurity.18 For now, they do so through recommendations and requirements about security practices and reporting for certain sectors, products, and entities. The past few years have seen a flurry of movement in cyber policy, from the National Cybersecurity Strategy and its dozens of implementation objectives to agency-led efforts such as CISA’s Secure by Design (SBD) Initiative and the SEC’s new cyber incident reporting requirements, several critical executive orders, and even an effort designed to harmonize the many existing and forthcoming regulations.19

Choosing the initiatives to pursue and those to reinvent or discard requires an understanding of their ultimate impact on cybersecurity outcomes. Determining which policies are effective—when measured against the cost of their implementation—requires quantifying the costs of incidents that they prevent or mitigate. A firm’s ability to decide which SBD principles to prioritize necessitates understanding their cost and efficacy. And yet there are only early efforts at ranking these practices by their effectiveness, which challenges any attempt to identify the most urgent security practices or product security features to implement.20 In short, no one knows what the best thing to do is, whether that be policymakers deciding what practices to require or industry deciding which to implement, only a great number of security practices that are probably good to try.

This is more than simply an optimization challenge. Seemingly potent security controls can lead to unexpectedly poor outcomes, especially in a complex system. For example, the National Institute for Standards and Technology (NIST) prescribes security practices for federal agencies and their contractors, and industry writ large often uses its guidance documents as a starting point for security policies even when a company’s compliance is not required. One such publication, NIST SP-800-63B, offers recommendations on digital identity systems, including guidance about account credentials. Past versions of the document suggested the use of complex characters (a mix of numbers, capital and lowercase letters, and special symbols) and frequent password resets to prevent attackers from using dictionaries of common passwords to quickly guess their way into account access. The thinking was that complex characters would require attackers to brute force passwords (i.e. guess all possible combinations of characters in a password), and that the frequent rotation of credentials would limit the window of time in which attacks could guess a password successfully, since attackers would need to start over after every rotation. The reality was different. Users rotated between similar passwords, often repeating old ones, and attackers developed dictionaries to quickly guess at common, easily remembered uses of complex characters like the suffix “123!” and substituting numbers for letters.21 In other words, the intuition behind the practice was sound, but the ecosystem (users, here) reacted in a way that made the recommended practice insecure and costly.

Without metrics to provide an empirical understanding of the tradeoffs that recommended security practices create in practice, policymakers remain at risk for similar situations. For example, inconvenient authentication requests from multi-factor authentication (MFA) might lead users to share credentials in an insecure manner; rewriting software into notionally memory-safe programming languages might be effective at improving security but more costly than the incidents it prevents; or zero trust architectures might fail to meaningfully improve security across the digital ecosystem so long as they are not adopted past some unknown threshold. Without improving cybersecurity metrics, there is simply no way to know how new practices interact with the full ecosystem.

Reframing cybersecurity metrics

To address the connected problems cited above, policymakers must take two critical steps to reframe and develop their approach to empirical cybersecurity: to treat the digital domain as a complex system, and to measure incident harms as their key guiding outcome metric. These are closely related—understanding causality within a complex system and making predictions based on the arrangement of that system at any point in time are immensely difficult. Instead, focusing on the system’s outcomes (here, incident harms) over the system’s specific characteristics at a point in time (e.g., the adoption rate of memory-safe languages) will help policymakers avoid the trap of claiming progress in shaping behaviors without producing evidence that said behaviors have improved the cybersecurity status quo.

Treating cybersecurity as a complex system

Treating the cybersecurity landscape as a complex system-of-systems is key to assessing its status quo. This is the fundamental mandate for policymakers—to reduce bad cybersecurity outcomes across the board,22 and not just for the handful of firms that can measure their own implementation and outcomes well. Accordingly, visibility into as much of the ecosystem as possible is critical. A systems approach also helps policymakers deal with the domain’s complexity, which might lead to unforeseen interactions between policy interventions, technology design choices, and cybersecurity outcomes. The digital ecosystem has two key features that, unaccounted for, could mislead policymakers significantly as they approach improving its security: probabilistic incidents and extraordinary dynamism.

First, there is no deterministic formula to predict whether a cybersecurity incident will occur, when, or with what severity. An entity with extraordinary security practices might find themselves the target of an extremely sophisticated adversary or might remain critically vulnerable because of one simple oversight. Equally, a firm with poor security practices might avoid compromise by pure luck. While this probabilism is somewhat self-evident, it means that data with too small a sample size over too short a duration could significantly mislead policymakers. For example, observing fewer bad outcomes for a specific sector might indicate that changes to security practices in that field are stumping attackers who are now comprising fewer targets in general, or instead that attackers have simply moved on to another sector for any number of reasons without a net change in the ecosystem. There are hard limits on the usefulness and broad applicability of data provided on or by a handful of firms over a few years, and yet the majority of cybersecurity data available to the public today is often presented in the form of corporate annual reports.

Second, the ecosystem is constantly and rapidly changing and interacting with itself. Adversaries in the digital ecosystem are adaptive, the technologies they target change daily, the incentives of firms building technologies and those using them are in constant flux, and so on. Dynamism and unexpected interactions have consequences for measurement. By way of example, recall the NIST password guidance example cited earlier. All else remaining equal, passwords immune to dictionary attacks and changing too often to be brute forced would reduce account compromises, but all else does not stay equal in a complex system. The guidance changed user behavior in way that made accounts more vulnerable instead. Similarly, the relationships between security practices and outcomes are not immutable—techniques that stop would-be attackers one year might do little to slow them down the next as they refine their tactics and develop new tools. Capturing data on how outcomes in the entire digital domain shift over time is critical if policymakers hope to understand and manage it as a complex system. This should increase the urgency with which policymakers strive to better measure cybersecurity outcomes, as the relative lack of historical data means it will take time for newly gathered data to be of significant use.

To illustrate these dynamics in practice, consider the straightforward government-led disruption campaigns that the National Cybersecurity Strategy recommended,23 in which law enforcement organizations or the military attack the infrastructure of malicious actors to prevent their campaigns from causing harm. Fewer attackers carrying out less malicious activity should be a boon to the ecosystem, and the US government (with international partners’ assistance) accordingly increased the pace of its disruption operations through a combination of sanctions, prosecutions, and offensive cyber activities as part of its Counter Cybercrime, Defeat Ransomware strategic objective.24 And yet, Microsoft measurements appeared to show that the volume of ransomware attacks nearly tripled in the final months of 2024.25 Without vastly improved cybersecurity data, it is difficult what to make of these two facts. It might be that disruption campaigns mitigated some attacks, tempering cybercrime even as it continued to grow—if for example, without those disruption operations, ransomware attacks might have quadrupled. Alternatively, the expensive government interventions might have had little impact on the efforts of attackers who could easily buy or write new malware, procure new command and control servers, and move on to less well-defended targets. The disruption campaigns in this model might simply have prevented attacks against specific targets but shifted the attention of the attackers to undefended victims without a net effect. A third possibility is somewhere in between—disruption campaigns might work to reduce incidents at the ecosystem scale but with little return on investment. The thwarted incidents might’ve been drops in the ocean of cyber malfeasance not meriting the cost of disruption. Without macro-scale data or insight into specific adversary decision-making, there is no real way to know which of these models applies over a relatively narrow timeframe, let alone historical data.

The graphic below illustrates a high-level mapping of the digital ecosystem as a complex system, sorting potential metrics into three categories: inputs, attributes, and outcomes. Inputs are forces, policies, and decisions that are largely external to the digital ecosystem, though no doubt shaped by it. These are the incentives that drive decision making within the ecosystem, its technological design and development, and so on. By far the two most dominant inputs are market incentives and policy choices, which drive investment, design, and decision making within the cyber ecosystem. Attributes are measures or descriptors of the ecosystem itself. Within the ecosystem, firms, attackers, defenders, IT infrastructure, and connected real-world systems all interact at a vast scale and rapid pace in a blend of technical, social, and economic subsystems. These attributes provide the vast majority of cybersecurity metrics available today—for example, vulnerability counts and severity, incident frequency, and the adoption rate of various security practices and products. Parsing the ecosystem and its specific components—its attributes—provides much utility, especially to specific entities within it, but that analysis must be taken with a grain of salt. The ecosystem is constantly changing, its various components interact with different degrees of coordination, and how those forces balance out in the long run is difficult to understand, let alone predict. This system-of-systems produces outcomes in the form of benefits (the efficiency, productivity, and innovation enabled by the digital ecosystem) and harms—the material damage caused by incidents.

Figure 1

The goal of this mapping is to highlight how policymakers like those at the ONCD, CISA, or similar are interacting with the digital ecosystem at a different scale than firms and individuals. Many of the metrics useful to an individual firm are attributes, and they take on different meanings and behaviors for those concerned with system-of-systems security. For instance, vulnerability counts might tell a cloud provider what problems they have to fix, how often it creates those problems for itself, and how much effort to invest in patching. However, for policymakers, vulnerability counts indicate some vague blend of deficiency in technology design and success in vulnerability detection. Moreover, at the ecosystem scale, attributes interact with each other and outcomes in unpredictable or unknown ways—for example, it is unclear how attacker behavior adjusts to security practice changes at scale and with what effect on outcomes.

Importantly, this framing is not a call to anticipate all possible interactions or comprehensively measure all attributes. Such an approach to the management of a complex system is impractical. Rather, the complex system framing should highlight the importance of outcome measurements as a way for policymakers to navigate complexity or at least evaluate its consequences for the full set of stakeholders under their remit.

Measuring harms as outcomes

Taken together, the two abovementioned issues—an unknown system state and interventions with unmeasured efficacy—put policymakers in a difficult position. It is as if the Federal Reserve lacked data on unemployment rates and inflation while, at the same time, not knowing which policy tools most effectively influence those economic outcomes and how the rest of the economy reacts to their use. The task of assessing efficacy is difficult in the absence of data measuring realized harms. The Federal Reserve could not begin to know whether its interest rate hikes tempered inflation if the Bureau of Labor Statistics did not calculate the CPI. The cybersecurity arena resembles this, with policy more often being a response to singular incidents and anecdotes than to hard data, and with myriad vendors offering cybersecurity solutions in what could be charitably described as “a market for silver bullets” while at the same time producing much of the data currently available to inform policymaking.26 Past incidents and subjective anecdotes are helpful for policymakers, to a certain extent, and security products are not all ineffective. However, heuristics and hunches are only half a solution in managing the complexity of the cyber ecosystem. Metrics are the other critical and conspicuously absent component, and the first step to developing solid, ecosystem-wide metrics is figuring out what to measure and how.

The harms caused by cyber insecurity are the most important outcome metrics for policymakers, and measuring those harms at the macro level is essential if policymakers are to meaningfully manage and improve cybersecurity. Reducing bad cybersecurity outcomes in the form of harms, and mitigating the risk of future harm, is the implicit guiding principle of cybersecurity policy, and therefore measuring those harms broadly is the only path toward rigorous, empirical cybersecurity policymaking.27 Nonetheless, key policymaking offices in the United States seem so far unable to agree on what a cybersecurity outcome even is. The GAO has suggested measuring tallies of CIRCIA reports—i.e., creating raw counts of incidents reported from specific sectors—and the frequency of government disruption campaigns; but both are attributes, not outcomes.28 Few, if any, would disagree that reducing the harm caused by cyber incidents is progress, if not the entire point. Focusing on harms as outcomes in this complex system framing is critical to answering the core question about cybersecurity policy’s progress for several reasons:

  • Harms as outcomes do not depend upon untested hypotheses about the relationships between attributes or their impact on outcomes.
  • Harms are distinct from the dynamic system-of-systems that produces them.
  • Harms help reduce the breadth of units of measurement when compared to attribute metrics.
  • Harms are more salient to the public than the specific security flaws that lead to them.

First, harms are independent of hypotheses about cybersecurity and key to evaluating them. While there is good reason to believe that many cybersecurity practices and policies improve security and thus reduce harms, the empirical evidence backing these beliefs—let alone describing the amount of harm reduction they are responsible for—is vanishingly thin, and sometimes proves those practices to be ineffective or even harmful.29 It may be that currently identified best cybersecurity practices are indeed effective, but without knowing how the adoption of a practice interacts with the entire digital ecosystem, policymakers cannot make informed decisions about regulations or incentives. For example, MFA-secured accounts are almost certainly more secure than those backed by single passwords, all else remaining equal, but if the security offered by MFA requires a critical threshold of ecosystem adoption,30 great effort would be wasted if policymakers were content with an adoption rate below this unknown threshold, and even more would be lost if the cost of pushing adoption past that threshold exceeded the losses prevented by the greater security such implementation might lead to. The fact that any given practice can make a given computer system more secure is necessary but insufficient to urge its broad adoption precisely because of both the possibility for unforeseen interactions within the cybersecurity ecosystem and the general lack of information about costs and benefits at the macro scale that single system adoption provides, especially when that system might be connected to a critical power plant or something far more innocuous.

Second, harms are distinct from the system that produces them, rather than descriptive of it. The complex cyber system, as discussed above, contains billions of machines and users interacting and changing at incredible speed across and above the entire planet. While understanding this ecosystem and its internal attributes at any point in time is useful, the fundamental question for policymakers is how much harm its insecurity enables (relative to the benefits it provides). Any description of the ecosystem—for example, the point-in-time adoption rate of security best practices—still requires outcome data to be meaningful, and as attackers find new routes to compromise, the relationship between best practices and the outcomes they influence are ever changing. In other words, ecosystem attributes alone are insufficient metrics. Attributes do not describe the cost of insecurity, but rather the probability of future harm, and even then unreliably until causal links between attributes and outcomes are better understood.

This reduces, over time and with no further context, the usefulness of measures of specific security practice adoption or of the reduction of the number of certain vulnerabilities.31 For instance, in data about the types of memory safety vulnerabilities patched at Microsoft during an eight year period, use-after-free vulnerabilities dominated about 50 percent of vulnerabilities in 2015, compared to just 15 percent in 2022.32 While this data represents discovered rather than exploited vulnerabilities, the corollary for either observation is the same—the digital system changes, so attacker practices change, and thus defensive measures that worked one year can fail to protect a target entirely the next. In this example, a naive analysis might argue that the reduction in use-after-free vulnerabilities over seven years is a sign of security improvement at Microsoft. This conclusion does not account for the concurrent increase in almost all other kinds of memory safety vulnerabilities, nor does it discriminate among which types or individual vulnerabilities led to the most harm. Microsoft’s specific work to reduce use-after-free vulnerabilities succeeded, but what that meant for Microsoft’s cybersecurity outcomes remains unclear from the data gathered. It might be that use-after-free vulnerabilities were critical to attackers, and their elimination required a costly pivot to other means. It might be that the discovery and exploit techniques used for use-after-free vulnerabilities were easily converted to other exploit paths. Or it might be that use-after-free vulnerabilities were never abused by attackers that much to begin with. Without outcome data, it is difficult to know (as with MFA) if the cost of reducing entire classes of vulnerabilities might exceed the value of reduced harms up to a certain threshold of coverage.

Third, many harms can be expressed in the common unit of dollars—from identity theft caused by data breaches to the value of stolen intellectual property and the costs imposed by system downtime for critical infrastructure providers. Such monetary losses are often measured or measurable by entities that fall victim to cyber incidents as they quantify incurred costs. Harms can be categorized relatively exhaustively:

  1. Financial loss—such as ransomware payments, lost revenue, directly stolen funds, and the costs associated with an incident.
  2. Physical harm—including loss of life and physical injury.33
  3. System downtime or disruption—such as the time that a water treatment plant is taken offline, the time that a hospital operates at reduced capacity, or the inability to conduct government functions.34
  4. Compromised information—including stolen intellectual property, compromised passwords, and emails stolen from government networks.

Harms can accumulate toward other effects too, often greater than the sum of their parts. These might include reputational damage to a firm or state that experience a sufficient number of harmful incidents, psychological harm to a population subject to repeated cyber incidents, the loss of strategic advantage when an adversary has compromised sufficient amounts of national security information, or similar. This last item, compromised information, highlights a critical nuance. While the act of stealing information might in itself be a harm—e.g., damaging the reputation or share price of a firm subject to a massive data breach or revealing to an adversary information about an upcoming operation—more often it creates the risk for future harm dependent on what the adversary does with that information. Stolen information might give an adversary insight into system flaws or offensive tooling they can later exploit, provide them with credentials or personally identifiable information (PII) that they can abuse later, expose intellectual property that can be leveraged for economic gain at the original owner’s expense, or similar. Many other attributes of the complex cyber system contribute to the risk of future harm, from adversary prepositioning operations to the availability of data backups, or the average speed of patching critical vulnerabilities. Nonetheless, for policymakers, understanding how risks of future harm can manifest requires analysis of realized harms.

Overall, systematically measuring harms caused by cybersecurity failures can significantly contribute to understanding how much more or less secure the digital ecosystem is while helping to simplify the complexity and dynamism of the ecosystem, balancing and contextualizing the current focus on its attributes.35

The cyber metrics state of play

Policymakers today are not well equipped with the tools to help them describe the system state of cybersecurity over time, nor to measure and rank the efficacy of various interventions and practices in improving that state. Focusing cybersecurity metrics on harms as the key outcome metric for cybersecurity policy helps address these shortcomings while sufficiently navigating the ecosystem’s complexity. However, cybersecurity metrics as of now are not up to the formidable task of outcome measurement. This section will detail the challenges of gathering and interpreting data on cybersecurity outcomes and the reality on the ground.

Despite the many industry reports and headlines discussing or predicting global and national costs of cybersecurity incidents,36 no studies seek to examine differences between reported and forecasted losses, few estimates exhaustively describe their methodologies, cost estimates range significantly, and few predictions are adjusted for changes in the underlying ecosystem.37 Critically, there is no single source that systematically tracks incident harms across a wide swathe of the ecosystem.

For example, the 2024 IMF Global Financial Stability Report estimated that reported 2022 cyber incident losses were around $5 billion,38 while the FBI’s IC3 report put 2022 losses for just the United States at $10.3 billion.39 Statista, meanwhile, reports $7.08 trillion in losses for 2022 and projects $12.43 trillion in 2027, while then Deputy National Security Advisor Anne Neuberger’s figures were $8.4 trillion and $23 trillion for the same years.40 Two other reports, from Cybersecurity Ventures and Comparitech, estimate 2022 losses at $6.9 trillion and $42.8 billion respectively.41 Importantly, only the FBI IC3 report and IMF report seem based entirely on confirmed incidents, though Comparitech’s might aggregate similar such reporting. Rather than any specific estimate being wrong, the key issue is that few if any sources use the same methods or scoping, with differences in what is even considered a cyber incident. Additionally, many of these reports. or similar ones such as Verizon’s Data Breach Investigations Report, originate in industry, presenting concerns about long-term availability in the event that a company removes old reports or decides to stop publishing new ones, as well as the potential for conflicting business incentives to influence methodology and reporting.

One 2019 study of the costs of cybercrime summarizes well how these estimates can be further misconstrued, writing “in our 2012 paper, we scaled UK estimates up to global ones…and presented them in a table. We warned that ‘it is entirely misleading to provide totals lest they be quoted out of context…’ Yet journalists happily ignored this and simply added up the columns, proclaiming large headline figures for global cybercrime—which were essentially twenty times our estimate of UK income tax evasion, as this was the largest figure in the table.”42

There are several systematic incident reporting processes in the United States that could usefully gather outcome data, but they are not fully realized. The SEC recently began requiring the reporting of material cyber incidents from publicly traded companies, which had already occasionally disclosed such incidents in their filings. However, of the nearly two hundred cyber incident reports (required or not) available at the time of this piece’s writing, just seven contain cost estimates.43 CIRCIA, which has yet to be fully implemented, seems intent on capturing incident impacts, though the tight timeframe within which to report an incident (seventy-two hours) likely means that accurate outcome measurement will have to rely on updates to initial reports.44 While CIRCIA incident report updates are mandatory in its most recent proposal, whether they will capture outcome data remains to be seen, as full implementation will not begin until 2026.

Other useful incident reporting processes include (but are not limited to):

  • FISMA, which requires federal civilian executive branch (FCEB) agencies to report incidents to CISA.45
  • The US Department of Housing and Urban Development’s (HUD) Significant Cybersecurity Incident Reporting Requirements, which covers mortgagees approved by the Federal Housing Administration.46
  • The Gramm-Leach-Bliley Act requires a variety of financial institutions to report data breaches to the Federal Trade Commission.47
  • The Federal Communications Commission’s updated data breach notification rules, which cover telecommunications carriers.48
  • The Department of Defense’s (DOD) requirements for Defense Industrial Base contractors to report all cyber incidents involving “covered defense information.”49
  • The Department of Health and Human Services’ Breach Notification Rule.50
  • A tapestry of data breach reporting requirements across all fifty states and several US territories, as well as other sector-specific federal requirements both proposed and implemented.51

Together, these reporting requirements should notionally cover all publicly traded companies in the United States, critical infrastructure providers, FCEB agencies, and many smaller entities under state laws, with some entities facing multiple reporting requirements. Even more reporting requirements exist in the intelligence community, among defense contractors and recipients of federal grants, and others, while law enforcement captures at least an appreciable number of incidents targeting individuals through the FBI’s IC3. Given this sample would represent a massive proportion of the US attack surface, it should provide a sufficient starting point for systematic cybersecurity outcome data, if properly arranged to gather such data and coordinated to arrive at central clearing agency for analysis. Even then, disincentives to accurate reporting have long plagued cybersecurity,52 and the challenges in arriving at useful estimates of harms are significant.

Difficult numbers

Even with a robust reporting system tailored to capture incident costs from all the above sources while avoiding disincentives that lead to underreporting—a far cry from the current status quo—the task of estimating incident outcomes is not easy, with two notable hurdles standing out: silent failures and complex costs.

Silent failures refers to the fact that in cybersecurity, when information is stolen, it often remains present on the victim’s system, which makes noticing the compromise and its outcomes challenging.53 Take for example the extraordinary lag time between the deployment of malicious SolarWinds Orion updates in late March of 2020, and the discovery of the intelligence gathering campaign in December 2020.54 Attackers might have had access to target systems for at least nine months, with no “missing” data tipping off defenders. Such intelligence gathering is a fundamental feature of the cyber domain, and ensuring most of these compromises are discovered is ultimately a technical challenge, but it remains a key limiter on the value and feasibility of large-scale outcome data. Barring a complete technical solution, analysts will always need to assume that their data conveys an incomplete picture of ecosystem outcomes, especially when information theft is such a fundamental part of cybersecurity incidents.

Complex costs refer to the difficulties of quantifying many of the harms caused by cybersecurity incidents. Broadly, estimating the costs incurred by operational downtime, ransomware payments, and similar incidents is a tractable task for victim entities. However, attaching a dollar figure to harms resulting from stolen information is difficult, even when the extent of that compromise is definitively known, especially where that information might contribute to significant compromise but only when attached to other information (as in the case of linking phone numbers to email addresses to undermine MFA protections). Valuable information might include intellectual property, PII, information with national security value, account credentials, or similar. The quantity of information stolen by attackers and the sensitivity of that information can provide some insight into the risks of future harms, but precise measurement is difficult, especially when not all stolen data is abused successfully or when the abuse serves national security or intelligences ends, which are particularly hard (if not impossible) to quantify.

Complex costs also refer to other difficult-to-notice harms. For instance, the largest source of risk in the cyber ecosystem is its interconnection with effectively all layers of society: a cybersecurity incident can cause direct and immediate harms to any given sector with sufficient dependence on IT systems, affecting a huge number of entities even when only one entity was compromised. Even the most well-architected system for counting the costs of cyber incidents will struggle to accurately track total harms across sectors. These secondary costs can represent the bulk of harm caused by an incident but might remain buried in non-cyber reporting systems, if reported at all. Take, for example, the recent CrowdStrike outage, which led to flight cancellations globally as well as operational disruptions across many sectors. While one report from Parametrix Insurance estimated that the incident carried a net cost of $5.4 billion, tracking those costs all the way through different sector verticals is difficult.55 The same Parametrix report assessed losses of $860 million for airlines, but the losses reported by just Delta Air Lines in an SEC filing amounted to at least $500 million.56 This is not to criticize any particular estimate, but rather to highlight both the consequences of inconsistent methodologies and the challenges of tracking costs not funneled through established cyber incident reporting requirements. To the latter, Delta’s disclosure came through Item 7.01 of a Form 8-K for reporting specific material events, effectively tagging it as a massive, unexpected cost. Generally, cyber incident disclosures through 8-K forms have been made through Item 8.01 for non-material incidents and the SEC’s newly created Item 1.05 for material ones. In other words, accurately capturing all costs from cyber incidents is key to understanding their true impact, as cyber risk is generally a function of the critical role of systems connected to digital infrastructure. At the same time, such estimates are difficult to make and are difficult to capture by singular reporting mechanisms because of their appearance across all sectors.57

Reading the curves: Interpreting outcome data

If policymakers were able to measure with reasonable accuracy and precision the costs of cybersecurity incidents, they could use that data to begin addressing the two outstanding challenges with cybersecurity policy and metrics: assessing efficacy (or return on investment) and benchmarking system state. However, even with accurate measurement, interpretation of such data is not straightforward.

First, measuring return on investment requires the ability to answer two immediate, practical questions: How much harm does a specific practice reduce? How much do we spend where? While the latter is more tractable—expenditure is recorded somewhere, though general IT spend and cybersecurity spend can be difficult to separate in practice—at the micro level, robust outcome data would enable the study of return on investment for money spent implementing specific cybersecurity practices by revealing how much they reduced harms downstream. Heuristically, policymakers approach cybersecurity similarly, striving to maximize breadth and depth of impact against expenditure, but without a robust empirical body of evidence to back them. Such metrics would go a long way in helping prioritize the many different security controls recommended by both government and industry against their observed return on investment. There are some nascent efforts to carry out this analysis, including through CISA’s revitalized Cyber Insurance and Data Analysis Working Group,58 but they are primarily working with insurance claims data, which might not capture the full extent of costs given the above challenges in measurement and insurers’ focus on policyholder claims versus net costs  to claim holders (aside from the fact that they mainly have data on their customers rather than the ecosystem at large). Broadly, outcome data is the key to making attribute data about security practice implementation meaningful. It is the best way to point policymakers to both the best solutions and the right problems—for example, whether the harms of cybercrime results more from social engineering at scale or exploited vulnerabilities.

The second and more foundational application of complete outcome data is to give policymakers a macro-level picture of the size and nature of the cybersecurity challenge they face—and thus what scale of investment makes sense and what trends in success or failure at addressing cyber risk are worth pursuing. The first question that comes to mind when faced with net annual harms data is whether cybersecurity is improving or deteriorating. Interpreting outcome data is far from straightforward, and there are three broad approaches one might take, each with immediate policy consequences:

  1. Uncontrolled metrics
  2. Controlled metrics
  3. Catastrophic risks

Uncontrolled metrics: More is worse

Uncontrolled metrics refers to simply using total harms figures without further context. Regardless of which existing source one uses, annual tallies of cyber incidents and their costs seem to be increasing, implying that, far from getting better, the state of cybersecurity is on the decline year by year at a more-than-linear rate. This framing of outcome data can be observed on the cover image of Verizon’s 2023 Data Breach Investigations Report,59 raw estimates of annual total incident costs such as Neuberger’s figure referenced above, and the GAO’s suggestion that ONCD use aggregated ransomware incident and loss data to assess the efficacy of the National Cybersecurity Strategy: incidents are more common year after year, as are best estimates of harms.60 These are intuitive interpretations—more incidents causing more harm is bad—and, if the numbers are accurate, they do capture some objective truth about what occurs in the digital ecosystem. Such interpretations, however, are immature in comparison to other fields of empirical policymaking. Are harms growing per incident? Are there simply more incidents? Or are we getting better at observing and counting more of the incidents that occur?

Controlled metrics: More is relative

A controlled metrics interpretation argues that meaningful cybersecurity metrics must account for the ecosystem’s rapidly changing context, which uncontrolled metrics omit. Few other fields use uncontrolled metrics but instead account for changes in population or similar underlying variables. For example, public safety policy cares more about violent crime per capita than overall violent crime because a larger population in and of itself means more potential criminals and victims and therefore more crime in absolute terms. Similarly, the Federal Reserve cares more about the unemployment rate than raw unemployment counts. Parallel arguments could reasonably apply to cybersecurity—each passing day brings more potential cyber criminals, victims, and devices online as internet connectivity increases, and there are more dollars at stake in the digital ecosystem as more business grows intertwined with IT infrastructure. All else being equal, one could reasonably expect these trends to increase the overall number of cybersecurity incidents and losses year to year because, even if security remains constant, there are more people and dollars online. One 2015 study by Eric Jardine made such an argument and normalized cybercrime figures with data on the size of the internet and its userbase. In doing so, it found that most metrics improved year over year, or at least did not worsen.61 However, determining a reasonable denominator for cybersecurity is more challenging than in other fields where population is usually sufficient.62 Financial harms can befall individuals, but also abstract entities like businesses or larger constructs like national economies. It is most likely that a rigorous approach to analyzing harms data will use different denominators for different harms. For instance, the cost of individually targeted cyber fraud works well per capita, while business ransomware payment costs would be more reasonably adjusted by gross domestic product or a similar dollar figure. These control metrics also highlight well the continued importance of attribute measures. This paper does not argue that attribute metrics are irrelevant, but that on their own, they can mislead policymaking in eliding a key part of the complex system—its external impacts.

Catastrophic risk: More to come

A third interpretation of outcome data borrows from the risk management experience of the financial sector by considering the role of catastrophic events. If there are a sufficient number of extremely costly cyber incidents, interpreting time-series outcome data into the future becomes difficult, especially given the relative novelty of the field, which leaves analysts with a limited historical record to study.63 Similar to the economic growth preceding the Great Recession in 2008, years of improved outcomes might be interpreted as improved cybersecurity, but they might mean little if a significant catastrophe lies just around the corner. Unfortunately, without robust outcome data about past events, evaluating the possible severity, variance, and frequency of cyber catastrophes is challenging, particularly when potential harms might change suddenly with large shifts in geopolitical circumstance (e.g., the risks of cyber catastrophe might grow dramatically when two countries enter a formal war with each other).

One dataset sought to do just that by assembling a list of multi-firm cyber incidents estimated to have resulted in a loss of at least $800 million, inflation adjusted to 2023.64 The dataset counted twenty-five total catastrophic events, with the worst costing $66 billion, and the average event reaching $14.8 billion. The author concluded that cyber catastrophes are not as significant a risk as often made out based on this data and the observation that these costs are only fractions of the costs that natural disasters can incur. However, things might not be so simple. The cost estimates used are subject to the same measurement challenges mentioned above, which the author notes well: “Unfortunately, many estimates come from popular media sites and corporate blogs.”65

More specifically, the dataset omits the SolarWinds incident of 2019, for which one analysis estimates $100 billion in costs just for incident response across the thousands of victim organizations alone, not even accounting for the harms resulting from abuse of the information stolen during the intelligence gathering campaign, which for the reasons stated above is immensely difficult to quantify.66 There are also reasonably costly single-firm incidents such as the Equifax breach, omitted by methodology—direct costs to the firm topped $1.7 billion, not to mention the costs of whatever identity theft and fraud may have resulted.67

Other data from the IMF about the distribution of cyber incidents by cost shows that, even if cyber catastrophes are less costly than natural disasters, they do present similar irregularity, with most incidents being mild while a handful reach disastrous extremes.68

Another method for assessing whether an ecosystem is prone to catastrophic events looks for near misses—almost-incidents that, fully realized, would have been catastrophic and were avoided by chance rather than systematic prevention. In an article about interpreting outcome data, Geer describes how relatively trivial changes to a 2001 malware could have allowed it to block 911 emergency services across the United States, which would certainly qualify as a catastrophic event, and one with difficult-to-quantify psychological harms on top of loss of life.69 Moreover, given the rapid growth of the cyber ecosystem and its increasingly fundamental role in the functioning of all levels of society, Geer’s warning in the paper should temper claims that cyber catastrophes are not that significant: “this proof (that we escaped such an attack by dumb luck) puts to bed any implication that every day without such an attack makes such an attack less likely.”70 In other words, he argues that cyber catastrophes might not have been comparatively as extreme as financial crises or natural disasters, but only so far, and the potential for extreme incident grows as more real-world services rely on relatively homogenous digital systems. This interpretation of cyber metrics holds two key lessons. First, attribute measures can be extremely useful in highlighting the potential for future catastrophe. Just as measures of debt ratios, leveraged capital, liquidity reserves, and more can help analyze financial catastrophes, measures of concentrated dependency, cloud systems resilience, vulnerability patch time, and more can describe the risk posture of the digital ecosystem. Second, while outcome metrics should not be used in an attempt to predict future harms, they are still key to establishing historical record of cyber incidents and catastrophes and understanding the true scale of cyber harms. Again, outcome metrics should not supplant attribute metrics, but instead, at the macro scale, are key for policymakers trying to understand and manage cybersecurity risks and harms.

Starting construction: Two changes

The result of the many measurement challenges and shortfalls in cybersecurity is a set of fundamental unknowns for cybersecurity policymakers. At the ecosystem scale, the cybersecurity status quo remains unmeasured, as does the efficacy of security practices at reducing harms, while a plan to address those quantitative lapses does not yet exist. These obstacles go well beyond making policy optimization difficult. Moreover, as the fundamental question of the size of the cybersecurity problem goes unanswered, the gap in historical outcome data increases and unproven policy and investments grow more entrenched. These challenges should not, however, prompt paralysis. More measurement, even if imperfect, can improve the empirical toolkit of policymakers, and there is good reason to believe that some policies and security interventions, even if not empirically shown, have improved cybersecurity.

With all this in mind, the US government should use the abundant reporting requirements already in existence to begin assembling a robust cybersecurity metrics system comparable to the already established thirteen federal statistical agencies serving the fields of public economics, education, agriculture, public health, and more.71 Building such infrastructure and pulling meaningful analysis from the data it assembles will take time, but waiting only delays a fundamentally necessary process. Additionally, developing a new policy lens is as important as creating new policy mechanisms, and questions about measurable efficacy and return on investment should become commonplace in policy conversations. Below are two small recommendations focused on existing reporting processes and offices.

Counting harms

Given the importance of gathering outcome data both to understanding the cyber ecosystem and to making useful already-gathered attribute data, existing reporting requirements should incorporate impact estimates more rigorously. CISA’s final implementation of CIRCIA should include explicit provisions requiring at least one update to incident reports that includes a revised estimate of incident impact and notes on the methodology used to reach that estimate. This information will help CISA weight incidents by their impact and provide a large inflow of outcome data from all critical infrastructure sectors. Similarly, the SEC should update its guidance on cyber incident reporting to include similar requirements—Item 1.05 reports in 8-K filings should be updated at least once with impact estimates from the reporting company in a similar format as above and updated when the reporting entity arrives at a final estimate. Given Item 1.05 reports only apply to material cyber incidents, they already require the information leading to the determination of materiality, which already should assess incident impact, although there is ongoing debate about the difference between a material event and an event with material impact.72 Thus, not only should this data be generated already by the reporting company, but it is precisely the kind of information relevant to the shareholders that the item is designed to inform.

Like CIRCIA and SEC filings, all federal reporting requirements should include provisions mandating that outcome metrics and information be updated as an incident unfolds and an affected entity revises its estimates. Altogether, with tweaks to existing or forthcoming reporting requirements, the federal government can gather incident outcome data from publicly traded companies, critical infrastructure entities, DOD contractors, FCEB agencies, and others, creating a significant sample of high-quality outcome data without the need for new reporting regimes.

One office to count them all

Given the potential volume of outcome data from a wide variety of reporting sources and regulations, meaningful interpretation of that information requires that it flow to one entity, similar to how the Bureau of Labor Statistics collates price data from hundreds of goods and services in calculating the Consumer Price Index.73 Fortunately, the US Department of Homeland Security’s Office of Homeland Security Statistics (OHSS) is already on a course to assume this central role, with plans to report on cybersecurity incidents shared its way as a result of reforms to FISMA in 2025. This office should be enlarged and report annually on cybersecurity outcomes based not just on FISMA, but the myriad reporting systems through federal and state government. In collaboration with CISA’s Office of the Chief Economist, OHSS should focus on:

  • Developing a process for aggregating reports from disparate requirement systems with different timelines and data requirements
  • Anonymized reporting on outcome data sourced from reporting systems that do not publicly reveal individual incidents, such as CIRCIA and FISMA for FCEB branches
  • Researching and developing approaches to the gathering, analysis, and interpretation of cybersecurity harm data
  • Recommending consistent scoping definitions for cybersecurity incidents, cyber-relevant harms, and similar components of the ecosystem

Conclusion

Cybersecurity policy has matured significantly in recent years, but as steady as the flow of executive orders, legislation, strategy, and guidance documents has been, cyberattacks have continued with shocking consistency and significant impact. With the previous administration witness to the aftermath of the SolarWinds campaign, Colonial Pipeline, the United Healthcare hack, two Microsoft Exchange compromises, Volt Typhoon, and now Salt Typhoon—to name only a few—the question, “Are we getting better at cybersecurity?” is far from an academic exercise in empiricism.

The state of metrics for cybersecurity policy is insufficient to meet two core functions today: to assess the status quo of the cybersecurity ecosystem at the macro level, and to provide insight into the relative efficacy of different security controls, practices, and requirements at the micro level. Without these dual capacities, cybersecurity policymakers are left with intuition and risk assessments to guide them. These are necessary but insufficient tools for approaching the monumental task of improving cybersecurity, which will require measuring the harms caused by cyber insecurity as key outcome metrics, and understanding those harms as the product of a complex, dynamic system is critical to meaningfully interpreting them. Unsolved challenges to interpreting outcome data, assuming its successful measurement, remain. Knowing how much harm cybersecurity incidents have caused over a given timeframe is a start toward understanding trends in improvement, but nuanced questions about what “better” and “worse” look like, and what the data can and cannot reveal about the future still persist. In the near term, the need for this data to be systematically gathered at all and for continued progress toward interpreting it demand consistently reported outcome measures and some degree of centralization within the federal government of that information. Those embarked on improving cybersecurity can no longer afford to guess as to the best remedies for insecurity and hope that they work once implemented—policymakers will benefit immensely from measuring the harms caused by cyber incidents to see how well their remedies have worked, too.

Acknowledgments

The author would like to thank the many contributors to this piece, including peer reviewers Sara Ann Bracket, Alex Gantman, Stefan Savage, Emma Schroeder, Nikita Shah, and Adam Shostack. Thank you also to Amelie Chushko, Nancy Messieh, Donald Partyka, and Samia Yakub for their work editing, designing, and producing this report.

About the author

Our work

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “National Cybersecurity Strategy,” The White House, March 1, 2023, https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
2    “Secure by Design: Shifting the Balance of Cybersecurity Risk,” Cybersecurity and Infrastructure Security Agency, October 25, 2023, https://www.cisa.gov/sites/default/files/2023-10/SecureByDesign_1025_508c.pdf; “Bicameral, Bipartisan Leaders Introduce Legislation to Strengthen Federal Cybersecurity,” US Senate Committee on Homeland Security and Governmental Affairs, July 12, 2023, https://www.hsgac.senate.gov/media/dems/bicameral-bipartisan-leaders-introduce-legislation-to-strengthen-federal-cybersecurity/.
3    Katherine Golden, “National Cyber Director Chris Inglis: We Need to Become a ‘Harder Target’ for Our Adversaries,” New Atlanticist, August 4, 2021, https://www.atlanticcouncil.org/blogs/new-atlanticist/national-cyber-director-chris-inglis-we-need-to-become-a-harder-target-for-our-adversaries/.
4    Dan Geer, “Measuring Security,” (Metricon 1.0, Vancouver, British Columbia, Canada, August 1, 2006), http://all.net/Metricon/measuringsecurity.tutorial.pdf; “Cost of a Cyber Incident: Systematic Review and Cross-Validation,” Cybersecurity and Infrastructure Security Agency, October 26, 2020, https://www.cisa.gov/sites/default/files/publications/CISA-OCE_Cost_of_Cyber_Incidents_Study-FINAL_508.pdf;  “Cross-Sector Cybersecurity Performance Goals (March 2023 Update)” Cybersecurity and Infrastructure Security Agency, March 2023, https://www.cisa.gov/sites/default/files/2023-03/CISA_CPG_REPORT_v1.0.1_FINAL.pdf.
5    Geer, “Measuring Security.”
6    Dan Geer, Kevin Soo Hoo, and Andrew Jaquith, “Information Security: Why the Future Belongs to the Quants,” IEEE Security & Privacy 1, no. 4 (July-August 2003): 24–32, https://doi.org/10.1109/MSECP.2003.1219053.
7    “Report to Congressional Addressees – Cybersecurity: National Cyber Director Needs to Take Additional Actions to Implement an Effective Strategy,” US Government Accountability Office, February 1, 2024, https://www.gao.gov/assets/d24106916.pdf.
8    “Digital Press Briefing with Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technologies,” US Department of State (transcript), October 18, 2023, https://2021-2025.state.gov/digital-press-briefing-with-anne-neuberger-deputy-national-security-advisor-for-cyber-and-emerging-technologies/.
9    “National Cybersecurity Strategy,” The White House.
10    “National Cybersecurity Strategy Implementation Plan,” The White House, July 13, 2023, https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
11    “Cybersecurity: National Cyber Director Needs to Take Additional Actions.”
12    “Cost of a Cyber Incident.”
13    “Federal Bureau of Investigation Internet Crime Report 2023,” Federal Bureau of Investigation Internet Crime Complaint Center, April 4, 2024, https://www.ic3.gov/AnnualReport/Reports/2023_IC3Report.pdf.
14    “Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA),” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/cyber-incident-reporting-critical-infrastructure-act-2022-circia.
15    Cybersecurity Disclosure, US Securities and Exchange Commission (statement of Erik Gerding, Director of SEC’s Division of Corporation Finance), December 14, 2023, https://www.sec.gov/newsroom/speeches-statements/gerding-cybersecurity-disclosure-20231214.
16    “Federal Bureau of Investigation Internet Crime Report 2023;” “Cybersecurity: National Cyber Director Needs to Take Additional Actions.”
17    Olav Bjerkholt, “On the Founding of the Econometric Society,” Journal of the History of Economic Thought 39 (March 6, 2017): 175–98, https://doi.org/10.1017/S105383721600002X.
18    Ross Anderson, “Why Information Security Is Hard – An Economic Perspective,” Keynote remarks, Seventeenth Annual Computer Security Applications Conference, New Orleans, LA, 2001, 358–65, https://doi.org/10.1109/ACSAC.2001.991552.
19    Jason Healey, “What the White House Should Do Next for Cyber Regulation,” Dark Reading, October 7, 2024, https://www.darkreading.com/vulnerabilities-threats/what-white-house-next-cyber-regulation; “Request for Information on Cyber Regulatory Harmonization; Request for Information: Opportunities for and Obstacles To Harmonizing Cybersecurity Regulations,” Office of the National Cyber Director, August 16, 2023, https://www.federalregister.gov/documents/2023/08/16/2023-17424/request-for-information-on-cyber-regulatory-harmonization-request-for-information-opportunities-for.
20    Daniel W. Woods and Sezaneh Seymour, “Evidence-Based Cybersecurity Policy? A Meta-Review of Security Control Effectiveness,” Journal of Cyber Policy 8, no. 3 (April 7, 2024): 365–83, https://doi.org/10.1080/23738871.2024.2335461.
21    “The New NIST Guidelines: We Had It All Wrong Before,” Risk Control Strategies, January 8, 2018, https://www.riskcontrolstrategies.com/2018/01/08/new-nist-guidelines-wrong/.
22    The precise meaning of “reduce” will be discussed later on.
23    “National Cybersecurity Strategy,” The White House.
24    “US and UK Disrupt LockBit Ransomware Variant,” US Department of Justice, February 20, 2024, https://www.justice.gov/archives/opa/pr/us-and-uk-disrupt-lockbit-ransomware-variant.
25    Matt Kapko, “Microsoft Reveals Ransomware Attacks against Its Customers Nearly Tripled Last Year,” Cybersecurity Dive, October 16, 2024, https://www.cybersecuritydive.com/news/microsoft-customers-ransomware-attacks-triple/730011/.
26    Alex Gantman, “NDSS 2022 Keynote – Measuring Security Outcomes,” April 27, 2022, by NDSS Symposium, YouTube, https://www.youtube.com/watch?v=qGD93mJ2ZAU.
27    Stewart Scott, “Counting the Costs in Cybersecurity,” Lawfare, October 9, 2024, https://www.lawfaremedia.org/article/counting-the-costs-in-cybersecurity.
28    “Cybersecurity: National Cyber Director Needs to Take Additional Actions.”
29    Woods and Seymour, “Evidence-Based Cybersecurity Policy?”
30    With enough unsecured accounts still accessible, attackers are able to avoid MFA protections entirely.
31    While these are not the only challenges that such measures face, they are the most definitional ones. For example, measures of known vulnerability struggle to account for unknown vulnerabilities or the potential for detected vulnerabilities to in reality be harmless given their context.
32    David Weston, “The Time Is Now – Practical Mem Safety,” Slide presentation, Tectonics 2023, San Francisco, CA, November 2, 2023), https://github.com/dwizzzle/Presentations/blob/master/david_weston-isrg_tectonics_keynote.pdf.  
33    There is often understandable distaste at lumping in physical harm with damages measured in dollars, but fortunately few deaths have ever resulted directly from cyberattacks. Moreover, a combined approach of tallying fatalities, financial damage, and injuries is how the impact of natural disasters is already measured. For more, see “How Can We Measure the Impact of Natural Disasters?,” World Economic Forum, March 16, 2015, https://www.weforum.org/stories/2015/03/how-can-we-measure-the-impact-of-natural-disasters/.
34    Scott, “Counting the Costs in Cybersecurity.”
35    Wasted in the sense that such efforts do not answer the macro question, “How secure are we?” These are useful measures in other respects, as enumerated below.
36    “Cybercrime To Cost The World $9.5 Trillion USD Annually In 2024,” eSentire, https://www.esentire.com/web-native-pages/cybercrime-to-cost-the-world-9-5-trillion-usd-annually-in-2024; Steve Morgan, “Cybercrime To Cost The World $10.5 Trillion Annually By 2025,” Cybercrime Magazine, November 13, 2020, https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/; “Unexpectedly, the Cost of Big Cyber-Attacks Is Falling,” The Economist, May 17, 2024, https://www.economist.com/graphic-detail/2024/05/17/unexpectedly-the-cost-of-big-cyber-attacks-is-falling.
37    At the time of writing, the author was unable to find any source that revised predictive estimates up or down based on new policies, technologies, or geopolitical circumstance.
38    “The Last Mile: Financial Vulnerabilities and Risks,” International Monetary Fund, April 2024, https://www.imf.org/en/Publications/GFSR/Issues/2024/04/16/global-financial-stability-report-april-2024.
39    “Federal Bureau of Investigation Internet Crime Report 2023.”
40    “Estimated cost of cybercrime worldwide 2018-2029,” Statista, https://www.statista.com/forecasts/1280009/cost-cybercrime-worldwide.
41    Morgan, “Cybercrime To Cost The World $10.5 Trillion;” Paul Bischoff, “Cybercrime Victims Lose an Estimated $714 Billion Annually,” Comparitech, December 5, 2023, https://www.comparitech.com/blog/vpn-privacy/cybercrime-cost/.
42    Ross Anderson et al., “Measuring the Changing Cost of Cybercrime,” The 18th Annual Workshop on the Economics of Information Security, Boston, MA, June 3, 2019, https://doi.org/10.17863/CAM.41598.
43    “Cybersecurity Incident Tracker,” Board Cybersecurity, last updated March 3, 2025., https://www.board-cybersecurity.com/incidents/tracker/.
44    “Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) Reporting Requirements,” Department of Homeland Security Cybersecurity and Infrastructure Security Agency,, April 4, 2024, https://www.federalregister.gov/documents/2024/04/04/2024-06526/cyber-incident-reporting-for-critical-infrastructure-act-circia-reporting-requirements.
45    “Federal Information Security Modernization Act,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/topics/cyber-threats-and-advisories/federal-information-security-modernization-act.
46    Richard J. Andreano, Jr., “FHA Requiring Reporting of Significant Cybersecurity Incidents,” Consumer Finance Monitor, May 24, 2024, https://www.consumerfinancemonitor.com/2024/05/24/fha-requiring-reporting-of-significant-cybersecurity-incidents/.
47    “FTC Safeguards Rule: What Your Business Needs to Know,” Federal Trade Commission, last updated December 2024, https://www.ftc.gov/business-guidance/resources/ftc-safeguards-rule-what-your-business-needs-know.
48    “Data Breach Reporting Requirements,” Federal Communications Commission, February 12, 2024, https://www.federalregister.gov/documents/2024/02/12/2024-01667/data-breach-reporting-requirements.
49    “Defense Industrial Base (DIB) Cybersecurity Portal – Cyber Incident Reporting,” Defense Industrial Base (DIB) Cybersecurity Portal, https://dibnet.dod.mil/dibnet/#reporting-reporting-2.
50    “Submitting Notice of a Breach to the Secretary,” US Department of Health and Human Services, last reviewed February 27, 2023, https://www.hhs.gov/hipaa/for-professionals/breach-notification/breach-reporting/index.html.
51    “State Data Breach Notification Chart,” IAPP, March 2021, https://iapp.org/resources/article/state-data-breach-notification-chart/.
52    Seema Sangari, Eric Dallal, and Michael Whitman, “Modeling Under-Reporting in Cyber Incidents,” Risks 10, no. 11 (October 22, 2022): 200, https://doi.org/10.3390/risks10110200.
53    Dan Geer, “Prediction and The Future of Cybersecurity,” Remarks, UNC Charlotte Cybersecurity Symposium Charlotte, NC, October 5, 2016, http://geer.tinho.net/geer.uncc.5×16.txt.
54    Trey Herr et al., Broken Trust: Lessons from Sunburst, Atlantic Council, March 29, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/broken-trust-lessons-from-sunburst/.
55    “Crowdstrike’s Impact on the Fortune 500: An Impact Analysis,” Parametrix, 2024, https://www.parametrixinsurance.com/crowdstrike-outage-impact-on-the-fortune-500.
56    “Delta Airlines, Inc. Form 8-K Report on August 8, 2024,” US Security and Exchange Commission, August 8, 2024, https://www.sec.gov/Archives/edgar/data/27904/000168316824005369/delta_8k.htm. It is alternatively possible that Delta systems were simply more severely impacted that other airlines.
57    “Cost of a Cyber Incident.”
58    Nitin Natarajan, “Cybersecurity Insurance and Data Analysis Working Group Re-Envisioned to Help Drive Down Cyber Risk,” Cybersecurity and Infrastructure Security Agency (blog), November 20, 2023, https://www.cisa.gov/news-events/news/cybersecurity-insurance-and-data-analysis-working-group-re-envisioned-help-drive-down-cyber-risk.
59    “2023 Data Breach Investigations Report,” Verizon, June 2023, https://www.verizon.com/business/resources/T227/reports/2023-data-breach-investigations-report-dbir.pdf.  
60    “Cybersecurity: National Cyber Director Needs to Take Additional Actions.”
61    Eric Jardine, “Global Cyberspace Is Safer than You Think: Real Trends in Cybercrime,” Global Commission on Internet Governance, revised October 16, 2015, https://www.cigionline.org/publications/global-cyberspace-safer-you-think-real-trends-cybercrime/.  
62    “Technical Report 22-02: Vital Statistics in Cyber Public Health,” CyberGreen Institute, March 2022, https://cybergreen.net/wp-content/uploads/2022/04/Technical-report-22-02-Vital-Statistics-in-Cyber-Public-Health-FINAL.pdf.
63    Dan Geer, “For Good Measure: The Denominator,” USENIX ;login: 40, no. 5 (October 2015), https://www.usenix.org/publications/login/oct15/geer.
64    Tom Johansmeyer, “Recent Cyber Catastrophes Show an Intensifying Trend – but They Are Manageable,” The Loop, September 25, 2024, https://theloop.ecpr.eu/recent-cyber-catastrophes-show-an-intensifying-trend-but-they-are-manageable/.
65    Tom Johansmeyer, “Surprising Stats: The Worst Economic Losses from Cyber Catastrophes,” The Loop, March 12, 2024, https://theloop.ecpr.eu/surprising-stats-the-worst-economic-losses-from-cyber-catastrophes/.
66    Gopal Ratnam, “Cleaning up SolarWinds Hack May Cost as Much as $100 Billion,” Roll Call, January 11, 2021, https://rollcall.com/2021/01/11/cleaning-up-solarwinds-hack-may-cost-as-much-as-100-billion/.  
67    Ben Lane, “Equifax Expects to Pay out Another $100 Million for Data Breach,” HousingWire, February 14, 2020, https://www.housingwire.com/articles/equifax-expects-to-pay-out-another-100-million-for-data-breach/.
68    “The Last Mile: Financial Vulnerabilities and Risks,” International Monetary Fund.
69    Geer, “For Good Measure: The Denominator.”
70    Geer, “For Good Measure: The Denominator.”
71    “Organization of the Federal Statistical System,” in Principles and Practices for a Federal Statistical Agency: Sixth Edition, ed. Constance F. Citro (Washington, DC: National Academies Press, 2017), https://www.ncbi.nlm.nih.gov/books/NBK447392/.
72    Thomas Kim, letter to the Securities and Exchange Commission Division of Corporate Finance, “AT&T Inc. Form 8-K Filed July 12, 2024 File No. 001-08610,” July 31, 2024, https://www.sec.gov/Archives/edgar/data/732717/000119312524190323/filename1.htm.
73    “Consumer Price Index Frequently Asked Questions,” US Bureau of Labor Statistics, December 18, 2024, https://www.bls.gov/cpi/questions-and-answers.htm.  

The post Counting the costs: A cybersecurity metrics framework for policy appeared first on Atlantic Council.

]]>
Canada needs an economic statecraft strategy to address its vulnerabilities https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/canada-needs-an-economic-statecraft-strategy-to-address-its-vulnerabilities/ Thu, 27 Mar 2025 12:00:00 +0000 https://www.atlanticcouncil.org/?p=835739 To address threats from Russia and China and reduce trade overdependence on the United States, Canada’s federal government will need to consolidate economic power and devise an economic statecraft strategy that will leverage Canada’s economic tools to mitigate economic threats and vulnerabilities.

The post Canada needs an economic statecraft strategy to address its vulnerabilities appeared first on Atlantic Council.

]]>
Introduction

Canada is facing economic threats from China and Russia targeting its critical industries and infrastructure. The Business Council of Canada, which consists of CEOs of top Canadian companies, identified cyberattacks, theft of intellectual property, Chinese influence on Canada’s academic sector, and trade weaponization by China among the top economic threats to Canada.

More recently, a new and unexpected threat emerged from the United States, when Washington announced 25 percent tariffs on all Canadian goods except for the 10 percent tariffs on energy. To address threats from Russia and China and reduce trade overdependence on the United States, Canada’s federal government will need to consolidate economic power and devise an economic statecraft strategy that will leverage Canada’s economic tools to mitigate these economic threats and vulnerabilities. This paper covers the following topics and offers recommendations:

  • Economic threats to Canada’s national security 
  • An unexpected threat: Overdependence on trade with the United States
  • Lack of economic power consolidation by Canada’s federal government
  • Mapping Canada’s economic statecraft systems: Sanctions, export controls, tariffs, and investment screening

Economic threats to Canada’s national security

Cyberattacks on Canada’s critical infrastructure 

Canada’s critical infrastructure has become a target of state-sponsored cyberattacks. In 2023, Canada’s Communications Security Establishment (CSE)—a signals intelligence agency—said that Russia-backed hackers were seeking to disrupt Canada’s energy sector. Apart from accounting for 5 percent of Canada’s gross domestic product (GDP), the energy sector also keeps the rest of Canada’s critical infrastructure functioning. CSE warned that the threat to Canada’s pipelines and physical infrastructure would persist until the end of the war in Ukraine and that the objective was to weaken Canada’s support for Ukraine. 

Beyond critical infrastructure, Canadian companies lost about $4.3 billion due to ransomware attacks in 2021. More recently in February 2025, Russian hacking group Seashell Blizzard was reported to have targeted energy and defense sectors in Canada, the United States, and the United Kingdom. Russia and other adversarial states will likely continue targeting Canada’s critical infrastructure and extorting ransom payments from Canadian companies. 

Theft of intellectual property

Canadian companies have become targets of Chinese state-sponsored intellectual theft operations. In 2014, a Chinese state-sponsored threat actor stole more than 40,000 files from the National Research Council’s private-sector partners. The National Research Council is a primary government agency dedicated to research and development in science and technology. Apart from undermining Canadian companies, theft of Canada’s intellectual property, especially research on sensitive technologies, poses a threat to Canada’s national security. 

Chinese influence on Canada’s academic sector 

Adversarial states have taken advantage of Canada’s academic sector to advance their own strategic and military capabilities. For example, from 2018 to 2023, Canada’s top universities published more than 240 joint papers on quantum cryptography, space science, and other advanced research topics along with Chinese scientists working for China’s top military institutions. In January 2024, Canada’s federal government named more than one hundred institutions in China, Russia, and Iran that pose a threat to Canada’s national security. Apart from calling out specific institutions, the federal government also identified “sensitive research areas.” Universities or researchers who decide to work with the listed institutions on listed sensitive topics will not be eligible for federal grants. 

Trade weaponization by China

Trade weaponization by China has undermined the economic welfare of Canadians and posed a threat to the secure functioning of Canada’s critical infrastructure. For example, between 2019 and 2020, China targeted Canada’s canola sector with 100 percent tariffs, restricting these imports and costing Canadian farmers more than $2.35 billion in lost exports and price pressure. In Canada’s 2024 Fall Economic Statement, which outlined key measures to enhance Canadian economic security, the Ministry of Finance announced its plans to impose additional tariffs on Chinese imports to combat China’s unfair trade practices. These included tariffs on solar products and critical minerals in early 2025, and on permanent magnets, natural graphite, and semiconductors in 2026. 

However, the imposition of 25 percent tariffs by Washington on both Canada and China could result in deepening trade ties between the two. Canada exported a record $2 billion in crude oil to China in 2024, accounting for half of all oil exports through the newly expanded Trans Mountain pipeline. Increased trade with China would increase Canada’s exposure to China’s coercive practices, and would be a direct consequence of US tariffs on Canada. 

An unexpected threat: Overdependence on trade with the United States

A new and unexpected threat to Canada’s economic security emerged from the United States when the Trump administration threatened to impose 25 percent tariffs on all Canadian goods (except for the 10 percent tariffs on energy imports). The United States is Canada’s largest export market, receiving a staggering 76 percent of Canada’s exports in 2024. Canada relies on the United States particularly in the context of its crude oil trade, shipping 97.4 percent of its crude oil to the United States. 

Canada had already started working on expansion to global markets through pipeline development even before Washington announced tariffs. It has succeeded in the expansion of the Trans Mountain pipeline in May 2024, which has enabled the export of Canadian oil to Asia. Canada is reviving talks on the canceled Energy East and Northern Gateway pipelines—the former would move oil from Alberta to Eastern Canada, and the latter would transport oil from Alberta to British Columbia for export to Asian markets. 

In addition to oil trade, another area where Canada is highly dependent on the United States is in auto manufacturing. Behind oil exports, motor vehicles account for the largest share of Canadian exports to the United States, resulting in exports valued at $50.76 billion (C$72.7 billion Canadian dollars) in 2024. With 25 percent tariffs on all Canadian goods, the automotive industry is expected to take a hit, especially as components cross the border six to eight times before final assembly.

Figure 1

The United States invoked the International Emergency Economic Powers Act to impose tariffs on Canada with the stated objective to curb fentanyl flows to the United States. The measure has plunged US-Canada relations into chaos and could result in a trade war between the two long-standing allies. In response, Canada might reroute oil shipments to China through existing pipelines and increase trade with China in general. Further economic integration with China would increase Canada’s exposure to economic threats emanating from China, including trade weaponization and anti-competitive practices. 

Because of US tariffs, Canada could also face challenges in strengthening the resilience of its nuclear fuel and critical mineral supply chains. In the 2024 Fall Economic Statement, Canada outlined key measures for its economic security that heavily incorporated US cooperation. This included plans to strengthen nuclear fuel supply chain resiliency away from Russian influence, with up to $500 million set aside for enriched nuclear fuel purchase contracts from the United States. Canada also aims to strengthen supply chains for responsibly produced critical minerals, following a $3.8 billion investment in its Critical Minerals Strategy, which relies on the United States as a key partner. Given the tariffs, Canada will need to diversify its partners and supply sources quickly if it wishes to maintain these economic security goals. 

Could the US-Canada trade war upend defense cooperation?

Recent tariff escalation between the United States and Canada has raised questions about the future of military cooperation between the two countries. Apart from being members of the North Atlantic Treasury Organization (NATO), the United States and Canada form a unique binational command called North American Aerospace Defense Command (NORAD). NORAD’s mission is to defend North American aerospace by monitoring all aerial and maritime threats. NORAD is headquartered at Peterson Space Force Base in Colorado, has a US Commander and Canadian Deputy Commander, and has staff from both countries working side by side. 

NORAD’s funding has been historically split between the United States (60 percent) and Canada (40 percent). However, the Department of Defense (DoD) does not allocate specific funding to NORAD and does not procure weapons or technology for NORAD, although NORAD uses DoD military systems once fielded. The US Congress recognized the need to allocate funding to modernize NORAD’s surveillance systems after the Chinese spy balloon incident in February 2023. While US fighter jets shot down the Chinese surveillance balloon after it was tracked above a US nuclear weapons site in Montana, the incident exposed weaknesses in NORAD’s capabilities. After the incident, former NORAD Commander Vice Admiral Mike Dumont stated that NORAD’s radar network is essentially 1970s technology and needs to be modernized. 

A year before the incident, the Canadian government had committed to invest $3.6 billion in NORAD over six years from 2022 to 2028, and $28.4 billion over twenty years (2022-2042) to modernize surveillance and air weapons systems. However, Canada has fallen short on delivering on these commitments. 

In March 2025, Canada’s Prime Minister Mark Carney announced that Canada made a $4.2 billion deal with Australia to develop a cutting-edge radar to detect threats to the Arctic. The radar is expected to be delivered by 2029 and will be deployed under NORAD. Canadian military officials have stated that the US military has supported the deal, signaling that the deterioration of economic relations has not (yet) had spillover effects for the defense cooperation. 

However, Prime Minister Carney has also ordered the review of F-35 fighter jet purchases from US defense company Lockheed Martin, citing security overreliance on the United States. Under the $13.29 billion contract with Lockheed Martin, Canada was set to buy 88 fighter jets from the US company. While Canada’s defense ministry will purchase the first sixteen jets to meet the contract’s legal requirements, Canada is actively looking for alternative suppliers. 

As the trade war continues, Canada will likely enhance defense cooperation with the European and other like-minded states, possibly to the detriment of the US defense industry and the US-Canada defense cooperation.

Figure 2: US-Canada overlapping memberships in security organizations and alliances

Source: Atlantic Council’s Economic Statecraft Initiative research

Lack of economic power consolidation by Canada’s federal government

Canada has a range of economic tools and sources of economic power to respond to emerging economic threats and mitigate vulnerabilities; however, it currently lacks economic power consolidation. Unlike the United States, where the federal government can regulate nearly all economic activity, Canada’s Constitution Act of 1867 grants provinces control over their “property and civil rights,” including natural resources. Section 92A, which was added to the constitution in 1982, further reinforced the provinces’ control over their natural resources. Meanwhile, the federal government has control over matters of international trade including trade controls. However, when international trade issues concern the natural resources of provinces, tensions and disagreements often arise between provinces and the federal government, and the lack of economic power consolidation by the federal government becomes obvious.

This issue manifested when the United States announced 25 percent tariffs on Canada in March 2025 as Canada’s federal government and the Alberta province had different reactions. Canada’s main leverage over the United States is oil exports. Refineries in the United States, particularly those in the Midwest, run exclusively on Canadian crude oil, having tailored their refineries to primarily process the heavy Canadian crude. Since 2010, Canadian oil accounted for virtually 100 percent of the oil imported by the Midwest. Threatening to hike levies on crude oil exports could have been Canada’s way of leveraging energy interdependence to respond to US tariffs. However, Alberta Premier Danielle Smith stated that Alberta, which is Canada’s largest oil producer and top exporter of crude oil to the United States, would not hike levies on oil and gas exports to the United States. Being unable to speak in one voice as a country even during a crisis is a direct consequence of Canada’s regional factionalism, characterized by each province looking out for their own interests. 

The United States-Mexico-Canada (USMCA) trade agreement, which entered into force during the first Trump administration in July 2020, may have also contributed to diminishing the economic power of Canada’s federal government. Article 32.10 of USMCA requires each member of the agreement to notify other countries if it plans to negotiate a free trade agreement (FTA) with a nonmarket economy. Thus, if Canada were to sign an FTA with China, the United States and Mexico could review the agreement and withdraw from USMCA with six months’ notice. After the USMCA was signed, Canadian scholars wrote that this clause would effectively turn Canada into a vassal state of the United States, with the authority to make decisions on internal affairs but having to rely on the larger power for foreign and security policy decisions. Five years later, it looks like the USMCA has put Canada in a difficult position, being targeted by US tariffs and not having advanced trading relations with other countries. 

Figure 3: US-Canada overlapping memberships in economic organizations and alliances

Source: Atlantic Council’s Economic Statecraft Initiative Research

Mapping Canada’s economic statecraft systems

To secure Canada’s critical infrastructure and leverage its natural resources to shape favorable foreign policy outcomes, Canada’s federal government has a range of economic tools and the ability to design new ones when appropriate. Canada’s economic statecraft tool kit is similar to those of the United States and the European Union and includes sanctions, export controls, tariffs, and investment screening. Canada has imposed financial sanctions and export controls against Russia along with its Group of Seven (G7) allies. It has levied tariffs on Chinese electric vehicles, in line with US policy, and recently created investment screening authorities to address concerns about adversarial capital. 

Financial sanctions 

Similar to the United States, Canada maintains sanctions programs covering specific countries such as Russia and Iran, as well as thematic sanctions regimes such as terrorismGlobal Affairs Canada (GAC), which is Canada’s Ministry of Foreign Affairs, administers sanctions and maintains the Consolidated Canadian Autonomous Sanctions List. Canada’s Finance Ministry, the Department of Finance, is not involved in sanctions designations, implementation, or enforcement, unlike in the United States, where the Department of the Treasury is the primary administrator of sanctions. 

The Parliament of Canada has enacted legislation authorizing the imposition of sanctions through three acts: the United Nations Act; the Special Economic Measures Act (SEMA); and the Justice for Victims of Corrupt Foreign Officials Act (JVCFOA). 

The United Nations Act enables GAC to implement sanctions against entities or individuals sanctioned by the UN Security Council. When an act of aggression or a grave breach of international peace occurs and the UN Security Council is unable to pass a resolution, Canada implements autonomous sanctions under SEMA; this act is Canada’s primary law for imposing autonomous sanctions and includes country-based sanctions programs. It is also used to align Canada’s sanctions with those of allies. For example, GAC derived its powers from SEMA to designate Russian entities and individuals in alignment with Canada’s Western allies in 2022. Meanwhile, the JVCFOA allows GAC to impose sanctions against individuals responsible for human rights violations and significant acts of corruption, similar to the Global Magnitsky Human Rights Accountability Act in the United States, with sanctions administered by the Office of Foreign Assets Control

Once GAC adds entities and individuals to the lists of sanctions, Canadian financial institutions comply by freezing the designated party’s assets and suspending transactions. GAC coordinates with several government agencies to enforce and enable private-sector compliance with sanctions: 

  • FINTRAC: Canada’s financial intelligence unit (FIU)—Financial Transactions and Reports Analysis Centre of Canada (FINTRAC)—is responsible for monitoring suspicious financial activities and collecting reporting from financial institutions on transactions that may be linked to sanctions evasion. FINTRAC is an independent agency that reports to the Minister of Finance. FINTRAC works closely with the US financial intelligence unit—Financial Crimes Enforcement Network (FinCEN)—on illicit finance investigations and when sanctions evasion includes the US financial system. For example, FinCEN and FINTRAC both monitor and share financial information related to Russian sanctions evasion and publish advisories and red flags for the financial sector in coordination with other like-minded partner FIUs. 
  • OSFI: The Office of the Superintendent of Financial Institutions (OSFI) is a banking regulator that issues directives to financial institutions regarding compliance and instructs banks to freeze assets belonging to sanctioned individuals and entities. FINTRAC also shares financial intelligence with OSFI on sanctions evasion activity under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA). OSFI shares intelligence with Royal Canadian Mounted Police (RCMP), the national police service of Canada, if there is evidence of sanctions evasion or other financial crimes. 
  • RCMP: Once OSFI notifies RCMP about suspicious activity, RCMP investigates whether the funds are linked to sanctions evasion or other financial crimes. If it finds evidence of a violation of sanctions or criminal activity, RCMP obtains a court order to seize assets under the Criminal Code and the PCMLTFA.
  • CBSA: Canada Border Services Agency (CBSA) is responsible for blocking sanctioned individuals from entering Canada. CBSA also notifies OSFI if sanctioned individuals attempt to move cash or gold through border crossings. 

All four agencies work with GAC and with one another on sanctions enforcement. GAC sets sanctions policy, FINTRAC analyzes financial intelligence and shares suspicious activity reports to inform law enforcement investigations, OSFI enforces compliance in banks, RCMP investigates crimes and seizes assets, and CBSA prevents sanctioned individuals from entering Canada and moving assets across borders. 

While financial sanctions are part of Canada’s economic statecraft tool kit, Canadian sanctions power does not have the same reach as US sanctions. The preeminence of the US dollar and the omnipresence of major US banks allows the United States to effectively cut off sanctioned individuals and entities from the global financial system. Canadian sanctions are limited to Canadian jurisdiction and affect individuals and entities with financial ties to Canada, but they do not have the same reach as US financial sanctions. 

Nevertheless, Canadian authorities have been able to leverage financial sanctions to support the G7 allies in sanctioning Russia. For example, in December 2022, under SEMA, Canadian authorities ordered Citco Bank Canada, a subsidiary of a global hedge fund headquartered in the Cayman Islands, to freeze $26 million owned directly or indirectly by Russian billionaire Roman Abramovich, who has been sanctioned by Canada and other G7 allies. In June 2023, Canadian authorities seized a Russian cargo jet at Toronto’s Pearson Airport pursuant to SEMA. 

Figure 4

Export controls

Canada participates in several multilateral export control regimes, including the Wassenaar ArrangementNuclear Suppliers GroupMissile Technology Control Regime, and Australia Group. When multilateral regimes fall short in addressing Canada’s foreign policy needs, Canada leverages its autonomous export control list, which is administered by GAC under the Export and Import Permits Act. The Trade Controls Bureau under GAC is responsible for issuing permits and certificates for the items included on the Export Control List (ECL).

Canada Border Services Agency plays a crucial role in the enforcement of export controls. CBSA verifies that shipments match the export permit issued by GAC. It can seize or refuse exports that violate GAC export permits through ports, airports, and land borders. CBSA refers cases to the Royal Canada Mounted Police (CRMP) for prosecution if exporters attempt to bypass regulations. 

Separately, FINTRAC monitors financial transactions that might be connected to the exports of controlled goods and technologies. If FINTRAC detects suspicious transactions, it shares intelligence with GAC and other relevant authorities. Canada’s method of leveraging financial intelligence for enforcing export controls is similar to that of the United States, where FinCEN has teamed up with the Commerce Department’s Bureau of Industry and Security to detect export control evasion through financial transactions. 

While in the United States the export controls authority lies within the Commerce Department, Canada’s equivalent, Innovation, Science and Economic Development Canada (ISED), does not participate in administering export controls. That responsibility is fully absorbed by GAC. 

While Canada has mainly used its export control authority in the context of sensitive technologies, Canadian politiciansand experts have recently been calling on the federal government to impose restrictions on mineral exports to the United States in response to US tariffs. The United States highly depends on Canada’s minerals, including uranium, aluminum, and nickel. Canada was the United States’ top supplier of metals and minerals in 2023 ($46.97 billion in US imports), followed by China ($28.32 billion) and Mexico ($28.18 billion). Notably, President Trump’s recent executive order called Unleashing American Energy instructed the director of the US Geological Survey to add uranium to the critical minerals list. Canada provides 25 percent of uranium to the United States. If Canada were to impose export controls on uranium, the US objective of building a resilient enriched uranium supply chain would be jeopardized. 

However, Canada could not impose export controls on the United States without experiencing significant blowback. Export control is a powerful tool. While US tariffs would increase the price of imported Canadian goods by at least 25 percent, Canada’s export controls would completely cut off the flow of certain Canadian goods to the United States. It would be destructive for both economies, so Canada will likely reserve this tool as a last resort and perhaps work on finding alternative export destinations before pulling such a trigger. 

Canada employs restrictive economic measures against Russia

In response to Russia’s unjust invasion of Ukraine in 2022, Canada imposed financial sanctions and export controls against Russia in coordination with G7 allies. To date, Global Affairs Canada has added more than 3,000 entities and individuals to its Russia and Belarus sanctions lists under SEMA. Assets of designated individuals have been frozen and Canadian persons are prohibited from dealing with them. Apart from financial sanctions, Canada imposed export controls on technology and import restrictions on Russian oil and gold. Canada also joined the G7 in capping the price of Russian crude oil at $60 per barrel and barred Russian vessels from using Canadian ports.

To enforce financial sanctions against Russia, FINTRAC joined the financial intelligence units (FIUs) of Australia, France, Germany, Italy, Japan, the Netherlands, New Zealand, the United Kingdom, and the United States to create an FIU Working Group with the mission of enhancing intelligence sharing on sanctions evasion by Russian entities and individuals. Separately, Canada Border Services Agency’s export controls enforcement efforts included the review of more than 1,500 shipments bound to Russia (as of February 2024), resulting in six seizures and fourteen fines against exporters. CBSA continues to work closely with the Five Eyes intelligence alliance to share information about export control evasion.

To disrupt the operation of Russia’s shadow fleet, Canada proposed the creation of a task force to tackle the shadow fleet in March 2025. Such a task force could be useful in addressing the various environmental problems and enforcement challenges the shadow fleet has created for the sanctioning coalition. However, the United States vetoed Canada’s proposal.

Figure 5

Tariffs

Canada’s approach to tariffs is governed primarily by the Customs Act, which outlines the procedures for assessing and collecting tariffs on imported goods, as well as the Customs Tariff legislation that sets the duty rates for specific imports (generally based on the “Harmonized System,” an internationally standardized system for classifying traded products). The Canada Border Services Agency is responsible for administering these tariffs. Additionally, the Special Import Measures Act enables Canada to protect industries from harm caused by unfair trade practices like dumping or subsidizing of imported goods, with the Canadian International Trade Tribunal determining injury and the CBSA imposing necessary duties. The minister of finance, in consultation with the minister of foreign affairs, plays a key role in proposing tariff changes or retaliatory tariffs, ensuring Canada’s trade policies align with its broader economic and diplomatic objectives. 

Canada has frequently aligned with its allies on tariff issues, as demonstrated in 2024 when, following the US and EU tariffs, it imposed a 100 percent tariff on Chinese electric vehicles to protect domestic industries. However, Canada has also been proactive in responding to US tariffs, employing a combination of diplomatic negotiations, retaliatory tariffs, and reliance on dispute resolution mechanisms such as the World Trade Organization and USMCA. In the past Canada was also quick to align itself with allies such as the EU and Mexico, seeking a coordinated international response, as was the case in 2018 when the United States imposed a broad tariff on steel and aluminum.

Similar to the United States, Canada offers remission allowances to help businesses adjust to tariffs by granting relief under specific circumstances, such as the inability to source goods from nontariffed countries or preexisting contractual obligations. The Department of Finance regularly seeks input from stakeholders before introducing new tariffs. In 2024, a thirty-day consultation was launched about possible tariffs on Chinese batteries, battery parts, semiconductors, critical minerals, metals, and solar panels, though it has yet to result in any new tariffs. 

Canada’s primary weakness regarding tariffs is its lack of trade diversification. The United States accounts for half of Canada’s imports and 76 percent of its exports. This dependency severely limits Canada’s ability to impose tariffs on the United States without facing significant economic repercussions. Canada’s relatively limited economic leverage on the global stage also complicates efforts to coordinate multilateral tariff responses or to negotiate favorable trade agreements. Furthermore, Canada’s lengthy public consultations and regulatory processes for implementing tariffs hinder its ability to leverage tariffs as a swift response to changing geopolitical or economic circumstances. 

Figure 6

Investment screening

Canada’s investment screening is governed by the Investment Canada Act (ICA), which ensures that foreign investments do not harm national security while promoting economic prosperity. The ICA includes net benefit reviews for large investments and national security reviews for any foreign investments which pose potential security risks, such as foreign control over critical sectors like technology or infrastructure.

The review process is administered by ISED, with the minister of innovation, science, and industry overseeing the reviews in consultation with Public Safety Canada. For national security concerns, multiple agencies assess potential risks, and the Governor-in-Council (GIC) has the authority to block investments or demand divestitures.

Criticism of the ICA includes lack of transparency and consistency, particularly in national security reviews, where decisions may be influenced by political or diplomatic considerations. To better mitigate risks to security, critical infrastructure, and the transfer of sensitive technologies, experts have argued that the ICA should more effectively target malicious foreign investments by incorporating into the review process the perspectives of Canadian companies on emerging national security threats. In response to these concerns, Bill C-34 introduced key updates in 2024, including preclosing filing requirements for sensitive sectors, the possibility of interim conditions during national security reviews, broader scope covering state-owned enterprises and asset sales, consideration for intellectual property and personal data protection, and increased penalties for noncompliance. In March 2025, further amendments were made to the ICA, expanding its scope to review “opportunistic or predatory” foreign investments. These changes were introduced in response to the United States’ imposition of blanket tariffs on Canadian goods.

Figure 7

Positive economic statecraft

Apart from coercive/protective tools, Canada maintains positive economic statecraft (PES) tools such as development assistance to build economic alliances beyond North America. For example, Canada is one of the largest providers of international development assistance to African countries. After Ukraine, Nigeria, Ethiopia, Tanzania, and the Democratic Republic of the Congo were the top recipients of Canada’s international assistance. Canada’s PES tools lay the ground for the federal governments to have productive cooperation when needs arise. Canadian authorities should leverage PES tools to enhance the country’s international standing and increase economic connectivity with other regions of the world. This is especially important amid the US pause on nearly all US foreign assistance. Canada could step up to help fill the vacuum in the developing world created by the Trump administration’s radical departure from a long-standing US role in foreign aid. 

Canadian authorities have already taken steps in this direction. On March 9, Canadian Minister of International Development Ahmed Hussen announced that Canada would be providing $272.1 million for foreign aid projects in Bangladesh and the Indo-Pacific region. The projects will focus on climate adaptation, empowering women in the nursing sector, advancing decent work and inclusive education and training. Earlier, on March 6, Global Affairs Canada launched its first Global Africa Strategy with the goal of deepening trade and investment relations with Africa, partnering on peace and security challenges, and advancing shared priorities on the international stage including climate change. Through this partnership, Canada plans to strengthen economic and national security by enhancing supply chain resilience and maintaining corridors for critical goods. 

Conclusion

Canada’s federal government maintains a range of economic statecraft tools and authorities to address economic and national security threats. While regional factionalism and provincial equities can hinder the federal government’s ability to leverage the full force of Canada’s economic power, threats to Canada’s economic security, including tariffs from the United States, may prove to further unite and align the provinces. The federal government and provincial premiers should work together to meet this challenging moment, consolidating Canada’s sources of economic power and moving forward with a cohesive economic statecraft strategy to protect the country’s national security and economic security interests.

Canada’s leadership and engagement in international fora including the G7, NATO, Wassenaar Agreement, among others, as well as its bilateral relationships, make it well-placed to coordinate and collaborate with Western partners on economic statecraft. Information sharing, joint investigations, multilateral sanctions, and multilateral development and investment can extend the reach of Canada’s economic power while strengthening Western efforts to leverage economic statecraft to advance global security objectives and ensure the integrity of the global financial system. Canada also has a solid foundation for building economic partnerships beyond the West through development assistance and other positive economic statecraft tools. 

About the authors

The authors would like to thank Nazima Tursun, a young global professional at the Atlantic Council’s Economic Statecraft Initiative, for research support.

The report is part of a year-long series on economic statecraft across the G7 and China supported in part by a grant from MITRE.

Related content

Explore the program

Housed within the GeoEconomics Center, the Economic Statecraft Initiative (ESI) publishes leading-edge research and analysis on sanctions and the use of economic power to achieve foreign policy objectives and protect national security interests.

The post Canada needs an economic statecraft strategy to address its vulnerabilities appeared first on Atlantic Council.

]]>
Ukraine’s IT sector offers opportunities for pragmatic partnership with the US https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-it-sector-offers-opportunities-for-pragmatic-partnership-with-the-us/ Thu, 27 Feb 2025 21:03:50 +0000 https://www.atlanticcouncil.org/?p=829408 As the new Trump administration reassesses its foreign partnerships through a lens of transactional pragmatism, Ukraine’s IT sector presents a potentially compelling case for deepening bilateral cooperation, write Anatoly Motkin and Hanna Myshko.

The post Ukraine’s IT sector offers opportunities for pragmatic partnership with the US appeared first on Atlantic Council.

]]>
As the new Trump administration reassesses its foreign partnerships through a lens of transactional pragmatism, Ukraine’s IT sector presents a potentially compelling case for deepening bilateral cooperation.

While Ukrainian President Volodymyr Zelenskyy has sought to maintain strong ties with the United States, the current shift away from aid-based diplomacy signals that Ukraine must further demonstrate its economic value. In this context, the thriving Ukrainian IT industry is a key asset. This sector not only drives domestic economic resilience, but also offers tangible benefits to American businesses through investment, technological innovation, and cybersecurity expertise.

Since the onset of Russia’s full-scale invasion three years ago, Ukraine’s IT industry has proven to be a resilient and dynamic force. Despite the ongoing war with Russia, the sector has demonstrated remarkable adaptability. In 2024, Ukraine’s IT services exports reached $6.45 billion, contributing 4.4 percent of the country’s GDP and accounting for approximately 38 percent of Ukraine’s total service exports. This strong performance has been possible despite the challenges posed by the largest European invasion since World War II, underscoring the Ukrainian IT sector’s ability to operate under extreme conditions.

Beyond its financial contribution, the Ukrainian IT industry also plays a crucial role in employment. By 2024, Ukraine’s IT workforce had grown to more than 300,000 specialists, solidifying its position as a major employer and a pillar of Ukrainian economic stability in today’s wartime environment.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

The United States is already an important partner for Ukraine’s IT industry. In 2023, the US was the largest importer of Ukrainian IT services, accounting for $2.39 billion or 37.2 percent of the industry’s total exports. This presents opportunities for intensified bilateral collaboration in both the private and public sectors that have the potential to transcend the kind of aid-based relations found elsewhere in the region.

Ukrainian IT companies are not seeking handouts but are actively investing in the US market. Rather than displacing American jobs, they are creating new opportunities and fostering technological advancements. Importantly, these companies are not appropriating US technologies but are in many cases sharing their own advanced developments. This cooperative approach could strengthen both economies, reinforcing a business-driven relationship that aligns with the Trump administration’s strategic vision.

The knowledge-based economy benefits immensely from such international partnerships. Unlike resource-dependent models, this framework ensures a two-way exchange of expertise. Ukraine’s IT professionals are already playing a significant role in cybersecurity, actively defending against digital threats and ensuring the integrity of critical infrastructure. From the early days of Russia’s full-scale invasion, they have consistently delivered in even the most difficult of circumstances and have enhanced Ukraine’s global reputation as a leading tech nation.

Moreover, the war has propelled Ukrainian engineers to the forefront of innovation in autonomous systems including aerial, maritime, and other drone technologies. Many of Ukraine’s most recent innovations in the drone sphere leverage AI. The depth of experience gained in developing and deploying these systems under real combat conditions is unparalleled worldwide. For the US defense industry, collaboration with Ukraine in this domain could be invaluable, offering access to battle-tested innovations that have the potential to redefine modern warfare.

The obvious synergies between the US and Ukrainian tech industries extends beyond the private sector. Cooperation in areas such as dual-use technologies should be prioritized by both governments to enhance security and drive innovation. Strengthening this partnership could contribute to a safer and more prosperous future for both nations.

By leveraging Ukraine’s IT expertise, the United States can improve its own technological capabilities while supporting a partner nation at a critical time. This partnership can bring further economic and strategic benefits to both parties. As the Trump administration moves toward a business-driven approach to US foreign policy, strengthening ties with Ukraine’s IT sector could boost innovation and security while also offering a range of business opportunities.

Anatoly Motkin is president of StrategEast, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan dedicated to developing knowledge-driven economies in the Eurasian region. Hanna Myshko is regional director for Ukraine, Moldova, and the Gulf at StrategEast.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post Ukraine’s IT sector offers opportunities for pragmatic partnership with the US appeared first on Atlantic Council.

]]>
Issue brief: A NATO strategy for countering Russia https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/issue-brief-a-nato-strategy-for-countering-russia/ Thu, 20 Feb 2025 19:56:35 +0000 https://www.atlanticcouncil.org/?p=820507 Russia poses the most direct and growing threat to NATO member states' security. This threat now includes the war in Ukraine, militarization in the Arctic, hybrid warfare, and arms control violations. Despite NATO's military and economic superiority, a unified and effective strategy is essential to counter Russia's aggression.

The post Issue brief: A NATO strategy for countering Russia appeared first on Atlantic Council.

]]>

Key takeaways

  • Russia is the most direct and significant threat to the security of NATO member states—and since Moscow’s invasion of Georgia in 2008 this threat continues to grow. It now encompasses the war in Ukraine, the militarization of the Arctic, hybrid warfare, and violations of arms control treaties.
  • While NATO holds a significant advantage over Russia in military and economic power, an effective and unified strategy is needed to counter Russia’s aggression and fully harness the Alliance’s collective capabilities.
  • To effectively counter Russia, NATO must defeat Russia in Ukraine, deter Russian aggression against NATO allies and partners, contain Russian influence beyond its borders, and degrade Russia’s ability and will to accomplish its revisionist agenda. That will require, among other actions, a significant increase of support and commitment to Ukraine’s defense against Russia, and a more robust Alliance force posture including the modernization of its nuclear deterrent, the permanent stationing of brigade elements along NATO’s eastern frontier and increased defense industrial capacities.

Russia is “the most significant and direct threat to Allies’ security.” So states the NATO Strategic Concept promulgated at the Alliance’s Madrid Summit in June 2022, just four months after Russia’s massive escalation of its invasion of Ukraine.1 The concept and NATO declarations not only underscore the illegality and brutality of that ongoing attack but also highlight Moscow’s use of nuclear and conventional military aggression, annexation, subversion, sabotage, and other forms of coercion and violence against NATO allies and partners.

Ever since its invasion of Georgia in 2008, Russia’s aggression against the Alliance has steadily intensified. This led NATO leaders at their 2024 Washington Summit to task the development of “recommendations on NATO’s strategic approach to Russia, taking into account the changing security environment.”2 The Alliance’s “Russia strategy” is due for consideration at NATO’s next summit at The Hague in June 2025.3 This issue brief reviews Moscow’s actions affecting the security of the Euro-Atlantic area and presents the enduring realities, objectives, and actions that should constitute the core of an effective NATO strategy to counter the threat posed by Russia.

Intensified and globalized Russian aggression

Russia’s objectives go far beyond the subordination of Ukraine. Moscow seeks to reassert hegemony and control over the space of the former Soviet Union, diminish the power of the democratic community of nations, and delegitimize the international rules-based order. Moscow aims to subjugate its neighbors and to weaken—if not shatter—NATO, the key impediment to its European ambitions.

Toward these ends and under the leadership of President Vladimir Putin, Russia:

  • Has illegally occupied Moldova’s Transnistria region since the early 1990s.
  • Invaded Georgia in 2008, has continued to occupy portions of that country, and recently increased its influence, if not control, over the nation’s governance.
  • Invaded Ukraine in 2014 and significantly escalated this ongoing war in February 2022.
  • Militarized the Arctic by increasing its military presence in the region, including through reopening Soviet-era bases and building new facilities to buttress Russian territorial claims over Arctic waters.
  • Leveraged trade and energy embargoes and other forms of economic pressure to intimidate and coerce its European neighbors.
  • Conducts an escalating campaign of active measures short of war against NATO allies and partners, including information warfare, election interference, sabotage, assassination, weaponized migration, cyberattacks, GPS jamming, and other actions.
  • Expanded its conventional and nuclear military capabilities, an effort that was part of President Putin’s preparations to invade Ukraine.
  • Violated, suspended, and abrogated international arms control agreements, including New START Treaty, the Conventional Armed Forces in Europe (CFE) Treaty, the Intermediate-Range Nuclear Forces (INF) Treaty, the Comprehensive Test Ban Treaty (CTBT), the Open Skies Agreement, and others.4

Enduring realities

A NATO strategy to counter Russia’s aggression is long overdue. Its absence cedes to Russia the initiative, leaving the Alliance too often in a reactive, if not indecisive and passive, posture in this relationship. An effective strategy requires recognition of nine enduring realities:

First, Russia’s invasion of Ukraine was a failure of deterrence. The weakness of the Alliance’s response to Russia’s 2014 invasion of Ukraine, NATO’s failure to respond forcefully to Russia’s months long mobilization of forces along Ukraine’s frontiers in 2021, and NATO’s acquiescence to Putin’s exercise of nuclear coercion emboldened and facilitated Putin’s actions against Ukraine. As a result, the credibility of the Alliance’s commitment to defend resolutely its interests and values has been damaged.

A destroyed Russian tank remains on the side of the road near the frontline town of Kreminna, amid Russia’s attack on Ukraine, in Luhansk region, Ukraine March 24, 2023. REUTERS/Violeta Santos Moura

Second, Russia is at war, not just against Ukraine. It is also at war against NATO. The Alliance can no longer approach the relationship as one of competition or confrontation considering the military invasions, active measures, and other forms of violence and coercion Russia has undertaken against NATO allies and partners.5 As former US Deputy Secretary of State Stephen Biegun has written, “Quite simply, Putin has declared war on the West, but the West does not yet understand we are at war with Russia.”6 By failing to recognize this reality, NATO has ceded escalation dominance to Russia as evidenced by its limiting of support to Ukraine and its inaction against repeated Russian aggression and provocations. The Alliance must recognize and act upon the reality that Moscow has pushed the NATO-Russia relationship into the state of war.

Third, NATO faces long-term conflict with Russia. Putin cannot be expected to abandon his ambitions, even if defeated in Ukraine. Ever since Putin’s speech before the February 2007 Munich Security Conference in which he railed against the international order and NATO’s expanding membership, Russia’s campaign to subjugate its neighbors and to intimidate, divide, and weaken the Alliance has been unceasing and relentless. Nor can the Alliance assume that Putin’s successor will significantly diverge from the objectives and policies that drive Russia’s actions today. Peaceful coexistence with Russia is not attainable in the short to medium term and will be difficult to attain in the long term.

Quite simply, Putin has declared war on the West, but the West does not yet understand we are at war with Russia.


—Stephen Biegun, former US Deputy Secretary of State

Fourth, Russia will continue efforts to increase the size and capability of its armed forces. While Russian land forces have suffered significant losses in its invasion of Ukraine, Moscow has reconstituted that force faster than expected. Russia’s land forces were estimated to be 15 percent larger in April 2024 than when Russia attacked Kyiv in February 2022.7 Earlier this year, Russia announced new ambitious plans to restructure and expand its ground forces to 1.5 million active personnel.8 Moreover, the Russian air force and navy have not been significantly degraded by the war against Ukraine. Russia’s air force has only lost some 10 percent of its aircraft. While Russian naval ships have been destroyed in the Black Sea, Russian naval activity worldwide has increased.9 Similarly, Russian nuclear forces have been unaffected by the conflict in Ukraine. Russia retains the world’s largest arsenal of deployed and nondeployed nuclear weapons and continues to develop new models of intercontinental ballistic missiles (ICBM) and intermediate range ballistic missiles (IRBM), hypersonic boost-glide vehicles, nuclear-powered cruise missiles, nuclear-powered subsurface drones, antisatellite weapons, and orbital space weapons.10 With some 6 percent of gross domestic product (GDP) being directed to its military, Moscow is investing to increase its defense-industrial and research and development capacities.11 Russia’s industrial base produces more ammunition than that produced by all NATO members and is fielding new high-tech weapons systems, such as the nuclear-capable multiple warhead IRBM Oreshnik Russia, which was demonstrated in combat against Ukraine last November.12 In April 2024, NATO SACEUR General Christopher Cavoli testified to the US Congress that:

  • “Russia is on track to command the largest military on the continent and a defense industrial complex capable of generating substantial amounts of ammunition and material in support of large-scale combat operations. Regardless of the outcome of the war in Ukraine, Russia will be larger, more lethal and angrier with the West than when it invaded.”13

Fifth, Moscow’s aggressive actions short of war will continue and escalate. Putin has yet to face a response from the Alliance that will dissuade him from further exercising information warfare, cyber warfare, energy and trade embargoes, assassination, GPS jamming, sabotage, fomenting separatist movements, and other forms of hybrid warfare. These actions are intended to intimidate governments; weaken the credibility of the Alliance’s security guarantee; create and exacerbate internal divisions; and divide allies, among other objectives. Left unchecked, they threaten to undermine the Alliance’s ability to attain consensus necessary to take decisive action against Russia.

Sixth, Moscow’s exercise of nuclear coercion will continue as a key element of Russia’s strategy and should be expected to intensify. Threats of nuclear warfare are a key element of Putin’s strategy to preclude NATO and its members from providing Ukraine support that would enable it to decisively defeat Russia’s invasion. This repeated exercise of nuclear coercion includes verbal threats from President Putin and other senior Russian officials; the launching of nuclear capable ICBMs; the use of a nuclear capable IRBM against Ukraine, the first use of such a system in a conflict; nuclear weapons exercises; and the deployment of nuclear weapons to Belarus, according to both Russia and Belarus.14 NATO allies have repeatedly rewarded this coercion by expressing fear of nuclear war; declaring that NATO forces will not enter Ukraine; restricting NATO’s role in assisting Ukraine; limiting the flow of weapons to Ukraine; and restricting their use against legitimate military targets in Russia. Rewarding nuclear coercion encourages its repeated exercise and escalation. It risks leading Russia to conclude it has attained escalation dominance. A key challenge for NATO going forward will be to demonstrate that Russia’s threats of nuclear strikes are counterproductive, and the Alliance cannot be deterred by nuclear coercion.

NATO leaders stand together for a photo at NATO’s 75th anniversary summit in Washington in July 2024. REUTERS/Yves Herman

Seventh, Moscow is conducting a global campaign of aggression to weaken the democratic community of nations and the rules-based international order. Over the last two decades, Russia has exercised its military, informational, and economic assets to generate anti-Western sentiment across the globe, including in Europe, Africa, the Middle East, and the Indo-Pacific region. This has included military support to authoritarian, anti-Western regimes well beyond Europe, including Venezuela, Syria, and Mali. The most concerning element of Russia’s global campaign is the partnerships it has operationalized with China, Iran, and North Korea. Russia’s “no limits partnership” with China enables Putin to mitigate the impacts of Western sanctions on his war economy. Both Iran and North Korea have provided Russia with weapons and ammunition, and North Korean soldiers have joined Russia’s fight against Ukraine. In return, Russia has supplied missile and nuclear technologies, oil and gas, and economic support to these nations that enables them to stoke violence across the Middle East, threaten the Korean Peninsula, and drive forward Beijing’s hegemonic ambitions in the Indo-Pacific region.

Eighth, an effective Russia strategy will require a coordinated leveraging of all the instruments of power available through the Alliance, its member states, and its key partners, including the European Union. This includes the application of diplomatic, economic, ideological, informational, and other elements of power—none of which are the Alliance’s primary capacity, military power—that can be marshaled through its members states and multinational institutions, such as the European Union, where the Alliance and its member states have influence and authority.

Ninth, NATO significantly overmatches Russia in military and economic power.
NATO Headquarters estimates the combined GDP of Alliance member states to be $54 trillion, more than twenty-five times Russia’s estimated GDP of more than $2 trillion.15 The combined defense budget of NATO members amounts to approximately $1.5 trillion,16 more than ten times that of Russia’s publicly projected defense budget of $128 billion for 2025.17 This imbalance of power favoring the Alliance will be enduring and makes the execution of an effective Russia strategy not a matter of capacity, but one of strategic vision and political will.

Core objectives

To counter the direct and significant threat posed by Moscow, a NATO strategy for Russia should be structured around four core objectives:18

  • Defeat Russia in Ukraine: NATO must defeat Russia’s war against Ukraine. This is its most urgent priority. Failure to do so—and failure includes the conflict’s perpetuation—increases the risk of a wider war in Europe and will encourage other adversaries around the world to pursue their revisionist and hegemonic ambitions. Russia’s decisive defeat in Ukraine is essential to return stability to Europe and to reinforce the credibility of the Alliance’s deterrent posture.
  • Deter aggression by Russia: A key Alliance priority must be the effective deterrence of Russia aggression against the Alliance. A robust conventional and nuclear posture that deters Russian military aggression is far less costly than an active war. Deterrence must also be more effectively exercised against Russia’s actions short of war. Failure to deter aggression in this domain can undermine confidence in the Alliance and increase the risk of war.
  • Contain Russia’s influence and control: The Alliance must actively contain Russia’s efforts to assert influence and control beyond its borders. The Alliance must assist Europe’s non-NATO neighbors in Central and Eastern Europe, the Balkans, the Caucasus, and in Central Asia to strengthen their defenses and resilience to Russian pressure. NATO and NATO allies should also work to counter and roll back Russia’s influence and engagement around the globe.
  • Degrade Russia’s capabilities and determination: A core objective for the Alliance should include weakening Russia’s capacity and will to pursue its hegemonic ambitions. Denying Russia access to international markets would further degrade its economy, including its defense-industrial capacity. Active engagement of the Russian public and other key stakeholders should aim to generate opposition to Putin and the Kremlin’s international aggression.

Achievement of these objectives would compel the Kremlin to conclude that its revanchist ambitions, including the diminishment or destruction of NATO, are unachievable and self-damaging. It would diminish Russia’s will and ability to continue aggression in Europe and weaken the impact of Russia’s partnerships, including with China, Iran, and North Korea. In addition, achieving these objectives would return a modicum of stability to Europe that in the long-term would enhance the prospects for NATO’s peaceful coexistence with Russia.

Regardless of the outcome of the war in Ukraine, Russia will be larger, more lethal, and angrier with the West than when it invaded.


—Gen. Christopher Cavoli, NATO Supreme Allied Commander Europe

A NATO strategy to defeat, deter, contain, and degrade Russian aggression and influence should effectuate the following actions by the Alliance, its member states, and partners:

  • Defeat Russia in Ukraine and accelerate Ukraine’s accession into the NATO alliance Defeating Russian aggression against Ukraine requires its own strategy, which should feature five key elements: adopting Ukraine’s war objectives, including total territorial reconstitution (i.e., the Alliance must never recognize Russian sovereignty over the territories it illegally seized from Ukraine); maximizing the flow of military equipment and supplies to Ukraine, free of restrictions on their use against legitimate military targets in Russia; imposing severe economic sanctions on Russia; deploying aggressive information operations to generate opposition in Russia against Putin’s aggression; and presenting a clear, accelerated path for Ukraine to NATO membership. NATO membership, and the security guarantee it provides, would add real risk and complexity to Russian military planning. NATO membership for Ukraine is the only way to convince the Kremlin that Ukraine cannot be subject to Russian hegemony and would provide security conditions needed for Ukraine’s rapid reconstruction and economic integration into Europe.
  • Fulfill and operationalize NATO’s regional defense plans. To establish a credible and effective deterrent against Russian military aggression, NATO allies must:
    • Build and deploy the requisite national forces. Military plans are no more than visions in the absence of required capabilities. NATO’s European and Canadian allies need to generate more forces, with requisite firepower, mobility, and enabling capacities. In short, given European allies’ obligations under NATO’s new regional defense plans, they must act with urgency.
    • Strengthen transatlantic defense industrial capacity. High intensity warfare, as seen in Ukraine, consumes massive amounts of weapons stocks, much of which have to be in a near constant state of modernization to match the technological adaptations of the adversary. Today, the Alliance has struggled (and often failed) to match the defense-industrial capacity of Russia and its partners. NATO’s defense industrial base must expand its production capacities and its ability to rapidly develop, update, and field weapons systems.
    • Increase allied defense spending to the equivalent of 5 percent of GDP. To facilitate the aforementioned requirements and to address emerging challenges beyond Europe that could simultaneously challenge the transatlantic community, NATO allies need to increase the agreed floor of defense spending from 2 percent to 5 percent and fulfill that new commitment with immediacy. NATO members cannot allow themselves to be forced to choose between defending against Russia and another geopolitical challenge beyond Europe.
  • Terminate the NATO Russia Founding Act (NRFA). Russia has repeatedly and blatantly violated the principles and commitments laid out in the Founding Act. Russia’s actions include having invaded Ukraine both in 2014 and in 2022, using nuclear coercion and escalatory rhetoric to pressue the Alliance, and deploying nonstrategic nuclear weapons to Belarus, as both Russia and Belarus have affirmed. Consequently, NATO should formally render the NRFA defunct, including the Alliance’s commitments to:
    • Adhere to the “three nuclear no’s” that NATO member states “have no intention, no plan and no reason to deploy nuclear weapons on the territory of new members, nor any need to change any aspect of NATO’s nuclear posture or nuclear policy – and do not foresee any future need to do so.”19
    • Abstain from permanently stationing “substantial combat forces” in Central and Eastern Europe.20
  • Update NATO’s nuclear force posture. In response to Russia’s modernization of its nuclear arsenal, exercise of nuclear coercion, and adjustments to its nuclear strategy that lowers the threshold for first use of nuclear weapons, the Alliance must update its own nuclear posture. The objectives should be to provide NATO with a broader and more credible spectrum of nuclear weapons options. An updated force posture would improve NATO’s ability to manage, if not dominate, the ladder of conflict escalation, complicate Russian military planning, and thereby weaken Moscow’s confidence in its own military posture and its strategy of nuclear “escalation to de escalate.” Toward these ends, the Alliance should:
    • Increase the spectrum of NATO’s nuclear capabilities. This should include a nuclear-armed sea-launched cruise missile (SLCM-N) and a ground-launched variant. The breadth and number of NATO nuclear weapons exercises, such as the yearly Steadfast Noon, should be expanded and further integrated with exercises of conventional forces.
    • Expand the number of members participating in the Alliance’s nuclear sharing agreements. Doing so will expand the tactical options available to NATO and underscore more forcefully Alliance unity behind its nuclear posture.
    • Broaden the number and locations of infrastructure capable of hosting the Alliance’s nuclear posture. The Alliance’s nuclear posture still relies solely on Cold War legacy infrastructure in Western Europe. Given the threat posed by Russia, NATO should establish facilities capable of handling nuclear weapons and dual capable systems, including nuclear weapons storage sites, in NATO member states along its eastern frontier.
  • Reinforce NATO’s eastern flank. Russia’s assault on Ukraine and its growing provocations against NATO member states and partners underscore the need to further reinforce the Alliance’s eastern frontier. To date, NATO’s deployments along its eastern flank amount to more of a trip-wire force rather than one designed for a strategy of defense by denial. To give greater credibility to the Alliance’s pledge not to “cede one inch” when considering a potential attack by Russia, NATO should:
    • Establish a more robust permanent military presence along the Alliance’s eastern frontier. NATO is expanding its eight multinational battlegroups deployed to Central and Eastern Europe. But each of these deployments should be further upgraded to full brigades that are permanently stationed there. These elements should feature robust enabling capacities, particularly air and missile defenses and long-range fires. If the United States is expected to sustain a presence of 100,000 troops in Europe, the least Western Europe and Canada can do is to forward station some 32,000 troops combined in Central and Eastern Europe.
    • Conduct large-scale, concentrated exercises on NATO’s eastern flank. The Alliance has commendably reanimated its emphasis on large-scale joint military exercises. However, those exercises have yet to be concentrated on NATO’s eastern flank. Doing so would enhance readiness, reassure the Alliance’s Central and Eastern European member states, and demonstrate resolve and preparedness in the face of Russian aggression.
    • Upgrade the Alliance’s air defense and ballistic missile defense systems to more robustly address Russian threats. In its attacks on Ukraine, Russia has demonstrated with brutality its emphasis on missile and long-range drone strikes against military and civilian targets. As part of its efforts to upgrade its air and missile defense capacities, NATO should direct the European Phased Adaptive Approach to address threats from Russia.21
A Grad-P Partizan single rocket launcher is fired towards Russian troops by servicemen of the 110th Territorial Defence Brigade of the Ukrainian Armed Forces, amid Russia’s attack on Ukraine, on a frontline in Zaporizhzhia region, Ukraine January 21, 2025. REUTERS/Stringer
  • Expand the NATO SACEUR’s authority to order deployments and conduct operations along NATO’s eastern frontier. The Alliance’s regional defense plans are said to provide SACEUR with greater authority to activate and deploy NATO forces before crisis and conflict situations. Due to the aggressiveness of Russia’s ambitions, NATO should consider further expanding those authorities as they relate to the deployment and missions of forces along the Alliance’s eastern frontier. The actions of a deterrent force can be even more important than the magnitude of their presence.
  • Augment the Alliance’s posture in the Arctic. Russia has heavily militarized the Arctic, upgraded the status and capability of its Northern Fleet, and deepened its military cooperation with China in the region while the Kremlin continues to assert Arctic territorial claims that conflict with those of NATO allies. While NATO has been increasing the tempo of its Arctic operations and improving its Arctic capabilities, Russia continues to pose a significant threat in the region and possibly outmatches the Alliance in the High North. To further reinforce deterrence against Russian aggression in the Arctic, the Alliance should:
    • Develop a comprehensive NATO strategy to defend its interests in the High North. Such a document would underscore the Alliance’s commitment to the region and help foster allied investments in infrastructure, capabilities, and training needed to defend and deter Russian threats in the High North.
    • Establish a NATO Arctic Command and Joint Force. The Arctic poses a unique set of geographic and climatic challenges requiring tailored operational capabilities. A command and air-ground-naval force focused specifically on the High North would provide the Alliance a dedicated and tailored deterrent to counter Russian aggression in the Arctic.22
  • Bolster deterrence against Russian actions short of war by strengthening resilience and through more assertive and punitive counteractions. NATO and NATO member states’ failure to respond robustly to Russia’s hybrid warfare—whether it is information warfare, cyberattacks, sabotage, assassinations, or other forms of aggression — has resulted in Russia’s intensification and escalation of these actions. The transatlantic community must strengthen its resilience against such attacks but also take stronger punitive measures against Russia if it is to persuade Russia to cease these attacks. While much of what needs to be done falls beyond the remit of NATO’s military capabilities, greater consideration should be given to how military assets can be leveraged to gather intelligence about Russian activity and provide a military dimension to the transatlantic community’s response to such provocations. For example, when a Russian ship fired a warning shot directed at a commercial Norwegian fishing boat within Norway’s exclusive economic zone or when Russia pulled out Estonian navigation buoys from the Narva River,23 an immediate show of force from NATO could have been an appropriate response.
  • Strengthen the deterrence and resilience capacities of non-NATO nations in Europe and Russia’s periphery. Recent elections in Georgia, Moldova, and Romania reflect the intensity of Russia’s determination to claw back control and influence over the space of the former Soviet Union and Warsaw Pact. A key priority of a Russia strategy should be to strengthen efforts by the Alliance, its member states, and key institutional partners, such as the European Union, to reinforce the resilience and defense capabilities of non-NATO nations in Central and Eastern Europe, the Balkans, the Caucasus, and Central Asia. NATO’s programs, such as the Defence and Related Security Capacity Building Initiative, warrant even greater emphasis and resources, particularly in those regions.
  • Intensify Russia’s economic and diplomatic isolation. The current set of measures taken against Moscow in these realms have failed to sufficiently degrade Russia’s war economy and its ability to sustain its invasion of Ukraine and provocations elsewhere in the world. A key priority for NATO and its member states should be to significantly escalate economic sanctions, including the exercise of secondary sanctions to eliminate Moscow’s ability to generate international revenue from energy exports and attain critical technologies needed by its defense industrial sector.
  • Increase efforts to generate internal Russian opposition to the Kremlin’s revanchist objectives and greater support for democratic principles and governance. Russia has undertaken aggressive campaigns to influence the politics of NATO allies and partners. In the recent elections of Moldova and Romania, Russian intervention nearly effectuated regime change. For too long, the transatlantic community has remained on the defensive in this realm. NATO and its member states need to shift to the offensive and weaponize the power of truth to illuminate the brutal realities of Moscow’s invasion of Ukraine, the corruption of Russian officials, and other realties of Russian governance. NATO allies must more actively support Russian stakeholders—particularly civil society—that are more aligned with transatlantic values. This is critical to degrading the political will of the Russian state to continue its aggressions.
  • Modulate dialogue with Russia, limiting it to what is operationally necessary. The Alliance should formally disband the NATO-Russia Council—which last met in 2022—until Moscow has demonstrated genuine commitment to a constructive relationship. Nonetheless, the Alliance should establish and/or maintain lines of communication between the NATO secretary general and the Kremlin, as well as between Supreme Headquarters Allied Powers Europe (SHAPE) and the Russian General Staff, to enable crisis management and provide transparency needed for military stability. This would not preclude NATO allies from dialogues with Russia deemed necessary, for example, to assist Ukraine or pursue arms control measures.

The bottom line

As noted, NATO possesses an overmatching capacity to defeat Russia in Ukraine, deter Russian aggression, contain Russian influence beyond its borders, and degrade Russia’s ability and will to accomplish its revisionist agenda. Today, there is no better time to achieve these objectives by fully marshaling the Alliance’s assets and potential. Moscow cannot undertake an all-out military attack on NATO without risking the viability of Russia’s armed forces and thus its regime. The accomplishment of these objectives would provide stability to Europe’s eastern frontier and establish the best foundation for an eventual relationship with Moscow that is minimally confrontational, if not cooperative and constructive. However, this will take political will and resources. Russia today is determined to prevail in Ukraine, expand its military capabilities, and further leverage its partners, particularly China, Iran, and North Korea, to defeat the community of democracies and, particularly, the Alliance. Russia already envisions itself as being at war with NATO.

About the authors

Explore the program

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

Related content

1    “NATO Strategic Concept,” June 29, 2022, https://www.nato.int/strategic-concept/
2    Washington Summit Declaration, issued by NATO heads of state and government participating in the meeting of the North Atlantic Council in Washington, DC, July 10, 2024, https://www.nato.int/cps/ar/natohq/official_texts_227678.htm
3    Washington Summit Declaration
4    See Mathias Hammer, “The Collapse of Global Arms Control,” Time Magazine, November 13, 2023, https://time.com/6334258/putin-nuclear-arms-control/
5     more information about active measures, see Mark Galeotti, “Active Measures:
Russia’s Covert Geopolitical Operations,” Strategic Insights, George C. Marshall
European Center for Security Studies, June 2019, https://www.marshallcenter.org/en/
publications/security-insights/active-measures-russias-covert-geopolitical-operations-0
6    Stephen E. Biegun, “The Path Forward,” in Russia Policy Platform, Vandenberg Coalition
and McCain Institute, 2024, 32-36, https://vandenbergcoalition.org/the-russia-policyplatform/
7    US Military Posture and National Security Challenges in Europe, Hearing Before the
House Armed Services Comm., 118th Cong. (2024), (statement of Gen. Christopher
G. Cavoli, Commander, US European Command), https://www.eucom.mil/about-thecommand/2024-posture-statement-to-congress
8    Andrew Osborn, “Putin Orders Russian Army to Become Second Largest After China’s
at 1.5 Million-strong,” Reuters, September 16, 2024, https://www.reuters.com/world/
europe/putin-orders-russian-army-grow-by-180000-soldiers-become-15-millionstrong-2024-09-16/
9    US Military Posture Hearing (statement of Gen. Cavoli)
10    US Military Posture Hearing (statement of Gen. Cavoli)
11    Pavel Luzin and Alexandra Prokopenko, “Russia’s 2024 Budget Shows It’s Planning for
a Long War in Ukraine,” Carnegie Endowment for International Peace, October 11, 2023, https://carnegieendowment.org/russia-eurasia/politika/2023/09/russias-2024-budget-shows-its-planning-for-a-long-war-in-ukraine?lang=en
12    “How Does Russia’s New ‘Oreshnik’ Missile Work?,” Reuters video, November 28, 2024,
https://www.youtube.com/watch?v=pYKDNSYw1NQ
13    US Military Posture Hearing (statement of Gen. Cavoli)
14    “Ukraine War: Putin Confirms First Nuclear Weapons Moved to Belarus,” BBC, June
17, 2023, https://www.bbc.com/news/world-europe-65932700; and Associated Press,
“Belarus Has Dozens of Russian Nuclear Weapons and Is Ready for Its Newest Missile, Its
Leader Says,” via ABC News, December 10, 2024, https://abcnews.go.com/International/
wireStory/belarus-dozens-russian-nuclear-weapons-ready-newest-missile-116640354
.
15    “Defense Expenditures of NATO Countries (2014-2024),” Press Release, NATO Public
Diplomacy Division, June 12, 2024, 7, https://www.nato.int/cps/is/natohq/topics_49198.htm
16    “Defense Expenditures of NATO Countries (2014-2024)
17    Pavel Luzin, “Russia Releases Proposed Military Budget for 2025,” Eurasia Daily Monitor
21, no. 134, Jamestown Foundation, October 3, 2024, https://jamestown.org/program/
russia-releases-proposed-military-budget-for-2025/
18    These core objectives are derived in significant part from the writings of Stephen E.
Biegun and Ambassador Alexander Vershbow. Biegun calls for “a new Russia policy
for the United States…built around three goals: defeat, deter, and contain.” See: https://
vandenbergcoalition.org/wp-content/uploads/2024/11/8_The-Path-Forward-Beigun.pdf

published November 21, 2024. See also: Alexander Vershbow, “Russia Policy After the
War: A New Strategy of Containment,” New Atlanticist, Atlantic Council blog, February 22,
2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/russia-policy-after-the-war-anew-strategy-of-containment/
19    See the NATO-Russia Founding Act, “Founding Act on Mutual Relations, Cooperation
and Security between NATO and the Russian Federation,” NATO, May 27, 1997, https://
www.nato.int/cps/en/natolive/official_texts_25468.htm
20    NATO-Russia Founding Act.
21    Jaganath Sankaran, “The United States’ European Phased Adaptive Missile Defense
System,” RAND Corporation, February 13, 2015, https://www.rand.org/pubs/research_
reports/RR957.html
22    For an excellent proposal for a Nordic-led Arctic joint expeditionary force, see Ryan
R. Duffy et al., “More NATO in the Arctic Could Free the United States Up to Focus on
China,” War on the Rocks, November 21, 2024, https://warontherocks.com/2024/11/morenato-in-the-arctic-could-free-the-united-states-up-to-focus-on-china/
23    See Seb Starcevic, “Russian Warship Fired Warning Shot at Norwegian Fishing Boat,”
Politico, September 24, 2024, https://www.politico.eu/article/russia-warship-chaseaway-norway-fishing-vessel/; and George Wright, “Russia Removal of Border Markers
‘Unacceptable’ – EU,” BBC, May 24, 2024, https://www.bbc.com/news/articles/
c899844ypj2o

The post Issue brief: A NATO strategy for countering Russia appeared first on Atlantic Council.

]]>
2025 Washington, DC Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2025-washington-dc-cyber-9-12-strategy-challenge/ Tue, 18 Feb 2025 16:13:08 +0000 https://www.atlanticcouncil.org/?p=826056 The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the fourteenth annual Cyber 9/12 Strategy Challenge in Washington, DC on March 14-15, 2025. This event will be held in a hybrid format, meaning teams are welcome to attend either virtually via Zoom, or […]

The post 2025 Washington, DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

]]>

The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the fourteenth annual Cyber 9/12 Strategy Challenge in Washington, DC on March 14-15, 2025. This event will be held in a hybrid format, meaning teams are welcome to attend either virtually via Zoom, or in-person at American University’s Washington College of Law. The agenda and format will look very similar to past Cyber 9/12 Challenges, except that it will be held in a hybrid format. Plenary sessions will be livestreamed via Zoom.

Held in partnership with:

Frequently Asked QuestionsGeneral

What can our team expect when competing in the 2025 Washington, DC Cyber 9/12 Strategy Challenge?

Step 1: Once your team has been accepted, you’ll receive Intelligence Report I a month before the competition.

Step 2: Your team will work together to prepare a 2-page Written Brief, which provides a concise assessment of the situation, addresses the potential impacts and risks, and discusses the implications of the cyber incident. The Written Brief should also describe policy considerations for different potential courses of action, weighing the advantages and disadvantages of proposed options, and should provide a timeline for completing these courses of action.

Step 3: Your team will submit your Written Brief. These Written Briefs will then be graded by the Cyber 9/12 Scenario team, and this score will make up 50% of your total Day 1 score.

Step 4: Your team will prepare your Day 1 Decision Document and your Oral Briefing. Your Decision Document is a 1-page document designed to outline your team’s response options, decision process, and recommendations. Your Day 1 Oral Briefing is a ten-minute presentation outlining your impact and risk assessment, as well as your suggested course of action. Your team will deliver this presentation to a panel of expert judges playing the role of the US National Security Council.

Step 5: Your team will submit your Day 1 Decision Document ahead of the competition, and the competition organizers will print sufficient copies for the judges. In a change from previous years, teams may not print their own Decision Documents and provide them to the judges.

Step 6: Arrive at the competition ready to network, learn, and compete! Remember to check the Cyber 9/12 Linktree regularly for agenda and schedule updates!

What does a competition round look like?

How do we advance to compete on Day 2?

Your score on Day 1 is comprised of your Written Brief score (50%) and your Oral Briefing score (50%). If your team scores in the top 50% of teams on Day 1, you will advance to the Semi-Final Rounds on Day 2.

What changes on Day 2?

The competition format remains the same. Intelligence Report II will be issued at the end of Day 1, and your team will need to prepare a new Oral Briefing and submit a new Decision Document before the start of the Semi-Final round. Teams will not prepare a Written Brief for the Semi-Final round.

On what criteria will our team be graded?

Teams will be graded against a rubric in the following 5 categories:

  1. Identification of Key Issues
  2. Understanding of Cyber Policy
  3. Policy Response Option – Analysis and Selected Option
  4. Structure and Communication
  5. Originality and Creativity

The panel of expert judges observing your team’s briefing will each score your team between 1-4 in each of these categories. Your Oral Presentation score will be the average score from your panel of judges.

How does advancing through the competition work in a hybrid format? 

After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

Where can our team find the rules, the grading rubric, past examples, and more information to prepare for this Cyber 9/12 Strategy Challenge?

Teams can find the Cyber 9/12 rules, the grading rubric, past examples of scenarios and playbooks, and much more on our “Preparation Materials” page here.

Frequently Asked QuestionsIn-person

Where will the event be held in-person? 

For participants attending in-person, the Cyber 9/12 Strategy Challenge will be held at American University’s Washington College of Law (WCL).

What time will the event start and finish? 

While the final schedule has yet to be finalized, participants will be expected at American University WCL at 8:00am on Day 1, and the competition will run until approximately 5:00pm, with an evening reception at approximately 6:30pm. Day 2 will commence at approximately 9:00am, and will finish at approximately 5:30pm. The organizing team reserves the right to modify the above timing. The official schedule of events will be distributed to teams in advance of the event and will be available on the Cyber 9/12 Linktree. All times are EST. 

Can teams observe other teams’ presentations?

Yes. The Cyber 9/12 Strategy Challenge is an educational experience first and foremost. In-person competing teams may observe the presentations of other in-person competing teams, as well as the Q&A portion. Teams may not observe the feedback portion.

Note: In-person teams may only observe other in-person teams if their is seating available in the competition room. Observing teams must stay seated for entire presentation and Q&A portions, and may not enter or exit the room once the competition round has started.

Can teams who are eliminated on Day 1 still participate in Day 2 events? 

Yes! All teams are welcome at all of the side-programming events. We strongly encourage teams eliminated on Day 1 to attend the competition on Day 2. There will be side-programming events such as Careers Talks, Resume Workshops, and other fun, cyber-related activities, including an opportunity for some eliminated teams to practice their briefing at the Cyber 9/12 Strategy Challenge: Presentation Skills and Policy Options session. See the Cyber 9/12 Linktree in the lead up to the event to see the full schedule of event.

Will meals be included for in-person attendees?

Yes, breakfast and lunch will be provided for all participants on both days. Light refreshments & finger foods will be provided at the evening reception on Day 1.

What should I pack/bring to a Cyber 9/12 event?

At the event: Name tags will be provided to all participants, judges, and staff at registration on March 15. We ask you to wear these name tags throughout the duration of the competition. Name tags will be printed using the exact first and last name provided upon registration.

Dress Code: We recommend that students dress in business casual attire as teams will be conducting briefings. You can learn more about business casual attire here.

Electronic Devices: Cell phones, laptops, and wearable tech will not be used during presentations but we recommend teams bring their laptops as they will need to draft their Decision Documents for Day 2 and conduct research. Please refer to the competition rules for additional information and for our policy on technology accommodations.

Presentation Aids: Teams may not use any visual aid other than their Decision Documents in their oral policy brief, including but not limited to slideshow presentations, additional handouts, binders, or folders.

How do we get to American University?

American University is on the DC Metro Red line. Metro service from both Dulles International Airport (IAD) and Reagan National Airport (DCA) connect with the Metro Red Line at Metro Center. 

Frequently Asked QuestionsVirtual

How do I log in to the virtual sessions? 

Your team and coach will be sent an invitation to your round’s Zoom meeting in the week leading up to the event using the emails provided during registration

How will I know where to log in, and where is the schedule? 

For competition rounds you will receive an email invitation with your Zoom link. For all plenary sessions and for the team room assignments and agenda please check the Cyber 9/12 Linktree. 

How are the virtual sessions being run? 

Virtual sessions will be run very close to the traditional competition structure and rules. Each Zoom meeting will be managed by a timekeeper. This timekeeper will ensure that each team and judge logs on to the conference line and will manage the competition round.

A 10-minute break is scheduled before the start of the next round. Each round has been allotted several minutes of transition time for technical difficulties and troubleshooting. 

Can teams observe other teams’ presentations?

No. Due to safety and security precautions, virtual teams may not observe other virtual or in-person teams.

What do I need to log into a virtual session?  

Your team will need a computer (recommended), tablet, or smartphone with a webcam, microphone, and speaker or headphones. 

Your team will be provided with a link to the Zoom conference for each competition round your team is scheduled for. If you have any questions about the software, please see Zoom’s internal guide here. 

Will my team get scored the same way on Zoom as in-person? 

Yes, the rules of the competition remain the same, including the rubric for scoring. You can see the rules and the grading rubric here.

How does advancing through the competition work in a hybrid format? 

After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

How will my team receive Intelligence Report 2 and 3? 

We will send out the Intelligence Reports via email to all qualifying teams. 

How will the final round be run? 

The final round will be run identically to the traditional final round format, except that the judges will be in-person. The virtual team will follow the standard final round format as outlined in the rules. After finishing the competition round, the virtual finalist team(s) will then join the plenary session webinar for the final round and watch the remaining finalist teams present.

Zoom

What is Zoom? 

Zoom is a free video conferencing application. We will be using it to host the competition remotely. 

Do I need a Zoom account? 

You do not have to have an account BUT we recommend that you do and download the desktop application to participate in the Cyber 9/12 Strategy Challenge. 

Please use your real name to register so we can track participation. A free Zoom account is all that is necessary to participate.  

What if I don’t have Zoom? 

Zoom is available for download online. You can also access Zoom conferences through a browser without downloading any software or registering.  

How do I use Zoom on my Mac? Windows? Linux Machine? 

Follow the instructions here and here to get started. Please use the same email you registered with for your Zoom to sign up.

Can I use Zoom on my mobile device? 

Yes, but we recommend that you use a computer or tablet 

Can each member of my team call into the Zoom conference line independently for our competition round? 

Yes. Please see the troubleshooting section below for tips if multiple team members will be joining the competition round on independent devices in the same room.  

Can other teams listen-in to my team’s session? 

Zoom links to competition sessions are team specific—only your team, your coach and your judges will have access to a session and sessions will be monitored once all participants have joined. If an observer has requested to watch your team‘s presentation, your timekeeper will notify you at the start of your round.

Staff will be monitoring all sessions and all meetings will have a waiting room enabled in order to monitor attendance. Any team member or coach in a session they are not assigned to will be removed and disqualified. 

Troubleshooting

What if my team loses internet connection or is disconnected during the competition? 

If your team experiences a loss of internet connection, we recommend following Zoom’s troubleshooting steps listed here. Please remain in contact with your timekeeper.

If your team is unable to rejoin the Zoom conference – please use one of the several dial-in lines included in the Zoom invitation.  

What if there is an audio echo or other audio feedback issue? 

There are three possible causes for audio malfunction during a meeting: 

  1. A participant has both the computer and telephone audio active. 
  2. A participant computer and telephone speakers are too close together.  
  3. Multiple participant computers with active audio are in the same room.  

If this is the case, please disconnect the computer’s audio from other devices, and leave the Zoom conference on one computer. To avoid audio feedback issues, we recommend each team use one computer to compete. 

What if I am unable to use a video conference, can my team still participate? 

Zoom has dial-in lines associated with each Zoom conference event and you are able to call directly using any landline or mobile phone. 

We do not recommend choosing voice only lines unless absolutely necessary.

Other

Will there be keynotes or any networking activity remotely? 

Keynotes will continue as reflected on our agenda and will be broadcast with links to be shared with competitors the day before the event. Some side-programming events may not be available virtually. We apologize for the inconvenience.

We also encourage competitors and judges to join the Cyber 9/12 Strategy Challenge Alumni Network on LinkedIn where we regularly share job and internship postings, as well as information about events and how to be a part of the cyber policy community worldwide.

How should I prepare for a Cyber 9/12?

Check out our preparation materials, which includes past scenarios, playbooks including award-winning policy recommendations and a starter pack for teams that includes templates for requesting coaching support or funding.

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post 2025 Washington, DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

]]>
Tran cited in paper on Chinese technological influence published by the Carnegie Endowment for International Peace https://www.atlanticcouncil.org/insight-impact/in-the-news/tran-cited-in-paper-on-chinese-technological-influence-published-by-the-carnegie-endowment-for-international-peace/ Fri, 14 Feb 2025 19:26:34 +0000 https://www.atlanticcouncil.org/?p=826002 Read the full paper here

The post Tran cited in paper on Chinese technological influence published by the Carnegie Endowment for International Peace appeared first on Atlantic Council.

]]>
Read the full paper here

The post Tran cited in paper on Chinese technological influence published by the Carnegie Endowment for International Peace appeared first on Atlantic Council.

]]>
McNamara published in Pell Center series on espionage of US intellectual property https://www.atlanticcouncil.org/insight-impact/in-the-news/mcnamara-published-in-pell-center-on-china-theft-of-us-intellectual-property/ Wed, 05 Feb 2025 17:00:00 +0000 https://www.atlanticcouncil.org/?p=824729 On February 5th, Whitney McNamara, nonresident senior fellow at Forward Defense, published a piece in the Pell Center’s series on China’s widespread espionage targeting critical intellectual property threatens US economic and military power.

The post McNamara published in Pell Center series on espionage of US intellectual property appeared first on Atlantic Council.

]]>

On February 5, Whitney McNamara, nonresident senior fellow at Forward Defense, published a piece in the Pell Center’s series, “The Project on U.S. – China Technology Competition” entitled, “For an Enduring Advantage, Accelerate Adoption over Stymieing Theft.” McNamara discusses how China’s widespread espionage targeting critical intellectual property threatens US economic and military power and emphasizes the vital role that the Department of Defense should be playing in regard to a solution. 

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post McNamara published in Pell Center series on espionage of US intellectual property appeared first on Atlantic Council.

]]>
Global China Hub nonresident fellow Dakota Cary in the Washington Post https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-in-washington-post/ Sat, 04 Jan 2025 03:42:00 +0000 https://www.atlanticcouncil.org/?p=816451 On January 3rd, 2025, Global China Hub nonresident fellow Dakota Cary spoke to the Washington Post about Beijing Integrity Tech, the cybersecurity company linked to the Flax Typhoon attacks.

The post Global China Hub nonresident fellow Dakota Cary in the Washington Post appeared first on Atlantic Council.

]]>

On January 3rd, 2025, Global China Hub nonresident fellow Dakota Cary spoke to the Washington Post about Beijing Integrity Tech, the cybersecurity company linked to the Flax Typhoon attacks.

The post Global China Hub nonresident fellow Dakota Cary in the Washington Post appeared first on Atlantic Council.

]]>
In it to win it: Understanding cyber policy through a simulated crisis  https://www.atlanticcouncil.org/content-series/capacity-building-initiative/in-it-to-win-it-understanding-cyber-policy-through-a-simulated-crisis/ Fri, 20 Dec 2024 19:34:00 +0000 https://www.atlanticcouncil.org/?p=817790 Competitors and judges from the Cape Town Cyber 9/12 Strategy Challenge share their perspectives on the competition's impact on the African cybersecurity landscape.

The post In it to win it: Understanding cyber policy through a simulated crisis  appeared first on Atlantic Council.

]]>
On October 7-8, 2024 the Cyber Statecraft Initiative held its fourth annual Cyber 9/12 competition in Cape Town, South Africa in partnership with the US Department of State’s Bureau of Cyberspace and Digital Policy, the University of Cape Town, and the MITRE Corporation. The competition included teams of students representing colleges and universities from across the African continent, including South Africa, Eswatini, Namibia, Botswana, Malawi, Senegal and Ghana. In groups of three or four students, teams responded to a fictional scenario focused on an organized criminal group targeting the Port of Cape Town with a ransomware attack to slow port operations that quickly spread to other ports around the world, impacting international trade and public safety. 

In recent years, governments, industry, and civil society have come to the realization that technical solutions alone are insufficient to stymie evolving cyber threats and that a capable workforce who can smoothly integrate policy and technical responses is imperative. Furthermore, there’s a recognition that Africa will be home to a significant portion of the future global workforce, highlighting a need for investment in cyber capacity building and the development of diverse skillsets that can support the protection of critical infrastructure, foster collaboration on cybersecurity issues with allies and partners, and develop policies that promote the development of more secure technologies.   

When the Atlantic Council established the Cyber 9/12 Strategy Challenge in 2012, the intent was not just to train tomorrow’s cybersecurity leaders, but also to broaden the pipeline of students considering a career in cybersecurity, connect students with potential mentors and employers, and increase connectivity between the technical and policy communities. 

To learn more about the ways scenario exercises can apply to African cybersecurity challenges and their impact on emerging cybersecurity policy leaders on the continent, we spoke to seven participants from the 2024 Cape Town Cyber 9/12 Strategy Challenge: 

Why did your team decide to compete in the Cyber 9/12 Strategy Challenge? What did you expect when signing up to compete in a policy-focused scenario exercise?

The Cyber 9/12 Strategy Challenge challenges students to engage in crisis management and develop strategies to bolster cyber resilience and the protection of critical infrastructures after a major attack. This not only aligned with our interests but is also crucial for developing countries, especially African nations like Senegal.

While researching the competition, we noticed that many prestigious universities from the United States and Europe had competed in Cyber 9/12, which motivated us to sign up as young researchers with interests in deploying security solutions, identifying vulnerabilities, and crisis management. Some of our team members had previously participated in the initial rounds of NIST’s Post-Quantum Cryptography competitions in 2017 and 2022, but this was the first time we had the opportunity to compete in an international cybersecurity competition.

Several aspects have motivated our team’s participation, including:

  • Challenging ourselves as young researchers who have exclusively studied in Senegal by competing internationally;
  • Seizing the opportunity to showcase our creativity and team spirit through the challenges presented in the scenario;
  • Enhancing our skills by competing against high-level teams from around the world;
  • Increasing the visibility of Francophone cyber talent in the international cybersecurity community;
  • Connecting with experts, professionals, entrepreneurs, and passionate young individuals from across the globe;
  • Enjoying the experience of participating in a friendly competition in a field we are passionate about: cybersecurity;
  • Exploring new places, such as the iconic country of South Africa;
  • Gaining credibility and highlighting the need for African countries and the private sector to invest in training, research, and implementing policies and partnerships in cybersecurity, aiming to protect internet users, personal data in cyberspace, and critical infrastructure.

EagleSen Team of Cheikh Anta Diop University, Senegal

How did/does Cyber 9/12 inform your career goals in both cybersecurity and policy?

The Cyber 9/12 Strategy Challenge was an invaluable opportunity for each of us, offering more than just exposure to cybersecurity and policy. Our aim in applying was to learn as much as we could, and the experience showed us the critical role that strategy and policy play in mitigating cyber threats and highlighted the importance of understanding both the technical and policy aspects of cybersecurity. It helped align our individual career paths within these fields in meaningful ways.

For Emmanuella, Ama, and Eli—with our backgrounds in Computer Science—the challenge emphasized how critical it is to design secure systems that are both resilient and adaptable to evolving cyber threats. We’ve become more committed to exploring careers where we can bridge the gap between technical security solutions and policy implementation. For Jessica, as an AI student, the competition underscored the growing intersection between artificial intelligence and cybersecurity; how AI can be leveraged for threat detection, and even support policy decisions. This has encouraged Jessica to explore roles where AI can be applied to bolster cybersecurity measures and support data-driven policymaking.

Throughout the competition, combining our unique backgrounds in AI and Computer Science gave us a durable foundation for tackling complex cybersecurity challenges from different perspectives. This collaboration proved invaluable and reinforced our belief that working at the intersection of these disciplines is key to developing innovative solutions and robust cybersecurity strategies.

Cyber Legends of the Academic City University College, Ghana

Who evaluated your scenario response and what kinds of questions did they pose for you to respond to? What feedback did you take from the experience?

On the first day, Dr. Kester Quist-Aphetsi, Adam Hantman, Rachel Adams, and Dr. Tendani Chimboza evaluated our team’s briefing and policy recommendations. On the second day, Jo Gill, Dr, Kester Quist-Aphetsi, Aisha Kamara, and Adam Hantman evaluated our updated briefing and policy recommendations. The judges’ questions focused on how the cybersecurity recommendations our team had developed may affect other sectors. More specifically, we were challenged to consider the socio-economic ramifications of our suggested responses, among other things. Their feedback really helped us hone our approach to incident response. The feedback underscored how crucial it is to think multi-dimensionally, not being restricted to technicalities, as cybersecurity is intersectional.

Cryptic Crusaders of the Malawi University of science and Technology, Malawi

How did your team decide to approach this year’s scenario and balance your responses to the different issues presented?

Our team’s approach to this year’s competition began with an in-depth analysis of the scenario, aiming to identify both prominent and subtle issues that could shape the incident’s trajectory. This comprehensive view helped us grasp the full scope of the challenge. Each member then focused on researching specific aspects of the scenario, concentrating on solutions that would meet both the immediate needs of the incident and support sustainable, long-term mitigations. This two-pronged approach enabled us to propose solutions that balanced both immediate action with preventative strategies.

To develop well-rounded responses, we emphasized a holistic perspective, examining the problem from multiple angles to address the technical, operational, economic, and policy facets. This approach involved accounting for resource allocation, system vulnerabilities, and the impact of proposed solutions on operational efficiency, state security, and diplomacy in assessing the incident to provide effective recommendations. By considering these factors, we aimed to ensure our recommendations were feasible, adaptable, and capable.

Our team’s adaptability played a critical role in addressing the dynamic nature of the challenge. By encouraging each team member to analyze the scenario through their specific lens, we were able to identify gaps in each other’s findings, allowing us to refine our solutions collaboratively. Time management was crucial to our approach, and our focus on efficiency allowed us to implement our strategy effectively within the limited timeframe. This collaborative, time-sensitive approach strengthened our team’s responses and contributed to our overall success in tackling the challenge.

Trojan Turtles of Namibia University of science and Technology, Namibia

Some of your team members have competed in other Cyber 9/12 competitions—how did you leverage those past experiences to inform how you wanted to prepare for the Cape Town competition?

Drawing from our experience at previous Cyber 9/12 competitions, we’ve refined our approach by understanding the competition’s structure and timing, allowing us to manage our resources more effectively. These experiences have also emphasized the importance of assigning clear team roles, ensuring that each member contributes based on their strengths in policy analysis or technical problem-solving.

While past experiences have been highly informative and prepared us to be agile in responding to a wide range of cyber scenarios, the unique perspectives of different judges can still make it challenging for a team to anticipate their responses. It can be discouraging when a judge disagrees with our recommended approach. After each competition, our post-mortem analysis helps us assess our performance, as well as sharpen our decision-making and teamwork, helping us make strategic choices and maintain composure under pressure—essential lessons that guide our preparation for upcoming competitions.

Cybertrons of the University of Cape Town, South Africa

What kinds of lessons might you apply from your Cyber 9/12 experience if you found yourself in a real cyber crisis? How so?

What I learned from listening to and critiquing students’ briefings and policy responses:

  • Leverage multidisciplinary teams to analyze and solve cyber issues;
  • Structure the problem to ensure that all aspects are addressed (e.g. the scenario presented challenges in regional relations, legal, policy, cybersecurity, logistical, and data management issues);
  • Analyze risks and prioritize solutions to address the highest risk issues first, such as the restoration of port operations, neutralization of internal threats, cooperation with affected regional partners, and responsible public communication, in addition to the usual cybersecurity response of discover, isolate, replace and/or repair, restore, defend, and deter;
  • Ensure that, beyond the solution of the immediate challenges, long-term lessons are also learned, and local and regional policies, strategies, cybersecurity, Information and Communications Technology (ICT), institutional arrangements, and capacity-building activities are identified, designed and implemented;
  • Implement cooperation and communication frameworks to ensure that related institutions adequately resolve aspects of the cyber issue that fall within their mandate, whether it be law enforcement, data regulators, ICT (software, hardware etc.) manufacturers, vendors, and integrators;
  • For developing countries, the ICT sector may have gaps, where legacy systems which are inadequately secured and poorly upgraded to align with the emerging ICT context, persist in various sectors. Regulators should keep tabs on such products and motivate their manufacturers to harden their products against emerging threats.

Dr. Kate Getao, Senior Advisor at Diplo Foundation

After seeing teams from across the region respond to the scenario presented to them, what do you see stakeholders doing well when it comes to cyber education and workforce development? Where do we have room to improve?

My impression as a judge in the Cape Town Cyber 9/12 Strategy Challenge is that stakeholders are doing a great job at developing analytical and presentation skills on cybersecurity issues and mitigation. Stakeholders are also doing a good job developing the cyber workforce to translate cyber incidents into policy and strategic responses.

My observation at the Cape Town competition is that the focus for most of the teams was on the identification of technical issues in the scenario and less on the the policy and strategy issues presented. There’s an opportunity here to increase our support for teams in developing their understanding cyber policy and strategy, and how to translate technical issues or occurrences into policy options to avert future crises.

Eric Akumiah, Africa Regional Liaison at Forum of Incident Response and Security Teams

As a former competitor and now, a judge, of the Cyber 9/12 Strategy Challenge, in what ways do you think the competition prepares students for careers in cybersecurity? As a practitioner in the field yourself, are there any specific skills that have translated well to your professional career?

At every Cyber 9/12 competition I’ve attended, whether as a competitor or a judge, teams arrive ready to dive deep into technical responses to the scenario designed by the Atlantic Council. There’s talk of patch rollouts, potential exploits, and remediation plans. As the competitors are grilled on their responses, they begin to realize that Cyber 9/12 is, in fact, a cyber policy and strategy challenge. Their lens cracks and their assumptions crumble as they begin to understand that the landscape of cybersecurity is much vaster than they realized; that cybersecurity impacts and is impacted by a host of different issues. It’s profound seeing that epiphany come in real time and it’s amazing how quickly the best competitors can change their approach. I can’t overemphasize the importance of that lesson in the practice of cybersecurity; and it’s one every Cyber 9/12 competitor walks away with.

Ben Ballard, Cybersecurity Engineer at MITRE

Safa Shahwan Edwards (she/her) is the director of Capacity Building and Communities within the Cyber Statecraft Initiative, part of the the Atlantic Council Tech Programs.
Emerson Johnston (she/her) is a young global professional with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs.


The Cyber 9/12 Strategy Challenge is a one-of-a-kind cyber competition designed to provide students from across academic disciplines with a deeper understanding of the policy and strategy challenges associated with management of tradeoffs during a cyber crisis.

The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post In it to win it: Understanding cyber policy through a simulated crisis  appeared first on Atlantic Council.

]]>
The eight body problem: Exploring the implications of Salt Typhoon https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/the-eight-body-problem-exploring-the-implications-of-salt-typhoon/ Fri, 20 Dec 2024 00:57:11 +0000 https://www.atlanticcouncil.org/?p=818025 The Cyber Statecraft community and friends offer their thoughts on the implications of the Salt Typhoon campaign based on what is known to date, what the campaign says about the last four years of cybersecurity policy, and where policymakers should focus in the months ahead.

The post The eight body problem: Exploring the implications of Salt Typhoon appeared first on Atlantic Council.

]]>
The leaks and press releases (hard to call them announcements) that unveiled the Salt Typhoon campaign to the public opened a window into a staggering, months-long intelligence operation breaching at least eight US telecommunications providers. Through this access, it’s reported that the adversary at a minimum targeted senior Democratic and Republican campaign officials and candidates along with the wiretap request system mandated by the Communications Assistance for Law Enforcement Act (CALEA). The compromise portends to be an astounding structural compromise of the basic information and communications networks relied upon by hundreds of millions of Americans.  

There is much to unpack about this incident and the delicate balancing act required of policymakers seeking to communicate about a months-long campaign undermining critical US information infrastructure in an effort that limited public evidence strongly suggests is espionage and counterintelligence activity backed by the Chinese government, and for which the clearest mitigation so far circulated is for many people to install end-to-end encrypted software on their own phones.   

The Cyber Statecraft team and friends offer seven thoughts on the implications of the Salt Typhoon campaign based on what is known to date, what the campaign says about the last four years of cybersecurity policy, and where policymakers should focus in the months ahead. The gist from this group is that we should have known better, we did know it was possible, and we shouldn’t be surprised that China prioritized this target. 

Respondents from the ACTech Program: Nitansha Bansal, Sara Ann Brackett, Trey Herr, Emma Schroeder, Stewart Scott, Nikita Shah, Kenton Thibaut; and Marc Rogers

Why should you, our readers, care? 

Telecommunications and internet service providers like Lumen, Charter, Verizon, and AT&T are tightly interwoven into our daily lives. We call our families, manage our finances, and talk with our doctors through this infrastructure every day. Trust between Americans and these networks is fundamental.   

Not satisfied only to compromise what we hope to trust, a key focus of adversaries’ effort was the surveillance and wiretap request system used by US law enforcement. This should provoke a conversation about the premise of “nobody but us” access and how much it might reasonably be guaranteed. Regardless of instincts about who and what was likely targeted, if someone steals a master key to an apartment complex—even if their motivations were to access the package storage room—you would want to deeply scrutinize the number of master keys in circulation and their stewards. 

To reach its strategic targets, the adversary compromised critical infrastructure that Americans rely on every day, making it quite clear in the process that this industry was not ready to protect individuals from modern cyber threats. That is something everyone should care about. 

Emma Schroeder, Associate Director, Cyber Statecraft Initiative, Atlantic Council

Should we be surprised, given the industry? Was this a new vector? Or was this a well understood risk and the telecommunications industry failed to prepare?

No, I don’t think we should be surprised. With the exception of how effectively the threat actors executed the campaign at scale, nothing here was significantly novel. The threat actors abused the well understood technical debt of a sector in which acquisitions typically lead to smaller, older carriers becoming fossilized inside larger carriers. At the same time, the pace of evolution means that to maintain interoperability newer, more secure technologies end up living alongside or being layered on top of older less secure ones. 

The adversaries in this operation travelled interconnected networks, took advantage of inadequate defenses and monitoring, broke into vulnerable edge devices, and made configuration changes to maintain persistence—risks that have been well understood by all industries for decades. This is the reason that it was not particularly surprising to see that T-Mobile appears to have fared better than other US carriers after making a series of common-sense cyber hygiene improvements in the face of FCC’s September consent decree

Marc Rogers, Co-Founder and Chief Technology Officer, nbhd.ai  

Should we be surprised, given the actor?

No. 

There shouldn’t be surprise at the gravity of this incident all-round. The capability of China’s cyber actors is well understood by industry and assessment communities around the world, whether in terms of their sophistication, stealth, scale, or ability to both anticipate and successfully penetrate strategic targets over years-long campaigns. Moreover, reporting that details the targeting of US telecommunications systems by Chinese cyber actors goes back to at least 2018. The ability of Chinese cyber actors to compromise Western telecommunications networks has also been a significant concern in the Huawei debate, that the integration of such systems confers a high degree of risk and vulnerability. It therefore should come as no surprise that US telecommunications networks were able to be so deeply compromised years later. 

Nikita Shah, Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council

Does it matter exactly who did this and where they might sit in the Chinese bureaucracy? 

It is still too early to discern with certainty the full intent behind this incident. However, given the sophistication of the adversary and its tactics, techniques, and procedures (TTPs), it is possible that this is an organization working for a Chinese state institution, such as the Ministry of State Security or People’s Liberation Army. The Chinese cyber ecosystem is becoming increasingly complex and sprawling, fed in part of by the range of private-sector suppliers that support Chinese state cyber institutions, complicating both attribution and meaningful distinction of effort.  

The incident also points to a degree of sophistication in both how targeted and also how sprawling it was. Initial reporting identified a compromise in internet addressing systems and risks to “government and military personnel”. Follow on coverage pointed to the compromise of the wiretap system used by the US Department of Justice for sensitive national security cases, suggesting a significant counterintelligence operation by Chinese cyber actors. However, the later revelation that political figures had also been targeted suggested that this one campaign had a significantly broader strategic intent. On top of that, the unusual advisory by the Cybersecurity and Infrastructure Security Agency (CISA) then suggested an even more serious— and alarming—compromise of US telecommunications networks, by giving as its first recommendation the advice that high-profile individuals should shift to using only end-to-end encrypted communications. Altogether, this bears the markings of a highly sophisticated actor willing to deeply and thoroughly compromise the backbone of US digital architecture to capture extremely sensitive information. 

Much of this is speculation based on what we know of previous attributions and disclosures. The fact that we do not know a lot for certain right now may signal that the US government is still assessing the scale of the compromise and its fallout and is weighing possible response options. 

Nikita Shah, Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council
Kenton Thibaut, Resident Fellow, China, Democracy and Tech Initiative, Atlantic Council

End-to-end encrypted communications are great, but why is the burden to remediate on individuals?

The 2023 US National Cybersecurity Strategy (NCS) set a bold vision to rebalance responsibility for security in the cyber ecosystem, arguing that “end users bear too great a burden for mitigating cyber risk.” Fast forward to 2024, and the chief guidance in response to Salt Typhoon—from several of the same agencies responsible for implementing the NCS—puts the onus on users to mitigate a major failure by telecommunications providers. The recommendation to only use end-to-end encrypted communications has received widespread attention, engendering an abject lack of confidence in providers’ security in the long term. 

It might be that the 2023 strategy just needs more time, and this is part of a diminishing pattern. It might be that long-range ideals need to be put on pause in the middle of a crisis—that we shouldn’t debate housing codes amidst a five-alarm fire. More worryingly, it might be that the vulnerabilities abused by the Salt Typhoon attackers are not easily fixed, whether by virtue of the age of the target networks or innate security flaws in the standards that govern them, that there’s little hope for change from the telecommunications providers themselves, and so policymakers are calling on users to step up into the breach. 

Stewart Scott, Associate Director, Cyber Statecraft Initiative, Atlantic Council

What might policymakers have done differently if this had been an access denial operation instead of an intelligence gathering campaign?

As bad as this incident appears, at least in public reporting, it has not been revealed to involve disruption or damage to these networks. It’s still too early to determine what the depth of intrusion would have enabled, but the immediate incident response would have been entirely different if the campaign had involved service denial. First,  networks going down would have made it more difficult for the US government or companies to communicate with the public. Second, the response would have been sharply focused on the immediate recovery of the networks and getting critical services back up and running rather than what we have seen so far in shifting high-profile individuals towards end-to-end encrypted messaging platforms. 

Regardless of intent, policymakers should see this as (yet another) wake-up call to get serious about preparedness and resilience. This incident is a reminder that the lessons from the 2019 decision to remove Huawei from telecommunications networks due to cybersecurity concerns have still not been implemented in terms of basic cyber hygiene, removing legacy hardware, or defending the most critical assets against the threat. 

Sara Ann Brackett, Assistant Director, Cyber Statecraft Initiative
Nikita Shah, Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council

Are you, our readers, more secure in cyberspace than you were four years ago?

Though there appear to have been significant shifts in security practices throughout industry in the past four years, there is no macro-scale empirical evidence to suggest that cyberspace has grown more secure insofar as fewer bad things have happened. Sandwiching the last four years of policymaking between a multi-year espionage campaign targeting American cloud services and software providers (SolarWinds/Sunburst, 2020) and then another targeting telecommunications and internet service providers (Salt Typhoon, 2024) does not lend much credence to claims of success in US cybersecurity policy. 

There are the mildest indications that you are less secure—ransomware attacks against Microsoft customers have tripled in the past year and that’s bad  and recent projections from US government claim cybercrime will cost $23 trillion globally in 2027 (though these figures are hard to draw conclusions from). Projections for financial harms caused by cybercrime are generally in consensus that damages will grow, with one estimate pegging a 15 percent annual rate of growth. Generally though, there is little rigorous, broad empirical evidence of any kind with regards to cybersecurity policy outcomes. Anecdotally, major compromises have not abated in a noticeable way, though they haven’t grown more frequent or severe, at least subjectively, either. 

More secure? Maybe not. 

Trey Herr, Senior Director, Cyber Statecraft Initiative
Stewart Scott
, Associate Director, Cyber Statecraft Initiative, Atlantic Council


The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The eight body problem: Exploring the implications of Salt Typhoon appeared first on Atlantic Council.

]]>
How NATO learns and adapts to modern warfare https://www.atlanticcouncil.org/content-series/ac-turkey-defense-journal/how-nato-learns-and-adapts-to-modern-warfare/ Tue, 03 Dec 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=807268 One of the main strengths of NATO is it's ability to continuously develop and improve based on the lessons learned by the complexities of modern conflicts.

The post How NATO learns and adapts to modern warfare appeared first on Atlantic Council.

]]>
Russia’s illegal annexation of Crimea in 2014 and the full-scale invasion of Ukraine in 2022 have had strategic consequences far beyond the region, showcasing the complexities of modern conflicts, where conventional battles are intertwined with cyber warfare, information operations, and hybrid tactics.

No doubt, Russia’s actions have reshaped the global geopolitical landscape. Yet NATO’s capability to adapt has been central and the basis for its sustained relevance and success as an alliance since its founding in 1949. And now, seventy-five years later, NATO continues to lead in learning and evolving to address emerging challenges in the future operating environment.

As with past conflicts and Russia’s evolving war against Ukraine, NATO’s mechanisms for lessons learned and transformation serve as a critical means to adapt and prepare the Alliance to counter every aggression in the future.

But how does NATO, with thirty-two member nations, learn lessons? While NATO’s internal learning process is informed by its members and their own experiences, the situation in Ukraine now demands the ability to learn lessons from others’ experiences. In short, this external learning process is achieved by Alliance-wide lessons sharing and collecting through a dedicated NATO lessons-learned portal. These national observations and experiences are collected, evaluated, consolidated, and then transformed into actions to be applied in NATO’s activities to transform, adapt, and prepare for the future.

The organization’s military learning and adaptation process is strategically led by Allied Command Transformation (ACT) in the United States in Norfolk, Virginia, with a dedicated subordinate command as the Alliance’s center for enabling and supporting the NATO lessons-learned policy and capability: the Joint Analysis and Lessons Learned Centre (JALLC) in Lisbon, Portugal. By systematically collecting reports from open sources, partners, and allies, and sharing them in the NATO lessons-learned portal, all member nations can benefit. A dedicated analysis team gleans insights from the vast amount of data to enhance NATO’s understanding of Russia’s war against Ukraine, and thus, where applicable, inform and influence the development of new strategies, doctrines, and training programs. Recently, JALLC is also benefiting from inputs delivered by a Ukrainian nongovernmental organization focused on analysis and training.

NATO’s decision to establish the NATO-Ukraine Joint Analysis Training and Evaluation Centre (JATEC) will soon play another crucial role in ensuring that NATO remains informed, agile, adaptable, and effective in addressing contemporary and future security challenges. JATEC thus represents a significant commitment by allies not only to improve the interoperability and effectiveness of Ukrainian forces but also to enhance the Alliance’s capability by learning and applying lessons.

The lessons-learned process is also supported by various national NATO-accredited Centres of Excellence (COE). These COEs, under the coordinating authority of ACT, specialize in various military areas of expertise, such as cyber defense, command and control, air power, medical support, etc.

Altogether, ACT with the JALLC in its overarching role, the contributions by the nations, and the NATO-accredited COEs with their specializations, create a comprehensive system for ensuring lessons are captured and disseminated to operational forces, fostering a culture of continuous improvement within NATO.

The basis of a successful alliance is a common understanding and principles, which are laid out in doctrines. Therefore, doctrine development is a critical component of NATO’s adaptation and transformation process. By continuously updating doctrine based on real-world experiences and lessons learned, NATO ensures that its operational principles remain robust and effective in the face of evolving threats. With regard to Russia’s war in Ukraine, Russia’s use of hybrid warfare tactics, which combine conventional military force with irregular tactics, and cyber and information operations, has prompted improvements in NATO doctrine governing how NATO shares intelligence and counters disinformation campaigns to strengthen NATO’s response toward hybrid warfare tactics.

Furthermore, lessons from Russia’s war against Ukraine underscore the importance of agile, integrated command and control systems capable of coordinating operations across multiple domains: land, sea, air, cyber, and space. NATO needs command and control structures that are flexible, resilient, and capable of rapid decision-making. Advanced technologies such as artificial intelligence and machine learning are being leveraged to enhance shared situational awareness and streamline decision-making processes to maintain an advantage.

Lessons learned will be injected into NATO exercises and training to generate high-fidelity training scenarios allowing NATO forces to “train as they fight.” Besides improving interoperability, certifying NATO forces, and demonstrating NATO’s fighting credibility, NATO exercises also challenge training audiences to face operational dilemmas that reflect the complexities of modern warfare. JALLC reports summarizing lessons from the war in Ukraine are being used by the Joint Force Training Centre (JFTC) and Joint Warfare Centre (JWC) to update and improve NATO exercises. The increased use of drones, private-sector support for military operations, the battle for both cognitive and information superiority, sustainment, and civilian resilience are key features, which have already informed changes in NATO exercises to ensure that NATO forces are better prepared to operate in complex and dynamic environments.

ACT, as the strategic warfare development headquarters, also looks into the future. Studies focus on widely debated topics including, for example, the future operating environment and the future force structure. Other topics include the future of tanks and attack helicopters, small-drone warfare, vulnerabilities of fleets and ports to maritime drones, and the protection of critical infrastructures against long-range strikes.

NATO’s commitment and ability to continuously develop and improve ensures the Alliance’s enduring strength and cohesion. NATO is rapidly incorporating battlefield lessons into the transformation, adaptation, and preparation activities of the Alliance’s forces. ACT is key to this process, ensuring lessons reach operational forces at the speed of relevance.


General Chris Badia is NATO’s Deputy Supreme Allied Commander Transformation.

Explore other issues

The Atlantic Council in Turkey aims to promote and strengthen transatlantic engagement with the region by providing a high-level forum and pursuing programming to address the most important issues on energy, economics, security, and defense.

The post How NATO learns and adapts to modern warfare appeared first on Atlantic Council.

]]>
Seizing the win: Navigating competition and hands-on learning through Cyber 9/12  https://www.atlanticcouncil.org/content-series/capacity-building-initiative/seizing-the-win-navigating-competition-and-hands-on-learning-through-cyber-9-12/ Thu, 21 Nov 2024 00:28:00 +0000 https://www.atlanticcouncil.org/?p=818009 Competitors and judges from the inaugural Cyber 9/12 Strategy Challenge in Costa Rica share their perspectives on how to leverage teamwork and interdisciplinary skills to address tomorrow’s cyber challenges.

The post Seizing the win: Navigating competition and hands-on learning through Cyber 9/12  appeared first on Atlantic Council.

]]>
On July 22-23, 2024, the Cyber Statecraft Initiative held its inaugural Cyber 9/12 competition in San José, Costa Rica in partnership with the US Department of State’s Bureau of Cyberspace and Digital Policy, Universidad Fidélitas, MITRE Corporation, LAC4, and the Organization of American States. The competition included teams of students representing colleges and universities from across Latin America and the Caribbean, including Argentina, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, Mexico, Panama, Peru, Paraguay, and Uruguay. In groups of 3-4 students, teams responded to a fictional scenario focused on a compromise of airside data at Juan Santamaría International Airport, including the personally identifiable information of airport staff and several diplomatic delegations in town for a presidential summit focused on trade and technology. The incident was exacerbated by a separate baggage system error at the airport that contributed to delays in luggage collection and confusion for both passengers and airport staff.

Now more than ever, governments are realizing that technical solutions alone are insufficient to stymie evolving cyber threats and that a capable workforce alongside leaders that can smoothly integrate policy and technical response are imperative. This realization comes alongside a growing willingness to embrace the diverse pathways that can lead to a career in cybersecurity, as well as the varied skillsets and experiences that can support the protection of critical infrastructure, foster collaboration on cybersecurity issues with allies and partners, or develop policies that promote the development of more secure technologies.

That’s why the Atlantic Council established the Cyber 9/12 Strategy Challenge—not just to train tomorrow’s cybersecurity leaders, but also to broaden the pipeline of students considering a career in cybersecurity and offer an immersive learning experience combining technically informed policy challenges where they can try their hand at developing novel solutions during a crisis.

To learn more about the ways scenario exercises can apply to Latin American cybersecurity challenges and their impact on emerging cybersecurity policy leaders in the region, we spoke to seven participants from the 2024 San José Cyber 9/12 Strategy Challenge:

Why did your team decide to compete in the Cyber 9/12 Strategy Challenge? What did you expect when signing up to compete in a policy-focused scenario exercise?

We saw it as an amazing opportunity to experience what dealing with a real-world crisis is like, something we were very excited about as we hadn’t done that before in our lives. We expected it to serve as an opportunity to improve our skills in cybersecurity policy, communication, and team collaboration—and it sure was! The competition challenged us in ways we really hadn’t expected and gave us a chance to grow both individually and as a team.

ZeroDayMayDay of Monterrey Institute of Technology and Higher Education in Mexico

How did Cyber 9/12 inform your career goals in both cybersecurity and policy?

Cyber 9/12 opened our eyes to the importance of cybersecurity regulations and their impact on one of our teammate’s legal career. The experience allowed us to understand how laws and regulations in this field are key to addressing cyber threats, which has strengthened our interest in the creation and analysis of cyber policies. This focus provided us with the opportunity to combine legal knowledge with the need to enhance digital security through robust legal frameworks.

If we could describe Cyber 9/12 in one sentence, it would be: An experience that adapts challenges to inform a strategic vision for building the future of global cybersecurity. Being chosen to represent our country, the Dominican Republic, and our university in this competition not only filled us with pride but also ignited a deeper sense of responsibility towards digital protection and policy formulation. This experience reinforced our commitment to creating innovative solutions and offered a broader understanding of the challenges we face in the cyber and diplomatic spheres. Moreover, it was crucial in expanding our professional goals and opening up a world where cybersecurity is not only about technology but coordinated strategies that ensure a safer future for our countries.

We also loved meeting new people and learning about them through networking. It was an enriching experience as we shared ideas and perspectives on the future of cybersecurity. Additionally, Costa Rica was the perfect setting—a beautiful country that complemented the experience.

UNAPEC Strategic Team of University APEC in Dominican Republic

Who evaluated your scenario response and what kinds of questions did they pose for you to respond to? What feedback did you take from the experience?

On Day 1, we were evaluated by a panel of judges drawn from government, industry, and academia. These judges asked us questions regarding legislative aspects of the scenario that our team had struggled to address, as well as questions about the security certification process we recommended.

On Day 2, we were evaluated by a panel of judges representing more industry perspectives and academia. That time, we were much better prepared, and judges only asked us one question regarding which US agencies we had identified for the Costa Rican government to collaborate with following the incident. We were able to identify and recommend several agencies for collaboration, such as the US Federal Bureau of Investigation and the US Department of the Treasury.

On Day 1, the feedback we received from judges recommended that we be more specific regarding our contingency plans and to manage our time for the ten-minute briefing better. Just 24 hours later, we impressed the panel of judges with our performance, as we had successfully implemented all the feedback shared with us on Day 1 and demonstrated the most improvement of any team at the competition.

UPTP Task Force of Polytechnic University Taiwan – Paraguay

How did your team decide to approach this year’s scenario and balance your responses to the different issues presented?

At the start of our preparations for the competition, our team agreed that this was a new challenge we’d be facing for the first time, but we agreed that the first step to tackling this challenge would be to try to better understand Costa Rican cyber legislation and regulatory frameworks, and to take advantage of any available international cooperation resources.

Once we had a grasp of the scenario, we decided to focus on developing strategies that would empower decisionmakers to manage the incident in the best possible way. We took this approach knowing that even with limited time and information , we needed to lay the groundwork to mitigate the damages caused by the incident, minimize operational and reputational impact, and identify and remediate the exploited vulnerabilities.

As four young students in the second year of the Bachelor’s in Cybersecurity program at the Technological University of Panama, with little experience in incident management, disaster response, and even less in cyber policy, we made the decision to appoint a leader for our team. This leader, in conjunction with guidance from our coach and input from all team members, defined the path we would follow to create the most comprehensive plan possible, serving as a solid starting point when facing the Intelligence Report II.

Panama Cybersecurity Warriors of the Technological University of Panama

Some of your team members have competed in other Cyber 9/12 competitions, including the Santo Domingo and Washington, DC competitions—how did you leverage those past experiences to inform how you wanted to prepare for the Costa Rica competition?

The participation of one team member in previous competitions was a beacon for our team’s strategy, as those experiences provided us with valuable lessons that we used to define our preparation for the competition in Costa Rica. The main benefit was being able to leverage our understanding of the scoring rubric and past judge feedback from two previous events to identify areas for improvement. We reviewed past tactics that worked well for our team, as well as those that posed challenges, which enabled us to adjust our approach.

Additionally, we discussed the importance of teamwork, collaboration, and effective communication—key elements that helped us design a study and practice plan better suited to this new competition. Last, our coach made a point to identify each student’s strengths to strategically assign them to different areas that needed to be covered during the competition, such as technical aspects, political analysis, financial analysis, and social analysis of the scenario, among others.

Prime Team of National Polytechnic School in Ecuador

What kinds of lessons might you apply from your Cyber 9/12 experience if you found yourself in a real cyber crisis? How so?

If I were to face a real cyber crisis, I could apply several lessons learned from Cyber 9/12:

  1. Stay calm and think critically: In a crisis, it’s easy to get swept up in panic and make impulsive decisions. It’s essential to remain calm, analyze the situation with a clear mind, and evaluate all options before taking action.
  2. Communicate clearly and concisely: Having a communication plan is crucial during a crisis. During an incident, we have to convey complex technical information clearly and concisely, often to a non-technical audience. This is essential for coordinating the incident response and avoiding misunderstandings.
  3. Work as a team: A cyber crisis requires collaboration among different stakeholders. Developing skills that support teamwork, including the ability to delegate tasks, listen to various perspectives, and reach consensus, is key.
  4. Consider political and legal implications: A cyberattack can have significant political and legal repercussions. These implications must be considered when making decisions, striving to balance national security, individual rights, and public interest.
  5. Adapt to changing situations: A cyber crisis is dynamic and constantly evolving. It is crucial to have the capacity to adapt to new information and shift strategies as necessary.

Last, I would like to emphasize the importance of preparation. The best way to face a crisis is to be prepared for it. This involves developing incident response plans, conducting drills and training exercises, and staying up to date on the latest threats and vulnerabilities. Cyber 9/12 truly contributes to this readiness.

Fabiana Santellán of Uruguay Agency for Electronic Government and the Information and Knowledge Society

After seeing teams from across the region respond to the scenario presented to them, what do you see stakeholders doing well when it comes to cyber education and workforce development? Where do we have room to improve?

The commitment demonstrated by the fourteen teams from eleven Latin American countries in the Cyber 9/12 Strategy Challenge reflects significant progress in cybersecurity education and workforce development in the region. Stakeholders, including governments and educational institutions, are implementing national cybersecurity strategies that incorporate training and awareness programs, which is essential for strengthening regional capabilities. Collaboration between the public, private, and academic sectors is generating valuable opportunities for developing practical cybersecurity skills, as demonstrated by this competition.

However, there is still a need to improve the alignment of educational programs with the real needs of the job market, increase investment in specialized training, and promote greater diversity in the field of cybersecurity. To advance this, it is crucial to strengthen regional cooperation, share best practices and educational resources, and replicate initiatives such as the Cyber 9/12 Strategy Challenge to prepare the next generation of cybersecurity professionals in Latin America and the Caribbean.

Orlando Garces Organization of American States Cybersecurity Policy Officer



The Cyber 9/12 Strategy Challenge is a one-of-a-kind cyber competition designed to provide students from across academic disciplines with a deeper understanding of the policy and strategy challenges associated with management of tradeoffs during a cyber crisis.


Safa Shahwan Edwards is the director of Capacity Building and Communities within the Cyber Statecraft Initiative, part of the the Atlantic Council Tech Programs.

Emerson Johnston (she/her) is a young global professional with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs.


The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Seizing the win: Navigating competition and hands-on learning through Cyber 9/12  appeared first on Atlantic Council.

]]>
The West must respond to Russia’s rapidly escalating hybrid warfare https://www.atlanticcouncil.org/blogs/ukrainealert/the-west-must-respond-to-russias-rapidly-escalating-hybrid-warfare/ Thu, 07 Nov 2024 13:13:11 +0000 https://www.atlanticcouncil.org/?p=805432 Russia's hybrid war against the West is escalating rapidly and requires a far firmer collective response, writes Doug Livermore.

The post The West must respond to Russia’s rapidly escalating hybrid warfare appeared first on Atlantic Council.

]]>
According to recent reports, Russia is currently stepping up its sabotage campaign across the EU as part of Moscow’s hybrid war against the West. “Russia is conducting an intensifying campaign of hybrid attacks across our allied territories, interfering directly in our democracies, sabotaging industry, and committing violence,” stated NATO Secretary General Mark Rutte on November 4. “This shows that the front line in this war is no longer solely in Ukraine. Increasingly, the front line is moving beyond borders to the Baltic region, to Western Europe, and even to the high north.”

Rutte’s claims are not new. The Russian authorities have long faced accusations of everything from cyberattacks and political manipulation to the deliberate spread of disinformation to destabilize individual countries and sow discord among Western allies. Russian hybrid warfare operations are now often kinetic operations within Western countries. Incendiary devices that ignited in Germany and the United Kingdom in July 2024 were reportedly part of a covert Russian operation that aimed to start fires aboard cargo and passenger flights heading to the US and Canada.

With the Russian invasion of Ukraine now approaching the three-year mark, Moscow’s campaign of hybrid hostilities throughout the Western world appears to be escalating. As Russia’s tactics evolve, governments and security services throughout the West must work together to identify threats and counter the Kremlin.

Stay updated

As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

Information operations are a central feature of Russian efforts to weaken the West. Since the beginning of Russia’s full-scale invasion of Ukraine, the messaging emanating from Moscow has shifted from implausible accusations of a “Nazi regime” in Kyiv toward a greater focus on the inevitability of Russian military victory and the unreliability of Ukraine as a partner. These messages are being actively promoted throughout the West by Russian sources and by Moscow’s proxies.

For much of the past decade, Russia relied primarily on its own state-sponsored media outlets like RT and Sputnik to push narratives designed to undermine Western unity and polarize public opinion in democratic countries. However, in recent years there have been increasing efforts to co-opt non-traditional media and social media personalities throughout the West, such as US podcast hosts. This has made it possible for Russia to reach broader audiences, while also enhancing the credibility of its messaging by avoiding any overt links to the Kremlin.

Cyberattacks are another significant tactic used by Russia to undermine stability throughout the West. By disrupting communications, sowing chaos, and eroding public trust in institutions, Russian cyber warfare has the potential to disrupt and destabilize Western societies. One recent example was the December 2023 cyberattack on Ukraine’s largest telecommunications provider, which temporarily left millions of subscribers without mobile and internet access.

Russia has also sought to fuel political tensions by supporting populist movements and parties that align with the Kremlin’s own anti-NATO and anti-EU narratives. Throughout Europe and North America, the Kremlin is accused of empowering anti-establishment political parties and movements of all kinds. Moscow’s backing for far-right and far-left movements has been opportunistic rather than ideological, with an emphasis on support for any groups deemed capable of destabilizing domestic politics in Western countries. This approach has proved effective in amplifying the Kremlin’s narratives, while also making it harder to counter Russian influence and maintain support for Ukraine.

A further avenue of malign Russian influence is economic leverage, especially through the weaponized use of energy exports. While Europe’s overall dependence on Russian energy has declined significantly since the start of the full-scale invasion of Ukraine in February 2022, A number of EU countries continue to rely heavily on Russian energy supplies. This makes them vulnerable to political pressure from Moscow.

Western policymakers need to recognize that the hybrid security challenges currently coming from Russia are not going to go away soon. On the contrary, Putin is clearly preparing his country for a prolonged confrontation with the West. While open military conflict between Russia and NATO is still viewed as unlikely, Western nations must be better prepared to defend themselves against Russia’s escalating efforts to divide and destabilize them. This will require a multi-faceted approach that reflects the diverse nature of the hybrid threats posed by the Putin regime.

Addressing disinformation is vital. Western governments must intensify efforts to combat Russian information warfare by measures including support for fact-checking initiatives and improving media literary among the public. Before embarking on new steps of this nature in the information sphere, it is important to note that Russia has a record of successfully pushing back against countermeasures by framing them as attempts to suppress free speech.

Strengthening cyber defenses is another key task. NATO much invest in the recently announced Integrated Cyber Defence Centre to protect member states from Russian cyberattacks. The alliance should prioritize information sharing, joint cyber security exercises, and the development of rapid response teams to mitigate the impact of future attacks.

The Kremlin’s sophisticated brand of hybrid warfare poses a serious threat to Western unity and represents a critical front in the global confrontation that has emerged following the Russian invasion of Ukraine. By exploiting existing political, economic, and social vulnerabilities across Europe and North America, Russia aims to weaken the West from within. This requires a firm and coordinated response that includes efforts to counter disinformation, strengthen cyber defenses, and reduce energy dependence on Russia. Addressing these challenges is crucial for the future of transatlantic security in an increasingly complex and unpredictable geopolitical climate.

Doug Livermore is the National Vice President for the Special Operations Association of America, Senior Vice President for Solution Engineering at the CenCore Group, and the Deputy Commander for Special Operations Detachment – Joint Special Operations Command in the North Carolina Army National Guard. The views expressed are the author’s and do not represent official US Government, Department of Defense, or Department of the Army positions.

Further reading

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Follow us on social media
and support our work

The post The West must respond to Russia’s rapidly escalating hybrid warfare appeared first on Atlantic Council.

]]>
What to know about foreign meddling in the US election https://www.atlanticcouncil.org/content-series/fastthinking/what-to-know-about-foreign-meddling-in-the-us-election/ Wed, 06 Nov 2024 00:07:14 +0000 https://www.atlanticcouncil.org/?p=805072 Our experts explain the foreign malign interference operations targeting the 2024 US elections and how they might continue after Election Day.

The post What to know about foreign meddling in the US election appeared first on Atlantic Council.

]]>

GET UP TO SPEED

The target is you, voter. Russia, China, Iran, and other bad actors sought to interfere in the run-up to today’s US elections, according to research by the Atlantic Council’s Digital Forensic Research Lab (DFRLab), which has been monitoring online trends along with statements by governments, private companies, and civil society in its Foreign Interference Attribution Tracker. As DFRLab experts detail below, this year’s malign efforts in many ways surpass previous influence campaigns in sophistication and scope, if not in impact—and they are expected to continue well after the polls close.

TODAY’S EXPERT REACTION BROUGHT TO YOU BY

Tipping the scale

  • “By sheer volume, foreign interference in the 2024 US election has already surpassed the scale of adversarial operations in both 2016 and 2020,” Emerson says.
  • Dina notes that each US adversary played to its strengths. For example, Iran and China “attempted to breach presidential campaigns in hack-and-leak operations that raise concerns about their cyber capabilities during and after the elections,” she tells us.
  • At the same time, the United States is more prepared than it was in previous election cycles. Russian efforts in 2016 “made foreign interference a vivid fear for millions of Americans,” Emerson notes. “Eight years later, the US government is denouncing and neutralizing these efforts, sometimes in real time.”
  • In fact, Graham tells us, “the combined actions by the US departments of Justice, Treasury, and State against two known Russian interference efforts was the largest proactive government action taken against election influence efforts before an election.”

Doppelgangers and down-ballot races

  • US officials this week called Russia “the most active threat,” and it’s easy to understand why. Emerson notes Russia’s “ten-million-dollar effort to infiltrate and influence far-right American media,” alongside the “Doppelganger” network, which has spread “tens of thousands of false stories and staged videos intended to undermine election integrity in the swing states of Pennsylvania, Georgia, and Arizona.” Increasingly desperate, Russian actors have even sought to shut down individual polling places with fake bomb threats, he adds.
  • Meanwhile, China has focused on “down-ballot races instead of the presidential election to target specific anti-China politicians,” Kenton explains. Using fake American personas and generative artificial intelligence, China-linked operations have appeared across more than fifty platforms. Perhaps surprisingly, Kenton adds, “attributed campaigns appeared sparingly” on the Chinese-owned platform TikTok and far more often on Facebook and X.

Faith, fakes, and falsehoods 

  • “The primary aim is to erode Americans’ faith in democratic institutions and heighten chaos and social division,” Kenton explains, and thus to undermine the ability of the US government to function so it will have less bandwidth to contain adversarial powers.
  • “Some of the fake and already debunked narratives and footage circulating before the elections will likely continue to be amplified by foreign threat actors well after November 5,” Dina predicts. Expect to see activity around the submission of certificates of ascertainment on December 11, the December 17 meeting of the electors to formally cast their votes, and through inauguration day on January 20.
  • And in a post-election period where the results will likely be contested, Graham thinks there’s a “high likelihood” that foreign actors will “cross a serious threshold” from pre-election attempts to broadly influence American public opinion in service of their geopolitical interests to “direct interference” by trying to mobilize Americans to engage in protests or even violence.
  • Nevertheless, Graham points out that the high volume of foreign-influence efforts observed during this year’s election cycle so far does not appear to have had a significant impact in terms of changing Americans’ opinions or behavior.  
  • The consequences of foreign disinformation, Emerson adds, should be assessed against “the far more viral, sophisticated, and dangerous election-day falsehoods that Americans spread among themselves.”

Elections 2024: America’s role in the world

The Atlantic Council’s guide to the most consequential US political contest in generations.

The post What to know about foreign meddling in the US election appeared first on Atlantic Council.

]]>
Why this former Finnish president wants a new European spy agency https://www.atlanticcouncil.org/blogs/new-atlanticist/why-this-former-finnish-president-wants-a-new-european-spy-agency/ Tue, 05 Nov 2024 16:30:29 +0000 https://www.atlanticcouncil.org/?p=804747 One notable recommendation in a new report by former President of Finland Sauli Niinistö is the creation of a unified EU intelligence service.

The post Why this former Finnish president wants a new European spy agency appeared first on Atlantic Council.

]]>
Europe has seen report after report on how to bolster its defenses and enhance its readiness in an increasingly unpredictable world. The most recent of these comes from Sauli Niinistö, the former president of Finland and now a special adviser to the president of the European Commission. Published on October 30, this report, alongside others like the much-discussed Mario Draghi paper on European competitiveness, lays out a number of familiar, albeit urgent, calls for action. Will it be different this time? Will Europe follow through on these recommendations?

A push for real intelligence sharing

Notably, one of Niinistö’s top recommendations goes a step beyond usual European diplomatic rhetoric: the creation of a unified European Union (EU) intelligence service. “As a long-term objective, the EU should have a fully-fledged intelligence cooperation service, serving all EU institutions and Member States,” he writes in the report. However, he goes on to note that its aim “should not be to emulate the tasks of Member States’ national foreign intelligence and domestic security services, nor to interfere with their prerogative on national security.”

Instead, Niinistö suggests strengthening the EU’s Single Intelligence Analysis Capacity, which includes both the EU Intelligence and Situation Centre and EU military intelligence within the EU Military Staff under the European External Action Service, the diplomatic service of the EU. Both entities operate under the EU High Representative for Foreign Affairs, and former Estonian Prime Minister Kaja Kallas is the designate for this position. This framework should serve as an official channel for intelligence exchange among the EU’s intelligence services. The need for such a channel was made clear following Russia’s full-scale invasion of Ukraine in 2022 and China’s pledge of a “no limits” relationship with Russia, which underscored how ill unprepared the EU was for the emerging challenges posed by Moscow and Beijing.

In practice, this would mean deeper, more structured cooperation among member states to share intelligence and respond faster to hybrid threats, such as cyberattacks and disinformation campaigns. This recommendation acknowledges the reality that Europe’s security challenges demand something stronger than piecemeal national efforts or ad hoc alliances, as the hybrid attack on the Finnish, Polish, Lithuanian, Latvian, and Estonian borders made clear. 

Many Russian and Chinese diplomats have been expelled from European capitals due to espionage allegations, while Brussels, home to numerous institutions and embassies, has become a hub for covert activities. The war in Ukraine has further fueled instability within the EU, with incidents ranging from drones surveilling military training areas and assassination plots against arms industry executives to sabotage. Western nations already collaborate on intelligence through the Five Eyes alliance, which links the agencies of the United States, United Kingdom, Australia, Canada, and New Zealand. Niinistö emphasized that any EU intelligence body should focus on leveraging and effectively utilizing existing intelligence.

More civil-military coordination

Another standout from the report is Niinistö’s call for a European Civil Defence Mechanism to bridge the gap between military and civilian responses. The war in Ukraine has shown that keeping essential services running during conflict is just as crucial as maintaining military strength.

The report points out that Europe needs to move past its fragmented approach and ensure that when a crisis hits, military and civilian authorities are coordinated. This isn’t just theory; it’s the kind of readiness that can save lives and stabilize societies.

Public-private partnerships

The COVID-19 pandemic was a crash course in why private sector involvement is indispensable in a crisis. During the pandemic, partnerships with private companies were essential for vaccine development and distribution.

In his report, Niinistö argues that these lessons need to be expanded to broader crisis preparedness. This means developing clear rules for public-private cooperation, especially in industries vital for crisis response such as energy, medicine, and transportation.

Stockpiles and supply chains

Europe’s supply chain issues during recent crises exposed a major weakness. In response, Niinistö calls for a comprehensive EU-level stockpiling strategy to prevent future shortages.

Coordinating reserves across public and private sectors can buffer against disruptions, whether they come from geopolitical tensions or natural disasters. An EU-wide stockpiling strategy is practical, overdue, and aligns with similar calls in Draghi’s economic competitiveness report.

Getting citizens on board

Preparedness isn’t just a government affair; it’s a societal effort. Currently, only 32 percent of Europeans indicate that they would be willing to defend their country if it were involved in a war.

The Niinistö report stresses that Europeans need to be informed, engaged, and prepared at a personal level. Encouraging citizens to take an active role, from learning basic crisis management to preparing for power outages, is part of a realistic resilience plan.

Another report, but what next?

Reports like Niinistö’s and Draghi’s outline clear paths forward, but their effectiveness depends on political will and follow-through. In Niinistö’s report, the ambitions are clearly outlined: intelligence cooperation, military-civil readiness, and crisis preparedness. Niinistö’s report will contribute to the agenda of European Commission President Ursula von der Leyen’s upcoming term, during which the EU is set to appoint its first defense commissioner (former Lithuanian Prime Minister Andrius Kubilius). This new role will include the responsibility of preparing a comprehensive defense white paper, expected to be unveiled by next spring.

What Europe needs now is to move past endless discussions and start implementing real, measurable actions. If these ideas remain only on paper, Europe’s preparedness will continue to lag behind the evolving threats it faces.


Piotr Arak is an assistant professor of economic sciences at the University of Warsaw and chief economist at VeloBank Poland.

The post Why this former Finnish president wants a new European spy agency appeared first on Atlantic Council.

]]>
Take the bribe but watch your back: Why Russia imprisoned a security officer for taking cybercriminal payoffs  https://www.atlanticcouncil.org/content-series/conflict-risk-and-tech/take-the-bribe-but-watch-your-back-why-russia-imprisoned-a-security-officer-for-taking-cybercriminal-payoffs/ Tue, 05 Nov 2024 00:48:00 +0000 https://www.atlanticcouncil.org/?p=818020 Russia imprisoned a security service officer for taking bribes from cybercriminals—showing not a willingness to crack down on cybercrime, but instead just how much the Kremlin wants to maintain its cybercrime protection racket.

The post Take the bribe but watch your back: Why Russia imprisoned a security officer for taking cybercriminal payoffs  appeared first on Atlantic Council.

]]>
Earlier this year, a Russian court imprisoned a former counterintelligence official, who worked on cyber issues in the Federal Security Service (FSB), for accepting a $1.7 million bribe to shield cybercriminals from prosecution. But rather than serving as demonstration of the Kremlin’s potential newfound desire to crack down on hackers, this rare case shows something different: If you are going to run a protection racket for cybercriminals in Russia, you should keep your promises and watch your back. 

In February 2022, the Ministry of Internal Affairs’ (MVD’s) Department K—which focuses on computer crimes—arrested six hackers in Perm, Russia for selling stolen payment card data online. The MVD runs local police forces across Russia, among other functions, and operates separately from the FSB, one of Russia’s largest and most powerful security organs that works on everything from counterterrorism to counterintelligence and border security. It was not long before those MVD investigators learned the arrested hackers had been paying off Grigory Tsaregorodtsev, an FSB officer running a counterintelligence department after he discovered their activities in 2016 and approached them for a bribe. 

In late April of this year, a Russian court sentenced Tsaregorodtsev to nine years in prison for taking payments from the hackers, who stole US bank cardholders’ data. The court also ruled that Tsaregorodtsev must pay a fine of 320 million rubles (about $3.5 million) and confiscated his property and forfeited his military rank of major. The court also banned him from serving in government positions for eight years after his release. Ironically, his defense attorneys argued his crime was not accepting bribes, but fraud—after all, he clearly did not deliver on his promise of protection. This defense mattered for how the court determined his criminal liability (e.g., his agreement with the hackers). Presenting such an argument in a court also underscored the normalcy, and, in fact, the permissibility of Russian state security officers taking bribes from cybercriminals, a “tax” of sorts, to turn a blind eye. 

Viewed against Russia’s arrests of other criminal hackers, such as the reported “shutdown” of ransomware group “REvil” in January 2022 (which was wildly overhyped in Western media), this incident could be misconstrued as Moscow’s gradual steps towards cracking down on cybercrime emanating from within its borders. Yet, this misses some crucial details about Russia’s cyber ecosystem and how state officials work with hackers. It would be wrong to extrapolate from this case that the Russian state possesses a new desire to seriously crack down on cybercrime and a willingness to prosecute, rather than co-opt or tax, cybercriminals that come onto its radar. Instead of focusing on the fact that a state officer took bribes, the wider takeaway from this case should be centered on the protection racket itself and Moscow’s interest in upholding the krysha, or roof, for criminal hackers. 

Russia’s cybercriminal ecosystem exploded in the 1990s due to a lack of laws and enforcement, limited economic opportunities, and “highly educated and technologically empowered segments of [the] population with the capability to conduct sophisticated criminal operations.” Cybercriminals evolved from software piracy to bank hacking and credential theft, and today, they comprise a key element of what makes Russia a global cyber power. Criminal hackers bring money into Russia—by one count, seventy-four percent of global ransomware revenue in 2021 went to Russia-linked hackers—and also provide the state a rich pool of talent for under-the-table, plausibly deniable, or clearly state-condoned-if-not-coordinated cyber operations against foreign targets. For instance, in the late 2000s, the FSB reportedly contacted an individual tied to a patriotic hacker website in an attempt to establish a cooperative relationship; in 2017, the US Justice Department charged two FSB officers for paying criminal hackers to break into Yahoo and millions of email accounts. More recently, examples range from the leader of the criminal group Evil Corp working for the FSB and pursuing a Russian government security clearance to the FSB and Russia’s Foreign Intelligence Service (SVR) working with a ransomware group to reportedly target US government-affiliated organizations. While it is easy to imagine top-down orchestration, this discounts the often entrepreneurial, bottom-up, and patronage-seeking motives of the cybercriminal ecosystem in Russia.

When some part of the Russian state brings the hammer down on a cybercriminal, it has less to do with the criminal activity itself and more to do with the targets, the effects, and the actors’ place in the wider ecosystem.”

Within this ecosystem (and Russian criminal enterprise and state corruption more broadly), there is an unspoken “social contract” between the Kremlin and hackers. It generally has three components: 1) focus mainly on foreign targets, 2) do not undermine the Kremlin’s geopolitical objectives, and 3) be responsive to Russian government requests. ​For example, following its first court case, the REvil ransomware updated its malware code to avoid Russian-language computers (most Russian malware is engineered in this fashion to avoid damaging domestic systems). Hence, when some part of the Russian state brings the hammer down on a cybercriminal, it has less to do with the criminal activity itself and more to do with the targets, the effects, and the actors’ place in the wider ecosystem. 

This is what makes the sentencing of the FSB’s Tsaregorodtsev so curious. In most of the (rare) publicly reported instances of Russian authorities arresting cybercriminals, the hackers involved had either stolen from or targeted Russian citizens. In this case, however, the six hackers arrested in Perm in 2022 were running the large credit card shops Trump’s Dumps, Sky-Fraud, and Ferum Shop, which sold data stolen from US residents.  

Stas Alforov, a cybersecurity and fraud expert, noted the strangeness of the MVD going after criminals that were selling foreigners’ data: “It’s not in their business to be taking down Russian [credit] card shops. Unless those shops were somehow selling data on Russian cardholders, which they weren’t.” Later on, the Record reported that “among the customers were primarily Russian citizens seeking to conceal purchases from financial regulators.” ​​Generally, though, the initial arrests do not appear to have been caused by Russians scamming other Russians or cybercriminals defrauding Russian banks. 

It is very difficult to know what exactly happened in this case. Perhaps the US government negotiated with Moscow to take down the group—the US government contacted Russian law enforcement about the card scheme and may have done the same for a case against the Russian administrator of the UniCC card forum, who was also wanted by the Federal Bureau of Investigation.  

Perhaps instead the MVD was driven to act because Russians were hiding purchases and evading financial regulators. Or maybe the Kremlin wanted to handcuff small-time criminals to promote the propaganda line that it opposes criminal hacking. Yet, these all would go further to explain the hackers’ arrest and sentencing than it does to explain that of a corrupt FSB officer. 

In any case, Tsaregorodtsev’s downfall was not a forgone conclusion, as Russian authorities have previously arrested cybercriminals while protecting their FSB handlers. In 2022, the Russian government imprisoned twenty-one hackers in the group Lurk after one of the hackers published materials online—which quickly vanished—showing that the FSB had recruited the hackers to break into the systems of the US Democratic Party. The cybercriminals went down, but the FSB officers supposedly involved did not go down with them, at least publicly; authorities made zero mention of the FSB or investigating the allegations. Going back to the Kremlin’s “social contract” with hackers, there was plenty of reason for this outcome: the cybercriminals were focused on foreign targets; an operation of that kind (targeting the US Democratic Party) would have received higher-up approval (Putin himself authorized the 2016 US influence operations); and the Kremlin would not have wanted to lend credence to the criminal’s allegations by unveiling a FSB-hacker relationship. 

This is why it is more useful to concentrate on the protection racket itself. Tsaregorodtsev had expensive cars, real estate, 100 gold bars, and other assets as a result of the hackers’ money. For him, the benefit of the scheme is clear. But when the MVD decided to arrest the six hackers and shut down their major credit card forums, Tsaregorodtsev did not deliver on the protection they had paid for in gold (and much more). In fact, from their view, the protection probably failed the moment the arrests were even made. So, the hackers ratted out Tsaregorodtsev to the MVD and an FSB officer’s activities became part of the investigation. Regardless of how exactly the hackers turned on Tsaregorodtsev, it is plausible that the FSB then had to make a difficult decision: Go to bat for its man against another security agency or let him fall. 

While he was an officer in the FSB, Tsaregorodtsev was also only a single person working on cybercrimes in one far-flung Russian city, taking money on the side with seemingly no connection to a higher-level political objective, such as planting malware overseas or spying on valuable foreign targets. He was also seemingly not close to power, unlike Evil Corp head Maksim Yakubets, who is the son-in-law of an influential former FSB official who protected him against prosecution. Here, the FSB fighting for Tsaregorodtsev to walk away (whether he ultimately did or not) could put the FSB’s own protection schemes at risk. If he took money from cybercriminals and did nothing to protect them when arrested, with no consequences, other cybercriminals might hear a different tune from the FSB: We sell you on protection, but if someone else arrests you, good luck. That could disrupt the FSB’s dynamics with cybercriminals, which are complex, evolving, and certainly not top-down. 

Moscow’s arrests of hackers are few and far between—and the list of public examples gets even shorter when a state security officer goes down with the cybercriminals and, rather than getting released after a performative detention, is sentenced to prison. But taking this as a sign of some higher Kremlin interest in cracking down on criminal hacking or state corruption would be a mistake. 

A better interpretation, amid the many unknowns about this still opaque ecosystem, is that Russian state security officers engaging with cybercriminal groups, whether as hired hackers or taking a cut of their earnings, have no guarantees of protection. If caught, their fate may depend on anything from their familial connections to their operational objectives or the luck of the draw on interagency rivalries. Every now and then, those accepting cybercrime bribes might still find themselves in handcuffs. 


Justin Sherman is the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm, and a nonresident senior fellow at the Atlantic Council’s Cyber Statecraft Initiative. 


The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Take the bribe but watch your back: Why Russia imprisoned a security officer for taking cybercriminal payoffs  appeared first on Atlantic Council.

]]>
Eftimiades interviewed for France TV documentary on China’s espionage and transnational repression https://www.atlanticcouncil.org/insight-impact/in-the-news/eftimiades-interviewed-for-france-tv-documentary-on-chinas-espionage-and-transnational-repression/ Tue, 29 Oct 2024 19:21:34 +0000 https://www.atlanticcouncil.org/?p=803432 In 2024, Forward Defense Nonresident Senior Fellow Nicholas Eftimiades was interviewed for an award-winning documentary by France Télévisions on Chinese espionage and transnational repression efforts.

The post Eftimiades interviewed for France TV documentary on China’s espionage and transnational repression appeared first on Atlantic Council.

]]>

In 2024, Forward Defense Nonresident Senior Fellow Nicholas Eftimiades was interviewed for an award-winning documentary by France Télévisions on Chinese espionage and transnational repression efforts. The documentary outlines recent cases of international spying by China’s Ministry of State Security (Guoanbu), as well as examples of the arrest and repatriation of Chinese nationals under the Chinese government’s Operation Fox Hunt. Eftimiades was interviewed and quoted extensively throughout the film, saying that “The Ministry of State Security has about a hundred thousand people, which is five times [the size of] the largest intelligence services out there. We’ve never seen anything like this in history before. Even the old days of the Soviet Trust in the 1930s had nowhere near this much reach and power.”

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Eftimiades interviewed for France TV documentary on China’s espionage and transnational repression appeared first on Atlantic Council.

]]>
The 5×5—The evolving role of CISOs and senior cybersecurity executives https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-evolving-role-of-cisos-and-senior-cybersecurity-executives/ Wed, 23 Oct 2024 15:21:00 +0000 https://www.atlanticcouncil.org/?p=818102 For this Cybersecurity Awareness Month, senior cybersecurity executives share their insights into the evolution of their roles.

The post The 5×5—The evolving role of CISOs and senior cybersecurity executives appeared first on Atlantic Council.

]]>

In recent years, the role of cybersecurity executives has shifted in the face of increasing cyberattacks and the growing risks of business disruption, fines, and reputational damage. It has expanded from a focus on technology to securing the mission of a business, non-profit, or even government body. Instead of implementing the technical aspects of cybersecurity, these executives now help their organization’s leaders understand the importance of cybersecurity and design the organization’s cyber strategy. 

This Cybersecurity Awareness Month, we brought together five senior cybersecurity executives to delve into key issues faced by CISOs and other senior cybersecurity professionals, offering their insights on navigating regulatory hurdles, ransomware response, organizational risk management, and fostering a culture of security awareness. Their perspectives highlight the growing importance of integrating security into business decision-making and balancing legal liabilities with operational priorities across different types of organizations and countries. 

1. Given emerging jurisdictional conflicts between governments over who should control user’s data, how do you approach new markets and/or collaboration with international partners as a cybersecurity executive, keeping in mind these competing demands?  

Liisa Past (she/her/they/them), Former National Cyber Director, Estonia; Former Chief Information Security Officer, Ministry of Interior’s IT Organization, Estonia 

“Europe has taken global leadership on data and privacy regulation, therefore the EU norms and emerging best (or good enough) compliance practice is likely to become the de facto norm for the privacy-respecting rule-of-law based parts of the world. The dynamic could only change if the EU and US do, in fact, manage an agreement on the transatlantic exchange and processing of personal data or the US establishes appropriate bilateral deals with the largest democracies. Both seem unlikely.

On face value, the approach seems easy – make sure personal data of Europeans does not leave the European Union. After all, all major service providers have European data centers, right? In practice, General Data Protection Regulation’s (GDPR) provisions have by no means been tested as a unified practice across the Member States. Even if there is clarity about users’ and citizens’ personal data, what about metadata or code essential for accessing and processing the data? Do you have to be able to revert to a backup within the same jurisdiction in a crisis? Would encryption and key management that is compliant with the strictest interpretations of national norms also be technically viable? How do we manage requirements and standards across information systems and data owners or processors as well as national borders and jurisdictions?

There are also plenty of views on what compliance to the requirements looks like. While many of my US colleagues are perfectly happy with a service organization controls (SOC 2 type II) report, it is not unheard of that Europeans would not only require a much more through infosec standard compliance report (commonly of the ISO 27k family or the respective national standard) but also want to assess sites of the service providers, including the major cloud platform providers. The latter are less than reluctant to allow for such access to their data centers and it is unlikely to ever be practical or doable.”

Helen Patton (she/her/hers), Cybersecurity Executive Advisor, and former Chief Information Security Officer, Cisco

“I begin by making sure I understand the jurisdictional perspectives – reading regulations, frameworks, and guides from the various governments. Often there is common ground in the control requirements, even if the motivations differ. I talk to CISOs operating in the new environment, who have experienced the things that are assumed, but unwritten, in that environment. I partner with internal government relations teams to understand our current relationships.” 

Megan Samford (she/her/hers), Chief Security Officer, Schneider Electric  

“The first thing a CSO would need to understand is the regulatory landscape, especially as it pertains to data sovereignty. This often requires having an experienced team of digital policy experts across many geographies. From there, if their organization is applying an international security standard, such as IEC62443, they can map their existing security controls to regional regulation to understand where they are complying and where they have gaps. Without a Common Controls Framework, demonstrating compliance across so many international regulations would be challenging to scale.” 

Elizabeth Cartier (she/her/hers), Head of Security and Compliance, Maven Clinic 

“There is security, and then there is compliance, and the two don’t always overlap as much as we’d hope. From a security perspective – there are definitely challenges in international expansion – like what traffic is permissible, what security tooling works with different language sets, how to get MFA to work with service providers country-to-country. From a compliance and privacy perspective – we work very closely with our legal team to understand parameters around international collaboration, and sometimes we’re all sort of waiting for clear guidance on how upstream tech/architecture companies are managing the evolving requirements. There are also certainly different expectations and norms around privacy from country to country and culture to culture that we are always looking to understand and work with.” 

Michelle Chia (she/her/hers), Cyber Chief Underwriting Officer, AXA XL Americas

“Data, like other assets, holds value. How assets are used and the significance of their value dictate how those assets are protected, including jurisdictional considerations. Engaging with peer groups, e.g. ISACs, legal experts, and regulatory officials to understand the law and the culture is a great start.” 

2. A ransomware attack, earlier this year, against a major health-payments provider illustrated the risks of trying to negotiate with cyber criminals. Given what we’ve learned from years of dealing with ransomware, should CISOs and senior cybersecurity executives change their response? If so, how? 

Liisa Past  

“Ransomware has become THE wicked problem for information security practitioners given the relatively low barrier of entry for criminals. As a sector, we should not negotiate with criminals, let alone fund their growth by paying ransoms. In a particular situation, however, it might make economic sense for the system or service owners to negotiate and pay the ransom, even if criminals do not give guarantees. Negotiating can also buy time to recover systems or otherwise mitigate damage. 

Just like it is too late to take self-defense classes when already brawling with a thug with a gun, the ransomware problem cannot be solved in the response phase. So, the appropriate response of security professionals is to be prepared for the risk and take appropriate measures early and often including monitoring, segmentation of data and systems, offsite backups, and practiced revert and recovery.”  

Helen Patton   

“A CISOs response will always be framed by the organization they represent. The cross-industry trend is to not pay a ransom and focus on ensuring backups and other controls are in place to mitigate the damage. But ultimately it will depend on the individual circumstances of the attack and the organization being attacked as to the correct way to respond.” 

Megan Samford  

“From an industry perspective, we need to drive down ransomware payments to disincentivize cyber criminals.  Every payment that is made continues to fund a compounding, global problem. That being said, depending on the criticality of the asset and an organization’s confidence in being able to perform backup and restore functions, paying the ransom may be the quickest way to restoring operations. If, for example, a hospital was hit with ransomware, but the ransom amount was only $10K, they may choose to pay that amount to maintain lifesaving capability at the hospital. It’s a tough decision but I strongly recommend that organizations regularly test their backup and restore capability.” 

Elizabeth Cartier  

“I’m not sure if there is one single ‘response’ that can be changed. The risk matrix around ransomware response should be flexible and include consideration of your business imperatives and who you’re dealing with. I do think more awareness that paying a ransom is not a snap-your-fingers solution would inform these conversations and decisions. But as far as an overall shift – if everyone decided that payment would no longer be an option, and victims all stuck to that, it would make sense that ransomware would be a less common attack vector if it was no longer financially beneficial. However – that would require organizations and individuals to have extremely high tolerance for financial losses, downtime, data exposure, and as we see more health care operations get attacked – potential loss of human life. So, from a practical perspective, I’m not sure we’ll ever get there.” 

Michelle Chia 

“For years, CISOs have been collaborating with their legal counterparts to ensure that jurisdictional regulations are considered, e.g. OFAC rules in the US. Beyond that, organizations must balance their ability to respond and recover with other business decisions. Each business weighs recovery time, reputational impact, financial considerations, and other important factors differently. CISOs, senior security officials, and their peers running the business should align on what that balance is prior to experiencing an event so that they can respond confidently and in a timely manner. As an insurance carrier, we have considerable experience helping companies respond to cyber incidents. Furthermore, we have an extensive network of resources for specific areas of expertise.”  

3. What are 1 – 2 metrics that you find most revealing, important, or useful in measuring cybersecurity success (or progress) for your organization?  

Liisa Past  

“Any security professional knows, and all organizations will need to deal with, the inevitability of cybersecurity vulnerabilities. Thus, the way these vulnerabilities are dealt with is a great indicator of maturity of the technology operation. This includes the response to publicly disclosed cybersecurity vulnerabilities as well as regularity of vulnerability scans and their patch or mitigation time. Vulnerability scanners are readily available, and the Common Vulnerability Scoring System (CVSS) score quantifies severity to help prioritize actions, assuming the organization understands their technology stack. Patch time also provides an easily mappable and explainable management metric over time for those in the boardroom.” 

Helen Patton   

“Metrics that measure the adoption of security practices and activities (not just security awareness training) in non-IT parts of the organization are a great indicator of the health of the cybersecurity of an organization. This includes measurement of how many times non-security people reach out to the security team, and initiate security conversations and activities.” 

Megan Samford  

“It goes back to why we collect metrics, as Richard Seiersen would say, we collect metrics to inform decision making to ‘quantify, qualify, communicate, and advocate for change.’ Generally, metrics are going to be captured for protection of data, people, suppliers, financial assets, enterprise applications, and incident detection and response.   

Many peers may jump at some of the fancier metrics like mean time to detect, resolve, contain incidents, etc. but I’m going to keep it old school on this – my top metric is completeness of asset inventory/devices on the network. This seems foundational and it is; but it is extremely difficult and I haven’t seen many companies that had high confidence levels in their asset inventories. This can lead to cascading problems with devices directly exposed to the internet, lack of patching, and lack of basic visibility to changes that could be made to those devices. Not to mention lateral movement that attackers may be able to more easily obtain without being detected. If possible, organizations should seek third party independent validation of their metrics.” 

Elizabeth Cartier  

“Patching cadence and effectiveness and vulnerability remediation are two big ones – yes, there are tailored, well-planned attacks that can target your network. The basics can make the difference between being a victim of low-level, off the shelf attacks, and demonstrate overall hygiene and uptake. I also appreciate metrics around employee-reported flags – even if they’re false positives, it means folks are trying to be security-minded and have a security-aware culture.” 

Michelle Chia 

“Basic security hygiene is important. Too many firms make large investments in bells and whistles and fail at the basic level.” 

Liisa Past  

“It is alleged that the SolarWinds CISO was aware of risks and vulnerabilities but did not address them or raise enough noise, and that the public was misled about the company’s security posture. If one is negligent in their job or misrepresents facts, they have to be held responsible regardless of the industry, be they medial or financial professionals, system administrators, security operators, or CISOs. Even selling ice cream is subject to strict food safety and hygiene regulations.  

Such allegations should, of course, meet the reasonable professional test– would a reasonable qualified professional act differently in the situation.  Equally importantly, no profession should be scapegoated or singled out. If the CISO raises alarm and the CEO fails to appropriately address risks, it is on them.  

The SolarWinds charges drive professional responsibility home and make CISOs more likely to walk away if the organization is coming down on the wrong side of the functionality/security dilemma. Hopefully it also highlights the responsibility of the role, making those less competent or committed consider their options.” 

Helen Patton   

“This has a significant impact. It will change how a CISO shows up in an organization – their reporting structure and their management authority – hopefully for the better. It is also causing CISOs to question how they present their security concerns internally, versus what is reported to the market – and making sure the CISO has authority over the messaging. Finally, it is causing organizations to reconsider who is ‘in charge’ of security, and making sure that person has c-suite access and authority and is covered as an officer of the organization.” 

Megan Samford  

“In the absence of clear policy and regulation, the SEC charges are meant to signal to the market the behavior they don’t want to see, which is sometimes easier than articulating what behavior they do want to see. To CISOs, it must seem that an invisible line has been drawn, and the SEC will let you know if you cross it. Every CISO is currently reviewing content and claims their company has made over the last several years in regards to cybersecurity, and they’re measuring that against their confidence in security controls they’ve put in place. If you’ve made wide sweeping statements in the past claiming you follow best practices and have maintained security  at the forefront of everything your organization does, you need to be able to verify that, preferably through an independent third party. Even if you have policies that define security controls and what is acceptable risk tolerance in your company, if you’re found to not be consistently following what you’ve put in policy, you’re exposing yourself and your organization to liability.   

CISOs need to be thoughtful when negotiating their employment contracts to understand what insurances may be available to them, as well as where their employee versus personal protections may begin and end. In the event that a CISO is held personally liable in a criminal or civil case, that person would want to know beforehand if their organization is willing to pay their legal fees or if that’s something they will need to account for with their own finances.  What may be missing from a policy perspective is whistleblower protection, something akin to Cyber OSHA, whereby individuals and organizations would have the ability to report unsafe cyber conditions to a regulatory authority without fearing civil and/or criminal legal ramifications.” 

Elizabeth Cartier  

“I’m interested in why, of all the executives making risk-based business decisions – and security risks are business risks – that we seem to be singling out CISOs. But generally, the threat of jail definitely makes any job less appealing, I would say.” 

Michelle Chia 

“C-suite executives have been held to account on a variety of management risks.  Publicly traded companies are held to an even higher standard, as they have a duty of care and loyalty to their shareholders and employees.  The role of CISOs is critical to their company’s operations; therefore, upholding a high standard of professionalism is paramount. All executives need to prioritize risk management, such as adopting strong communication and documentation practices that ensure transparency and accountability, to protect themselves and the organization from potential legal consequences.” 

Liisa Past 

“Humans, of course, are most likely the weakest link because they use the IT systems and in doing so click on things, connect things and open things, generally for legitimate purposes. Security professionals have to stop treating normal and predictable human behavior as a problem and consider it a given planning assumption instead. This refocuses the conversation on designing and building systems that are secure for human use rather than trying to undo human instincts and behavioral patterns. Fault tolerant systems, better design, monitoring and similar do not replace cyber hygiene but has to supplement it by making doing the wrong or dangerous thing hard. 

For example, after a well-targeted phishing campaign had over a 50% success rate in gathering credentials of law enforcement professionals in an hour, we redesigned the login page so that the first login option was using Estonian government-provided secure digital ID, a two-factor authentication/authorization system. That drastically cut down the proportion of those even using their username and password and therefore dramatically reduced the related risks.”  

Helen Patton   

“I think this is a lazy answer. Instead, I suggest that ‘human’ failures are process and culture failures, and stopping at the ‘human’ as the root cause of a vulnerability misses the mark. So, to improve security awareness, I encourage employees to examine their business processes end to end and evaluate the security risks to those processes (not just the technology), and own closing any gaps.” 

Megan Samford  

“Humans can be a risk, but we should always ask from what standpoint? Apathy? Lack of capability? Lack of awareness? We should understand where human beings are failing in the task and work backwards from that problem because the converse can also be true, humans can act as a firewall in your first line of defense, when adequately trained and educated in cybersecurity. Human error is a prevalent and significant source of cyber incidents, even the most diligent individuals can make mistakes. These mistakes can range from downloading malicious attachments to using weak passwords or misplacing storage devices, all of which can compromise system or data security. Organizations should adopt a customized approach to address diverse employee groups, providing continuous training and awareness for high-risk populations such as VIPs, human resources, finance, customer-facing roles, and developers, empowering them to practice secure behaviors.” 

Elizabeth Cartier  

“We are the weakest link. We are the users; we want systems and apps to be convenient and usable because we have jobs to do and lives to live. But that also makes people the biggest security opportunity. My team works to explain why security is important in understandable, relatable terms. We aim to empower people to understand why security matters so they can apply related principles to their own roles and own lives. We also have an approachability policy around reporting – we promise we won’t get mad when you report something, even if you screwed up. Because if we got angry every time someone tried to do the right thing and follow guidance and report something, people would stop coming to us to ask for help or flag possible incidents. But it’s definitely a cultural drive – it’s going to be different at every organization.” 

Michelle Chia 

“Cyber risk is new compared to other types of risk, especially natural catastrophes. Most societies reinforce safety best practices by providing regular training from an early age, for example fire drills. My question is, how can we socialize cyber risk earlier so that security awareness and responsibility training does not rest solely on an employer? Until then, a combination of regular training and loopback is a strong approach to fostering a culture of security awareness and responsibility. Cyber underwriters look for employee training protocols in our risk assessments because through our claims data we see that human errors can open doors for an incident. Insurers see cybersecurity training for employees as an important loss prevention strategy and often offer guidance or services to help clients boost their cybersecurity posture.”  


The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The evolving role of CISOs and senior cybersecurity executives appeared first on Atlantic Council.

]]>
Finding security in digital public infrastructure https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/finding-security-in-digital-public-infrastructure/ Mon, 21 Oct 2024 14:00:00 +0000 https://www.atlanticcouncil.org/?p=799690 As governments worldwide adopt Digital Public Infrastructure (DPI), the need for robust cybersecurity and privacy protections has never been greater. This paper delves into the critical risks and opportunities associated with securing DPI systems. With examples from India, Ukraine, and other nations, it explores how governments are managing data privacy, addressing cyber threats, and building trust in digital services. The paper highlights key considerations for policymakers, including the balance between openness and security, the role of encryption, and the importance of resilience in digital systems. As more nations turn to DPI, ensuring the safety and privacy of citizens' data is essential to creating sustainable, trustworthy digital infrastructures.

The post Finding security in digital public infrastructure appeared first on Atlantic Council.

]]>
Digital public infrastructure (DPI) has evolved as a term used to describe everything from state-run digital payment systems to national cloud and data-exchange platforms to comprehensive backups of public documents and societal information. There is no single, cohesive, standard approach to digital public infrastructure—and examples range from Kenya to India to Ukraine—but DPI efforts share state involvement in the creation or operation of key digital platforms, are intended to be used country-wide, and have significant impacts on digital trust, privacy, and cybersecurity.

This issue brief examines the potential opportunities and risks of DPI across digital trust, data privacy, and cybersecurity and resilience. As part of an Atlantic Council working group, academic, civil society, and industry experts from the United States and South Asia explored these questions as they relate to DPI payment, public service delivery, data backup, cloud, and other projects and proposals—with an eye toward the biggest unresolved public policy, legal, and technological questions associated with state development and guidance of these systems. The working group’s virtual convenings were held under the Chatham House Rule.

The group discussion’s major considerations, themes, and recommendations are meant to provide an overview of the issue and are described below across digital trust, data privacy, and cybersecurity and resilience. The group focused on these issues for several reasons, including their pertinence to advancing DPI with respect for human rights (e.g., protecting and respecting privacy); their visibility in the international discussion on responsible technology practices (e.g., accountability to the public for digital tech); their centrality in many national and subnational laws and regulations on technology best practices (e.g., data-protection regimes); their necessity to DPI safety and security (e.g., using encryption, creating backups); and their lack of prioritization in numerous DPI circles that would be better served by a deeper understanding of digital trust, data privacy, and cybersecurity and resilience issues. Approaching these issues with a dual policy and technology lens will hopefully help decision-makers eliminate or mitigate some of DPI’s most serious risks—while enabling the maximization of public-interest, rights-centered opportunities for societies around the world.

Digital trust

DPI does not exist in a vacuum. When assessing whether to trust a country’s DPI projects, citizens, domestic and foreign companies, and even other governments (among other interested parties) must have trust in technological mechanisms themselves, in the surrounding political and economic environment, in the country’s policymaking and lawmaking (both the substance and the process), and in “trust proxies” that can attest to DPI projects and hold them accountable.

  • Technological mechanisms for trust, discussed more below, could include potentially publishing code or using open-source code, creating systems for independent third-party privacy and cybersecurity audits, requiring transparency in state procurement and public-private partnership agreements, incentivizing business models and design practices geared toward trust, and clearly explaining how, when, and why certain kinds of data are collected, analyzed, stored, and shared.
  • The broader trust environment depends on factors such as the perceived legitimacy of the government in power, privacy and cybersecurity laws and regulations, and meaningful transparency—wherein not everything is necessarily public, but where government agencies and companies make available as much information as possible before systems are deployed. Citizens, domestic and foreign companies, and other governments should also consider a country’s rule of law, judicial independence, checks and balances, and whether the government engages in meaningful multistakeholder consultations before developing a project or associated laws and regulations. These considerations will also inform how prone a state might be to abuse of a DPI system, such as to manipulate markets or intrusively collect personal data.

The timeline of rolling out DPI projects matters greatly for building trust. In India, for example, what is now being portrayed as a “unified stack” of DPI technology was, in fact, a series of products and services rolled out on top of one another.1 Citizens first saw the value in certain payment projects when they could see public benefit payments in their bank accounts; now, the Indian government’s articulated vision of a comprehensive Unified Payment Interface (UPI) builds on that foundation of trust.2 Simultaneously, this example demonstrates a considerable trust gap, because the Indian government has stated broad plans for DPI that raise many more questions about digital trust, privacy, and cybersecurity than those raised by government subsidy disbursement.

Digital trust questions about DPI in India today also circle back to the importance of the rule of law, transparency, and checks and balances, some of which are eroding under the current government.3 If citizens cannot trust their government to follows laws and regulations across areas ranging from corruption to respect for a free press, that lack of trust completely undercuts their ability to trust the government to follow its own governance mechanisms for DPI.4 Similarly, private-sector companies are going to be more suspect of governments that violate the law and do not respect due process because this increases the likelihood that they can be coerced and reduces their ability to seek recourse in the event of a dispute. For example, civil society has importantly recommended that a digital identity DPI system capture the minimum data necessary to function, employ end-to-end encryption, and not collect location and other data for user verification.5 But implementing these recommendations in practice depends on the Indian government supporting robust encryption in law and policy—not mandating encryption “backdoors” for law-enforcement and national security purposes—and having legal and accountability measures to prevent unnecessary, hidden collection of personal data. As discussed more below, these protections and structures are not entirely in place. India’s Digital Personal Data Protection Act 2023 implemented a variety of requirements for private-sector companies processing personal data—with many mirroring the European Union’s General Data Protection Regulation (GDPR)—but has large and problematic carveouts for government surveillance and data-use activities, including judicial functions and “prevention” or “investigation” of any crime.6 These gaps in trust matter when it comes to convincing companies—especially non-Indian companies—to buy into systems like UPI without skepticism about the ways the state can access the data, including proprietary company data and network data.

Ukraine’s Diia, a mobile app linking more than 19 million Ukrainians with more than 120 government services, is another instructive example of the importance of sequencing for building digital trust.7 Diia was developed in pieces as the Russian war on Ukraine evolved and Ukraine’s needs evolved with it, such as enabling the government to instantaneously deposit funds in citizens’ bank accounts in response to war-related damage complaints. But Diia goes well beyond payments and exemplifies the potential expansiveness of DPI. The Diaa app portal allows Ukrainians to access identification cards, foreign biometric passports, student cards, driver’s licenses, vehicle registration certificates, and much more. It also allows them to access grants for veterans and their families, apply for grants for businesses, document property losses due to the war, apply for greenhouse grants, process unemployment benefits, process marriages and divorces, and get COVID-19 vaccine certificates, among many other options.8 Ukraine’s goal is to ultimately make 100 percent of public services available online.9

Diia’s current services catalogue
For citizens: References and extracts (thirteen services), transport (three services), environment (three services), land, construction, and real estate (twenty-one services), security and law and order (one service), licenses and permits (six services), family (twelve services), health (seven services), entrepreneurship (twenty-six services), and pensions, benefits, and assistance (nineteen services).
For businesses: Land, construction, and real estate (nineteen services), medicine and pharmaceuticals (three services), licenses and permits (twelve services), extracts and certificates (six services), transport (two services), creating a business (nineteen services), booking (two services), and “action city” (three services related to resident status).

Rolling out a program like Diia step by step does not automatically absolve governments of other important digital trust questions. For example, Diia is not open source, does not have the most transparent privacy guardrails, and has evolved considerably in scope since its initial concept. The Diia website describes security measures such as encryption and privacy measures such as collecting minimal data, but it is light on details.10 But media reports about the system’s cybersecurity and the global partnerships involved with protecting the system underscore that Diia remains trusted amid Russia’s 2022 full-scale war on Ukraine.11(Of course, there might be many strategic, operational, and tactical reasons why Ukraine is not publishing more information about the system itself—primarily Russia’s concerted efforts to infiltrate Ukrainian digital systems to exfiltrate information, poison data, or disrupt or degrade systems entirely. Ukraine’s actions might very well be intentional, boosting security through obscurity.) Delivering tangible benefits to citizens in pieces is a potentially powerful way to build trust, and doing so in the context of a country under attack impacts the trust citizens are willing to place in government systems.

Data privacy

Privacy is often portrayed in the DPI context as a binary. In this characterization, DPI either affords more privacy or less privacy than alternative digital infrastructure. In reality, privacy in the DPI context is about the protection of people’s information and people’s ability to have autonomy over the disclosure and use of their information in different contexts—not a sliding scale of “more” or “less” privacy with DPI per se. It’s about different kinds of privacy, such as from whom, with which data, and so on. Citizens can have concerns about private companies dominating digital ecosystems and harnessing data for targeted advertising purposes. They can simultaneously have different, equally valid privacy concerns about the government building and managing all digital systems for key services. All of this matters because DPI systems typically collect and produce lots of data, and many have digital identity as a significant or core feature. Governments should not implement systematic identity schemes without a corresponding, systematic privacy scheme for first-party, third-party, and derived data because of the threats to privacy, human rights, and freedoms they would pose, as well as the systemic risks to governance and public trust.

The context is critical to understanding the many data privacy questions at play with DPI, including which entities get access to which kinds of data for which purposes; how the data are stored, analyzed, transferred, and shared; how long the data are kept; and what other data and metadata are available to organizations operating in a DPI ecosystem. It is not just about the data that are gathered and described in terms of service. Metadata, or data about data, provide incredible insights into individual and group behavior. Data holders can also use data to derive or infer additional information about individuals, such as deriving family information from housing records, attempting to predict financial status and income from education information and home neighborhood, and using geolocation to derive information about religious practices, political interests, and health conditions.

This combination of data collection and data derivation, or inference, prompts privacy risk questions focused within government, about which government agencies have access to which data, how, and why. For example, if a public benefits agency has a DPI project that gives it access to a wealth of data and potentially derived or inferred data, does or can it share the data with a law-enforcement agency? Governments might instinctually want the data shared for actual or alleged security reasons, but that creates substantial privacy harms. The creation and derivation of data through DPI projects also creates privacy risk questions about the nongovernmental entities that get or could get access to DPI-related data. For example, if a company is contracted to build the underlying operating system for a digital public services program, does or could it receive data about users who register, the information users enter into forms, or metadata about system usage? As governments increasingly purchase or acquire commercial data—and look to use machine learning (ML) and artificial intelligence (AI) models, such as in DPI projects—these questions of data collection, generation, and derivation will become essential components of evaluating privacy risks and identifying necessary responses.

Some countries have comprehensive data privacy regimes that can provide a foundation for approaching data privacy protections around DPI. Kenya, for example, enshrines privacy as a fundamental right in its constitution and passed a comprehensive data protection law in 2019.12 As Kenya moves to expand digital payments infrastructure, the regulations around data collection, consent, security of data, disclosure of data, retention of data, accuracy of data, governance of data, and much more will apply to many of the companies working on DPI in the country.13 Yet, it leaves open critical questions about what governments do vis-à-vis privacy and DPI. If a privacy regulator is set up to police private-sector practices, what are the governance and accountability mechanisms in place for public-sector actors collecting, storing, analyzing, using, and sharing data?

However, not all countries have these laws. Some countries’ data privacy laws have significant gaps that are especially consequential in the DPI context (e.g., consumer protection-focused privacy laws that exempt state uses of data), and some countries working on DPI projects might choose to pursue or create exemptions for government and government-led DPI activities. India’s new, landmark data privacy law introduces many protections for citizens against company use of data but, in this vein, also has broad exemptions for state collection and use of data, creating surveillance risks and exacerbating the privacy concerns emanating from state-led efforts to undermine virtual private network (VPN) privacy, coercive police raids of social media company offices, and other actions.14 Robust judicial oversight—as recommended by a government committee report accompanying an early version of the bill—could be one way to mitigate some of these concerns and contribute to boosting trust. Other countries have robust discussions on data privacy—think of the Organisation for Economic Co-operation and Development (OECD) principles for government access to private-sector-held data for national security purposes—but the discussions are focused through just that, a national security lens. DPI projects require a more comprehensive privacy approach than many laws take, one that will encompass the activities of public and private organizations working both separately and together.

Absent or alongside laws and regulations, there is also space for companies and civil society to identify and promote privacy best practices with DPI. The OECD’s privacy principles could be one example (even as the above-referenced, recent discussions have a national security focus). Developed in 1980 and updated in 2013, the principles include collection limitation, data quality, purpose specification, and use limitation.15 These principles could be integrated into Ukraine’s Diia, such as by more clearly describing the purpose for collecting each and every kind of data involved with Diia services, or into India’s DPI stack, such as by developing strict technical and policy controls to limit data use. Overall, though, there are fewer widely adopted privacy standards than cybersecurity ones (discussed more below). This creates more space for governments, companies, and civil society to develop DPI-tailored privacy principles.

Companies, civil society groups, and other stakeholders can also create co-governance mechanisms, frameworks, and best practices for data privacy in DPI that are specific to the country, project, and context. In fact, governments should ideally be willing to collaborate with businesses and civil society in such efforts so that DPI projects reflect a whole-of-society approach. Governments should consider it a best practice to carry out meaningful, multistakeholder engagement, make sessions public rather than closed door, invite individuals from the public and not just selected civil society organizations to attend or ask questions, and set up a process by which members of the public—including companies not involved in the DPI project—can submit comments that must be reviewed as part of a due-diligence and risk-assessment process at the DPI ideation stage. In practice, of course, governments might have little interest in slowing DPI rollouts and involving civil society organizations in their DPI design efforts and procurement processes. Also, civil society organizations do not have the budgets of large tech companies, such as major software vendors that might receive DPI programming contracts or large cloud vendors that might host a government’s DPI platform or service. In this scenario, the companies would be the ones building the components, products, and services, while civil society organizations could be relegated to advocacy, research, and education (as occurs in many other contexts). This is why governments should demonstrate trust and accountability by engaging in these good-faith, multistakeholder efforts—and why civil society organizations, the public, and even companies should treat a lack of this engagement as a sign of weak trust and poor accountability.

Some companies, meanwhile, may already feel incentivized to demonstrate privacy best practices. If a company builds a DPI payment app in country A, and then decides to package this into a new business vertical and do the same in countries B through Z, it is plausibly in the company’s own interest to implement robust privacy practices and show all governments it can be trusted—that it’s not just an infrastructure provider coming from country A to take all the data. Companies behind DPI projects with cross-border ambitions should similarly see the value in privacy standards that are interoperable across borders without compromising on core privacy principles and practices (e.g., encrypting data in transit and at rest, only collecting data necessary for a defined purpose). This happened in Kenya, where telecom operator Safaricom decided in 2022 to hide more user data when processing M-PESA mobile payments, following public outcry about data breaches.16 Such efforts reflect a retroactive approach to data minimization but underscore the importance, in a purely business calculus, of ensuring that DPI projects can be trusted if they are to be truly sustained and possibly replicated (that is, sold) elsewhere.

Of course, technology will not solve governance challenges. Not all technology projects can be pursued in ways that maintain and respect strong data privacy measures—and not all involved actors, such as governments and companies, prioritize or would prioritize privacy in practice. Therefore, the first step for companies, civil society, and other stakeholders should be evaluating the privacy prospects of a project before proceeding. For example, digital identity systems that are highly centralized and link many disparate data points together could create too many surveillance risks to be built in such a fashion while respecting privacy. Rather than leaping into a DPI project in that vein, the better option might be exploring a completely different underlying design or technical approach to better protect privacy.

DPI’s privacy implications also include which data are not gathered. In Ukraine, for example, the government has digitized housing records, bank agreements, and other sets of documents and records from 2013 onward as part of its e-recovery program. Citizens who need to access records from 2012 or earlier, therefore, have no digitized record. There are consequently multiple contextual privacy questions at once, including what protections exist for data that are digitized and collected (e.g., encryption, data minimization, minimum-size thresholds for sharing aggregated data) as well as what recourse citizens have when they do not have digital records and are not seen by the state in those contexts, in ways that could adversely impact them. Data correction rights (e.g., as found in Kenya’s privacy law or the GDPR) are thus part of the privacy picture.

Cybersecurity and resilience

Cybersecurity and resilience are necessary for DPI systems to operate with trust, protect individuals’ data and system data, and facilitate their predictable and reliable use. Strong cybersecurity and resilience practices are a process rather than an end state. However, DPI projects vary widely in their cybersecurity practices. There is no comprehensive, standard, and recognized framework for DPI projects that points to existing cybersecurity and resilience principles or standards—or even identifies a floor of best practices for cybersecurity. Governments and DPI project-involved companies are taking their own approaches to everything from encryption to third-party audits to the centralization of data storage and software functions.

Concentration or centralization of infrastructure can significantly impact cybersecurity risk. For example, a government that puts all its DPI data in one server is both creating a highly attractive target for malicious actors and risking the failure of the entire system—and even the loss of all the data, if the server goes down or a hacker encrypts and then deletes the data. Conversely, distributing the data according to cybersecurity best practices could avoid creating clear single points of failure and potentially minimize the amount of data stolen if one server gets hacked.

DPI efforts around the world take varied approaches to centralization—including who builds and controls the underlying digital infrastructure and who can then build on top of that digital infrastructure (e.g., making payment apps). Ukraine’s Diia platform is built by the Ukrainian government, in partnership with others such as the US government (which provided legal, financial, and technical assistance).17 India’s DPI stack, in contrast, is being driven forward by the government with the involvement of many private-sector actors. The development of the Aadhaar digital identity system, in which more than 1.3 billion Indians are enrolled, was led by the Unique Identification Authority of India but with ongoing procurement of private-sector services and equipment to support the identity program.18 Companies have also created services based on the identity system: Aadhaar data are stored by the government in thousands of servers in Bengaluru and Manesar, but government organizations and businesses using Aadhaar data can also store data in cloud computing systems, such as those operated by Amazon Web Services.19 India’s Unified Payments Interface, by contrast, allows banks to build mobile apps using the protocol, which is ultimately government built and controlled, and the Indian government has been vague about how future DPI projects will be developed.20

Sometimes, centralizing the management or layout of technical infrastructure can have cybersecurity benefits. For example, individual organizations managing large and globally dispersed infrastructure, such as cloud service providers, might be able to build institutional knowledge, cybersecurity resources, threat information-sharing networks, and lessons learned from security at scale that smaller infrastructure managers do not and will not have. Centralizing an underlying digital infrastructure under one developer also shapes the supply chain in different ways and could, for instance, decrease the number of third-party software vendors involved in building the backbone infrastructure for a DPI project.

On the other hand, centralization can reduce resiliency and enhance cybersecurity risks. If the system is hacked and data are stolen, or if the system is degraded or disrupted entirely, there is no independently managed alternative option or backup in place. Systems built and maintained by one organization can also become top-priority targets for malicious actors, for both compromise and coercion.21 For example, countries excited about building a single backbone for a payments app, or using one provider to back up all of their citizens’ records, could find many cybersecurity benefits (e.g., security at scale, cost savings of consolidated contracts and development efforts) but might find themselves simultaneously facing elevated cybersecurity (and digital trust and privacy) risks. Kenya faced these risks in 2023 when a distributed denial-of-service (DDOS) attack overloaded servers for e-Citizen and M-PESA, the government’s e-services portal and the country’s mobile payment system, respectively, among others, and took them offline for more than forty-eight hours.22 In total, more than five thousand public services were rendered inaccessible, disrupting citizens’ abilities to access passports, pay their electricity bills, and purchase railway tickets.23 Questions about concentration and cybersecurity risk are highly complex, but this discussion and these examples are meant to underscore that concentration of infrastructure can create significant risks—risks often overlooked in many DPI efforts.

Importantly, governments can achieve interoperability and create a standard architecture, or set of standards, for a DPI system’s construction without having a single entity build the entire system. In other words, governments can pursue using a standardized query language for a DPI database, creating a template set of server specifications for a DPI identity system, or can use the same visual interface backend for a public services app—and then have multiple agencies or private companies refer to those standards as they build pieces of the DPI system. This fact should hopefully encourage governments to pursue interoperability and the implementation of technical standards (including standards that better protect privacy and cybersecurity) without thinking they must delegate the building of a DPI data farm or online portal backend to just one government agency or private-sector company.

Much like with privacy, a country’s legal, regulatory, and industry environments matter. In countries where governments are intent on creating backdoors for commercial encryption or otherwise reducing cybersecurity (e.g., India’s VPN rules), those measures will impact how secure DPI projects are against hackers and other bad actors. A country with a collection of sector-specific cybersecurity regulations is also going to face a different cybersecurity risk landscape, and a different level of DPI trust questions, than a country with a single cybersecurity law or a country with nothing but voluntary industry best practices and risk-management standards. This is especially relevant as global DPI project proposals and discussions span everything from payments to government services to identity systems to data-exchange platforms.

Policy recommendations

Promote meaningful transparency. Governments and companies involved in DPI projects should promote meaningful transparency by leading multistakeholder engagement throughout the entire DPI project lifecycle, setting up independent third-party cybersecurity and privacy audit mechanisms, and making as much information about procurement processes public as possible. Governments should also be sure to develop transparency mechanisms and policies before DPI projects are rolled out, rather than during or after the rollout, and civil society groups and citizens should hold them accountable for being transparent about any DPI efforts. The open sourcing or publication of code is more complicated from transparency and cybersecurity standpoints. While some experts endorse countries making DPI code public or using open-source code across DPI systems, countries might also see value in developing DPI systems in which the source code is not publicly available for vulnerability identification and exploitation—or in locking down development processes so contributors cannot introduce vulnerabilities through open-source development.24 Critically, this is not just about technology. Digital trust depends on the perceived legitimacy of governments, checks and balances, rule of law, social context, and much more that has nothing to do with encryption or privacy-enhancing technologies. Governments approaching DPI projects must consider this broader trust environment and do what they can to boost trust—and civil society organizations, citizens, and companies evaluating the transparency of DPI projects must consider this broader context.

Define and implement frameworks for digital trust, privacy, and cybersecurity. High-level trust is going to come down, in part, to governments aligning their DPI projects with specific, best-practice frameworks that describe policies, processes, and practices for digital trust, privacy, and cybersecurity, such as operational processes, transparency mechanisms, audits, and privacy measures. This starts with governments looking to major, well-recognized frameworks and standards for cybersecurity, such as the NIST Cybersecurity Framework and standards published by the International Organization for Standardization. There are fewer frameworks for data privacy, but DPI policymakers and implementers can still use best practices found around the world—such as robust encryption, simple and clear explanations of data collection and use practices, and data minimization—collecting only what is needed for a specific, defined, and disclosed purpose. Private-sector actors supporting DPI projects should also implement these policies, processes, and practices. As rights-respecting and best-practice data privacy and cybersecurity measures are implemented, DPI implementing organizations should include the details in public and private DPI contracts, user-facing DPI terms of service, and public transparency reporting on DPI projects. But these international frameworks, while an important baseline for cybersecurity, are not a substitute for a comprehensive digital trust framework and, on their own, are not sufficient to justify a DPI project as trustworthy.

Emphasize that DPI needs privacy and cybersecurity guardrails. The US government, civil society organizations, and others involved in tech development and capacity building should rhetorically challenge the false binary that portrays DPI needs and motivations as incompatible with data privacy, cybersecurity, and development goals. Just as some large multinational tech conglomerates portray themselves as the foremost champions of development when trying to win contracts on the ground, some governments pursuing DPI programs portray them as highly urgent projects that cannot be slowed down over privacy, cybersecurity, and other concerns.

There are certainly situations—Ukraine’s development of Diia amid the Russian war chief among them—in which there is an urgency to DPI-related projects based on an evolving domestic or international crisis. But most countries are not currently in Ukraine’s position and, even then, there is still a place for data privacy and cybersecurity. The Ukrainian government is well aware that any system like Diia built without robust cybersecurity protections is only going to create additional, exploitable security problems for the country. For instance, if the widely used Diia had numerous, basic, and obvious cybersecurity vulnerabilities, this would simply provide the Russian government with an easily attackable and high-impact target.

India and Kenya face challenges here, too. India’s Aadhaar system has already been replicated in the Philippines and Morrocco and has received interest in Kenya, Vietnam, Sri Lanka, Brazil, Mexico, Singapore, and Egypt.25 The Indian government wants to also export other DPI systems abroad.26 But Aadhaar itself, despite some safeguards, already has several privacy and cybersecurity problems, including not recording the purpose of authentication, a lack of purpose limitation for data collection, the large-scale centralization of biometric data, and the creation of new opportunities for data-linkage attacks.27 Experts have also expressed concerns that as the uses of Aadhaar identities expand, so does the government’s, or even a company’s, ability to use the number as an anchor point to track citizens.28 Hence, when countries such as Kenya look to adopt Aadhaar—and potentially other Indian DPI systems that lack appropriate privacy and cybersecurity safeguards—they are opening themselves up to additional risk. For its part, the Indian government might be both exporting digital risk and potentially undermining its own DPI messaging in the process.

This is why the US government, civil society organizations, and other stakeholders should, as one working group member put it, “limit the zone of false choice.” DPI projects can be compatible with data privacy and cybersecurity. The success of viable (including rights-respecting) DPI projects also depends upon cybersecurity and privacy guardrails that enhance system functionality, mitigate risk, boost resilience, rightfully earn public trust, and enhance innovation within a rights-protecting context. Building these guardrails depends on getting past false binaries and evaluating if and how the specific DPI proposal at hand would positively or negatively impact data privacy and cybersecurity considerations—and identifying best-practice ways to mitigate risk.

About the author

Justin Sherman is a nonresident senior fellow at the Atlantic Council’s Cyber Statecraft Initiative. He is also the founder and chief executive officer of Global Cyber Strategies, an adjunct professor at Duke University, and a contributing editor at Lawfare. He previously ran the Cross-Border Data Flows and Data Privacy Working Group for the Atlantic Council’s Initiative on US-India Digital Trade.

Working Group Members

  • Dan Caprio, Providence Group
  • Shyam Krishnakumar, Pranava Institute
  • Venkatesh Krishnamoorthy , BSA
  • Jeff Lande, The Lande Group & Atlantic Council
  • Srujan Palkar, Atlantic Council
  • Anarkalee Perera, ASG
  • Allison Price, New America
  • Nikhil Sud, Ashoka University
  • Atman M Tivedi, ASG & Atlantic Council
  • Prem Trivedi, New America

Acknowledgements

The author would like to thank all the individuals who provided comments on earlier drafts of this paper, including Nikhil Sud, Atman Trivedi, Jeff Lande, Ananya Kumar, and Trey Herr. Thanks as well to all participants in the working group for their generosity with their time, insights, and expertise—noting that all views stated within are my own and do not necessarily reflect the positions of individual working group members or their listed, affiliated organizations.

This report was made possible in part by the generous support of Mastercard. 

This report is written and published in accordance with the Atlantic Council Policy on Intellectual Independence. The authors are solely responsible for its analysis and recommendations. The Atlantic Council and its donors do not determine, nor do they necessarily endorse or advocate for, any of this report’s conclusions.

Related content

1    Erin Watson, “The India Stack as a Potential Gateway to Global Economic Integration,” Observer Research Foundation, March 22, 2024, https://www.orfonline.org/research/the-india-stack-as-a-potential-gateway-to-global-economic-integration.
2    “Unified Payments Interface (UPI),” National Payments Corporation of India, last visited July 20, 2024, https://www.npci.org.in/what-we-do/upi/product-overview.
3    See, e.g., Saraphin Dhanani, “India’s Justice System Is No Longer Independent: Part I,” Lawfare, September 21, 2023, https://www.lawfaremedia.org/article/india-s-justice-system-is-no-longer-independent-part-i; Rana Ayyub, “The Destruction of India’s Judicial Independence Is Almost Complete,” Washington Post, March 24, 2020, https://www.washingtonpost.com/opinions/2020/03/24/destruction-indias-judicial-independence-is-almost-complete/; Sabyasachi Das, “Democratic Backsliding in the World’s Largest Democracy,” Ashoka University, July 3, 2023, https://hmpa.hms.harvard.edu/sites/projects.iq.harvard.edu/files/pegroup/files/das-india.pdf; Maya Tudor, “Why India’s Democracy Is Dying,” Journal of Democracy 3, 34 (July 2023), 121–132, https://www.journalofdemocracy.org/articles/why-indias-democracy-is-dying/.
4    See, e.g., Paranjoy Guha Thakurta, “Long on Rhetoric, Short on Practice: Modi Government Battling Corruption,” Hindu, May 2, 2024, https://frontline.thehindu.com/the-nation/corruption-lok-sabha-election-2024-narendra-modi-bjp-congress/article68110170.ece; Kenan Malik, “India Enjoyed a Free and Vibrant Media. Narendra Modi’s Brazen Attacks Are a Catastrophe,” Guardian, February 19, 2023, https://www.theguardian.com/commentisfree/2023/feb/19/india-enjoyed-a-free-and-vibrant-media-narendra-modis-brazen-attacks-are-a-catastrophe.
5    “DPI and Privacy/Security,” Centre for Digital Public Infrastructure, last visited July 20, 2024, https://docs.cdpi.dev/mythbusters-and-faqs/dpi-and-privacy-security.
6    “Digital Personal Data Protection Act,” Government of India, 2023, https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf.
7    “Diia in DC,” US Agency for International Development, last visited July 21, 2024, https://www.usaid.gov/diiaindc; “Digital Country,” Ukraine Now, last visited July 21, 2024, https://ukraine.ua/invest-trade/digitalization/.
8    “Digital Country”; “Guide,” Diia, last visited July 21, 2024, https://guide.diia.gov.ua.
9    Ibid.
10    “Як Дія зберігає та використовує інформацію про мене?” Diia, last visited July 21, 2024, https://diia.gov.ua/faq/16; “Наскільки захищений сервіс Дія?” Diia, last visited July 21, 2024, https://diia.gov.ua/faq/17.
11    See, e.g., Yuliya Panfil, et al., “Can Ukraine Transform Post-Crisis Property Compensation and Reconstruction?” New America, February 7, 2024, https://www.newamerica.org/digital-impact-governance-initiative/reports/ukraine-post-crisis-property-compensation-reconstruction/; Anatoly Motkin, “Ukraine’s Diia Platform Sets the Global Golden Standard for E-Government,” Atlantic Council, May 30, 2023, https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-diia-platform-sets-the-global-gold-standard-for-e-government/.
12    “Data Protection Laws of the World: Kenya,” DLA Piper, last visited September 26, 2024, https://www.dlapiperdataprotection.com/index.html?t=law&c=KE.
13    Nzilani Mweu, “Kenya—Data Privacy Overview,” DataGuidance, February 2024, https://www.dataguidance.com/notes/kenya-data-protection-overview.
14     Kim Lyons, “Police in India Raid Twitter Offices in Probe of Tweets with ‘Manipulated Media’ Label,” Verge, May 24, 2021, https://www.theverge.com/2021/5/24/22451271/police-india-raid-twitter-tweets-government-manipulated-media; Matthew Loh, “Jack Dorsey Said India Raided Homes of Twitter Workers When the Company Wouldn’t Shut Down Accounts for the Government,” Business Insider, June 13, 2023, https://www.businessinsider.com/jack-dorsey-india-raided-homes-of-twitter-workers-ban-refusals-2023-6; Varsha Bansal, “VPN Providers Flee India as a New Data Law Takes Hold,” Wired, September 25, 2022, https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/.
15    “Recommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data,” Organisation for Economic Co-operation and Development, October 7, 2013, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0188.
16    Meshack Masibo and Victor Kapiyo, “Concealing M-Pesa Details is a Step in the Right Direction,” KICTANet, May 26, 2022, https://www.kictanet.or.ke/concealing-m-pesa-details-is-a-step-in-the-right-direction/.
17    “USAID Administrator Announces Intent to Provide $650,000 in Assistance for Digital Transformation,” US Agency for International Development, January 18, 2023, https://www.usaid.gov/news-information/press-releases/jan-18-2023-us-supported-e-government-app-accelerated-digital-transformation-ukraine-now-ukraine-working-scale-solution-more-countries.
18    “Aadhaar Dashboard,” Unique Identification Authority of India, last visited July 21, 2024, https://uidai.gov.in/aadhaar_dashboard/.
19    “‘Aadhaar-Enabled DBT Savings Estimated over Rs 90,000 Crore,’” Times of India, July 11, 2018, https://timesofindia.indiatimes.com/business/india-business/aadhaar-enabled-dbt-savings-estimated-over-rs-90000-crore/articleshow/64949162.cms; Pankaj Patil, Mandar Patil, and Vikas Tiwari, “How to Build an Aadhaar Data Vault on AWS,” Amazon Web Services, November 22, 2022, https://aws.amazon.com/blogs/publicsector/build-aadhaar-data-vault-aws/.
20    See, e.g., “Accepted Payment Methods on Google Play,” Google Support, last visited July 21, 2024, https://support.google.com/googleplay/answer/2651410?hl=en&co=GENIE.CountryCode%3DIN.
21    Thanks to Trey Herr for discussion of this point.
22    Kabui Mwangi and Dominic Omondi, “Hackers Shake Kenya’s Digital Financial System,” Business Daily Africa, July 28, 2023, https://www.businessdailyafrica.com/bd/corporate/technology/hackers-shake-kenya-s-digital-financial-system–4318466.
23    Ibid.; Nanjira Sambuli, “When the Rubber Meets the Road: Cybersecurity and Kenya’s Digital Superhighway,” Carnegie Endowment for International Peace, October 12, 2023, https://carnegieendowment.org/research/2023/10/when-the-rubber-meets-the-road-cybersecurity-and-kenyas-digital-superhighway?lang=en.
24     In the first case, for example, a government agency might program a DPI system on its own or hire a private-sector contractor to do so, after which the code would be published for testing and auditing but not for editing. If the system itself were open source, however, not only would the code be public but individuals, nongovernmental organizations, and others would be able to contribute code to the DPI system itself.
25    Dia Rekhi, “Philippines, Morocco Implemented Adhaar System; More Countries Could Follow suit, Says Official,” Economic Times, March 22, 2023, https://economictimes.indiatimes.com/tech/technology/philippines-morocco-implemented-aadhaar-like-system-more-countries-could-follow-suit-says-official/articleshow/98914940.cms; Abigail Opiah, “National Digital Identity Authorities Share Real-World Governance and Service Gains,” Biometric Update, June 7, 2024, https://www.biometricupdate.com/202406/national-digital-identity-authorities-share-real-world-governance-and-service-gains.
26    Rhik Kundu and Subhash Narayan, “India to Expand Scope of Digital Stack Exports to Global South Nations,” Mint, April 21, 2024, https://www.livemint.com/news/india/india-to-expand-scope-of-dpi-exports-to-global-south-11713694336935.html.
27    Subhashis Banerjee and Subodh Sharma, “Privacy Concerns with Aadhaar,” Architecture and Hardware 62, 11 (November 2019), https://cacm.acm.org/research/privacy-concerns-with-aadhaar/; Debanjan Sadhya and Tanya Sahu, “A Critical Survey of the Security and Privacy Aspects of the Aadhaar Framework,” Computers & Security 140 (May 2024), https://www.sciencedirect.com/science/article/abs/pii/S016740482400083X.
28    K. Sudhir and Shyam Sunder, “What Happens When a Billion Identities Are Digitized?” Yale Insights, March 27, 2020, https://insights.som.yale.edu/insights/what-happens-when-billion-identities-are-digitized.

The post Finding security in digital public infrastructure appeared first on Atlantic Council.

]]>
Capture the (red) flag: An inside look into China’s hacking contest ecosystem https://www.atlanticcouncil.org/in-depth-research-reports/report/capture-the-red-flag-an-inside-look-into-chinas-hacking-contest-ecosystem/ Fri, 18 Oct 2024 13:00:00 +0000 https://www.atlanticcouncil.org/?p=799964 China has built the world’s most comprehensive ecosystem for capture-the-flag (CTF) competitions—the predominant form of hacking competitions, which range from team-versus-team play to Jeopardy-style knowledge challenges.

The post Capture the (red) flag: An inside look into China’s hacking contest ecosystem appeared first on Atlantic Council.

]]>

Table of contents

The China CTF competition tracker

The China CTF Competition Tracker presents data on 54 annually recurring hacking competitions in China. Each placard includes competition names in English and Mandarin, as well as logos of government sponsors of the competition. Users can click on each placard to reveal average number of annual participants, years of operations, hosts in both English and Mandarin, and links to competition write-ups from participants.

Executive summary

China has built the world’s most comprehensive ecosystem for capture-the-flag (CTF) competitions—the predominant form of hacking competitions, which range from team-versus-team play to Jeopardy-style knowledge challenges.

Our report finds that, pursuant to a government policy issued in 2018, many of China’s government ministries host recurring CTFs. These ministries support a handful of smaller competitions, especially through provincial or municipal governments, and one or two marquee national competitions each. These sponsoring government organs include the Ministry of Education, Cyberspace Administration of China, Chinese Academy of Science, Ministry of Industry and Information Technology, Ministry of Public Security, Ministry of State Security, and People’s Liberation Army. Over the past three years, China has hosted between forty-five and fifty-six competitions each year. In total, we identified 129 unique competitions since 2004, fifty-four of which have recurred at least once annually. Annual competition attendance can range from many hundreds to tens of thousands. The largest single-year attendance observed for the Ministry of Public Security’s Wangding Cup—which includes both companies and college students—exceeded thirty-five thousand participants.

The result of this widespread government support is an ecosystem that includes multiple national-level collegiate and professional hacking competitions. China’s CTF ecosystem is unparalleled in size and scope—something akin to four overlapping National Collegiate Athletic Associations, each with their own primary government sponsor, just for cybersecurity students to exercise their skills. We find evidence that many government-sponsored marquee competitions include talent-spotting mechanisms for recruitment.

Established practitioners also flourish in this ecosystem. Many of China’s best cybersecurity companies either host their own competitions or participate in the nationwide XCTF League. Elsewhere, foreigners are both brought into China or sought out abroad. Our report details a US college student visiting China to participate in Real World CTF who received a pitch from Chinese intelligence. On the other side of the coin, GeekCon is a private-sector competition from China that is hosted abroad and builds community with foreign hackers.

China’s CTF ecosystem should inspire policymakers outside China, but outright imitation should be avoided. Having so many national competitions in one country is wasteful. Still, Western countries should ensure the robustness of their CTF ecosystems. Each country will need to study its own systems to determine its goals, efficacy, and reach.

Introduction

Boxers do not train by reading books. Instead of in a library, they are found floating around the ring, throwing punches into bags, dodging their coach’s padded hands, and repeating their moves for hundreds of hours—all for a fight that lasts up to thirty-six minutes. Training in the ring is crucial for a fighter.

That same exposure to hands-on practice, in addition to classroom learning, is critical for cyber operators. While a degree from a prestigious institution can help someone land an interview at a tech company, companies screen candidates via coding interviews to check for actual, demonstrable capabilities. In cybersecurity, hacking competitions often serve this same role—letting students and experts prove their abilities in a safe, legal environment.

China has built the world’s most comprehensive ecosystem for CTF competitions. Hacking competitions build community, showcase talent, stimulate innovation, and allow participants to get hands-on experience.

Competitions help build communities and social bonds between hackers. In China, sector-specific competitions, like those for healthcare or public security bureaus, encourage coordination between participants in the same industry and their regulators. The social connections that result from these competitions can help participants coordinate quickly across organizations. Business in the same sector, such as hospitals, might see similar threats arise in close succession. Lessons learned by one hospital can be implemented in others if sharing mechanisms exist. Informal mechanisms, such as social relationships, allow information to spread quickly. The same is true for offensive teams—hackers struggling to hit a certain target might find a friend with access to the right tools to complete the job.

These competitions help the government spot talent for both regulatory and standard-setting roles, such as those under the Ministry of Industry and Information Technology (MIIT), or for offensive and defensive missions, such as those under the Ministry of Public Security, Ministry of State Security, and People’s Liberation Army. Some CTF winners receive admission to national talent programs; others are entered into a national database for cybersecurity talent. Many participants take home small sums of money. China also has a robust private-sector CTF ecosystem with a few national-level competitions that attract thousands of participants and support industry. Many of the competitions we cover in the “notable competitions” section of this paper have explicit talent spotting, recruitment, or talent programs in place.

Competitions also spur innovation. In some cases, this innovation can include new technologies. China’s Qiang Wang Elite Cyber Mimic Defense Competition helps the People’s Liberation Army (PLA) develop cyber mimic defense techniques. Participants come from industry, academia, and abroad to showcase their approaches to the PLA and its Purple Mountain Laboratory. Other forms of innovation are more subtle. When hackers exploit software, they do not all use the same strategy or tactics. A great hacker might take one step to achieve their objective, while an average hacker might take three steps. Once someone sees that great hacker’s attack path, their more efficient approach can be replicated by others. This transfer of knowledge is less glamorous and well-paid than zero-day (0day) vulnerability demonstrations but is just as impactful. China’s robust CTF ecosystem ensures that many participants learn from one another.

The hands-on experience from competitions is irreplaceable. Academia can use CTFs to check that students are not only grasping concepts in the classroom but are also ready to implement their education in the workplace. Students can use the opportunity to identify their own strengths and weaknesses, practice lackluster skills, and spot more capable classmates who can help them learn. Practitioners can keep their skills fresh on a variety of topics, even when their job only asks them to focus on a small subset of their capabilities.

This report aims to introduce and examine key components of China’s hacking contest ecosystem. To achieve this, we analyzed more than 120 China-based hacking contests since 2004, which we made available to the public.1 We display key information regarding more than fifty unique, recurring CTFs in our visualization, which you can access via this footnote.2 Key findings (see below) include the number of annual participants, total instances of participation, and most frequent competition hosts and cyber range providers in China. By integrating these data with an examination of relevant policy, key milestones within China’s hacking community, and the analysis of selected contests, we offer a comprehensive understanding of the evolution and strategic value of these events.

We anticipate that this analysis will be most useful to policymakers who are seeking to compare their countries’ promotion and use of hacking competitions to China’s system. This analysis should be considered in light of other efforts to improve cybersecurity education and job preparation.

CTF competitions

CTF competitions are simulated cybersecurity challenges and are divided into two main types: Jeopardy and Attack-Defense. Jeopardy CTFs challenge teams with tasks such as reverse engineering, web security, binary exploitation, and cryptography. Teams earn points by solving challenges and “capturing the flag,” a piece of data hidden throughout multiple challenges that proves a successful hack.

Attack-Defense CTFs involve teams attacking each other’s systems while defending their own. These systems are often virtual machines with pre-built toolkits, software, and flags provided by the hosts. Each team receives the same setup. To score points, teams must identify vulnerabilities, develop exploits, and capture opponents’ flags.

CTFs are hosted in cyber ranges, i.e., virtual environments simulating real-world networks, which provide a safe and legal space for participants to compete.

Exploit competitions focus on finding and exploiting 0day vulnerabilities.3 In contrast, CTF vulnerability-mining contests typically involve known vulnerabilities (n-days) in controlled environments. These competitions involving known vulnerabilities provide two benefits. First, participants exercise their exploitation capabilities in crafting code to take advantage of the vulnerability. Second, participants and organizers can learn new attack paths available against the targeted software. This knowledge provides enduring value to hackers who can replicate the same strategy in future attacks.

Background: China’s hacking competition policy

In 2014, President Xi Jinping pledged to transform China into a “cyber powerhouse.”4 Between 2015 and 2021, China took significant steps to improve its cybersecurity talent development and recruiting pipelines.5 The Ministry of Education revamped cybersecurity degree curricula, began building a National Cybersecurity Talent and Innovation Base in Wuhan, designated some universities as World-Class Cybersecurity Schools for their excellence in cybersecurity education, and identified the cybersecurity sector as an area for development in key policy documents.6 The US National Initiative for Cybersecurity Education (NICE) inspired many of China’s efforts. In 2017, the China Information Technology Security Evaluation Center (CNITSEC, also called the MSS 13th Bureau) even selectively translated and republished a NICE report on cybersecurity competitions and their impact on workforce development.7And while the United States also hosts many cybersecurity competitions for students and the public, China has excelled at this.8

China set about creating its ecosystem in 2018 during its most intense period of cybersecurity talent policy reform. The Central Cyberspace Affairs Commission (CAC) and Ministry of Public Security (MPS) issued the Notice on Regulating the Promotion of Cybersecurity Competitions.9 The document serves as the country’s guiding policy for cybersecurity events from exploit competitions to CTFs. The authors reiterated the need for hackers to apply to the MPS to travel abroad for competitions, criticized some players’ “profit-seeking” behavior, and required vulnerabilities be disclosed to “public security and other relevant departments.” The last part is especially notable, as the policy to collect vulnerabilities at domestic competitions preceded the 2021 Regulations on the Management of Software Vulnerabilities, which mandated collection from industry.10 The notice also explicitly banned “the use of high-value prizes to attract participants,” a policy ignored by China’s most important exploit competitions, such as the Tianfu Cup. For all that the notice banned, the most impactful clause was its last, which encouraged relevant ministries for cybersecurity, education, information technology, and public security to promote cybersecurity competitions.

Today in China, hacking competitions have become essential components of cybersecurity capability development, aligning with key policy frameworks and serving as an integral part of educational curricula. Many websites for China’s hacking contests prominently display their commitment to aligning with Xi’s vision of transforming the country into a cyber powerhouse, presenting this alignment as their core purpose. They highlight cyber-related goals and objectives drawn from various iterations of the National Congress of the Chinese Communist Party, held every five years, as well as specific legislation such as China’s Data Security Law and the country’s Five-Year Plans.

In September 2023, under the guidance of China’s Ministry of Education, the Discipline Evaluation Group of the Academic Degrees Committee of the State Council released its “White Paper on the Practical Ability of Cybersecurity Talents—Talent Evaluation.”11 The report emphasizes that colleges and universities with undergraduate programs play a crucial role in cultivating cybersecurity talent, accounting for 90 percent of the new workforce. Additionally, 63 percent of surveyed institutions found hacking competitions effective for training cybersecurity professionals. Among competing students, 45 percent began their participation in their freshman year, and 32 percent in their sophomore year, indicating that “cybersecurity competitions have become one of the most effective methods for colleges and universities to assess the practical skills of cybersecurity talent.”12 To further encourage involvement, 75 percent of colleges offered financial incentives to students who excel in these competitions, and 75 percent have allocated budgets for attack and defense laboratories designed to enhance students’ practical abilities through required courses.13

Promoting homegrown CTFs and keeping China’s best hackers at home offers a few advantages.

  • First, policymakers viewed software vulnerabilities—flaws in code that allow attackers to exploit targeted systems—as a national resource. It would be years before China mandated their total collection but, in 2018, China settled on stopping researchers from participating in foreign exploit competitions and collecting vulnerabilities at domestic competitions.14
  • Second, security services can learn new exploitation paths for common targets by watching how competitors attack their targets. Even without finding new vulnerabilities, observing other people’s exploits can improve attack methodologies.
  • Third, if marketed effectively, hacking competitions can inspire high school students to consider a relevant degree at university. Influential educational bodies in China consider cybersecurity competitions an important way to promote the field.15
  • Fourth, in addition to requirements for continuing education and certifications, a robust ecosystem of CTFs allows students and practitioners to engage regularly in events designed to keep their skills fresh.
  • Fifth, successful Chinese CTF participants have joined top domestic tech companies’ vulnerability research labs or founded cybersecurity startups offering specialized services, such as cyber ranges and automated vulnerability discovery systems, to Chinese companies and government agencies.16
  • Sixth, the government created an environment in which it could more easily entice China’s hackers to support its efforts through vulnerability research or other means, by proactively screening their travel abroad and creating a competitive ecosystem at home. Key CTF contests analyzed in this report are affiliated with the Ministry of State Security (MSS), PLA, and MPS serving as crucial talent-recruitment pipelines for security services.

Methodology

This report collected data mainly from three websites that host information about cybersecurity competitions in China (ichunqiu, XCTF, and CTFtime), but also included smaller data contributions from GitHub accounts, blogs, and WeChat accounts. The data set includes CTFs (Jeopardy or Attack-Defense), vulnerability competitions, and two competitions for technology development.

Our data are subject to significant inflation. There is no way to disambiguate competition participants across years. If a team of ten college students participates in a competition, two graduate, eight return the following year, and two new freshmen replace the two graduating seniors, then only twelve individuals will have participated over two years, but total attendee numbers will show twenty. Without the ability to deduplicate attendees, our numbers are best thought of as instances of participation, rather than individuals participating.

Whenever possible, attendance figures are first derived from the official competition website for each event. The second-most authoritative sources are the competition listing websites whose data were used to identify many of these competitions. Finally, if neither preceding figure could be found, then media accounts covering the event were acceptable. In the event that any of the sources identified the number of teams, rather than participants, and there was no other available sourcing, the authors used the most conservative estimate of four-person teams to determine the number of participants.

Some competitions had data available for some years and for some rounds (preliminary versus finals), but not all. To estimate the number of attendees for each year without data, we averaged the number of attendees during years for we had data and applied that number for missing instances. In cases where data were available for some rounds and not others, especially across years, we favored the use of preliminary rounds to calculate average annual attendees and did not count final rounds toward the aggregate competition number, as those attendees were already counted in the preliminary round. Additionally, many competition numbers did not include 2024 participation data, as our data were pulled in the first half of 2024 and many annual competitions occur in the fall. To estimate the number of unique annual hacking contests in China (Figure 2), we filtered out the qualification and semifinal rounds, retaining only the final rounds to avoid duplication and more accurately reflect standalone competitions.

China has a handful of competitions specifically to urge college students to produce new ideas for products that address cybersecurity issues (National College Student Information Security Contest, 全国大学生信息安全国赛; National Cryptographic Technology Competitions, 全国密码技术竞赛决赛). We excluded these product-focused innovation competitions. However, we included two research-focused competitions related to improving cyber defenses and the automation of software vulnerability discovery, on the grounds that these competitions included many of the same universities’ teams that participated in CTFs, included the use of the technology in an attack-defense testing dynamic, and were not exclusively predicated on the judgment of a panel and the award of investment money into a product idea. Additionally, competitions that were held by a single university for its own class or students were not included, as they are less individually significant, too numerous, and too difficult to identify in a comprehensive manner. Competitions listed on Chinese websites but hosted abroad by international organizations with no overlap with Chinese organizations (e.g., Google CTF) were also scrubbed from the data.

Some notes on coding of provincial or municipal government offices.

  • Local MIIT offices (天津市工信局) were counted toward the MIIT.
  • Local CAC offices (天津市委网信办) were counted toward the CAC.
  • Local Government (某市人民政府) were counted toward the local government.
  • Any subordinate organization counted toward its ministry.
  • The China Academy of Engineering (中国工程院) was counted toward CAS.
  • China Institute for Innovation and Development (国家创新与发展战略研究会) = MSS.17

On the webpage and visualization, we excluded events that occurred once, were hosted by a university for that university’s own students, or were corporate events meant to provide crowd-sourced security to a company through public testing (e.g., H3C and Didi Chuxing competitions).

Key findings on China’s hacking contest ecosystem

China’s ecosystem for collegiate CTFs is the best in the world. Multiple CTFs bring together hundreds of college teams across the country to compete annually, including the Information Security Ironman Triathlon, Qiang Wang Cup, Wangding Cup, and National University Cyber Security League. The system is so robust, it is as if there are multiple National Collegiate Athletic Associations for collegiate hacking. The MPS, PLA, MSS, and Ministry of Education all support one or more national-level collegiate events. For the United States, an equivalent ecosystem would see national collegiate-level competitions held by the Department of Defense, the Director of National Intelligence, the Federal Bureau of Investigation, and the Department of Education.

The following sections will explore this ecosystem in greater depth, examining the factors enabling its growth, the organizations hosting these contests, participation rates, sector-specific competitions, the role of cyber range companies, and a preliminary overview of an online database we have developed based on our collected and analyzed data.

1. Growth enablers

The number of domestic hacking competitions in China soared since 2014, following the success of Chinese teams in prestigious international contests. In 2013, Tsinghua University’s Blue Lotus team became the first Chinese team to compete in the DEF CON CTF finals in Las Vegas.18 In 2014, China’s Keen Team triumphed at the Pwn2Own contest in Vancouver, following its strong performance in the previous year’s Mobile Pwn2Own in Tokyo.19 The Blue Lotus team’s achievements significantly influenced China’s CTF hacking culture, leading to the creation of Baidu CTF (BCTF) and the XCTF International League, China’s largest CTF tournament, between 2013 and 2015.20 Keen Team’s success also led to the establishment of GeekPwn in 2014, an event modeled after Pwn2Own and now known as GeekCon.21 GeekPwn emerged during a time when Chinese tech companies and manufacturers were wary of hackers, often refusing to participate in competitions out of fear of reputational damage. Some hackers even attempted to disrupt GeekPwn’s inaugural event.22

As Chinese teams’ successes abroad began to challenge previous taboos at home and highlight the strategic importance of cybersecurity talent, companies like Baidu, Tencent, and Qihoo 360 began sponsoring events, acquiring teams, and recruiting top talent.23 As a result, the number of hacking competitions in China proliferated. We found that China has hosted at least 129 unique events since 2004, with most occurring after 2014. Our dataset includes 554 rounds of competition (preliminary, semifinal, and final, as applicable) for these events. From 2017 to 2023, the number of hacking contests stabilized, with roughly thirty-seven to fifty-six unique events held annually during this period (see Figure 1). For a comprehensive list of specific events conducted each year between 2004 and 2023, refer to Appendix A.

In summary, the growth and prominence of China’s hacking contest ecosystem have been significantly encouraged and supported by the state, as noted in the policy section of this paper. However, the findings from this section also indicate that this ecosystem has been organically driven by China’s dynamic and highly skilled hacking community. This community has embraced the opportunities provided by government initiatives and fostered a culture of innovation and collaboration, creating a thriving ecosystem that combines state support with grassroots engagement.

2. Main annual events

Among the 129 unique events we have identified, fifty-four are annual competitions—fifty-one CTF contests, two exploit contests, and one combining both. An overview of these recurring competitions can be found in an online database we developed from our collected and analyzed data (see Figure 2 for a preview). Each competition features a placard that reveals additional details upon clicking, as illustrated in Figure 3. This information includes years of activity, average and total participation numbers, host organizations, and links to detailed competition write-ups.

Figure 2: CTF online database dashboard

Thanks to Joseph Pantoga for creating this dashboard.

Figure 3: CTF online database placard

Thanks to Joseph Pantoga for creating this dashboard.

3. Host organizations

While many of the fifty-four annual competitions are organized by private companies or universities, they also involve state institutions, as detailed in Figure 4. Competitions frequently have more than one sponsor, so sponsor association is not an exclusive quality (e.g., a competition might have MPS and CAC as sponsors).

4. Participation rate

Participation in China’s contests varies widely. Of the fifty-four recurring events identified, most attract between a few hundred and two thousand participants annually. The top ten contests draw from five thousand (in the case of the Huawei Cup) to more than thirty-five thousand (for the Wangding Cup), as shown in Figure 5.

5. Sector-specific contests

China’s CTF landscape is set apart by its wide range of sector-specific CTF contests, such as those for healthcare, law enforcement, mobile applications, cryptography, vehicles, and smart cities. These competitions are tailored to address the distinct cybersecurity challenges of each sector. For example, the National Health Industry Cyber Security Skills Competition (全国卫生健康行业网络安全技能大赛) tests participants’ cybersecurity skills in medical contexts.24 Competitors typically include information-technology (IT) and cybersecurity professionals from hospitals and healthcare organizations. Similarly, the National Industrial Control System Information Security Competition (全国工控系统信息安全攻防竞赛) and the National Industrial Internet Security Technology Skills Competition (全国工业互联网安全技术技能大赛) focus on industrial control systems, featuring attack and defense scenarios that include physical systems.25 The Blue Cap Cup (蓝帽杯), centered on law enforcement, draws participants from police academies nationwide and emphasizes electronic forensics and case investigation challenges alongside traditional CTF tracks.26

6. Cyber ranges

Among the fifty-four recurring competitions identified in this report, most rely on either Beijing Integrity Tech (永信至诚) or Cyber Peace (赛宁网安) as their cyber range providers, while others develop in-house or ad hoc solutions tailored to specific contests.

In September 2024, the Federal Bureau of Investigation (FBI) announced that it had seized control of a botnet run by Integrity Tech in connection with the activities of the Chinese government-linked hacking group Flax Typhoon. This group targeted critical infrastructure in the United States, Taiwan, and globally, including corporations, media outlets, universities, and government agencies.27 Following the FBI statement, a joint advisory from the FBI, Cyber National Mission Force, and the National Security Agency (NSA) accused Chinese cybersecurity company Integrity Tech of compromising hundreds of thousands of Internet of Things (IoT) devices since 2021, with more than 260,000 devices affected by June 2024, spanning North and South America, Europe, Africa, Southeast Asia, and Australia.28 This is significant not only as a rare public outing of a company involved in state-sponsored malicious activities, but also due to Integrity Tech’s key role in China’s talent pipeline, both as a leading cyber range provider and through its active involvement in CTF contests.

Cyber ranges for at least twenty-six of the fifty-two recurring CTF hacking contests identified in this report are operated by Integrity Tech. These contests include some of the country’s most prestigious in terms of complexity, payouts, participation rates, and geographic reach. They are organized by both private companies and leading universities, such as the Matrix Cup, alongside government-sponsored events, such as the Information Security Ironman Triathlon, XCTF, Xiangyun Cup, Wangding Cup, and Qiang Wang Cup. Notably, competitions like the Wangding Cup and Qiang Wang, organized by the MPS and the PLA respectively, focus on IoT devices and vulnerability mining, reflecting the targets and activities for which Integrity Tech has been accused in the U.S. advisory.3031 Each of these events is analyzed in detail in the following sections.

As they provide simulated digital environments, cyber ranges collect and store performance data that state security services can leverage to gain insights into network configurations, vulnerabilities, and defenses, potentially informing real-world targeting strategies. Attackers can use these stored simulations and exercises to refine their techniques, reverse engineer attacks, and develop exploits, while also learning how defenders respond to craft strategies that bypass security measures. Although the extent is unclear, Integrity Tech’s capabilities as a leading cyber range provider likely enhanced its operational effectiveness in conducting cyber operations.

Notable competitions

We selectively highlight some competitions from our dataset. These were selected for their apparent ties to the military or security services, their relative size within China’s CTF ecosystem, or their association with efforts to develop technology. They are not provided in any particular order.

The Information Security Ironman Triathlon (信息安全铁人三项赛)

The Infosec Ironman Triathlon is designed to improve ties between industry and academia, and likely supports talent spotting by the MSS. The 2024 edition enters anyone qualifying for the semifinals into the National Cybersecurity Talent Database (国家网络安全人才库).29

Jointly established by CNITSEC (MSS 13th Bureau) and the Ministry of Education in 2016, the Infosec Ironman Triathlon attracted more than three hundred universities for its 2024 edition.30 The competition is held annually, with subnational competitions around the country producing teams that qualify for the finals, called the Great Wall Cup (长城杯).31 The three legs of the triathlon include: Personal Computer Confrontation (个人计算环境安全对抗), which includes personal computers (PCs), smartphones, and wearables; Corporate Environment Confrontation (企业计算环境安全对抗), which includes penetration testing, large-systems management, and setting up defensive technology; and Information Security and Data Analysis Confrontation (信息安全数据分析对抗), which focuses on incident response and management.32

The competition tightens relationships between universities and the cybersecurity industry by pairing the two on teams. Team structure requires one teacher from the university and another from industry (assigned by lottery), along with four competing students and two alternates.33Organizers designed the structure of the teams specifically to encourage a feedback loop between what industry needs and expects from students, what universities teach, and what the competition demands of participants. To this end, the organization’s oversight structures are populated by China’s most influential cybersecurity companies, universities, and government organizations.34

The Infosec Triathlon has also supported talent spotting since its inception. The inaugural competition selected ninety students to join a three-month “closed session” training camp (封闭集训), which included lectures from corporate sponsors, among others. At the end of the camp and after examination, a “talent committee” recommended students to be hired by members of the China Information Industry Trade Association.35

Talent spotting has evolved since 2016, however. The 2024 Infosec Ironman Triathlon regulations—hosted on the MSS 13th Bureau’s website—state that “teams which reach the semi-finals and finals will receive certificates, prize money, and be entered into a National Cybersecurity Talent Database (国家网络安全人才库).”36 There are few mentions of this database available online. The earliest available mention is from press coverage of 2022 Nation Cybersecurity Week. That coverage suggests the database was founded by the National Cybersecurity Education and Technology Industry Fusion Test Zone (国家网络安全教育技术产业融合发展试验区), which has five locations throughout China and was also launched in 2022.37 The test zones are overseen by the CAC, MIIT, Ministry of Education, and Ministry of Science and Technology.38 The only other specific mention of the database is from the MSS’s National Cybersecurity Talent Cultivation Base website (国家网络空间安全人才培养基地, see discussion in CTFWar).39 If the database is meant to facilitate hiring into the private sector, it should be easy to find, readily searchable, and promoting the success of its talent placement into industry. The database is none of these things. The database might be privately maintained by the government, or it might be a project that has failed to launch and remains aspirational.

National Collegiate Cybersecurity Attack and Defense Competition “Zhujian Cup” (全国大学生网络安全攻防竞赛, “铸剑杯”)

The National Collegiate Cybersecurity Attack and Defense Competition may pit college students against actual intelligence-collection targets. There are a few state media pieces about the event, which was first held on New Year’s Eve 2023, and the language and images are tightly scripted.40 The competition appears normal at first glance. A piece from People’s Daily states that two hundred students from twenty-nine universities participated in the event.41 Images show speakers giving talks to the students in attendance.

The Zhujian Cup, as the competition is also known, is hosted by Northwestern Polytechnical University alongside known vulnerability suppliers to the Ministry of State Security and the Shanxi Provincial CAC.42 The university conducts defense-related work and is one of the “Seven Sons” universities of National Defense overseen by the State Administration of Science, Technology, and Industry.43 Eighteen months before the Zhujian Cup, state propagandists named the university an alleged victim of US hacking. The competition is also supported by the Shanxi branch of the National Cybersecurity Education and Technology Industry Fusion Test Zone (国家网络安全教育技术产业融合发展试验区), which hosts the National Cybersecurity Talent Database associated with the aforementioned Infosec Ironman Triathlon.

But unlike almost every other competition examined in our dataset (see also CTFWar below), there are no write-ups by participants. There are no posts on Twitter, Weibo, LinkedIn, Bilibili, or any other social media site about the competition. This secrecy is legally enforceable.

Prior to participating in the Zhujian Cup, students are required to fill out three documents.44 The first collects typical information, such as name, sex, picture, and competition history. But the second document is atypical.45 The “political examination form (政治调查表)” asks students to recount their “political and ideological work performance” and asks: “Have you received any awards or punishments, if so when and where?” “Do immediate family members have any significant problems?” and “Are there any problems in your main social relationships?” The questions and title of the document make clear it is intended to serve as a background check. Readers unfamiliar with China may reasonably question the severity of the background check, which does not probe for personal moral defects the way checks from Western governments do. But the competition’s political examination form aligns with what is understood about the political background-check process for PRC government employees.46

The third document requires the student’s name, student number, and a signature.47 The content of that document, translated below, indicates that students are participating in a secretive competition and suggests that the target students are attacking is an actual intelligence-collection target. Critically, students promise not to disrupt the availability of the system they are attacking, nor cause a destructive impact to it; both are important stipulations if they are trying to conduct espionage undetected. Students affirmatively commit to “assist in deleting and removing the acquired data, and deleting and removing backdoor programs uploaded to the target system, and will not privately retain any data or backdoors.” The document continues, stating that various electronic and paper media will not be kept by the students, that such content should not be made public, and that the students are legally responsible for maintaining the secrecy of the information. If a leak occurs, students agree in Section 9 to bear responsibility for damages caused to the “competition host and the country.”

None of these requirements are standard for CTFs. Competitions typically occur entirely on a network established specifically for the competition, which functions as a cyber range. There are no data to delete and no backdoors to delete during regular competitions.

But swearing to secrecy, filling out a background check, accepting responsibility for damages to the nation, and promising to remove acquired data and delete backdoors on targeted systems are not the only atypical indicators of the Zhujian Cup’s activity. First, the Northwestern Polytechnical University website for the event lists the date of the competition as December 30–31, 2023. In parenthesis, the authors clarify that this date is for the “public environment actual confrontation competition (公开环境现场实战比赛),” begging the question of why it clarifies that this part of the competition is public. Second, the date itself—the weekend of New Year’s Eve—is an excellent time to attack foreign targets whose defenders are frequently either on holiday, inebriated, or both. Third, the university named this section of the competition the “public network-target, actual combat attack competition.”48 Finally, none of the organizations sponsoring the competition are known for offering cyber range technologies. Most competitions identified either Beijing Integrity Tech or Cyber Peace as their range provider. The Zhujian Cup has no such provider, raising further questions about the nature of the competition.

We acknowledge that there is no conclusive proof that the Zhujian Cup prompted students to attack an actual intelligence target. However, we note that such a competition is likely not without precedent. Intrusion Truth tied APT40 to hacking competitions at Hainan University, showing they were used to find vulnerabilities and recruit students.49 Years later, the Financial Times reported that the Hainan MSS bureau had hired students to translate stolen documents.50 Separately, infrastructure associated with a competition at Southeast University overlaps with infrastructure from the hack of Anthem Insurance, and the timeline of the university’s competition aligned with attacks against a US defense industrial base company.51 The connections between that competition infrastructure and the timeline of the attempted hack create circumstantial evidence that the competition targeted the company. Finally, Northwestern Polytechnical University holds a Top Secret clearance, allowing it to undertake sensitive work.52

Translation:

2. In the course of executing an authorized attack, I commit: to launching attacks on authorized attack targets according to the rules regarding constraints and without affecting service availability, such that the attacks will not have a destructive effect on target systems; and to assisting in deleting and removing the acquired data, and deleting and removing backdoor programs uploaded to the target system, and will not privately retain any data or backdoors.

3. I commit not to reproduce (or photocopy) without permission any materials, documents and electronic data, disk or paper media files, images, video materials, etc. used for completing assigned tasks and, if doing so is truly necessary for the work, to ask the competition organizer for approval.

4. I commit to properly storing the materials, documents and electronic data, disk or paper media files, images, video materials, etc. provided by the organizer for completing the assigned tasks, so as to prevent loss and theft.

5. I commit not to publicize or report the content of the tasks in any way within the unit or externally, and not to disseminate to third parties in any way the materials, documents and electronic data, disk or paper media files, images, video materials, etc. provided by the organizer for completing the assigned tasks.

6. I promise that, if the organizer imposes corrective requirements for matters of non-compliance with the terms of confidentiality, I will promptly conduct self-inspection and self-correction and cooperate in any investigation and penalties without delay or concealment.

7. I commit not to disclose or exploit data related to critical information infrastructure and system vulnerabilities discovered during the competition, and not to provide or publish externally any vulnerabilities or any information from the competition process.

8. I commit to promptly reporting to the organizer any loss or theft, regardless of the cause, of information provided by the organizer or of information generated by myself or jointly with others, and will actively cooperate with the organizer and local public security authorities in their investigation.

9. I promise that if any information disclosure incident (or case) occurs due to personal reasons, causing loss or harm to the organizer and the country, I, as an individual, will bear legal responsibility in accordance with the relevant laws and regulations.

CTFWar (国际网络安全攻防对抗联赛)

CTFWar is both a cybersecurity competition and a website with practice material and user profiles that likely enable talent spotting by the Ministry of State Security. Since 2021, CTFWar has been hosted under the guidance of the MSS’s own National Cybersecurity Talent Cultivation Base (国家网络空间安全人才培养基地), an institution formed by the Ministry of State Security’s 13th Bureau (CNITSEC, 中国信息安全测评中心) and Beijing University of Chemical Technology (北京化工大学).53 Beijing Hua Yun (Vul.AI, 华云信安(深圳)科技有限公司), a Tier 1 supplier of vulnerabilities to the MSS, helps administer CTFWar. The competition hosts six regional competitions across China, which feed into the final competition.54 CTFWar participants are likely subject to constraints regarding discussing their participation or the event online. As with the Zhujian Cup above, there are no write-ups available about the content of the competition, though we did not find any nondisclosure agreements (NDAs) online.

The CTFWar website has a plethora of resources for training and practicing common cybersecurity skills, vulnerability discovery, and attack and defense. Registered users of the training platform maintain their own profiles with competition history, learned skills, and points earned along the way (see below). For any government administrators sitting on the other side of the website, a list of high-performing users could serve as a talent-recruitment pool.

Figure 6: The loading page of a blank profile.

Source: CTFWar.

Figure 7: CTFWar provides its users learning opportunities and content across six categories (web, pwn, mobile, crypto, reverse, and miscellaneous) and organizes that content by difficulty.

Source: CTFWar.

Figure 8: This screenshot shows the “vulnerability discovery range.” The range’s content includes classes and demonstrations organized by product, difficulty, and vulnerability types. Here users can practice identifying known vulnerabilities (n-days) that have been reintroduced to products.

Source: CTFWar.

But CTFWar is not the National Cybersecurity Talent Cultivation Base’s only focus. The base, established in 2019, has facilitated at least three other one-time competitions, partnered with several universities for training, and participated in forums on cybersecurity and talent cultivation.55 Elsewhere, the base focuses much of its efforts on the promotion of, and the testing to qualify for, a number of cybersecurity certificates issued by CNITSEC.56 A handful of universities partner with the base to serve as a talent transport centers (人才输送中心), which suggests the university is certified to administer official tests for those certifications.57

Wangding Cup (网鼎杯)

The Ministry of Public Security, China’s domestic security and intelligence agency, uses the Wangding Cup to identify potential recruits for its Cybersecurity Thousand Talents Program (网络安全千人计划).58 The talent program signals to potential employers that the recruit is exceptional, and likely provides direct financial benefits and indirect awards, though the award program’s benefits are not public.

Hosted by the State Cryptographic Administration (国家密码管理局) and the National Cyber and Information Security Information Bulletin Center (国家网络与信息安全信息通报中心), an organization subordinate to the MPS, the competition drew more than fifty thousand participants in a single year at peak participation.59 Based on our data, the Wangding Cup is the largest annual competition in China. Teams from across society participate. A list of 2020 competition winners shows teams affiliated with universities, critical-infrastructure operators, MPS offices, and private-sector firms among its champions.60

Figure 9: Ministry of Public Security Periodical on the Wangding Cup, stating competition types include vulnerability discovery, exploitation, and patching, as well as integrating artificial intelligence for its Robot Hacking Games competition.

Source: “CTFer Homepage”, 红蓝信安网络, July 30, 2024, https://archive.ph/kZjxw.

Besides typical competition structure, the Wangding Cup also integrates a Robot Hacking Game competition. This competition type, modeled off the Defense Advanced Research Projects Agency’s Cyber Grand Challenge in the United States, tests competitors on their ability to utilize artificial intelligence (AI) for vulnerability discovery and exploitation.61 Its inclusion demonstrates China’s commitment to developing and deploying such tools.

Qiang Wang Cup (强网杯)

The Qiang Wang Cup is co-hosted by the PLA Information Engineering University and receives support from China’s World-Class Cybersecurity Schools—a group of universities recognized by the government for their excellence in cybersecurity education.62 If a comparable event were to take place in the United States, it would include all the service branches’ cyber training academies, like US Air Force Cybersecurity University, and the universities certified as Centers of Academic Excellence (CAE) in Cyber Operations by the jointly run National Security Agency and Department of Homeland Security CAE program. Despite the proximity between the PLA and these elite universities, we did not find evidence of formal recruitment programs seeking participants. However, it is unlikely the military would forgo such an opportunity.

The Qiang Wang Cup has an online qualifying round, an in-person round, and an elite round. The cup’s elite round pays out more than most other competitions in our dataset. After qualifying for the elite round, teams compete to successfully “answer” (相应) challenges (赛题) for a number of “fields of study” (科目).63 Each challenge is worth somewhere between 20,000 and 200,000 renminbi (RMB).64 Such payouts are typically offered at exploit competitions, in which competitors burn valuable vulnerabilities for cash. We were unable to find details about what constituted a challenge for the elite competition round.

The teams qualifying for the finals each year are a diverse set of critical-infrastructure operators, private-sector companies, or top-tier universities. Military institutions are also frequently listed among the thirty-two teams qualifying to participate in the finals each year, such as the PLA’s National University of Defense Technology.65

Qiang Wang International Elite Competition for Cyber Mimic Defense (“强网”拟态防御国际精英挑战赛)

The Competition for Cyber Mimic Defense is a PLA-run competition that supports the development of cyber mimic technology, a state priority identified by past development plans.66 Competitions are an effective way to jumpstart research into a specific technology, and this competition betrays significant interest in the technology by the PLA. Although the competition name also includes Qiang Wang, this competition is held separate from the competition discussed above.

Wu Jiangxing, a Chinese Academy of Engineering member and father of cyber mimic defense in China, is the progenitor of this competition.67 Cyber mimic defense is a technique used by defenders to introduce randomness and redundancy into the architecture of computer systems. Randomness creates a lack of predictability that forces attackers to spend more time and resources, while redundancy enables the detection of attacks. The technique is uncommon in commercial cyber-defense products and is hard to execute well.68 Since 2018, the PLA’s Purple Mountain Laboratory, the Chinese Academy of Engineering, and the Nanjing government have hosted the Competition for Cyber Mimic Defense every year.69 Both Purple Mountain Lab and the Chinese Academy of Engineering would be able to act on innovations brought to the competition.

The development of cyber mimic defense technology is a priority for China. A publication by the State Key Laboratory for Information Security credits Wu with positing the possibilities offered by cyber mimic defense in 2008.70 That same publication states the 863 Development Plan—a high-tech development plan—began prioritizing the technology in 2016, two years before the start of the competition. Governments frequently use competitions to stimulate innovation of a technology’s development; the Competition for Cyber Mimic Defense falls into this camp. Participants are not only directly demonstrating advances in techniques to the competition’s organizers—including the PLA’s Purple Mountain Laboratory—but are also forming social relations that last after the competition concludes. These relationships are often the most important intangible boosters of scientific research.

The competition’s first and second years saw the most international participation—attracting teams from Japan, Russia, Ukraine, and the United States.71 Since then, foreign participation—or qualification—has seemingly dropped. Although the name indicates the competition is open to international participants, recent lists of the competition’s top performers include only Chinese teams.72

RealWorldCTF

RealWorldCTF draws foreign hackers into China and has facilitated at least one initial contact by Chinese security services to an American attendee. That person’s story, recounted below, indicates that competition staff was involved in the pitch. We find it likely that such pitches continue today. Real World CTF occurs annually, continues to draw foreign participants, and is sponsored by Beijing Chaitin—a company that is a Tier 2 Technical Support Unit of the MSS.73

After the first day of the competition, Matt and his team had been chatting with a team from the UK. With different interests in dinner between the eight of them, four decided to head downtown for dinner, and the remaining two UK players, Matt, and another American stayed near the hotel instead. They walked as a group towards a shopping center near the conference center. Because the subway stop at the facilities had not been built yet, the area was only populated with conference attendees and people working at the shops.

Shortly after sitting down for hotpot, Matt noticed two women from the conference walk past the front of the restaurant. “I recognized one of them because she was involved in the Real World CTF organization. She had been walking around with a microphone and directing people.” The women ended up eating dinner in the same restaurant in the back corner of the room. When Matt and his crew tried to pay their bill for dinner, a cook came out from the kitchen and told the four guys the meal had already been paid for, gesturing towards the women in the back of the seating area.

On cue, the women came over to the table and tried to chat up the four attendees. Their questions were the kind of short and stilted questions that someone repeating rehearsed sentences in a second language might say after a short period of study. After a few awkward moments and seemingly without good reason, the women offered Matt a torn scrap of notebook paper. On it, there were the words “phone number, telegram, whatsapp, and email,” each with a line next to it—the women, whoever they were, clearly hoped to stay in touch. Matt jotted down a Google VOIP phone number he had created and nothing else. The women thanked the four and were on their way.

After the competition and back home in the United States, Matt checked his VOIP number on his laptop. The phone number had received an MMS image file. He never opened it and deleted the number from his Google account.

We don’t know how many other attendees received free meals or had their contact information collected that year or in the years since.

GeekCon (previously GeekPwn)

If RealWorldCTF draws foreigners into China for hacking competitions, GeekCon goes abroad to seek them out. Created by members of the elite Keen Team, GeekCon was originally hosted as GeekPwn within China from 2014 to 2021.74

The competition inside China hosted both conference talks and 0day demonstrations. Instead of creating a list of target systems as is standard at exploit competitions like Pwn2Own and Tianfu Cup, GeekPwn invited applicants to submit their own 0days of interest. A panel then picked the most compelling submissions. An archived version of the GeekPwn website from 2021 states that, in order for a GeekPwn vulnerability contest winner to receive their award money, the participant must provide a detailed explanation of the “security issue and technical methods” associated with the vulnerability.75 The organizers go on to state that they respect individuals’ privacy and will not expose the vulnerability information to the outside world. Effectively, the organizers would have the information all to themselves for the cost of the award. A 2014 post on Twitter shows the organization offering $100,000 for pwning a Tesla.76

After a year off from competitions in 2022, GeekCon was reestablished in Singapore in 2023.77 With the same structure as inside China, GeekCon invites conference talk submissions, as well as 0day submissions.78

GeekCon’s website suggests the competition, despite operating abroad, abides by Chinese regulations. The sixth rule for the event states, “As the top 1 security geek IP operator in China, we always advocate a reward mechanism that emphasizes both honor and moderate bounty.”79 Providing a “moderate bounty” is exactly what is required by the Cyberspace Administration of China’s 2018 Notice on CTFs (see the background section above).80

We do not know if GeekCon submits the vulnerabilities of its participants into the mandatory reporting system run by the MIIT and introduced in 2021. Under a strict reading of the law, the competition’s participants are required to tell the company whose product is vulnerable, and no one else, until a patch is made public. In practice, some members of the cybersecurity community in China report into the new system out of fear. Additionally, if GeekCon attendees from China are applying to the MPS to leave China for the competition, they might already be providing their GeekCon submissions to the government. Darknavy, the company behind GeekCon, claims to have facilitated the responsible disclosure of more than one thousand severe software vulnerabilities.81 It is entirely possible that GeekCon is hosted overseas to escape China’s claim to the software vulnerabilities disclosed at its competition. The year reprieve from hosting the competition occurred just after the new regulations on software vulnerabilities went into effect. GeekCon did not respond to the authors’ request for comment to clarify its process. Still, the competition’s emphasis on “moderate bounty” suggests the organizers are a rule-following bunch.

XCTF League (XCTF国际网络攻防联赛)

Tsinghua University’s Blue Lotus team founded the XCTF League in 2014, one year after it made history as the first Chinese team to reach the finals of DEF CON CTF.82 XCTF was the first Attack-Defense CTF competition established in China and has since grown into the largest CTF league in the country. According to XCTF’s own website, it is the largest contest brand in Asia and the second globally, likely referring to DEF CON CTF as the first.83 As such, it serves as a crucial platform for identifying and nurturing high-end cybersecurity talent.

The competition is now organized annually by Cyber Peace, a company founded in 2013, and is co-organized by the Blue Lotus team and the China Institute for Innovation and Development Strategy (CIIDS, 国家创新与发展战略研究会).84 Although CIIDS presents itself as a non-profit organization, it maintains close ties to the MSS and is led by former MSS officers who previously handled front organizations and influence campaigns, as outlined by Alex Joske in Spies and Lies.

The MSS 12th Bureau, also known as the Social Investigations Bureau, is responsible for influence operations against individuals and facilitating elite capture. In short, the 12th Bureau aims to shape the views of thought leaders, business executives, and politicians in foreign countries. As a cover organization, CIIDS sponsorship of XCTF is aligned with its attendant work under the Chinese Academy of Sciences, rather than its association with the MSS.

The XCTF League operates more like a tournament than a single event, consisting of a series of selection rounds that span several months leading up to the final stage.85 The qualification stage includes multiple regional Jeopardy-style competitions that vary annually. These partner competitions are typically organized by university hacking teams and private research laboratories across the country. Between 2014 and 2023, approximately five to ten regional competitions annually have served as XCTF qualifiers, including prestigious events like 0CTF/TCTF. The XCTF finals adopt an Attack-Defense format, in which participants compete for the championship, runner-up, third place, and other prizes. In the 2023 XCTF edition, the rewards amounted to nearly $14,000 for first place, $7,000 for second place, and $3,500 for third place—amounts higher than those of other domestic CTF competitions.86

Although primarily focused on China, XCTF has a strong international reach. Since 2019, its qualifying contests have included the CyBRICS and BRICS+CTF contests, organized by Russian CTF teams and aimed specifically at participants from BRICS (Brazil, Russia, India, China, and South Africa) countries.87 In 2018, it partnered with Hack in the Box (HITB) to host XCTF international iterations in cities such as Dubai and Singapore.88 As in previous editions, the 2024 XCTF qualifying rounds attracted a massive number of participants: 2,987 teams consisting of more than eleven thousand contestants, with twenty-four domestic and international teams qualifying for the finals, according to a write-up by China’s National University of Defense Technology.89

The Matrix Cup (矩阵杯网络安全大赛)

The Matrix Cup is China’s newest exploit competition. The inaugural Matrix Cup, held in Qingdao, Shandong Province from May to June 2024, was organized by 360 Digital Security Group and Beijing Huayunan Information Technology Co. (VUL.AI).90 The event’s scope was significant, combining three main competition styles into five challenges: three vulnerability mining challenges, one Attack-Defense CTF, and one AI challenge.91 With a prize pool totaling $2.75 million, the Matrix Cup exceeded the rewards offered by both Pwn2Own’s $1.1 million (2024) and the Tianfu Cup’s $1.4 million (2023).92 More than 90 percent of the prize money was allocated to vulnerability-related challenges, targeting both Western and Chinese systems.93

Unlike the Tianfu Cup, the Matrix Cup did not disclose its targets and results. However, SecurityWeek reported a list of targets more than a month before the event.94 These included Windows, Linux, and macOS operating systems; Google Pixel, iPhone, and Chinese smartphone brands; and networking devices. A post-competition write-up by 360 Digital Security revealed that around one hundred vulnerabilities were discovered, with significant findings in virtualization platforms and mobile operating systems.95 Teams from Tsinghua and Zhejiang Universities excelled, with Tsinghua students securing top rankings in all five challenges, as highlighted in a university blog post.96

The high rankings of university teams and the strong presence of Generation Z participants highlighted strong participation from academia over industry.97 Zhou Hongyi, founder of 360 Group, highlighted that the Matrix Cup focuses on talent development, aiming to discover, train, and recruit top cybersecurity talent.98

Figure 10: Final ranking of the general products contest.

Source: “3000名黑客巅峰较量,100+漏洞震撼突破!矩阵杯决赛落幕,” Qihoo360, July 1, 2024,
https://web.archive.org/web/20240708061854/https:/360.net/about/news/article66836ac56ddf08001f91a723#menu.

Vulnerabilities discovered during the Matrix Cup were likely reported to the MSS. Chinese hacking competitions, such as the Tianfu Cup, have been linked to state-sponsored activities in the past.99 In addition, the event’s organizers, 360 Digital Security Group and VUL.AI, are Tier 1 MSS vulnerabilities suppliers, while co-organizers Cyber Peace and Beijing Integrity Tech are Tier 2 suppliers.100

The Xiangyun Cup (祥云杯)

The Xiangyun Cup warrants additional scrutiny. Jilin provincial government and the local MSS13th Bureau (中国信息安全测评中心吉林分中心) co-sponsor the competition.101 Started in 2020, the Xiangyun Cup has two tracks—one for the public and one for college students. The public CTF track provides significantly above-average payouts for winners—grand champions take home 200,000 RMB (approximately $28,000) for the team.102 Most competitions we observed pay far less, typically around 50,000–100,000 RMB (approximately $7,000–14,000). The award amount even exceeds the 2023 Tianfu Cup bounty for local privilege escalation vulnerabilities on Windows 11 (150,000 RMB, or approximately $21,000).103 Besides flashy graphics and coverage by the Jilin provincial education department, there is little information available about the competition that would indicate why its award is above average.104. The local government may be flush with cash, unaware of the going price of CTF championships, trying to attract high-end talent for political points, or attempting to boost attendance for other reasons.

Conclusion

Following its 2018 policy document on hacking competitions, China’s CTF ecosystem flourished. Our data from the ten most popular competitions show more than one hundred thousand instances of participation annually. Further research is needed to quantify the size of the ecosystem outside China. We suspect China outstrips any individual country in annual participation, but might find itself on par with the United States and its treaty allies if numbers are evaluated together. Regardless of how the rest of the world fares, it is clear that China’s policymakers and ministries have successfully built a comprehensive system for hacking competitions.

CTF contests serve as powerful tools to identify and recruit top cybersecurity talent, enhance skill development, and drive innovation. Key CTF contests analyzed in this report, including the Information Security Ironman Triathlon, CTFWar, Qiang Wang Cup, and Wangding Cup, are affiliated with the MSS, PLA, and the MPS, serving as crucial talent-recruitment pipelines for security services. RealWorldCTF and GeekCon facilitate China’s interaction with foreign hackers—RealWorldCTF brings them to China and GeekCon meets them abroad in Singapore. As China’s largest and most prestigious CTF contest, the XCTF League stands out as the country’s premier talent-development platform, with multiple qualification rounds held nationwide throughout the year. Other competitions, such as X-NUCA, specifically rally college students to practice their skills. Elsewhere, the Qiang Wang International Elite Competition demonstrates how contests can foster the development of cutting-edge technologies like cyber mimic defense.

China’s system should inspire policymakers outside the country to catch up. A diverse group of ministries and private companies host hacking competitions. The result is a cohort of potential hires across economic sectors who have regular, hands-on practice with some key tenets of cybersecurity. Some competitions are purposefully designed to improve coordination between academic curricula and the private sector (e.g., Infosec Ironman Triathlon), while others simply allow college students to hone their skills outside the classroom.

Sector-specific contests, like those focusing on healthcare, offer a compelling example. Companies facing common threat actors and operating under a single regulatory regime may benefit from closer cooperation and communication among defenders. Relationships with other line-level defenders across industry will likely yield better cooperation than conferences for managers and C-suite of the same industry. Some Information Sharing and Analysis Centers (ISACs) in the United States have already started hosting such competitions; others should follow suit.

We hope our report serves as the first of many delving into competitions in China. The data we made available with this report should enable follow-on research. Specifically, future studies could examine the various types of CTF competitions in greater detail, including their organization and structure, and explore how these formats might be adapted to different national contexts. Furthermore, innovation-focused competitions—those in which students present ideas for new technology or businesses—fell outside the scope of our review; these should be studied for their impact on the development of China’s cybersecurity industry. China spent the five years from 2014 to 2019 recreating the US cybersecurity education system in the hopes of copying its success. Now, we think it is time for the United States to take a page from China’s playbook.

Key recommendations

The following recommendations include ways for other countries to draw on China’s CTF ecosystem to enhance talent identification and development, as well as to implement security measures to protect against the potential threats highlighted in this report.

1. Policymakers should promote the integration of CTF contests into academic curricula, as they effectively assess practical skills through rankings, prizes, and specialization. This approach can also align education with industry needs and strengthen the capabilities of the cybersecurity workforce. In the United States, this could be achieved by changing the criteria for the US Centers of Academic Excellence in Cyber Defense to include participation in CTFs or similar practical modules.

2. National critical infrastructure agencies should host CTFs for their sector. This approach would enhance sector-specific defenses and improve capabilities. This sector-specific approach would yield tangible benefits by way of hands-on skills and familiarity with tooling. Intangible benefits would result, too. Competition between companies in the same sector would help chief information security officers benchmark their security teams against one another, improve relationships between defenders in the same sector, and increase interaction between regulators and their sector. In the United States, sector risk-management agencies should host CTFs.105

Appendix A

Hacking Contests in China by Year (2004–2023)

2004

  • 第1届(2004)

2005

  • 第2届(2005)信息安全与对抗技术竞赛(ISCC2005)

2006

  • 第3届(2006)信息安全与对抗技术竞赛(ISCC2006)

2007

  • 第4届(2007)信息安全与对抗技术竞赛(ISCC2007)

2008

  • 第5届(2008)信息安全与对抗技术竞赛(ISCC2008)

2009

  • 第6届(2009)信息安全与对抗技术竞赛(ISCC2009)

2010

  • 第7届(2010)信息安全与对抗技术竞赛(ISCC2010)
  • 蓝桥杯 2010

2011

  • 第8届(2011)信息安全与对抗技术竞赛(ISCC2011)
  • 蓝桥杯 2011

2012

  • 第9届(2012)信息安全与对抗技术竞赛(ISCC2012)
  • NCTF 2012—南京邮电大学第七届网络安全竞赛
  • 蓝桥杯 2012

2013

  • 第10届(2013)信息安全与对抗技术竞赛(ISCC2013)
  • NCTF 2013—南京邮电大学第七届网络安全竞赛
  • 蓝桥杯 2013

2014

  • 首届 XCTF国际网络攻防联赛
  • BCTF
  • SCTF 2014
  • HCTF 2014
  • SCTF 2014全国赛
  • 第11届(2014)信息安全与对抗技术竞赛(ISCC2014)
  • 全国高校移动互联网应用开发创新大赛 2014
  • NCTF 2014—南京邮电大学第七届网络安全竞赛
  • 湖湘杯 2014
  • 蓝桥杯 2014
  • 问鼎杯 2014

2015

  • 首届 XCTF国际网络攻防联赛
  • ACTF 2015
  • 0CTF 2015 Finals
  • BCTF 2015
  • RCTF 2015
  • HCTF 2015
  • RCTF 2015
  • 第12届(2015)信息安全与对抗技术竞赛(ISCC2015)
  • 全国工控系统信息安全攻防竞赛 2015
  • 全国网络空间安全技术大赛 2015
  • TSCTF2015“京东安全杯”第四届 北京邮电大学信息网络安全技术挑战赛
  • NCTF 2015—南京邮电大学第七届网络安全竞赛
  • 湖湘杯 2015
  • 蓝桥杯 2015
  • 问鼎杯 2015

2016

  • 0CTF 2016 Finals
  • SSCTF 2016
  • ZCTF 2016
  • BCTF
  • HCTF 2016
  • SCTF 2016全国赛
  • 第四届XCTF联赛揭幕战武汉
  • 首届XMan选拔赛
  • 2016京津冀大学生网络安全技能挑战赛
  • 2016京津冀大学生网络安全技能挑战赛
  • 第13届(2016)信息安全与对抗技术竞赛(ISCC2016)
  • 2017首届全国密码技术竞赛决赛
  • 2016首届全国密码技术竞赛决赛
  • 全国工控系统信息安全攻防竞赛 2016
  • 全国网络空间安全技术大赛 2016
  • 全国高校移动互联网应用开发创新大赛 2015
  • 全国高校移动互联网应用开发创新大赛 2016
  • TSCTF2016“京东安全杯”第四届 北京邮电大学信息网络安全技术挑战赛
  • NCTF 2016—南京邮电大学第七届网络安全竞赛
  • 湖湘杯 2016
  • 蓝桥杯 2016
  • 第三届“问鼎杯”全国大学生网络信息安全与保密技能大赛
  • ALICTF

2017

  • 第三届 XCTF国际网络攻防联赛
  • 2017高校网络信息安全管理运维挑战赛
  • 360春秋杯 国际网络安全挑战赛
  • 360漏洞破解赛&信息安全训练营
  • DDCTF-2017高校闯关赛
  • GEEKPWN1024嘉年华上海站工业控制系统CTF决赛
  • HITB GSEC CTF 2017
  • 2017第七届 HECTF信息安全挑战赛
  • ISW 2017 内网安全实战演习
  • LCTF 2017
  • NJCTF2017
  • SSCTF2017
  • WCTF2017
  • 2017全国高校网安联赛X-NUCA 总决赛
  • 0CTF 2017 Finals
  • BCTF
  • 第二届ZCTF比赛
  • 第三届XCTF联赛NJCTF南京站比赛
  • HCTF 2017
  • 第四届XCTF联赛揭幕战武汉
  • RCTF 2017 国际赛
  • 第二届XMan选拔赛
  • 第三届上海市大学生网络安全大赛
  • 信息安全铁人三项赛赛季总决赛
  • 第14届(2017)全国大学生信息安全与对抗技术竞赛(ISCC2017)
  • 2017全国大学生软件测试大赛
  • 全国工控系统信息安全攻防竞赛 2017
  • 全国网络空间安全技术大赛 2017
  • 第四届全国高校移动互联网应用开发创新大赛-信息安全赛
  • 2017首届全球华人网络安全技能大赛 北京总决赛
  • TSCTF2017“京东安全杯”第四届 北京邮电大学信息网络安全技术挑战赛
  • NCTF 2017—南京邮电大学第七届网络安全竞赛
  • 首届国际机器人网络安全大赛
  • 安恒杯12月线上CTF
  • 第六届山东大学生网络安全技能大赛决赛
  • 首届山西大学生信息安全大赛
  • 工业信息安全技能大赛锦标赛 2017
  • 第二届“强网杯”全国网络安全挑战赛-线上赛
  • 首届港澳地区大专联校网络安全竞赛-决赛
  • 湖湘杯 2017
  • “湖湘杯”网络安全技能大赛
  • 第一届“百度杯”信息安全攻防总决赛
  • 第三届“百越杯”福建省高校网络空间安全大赛
  • 广东省红帽杯网络安全攻防大赛
  • 蓝帽杯
  • 蓝桥杯 2017
  • 2017年“问鼎杯”大学生网络信息安全与保密技能大赛决赛
  • 2017年陕西省网络安全管理员职业技能大赛-决赛

2018

  • 第四届 XCTF国际网络攻防联赛
  • 首届强网”拟态防御国际精英挑战赛
  • “360 企业安全春秋杯”网络安全技术大赛-线下赛
  • DDCTF 2018
  • GEEPWN 2018
  • XCTF Finals 2018-HITB Beijing
  • HITB-XCTF GSEC CTF 2018 Singapore
  • HITB-XCTF DUBAICTF/BCTF 2018
  • 2018第七届 HECTF信息安全挑战赛
  • N1CTF国际赛
  • OGeek 2018
  • Real World CTF 1st
  • XCTF分站赛—SCTF
  • WCTF2018
  • 2018全国高校网安联赛(X-NUCA’18)线上专题赛
  • 0CTF/TCTF 2018 Finals
  • BCTF
  • HCTF 2018
  • SCTF 2018
  • SUCTF 2018
  • N1CTF 2018
  • RCTF 2018 国际赛
  • Hack-in-the-Box Dubai
  • 第三届XMan选拔赛
  • 2018世界智能驾驶挑战赛(WIDC)——信息安全组汽车破解挑战赛
  • 2018年全国大学生网络安全邀请赛暨 第四届上海市大学生网络安全大赛——东华杯
  • 2018 年全国大学生网络安全邀请赛暨 第四届上海市大学生网络安全大赛——东华杯
  • 中国科学技术大学第五届信息安全大赛
  • 2018中国网络安全技术对抗赛——阿里安全攻防对抗赛
  • 第15届(2018)全国大学生信息安全与对抗技术竞赛(ISCC2018)
  • 第15届(2018)全国大学生信息安全与对抗技术竞赛(ISCC2018)
  • 2018首届全国密码技术竞赛决赛
  • 全国工控系统信息安全攻防竞赛 2018
  • 2018年第四届全国网络空间安全技术大赛——线下决赛
  • TSCTF2018“京东安全杯”第四届 北京邮电大学信息网络安全技术挑战赛
  • 2018年“北邮网安杯”首届全国中学生网络安全技术大赛 线下赛
  • NCTF 2018—南京邮电大学第七届网络安全竞赛
  • “天府杯”国际网络安全大赛 2018
  • 2018年3月安恒杯线上赛
  • 2018安恒杯1月线上赛
  • 网络安全技能挑战赛暨自主可控安全共测大赛
  • 工业信息安全技能大赛锦标赛 2018
  • 第二届“强网杯”全国网络安全挑战赛-线下赛
  • “护网杯”2018年网络安全防护赛
  • 2018·春秋圣诞欢乐赛
  • 首届浙江省大学生网络与信息安全竞赛
  • 湖湘杯 2018
  • 2018“湖湘杯”网络安全技能大赛
  • 第四届“百越杯”福建省高校网络空间安全大赛(决赛)
  • 2018“中国梦•劳动美”福建金融系统网络安全技能竞赛
  • 第二届红帽杯网络安全攻防大赛-线下赛
  • 网鼎杯——线下半决赛、总决赛
  • 蓝帽杯
  • 蓝桥杯 2018
  • 2018年陕西省网络安全管理员职业技能大赛-决赛
  • *CTF 2018国际赛

2019

  • 第五届 XCTF国际网络攻防联赛
  • 第二届“强网”拟态防御国际精英挑战赛
  • 第三届DDCTF高校闯关赛
  • GEEPWN 2019
  • 2019第七届 HECTF信息安全挑战赛
  • OGeek网络安全挑战赛
  • WCTF2019
  • X-NUCA’ 2019线上专题赛
  • 0CTF/TCTF 2019 Finals
  • N1CTF 2019
  • De1CTF2019国际赛
  • SCTF 2019全国赛
  • RCTF 2019 国际赛
  • SUCTF 2019全国赛
  • 第四届XMan选拔赛
  • 第四届中国创新挑战赛暨中关村第三届新兴领域专题赛网络与信息安全专项赛-线下决赛
  • 第16届(2019)信息安全与对抗技术竞赛(ISCC2019)
  • 2019首届全国密码技术竞赛决赛
  • 全国工控系统信息安全攻防竞赛 2019
  • “天府杯”国际网络安全大赛 2019
  • 首届字节跳动“安全范儿”高校挑战赛-ByteCTF
  • 2019巅峰极客网络安全技能挑战赛暨城市靶场应急响应大赛
  • 工业信息安全技能大赛锦标赛 2019
  • 第三届强网杯全国网络安全挑战赛精英赛
  • 第三届强网杯全国网络安全挑战赛人工智能挑战赛
  • 第二届浙江省大学生网络与信息安全竞赛
  • 湖湘杯 2018
  • “百越杯”第五届福建省高校网络空间安全大赛
  • 2019“神盾杯”上海市网络安全竞赛
  • 2019第五空间网络安全大赛 – CTF比赛决赛
  • 第三届红帽杯网络安全攻防大赛在广州启动
  • 蓝帽杯
  • 蓝桥杯 2019
  • 2019年陕西省网络安全管理员职业技能大赛-决赛
  • Roar CTF 2019
  • D^3CTF2019
  • *CTF 2019国际赛

2020

  • 第三届强网”拟态防御国际精英挑战赛
  • DDCTF 2020
  • GEEPWN 2020
  • 2020第七届 HECTF信息安全挑战赛
  • Real World CTF 2nd
  • 2020年全国高校网安联赛暨中国科学院第一届网络安全运维大师赛(X-NUCA’2020)
  • 0CTF/TCTF 2020 Finals
  • N1CTF 2020
  • XCTF-CyBRICS 2020
  • SCTF 2020 国际赛
  • RCTF 2020 国际赛
  • GACTF2020
  • De1CTF2020国际赛
  • 2020大学生网络安全邀请赛暨第六届上海市大学生网络安全大赛
  • 2020全国卫生健康行业网络安全技能大赛(决赛)
  • 第17届(2020)信息安全与对抗技术竞赛(ISCC2020)
  • 2020首届全国密码技术竞赛决赛
  • 2020年全国工业互联网安全技术技能大赛
  • 全国工控系统信息安全攻防竞赛2020
  • 全国网络与信息安全管理职业技能大赛
  • 全国网络与信息安全管理职业技能大赛
  • “天府杯”国际网络安全大赛 2020
  • 字节跳动“安全范儿”高校挑战赛(决赛)
  • 第二届字节跳动“安全范儿”高校挑战赛-ByteCTF
  • 工业信息安全技能大赛锦标赛 2020
  • 第四届“强网杯”全国网络安全大赛青少年专项赛
  • 2020数字中国创新大赛虎符网络安全赛道
  • 2020年新华三杯高校网络安全竞技大赛
  • 2020年春秋杯新春战“疫”——网络安全公益赛
  • 第三届浙江省大学生网络与信息安全竞赛
  • 2020第五空间网络安全大赛 – CTF比赛决赛
  • 第二届“网鼎杯”网络安全大赛总决赛
  • 第四届“蓝帽杯”全国大学生网络安全技能大赛线上决赛
  • 蓝桥杯 2020
  • 金盾信安杯 2019
  • 金盾信安杯 2020
  • 2020年陕西省网络安全管理员职业技能大赛-决赛
  • 高校战“疫”网络安全分享赛
  • WMCTF 2020

2021

  • 第七届XCTF国际网络攻防联赛
  • 第四届“强网”拟态防御国际精英挑战赛
  • 国际网络安全攻防对抗联赛
  • GEEKPWN 2021
  • 2021第七届 HECTF信息安全挑战赛
  • 2022首届ISCTF联合新生赛
  • ISCTF 2021
  • OGeek 2021
  • Real World CTF 3rd
  • SCTF 2021
  • WMCTF2021
  • 0CTF/TCTF 2021 Finals
  • N1CTF 2021
  • SCTF 2021
  • L3HCTF 2021
  • RCTF 2021
  • XCTF-CyBRICS 2021
  • 第五届XMan选拔赛
  • 2021第二届卫生健康行业网络安全技能大赛
  • 第18届(2021)信息安全与对抗技术竞赛(ISCC2021)
  • 2021第六届全国密码技术竞赛决赛
  • 兰州理工大学网络安全竞赛 2020
  • 兰州理工大学网络安全竞赛 2021
  • 北京大学信息安全综合能力竞赛 2021
  • “天府杯”国际网络安全大赛 2021
  • 第三届字节跳动“安全范儿”高校挑战赛-ByteCTF
  • 2021“巅峰极客”网络安全技能挑战赛
  • 工业信息安全技能大赛锦标赛 2021
  • 第五届“强网杯”全国网络安全挑战赛-青少年专项赛(实践赛)
  • 第五届“强网杯”全国网络安全挑战赛-青少年专项赛(创新赛)
  • 2021数字中国创新大赛虎符网络安全赛道
  • 2021年春秋杯网络安全联赛秋季赛
  • 2021春秋杯网络安全联赛春季赛
  • 2021年春秋杯新年欢乐赛
  • 首届极客少年挑战赛
  • 第四届浙江省大学生网络与信息安全竞赛
  • 第七届“湖湘杯”网络安全技能大赛(决赛)2021
  • 百度“AI的光”冬令营白帽黑客专项训练赛之春秋杯2021赛季
  • 2021第五空间网络安全大赛 – CTF比赛决赛
  • 第四届“红帽杯”网络安全大赛
  • 2021年数字中国创新大赛网络安全赛道(数据安全赛题)暨“红明谷”杯数据安全大赛
  • 第四届美团网络安全高校挑战赛(决赛)
  • 第三届美团网络安全挑战赛(决赛)
  • 第五届“蓝帽杯”全国大学生-网络安全技能大赛(决赛)
  • 蓝桥杯 2021
  • 金盾信安杯 2021
  • 第一届“长城杯”网络安全大赛
  • 首届“陇剑杯”网络安全大赛
  • 2021年陕西省网络安全管理员职业技能大赛-决赛
  • 2021年陕西省网络安全管理员职业技能大赛-决赛
  • 香山杯 2021
  • 第六届 XCTF国际网络攻防联赛
  • 中山市首届“香山杯”网络安全大赛(决赛)
  • 鹏城·中汽创智杯 2021
  • D^3CTF2021
  • *CTF 2021国际赛

2022

  • 第五届“强网”拟态防御国际精英挑战赛
  • 国际网络安全攻防对抗联赛
  • GEEKPWN 2022
  • 2022第七届 HECTF信息安全挑战赛
  • 2022第二届ISCTF联合新生赛
  • Mini XMan 线上快闪挑战赛
  • Real World CTF 4th
  • 西门极客挑战赛白帽黑客大赛
  • WMCTF2022
  • 0CTF/TCTF 2022
  • N1CTF 2022
  • SUSCTF 2022
  • ACTF 2022
  • RCTF 2022
  • 东华杯”2021年大学生网络安全邀请赛暨第七届上海市大学生网络安全大赛(决赛)
  • 第一届中国研究生网络安全创新大赛(决赛)
  • 第19届(2022)信息安全与对抗技术竞赛(ISCC2022)
  • 第七届全国工控系统信息安全攻防竞赛(决赛) 2022
  • 兰州理工大学网络安全竞赛 2022
  • “冀信2022”网络安全技能竞赛(决赛)
  • 北京大学信息安全综合能力竞赛 2022
  • 第四届字节跳动“安全范儿”高校挑战赛-ByteCTF
  • 2022“巅峰极客”网络安全技能挑战赛(决赛)
  • 工业信息安全技能大赛锦标赛 2022
  • 第六届“强网杯”全国网络安全挑战赛(决赛)
  • 2022数字中国创新大赛网络安全赛道-车联网安全赛(初赛)
  • 2022数字中国创新大赛-虎符网络安全赛道(决赛)
  • 2022年春秋杯冬季赛
  • 2022年春秋杯网络安全联赛-春季赛
  • 第五届浙江省大学生网络与信息安全竞赛
  • 2022第五空间网络安全大赛 – CTF比赛决赛
  • 第二届“红明谷”杯数据安全大赛-技能场景赛(决赛)
  • 第五届美团网络安全高校挑战赛(决赛)
  • 第六届“蓝帽杯”全国大学生网络安全技能大赛(半决赛)
  • 蓝桥杯 2022
  • 贵阳贵安2022年网络安全技能竞赛
  • 车联网(智能网联汽车)网络安全挑战赛
  • 2022年江门市“邑网杯”网络安全大赛
  • 金盾信安杯 2022
  • 第二届“长城杯”网络安全大赛(决赛)
  • 2016年陕西省网络安全管理员职业技能大赛-决赛
  • 香山杯 2022
  • 鹏城·中汽创智杯 2022
  • D^3CTF2022
  • *CTF 2022国际赛

2023

  • TPCTF
  • 第六届“强网”拟态防御国际精英挑战赛
  • GEEKCON 2023
  • 2023第七届 HECTF信息安全挑战赛
  • ISCTF2023 新生联合赛
  • 2022第二届ISCTF联合新生赛
  • Real World CTF 5th
  • SCTF 2023
  • WMCTF2023
  • 0CTF/TCTF 2023
  • N1CTF 2023
  • ACTF 2023
  • BRICS+ CTF 2023
  • SCTF 2023
  • 2023年江苏移动‘赋能建工’网络安全技能竞赛
  • Hackergame 2023
  • 2023年第四届卫生健康行业网络安全技能大赛
  • 2023第三届全国卫生健康行业网络安全技能大赛
  • 第20届(2023)信息安全与对抗技术竞赛(ISCC2023)
  • 2023第八届全国密码技术竞赛复赛通知
  •  2022第七届全国密码技术竞赛决赛
  • 2023山东省“技能兴鲁”职业技能大赛 – 线上赛
  • 兰州理工大学网络安全竞赛 2023
  • 第三届北京大学信息安全综合能力竞赛 2023
  • 商丘师范学院第三届网络安全及信息对抗大赛
  • “天府杯”国际网络安全大赛 2023
  • “天网杯”网络安全大赛
  • 山东电专第一届网络安全竞赛暨2022、2023级新生挑战赛
  • 2023“巅峰极客”网络安全技能挑战赛(决赛)
  • 工业信息安全技能大赛锦标赛 2023
  • 数字中国·数据安全产业人才能力挑战赛(决赛)
  • 首届数据安全大赛决赛
  • 2023-SICTF新生选拔赛
  • 2023新疆高校大学生信息安全大赛
  • 2023年春秋杯春季赛
  • 江河杯2023年东昌府区网络安全技能大赛
  • 波卡黑客松开发者大赛
  • 第六届浙江省大学生网络与信息安全竞赛
  • 第三届“祥云杯”网络安全大赛暨吉林省第五届大学生网络安全大赛(决赛)
  • 第三届“红明谷”杯网络安全技能场景赛(决赛)
  • 2022第三届“网鼎杯”网络安全大赛总决赛
  • 第十四届蓝桥杯大赛数字科技创新赛—网络安全春秋挑战赛(决赛)2023
  • 西湖论剑·2022中国杭州 网络安全技能大赛(决赛)
  • 邑网杯(决赛)
  • 河南省第五届“金盾信安杯”网络与数据安全大赛线下总决赛 2023
  • 全国大学生网络安全攻防竞赛
  • 第三届长城杯网络安全大赛暨京津冀高校网络安全技能竞赛(决赛)
  • 陇剑杯(决赛)
  • 2023年陕西省网络安全管理员职业技能大赛-决赛
  • 2022年陕西省网络安全管理员职业技能大赛-决赛
  • 2023中山市第三届“香山杯”网络安全大赛(决赛)
  •  联邦网络靶场协同攻防演练赛
  • 齐鲁师范学院QLNU22级网络安全考核赛
  • D^3CTF2023
  • *CTF 2023国际赛

About the authors

Eugenio Benincasa
Senior Researcher, Cyberdefense Project, Risk and Resilience Team
Center for Security Studies, ETH Zurich

Explore the program

The Global China Hub researches and devises allied solutions to the global challenges posed by China’s rise, leveraging and amplifying the Atlantic Council’s work on China across its sixteen programs and centers.

1    Dakota Cary and Eugenio Benincasa, “Capture the Red Flag Data,” Github, September 17, 2024, https://github.com/D14141414141414/References/blob/main/PRC%20CTFs%20for%20GitHub.xlsx.
2    “Capture the (red) flag: How hacking contests enhance China’s cyber capabilities,” Atlantic Council, October 3, 2024, https://www.atlanticcouncil.org/private-post/capture-the-red-flag-how-hacking-contests-enhance-chinas-cyber-capabilities/.
3    0days are vulnerabilities about which the software or hardware manufacturer is unaware and, thus, has not begun fixing. Once a vulnerability is known to the company, the vulnerability is named and transitions to an n-day. “N” is the common letter used in algebraic expressions to denote an integer and indicates that some days have passed since the company was made aware of the vulnerability.
4    William Wan, “Chinese President Xi Jinping Takes Charge of New Cyber Effort,” Washington Post, February 27, 2014, https://www.washingtonpost.com/world/chinese-president-takes-charge-of-new-cyber-effort/2014/02/27/a4bffaac-9fc9-11e3-b8d8-94577ff66b28_story.html.
5    Dakota Cary, “China’s Next Generation of Hackers Won’t Be Criminals—That’s a Problem,” TechCrunch, November 12, 2021, https://techcrunch.com/2021/11/12/chinas-next-generation-of-hackers-wont-be-criminals-thats-a-problem/.
6    Ibid.; “‘Internet+’ Artificial Intelligence Three-Year Action and Implementation Plan,” National Development and Reform Commission, Ministry of Science and Technology, Ministry of Industry and Information Technology, and Office of the Central Cyberspace Affairs Commission, May 18, 2016,
https://cset.georgetown.edu/publication/internet-artificial-intelligence-three-year-action-and-implementation-plan/; Dakota Cary, “China’s CyberAI Talent Pipeline,” Center for Security and Emerging Technology, July 2021, https://cset.georgetown.edu/publication/chinas-cyberai-talent-pipeline/.
7    “王星,“网络安全竞赛:建设明日的网络安全人才队伍,” 中国信息安全, February 2017, 88–91.
8    “Cybersecurity Competitions and Games,” National Initiative for Cybersecurity Careers and Studies (NICCS), last visited September 21, 2024, https://niccs.cisa.gov/cybersecurity-career-resources/cybersecurity-competitions-games.
9    “Notice on Regulating the Promotion of Cybersecurity Competitions,” Office of the Chinese Communist Party Central Cyberspace Affairs Commission and the PRC Ministry of Public Security, June 5, 2018, https://cset.georgetown.edu/publication/notice-on-regulating-the-promotion-of-cybersecurity-competitions/.
10    Dakota Cary and Kristin Del Rosso, “Sleight of Hand: How China Weaponizes Software Vulnerabilities,” Atlantic Council, September 6, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/.
11    2023国家网安周丨《网络安全人才实战能力白皮书-人才评价篇》重磅发 布,” 中国科学技术大学 (blog), September 15, 2023, https://archive.ph/Ywgxo.
12    Ibid.
13    Ibid.
14    “Notice on Regulating the Promotion of Cybersecurity Competitions.”
15    “2022 White Paper on the Live-Fire Capabilities of Cybersecurity Talents: Attack and Defense Live-Fire Capability Edition,” Ministry of Education Steering Committee on Instruction for Higher Education Cybersecurity Majors, September 13, 2022, 115, https://cset.georgetown.edu/publication/china-cyber-talent-white-paper-2022/.
16    Eugenio Benincasa, “From Vegas to Chengdu: Hacking Contests, Bug Bounties, and China’s Offensive Cyber Ecosystem,” Center for Security Studies and ETH Zürich, June 10, 2024, https://css.ethz.ch/content/dam/ethz/special-interest/gess/cis/center-for-securities-studies/pdfs/CyberDefenseReport_%20From%20Vegas%20to%20Chengdu.pdf.
17    “Organization Introduction,” China Institute for Innovation and Development Strategy, September 9, 2023, https://www.archive.ph/KoPaA; “Spies and Lies: How China’s Greatest Covert Operations Fooled the World,” Alex Joske, October 11, 2022, pg. 164-167.
18    DEF CON is one of the largest hacking conferences globally, held annually in Las Vegas, Nevada. Founded by Jeff Moss in 1992, DEF CON’s own CTF competition was first held in 1996.
19    Pwn2Own is a high-profile hacking competition, held annually in Vancouver, Canada, where security researchers attempt to exploit vulnerabilities in popular software and devices for cash prizes and recognition.
20    Benincasa, “From Vegas to Chengdu.”
21    Ibid.
22    “White Hack, Black Hat: Bringing Hackers Out of the Shadows,” Shanghai Observer, January 15, 2024, https://archive.ph/qv6L6.
23    Benincasa, “From Vegas to Chengdu.”
24    “医疗卫生, 2020年全国卫生健康行业网络安全技能大赛顺利收官,” Secrss, December 7, 2020, https://archive.ph/UqNyL.
25    “‘珞安杯’ 第七届全国工控系统信息安全攻防竞赛圆满举办,” Beijing Control Engineering Information Technology, December 19, 2022, https://archive.ph/GxHSi; “央视:全国工业互联网安全技术大赛落幕 顶象获 ’突出贡献奖’,” Dingxiang, November 17, 2020, https://archive.ph/vtR9y.
26    “Blue Cap Cup: Introduction of the Competition,” Qi An Xin, last visited August 20, 2024, https://archive.ph/t8t4v.
27    Joe Warminsky, “FBI Says It Recently Dismantled a Second Major China-Linked Botnet,” Record, September 18, 2024, https://therecord.media/fbi-dismantles-flax-typhoon-china-linked-botnet-wray-aspen.
28    “People’s Republic of China-Linked Actors Compromise Routers and IoT Devices for Botnet Operations,” US Department of Defense, September 18, 2024, https://media.defense.gov/2024/Sep/18/2003547016/-1/-1/0/CSA-PRC-LINKED-ACTORS-BOTNET.PDF.
29    “Competition Process,” Information Security Ironman Triathlon, last visited August 20, 2024, https://perma.cc/ZWE7-QA65.
30    Ibid.; 方言,刘春利, “‘信心安全铁人三项赛’ 探索校企结合人才培养模式,” 中国信息安全,October 2017, 71–72.
31    Xiao Xi, “2017–2018全国高校“西普杯”信息安全铁人三项赛第一赛区4月21日开赛,” Information Security Ironman Triathlon, April 11, 2018, https://perma.cc/TX4U-4KPR; “2021 Competition Homepage,” Information Security Ironman Triathlon, last visited March 31, 2021, https://perma.cc/QEY4-WRT2; “2024 Competition Homepage,” Information Security Ironman Triathlon, last visited July 11, 2024, https://perma.cc/L97G-TERQ; “2020 College Participant List from the 4th Sector,” Information Security Ironman Triathlon, last visited July 11, 2024, https://perma.cc/6ANM-PEXN.
32    “2019–2020 Information Security Ironman Triathlon Knowledge System Synopsis,” Information Security Ironman Triathlon, December 26, 2019, https://perma.cc/2KF6-9F8D; 方言,刘春利,“‘信心安全铁人三项赛’ 探索校企结合人才培养模式,” 中国信息安全,October 2017, 71–72.
33    “方言,刘春利,”’信心安全铁人三项赛’ 探索校企结合人才培养模式”, 中国信息安全.”
34    “Organization Structure,” Information Security Ironman Triathlon, last visited March 31, 2021, https://perma.cc/8Y8Y-XSKX; “Competition Process,” Information Security Ironman Triathlon, last visited August 20, 2024, https://perma.cc/ZWE7-QA65.
35    “方言,刘春利,”’信心安全铁人三项赛’ 探索校企结合人才培养模式”, 中国信息安全.”
36    “Competition Process.”
37    “孙颖, “全国首批仅5个!北京海淀成国家网络安全教育技术产业融合发展试验区,” 北京日报, September 5, 2022, https://perma.cc/W2JL-BP2J; 张强, 刘欢, “首批国家网络安全教育技术产业融合发展试验区授牌,” 中国新闻网, September 6, 2022, https://archive.ph/3uyo7.
38    Ibid.
39    “2023年网络 ‘攻&防’ 技能大赛暨网安人才评定工程启动,” 网安世纪科技有限公司, February 1, 2023, https://archive.ph/qxEO9.
40    ““铸剑杯”全国大学生网络安全攻防竞赛在我校举行,” Northwestern Polytechnical University Cyberspace Security School, January 1, 2024, https://archive.ph/AUD1F.
41    秦峰, 王纯清, 伍辉艳, “’铸剑杯’ 全国大学生网络安全攻防竞赛在西工大举行,” China Daily, January 8, 2024, https://archive.ph/eZolv.
42    Northwestern Polytechnical University Cyberspace Security School, ““铸剑杯”全国大学生网络安全攻防竞赛在我校举行”, NWPU, January 1, 2024, https://www.nwpu.edu.cn/info/1198/77538.htm;王翠萍, “关于举办 ‘铸剑杯’ 大学生网络安全攻防竞赛的通知,” Northwestern Polytechnical University Cyberspace Security School and State Secrets Protection School, December 19, 2023, https://web.archive.org/web/20240808234908/https:/wlkjaqxy.nwpu.edu.cn/info/1044/6440.htm; Cary and Del Rosso, “Sleight of Hand”; KnownSec (知道创宇) is not listed among the CNNVD’s current technical support units, but the company’s history shows it began supporting the MSS in 2008, https://archive.org/details/20240809_20240809_1541; KnownSec also supplies software vulnerabilities to CNNVD, providing at least six vulnerabilities in 2023, https://archive.org/details/knownsec-cnnvd.
43    Ryan Fedasiuk and Emily Weinstein, “Universities and the Chinese Defense Technology Workforce” (Center for Security and Emerging Technology, December 2020). https://doi.org/10.51593/20200043.
44    王翠萍, “关于举办 ’铸剑杯’ 大学生网络安全攻防竞赛的通知,” 西北工业大学网络空间安全学院,
https://archive.ph/aq4mW.
45    西北工业大学网络空间安全学院, “‘铸剑杯’ 大学生网络安全攻防竞赛 政治审查表,” 西北工业大学, December 2023, https://perma.cc/ZCU3-HHDY.
46    “PRC Law on the Protection of State Secrets,” Standing Committee of the National People’s Congress, February 27, 2024, https://www.chinalawtranslate.com/en/secrets-law-2024/; Article 43 of China’s Counter-Espionage Law requires “good political caliber” for personnel with access to state secrets.
47    西北工业大学网络空间安全学院, “’铸剑杯’ 大学生网络安全攻防竞赛选手个人承诺书.”
48    Northwestern Polytechnical University Cyberspace Security School, ““铸剑杯”全国大学生网络安全攻防竞赛在我校举行”, NWPU, January 1, 2024, https://archive.ph/AUD1F.
49    “Who Is Mr. Gu?” Intrusion Truth, January 10, 2020, https://intrusiontruth.wordpress.com/2020/01/10/who-is-mr-gu/.
50    Eleanor Olcott and Helen Warrell, “China Lured Graduate Jobseekers into Digital Espionage,” Financial Times, June 30, 2022, https://www.ft.com/content/2e4359e4-c0ca-4428-bc7e-456bf3060f45.
51    “The Anthem Hack: All Roads Lead to China,” ThreatConnect, February 27, 2015, https://perma.cc/ZNQ5-325G.
52    Alex Joske, “Northwestern Polytechnical University,” ASPI University Tracker May 6, 2021, https://unitracker.aspi.org.au/universities/northwestern-polytechnical-university/.
53    “CTFWar Cybersecurity Attack and Defense Confrontation Cyber Range Platform,” CTFWar, last visited October 4, 2024, https://archive.vn/Rgvh6; “Base introduction,” 国家网络空间安全人才培养基地, last visited October 4, 2024, https://archive.vn/Stpzw; 中国信息安全测评中心, “国家网络空间安全人才培养基地介绍,” 国家网络空间安全人才培养基地, October 14, 2019, http://j.nisp.org.cn/NewsDetail/1423582.html; “Spring Season Web Challenges,” CTFWar, last visited October 4, 2024, https://archive.ph/lABAh; 极牛网官方账号, “CTFWAR2022国际网络安全攻防对抗联赛全面启动!联赛赛程发布!” QQ, last visited October 4, 2024, https://archive.ph/qmHLL; “Competition Services,” CTFWar, last visited October 4, 2024, https://archive.fo/Vv4iW; “Competition Background,” CTFWar, 2024, https://web.archive.org/web/20240513143004/https://ctfwar.org.cn/2024/.
54    极牛安全,“CTFWAR2022国际联赛宣传片发布!硝烟弥漫!等你来战!” BiliBili, April 24, 2022, https://web.archive.org/web/20240809180213/https://www.bilibili.com/video/BV1cT4y1Y7uc/?spm_id_from=333.337.search-card.all.click.
55    中国信息安全测评中心, “国家网络空间安全人才培养基地介绍,” 中国信息安全测评中心, October 14, 2019, https://perma.cc/TA6L-P77X; 国家网络空间安全人才培养基地, “关于停止2022年第三届全国大学生网络安全精英赛奖项申领的通知,” NISP, August 2, 2023, 官网, https://web.archive.org/web/20230926122801/http://j.nisp.org.cn/NewsDetail/4266565.html; “第二届全国大学生网络安全精英赛火热报名中,” 河南促进大学生就业职业培训学校, Henan Provincial Vocational Training School for Improving College Students’ Employment, September 16, 2021, https://www.pxxxedu.com/nd.jsp?id=122; 网安世纪科技有限公司, “安全无界·成长无限—2023年网络安全“攻防”技能大赛报名启动,” 国家网络空间安全人才培养基地, February 7, 2023, https://web.archive.org/web/20240926020653/http://j.nisp.org.cn/NewsDetail/3767946.html; “四川大学网络空间安全学院2024年普通招考博士研究生招生简章,” HHKaobao, https://archive.ph/LjADJ; 安徽奥斯科信息科技有限公司, “祝贺2020产教融合-网络空间安全人才培养高峰论坛成功召开!” 163.com, January 13, 2021, https://archive.ph/nVPwu.
56    National Cyberspace Services Internet, “Base introduction”, 国家网络空间安全人才培养基地, last visited October 4, 2024, https://archive.vn/Stpzw; 中国信息安全测评中心, “国家网络空间安全人才培养基地介绍,” 国家网络空间安全人才培养基地, October 14, 2019, http://j.nisp.org.cn/NewsDetail/1423582.html.
57    何珍祥, “甘肃政法大学与国家网络空间安全人才培养基地举行签约揭牌仪式,” 甘肃政法大学, June 16, 2024, https://jwxx.gsupl.edu.cn/info/1012/5851.htm;河北经贸大学经济管理学院, “学院简介,” 河北经贸大学, last visited October 4, 2024, https://web.archive.org/web/20240525041200/https://jgxy.hueb.edu.cn/xygk/xyjj.htm; 财报网, “苏州举行首届网络安全官 ‘云赋能’ 行动,” 财报网, last visited October 4, 2024, https://archive.ph/p0aE8; 现代快报, “‘网安先锋 学以职用’ 活动走进苏州市职业大学,” 中共江苏省委网络安全和信息化委员会办公室, June 16, 2021, https://archive.ph/8xS4B.
58    Privately held documents; 赤峰公安, “诚邀各路英雄参与网络安全顶级赛事,” 上海东方报业有限公司, August 3, 2018, https://archive.ph/dT67J.
59    网鼎杯, “实战演练,为国铸鼎:第二届 ‘网鼎杯’ 网络安全大赛圆满收官,” 安全内参, November 29, 2020, https://archive.ph/dUsFA; 左晓栋, “《’十三五’ 国家信息化规划》网络安全工作细梳理,” China Daily, December 28, 2016, https://archive.ph/dV7bo; 工业和信息化部, 国家互联网信息办公室, 公安部, “工业和信息化部 国家互联网信息办公室 公安部关于印发网络产品安全漏洞管理规定的通知,” People’s Republic of China State Council, July 12, 2021, https://archive.ph/rGsY1.
60    安全内参 , “实战演练,为国铸鼎:第二届“网鼎杯”网络安全大赛圆满收官,” 安全会议· 网鼎杯, November 29, 2020, https://archive.ph/dUsFA.
61    Dakota Cary, “Robot Hacking Games,” Center for Security and Emerging Technology, Georgetown University, September 2021, https://cset.georgetown.edu/publication/robot-hacking-games/.
62    “Qiang Wang Cup Homepage,” Qiang Wang Cup, last visited October 4, 2024,
https://web.archive.org/web/20240723013151/https://www.qiangwangbei.com/; “第六届 ‘强网杯’ 全国网络安全挑战赛(线上赛),” Qiang Wang Cup, 2022, https://archive.ph/7fBg6.
63    “Qiang Wang Cup Homepage.”
64    Ibid.
65    “第七届 ‘强网杯’ 圆满收官,永信至诚提供技术服务”, Beijing Integrity Tech, last visited October 4, 2024, https://archive.ph/L79Fn; 孟魁, “0ops战队在第六届 ‘强网杯’ 全国网络安全挑战赛上夺冠,” Shanghai Jiao Tong University, August 29, 2022, https://archive.ph/u9gYP.
66    “State Key Laboratory of Information Security Periodical,” State Key Laboratory of Information Security, 2016, https://web.archive.org/web/20240723154440/http://www.sklois.cn/cxwh/systx/201702/P020170214544533350877.pdf.
67    Hu, H., Wu, J., Wang, Z., & Cheng, G., “Mimic Defense: a Designed‐in Cybersecurity Defense Framework,” IET Information Security 12, 3 (May 2018), 226–237, https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/iet-ifs.2017.0086.
68    Thanks to Joseph Pantoga for his concise definition and generous time.
69    黎云, “第二届 ‘强网’ 拟态防御国际精英挑战赛开幕 ‘人机对战’ 检验拟态防御体系安全,” 新华网, May 22, 2019, https://perma.cc/9CTW-5GNS.
70    “State Key Laboratory of Information Security Periodical,” State Key Laboratory of Information Security, 2016, https://web.archive.org/web/20240723154440/http://www.sklois.cn/cxwh/systx/201702/P020170214544533350877.pdf.
71    管理员, “我院学子获第二届“强网”拟态防御国际精英挑战赛决赛第三名,” X1cT34m.com, last visited May 14, 2021, https://perma.cc/MJ8C-ENE8.
72    “第六届“强网”拟态防御国际精英挑战赛—入围战队篇,” Digital World Consulting, January 2024, https://www.dwcon.cn/post/3257.”https://www.dwcon.cn/post/3257.
73    Cary and Del Rosso, “Sleight of Hand.”
74    Benincasa, “From Vegas to Chengdu.”
76    @GeekPwn, “@AnonCorpWatch Hack a Tesla, win 100,000 $ in GeekPwn! GeekPwn prepares a disassembled Tesla in our free lab in China,” X, September 2, 2014, https://archive.ph/ceyij.
77    “GeekPwn.”
78    Ibid.
79    Ibid.
80    “Notice on Regulating the Promotion of Cybersecurity Competitions.”
81    “About,” Darknavy, last visited August 13, 2024, https://archive.ph/s8dAC.
82    Benincasa, “From Vegas to Chengdu.”
83    “XCTF International Cyber Attack and Defense League,” Cyber Peace, last visited September 24, 2024, https://web.archive.org/web/20240924175159/https://adworld.xctf.org.cn/league/list?rwNmOdr=1719301811043.
84    Ibid.
85    Ibid.
86    安全419-网络安全产业资讯媒体, “The 7th XCTF International Cyber Attack and Defense League Finals,” anquan419, last visited September 24, 2024, https://web.archive.org/web/20240924180304/http:/www.anquan419.com/comp/10/107.html.
87    “CyBRICSCTF,” CTF Time, last visited September 24, 2024, https://ctftime.org/ctf/334/; “BRICS+CTF,” CTF Time, last visited September 24, 2024, https://ctftime.org/ctf/986.
88    “HITB-XCTF DUBAI CTF 2018,” CTF Time, last visited September 24, 2024, https://ctftime.org/event/720/; “HITB-XCTF GSEC CTF 2018 Final,” CTF Time, last visited September 24, 2024, https://ctftime.org/event/678/.
89    “NUTD Team Won the Championship in XCTF International League,” National University of Defense Technology, July 26, 2024, https://english.nudt.edu.cn/nav/News/Latestnews/0c6545b632fe4e688ce54da8a28a0846.htm.
90    Eugenio Benincasa and Natto Team, “Matrix Cup: Cultivating Top Hacking Talent, Keeping Close Hold on Results,” Natto Thoughts, July 24, 2024, https://nattothoughts.substack.com/p/the-matrix-cup-cultivating-top-hacking.
91    Ibid.
92    Eduard Kovacs, “Tesla, OS, Software Exploits Earn Hackers $1.1 Million at Pwn2Own 2024,” Security Week, March 22, 2024, https://www.securityweek.com/tesla-os-software-exploits-earn-hackers-1-1-million-at-pwn2own-2024/; “Registration for the 2023 ‘Tianfu Cup’ International Cyber Security Competition Is Now Open, with a Prize of Tens of Millions of Yuan to Recruit Cyber Security Heroes,” Tianfu Cup, September 25, 2023, https://web.archive.org/web/20240227214728/http://www.tianfucup.com/2023/news/details?id=151.
93    “Front Page,” Matrix Cup, last visited September 24, 2024, https://web.archive.org/web/20240617081128/https://matrixcup.net/page/race/home/; Eduard Kovacs, “$2.5 Million Offered at Upcoming ‘Matrix Cup’ Chinese Hacking Contest,” Security Week, May 13, 2024, https://www.securityweek.com/2-5-million-offered-at-upcoming-matrix-cup-chinese-hacking-contest/.
94    Kovacs, “$2.5 Million Offered at Upcoming ”Matrix Cup” Chinese Hacking Contest.”
95    “3000名黑客巅峰较量,100+漏洞震撼突破!矩阵杯决赛落幕,” Qihoo360, July 1, 2024,
https://web.archive.org/web/20240708061854/https:/360.net/about/news/article66836ac56ddf08001f91a723#menu.
96    “Students from Tsinghua University’s Institute of Network Science and Technology Won Two Championships and Other Awards in the Matrix Cup Network Security Competition,” Tsinghua University, July 2, 2024, https://web.archive.org/web/20240831194143/https://www.insc.tsinghua.edu.cn/info/1192/3725.htm.
97    信安君, “‘Matrix Cup’ Cyber Security Competition: Building a New Security Defense Line and Stimulating the Vitality of Cyber Security Talents,” 中国信息安全, June 28, 2024, https://web.archive.org/web/20240711211024/https:/mp.weixin.qq.com/s?src=11&timestamp=1720711460&ver=5376&signature=4sfNK%2A-IkpPLCdsZimRe54ctXDVPHxMD7b%2A2aDwf-1oUh7rQPPCaV1N54WtuKNfR4jwisULfK-vTNbcy0IKY1I7RLYLdhGjz-OT2tf8zimTt%2A9LS8PS1fn1AAf3T%2Aket&new=1.
98    紅衣大叔海外友人與粉絲會, “360为什么要办矩阵杯网络安全大赛?#网络安全# 不端不装有点二 360公司董事长兼CEO 周鸿祎,” YouTube, July 2, 2024, https://www.youtube.com/watch?v=FeRDtZhdx14.
99    Patrick Howell O’Neill, “How China Turned a Prize-Winning iPhone Hack against the Uyghurs,” MIT Technology Review, May 6, 2021, https://www.technologyreview.com/2021/05/06/1024621/china-apple-spy-uyghur-hacker-tianfu/.
100    “技术支撑单位,” CNNVD, last visited September 24, 2024, https://www.cnnvd.org.cn/home/tech.
101    “The Xiangyun Cup,” Xiangyun Cup, 2022, https://archive.ph/DgiIe.
102    Ibid.; “首届 ‘祥云杯’ 网络安全大赛,” Security419, last visited October 4, 2024, https://archive.ph/awpb2.
103    “Tianfu Cup Overview,” Tianfu Cup, 2023, https://archive.ph/sIJm5.
104    “The 3rd ‘Xiangyun Cup’ Network Security Competition and the 5th Jilin Province College Student Network Security Competition Finals Were Successfully Held,” Education Department of Jilin Province, September 13, 2023, https://archive.ph/Ift0V
105    “Sector Risk Management Agencies,” Cybersecurity and Infrastructure Security Agency, last visited September 24, 2024, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors/sector-risk-management-agencies.

The post Capture the (red) flag: An inside look into China’s hacking contest ecosystem appeared first on Atlantic Council.

]]>
Effective cybersecurity in Africa must start with the basics https://www.atlanticcouncil.org/blogs/africasource/effective-cybersecurity-in-africa-must-start-with-the-basics/ Mon, 07 Oct 2024 17:35:14 +0000 https://www.atlanticcouncil.org/?p=796587 Grand strategies and policies often lack practicality, especially for African firms with limited capacity. For them, core and basic practices are often easier to achieve.

The post Effective cybersecurity in Africa must start with the basics appeared first on Atlantic Council.

]]>
Africa is at the forefront of the digitalization wave. From bustling cities to remote villages, the rapid adoption of broadband internet and mobile-enabled transactions continues to reshape economies and lives across the continent.

Figures put together by the World Bank illustrate this surge: between 2019 and 2022, over 160 million Sub-Saharan Africans gained broadband access. From 2016 to 2021, internet users in the region increased by 115 percent, and from 2014 to 2021, 191 million people made or received digital payments.

While these digital advances are promising, they also come with significant risks in the form of cyber vulnerabilities. Threat actors—individuals, organized groups, and even countries—are becoming increasingly sophisticated and are propelling a rise in cybercrime. In addition, digital systems can be compromised by unintentional acts and errors, as seen in the recent CrowdStrike incident, in which a single corrupted software update triggered a chain reaction, affecting multiple sectors and regions.

International institutions and governments are becoming increasingly aware of this reality, driving the development of cyber policies across Africa and beyond. Governments of countries such as South Africa, Kenya, and Mauritius have taken early steps with national frameworks, while regional actors (notably the African Union) and global organizations such as the International Criminal Police Organization have introduced strategies and initiatives to strengthen defense efforts across the continent.

While these initiatives are important, grand strategies and policies often lack practicality, especially for African firms with limited capacity. In resource-constrained environments, core and basic practices are often easier to achieve than comprehensive frameworks. Thus, to foster a more secure continent, organizations engaging in the digital sphere—including businesses, nongovernmental organizations, government bodies, and more—should first be encouraged to implement basic cybersecurity measures to identify, protect, and recover critical assets.

Despite the increasing use of advanced and complex technologies like artificial intelligence, basic security measures remain surprisingly effective at diminishing the likelihood of cyber threats and mitigating their impact. A significant portion of cyberattacks can be prevented through the implementation of fundamental cybersecurity measures, as shown by the Verizon Data Breach Investigations Report and Microsoft Digital Defense Report. By implementing basic safeguards and preparing for recovery when critical assets are compromised, African organizations and corporations of all types can greatly diminish the impact of cyber threats:

  • Create a comprehensive inventory of assets: The foundation of effective cybersecurity lies in knowing what to protect. Organizations should inventory all systems, devices, and data assets. They should document relevant data elements to facilitate categorization, risk assessment, and prioritization. Outdated or unauthorized systems are common in remote offices, making a thorough inventory crucial.
  • Limit access rigorously: Organizations should restrict system access to essential personnel and enforce multi-factor authentication. In Africa, where mobile devices are often the primary means of internet access, secure authentication is especially important.
  • Devise an application whitelist: It is important to allow only approved software in systems, as doing so prevents the execution of unauthorized or malicious programs. This is particularly valuable in regions of Africa where pirated software is prevalent and users might be tempted to install unauthorized applications due to resource constraints. Only allowing approved software would also reduce risks associated with unauthorized or outdated software.
  • Standardize security configurations: Organizations should enforce uniform security settings across systems, which would minimize vulnerabilities and simplify management, ensuring consistent protection even in remote locations.
  • Proactively deploy patches: It is critical to promptly address software vulnerabilities systematically with patches. In areas with limited connectivity, creative solutions for distributing and applying patches (such as using local caching servers or scheduling updates during off-peak hours) can help ensure updates are applied.
  • Develop robust plans for backups and recovery: Developing, regularly updating, and testing recovery plans tailored to the most critical threat scenarios is essential for minimizing downtime when a disruption occurs. Organizations should consider both on-site and off-site backup solutions, taking into account local regulations and data sovereignty issues that may affect where data can be stored.

Governments play a crucial role in fostering the widespread adoption of cybersecurity practices. They can leverage various policy tools to incentivize organizations, especially small and medium-sized enterprises, to prioritize cybersecurity. Tax incentives for cybersecurity investments can make implementation more financially viable for both local and foreign companies operating within the country. These incentives could include tax credits for cybersecurity expenditures, accelerated depreciation for security-related hardware and software, or reduced corporate tax rates for companies meeting certain cybersecurity standards. By extending these benefits to foreign investors, governments can also attract international expertise and capital to bolster the country’s cybersecurity infrastructure.

The basic cybersecurity practices listed above not only protect against common threats but also bolster organizational resilience, drive innovation, and contribute to a more secure digital ecosystem. They are practical and balance immediate needs with strategic goals, making robust cybersecurity more accessible. By laying this foundation, African organizations of all sizes can build their cybersecurity programs, contributing to a safer and more resilient digital world.


Yasmine Abdillahi is the executive director for security risk and compliance and the business information security officer at Comcast.

The Africa Center works to promote dynamic geopolitical partnerships with African states and to redirect US and European policy priorities toward strengthening security and bolstering economic growth and prosperity on the continent.

The post Effective cybersecurity in Africa must start with the basics appeared first on Atlantic Council.

]]>
How do cyber-attacks threaten the Balkans? | A Debrief with Dan Ilazi and Filip Stojanovski https://www.atlanticcouncil.org/content-series/balkans-debrief/how-do-cyber-attacks-threaten-the-balkans-a-debrief-with-dan-ilazi-and-filip-stojanovski/ Tue, 01 Oct 2024 19:00:00 +0000 https://www.atlanticcouncil.org/?p=796275 Senior Fellow Ilva Tare speaks with Dan Ilazi and Filip Stojanovski about the political and economic threats of cyber-attacks for the Western Balkans.

The post How do cyber-attacks threaten the Balkans? | A Debrief with Dan Ilazi and Filip Stojanovski appeared first on Atlantic Council.

]]>

IN THIS EPISODE

Cyber-attacks are on the rise in the Western Balkans, with 1.2 million personal records exposed to data breaches and a 200% surge in ransomware attacks over the past two years. Businesses across the region have paid millions of euros to recover compromised data, and 75% of companies report facing phishing attacks. Cyber-actors are exploiting internal ethnic tensions to target reconciliation efforts, while disinformation campaigns undermine democracy, destabilize institutions, and disrupt daily life.

In this episode of #BalkansDebrief, Ilva Tare, Senior Fellow at the Atlantic Council’s Europe Center, sits down with Ramadan Ilazi from the Kosovar Centre for Security Studies and Filip Stojanovski, Director of Partnerships at Metamorphosis in North Macedonia. Together, they delve into the cybersecurity vulnerabilities threatening the region’s political and economic stability, examining the implications for critical infrastructure, businesses, and citizens.

The discussion tackles key questions, including how cyberattacks are being used to advance political agendas, the impact of emerging technologies like AI and the Internet of Things, and the gaps in regional cooperation. They also explore how the Western Balkans can strengthen its integration into the EU’s cybersecurity framework, including the role of ENISA in supporting regional efforts.

As cyber threats continue to evolve, this conversation highlights the urgent need for a resilient digital future in the Western Balkans, from workforce development to bolstering regional collaboration. Tune in for expert insights on navigating one of the region’s most critical challenges.

ABOUT #BALKANSDEBRIEF

#BalkansDebrief is an online interview series presented by the Atlantic Council’s Europe Center and hosted by journalist Ilva Tare. The program offers a fresh look at the Western Balkans and examines the region’s people, culture, challenges, and opportunities.

Watch #BalkansDebrief on YouTube and listen to it as a Podcast.

MEET THE #BALKANSDEBRIEF HOST

The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

The post How do cyber-attacks threaten the Balkans? | A Debrief with Dan Ilazi and Filip Stojanovski appeared first on Atlantic Council.

]]>
Dakota Cary’s research in WIRED on Chinese hacking competitions https://www.atlanticcouncil.org/insight-impact/in-the-news/dakota-carys-research-in-wired-on-chinese-hacking-competitions/ Wed, 25 Sep 2024 16:58:13 +0000 https://www.atlanticcouncil.org/?p=794075 Did Chinese college students conduct a government-backed cyberattack on a real world target?

The post Dakota Cary’s research in WIRED on Chinese hacking competitions appeared first on Atlantic Council.

]]>

Did Chinese college students conduct a government-backed cyberattack on a real world target?

On September 18th, GCH Fellow Dakota Cary’s investigation into Chinese hacking competitions was recently explored in a new WIRED article. Cary dives into the world of Chinese cyber competitions, notable examples, and a case where state-sponsored college students may have conducted real world cyber operations.

Be sure to keep an eye out for Cary’s full report into the Chinese hacking contest world later this fall!

The post Dakota Cary’s research in WIRED on Chinese hacking competitions appeared first on Atlantic Council.

]]>
Defense technology and innovation in Germany https://www.atlanticcouncil.org/in-depth-research-reports/report/defense-technology-and-innovation-in-germany/ Mon, 09 Sep 2024 14:42:48 +0000 https://www.atlanticcouncil.org/?p=788572 This paper explores the critical aspects of defense technology and innovation within the  German armed forces, detailing the necessity for innovation, the role of defense innovation hubs (specifically the Bundeswehr Cyber Innovation Hub CIHBw), the significance of software-defined defense, the contribution of venture capitalists, and the importance of a  supportive legal framework. 

The post Defense technology and innovation in Germany appeared first on Atlantic Council.

]]>
A report commissioned by Atlantik-Brücke in cooperation with the Atlantic Council

Executive Summary

Innovation in defense technology is a cornerstone of national security and military superiority. For Germany, a country with a complex historical and geopolitical backdrop, the impetus to innovate within its armed forces has never been more pressing. Today’s security environment, characterized by rapid technological advancements and asymmetric threats, necessitates a robust and agile approach to defense innovation.

As the demands on German foreign and security policy rise amidst a Europe plagued by multiple crises, so too do the demands on the German Armed Forces. To enhance the Bundeswehr’s assertiveness and effectiveness as a deterrent, despite constraints such as personnel and equipment shortages, it must rapidly and extensively adopt new technologies. The quicker and more effectively these technologies are utilized, the greater the advantages they will provide on the battlefield.

This paper explores the critical aspects of defense technology and innovation within the German Armed Forces, detailing the necessity for innovation, the role of defense innovation hubs (specifically the Bundeswehr Cyber Innovation Hub CIHBw), the significance of software-defined defense, the contribution of venture capitalists, and the importance of a supportive legal framework.

Sven Weizenegger is the Head of the Cyber Innovation Hub at the German Armed Forces.

Presented by

Atlantik-Brücke

Atlantik-Brücke

Strengthening the exchange between politics and business, Atlantik-Brücke aims to deepen cooperation between Germany, Europe and America on all levels. 

GeoEconomics Center

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

The post Defense technology and innovation in Germany appeared first on Atlantic Council.

]]>
Mythical Beasts and where to find them: Mapping the global spyware market and its threats to national security and human rights https://www.atlanticcouncil.org/in-depth-research-reports/report/mythical-beasts-and-where-to-find-them-mapping-the-global-spyware-market-and-its-threats-to-national-security-and-human-rights/ Wed, 04 Sep 2024 22:39:00 +0000 https://www.atlanticcouncil.org/?p=817985 The Mythical Beasts project pulls back the curtain on the connections between 435 entities across forty-two countries in the global spyware market.

The post Mythical Beasts and where to find them: Mapping the global spyware market and its threats to national security and human rights appeared first on Atlantic Council.

]]>

Table of Contents

Executive summary

Despite its contribution to human rights harms and national security risks, the proliferation of spyware remains rife. A significant channel for this proliferation is sale through a global market, of which most public information is known about only a handful of vendors. While some of these entities have achieved infamy, like NSO Group and the Intellexa Consortium, most others have largely flown under the radar.

The Mythical Beasts project addresses this meaningful gap in contemporary public analysis on spyware proliferation, pulling back the curtain on the connections between 435 entities across forty-two countries in the global spyware market. These vendors exist in a web of relationships with investors, holding companies, partners, and individuals often domiciled in different jurisdictions.

This market is a significant vector for facilitating the human rights harms and national security risks posed more broadly by spyware, software that facilitates unauthorized remote access to internet-enabled target devices for purposes of surveillance or data extraction. It is possible for policymakers to make significant progress in limiting these harms and risks by influencing this market, rather than playing “whack-a-mole” with individual vendors or transactions. This progress is possible now, even in the face of basic disagreements over what constitutes a “legitimate” use of spyware. Besides changes to participants in the market, greater transparency will also support more effective policies related to spyware, rooted in cooperative international action.

Developed as part of a wider study of proliferation and international cybersecurity, this report provides an analysis of the accompanying dataset comprised of information from 1992 to 2023 on forty-nine vendors along with thirty-six subsidiaries, twenty-four partner firms, twenty suppliers, and a mix of thirty-two holding companies, ninety-five investors, and one hundred and seventy-nine individuals, including many named investors. There are six trends that hold for this dataset, a detailed but even still incomplete sample: 1) concentration of entities in three major jurisdictions (Israel, Italy, and India), 2) serial entrepreneurship across multiple vendors, 3) partnerships between spyware and hardware surveillance vendors, 4) regularly shifting vendor identities, 5) strategic jurisdiction hopping, and 6) cross-border capital flows fueling this market.

These trends inform a set of policy recommendations to produce greater transparency across the market, limit the jurisdictional arbitrage of vendors seeking to evade limits on their behavior, and more effectively scrutinize supplier and investor relationships.

Figure 1: Policy recommendations to produce greater transparency across the market

Commercial acquisition of spyware is not the root cause of its abuse. While this project is focused on bringing transparency to participants in this market, it does not argue that only transactions through this market pose proliferation harms or risks. An information gap exists in what is known about the spyware market and its varied participants, a gap that is impeding international cooperation on policies that could meaningfully reduce the harms and risks posed by spyware. This report seeks to offer new data and analysis to bridge that gap and support the work of researchers and policymakers more widely.

Introduction

For at least the last thirty years, “mythical beasts” have been lurking around the globe, assuming the names of varying species of fish, fowl, and other creatures rooted in lore. These mythical beasts—often with dramatic naming conventions—are spyware: software that facilitates unauthorized remote access to an internet-enabled target device for purposes of surveillance or data extraction.  The companies that sell these tools are sustained by an increasingly diverse array of government customers across a global market, even in the face of scattered regulatory efforts targeting spyware supply chains.1

  • Out of 195 countries in the world, at least eighty are known to have procured spyware from commercial vendors.2
  • Fourteen of the twenty-seven countries in the European Union have purchased spyware from just one vendor, the NSO Group.3
  • Spyware vendors were attributed to fifty percent of all zero-day exploits discovered by one company’s threat research team in 2023, including sixty-four percent of all exploits in mobile and browser software.4
  • While the annual revenue generated by this market is unknown and subject to repeated speculation, largely recycling the same unsourced statistic, at least one vendor has considered an initial public offering valuation of $2 billion.5

With the proliferation of spyware, from NSO Group’s Pegasus to Intellexa Consortium’s Predator, comes increased attention to its use. Some argue that spyware can be employed as a legitimate law enforcement and intelligence tool.6 It has also been used by states to extend surveillance power well beyond their physical borders, making it easier to track, arrest, kidnap, and even kill their citizens.7 In these abuses of spyware, the victims are most often journalists, activists, opposition politicians, and a myriad of other individuals whose activity has attracted hostile interest from their governments. For years, civil society organizations like AccessNow and Amnesty International have sought to bring attention to these abuses and have reported on spyware’s use on nearly every continent.8

State surveillance, harassment, repression, and outright murder predate spyware, and there is little to suggest spyware “causes” these abuses. Measuring the human rights harms and national security risks of spyware against its value to law enforcement or intelligence activities is also challenging as these activities are, by their nature, even less visible. Few governments have sought to demonstrate the range of legitimate uses of spyware or its impacts. As a result, when considering spyware’s effects on society, there is a bias to what is known.

Still, what is known is an abundance of public evidence of the totality of abuses made easier— perhaps even directly possible—by spyware.9

It is not in dispute that spyware makes it easier for states to penetrate even the most robust commercial technologies, cell phones, computers, and communications services; makes it far easier to act against citizens beyond state borders; and even provides governments with the ability to target senior officials, both domestically and abroad, where they might otherwise have no means to do so.10 Where that information is used to facilitate repression and abuse, its harms are untenable. Where that information is gathered and used subject to due diligence and effective oversight in pursuit of credible law enforcement and intelligence activities subject to the limits of the law, its effects may provide for the public interest. These two categories overlap and are altogether too often separated by good intent and cursory legal review.

The proliferation of spyware also poses national security risks as it makes it more likely for states to become “more capable—for instance while conducting cyber-espionage for commercial or intelligence gain—or ready for more disruptive or damaging operations.”11The proliferation of these capabilities in most states takes place with few effective restraints, strict controls, or meaningful oversight mechanisms. This is a recognized policy challenge and one which has been taken up in various forms by some governments, largely in Europe, the US, and the UK.

Briefly through the past

Digital surveillance technologies, which include spyware, are known as dual-use goods, meaning they can be “used for both defense and civilian purposes.”12 Dual-use technology in forty-two countries falls under a multilateral export control regime established in 1996, the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (the Wassenaar Arrangement).13 In 2013, the Wassenaar Arrangement was amended to include “intrusion software” but after considerable feedback from the security research community and significant delay, the language was revised.14 While the Wassenaar Arrangement is not legally binding, its signatories typically voluntarily implement its control list into domestic regulations, oftentimes requiring firms whose products are listed to acquire special licenses to export these items.

Within the European Union, export controls are governed by the EU’s Dual-Use Regulation.15 The first EU legislation on dual-use goods was enacted in 1994, underwent significant changes in 2009, and a new version was enacted in 2021 to implement and modernize the EU’s export control regime.16Member states are required to abide by this common set of restrictions but may introduce additional controls on non-listed dual-use items due to public security or human rights considerations.

The United States is also a participant in the Wassenaar Arrangement and the Bureau of Industry and Security (BIS) within the Department of Commerce has the authority to regulate dual-use exports by issuing export licenses.17The BIS Entity List contains names and organizations subject to specific additional license requirements.18

In 2021, BIS added four entities to this list, including for the first time two spyware vendors:

  • Candiru Ltd
  • NSO Group

And two suppliers:

  • COSEINC
  • Positive Technologies AO.19

In 2023, BIS also added four companies associated with the Intellexa Consortium to the Entity List: Intellexa S.A., Cytrox AD Holdings ZRT, Intellexa Limited (Ireland), and Cytrox AD (North Macedonia) as they were determined by BIS to be “trafficking in cyber exploits used to gain access to information systems, threatening the privacy and security of individuals and organizations worldwide.”20

In addition to export controls, the European Union and the United States have sought to implement other measures to limit spyware proliferation. In 2022, in response to the investigative findings of the Pegasus Project, an international investigative journalism initiative, the European Parliament established the PEGA Committee to investigate the misuse of surveillance spyware, including the NSO Group’s Pegasus and similar spyware services.21 The committee concluded that European Union governments abused spyware services, lacked necessary safeguards to prevent misuse, and in at least one jurisdiction, Greece, the government had facilitated the export of Predator spyware which was then itself abused, here by Sudan’s Rapid Support Forces militias, who are reported to have committed war crimes.22 Despite the committee’s recommendations, the EU has not adopted any legislation as a bloc to curb the development or sale of spyware.23

More recently

The last twenty-one months saw a surge in policymaking activity building on these recent efforts. Most visible has been from the United States, which has enacted punitive measures targeting entities selling spyware and driven some measure of diplomatic consensus to “recognize the threat posed by the misuse of commercial spyware” and acknowledge the “fundamental national security and foreign policy interest in countering and preventing the proliferation of commercial spyware.”24

In March 2023, the United States first proposed blocking US government agencies from using “commercial spyware.” Under Executive Order 14093, the Biden administration prohibited the operational use of commercial spyware that presents a significant threat to national security.25 Also in March of 2023, the US and several other countries signed The Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware, pledging to “work collectively to counter the proliferation and misuse of commercial spyware.”26

In March 2024, the US Department of Treasury Office of Foreign Assets Control levied sanctions against several entities, some of which are also listed on the BIS Entity List.27 Ultimately Treasury sanctioned:

  • Tal Dilian
  • Sara Hamou
  • Intellexa S.A.
  • Intellexa Limited
  • Cytrox AD
  • Cytrox Holdings Crt
  • Thalestris Limited28

So far, the US has refrained from sanctioning five other entities within the Intellexa Group, previously identified publicly, and perhaps others, including entities associated with Thalestris Limited.29 That same month, several additional countries joined as signatories in an expansion of The Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware.30

In April 2024, the US Department of State announced a visa restriction policy to “promote accountability for the misuse of commercial spyware.”31This extended statutory language from 2021, originally implemented as visa bans on “individuals who, acting on behalf of a foreign government, are believed to have been directly engaged in serious, extraterritorial counter-dissident activities, including those that suppress, harass, surveil, threaten, or harm journalists, activists, or other persons perceived to be dissidents for their work, or who engage in such activities with respect to the families or other close associates of such persons.”32The new restrictions pertain to individuals who have been involved in the development and sale of commercial spyware and their immediate family members.33 Thirteen individuals, whose identities are not public, have been subject to this action as of the date of this writing.34

A new multilateral effort from the UK and French governments, the Pall Mall process, has also brought together an even wider array of state and non-state participants to develop principles, and perhaps practical policy action, to counter spyware proliferation.35 Pall Mall includes non-state groups and a much wider set of states than the Joint Statement signatories but, so far, with much more ambiguous outcomes, including a debated set of principles and plans for a broad consultative process.


Collectively these US and allied efforts demonstrate there is a growing focus on curtailing the proliferation of spyware. However, still missing from these discussions is a common picture of the spyware market with sufficient detail to understand the diversity of market participants and relationships that stretch across borders.

A turn to the spyware market

This report offers a new dataset covering 435 entities (incl. forty-nine vendors and twenty suppliers) across the spyware market. The data spans forty-two different countries and nearly thirty years, covering vendors, investors, and other corporate relationships. The collection of this data and its publication is an attempt to address a systematic bias toward the operations of a small handful of well-known firms that inform assumptions about the interactions and relationships of a large global market.

This narrow focus has helped obscure the impact of dozens of other vendors and the importance of their relationships with both investors and suppliers of crucial software components, including working exploits of some of the most widely used software (e.g. iOS, Android). This project supports the “turn to spyware” in recent transatlantic policy activity and should enable a more robust, market-first (rather than vendor-centric) approach. Such an approach can leverage both conventional tools to constrain and shape markets as well as new policies to address the unique dimensions of spyware.

Unlike other more tightly regulated markets, the market for spyware lacks public data that is consistent, reliable, and clearly sourced. While a single export authority, such as Israel’s Defense Exports Control Agency, may track sales out of the country, this information is neither public nor combined with similar resources from other jurisdictions. Thus, the view of the spyware market is limited even for exceptionally well-equipped states. Researchers, journalists, and policymakers alike must scrape through a variety of different resources just to scratch the surface of this market that has cloaked itself in secrecy, making it difficult for policy action. There is some comparison even to be had with the approach of US policymakers in regulating the market for cannabis over the last several decades. Once a widely banned yet still pervasively acquired substance, the cannabis market has now been legalized in many states and is subject to enormous scrutiny. This new approach focused not on blocking cannabis transactions or consumption but rather on leveraging market forces, accepting some legitimate use, and creating parameters for responsible procurement. Proliferation will not be prevented by a well-regulated and more transparent market, but it can be better channeled and made subject to controls, less opaque, and less harmful.

Policy responses to address spyware as a market is preferable over advocacy for a complete ban. Such a ban would likely supercharge government calls for exceptional access to encrypted services and data while sapping momentum toward better approaches to spyware in the US and UK, home to two of the most active policymaking communities on this issue.

An important caveat: the dataset collected here does not catalog use, so the authors cannot make novel claims about what constitutes “legitimate use” of spyware. Few of the policies mentioned in the previous section define legitimate use with sufficient precision to enforce granular bans on behavior and even these are not transparent.36 The lack of a common understanding of the scale, diversity, and relationships within the spyware market is a barrier to effective policymaking. Thus, this report argues for how to improve transparency in the market and develop sufficient granular controls to enable these kinds of distinctions in use.

The next section of this report explains the methodology associated with this dataset as well as definitions and a summary of the data. The following section analyzes major trends from this data and highlights specific examples and implications for policy before developing specific recommendations. As the authors have written previously, “markets matter,” and this report argues that the current surge in policymaking toward spyware can best be sustained and made more impactful by focusing policymaking on the spyware market instead of just a handful of the most well-known firms.37

Methodology, definitions, and navigating the dataset

This section discusses the project’s scope, including a working definition of spyware, data collection methodology, and sources, and closes with the major definitions and terms used as part of this dataset’s coding framework.

Spyware: is a type of malicious software that facilitates unauthorized remote access to an internet-enabled target device for purposes of surveillance or data extraction.38 Spyware is sometimes referred to as “commercial intrusion [or] surveillance software” with effectively the same meaning.39 This research considers the “tools, vulnerabilities, and skills, including technical, organizational, and individual capacities” as part of the supply chain for spyware and the meaningful risks posed by the proliferation of many of these components.40

This project is concerned with the commercial market for spyware and provides data on market participants. Focusing on the market does not presume that all harms from spyware stem from how it is acquired, or whether that acquisition is a commercial transaction with a third party (versus developed “in-house” by the customer). Some definitions of spyware differentiate it by the means with which it is acquired, creating confusion over the fundamental distinction between “spyware” and, for instance, “commercial spyware.”41

… so “commercial” spyware?

Transactions across the spyware market may be less regulated than in-house development of spyware but they are far from the only source of harm and insecurity. Policies that seek only to mitigate harms from the commercial sale of these capabilities risk ignoring their wider harms and avoid the opportunity to address fundamental concerns over surveillance and the full spectrum of government uses of these technologies.

The debate over what constitutes legitimate uses of spyware is ongoing, but commercial sale is a poor proxy for the degree of responsible or mature use. History has shown that this market is only one, albeit significant, part of a wider proliferation challenge.42Many human rights violations associated with spyware occur in the context of their use for state security purposes (e.g., by intelligence agencies), highlighting the diverse harms and risks posed by the proliferation of spyware. These include what some researchers have termed “vertical” uses (by states against their own populations) and “diagonal” uses (against the population of other states, including diaspora).43 There is some normative loading in the term “spyware” vs. the more functional “malware” or the rather impenetrable “commercial intrusion capabilities” but is beneficial to have a common term of art in many of these debates.

This report and accompanying dataset are mainly inclusive of investigations into vendors and suppliers that have been found selling spyware to governments across the world that have then used this software to abuse human rights. However, this is only one side of the coin. Far less data exists on the use of spyware for a myriad of intelligence and counterintelligence purposes, including “national security” missions both genuine and troubling. The report cannot resolve these tensions but does seek to frame them in service of a more immediate and practical purpose—and a better understanding of the market that provides the software tools and services to carry out these acts.

Commercial acquisition of spyware is not the root cause of its abuse. While this project is focused on bringing transparency to participants in the spyware market, it does not argue that only transactions through this market pose proliferation risks or harms.44To avoid further confusion in both analysis and policy the authors do not include the term “commercial” in the definition of spyware. While the debate continues about how to manage these risks, this project sheds better light on those buying, selling, and supporting this market.

A final note on scope

Spyware works without the consent or knowledge of the target or others with access to the target’s device; thus, this report does not consider the market for so-called “stalkerware,” which generally requires physical interaction from an individual, most often a spouse or partner, with access to a user’s device.45 This definition also excludes software that never gains access to a target device, such as surveillance technologies that collect information on data moving between devices over wired (i.e., packet inspection or “sniffing”) or wireless connections. This definition also excludes hardware such as mobile intercept devices, known as IMSI catchers, and any product requiring close or physical access to a target device, such as forensic tools.46

This definition is limited, by design, to disentangle lumping various other surveillance toolsets into the definition of spyware.

Building the dataset

This dataset represents a meaningful sample of the market for spyware vendors, but it is not a complete record and this report can only speak to trends and patterns within this data, not the market as a whole. The data is confined to entities for which there is a public record (i.e. registered businesses) and for which public information links the vendor to the development or sale of spyware or its components.47

To develop a list of vendors, the authors started by creating an initial “most visible” list of those with the widest public exposure from the use of their wares, relying principally on public reporting from Amnesty International, Citizen Lab, and the Carnegie Endowment for International Peace, as well as public reporting from a variety of news outlets. This initial set of vendors was the starting point for searching public corporate registries and a mix of public and private-sector corporate databases to profile each company in greater depth and find additional connections.

All the vendors identified through this process were included if they 1) publicly advertised products or services that matched the above definition of spyware, 2) were described as selling the same products by public reporting in the media or by civil society researchers, or 3) showed evidence of the products through court records, leaks, or similar internal documentation. As part of this search process, the team gathered records on subsidiaries and branches associated with each vendor, their publicly disclosed investors, and, where possible, named suppliers.


Each entity identified in this process was identified by at least two different open sources. In all cases for which data is available, the dataset includes vendor activities from the start of operation until 2023, or until records indicate that the vendor’s registration had ceased in a jurisdiction. The sources of public information on both firms’ activities and their organization varied but largely stemmed from different forms of corporate registration, records, and transaction data.

Defining entities in the spyware market 

The summary profiles of vendors and suppliers can be found in the Trends section of the report, as well as in Appendix A.

Figure 2: Entities in the dataset are spread across jurisdictions

This section identifies and explores six distinct trends from the data gathered in this sample of the spyware market:

Analysis

  1. The majority of identified entities in this sample are domiciled in Israel, India, and Italy
  2. Serial entrepreneurs are rife
  3. The robust partnerships between spyware and hardware surveillance vendors
  4. The deliberate and repeated efforts by firms to shift their identities and even corporate structure
  5. The movement of those corporate structures across strategic jurisdictional boundaries
  6. The significant cross-border mobility of capital supporting spyware development and sales

The dataset is based on a collection of known or reported spyware vendors and analysis, using these as a basis to identify first-order connections and map the resulting network of entities. We included only those entities as defined above.  As a result, the dataset is a sample of the spyware market and the trends speak to this data.

1. The three I’s

Across the 435 entities in this sample of the spyware market, there is a significant concentration of vendors and associated entities in three jurisdictions: Israel, India, and Italy. These states are by no means the only hosts of spyware vendors or their investors and partners, but they are unusually prolific.

  • Israeli cluster: Eight vendors (NSO Group, SaitoTech—formerly Candiru Ltd, Cognyte, Paragon Solutions, MerlinX, Quadream Inc./InReach Technologies Limited, Blue Ocean Technologies, and Interionet.) This cluster comprises 43.9 percent of the entities in this dataset. The average period of activity (from the time of initial registration to the most recent) for each of these vendors is 6.6 years.
  • Indian cluster: Five vendors (Aglaya Scientific Aerospace Technology Systems Private Limited, Appin Security Group, BellTroX Infotech Services Private Ltd., CyberRoot Risk Advisory Private Limited, and Leo Impact Security Service PVT Ltd.) as well as one supplier (RebSec Solutions). This cluster covers 7.8 percent of the entities in this dataset, with the average period of activity for vendors lasting 10.1 years.
  • Italian cluster: Six vendors (Dataflow Security s.r.l., DataForense s.r.l., Memento Labs srl—formerly Hacking Team srl or Grey Heron, Movia SPA, Negg Group/Negg International, s.r.l., and RCS ETM Sicurezza S.p.A.) and one supplier (VasTech). This cluster includes 13.6 percent of all entities in the dataset with an average period of activity lasting 6.1 years.

This dataset represents a global market and it is notable that, for all the press on some firms in the country, Israel represents barely half of this sample. It is also important to note that in the early stages of this project, the most widely reported vendors were based in Israel, which means they constituted a much larger portion of early versions of the dataset. Italy is a notable jurisdiction given its home in the EU, where debates continue about how to govern the presence and operation of spyware vendors. The geographic spread of these “Three I’s” underscores the need for cooperative approaches to driving transparency and shifting behavior in the spyware markets and highlights the absence of both Israel and India in the most recent high-profile spyware policymaking process, the Pall Mall declaration.48

2. Serial entrepreneurs

Across this sample of the market, there is a recurring pattern of employees, including founders leaving their first firm to found and work in other companies, often repeatedly. This is not unlike other startup cultures where this kind of serial entrepreneurship is common. It is interesting in the context of this market, however, given the essential similarity of these products’ intended function and the assumed stickiness of customer relationships with founders and senior employees. Within the dataset, founders of vendors and suppliers are involved in 2.2 companies on average.

Figure 3: Employees frequently make the jump from employee to founder

The NSO Group, arguably the most well-known spyware vendor, is a prime example of this phenomenon. The firm was founded in 2010 in Israel by Niv Karmi, Omri Lavie, Shalev Hulio, and Eddy Shalev and is the developer of the Pegasus spyware. Despite investigations from the EU and regulatory action from the US, the firm continues to operate branches in the United States and Luxembourg along with subsidiaries in Bulgaria and Cyprus.49

The vendor Quadream Inc., known for the spyware Reign, was founded in 2016 in Israel by former NSO Group employees Guy Geva and Nimrod Reznik, as well as former military official Ilan Dabelstein and goes by the name Kvader Ltd. in Israel.50 Like Quadream Inc., Interionet Systems Ltd. (Interionet) is an Israeli vendor founded in 2015 by Yair Pecht and Sharon Oknin, former employees of NSO Group.51 Israeli businessman Joshua Lesher, an NSO shareholder and board member, also sits on Interionet’s board.52

Interionet “develops malware for internet routers” and is notable for compromising internet-of-things devices, such as video surveillance cameras.53 In 2022, Interionet won a contract with the Belgian police for their €299 million modernization project called I-Police.54 In a study of cyber capabilities in the international arms market55the authors assessed with high confidence that Interionet “is willing to market its capabilities in countries which are not allied to the American and European interests.”56

Figure 4: Mapping connections in the NSO Group, Quadream, and Interionet clusters

There are examples of this trend outside of Israel as well. The Appin Security Group, established by Rajat Khare and his brother Anuj, has since been alleged to have targeted and spied on entities worldwide. Materials online appear to show the company offering hack for hire services.57 BellTroX Infotech Services Private Ltd. was registered in India in 2013 by Sumit Gupta, formerly an employee of Appin Security Group.58 BellTrox Infotech Services Private Ltd. has been previously named by Meta as offering “hack for hire” services.59

 
Figure 5: Mapping connections in the Appin Security Group and BellTrox Infotech Services Private Ltd Clusters

This suggests that more closely governing the talent pools of individuals pivoting between companies might restrict these individuals from creating their own companies and limit the proliferation of spyware vendors. To impede talent from pivoting easily between vendors, export licensing bodies could require more detailed information on key personnel and their past employment to help identify serial offenders of the laws and policies of other jurisdictions. Policymakers should also consider focusing on individuals when attempting to limit harmful activities by a vendor, rather than just the vendor as a business entity, given the fluidity of talent between firms.

3. Partnership with hardware surveillance

Spyware vendors in the dataset have sometimes partnered with hardware-based surveillance companies whose products might complement the functionality of their spyware tools. We have identified nine vendors or suppliers known to have at least one partner, with at least five vendors partnering with at least one hardware company. The most active example of these partnerships is the Intellexa Consortium, encompassing relationships between seven distinct hardware firms.60Formed around founder Tal Dilian in 2018, the Intellexa Group comprises several companies including Cytrox AD, WS WiSpear Systems Limited (also founded by Dilian), and Senpai Technologies Ltd.61 In 2020, Intellexa Group expanded to include Intellexa S.A., formerly known as Intellexa Single Member.62 Cytrox AD was formed in 2017 by Rotem Farkash and Abraham Rubinstein in North Macedonia and developed the spyware known as Predator. WS WiSpear Systems Limited specializes in intercepting targeted Wi-Fi signals and extracting passwords and communications at long range, and Senpai Technologies Ltd. is an open-source intelligence company that specializes in analyzing data from phones infected with spyware.63

In addition to the Intellexa Group, there is the Intellexa Alliance, formed in 2019 as a partnership between the Intellexa Group and the Nexa Group. Nexa Group is a cluster of four other companies selling interception technology that retail their products together.64 It remains unclear whether the Intellexa Alliance is still operational, as tensions have emerged between the two entities.65 Together, the Intellexa Group and Intellexa Alliance comprise the Intellexa Consortium, profiled in more detail in the Cyber Statecraft Initiative’s earlier report, “Markets Matter: A Glance into the Spyware Industry.”66

 Figure 6: Mapping connections in the Intellexa Consortium and Nexa Group clusters

This trend appears in the Italian market as well, with vendor Memento Labs srl (subsequently known as Hacking Team srl) partnering with South African firm VASTech, founded in 1999 by Frans Dreyer, to develop a passive interception product for wireless communications (building on earlier work from the firm DataVoice).67 VASTech maintains two offices in South Africa while VASTech AG operates in Switzerland, and VAS Technologies is located in the UAE.68 VASTech would later go on to propose a partnership with Hacking Team srl to directly resell the vendor’s spyware in 2015.69

Figure 7: Mapping connections between Hacking Team srl and VASTech clusters

Firms fostering relationships with others offering complementary products is not novel but it is nonetheless interesting to see in this sample of the spyware market. The phenomenon underlines the further importance of policies that address the market as a whole and collaboration across multiple states, as vendor or jurisdiction-specific actions often have limited effect on these wider relationships. Regulating the kinds of support provided to spyware vendors selling to government agencies could help govern the kinds, and content of these partnerships and extend important transparency measures like “Know Your Vendor” requirements to important firms a step beyond the initial spyware transaction. This recommendation is particularly important due to shifting vendor identities.

This trend also highlights the potentially complex relationship between the spyware market and vendors of other electronic surveillance technologies. An open question for further research is how efforts to constrain spyware sales may impact these complementary tools.70 A further question raised is how substitutable these non-spyware alternatives might be for existing customers and the extent to which spyware firms (like VASTech) offer both spyware and other products to diversify and strengthen their business.

4. Shifting vendor identities

Spyware vendors will change legal names and even shift entire corporate structures, which can serve to obscure their identity and, potentially, manage the impact of negative reporting.

Despite name changes, reporting often refers to entities by their most popularized name. This can obscure the vendor’s ongoing activity and impede researchers, policymakers, and any firms attempting to exercise due diligence in potential investments. On average, the entities tracked in the dataset changed names more than once, an average of 1.4 times over the time observed, with a name lasting an average of 4.5 years vs. an average vendor lifespan of nearly double that length.

To put this into perspective across the rest of the dataset, 14.3 percent of the vendors underwent a name change while 10.2 percent of all entities changed their name (excluding individuals). Holding Companies had the highest percentage of name changes at 34.4 percent, followed by Vendors (14.3 percent), Partners (20.8 percent), Suppliers (five percent), and Investors (2.1 percent).71

Figure 8: Entities change their legal name to obscure their identity and manage the impact of negative press

The most persistently shifting identity is that of the firm originally known as Candiru Ltd, which changed its name four times over the ensuing nine years, and is known at the time of this writing as Saito Tech Ltd.72 The vendor originally known as Candiru Ltd was incorporated in 2014 in Israel by founders Ya’acov Weitzman and Eran Shorer.73 Candiru Ltd has sold products to Hungary, Spain, and the United Arab Emirates, who all used the spyware for political suppression of opposition and civil society.74 The group’s annual name changes between 2016 and 2020 did not come with changes to the corporate structure. In 2021, Candiru Ltd and its associated names were added to the US Entity List alongside NSO Group.75 In popular discourse, the vendor is often called Candiru Ltd, this report refers to all vendors by their present legal name, which for Candiru Ltd is Saito Tech Ltd.

On the other hand, Memento Labs srl, initially named Hacking Team srl, retained its original brand for sixteen years, the longest of any entity in the dataset, until changing it in 2019. Formed in 2003 in Italy by David Vincenzetti and Valeriano Bedeschi, Hacking Team srl developed the Remote Control Systems (RCS) spyware. A wide breadth of information is available on the business model of Hacking Team srl due to a leak of its internal data in 2015.76Hacking Team srl has been reported to sell to Ecuador, Nigeria, and Saudi Arabia, as well as many states, all of which may have utilized the RCS spyware to suppress human rights.77 Despite legal obstacles, including the revocation of the firm’s export license in 2016, Hacking Team srl continued to exist as a company. From 2017 to 2018, there was also a potential spin-off of Hacking Team srl known as Grey Heron, as announced by a Hacking Team srl representative at a security conference in the United Kingdom.78The company was officially renamed Memento Labs srl in 2019 after being acquired by InTheCyber Group fSA, a Switzerland-based investor, in an effort to rebrand itself.79

Another example of this behavior can be observed in the Indian vendor Appin Security Group. Beginning in 2014, Appin Technology Ltd., Appin Security Group’s parent company, began a rapid succession of name changes evolving from Appin Technology Ltd. to Mobile Online Order Management Private Limited, then from this name to Chemieast Engineering, and then from Chemieast Engineering to Sunkissed Organic Farms. Appin Security Group itself also changed names to Approachinfinate Computer and Security Consultancy Grp and then to Adaptive Control Security Global Corporate.“80 This approach echoes Saito Tech’s approach of rapid name changes without significant alterations to business structures. Equus Technologies provides a good example of name changes in response to press and reporting. Founded in 2014 by Matan Markovics, Daniel Hanga, and Tal Tchwella in Israel, Google attributed the firm as the developer of the Lipizzan software in 2017 and labeled the vendor a “cyber arms company.”81 After this reporting, Equus Technologies struggled to recover from reputational damage as it started losing customers and shareholders shrank their positions in the company.82 Equus then changed its name to MerlinX between 2017 and 2018.83 Tal Tchwella, one of its three founders, also left the company.84 MerlinX was later acquired by Bindecy, an Israeli company specializing in vulnerability research, in 2021.85

Figure 9: Charting entity name changes over time

These various examples show why it is difficult for policymakers and researchers alike to keep track of vendors, creating an illusion that a vendor has ceased operations when they are functioning under a different name. This image grows more complicated with subsidiaries and branches as they too may shift names rapidly, furthering an already opaque market.

 To counter this trend, policy solutions can emphasize individual and investor relationships. A baseline improvement for spyware procurement would also be mandatory “Know Your Vendor” requirements to disclose first- and second-order supplier relationships. Better and more consistent transparency from corporate registries would also help establish the link between these identities, even across jurisdictions—which the report turns to next.

5. Strategic jurisdiction hopping

Several of the vendors captured in the dataset appear to have constructed subsidiary, branch, and partner relationships that cross strategic jurisdictional boundaries. These relocations may offer a variety of location-specific benefits, from facilitating sales to the EU market with an EU-domiciled firm to situating branches in states with more forgiving laws.

Figure 10: Number of Jurisdictions Per Cluster

In 2017, the Israeli vendor Quadream Inc. set up a supplier, InReach Technologies Limited, in Cyprus which Quadream Inc. claimed in a later court filing was for the “sole purpose of promoting Quadream Inc. products within the European Union.”86 InReach Technologies Limited’s financial structure also included A.I.L Nominal Services Ltd. (A.I.L), similarly established in Cyprus in 2010 as a holding company, with an individual with a relationship to the Ministry of Defense.

Quadream Inc. and InReach Technologies Limited’s relationship deteriorated in 2020 and they became entangled in a court case against one another.87 While the relationship was strained, it is unclear whether the companies formally separated by the time Citizen Lab released Quadream Inc.’s toolkit in 2023, exposing the company’s capabilities. This led to the company reporting that it would be shutting down operations although the company is still registered in Israel.88

The Intellexa Consortium provides another example of this jurisdictional hopping. One investigation discovered through leaked documents shows how the organizer of the Intellexa Consortium, Tal Dilian, and his partner Sara Hamou, utilized Cyprus as a hub for the Predator spyware to gain access to the European market.89

Memento Labs srl (formally known as Hacking Team srl) provides an interesting exception to this trend as its founders appear to have worked to make it strictly an Italian-based vendor. Like models found in other businesses that boast national pride, Hacking Team srl is proud to be “Made in Italy.”90Their investor base is also mainly Italian, with only two other European countries (Cyprus and Switzerland) present.91

Figure 11: Vendors construct subsidiary, branch, and partner relationships that hop across strategic jurisdictional boundaries

National laws governing the behavior of subject firms are largely premised on the common recognition by both the regulators and the regulated of sovereign boundaries. These boundaries delimit the application of law between, say, France and the United Kingdom. The deliberate construction of branch and subsidiary relationships to cross these boundaries may offer firms a measure of protection from regulatory approaches like export controls and create significant opacity in their operations and supply chains by wrapping even loose cross-border relationships in the cloak of “internal” corporate activity. The pivot of vendors to less restrictive jurisdictions reduces the efficacy of export controls and policy action must better limit the effects of jurisdictional arbitrage. But this trend is not limited to corporate organization, indeed it is reflected in capital flows as well, leading to the final trend.

6. Money from across the world fuels the spyware market

Investors in spyware vendors often cross borders with their capital. Like more conventional markets, spyware vendors and suppliers feature an investor base domiciled in many different jurisdictions. Investment in spyware vendors and suppliers is understudied, despite being a factor in the proliferation of this technology. The sample of the spyware market captured by this dataset includes ninety-five investors identified to date. Among these, outside of investors for whom location was not listed, four jurisdictions were most frequently represented: Italy, Israel, the United States, and the United Kingdom—comprising 46.3 percent of all investors.

The character of spyware investment varies from venture capital, private equity, and government loans to outright acquisitions and direct equity ownership. On average in this dataset, each vendor and supplier had 4.75 identified investors, with Figure 12 highlighting where investors are domiciled and to which cluster they invest. For example, the dataset documents fourteen different US entities investing in spyware vendors or suppliers, the bulk of whom (in twelve of the fourteen cases) are based in Israel. Of note, the Israeli and Italian investors captured in this dataset were likely to mostly invest in their own markets versus the United States and the United Kingdom, whose investors largely sent their capital abroad.

Figure 12: Money flows from investor to vendors, often crossing borders

A specific example of this trend is Paragon Solutions. Paragon was established in 2019 in Israel by Ehud Schneorson, Idan Nurick, Igor Bogudlov, and Liad Avraham.92A few years later in 2022, the firm established Paragon Solutions US, a US-domiciled subsidiary.93 Since its establishment, Paragon has made deliberate efforts to break into the US market. Paragon Solutions also has two US-based investors. Battery Ventures, considered to be one of the world’s top venture capital firms and headquartered in Boston, is an investor in Paragon Solutions as of this writing.94The listing does not name “Paragon Solutions US”. It is also supported by Blumberg Capital, another large US venture capital firm.95

Saito Tech Ltd and NSO Group also have investors domiciled in foreign jurisdictions. Saito Tech Ltd had the US-based Founders Group and NSO Group’s current and past investors include those domiciled in the United Kingdom, like Novalpina Capital, and in the United States, like Francisco Partners Management, Berkely Research Group, and Blackstone Group LP.96The transition in NSO Group investors from Novalpina Capital to other investors took a few years and is well covered in the press.

These cross-border capital flows highlight the importance of international cooperation, and perhaps the central role of the United States and EU, in applying more granular controls and scrutiny on investor relationships in the spyware market.

Improving corporate transparency requirements, such as the US’s recent move to compel companies to report their beneficial owners in line with policies in other countries, will support improved investor due diligence and deal review inside the United States.97 For vendors located outside the US, a recent notice of proposed rulemaking to extend US security review over some forms of outbound investment could provide the basis to catalog and potentially block investment.98Targeted sanctions are another option for limiting investors’ behavior via designating spyware vendors, blocking financial transactions, or designating investors themselves if their actions fall under the scope of spyware-related sanctions authorities. The use of unilateral sanctions is an active and much-debated topic in both the cybersecurity sector and the wider national security policymaking landscape and this report offers a more fully articulated set of policy recommendations based on these trends and the totality of this dataset in the next section.

Policy recommendations

The 2024 Report on the Cybersecurity Posture of the United States from the Office of the National Cyber Director (ONCD) lists the growing market of “sophisticated and invasive cyber-surveillance tools” as one of the trends driving change in the United States’ cyber strategic environment in 2023.99 The UK and French governments have made the proliferation and irresponsible use of commercial cyber intrusion capabilities an important and ongoing policy activity, most notably by leading the deliberately multilateral Pall Mall Process on Cyber Intrusion Capabilities.100 A European Parliamentary committee (the PEGA committee) highlighted the importance of the spyware market as a topic for policymaking and its implications for technology policy, human rights, and national security across the EU’s complex network of delegated powers. These efforts are part of a degree of sustained attention on spyware not seen in the previous decade. The authors are encouraged to note the effective adoption of some of their previous recommendations.101However, much remains to be done.

The final section of this report presents a set of policy recommendations to further advance these efforts. Not every action identified here is suitable for every state. The United States has outsized authorities and resources and sits in a unique position in the international financial system. Bearing that in mind, however, these recommendations envision a necessary cooperative international approach, conscious of the clear trend of the spyware market’s global infrastructure. While a well-regulated and more transparent market will not entirely prevent proliferation, it can be better channeled, subjected to controls, and made less opaque and harmful, as done with other markets of dual-use goods.

This project is focused on the spyware market, with the goal of fostering greater transparency across the market, limiting jurisdictional arbitrage, and more effectively scrutinizing supplier and investor relationships. These recommendations are built on a sample of this market, not a final and definitive record of all entities and relationships. Achieving these policy goals would address significant opportunities to limit the risks and harms stemming from the proliferation of spyware. These include more granular and effective policies on vendor behavior, a robust investor due diligence regime, information-rich government risk assessments prior to procurement, and credible legal support for greater long-term transparency in the market.

Figure 13: Mapping spyware trends to policy recommendations

1. Mandate “Know Your Vendor” requirements

If the defining characteristic of spyware acquisition should, in theory, be an exercise of due diligence, then a marked gap in current policy is the ability to exercise that diligence in the face of shifting vendor identities and supply chains. There is, however, a practical solution: employing “Know Your Vendor” (KYV) requirements. Building on previous recommendations made by the Cyber Statecraft Initiative, the United States and, at a minimum, the sixteen additional signatories to the Joint Statement, should enforce KYV requirements that spyware vendors disclose supplier and investor relationships.102This is a credible step toward better information about the segments of the spyware market with which these governments might do business. Such a KYV requirement, implemented consistently across these states, would present a united front to many of the vendors covered in this report who claim to work only with “government” and “Western government” clients. This would also mitigate the potential impact of individual governments fearing vendors would turn down their business.

KYV would create a more consistent reporting environment on the spyware market in these states, providing government clients with the ability to check where their prospective supply chain might include firms on restricted entity lists before awarding contracts. With straightforward information sharing, KYV would also enable long-term efforts to reduce government dollars flowing to high-risk suppliers or vendors. A more effective version of these requirements could mandate disclosure of firms further down the supply chain (suppliers to the suppliers of a vendor).

The United States could set KYV requirements through the US Federal Acquisition Regulatory Council, which would require an update to the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS) to mandate that any company submitting a bid for a government contract for cyber operations to disclose a list of their vendors and suppliers, investors, and any parent corporate or holding entity. A notice and comment period for such a requirement would likely see vendors request more targeted disclosure requirements for larger conglomerate firms and mechanisms to scope KYV to spyware business units in these firms would be appropriate. KYV would also complement and be strengthened by more effective beneficial ownership requirements.

2. Improve government-run corporate registries

Similar to KYV, government-run corporate registries are a resource for due diligence and accountability. These registries would play a significant role in a more assertive policy regime that addresses the cross-border movement of, or investment in, spyware vendors. These registries would also be an important source of information for due diligence by potential investors as well as provide improved visibility into business entities operating within respective jurisdictions

For corporate registries to become sources of truth on business and financial structure and histories of vendors, they must be comprehensive, openly accessible to the public, and have verified information. However, currently, the information in corporate registries varies from country to country. For example, the corporate registry of the Czechia is comprehensive and contains information about the different names a company has used since its inception, its history of investment by various investors, as well as the individuals who held senior executive offices and their tenures.103 In contrast, the registries in India and Israel provide only basic information about entities such as the legal name of the corporation, address, date of incorporation, and registration number.“104In the United States, every state maintains a separate corporate registry of the entities incorporated in their jurisdiction.

2.1 Expand the minimum scope of data captured by registries

National regulations should determine requirements about the categories and corresponding details present in their corporate registries.105 They should include basic company information (name, registration number, payment ID, address, contact details, and date of registration), ownership details (senior executives, management board, and actual beneficiaries), the number of employees, financial information (balance sheet, cash flow, income statements, and investors), history of name changes, and the legal status of activity (liquidated, active, or bankrupt). This information serves as a bare minimum but could be expanded to include a history of mergers and acquisitions, legal actions against the firm, and active export licenses.

In the United States, this could be accomplished through the National Association of Secretaries of State (NASS) providing guidance to each of the fifty US states. Alternatively, given some risk of a race to the bottom between US states eager to attract corporate activity, the IRS could publish this data where it is collected for Federal purposes.106

Outside the US, given the global nature of the spyware market, there is merit in improving corporate registries, especially for the seventeen countries signed on to the Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware. As the Joint Statement evolves, the Department of State’s Bureau of Cyberspace and Digital Policy should consider developing subject-specific working groups—including one on corporate registries and associated information—to bolster harmonization and information sharing across signatories. These countries should seek to work towards seamlessly sharing this registry data where it is not made public. Regularly cross-referencing information on a vendor, especially ones with branches or subsidiaries in multiple countries, can be beneficial in avoiding jurisdictional hopping and arbitrage.

2.2 Expand beneficial ownership identification

In January 2024, the US Department of the Treasury unveiled its Beneficial Ownership Program (BOP), seeking to improve corporate filings on the “persons who ultimately own or control the company.”107 Most countries, however, do not have reporting requirements on beneficial ownership, and many that do have insufficient standards.108 Better recognition of the beneficial owners behind spyware vendors and, eventually, suppliers, would provide a strong counter to many of the identified trends in this report.

Further improvements can be made to BOPs worldwide. Analysis of beneficial ownership registries of G7 countries Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States (all of whom, except for Italy, are signatories of the Joint Statement) showed that while these countries maintain beneficial ownership registries that require companies to report individuals who own twenty-five percent or more of the shares and voting rights or exert significant control over the management of the company, most do not require any identification document to be submitted by such individuals for verification.109

While the US entity that is responsible for beneficial ownership registration, the Department of Treasury’s Financial Crimes Enforcement Network (FINCEN), requires entities to provide an image of an identification document for every beneficial owner, the United Kingdom’s Companies House register takes no steps to verify name or address data provided by applicants.110 As highlighted by the capital crosses borders trend, the United States and the United Kingdom are two (of several) concentrated hosts of investors in the spyware market in the sample provided by this dataset. It is especially important for these jurisdictions to improve their methods for identifying beneficial owners.

Other signatories of the Joint Statement do not have BOPs. Transparency Data’s study of European business registers reveals that out of the eleven central and eastern European countries that they analyzed, only three required companies to provide information about actual beneficiary owners. Finland, Sweden, and Switzerland (all signatories on the Joint Statement) do not even require information about the founders or owners of companies.111

Beneficial ownership transparency is endorsed and monitored by several international organizations, such as the Financial Action Task Force (FATF) and the OECD’s Global Forum on Transparency and Exchange of Information for Tax Purposes.112 The seventeen countries that are signatories of the Joint Statement without BOP programs or reporting requirements on founders or owners of companies should enact reporting requirements within their respective jurisdictions which harmonize with both US approaches and global best practices.

2.3 Make government-run corporate registry data public

A final improvement to these registries would be to provide universal public access to their data, in all jurisdictions. OpenCorporates’ report, “The Closed World of Company Data,” scores countries on the openness, depth, and accessibility of national corporate data records.113 While the average score is only twenty-two percent of the maximum, several major states—Spain, Greece, and Brazil—scored 0, which means that their corporate registers cannot even be searched without some form of payment.114 The United Kingdom’s scored highest while the United States was only a few ticks above the average.115 The poor quality of these records hampers due diligence efforts by many actors. For example, high-quality and publicly accessible corporate databases would provide raw informational material for a significantly enhanced investor due diligence regime in the spyware market. It would also help to level the playing field between regulators in different jurisdictions. This would enable cross-border collaboration in regulating the behavior of spyware vendors, suppliers, and investors. This would also allow entities subject to customer due diligence (e.g. banks, notaries, corporate service providers) to improve their verification processes and report discrepancies.

3. Enrich, audit, and publish export licenses

Export licensing requirements are a mechanism for governments to limit the sale and use of certain products and services outside of their borders. Export licenses are a complex domain characterized by significant inter-state variability in standards, covered goods, and application of broader tests of public interest, such as human rights considerations.116 Indeed, authorities may deprioritize human rights risks if countervailing considerations such as industry growth or perceived geopolitical influence weigh in favor of license approval.117 The Joint Statement commits its signatories to implement export controls on spyware technology in accordance with their respective laws and regulations.

Certain spyware vendors, like NSO Group, have publicly capitalized on the fact that their exports are licensed by government agencies as an indication of their lawfulness.118 Decoupling licensing decisions made in deliberate furtherance of geopolitical goals over those dominated by commercial considerations is a tricky and ongoing research question.119 As noted by the UN Special Rapporteur on Freedom of Opinion and Expression, the global export control framework and its national implementation in areas where NSO Group operates are inadequate for regulating surveillance technology or accounting for human rights impacts.120 The result is that while NSO Group’s exports are indeed “licensed,” these and those of other vendors could still present a grave risk to human rights, especially in jurisdictions where the legal framework governing the use of its product is minimal or even nonexistent.

Export licensing regimes can act as a legal mechanism to collect vendor records and some limited activity data. This recommendation suggests strengthening them for that purpose in three ways.

First, export control licenses for spyware and closely related services should include the names of all employees whose work has a material impact on the development of the product subject to the export license. This is important information for policymakers on two grounds. First, these employees are tied to a specific product or spyware service in a semi-public record in perpetuity. This may have some deterrent effect in choosing to work with these registered vendors and stops short of lifetime bans or similar sanctions. The record serves as an indicator of behavior at a certain point in time but does not constitute a lifetime scarlet letter. Second, these companies are tied to those employees, also at a point in time. The sample of the spyware market captured in this dataset has shown vendor names and labels to be a fluid construct and policies should be focused on learning, and potentially shaping, the behavior of individuals in this market more directly. Should a vendor seek to shift jurisdiction and apply for appropriate export licenses for the same product in their new domicile, embedded “critical employee” information in these licenses would ensure a paper trail.

The determination of what constitutes material impact, as well as defining covered products beyond the language in the Wassenaar Arrangement, would need to be determined by each county at the national level. In the US, this should come from a Bureau of Industry and Security (BIS) policy guidance. Under the Department of Commerce, BIS is already “actively engaged in formulating, coordinating, and implementing various export controls to counter the use of items subject to the Export Administration Regulations (EAR) that could enable human rights abuses or repression of democracy throughout the world”.121 More widely, this would inform a working group under the continuing Joint Statement mechanism as the seventeen signatories seek to harmonize their definitions and enforcement of this additional requirement.

Second, to ensure export controls are effective domestically, governments should build mandatory and regular audits into licensure practices along with punishments for non-compliance. In the United States, the BIS is responsible for regulating the export of dual-use products and services under EAR, including the Commerce Control List of Dual-Use Items (CCL). BIS’s Export Compliance Guidelines contain a section on audits with broad guidelines for auditors, but poor execution or a lack of audits is a recurring barrier in the implementation of EAR.122

An export license for software should also include an explicitly time-bound permission to export. The concept of “continuous monitoring” is an approach to security and compliance in cybersecurity that acknowledges software cannot be evaluated and “signed off” on at a single point in time, but rather needs to be continuously tracked throughout its operation. The same is true of the spyware vendor business model and regular audits of these licenses would accomplish much the same ends. The auditing process, conducted by the licensing authority with appropriate specialized support, would enhance transparency in the export control process and allow licenses to be revoked in the face of evidence of abuse or misuse of spyware vendors’ products. An effective audit would scrutinize all aspects of an export licensing process including application procedures, decision-making criteria, approval processes, monitoring mechanisms, and compliance enforcement. In this way, any discrepancies or red flags discovered could be disclosed to partner agencies to execute audit recommendations.123

Third, these audit reports and the original export licenses should also be made accessible to the public by the national licensing authority; this would be BIS in the case of the US. Reasonable redaction of personally identifiable or business-sensitive information could be made but this should be weighed heavily against the significant public interest in greater transparency in the activities of spyware vendors. Public export license records would complement largely private KYV data and allow for the broader research and civil society community to fulfill an important role as external accountability mechanisms.

Export controls are, at best, a marginal utility in regulating the spyware market. Their focus on transactions emphasizes one of the least regulable steps in the spyware supply chain and to expect licensing to address myriad end uses or all facets of vendor behavior would be wildly optimistic. Instead, this recommendation proposes to take export licensing for the marginal benefit it might offer policymakers and deriving additional value from improved transparency—without leaving export controls as the sole, or even most critical, line of defense against the risks of spyware.

4. Limit jurisdictional arbitrage by vendors

Imposing policy outside of a state’s jurisdiction is challenging and presents opportunities for spyware vendors and others seeking to elude regulatory controls. This extraterritoriality is often exploited by spyware vendors to engage in jurisdictional arbitrage and take advantage of inconsistencies across governments. This recommendation outlines several steps to limit this arbitrage, focusing on raising the barrier for vendors associated with an export license for electronic surveillance technologies, including spyware, to leave a single jurisdiction and mandate reporting of new branches and subsidiaries. To keep things simple, this recommendation defines surveillance technologies using the State Department’s language from the Joint Statement: “Technologies used for surveillance can refer to products or services that can be used to detect, monitor, intercept, collect, exploit, preserve, process, analyze, invasively observe, and/or retain sensitive data, personally identifying information, including biomarkers, or communications concerning individuals or groups.”124

Jurisdictional arbitrage by spyware vendors undermines the rule of law, efficiency of regulatory supervision, and market integrity, and can trigger regulatory competition amongst different jurisdictions where states adopt low-standard regulatory requirements to attract business and investments.125For example, the European Union, theoretically, has strong regulations against spyware vendors including Council Regulation (EC) No. 2021/821 which ensures regulatory consistency across EU member states.126However, the bloc faces fragmentation when it comes to implementation with countries like Bulgaria, Cyprus, Greece, Italy, Malta, and Hungary demonstrating highly variable political commitment to or institutional capacity for strict export controls.127Intellexa is a prime example of a vendor that established new subsidiaries in these countries to take advantage of jurisdictional arbitrage.128

To address this problem, policymakers should first make it more challenging and costly for a vendor with an export license to exit a jurisdiction. This should include mandating the public disclosure of any subsidiary or branch openings or closures by a vendor that has been granted a license to export spyware. Second, policymakers with existing authorities to regulate inbound investment (such as the Committee on Foreign Investment in the United States–CFIUS) should automatically review transactions impacting the ownership structure of domestic spyware vendors in any way. Vendors that fail to flag such transactions (such information may come to light in mandated KYV disclosures, for instance) should be barred from participating in government acquisitions and/or have their export licenses suspended for some period.

 Consider the example of the disclosure requirements of the banks which are members of the Federal Reserve System in the United States.129These banks are required by the Federal Deposit Insurance Act to seek approval of and disclose to the Federal Reserve (as well as the public) any instances of openings, closures, or mergers with another bank, including the closure of any bank branches as a result of mergers and acquisitions.130 This requirement ensures that the Federal Reserve can monitor and supervise the geographical footprint and operational changes of banks across different states and regions.131 In the context of the spyware market, a notification requirement would enhance the transparency of the market and help put regulators in different jurisdictions on a more equal footing. It would also allow key stakeholders, including civil society organizations, to have visibility into the operations of spyware vendors and their compliance with regulatory requirements.

Third, harmonizing these disclosure requirements should be the subject of another working group under the Joint Statement or a similar collaborative mechanism. Effective barriers to market exit by these spyware vendors would help to improve the potential influence of domestic policies, including those across the EU, on these firms. Consistency in those barriers would reduce the incentives toward arbitrage.

5. Provide greater protection against Strategic Lawsuits Against Public Participation (SLAPP)

One of the abiding trends of this research and the work of the broader analytic community examining spyware is the tremendous importance of open reporting. There is no substitute for applied public policy analysis in this space but the relationship between journalism and research is deeply symbiotic. A disturbing recent trend threatens to undermine this reporting as a handful of spyware vendors deploy Strategic Lawsuits Against Public Participation (SLAPP).132

In 2022, the news outlet Reuters was sued for defamation against a vendor’s parent company profiled in this report, Appin Technologies and Appin Security Group. As a result, Reuters removed an investigation into the group’s activities from its website.133 This lawsuit sets a dangerous precedent for journalists and researchers alike who offer much-needed transparency into an already opaque market.

To address the harms and frequency of SLAPP suits more generally, the European Commission established a set of rules in May of 2024 that provide heightened protections for speech on matters of “public interest.”134 The essential elements of these rules would be a suitable starting point for comparable policies in the United States and elsewhere, including:

  1. Accelerated treatment of issues raised under these heightened protections.
  2. The possibility for early dismissal of “claims against public participation” which are determined to be “manifestly unfounded” at “the earliest possible stage in the proceedings, in accordance with national law.”
  3. Provision for the recovery of all costs of the proceedings by defendants and the potential for application of “effective, proportionate and dissuasive penalties or other equally effective appropriate measures, including the payment of compensation for damage or the publication of the court decision” on the party initiating the action.135

It would also be welcome for states that host the victims of SLAPP suits to raise the issue through existing diplomatic channels with states hosting parties initiating these suits. In practical terms, the State Department should address the impact of the Appin suit against Reuters with the Indian and UK governments (as the domicile of the claimant vendor and the court of jurisdiction respectively). This does little to impact the suit directly but shows awareness by the US government and may raise the costs of action by firms in these countries.

Areas for future work

These recommendations focus on achieving greater transparency across the spyware market, limiting jurisdictional arbitrage by vendors, and more effectively scrutinizing supplier and investor relationships with those vendors. They do not address the full range of issues that urgently need greater attention and resolution in the proliferation of spyware. This section addresses these opportunities for future work and concludes with a call for consensus and action by at least a small group of states to advance these and related policies.

Bringing the brokers back in: There is insufficient coverage of supplier firms in this dataset relative to the number of spyware vendors. These firms, some of which might be categorized as exploit “brokers,” are important to the discussion of how to reduce the harms associated with spyware, but they are less widely reported on and not systematically understood. These firms traffic in information, and to some degree talent, whose product is not intrinsically malicious (and which has been subject to poorly conceived controls in the past). At the same time, the activities of these suppliers and brokers are a critical wedge between advocates for more effective spyware policy rooted in national security concerns and those advocating from a human rights-centered perspective (acknowledging a degree of overlap between the two). Better information on the diversity of locations, activities, and organization of these firms would benefit an otherwise opaque segment of this market whose activities extend far beyond spyware. The length of supply chains across the spyware market, and the number of steps between suppliers for high-risk vendors as well as governments otherwise practicing adequate due diligence against abuse, could be a substantial driver of unaddressed risk and merits further investigation.

Spyware vendors or suppliers partnered with major tech firms: There are several instances where spyware vendors or suppliers have formed partnerships with conventional technology firms. Positive Technologies may be the most notable, having previously been a member of Microsoft’s Active Protections Program (MAPP) and publicly advertising its work with Samsung.136 The structure of these relationships may be tied to vulnerability disclosure, underlining the complex role played by vulnerability discovery and exploitation in both offensive and defensive activities.

A “who’s who” for spyware investors: The incentives for different kinds of investors (for example, venture capital firms vs. private equity) are clear in conventional markets, but much less so in the spyware sector. Both types of entities appear in this dataset. Designing a more robust due diligence regime for investors in these vendors would benefit from a more precise understanding of the motivations of different types of investors for entering the market.

The customer might often be wrong: This project has not yet covered the range of government customers to which this market largely caters. The behavior of these agencies, parties, offices, and bureaus should be of significant interest as they are the ultimate source of demand that shapes the spyware market. States willing to take affirmative action against the spyware market may also quickly find they have better existing tools to shape the behavior of their allies and partners, integrating spyware into wider defense assistance, trade, and legal cooperation agendas. Cataloging customers, their relationships with specific firms and associated supply chains, and the timing of these relationships is a fruitful area for future work. How those customer relationships form and their portability between spyware vendors is also worth continued analysis.

A role for technology companies: Technology firms have a role to play in shaping the spyware market, if for no other reason than they may be supplying Software-as-a-Service (SaaS) and other technologies to spyware-related firms. Research from Amnesty International and others in 2021 established the NSO Group was using AWS products as part of the command-and-control infrastructure for its spyware product and even earlier reporting from Motherboard pointed to NSO delivering its product from an Amazon IP address.137Discovery as part of an ongoing lawsuit between Meta and NSO established that NSO became an AWS customer beginning in 2018.138

Technology companies might play at least two roles in further shaping the spyware market. First is executing due diligence on technology sales, especially export-controlled hardware and cloud services over a certain dollar threshold, to determine if the customer is an identified participant in the spyware market. Those identified customers should be subject to scrutiny and possible removal but early identification would help cloud vendors avoid discovering via the media that their services are being used in the development or deployment of spyware. Second is major technology firms, perhaps only cloud service providers, developing a common code of conduct as to how they sell services to participants in the spyware market and under what conditions they might limit or refuse sales. The development of both this review policy and the code of conduct are prospects for future work. In the meantime, major technology companies like Meta, Apple, Google, and Microsoft remain some of the only entities with both the standing and the resources to conduct sustained legal action against vendors selling products that create significant risk or harm.

Whistleblowers: While open reporting and journalism are important sources of transparency in this market and deserve heightened protection, the status of whistleblowers from spyware vendors or customers is uncertain. Consolidated guidance as to the human rights or procedural responsibilities of spyware vendors and their customers, accounting for jurisdiction, would help to clarify under what conditions an employee or government official might act with adequate legal protections. Specifying the kinds of legal or ethical expectations of vendors and customers, as well as clarifying protections both against retaliation and to navigate limits on publishing proprietary information, at a national level, would help strengthen an important channel of information on the use and abuse of these products.

Clarifying de-listing procedures: An important feature of an effective sanctions regime is a transparent de-listing procedure.139 Given the purpose of imposing sanctions is to cause behavior change in the sanctioned entity or individual, sanctioned entities and individuals should be able to request de-listing once they have demonstrably changed their practices. This is important for ensuring the legitimacy and credibility of sanctions regimes. Hence, governments and multilateral organizations should clearly specify the process and conditions that result in the de-listing from their sanctions list and both researchers and advocates can help map out these processes.

What comes next: How do these and other proposed changes impact the shape and jurisdictional concentration of the spyware market? One of the risks for more assertive regulation of spyware development, sales, and use is vendors, suppliers, or other entities moving to jurisdictions outside the reach of those states using policy to shape this market. The suggestion is not unwarranted in no small part because of the jurisdictional arbitrage already observed in this market. More work is needed to understand what policies might pose the greatest likelihood of this market shift or its segmentation into multiple tiers. This dataset (a sample of the market) shows that even vendors in “hard to reach” jurisdictions with local governments unwilling to regulate their behavior still rely on foreign capital and suppliers. These investors and suppliers are often based in states with a demonstrated willingness to enforce existing policies on spyware. Therein lies the opportunity for change.

Conclusion

There is a certain macabre humor in the lengths that spyware firms go to in obscuring their true nature and purpose, disguising themselves as mythical beasts in an obscured global market. The purpose of this report is primarily to demystify—or demythify—the global spyware market, moving beyond coverage of individual firms to unveil a network of relationships between spyware vendors, suppliers, and investors across forty-two different countries. This data is only a sample but it evidences several trends, including cross-border financial support, shifting vendor identities, and a pattern of jurisdictional arbitrage which would undermine discrete national-level efforts to reshape this market. There is more to be discovered about this market and the authors’ sincere hope is that this project provides support to many other researchers, analysts, and advocates.

Policies that work to regulate and influence the spyware market, which are coordinated amongst at least a small set of countries, have better prospects to reduce the harms and risks posed by spyware. Policymakers who succeed at improving transparency in this market, raising barriers to vendor reorganization and reincorporation, and applying greater scrutiny on supplier and investor relationships will directly confront critical drivers of spyware’s proliferation and abuse. The recommendations presented in this report address these three priorities while laying the groundwork for granular controls on transactions and policies based on distinctions in the legitimate use of spyware.

Most available evidence suggests that spyware sales are a present reality and likely to continue. Proliferation heedless of its potential human rights harms and national security risks, however, is not a stable status quo. Little of the present market for spyware is regulated or governed well enough to address these harms and risks. In some areas, there is a pressing need for additional research but in many others, the initiative sits with policymakers. Nascent steps by a handful of countries demonstrate that a more vigorous approach to shape the behavior of spyware vendors, their supply chain, and their investors is possible. Where such progress has been stymied by a lack of systematic data on this market, Mythical Beasts offers a contribution. However, much more remains to be done.

Acknowledgments

Mythical Beasts was made possible with the support of Microsoft and the UK’s National Cyber Security Centre, we extend sincere thanks to the teams in Redmond and London for their commitment to this work. This project owes a debt of gratitude to the security research community, in particular Sophia d’Antoine, and contributors both inside and out of the Cyber Statecraft networkFor early conversations that helped shape this research thank you to Kirsten Hazelrig, James Shires, and several others who wish to remain anonymous. For peer review of this report, the dataset, and its interactive, thank you to Chris Delaney, Kaja Ciglic, John Hering, David Agranovich, Ingrid Dickinson, Emma Schroeder, Jorn Fleck, Clement Lecigne, Salo Aburto, Jessica Dabrowsky, James Batchik, Constantine Stasinopoulos, Jennifer Brody, and Lisandra Novo.

Thank you to Stewart Scott and Alexander Beatty for their contributions which early on shaped the focus of this report and perspectives on its art. Thank you to Nancy Messieh for all the graphics and to the team at Schema Design for the interactive version of this dataset. Thanks and appreciation are due to Andrey Prokopenko for the artwork and for his creative output despite terrific challenges. Major credit to Emma Taylor as well as Jean Le Roux and Sopo Gelava without whose collaboration the dataset and its presentation would not have been possible. Thank you to Natalie McEvoy, Charlette Goth-Sosa, Kristopher Kaliher, and Donald Partyka for review, editing, and production. For feedback on this project as it evolved thank you as well to the attendees of more than half a dozen roundtables and events over the past year and to the members of two different communities for your time, questions, and willingness to engage.  

About the authors

Jen Roberts is an Assistant Director with the Atlantic Council’s Cyber Statecraft Initiative. She primarily works on CSI’s Proliferation of Offensive Cyber Capabilities and Combating Cybercrime work. Jen also helps support the Cyber 9/12 Strategy Challenge and is passionate about how the United States with its allies and partners, especially in the Indo-Pacific, can cooperate in the cyber domain. Jen holds an MA in International Relations and Economics from Johns Hopkins University’s School of Advanced International Studies (SAIS) where she concentrated in Strategic Studies. She also attained her BA in International Studies from American University’s School of International Service.

Trey Herr is assistant professor of Global Security and Policy at American University’s School of International Service and Senior Director of the Atlantic Council’s Cyber Statecraft Initiative. At the Council, the CSI team works at the intersection of cybersecurity and geopolitics across conflict, cloud computing, supply chain policy, and more. At American, Trey’s work focuses on complex interactions between states and non-state groups, especially firms, in cyberspace. Previously, he was a senior security strategist with Microsoft handling cybersecurity policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution at Stanford University. He holds a PhD in Political Science and BS in Musical Theatre and Political Science.

Nitansha Bansal is an assistant director with the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Tech Programs. In this role, her research focuses on the proliferation of offensive cyber capabilities, including spyware and its policy implications for human rights and national security, and open source software security. She also supports the capacity building efforts of CSI, and runs the Congressional Cyber and Digital Policy Program. Prior to joining the Council, Nitansha worked with government and think tanks in India on technology policy. She holds a masters in public administration from Columbia University’s School of International and Public Affairs where she concentrated on cybersecurity and business risk, social media policy, and data analysis.

Nancy Messieh is Deputy Director for Visual Communications with the Atlantic Council’s Cyber Statecraft Initiative, where she focuses on visual storytelling and data visualization.

Emma Taylor is a Research Assistant with the School of International Service and a highly interdisciplinary professional pursuing an M.S. in Computer Science and Cybersecurity with previous experience in the technology industry.

Jean Le Roux conducted this research as a research associate for the Sub-Saharan Africa region at the Atlantic Council’s Digital Forensic Research Lab (DFRLab). He is now a senior investigator at Graphika.

Sopo Gelava is a research associate for the Caucasus with the Atlantic Council’s Digital Forensic Research Lab. Prior to the DFRLab, she served as media literacy programs director at Media Development Foundation, leading Georgian think-tank countering disinformation and information operations.

Appendices

Appendix A- Supplier and Vendor Profiles

This appendix summarizes the profiles of suppliers and vendors within the dataset that are included in the analysis of this report. The authors sought to include this information to outline details about all entities within the dataset.

Suppliers

Azimuth Security

In 2010, Mark Dowd and John McDonald founded Azimuth Security, an Australia-based exploit developer and boutique hacking firm.“140 The company gained notoriety for its role in unlocking the San Bernadino shooter’s iPhone in 2016. In 2018, an American firm, L3 Technologies, now L3Harris, purchased Azimuth Security and Linchpin Labs of Canada.141Today, Azimuth and Linchpin Labs operate under the brand name “Trenchant.” The Trenchant group of companies operates across three jurisdictions: United Kingdom, Canada, and Australia. L3Harris Trenchant Canada Inc. (Canada), and Australia.142In the past, Azimuth supposedly restricted sales to members of the Five Eyes intelligence alliance—Australia, Canada, New Zealand, the United Kingdom, and the United States.143 Trenchant does not currently face any restrictions to its exploit sales or business operations.

Blue Oceans Technologies

Blue Ocean Technologies is an Israeli supplier that was incorporated in 2015144 by retired Brigadier General Rami Ben Efraim and Lieutenant Colonel Ron Tira.145An Israeli newspaper, The Globes, reported that Blue Ocean Technologies is an exception in the Israeli spyware market since it was established as part of a deal between an East Asian country and the founders of the firm.146Intelligence Online claims that the East Asian country is Singapore, and Blue Ocean Technologies received two export licenses from the Israeli Defense Ministry to provide the Singaporean Ministry of Defense with a team of vulnerability researchers to weaponize Singapore’s cyber tools.147

Brigadier General Rami Ben Efraim, through his strategic consulting firm Lee and Rami Ben-Efraim Ltd. (also known as BNF Group), holds options in Blue Ocean Technologies.

Computer Security Initiative Consultancy PTE Ltd. (COSEINC)

Founded in 2004 in Singapore,148 Computer Security Initiative Consultancy PTE Ltd. (known widely as COSEINC) is known for distributing exploits without control and known to host pwn0rama—its own cyber vulnerability acquisition program149—and is classified as a supplier within this dataset. The Bureau of Industry and Security (BIS) of the US Department of Commerce added COSEINC to its Entity List for Malicious Cyber Activities in November 2021 based on a BIS determination that the vendor “traffic[s] in cyber tools used to gain unauthorized access to information systems, threatening the privacy and security of individuals and organizations worldwide.”150

COSEINC was founded by Thomas Lim, who is known for organizing a security conference, SyScan, until it was sold to Chinese technology firm Qihoo 360, another sanctioned entity.151 In 2015, WikiLeaks exposed Lim’s attempt to sell hacking tools to Italian Spyware vendor Hacking Team srl,152thereby, hinting at a possible connection between COSEINC and Hacking Team srl.153In 2022, the company became inactive.154

Crowdfense Technological Project Management – Sole Proprietorship LLC

Founded in 2017 in the United Arab Emirates (UAE), Crowdfense Limited buys, develops, and sells zero-day exploits that target a variety of platforms. In 2018, Crowdfense Limited launched its first bug bounty program with a $10 million budget.155Since then, the company has continued to grow its bug-bounty budget year over year as it expands the scope of its “interest” areas. According to the UAE business registry, Crowdfense Limited dissolved in 2023 and a new entity named Crowdfense Technological Project Management – Sole Proprietorship LLC was registered. In 2024, Crowdfense Technological Project Management-Sole Proprietorship LLC boasted a $30 million budget that now includes exploit acquisitions related to “Enterprise Software, WiFi/Baseband and Messengers.”156The company maintains an unknown number of offices in Abu Dhabi157 and some reporting indicates it receives financial backing from the governments of the UAE and Saudi Arabia.158

Dataflow Security s.r.l.

DataFlow Security s.r.l. (AKA DFSEC) was founded in 2022 by Ofer Cohen.159Based in Italy, the company specializes in vulnerability research and exploit development.160 This report classifies DFSEC as a supplier due to its development, optimization, and sale of exploits. DFSEC’s internal client website contains a catalog of exploits for purchase. In 2022, Dataflow Security Spain SL was established in Spain.161 In the same year, Dataflow Forensics was established as a sister company to DFSEC focused on defensive cybersecurity operations.162DFSEC acquired a majority stake in Random Research, an Israeli company also founded by Ofer Cohen.163 At this time, there is little available information concerning DFSEC funding. However, per the official Spanish corporate gazette, the sole shareholder of Dataflow Security Spain SL is Dataflow Security s.r.l., and while there has been no update to the company’s shareholders since its incorporation its share capital increased from 3,000 Euros to 153,000 Euros on June 28, 2024.164This commonly indicates a new investment and/or a new shareholder. However, limited companies are not required to declare shareholders in the Spanish public gazette. This group of firms has not faced any significant roadblocks to business operations. 

PARS Defense

Registered in Turkey in 2021, PARS Defense was founded by Ibraham Baliç, an individual who has been operating as a “vulnerability specialist” since 2010.165PARS Defense specializes in detecting vulnerabilities and operating codes on mobile systems and is coded as a supplier within the dataset. Google identified two vulnerabilities attributed to PARS defense that were present in iOS.166

No information was found on PARS Defense subsidiaries, partners, holding companies, or investors.

Protect Electronic Systems LLC

Founded in 2016, Protect Electronic Systems LLC, also known as Protect and Protected AE, is a supplier based in the United Arab Emirates.167The company was reportedly founded from what remained of DarkMatter’s zero-day exploit.168 More recently, Protect Electronic Systems received attention due to its “special relationship” with Variston IT, a vendor tracked in this report.169Protect Electronic Systems built upon Variston spyware’s “framework and infrastructure” to create a polished product to sell directly to brokers and governments.170 At this time, little is known regarding Protect Electronic Systems’ investor base; however, some sources indicate the company may receive state funding.

RebSec Solutions

RebSec Solutions was incorporated in 2012 by Vishvadeep Singh in India and is classified as a supplier within this dataset. It was not possible to identify any institutional or angel investors in RebSec Solutions but the quality of data in open reporting on this firm is limited.171

Zerodium LLC

In 2015, Chaouki Bekrar founded Zerodium LLC in the United States. Bekrar previously founded and led Vupen, a French zero-day exploit vendor. Vupen clients reportedly included “vetted” NATO government agencies, specifically the US National Security Agency (NSA).172 After Vupen dissolved in 2015, Zerodium LLC emerged to provide identical services in the zero-day exploit industry.173Vista Incorporations Limited is Zerodium LLC’s registered agent in Delaware.174Amidst a market typically shrouded in financial mystery, Zerodium LLC was one of the first firms to put out ads detailing desired exploit specifications with corresponding prices.175 Other companies, including Russia’s OpZero, Have followed suit and adopted similar public marketing strategies.176 Currently, Zerodium LCC is a privately held venture capital-backed company; however, little information exists concerning the company’s investor base.177

Vendors

Aglaya Scientific Aerospace Technology Systems Private Limited

Aglaya Scientific Aerospace Technology Systems Private Limited (Aglaya) was founded in 2014 in India by Ankur Srivastava. In addition to spyware services, Aglaya markets itself as a zero-day seller and a censorship-as-a-service company specializing in online trolling and disinformation.178Offering to run its buyer’s spyware operations for 2,500 euros per day and 600 euros for disinformation campaigns, Agalya positions itself in an interesting space in this market as it sells to non-government entities.179

The financial structure of the company is also entirely based in India, with holding companies becoming inactive in 2021, alongside the vendor itself. Aglaya was included in this research report to highlight the market for spyware outside of corporate-to-government sales, as well as the wide range of product types offered by full-service vendors, which can include not only the spyware product itself but command and control package offerings.

Cognyte Software Ltd.

Cognyte Software Ltd. was established as an independent company in 2020, registered in Israel, after the US-based Verint Systems Inc. separated its customer engagement business from its cyber intelligence business due to shareholder pressure. As a result of the separation, Cognyte Software Ltd. focuses on the “security analytics software market.”180 Its CEO is Elad Sharon and has subsidiaries in India, Brazil, Bulgaria, Canada, Delaware USA, Mexico, the United Kingdom, Israel, Taiwan, Thailand, Germany, Cyprus, the Netherlands, and Romania. Cognyte is a public company that trades its shares on the NASDAQ, with Visa Equity Partners as its largest institutional shareholder.181

Cognyte Software Ltd., however, has a history dating back to 1994 when Verint Systems Inc. was incorporated as Interactive Information Systems Corporation in the United States. The firm developed AudioDisk, a digital surveillance product intended to be used by police and intelligence agencies to record and store wiretap material.182 Two years later, the company changed its name to Comverse Information Systems Corporation, which later merged with Comverse InfoMedia Systems to create Comverse Infosys.183 The US Department of Defense is known to be a customer of Comverse Infosys’ AudioDisk product.184 After the 9/11 attacks, Comverse Infosys changed its name to Verint Systems Inc. and launched its Initial Public Offer (IPO).185 In 2013, Verint Systems Inc. was separated from other businesses of Comverse Technology and made a standalone company with Dan Bodner as its CEO.186 This is the company that eventually became Cognyte Software Ltd. 

The company and its subsidiaries have been in controversies since its inception. In 2001, Fox News reported that AudioDisk used by the US government may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.187

In 2006, it was delisted from the NASDAQ due to allegations of being a part of an options backdating scandal.188 In its 2021 Threat Report on the Surveillance-for-Hire Industry189, Meta announced that it removed around one hundred Facebook and Instagram accounts linked to Cognyte Software Ltd. The report claimed that Cognyte Software Ltd. “sells access to its platform which enables managing fake accounts across social media platforms to social-engineer people and collect data.”190 Most recently in 2022, the Norwegian Government Pension Fund Global (GPFG) was recommended by the country’s Council on Ethics to divest from Cognyte Software Ltd. due to the “unacceptable risk that the company is contributing to serious human rights abuses.”191

Cyber Root Risk Advisory Private Limited

CyberRoot Risk Advisory Private Limited (CyberRoot) was founded in India in 2013 by Vijay Singh Bisht, Chiranshu Ahuja, and Vibhor Sharma.192 In 2013, CyberRoot entered a relationship of “information sharing” with Appin Security Group and BellTroX Infotech Services Private Ltd., both classified as spyware vendors within this dataset. The nature of this information sharing or when it ended is unclear.193 CyberRoot, unlike its other Indian vendor counterparties, has a holding company within the United Kingdom named CyberRoot Limited.194

DataForense s.r.l.

Founded in Italy by Annunziata Cirilloin in 2013,195 Dataforense s.r.l. is known for its spyware Aretmide/Spyrtacus project. This system allows users to extract data from phones running Android or iOS.196The company was in liquidation as of 2024 according to the Italian business registry.197There was little available information about this particular vendor, but it was included in this report and dataset to show the subcluster of vendors emerging in Italy.

DSIRF GmbH

Founded in 2016 in Austria by Stefan Gesselbauer, DSIRF GmbH is known for its spyware SubZero.198 The company has one known subsidiary, MLS Machine Learning Solutions GmbH, which specializes in the development and implementation of machine learning models.199 In 2023, the vendor entered liquidation proceedings within the Vienna Court of Commerce.200 It is believed its subsidiary, MLS Machine Learning Solutions, is absorbing the business of DSIRF, and that DSIRF’s lead investor, DSR Decision Supporting Information Forensic, will continue to support the company.201

Gamma Group International SAL

First registered in Germany in 2008, Gamma International GmbH, later renamed to FinFisher Labs GmbH in 2012, is the vendor of FinSpy spyware.202 FinFisher Labs GmbH, in collaboration with their supplier, Elaman GmbH, distributed FinSpy to a variety of different government clientele, including entities in Singapore, South Africa, and Turkey, but remained an exclusively German-domiciled vendor.203In 2022, FinFisher Labs GmbH shut down operations in Germany after legal prosecution.204 Gamma Group’s holding companies are almost entirely in the United Kingdom, British Virgin Islands, and Cyprus and are associated with a single family. These holding companies might be used to filter investment from this family to Gamma Group. Thus, while Gamma Group is no longer operational within Germany, its financial structure and potentially its investment base are operational.

InvaSys a.s.

In 2017, Kyrre Sletsjøe founded InvaSys a.s. in the Czechia.205 The company specializes in mobile phone interception and qualifies as both a funder and a supplier due to its production of spyware tools and its sales of zero-day vulnerabilities.206 Notably, the company’s Kelpie program provides backdoor access to Android and iPhone devices and encrypted messaging applications.207 The company operates out of two offices in the Czech Republic: one in Brno and another in Prague. Founder and CEO Kyrre Sletsjøe also owns and runs Defense System Property Protection, a physical security firm, and YX Systems. Some InVasys employees previously worked at Sletsjøe’s prior company CEPIA Technologies.208From March to August of 2017 Thomas Vestyby Jensen was listed as the sole owner of InVasys Technologies; however, as of 2022, Kyrre Sletsjøe has a ninety-one percent ownership stake in InvaSys. At present the company has not faced any challenges to its operations in the Czechia.209

Leo Impact Security Services

Founded in 2009 by Manish Kumar, Leo Impact Security Services is listed as a vendor in the dataset.210 There has been some reporting that this vendor is a direct competitor of Aglaya, another spyware vendor profiled for this report, as they offer similar spyware products.211 It has one branch based in the Czechia named Leo Impact Security s.r.o. that has been operational since 2010.212

Mollitiam Industries

Mollitiam is a Spanish vendor founded in 2018 by In-Nova and the cybersecurity firm StackOverflow Ltd. It is headed by Santiago Molins Riera, who is the former head of technology of In-Nova.213 Mollitiam is known to develop payloads that can intercept communications and steal cloud-hosted data from infected devices and deploy spyware on Microsoft, Apple, and Google mobile devices and operating systems. It is known for its interception tools Invisible Man and Night Crawler, which are capable of remotely accessing files and location data and covertly turning on a device’s camera and microphone.214

Mollitiam has provided services to Spain’s National Intelligence Centre (CNI) and Mando Conjunto de Ciberdefensa (MCCD), the country’s joint cyberspace command.215 It receives funding from the Centre for the Development of Industrial Technology (CDTI) which is a public corporation under the Spanish Ministry of Economy and Competitiveness.216 The European Union’s Regional Development Fund provided financial support to Mollitiam between 2019 and 2021 for a project worth €650,000 to build a platform to provide new ways to automatically generate intelligence from data extracted from social media platforms and the dark web.217 Apart from these government funds, venture capital firms like EASO Ventures, Sabadell Venture Capital, and Torsa Capital have investments in this Spanish firm.218

Movia S.p.A.

Movia S.p.A. is an Italian spyware vendor founded in 2003 by Luca Spina. It is known to sponsor ISS World, a global surveillance technology trade.219 The company’s spyware product is called Spider and is used by prosecution offices in Italy. In 2022, the company established a subsidiary called Bioss, and Spina launched another company called Vision s.r.l.220 Movia’s largest investor is known to be Sistema Investimenti.221Movia was exposed by the Italian anti-terrorist and anti-mafia investigative directorate called Direzione Nazionale Antimafia e Antiterrorismo (DNAA).222

negg Group s.r.l.

In 2013, Francesco Taccone co-founded negg Group s.r.l. in Italy.223By 2017, Kaspersky Lab published a report detailing the invasive capabilities of Skygofree, a spyware tool it attributes to negg Group.224 Skygofree ties many of its exploitive services, including audio recordings and photo captures on target devices, to the device location.225 For example, Skygofree allows attackers to turn on audio recording capabilities when they deem that a device has entered a sensitive location such as a meeting or product development site.226 Furthermore, the spyware tool forces infected devices to connect to attacker-controlled WiFi networks, offering attackers the ability to collect and analyze WiFi traffic. Finally, this spyware tool exploits vulnerabilities within a device’s accessibility services to allow attackers to read encrypted WhatsApp messages. As of 2024, Meta observed the negg Group accounts testing exploit delivery via Facebook and Instagram and consequently removed its accounts from these platforms.227 The company maintains three offices registered in Italy: two in Rome and one in Reggio Calabria.228 Between 2020 and 2022 negg International operated in the Netherlands under the ownership of companies with ties to the negg Group co-founder.229 However, the business relationship between negg Group and negg International remains unknown at this time. In 2014, the Italian Ministry of Economic Development awarded negg Group a digitalization voucher worth 9,872 euros.230 At the time, such vouchers were intended to support the digital transformations of Italian companies. At present, the negg Group website states that the company “actively seeks” investors.231

Positive Technologies AO

Founded by Yuri Maksimov and Dmitry Maximo in 2002, Positive Technologies AO is a Russian company that was added to the list of entities sanctioned by the US Office of Foreign Assets Control (OFAC) in 2021 on account of its role in the organization of the Positive Hack Days cybersecurity conference. The conference is said to be used by the Russian Federal Security Service (FSB) for recruitment, according to the US Treasury Department. The Bureau of Industry and Security (BIS) of the US Department of Commerce also accused Positive Technologies AO of distributing exploits and added it to the Entity List for malicious cyber activities.232 The US Department of State announced the vendor was listed based on a determination that it “misuse[s] and traffic[s] cyber tools that are used to gain unauthorized access to information systems in ways that are contrary to the national security or foreign policy of the United States, threatening the privacy and security of individuals and organizations worldwide”.233 Positive Technologies AO has a corporate presence in at least six different countries.

According to Intelligence Online, Positive Technologies operates two websites—one for the Russian market and another for the international market. On its website for the international market, the vendor lists Lukoil, Vimpelcom, Sberbank, the South Korean companies Hanwha and Samsung, France’s Societe Generale bank, and the French cybersecurity agency, Agence Nationale de la Sécurité des Systèmes d’Information (ANSSI), as its clients.234

RCS Labs

Founded in 1992 in Italy, RCS Labs (RCS ETM Sicurezza S.p.A) operates as both an original producer as well as an intermediary seller of spyware.235 As early as 2012, RCS facilitated the sale of Hacking Team srl products and services, including Hacking Team srl’s Remote Control System (RCS), to government agencies in Bangladesh, Pakistan, and Turkmenistan.236 In 2022, security researchers at Lookout determined RCS Lab S.p.A created and sold the Hermit spyware.237

The RCS Group (formerly Aurora Group) owns RCS Labs. In March 2022, another Italian firm that sells spyware amongst other products, Cy4Gate, acquired Aurora Group, including its seven subsidiary companies: RCS ETM Sicurezza S.p.A., RCS LAB GMBH, Tykelab, Azienda Informatica Italiana, Servizi Tattici Informativi Legali, Dars Telecom SL, and Aurora France S.A.S.238 Cy4Gate is one of Italy’s largest technology companies, is publicly traded, and its primary investors are Elettronica Group and Expert System. According to Cy4Gate’s 2023 financial reporting, RCS Labs remains the most profitable company in the Aurora Group.239

Variston Information Technology

Variston Information Technology (Variston) was founded in 2018 by Ralf Wegener and Ramanan Jayaraman and is headquartered in Barcelona, Spain.240 Variston is known to develop data collection tools for law enforcement and security solutions in the areas of supervisory control and data acquisition (SCADA) and the Internet of Things (IoT).241 Shortly after its incorporation in 2018, Variston IT acquired Truel IT,242 an Italian zero-day vulnerabilities research company. This acquisition helped Variston onboard new researchers and capabilities, including developing its spyware Heliconia.243 According to reporting from Intelligence Online in May 2024, Variston Information Technology is now effectively defunct.244

Appendix B-Markets Map: Country List (42)

Appendix C- Markets Map: Vendor List (49)

  • Aglaya Scientific Aerospace Technology Systems Private Limited 
  • Appin Security Group > Approachinfinate Computer and Security Consultancy Grp. 
    • Adaptive Control Security Global Corporate
  • BellTroX Infotech Services Private Ltd 
  • Candiru Ltd > DF Associates > Grindavik Solutions Ltd./Greenwick Solutions > Taveta Ltd./Tabatha Ltd > Saito Tech Ltd. 
  • CyberRoot Risk Advisory Private Limited > CyberRoot Software Solutions LTD 
  • Cytrox AD 
    • Intellexa S.A.
  • Dataflow Security s.r.l. 
  • DataForense s.r.l 
  • DSIRF GmbH 
  • Equus Technologies > MerlinX Ltd. 
  • Gamma Group International SAL 
    • Gamma International GmbH > FinFisher Labs Gmbh
  • Hacking Team srl (Italy) > Memento Labs srl 
    • Hacking Team srl (United States)
    • Grey Heron (United Kingdom)
    • Grey Heron (Italy)
  • Interionet Systems Ltd. 
  • InvaSys a.s. 
  • Leo Impact Security Service PVT Ltd. 
    • Leo Impact Security s.r.o.
  • Mollitiam Industries 
  • Movia SPA 
  • Negg Group S.R.L 
    • Negg International
  • NSO Group 
    • L.E.G.D Technologies > Q Cyber Technologies
    • Westbridge Technologies
    • Osy Technologies SARL
    • Q Cyber Technologies SARL
  • Paragon Solutions 
  • Positive Technologies AO (Russia)
  • 245
    • Positive Technologies Global Holding Ltd. (United Kingdom)
    • Positive Technologies Global Solutions Ltd. (United Kingdom) 
    • Positive Technologies S.R.L (Romania) 
    • Positive Technologies S.R.L. (Italy)
    • Positive Technologies Inc. (United States)
    • Positive Technologies Czech s.r.o. (Czechia)
    • Positive Technologies Holding AG (Switzerland)
  • Quadream Inc. 
  • RCS ETM Sicurezza S.p.A. 
    • RCS MEA DMCC
  • Variston IT 
  • Verint Systems Inc. 
    • Verint Systems Ltd. 
    • Cognyte Software Ltd. (Israel) 

    Disclaimer on Sources

    More information: All sources for this dataset are open-source and were publicly available at the time of writing. For more on the kinds of data used in this project, see here. We are aware that some links have broken or been removed, and a handful of sources have been taken down in the wake of court orders. We are unable to replace, or host, copyrighted material. For any questions on sourcing, please email cyber@atlanticcouncil.org.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    Lorenzo Franceschi-Bicchierai, “Price of Zero-Day Exploits Rises as Companies Harden Products against Hackers,” TechCrunch, April 6, 2024, https://techcrunch.com/2024/04/06/price-of-zero-day-exploits-rises-as-companies-harden-products-against-hackers/.
    2    Alexander Martin, “More than 80 Countries Have Purchased Spyware, British Cyber Agency Warns,” The Record, April 19, 2023, https://therecord.media/spyware-purchased-by-eighty-countries-gchq-warns.
    3    Pieter Omtzigt, “Pegasus and Similar Spyware and Secret State Surveillance,” (Parliamentary Assembly, Council of Europe, September 20, 2023), https://rm.coe.int/pegasus-and-similar-spyware-and-secret-state-surveillance/1680ac7f68. See also Jen Roberts et al., “Markets Matter: A Glance into the Spyware Industry,” DFRLab, April 22, 2024, https://dfrlab.org/2024/04/22/markets-matter-a-glance-into-the-spyware-industry/.
    4    “We’re all in this together: A year in review of zero-days exploited in-the-wild in 2023,” Google, March 2024, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/Year_in_Review_of_ZeroDays.pdf; Google notes that while the spyware they captured targeted mobile and browser software exclusively, “we know that Candiru Ltd, a CSV, had a chain for Windows because we were able to recover their first stage Chrome exploit, but we were not able to recover the rest of the exploits in the chain.”
    5    Revenue generated by these sales is difficult to estimate and a subject for further study to include customers and not just the sales side of this market. One widely cited estimate, $12 billion, does not seem to have a rigorous source but is quoted by entities like the Centre for International Governance and Innovation (Kyle Hiebert, “The Growing Global Spyware Industry Must Be Reined In,” Centre for International Governance Innovation, March 27, 2023, https://www.cigionline.org/articles/the-growing-global-spyware-industry-must-be-reined-in/) and the Carnegie Endowment (Steven Feldstein and Brian (Chun Hey) Kot, “Why Does the Global Spyware Industry Continue to Thrive? Trends, Explanations, and Responses,” March 14, 2023, https://carnegieendowment.org/research/2023/03/why-does-the-global-spyware-industry-continue-to-thrive-trends-explanations-and-responses?lang=en), as well as a host of media (e.g., Jessica Lyons, “Spyware Business Booming despite Government Crackdowns,” The Register, February 7, 2024, https://www.theregister.com/2024/02/07/spyware_business_booming/ and Ronan Farrow, “How Democracies Spy on Their Citizens,” The New Yorker, April 18, 2022, https://www.newyorker.com/magazine/2022/04/25/how-democracies-spy-on-their-citizens). It first appeared in a 2019 New York Times article (Mark Mazzetti et al., “A New Age of Warfare: How Internet Mercenaries Do Battle for Authoritarian Governments,” New York Times, March 21, 2019, https://www.nytimes.com/2019/03/21/us/politics/government-hackers-nso-darkmatter.html)—without citation to a specific source or substantiation. An earlier $5 billion estimate appears in a 2016 Vanity Fair piece by Bryan Burrough attributed to an anonymous expert. Firm Valuation – Reuters, “Israeli cyber firm NSO Group mulls Tel Aviv IPO at $2 billion value – reports”, January 6, 2021, accessed July 16, 2024, https://www.reuters.com/article/israel-cyber-nso-ipo-int-idUSKBN29B0WU/
    6    Mike Sexton, “Unregulated Spyware’s Threat to National Security – Third Way,” June 22, 2023, accessed July 10, 2024, https://www.thirdway.org/memo/unregulated-spywares-threat-to-national-security; US Department of State, “Guiding Principles on Government Use of Surveillance Technology,” March 30, 2023, https://www.state.gov/guiding-principles-on-government-use-of-surveillance-technologies/
    7    Mike Sexton, “Unregulated Spyware’s Threat to National Security,” Third Way, June 22, 2023, https://www.thirdway.org/memo/unregulated-spywares-threat-to-national-security; A.J. Vicens, “Phones of Journalists and Activists in Europe Targeted with Pegasus,” CyberScoop (blog), May 30, 2024, https://cyberscoop.com/spyware-europe-nso-pegasus/; Natalie Kitroeff and Ronen Bergman, “How Mexico Became the Biggest User of the Pegasus Spyware,” New York Times, April 18, 2023, https://www.nytimes.com/2023/04/18/world/americas/pegasus-spyware-mexico.html; Fanny Potkin and Poppy McPherson, “Israel’s Cognyte Won Tender to Sell Intercept Spyware to Myanmar before Coup,” Reuters, January 18, 2023, https://www.reuters.com/technology/israels-cognyte-won-tender-sell-intercept-spyware-myanmar-before-coup-documents-2023-01-15/; Siena Anstis et al., The Dangerous Effects of Unregulated Commercial Spyware, TheCitizen Lab (Munk School, University of Toronto), June 24, 2019, https://citizenlab.ca/2019/06/the-dangerous-effects-of-unregulated-commercial-spyware/.
    8    “Standing Up to Surveillance,” AccessNow (blog), accessed July 3, 2024, https://www.accessnow.org/surveillance/; “The Predator Files: Caught in the Net, the Global Threat from ‘EU Regulated’ Spyware,” Amnesty International, October 9, 2023, https://www.amnesty-international.be/sites/default/files/2023-10/act1072452023english.pdf; Bill Marczak et al., Hooking Candiru Ltd, Citizen Lab (Munk School, University of Toronto), July 15, 2021, https://citizenlab.ca/2021/07/hooking-Candiru Ltd-another-mercenary-spyware-vendor-comes-into-focus/.
    9    As analysis from the Atlantic Council has argued previously, proliferation “presents an expanding set of risks to states and challenges commitments to protect openness, security, and stability in cyberspace. The profusion of commercial offensive cyber capabilities (OCC) vendors, left unregulated and ill-observed, poses national security and human rights risks. For states that have strong OCC programs, the proliferation of spyware to state adversaries or certain non-state actors can be a threat to immediate security interests, long-term intelligence advantage, and the feasibility of mounting an effective defense on behalf of less capable private companies and vulnerable populations. The acquisition of OCC by a current or potential adversary makes them more capable.” Winnona DeSombre et al, “Countering Cyber Proliferation: Zeroing in on Access-as-a-Service,” Atlantic Council (blog), March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/.
    10    Andy Greenberg and Lily Hay Newman, “Security News This Week: US Congress Targeted with Predator Spyware,” Wired, October 14, 2023, https://www.wired.com/story/us-congress-spyware/; Gordon Corera, “Pegasus: French President Macron Identified as Spyware Target,” BBC, July 20, 2021, https://www.bbc.com/news/world-europe-57907258.
    11    Winnona DeSombre et al., Countering Cyber Proliferation: Zeroing in on Access-as-a-Service, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/.
    12    “Exporting Dual-Use Items,” European Commission, accessed July 10, 2024, https://policy.trade.ec.europa.eu/help-exporters-and-importers/exporting-dual-use-items_en.
    13    “The Wassenaar Arrangement at a Glance,” Arms Control Association, February 2022 [last reviewed], https://www.armscontrol.org/factsheets/wassenaar.
    14    “2013 Amendments to Wassenaar Arrangement Need Rewording, US State Dept. Concedes,” The Wire, accessed July 10, 2024, https://thewire.in/tech/2013-amendments-to-wassenaar-arrangement-need-rewording-us-state-department-concedes; Garrett Hinck, “Wassenaar Export Controls on Surveillance Tools: New Exemptions for Vulnerability Research,” Lawfare, January 5, 2018, https://www.lawfaremedia.org/article/wassenaar-export-controls-surveillance-tools-new-exemptions-vulnerability-research
    15    “Exporting Dual-Use Items,” European Commission.
    16    Council Regulation (EC) No 428/2009 of 5 May 2009 setting up a Community regime for the control of exports, transfer, brokering and transit of dual-use items (recast), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32009R0428; Council Regulation (EC) No 428/2009 of 5 May 2009 setting up a Community regime for the control of exports, transfer, brokering and transit of dual-use items (recast), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32009R0428.; “Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 Setting up a Union Regime for the Control of Exports, Brokering, Technical Assistance, Transit and Transfer of Dual-Use Items (Recast)” (Official Journal of the European Union, June 11, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2021:206:FULL&from=EN.; Mark Bromley and Kolja Brockman, “Implementing the 2021 Recast of the EU Dual-Use Regulation: Challenges and Opportunities,” Eu Non-Proliferation and Disarmament Consortium, Non-Proliferation and Disarmament Papers, No.77, September 2021, https://www.sipri.org/sites/default/files/2021-09/eunpdc_no_77.pdf
    17    Bureau of Industry and Security, U.S. Department of the Commerce, https://www.bis.doc.gov/index.php/91-dual-use-export-licenses; Governed by the Export Administration Regulations
    18    15 C.F.R. § 744, “Control Policy: End-User and End-Use Based,” https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-744.
    19    US Department of State, “The United States Adds Foreign Companies to Entity List for Malicious Cyber Activities,” media note (Office of the Spokesperson), November 3, 2021, https://www.state.gov/the-united-states-adds-foreign-companies-to-entity-list-for-malicious-cyber-activities/. Positive and COSEINC were both added, “based on a determination that they misuse and traffic cyber tools that are used to gain unauthorized access to information systems in ways that are contrary to the national security or foreign policy of the United States, threatening the privacy and security of individuals and organizations worldwide.”
    20    Office of Congressional and Public Affairs, “Commerce Adds Four Entities to Entity List for Trafficking in Cyber Exploits,” press release, US Department of Commerce: Bureau of Industry and Security, July 18, 2023, https://www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3297-2023-07-18-bis-press-package-spyware-document/file.
    21    “About the Pegasus Project,” Forbidden Stories, July 18, 2021, https://forbiddenstories.org/about-the-pegasus-project/;Sophie in ‘t Veld, “Report of the Investigation of Alleged Contraventions and Maladministration in the Application of Union Law in Relation to the Use of Pegasus and Equivalent Surveillance Spyware (Report – A9-0189/2023),” European Parliament, May 5, 2023, https://www.europarl.europa.eu/doceo/document/A-9-2023-0189_EN.html; European Parliament, 2022/2077(INI), Legislative Observatory, accessed July 10, 2024, https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=en&reference=2022/2077(INI).
    22    Sophie in’t Veld “Report of the Investigation of Alleged Contraventions and Maladministration in the Application of Union Law in Relation to the Use of Pegasus and Equivalent Surveillance Spyware (2022/2077(INI)).”; “Sudan: One Year of Atrocities Requires New Global Approach,” Human Rights Watch, April 12, 2024, https://www.hrw.org/news/2024/04/12/sudan-one-year-atrocities-requires-new-global-approach.
    23    Max Griera, “EU Parliament Vote on Spyware Gets Politicised, Implementation Challenges Loom,” Euractiv, May 9, 2023, https://www.euractiv.com/section/politics/news/eu-parliament-vote-on-spyware-gets-politicised-implementation-challenges-loom/.
    24    “Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware,” The White House, March 18, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/18/joint-statement-on-efforts-to-counter-the-proliferation-and-misuse-of-commercial-spyware/; President Biden, “Executive Order on Prohibition on Use by the United States Government of Commercial Spyware That Poses Risks to National Security,” press release, The White House, March 27, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/03/27/executive-order-on-prohibition-on-use-by-the-united-states-government-of-commercial-spyware-that-poses-risks-to-national-security/.
    25    “Executive Order on Prohibition on Use by the United States Government of Commercial Spyware That Poses Risks to National Security.”
    26    The White House, “Joint Statement on Efforts to Counter the Proliferation and Misuse.”; Even the Summit for Democracy statement points to an admittedly limited statement of “Guiding Principles on Government Use of Surveillance Technologies” which emphasizes it is a “voluntary and non-legally binding” document and calls for actions like, “Governments should ensure the operation of surveillance technologies is governed in a manner that proactively mitigates the risks of misuse and enables appropriate access to judicial or administrative review.” While this is a positive starting point, it does not, yet present an implementable model of transparent and rigorous governance of the use of spyware.
    27    US Department of the Treasury, “Treasury Sanctions Members of the Intellexa Commercial Spyware Consortium,” press release, March 5, 2024, https://home.treasury.gov/news/press-releases/jy2155. These sanctions were issued pursuant to Executive Order 13694, as amended by Executive Order 13757.
    28    US Department of the Treasury, “Treasury Sanctions Members of the Intellexa.”
    29    Balinese Ltd (formerly Cytrox AD Software Ltd), Peterbald Ltd (formerly Cytrox AD EMEA Ltd), Passitora Ltd (formerly WS WiSpear Systems Limited), and Senpai Technologies Ltd—all currently based in Israel—as well as the British Virgin Islands-domiciled Intellexa Limited.
    30    A number of the concepts, some language, and three signatories (Australia, Denmark, and Norway) for this document originated in the first Summit for Democracy as part of the “Export Controls and Human Rights Initiative” – “Fact Sheet: Export Controls and Human Rights Initiative Launched at Summit For Democracy,” The White House, December 10, 2021, https://www.whitehouse.gov/briefing-room/statements-releases/2021/12/10/fact-sheet-export-controls-and-human-rights-initiative-launched-at-the-summit-for-democracy/.
    31    “Announcement of a Visa Restriction Policy to Promote Accountability for the Misuse of Commercial Spyware,” U.S. Department of State, February 5, 2024, https://www.state.gov/announcement-of-a-visa-restriction-policy-to-promote-accountability-for-the-misuse-of-commercial-spyware/.
    32    “Accountability for the Murder of Jamal Khashoggi,” U.S. Department of State, February 26, 2021, https://www.state.gov/accountability-for-the-murder-of-jamal-khashoggi/.
    33    Based on authority from Section 212(a)(3)(C) of the Immigration and National Act.
    34    “Promoting Accountability for the Misuse of Commercial Spyware,” U.S. Department of State, April 22, 2024, https://www.state.gov/promoting-accountability-for-the-misuse-of-commercial-spyware/.
    35    “The Pall Mall Process Declaration: Tackling the Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities,” UK Foreign, Commonwealth & Development Office, February 6, 2024, https://www.gov.uk/government/publications/the-pall-mall-process-declaration-tackling-the-proliferation-and-irresponsible-use-of-commercial-cyber-intrusion-capabilities.
    36    Secretary Blinken, “Announcement of a Visa Restriction Policy to Promote Accountability for the Misuse of Commercial Spyware,” press statement, United States Department of State, February 5, 2024, https://www.state.gov/announcement-of-a-visa-restriction-policy-to-promote-accountability-for-the-misuse-of-commercial-spyware/.
    37    Jen Roberts, Trey Herr, Emma Taylor, and Nitansha Bansal, “Markets Matter: A Glance into the Spyware Industry, DFRLab, April 22, 2024, https://dfrlab.org/2024/04/22/markets-matter-a-glance-into-the-spyware-industry/.
    38    “Unauthorized” access separates spyware from myriad other services or tools that might be used to effectuate similar surveillance but which require a user’s consent at some stage e.g. downloading an application from a mobile phone app store.
    39    50 U.S. Code § 3232a – Measures to mitigate counterintelligence threats from proliferation and use of foreign commercial spyware, https://www.law.cornell.edu/uscode/text/50/3232a.
    40    Winnona DeSombre et al., “A Primer on the Proliferation of Offensive Cyber Capabilities” (Atlantic Council, March 1, 2021), https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/a-primer-on-the-proliferation-of-offensive-cyber-capabilities/.
    41    “Prohibition on Use by the United States Government of Commercial Spyware That Poses Risks to National Security,” Federal Register, March 30, 2023, https://www.federalregister.gov/documents/2023/03/30/2023-06730/prohibition-on-use-by-the-united-states-government-of-commercial-spyware-that-poses-risks-to.
    42     Read more about the ‘breakout’ of “offensive capabilities like EternalBlue, allegedly engineered by the United States, used by Russian, North Korean, and Chinese governments” (DeSombre et al., Countering Cyber Proliferation). See also Gil Baram, “The Theft and Reuse of Advanced Offensive Cyber Weapons Pose a Growing Threat,” Council on Foreign Relations (blog), June 19, 2018, https://www.cfr.org/blog/theft-and-reuse-advanced-offensive-cyber-weapons-pose-growing-threat; Insikt Group, “Chinese and Russian Cyber Communities Dig Into Malware From April Shadow Brokers Release,” Recorded Future (blog), April 25, 2017, https://www.recordedfuture.com/shadow-brokers-malware-release/; Leo Varela, “EternalBlue: Metasploit Module for MS17-010,” Rapid7, May 19, 2017, https://blog.rapid7.com/2017/05/20/metasploit-the-power-of-the-community-and-eternalblue/.
    43    Herb Lin and Joel P. Trachtman, ”Using International Export Controls to Bolster Cyber Defenses,” Protecting Civilian Institutions and Infrastructure from Cyber Operations: Designing International Law and Organizations, Center for International Law and Governance, Tufts University, September 10, 2018, https://sites.tufts.edu/cilg/files/2018/09/exportcontrolsdraftsm.pdf.
    44    As argued in previous work published by the Atlantic Council, proliferation “presents an expanding set of risks to states and challenges commitments to protect openness, security, and stability in cyberspace. The profusion of commercial OCC vendors, left unregulated and ill-observed, poses national security and human rights risks. For states that have strong OCC programs, proliferation of spyware to state adversaries or certain non-state actors can be a threat to immediate security interests, long-term intelligence advantage, and the feasibility of mounting an effective defense on behalf of less capable private companies and vulnerable populations. The acquisition of OCC by a current or potential adversary makes them more capable” (See: Winnona DeSombre et al, Countering Cyber Proliferation).
    45    “Stalkerware: What to Know,” Federal Trade Commission, May 10, 2021, https://consumer.ftc.gov/articles/stalkerware-what-know.
    46    IMSI catchers are also referred to as “Stingrays” after the Harris Corporation’s eponymous product line; Amanda Levendowski, “Trademarks as Surveillance Technology,” Georgetown University Law Center, 2021, https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=3455&context=facpub.
    47    For more see: Winnona DeSombre et al., A Primer on the Proliferation of Offensive Cyber Capabilities, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/a-primer-on-the-proliferation-of-offensive-cyber-capabilities/.
    48    “The Pall Mall Process: Tackling the Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities,” February 6, 2024, https://assets.publishing.service.gov.uk/media/65c25bb23f6aea0013c1551a/The_Pall_Mall_Process_tackling_the_proliferation_and_irresponsible_use_of_commercial_cyber_intrusion_capabilities.pdf.
    49    “Investigation of the Use of Pegasus and Equivalent Surveillance Spyware,” European Parliament, June 2023, https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/747923/EPRS_ATA(2023)747923_EN.pdf; US Department of Commerce, “Commerce Adds NSO Group and Other Foreign Companies to Entity List for Malicious Cyber Activities,” press release, November 3, 2021, https://www.commerce.gov/news/press-releases/2021/11/commerce-adds-nso-group-and-other-foreign-companies-entity-list.
    50    “Online Information and Services – Online Corporations (ONLINE Corporations),” accessed July 3, 2024, https://ica.justice.gov.il/GenericCorporarionInfo/SearchCorporation?unit=8; Bill Marczak et al., Sweet Quadream: A First Look at Spyware Vendor Quadream’s Exploits, Victims, and Customers, Citizen Lab (Munk School, University of Toronto), April 11, 2023, https://citizenlab.ca/2023/04/spyware-vendor-Quadream -exploits-victims-customers/.
    51    “Interionet (Company Profile),” Crunchbase, accessed July 3, 2024, https://www.crunchbase.com/organization/interionet-5cdb; “Interionet (Company Profile),” Datanyze, accessed July 3, 2024, https://www.datanyze.com/companies/interionet/481395181; NSO Group / Q Cyber Technologies: Over One Hundred New Abuse Cases, Citizen Lab (Munk School, University of Toronto), October 29, 2019, https://citizenlab.ca/2019/10/nso-q-cyber-technologies-100-new-abuse-cases/; Henricks, “All About Holding Companies.”
    52    “IVC Research Center: Data & Insights,” accessed July 10, 2024, https://www.ivc-online.com/.
    53    Becky Peterson, “Inside the Israel Offensive Cybersecurity World Funded by NSO Group,” Business Insider, September 6, 2019, https://archive.is/MtUPB#selection-2905.1-2905.327; “Interionet,” accessed July 10, 2024, https://www.interionet.com/; “Dream Poaches from Tenable, SAT Distributes Kaymera, Fischler at Interionet, Boeing Upheld for DIA Contract,” Intelligence Online, March 23, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/03/23/dream-poaches-from-tenable-sat-distributes-kaymera-fischler-at-interionet-boeing-upheld-for-dia-contract,109926917-art.
    54    “Interionet Adds Another Israeli Stamp to Belgium’s I-Police Programme,” Intelligence Online, February 11, 2022, https://www.intelligenceonline.com/surveillance–interception/2022/11/02/interionet-adds-another-israeli-stamp-to-belgium-s-i-police-programme,109840803-art.
    55    Winnona DeSombre, Lars Gjesvik, and Johann Ole Willers, Surveillance Technology at the Fair: Proliferation of Cyber Capabilities in International Arms Markets, Atlantic Council, November 8, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/surveillance-technology-at-the-fair/.
    56    DeSombre, Gjesvik, and Ole Willers, Surveillance Technology at the Fair. Also of note, as recently as in June 2024, Interionet presented its capability to provide access to the information behind dynamic IPs, NAT, and P2P at the ISS World Europe, a trade show for lawful interception and intrusion products (see: “ISS World Training,” TeleStrategies, accessed July 10, 2024, https://www.issworldtraining.com/).
    57    Appin Documents for Indian Angels Network (ValPro Employee April 4, 2009 Draft Equity Participation Memo),” DocumentCloud, accessed July 10, 2024, https://www.documentcloud.org/documents/23451868-20090506-memo-for-indian-angels-network; Andy Greenberg, “A Startup Allegedly ‘Hacked the World.’ Then Came the Censorship—and Now the Backlash,” Wired, February 1, 2024, https://www.wired.com/story/appin-training-centers-lawsuits-censorship/.
    58    Raphael Satter and Christopher Bing, “How Mercenary Hackers Sway Litigation Battles,” Reuters, June 30, 2022, https://www.reuters.com/investigates/special-report/usa-hackers-litigation/.; Appin Security Group had an “infrastructure sharing” relationship with BellTroX Infotech Services Private Ltd, another vendor, in 2013, but it is unclear what the nature of this relationship was and when it ended.
    59    Mike Dvilyanski, David Agranovich, and Nathaniel Gleicher, “Threat Report on the Surveillance-for-Hire Industry,” Meta, December 16, 2021, https://about.fb.com/wp-content/uploads/2021/12/Threat-Report-on-the-Surveillance-for-Hire-Industry.pdf; In 2020, Citizen Lab connected Dark Basin, a likely state-sponsored actor, to the larger BellTroX Infotech Services Private Ltd’s network – John Scott-Railton et al., Dark Basin: Uncovering a Massive Hack-For-Hire Operation, Citizen Lab (Munk School, University of Toronto), June 9, 2020, https://citizenlab.ca/2020/06/dark-basin-uncovering-a-massive-hack-for-hire-operation/; Ottavio Marzocchi and Emily (Ai Hua) Gobet, “Briefing for the PEGA Mission to Cyprus and Greece,” European Parliament: Policy Department for Citizens’ Rights and Constitutional Affairs, November 2022, https://www.europarl.europa.eu/RegData/etudes/STUD/2022/738330/IPOL_STU(2022)738330_EN.pdf.
    60    “Intellexa Consortium” is a term the companies have used to market themselves and is the label of choice for the US Department of Treasury’s Office of Foreign Assets Control (OFAC) – “Treasury Sanctions Members of the Intellexa Commercial Spyware Consortium,” U.S. Department of the Treasury, March 5, 2024, https://home.treasury.gov/news/press-releases/jy2155.
    61    Marzocchi and Gobet, “Briefing for the PEGA Mission.”
    62    “Predator Files: Technical Deep-Dive into Intellexa Alliance’s Surveillance Products,” Amnesty International, October 6, 2023, https://securitylab.amnesty.org/latest/2023/10/technical-deep-dive-into-intellexa-alliance-surveillance-products/.
    63    Marzocchi and Gobet, “Briefing for the PEGA Mission”; “Predator Files: Technical Deep-Dive into Intellexa”; Meir Orbach, “The Cyber Company, the Former Officer, and the Lost Money,” CTech by Calcalist, October 17, 2019, https://www.calcalistech.com/ctech/articles/0,7340,L-3772040,00.html.
    64    Nexa Group comprises Nexa Technologies (now RB 42), Nexa Technologies CZ s.r.o., Advanced Middle East Systems FZ LLC, and Trovicor FZ (alt. Trovicor Intelligence) – “The Predator Files: Caught in the Net” (Amnesty International, October 9, 2023), https://www.amnesty-international.be/sites/default/files/2023-10/act1072452023english.pdf.
    65    “The Predator Files: Caught in the Net.”
    66    Roberts et al., “Markets Matter: A Glance into the Spyware Industry.”
    67    Roberts et al., “Markets Matter: A Glance into the Spyware Industry”; “VasTech Profile: Version 1,” VasTech, February 10, 2008, https://respubca.home.xs4all.nl/pdf/J-LA-001-VT-01-LA-VASTech-profile-2.pdf. Notably, before its demise, the Gaddafi regime heavily relied on Zebra to surveil the entire Libyan population (See: Jenna McLaughlin, “South African Spy Company Used by Gadaffi Touts Its NSA-Like Capabilities,” The Intercept, October 31, 2016, https://theintercept.com/2016/10/31/south-african-spy-company-used-by-gadaffi-touts-its-nsa-like-capabilities/)
    68    “VasTech Profile: Version 1”; “VASTech AG (Company Profile),” OpenCorporates, accessed July 3, 2024, https://opencorporates.com/companies/ch/1129537.
    69    “Re: (Vastech) Possible visit to Milano,” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024, https://wikileaks.org/hackingteam/emails/emailid/1064489; “R: further conversation,” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024. https://wikileaks.org/hackingteam/emails/emailid/12014; “Re: (Vastech) Meeting” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024, https://wikileaks.org/hackingteam/emails/emailid/1150073.
    70     Hat tip to James Shires for this trenchant point.
    71    Percentages are based on excluding individuals from the count and there are no name changes recorded for subsidiaries.
    73    Bill Marczak et al., Hooking Candiru: Another Mercenary Spyware Vendor Comes into Focus, Citizen Lab (Munk School, University of Toronto), July 15, 2021, https://citizenlab.ca/2021/07/hooking-candiru-another-mercenary-spyware-vendor-comes-into-focus/; John Scott-Railton et al., CatalanGate: Extensive Mercenary Spyware Operation against Catalans Using Pegasus and Candiru, Citizen Lab, (Munk School, University of Toronto), April 18, 2022, https://citizenlab.ca/2022/04/catalangate-extensive-mercenary-spyware-operation-against-catalans-using-pegasus-candiru/.
    74    Feldstein and Kot, “Why Does the Global Spyware Industry Continue to Thrive?”
    75    US Department of Commerce, “Commerce Adds NSO Group and Other Foreign Companies to Entity List.”
    76    Patrick Howell O’Neill, “The Fall and Rise of a Spyware Empire,” MIT Technology Review, November 29, 2019, https://www.technologyreview.com/2019/11/29/131803/the-fall-and-rise-of-a-spyware-empire/.
    77    Lorenzo Franceschi-Bicchierai, “Hacking Team srl’s ‘Illegal’ Latin American Empire,” Vice (blog), April 18, 2016, https://www.vice.com/en/article/gv5v8q/hacking-team-illegal-latin-american-empire. Within this leak were details on how vulnerability- and exploit-deprived Memento Labs srl (then Hacking Team srl) compared to other vendors who develop some of these in-house (at least in part) like Gamma Group or NSO Group – Vlad Tsyrklevich, “Hacking Team srl: A Zero-Day Market Case Study,” (author blog), September 26, 2015 [update], https://tsyrklevich.net/2015/07/22/hacking-team-0day-market/.
    78    Joesph Cox, “Government Malware Company ‘Grey Heron’ Advertises Signal, Telegram Spyware,” Vice, March 7, 2018, https://www.vice.com/en/article/bj54kw/grey-heron-new-spyware-brochure-hacking-team.
    79    This also occasioned a renaming of the spyware product to Dante in 2022 (see: Joseph Cox and Lorenzo Franceschi-Bicchierai, “Memento Labs srl, the Reborn Hacking Team srl, Is Struggling,” Vice (blog), March 31, 2020, https://www.vice.com/en/article/xgq3qd/memento-labs-the-reborn-hacking-team-is-struggling; Lorenzo Franceschi-Bicchierai, “New Traces of Hacking Team srl Malware Show the Spy Vendor Is Still in Business,” Vice (blog), February 29, 2016, https://www.vice.com/en/article/nz7nm7/new-hacking-team-apple-mac-malware-samples; “Hacking Team srl’s Global License Revoked by Italian Export Authorities,” Privacy International (blog), April 8, 2016, https://privacyinternational.org/blog/1042/hacking-teams-global-license-revoked-italian-export-authorities; “Italy, UAE: Memento Labs srl Tries to Get Back into UAE Market through Local Distributor SAT,” Intelligence Online, January 19, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/01/19/memento-labs-tries-to-get-back-into-uae-market-through-local-distributor-sat,109903859-art; Joseph Cox, “Government Malware Company ‘Grey Heron’ Advertises Signal, Telegram Spyware,” Vice (blog), https://www.vice.com/en/article/bj54kw/grey-heron-new-spyware-brochure-hacking-team.
    80    Appin Companies’ Name Change Documents,” DocumentCloud, accessed July 10, 2024, https://www.documentcloud.org/documents/23581428-appin-companies-name-change-documents.
    81    Megan Ruthven, Ken Bodzak, and Neel Mehta, “From Chrysaor to Lipizzan: Blocking a New Targeted Spyware Family,” Android Developers Blog (blog), July 26, 2017, https://android-developers.googleblog.com/2017/07/from-chrysaor-to-lipizzan-blocking-new.html.
    82    “Israel: Bindecy Lays Hands on Struggling Cyber Security Firm Merlinx, ” Intelligence Online, February 6, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/06/02/bindecy-lays-hands-on-struggling-cyber-security-firm-merlinx,109670437-art.
    83    “Israel: Merlinx, Ex-Equus Technologies, Will Bow at ISS,” Intelligence Online, February 28, 2018, https://www.intelligenceonline.com/corporate-intelligence/2018/02/28/merlinx-ex-equus-technologies-will-bow-at-iss,108296225-bre.
    84    “Israel: Bindecy Lays Hands”; “Israel: Merlinx, Ex-Equus Technologies”; “Tal T. (LinkedIn Profile),” accessed July 10, 2024, https://www.linkedin.com/in/tal-tchwella/; “Israel: Ex-Merlinx Tempt Fresh Start in Cyber with Cyence,” Intelligence Online, September 7, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/09/07/ex-merlinx-tempt-fresh-start-in-cyber-with-cyence,109689543-art. ; “Tal T. | LinkedIn.” Intelligence Online points to Tchwella leaving “shortly after the Google report” but no source times the departure relative to the firm’s name change.
    85    “Israel: Bindecy Lays Hands.”; According to one corporate registry, MerlinX became inactive in 2022 however, the authors also located annual reports filed by the company with the Israeli Corporations Authority of the Department of Justice that mention its corporate status as “Active” as recent as 2024 – “Merlinx,” accessed July 3, 2024, https://finder.startupnationcentral.org/company_page/equus-technologies.; “Online Information and Services – Online Corporations (ONLINE Corporations),” accessed July 3, 2024, https://ica.justice.gov.il/GenericCorporarionInfo/SearchCorporation?unit=8.
    86    Sourced from “court documents obtained from the District Court of Limassol in Cyprus” per Marczak et al., Sweet Quadream and the original InDream Cypriot registration (see: “InReach Technologies Limited Technologies Limited,” CyprusRegistry, accessed July 3, 2024, https://cyprusregistry.com/companies/HE/373827).
    87    Marczak et al., Sweet Quadream.
    88    Ravie Lakshmanan, “Israeli Spyware Vendor Quadream to Shut Down Following Citizen Lab and Microsoft Expose,” The Hacker News, April 17, 2023, https://thehackernews.com/2023/04/israeli-spyware-vendor-quadream-to-shut.html.
    89    David Kenner and Eve Sampson, “Spyware firm Intellexa hit with US Sanctions after Cyber Confidential Exposé, International Consortium of Investigative Journalists, March 6, 2024, https://www.icij.org/investigations/cyprus-confidential/spyware-firm-intellexa-hit-with-us-sanctions-after-cyprus-confidential-expose/.
    90    Cox and Franceschi-Bicchierai, “Memento Labs srl, the Reborn Hacking Team srl”; Howell O’Neill, “The Fall and Rise of a Spyware Empire.” One non-public source suggested Memento Labs srl might have recently renamed itself to “M-Labs”.
    91    There is comparatively little open-source reporting on Grey Heron.
    92    Thomas Brewster, “Meet Paragon: An American-Funded, Super-Secretive Israeli Surveillance Startup That ‘Hacks WhatsApp And Signal,’” Forbes, July 29, 2021, https://www.forbes.com/sites/thomasbrewster/2021/07/29/paragon-is-an-nso-competitor-and-an-american-funded-israeli-surveillance-startup-that-hacks-encrypted-apps-like-whatsapp-and-signal/.
    93    “Israel, United States: Israeli Cyber Firm Paragon Beefs up US Subsidiary,” Intelligence Online, August 31, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/08/31/israeli-cyber-firm-paragon-beefs-up-us-subsidiary,110037838-art.
    94    “List of all companies,” Battery Ventures, accessed July 11, 2024, https://www.battery.com/list-of-all-companies/.
    95    “Blumberg Capital Alumni Founded Companies,” Crunchbase, Accessed July 27, 2024, https://www.crunchbase.com/hub/blumberg-capital-alumni-founded-companies
    96    Thomas Brewster, “Meet Candiru – The Mysterious Mercenaries Hacking Apple And Microsoft PCs For Profit,” Forbes, October 3, 2019 https://www.forbes.com/sites/thomasbrewster/2019/10/03/meet-candiru-the-super-stealth-cyber-mercenaries-hacking-apple-and-microsoft-pcs-for-profit/?sh=4825751d5a39; “Private Equity Owner of Spyware Group NSO Stripped of Control of €1bn Fund,” Financial Times, https://www.ft.com/content/d88518dd-7c66-48b2-b3e5-c765ebe720ab; “NSO Group’s management buys firm from Francisco Partners,” Reuters, February 14, 2019, https://www.reuters.com/article/idUSL5N209642/; Stephanie Kirchgaessner, “US consultants lined up to run fund that owns Israeli spyware company NSO Group,” The Guardian, July 31, 2021, https://www.theguardian.com/news/2021/jul/31/nso-group-israeli-spyware-company-berkeley-research-group.
    97    “New Rules Require Beneficial Ownership Reporting to FinCEN,” Grant Thornton, March 4, 2024, https://www.grantthornton.com/insights/alerts/tax/2024/insights/new-rules-require-beneficial-ownership-reporting-fincen.
    98    See the Treasury Department’s Notice of Proposed Rulemaking – “Provisions Pertaining to U.S. Investments in Certain National Security Technologies and Products in Countries of Concern”, US Department of the Treasury, July 5, 2024, https://www.federalregister.gov/documents/2024/07/05/2024-13923/provisions-pertaining-to-us-investments-in-certain-national-security-technologies-and-products-in and the original direction in Executive Order 14105 “Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern,” August 9, 2023, https://home.treasury.gov/system/files/206/Executive%20Order%2014105%20August%209%2C%202023.pdf
    99    Harry Coker, Jr., “2024 Report on the Cybersecurity Posture of the United States,” (Washington DC: Office of the National Cyber Director, May 2024), https://www.whitehouse.gov/wp-content/uploads/2024/05/2024-Report-on-the-Cybersecurity-Posture-of-the-United-States.pdf.
    100    “The Pall Mall Process: Tackling Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities, UK Foreign, Commonwealth & Development Office, February 6, 2024, https://assets.publishing.service.gov.uk/media/65c25bb23f6aea0013c1551a/The_Pall_Mall_Process_tackling_the_proliferation_and_irresponsible_use_of_commercial_cyber_intrusion_capabilities.pdf.
    101    DeSombre et al., Countering Cyber Proliferation; The White House, “Joint Statement on Efforts to Counter the Proliferation and Misuse;” “The Pall Mall Process: Tackling Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities.”
    102    DeSombre et al., “Countering Cyber Proliferation.”; The countries who have signed the Joint Statement are Australia, Canada, Costa Rica, Denmark, France, Finland, Germany, Japan, New Zealand, Norway, Poland, Ireland, Republic of Korea, Sweden, Switzerland, the United Kingdom, and the United States. “Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware.”
    103    Transparent Data, “Czech Companies API: Meet Business Register of the Czech Republic.” Medium (blog), December 2, 2020. https://medium.com/transparent-data-eng/czech-companies-api-meet-business-register-of-the-czech-republic-78ab563dee92; Transparent Data, “European Business Registers – Comparison of Registry Data on Foreign Companies,” Medium (blog), September 10, 2021, https://medium.com/transparent-data-eng/european-business-registers-comparison-of-registry-data-on-foreign-companies-3dda4d32061c.
    104    מידע ושרותים מקוונים – תאגידים ברשת (תאגידים ONLINE).” Israeli Corporations Authority accessed July 10, 2024.
    105    “Initiatives,” National Association of Secretaries of State (NASS), accessed July 11, 2024, https://www.nass.org/initiatives.
    106    Hat tip to Winnona DeSombre for this clear-eyed view of corporate registration.
    107    Financial Crimes Enforcement Network, “FinCEN Issues Final Rule for Beneficial Ownership Reporting to Support Law Enforcement Efforts, Counter Illicit Finance, and Increase Transparency,” press release, US Department of the Treasury, September 29, 2022, https://www.fincen.gov/news/news-releases/fincen-issues-final-rule-beneficial-ownership-reporting-support-law-enforcement.
    108    “Open Ownership Map: Worldwide Action on Beneficial Ownership Transparency,” Open Ownership, n.d., accessed July 10, 2024, https://www.openownership.org/en/map/.
    109    “Snapshot of Beneficial Ownership Registries in G7 Countries,” Athennian, n.d., accessed July 10, 2024, https://www.athennian.com/post/snapshot-of-beneficial-ownership-registries-in-g7-countries.
    111    Transparent Data, “European Business Registers – Comparison of Registry Data. on Foreign Companies.”
    112    “International Standards on Combating Money Laundering and the Financing of Terrorism and Proliferation: The FATF Recommendations,” Financial Action Task Force, (Paris, France), November 2023 [update], https://www.fatf-gafi.org/content/dam/fatf-gafi/recommendations/FATF%20Recommendations%202012.pdf.coredownload.inline.pdf; “Global Forum on Transparency and Exchange of Information for Tax Purposes,” n.d., United Nations, https://www.un.org/esa/ffd/wp-content/uploads/sites/3/2017/05/Global-Forum_-info-sheet-2017.pdf.
    113    “The Closed World of Company Data: An Examination of How Open Company Data Is in Open Government Partnership Countries,” OpenCorporates, August 4, 2012, https://web.archive.org/web/20120804043101/http://opencorporates.com/downloads/ogp_company_data_report.pdf; “Members of Open Government Partnership,” Open Government Partnership, n.d., accessed July 10, 2024. https://www.opengovpartnership.org/our-members/; “OGP Open Company Data Survey Results – Google Sheets,” n.d., accessed July 10, 2024, https://docs.google.com/spreadsheets/d/1J0f-InGNz3qzMDNjacOmLtPilVPhEZmp_itrfhVGcv8/edit?gid=0#gid=0.
    114    Countculture, “How Open Is Company Data in Open Government Partnership Countries?” OpenCorporates (blog), April 16, 2012, https://blog.opencorporates.com/2012/04/16/how-open-is-company-data-in-open-government-partnership-countries/.
    115    “OGP Open Company Data Survey Results – Google Sheets.”
    116    “Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect, and Remedy’ Framework,” United Nations Human Rights Office of the High Commissioner, 2011, https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf.; Curtis Domek and Julien Blanquart, “A New Era of Export Controls Begins in the EU: The Revised EU Dual-Use Export Controls to Promote Human Rights”, SheppardMullin, May 14, 2021,https://www.globaltradelawblog.com/2021/05/14/dual-use-export-controls-promote-human-rights/.
    117    Daniel Moßbrucker, “EU States Unanimously Vote Against Stricter Export Controls for Surveillance Equipment,” Netzpolitik.org, (Berlin, Germany), July 16, 2019, https://netzpolitik.org/2019/eu-states-unanimously-vote-against-stricter-export-controls-for-surveillence-equipment/; Daniel Moßbrucker, “Surveillance Exports: How EU Member States Are Compromising New Human Rights Standards,” Netzpolitik.org, (Berlin, Germany), October 29, 2018, https://netzpolitik.org/2018/surveillance-exports-how-eu-member-states-are-compromising-new-human-rights-standards/; Patrick Howell O’Neill, ”Inside NSO, Israel’s Billion-Dollar Spyware Giant,” MIT Technology Review, August 19, 2020, https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights/.
    118    Declaration of Shalev Hulio In Support of Defendants’ Motion to Dismiss, WhatsApp Inc. v. NSO Group Technologies Limited, 2 April 2020, paras. 5-9, 12, www.courtlistener.com/docket/16395340/45/11/whatsapp-inc-v-nso-group-technologies-limited/; Statement disseminated by Mercury Public Affairs, LLC, on behalf of Q Cyber Technologies Ltd., NSD/FARA Registration Unit, 2 October 2020, https://efile.fara.gov/docs/6170-Informational-Materials-20201002-729.pdf
    119    Kali Robinson, “How Israel’s Spyware Stoked Surveillance Debate,” Council on Foreign Relations, March 8, 2022, https://www.cfr.org/in-brief/how-israels-pegasus-spyware-stoked-surveillance-debate.
    120    “Surveillance and human rights – Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression,” United Nations General Assembly, May 28, 2019, https://documents.un.org/doc/undoc/gen/g19/148/76/pdf/g1914876.pdf?token=ILbcRnDnfZ18fonWDP&fe=true.
    121    “Promoting Human Rights and Democracy,” Bureau of Industry and Security, U.S. Department of Commerce, Accessed July 28, 2024, https://www.bis.doc.gov/index.php/human-rights.
    122    15 C.F.R. §§ 730–780, “Subchapter C: Export Administration Regulations,” https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C; “Export Compliance Guidelines: The Elements of an Effective
    123    “Red Flag Indicators,” US Department of Commerce: Bureau of Industry and Security, accessed July 11, 2024, https://www.bis.doc.gov/index.php/all-articles/23-compliance-a-training/51-red-flag-indicators.
    124    “The Guiding Principles on Government Use of Surveillance Technologies,” U.S. Department of State, March 30, 2023, https://www.state.gov/guiding-principles-on-government-use-of-surveillance-technologies/; and the commitments made as part of the “Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware”, The White House, March 18, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/18/joint-statement-on-efforts-to-counter-the-proliferation-and-misuse-of-commercial-spyware/.
    125    Danièle Nouy, “Gaming the Rules or Ruling the Game? – How to Deal with Regulatory Arbitrage” (Speech by Nouy, as Chair of the Supervisory Board of the ECB, at the 33rd SUERF Colloquium, Helsinki), September 15, 2017, European Central Bank, https://www.bankingsupervision.europa.eu/press/speeches/date/2017/html/ssm.sp170915.en.html; Janet Dine, “Jurisdictional Arbitrage by Multinational Companies: A National Law Solution?” Journal of Human Rights and the Environment 3, no. 1 (March 2012): 44–69, https://doi.org/10.4337/jhre.2012.01.02; Sideris Draganidis, “Jurisdictional Arbitrage: Combatting an Inevitable by-Product of Cryptoasset Regulation,” Journal of Financial Regulation and Compliance 31, no. 2 (March 29, 2023): 170–85, https://doi.org/10.1108/JFRC-02-2022-0013.
    126    “Regulation (EU) 2021/821”; “Council Regulation (EC) No 428/2009 of 5 May 2009 Setting up a Community Regime for the Control of Exports, Transfer, Brokering and Transit of Dual-Use Items (Recast),” Official Journal of the European Union (Luxembourg: Publications Office of the European Union, May 5, 2009), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32009R0428.
    127    Omtzigt, “Pegasus and Similar Spyware and Secret State Surveillance.”
    128    Omtzigt, “Pegasus and Similar Spyware and Secret State Surveillance.”
    129    “eCFR :: 12 CFR Part 208 — Membership of State Banking Institutions in the Federal Reserve System (Regulation H),” accessed July 30, 2024, https://www.ecfr.gov/current/title-12/chapter-II/subchapter-A/part-208.
    130    “Supporting Statement for the Domestic Branch Application (FR 4001; OMB No. 7100-0097),” n.d., https://www.federalreserve.gov/reportforms/formsreview/FR%204001%20OMB%20SS.pdf.; “Electronic Applications and Applications Filing Information–State Member Bank,” Board of Governors of the Federal Reserve System, accessed July 31, 2024, https://www.federalreserve.gov/supervisionreg/afi/smfilings.htm.; “12 U.S.C. 1831r-1 – Notice of Branch Closure – Document in Context – USCODE-2010-Title12-Chap16-Sec1831r-1,” accessed July 31, 2024, https://www.govinfo.gov/app/details/USCODE-2010-title12/USCODE-2010-title12-chap16-sec1831r-1/context.
    131    “Compliance Handbook,” Federal Reserve, n.d., https://www.federalreserve.gov/boarddocs/supmanual/cch/closings.pdf.
    132    For more on anti-SLAPP laws and related resources, see an excellent resource from the Reporters Committee for Freedom of the Press titled “Understanding Anti-SLAPP Laws,” available at: https://www.rcfp.org/resources/anti-slapp-laws/.
    133    “Editor’s Note,” Reuters, December 5, 2023, https://www.reuters.com/investigates/special-report/usa-hackers-appin/.
    134    “Directive (EU) 2024/1069 of the European Parliament and of the Council of 11 April 2024 on protecting persons who engage in public participation from manifestly unfounded claims or abusive court proceedings (‘Strategic lawsuits against public participation’), Official Journal of the European Union, April 11, 2024, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32024L1069.
    135    Hat tip to Lisandra Novo for this excellent suggestion.
    136    Ionut Arghire, “Russian Security Vendor Positive Technologies Dropped From MAPP Member List,”Security Week, April 19, 2021, https://www.securityweek.com/russian-security-vendor-positive-technologies-responds-us-sanctions/;  “Spotlight / China, Russia: Huawei Hired Top Researchers from Russia’s US-Sanctioned NeoBit,” Intelligence Online, June 18, 2021, https://www.intelligenceonline.com/corporate-intelligence/2021/06/18/huawei-hired-top-researchers-from-russia-s-us-sanctioned-neobit,109674074-eve.
    137    “Forensic Methodology Report: How to catch NSO Group’s Pegasus,” Amnesty International, July 18, 2021,https://www.amnesty.org/en/latest/research/2021/07/forensic-methodology-report-how-to-catch-nso-groups-pegasus/; Joseph Coz, “Forensic Methodology Report: How to catch NSO Group’s Pegasus,” Vice/Motherboard, May 20, 2020 https://www.vice.com/en/article/qj4p3w/nso-group-hack-fake-facebook-domain.
    138    Suzanne Smalley, “WhatsApp: AWS leased infrastructure to NSO Group beginning in 2018,”The Record, March 17, 2024, https://therecord.media/aws-leased-infrastructure-nso-pegasus-whatsapp-lawsuit.
    139    “Best Practices for the Effective Implementation of Restrictive Measures,” (Brussels, Belgium: General Secretariat of the Council of the European Union, June 27, 2022), https://data.consilium.europa.eu/doc/document/ST-10572-2022-INIT/en/pdf.
    140    L3HARRIS AZIMUTH SECURITY PTY. LIMITED ACN 141 714 061,” Australian Securities and Investments Commission (ASIC), accessed July 3, 2024, https://connectonline.asic.gov.au/RegistrySearch/faces/landing/panelSearch.jspx?_adf.ctrl-state=fkb9ywzcb_15&searchText=141714061&searchType=OrgAndBusNm; “The Team,” Azimuth Security, accessed July 3, 2024, https://www.azimuthsecurity.com/theteam.
    141    Jane Edwards, “L3 to Buy Cyber Firms Linchpin Labs, Azimuth Security for $200M; Christopher Kubasik Comments,” GovCon Wire, July 12, 2018, https://www.govconwire.com/2018/07/l3-to-buy-cyber-firms-linchpin-labs-azimuth-security-for-200m-christopher-kubasik-comments/.
    142    “L3Harris Trenchant Canada Inc (Company Profile),” OpenCorporates, accessed July 3, 2024, https://opencorporates.com/companies/ca/7056401; “L3HARRIS TRENCHANT LTD – United Kingdom (Company Profile),” OpenCorporates, accessed July 3, 2024, https://opencorporates.com/companies/gb/09068202; “L3HARRIS AZIMUTH SECURITY PTY. LIMITED ACN 141 714 061,” ASIC.
    143    Joseph Cox and Lorenzo Franceschi-Bicchierai, “How a Tiny Startup Became the Most Important Hacking Shop You’ve Never Heard Of,” Vice (blog), February 7, 2018, https://www.vice.com/en/article/8xdayg/iphone-zero-days-inside-azimuth-security.
    144    This is based on the records from the Israeli Corporate Authority (see also: “Blueocean Technologies Ltd., Petah Tikva, Israel (Company Profile),” North Data, accessed July 3, 2024, https://www.northdata.com/Blueocean+Technologies+Ltd.,+Petah+Tikva/ICA-515223196). However, the authors would like to note that there is at least one other source that claims that Blue Ocean Technologies was incorporated in 2017 (see: Assaf Gilead, “Israeli Cyberattack Co Blue Ocean Serves East Asian Gov’t,” Globes, May 14, 2023, https://en.globes.co.il/en/article-israeli-cyberattack-co-blue-ocean-serves-east-asian-govt-1001446311).
    145    “Israel: Rami Ben Efraim Adds Planet Nine to Growing Cyber Empire,” Intelligence Online, December 21, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/12/21/rami-ben-efraim-adds-planet-nine-to-growing-cyber-empire,110131613-art.
    146    Assaf Gilad, “Air Force Veterans Founded a Cyber Offensive Company for a Foreign Country,” Globes, December 5, 2023, https://www.globes.co.il/news/article.aspx?did=1001446258.
    147    “Israel: Cyberintelligence Firm Blue Ocean’s Mystery Clients Revealed,” Intelligence Online, May 30, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/05/30/cyberintelligence-firm-blue-ocean-s-mystery-clients-revealed,109978498-art.
    148    “COSEINC (Company Profile),” Crunchbase, accessed July 3, 2024, https://www.crunchbase.com/organization/coseinc.
    149    “China, Singapore, United States: Blacklisted by the US, Zero Day Distributor COSEINC Works on for China’s Pwnzen,” Intelligence Online, November 8, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/11/08/blacklisted-by-the-us-zero-day-distributor-coseinc-works-on-for-china-s-pwnzen,109703349-art.
    150    “Commerce Adds NSO GRoup and Other Foreign Companies to Entity List for Malicious Cyber Activities,” U.S. Department of Commerce, Accessed July 28, 2024, https://www.commerce.gov/news/press-releases/2021/11/commerce-adds-nso-group-and-other-foreign-companies-entity-list.
    151    “Addition of Entities to the Entity List, Revision of Certain Entries on the Entity List (A Rule by the Industry and Security Bureau),” Federal Register, June 5, 2020, https://www.federalregister.gov/documents/2020/06/05/2020-10869/addition-of-entities-to-the-entity-list-revision-of-certain-entries-on-the-entity-list.
    152    “Re: 0 days,” WikiLeaks (Hacking Team Archive), accessed July 3, 2024, https://wikileaks.org/hackingteam/emails/emailid/695766.
    153    Tsyrklevich, “Hacking Team: A Zero-Day Market Case Study.”
    154     “OpenCorporates: The Open Database of The Corporate World,” accessed July 3, 2024. https://opencorporates.com/events/2762512532.
    155    Crowdfense, “Crowdfense Launches $10 Million Bug Bounty Program,” PR Newswire, April 24, 2018, https://www.prnewswire.com/news-releases/crowdfense-launches-10-million-bug-bounty-program-300635496.html.
    156     “About Us – Crowdfense.” n.d. Accessed July 3, 2024. https://www.crowdfense.com/about-us/.
    157    ITP Staff, “Crowdfense to Expand Scope and Funding for Bug Bounty Program,” Edge, December 9, 2018, https://www.edgemiddleeast.com/services/618451-crowdfense-to-expand-scope-and-funding-for-bug-bounty-program.
    158    “Singapore, UAE: Emerging SIGINT Powers Seek Own Cyber-Bounty Hunters,” Intelligence Online, May 16, 018, https://www.intelligenceonline.com/international-dealmaking/2018/05/16/emerging-sigint-powers-seek-own-cyber-bounty-hunters,108310461-art.
    159    “Dataflow Security – Defining the Forefront of Innovation, Mastering Vulnerability Research,” accessed July 3, 2024, https://dfsec.com/; “Ofer Cohen – Founder at Dataflow Security (Organization Chart),” The Org, accessed July 3, 2024, https://theorg.com/org/dataflow-security/org-chart/ofer-cohen.
    160    “Dataflow Security – Defining the Forefront”; “Dataflow Security Spain SL, Madrid, Spain (Company Profile),” North Data, accessed July 3, 2024, https://www.northdata.com/Dataflow+Security+Spain+SL,+Madrid/NIF+B10866671.
    161    “Dataflow Security Spain SL, Madrid, Spain.”
    162    “DATAFLOW FORENSICS INC. (Company Profile),” OpenCorporates,” accessed July 3, 2024, https://opencorporates.com/companies/us_ny/6616280; “Dataflow Security – Defining the Forefront”; “Italy, United States: Dataflow Security Sets up New Forensics Company in New York,” Intelligence Online, October 27, 2022, https://www.intelligenceonline.com/surveillance–interception/2022/10/27/dataflow-security-sets-up-new-forensics-company-in-new-york,109839003-art.
    163    “רנדום מחקר בע”מ (Company Profile),” [alternative legal name: Random Research], OpenCorporates, accessed July 3, 2024, https://opencorporates.com/companies/il/516847472; “The Tech Times: Assured Information Security’s Cyber Contract Renewed, Ofer Cohen Launches Israeli Firm, Brian Katz into Private Sector,” Intelligence Online, November 2, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/11/02/assured-information-security-s-cyber-contract-renewed-ofer-cohen-launches-israeli-firm-brian-katz-into-private-sector,110083925-art.
    164    BOLETÍN OFICIAL DEL REGISTRO MERCANTIL SECCIÓN PRIMERA Empresarios Actos inscritos MADRID, June 5, 2024, https://www.boe.es/borme/dias/2024/07/05/pdfs/BORME-A-2024-129-28.pdf.
    165    “Turkey: Pars Defense, Turkey’s Zero-Day Champion,” Intelligence Online, February 15, 2024. https://www.intelligenceonline.com/surveillance–interception/2024/02/15/pars-defense-turkey-s-zero-day-champion,110159845-art; Graham Cluley, “Was Ibrahim Balic the Man Who ‘Hacked’ Apple’s Developer Center?” (author blog), July 22, 2013, https://grahamcluley.com/was-this-the-man-who-hacked-apples-developer-center/.
    166    Shubham Bhandari, “Google Links Over 60 Zero-Days to Commercial Spyware Vendors,” LinkedIn (post), February 7, 2024, https://www.linkedin.com/pulse/google-links-over-60-zero-days-commercial-spyware-vendors-bhandari-oxvnc/.
    167    “UAE: Abu Dhabi’s Protect Takes over DarkMatter’s Cyber-Offensive Role,” Intelligence Online, May 27, 2019, https://www.intelligenceonline.com/international-dealmaking/2019/05/27/abu-dhabi-s-protect-takes-over-darkmatter-s-cyber-offensive-role,108358798-art; ”Buying Spying: Insights into Commercial Surveillance Vendors,” Google, February 2024, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/Buying_Spying_-_Insights_into_Commercial_Surveillance_Vendors_-_TAG_report.pdf.
    168    “UAE: Digital14 Picks up Darkmatter’s Key Activities, Including the Vulnerabilities Researcher xen1thLabs,” Intelligence Online, January 21, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/01/21/digital14-picks-up-darkmatter-s-key-activities-including-the-vulnerabilities-researcher-xen1thlabs,109636378-gra.
    169    Lorenzo Franceschi-Bicchierai, “Spyware Startup Variston May Be Shutting Down,” Techcrunch, Business & Human Rights Resource Centre, February 15, 2024, https://www.business-humanrights.org/en/latest-news/spyware-startup-variston-may-be-shutting-down/.
    170    “Buying Spying.”
    171    “Rebsec Solutions Company Profile (Overview),” Tracxn, accessed June 12, 2024, https://tracxn.com/d/companies/rebsec-solutions/__fwWl5NbcY9wydDJGa_LhP1Fo0thK_R070rY_RyUH1BE.
    172    Andy Greenberg, “Meet The Hackers Who Sell Spies the Tools To Crack Your PC (And Get Paid Six-Figure Fees),” Forbes, March 21, 2012, https://www.forbes.com/sites/andygreenberg/2012/03/21/meet-the-hackers-who-sell-spies-the-tools-to-crack-your-pc-and-get-paid-six-figure-fees/?sh=377debb81f74; Charlie Osborne, “NSA Purchased Zero-Day Exploits from French Security Firm Vupen,” ZDNET/Tech, September 18, 2013, https://www.zdnet.com/article/nsa-purchased-zero-day-exploits-from-french-security-firm-vupen/.
    173    “VUPEN SECURITY (Company Profile): Fermée définitivement [Closed permanently], Chiffre d’affaires [key figures],” Societe (Paris, France), accessed July 3, 2024, https://www.societe.com/societe/vupen-security-478502123.html#chiffrecle.
    174    “ZERODIUM LLC (Company Profile),” OpenCorporates, accessed July 3, 2024. https://opencorporates.com/companies/us_de/5811248.
    175    Lily Hay Newman, “Zerodium Zero Day iOS Bounty Is Now $1.5 Million,” Wired, September 29, 2016, https://www.wired.com/2016/09/top-shelf-iphone-hack-now-goes-1-5-million/.
    176    Gintaras Radauskas, “OpZero Raises Stakes in Zero-Day Exploit Market,” Cybernews, November 15, 2023 [update], https://cybernews.com/news/opzero-zero-day-exploit-market-pricing-russia/.
    177    “Zerodium (Company Profile),” Info Security Index (Infosecindex), accessed July 3, 2024. https://infosecindex.com/companies/zerodium/.
    178    Franceschi-Bicchierai, Lorenzo, “This Leaked Catalog Offers ‘Weaponized Information’ That Can Flood the Web,” Vice (blog), September 2, 2016, https://www.vice.com/en/article/d7ywvx/leaked-catalog-weaponized-information-twitter-aglaya.
    179    Thomas Brewster, “Meet The ‘Cowboys of Creepware’– Selling Government-Grade Surveillance to Spy on Your Spouse,” Forbes, March 14, 2017, https://www.forbes.com/sites/thomasbrewster/2017/02/16/government-iphone-android-spyware-is-the-same-as-seedy-spouseware/?sh=71933002455c.
    180    “Amendment No. 1 to Form 20-F: Cognyte Software Ltd.,” January 13, 2021, https://www.sec.gov/Archives/edgar/data/1824814/000119312521008526/d52351d20fr12ba.htm.
    181    “Cognyte Software Ltd. (CGNT) DCF Valuation,” dcf.fm, accessed July 31, 2024, https://dcf.fm/products/cgnt.
    182    “Verint Systems Inc.,” International Directory of Company Histories, Encyclopedia.com, accessed July 31, 2024, https://www.encyclopedia.com/books/politics-and-business-magazines/verint-systems-inc.
    183    “Verint Systems Inc.”
    184    “Verint Systems Inc.”
    185    “Verint Systems Inc.”
    186    “Comverse Technology,” Wikipedia, accessed July 31, 2024, .https://en.wikipedia.org/wiki/Comverse_Technology.
    187    Censored Israeli Software Spying On US Am Docs Comverse Infosys Carl Cameron Dec 2001, 2013, http://archive.org/details/CensoredIsraeliSoftwareSpyingOnUSAmDocsComverseInfosysCarlCameronDec2001.
    188    “Former Comverse CEO Agrees to $53 Million Settlement of Options Backdating Charges (Press Release No. 2010-232; November 23, 2010,” accessed July 31, 2024, https://www.sec.gov/news/press/2010/2010-232.htm.
    189    Dvilyanski, Agranovich, and Gleicher, “Threat Report on the Surveillance-for-Hire Industry.”
    190    Dvilyanski, Agranovich, and Gleicher, “Threat Report on the Surveillance-for-Hire Industry.”
    191    “Recommendation to Exclude Cognyte Software › from Investment by the Norwegian Government Pension Fund Global (GPFG)” (Council on Ethics, The Government Pension Fund Global, June 17, 2022), https://files.nettsteder.regjeringen.no/wpuploads01/sites/275/2022/12/Rec-Cognyte-ENG.pdf.
    192    Satter and Bing, “How Mercenary Hackers Sway Litigation Battles.”
    193    Satter and Bing, “How Mercenary Hackers Sway Litigation Battles.”
    194    “Cyber Root Limited (Company Profile),” Gov.UK (UK Department for Business & Trade: Companies House), accessed July 11, 2024, https://find-and-update.company-information.service.gov.uk/company/14414734.
    195    “Business registers-search for a company in the EU,” European-Justice, Accessed July 11, 2024, https://e-justice.europa.eu/489/EN/business_registers__search_for_a_company_in_the_eu
    196    “SIO follows Europe cyber offensive consolidation trend with Asingit acquisition.” Intelligence Online, March 3, 2022, https://www.intelligenceonline.com/surveillance–interception/2022/03/03/sio-follows-european-cyber-offensive-consolidation-trend-with-asingit-acquisition,109737657-art.
    197    “Business registers-search for a company in the EU,” European-Justice, Accessed July 11, 2024, https://e-justice.europa.eu/489/EN/business_registers__search_for_a_company_in_the_eu
    198    Andre Meister, “We reval the state trojan “SubZero” from Austria,” Netzpolitik, December 17, 2021, https://netzpolitik.org/2021/dsirf-wir-enthuellen-den-staatstrojaner-subzero-aus-oesterreich/.
    199    “MLS Machine Learning Solutions,” North Data, Accessed July 30, 2024, “https://www.northdata.com/MLS+Machine+Learning+Solutions+GmbH,+Wien/521402v.
    200    “Targeted for Russian ties, cyber intelligence firm DSIRF shuts up shop,” Intelligence Online, August 28, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/08/28/targeted-for-russian-ties-cyber-intelligence-firm-dsirf-shuts-up-shop,110036360-art
    201    “Targeted for Russian ties, cyber intelligence firm DSIRF shuts up shop,” Intelligence Online, August 28, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/08/28/targeted-for-russian-ties-cyber-intelligence-firm-dsirf-shuts-up-shop,110036360-art
    202    “Company Register,” accessed July 3, 2024, https://www.unternehmensregister.de/ureg/result.html;jsessionid=635E5C635A2A17FAAE4DB7AB9D7547DB.web01-1.; Feldstein and Kot, “Why Does the Global Spyware Industry Continue to Thrive?”
    203    Feldstein and Kot, “Why Does the Global Spyware Industry Continue to Thrive?”
    204    “Finfisher Ceases Business Operations Following Criminal Complaint against Illegal Export of Surveillance Software,” Business & Human Rights Resource Centre, March 28, 2022, https://www.business-humanrights.org/en/latest-news/finfisher-ceases-business-operations-following-criminal-complaint-against-illegal-export-of-surveillance-software/; Andre Meister, “State Trojan Manufacturer FinFisher ‘Is Closed and Will Remain So,’” Netzpolitik.org, March 28, 2022, https://netzpolitik.org/2022/nach-pfaendung-staatstrojaner-hersteller-finfisher-ist-geschlossen-und-bleibt-es-auch/.
    205    Veřejný Rejstřík a Sbírka Listin – InvaSys a.s.” [Public Register and Collection of Deeds – InvaSys a.s.], eJustice ( Ministry of Justice of the Czech Republic), accessed July 3, 2024, https://or.justice.cz/ias/ui/vypis-sl-detail?dokument=47757423&subjektId=967334&spis=1068597.
    206    “Invasys: Solutions,” Invasys a.s., accessed July 3, 2024, https://www.invasys.com/solutions/.
    207    Omer Benjakob, “At Defense and Arms Expo, Israeli Cyber Is Out, but Surveillance Tech in,” Haaretz, December 8, 2023, https://www.haaretz.com/israel-news/security-aviation/2023-12-08/ty-article/.premium/at-defense-and-arms-expo-israeli-cyber-is-out-but-surveillance-tech-in/0000018c-49da-db23-ad9f-69da26e10000.
    208    “Veřejný Rejstřík a Sbírka Listin – InvaSys a.s.” [Public Register and Collection of Deeds – InvaSys a.s.].
    209    “Veřejný Rejstřík a Sbírka Listin – InvaSys a.s.” [Public Register and Collection of Deeds – InvaSys a.s.].
    210    “Leo Impact Security Services Private Limited (Company Profile),” Zaubacorp (Zauba Technologies), July 9, 2024, https://www.zaubacorp.com/company/LEO-IMPACT-SECURITY-SERVICES-PRIVATE-LIMITED/U72900RJ2009PTC028837.
    211    “Cyber offensive firm Leo Impact competing with Aglaya for greater share in surveillance domain,” Medium (blog), June 22, 2023, https://mahdiabbastech.medium.com/cyber-offensive-firm-leo-impact-competing-with-aglaya-for-greater-share-in-surveillance-domain-965187dff2d.
    212    “Leo Impact Security s.r.o.”, Ministry of Justice of the Czech Republic, Accessed July 11, 2024, https://or.justice.cz/ias/ui/rejstrik-firma.vysledky?subjektId=389677&typ=UPLNY
    213    “The Tech Times: Mollitiam Gets New CEO, Paolo Stagno Joins Crowdfense, Whooster Assists US Secret Service,” Intelligence Online, January 11, 2024, https://www.intelligenceonline.com/surveillance–interception/2024/01/11/mollitiam-gets-new-ceo-paolo-stagno-joins-crowdfense-whooster-assists-us-secret-service,110136850-art.
    214    Bruce Schneier, “Mollitiam Industries Is the Newest Cyberweapons Arms Manufacturer,” Schneier on Security (author blog), June 23, 2021, https://www.schneier.com/blog/archives/2021/06/mollitiam-industries-is-the-newest-cyberweapons-arms-manufacturer.html.
    215    “Europe, Israel: Excem, Israeli Cyber’s Bridgehead in Spain,” Intelligence Online, May 20, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/05/20/excem-israeli-cyber-s-bridgehead-in-spain,109667518-art.
    216    “Centre for the Development of Industrial Technology (Company Profile),” Crunchbase, accessed July 3, 2024, https://www.crunchbase.com/organization/centre-for-the-development-of-industrial-technology-cdti.
    217    “ERDF Pluri-Regional Operational Programmes,” DGFE (Spain Directorate General for European Funds), accessed July 3, 2024. https://www.fondoseuropeos.hacienda.gob.es/sitios/dgfc/en-GB/loFEDER1420/poplFEDER/Paginas/inicio.aspx.
    218    “ERDF Pluri-Regional Operational Programmes”; “Mollitiam Industries (Company Profile: Valuation and Funding),” PitchBook,” accessed July 3, 2024, https://pitchbook.com/profiles/company/462012-40.
    219    Patrick Howell O’Neill, “ISS World: The Traveling Spyware Roadshow for Dictatorships and Democracies,” CyberScoop, June 20, 2017, https://www.cyberscoop.com/iss-world-wiretappers-ball-nso-group-ahmed-mansoor/. ; “Italy: Italian Cyber Intelligence Specialist Movia Goes Global,” Intelligence Online, November 8, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/11/08/italian-cyber-intelligence-specialist-movia-goes-global,110085487-art.
    220    “Italian Cyber Intelligence Specialist Movia Goes Global.”
    221    “Italian Cyber Intelligence Specialist Movia Goes Global.”
    222    Marco Bova, “Una ditta di intercettazioni nelle indagini sul sistema Montante” [A wiretapping firm in the investigations on the Montante system], L’Espresso, August 9, 2022, https://lespresso.it/c/attualita/2022/8/9/una-ditta-di-intercettazioni-nelle-indagini-sul-sistema-montante/12796.
    223    “Negg Group© (Company Profile),” Crunchbase, accessed July 3, 2024, https://www.crunchbase.com/organization/negg.
    224    Nikita Buchka and Alexey Firsh, “Skygofree: Following in the Footsteps of HackingTeam,” Securelist, January 16, 2018, https://securelist.com/skygofree-following-in-the-footsteps-of-hackingteam/83603/.
    225    Buchka and Firsh, “Skygofree: Following in the Footsteps of HackingTeam.”
    226    Buchka and Firsh, “Skygofree: Following in the Footsteps of HackingTeam.”
    227    Ben Nimmo et al., “Adversarial Threat Report: Countering the Surveillance-for-Hire Industry & Influence Operations,” 2023, https://about.fb.com/wp-content/uploads/2023/06/Meta-Quarterly-Adversarial-Threat-Report-Q1-2023.pdf.
    228    “Negg® Group | Find Us.” n.d. Accessed July 3, 2024. https://www.negg.group/offices.
    229    “Negg International B.V. (Company Profile),” OpenCorporates,” accessed July 3, 2024, https://opencorporates.com/companies/nl/80409644; “Italy: Italian Intelligence Provider Negg Makes Entrance at ISS World Exhibition,” Intelligence Online, September 1, 2022, https://www.intelligenceonline.com/surveillance–interception/2022/09/01/italian-intelligence-provider-negg-makes-entrance-at-iss-world-exhibition,109808500-art.
    230    “Voucher Digitalizzazione, Elenco Cumulativo Dei Soggetti Beneficiari – Regione Calabria” [Digitization Voucher, Cumulative list of beneficiaries – Calabria region], MIMIT (Italy: Ministry of Enterprises and Made in Italy, formerly the Ministry of Economic Development), September 9, 2014, https://www.mimit.gov.it/images/stories/normativa/Allegato_A_-_Calabria.pdf; “Digitization Vouchers 2021: We Help You Get Them,” Digitalics Innovation, accessed July 3, 2024, https://digitalicsinnovation.com/en/voucher-digitalizzazione-2021-innovazione-aziendale/.
    231    “Negg® Group – Investors,” accessed July 3, 2024. https://www.negg.group/investors/overview.
    232    “Commerce Adds NSO Group and Other Foreign Companies to Entity List for Malicious Cyber Activities,” U.S. Department of Commerce, Accessed July 28, 2024, https://www.commerce.gov/news/press-releases/2021/11/commerce-adds-nso-group-and-other-foreign-companies-entity-list.
    233    “CHINA/SINGAPORE/UNITED STATES : Blacklisted by the US, Zero Day Distributor COSEINC Works on for China’s Pwnzen – 08/11/2021.” 2024. Intelligence Online. July 3, 2024. https://www.intelligenceonline.com/surveillance–interception/2021/11/08/blacklisted-by-the-us-zero-day-distributor-coseinc-works-on-for-china-s-pwnzen,109703349-art.; “The United States Adds Foreign Companies to Entity List for Malicious Cyber Activities – United States Department of State.” n.d. Accessed July 3, 2024. https://www.state.gov/the-united-states-adds-foreign-companies-to-entity-list-for-malicious-cyber-activities/.
    234    “Spotlight / China, Russia: Huawei Hired Top Researchers from Russia’s US-Sanctioned NeoBit,” Intelligence Online, June 18, 2021, https://www.intelligenceonline.com/corporate-intelligence/2021/06/18/huawei-hired-top-researchers-from-russia-s-us-sanctioned-neobit,109674074-eve.
    235    Cy4Gate S.p.A., “Press Reports,” press note, June 24, 2022, https://www.cy4gate.com/assets/Uploads/CS-CY4gate-Nota-stampa-RCS.pdf.
    236    “Re: PAF and PN,” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024, https://wikileaks.org/hackingteam/emails/emailid/599145; “RE: Proposal GD6 (via CNC),” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024, https://wikileaks.org/hackingteam/emails/emailid/16869; “Re: HT & RCS cooperation,” WikiLeaks (Hacking Team srl Archive), accessed July 3, 2024. https://wikileaks.org/hackingteam/emails/emailid/567762.
    237    “Lookout Uncovers Hermit Spyware Deployed in Kazakhstan,” Lookout Threat Intelligence, June 16, 2022, https://www.lookout.com/threat-intelligence/article/hermit-spyware-discovery.
    238    “The Cy4Gate Group – Corporate Data of the Parent Company,” Cy4Gate S.p.A., n.d., https://www.cy4gate.com/assets/Uploads/Consolidated-Financial-Statement-CY4Gate-Group-30.6.2022-ENG-Courtesy-copy.pdf.
    239    ”Cy4Gate S.P.A. Interim Financial Report,” Cy4Gate S.p.A., June 30, 2023, https://www.cy4gate.com/assets/Uploads/Interim-Financial-Report-as-at-30-June-2023.pdf.
    240    Clement Lecigne and Benolt Sevens, “New Details on Commercial Spyware Vendor Variston,” Google Threat Analysis Group, November 30, 2022, https://blog.google/threat-analysis-group/new-details-on-commercial-spyware-vendor-variston/.
    241    “Europe: German Ralf Wegener Builds Small Cyber-Intelligence Empire in Cyprus and Beyond,” Intelligence Online, October 22, 2021, https://www.intelligenceonline.com/surveillance–interception/2021/10/22/german-ralf-wegener-builds-small-cyber-intelligence-empire-in-cyprus-and-beyond,109700415-gra.
    242    “Europe: Commercial Cyber Bosses Ralf Wegener and Ramanan Jayaraman Operate Singapore-Based Nanostrea,” Intelligence Online, March 3, 2023, https://www.intelligenceonline.com/surveillance–interception/2023/03/03/commercial-cyber-bosses-ralf-wegener-and-ramanan-jayaraman-operate-singapore-based-nanostrea,109919867-art.
    243    Lecigne and Sevens, “New Details on Commercial Spyware Vendor Variston.”
    244    “Ex-Variston zero day experts regroup at Paradigm Shift,” Intelligence Online, May 15, 2024, https://www.intelligenceonline.com/surveillance–interception/2024/05/15/ex-variston-zero-day-experts-regroup-at-paradigm-shift,110226089-art This section has been updated to clarify the description of recent reporting on Variston Information Technology.
    245    Sourcing from Positive Technologies website indicates there are branches of the company in South Korea and Tunisia. However, the authors were unable to find corporate registrations of these companies in these jurisdictions and thus they are not included in the dataset.

    The post Mythical Beasts and where to find them: Mapping the global spyware market and its threats to national security and human rights appeared first on Atlantic Council.

    ]]> Mythical Beasts and where to find them: Data and methodology https://www.atlanticcouncil.org/in-depth-research-reports/report/mythical-beasts-and-where-to-find-them-data-and-methodology/ Wed, 04 Sep 2024 22:24:00 +0000 https://www.atlanticcouncil.org/?p=817981 Learn more about the methodology and dataset behind Mythical Beasts and where to find them: Mapping the global spyware market and its threats to national security and human rights

    The post Mythical Beasts and where to find them: Data and methodology appeared first on Atlantic Council.

    ]]>

    Mythical Beasts and where to find them: Mapping the global spyware market and its threats to national security and human rights is concerned with the commercial market for spyware and provides data on market participants. Focusing on the market does not presume that all harms from spyware stem from how it is acquired, or whether that acquisition is a commercial transaction with a third party (versus developed “in-house” by the customer). Some definitions of spyware differentiate it by the means with which it is acquired, creating confusion over the fundamental distinction between “spyware” and, for instance, “commercial spyware.”1

    Spyware: is a type of malicious software that facilitates unauthorized remote access to an internet-enabled target device for purposes of surveillance or data extraction.2 Spyware is sometimes referred to as “commercial intrusion [or] surveillance software” with effectively the same meaning.3 This research considers the “tools, vulnerabilities, and skills, including technical, organizational, and individual capacities” as part of the supply chain for spyware and the meaningful risks posed by the proliferation of many of these components.4

    … so “commercial” spyware?

    Transactions across the spyware market may be less regulated than in-house development of spyware but they are far from the only source of harm and insecurity. Policies that seek only to mitigate harms from the commercial sale of these capabilities risk ignoring their wider harms and avoid the opportunity to address fundamental concerns over surveillance and the full spectrum of government uses of these technologies.

    The debate over what constitutes legitimate uses of spyware is ongoing, but commercial sale is a poor proxy for the degree of responsible or mature use. History has shown that this market is only one, albeit significant, part of a wider proliferation challenge.5Many human rights violations associated with spyware occur in the context of their use for state security purposes (e.g., by intelligence agencies), highlighting the diverse harms and risks posed by the proliferation of spyware. These include what some researchers have termed “vertical” uses (by states against their own populations) and “diagonal” uses (against the population of other states, including diaspora).6 There is some normative loading in the term “spyware” vs. the more functional “malware” or the rather impenetrable “commercial intrusion capabilities” but is beneficial to have a common term of art in many of these debates.

    This report and accompanying dataset are mainly inclusive of investigations into vendors and suppliers that have been found selling spyware to governments across the world that have then used this software to abuse human rights. However, this is only one side of the coin. Far less data exists on the use of spyware for a myriad of intelligence and counterintelligence purposes, including “national security” missions both genuine and troubling. The report cannot resolve these tensions but does seek to frame them in service of a more immediate and practical purpose—and a better understanding of the market that provides the software tools and services to carry out these acts.

    Commercial acquisition of spyware is not the root cause of its abuse. While this project is focused on bringing transparency to participants in the spyware market, it does not argue that only transactions through this market pose proliferation risks or harms.7To avoid further confusion in both analysis and policy the authors do not include the term “commercial” in the definition of spyware. While the debate continues about how to manage these risks, this project sheds better light on those buying, selling, and supporting this market.

    A final note on scope

    Spyware works without the consent or knowledge of the target or others with access to the target’s device; thus, this report does not consider the market for so-called “stalkerware,” which generally requires physical interaction from an individual, most often a spouse or partner, with access to a user’s device.8 This definition also excludes software that never gains access to a target device, such as surveillance technologies that collect information on data moving between devices over wired (i.e., packet inspection or “sniffing”) or wireless connections. This definition also excludes hardware such as mobile intercept devices, known as IMSI catchers, and any product requiring close or physical access to a target device, such as forensic tools.9

    This definition is limited, by design, to disentangle lumping various other surveillance toolsets into the definition of spyware.

    Building the dataset

    This dataset represents a meaningful sample of the market for spyware vendors, but it is not a complete record and this report can only speak to trends and patterns within this data, not the market as a whole. The data is confined to entities for which there is a public record (i.e. registered businesses) and for which public information links the vendor to the development or sale of spyware or its components.10

    To develop a list of vendors, the authors started by creating an initial “most visible” list of those with the widest public exposure from the use of their wares, relying principally on public reporting from Amnesty International, Citizen Lab, and the Carnegie Endowment for International Peace, as well as public reporting from a variety of news outlets. This initial set of vendors was the starting point for searching public corporate registries and a mix of public and private-sector corporate databases to profile each company in greater depth and find additional connections.

    All the vendors identified through this process were included if they 1) publicly advertised products or services that matched the above definition of spyware, 2) were described as selling the same products by public reporting in the media or by civil society researchers, or 3) showed evidence of the products through court records, leaks, or similar internal documentation. As part of this search process, the team gathered records on subsidiaries and branches associated with each vendor, their publicly disclosed investors, and, where possible, named suppliers.

    Each entity identified in this process was identified by at least two different open sources. In all cases for which data is available, the dataset includes vendor activities from the start of operation until 2023, or until records indicate that the vendor’s registration had ceased in a jurisdiction. The sources of public information on both firms’ activities and their organization varied but largely stemmed from different forms of corporate registration, records, and transaction data.

    Government-Run Corporate Registries: perhaps the most useful type of source for the project to collect credible information on formal names, jurisdictions, and directors and run by respective governments. Perhaps the most comprehensive type of sourcing the authors found and could point to investor relationships. An example of corporate registration is the EU business registers, where researchers can look up if a business entity is registered in any European Union jurisdiction. Corporate registries were utilized to determine jurisdictions and legal registered name of a label, this type of source can be found in most of the rows. 
     
    Court Records: a resource that provided important names, dates, and relationships. Useful for certain vendors, like Appin Security Group, which was involved in a case before the Additional District Judge of the Rohini court in India that issued a summons to eight people associated with Reuters. 
     
    Opencorporates: a resource that pointed to corporate registries and linked to where a company by name was registered in based off jurisdiction and whether the information was publicly available. This source was typically credible but required very specific search terms to yield appropriate results. For example, if an entity has multiple names, it was recommended to search by its various names for more comprehensive results. This source was useful for finding jurisdictions, government databases, individuals, and dates. It was less useful for name changes and investor data. For example, the authors were able to find Dream Security, a company founded by an individual who also founded NSO Group, Shaliev Hulio, through Opencorporates. 
     
    Pitchbook: a paywalled resource for initial research and trying to determine how an entity describes itself. This source was useful for initial scoping of entities to add to the dataset but limited in scope. Some investors in the dataset were initially found in pitchbook, but further research was needed to determine the extent of their relationship with a vendor or supplier. 
     
    Crunchbase: a resource for initial research and trying to determine how an entity describes itself. Most content is behind a paywall. This source was useful for the initial scoping of entities to add to the dataset. Some investors in the dataset were initially found in pitchbook, but further research was needed to determine the extent of their relationship with a vendor or supplier. 
     
    Zaubacorp: a resource for Indian registered records. This source was useful for compiling government records across various government databases into one common searchable space. Its limitation was in the constrained jurisdiction it covered. For example, some of the individuals we discovered for CyberRoot Risk Advisory Private Limited were listed in this database, as well as holding companies like Wynard India Private Limited and its connection to Appin Security Group. 
     
    News Media: These were taken as mostly credible if the news outlet itself is reliable, but ideally with a secondary source supporting specific information claimed, especially if the news source itself did not link to external or additional sources within itself. More credible sources tended to point to corporate registrations supplied by governments or the entity in question. This type of sourcing was useful for leads in terms of finding alternative company names, dates for any activity, and finding new entities to incorporate into CSI’s dataset. News sources also served as a good start for finding investor data when accompanied by government sourcing. 
     
    Leaked Materials: a source for internal communications, advertisements, pricing, and establishing partner relationships. Particularly useful in mapping the vendors Memento Labs srl (formerly known as Hacking Team srl) and Gamma Group and mapping partnerships like that between Memento Labs srl and RCS Labs e.g. “Hacking Team Source Dump Map”. 

    Defining entities in the spyware market 

    Disclaimer on sources

    More information: All sources for this dataset are open-source and were publicly available at the time of writing. For more on the kinds of data used in this project, see here. We are aware that some links have broken or been removed, and a handful of sources have been taken down in the wake of court orders. We are unable to replace, or host, copyrighted material. For any questions on sourcing, please email cyber@atlanticcouncil.org.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “Prohibition on Use by the United States Government of Commercial Spyware That Poses Risks to National Security,” Federal Register, March 30, 2023, https://www.federalregister.gov/documents/2023/03/30/2023-06730/prohibition-on-use-by-the-united-states-government-of-commercial-spyware-that-poses-risks-to.
    2    “Unauthorized” access separates spyware from myriad other services or tools that might be used to effectuate similar surveillance but which require a user’s consent at some stage e.g. downloading an application from a mobile phone app store.
    3    50 U.S. Code § 3232a – Measures to mitigate counterintelligence threats from proliferation and use of foreign commercial spyware, https://www.law.cornell.edu/uscode/text/50/3232a.
    4    Winnona DeSombre et al., “A Primer on the Proliferation of Offensive Cyber Capabilities” (Atlantic Council, March 1, 2021), https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/a-primer-on-the-proliferation-of-offensive-cyber-capabilities/.
    5     Read more about the ‘breakout’ of “offensive capabilities like EternalBlue, allegedly engineered by the United States, used by Russian, North Korean, and Chinese governments” (DeSombre et al., Countering Cyber Proliferation). See also Gil Baram, “The Theft and Reuse of Advanced Offensive Cyber Weapons Pose a Growing Threat,” Council on Foreign Relations (blog), June 19, 2018, https://www.cfr.org/blog/theft-and-reuse-advanced-offensive-cyber-weapons-pose-growing-threat; Insikt Group, “Chinese and Russian Cyber Communities Dig Into Malware From April Shadow Brokers Release,” Recorded Future (blog), April 25, 2017, https://www.recordedfuture.com/shadow-brokers-malware-release/; Leo Varela, “EternalBlue: Metasploit Module for MS17-010,” Rapid7, May 19, 2017, https://blog.rapid7.com/2017/05/20/metasploit-the-power-of-the-community-and-eternalblue/.
    6    Herb Lin and Joel P. Trachtman, ”Using International Export Controls to Bolster Cyber Defenses,” Protecting Civilian Institutions and Infrastructure from Cyber Operations: Designing International Law and Organizations, Center for International Law and Governance, Tufts University, September 10, 2018, https://sites.tufts.edu/cilg/files/2018/09/exportcontrolsdraftsm.pdf.
    7    As argued in previous work published by the Atlantic Council, proliferation “presents an expanding set of risks to states and challenges commitments to protect openness, security, and stability in cyberspace. The profusion of commercial OCC vendors, left unregulated and ill-observed, poses national security and human rights risks. For states that have strong OCC programs, proliferation of spyware to state adversaries or certain non-state actors can be a threat to immediate security interests, long-term intelligence advantage, and the feasibility of mounting an effective defense on behalf of less capable private companies and vulnerable populations. The acquisition of OCC by a current or potential adversary makes them more capable” (See: Winnona DeSombre et al, Countering Cyber Proliferation).
    8    “Stalkerware: What to Know,” Federal Trade Commission, May 10, 2021, https://consumer.ftc.gov/articles/stalkerware-what-know.
    9    IMSI catchers are also referred to as “Stingrays” after the Harris Corporation’s eponymous product line; Amanda Levendowski, “Trademarks as Surveillance Technology,” Georgetown University Law Center, 2021, https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=3455&context=facpub.
    10    For more see: Winnona DeSombre et al., A Primer on the Proliferation of Offensive Cyber Capabilities, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/a-primer-on-the-proliferation-of-offensive-cyber-capabilities/.

    The post Mythical Beasts and where to find them: Data and methodology appeared first on Atlantic Council.

    ]]>
    Mythical Beasts and where to find them https://www.atlanticcouncil.org/in-depth-research-reports/report/mythical-beasts-and-where-to-find-them/ Wed, 04 Sep 2024 22:19:00 +0000 https://www.atlanticcouncil.org/?p=817978 Mythical Beasts and Where to Find Them: Mapping the Global Spyware Market and its Threats to National Security and Human Rights is concerned with the commercial market for spyware and provides data on market participants.

    The post Mythical Beasts and where to find them appeared first on Atlantic Council.

    ]]>
    Mythical Beasts and where to find them is an ongoing project by the Atlantic Council’s Cyber Statecraft Initiative and American University’s Center for Security, Innovation, and New Technology documenting the proliferation of spyware. The ‘mythical beasts’ here refers to the common practice of vendors in the spyware market taking names from fantasy and mythology for their products.

    The project currently consists of an interactive and an in-depth report based on a dataset collected by the Atlantic Council and American University team.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post Mythical Beasts and where to find them appeared first on Atlantic Council.

    ]]>
    McCue co-authors chapter in edited volume on US and Indian approaches to nuclear security challenges https://www.atlanticcouncil.org/insight-impact/in-the-news/mccue-co-authors-chapter-in-edited-volume-on-us-and-indian-approaches-to-nuclear-security-challenges/ Wed, 21 Aug 2024 19:04:27 +0000 https://www.atlanticcouncil.org/?p=786547 On August 13, 2024, Forward Defense Visiting Senior Fellow Lieutenant Colonel James McCue co-authored a chapter in a new book published by Springer titled The Challenges of Nuclear Security: U.S. and Indian Perspectives.

    The post McCue co-authors chapter in edited volume on US and Indian approaches to nuclear security challenges appeared first on Atlantic Council.

    ]]>

    On August 13, 2024, Forward Defense Nonresident Senior Fellow Lieutenant Colonel James McCue, USAF (ret.) co-authored a chapter in a new book entitled The Challenges of Nuclear Security: U.S. and Indian Perspectives. The book was published as part of the “Initiatives in Strategic Studies: Issues and Policies” series by the US Naval Postgraduate School. The volume assembles experts on US and Indian nuclear security to analyze six issues critical to the “safety and security of nuclear facilities, technologies, and materials.” These issues include insider threats, organizational culture, emergency response, physical protection, control of radioactive sources, and cybersecurity.

    McCue co-wrote the chapter on “Physical Protection of Nuclear Facilities and Materials” with Anil Kumar from the Indian Department of Atomic Energy (retired) and Alan Evans from Sandia National Laboratories.

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    The post McCue co-authors chapter in edited volume on US and Indian approaches to nuclear security challenges appeared first on Atlantic Council.

    ]]>
    AI in cyber and software security:  What’s driving opportunities and risks? https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/ai-in-cyber-and-software-security-whats-driving-opportunities-and-risks/ Mon, 19 Aug 2024 20:14:00 +0000 https://www.atlanticcouncil.org/?p=817512 This issue brief discusses the drivers of evolving risks and opportunities presented by generative artificial intelligence (GAI), particularly in cybersecurity, while acknowledging the broader implications for policymakers and for national security.

    The post AI in cyber and software security:  What’s driving opportunities and risks? appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Abstract

    This paper discusses rapid advancements in artificial intelligence (AI), focusing on generative artificial intelligence (GAI) and its implications for cybersecurity and policy. As AI technologies evolve, they present both opportunities and risks, necessitating some understanding of what drives each. This is crucial not only for harnessing AI’s capabilities in cybersecurity—where AI can both defend against and potentially enhance cyber threats—but also in considering broader national security implications. Throughout, the issue brief highlights the importance of acknowledging the long history and varied paradigms within AI development. It also emphasizes the need to consider how AI technologies are integrated into larger software systems and the unique risks and opportunities this presents. Finally, the brief calls for a more nuanced understanding of AI’s impact across different sectors. 

    Introduction 

    The rapid pace of technological improvement and the resulting groundswell of innovation and experimentation in artificial intelligence (AI) has prompted a parallel conversation in policy circles about how to harness the benefits and manage the potential risks of these technologies. Open questions in this conversation include how to map or taxonomize the set of known risks, how to assign responsibility to different actors in the ecosystem to address these risks, and how to build policy structures that can adapt to manage “unknown unknowns” (e.g., AI-related risks that are hard to predict at present). Then, add in the question of how to do all of the above while preserving some essential abilities: the broader public’s to express their preferences, the research community’s to innovate, and industry’s to commercialize responsibly. Each of these will be a foundation for realizing the potential benefits of generative artificial intelligence (GAI) innovations and preserving the US edge in AI development to the benefit of its economic productivity and security. 

    This report focuses on the risks and opportunities of AI in the cyber context. Current GAI systems have proven capabilities in writing and analyzing computer code, raising the specter of their usefulness to both cybersecurity defense and offense. Cybersecurity is, by its nature, an adversarial context in which operators of information systems compete against cybercriminals and nation-state hackers. Thus, if and when AI provides a “means” to improve cybersecurity capabilities, there will be no shortage of actors with “motives” to exploit these capabilities for good and ill. As critical infrastructure and government services alike increasingly rely on computing to deliver vital goods, cybersecurity questions are also increasingly questions of national security, raising the stakes for appraising both cyber opportunity and risk. 

    Cybersecurity is far from the only AI application that may create opportunity or risk. The harms of non-consensual intimate imagery and harassment, the manufacture of bioweapons, the integration of biased or flawed outputs into decision-making processes, or other areas of AI risk will take different forms and demand varying mitigations. The factors that drive risk and opportunity in the cyber context may provide useful insight across other contexts as well—the authors of this paper respectfully leave it to experts in those other fields to draw from its findings as much or as little as they suit. 

    An important note on scope: an all-too-frequent assumption in contemporary policy conversations is that AI is synonymous with GAI. Yet—as this paper later discusses—GAI is merely the latest and greatest innovation from a decades-old field in which different paradigms and approaches to crafting nonhuman intelligent systems have risen and fallen over time. This work focuses on capabilities shown—or suggested—by current AI systems, including GAI, because these examples provide a grounded basis for reasoning about AI capabilities and accompanying risks and opportunities. Where appropriate, the report mentions or considers other AI paradigms that could prove relevant to risk and opportunity in the cybersecurity context. The report weighs, as well, not just standalone models but also “AI systems” that involve AI models embedded into broader software systems, such as an AI model paired with a code interpreter or a Retrieval-Augmented Generation (RAG) system.1, 2

    Opportunities from AI in the cybersecurity context 

    In the broadest sense, the opportunities of AI in the cybersecurity context arise from their potential use to improve a defender’s lot in cybersecurity, whether by helping secure code or by helping make cybersecurity tasks easier or more efficient for defenders. Many of these opportunities arise from GAI models’ ability to read, analyze, and write code. 

    A. Finding and fixing vulnerabilities in code 

    AI models that can detect vulnerabilities in software code—and, ideally, propose solutions—could benefit cybersecurity defenders by helping them scan code to find—and fix—vulnerabilities before malicious actors can exploit these. AI tools that could find significantly more vulnerabilities than existing tools, such as static analysis or fuzzing tools, could improve programmers’ ability to run checks over their code before merging it or building it, preventing the deployment of vulnerable code to customers. Using these tools on existing codebases will create more challenges since applications may necessitate asking customers to patch or upgrade their code. These tools might be particularly valuable in low-resource contexts in which developers do not have access to in-house security expertise or security code reviews, such as small businesses, nonprofits, and open-source maintainers. 

    Using AI to find vulnerabilities in code is an area of active research effort. For example, the Defense Advance Research Projects Agency (DARPA) and Advanced Research Projects Agency for Health (ARPA-H) are partners in the two-year AI Cyber Challenge (AIxCC) that asks participants to “design novel AI tools and capabilities” to help automate the process of vulnerability detection or other cyber defense activities.3 Right now, the open debate in this area is how good GAI models are at this task and how good they can become. One blog post from a small-business AIxCC semi-finalist said, ”our experiments lead us to believe real-world performance on code analysis tasks may be worse than current benchmarks can measure quantitatively.”4 Some benchmarks do exist, such as the CyberSecEval2 framework,5 developed by Meta—yet evidence offers mixed evaluations. The original authors of the CyberSecEval2 paper found “none” of the large language models (LLMs) “do very well on these challenges.”6 However, follow-on studies from the Project Zero security team at Google reported that they improved the performance of the LLMs through several principles, such as sampling and allowing the models access to tools, while still reporting that “substantial progress is still needed before these tools can have a meaningful impact on the daily work of security researchers.”7


    Drivers of opportunity 

    • Domain-specific capability (vulnerability identification): How good AI models are or could be at this task, especially compared to existing capabilities, such as fuzzing or static analysis tools. Any model that can identify vulnerabilities that current tools cannot find would have initial value as an improvement over today’s baseline. Greater efficiency benefits will emerge the more AI models work to minimize both false positives and false negatives, as this will make capabilities more effective and reduce the need for human review of detections. 
    • Integration with existing tools: The more development workflows integrate AI vulnerability-finding tools, such as embedded into build processes or as part of code-hosting platforms like GitHub, the easier it will be for these tools to help detect vulnerabilities before the merge and rollout of code to customers, making bugs easier and less costly to fix. 
    • Cost and availability: Free or low-cost AI models or model-based tools could be particularly useful for organizations or individuals without significant resources dedicated to security reviews, such as for use in small businesses or for open-source software packages. 
    • Education: Ensuring that organizations know how to use vulnerability-finding tools and how to integrate them into their development process can help ensure that, as these tools develop, their benefits flow to defenders and, in particular, to those in less-resourced areas. 

    B. Helping developers write more secure code 

    Closely related to the question of finding and fixing vulnerabilities in existing code is the idea that AI tools that help developers generate code could help improve the security of that code by ensuring that its suggestions are free from known vulnerabilities. Despite the longstanding knowledge that certain common-class vulnerability patterns are insecure, these have recurred in code over many years.8 Code-generating AI tools could potentially help avoid these patterns, either by training the underlying model to avoid insecure generations, such as through reinforcement learning from human feedback,9 or by filtering model outputs for known insecure code patterns. One factor influencing LLM efficacy in this context is the type of secure coding or vulnerability discovery task assigned. Some flaws require a significant volume of context and might exceed what an LLM can accept. In other instances, model benchmarks could point to a specific code segment to propose mitigations in conjunction with human review. 

    Experiments on some of these techniques are already in process; in 2023, GitHub announced that its CoPilot code assistant would now include an “AI-based vulnerability filtering system” to filter out code results containing known insecure code patterns, such as those vulnerable to Structured Query Language (SQL) or path injection or the use of hard-coded credentials.10 These tools could also have their use expanded to propose fixes—at a significantly greater speed than locating them—allowing for the integration of security review tooling based on LLMs into existing human development environments. 

    However, one should not assume that AI-generated code will be more secure, especially without further research and investment in this area. (The Risks section of this paper covers an early study indicating that the opposite may well have been true for one generation of LLMs.) Conducting security reviews of AI-generated code will likely require heavy human oversight limiting the throughput from even large-scale LLM deployments for software development. 

    The need exists for more evaluation and benchmarking to understand the security properties of AI-generated code, as compared to human code. This would offer developers and organizations defining information on how to integrate AI tools into their workflows, such as identifying contexts in which their use benefits security and pinpointing weaknesses or blind spots where developers should still thoroughly review AI-generated code for security flaws. For example, one could imagine using AI tools capable of identifying and avoiding common insecure patterns, such as a lack of input sanitization, but, consequently, might generate code with more subtle design or logic errors that create new vulnerabilities. 

    Drivers of opportunity 

    • Trustworthy AI outputs: A first, vital prerequisite is that AI-generated code improves upon the security of human-written code in relatively consistent ways (and without causing human developers to neglect security concerns in their code more than is currently the case). The security improvements of AI code need not be absolute across contexts—AI-generated code does not need to be better than the best cryptography expert to help the average developer avoid SQL injection attacks. Thus, additional clarity in how and when to trust AI-generated code with respect to security would help ensure its appropriate adoption in different contexts. In addition to being secure, AI code suggestions must, at least, be moderately helpful to developers, if only to buoy wider adoption of the suggestions (and their potential security benefits). 
    • Integration with existing tools: The more that code-generating tools coalesce with integrated development environments (IDEs) and other environments where programmers can use them as part of their development workflows, the more expansive their potential adoption, which will increase tool leverage on other information, such as the broader context of a project to more accurately assess the security implications of the code they generate. 
    • Cost and availability: Many small developers, including open-source software maintainers, may likelier use free or widely available tools rather than expensive proprietary solutions. Ensuring that low-cost model solutions have strong security protections for the code they generate—not just expensive or leading-edge models—could benefit these developers. 
    • Education: Educating developers on the best ways to use AI code-generating tools, as well as how to verify the security of generated code, could also help ensure that these tools roll out in ways that maximize their potential benefits. 

    C. Making sense of cybersecurity data

    In addition to using the code-analysis and code-generation features of AI to improve the security of software code, another relatively well-developed current use case for AI in cybersecurity is the idea of using AI to help with cybersecurity-relevant data processing. For example, AI tools could help sort through data generated by computer systems, such as system logs, to help identify or investigate cyberattacks by identifying anomalous behavior patterns and indicators. Likewise, AI tools could help process and analyze cyber threat intelligence or information about vulnerability disclosures to help defenders respond to this information and prioritize follow-up actions.11 These systems may incorporate generative AI but might also follow entirely separate AI paradigms, like supervised machine learning. 

    Drivers of opportunity 

    • Domain-specific capabilities (anomaly detection): The degree to which AI systems can correctly identify anomalies or other relevant information from system data. Both false negatives and false positives would be harmful in this situation, though false negatives, perhaps more so. 
    • Integration with existing data and tooling: How well can new AI solutions integrate with existing security tooling to access the panoply of data required to do anomaly detection? Is there adequate high-quality available to train these models in the first place? 
    • Cost and availability: Free or low-cost models or tools could be particularly useful for organizations or individuals without significant resources to operate their own security operations center (SOC) teams and similar. 
    • Education: Helping organizations, particularly those with fewer resources, understand how to use and configure these tools can help them harness the efficiencies—and avoid hoodwinking by tools that make big promises but then deliver little in terms of increased security. 

    D. Automation of other cybersecurity tasks 

    Beyond these well-developed categories, there are other examples of often neglected cybersecurity tasks, which, if improved or eased using AI, would provide benefits to security. One example is the failure to “timely” apply patches and version upgrades to software within a network. These patches and version upgrades often contain important security updates, but many organizations are slow to patch, whether due to resource constraints or negligence. Another related example is consistently upgrading dependencies in software packages to address upstream vulnerabilities. 

    Further afield suggestions include the idea of having AI systems, including agents, that can automate longer action sequences in cyber defense, such as systems that can identify an anomaly and then autonomously take action, such as quarantining affected systems. Such autonomy is likely beyond the capabilities of current GAI models, and some researchers have suggested creating “cyber gyms” to help train reinforcement learning agents for these kinds of tasks through trial and error.12

    Drivers of opportunity 

    • Trustworthiness: Once operators seek to delegate tasks to AI systems (rather than asking the system to make a suggestion for a human operator to action), it becomes more important to have a very good sense of the accuracy and robustness of the model. For example, an AI patch management system that can modify and control arbitrary elements of a corporate network requires high-level trust protocols that it will not take spurious or destructive actions. This contrasts with many of the other opportunities identified, which envision a human-in-the-loop.  
    • Openness and availability for experimentation: The more different researchers and organizations experiment with models of how to implement AI into the defensive cyber process, the more likely it becomes that a product or service of genuine value might emerge to help use LLMs to automate additional tasks in cybersecurity. 

    AI risks in the cybersecurity context 

    Broadly, the risks posed by AI in the cybersecurity context fall into at least two categories: risks from malicious misuse (e.g., the use of models to create outputs useful for malicious hacking) and risks to AI users arising from their well-intentioned use (e.g., cyber harms created when models generate incorrect or harmful outputs or take incorrect or harmful actions). Notably, this second category of risks to AI users tightly connects with many of the potential benefits outlined above. 

    A. Risks from malicious misuse: Hacking with AI 

    The broadest category of malicious misuse risks in the cyber context is the potential for malicious actors—whether high-capability entities like the United States, Israel, or Russia or the most lackadaisical cybercriminal—to use generative AI models to become more efficient or more capable hackers. 

    Previous work published by the Cyber Statecraft Initiative on this topic “deconstructs” this risk by breaking “hacking” into constituent activities and examining GAI’s potential utility for assisting with both making capable players better and bringing new malicious entrants into the space.13 It seems possible, and likely, that all kinds of hackers could use GAI tools for activities including reconnaissance or information gathering, as well as assistance with coding and script development. Indeed, OpenAI reported disrupting threat actors who were using their models to conduct research into organizations and techniques and tools, generate and debug scripts, understand publicly available vulnerabilities, and create material for phishing campaigns.14

    These risks are already here. What is less clear is whether or not these risks are acceptable and bearable. The OpenAI case shows that GAI is arguably a useful tool for hackers, but not necessarily that it provides a step change in terms of sophistication or capability. Tools like Google, after all, are also a benefit to hackers. The essential question is: where to draw the line? 

    This research recommends a few areas where GAI capabilities could create more profound capability improvements for malicious hackers. 

    • Models that can generate content for highly sophisticated social engineering attacks, such as creating deepfakes that impersonate a known figure for the purpose of carrying out an attack. 
    • Models that can identify novel vulnerabilities and develop novel exploits in code at an above-human level. 
    • AI-based “agents” with the ability to string together multiple phases of the cyberattack lifecycle and execute them without explicit human intervention, providing significant benefits in terms of speed and scalability as well as challenging typical means of detecting malicious activity such as looking for connections to a command and control server. 

    Thus, the risk that hackers will use GAI is not speculative—it is here. The issue, instead, is how much this usage increases risks to businesses, critical infrastructure companies, government networks, and individuals. 

    Drivers of risk 

    • Deepfakes: The ability for GAI systems to generate realistic-looking content that impersonates a human being, which the people interacting with it cannot distinguish or identify as machine-generated.15
    • Domain-specific capabilities (vulnerability identification and exploitation): The ability for models, especially those fine-tuned on relevant datasets and actions, to display above-human level performance at specific high-risk activities, such as identifying novel vulnerabilities. 
    • Domain-specific capabilities (autonomous exploitation): The ability of models to string together and execute complex action sequences—particularly, though not exclusively, in the form of generating and executing code—to compromise an information system end-to-end. 
    • Integration with existing tools: Studies appear to suggest that integrating AI models with tools such as code interpreters can upskill these models,16 which could increase the risk that they can be useful to hackers. 
    • Removal of safeguards: It is very challenging to create blanket safeguards that prevent bad behavior while protecting legitimate use cases, in part because of the similarity between malicious and benign activities. Developers call this the “safety-utility tradeoff.” At the same time, models do currently refuse to comply with overtly malicious requests and appear to be improving in their ability to do so over time—thus, models without any safeguards at all or those fine-tuned for malicious cyber activity could lose even these modest protections. 

    B. Risks to AI users 

    Risks to AI users depend much more heavily on the context and purposes of the model’s, or its outputs’ use, as well as the type or nature of safeguards and checks implemented within that environment. Some of the key contexts and activities in which AI can create cyber risks to users include the use of AI-generated code, the use of systems where AI agents may have access to user devices and data, and the use of AI in defensive cybersecurity systems. 

    B1. Risks of insecure AI-generated code 

    In one initial study on the security properties of AI-generated code, published by Stanford, researchers split developers into two groups, gave only one group access to code-assist tools, then observed the developers during the process of solving coding problems and examined the security of the resultant code.17 They found that “participants who had access to an AI assistant … wrote significantly less secure code than those without access.” For example, only 3 percent of programmers in the group with the AI assistant implemented an encryption/decryption function in a way that the researchers categorized as “secure,” compared to 22 percent of programmers working alone who generated a “secure” solution. The researchers surveyed the developers and found that, of the developers using the AI assistant, those who reported placing less subjective trust in the AI assistant were more likely to generate “secure” code. Additionally, the researchers found that code labeled “secure” had, on average, a larger “edit distance,” (e.g., more changes from initial AI-generated code than did “insecure” or “partially secure” solutions). 

    While it is possible, and perhaps even likely, that the assistant’s properties have evolved since this point, this example illustrates the need to better understand the security properties of AI-generated code before developers embed it deeply into their workflows. Policymakers can help hold companies to account on this question. 

    Drivers of risk 

    • Untrustworthy outputs: The risks from AI-generated code are greatest when the developer is incapable of, or unlikely to, validate the output themselves or if there is no process of human oversight over the generated code. That is, risks become acute when there is a mismatch between the trust that a developer thinks they can place in AI-generated code and the level of trust that is actually appropriate. These levels may vary across contexts, as different kinds of code are more or less security sensitive—for example, deploying a web app has fewer opportunities to go wrong than implementing a cryptographic library—or AI models may be better or worse at generating it securely by virtue of having seen more or fewer examples. These risks necessitate the development of robust benchmarks that measure the security properties of AI-generated code across a variety of contexts. 
    • Misplaced user trust: If users verify the security of generated code themselves and to their own standards, the risks that the code will be insecure significantly lessen. Much of the problem thus stems from users placing unearned trust in model outputs. Yet, pointing the blame finger back at the user is not an appealing path for policy, Moving forward, users will place trust in automated systems, and therefore, it is up to the makers of those systems and policymakers alike to help ensure that the systems are fit to deserve that trust. 

    B2. Risks from integrated AI systems with data or system access 

    There is a lot of interest in connecting GAI models to environments that give them the tools to automate tasks—rather than feeding output to a human to do a task; leading to more autonomous agents. Such conditions create cybersecurity risks because many AI models are very vulnerable to adversarial attacks that can cause them to do strange and potentially undesirable things, including compromising the security of the system they operate or the data they have access to. 

    From stickers on stop signs that can fool computer vision algorithms to “jailbreak” prompts that can convince LLMs to ignore their creator-imposed safeguards,18, 19. it is hard to ensure that AI systems solely do what you want them to do. Many leading models have proven vulnerable to “prompt injections,”20 which allow a user (or a potential attacker) to get around security limitations, including to obtain hidden information. Researchers have already demonstrated that, by embedding hidden text on their webpage, they can manipulate the results of GAI model outputs.21 If users interact with a model that has access to sensitive data, such as a business database or sensitive files on a user’s computer, they might be able to use prompt engineering to trick the model into handing that information over. Or, people could create malicious websites that, when an autonomous agent scrapes them, contain hidden commands to obtain and leak data or damage the machine they are operating. 

    These risks grow as developers embed AI systems into higher stakes systems that grant access and authorization to take ever more sensitive actions. Cybersecurity experts have highlighted reliability as a core concern to using AI models as a component of cybersecurity defense, and they stressed the need to deploy models and grant them autonomy in ways proportional to the organizational context in which they operate and the risks associated.22

    Drivers of risk 

    • Untrustworthy outputs: Outputs by models that misalign with the goals or needs of their human operators, whether insecure code, harmful outputs as the result of prompt injection, or unsafe decision-making in the cyber context. 
    • Misplaced user (or system) trust: When users or information systems embed a model into a context with more trust and permissions than the model deserves based upon its own reliability. 
    • Increased delegation / lessened supervision: The integration of models into contexts without sufficient, or no, oversight before placing their outputs into “use” (e.g., code merged into a product or security action taken). 

    Dual drivers 

    The opportunity and risk drivers outlined above are not always diametrically opposed. Were they, it would offer an easy remedy for policy: do more of the “opportunity” drivers and less of the “risk” drivers. Instead, as the next sections illustrate, the close coupling between many of these drivers will challenge policy’s ability to neatly extricate one from the other. 

    Domain-specific capabilities 

    Particular domain-specific capabilities for AI models would drive both opportunity and risk in the cyber context. For example, the ability to find novel vulnerabilities would benefit defenders by helping them identify weaknesses to patch and malicious actors searching for footholds into software systems. To a lesser degree, the same is true of the general ability that models would have to write complex, correct code—this ability could offer efficiency benefits to developers, whether they are open-source maintainers or ransomware actors. It seems unlikely that these capabilities would advance in ways that only benefit the “good guys.” While model safeguards could help reject obvious malign requests (e.g., ask a model to help them write an urgent email), in the wider cyber context, bad actors are on an endless search for reasonable justifications to test for and seek vulnerabilities in a codebase. No currently known software can develop a foolproof way to see inside its operator’s heart to discern their true intent. Instead, it is likely that policy will simply have to accept these twinned risks, seeking to measure them as they progress and find ways to make it as easy as possible for defenders to implement new technologies in hopes that they can outpace malicious actors. This is an uneasy balance, but it is also one that is deeply familiar in information security. 

    Trust and trustworthiness 

    Perhaps the single largest driver of AI opportunity in the cybersecurity context is model “trustworthiness”—that is, the degree to which a model or system that integrates AI produces outputs that are accurate, reliable, and “fit for purpose” in a particular application context. For example, if a model can regularly generate code that is secure, free of bugs, and does exactly what the human user intended, it might be trustworthy in this context. 

    A model’s trustworthiness almost directly controls the potential productivity benefits it can deliver by dictating whether a human must essentially run “quality control” on model outputs, such as carefully reviewing all generated code or all processed data to ensure the model did not make a mistake or miss an important fact. For example, a completely untrustworthy model saves no time (and may, in fact, waste it) because its work requires manual duplication; theoretically, a perfectly trustworthy model should not need human oversight. In practice, human oversight (whether manual or automated) in some fashion must bridge this imperfect trust. Moreover, it is important that the humans or systems performing this oversight have a good understanding of the level of oversight needed and avoid the complacency of overly trusting the system’s outputs. 

    Trust is not a single benchmark but a property dictated by context. Different contexts have distinct requirements, acceptable performance levels, and potential for catastrophic errors. What matters is that the operator has an appropriate way to measure the model’s trustworthiness within a specific task context and determine its respective risk tolerances, then compare both to ensure they align. Policymakers and businesses alike should review the varied levels of criticality for AI application contexts and be specific as to both how to define the properties that a model would need to be trustworthy in each context and how to measure these properties. 

    Developing better ways to measure model trustworthiness and make models more trustworthy will, for the most part, unlock opportunity. However, this factor is in the twinned risk section because, undeniably, trusting a model creates risk. The more a model has delegated tasks without stringent oversight, the greater the productivity gains—and the greater the stakes are for its performance and robustness against attack. Notably, in the cybersecurity context, embedding AI systems into broader information systems, while they remain vulnerable to adversarial inputs, creates the risk that these models could become potent vectors for hacking and abusing systems into which they integrate. In this area, it will be vitally important to benchmark and understand AI models’ vulnerability and to develop security systems that embed AI models in ways that account for these risks.23 Without better ways to measure risk before models become embedded into sensitive contexts, there is a risk that AI systems will develop their own kind of “Peter Principle” (i.e.,  AI models embedded into increasingly high-trust situations until they prove they have not earned that trust). 

    Openness 

    Many of the most acute benefits that GAI systems can provide in cybersecurity will come from using such systems to reduce the labor required to perform security tasks, from auditing code packages to monitoring system logs. The more open innovation there is, the more tools there will be. And the more these tools have accessible price points, the likelier it will be that less-resourced entities will use them. Competition and, in particular, the availability of open-source models can encourage innovation and experimentation to build these tools and keep costs relatively low. Open models can also benefit some of the key questions of trust that are core to AI opportunity and risk: open models are easier to experiment with and customize, making it easier for users and researchers alike to measure the trustworthiness of models in particular contexts and to customize models to meet their specific trust needs. These models are growing ever larger and also more powerful. Cohere AI recently released a 104 billion parameter model through Hugging Face.24 Open models can also contribute to higher levels of trustworthiness, allowing developer-led organizations to validate model behavior under different conditions and tasks with more control of model versions and constraints.  

    At the same time, expanded access to capable models—and, in particular, open-source models—may create additional challenges in preventing model misuse. Open models foreclose abuse-preventing tools, such as monitoring application programming interface (API) requests, and allow users to remove safeguards and protections through fine-tuning. The science of safeguards and their relative strengths and weaknesses needs further study to make the case that open models create significantly more “marginal risk” than closed models.25 For example, in the cyber context, even reasonably designed safeguards may be unable to stop hackers from appropriating reasonable outputs, such as email text or scripts seeking more malign ends. However, safeguards may be more impactful when it comes to contexts like embedding watermarks in AI-generated content and similar. As model capabilities and safeguarding techniques advance, the marginal risk posed by open models may increase. 

    Asymmetric drivers 

    At the same time, there are some factors likely to drive primarily risk or primarily opportunity in the cybersecurity context. These asymmetric drivers of risk and opportunity make promising areas for policy intervention. 

    Risk: Deepfakes and impersonation 

    There are few legitimate reasons why AI models should need to generate content that imitates a person (especially an actual person) without appropriate disclosures that this content is not real. This is true across images, video, and voice recordings. Policy could knock out a series of easy wins by focusing on requiring disclosures and making AI-generated media easier to identify. Already, a bevy of proposed state initiatives exist, which, if enacted, will mandate disclosing AI-generated media in contexts from political advertising to robocalls,26  and federal lawmakers could unify these requirements with legislation to apply them consistently whenever consumers interact with advertising or businesses. Laws will not stop criminals, of course—for that, the government may need to invest in technical research to embed watermarks into AI-generated content and to help electronic communication carriers like voice and video calling implement systems for detecting faked content. This work will not be easy, requiring novel research and development as well as implementation across a variety of parties. Nonetheless, the government is the best-positioned actor to coordinate and drive this forward. 

    Opportunity: Education 

    Another clear opportunity is investing in ways to educate different users who will interact with and make decisions about AI—from business leaders to developers—about how to use AI in responsible and reasonable ways. This kind of education can increase the uptake of AI, where it can be helpful, while also providing an opportunity to prime these users to consider specific kinds of risks, from the need to review AI-generated code to the security risks of embedding AI systems that might be vulnerable to prompt injection. 

    Opportunity: Measuring trustworthiness 

    The more that operators have a grounded sense of models’ strengths and weaknesses, the more they can build applications atop them that do not run the risks of strange and unexpected failures. Policy can help steer and incentivize the development of ways to measure relevant aspects of model trustworthiness, such as a model’s accuracy (best defined in a specific context), its security and susceptibility to adversarial inputs, and the degree to which its decisions allow audits or reviews after the fact. Better measurements will unlock better usage with fewer risks. And they will enable the government to step in and demand clear standards for certain high-risk applications. 

    Drivers of risk and opportunity in context 

    Many of the drivers of risk and opportunity draw from the unique characteristics of this moment in AI. Understanding the story of how we got to this moment, alongside identifying some specific meta-trends that characterize it, can help policymakers comprehend the drivers of risk and opportunity as well as how they are likely to change in the future. 

    Deeply unsupervised 

    The first trend is the rise of unsupervised learning, alongside its resulting highly capable generalist models. The field of AI has seen the rise and fall of multiple different paradigms throughout its lifetime, with generative AI representing the next instantiation of a longer-running trend in the field toward systems that learn to make sense of data themselves using patterns and rules that are increasingly opaque to their creators. 

    Many early attempts to build artificially intelligent systems focused on programming complex, pre-determined rules into computer systems. These systems could be surprisingly capable: in 1966, the first “chatterbot,” Eliza, used simple language-based rules to emulate responses from a mock therapist, with its creator finding that “some subjects have been very hard to convince that Eliza (with its present script) is not human.”27 And, in 1997, the computer Deep Blue outplayed world chess champion Garry Kasparov using brute-force computation and a complex set of rules provided by chess experts.28 Yet, these systems lacked at least one key characteristic of intelligence: the ability to learn. 

    Decades before these rule-based approaches, research into how the human brain works through the interconnection and firing of neurons inspired the invention of another paradigm: neural networks.29 The weights in neural networks—updated over time by an algorithm that seeks to reduce the error between the network’s prediction and reality—allow neural networks to learn rules, patterns, and relationships not explicitly specified by their creators. While neural networks fell out of favor during a long “AI Winter,” they began to recur in the nascent field of machine learning, which focused on developing statistical algorithms that could learn to make predictions from data. 

    Initially, machine learning focused primarily on supervised learning, a paradigm in which a model tries to learn relationships between input data (such as images or numerical and financial data) and output labels (such as names of items in an image or future price projections). Supervised learning with increasingly deep neural networks proved very successful for tasks like image classification, predictive analyses, spam detection, and many other tools developed during the 2000s and 2010s. 

    In contrast, current generative AI systems receive their training, at least in large part, through unsupervised learning, a different paradigm in which a model reviews an immense amount of unlabeled data, such as raw text, and learns to cluster or predict that data without explicit human-provided labels (or target predictions). LLMs, like OpenAI’s Generative Pre-trained Transformer (GPT) series, are huge neural networks trained on trillions upon trillions of words of text data, much of which comes from scraped internet sites and digital books.30 Interestingly, these models still learn by making predictions and receiving error signals to correct their prediction functions—but instead of learning to predict human-generated labels, they learn to predict patterns and structure in human-generated data (text) itself. 

    Unsupervised learning has increased the capacity of models, producing technologies, like ChatGPT, that can and have dazzled users and researchers alike with their capabilities. It has also created systems that are more challenging for developers, researchers, policymakers, and users to understand. Rules-based systems were definitionally transparent. Deep learning was perhaps the first indication that subsequent AI systems might bring opaque internal logic that defies easy interpretation. However, supervised approaches have still provided some clear ways to evaluate model performance within a specific domain. New unsupervised models are challenging to interpret and evaluate. Their capabilities emerge through testing and scale rather than explicit design.31 The emergence of these models preceded the development of empirical ways to test their capabilities across many of the domains they likely have skills. Harnessing the opportunity and avoiding the risks of these highly general models will require developing new ways to think about model explainability and new ways to evaluate model capabilities across the varied tasks and contexts, where their use is not only probable but also possible.32

    Ravenous demand for compute and data 

    The second trend focuses on the ways in which the intensive compute and data needs of the latest generation of AI model development have made current systems highly proximate to concentrated power in the hands of large technology companies. 

    Current leading-edge models are big.33 Size defines the computing costs associated with training a model, namely, the size of its training dataset and the size of the model itself (often measured as the number of “parameters”). Both of these have grown ever larger and the compute required to train these massive models is expensive.34 At present, the well-capitalized and semi-commercial players (e.g., OpenAI, Meta, and Google) build most of the leading models. This creates a different paradigm than that of previous iterations of AI or machine learning systems, which more often emerged from research and academic settings. The computational and data costs of large-model development have tied the evolution of AI models to other existing technology infrastructures, especially cloud computing, with major providers to deliver, in part, the required compute (e.g., the Amazon and Microsoft partnerships with leading generative AI labs).35 Likewise, access to text data for training models has become a point of leverage. Sites like Reddit and Twitter that host lots of public text have begun charging for API access to data,36 as users question whether their technology providers take advantage of private data to train AI models (major model providers say they use only public data).37

    The pressures for large labs to rapidly commercialize these systems and to recoup their investments may drive both opportunity and risk—opportunity because there will be well-capitalized machines seeking to build functional applications and use cases for these models; risk because these companies will face tremendous pressure to create product offerings from these models, regardless of their shortcomings. Closed and for-profit paradigms may make it harder for independent researchers and outsiders to access models to evaluate them and expose their weaknesses—while large labs have definitely allowed some level of access,38 for which they should be commended, it is hard to know exactly what the limits of this access and of researchers’ ability to publicly report adverse findings are. While open-source models help bridge some of this gap, this paradigm only works if open-source models are at relative parity with closed-source ones, which may not have guarantees.39

    New stakeholders 

    The third trend—and an important caveat to the second trend—is how the popularity and accessibility of natural language interfaces for AI models have brought a new wave of AI stakeholders into the ecosystem. Even people with no technical background can easily interact with tools like ChatGPT, Bard, and the Bing chatbot through prompts written in English (or other languages) rather than computer code. Consumers, hacker-builders, entrepreneurs, and large companies—alike—expand and help develop new potential use cases for AI. Significant application development activity is also happening based on open-source and publicly available models, led by platforms like Hugging Face and the decision by Meta to publicly release its Llama models. This distributed innovation environment creates the potential for AI’s benefits to disperse more widely and in a more decentralized way than were innovations, such as the large internet platforms of the 2000s. At the same time, this decentralization will increase the challenge for regulators seeking to set standards around the development and use of AI applications, in much the same way as regulators have struggled to define functional and universal standards for software security because of software’s heterogeneous and decentralized nature. 

    Conclusions: Whose risks, whose opportunity? 

    Advances in AI will bring both opportunity and risk. The key question for policymakers is not how to get only opportunity and no risk—this seems all but impossible. Instead, it is one of recognizing and seeking to balance who must deal with each. Models that can write more trustworthy and reliable code will help open-source maintainers and other organizations better shore up security—and help novice hackers write scripts and tools. Both defenders and cybercriminals will use models that can find vulnerabilities. Models that integrate into workflows entrusted to make decisions can deliver the benefits of machine speed and scale, while creating risks because humans can no longer perfectly oversee and interpret their decisions. 

    With many of these cases, such as vulnerability hunting and coding, policymakers’ best option may simply be to try to encourage enterprises to build and adopt these tools into their workflows and development processes faster than they end up as common tools for malicious hackers. For certain other cases, as with deepfake-based impersonations, it may be possible to push model developers to implement tailored protections that can asymmetrically reduce their abuse potential while preserving their benefits. And, in general, policymakers can seek to develop incentives and support for the development of best practices, tools, and standards for AI assurance, to encourage enterprises and organizations to apply appropriate scrutiny in their adoption of AI, and to hold them to account when they fail to do so. 

    Policymakers might also consider ways to shift more of the costs of safely integrating AI – ways of measuring trust and mitigating risk—onto the makers of these systems. The history of the debate over software liability illustrates the peril of allowing technology vendors to reap the profits from selling technology without facing any consequences when that technology proves unfit for the purpose for which they sold it.40 The debate over software liability has raged for decades.41 Maybe the advent of AI provides an opportunity to adopt a new paradigm a little sooner. 

    The balance of risk and opportunity for the end users of technology should be a primary concern for policymakers; how the market and policy equip cybersecurity defenders will play a significant role in determining that balance. Thus, there remain plenty of opportunities (and risks) for policymakers to evaluate in these next formative years of AI policy. 

    About the authors

    Maia Hamin is currently serving an assignment under the Intergovernmental Personnel Act at the US AI Safety Institute within the National Institute of Standards and Technology (NIST). She is on leave from the Cyber Statecraft Initiative, where she held the position of associate director with the Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. Hamin’s contributions to this work predate her NIST assignment, and the views expressed in this paper are not those of the AI Safety Institute.

    Jennifer Lin is a former Young Global Professional with the Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Tech Programs. During her time with the team, she was a sophomore at Stanford University double-majoring in political science and symbolic systems, with a specialization in artificial intelligence.

    Trey Herr is senior director of the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Technology Programs, and assistant professor of global security and policy at American University’s School of International Service.

    Acknowledgments 

    Thank you to the CSI team for support on this project as well as Charlette Goth-Sosa and Donald Partkya for editing and production support. Thank you also to several reviewers at different stages of drafting including Harriet Farlow, Chris Wysopal, Kevin Klyman, and others who wish to remain anonymous. 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “Assistants API Overview: How Assistants work,” Open AI Platform, accessed June 30, 2024, https://platform.openai.com/docs/assistants/overview.,
    2    Patrick Lewis et al, “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” arXiv, April 12, 2021 [last revised], https://doi.org/10.48550/arXiv.2005.11401.
    3    Advanced Research Projects Agency for Health (ARPA-H), “ARPA-H Joins DARPA’s AI Cyber Challenge to Safeguard Nation’s Health Care Infrastructure from Cyberattacks,” March 21, 2024, https://arpa-h.gov/news-and-events/arpa-h-joins-darpas-ai-cyber-challenge; AI Cyber Challenge (AIxCC), accessed June 30, 2024, https://aicyberchallenge.com/.
    4    “Zellic Wins $1M From DARPA in the AI Cyber Challenge,” Zellic, April 4, 2024. https://www.zellic.io/blog/zellic-darpa-aixcc/.
    5    Manish Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models,” arXiv, April 19, 2024. http://arxiv.org/abs/2404.13161.
    6    Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite.”
    7    Sergei Glazunov and Mark Brand, “Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models,” Google Project Zero (blog), June 20, 2024, https://googleprojectzero.blogspot.com/2024/06/project-naptime.html.
    8    “Secure by Design Pledge,” US Cybersecurity and Infrastructure Security Agency (CISA), accessed June 30, 2024, https://www.cisa.gov/securebydesign/pledge; Isabella Wright and Maia Hamin, “‘Reasonable’ Cybersecurity in Forty-Seven Cases: The Federal Trade Commission’s Enforcement Actions Against Unfair and Deceptive Cyber Practices.” Cyber Statecraft Initiative, June 12, 2024. https://dfrlab.org/2024/06/12/forty-seven-cases-ftc-cyber/.
    9    AI models, which receive human feedback on their predictions, learn to generate outputs that receive more favorable feedback. See Paul Christiano et al., “Deep Reinforcement Learning from Human Preferences,” arXiv, February 17, 2023, http://arxiv.org/abs/1706.03741.
    10    Anthony Bartolo, “GitHub Copilot Update: New AI Model That Also Filters Out Security Vulnerabilities,” Microsoft (blog), Feb 16, 2023, https://techcommunity.microsoft.com/t5/educator-developer-blog/github-copilot-update-new-ai-model-that-also-filters-out/ba-p/3743238.
    11    “CISA Artificial Intelligence Use Cases,” US Cybersecurity and Infrastructure Security Agency (CISA), accessed June 30, 2024, https://www.cisa.gov/ai/cisa-use-cases.
    12    Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson, “Autonomous Cyber Defense: A Roadmap from Lab to Ops,” Center for Security and Emerging Technology (CSET), June 2023, https://cset.georgetown.edu/publication/autonomous-cyber-defense/.
    13    Maia Hamin and Stewart Scott, “Hacking with AI,” Cyber Statecraft Initiative, February 15, 2024, https://dfrlab.org/2024/02/15/hacking-with-ai/.
    14    “Disrupting Malicious Uses of AI by State-Affiliated Threat Actors,” OpenAI, February 14, 2024, https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/
    15    Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, “Truth, Lies, and Automation,” Center for Security and Emerging Technology (CSET), May 2021, https://cset.georgetown.edu/publication/truth-lies-and-automation/
    16    Glazunov and Brand, “Project Naptime: Evaluating Offensive Security Capabilities.”
    17    Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh, “Do Users Write More Insecure Code with AI Assistants?” In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, (November 2023), 2785–99, https://doi.org/10.1145/3576915.3623157.
    18    Evan Ackerman, “Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms,” IEEE Spectrum, August 2017, https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms.
    19    Melissa Heikkilä, “Three Ways AI Chatbots Are a Security Disaster,” MIT Technology Review, April 3, 2023, https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/
    20    Bhatt et al., “CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite.”
    21    Arvind Narayanan (@random_walker), “While Playing around with Hooking up GPT-4 to the Internet, I Asked It about Myself… and Had an Absolute WTF Moment before Realizing That I Wrote a Very Special Secret Message to Bing When Sydney Came out and Then Forgot All about It. Indirect Prompt Injection Is Gonna Be WILD Https://T.Co/5Rh1RdMdcV,” X, formerly Twitter, March 18, 2023, 10:50 p.m., https://x.com/random_walker/status/1636923058370891778.
    22    Anna Knack and Ant Burke, “Autonomous Cyber Defence: Authorized Bounds for Autonomous Agents,” Alan Turing Institute, May 2024, https://cetas.turing.ac.uk/sites/default/files/2024-05/cetas_briefing_paper_-_autonomous_cyber_defence_-_authorised_bounds_for_autonomous_agents.pdf
    23    Caleb Sima, “Demystifing LLMs and Threats.” Csima (blog), August 15, 2023, https://medium.com/csima/demystifing-llms-and-threats-4832ab9515f9.
    24    Cohere 4 AI, “Model Card for Cohere 4 AI Commanr R+”, May 23, 2024, https://huggingface.co/CohereForAI/c4ai-command-r-plus.
    25    Sayash Kapoor et al., “On the Societal Impact of Open Foundation Models,” February 27, 2024, https://arxiv.org/pdf/2403.07918v1.
    26    Bill Kramer, “Transparency in the Age of AI: The Role of Mandatory Disclosures,” Multistate, January 19, 2024. https://www.multistate.ai/updates/vol-10.
    27    Ben Tarnoff, “Weizenbaum’s Nightmares: How the Inventor of the First Chatbot Turned against AI,” Guardian, July 25, 2023, https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai.
    28    IBM, “Deep Blue,” accessed June 30, 2024, https://www.ibm.com/history/deep-blue.
    29    Warren S McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943), https://home.csulb.edu/~cwallis/382/readings/482/mccolloch.logical.calculus.ideas.1943.pdf.
    30    Dennis Layton, “ChatGPT – Show Me the Data Sources,” Medium (blog), January 30, 2023, https://medium.com/@dlaytonj2/chatgpt-show-me-the-data-sources-11e9433d57e8.
    31    Jason Wei et al., “Emergent Abilities of Large Language Models,” arXiv, October 26, 2022, https://doi.org/10.48550/arXiv.2206.07682.
    32    Leilani H. Gilpin et al., “Explaining Explanations: An Overview of Interpretability of Machine Learning,” arXiv, February 3, 2019, http://arxiv.org/abs/1806.00069
    33    Anil George, “Visualizing Size of Large Language Models,” Medium (blog), August 1, 2023, https://medium.com/@georgeanil/visualizing-size-of-large-language-models-ec576caa5557.
    34    Jaime Sevilla et al., “Compute Trends Across Three Eras of Machine Learning,” 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, (2022), 1–8, https://doi.org/10.1109/IJCNN55064.2022.9891914.
    35    Amazon Staff, “Amazon and Anthropic Deepen Their Shared Commitment to Advancing Generative AI,” March 27, 2024. https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment; “Microsoft and OpenAI Extend Partnership,” Official Microsoft Blog, January 23, 2023, https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/.
    36    Mike Isaac, “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems,” New York Times, April 18, 2023, https://www.nytimes.com/2023/04/18/technology/reddit-ai-openai-google.html.
    37    Eli Tan, “When the Terms of Service Change to Make Way for A.I. Training,” New York Times, June 26, 2024, https://www.nytimes.com/2024/06/26/technology/terms-service-ai-training.html
    38    “OpenAI Red Teaming Network,” accessed June 30, 2024, https://openai.com/index/red-teaming-network/.
    39    Xiao Liu et al., “AgentBench: Evaluating LLMs as Agents.” arXiv, October 25, 2023, http://arxiv.org/abs/2308.03688; “LMSys Chatbot Arena Leaderboard,” Hugging Face, accessed June 30, 2024, https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard; “SEAL Leaderboards,” Scale, accessed June 30, 2024, https://scale.com/leaderboard
    40    Bruce Schneier, “Liability Changes Everything,” November 2003, https://www.schneier.com/essays/archives/2003/11/liability_changes_ev.html.
    41    Maia Hamin, Sara Ann Brackett, and Trey Herr, “Design Questions in the Software Liability Debate,” Cyber Statecraft Initiative, January 16, 2024, https://dfrlab.org/2024/01/16/design-questions-in-the-software-liability-debate/.

    The post AI in cyber and software security:  What’s driving opportunities and risks? appeared first on Atlantic Council.

    ]]>
    The Great IT Outage of 2024 is a wake-up call about digital public infrastructure https://www.atlanticcouncil.org/blogs/new-atlanticist/the-great-it-outage-of-2024-is-a-wake-up-call-about-digital-public-infrastructure/ Tue, 06 Aug 2024 17:24:12 +0000 https://www.atlanticcouncil.org/?p=784093 The July 19 outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

    The post The Great IT Outage of 2024 is a wake-up call about digital public infrastructure appeared first on Atlantic Council.

    ]]>
    On July 19, the world experienced its largest global IT outage to date, affecting 8.5 million Microsoft Windows devices. Thousands of flights were grounded. Surgeries were canceled. Users of certain online banks could not access their accounts. Even operators of 911 lines could not respond to emergencies.

    The cause? One mere faulty section of code in a software update.

    The update came from CrowdStrike, a cybersecurity firm whose Falcon Sensor software many Windows users employ against cyber breaches. Instead of providing improvements, the update caused devices to shut down and enter an endless reboot cycle, driving a global outage. Reports suggest that insufficient testing at CrowdStrike was likely the cause.

    However, this outage is not just a technology error. It also reveals a hidden world of digital public infrastructure (DPI) that deserves more attention from policymakers.

    What is digital public infrastructure?

    DPI, while an evolving concept, is broadly defined by the United Nations (UN) as a combination of “networked open technology standards built for public interest, [which] enables governance and [serves] a community of innovative and competitive market players working to drive innovation, especially across public programmes.” This definition refers to DPI as essential digital systems that support critical societal functions, like how physical infrastructure—including roads, bridges, and power grids—are essential for everyday activities.

    Microsoft Windows, which runs CrowdStrike’s Falcon Sensor software, is a form of DPI. And other examples of DPI within the UN definition include digital health systems, payment systems, and e-governance portals.

    As the world scrambles to fix their Windows systems, policymakers need to pay particular attention to the core DPI issues that underpin the outage.

    The problem of invisibility

    DPI, such as Microsoft Windows, is ubiquitous but also largely invisible, which is a significant challenge when it comes to managing risks associated with it. Unlike physical infrastructure, which is tangible and visible, DPI powers essential digital services without drawing public awareness. Consequently, the potential risks posed by DPI failures—whether stemming from software bugs or cybersecurity breaches—tend to be underappreciated and underestimated by the public.

    The lack of a clear definition of DPI exacerbates the issue of its invisibility. Not all digital technologies are public infrastructure: Companies build technology to generate revenue, but many of them do not directly offer critical services for the public. For instance, Fitbit, a tech company that creates fitness and health tracking devices, is not a provider of DPI. Though it utilizes technology and data services to enhance user experience, it does not provide essential infrastructure such as internet services, cloud computing platforms, or large-scale data centers that support public and business digital needs. That said, Fitbit’s new owner, Google, known for its widely used browser, popular cloud computing services, and efforts to expand digital connectivity, can be considered a provider of DPI.

    Other companies that do not start out as DPI may become integral to public infrastructure by dint of becoming indispensable. Facebook, for example, started out as a social network, but it and other social media platforms have become a crucial aspect of civil discourse surrounding many elections. Regulating social media platforms as a simple technology product could potentially ignore their role as public infrastructure, which often deserve extra scrutiny to mitigate potential detrimental effects on the public.

    The recent Microsoft outage, from which airlines, hospitals, and other companies are still recovering, should now sharpen the focus on the company as a provider of DPI. However, the invisibility of DPI and the absence of appropriate policy guidelines for measuring and managing its risks result in two complications. First, most users who interact with DPI often do not recognize it as a form of DPI. Second, this invisibility leads to a misplaced trust in major technology companies, as users fail to recognize how high the collective stakes of a failure in this DPI might be. Market dominance and effective advertising have helped major technology companies publicize their systems as benchmarks of reliability and resiliency. As a result, the public often perceives these systems as infallible, assuming they are more secure than they are—until a failure occurs. At the same time, an overabundance of public trust and comfort with familiar systems can foster complacency within organizations, which can lead to inadequate internal scrutiny and security audits.

    How to prevent future disruptions

    The Great IT Outage of 2024 revealed just how essential DPI is to societies across the globe. In many ways, the outage serves as a symbolic outcry for solution-oriented policies and accountability to stave off future disruptions.

    To address DPI invisibility and misplaced trust in technology companies, US policymakers should first define DPI clearly and holistically while accounting for its status as an evolving concept. It is equally crucial to distinguish which companies are currently providers of DPI, and to educate leaders, policymakers, and the public about what that means. Such an initiative should provide a clear definition of DPI, its technical characteristics, and its various forms, while highlighting how commonly used software such as Microsoft Windows is a form of DPI. A silver lining of the recent Microsoft/CrowdStrike outage is that it offers a practical, recent case study to present to the public as real-world context for understanding the risks when DPI fails.

    Finally, Microsoft has outlined technical next steps to prevent another outage, including extensive testing frameworks and backup systems to prevent the same kind of outage from happening again. However, while industry-driven self-regulation is crucial, regulation that enforces and standardizes backup systems, not just with Microsoft, but also for other technology companies that may also become providers of DPI, is also necessary. Doing so will help prevent future outages, ensuring the reliability of infrastructure which, just like roads and bridges, props up the world.


    Saba Weatherspoon is a young global professional with the Atlantic Council’s Geotech Center.

    Zhenwei Gao is a young global professional with the Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs.

    The post The Great IT Outage of 2024 is a wake-up call about digital public infrastructure appeared first on Atlantic Council.

    ]]>
    Russia’s digital tech isolationism: Domestic innovation, digital fragmentation, and the Kremlin’s push to replace Western digital technology https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/russias-digital-tech-isolationism/ Mon, 29 Jul 2024 23:11:00 +0000 https://www.atlanticcouncil.org/?p=818001 Russia’s technological isolation is both a reality and a desired goal for Moscow. This piece explores the impacts of this phenomenon and offers recommendations for how to deal with that evolving digital ecosystem.

    The post Russia’s digital tech isolationism: Domestic innovation, digital fragmentation, and the Kremlin’s push to replace Western digital technology appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    Digital technology has long been a key component of the Russian government’s power, and for years following the collapse of the Soviet Union there was significant technology entanglement between Russia, the West, and other areas of the world. That changed in the late 2000s and early 2010s with heightened paranoia within the Kremlin about regime security and foreign subversion—and Russia’s 2022 full-scale invasion of Ukraine has taken this to new levels. Due to combinations of intense securitization, Western sanctions, foreign businesses exiting Russia, tech “brain drain,” and other factors, digital technological isolationism is now both a reality and a desired goal for Moscow. This report examines the history of the modern Russian state’s approach to digital technology, the internet, and connection and interdependence with the West and foreign countries. It then analyzes the Kremlin’s late 2000s and early 2010s shift to a heavily securitized approach to the internet and its concerted push to develop domestic digital technology—both the successes and many failures. It then examines the 2022 Russian war on Ukraine, how the conflict and resulting events (such as sanctions and brain drain) have shifted Russia’s approach to domestic technology and digital isolation, and where different digital technology segments, such as hardware and software, stand. The analysis concludes with five key takeaways for the US and its allies and partners, paired with recommendations: 

    1. Russia has even fewer incentives (and even less room) today to stop pursuing an isolationist and securitized approach to digital technology—with impacts across international tech engagement, domestic policy, and human rights. 
    2. Russian companies have shown more success building their own domestic software than domestic hardware. 
    3. The Russian cybersecurity sector will play an important role in Moscow’s reaction to growing sanctions and other restrictions as well as its efforts to technologically isolate itself from the West.  
    4. Some Russian technology companies are already looking to the international market to expand their profit streams, including in internet and cybersecurity services. 
    5. Russia is becoming more digitally dependent on China. 

    Introduction

    Digital technology has long been a key component of the Russian government’s power, from launching cyber operations to creating propaganda content. But in Russia, as in many countries, software and hardware have done far more than just support cyber and information operations: they have contributed to state economic modernization efforts, underpinned the government’s growing surveillance of its own citizens, and enabled technological and intellectual connectivity between populations at home and abroad. For years, this technology drew from a diversity of international sources: Russia, China, Europe, the United States, Japan, and elsewhere. Conversely, many companies outside of Russia depended upon the labor and skills of Russian developers and engineers, many of whom provided remote services. Western technology such as the Microsoft Windows operating system was found all throughout Russia in the 2000s. But as the state became more paranoid about the internet as a threat to regime security, the Kremlin increasingly advocated for building domestic software and hardware and instituted policies to shift the government away from using Western digital technology. The government also introduced tax and other incentives for Russian tech developers to stay in Russia. A regime security approach came to dominate Russian policy. 

    Since the Putin regime launched its full-scale war on Ukraine in February 2022, this environment has been replaced with a new level of Russian techno-isolationism. The US, European allies, and other countries have imposed sanctions on a range of Russian digital technology companies and services. Countless global technology companies have terminated or severely curtailed their business activities in Russia, due to sanctions compliance, concerns over employee safety, support for Ukraine, signaling resolve to Western governments, restrictions from the Russian government, or a combination thereof. Around 100,000 Russian technologists (at least) fled the country by December 2022 to seek out economic opportunity and a less repressive political environment elsewhere, further accelerating Russia’s “brain drain” problems.1

    Technological isolationism is now both a reality and a desired goal for many in the Russian government and technology sector. 

    Simultaneously, the Russian government has accelerated its push to remove Western digital technology from the country and develop domestic software and hardware replacements that can be used in military and intelligence activities, bring money into Russia (at least in the state’s hope), and serve as a means of expanding Russia’s technology influence abroad. The Kremlin notably exempted Russian information technology workers from military conscription to fight in Ukraine, and it continues its frantic attempts to stem the departure of technology talent. Sanctions mitigation and evasion are now frequent topics of conversation in the Russian cyber community. All told, a greater degree of digital technological isolationism is now both a reality and a desired goal for many in the Russian government and technology sector. 

    This raises numerous questions for Western policymakers. As Russia’s economy continues to shift during the war2 —and sanctions continue to impose at least some costs on Russia’s digital technology industry—the government, tech industry, and tech civil society in Russia are grappling with issues such as developing software alternatives to foreign app stores and operating systems, buying hardware from non-Western sources, illicitly acquiring hardware from Western sources, keeping tech talent in the country, fostering the next generation of cyber talent (including in support of the security services), and expanding Russia’s tech market share abroad. For example, some Russian cybersecurity companies that support the Russian intelligence community are increasingly talking about selling their software overseas—in Latin America, in the Middle East, and elsewhere. Russia has also become more dependent on Chinese digital technology in the last two years. 

    But to quote historian Stephen Kotkin, “the Russian state can confound analysts who truck in binaries.”3 Despite these clear or emerging trends, the reality of Russia’s digital tech ecosystem today is also complicated, messy, and in many ways uncertain. This report therefore presents five key takeaways from the analysis of this reality, paired below with implications for US policymakers and those in allied and partner countries. It focuses on digital technologies and companies—such as software, hardware, and Russian cybersecurity companies—rather than technology broadly, such as biotechnology and manufacturing technologies. 

    Key takeaways: 

    1. Russia has even fewer incentives (and even less room) today to stop pursuing an isolationist and securitized approach to digital technology—with impacts across international tech engagement, domestic policy, and human rights. 
    2. Russian companies have shown more success building their own domestic software than domestic hardware. 
    3. The Russian cybersecurity sector will play an important role in Moscow’s reaction to growing sanctions and other restrictions as well as its efforts to technologically isolate itself from the West.  
    4. Some Russian technology companies are already looking to the international market to expand their profit streams, including in internet and cybersecurity services. 
    5. Russia is becoming more digitally dependent on China. 

    Creeping suspicion: Russian domestic technology from the 1990s to Mid-2010s 

    Over the last three decades, Russia’s technology sector has undergone a notable shift. In the 1990s and early 2000s, Russia’s burgeoning internet services and technology sector used Western software and hardware without much question. Russian tech-focused universities collaborated with foreign institutions, and many Western companies, even in the cybersecurity sphere, struck up partnerships with rapidly expanding Russian businesses. Firms were also less dependent on China, and Russian tech companies had the freedom to operate abroad. Then, in the late 2000s and early 2010s, as high-level Kremlin officials became increasingly concerned about the internet as a regime security threat, and as those already concerned gained more power within the Putin regime, the Russian government made a concerted push to replace Western hardware and especially software. The resulting policies did not immediately rid Russia of foreign technology (and still have not done so). But domestic technology and restricted tech procurement became the name of the game—and in practice, there have been many bumps in the road. 

    Following the collapse of the Soviet Union, the Russian government was forced to contend with a confluence of challenges in its technology sector. There were many talented individuals in Russia with expertise in fields like computer science, physics, mathematics, and engineering.4 Some moved out of the country to seek economic opportunities. Some turned to cybercrime, a far more lucrative profession amid an economy with limited jobs, widespread criminal enterprise, and insufficient laws.5 Others yet founded companies. The security services, meanwhile, expanded their focus on internet surveillance and laid the foundation for the Kremlin’s later, high-level concern about the internet as a regime security threat. 

    Notable Russian technology firms include Yandex, now a search and internet services giant, which was created in 19976 after its founders started building search programs for the Bible, the International Classifier of Patents, and more.7 (It is worth noting that Yandex was even ahead of Google, which was founded in 1998).8 Mail.ru, an internet service and now technology conglomerate in Russia (presently operating under the VK brand, now a Russian internet and social media conglomerate), was founded in 1998 as an email service provider for Russians.9  Russian search engine Rambler, later bought by the Russian company Prof-Media (a media conglomerate and investment group) and then Russia’s state-owned bank Sber, was founded around the same time and quickly took up market share as well.10 Other examples of technology development and proliferation abound. 

    The Russian technology sector in the late 1990s and 2000s relied heavily on Western software and hardware. President Bill Clinton’s administration modified US export control rules in 1999 to permit the sale of faster computers to Russia (and China).11  Many of the large chips and electronics distributors in Russia in the 1990s and 2000s sold equipment from the likes of AMD (US-based), Intel (US), Motorola (US), Samsung (South Korea), Texas Instruments (US), Toshiba (Japan), and Philips (Netherlands).12   Motorola (US), Nokia (Finland), and Samsung (South Korea) dominated Russia’s 2000s mobile phone market.13 The open-source Linux operating system was widely used in the region, and billions of dollars of Linux-related technologies were sold in Russia and the former Soviet republics in the early 2000s.14  In early 2005, Microsoft made the Windows operating system available in Russia;15 in October 2008, Apple launched iPhone sales with Russian retailers.16  As more Russians used the internet at home,17 the most-visited websites included Yandex, Rambler, and Mail.ru—which controlled the most market share—as well as non-Russian websites like Google and Yahoo, companies that quickly came to define the US tech sector.18 Piracy of software, mainly Western software such as Microsoft Windows, was also rampant around this time, especially in the 1990s, with a 2001 industry report estimating that about 90% of Russia’s software market at the time was pirated.19

    Russian organizations also collaborated with foreign counterparts. After the Soviet Union’s collapse in 1991, some Western businesses began to realize they could leverage the scientific and technical talent pools in Russia to outsource software development and other tasks.20 In 1996, billionaire George Soros launched an effort to build and equip internet centers at Russian universities to link schools, hospitals, and other Russian organizations to the global internet.21  In 2003, the University of Missouri launched a journalism education partnership with Moscow State University, which, relatively novel at the time, included using the internet to communicate between the two schools.22Cisco advised the Russian government on e-government strategy in the mid-2000s;23Russian cell provider MTS and British cell provider Vodafone signed a major agreement in October 2008, where MTS would receive “exclusive access to Vodafone’s products and services” and in turn leverage the company’s assistance in building third-generation (3G) cellular networks.24 Russian programmers continued to grow the IT outsourcing industry in service of a variety of global businesses.25 The list goes on. Some 1990s US sanctions issues and 2000s Putin anti-corruption raids notwithstanding,26  the interconnectivity across borders was pronounced. 

    As the Russian technology sector grew into the internet age, so did the Russian security services. Boris Yeltsin signed a presidential decree in 1993 creating the Federal Agency of Government Communications and Information (FAPSI), the successor to the Committee for State Security (KGB)’s Eighth Chief Directorate, focused on signals interception at home and abroad.27  Domestically, FAPSI ran SORM,28  a surveillance system for intercepting telephone calls, emails, and other internet communications whose tactics and technology originated in a 1980s KGB research institute29 (later expanded to its now-current SORM-3 version, which captures a range of telecommunications data). FAPSI also controlled licensing for information technology imports and exports, and, in 1994, it began coordinating telecommunication data-sharing between Russian security services and law enforcement agencies and those of countries in the Commonwealth of Independent States, or CIS (composed of Armenia, Azerbaijan, Belarus, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, and Uzbekistan).30 The agency answered directly to the Russian president.31 Yeltsin intended to use FAPSI to, among other tasks, “support his battles with the political opposition at the top.”32

    In 1995, the Federal Security Service (FSB), the KGB’s successor with some foreign and mostly domestic purview, took over the operation of the SORM system.33 In 2000, the government began to let the tax police, the Ministry of the Interior (which controls the national police), and other institutions use SORM as well34  The FAPSI was dissolved in 2003, and its Third Directorate was mostly absorbed into the FSB in 2003; some of its functions were also transferred to the Federal Protection Service (FSO), such as providing strategic signals intelligence to Russian leadership and surveilling the internet.35 The Federal Service for Technical Export and Control (FSTEK), a subcomponent of the Ministry of Defense, also played (and still plays) a role in licensing the export of dual-use technology items, military information security, and defense-focused control of Russian technology.36

    Nonetheless, high-level Kremlin officials were not paying as much attention to the internet as a threat to regime security at this time, particularly compared to their counterparts in China. The security hardliners who were very much concerned and paying attention to this issue—such as intelligence heads pushing forward “information security,” a sprawling concept of cybersecurity and information control37 —did not yet have enough influence to drown out the “technocrats,” elite technical experts in influential positions, and crystallize a highly securitized view of the internet.38 The use of Western technology in Russia, the relatively uninhibited growth of the Russian technology sector from the 1990s into the 2000s, and technology partnerships between Russian and Western businesses and universities underscored this reality. As technology and security scholar Jackie Kerr incisively notes: 

    “Russia’s moderate approach to the internet throughout this period was striking, given the extent to which it contrasted with the regime’s demonstrated distrust of (and limited tolerance for) independent media, criticism, and social movements, as well as its growing paranoia about foreign and Western influence.”39

    The Kremlin’s “internet awakening” 

    Moscow’s position on the internet began shifting in the late 2000s and early 2010s, catalyzed by a perception that Western technology was a means of foreign espionage, revolution-stoking, and influence-projection. The Kremlin’s “internet awakening,” as I would call it, was driven by a number of events, including the role of Georgian bloggers in the 2008 Russo-Georgian War, the use of social media in the 2010-2013 Arab Spring, online-organized protests against Putin’s 2011 election rigging and 2012 return to the presidency, the 2013 Snowden leaks about US internet surveillance, and the 2014 social media-driven Euromaidan Revolution in Ukraine.40 These events coincided with (and perhaps partly contributed to) the security hardliners in the Putin regime, already concerned about the internet years prior, gaining more power and influence—but now better equipped with the means to drive internet policy in the Russian political system.41

    During this period, Moscow’s cries of “color revolutions” and foreign interference were not simply propagandistic. The security services are rife with paranoia and conspiratorialism.42 Key officials, including Putin himself, were trained in the KGB at a time in which the US and Soviet Union routinely interfered in foreign political systems. Some senior security figures also believe in pseudoscientific means of controlling human behavior through information, where “complex psychosocial phenomena,” such as how populations of people think, are overlaid “with an innovative, mechanistic sense of order and control.”43 The influence of these security figures only grew in the 2000s as Putin restructured the government, consolidated power, reorganized the security services, and witnessed events such as the 2004-2005 Orange Revolution in Ukraine that provoked anger and paranoia.44

    When Kremlin officials looked on television or their own streets in the late 2000s and early 2010s and saw people mobilizing against their governments—in part using American internet platforms—they did not see people with agency, acting of their own volition; they saw a foreign hand at work. While the fear of regime overthrow certainly predates the Russo-Georgian War and the Arab Spring, this was the first, major time that the Kremlin widely linked the internet to potential revolutionary peril.45 Still today, many Russian foreign affairs commentators refer to the Arab Spring and similar, internet-involved events as “color revolutions.”46

    Alongside a crackdown on the internet in Russia,47the Russian government started talking more frequently in public about the importance of domestic technology to replace foreign-made hardware and software, particularly from Western countries. Domestic tech was now the name of the game. Older comments buried in state documents—the 2000 Information Security Doctrine’s call to “intensify development of the domestic production of information protection hardware and software, along with the methods to control their efficiency”48—were resurrected and given a stronger security bent. 

    “We must lessen our critical dependence on foreign technology.” 

    Vladimir Putin, speech to Federal Assembly, December 201449

    In a 2014 speech to the Federal Assembly, Putin iterated that “we must lessen our critical dependence on foreign technology” and that “import substitution programs must encourage the creation of a large group of industrial companies that can be competitive not only domestically but also on foreign markets.”50 The 2014 Military Doctrine said the main internal military risks to Russia included activities aimed at “destabilizing [the] domestic political and social situation in the country” and “subversive information activities against the population, especially young citizens of the State, aimed at undermining historical, spiritual, and patriotic traditions related to the defense of the Motherland.”51Russia’s 2015 National Security Strategy accuses the US and its allies of seeking to limit Russia’s dominance in world affairs, including by exerting “political, economic, military, and informational pressure on it” and manipulating information and communication technologies.52The Kremlin’s growing worries about the internet also stemmed from the extent to which Russian citizens’ use of the internet (especially among young people) makes them less susceptible to state television propaganda.53

    Western sanctions following Russia’s illegal 2014 invasion and annexation of Crimea in Ukraine contributed to this trend as well.54At a meeting with defense industry executives in May 2014, for instance, Putin said that 

    “[Because of Western sanctions] we have new circumstances to address now—we need to replace imports. … [W]e need to do everything we can to have everything that our defense industry needs produced here on our own soil, so that we will not be dependent on anyone else for any of the new weapons systems we are delivering to our armed forces.”55

    By that point in the year, the US had already issued a number of sanctions against Russian individuals and defense firms.56Notably, beginning in March 2014, the US Bureau of Industry and Security stopped issuing licenses for new exports of dual-use goods destined for Russia due to concern that they could be used in potential military applications.57 These restrictions forced Moscow to rethink its digital technology acquisition and development plans. 

    The Russian state was not entirely unfamiliar with domestic technology initiatives. In 2007, for instance, the government stood up Rusnano, a state company, to produce and make Russia a leader in nanotechnology.58Despite the backing of several high-ranking officials and credentialed scientists, it failed to meet ambitious targets for 2011 due to a combination of limited technical talent, challenges with cultivating entrepreneurship, a lack of competence in business management, and, perhaps most importantly, a lack of domestic nanotechnology production capability, which the then-Ministry for Industry and Energy described in 2007 as at a critically low level.59Follow-on targets, such as companies mass-producing nanotechnologies beginning in 2013, were never met.60 Since 2016, Rusnano has been on the edge of bankruptcy,61 and corruption investigations have plagued its leadership.62

    More robust policies to promote domestic technology development and foreign technology replacement soon followed, and Moscow’s push for technological autarky picked up speed. 

    Accelerating the push: Moscow’s mid-2010s domestic technology policies  

    Russia’s campaign to boost domestic technology and, where possible, replace Western technology with its own substitutes accelerated in the following years. These efforts ranged from domestic investments in high-tech sectors to creating a registry of domestic software, requiring the use of domestic microelectronics (such as in computer processing), and “isolating” Russia’s internet. 

    Clarity in Russian strategy 

    In Putin’s 2014 address to the Federal Assembly, he launched the National Technology Initiative, an effort to stimulate the development of high-tech Russian industry sectors.63 It focused on nine projects: what the government called AutoNet, AeroNet, EnergyNet, FinNet, FoodNet, HealthNet, MariNet, NeuroNet, and SafeNet.64 (There are 68 approved NTI projects as of July 2023, but it is unclear how much these efforts have achieved; this is discussed further below.)65 AutoNet, for example, is a public-private partnership to develop the Russian market for services, systems, and modern vehicles focused on logistics—what the initiative calls the “Internet of Transportation.”66 The goals of the overall initiative, as laid out in the subsequent 2016 strategy, included boosting the Russian economy and spending four percent of Russia’s GDP on science and technology by 2035.67 (This goal, as it turns out, was not achieved, as discussed further below.) All of this followed, or at least coincided with, a raft of new sanctions, mainly from the US and the EU, targeting Russia’s financial, energy, and defense sectors, among other industries.68

    Nevertheless, Moscow’s efforts continued. The government passed a law to create a registry of domestic software products in 2015, which went into effect on January 1, 2016.69 Its initial purpose was to establish a list of Russian software products that state organizations could use.70  The registry contains products that either (i) are at least 50 percent Russian-owned, (ii) have less than thirty percent of revenue going to foreign beneficiaries, or (iii) are open-sourced with the relevant intellectual property owned by a Russian entity.71 Around August 2016, about a year into the registry’s launch, the executive director of Russia’s Association of Software Developers said that “most customers already have an established IT infrastructure that uses foreign software” and that “it takes time to change procedures that have been established over so many years.”72 This effort occurred alongside a broader push in Russia to unify and digitize government procurement through contract registries, complaint databases, and a platform covering the entire procurement process from notice to audit and monitoring.73

    By the end of 2016, Putin proposed launching a “large-scale, system-wide program to develop an economy of a new technological generation” and declared that “Russia’s national and technological independence, in fact, our future depend on this.”74 The Russian government also expanded its payment card system that year, called Mir, which was launched in 2014 following sanctions against Russia for invading and annexing Crimea, Ukraine.75

    Russian cybersecurity companies also began to face more challenges in the Western market. At the beginning of the decade, Russian cybersecurity giant Kaspersky Lab planned an initial public offering (IPO) in the US but then backed out of the plan in 2012, with its founder Eugene Kaspersky saying he wanted to keep control of the company’s direction.76 There were also some media reports emerging, which Kaspersky contested, discussing the company’s relationships with Russian security organizations and its general need to align with the Kremlin’s interests.77 In 2018, the US Department of Defense, General Services Administration, and NASA banned the use of Kaspersky Lab hardware, software, and services on federal government systems.78 Detailed, public revenue information for Kaspersky is not available—including about how the US ban impacted Kaspersky’s revenue—but as of 2018, the company was making more than 85 percent of its revenue from outside Russia.79 In June 2024, the Commerce Department banned the sale of Kaspersky antivirus and cybersecurity technologies in the US altogether.80  Other Russian cyber firms, meanwhile, stayed under the public’s radar in the 2010s. Positive Technologies, subsequently sanctioned by the US in 2021 and the EU in 2023 for supporting Russian intelligence operations, had offices in Massachusetts and London for most of the decade.81

    “We all know who the chief administrator of the global internet is. And due to its volatility, we have to think about how to ensure our national security.” 

    Dmitry Peskov, Kremlin Press Secretary, November 28, 201782

    All of this coincided with the Russian government cracking down heavily on the internet, relative to its degrees of openness in the country in the 1990s and early 2000s. Notably, in August 2014, the Kremlin expanded the SORM-2 internet surveillance program beyond internet service providers (ISPs), requiring that all online service providers operating in Russia install the “black boxes” that enable the FSB to intercept traffic.83 Putin that year infamously called the global internet a CIA project.84 In a similar form, Kremlin press secretary Dmitry Peskov remarked in November 2017 that “We all know who the chief administrator of the global internet is. And due to its volatility, we have to think about how to ensure our national security.”85The practice of widespread blocking of websites accelerated in March 2014 tied to the Russian government’s illegal annexation of Crimea.86

    Russian government strategic documents reflected this view. The 2016 Information Security Doctrine of the Russian Federation stated that the “intelligence services of certain states are increasingly using information and psychological tools with a view toward destabilizing the internal political and social situation in various regions around the world.”87  Russia’s 2021 National Security Strategy for the first time specifically called out non-Russian technology companies saying that they are “spreading unverified information.”88  A “distorted view of historical facts,” it continued, “as well as events taking place in the Russian Federation and in the world, are imposed on internet users for political reasons.”89 Although the documents characteristically made these statements in passive voice, the actors supposedly threatening Russia were clear: the West, and especially the United States. 

    A particular Kremlin perspective on the internet was evolving, one in which the web was both a weapon to be used against Russia’s enemies and a threat to regime security. It at once reflected the reality of a Putin regime using the internet to conduct cyber espionage, launch destructive cyberattacks, and spread mis- and disinformation while also monitoring online activity and dissent with intense paranoia. This view is championed by a president who, by at least one allegation, limits his own personal use of mobile phones and the internet.90 This concern extended to all kinds of technologies, from operating systems to mobile app stores and social media platforms. More than cross-border connectivity, innovation, or anything else, Russian officials saw security risks. 

    Yet, mixed results in practice 

    Despite all this rhetoric, practice once again diverged from policy. These gaps between domestic tech on paper and in reality were seen in surveillance, open-source software development, the development of a “Russian Silicon Valley,” and microelectronics manufacturing, among other areas. 

    Many of Russia’s domestic tech efforts in the 2010s were a mixed bag. The state has made little progress on its 2016 vision to spend 4% of the country’s GDP on scientific R&D by 2035—an objective that was incredibly ambitious—if not unrealistic—for Russia. This lofty goal was part of Russia’s broader, concerted push to promote domestic digital technology, and it was arguably driven in part by a belief that commercial funding and productization would necessarily rise to meet the state’s interest in domestic digital technology. However, it did not. Data from the Organization for Economic Cooperation and Development (OECD) in Figure 1 shows that Russian spending on domestic R&D barely rose above one percent of GDP from the entire period of 2000-2020, even before the start of the 2022 Russian war on Ukraine and the Kremlin’s even greater focus on defense and military technology. 

    Source: OECD91

    Other challenges were exemplified in security and surveillance legislation. In 2018, Russia’s parliament amended the Yarovaya law—a set of 2016 counterterrorism and security bills named after one of its authors, Irina Yarovaya, a member of the Russian parliament.92 The amendment required telecommunications operators to store phone call recordings, text messages, internet traffic, and other information from users for up to six months, beginning in July 2018.93 Yet it was quickly clear that many Russian telecommunications companies could not acquire the requisite equipment for this data collected domestically and instead would have to use Cisco (US), HP (US), and Huawei (China) technology to comply with the new data storage requirements.94  On the one hand, the Russian security services further advanced their ability to access data and target dissent at home; on the other hand, the companies faced domestic tech shortfalls when implementing the data retention that caused further reliance on foreign technology companies. In May 2019, the state formalized a requirement for companies to use domestic data storage technology95 —but the reality was still that many domestic offerings were insufficient to meet companies’ needs.96 Here also lay signals of a future problem for Russia: if the offerings in Russia were insufficient and the offerings in the West were not available, the country would likely be forced to turn to digital technologies from China. 

    Building domestic software products, on the other hand, may be one of the most successful areas of Russia’s overall push. Google products remained popular in Russia in the 2000s and 2010s—YouTube is still one of the more widely used platforms—but Yandex controlled the majority of the search engine market domestically.97VK (then, VKontakte) was Russia’s answer to Facebook; it is often dubbed “Russia’s Facebook,” in fact, because of the virtually identical interface. The platform was for years more popular than Facebook in Russia,98  even as the Kremlin wrested the administration of the website away from its founder, Pavel Durov (who also founded Telegram), to clearly put it more clearly under state surveillance and control.99 It is worth noting that this occurred by giving the ownership of VK to Mail.ru, the Russian tech conglomerate that already owned the social network Odnoklassniki and operated Russia’s biggest email provider.100 These and other domestic software products carved out market share that remained unconquered by US and non-Russian counterparts. 

    Not all products and services, of course, were as competitive. Rutube, developed in the mid-2000s as a YouTube alternative, switched in 2012 to a content aggregation model (after struggling to compete with the actual YouTube) and in December 2020 was purchased by the state-owned Gazprom Media to build out longer professional and amateur content for Russians.101  As Gazprom then looked to develop its own TikTok-style product, Andrei Konyaev, of the digital science magazine N+1, commented that Rutube exemplified the challenge ahead: where a product already exists with millions of users in its base, Russians would not immediately go en masse to a new service.102  Rutube has since expanded into areas such as streaming live mixed martial arts (MMA) fights.103 For the time being, a more likely substitute appears to live with VK, which saw considerable growth in social media and content services in 2023.104

    In 2015, Moscow reportedly looked to Jolla, a Finnish company, to provide a mobile operating system built specially for Russian use.105 The chairman of the company, which develops the Linux-based Sailfish OS, said at the time that Russia’s plan was to “have one code base but then to integrate local internet services and ecommerce services on the user interface.”106 Russian authorities chose Sailfish OS in 2016 as the mobile platform to develop further, yet, in 2021, the company began curtailing business in Russia and severed ties in 2022.107 Now, Russian state-owned telecom Rostelecom is supposedly building an Aurora OS mobile operating system for Russia, the progress of which remains to be seen.108

    At the same time, the Russian government also upped its interest in open-source software. This includes Astra Linux, a Linux-variant operating system developed by the Russian conglomerate Astra Linux Group (RusBITech-Astra LLC) in the late 2000s or early 2010s (depending on the source) based on the Debian version of Linux.109 It has slowly become the Russian state’s operating system of choice, now offering both a commercial version and one designed for handling secure information.110

    In September 2018, the Ministry of Digital Development wrote that open-source software is safer to use than proprietary software in government settings because many applications from well-known developers have undocumented features that can be a security threat—but with open-source software, the state can access the source code and control this risk.111 (This is, of course, not necessarily true but is an interesting perspective from the ministry nonetheless.) By September 2021, the state announced new plans to further support open-source software development, even though Microsoft Windows remained widely used in the country.112 Based on images released by the Kremlin in December 2021, it even appeared that some of the computers in Putin’s office still used Windows XP, which was originally released in 2001.113

    Other domestic tech activities have fallen more on their face. The Skolkovo Innovation Center is a prime example. Established in 2010114 to become, in aspiration, “Russia’s Silicon Valley,” the program had billions of dollars in Russian government funding and global partnerships with Siemens, IBM, Intel, Microsoft, and Cisco.115  Upon its launch, then-President Dmitry Medvedev flew to Silicon Valley in California to meet with Apple’s Steve Jobs, then-California Governor Arnold Schwarzenegger, and executives from Twitter, Google, and many companies—saying his goal was to develop “full-fledged relations” and cooperation with companies.116 Yet, the initial excitement quickly gave way to political fights and other problems. As journalist Alec Luhn wrote in 2013, “Skolkovo has in the past seemed like a typical pet project of Medvedev’s: reform-minded, jumped up on economic modernization rhetoric, but producing little in the way of results.”117

    The state opened and later closed corruption investigations into some of the officials in charge, reportedly due to political fights against Medvedev and others in his faction118—a “tacit repudiation for Medvedev’s dalliance with [the] West,” as Gavin Wilde and I wrote in 2022.119 By June 2015, many of the involved startups had emigrated from Russia and Skolkovo had shifted towards partnerships with Chinese companies.120 Russian officials absurdly suggested this had nothing to do with Western sanctions post-Crimea annexation.121 In 2022, Skolkovo was dealt another blow after MIT ended its partnership with Skolkovo, as many of the Western businesses involved with the center left the Russian market entirely.122

    Domestic hardware manufacturing has been another significant pain point. The development of domestic computer chips and nanotechnology had been a state focal area since, at least, Rusnano’s creation in 2007. Simultaneously, Russian spies continued to steal advanced microelectronics from the West for use in radar and surveillance systems, weapon guidance systems, and detonation triggers.123 At a private Ministry of Digital Development meeting in December 2021, large buyers of Russian server equipment told state officials that they were dissatisfied with the cost, quality, and performance of domestic processors compared to foreign versions.124  Russian chip manufacturers reportedly responded by pointing to Moscow’s import substitution campaign and claimed that it was sufficient that the servers at least worked.125 Of course, this is the bare minimum for an ostensibly functional technology product: that it functions. 

    This was not an isolated incident. The Moscow Center of SPARC Technologies (MCST) had spent years developing and manufacturing the Elbrus-8C processor,126  designed to serve as a replacement for foreign components. It was an aspiration like many others in Russia’s years-long push for greater technological independence. Yet when SberInfra—part of Russian bank Sber—tested the processor in January 2022, it found insufficient memory capacity, poor out-of-the-box software optimization, and other problems.127 A Sber representative called the Elrbus-8C “very weak” compared to an Intel-made equivalent.128

    “We’re throwing rocks at the locomotive.” 

    Alexei Venediktov, owner of Ekho Moskvy (Echo of Moscow) radio station, about Russia’s then-legal ban on Telegram, April 13, 2018129

    Even on the surveillance front, the state’s domestic technology capabilities were not at the level of sophistication the Kremlin desired. In 2018, the Russian government issued a legal ban on the encrypted messaging app Telegram, after Telegram said it could not provide encryption keys to the Russian government related to a 2017 terrorist attack in St. Petersburg.130 Journalists, dissidents, and other Russians had also been using the app to share news and facilitate political conversations. For the next two years, the state tried and failed over and over again to block access to the app within Russia, due in part to Telegram’s circumvention efforts such as using domain fronting, where traffic looks like it is going to one place but actually went to Telegram servers, as well as weaknesses in the state’s internet censorship and deep packet inspection (DPI) filtering capabilities.131 Even the Kremlin’s press secretary, Dmitry Peskov, and other senior officials were still using the app while the ban was in effect.132 Alexei Venediktov, the owner of Ekho Moskvy (the Echo of Moscow) radio station, quipped in April 2018 that “we’re throwing rocks at the locomotive.”133

    In June 2020, the Russian government lifted the ban on Telegram, for a variety of likely reasons that include wasted time and resources to fail to block the app—as well as Pavel Durov’s vague claim that Telegram had improved its ability to remove extremist content while also protecting privacy.134 (There has also been reporting about Russian intelligence spying on Telegram chats in Ukraine.)135  The state’s filtering capabilities have improved somewhat but remained quite weak during this period.136  Moscow’s vision of a sovereign Russian internet, where internet regions could be isolated from the rest of the world at will, has similarly faced numerous challenges—and not just technical ones.137 Of course, many other kinds of state surveillance, like the SORM internet monitoring system, remained in place and provided invasive data interception capabilities to the state security services alongside failed attempts at large-scale internet filtering. 

    And when all else fails, Moscow can wield offline violence and coercion, from detaining protestors to harassing dissidents to a notable example in September 2021: when Apple and Google refused to delete opposition leader Alexey Navalny’s election app from their app stores, the Kremlin sent masked men with guns to Google’s Moscow office, gave Apple and Google representatives lists of Russian employees that would be jailed, and even sent FSB agents to the home of Google’s top executive in Russia and then followed her to a hotel—all to get the companies to comply.138

    Meanwhile, Chinese telecommunications firm Huawei made significant inroads in Russia by playing into this Kremlin fear of Western technology. Newly signed partnerships with Russian telecom providers, meetings with state officials, and talk of broadly supporting Russia’s “digital economy” all signified Huawei’s greater access in a country increasingly worried about US and European subversion.139 One Russian international affairs analyst importantly argued at the time that Chinese technology also came with espionage risks and that overdependence on non-Western technology was still a point of vulnerability.140

    All told, the reasons behind these difficulties varied depending on the technology and policy in question. Domestic hardware development fell far short of stated goals, not least because of Russia’s incredibly limited microelectronics manufacturing capacity. The Skolkovo Innovation Center was plagued by corruption, ineffective management, and political fights among Russian leadership. Efforts to isolate the internet were in many areas not given sufficient priority by the Kremlin and ran into companies simply dragging their feet, as with installing “black boxes” on internet networks.141 More broadly, as Russian international relations professor Tatiana Romanova noted in March 2015—a year after the Putin regime’s invasion and annexation of Crimea, Ukraine—“import substitution requires huge investment at a time when resources are scarce.”142

    The Russian government is not the only actor influencing these dynamics. Different parts of Russian industry had their own mixed motives in dealing with the realities of sanctions compliance following the invasion of Crimea, trying to remain competitive in the global market, and pushing for self-serving domestic tech policies, among others. The US government was also concerned about how Russia-US tech engagement in the 2010s could enable Russian investors and others to steal American tech and trade secrets.143 Given this paper’s focus, though, the discussion of Russia’s domestic tech push is meant to highlight just how Western sanctions in 2014, the Kremlin’s “internet awakening” and growing paranoia about foreign technology, and other factors catalyzed a push for Russia’s relative technological independence. 

    Headed into 2022, the march towards domestic technology—across state software procurement, moves to expel Microsoft Windows, and more—continued apace. 

    The 2022 Russian war on Ukraine and evolving techno-isolationism 

    Since its full-scale invasion of Ukraine began in February 2022, Russia’s relative technological isolationism has rapidly accelerated. Combinations of escalating “brain drain” and a frantic state push to retain domestic tech talent, Western tech companies exiting Russia, some forced and some self-serving private-sector excitement at domestic tech efforts, and more success with software than hardware have produced a landscape in which the Russian tech sector under Vladimir Putin’s rule is forced to contend with more isolation than ever before. Russia also faces persistent roadblocks to investing greater resources in domestic technology development and has become far more dependent on digital technology from China since the war’s inception. 

    Brain drain has been a problem in Russia for decades, but the 2022 Russian war on Ukraine elevated Russia’s tech brain drain to new heights. In the months after the war began, numerous Russian programmers and other technically talented individuals left the country.144 The Russian Association for Electronic Communications said that 50,000-70,000 IT specialists left in February and March 2022 alone.145 Departures only grew in the ensuing period. Russia’s Ministry of Digital Development reportedly estimated in December 2022 that approximately 100,000 IT workers had left Russia since February 2022, which the Ministry equated to 10 percent of Russia’s entire technology workforce.146 Former employees of Yandex, the Russian internet giant, “estimate that as many as a third left the country in just the first two months after the invasion,” according to MIT Technology Review, although many still work remotely.147   One study examined the listed online locations of active Russian developers, finding that between February 2021 and November 2022 about 11 percent of these developers had changed locations to a new country.148

    This discourse on brain drain has also permeated the Russian tech community. Notably, Lev Gershenzon, the former head of Yandex News, called in March 2022 for his former colleagues to quit working at Yandex: 

    “The fact that a significant part of the Russian population may believe there is no war is the basis and driving force of this war… Today, Yandex is a key element in hiding information about war. Every day and hour of such ‘news’ costs human lives. And you, my former colleagues, are also responsible for this.”149

    These reports collectively point to a staggering number of Russian residents who have left the country since February 2022 and brought their technological skills with them. And even if some of those individuals living outside of Russia work remotely for Russian companies, that still poses a challenge for Russia’s tech sector: they may be unable to return to Russia, and once located in some foreign countries, technically talented Russians may have opportunities to make far more money by working for non-Russian companies than they had when living and working in the Russian market. These incentives are not new to the wartime period, but the starkness of the choices and the inability of many individuals to return to Russia have been heightened greatly since February 2022. This is not to say, of course, that there are no difficulties on the other side of this equation—including non-Russian companies hesitating to hire IT professionals who have recently left Russia. 

    Moscow has semi-frantically attempted to stem the tide. It upped tax incentives in March 2022 for qualified IT experts to remain in the country150 and exempted some IT workers in September 2022 (along with some bankers and other professionals) from conscription into the military.151 This followed tech companies in Russia, as well as Russia’s Association of Software Developers, telling the Ministry of Digital Development that a widespread deployment of tech workers in combat would seriously harm the country, including by undermining support for the military and for “critical information infrastructure” facilities (as they are called in Russian law).152

    In March 2023, the state announced that foreign software engineers could sign contracts with approved Russian tech companies without needing work permits.153 Russian international affairs commentator Ifan Timofeev—also the program director for the well-known Valdai Discussion Club, which Putin frequents154 —wrote in May 2023 that one of Russia’s “biggest vulnerabilities is its industrial and human potential,” citing the 2022 brain drain acceleration as a factor.155This feeling is clear among Russian members of parliament, some of whom were discussing the need for a law in December 2022 to prevent Russians who left the country from remotely working for many public- and private-sector organizations altogether.156 This law did not materialize, though some Russian organizations like banks Sber and Tinkoff have restricted their employees’ ability to work remotely from outside Russia.157

    This outflow of technical talent from the country has merged with a broader exit of companies from the Russian market and persistent domestic technology challenges. Many non-Russian businesses have shuttered their operations in Russia and/or left the market entirely since February 2022. Their motivations for doing so include combinations of sanctions compliance, concerns over employee safety, support for Ukraine, signaling resolve to Western governments, and restrictions from the Russian government, among others. Western sanctions, for instance, have hit semiconductors, unmanned aerial vehicles (UAVs), and many other kinds of technologies;158 companies providing information services to Russians, such as Google and Twitter (now X), are still active in the country. 

    Some businesses, like McDonald’s, sold their in-country infrastructure to new Russian owners after they left.159 The Russian government cracked down on other businesses that remained, such as officially designating Meta—the parent company of Facebook, WhatsApp, and Instagram—as an “extremist” organization.160 This is at once propagandistic (by essentially labeling Facebook as a terrorist organization), sincere (in that the Kremlin genuinely believes Western tech platforms are operating at the behest of the US government),161  and intended to enable further crackdowns (given that many repressive laws in Russia are oriented around the term “extremism”).162  All told, the historical engagements between Russian and Western businesses and universities in the technology sphere have given way to even more severed ties. 

    These Western sanctions and business departures have forced the Russian government, as well as Russian industry and civil society, to contend with tech replacement and acquisition problems more urgently than ever before. Russia’s pre-February 2022 starting point was already worrisome for the Kremlin: a Bank of Finland analysis published in March 2022 found that Russia’s industrial production shares of “medium- and high-technology sectors such as machinery [and] equipment have declined slightly over the past decade,” with the exception of the pharmaceutical industry.163 In some ways, Russia’s tech dependence had also been shifting towards China: between 2013-2018, the study found, the percentage of Russian tech imports from the EU declined, while “China’s share for technology sectors has grown visibly.”164 At a meeting in 2023 with European defense and intelligence analysts, one expert described this dynamic as Russia losing its strategic ability to counterbalance between tech dependence on the US and China. Now, Moscow is largely stuck with the latter.165

    In the hardware sphere, Russia has struggled even more since than it did prior to the war. A key factor in this decline is that the state does not have a robust microelectronics capability. In May 2022, Alexander Kuleshov, a mathematician and technologist who took over the Skolkovo Innovation Center in 2021,166 called Russia’s supply of tech equipment a “disaster.”167Equipment such as supercomputer boards break down frequently, he said, and the manufacturers of some equipment have terminated repair, maintenance, and other warranties.168 News reports indicate that Russian intelligence organizations have evaded sanctions to purchase chips from third countries, and Russian forces have resorted in some cases to stripping down refrigerators and other appliances to use their chips in military gear.169

    The aforementioned Elbrus processor—which the Russian state hoped could replace processors made by Intel and other US firms—was originally manufactured by TSMC in Taiwan.170 After the 2022 Russian war on Ukraine began, TSMC stopped working with Russian companies, and the MCST that designs Elbrus had to pivot to the Mikron Group, a microelectronics company in Russia.171 This is hardly a one-to-one replacement: TSMC is a global leader in semiconductor manufacturing, and Mikron Group (JSC Mikron), by some reports, cannot even meet the requirements to produce chips used in mobile phones, computers, and other devices.172 JSC Mikron has also had some manufacturing infrastructure, at least historically, in China.173 The only other major microelectronics company in Russia, Baikal Electronics—which makes ARM-based processors—also relied on TSMC to do most of its manufacturing, a partnership that is now terminated.174 Other smaller Russian manufacturers have struggled in recent years with debt, and since sanctions during the war, “Russian chip-design firms have lost access to most foreign contract manufacturing.”175 Sources in the electronic manufacturing sector told the newspaper Vedomosti in March 2024 that over half of the processors made by Baikal Electronics are defective.176

    Software is a more complete story than hardware. Russia’s cybersecurity sector has many competitive companies, like Kaspersky and Positive Technologies; even with US and EU sanctions,177 Positive Technologies has seen double-digit revenue growth in 2023 and is positioned for additional international growth in 2024.178 The Astra Linux operating system has also grown in usage in recent months.179 In May 2022, Russia’s Ministry of Digital Development announced plans to take Russia’s domestic software registry, which then had over 13,000 products, and turn it into a “full-fledged marketplace” for acquiring software (users located outside of Russia currently appear blocked from accessing the registry).180 Some companies are also pushing the state to reduce the competitiveness of foreign products in Russia: in May 2022, for example, the Domestic Software Association, which represents over 220 tech companies, told the Ministry of Digital Development that it should not simplify the process for joining the domestic software registry because “the simplification may lead to [the] emergence of foreign software clones.”181 In short, while the state is rolling out many policies at once, it is reductive and inaccurate to treat Russia’s tech ecosystem as a highly coordinated, top-down system in which companies and other stakeholders have no agency or influence. 

    For some Russian internet companies attempting to show distance from the state, such as Yandex—which sold off its news assets to VK in September 2022, as the Kremlin cranked up penalties for companies not bowing to its propaganda directives and wishes182 —the major source of growth may be out of Russia. Yandex engaged in months of conversations, discussed more below, about restructuring the company to separate its publicly listed Dutch holding company from the Russian side of the business.183 For years, the company has maintained business operations on other continents, including Europe. The Q3 2023 results from Yandex’s public Dutch holding company showed quarterly revenue up 54% from the year prior.184 An internet giant born in Russia in the 1990s may now be able to keep its growth—but, ironically, by cutting off its Russian arm. And as of February 2024, for a sale price of $5.2 billion, this is exactly what Yandex plans to do.185

    These advances aside, conversations at Positive Hack Days 2023—Russia’s largest hacking conference, put on by Russian cyber firm and intelligence contractor Positive Technologies—indicate that many Russian companies are still using Western software even if they are not supposed to do so. There is less visibility into this “shadow” market, but it exists because companies have not always been able to replace foreign-made software with domestic software.186 A lack of many viable alternatives in kernels, compilers, and interpreters (lower-down parts of the software “stack”) contributes to this problem, and it will continue to prove a challenge going forward in building out alternative applications, operating systems, and other technologies in Russia.187 Compatibility issues also plague Russian-made software. As of June 2023, the Russian government has been creating independent centers to test the compatibility of Russian software with domestic hardware and operating systems for this very reason.188  It has also announced plans to develop a “Multiscanner” platform to replace the use of VirusTotal, due to Russian government fears that the US government could access data uploaded to VirusTotal via its owner Google.189

    Russian tech investments and Russian-Chinese tech entanglement 

    China is a consistent and growing player in Russia’s technology developments. By one count, the economic value of Chinese and Hong Kong exports of US chips to Russia increased ten times from 2021 to 2022 (from $51 million to just under $600 million), and China and Hong Kong comprised nearly ninety percent of global chip exports to Russia between March-December 2022.190 The US Office of the Director of National Intelligence noted in a declassified June 2023 assessment that “the PRC is providing some dual-use technology that Moscow’s military uses to continue the war in Ukraine, despite an international cordon of sanctions and export controls” and cited foreign press reports that Russia has acquired large numbers of chips through small Chinese- and Hong Kong-based traders.191Two unnamed senior Biden administration officials said in April 2024 that in 2023, about ninety percent of Russia’s microelectronics were provided from China.192

    In other hardware, Chinese smartphone sales rose forty-two percent by volume in Russia from 2022 to 2023.193 Chinese smartphone manufacturers Xiaomi and Realme took the first and second spots for Russian market share in 2023, overtaking Samsung (South Korea) and Apple (US).194 It appears that for some Chinese tech firms, initial concerns about US sanctions and pressure from suppliers195 have turned into companies remaining in the Russian market. However, there are exceptions when it comes to hardware: Chinese telecom Huawei, for its part, disbanded its enterprise business group in Russia in December 2022 and reportedly stopped taking new contracts to sell network equipment to Russian operators. 196

    Russia’s dependence on Chinese technology is less prominent in software. After Visa and Mastercard terminated their business operations in Russia in the spring of 2022,197 China’s UnionPay system was briefly seen as an alternative before it stopped accepting cards from sanctioned Russian banks in September 2022.198 OpenKylin, China’s first domestic-made open-source operating system for desktops, built on Linux, was released in July 2023—but it is unclear how much it might be presently used in Russia.199 As mentioned, Russia has been developing the Astra Linux operating system—which is also based on Linux and has an open-source version—as a replacement for Microsoft Windows.200The state banned officials from using foreign-built messaging apps in March 2023, including the Chinese platform WeChat (along with Telegram, WhatsApp, and others).201Russian authorities are also looking to develop a Russian app that, similar to WeChat, serves as a one-stop-shop for communications, banking, and more—and which could enable, much like WeChat, a dangerous kind of concentrated surveillance.202

    On the investment front, the Russian Ministry of Economic Development quoted a Chinese representative in November 2022 stating that Chinese investment in Russia from January to August of 2022 totaled $450 million, up 150 percent from the same period in 2021.203 But this investment has not been consistent across sectors or as meaningful in the technology realm. Analysis from the Observer Research Foundation, an India-based think tank, found that Chinese investment in Russia has “surged” in the energy, infrastructure, and transportation sectors—while “fear of Western sanctions has driven away major Chinese tech companies such as Huawei and DJI from Russia, much to the chagrin of Moscow.”204 Former Russian journalist and economics expert Mikhail Korostikov has also argued that Chinese investment in Russia “remains relatively small, partly because Moscow is not prepared to accept Chinese investment without certain restrictions.”205 An analysis from the Asia Society in October 2023 concluded that “Beijing is in no hurry to embed itself in the unpredictable and now war-focused and strained Russian economy” as investment flows stay “modest.”206Russian dependence on Chinese technology in some areas, such as semiconductors, does not necessarily translate to other areas such as software usage and investment. 

    On the domestic financing front, the National Technology Initiative, first called for by Putin in 2014 and established formally in 2016, currently has sixty-eight projects approved under its general NTI Fund, one of the multiple vectors through which the state financially supports projects focused on high-tech industries. 207Most projects are, as of July 2023, in the implementation stage, with others suspended, discontinued, or undergoing post-project monitoring. 

    The list goes on. Russia’s National Technology Initiative announced a new project in April 2023, called NTI Venture Funding, in partnership with the Popov Radio Manufacturing Plant in Siberia. Reportedly, the NTI Venture Funding project plans to invest approximately $65.8 million in 20 or more projects across robotics, microelectronics, unmanned aviation, cargo delivery, and wireless technology, among others.208  It is clear that developing Russian alternatives to foreign tech remains the goal. In practice, this venture funding plan contrasts with overall Russian spending on R&D, which as indicated above has remained stagnant for two decades. For 2024, the Russian government plans to spend six percent of GDP on the military, most of which will likely go towards the production of military equipment.209 Some technology companies may be able to pitch defense- and military-focused projects to receive some of the funding, such as “information security” systems for combat units. But even that sub-slice of the pie, if it materializes at all, is hardly enough to catapult Russia’s digital tech development and commercialization to the levels once imagined a decade prior. 

    The drumbeat of restrictions, meanwhile, continues: in September 2022, Putin declared that the government must ensure Russia’s technological independence from foreign software by December 2022;“210  in August 2023, Putin signed a new law banning state agencies and companies from using non-Russian and non-compliant geoinformation technologies, beginning in January 2026.211 It is often unclear how these deadlines are set and whether they are remotely realistic. Simultaneously, the Putin regime’s obsessive focus on defense and securitization may increase the likelihood that new digital technologies developed in Russia are grabbed up by the military and defense base before companies or scientific research centers have opportunities to develop the commercial or civilian use that would increase their sustainability and attract investment. 

    Conclusion and key takeaways 

    Russia’s technological independence was an idea accelerated into reality by the conspiratorialism and paranoia surrounding the early 2000s “color revolutions” in former Soviet republics and the Kremlin’s “internet awakening” in the late 2000s and early 2010s. Now, Russia’s digital isolationism is both a growing reality and an explicit goal of the state. In some ways, this evolving saga appears to corroborate what economist Sergei Guriev argued in 2015: 

    “Having understood that its current foreign policy can only lead to isolation, the Russian government has put together a narrative in which this was its plan all along—that isolation is actually good for Russia. By reducing imports and foreign investment, the government claims that sanctions and countersanctions will eventually promote import substitution and growth.” 212

    The Kremlin is now further locked into this narrative, complemented by a loud (but bogus) narrative of Russia’s “victimization” by Western sanctions, cyber operations, and critical news reporting (As of late, Moscow calls reporting on the war it dislikes “information operations” or “information war.”) Even Vladimir Putin, in a May 2022 Russian Security Council meeting, said that “a number of Western tech companies unilaterally cut off Russia from technical support services for their equipment” and that “all this should be taken into account when Russian companies and public authorities introduce new foreign IT products or use previously installed ones.”213Narratives aside, the recognition is there: Russia’s technological autonomy has always been a goal, and its relative technological isolation is now a growing reality. 

    This section is geared toward at least four groups of policymakers and government organizations:  

    • The State Department, the US Agency for International Development (USAID), and others working on multilateral technology relations and capacity-building, founded on an understanding of Russia’s current technological ecosystem.
    • US and Western intelligence organizations monitoring the development of Russia’s technology sector as well as Russia’s offensive cyber capability development, technology procurement, and relationships with China. 
    • Those at the US Bureau of Industry and Security (BIS) under the Department of Commerce and others seeking to understand Russian demands for technologies that are export-controlled (e.g., semiconductors) and Russia’s level of technological independence versus dependence on foreign suppliers and investors. 
    • US, allied, and partner defense and security policymakers with an overall interest in evaluating how the 2022 Russian war on Ukraine has impacted Russian technology. 

    Key takeaways and recommendations 

    1. Russia has even fewer incentives (and even less room) today to stop pursuing an isolationist and securitized approach to digital technology—which will have impacts across international tech engagement, domestic policy, and human rights. The waves of sanctions against Russia and the termination of many tech relationships with Russian firms have cemented this as a reality for the Kremlin and Russian industry. Sanctions and terminated business relationships likely serve as confirmation bias for Russian officials who believe that a military and security paradigm is the most important and realistic way to approach technology development, deployment, and governance.214 After all, one vein of argument goes, if the US and the West are going to weaponize technology in their favor and to Russia’s detriment, Russia must approach technology through a securitized lens. US officials should remember that this is not purely a propagandistic line. Despite some analysts dismissing Russian worries about Western tech—characterizing them as bad-faith arguments made for utilitarian purposes—Russian officials’ concerns about foreign technology are genuine and serious in that they truly believe Western technology is a source of foreign election meddling, disinformation, espionage, and sabotage in Russia.215  This is all the more interesting as Russia becomes more digitally dependent on China.
      1. The State Department and USAID, among other organizations, should continue evaluating how this momentum for isolating and securitizing digital technology will harm freedom of expression and further impede opportunities for Russians to dissent in the country. Russian tech platforms and services will have more surveillance and censorship built in than most Western alternatives, such as YouTube or the encrypted messaging app Signal. For example, the push to develop a super app in Russia216 —one where payments, communications, and other functions are embedded into one application, much like China’s WeChat—is potentially a surveillance nightmare in the hands of an ever-more paranoid and security-driven regime. Some Russians and analysts have also worried the Kremlin will block YouTube in the coming months.217 Capacity-building, development, and freedom of expression efforts focused on Russia and the region will need to increase investments in virtual private networks (VPNs) and other means of providing access to lesscensored and surveilled platforms for the Russian people. The highly dynamic nature of the surveillance risks on the Russian internet (such as how VPNs are monitored and blocked or which organizations take charge of policing which kind of dissent) requires capacity-building agencies and democracy-focused nongovernmental organizations to continuously engage those with on-the-ground insights into Russia’s censorship, surveillance, and tech isolation. 
      2. The State Department, the intelligence community, and other elements of US and allied and partner governments working on Russia issues should seriously weigh their assumptions about Russian thinking against the evidence that Russian officials are genuine in their characterization of the internet as a weapon and a threat—and Western technologies as a threat to regime security and tools of foreign subversion. Analysts and policymakers should not underestimate the extent to which ideology, more than economic aims, drives Russian technology and information actions.218
    2. Russian companies have shown more success building their own domestic software than domestic hardware. Domestic software competitors have existed for years in areas like search (Yandex) and social media (VK and Odnoklassniki, or OK, which is also now owned by VK). The Astra Linux operating system is slowly but surely used on more and more government systems as well as private systems in industries like healthcare. For all the struggles facing the state—such as continued dependence on non-Russian technology and Russian companies “shadow” installing non-Russian technology without the state’s knowledge, or at least with its blind eye—the success story here is a greater possibility than it is with domestic hardware. On that front, Russia’s microelectronics manufacturing capability remains wholly insufficient. The US and its Western allies and partners have already seriously constrained Russia’s microelectronics sector—as have companies like TSMC—by simply ending their business relationships in the country. Russia cannot produce viable chips and other technology at any scale to be meaningfully useful. The country has become more dependent on Chinese hardware, and intelligence services have had to lean into the theft of processors and other hardware from the West. Russia’s hardware activities in the coming years are most likely to focus on illicit procurement rather than attempting to stand up domestic manufacturing capabilities (which the state has struggled to do for years). These challenges are exacerbated by a lack of investment: Russia’s spending on domestic R&D has hovered around one percent from 2000-2020. Those numbers are unlikely to shift as the state focuses its resources on the war in Ukraine and the immediate military uses of digital technology. The state has announced some plans to increase venture funding for Russian companies but is unclear how that will unfold—especially as most venture funding will not fix the immediate, underlying issue of the country’s minimal hardware manufacturing capacity. 
      1. BIS and other agencies monitoring export controls and Russia’s interest in illicit technology procurement should continue to monitor public reporting about Russian software, firmware, and hardware development—and also make sure to integrate some of the investment data, news sources, Russian industry discussions, and other references cited in this report into their analysis. Ever since the Putin regime began its strong push to boost Russian domestic technology and reduce technological dependence on the West, there have been important gulfs between government policy, on-the-ground reality, and what industry leaders have thought versus said aloud. Those gaps, which can often come to light in Russian news reporting and Russian industry conversations, provide critical insights into where the Russian government might move next. For instance, Russian companies’ past complaints about nonfunctional chips spoke to some of the underlying, systemic issues Russia faces with semiconductor manufacturing. The latest industry excitement about an Astra Linux technology stack, by contrast, speaks to greater advancements when it comes to operating systems, also made clear by news reports and other information. In addition to nonpublic information sources, these conversations and sentiments should not be overlooked. Rhetoric from state officials should be matched against industry conversations and the most reliable data on state and commercial investment in digital technology R&D. US policymakers should use the reality of investment (and lack thereof) in Russian domestic digital tech, rather than just state policy, to understand Russia’s future directions. 
      2. The US defense and intelligence community, as well as those of US allies and partners, should note that technological isolation poses new or enhanced cybersecurity risks to the Russian state. For example, on the domestic software side, widespread use of the Astra Linux operating system and the company’s goal of creating a full-fledged software stack219 potentially create new single points of failure and concentrations of technology that the West could exploit. US allies and partners may also wish to track and analyze these domestic digital technology concentrations to evaluate where they may create new vulnerabilities. 
      3. BIS and its Commerce Department partner agencies, the State Department, and others tracking Russia’s domestic software development should monitor new developments in the Russian private sector and hacking community to understand future directions across operating systems, mobile apps, and other technology and software. These developments matter for Russia’s tech sector at home, its ability to market products and services overseas, and the technical vulnerabilities within Russian networks. Still, whether the Russian state can and will muster the resources, bureaucratic buy-in, and industry coordination to promote domestic software is an open question. As Russian journalist and intelligence expert Andrei Soldatov notes, “The concept of [domestic software] registers also encompasses the fundamental belief in the possibility of forming a final, exhaustive list of everything, from innovation to enemies of the regime.”220 Mandating licensing processes and other checks before deploying even the most basic software can also slow down implementation.221 Such considerations should guide how US and allied governments issue sanctions and investigate sanctions evasion—and agencies can assess these challenges by monitoring what Russian companies, hackers, and developers are publicly saying on blogs and at conferences and events
    3. The Russian cybersecurity sector will play an important role in Moscow’s reaction to growing sanctions and other restrictions as well as its efforts to technologically isolate itself from the West. Russian cybersecurity companies are dealing with a complex landscape at home. There is a nuanced spectrum of perspectives within the industry on Western sanctions, the 2022 Russian war on Ukraine, and the Putin regime’s domestic tech push—and many of these individuals are in difficult positions remaining in the country. While some companies and individuals are vocally supportive of the state’s propaganda and its domestic tech pushes, even those perspectives may come from a genuine belief in the state’s narratives, a desire to appear supportive of state efforts, or self-serving wishes to profit off the domestic tech push, tax subsidies, and other newly introduced policies from the Russian government. Many other firms may express agreement with state policies when that does not actually reflect their view.222 The security-focused nature of some of these companies, albeit often in a commercial and consumer-protective sense, may still give them more rhetorical play with Moscow than other tech companies outside the “defensive” sphere.   
      1. US and Western policymakers generally trying to understand the future of Russia’s tech sector—whether to evaluate sanctions efficacy (e.g., at BIS), track emerging cyber threats (e.g., at the UK’s National Cyber Security Centre), or something else entirely—should know that there is increasingly little room within Russia’s technology sector to push back against the state or to contradict core Kremlin objectives, such as getting rid of Microsoft Windows in state organizations and “critical information infrastructure” operators. But, to recall historian Stephen Kotkin’s quote, binaries are not an effective way to understand Russia. The state does not control every single tech decision in the country, and in many areas, the state does not have or has not demonstrated high competence on technical issues, such as with building cyber defense systems. Within the space that companies do have to push back or shape initiatives, cybersecurity companies providing services to the state and the security services will be an important voice in how some of these policies are designed and implemented; the Ministry of Digital Development does at least speak with and listen to their perspectives. That some of these companies are adjacent to or squarely within the national security sphere will help their influence in a state increasingly dominated by conspiratorial, paranoid, and security-driven views of technology. Western organizations should be sure to monitor public sources from Russian cybersecurity companies, forums, and conferences to gain these insights.  
    4. Some Russian technology companies are already looking to the international market to expand their profit streams, including in internet and cybersecurity services, or to separate their Russian components entirely. Yandex had been discussing the possibility of splitting the company into two parts since Russia’s full-scale invasion of Ukraine, one of which would operate within Russia and the other internationally. This effort is ongoing but faces many challenges, given the sheer number of Yandex employees and developers alone who appear to be based in Russia223 and the government’s interference with the restructuring due to anti-war comments made by Yandex’s co-founder.224 Nonetheless, negotiations held at the end of 2023 drive this corporate restructuring closer to reality. Some of the leaders in Russia’s cybersecurity sector, meanwhile, remain globally competitive. For instance, the revenue of Positive Technologies, a US-sanctioned firm that supports Russia’s intelligence community, has only been growing internationally in the last two years despite the ongoing war. In July 2023, the company announced that it had shipped forty-six percent more products and services compared to the prior year, to the tune of approximately $47.6 million.225
      1. The State Department and other organizations building and engaging on US global technology policy should not dismiss the notion of Russian cyber firms remaining globally competitive—thinking of companies like Kaspersky or Positive Technologies as industry persona non grata post-February 2022. That would be a mistake. Analysts should watch how companies like Positive Technologies are positioning themselves to compete in overseas markets, ranging from Latin America to the Asia-Pacific, in some cases by explicitly offering themselves as alternatives to Western technology and ways for organizations to decentralize their risk. You might be concerned about Russian tech, the pitch goes, but you certainly do not want to rely entirely on US, Israeli, or Chinese cyber solutions, either; using domestic tech in some areas and theirs in others is a way to minimize exposure to both.226 Many of these companies are primarily commercially motivated but still operate within an increasingly constrained Russian political environment. Their expansion can therefore serve as a means by which Moscow can project influence, gather data, and engage in other activities as well. 
      2. The State Department and the defense and intelligence community should also observe the growth of Russian internet firms like Yandex which may receive skepticism or face restrictions in some parts of the world (e.g., Western Europe) but may offer attractive cloud and other services elsewhere (e.g., Latin America). For Yandex, this is especially the case if its deal goes through to sell the Russian business entity to Russian managers and oil company Lukoil for $5.2 billion.227 The Dutch parent could then run Yandex’s current, non-Russian business operations separately. (Of course, this will further harm human rights in Russia and expand the Kremlin’s domestic internet control as the Russian Yandex falls further under the state’s grip.)228 US analysts and policymakers should track these developments and prepare for this reality, also potentially noting to US companies that they will still be competing with Russian or historically Russian internet and cyber firms in certain parts of the world. 
    5. Russia is becoming more digitally dependent on China. Chinese digital technology has long played a part in Russia’s domestic technology evolution, such as in the failed Skolkovo Innovation Center, but dependence is at newly high levels. Western sanctions, Western businesses exiting the country, IT workers fleeing Russia, and the Putin regime’s even greater paranoia about Western digital technology, among other factors, have increased Russia’s reliance on Chinese chips, software, and other technology. The Russian government is concerned about this dependence—despite what one might assume, there are Russian security analysts worried about espionage and digital threats from Beijing, too. But it has little choice in the face of digital techno-isolationism and serious problems with domestic, digital technology development and procurement. This digital dependence on China has accelerated since February 2022. Russia’s increasing use of Chinese software and especially hardware should change how the US strategically and tactically approaches China, Russia, countries concerned about Beijing and Moscow’s tech activities, and the tech ecosystem in Russia. 
      1. The White House and the State Department should, at the strategic level, evaluate existing policies and plans against Moscow’s growing digital dependence on China—and determine how that dependence could or should shift the US’ approach to countering Beijing’s global technology influence and its efforts to acquire technology from the West. For instance, for countries around the world that are more concerned about Russian government activities than Chinese government activities, this trend highlights how the two issues are entangled. If Chinese technology is facilitating Russia’s technological influence or military and intelligence activities, countries worried about Moscow may become more concerned about Chinese government tech programs and policies. This trend may also change how US diplomats engage with or signal to Russia: Kremlin officials are certainly most worried about espionage, information warfare, and regime security threats from the West, but that doesn’t mean they are fearless about using Chinese technology. And somewhat unlike their Chinese counterparts, who integrate commercial and economic views into their perception of the security of digital technologies, Moscow is much less focused on the economics of digital technologies and much more driven by a conventional security lens. 
      2. The White House, State Department, and Defense Department should note that for all Putin and Chinese leader Xi Jinping may cooperate in some areas,229 elements of the Russian state worry about Chinese tech dependence.230 In the summer of 2022, for instance, an internal Russian Ministry of Digital Development assessment expressed senior officials’ concern about the dominance of Chinese companies like Huawei in Russia and the resulting information security risks.231 Russian officials proposed imposing quotas on Chinese tech imports, shifting production of certain components to Russia, and using Russian subcontractors to limit direct and total dependence on China.232 Even since the 2022 Russian war on Ukraine began, Chinese government-linked threat groups have been publicly tied to espionage campaigns against Russian defense institutions.233 The US may wish to shape its communications and signaling to Moscow with that in mind. 
      3. The US defense and intelligence community, as well as those of US allies and partners, should consider at the tactical level how Russia’s growing digital dependence on China may create new points of vulnerability. This could lead to opportunities for the US and its allies and partners to continue mapping the technological environment in Russia and explore how capabilities could be applied to intelligence and other advantages. 

    Author

    Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; an adjunct professor at Duke University’s Sanford School of Public Policy; and a contributing editor at Lawfare. He writes, researches, consults, and advises on Russia security and technology issues and is sanctioned by the Russian Ministry of Foreign Affairs. 

    Acknowledgements

    The author would like to thank Gavin Wilde, Carolina Vendil Pallin, Trey Herr, Jackie Kerr, Michael van Landingham, Emma Schroeder, Dylan Myles-Primakoff, Iria Puyosa, Konstantinos Komaitis, and Andrew D’Anieri for their comments on earlier drafts of this report—and Nitansha Bansal for critical help in getting the report to final form. 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “About 100,000 IT specialists left Russia in 2022 – digital development minister,” Interfax, December 20, 2022, https://interfax.com/newsroom/top-stories/86316/.
    2    See, e.g., Emil Wannheden, Russia’s Wartime Economy — Neither Boom nor Bust (Stockholm: Swedish Defense Research Agency, October 2023); FOI Memo 8236, https://www.foi.se/rest-api/report/FOI%20Memo%208236;  “Russia Falls Into Recession,” The Moscow Times, November 17, 2022, https://www.themoscowtimes.com/2022/11/16/russia-falls-into-recession-a79398
    3    Stephen Kotkin, “Technology and Governance in Russia: Possibilities,” Hoover Institution, October 3, 2018, https://www.hoover.org/research/technology-and-governance-russia-possibilities
    4    See, e.g., R. Adam Moody, “Reexamining Brain Drain from the Former Soviet Union,” The Nonproliferation Review (Spring/Summer 1996): 92-97, https://www.nonproliferation.org/wp-content/uploads/npr/moody33.pdf; Andrei V. Korobkov and Zhanna A. Zaionchkovskaia, “Russian brain drain: Myths v. reality,” Communist and Post-Communist Studies 45, no. 3-4 (September-December 2012): 327-341, https://www.sciencedirect.com/science/article/abs/pii/S0967067X1200058X; Ina Ganguli, “Scientific Brain Drain and Human Capital Formation After the End of the Soviet Union,” European University Institute, 2013, https://cadmus.eui.eu/bitstream/handle/1814/27883/CARIM-East_RR-2013-26.pdf;jsessionid=AFA7EAF7E62CDE43D0032BA6F92B41F0?sequence=1
    5    Dmitri Alperovitch and Keith Mularski, “Fighting Russian Cybercrime Mobsters: Report from the Trenches,” Black Hat, July 25-30, 2009, https://www.blackhat.com/presentations/bh-usa-09/ALPEROVITCH/BHUSA09-Alperovitch-RussCybercrime-PAPER.pdf, 2.
    6    “History of Yandex: 1997,” Yandex.com, accessed November 16, 2022, https://yandex.com/company/history/1997
    7    “History of Yandex: 1990,” Yandex.com, accessed November 16, 2022, https://yandex.com/company/history/1990
    8    “From the garage to the Googleplex,” Google, accessed August 28, 2023, https://about.google/our-story/
    9    “Mail.Ru Group,” Crunchbase.com, accessed November 16, 2022, https://www.crunchbase.com/organization/mail-ru.
    10    « “Проф-Медиа” приобретает 54.8% Rambler Media », Rambler, October 31, 2006, https://web.archive.org/web/20061213022333/http://www.rambler.ru/db/press/msg.html?mid=9017070&s=260000269; « Сбербанк стал единственным владельцем Rambler », RBC, October 29, 2020, https://www.rbc.ru/business/29/10/2020/5f9af8339a79470b67836406
    11    Michael Lelyveld, “Russia: U.S. Takes Steps To Allow Super-Computer Sales,” Radio Free Europe/Radio Liberty, July 9, 1999, https://www.rferl.org/a/1091683.html.
    12    “Chip distributors in Russia,” chipinfo.ru, June 30, 2002, http://www.chipinfo.ru/chipdir/dist/ru.htm.
    13    Motorola had more trouble than the others, although due to patent disputes in Russia, not government opposition to Western devices. See: Guy Chazan, “Russia Puts Motorola on Hold,” The Wall Street Journal, June 8, 2006, https://www.wsj.com/articles/SB114973190819574520
    14    Tom Adelstein, “Linux in Government: Outside the US, People Get It,” Linux Journal, July 18, 2005, https://www.linuxjournal.com/article/8449.
    15    “Windows XP Starter Edition Pilot Expands to Russia, India,” Microsoft, September 27, 2004, https://news.microsoft.com/2004/09/27/windows-xp-starter-edition-pilot-expands-to-russia-india/.
    16    “Russian retailers to start Apple iPhone sales Oct 3,” Reuters, September 26, 2008, https://www.reuters.com/article/us-iphone-russia-retailersustech/russian-retailers-to-start-apple-iphone-sales-oct-3-idUSTRE48P3BX20080926.
    17    “Half of Russian Internet users connect at home,” Sputnik International, June 23, 2005, https://sputnikglobe.com/20050623/40750068.html
    18    See, e.g., “Most popular Russian sites – Yandex, Rambler, Mail.ru, Google,” ZDNet, June 17, 2005, https://www.zdnet.com/article/most-popular-russian-sites-yandex-rambler-mail-ru-google/
    19    Elizabeth Williamson, “Software Piracy Rates in Eastern Europe Are Twice That of West, Report Says,” The Wall Street Journal, June 25, 2001, https://www.wsj.com/articles/SB993403332336788539. See also: “A pirates’ bazaar in Moscow offers treasured bootleg media,” Baltimore Sun, December 11, 2002, https://www.baltimoresun.com/2002/12/11/a-pirates-bazaar-in-moscow-offers-treasured-bootleg-media/; Connie Neigel, “Piracy in Russia and China: A Different U.S. Reaction,” Law and Contemporary Problems 63, no. 4 (2000): 179-199, https://www.jstor.org/stable/1192397?seq=7; Susan Tiefenbrun, “Piracy of Intellectual Property in China and the Former Soviet Union and its Effects upon International Trade: A Comparison,” Buffalo Law Review 46, no. 1 (1998), https://digitalcommons.law.buffalo.edu/cgi/viewcontent.cgi?article=1460&context=buffalolawreview.  
    20    See, e.g., John Markoff, “Russian Computer Scientists Hired by American Company,” The New York Times, March 3, 1992, https://www.nytimes.com/1992/03/03/business/russian-computer-scientists-hired-by-american-company.html; Maria Trombly, “Outsourcers Begin to Tap Russian Talent,” Computer World, April 30, 2001, https://www.computerworld.com/article/2592184/outsourcers-begin-to-tap-russian-talent.html.  
    21    “Soros Plans to Finance Project to Develop Internet in Russia,” The New York Times, January 15, 1996, https://www.nytimes.com/1996/01/15/business/soros-plans-to-finance-project-to-develop-internet-in-russia.html; Lee Hockstader, “U.S. Financier Gives Russia $100 Million for Internet Link,” The Washington Post, March 19, 1996, https://www.washingtonpost.com/archive/politics/1996/03/16/us-financier-gives-russia-100-million-for-internet-link/0a0ce72b-d1bc-43ca-8d77-edf5f8388eb7/
    22    “Partnership with Russia’s Largest School of Journalism Announced,” University of Missouri School of Journalism, February 10, 2003, https://journalism.missouri.edu/2003/02/partnership-with-russias-largest-school-of-journalism-announced/
    23    “Cisco in Europe,” Cisco Systems, 2004, https://www.cisco.com/c/dam/global/fi_fi/assets/docs/solutions_europe.pdf
    24    “Vodafone, Russia’s MTS sign services exchange deal,” Reuters, October 30, 2008, https://www.reuters.com/article/vodafone-mts/vodafone-russias-mts-sign-services-exchange-deal-idINLT48607920081030
    25    F. Joseph Dresen, “The Growth of Russia’s IT Outsourcing Industry: The Beginning of Russian Economic Diversification?” Wilson Center, April 17, 2006, https://www.wilsoncenter.org/publication/the-growth-russias-it-outsourcing-industry-the-beginning-russian-economic
    26    See, e.g., Jeff Gerth, “I.B.M. Unit Admits Illegal Sale of Computers to Russian Nuclear Lab,” The New York Times, August 1, 1998, https://archive.nytimes.com/www.nytimes.com/library/tech/98/08/biztech/articles/01ibm.html; Dave Gradijan, “IBM’s Moscow Office Raided in Fraud Investigation,” CSO Online, December 8, 2006, https://www.csoonline.com/article/518844/data-protection-ibm-rsquo-s-moscow-office-raided-in-fraud-investigation.html
    27    “FAPSI Operations,” Federation of the American Scientists, accessed November 21, 2020, https://fas.org/irp/world/russia/fapsi/ops.htm.
    28    See, e.g., Gordon Bennett, The Federal Security Service of the Russian Federation (London: Conflict Studies Research Center, March 2000), https://www.files.ethz.ch/isn/96631/00_Mar_3.pdf.
    29    Andrei Soldatov and Irina Borogan, “In Ex-Soviet States, Russian Spy Tech Still Watches You,” Wired, December 21, 2012, https://www.wired.com/2012/12/russias-hand/
    30    Amy Knight, Russia’s New Security Services: An Assessment (Washington, D.C.: Library of Congress Federal Research Division, October 1994), https://apps.dtic.mil/sti/tr/pdf/ADA299951.pdf, 38.
    31    Ibid, 37.
    32    Ibid, 5.
    33    Julian Cooper, “The Internet as an Agent of Socio-Economic Modernization of the Russian Federation,” in Markku Kangaspuro and Jeremy Smith, eds., Modernization in Russia Since 1900 (Helsinki: Finnish Literature Society, 2006), 294.
    34    .Jen Tracy, “New KGB Takes Internet by SORM,” Mother Jones, February 4, 2000, https://www.motherjones.com/politics/2000/02/new-kgb-takes-internet-sorm/.
    35    Andrei Soldatov and Irina Borogan, The New Nobility: The Restoration of Russia’s Security State and the Enduring Legacy of the KGB (New York: Public Affairs, 2010), 232; Roland Heickerö, Emerging Cyber Threats and Russian Views on Information Warfare and Information Operations (Stockholm: Swedish Defense Research Agency, March 2010), FOI-R—2970—SE, https://foi.se/rest-api/report/FOI-R–2970–SE, 27-28.
    36    Federal Service for Technical and Export Control, Government of Russia, accessed January 4, 2024, http://government.ru/en/department/96/
    37    See, e.g., Information Security Doctrine of the Russian Federation, 2000, https://www.itu.int/en/ITU-D/Cybersecurity/Documents/National_Strategies_Repository/Russia_2000.pdf; Gavin Wilde and Justin Sherman, “No Water’s Edge: Russia’s Information War and Regime Security,” Carnegie Endowment for International Peace, January 2023, https://carnegieendowment.org/2023/01/04/no-water-s-edge-russia-s-information-war-and-regime-security-pub-88644.
    38    Thanks to Carolina Vendil Pallin for additional discussion of this point.
    39    Jaclyn A. Kerr, “Runet’s Critical Juncture: The Ukraine War and the Battle for the Soul of the Web,” SAIS Review of International Affairs 42, no. 2 (Summer/Fall 2022): 63-84, https://muse.jhu.edu/article/892250.
    40     For a more detailed treatment of this evolving Kremlin thinking, see: Andrei Soldatov and Irina Borogan, The Red Web: The Kremlin’s Wars on the Internet (New York: Public Affairs, 2015); Justin Sherman, Reassessing RuNet: Russian Internet Isolation and Implications for Russian Cyber Behavior, Atlantic Council, July 2021, 3-4, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/reassessing-runet-russian-internet-isolation-and-implications-for-russian-cyber-behavior/. For a more detailed analysis of the role of the Euromaidan in Moscow’s internet threat perception and foreign election interference, see: Gavin Wilde and Justin Sherman, Targeting Ukraine Through Washington: Russian Election Interference, Ukraine, and the 2024 US Election, Atlantic Council, March 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/targeting-ukraine-through-washington/.
    41    Thanks to Carolina Vendil Pallin for additional discussion of this point.
    42    Martin Kragh, Erik Andermo, and Liliia Makashova, “Conspiracy theories in Russian security thinking,” Journal of Strategic Studies (January 2020), https://www.tandfonline.com/doi/full/10.1080/01402390.2020.1717954; Yulia Nikitina, “The ‘Color Revolutions’ and ‘Arab Spring’ in Russian Official Discourse,” Connections 14, no. 1 (Winter 2014): 87-104, https://connections-qj.org/article/color-revolutions-and-arab-spring-russian-official-discourse
    43    Gavin Wilde, “In Russia’s Information War, a New Field of Study Gains Traction,” New Lines Magazine, September 14, 2022, https://newlinesmag.com/argument/in-russias-information-war-a-new-field-of-study-gains-traction/.
    44    For a synopsis of some of the security service restructuring in the 2000s, see: Soldatov and Borogan, The New Nobility, 19-22.
    45    For a thorough discussion of “color revolution” fears among Russian security experts, see: Graeme P. Herd, “Russia and the ‘Orange Revolution’: Response, Rhetoric, Reality?” Connections 4, no. 2 (Summer 2005): 15-28, https://connections-qj.org/article/russia-and-orange-revolution-response-rhetoric-reality
    46    See, e.g., Elena Zinovieva and Bai Yajie, “Digital Sovereignty in Russia and China,” Russian International Affairs Council, June 14, 2023, https://russiancouncil.ru/en/analytics-and-comments/analytics/digital-sovereignty-in-russia-and-china/.
    47    See, e.g., Jackie Kerr, The Russian Model of Internet Control and Its Significance (Livermore: Lawrence Livermore National Lab, December 2018), https://www.osti.gov/biblio/1491981.
    48    Information Security Doctrine of the Russian Federation, 2000, I.1.
    49    “Presidential Address to the Federal Assembly,” The Kremlin, December 4, 2014, http://en.kremlin.ru/events/president/news/47173.
    50    Ibid.
    51    Military Doctrine of the Russian Federation, 2014. II. 13, https://web.archive.org/web/20180501051233id_/https://www.offiziere.ch/wp-content/uploads-001/2015/08/Russia-s-2014-Military-Doctrine.pdf.
    52    National Security Strategy of the Russian Federation, 2015. II. 12. and II. 21, https://www.russiamatters.org/node/21421.
    53    Denis Volkov, Stepan Goncharov, and Maria Snegovaya, “Russian Youth and Civic Engagement,” Center for European Policy Analysis, September 29, 2020, https://cepa.org/comprehensive-reports/russian-youth-and-civic-engagement/.
    54    Russian government policies and actions in response to Western sanctions of course went well beyond technology. See, e.g., Neil MacFarquhar and Alison Smale, “Russia Responds to Western Sanctions With Import Bans of Its Own,” The New York Times, August 7, 2014, https://www.nytimes.com/2014/08/08/world/europe/russia-sanctions.html.
    55    Clifford G. Gaddy and Barry W. Ickes, “Ukraine: A Prize Neither Russia Nor the West Can Afford to Win,” Brookings Institution, May 22, 2014, https://www.brookings.edu/articles/ukraine-a-prize-neither-russia-nor-the-west-can-afford-to-win/. 
    56    See, e.g., “Treasury Sanctions Russian Officials, Members of the Russian Leadership’s Inner Circle, and an Entity for Involvement in the Situation in Ukraine,” US Department of the Treasury, March 20, 2014, https://home.treasury.gov/news/press-releases/jl23331; “FACT SHEET: Ukraine-Related Sanctions,” The White House, March 17, 2014, https://obamawhitehouse.archives.gov/the-press-office/2014/03/17/fact-sheet-ukraine-related-sanctions; “Ukraine and Russia Sanctions,” 2009-2017, US Department of State, accessed January 20, 2024, https://2009-2017.state.gov/e/eb/tfs/spi/ukrainerussia/.
    57    Sam Skove, “U.S. Ceases Issuing Export Licenses on Some Goods Destined for Russia,” The Moscow Times, March 27, 2014, https://www.themoscowtimes.com/2014/03/27/us-ceases-issuing-export-licenses-on-some-goods-destined-for-russia-a33399.
    58    Quirin Schiermeier, “High hopes for Russia’s nanotech firms: but an ambitious government initiative has been slow to incubate a domestic high-tech industry,” Nature 461, no. 7267 (2009): 1036-1039.
    59    Fredrik Westerlund, Russian Nanotechnology R&D: Thinking Big About Small Scale Science (Stockholm: Swedish Defense Research Agency, June 2011), FOI-R—3197-SE. 37, 47-52, 140-142.
    60    Anatoly Chubais, “RUSNANO: Fostering Innovations in Russia through Nanotechnology,” USRBC’s 18th Annual Meeting, San Francisco, October 20-21, 2010, 14, https://www.rusnano.com/upload/oldnews/Document/28506_3.pdf.
    61    Alexander Etkind, Russia Against Modernity (Hoboken: Wiley, 2023), 34.
    62    « Против бывшего партнера «Роснано» возбудили уголовное дело о хищении из компании $50 млн », Vedomosti, October 24, 2022, https://www.vedomosti.ru/economics/articles/2022/10/24/947149-protiv-bivshego-partnera-rosnano-vozbudili
    63    “Presidential Address to the Federal Assembly,” The Kremlin, December 4, 2014, http://www.en.kremlin.ru/events/president/news/47173
    64    Dzhabrailov Shamkhal, “Russian Digital Economy: Artificial Intelligence R&D Support Strategy,” presentation to the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP), 2018, 2, https://www.unescap.org/sites/default/files/Session%203_%20Mr.%20Dzhabrailov%20Shamkhal_Russia.pdf
    65    « Реестр Проектов НТИ », NTI 2035, July 28, 2023, https://nti2035.ru/documents/docs/projects/Реестр%20проектов%20НТИ_28.07.2023.pdf
    66    “Autonet,” AutoNet, accessed September 25, 2023, https://autonet-nti.ru/en/; “NTI Autonet,” AutoNet, accessed September 25, 2023, https://autonet-nti.ru/en/autonet/
    67    “NTI National Technology Initiative,” TA Advisor, December 21, 2021, https://tadviser.com/index.php/Company:National_Technology_Initiative_(NTI)
    68    See, e.g., “A timeline of EU and US sanctions and Russia countersanctions,” Cambridge University Press and Assessment, accessed January 20, 2024, https://static.cambridge.org/content/id/urn:cambridge.org:id:article:S1049096519001781/resource/name/S1049096519001781sup001.pdf
    69    « Об установлении запрета на допуск программного обеспечения, происходящего из иностранных государств, для целей осуществления закупок для обеспечения государственныхи муниципальных нужд», Digital Russia, November 16, 2015, https://d-russia.ru/wp-content/uploads/2015/11/ac872y0wqioFnrRUeTnpGjEavWCfgEAo.pdf.
    70    « Реестр российского ПО – инструкция для госзаказчиков », Digital Russia, February 26, 2016, https://d-russia.ru/reestr-rossijskogo-po-instrukciya-dlya-goszakazchikov.html; « Как попасть в реестр российского ПО: пошаговая инструкция », The Skolkovo Foundation, September 12, 2016, https://sk.ru/news/kak-popast-v-reestr-rossiyskogo-po-poshagovaya-instrukciya/.
    71    Gijs Hillenius, “Russia scrapped open source plans to focus on self-reliance,” Interoperable Europe, August 15, 2019, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/unified-software-register
    72    Andrei Zdanevich, “Why do Russian officials still prefer to use Microsoft?” Russia Beyond, August 9, 2016, https://www.rbth.com/science_and_tech/2016/08/09/why-do-russian-officials-still-prefer-to-use-microsoft_619419.
    73    “Country case: Towards e-procurement in the Russia [sic] Federation,” OECD, October 7, 2016, https://search.oecd.org/governance/procurement/toolbox/search/towards-e-procurement-russian-federation.pdf. 
    74    “Presidential Address to the Federal Assembly,” The Kremlin, December 1, 2016, http://en.kremlin.ru/events/president/news/53379
    75    “Russia to issue 30mn national payment cards in 2016 – CBR head,” RT, August 10, 2015, https://www.rt.com/business/312073-russia-national-payment-system/; “Russian Federation: Financial Infrastructure Technical Note: July 2016,” The World Bank Group, July 2016, https://documents1.worldbank.org/curated/en/659541472539905263/pdf/108087-FSA-P157494-PUBLIC-Russia-FSAP-Update-II-TN-on-Financial-Infrastructure.pdf
    76    John E. Dunn, “Kaspersky Lab CEO Backs Out of IPO Plans,” CSO Online, February 7, 2012, https://www.csoonline.com/article/534676/data-protection-kaspersky-lab-ceo-backs-out-of-ipo-plans.html; “Kaspersky to buy out U.S. investors, rules out IPO,” Reuters, February 6, 2012, https://www.reuters.com/article/us-kaspersky/kaspersky-to-buy-out-u-s-investors-rules-out-ipo-idUSTRE81511Z20120206/
    77    See, e.g., Noah Shachtman, “Russia’s Top Cyber Sleuth Foils US Spies, Helps Kremlin Pals,” Wired, July 23, 2012, https://www.wired.com/2012/07/ff-kaspersky/; Eugene Kaspersky, “What Wired Is Not Telling You – a Response to Noah Schathman’s Article in Wired Magazine,” Kaspersky, July 25, 2012, https://eugene.kaspersky.com/2012/07/25/what-wired-is-not-telling-you-a-response-to-noah-shachtmans-article-in-wired-magazine/; Carol Matlack, Michael Riley, and Jordan Robertson, “The Company Securing Your Internet Has Close Ties to Russian Spies,” Bloomberg, March 19, 2015, https://www.bloomberg.com/news/articles/2015-03-19/cybersecurity-kaspersky-has-close-ties-to-russian-spies; Corey Flintoff, “Kaspersky Lab: Based In Russia, Doing Cybersecurity in the West,” NPR, August 10, 2015, https://www.npr.org/sections/alltechconsidered/2015/08/10/431247980/kaspersky-lab-a-cybersecurity-leader-with-ties-to-russian-govt.
    78    Federal Acquisition Regulation; Use of Products and Services of Kaspersky Lab, 83 FR 28141, June 15, 2018, https://www.federalregister.gov/documents/2018/06/15/2018-12847/federal-acquisition-regulation-use-of-products-and-services-of-kaspersky-lab. See also the final rule: Federal Acquisition Regulation: Use of Products and Services of Kaspersky Lab, 84 FR 47861, September 10, 2019, https://www.federalregister.gov/documents/2019/09/10/2019-19360/federal-acquisition-regulation-use-of-products-and-services-of-kaspersky-lab.
    79    Shane Harris, Gordon Lubold, and Paul Sonne, “How Kaspersky’s Software Fell Under Suspicion of Spying on America,” The Wall Street Journal, January 5, 2018, https://www.wsj.com/articles/how-kasperskys-software-fell-under-suspicion-of-spying-on-america-1515168888.
    80    Commerce Department Prohibits Russian Kaspersky Software for U.S. Customers,” US Department of Commerce, June 20, 2024, https://www.bis.gov/press-release/commerce-department-prohibits-russian-kaspersky-software-us-customers.
    81    Justin Sherman, “Russia’s Open-Source Code and Private-Sector Cybersecurity Ecosystem,” NSI, February 22, 2023, https://nsiteam.com/russias-open-source-code-and-private-sector-cybersecurity-ecosystem/; “Treasury Sanctions Russia with Sweeping New Sanctions Authority,” US Department of the Treasury, April 15, 2021, https://home.treasury.gov/news/press-releases/jy0127;, “Official Journal of the European Union,” Volume 66,  European Union, June 23, 2023, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L:2023:159I:FULL.
    82    “Russia to launch ‘independent internet’ for BRICS nations – report,” RT, November 28, 2017, https://www.rt.com/russia/411156-russia-to-launch-independent-internet/.
    83    Sergey Kozlovsky, “Russia Just Doubled Its Internet Surveillance Program,” Global Voices, August 15, 2014, https://globalvoices.org/2014/08/15/russia-sorm-medvedev-social-networks-internet/
    84    Ewen MacAskill, “Putin calls internet a ‘CIA project’ renewing fears of web breakup,” The Guardian, April 24, 2014, https://www.theguardian.com/world/2014/apr/24/vladimir-putin-web-breakup-internet-cia.
    85    “Russia to launch ‘independent internet’ for BRICS nations – report.” 
    86    “Russia censors media by blocking websites and popular blog,” The Guardian, March 14, 2014, https://www.theguardian.com/world/2014/mar/14/russia-bans-alexei-navalny-blog-opposition-news-websites
    87    Information Security Doctrine of the Russian Federation, 2016, http://www.scrf.gov.ru/security/information/DIB_engl/.
    88    National Security Strategy of the Russian Federation, 2021;  “What You Need to Know About Russia’s 2021 National Security Strategy,” Meduza, July 5, 2021, https://meduza.io/en/feature/2021/07/05/what-you-need-to-know-about-russia-s-2021-national-security-strategy.
    89    National Security Strategy of the Russian Federation, 2021;  “What You Need to Know about Russia’s 2021 National Security Strategy — Meduza.”
    90    Joshua Zitser, “Putin lives in an ‘information vacuum’ and never uses a cellphone or the internet, a Russian intelligence officer who defected says,” Business Insider, April 4, 2023, https://www.businessinsider.com/vladimir-putin-never-uses-cellphone-internet-russian-defector-says-2023-4
    91    Gross domestic spending on R&D,” OECD, accessed January 3, 2024, https://www.oecd.org/en/data/indicators/gross-domestic-spending-on-r&d.html
    92    “Russia: ‘Big Brother’ Law Harms Security, Rights,” Human Rights Watch, July 12, 2016, https://www.hrw.org/news/2016/07/12/russia-big-brother-law-harms-security-rights
    93    “The Yarovaya Law: One Year After,” Digital Report Analytica, April 2017, 5, https://analytica.digital.report/wp-content/uploads/2017/07/The-Yarovaya-Law.pdf.
    94    Maria Kolomychenko and Polina Nikolskaya, “Exclusive: Russia’s telecoms security push hits snag – it needs foreign help,” Reuters, July 5, 2018, https://www.reuters.com/article/us-russia-technology-dataprotection-excl/exclusive-russias-telecoms-security-push-hits-snag-it-needs-foreign-help-idUSKBN1JV12Y.
    95    Постановление Правительства РФ от 28 мая 2019 г. N 673, Garant, May 28, 2019, https://base.garant.ru/72255540/.
    96    This is reflected in many areas of Russia’s domestic technology push, where there is widespread noncompliance with existing laws but the state continues to pass new ones anyway, far ahead of the tech reality and the compliance curve. See, e.g., Jon Porter, “Russia passes law forcing manufacturers to install Russian-made software,” The Verge, December 3, 2019, https://www.theverge.com/2019/12/3/20977459/russian-law-pre-installed-domestic-software-tvs-smartphones-laptops.
    97    « Что американцу монополия, то русскому… », Kommersant, October 24, 2020, https://www.kommersant.ru/doc/4547165.
    98    Oleg Yegorov, “Facebook and Google’s Russian rivals: Why are they winning?” Russia Beyond, February 12, 2019, https://www.rbth.com/science-and-tech/329970-russian-facebook-vk-russian-google-yandex.
    99    See, e.g., Maria Kiselyova, “Usmanov tightens hold on Russian social net VKontakte as founder sells stake,” Reuters, January 24, 2014, https://www.reuters.com/article/us-russia-vkontakte/usmanov-tightens-hold-on-russian-social-net-vkontakte-as-founder-sells-stake-idUSBREA0N1MA20140124; Kevin Rothrock, “Pavel Durov, Founder of Russia’s #1 Social Network, Is Not Going to Prison (For Now),” Global Voices, June 8, 2013, https://globalvoices.org/2013/06/08/pavel-durov-founder-of-russias-1-social-network-is-not-going-to-prison-for-now/; Kevin Rothrock, “Pavel Durov, Russia’s Zuckerberg, Fights for Control of His Creation,” Global Voices, April 30, 2013, https://globalvoices.org/2013/04/30/pavel-durov-russias-zuckerberg-fights-for-control-of-his-creation/.
    100    Mark Scott, “Mail.ru Takes Full Ownership of VKontakte, Russia’s Largest Social Network,” The New York Times, September 16, 2014, https://archive.nytimes.com/dealbook.nytimes.com/2014/09/16/mail-ru-takes-full-ownership-of-vkontakte-russias-largest-social-network/.
    101    Ingrid Lunden, “Rutube, the YouTube of Russia, Links up with Facebok, Gets YouTube, Vimeo Vids in Aggregation Pivot,” TechCrunch, June 29, 2012, https://techcrunch.com/2012/06/29/rutube-the-youtube-of-russia-links-up-with-facebook-gets-youtube-vimeo-videos/; « Россияне будут продолжать смотреть телевизор », Kommersant, December 23, 2020, https://www.kommersant.ru/doc/4625739.
    102    « «Газпром-медиа» строит видеовертикаль », Kommersant, December 23, 2020, https://www.kommersant.ru/doc/4626686; “Andrey Konyaev,” European Conference of Science Journalism, accessed September 18, 2023, http://www.ecsj2020.eu/speakers/andrey-konyaev/.
    103    « Битва Титанов », Kommersant, December 23, 2021, https://www.kommersant.ru/doc/5142905.
    104    Philipp Dietrich, “The Key Player in Russia’s Cybersphere,” German Council on Foreign Relations, September 2023), 10-11, https://dgap.org/system/files/article_pdfs/DGAP%20Analysis%20No.%204_September_20_2023_20pp.pdf.
    105    Liam Tung, “The inside story of Russia’s ‘own mobile OS’: It’s not what you think,” ZDNet, May 20, 2015, https://www.zdnet.com/article/the-inside-story-of-russias-own-mobile-os-its-not-what-you-think/
    106    Ibid.
    107    “Sailfish OS licensing model,” Sailfish, accessed January 4, 2024, https://sailfishos.org/cases/; Natasha Lomas, “Finland’s Jolla, maker of Sailfish OS, is trying to cut ties with Russia,” TechCrunch, March 1, 2022, https://techcrunch.com/2022/03/01/jolla-cut-ties-russia/.
    108    Darnya Antoniuk, “Russia wants 2 million phones with home-grown Aurora OS for use by officials,” The Record, June 2, 2023, https://therecord.media/russia-wants-phones-with-aurora-os
    109    “About,” Astra Linux, accessed September 28, 2023, https://astralinux.ru/en/about; “Astra Linux,” Wikipedia, April 17, 2023, https://en.wikipedia.org/wiki/Astra_Linux; “Astra Linux,” Debian, accessed September 28, 2023, https://wiki.debian.org/Derivatives/Census/AstraLinux
    110    The secure version’s source code is not publicly available online, even though Astra Linux is based on an open-source operating system. 
    111    « Свободное программное обеспечение в госорганах », Ministry of Digital Development, Communications and Mass Media of the Russian Federation, September 6, 2018, https://digital.gov.ru/ru/activity/directions/106/.
    112    « Национальный репозиторий СПО предлагают наполнить софтом, созданным по госзаказу », Ministry of Digital Development, Communications and Mass Media of the Russian Federation, September 15, 2021, https://digital.gov.ru/ru/events/41270/
    113    Marc Bennetts, “Vladimir Putin ‘still uses obsolete Windows XP’ despite hacking risk,” The Guardian, December 17, 2019, https://www.theguardian.com/world/2019/dec/17/vladimir-putin-still-uses-obsolete-windows-xp-despite-hacking-risk
    114    Russian Federal Law No. 244-FZ, September 28, 2010, https://www.wipo.int/wipolex/en/legislation/details/17945.
    115    Elena Pakhomova, “City of the future: the trials and tribulations of Russia’s Silicon Valley,” New East Digital Archive, July 8, 2013, https://www.new-east-archive.org/articles/show/1177/future-city-trials-tribulations-russia-silicon-valley-skolkovo.
    116    Andrew Clark, “Dmitry Medvedev picks Silicon Valley’s brains,” The Guardian, June 23, 2010, https://www.theguardian.com/business/2010/jun/23/dmitry-medvedev-silicon-valley-visit
    117    Alec Luhn, “Not Just Oil and Oligarchs,” Slate, December 9, 2013, https://slate.com/technology/2013/12/russias-innovation-city-skolkovo-plagued-by-doubts-but-it-continues-to-grow.html
    118    “Investigators uncover multi-million embezzlement in Skolkovo high-tech hub,” TASS, February 13, 2013, https://tass.com/russianpress/689632; “Former Executive at Russian Innovations Hub Skolkovo Arrested in Absentia,” The Moscow Times, July 27, 2015, https://www.themoscowtimes.com/2015/07/27/former-executive-at-russian-innovations-hub-skolkovo-arrested-in-absentia-a48554; Luhn, “Not Just Oil and Oligarchs.”
    119    Gavin Wilde and Justin Sherman, “Putin’s internet plan: Dependency with a veneer of sovereignty,” Brookings Institution, May 11, 2022, https://www.brookings.edu/articles/putins-internet-plan-dependency-with-a-veneer-of-sovereignty/
    120    Mark Rice-Oxley, “Inside Skolkovo, Moscow’s self-styled Silicon Valley,” The Guardian, June 12, 2015, https://www.theguardian.com/cities/2015/jun/12/inside-skolkovo-moscows-self-styled-silicon-valley.
    121    Ibid.
    122    Phillip Martin, “MIT abandons Russian high-tech campus partnership in light of Ukraine invasion,” WGBH, February 25, 2022, https://www.wgbh.org/news/local/2022-02-25/mit-abandons-russian-high-tech-campus-partnership-in-light-of-ukraine-invasion; Rebecca Fannin, “The Silicon Valley fallout from waging economic war against Russia,” CNBC, March 17, 2022, https://www.cnbc.com/2022/03/17/the-silicon-valley-fallout-from-waging-economic-war-against-russia.html.  
    123    “Russian Agent and 10 Other Members of Procurement Network for Russian Military and Intelligence Operating in the U.S. and Russia Indicated in New York,” US Federal Bureau of Investigation , October 3, 2012, https://archives.fbi.gov/archives/houston/press-releases/2012/russian-agent-and-10-other-members-of-procurement-network-for-russian-military-and-intelligence-operating-in-the-u.s.-and-russia-indicted-in-new-york
    124    « Суровый российский сервер », Kommersant, December 17, 2021, https://www.kommersant.ru/doc/5131374?from=main.
    125    Ibid.
    126    « Центральный процессор «Эльбрус-8С» (ТВГИ.431281.025) », MCST, accessed August 29, 2023, http://www.mcst.ru/elbrus-8c.
    127    Anton Shilov, “Russian-Made Elbrus CPUs Fail Trials, ‘A Completely Unacceptable Platform,” Tom’s Hardware, January 2, 2022, https://www.tomshardware.com/news/russias-biggest-bank-tests-elbrus-cpu-finds-it-unacceptable
    128    Ibid.
    129    Andrew Roth, “Moscow court bans Telegram messaging app,” The Guardian, April 13, 2018, https://www.theguardian.com/world/2018/apr/13/moscow-court-bans-telegram-messaging-app.
    130    “Russia to block Telegram app over encryption,” BBC, April 13, 2018, https://www.bbc.com/news/technology-43752337
    131    Matt Burgess, “This is why Russia’s attempts to block Telegram have failed,” Wired, April 28, 2018, https://www.wired.co.uk/article/telegram-in-russia-blocked-web-app-ban-facebook-twitter-google.
    132    “Kremlin Spokesman Still Uses Telegram Despite Ban,” The Moscow Times, April 26, 2018, https://www.themoscowtimes.com/2018/04/26/kremlin-spokesman-still-uses-telegram-despite-ban-a61278; « Дворкович заявил, что у него работает Telegram, несмотря на блокировку », TASS, April 27, 2018, https://tass.ru/obschestvo/5160494. Cited in: Burgess, “This is why Russia’s attempts to block Telegram have failed.”
    133    Roth, “Moscow court bans Telegram messaging app.”
    134    Justin Sherman, “What’s behind Russia’s decision to ditch its ban on Telegram?” Atlantic Council, June 26, 2020, https://www.atlanticcouncil.org/blogs/new-atlanticist/whats-behind-russias-decision-to-ditch-its-ban-on-telegram/
    135    Matt Tait, “Russia is spying on Telegram chats in occupied Ukrainian regions. Here’s how,” Pwn All the Things, December 2, 2022, https://www.pwnallthethings.com/p/russia-is-spying-on-telegram-chats.
    136    See, e.g., Diwen Xue et al., TSPU: Russia’s Decentralized Censorship System (Ann Arbor: Censored Planet, November 2022), https://censoredplanet.org/tspu.
    137    Justin Sherman, Reassessing RuNet: Russian Internet Isolation and Implications for Russian Cyber Behavior, Atlantic Council, July 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/reassessing-runet-russian-internet-isolation-and-implications-for-russian-cyber-behavior/
    138    Max Seddon and Madhumita Murgia, “Apple and Google drop Navalny app after Kremlin piles on pressure,” Financial Times, September 17, 2021, https://www.ft.com/content/faaada81-73d6-428c-8d74-88d273adbad3; Greg Miller and Joseph Menn, “Putin’s prewar moves against U.S. tech giants laid groundwork for crackdown on free expression,” The Washington Post, March 12, 2022, https://www.washingtonpost.com/world/2022/03/12/russia-putin-google-apple-navalny/.  
    139    Justin Sherman, “Huawei’s push in Russia exploits Kremlin fears of Western technology,” Atlantic Council, November 18, 2020, https://www.atlanticcouncil.org/blogs/new-atlanticist/huaweis-push-in-russia-exploits-kremlin-fears-of-western-technology/.
    140    Danil Bochkov, “China’s Bid to Conquer Russia’s 5G Market Should Worry the Kremlin,” The Diplomat, October 14, 2020, https://thediplomat.com/2020/10/chinas-bid-to-conquer-russias-5g-market-should-worry-the-kremlin/
    141    Russian ISPs have had issues going back years with the state’s insistence that they not only install black boxes but that they pay for the equipment, its installation, and its maintenance. See, e.g., Andrei Soldatov and Irina Borogan, “Inside the Red Web: Russia’s back door onto the internet – extract,” The Guardian, September 8, 2015, https://www.theguardian.com/world/2015/sep/08/red-web-book-russia-internet.
    142    Tatiana Romanova, “The Impact of Sanctions on Russia’s Domestic and Foreign Policy,” Chatham House, March 2015, 3, https://www.chathamhouse.org/sites/default/files/field/field_document/2015-03-24%20-%20The%20Impact%20of%20Sanctions%20-%20Event%20SummaryLP%20edited%20-JAKE.pdf.
    143    Carl Schreck, “FBI Wary of Possible Russian Spies Lurking in U.S. Tech Sector,” Radio Free Europe/Radio Liberty, May 17, 2014, https://www.rferl.org/a/fbi-wary-of-possible-russian-spies-in-lurking-in-us-tech-sector/25388490.html
    144    See, e.g., Cade Metz and Adam Satariano, “Russian Tech Industry Faces ‘Brain Drain’ as Workers Flee,” The New York Times, April 13, 2022, https://www.nytimes.com/2022/04/13/technology/russia-tech-workers.html.
    145    « ИТ-специалисты десятками тысяч уезжают из России », C News, March 22, 2022, https://www.cnews.ru/news/top/2022-03-22_poslableniya_ne_pomogayut
    146    “About 100,000 IT specialists left Russia in 2022 – digital development minister,” Interfax, December 20, 2022, https://interfax.com/newsroom/top-stories/86316/.
    147    Masha Borak, “How Russia killed its tech industry,” MIT Technology Review, April 4, 2023, https://www.technologyreview.com/2023/04/04/1070352/ukraine-war-russia-tech-industry-yandex-skolkovo/.
    148    Johannes Wachs, “Digital traces of brain drain: developers during the Russian invasion of Ukraine,” EPJ Data Science 12, no. 14 (2023), https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-023-00389-3.  
    149    Katie Canales, “The ex-news director of Russia’s largest search engine urged his former colleagues to quit, accusing the company of censoring Russia’s invasion into Ukraine,” Business Insider, March 1, 2022, https://www.businessinsider.com/yandex-russia-former-news-director-urges-colleagues-quit-ukraine-invastion-2022-3
    150    “Russia announces new tax support measures for IT companies,” CMS Law-Now, August 3, 2022, https://cms-lawnow.com/en/ealerts/2022/03/russia-announces-new-tax-support-measures-for-it-companies.
    151    “Russia announces exemptions from Ukraine war mobilization,” Al Jazeera, September 23, 2022, https://www.aljazeera.com/news/2022/9/23/russia-excludes-some-professionals-from-mobilisation; “Ukrainians Express Fear and Defiance as Staged Voting Begins,” The New York Times, September 23, 2022, https://www.nytimes.com/live/2022/09/23/world/russia-ukraine-putin-news#russia-says-it-will-exempt-some-white-collar-workers-from-call-up-after-businesses-warn-of-repercussions.
    152    « Авиация настраивает систему бронирования », Kommersant, September 23, 2022, https://www.kommersant.ru/doc/5572535; « IT-компании попросили Минцифры предоставить айтишникам отсрочку от мобилизации », Forbes Russia, September 22, 2022, https://www.forbes.ru/tekhnologii/477855-it-kompanii-poprosili-mincifry-predostavit-ajtisnikam-otsrocku-ot-mobilizacii
    153    “Russia Turns to Foreign IT Workers After Wartime Brain Drain,” The Moscow Times, March 15, 2023, https://www.themoscowtimes.com/2023/03/15/russia-turns-to-foreign-it-workers-after-wartime-brain-drain-a80493.
    154    See, e.g., “Vladimir Putin Meets with Members of the Valdai Discussion Club,” Valdai Discussion Club, October 27, 2022, https://valdaiclub.com/events/posts/articles/vladimir-putin-meets-with-members-of-the-valdai-club/; Fiona Hill, “Dinner with Putin: Musings on the Politics of Modernization in Russia,” Brookings Institution, October 8, 2010, https://www.brookings.edu/articles/dinner-with-putin-musings-on-the-politics-of-modernization-in-russia/.
    155    Ivan Timofeev, “Ending Western domination is key to the emerging world order. Here’s what needs to be done to achieve it,” RT, May 30, 2023, https://www.rt.com/russia/576856-end-west-domination-world-order/.
    156    « Б госдуму внесут закон о запрете удалëнной работы из—за границы », Verstka, December 14, 2022, https://verstka.media/udalennuyu-rabotu-zapretyat?tg_rhash=86cf5f61f61288
    157    Mary Ilyushina, “Russia eyes pressure tactics to lure fleeing tech workers home,” The Washington Post, March 8, 2023, https://www.washingtonpost.com/world/2023/03/08/russia-employers-intimidation-workers-war/#; “Russia goes after remote workers with tighter income tax draft law,” Reuters, May 18, 2023, https://www.reuters.com/world/europe/russia-goes-after-remote-workers-with-tighter-income-tax-draft-law-2023-05-18/; “Key Russian bank limits remote work from abroad — RBK,” RT, November 29, 2023, https://www.rt.com/business/588130-tinkoff-limits-remote-work-abroad/.
    158    “With Over 300 Sanctions, U.S. Targets Russia’s Circumvention and Evasion, Military-Industrial Supply Chains, and Future Energy Revenues,” US Department of the Treasury  May 19, 2023, https://home.treasury.gov/news/press-releases/jy1494; “The United States Imposes Sanctions on Russian Entities Involved in UAV Deal with Iran,” US Department of State, December 9, 2022, https://www.state.gov/the-united-states-imposes-sanctions-on-russian-entities-involved-in-uav-deal-with-iran/
    159    “McDonald’s To Exit from Russia,” McDonald’s, May 16, 2022, https://corporate.mcdonalds.com/corpmcd/our-stories/article/mcd-exit-russia.html; Amelia Lucas, “McDonald’s to sell Russian business to existing Siberian licensee,” CNBC, May 19, 2022, https://www.cnbc.com/2022/05/19/mcdonalds-to-sell-russian-business-to-existing-siberian-licensee.html
    160    “Russia confirms Meta’s designation as extremist,” BBC, October 11, 2022, https://www.bbc.com/news/technology-63218095.
    161    Justin Sherman, “Russia Signals a New Era in Its War on Western Internet Platforms,” Slate, March 8, 2022, https://slate.com/technology/2022/03/russia-roskomnadzor-youtube-information-warfare.html.
    162    See, e.g., The Structure of Russian Anti-Extremism Legislation,” SOVA Center for Information and Analysis, November 2010, https://www.europarl.europa.eu/meetdocs/2009_2014/documents/droi/dv/201/201011/20101129_3_10sova_en.pdf.
    163    Heli Simola, Made in Russia? Assessing Russia’s Potential for Import Substitution (Helsinki: Bank of Finland Institute for Emerging Economies, March 2022), 5,  https://www.econstor.eu/handle/10419/253652.
    164    Ibid., 14-15.
    165    Author’s conversation with European defense and intelligence analysts, August 2023.
    166    “Alexander Kuleshov,” Skoltech, January 3, 2021, https://www.skoltech.ru/en/team/alexander-kuleshov/.
    167    « Тотального бегства иностранцев не наблюдаем, хотя отдельные обидные потери есть », Kommersant, May 22, 2022, https://www.kommersant.ru/doc/5357614
    168    Ibid.
    169    “Special report: How U.S.-made chips are flowing into Russia,” Nikkei, April 12, 2023, https://asia.nikkei.com/Business/Tech/Semiconductors/Special-report-How-U.S.-made-chips-are-flowing-into-Russia; “Web of Secret Chip Deals Allegedly Help US Tech Flow to Russia,” Bloomberg, March 15, 2023, https://www.bloomberg.com/news/features/2023-03-15/secret-chip-deals-allegedly-help-us-technology-flow-to-russia-despite-sanctions#xj4y7vzkg; Zoya Sheftalovich and Laurens Cerulus, “The chips are down: Putin scrambles for high-tech parts as his arsenal goes up in smoke,” Politico Europe, September 5, 2022, https://www.politico.eu/article/the-chips-are-down-russia-hunts-western-parts-to-run-its-war-machines/.
    170    “Elbrus processors developer preparing to transfer production to Zelenograd’s Mikron from Taiwan – media,” Interfax, May 30, 2022, https://interfax.com/newsroom/top-stories/79684/
    171    “Elbrus Processors Developer Preparing to Transfer Production to Zelenograd’s Mikron from Taiwan – Media.”
    172    Ramish Zafar, “Russia Funds Largest Chipmaker With 7 Billion Rubles In Aid As Sanctions Bite,” Wccftech, September 7, 2022, https://wccftech.com/russia-funds-largest-chipmaker-with-8-billion-rubles-in-aid-as-sanctions-bite/.
    173    “Silicon Trust welcomes JSC MIKRON as new partner,” Silicon Trust, April 27, 2014, https://silicontrust.org/2014/04/27/silicon-trust-welcomes-jsc-mikron-as-new-partner/.
    174    Pavel Urusov, “Vital Microchip Sanctions Will Hit Russian Computing Power Hard,” Carnegie Endowment for International Peace, July 25, 2023, https://carnegieendowment.org/politika/90250
    175    Chris Miller, “The Impact of Semiconductor Sanctions on Russia,” American Enterprise Institute, April 2024, 1-3,  https://www.aei.org/research-products/report/the-impact-of-semiconductor-sanctions-on-russia/.
    176    “Half the processors made by Russian computer chipmaker Baikal electronics are reportedly defective,” Meduza, March 27, 2024, https://meduza.io/en/news/2024/03/27/half-the-processors-made-by-russian-computer-chipmaker-baikal-electronics-are-reportedly-defective; « Разработчик процессоров Baikal локализует один из этапов производства », Vedomosti, March 26, 2024, https://www.vedomosti.ru/technology/articles/2024/03/26/1027924-razrabotchik-protsessorov-baikal-lokalizuet-odin-iz-etapov-proizvodstva
    177    See, e.g., European Union, “Official Journal of the European Union,” Volume 66, June 23, 2023.
    178    “Positive Technologies Q2 IFRS revenue rises by 49% to $35.09 mln,” TASS, July 25, 2023, https://tass.com/economy/1651585
    179    « Группа «Астра» объявила финансовые результаты по МСФО за первое полугодие 2023 года », C News, September 22, 2023, https://www.cnews.ru/news/line/2023-09-22_gruppa_astra_obyavila. See also, e.g., « Linux жил, Linux жив, Linux будет жить », Kommersant, September 20, 2022, https://www.kommersant.ru/doc/5570730.
    180    “Russian Ministry of Digital Development to transform domestic software register into marketplace,” Interfax, May 25, 2022, https://interfax.com/newsroom/top-stories/79526/.
    181    “Domestic Soft Association Asks Ministry of Digital Development not to Ease Entry to Register of Russian Software,” ICT Moscow, June 22, 2022, https://ict.moscow/en/news/domestic-soft-association-asks-ministry-of-digital-development-not-to-ease-entry-to-register-of-russian-software/.
    182    Natasha Lomas, “Yandex’s sale of News and Zen to VK completes,” TechCrunch, September 12, 2022, https://techcrunch.com/2022/09/12/yandex-news-zen-vk-sale-completes/.
    183    Darya Korsunskaya and Alexander Marrow, “Exclusive: Yandex NV could sell Russian assets all at once,” Reuters, November 14, 2023, https://www.reuters.com/markets/deals/yandex-nv-could-sell-all-russian-assets-one-go-2023-11-14/; “Yandex to Fully Divest Russian Assets and Distribute Proceeds,” Bloomberg, November 14, 2023, https://www.bloomberg.com/news/articles/2023-11-14/yandex-to-fully-divest-russian-assets-and-distribute-proceeds.
    184    “Yandex Announces Third Quarter 2023 Financial Results,” Yandex, October 26, 2023, https://ir.yandex/financial-releases?year=2023.
    185    Alexander Marrow, Darya Korsunskaya, and Polina Devitt, “Yandex owner to exit Russia in a $5.2 billion deal,” Reuters, February 5, 2024, https://www.reuters.com/technology/yandex-nv-agrees-52-bln-sale-russian-assets-investor-consortium-2024-02-05/
    186    “Network security in Russia: what remains after all is gone,” discussion at Positive Hack Days 2023, Moscow, Russia, https://www.youtube.com/watch?v=rxuzvuQrbC0.
    187    “Cyber sovereignty: open code contribution,” discussion at Positive Hack Days 2023, Moscow, Russia, https://www.youtube.com/watch?v=nj3KqTVPza4.
    188    “Russian Software: Domestic Software,” TA Adviser, July 18, 2023, https://tadviser.com/index.php/Article:Russian_Software_(Domestic_Software).
    189    Alexander Martin, “Russia to launch its own version of VirusTotal due to US snooping fears,” The Record, October 30, 2023, https://therecord.media/russia-launching-own-malware-repository-virustotal
    190    Brian (Chun Hey) Kot, “Hong Kong’s Technology Lifeline to Russia,” Carnegie Endowment for International Peace, May 17, 2023, https://carnegieendowment.org/2023/05/17/hong-kong-s-technology-lifeline-to-russia-pub-89775.
    191    “Support Provided by the People’s Republic of China to Russia,” US Office of the Director of National Intelligence, June 2023, 6, https://democrats-intelligence.house.gov/uploadedfiles/odni_report_on_chinese_support_to_russia.pdf.
    192    “US intelligence finding shows China surging equipment sales to Russia to help war effort in Ukraine,” The Associated Press, April 19, 2024, https://apnews.com/article/united-states-china-russia-ukraine-war-265df843be030b7183c95b6f3afca8ec
    193    Iris Deng, “Chinese smartphone brands gain market share in Russia with Xiaomi gaining top spot displacing Samsung,” South China Morning Post, April 18, 2023, https://www.scmp.com/tech/big-tech/article/3217430/chinese-smartphone-brands-gain-market-share-russia-xiaomi-gaining-top-spot-displacing-samsung.
    194    Ibid.
    195    See, e.g., Dan Strumpf, “Chinese Tech Giants Quietly Retreat From Doing Business With Russia,” The Wall Street Journal, May 6, 2022, https://www.wsj.com/articles/chinese-tech-giants-quietly-stop-doing-business-with-russia-11651845795
    196    Iris Deng, “Huawei disbands enterprise business team in Russia in further pullback amid Western sanctions, local media reports,” South China Morning Post, December 20, 2022, https://www.scmp.com/tech/big-tech/article/3203995/huawei-disbands-enterprise-business-team-russia-further-pullback-amid-western-sanctions-local-media; “Huawei,” KSE Institute, accessed January 21, 2024, https://leave-russia.org/huawei
    198    Selena Li, “Explainer: China UnionPay, Russia’s potential payments backstop,” Reuters, April 21, 2022, https://www.reuters.com/business/finance/china-unionpay-russias-potential-payments-backstop-2022-04-21/; Nicholas Gordon, “Visa and Mastercard have already cut ties with Russian banks. Now China’s largest credit card brand might be pulling out too,” Fortune, April 22, 2022, https://fortune.com/2022/04/22/unionpay-china-credit-card-sberbank-secondary-sanctions-russia/; “Chinese UnionPay System Cuts Off Russian Bank Cards,” Kyiv Post, September 3, 2022, https://www.kyivpost.com/post/1439.
    199    Josh Ye, “China releases its first open-source computer operating system,” Reuters, July 6, 2023, https://www.reuters.com/technology/china-releases-its-first-open-source-computer-operating-system-2023-07-06/; Tao Mingyang, “China’s homegrown operating system sees rapid development as US’ tech assault backfires,” Global Times, August 10, 2023, https://www.globaltimes.cn/page/202308/1296031.shtml
    200    Catalin Cimpanu, “Russian military moves closer to replacing Windows with Astra Linux,” ZDNet, May 30, 2019, https://www.zdnet.com/article/russian-military-moves-closer-to-replacing-windows-with-astra-linux/; “Digital Ministry drafting changes to allow developers to participate in international projects not registered in Russia,” Interfax, October 11, 2023, https://interfax.com/newsroom/top-stories/95329/
    201    Phil Muncaster, “Russian Government Bans Foreign Messaging Apps,” Infosecurity, March 2, 2023, https://www.infosecurity-magazine.com/news/russian-government-bans-foreign/
    202    Mike Eckel, “One App To Rule Them All: Coming Soon To Russia’s Internet,” Radio Free Europe/Radio Liberty, December 2, 2023, https://www.rferl.org/a/russia-internet-app-social-media-surveillance-/32711114.html. See also, Philipp Dietrich, “The Key Player in Russia’s Cybersphere,” German Council on Foreign Relations, September 2023, https://dgap.org/en/research/publications/key-player-russias-cybersphere.
    203    The original Russian government webpage, linked in the story by Kommersant, is not accessible. « Россия и Китай договорились проинвестировать совместные проекты на $1,3 млрд », Kommersant, November 8, 2022, https://www.kommersant.ru/doc/5652855. See also Russian government discussion of Russian-Chinese trade: “Andrei Belousov: Trade in Russia and China can reach $300 billion by 2030,” Government of Russia, November 20, 2023, http://government.ru/en/news/50157/.
    204    Prithvi Gupta, “China’s steadily expanding investments in Russia since the Ukraine conflict,” Observer Research Foundation, July 26, 2023, https://www.orfonline.org/expert-speak/chinas-steadily-expanding-investments-in-russia-since-the-ukraine-conflict
    205    Mikhail Korostikov, “Is Russia Really Becoming China’s Vassal?” Carnegie Endowment for International Peace, June 7, 2023, https://carnegieendowment.org/politika/90135
    206    Philipp Ivanov, “Together and Apart: The Conundrum of the China-Russia Partnership,” Asia Society, October 202), https://asiasociety.org/policy-institute/together-and-apart-conundrum-china-russia-partnership
    207    « Реестр проектов », NTI 2035, accessed September 25, 2023, https://nti2035.ru/catalog/
    208    “Russia Forms Drone, Microchip Investment Fund – Vedomosti,” The Moscow Times, April 3, 2023, https://www.themoscowtimes.com/2023/04/03/russia-forms-drone-microchip-investment-sovereign-fund-vedomosti-a80688; « В России появился венчурный Фонд суверенных технологий », Vedomosti, April 3, 2023, https://www.vedomosti.ru/technology/articles/2023/04/03/969178-v-rossii-poyavilsya-venchurnii-fond-suverennih-tehnologii.
    209    Emma Burrows, “A record Russian budget will boost defense spending, shoring up Putin’s support ahead of the election,” The Associated Press, November 15, 2023, https://apnews.com/article/russia-draft-budget-state-duma-economy-ukraine-4ac21a2259169d7c689ac452830bb0af; Pavel Luzin and Alexandra Prokopenko, “Russia’s 2024 Budget Shows It’s Planning for a Long War in Ukraine,” Carnegie Endowment for International Peace, November 10, 2023, https://carnegieendowment.org/politika/90753.
    210    Putin instructs Cabinet to take steps to make Russia independent from foreign software,” TASS, September 5, 2022, https://tass.com/politics/1502743.
    211    « Подписан закон о переходе на использование отечественных геоинформационных технологий », Digital Russia, August 7, 2023, https://d-russia.ru/podpisan-zakon-o-perehode-na-ispolzovanie-otechestvennyh-geoinformacionnyh-tehnologij.html.
    212    Sergei Guriev, “Deglobalizing Russia,” Carnegie Endowment for International Peace, December 2015, 3, https://carnegieendowment.org/files/Article_Guriev_Eng.pdf.
    213    “Security Council Meeting,” The Kremlin, May 20, 2022, http://en.kremlin.ru/events/president/news/page/65
    214    See, e.g., Gregory Arcuri, “Lessons from Russia’s Dysfunctional Pre-War Innovation Economy,” Center for Strategic & International Studies, April 11, 2022, https://www.csis.org/blogs/perspectives-innovation/lessons-russias-dysfunctional-pre-war-innovation-economy (“…Putin’s regime has been at best indifferent—and at worst, hostile—towards the civilian and purely economic application of emerging technologies.”)
    215    See, e.g., on the dismissal of these concerns, Jon Fingas, “Russia is ditching Microsoft because it’s an easy target,” Engadget, July 18, 2019, https://www.engadget.com/2016-11-02-russia-to-drop-microsoft-software.html.
    216    Eckel, “One App To Rule Them All: Coming Soon To Russia’s Internet.
    217    See, e.g., Philipp Dietrich, “Banning YouTube in Russia: Just a Matter of Time,” German Council on Foreign Relations, April 4, 2024, https://dgap.org/en/research/publications/banning-youtube-russia-just-matter-time-0; “Russia To Create Blacklist Of YouTube Vloggers Who Refuse To Join Kremlin-Backed Platform,” Radio Free Europe/Radio Liberty, February 10, 2024, https://www.rferl.org/a/russia-youtube-blacklist-vloggers/32813744.htmlAuthor’s conversation with an expert in Russia’s technology ecosystem. See the state’s denial from 2022, with relative silence since: “Russia Will Not Ban YouTube, Minister Shadayev Says,” Radio Free Europe/Radio Liberty, May 17, 2022, https://www.rferl.org/a/russia-ban-youtube-shadayev/31854787.html.
    218    Thanks to Iria Puyosa for further discussion of this point.
    219    « Мы строим глобального вендора системного ПО »C News, 2021, https://www.cnews.ru/projects/2021/astra_linux.
    220    Andrei Soldatov, “Russia’s Endless Registers Are a Back Door to Preliminary Censorship,” The Moscow Times, June 29, 2021, https://www.themoscowtimes.com/2021/06/29/russias-endless-registers-are-a-back-door-to-preliminary-censorship-a74376.
    221    See, e.g., the comments quoted in: Hillenius, “Russia scrapped open source plans to focus on self-reliance.”
    222    Justin Sherman, “Russia’s largest hacking conference reflects isolated cyber ecosystem,” Brookings Institution, January 12, 2023, https://www.brookings.edu/articles/russias-largest-hacking-conference-reflects-isolated-cyber-ecosystem/.
    223    Justin Sherman, “Analyzing Russian Internet Firm Yandex, Its Open-Source Code, and Its Global Contributors,” Margin Research, March 27, 2023, https://margin.re/2023/03/analyzing-russian-internet-firm-yandex-its-open-source-code-and-its-global-contributors/.
    224    Russia Revises Yandex Partition Terms Over Founder’s Anti-War Stance – Reports,” The Moscow Times, October 6, 2023, https://www.themoscowtimes.com/2023/10/06/russia-revises-yandex-partition-terms-over-founders-anti-war-stance-reports-a82686.
    225    « Бизнес Positive Technologies растет: компания увеличила объем отгрузок во втором квартале на 71% — до 3,3 млрд рублей », PT SecurityJuly 25, 2023, https://group.ptsecurity.com/ru/news/biznespositivetechnologiesrastetkompaniyauvelichilaobemotgruzokvovtoromkvartalena-71-do-3-3-mlrdrub.
    226    « Positive TechnologiesВероятна международная экспансия », IT InvestJune 9, 2022, https://itinvest.ru/analytics/stocks/stocksideas/12861/.
    227    David McHugh, “The owners of Russia’s tech pioneer Yandex are selling — at a big, Kremlin-required discount,” The Associated Press, February 5, 2024, https://apnews.com/article/yandex-russia-sale-search-engine-4de5a04fcf9b99ed5b5fc5fcdd24dd2a.
    228    See, e.g., Anton Shvets, “The sale of Yandex is a weapon in the hands of the Kremlin,” Ukrainska Pravda, March 21, 2024, https://www.pravda.com.ua/eng/columns/2024/03/21/7447514/.
    229    See, e.g., Karen DeYoung and Missy Ryan, “Russia says China agreed to secretly provide weapons, leaked documents show,” The Washington Post, April 13, 2023, https://www.washingtonpost.com/national-security/2023/04/13/russia-china-weapons-leaked-documents-discord/.
    230    Even beyond cyber per se, Elizabeth Wishnick argues that the “Russian intelligence services have been increasingly uneasy about the scope of Chinese intelligence-gathering in Russia, even publicizing cases of Russians being apprehended for spying for China.” See: Elizabeth Wishnick, “A ‘Superior Relationship’: How the Invasion of Ukraine Has Deepened the Sino-Russian Partnership,” China Leadership Monitor 76 (June 2023), https://www.prcleader.org/post/a-superior-relationship-how-the-invasion-of-ukraine-has-deepened-the-sino-russian-partnership.
    231    Alberto Nardelli, “Russian Memo Said War Leaves Moscow Too Reliant on Chinese Tech,” Bloomberg, April 18, 2023, https://www.bnnbloomberg.ca/russian-memo-said-war-leaves-moscow-too-reliant-on-chinese-tech-1.1909355.
    232    “Sanctions-Hit Russia Wary of Over-Reliance on Chinese Tech — Bloomberg,” The Moscow Times, April 19, 2023, https://www.themoscowtimes.com/2023/04/19/sanctions-hit-russia-weary-of-over-reliance-on-chinese-tech-bloomberg-a80875.
    233    “Twisted Panda: Chinese APT Espionage Operation Against Russian State-Owned Defense Institutes,” CheckPoint, May 19, 2022, https://research.checkpoint.com/2022/twisted-panda-chinese-apt-espionage-operation-against-russians-state-owned-defense-institutes/.

    The post Russia’s digital tech isolationism: Domestic innovation, digital fragmentation, and the Kremlin’s push to replace Western digital technology appeared first on Atlantic Council.

    ]]>
    OT cyber policy: The Titanic or the iceberg https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/ot-cyber-policy-the-titanic-or-the-iceberg/ Wed, 24 Jul 2024 23:01:00 +0000 https://www.atlanticcouncil.org/?p=817995 Current policy does not address the issue of cyber-physical security with a systemic approach, instead focusing with tunnel vision on specific events. This analysis uses the iceberg model for systems thinking to address policy gaps in the OT ecosystem, detailing recommendations for the Cybersecurity and Infrastructure Security Agency (CISA).

    The post OT cyber policy: The Titanic or the iceberg appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    The maritime sector is both a commercial and defense industry, critical economically and for national security. Although the Titanic was primarily a passenger vessel, it also carried a substantial amount of cargo. Ships, once primitive, transformed into analog marvels of engineering, then into highly digitized floating systems of systems. Some rival small cities, with complex interdependent systems and supply chains for things like power, sanitation, food, communication, navigation, medicine, healthcare, and retail.

    However, manufactured vessels still seem like a commodity item, like any other raw material. Despite potential malfunction or mishap, manufacturers do not own and operate commercial ships once they leave the port, so their (vendor) liability and support depend on each component, contract, and situation. What is not fully managed or entrusted to owners and operators is outsourced to third parties, with checks and balances further delegated to laws, regulations, compliance, and insurance.

    Today, cyber-physical operations across critical infrastructure are treated as both the Titanic–a complex system of interdependent digitized systems–and the iceberg, as many people refer to a speculative “cyber 9/11” or “cyber–Pearl Harbor.” Deploying dual-use technology, all sixteen critical infrastructure sectors in the U.S. impact civilian life and national security. Inherently, however, control systems–known as operational technology (OT) or industrial control systems (ICS)–are owned and operated vendor products with responsibility for their security shared by many different stakeholders.

    Current policy does not address the issue of cyber-physical security with a systemic approach, instead focusing with tunnel vision on specific events, such as demonstrated adversarial capabilities, discovered vendor product vulnerabilities and patches, or patterns like USB attacks This reactionary impulse makes prioritization difficult across sectors, entities, and their critical functions. Generally, organizations and agencies cannot reasonably determine where the highest concentration of vulnerable or homogenous systems are deployed, nor which entity will be targeted next.

    This dynamic has resulted in a general “Shields Up” stance, but a lack of prioritization, among the sixteen critical infrastructure sectors and a plethora of efforts to contextualize and address the growing concerns for cyber-physical systems, which only tackle parts of the problem. Furthermore, stakeholders across critical infrastructure struggle to discern the potential impacts on the integrity of their systems and processes of each attack scenario, to prioritize actions and activities, and to calculate the cost-benefit analysis of those actions and activities.

    This issue brief illustrates why no one-size-fits-all approach is appropriate for OT cybersecurity, and how each stakeholder has a role to play in enhancing the security of cyber-physical systems, including vendors, owners and operators, and national security and defense practitioners and policymakers. Borrowing again from the Titanic analogy, this analysis uses the iceberg model for systems thinking to address policy gaps existing between the various levels of the OT ecosystem, detailing the following recommendations for the Cybersecurity and Infrastructure Security Agency (CISA):

    1. Streamline available OT and industrial controls systems (ICS) cybersecurity data
    2. Align public-private risk researchers and analysts
    3. Conduct Cyber Performance Goal (CPG) reviews with low, mid, and high-maturity organizations
    4. Expand training and awareness

    Introduction

    The general public clearly believes the US government has a key role to play in securing and maintaining infrastructure that underpins economic and national security. Survey data released by the MITRE Corporation in March 2024 reveals that 49 percent of the public believes the federal government bears partial responsibility for fortifying critical infrastructure. 29 percent believe the federal government is solely responsible.1 Despite its central role, however, the federal government currently lacks the data and resources to prioritize where and how to secure and fortify OT and ICS networks across the nation and its many sectors and interdependent global supply chains.

    A primary cause of this problem is that federal cybersecurity policymakers, sector risk management agencies (SRMAs), and research and development teams lack a holistic understanding of critical operational technology and industrial control systems, as well as the risks of cascading cyber-physical impacts. Despite the barrage of recent frameworks, assessments, and recommendations to coalesce the field around shared principles and security controls, “Strategy for Cyber-Physical Resilience: Fortifying Our Critical Infrastructure for a Digital World,” President’s Council of Advisors on Science and Technology, February 2024,2 cybersecurity is inherently subjective. Each asset owner’s unique perspectives and needs determine its priorities. Similar concerns exist across industries, but heterogeneous systems, configurations, and networks do not result in homogeneous risks, impacts, or outcomes.

    A secondary cause of this problem is that available datasets and information-sharing regimes for OT and ICS vulnerabilities and threat intelligence are siloed, resulting in limited sampling for cyber-physical environments–limited by the number of participating asset owners, sector coverage, available indicators of compromise, and national security clearance measures. This creates multiple single sources of information without much consensus. Despite this variation, however, OT systems do have some common characteristics that are vital for cybersecurity experts to understand.

    OT systems background: Devices, networks, and access

    Operational technology is a broad set of technologies covering process automation, instrumentation, cyber-physical operations, and industrial control systems (ICS). OT systems are often connected to other supervisory control and data acquisition (SCADA) systems and field devices or instrumentation, with control data separately captured for use in business applications. Operational technology can be found in a variety of contexts, from control systems that automatically run assembly lines and manufacturing processes to those that produce and deliver electricity, lighting, and heating.

    Regardless of the context, all sectors with OT systems have three things in common: critical assets (machines and equipment essential to operations), critical functions (processes and outputs of operations), and varying cyber risk and exposure. However, risks to OT and ICS do not apply to all systems in the exact same way. OT and ICS systems are built with significant protocol and configuration differences, which are often customized for their intended purpose, presenting competing demands for availability, safety, and security priority and attention.

    After infecting an intermediary system, a threat actor or group of actors may pivot into control networks either in a supervisory capacity (read-only) or with the ability to send commands (write) to control systems that dictate instructions to field devices that move, turn, heat, cool, open and close physical devices in the real world. Without personnel manually responsible for the functioning of all the turbines, pumps, valves, actuators, dials, and cyber-physical processes, it is virtually impossible for owners and operators to disconnect these vital systems from their intermediary counterparts and principal business requirements.

    Many of these systems–which are often designed to last fifteen years or more–inherently lack encryption, password protection, multifactor authentication, and other best practices for cybersecurity. Some systems only allow for short windows of time–sometimes only 24 to 48 hours offline–to install critical software patches. Due to this difficulty, owners or operators often opt to harden systems instead of isolating, securing, patching, or replacing an insecure device. Hardening devices is a practice that requires configuration changes to disable or remove any services or programs not required for normal or intended system operations. Reducing the number of services and programs removes superfluous access and frivolous data exchange, lowering the number of potentially exploitable attack paths.

    There are thousands of known software and application vulnerabilities from each vendor that manufactures machinery and equipment. Although vulnerabilities are published with an associated common vulnerability score, the rating is specific to the vulnerability in the system itself and does not translate to the severity of the vulnerability in the context of a deployed environment. Vulnerabilities must be analyzed in the context of their operations to understand their significance and prioritize remediation and response efforts.

    It is incredibly challenging to manually verify the exposure or risk status of numerous operational devices at all times. There are many reasons for this, including a lack of system provenance, supply chain and chain of custody issues, and limited root cause analysis capabilities. If an owner or operator cannot entirely secure its network, it must reinforce it with access controls–both machine to machine and user or role-based interactions. If authentication is not possible or credentials can be spoofed or bypassed, teams will then need to harden devices. And if OT devices are vulnerable or no longer supported by their vendor, network security remains a top priority. This cycle repeats asset by asset, process by process, network by network, and company by company.

    When examining OT vulnerabilities, cybersecurity conversations sometimes overlook the physical layer of protections often built into control systems, called interlocks. These protective physical and logical components “define mutually exclusive conditions to prevent undesired (harmful) states of the process” such as acceptable voltage, chemical levels, or speed parameters.3 Focusing solely on the cyber aspects of control systems and their connectivity overlooks the complexity of this physical protection logic and disproportionately focuses attention on systems or controls that may not reduce risk.

    Thus, policy solutions to improve OT cybersecurity largely fall into two camps. The first focuses on securing or replacing the control system equipment or systems, while the second focuses on avoiding cyber incidents altogether by promoting risk avoidance, security controls, and best practices; often relying on a single motivating event or threat actor for urgency. With limited resources, including budgets, personnel, and time, both these approaches have their drawbacks. Focusing on addressing product vulnerabilities is a cumbersome process that may not be financially or technically viable. Focusing on avoiding all cyber risks, on the other hand, often ignores the importance of different critical assets and their essential functions.

    Connecting criticality and “cyber” 

    sanitary hospitals, safe and reliable electricity, critical manufacturing, and more–is so vulnerable to cyberattack, why not take it all offline? This question was asked by Congressman Carlos Gimenez during a February 2024 House Homeland Security Committee hearing on operational technology.4 While the desire for a simple fix is understandable, this approach is unrealistic given the scope and scale of digital technologies for both localized and distributed operations.

    Localized operations like regional or municipal utilities and multinational corporations like oil companies and automobile manufacturers cannot meet the demands of their businesses without relying on connected digital infrastructure. From logistics and scheduling to enterprise resource management, reliability, and process monitoring, typical IT and business systems rely on data from processes that are now automated via digital and network-connected technologies. These numerous technologies and systems are and will continue to be susceptible to cyberattacks.

    In an attempt to prioritize critical services and functions, the congressionally mandated Cyberspace Solarium Commission created the following categories for Systemically Critical Designations:

    1. The interruption of critical services, including the energy supply, water supply, electricity grid, and/or emergency services, that could cause mass casualties or lead to mass evacuations.
    2. The perpetuation of catastrophic damage to the economy, including the disruption of the financial market, disruption of transportation systems, and the unavailability of critical technology services.
    3. The degradation and/or disruption of defense, aerospace, military, intelligence, and national security capabilities.
    4. The widespread compromise or malicious intrusion of technologies, devices, or services across the cyber ecosystem.5

    Unfortunately, the attack surface does not end with these systemically critical categories of services and functions. Despite categorizing entities, goods, and services as “systemically critical,” there is also no established way to prioritize and secure specific asset owners or targets based on potential OT and ICS cyber scenarios or cascading impacts. Lack of prioritization also leads to a lack of preparation. And lack of preparation leads to misunderstanding of tolerance–the capacity to endure continued subjection to something or an allowable amount of variation of a specific quantity, especially in the dimensions of a machine or part.

    In OT and ICS, fault-tolerant system design–where a system continues to operate despite software or hardware failures–is a well understood aspect of functional safety and hazard analysis but is ill-defined for cybersecurity. Functionally, operators measure, train, plan for, analyze, and handle various failures: sensor failure, effector failure, computer hardware or software failure, operator failure, negligence, or accident. However, it is much more difficult to predict all possible cyber scenarios, events, or attacks that would lead to similar failures. As a result, stakeholders struggle to understand cascading cyber-physical impacts. Things like manual operations, redundancy, and isolated networks and facilities for operations all matter as much as the forensic artifacts of a cyber incident.

    What cyber experts call resilience (the ability of systems to withstand adversity and recover quickly), operations experts call tolerance (the threshold at which systems can effectively and consistently deal with stressful situations). Policymakers are left to bridge that gap with flexible solutions to complex problems amidst a widely distributed risk management landscape. However, defining actual risk, perceived risk, and acceptable risk to date has been marketed as blanket cyber resilience with very little understanding of system tolerance.

    For instance, internet-connected devices may be hardened or have compensating security controls in place, representing a lower risk to organizations taking these steps. Vulnerabilities may require human interaction or physical access for exploitation, reducing their widespread impacts. Some organizations keep alternate backup systems ready to implement if critical assets are targeted or degraded. Others rely on physical logic embedded in systems that would prevent worst-case scenarios from occurring in the process control systems themselves. Finally, many safety systems provide alerts on an unsafe operational status that may be caused by some cyber threats and scenarios.

    Engineers and operators understand fault-tolerant system design and cybersecurity experts understand security controls, but few business and government leaders understand the overlap and the gaps. Tolerance is essential for business calculations including annualized loss expectancy, maximum tolerable downtime, and mean time to recover. Determining tolerance, however, is complicated by the fact that there is no single definition of an OT cyber incident. Does the root of an incident have to be intentional, or could one also result from user error, negligence, or accident? Does a piece of OT or SCADA equipment or machinery need to be directly impacted to count? Many stakeholders have a role to play in determining and defining the extent of an incident.

    The iceberg model for systems thinking

    An iceberg represents a potentially catastrophic scenario–a situation commonly referring to something menacing or a harbinger of bad outcomes. Used as an idiom, the phrase “tip of the iceberg” often refers to some small, visible part of a much larger situation or context. Colliding with an iceberg, real or metaphorical, results in cascading impacts. This logic forms the basis of the “iceberg model” for systems thinking, which businesses, policymakers, and academics use to critically evaluate problems in complex ecosystems. The tip of the iceberg, called “the event,” is the most visible activities that occur within a system. The iceberg model, however, is designed to push thinking beyond the most obvious symptoms into further levels of “patterns,” “structures,” and “mental models.” When it comes to security for cyber-physical operations, industrial control systems, and operational technology, potential zero-day incidents and malign nation-state actors really do represent just the tip of the iceberg.

    Iceberg Model for Systems Thinking

    At the event level of OT cybersecurity, vendors, owners and operators, and national security policymakers focus their efforts on addressing recent attacks, ransomware, espionage campaigns, zero-day vulnerabilities, and other high-profile developments. Below that, at the pattern level, more technical stakeholders look to address events taking place over time, like the vulnerabilities of control systems intentionally or inadvertently connected directly to the internet or advanced persistent threats (APTs) and their tactics, techniques, and procedures (TTPs) targeting remote or physical access.

    At the structure level, OT environments are underpinned by legacy devices with vulnerabilities, flat networks, opaque patching policies, and unencrypted protocols. Original equipment manufacturers (OEMs) are primary actors at this level, investing in proprietary protocol design and development, and maintaining control and change authority over manufactured systems for various commercial and technical reasons. End users or asset owners exist between the pattern and structure segments, understanding the design and architecture of networks and facilities, and the contextual implications of cyber scenarios.

    Threat researchers, intelligence analysts, and third-party security monitoring vendors exist between the event, pattern, and structure portions of the iceberg but are often overlooked in policymaking. They uniquely understand and quantify the risks of the potential exploitation of OT and ICS and networks deployed today. At the base of the iceberg is the mental model: the industry’s established attitudes, beliefs, expectations, and values that are deeply rooted and difficult to change. For instance, this includes a reluctance to actively interrogate control systems (as opposed to passively) or the understandable hesitation to run IT scanning tools like Nmap to identify vulnerabilities in OT networks.

    Core assumptions, market realities, conflicting priorities, budget and resource constraints, political will and capital, knowledge, awareness, and training all inform the belief system beneath OT cybersecurity and the schools of thought for how to address its many challenges. Blanket security requirements and compliance measures that do not account for the patterns, structures, and mental model struggle to cover the actual install base of OT and ICS vendor technologies, properly address the threat landscape, and contend with the unique potential for cascading impacts each asset owner faces.

    OT policy today focuses on avoiding significant events, providing tools for pattern analysis, adding security requirements for OEMs and asset owners, and adding product and liability requirements for vendors at the structure level. Without connective tissue, these efforts will not lead to holistic outcomes that reduce risk and build resilience. A better approach to policymaking is to consider how policies, requirements, best practices, and compliance measures intersect and connect systemically to address cyber-physical risks, threats, and responsibilities for all stakeholders.

    Policy gaps

    JD Work, Professor in the College of Information and Cyberspace at the National Defense University wrote, “Every time one sees an official advocating for a ransomware payment ban, the correct response is not to debate the policy failure modes that result from such a proposal. It is to call out that having failed to provide for the common defense…the state has left private enterprise with only two responses to predation.”6 Despite the overwhelming amount of federal attention on critical infrastructure, many asset owners continue to feel like sitting ducks in the face of cyber threats due to this lapse.

    Meanwhile, the United States and its allies are constantly assessing and reassessing offensive and defensive strategies in response to adversaries engaging in more provocative cyberattacks, like the Chinese-sponsored Volt Typhoon group’s attacks on the IT systems of critical infrastructure organizations and the evolution of more difficult-to-detect “living off the land” techniques. In short, the landscape for critical infrastructure cybersecurity is becoming more complex and confrontational, accentuating the shortcomings of the available solutions, protections, and investments in securing cyber-physical operations.

    Where potential conflict red lines and theaters continue to blur, the 2024 Annual Threat Assessment of the US Intelligence Community from the Office of the Director of National Intelligence (ODNI) has doubled down on the fact that “China remains the most active and persistent cyber threat to the US government, private sector, and critical infrastructure networks,” and that “Russia maintains its ability to target critical infrastructure–including underwater cables and industrial control systems–in the United States as well as in allied and partner countries.”7

    With any critical infrastructure organization a potential target, it is useful to review competing priorities in recent policy. Starting from a high level, the Biden administration announced its new National Cybersecurity Strategy (NCS) in March 2023 as a comprehensive approach to safeguarding US critical digital infrastructure. The strategy is composed of five pillars, of which the first and arguably most important for homeland security is “Defend Critical Infrastructure.” The Cybersecurity and Infrastructure Security Agency (CISA) is tasked with the subsequent functions in that pillar.8

    With these functions in mind, CISA’s Cybersecurity Strategic Plan for FY2024-2026, published in August 2023, has three primary objectives as a subdivision of national cybersecurity priorities second to the NCS:

    1. Address immediate threats
    2. Harden the terrain
    3. Drive security at scale9

    “Operational technology” is mentioned three times in this strategy, though “prioritize” appears fourteen times. The strategy stipulates that CISA will “prioritize our actions to achieve the greatest impact…focus[ing] on four broad sets of stakeholders: (1) federal civilian executive branch agencies…(2) target rich, resource-poor entities where federal assistance and support is most needed…(3) organizations that are uniquely critical to providing or sustaining National Critical Functions…and (4) technology and cybersecurity companies with capability and visibility to drive security at scale.” This is a great start, but in tying the framework back to objective 2.1 of the CISA strategy, “understand how attacks really occur – and how to stop them,”10 for OT/ICS, it is clear that many stakeholders that exist within and between the levels of the iceberg model are missing.

    Another example of a disconnect between the strategy and reality of OT, the CISA enabling measure to “develop a robust capacity to analyze information about cybersecurity intrusions and adversary adaptation, and derive insights into which security measures were, or could have been, most effective in limiting impact and harm”11 will not provide holistic awareness of the most consequential scenarios to prioritize for asset owners. For reasons previously outlined, this enabling measure in OT and ICS may highlight a gap or limitation in one aspect of OT and ICS cybersecurity for a particular stakeholder, but applicability will vary.

    With continued emphasis on collaboration, on March 7, 2024, the Government Accountability Office (GAO) added to CISA’s functional requirements four recommendations approved by the US Department of Homeland Security (DHS) to improve CISA’s OT products, services, and collaboration. Specifically, the GAO report recommended that CISA:

    1. measure customer service for its OT products and services;
    2. perform effective workforce planning for OT staff;
    3. issue guidance to the sector risk management agencies on how to update their plans for coordinating on critical infrastructure issues; and
    4. develop a policy on agreements with sector risk management agencies with respect to collaboration.12

    The first recommendation requires outreach and an audit of asset owners to understand the accessibility and usefulness of CISA products, services, and collaboration. It realistically requires a parallel review of the challenges that in-house security teams are tackling and the security controls and processes that are outsourced to the private sector. The second recommendation addresses a major need across the entire federal government and cybersecurity market. The third is currently the responsibility of each of the Sector Risk Management Agencies (SRMAs), including but not limited to CISA, which “coordinate and collaborate with DHS and other relevant Federal departments and agencies, with critical infrastructure owners and operators, [and] where appropriate with independent regulatory agencies and with state, local, tribal, and territorial (SLTT) entities.”13 Finally, the fourth recommendation has essentially been replaced by the directives of National Security Memorandum 22 (NSM-22), published on May 3, 2024. NSM-22 directs the Secretary of Homeland Security, acting through the Director of CISA, to “coordinate with SRMAs to fulfill their roles and responsibilities to implement national priorities consistent with strategic guidance and the National [Infrastructure Risk Management] Plan and continuously strengthen a unified approach to critical infrastructure security and resilience.”14 A more robust consideration from the GAO review might include CISA hiring and developing internal sector-specific subject matter experts to act as attachés for additional SRMAs. Many sector experts do exist at specific agencies, but very few specialize in cybersecurity, particularly for OT and ICS.

    In February 2024 the President’s Council of Advisors on Science and Technology (PCAST) released a report on cyber-physical resilience. Recommendations included:

    1. establish sector-specific performance goals;
    2. bolster and coordinate research and development;
    3. break down silos and strengthening government cyber-physical resilience capacity; and
    4. develop greater industry, board, CEO, and executive accountability.15  

    The report also calls on the federal government to “clarify the what and why of the national critical functions list to help each sector prioritize,”16 which the GAO previously recommended in GAO-22-104279 published in March 2022.17 The report also suggested the creation of a National Critical Infrastructure Observatory.

    In March 2024, a draft report from the President’s National Security Telecommunications Advisory Committee (NSTAC) suggested that economic incentives, liability tied to risk mitigation, and regulatory simplification tied to the National Institute for Standards and Technology’s Cyber Security Framework (which CISA’s Cybersecurity Performance Goals do quite well) provide a path toward strengthening national security and emergency preparedness.18 The NSTAC also suggested the establishment of a Cybersecurity Measurement Center of Excellence to coordinate the management and assessment of existing data sources across the federal government.

    These lists of competing priorities suggest many good ideas but often lack measurable milestones and deliverables. Critical infrastructure stakeholders cannot address these challenges without an overwhelming amount of support and coordination. Recent initiatives express a vast amount of support for critical infrastructure but demonstrate a lack of coordination in addressing technical and procedural considerations for OT and ICS among relevant stakeholders. These competing priorities are also now being compared against more mandatory requirements like incident reporting and potential future sector-specific mandatory requirements.

    For example, on March 27, 2024, CISA released the proposed rules issued to implement cyber incident reporting under the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), passed in 2022. This comes just two months after updates to the Security and Exchange Commission (SEC) Rule 17 went into effect, covering Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure. The SEC rule requires public companies to annually disclose material information about their cybersecurity risk management, strategy, and governance, and to disclose a significant cyber incident within four days of defining it as significant.

    Covered entities under the proposed CIRCIA legislation would be required to report a “substantial cyber incident” within seventy-two hours. The rule applies “to entities in critical infrastructure sectors that either exceed the small business size standard (as set by the Small Business Administration) or meets any sector-based criterion.”19 CISA anticipated the criteria would impact over 316,000 businesses and organizations in the United States.20 As legal scholars dig into increasingly mandatory policy measures like CIRCIA, they are uncovering new challenges for several regulated entities.

    Cybersecurity attorney Megan Brown noted that the use of existing disparate and diverse best practices and frameworks “can be harmful if the regulator takes an idea or concept created for one use and imports into a different context for which it is ill-suited or – worse – fails to consider the similarities and differences. Use of unclear or shifting definitions and approaches can be unfair to regulated entities who lack predictability.”21 CIRCIA rulemaking “does not explicitly differentiate incidents based on what type of system or data was targeted or where the system is geographically located.”22 Taken into consideration with all the other goals and requirements, incident reporting is largely perceived as a burden not shared by the wider OT ecosystem but instead placed on asset owners and operators.

    This regulatory landscape creates a challenge: how can stakeholders work together to prioritize actions, activities, and cost-benefit analysis if the federal government continues to present a sea of frameworks, best practices, suggestions, and voluntary and mandatory regulations? Without a harmonizing source of guidance across agencies, authorities, and well-intentioned bodies, there are few trusted advisors for stakeholders pursuing actions and activities. Each organization must choose which ideas and mandates to follow, defining their own champions and priorities, reviewing and mapping their operations risks based on the available standards, regulations, suggestions, and best practices.

    Recommendations

    Whoever is tasked with holistic harmonization among these various groups and agencies, there needs to be more robust synchronization beyond creating partnerships, points of contact, and building trust. Many of the needs and recommendations from available reports and strategies exist in current projects and resources that require stitching together across various federal projects and public-private partnerships. The following recommendations are intended to streamline previous suggestions and existing resources across the federal government, targeting the event, pattern, structure, and mental models of OT and ICS.

    It is essential to establish which entities will be responsible for reviewing what is already available to avoid recreating many existing projects and data sources. Barring the creation of yet another organization or agency, these recommendations assume CISA has the homeland security mandate–authority derived from the National Cybersecurity Strategy and NSM-22 and directed tasks from the GAO–to facilitate the following recommendations:

    1. streamline available OT and ICS cybersecurity data
    2. align public-private risk researchers and analysts
    3. conduct Cybersecurity Performance Goal (CPG) reviews with low, mid, and high-maturity organizations
    4. expand training and awareness

    Streamline available OT cybersecurity data

    As the NSTAC report suggested, data is a central missing factor for OT cybersecurity. The PCAST report suggested the United States should map its infrastructure to outmatch adversaries in discovering and addressing vulnerabilities and concentration risk. Building a national asset inventory depending on the install base could lead to a better understanding of the penetration rate of various vendor products, but doing so will not illuminate their networked implementations, configuration settings, or compensating controls introduced by asset owners and end users. CISA and existing SRMAs should consider streamlining available industry projects, resources, and data.

    For example:

    • CISA has a program coordinating executive authorities to subpoena telecommunications companies for network information with manufacturers to identify internet connected assets and drive down risk exposure.
    • The MITRE Corporation recently unveiled the EMB3D Framework, which gives device makers a common understanding of vulnerabilities in their technologies, as well as the security mechanisms for addressing those weaknesses.23
    • Since 2020, the OT cybersecurity industry has maintained the Programmable Logic Controllers (PLC) Security Top 20 List and interactive dashboard to improve the security posture of industrial control systems. These practices leverage natively available functionality in PLCs and Distributed Control Systems (DCS).24

    Data from these programs can inform stakeholders from all levels of the iceberg model, producing shared priorities and outcomes for owners, operators, and product manufacturers. This improved coordination would produce a shared understanding of connectivity and targeting and hardening techniques.

    The PCAST report also recommended the creation of a National Critical Infrastructure Observatory, to “develop a single national system that can support the overlay of key elements like active incidents, indications and warnings and act as a national virtual fusion environment for coordination.”25 Streamlining available data would allow the National Critical Infrastructure Observatory as a central body to not only identify deployed systems and determine their sector and use case, but also owners’ and operators’ risk posture, security concerns, tolerance for downtime, and prioritization efforts for defense and resilience.

    Other examples of complementary programs and pilots without centralized data and gap analysis include programs such as:

    • CISA’s Cyber Sentry is “a CISA-managed threat detection and monitoring capability, governed by an agreement between CISA and voluntarily participating critical infrastructure partners who operate significant systems supporting National Critical Functions.”26
    • The Department of Energy’s CyTRICS program works with industry partyers “to identify high priority OT components, perform expert testing, share information about vulnerabilities in the digital supply chain, and inform improvements in component design and manufacturing.”27
    • The Electricity ISAC Cybersecurity Risk Information Sharing Program shares data collected “through information sharing devices (ISDs) installed on participants’ networks. Data collected through CRISP is used to identify cyber threat actors, pinpoint emerging trends, and analyze correlations across the sector.”28
    • Idaho National Labs’ Malcolm is an open-source network traffic analysis tool designed to make network traffic analysis accessible to both the public and private sectors, supporting all sixteen critical infrastructure sectors.29

    This type of monitoring and trends analysis is essential for stakeholders at the pattern and structure levels and can inform and incentivize ways to expand and replicate industry initiatives that create specific and actionable best practices. It is nonsensical to focus separately and simultaneously on bolstering asset owner security posture, analyzing external risks, measuring security controls, and mapping relevant government standards and compliance regimes. Lastly, this data can inform interdependence research that will be critical for government funding and policy prioritization moving forward.

    Researchers in Canada recently published a time-series analysis of sector interdependency. Using twenty-five years of industrial statistics from 1997-200, they compare Gross Domestic Product (GDP) to the production of finished goods and services per sector, and the transactional use of goods and services by each to create finished products. Their findings generate two indicators: “weak correlations which likely indicate interdependency risks,” and “strongly correlated but imbalanced interdependencies, which often indicate unmanaged supply-chain vulnerabilities.”30 CISA, in partnership with other agencies, should conduct similar research to capture interdependence correlations among and between sectors on a national level.

    Align public-private risk researchers and analysts

    Though most attacks on OT, ICS, or cyber-physical processes bear some similarities, each is  unique, frustrating automated response and remediation as complete solutions. Danielle Jablanski, quoted in William Loomis,31 Signatures, tactics, techniques, and procedures vary widely. This is further complicated by the fact that, in some cases, many owners and operators believe the risk of altering control systems outweighs the benefits of security controls. Unfortunately, this creates a situation where every organization must independently prioritize product vulnerabilities, researcher details, and disclosures. This is a major roadblock for efficacy, situational awareness, and strategic planning across the SRMA communities.

    In many cases, organizations can only learn shared signatures, detections, and intelligence after another organization is victimized. Today, no single stakeholder could corroborate threat research information from two different publicly available OT or ICS cybersecurity resources. Where one publishes more details about indicators of compromise or tactics, techniques, and procedures witnessed in one sector, it may be because it has more customers in that sector and is not necessarily indicative of the threat landscape there. No method for standardizing, correlating, and collating threat and vulnerability research from market leaders exists currently.32

    There are also several novel academic findings that detail existing vulnerabilities and capabilities the private sector has been working on for at least a decade. While the government does not typically provide a list of products to reverse engineer, security research teams often lack centralized insights to inform their own prioritization of research. The ICS Joint Cyber Defense Collaborative (JCDC) within CISA can facilitate enhanced mission alignment for the community by spearheading the development of a technical working group to align researchers and analysts in their approach to security research for industrial control systems, embedded devices, and OT in a more coordinated fashion.

    Connecting these research dots would be significantly impactful for CISA’s strategic objectives, helping to understand where attacks really occur and how to stop them. Therefore, the ICS JCDC should also establish a team and goal to create a better sense of the OT and ICS threat landscape beyond researchers. This would require working together with stakeholders from the data sources listed as part of recommendation one and harnessing available threat information from proprietary OT network monitoring solutions. These improvements, as well as championing ways to produce earlier warnings for exploitation and compromise indicators, would establish more proactive defense mechanisms before adversaries can build exploits.

    Conduct cyber performance goals reviews with low, mid, and high resourced organizations

    The CISA Cyber Performance Goals (CPGs) are general controls and security practices serving as a living document, with checklists enabling asset owners and end users in critical infrastructure to evaluate their systems’ progress and maturity.

    CISA should convene an independent OT Cybersecurity Advisory Board of voluntary, unbiased individuals, without a financial stake in OEM or cybersecurity products and separate from the SRMAs. To meet with the advisory board for private guidance, asset owners must review their CPG maturity and self-attest their level of CPG implementation by scope, cost, impact, and complexity. The ICS and Cybersecurity Divisions at CISA and the relevant JCDC leaders should work together to sort and review asset owners based on their maturity levels to discuss OT and ICS workflows, data streams, products, services, and recommendations coming from CISA and intended for these entities.  

    This volunteer board would meet with each maturity level group per quarter to review their progress and pain points with the CPGs. In the first quarter the board would meet with less well-resourced organizations with little to no implementation; second quarter meetings would focus on mid-level organizations with several controls and practices; and in the third quarter the board would meet with high-level maturity organizations with many cybersecurity partners and solutions working together to achieve most or all of the CPGs. In the fourth quarter, the independent advisory board, JCDC teams, and representatives from each Sector Coordinating Council would convene to discuss lessons and challenges to reflect in CISA offerings.  

    Periodic maturity reviews of CPG implementation can provide necessary baselines and, in turn, inform the analysis questions raised above, without proposing sector-specific additions. This review also organically unpacks the many tolerance considerations of each asset owner and their risk posture. These baselines, together with the new and available data sources outlined previously, will address prioritization objectives–like identifying top federal resource allocation needs, which systems really need to get off the internet, addressing legacy system vulnerabilities, product logic and configuration best practices, change management, and more robust training and awareness programs.

    Expand training and awareness

    CISA is the de facto hub for critical infrastructure cybersecurity knowledge and shares resources with many partners. As the agency continues to review the use of its resources by partners and the public, a concerted effort is needed to resurrect relevant documentation and workstreams to promote learning and understanding for OT and ICS cybersecurity. Working with groups like the International Society of Automation, the OT Cybersecurity Coalition, and others can facilitate broader and more strategic reconceptualization of risks and priorities across OT and ICS, focusing primarily on awareness and advocacy.

    Just like existing industry programs, pilots, and data sources, several worthwhile training programs exist that can be strengthened and offered to larger audiences to educate, train, exercise, learn, understand, and build resilience. For example, the CISA ICS Training in the Virtual Learning Portal and in person with the Idaho National Lab can be expanded and promoted to many more organizations that may not have internal OT expertise.33 The International Society of Automation’s microlearning modules, including basics like “Cybersecurity for CISOs” and similar modules can be promoted and required training.34

    Many industries have their own resources and groups for training and education, including the information sharing and analysis center (ISAC) communities, exercises like GridEx, Radics, and Liberty Eclipse for the electric sector, industry associations like the National Rural Electric Cooperative Association (NRECA) and American Public Power Association (APPA), research arms like the Cybersecurity Manufacturing Innovation Institute (CyManII), and so on. What is clearly missing is a centralized understanding and cohesion of these similar efforts which can sometimes be perceived by stakeholders as noncomprehensive or feudal, depending on their financial and membership models.

    Expansion and shared outcomes from these and similar exercises can form the foundations for behavioral changes that target the attitudes, beliefs, expectations, and values of the OT and ICS industries. In the future, a significant behavioral norm equivalent to “patch Tuesday” activities in IT security may emerge, becoming second nature for owners and operators. For example, more concerted efforts for “islanding” operations or disconnecting sites from more integrated and digitized SCADA systems could become more commonplace, where owners and operators are more equipped to safely and securely practice failure modes and manual operations.

    Finally, every emergency begins and ends somewhere local. Emergency planning for asset owners should be a mandated requirement by SRMAs. For example, the Incident Command System for Industrial Control Systems (ICS4ICS) is designed to improve global ICS cybersecurity incident management capabilities and planning. ICS4ICS leverages the Incident Command System, as outlined by FEMA, for response structure, roles, and interoperability. The Incident Command System has been tested for more than thirty years of emergency and non-emergency applications, throughout all levels of government and within the private sector.

    Conclusion

    In 2019, the Federal Cybersecurity Research and Development Strategic Plan noted that as cyber-physical systems “become more complex, the interdependence of components increases the vulnerability to attacks and cascading failures.”35 Despite this realization, policy ideas, implementation, and standards continue to focus on vulnerabilities and attacks, with less attention paid to the systemic approaches. Between 2020 and 2024, the number of OT and ICS cybersecurity incidents exceeded the total number reported between 1991 and 2000. 36 Despite this increase in targeting, risks to OT and ICS have not changed drastically since a 2003 GAO hearing on “Critical Infrastructure Protection: Challenges in Security Control Systems.”37

    Each critical infrastructure entity is a vessel delivering products, resources, or services–with a complex system of interdependent digitized systems. Largely non-federal organizations, these entities require consistent and centralized strategy, leadership, and funding. Not incorporating all stakeholders in the relevant policymaking processes results in overlapping and incongruent policy, a range of voluntary and mandatory standards and best practices, and an overall reactionary stance in a discipline and domain that consistently benefits from ample planning and preparedness.

    More clearly defined, coordinated, and shared objectives must be applied across all layers of the iceberg model. This level of coordination will begin to answer the many open questions related to the lack of available data and also help install base awareness of OT and ICS vendor technologies, the threat landscape, and the unique potential for cascading impacts each asset owner faces. Prioritization and understanding vendors, owners and operators, and national security and defense policymakers require a reconceptualization around priorities for OT cybersecurity–its events, patterns, structures, and mental models.

    A key component of this reconceptualization will be the understanding of overlapping cyber risks, operational redundancy, and tolerance. These principles, best understood by each and every asset owner with cyber-physical infrastructure, produce the contingency planning and muscle memory required for resilience. The stitching together of numerous current activities, projects, technologies, and data sources will also require more personnel to contend with the complexity of this problem set and the evolving risk and threat landscape

    Acknowledgements

    The author thanks the following peers for contributing feedback on earlier versions of this paper and ideas and discussion during its creation: Blake Benson, Mark Bristow, Alphaeus Hanson, Trey Herr, Katherine Hutton, Karrie Jefferson, Will Loomis, Rob Morgus, Jen Pedersen, Sean Plankey, Sarah Powazek, Austin Reid, Matt Rogers, Megan Samford, Stewart Scott, and Joe Slowik.

    About the author

    Danielle Jablanski is an ICS cybersecurity strategist at the US Cybersecurity and Information Security Agency (CISA), serving in the Office of the Technical Director. As the lead for ICS strategy, she is responsible for expanding the utility and reach of ICS products and services, coordinating internal and external stakeholder efforts, and maximizing public-private efforts for the OT and ICS cybersecurity industry and critical infrastructure owners and operators. She is also a nonresident fellow at the Cyber Statecraft Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security. As time allows, Jablanski is also an advisor at Kutoa Technologies, and an Adjunct Professor teaching Intro to ICS Cybersecurity at Dallas College. Jablanski has been responsible for conducting academic and market research on emerging technologies throughout her career. She has independently consulted for the US government and a technology startup on novel technology applications for the military, Department of Defense, and commercial sectors. She began her career with the Stanley Center for Peace and Security evaluating cyber technology impacts to nuclear weapons policy and use worldwide. Before returning to the world of physical and industrial cybersecurity, Jablanski was a senior research analyst with Guidehouse Insights and spent the two years prior contributing to the creation and development of the Stanford Cyber Policy Center at Stanford University

    Disclaimer

    Danielle Jablanski previously worked as an OT Cybersecurity Strategist in the private sector. The information, arguments, and recommendations presented in this issue brief were written and provided prior to her joining the Cybersecurity and Infrastructure Security Agency full time in May 2024.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “Public Perceptions on Security Critical Infrastructure,” MITRE, March 2024, https://www.mitre.org/focus-areas/cybersecurity/public-perceptions-securing-critical-infrastructure.
    3    Marina Krotofil, “Industrial Control Systems: Engineering Foundations and Cyber-Physical Attack Lifecycle,” https://67a8c4b4-678a-443b-bcfa-f1260e164991.filesusr.com/ugd/8efadc_0772cf53bffb46b0a64d219b563710c5.pdf?index=true.
    4    “Securing Operational Technology: A Deep Dive into the Water Sector,” US House of Representatives Subcommittee on Cybersecurity and Infrastructure Protection, February 6, 2024, https://homeland.house.gov/hearing/securing-operational-technology-a-deep-dive-into-the-water-sector/.
    5    Tasha Jhangiani and Graham Kennis, “Protecting the Critical of Critical: What Is Systemically Important Critical Infrastructure?” Lawfare, June 15, 2021, https://www.lawfareblog.com/protecting-critical-critical-what-systemically-important-critical-infrastructure.
    6    JD Work (@HostileSpectrum), “Every time one sees an official advocating for a ransomware payment ban, the correct response is not to debate the policy failure modes that result from such…” Twitter, March 11, 2024, https://x.com/HostileSpectrum/status/1767172187176182031.
    7    “Annual Threat Assessment of the U.S. Intelligence Community,” Office of the Director of National Intelligence, February 5, 2024, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2024-Unclassified-Report.pdf.
    8    “National Cybersecurity Strategy,” Office of the National Cyber Director, March 1, 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    9    “CISA Cybersecurity Strategic Plan FY 2024-2026,” Cybersecurity and Infrastructure Security Agency, August 4, 2023, https://www.cisa.gov/sites/default/files/2023-08/FY2024-2026_Cybersecurity_Strategic_Plan.pdf.
    10    “CISA Cybersecurity Strategic Plan FY 2024-2026.”
    11    “CISA Cybersecurity Strategic Plan FY 2024-2026.”
    12    “Improvements Needed in Addressing Risks to Operational Technology,” Government Accountability Office, March 7, 2024, https://www.gao.gov/assets/d24106576.pdf.
    13    “Sector Risk Management Agencies,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors/sector-risk-management-agencies.
    14    “National Security Memorandum on Critical Infrastructure Security and Resilience,” The White House, April 30, 2024, https://www.whitehouse.gov/briefing-room/presidential-actions/2024/04/30/national-security-memorandum-on-critical-infrastructure-security-and-resilience/.
    15    “Strategy for Cyber-Physical Resilience: Fortifying Our Critical Infrastructure for a Digital World,” The White House – President’s Council of Advisors on Science and Technology, February 2024, https://www.whitehouse.gov/wp-content/uploads/2024/02/PCAST_Cyber-Physical-Resilience-Report_Feb2024.pdf.
    16    “Strategy for Cyber-Physical Resilience: Fortifying Our Critical Infrastructure for a Digital World.”
    17    “Critical Infrastructure Protection,” Government Accountability Office, March 1, 2022, https://www.gao.gov/products/gao-22-104279#summary_recommend.
    18    “Measuring and Incentivizing the Adoption off Cybersecurity Best Practices,” President’s National Security Telecommunications Advisory Committee, March 2024, https://www.cisa.gov/sites/default/files/2024-02/2024.02.12_DRAFT_NSTACM%26IReport_508c.pdf.
    19    Rajesh De et al., “Proposed Rule Issued to Implement Cyber Incident Reporting for Critical Infrastructure Act,” Mayer Brown, March 29, 2024, https://www.mayerbrown.com/en/insights/publications/2024/03/proposed-rule-issued-to-implement-cyber-incident-reporting-for-critical-infrastructure-act.
    20    Rajesh De et al., “Proposed Rule Issued to Implement Cyber Incident Reporting for Critical Infrastructure Act.”
    21    Megan L. Brown, “As Cyber Regulators Rush Toward New Rules, Shifting Foundations May Complicate Compliance,” Wiley, April 1, 2024, https://www.wileyconnect.com/As-Cyber-Regulators-Rush-Toward-New-Rules-Shifting-Foundations-May-Complicate-Compliance.
    22    Rajesh De et al., “Proposed Rule Issued to Implement Cyber Incident Reporting for Critical Infrastructure Act.”
    23    “MITRE, Red Balloon Security, and Narf Announce EMB3D – A Threat Model for Critical Infrastructure Embedded Devices,” MITRE, December 13, 2023, https://www.mitre.org/news-insights/news-release/mitre-red-balloon-security-and-narf-announce-emb3d.
    24    “Top 20 Secure PLC Coding Practices,” PLC Security Top 20 List, https://plc-security.com/.
    25    “Strategy for Cyber-Physical Resilience: Fortifying Our Critical Infrastructure for a Digital World,” President’s Council of Advisors on Science and Technology.
    26    “CyberSentry Program,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/resources-tools/programs/cybersentry-program.
    27    “CyTRICS,” Idaho National Laboratory, US Department of Energy, https://cytrics.inl.gov/.
    28    “Cybersecurity Risk Information Sharing Program,” Electricity Information Sharing and Analysis Center, https://www.eisac.com/s/crisp.
    29    “Malcolm,” Idaho National Laboratory, US Department of Energy, https://inl.gov/national-security/ics-malcolm/.
    30    Tyson Macaulay, “Critical Infrastructure Interdependency: Measuring a Moving Target,” Pulse & Praxis: A Journal for Critical Infrastructure Protection, Security and Resilience, March 4, 2024, https://doi.org/10.5683/SP3/Y2CMPZ.
    31    “Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21,” Atlantic Council, March 22, 2023, https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/modernizing-critical-infrastructure-protection-policy-seven-perspectives-on-rewriting-ppd21/.
    32    Danielle Jablanski, quoted in William Loomis, “Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21.”
    33    “ICS Training Available Through CISA,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/ics-training-available-through-cisa.
    34    “Microlearning Modules: A New Learning Tool for Automation Professionals Involved in Cybersecurity,” International Society of Automation, https://www.isa.org/training/microlearning-modules.
    35    “Federal Cybersecurity Research and Development Strategic Plan,” National Science and Technology Council, December 2019, https://www.nitrd.gov/pubs/Federal-Cybersecurity-RD-Strategic-Plan-2019.pdf.
    36    Mark Cristiano, “Cyber Regulation Roadmap: Navigating OT Security,” Industry Today, January 23, 2024, https://industrytoday.com/cyber-regulation-roadmap-navigating-ot-security/.
    37    Danielle Jablanski, “Show Don’t Tell: Four Ways to Address Cyber Risks to Energy Systems,” Guidehouse, May 17, 2021, https://energycentral.com/o/Guidehouse/show-don%E2%80%99t-tell-four-ways-address-cyber-risks-energy-systems.

    The post OT cyber policy: The Titanic or the iceberg appeared first on Atlantic Council.

    ]]>
    Ukraine’s drone success offers a blueprint for cybersecurity strategy https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-drone-success-offers-a-blueprint-for-cybersecurity-strategy/ Thu, 18 Jul 2024 20:28:12 +0000 https://www.atlanticcouncil.org/?p=780918 Ukraine's rapidly expanding domestic drone industry offers a potentially appealing blueprint for the development of the country's cybersecurity capabilities, writes Anatoly Motkin.

    The post Ukraine’s drone success offers a blueprint for cybersecurity strategy appeared first on Atlantic Council.

    ]]>
    In December 2023, Ukraine’s largest telecom operator, Kyivstar, experienced a massive outage. Mobile and internet services went down for approximately twenty four million subscribers across the country. Company president Alexander Komarov called it “the largest hacker attack on telecom infrastructure in the world.” The Russian hacker group Solntsepyok claimed responsibility for the attack.

    This and similar incidents have highlighted the importance of the cyber front in the Russian invasion of Ukraine. Ukraine has invested significant funds in cybersecurity and can call upon an impressive array of international partners. However, the country currently lacks sufficient domestic cybersecurity system manufacturers.

    Ukraine’s rapidly expanding drone manufacturing sector may offer the solution. The growth of Ukrainian domestic drone production over the past two and a half years is arguably the country’s most significant defense tech success story since the start of Russia’s full-scale invasion. If correctly implemented, it could serve as a model for the creation of a more robust domestic cybersecurity industry.

    Stay updated

    As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

    Speaking in summer 2023, Ukraine’s Minister of Digital Transformation Mykhailo Fedorov outlined the country’s drone strategy of bringing together drone manufacturers and military officials to address problems, approve designs, secure funding, and streamline collaboration. Thanks to this approach, he predicted a one hundred fold increase in output by the end of the year.

    The Ukrainian drone production industry began as a volunteer project in the early days of the Russian invasion, and quickly became a nationwide movement. The initial goal was to provide the Ukrainian military with 10,000 FPV (first person view) drones along with ammunition. This was soon replaced by far more ambitious objectives. Since the start of Russia’s full-scale invasion, more the one billion US dollars has been collected by Ukrainians via fundraising efforts for the purchase of drones. According to online polls, Ukrainians are more inclined to donate money for drones than any other cause.

    Today, Ukrainian drone production has evolved from volunteer effort to national strategic priority. According to Ukrainian President Volodymyr Zelenskyy, the country will produce more than one million drones in 2024. This includes various types of drone models, not just small FPV drones for targeting personnel and armored vehicles on the battlefield. By early 2024, Ukraine had reportedly caught up with Russia in the production of kamikaze drones similar in characteristics to the large Iranian Shahed drones used by Russia to attack Ukrainian energy infrastructure. This progress owes much to cooperation between state bodies and private manufacturers.

    Marine drones are a separate Ukrainian success story. Since February 2022, Ukraine has used domestically developed marine drones to damage or sink around one third of the entire Russian Black Sea Fleet, forcing Putin to withdraw most of his remaining warships from occupied Crimea to the port of Novorossiysk in Russia. New Russian defensive measures are consistently met with upgraded Ukrainian marine drones.

    In May 2024, Ukraine became the first country in the world to create an entire branch of the armed forces dedicated to drone warfare. The commander of this new drone branch, Vadym Sukharevsky, has since identified the diversity of country’s drone production as a major asset. As end users, the Ukrainian military is interested in as wide a selection of manufacturers and products as possible. To date, contracts have been signed with more than 125 manufacturers.

    The lessons learned from the successful development of Ukraine’s drone manufacturing ecosystem should now be applied to the country’s cybersecurity strategy. “Ukraine has the talent to develop cutting-edge cyber products, but lacks investment. Government support is crucial, as can be seen in the drone industry. Allocating budgets to buy local cybersecurity products will create a thriving market and attract investors. Importing technologies strengthens capabilities but this approach doesn’t build a robust national industry,” commented Oleh Derevianko, co-founder and chairman of Information Systems Security Partners.

    The development of Ukraine’s domestic drone capabilities has been so striking because local manufacturers are able to test and refine their products in authentic combat conditions. This allows them to respond on a daily basis to new defensive measures employed by the Russians. The same principle is necessary in cybersecurity. Ukraine regularly faces fresh challenges from Russian cyber forces and hacker groups; the most effective approach would involve developing solutions on-site. Among other things, this would make it possible to conduct immediate tests in genuine wartime conditions, as is done with drones.

    At present, Ukraine’s primary cybersecurity funding comes from the Ukrainian defense budget and international donors. These investments would be more effective if one of the conditions was the procurement of some solutions from local Ukrainian companies. Today, only a handful of Ukrainian IT companies supply the Ukrainian authorities with cybersecurity solutions. Increasing this number to at least dozens of companies would create a local industry capable of producing world-class products. As we have seen with the rapid growth of the Ukrainian drone industry, this strategy would likely strengthen Ukraine’s own cyber defenses while also boosting the cybersecurity of the wider Western world.

    Anatoly Motkin is president of StrategEast, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan dedicated to developing knowledge-driven economies in the Eurasian region.

    Further reading

    The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

    The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

    Follow us on social media
    and support our work

    The post Ukraine’s drone success offers a blueprint for cybersecurity strategy appeared first on Atlantic Council.

    ]]>
    Strengthening Taiwan’s resiliency https://www.atlanticcouncil.org/in-depth-research-reports/report/strengthening-taiwans-resiliency/ Tue, 02 Jul 2024 13:00:00 +0000 https://www.atlanticcouncil.org/?p=776535 Resilience is a nation’s ability to understand, address, respond to, and recover from any type of national security risk. Given the scale of risk Taiwan faces from mainland China, domestic resilience should be front and center in Taiwan’s national security strategy, encompassing areas such as cybersecurity, energy security, and defense resilience.

    The post Strengthening Taiwan’s resiliency appeared first on Atlantic Council.

    ]]>

    Table of contents

    Introduction

    This report recommends actions for the new leadership of Taiwan to take to enhance its societal resilience against Chinese aggression in the context of both “gray zone” conflict and wartime attacks. The report focuses on establishing a comprehensive security strategy and analyzes three key areas particularly important for effective resilience: enhancing cybersecurity for critical infrastructures; improving energy security; and accelerating defense transformation.

    The new administration of Lai Ching-te faces both existing resilience challenges and the potential for significantly greater problems if the People’s Republic of China (PRC) pursues expanded gray zone activities or if actual conflict occurs.1 The ongoing challenges include substantial disinformation campaigns, cyberattacks, military incursions, and periodic economic coercion. Potential future challenges could involve expansion of one or more of these ongoing Chinese activities. In the context of a more contested environment such as a quarantine,2 blockade, or a kinetic conflict, Chinese actions could seek to cause leadership failures and loss of social cohesion; undertake cyberattacks to target critical infrastructures; generate energy shortages; and seek to defeat Taiwan militarily before the United States could provide effective support. The potential for such harms substantially increases the importance of resilient responses by Taiwan.

    The report recommends four major sets of actions to enhance Taiwan’s resilience:

    1. Establish a comprehensive security strategy that engages government, the private sector, and individuals in cooperative efforts to ensure all facets of resilience including:
      1. Risk analyses and priority requirements.
      2. Organization of data relevant to responding to challenges from the PRC.
      3. Development of expertise in key areas required for response.
      4. Provision of governmental leadership and activation of the whole nation as part of a comprehensive approach.
    2. Enhance cybersecurity by establishing:
      1. Off-island, cloud-based capabilities to duplicate governmental and other critical functions.
      2. Working arrangements with high-end, private-sector cybersecurity providers.
      3. A surge capability of cybersecurity experts.
      4. Regular engagement with US Cyber Command’s Hunt Forward program.
      5. Alternatives to undersea cables through low-earth orbit (LEO) communications satellites.
    3. Bolster energy security resilience by:
      1. Rationalizing—that is, increasing—energy prices, especially for electricity.
      2. Supporting indigenous supply, including nuclear energy.
      3. Prioritizing energy needs.
      4. Dispersing and hardening energy storage facilities.
      5. Preparing comprehensive rationing plans for energy.
    4. Enhance defense resilience by:
      1. Continuing the trend of higher defense spending to at least 3 percent of gross domestic product (GDP).
      2. Leveraging Taiwan’s strength in high tech manufacturing and shipbuilding to accelerate the development of a Ukraine-style, public-private “capability accelerator”3 for emerging technologies.
      3. Fielding low-cost, high-effectiveness capabilities including unmanned surface vessels, unmanned aerial vehicles, and naval mines.
      4. Incorporating training in emerging technologies and unconventional tactics for conscripts and reserves.
      5. Investing in East Coast port infrastructure as counterblockade strongholds.
      6. Raising the All-out Defense Mobilization Agency (ADMA) to the national level and implementing a larger civil defense force that fully integrates civilian agencies and local governments.

    Establish a comprehensive security strategy

    Resilience is not a new theme in Taiwan. Former President Tsai Ing-wen, who completed two terms in office on May 20, entitled her 2022 National Day Address “Island of Resilience,”4 and similarly identified resilience as a key factor for Taiwan in her two subsequent National Day addresses.5 “The work of making the Republic of China (Taiwan) a more resilient country is now our most important national development priority,” she stated in that 2022 speech, in which she articulated four key areas of  resilience: economy and industry, social safety net, free and democratic government system, and national defense. What is left undone, however, is aligning these and other resilience elements into a comprehensive security strategy similar to those undertaken by Finland6 and Sweden,7 which utilize a whole-of society approach to enhance resilience.

    Resilience is a nation’s ability to understand, address, respond to, and recover from any type of national security risk. Given the scale of risk Taiwan faces from China, domestic resilience should be front and center in Taiwan’s national security strategy.8 Comparable comprehensive national security approaches, such as the Finnish model, aim to foster and enable an engaged national ecosystem of partners, each with a clear understanding of their roles and responsibilities. Finland’s model is instructive, underscoring the importance of engagement by the entire society:

    • The Security Strategy for Society lays out the general principles governing preparedness in Finnish society. The preparedness is based on the principle of comprehensive security in which the vital functions of society are jointly safeguarded by the authorities, business operators, organisations and citizens.9

    Comprehensive security thus is far more than just government activities:

    • Comprehensive security has evolved into a cooperation model in which actors share and analyse security information, prepare joint plans, as well as train and work together. The cooperation model covers all relevant actors, from citizens to the authorities. The cooperation is based on statutory tasks, cooperation agreements and the Security Strategy for Society.10

    The Finnish strategy identifies seven “vital functions” as key areas: leadership; international and European Union activities; defense capability; internal security; economy, infrastructure, and security of supply; functional capacity of the population and services; and psychological resilience.11

    Taiwan has taken a variety of actions to enhance resilience including the establishment in 2022 of the All-out Defense Mobilization Agency.12 That agency has a useful but limited scope with its mandate of “comprehensive management of ‘planning for mobilization, management, service, civil defense, [and] building reserve capacity.’ ”13 But while defense is important (and further discussed below), as the Finnish and Swedish strategies underscore, Taiwan should expand its approach to resilience to include the full spectrum of governmental, private sector, and individual tasks—and the necessary cooperative efforts to make them most effective.

    President Lai’s recent election ushered in an unprecedented third consecutive term for the Democratic Progressive Party.14 This outcome not only provides continuity in the agenda set by the island’s duly elected leader, but also presents an opportunity to sharpen the focus areas for resilience. As Taiwan transitions to a Lai presidency, the challenge of shoring up the island’s resilience should be at the forefront.

    As a valuable starting point for establishing such an expanded resilience strategy, the Lai government should undertake extensive consultations with both Finland and Sweden—which could be facilitated as necessary by the United States. Taiwan should also seek to engage with the Hybrid Center of Excellence, based in Finland, which is an “autonomous, network-based international organization countering hybrid threats.”15

    The discussion below describes several important elements of a comprehensive resilience strategy, and it will be a crucial task for the Lai administration to expand Taiwan’s current efforts to the full scope of such an approach. Resilience is a team game with the whole of society playing a role. But only Taiwan’s central government can act as the team captain, setting expectations, establishing priorities, formulating and communicating national strategy, and coordinating activities. Only leaders in national-level government can oversee the critical work of developing institutional effectiveness in key areas of risk management and resilience.

    As a starting point, Taiwan should undertake a comprehensive audit now to uncover any gaps in the country’s ability to understand, respond to, and recover from both the chronic risks it currently faces and any more acute manifestations of PRC aggression in the future. Taiwan’s government should examine the following areas to pursue greater resilience:

    1. Activating the whole nation: Working with the private sector and local government, and communicating to households are essential to develop a truly comprehensive approach to Taiwan’s resilience.
    2. Understanding risk: Developing a set of scenarios that will help prioritize activities across government and beyond. Prioritizing is critical where resources are limited—as is identifying areas of cross-cutting work that can help to reduce risk in multiple scenarios.
    3. Building data capacity: Laying a foundation for data exploitation needs will be critical for Taiwan, which will need this capacity both ahead of and during any crisis response. Preparing for and providing this capacity is not just the preserve of government, as commercially available and industry data sources will provide critical insights. Planning to access, receive, store, and process this data needs to start early, as the foundations for technical infrastructure, capabilities, data-sharing policies, and data expertise in government all require time and cannot just be activated on the cusp of crisis. Part of this work entails developing scenarios to help analysts map out gaps in information sources (intelligence, open source, commercial, and from allies) that Taiwan will likely need in each circumstance to build situational awareness. Ahead of and during crisis, risk assessment and effective decision-making will be highly dependent on the availability, quality, and usability of intelligence, information, and data.
    4. Expanding its network of professional skills and resources: Assessing the range of skills and the levels of resourcing needed in government to manage a long-term crisis posture should start well ahead of any crisis. It would be helpful to look now at the gaps in key areas of professional expertise: analysts, data experts, crisis-response professionals, and operational planners will all be needed in larger numbers to sustain an effective response. Taiwan will also need professionally administered and well-exercised crisis facilities, resilient technical infrastructure, and business continuity approaches in place.
    5. Preparedness and planning: Thinking through potential impacts of crisis scenarios in advance and working up potential policy and operational responses will bolster the quality of adaptability, which is an essential component of resilience. The process of exercising and refining plans is also helpful to build the professional connections and networks that will be activated during a live response.

    Working with countries that are already developing vanguard resilience capabilities could help Taiwan quickly establish a workable model. For example, the United Kingdom’s National Situation Centre16—built in less than a year during the COVID-19 pandemic—is a model of developing access to critical data in peacetime and lessons learned from previous crisis scenarios about the practical challenges a nation could face in a variety of scenarios. Many commercial providers offer competent ways of displaying data insights on dashboards, and while this is helpful, it is only part of what can be achieved.

    As a model for its broader resilience requirements, Taiwan will have the benefit of its existing efforts including in the counterdisinformation arena, where it has programs as effective as any in the world, despite the fact that Taiwan consistently faces the world’s highest volume of targeted disinformation campaigns.17 The saturation of PRC information manipulation across Taiwan’s traditional and social media platforms is strategically designed to undermine social cohesion, erode trust in government institutions, and soften resistance to Beijing’s forced unification agenda, while sowing doubts about America’s commitment to peace and stability in the region. 

    Taiwan has developed a multifaceted strategy to combat this onslaught, eschewing heavy-handed censorship in favor of promoting free speech and empowering civil society. This approach serves as a beacon for other democracies, demonstrating how to effectively counter disinformation through rapid-response mechanisms, independent fact-checking, along with widespread media literacy initiatives. Collaborative efforts such as the Taiwan FactCheck Center, Cofacts, and MyGoPen have proven instrumental in swiftly identifying and debunking false rumors, notably during the closely watched presidential election on January 13.18

    Taiwan’s Minister of Digital Affairs (MoDA) attributes the island’s success in combating this “infodemic” to its sophisticated civil-sector efforts, which avoids reliance on reactive takedowns of malicious content akin to a game of whack-a-mole. Much like its handling of the pandemic—where Taiwan achieved one of the world’s lowest COVID-19 fatality rates without resorting to draconian lockdowns—it has demonstrated resilience and innovation in the digital sphere.19

    Taiwan’s response to disinformation demonstrates that it is well-positioned to establish a comprehensive approach to societal resilience. The discussion below describes several important elements of a comprehensive resilience strategy, but it will be a crucial task for the Lai administration to expand Taiwan’s current efforts to the full scope of such an approach.

    Cybersecurity and critical infrastructure resilience

    Cyber risks to critical infrastructures

    Like all advanced economies, Taiwan depends on its critical infrastructures. Critical infrastructures have been described as “sectors whose assets, systems, and networks, whether physical or virtual, are considered so vital . . .  that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety.”20 Since several critical infrastructures are interlinked, it is important in evaluating resilience to “capture cross-cutting risks and associated dependencies that may have cascading impacts within and across sectors.”21 Among those interlinked critical infrastructures are energy, communications, transportation, and water. Each of these are critical to society as a whole and each are dependent on digital technology for their operations.

    In Taiwan, the Administration for Cyber Security has identified critical infrastructures “by their feature types into the following eight fields: energy, water resource, telecommunications, transportation, banking and finance, emergency aid and hospitals, central and local governments, and high-tech parks.”22 It is worth underscoring that several of Taiwan’s critical infrastructures, such as the electric grid23 and the water system,24 are significantly centralized or have other notable vulnerabilities such as the dependency on undersea cables for international communications25 that increase the potential consequences from a successful cyberattack.

    The Taiwan government has fully recognized the significant risks from cyberattacks. As described by Taiwan’s Administration for Cyber Security, “Due to Taiwan’s unique political and economic situation, the country faces not only a complex global cyber security environment but also severe cyber security threats, making the continuous implementation and improvement of cyber security measures a necessity.”26

    The number of cyberattacks against Taiwan is notable.27 Published estimates range from five million cyberattacks per day against Taiwanese government agencies28 to the detection of 15,000 cyberattacks per second, including attempted intrusions, in Taiwan during the first half of 2023.29

    The attacks often focus on key societal infrastructures. A recent Voice of America report noted that just prior to the January 2024 elections:

    • Most of the attacks appeared to focus on government offices, police departments, and financial institutions, with the attackers focused on internal communications, police reports, bank statements and insurance information.30

    Google researchers have likewise described the cyber threat to key critical infrastructures, revealing that it is “tracking close to 100 hacking groups out of China [and that these] malicious groups are attacking a wide spectrum of organizations, including the government, private industry players and defense organizations.”31

    The attacks themselves are often relatively sophisticated. Trellix, a cybersecurity firm, described multiple techniques utilized by attackers that “focused on defense evasion, discovery, and command and control . . . to subvert system defenses to gather information about accounts, systems, and networks.” Among them are “living-off-the-land” techniques, which allow attackers to maintain their intrusions over time with smaller chances of detection.32

    While no one can say with certainty what actions the PRC would take in the context of a blockade of or outright conflict with Taiwan, the United States is clear-eyed about the potential for attacks on its own critical infrastructures if engaged in conflict with China. The February 2023 Annual Threat Assessment of the US Intelligence Community notes the likelihood of such PRC cyberattacks in that context:

    • If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide . . .  China almost certainly is capable of launching cyber attacks that could disrupt critical infrastructure services within the United States, including against oil and gas pipelines, and rail systems.33

    The ongoing Russian cyberattacks against Ukraine in the Russia-Ukraine war further underscore the reality of critical infrastructures as a target in a conflict. It seems reasonable to assume that comparable actions (and perhaps even more) would be undertaken against Taiwan in the event of a blockade or kinetic conflict. “Probable targets,” according to James A. Lewis, would include critical infrastructures such as electrical power facilities, information and communications systems, and pipelines.34

    Actions to enhance Taiwan’s cyber resilience

    Taiwan can enhance its cyber resilience through its own actions and in collaborative activities with private-sector companies and with the United States. While cyberattacks can be highly disruptive, one of the important lessons of the Ukraine-Russia conflict is that the effects on operations can be mitigated, as described in a CyberScoop analysis that underscores a shift in expectations:

    • The war has inspired a defensive effort that government officials and technology executives describe as unprecedented—challenging the adage in cybersecurity that if you give a well-resourced attacker enough time, they will pretty much always succeed. The relative success of the defensive effort in Ukraine is beginning to change the calculation about what a robust cyber defense might look like going forward.35

    According to the analysis, the critical element for such success has been significant multinational and public-private collaboration:

    • This high level of defense capability is a consequence of a combination of Ukraine’s own effectiveness, significant support from other nations including the United States and the United Kingdom, and a key role for private sector companies.
    • The defensive cyber strategy in Ukraine has been an international effort, bringing together some of the biggest technology companies in the world such as Google and Microsoft, Western allies such as the U.S. and Britain and social media giants such as Meta who have worked together against Russia’s digital aggression.36

    Actions by Taiwan

    Taiwan should utilize the Ukraine model of cyber resilience—backed in part by private-sector companies—and take comparable actions to enhance its cybersecurity. Taiwan has a substantial existing cybersecurity framework on which to build such mitigating actions. Since 2022, the Ministry of Digital Affairs, through its Administration for Cyber Security, is responsible for “implementing cyber security management and defense mechanisms for national critical infrastructures” including “evaluating and auditing cyber security works at government agencies and public entities.”37 Utilizing that framework, Taiwan should undertake the following four actions that would significantly enhance the island’s cybersecurity resilience.

    First, Taiwan should utilize cloud-based capabilities to establish a duplicative set of cyber-enabled governmental functions outside of Taiwan. Ukraine undertook such actions, thereby rendering Russian cyberattacks in Ukraine unable to disrupt ongoing governmental activities. Taiwan’s Ministry of Digital Affairs has been evaluating the use of public clouds including the possibility of  “digital embassies” abroad to hold data.38 Taiwan should organize such actions with key cloud providers such as Amazon Web Services, which provided support to Ukraine.39 The United States should work with Taiwan and appropriate cloud providers to help effectuate such a result.

    Second, Taiwan should establish arrangements with private-sector cybersecurity providers to undertake defensive operations against PRC cyberattacks in the context of a blockade or kinetic conflict. As noted above, such private-sector actions have been instrumental to Ukraine, and would similarly be invaluable for Taiwan. The United States should also help facilitate such private-sector defensive cyber operations for Taiwan.

    Third, Taiwan should organize a surge capability of individual cybersecurity experts who can be called upon to complement governmental resources. Both Estonia and the United Kingdom have very effective cyber-reserve approaches, and Taiwan should engage with each country, seeking lessons learned as part of establishing its own reserve corps.

    Fourth, Taiwan needs to accelerate its low-earth orbit satellite communications program. The Ministry of Digital Affairs’ two-year, US$18 million plan to strengthen the resilience of government communications entails building more than 700 satellite receiver stations. The impetus: ships from mainland China have repeatedly severed submarine internet cables in what Taiwan perceived as “a trial of methods” that the PRC could use to prepare for a military invasion.40

    The existing program involves satellites as well as ground-based receivers. The Taiwan Space Agency disclosed its plan for a “dedicated” LEO satellite communications project in late 2022,41 as a public-private partnership: 

    • Distinct from traditional government programs, this groundbreaking project is structured as a privately operated venture, wherein the Taiwanese government would retain a substantial minority ownership. . . . This project intends to enhance the Taiwan Space Agency’s initial proposal for two government-built LEO satellites by evolving it into a “2+4” configuration. This will involve constructing four additional satellites through collaborative efforts between the public and private sectors.42

    Actions with the United States

    In accord with the Taiwan Relations Act,43 and as a matter of long-standing policy, the United States strongly supports Taiwan’s defensive capabilities including for cybersecurity. The Integrated Country Strategy of the American Institute in Taiwan (essentially the unofficial US embassy) specifically provides that “bolster[ing] Taiwan’s cybersecurity resilience” is one of the United States’ strategic priorities for the island.44 To support that objective, the United States can enhance Taiwan cybersecurity through cooperative defensive activities.

    First, US Cyber Command regularly supports the network resilience of allied countries and partners through its “Hunt Forward” operations, which are “strictly defensive” joint ventures, undertaken following an invitation from the ally or partner, to “observe and detect malicious cyber activity” on these networks, together searching out “vulnerabilities, malware, and adversary presence.”45

    While Taiwan has not been specifically identified as a Hunt Forward participant, Anne Neuberger, who is the US deputy national security advisor for cyber and emerging technology, said at a Politico Tech Summit in 2023 that in the event of a major cyberattack on Taiwan, the United States would “send its best teams to help hunt down the attackers, the same approach typically used to help global allies in cyberspace.”46 She described the typical approach as:

    • Putting our best teams to hunt on their most sensitive networks to help identify any current intrusions and to help remediate and make those networks as strong as possible.”47

    Neuberger also highlighted US work with Taiwan to carry out military tabletop games and exercises to prepare for potential cyberattack.48

    More recently, the National Defense Authorization Act (NDAA) for Fiscal Year 2024 explicitly authorized the Defense Department to cooperate on:

    • Defensive military cybersecurity activities with the military forces of Taiwan to (1) defend military networks, infrastructure, and systems; (2) counter malicious cyber activity that has compromised such military networks, infrastructure, and systems; (3) leverage United States commercial and military cybersecurity technology and services to harden and defend such military networks, infrastructure, and systems; and (4) conduct combined cybersecurity training activities and exercises.49

    Going forward, those authorities authorize not only Hunt Forward actions but also actions to  leverage commercial and military technology to harden such networks (which would seem to resolve any export control issues) and to conduct combined training and exercises, all of which underscores clear congressional approval for enhanced cybersecurity activities with Taiwan.50

    Second, the United States should undertake to enhance Taiwan’s communications resilience by making available access to US commercial and military LEO networks. The important role of the commercial provider Starlink in assuring communications in the context of the Ukraine-Russia war is well-known.51 Starlink’s parent company, SpaceX, is, however, controlled by Elon Musk, whose Tesla company has major investments in China. That linkage has raised the question of whether Taiwan could rely on any commercial arrangements it might make on its own with Starlink—particularly since Starlink did impose some limitations on Ukraine’s use of the network.52 However, as previously described by one of the authors of this report, the US government has sway on such matters:

    • The Defense Production Act authorizes the [US] government to require the prioritized provision of services—which would include services from space companies—and exempts any company receiving such an order from liabilities such as inability to support other customers.53

    Accordingly, the US should rely on this authority to organize appropriate arrangements with Starlink—and other space companies that provide like capabilities—to ensure access that would support Taiwan communications. One way to do this would be to incorporate appropriate terms into the commercial augmentation space reserve (CASR) program arrangements that US Space Force is currently negotiating with civil space providers,54 as part of the Department of Defense’s overall commercial space strategy.55

    Additionally, the DOD is developing its own LEO capability through a variety of constellations being put in place by Space Force.56 Pursuant to the recent NDAA authorization noted above, DOD should work with the Taiwan military to ensure that those constellations will be available to support Taiwan as necessary.

    Longer term, the United States should also undertake to enhance the resilience of Taiwan’s undersea cables. As previously proposed by one of the authors, the United States should lead in establishing an international undersea infrastructure protection corps. It should:

    • Combine governmental and private activities to support the resilience of undersea cables and pipelines. Membership should include the United States, allied nations with undersea maritime capabilities, and key private-sector cable and pipeline companies.57

    Such an activity would include focus on cybersecurity for undersea cable networks, hardening and other protection for cable landing points, and capabilities and resources to ensure expeditious repair of cables as needed.58 To be sure, getting such an activity up and running will necessarily be a multiyear effort. However, Taiwan’s vulnerability underscores the importance of beginning promptly and working as expeditiously as possible.

    Cybersecurity recommendations for Taiwan

    • Utilize cloud-based capabilities to establish a duplicative set of cyber-enabled governmental functions outside of Taiwan.
    • Establish arrangements with private-sector cybersecurity providers to undertake defensive operations against PRC cyberattacks.
    • Organize a surge capability of individual cybersecurity experts who can be called upon to complement governmental resources.
    • Accelerate the low-earth orbit satellite communications program.
    • Actively engage with Cyber Command’s Hunt Forward activities.
    • Enhance Taiwan’s communications resilience by making available access to US commercial and military LEO networks.
    • Undertake on a longer-term basis enhanced resilience of Taiwan’s undersea cables.

    Energy

    As part of its efforts to enhance resilience, Taiwan must mitigate its energy vulnerabilities, as its reliance on maritime imports for about 97 percent59 of its energy needs creates acute risks. To lessen its dependency on maritime imports and strengthen its resiliency in the face of potential PRC coercion, Taiwan should curb energy and electricity demand, bolster indigenous supply, overhaul its inventory management, and prepare rationing plans. A resilient energy security approach would credibly signal to the PRC that Taiwan could hold out for long durations without maritime resupply.

    Curbing demand by rationalizing prices 

    Taiwan’s ultra-low electricity prices are a security risk (and a black eye for its climate targets). Reliance on seaborne energy shipments presents straightforward security problems, and Taiwan’s low electricity prices subsidize consumption that is being met by imports of hydrocarbons, especially coal. The new Lai administration should make haste prudently, increasing electricity prices more frequently and significantly, without exceeding the limits of the politically possible.

    Taiwan’s electricity price quandary is illustrated by Taipower, the state-owned monopoly utility. In 2022 and 2023, Taipower lost 227.2 billion New Taiwan dollars (NTD) and 198.5 billion NTD, respectively, as its per kilowatt hour cost of electricity sold substantially exceeded per unit prices.60 Taipower’s prices failed to offset the steep rise in electricity input costs amid Russia’s invasion of Ukraine and the post-COVID-19 unsnarling of supply chains.

    Taiwan’s electricity costs remain too low, diminishing the island’s resiliency, although policymakers have now taken some steps in light of the problem. The Ministry of Economic Affairs’ latest electricity price review, in March 2024, raised average prices by about 11 percent, with the new tariff reaching about 3.4518 NTD, or approximately $0.11 USD/kWh.61 This rationalization of prices, while welcome, is insufficient. In the United States, the rolling twelve-month ending price in January 2024 for all sectors totaled $0.127/kWh.62 Taiwan’s heavily subsidized electricity consumers therefore enjoy a discount in excess of 13 percent compared to their US counterparts, despite US access to low-cost, abundant, and indigenously produced energy.

    Taiwan’s heavily subsidized electricity prices incentivize maritime imports, especially coal. Astonishingly, Taiwan was the world’s largest per capita user of coal generation for electricity in 2022, higher than even Australia, a major coal exporter.

    Taiwan’s low electricity prices and use of coal expose the island to PRC economic coercion. Taiwan’s dependency on imported coal heightens its vulnerability in the summer, when the island’s electricity-generation needs peak. Concerningly, Taiwan has already experienced electricity shortfalls in summer peacetime conditions, including a wave of outages63 between July and August 2022. With the island’s future summer cooling needs set to rise even further due to climate change and hotter temperatures, Taiwan’s electricity needs pose a vulnerability that the PRC may attempt to exploit.

    Curbing Taiwan’s electricity demand during summer months is critical, necessitating a rise in prices. While this reduction is a principal energy security challenge, the island must also do more to secure supply, especially for nuclear energy.

    Supply: Support indigenous production

    Taiwan’s resiliency will be strengthened by producing as much indigenous energy as possible, especially during the critical summer months. Taiwan, which has virtually no hydrocarbon resources, can therefore indigenously produce only four different forms of energy at scale: nuclear energy, offshore wind, onshore wind, and solar. Taiwan should pursue each of these indigenous energy sources. Taiwan should apply “carrots” by strengthening incentives and payments for indigenous production. At the same time, applying the “stick” of higher prices to energy consumption, especially for energy imports, would bolster the island’s resiliency.

    Taiwan’s renewable resources are significant and often economically viable, but they cannot secure adequate levels of resiliency by themselves. Taiwan’s wind speeds slow in the summer,64 limiting onshore and offshore wind’s effectiveness in bolstering energy security. Additionally, Taiwan’s stringent localization requirements for offshore appropriately minimize PRC components and sensors in Taiwan’s offshore wind turbines, but also raise the costs of this technology. Taiwan’s solar potential65 is also limited66 by cloudy skies, frequent rainfall, and land scarcity.

    Accordingly, nuclear energy is the most viable way for Taiwan to address its summer electricity needs without turning to maritime imports. While Taiwan’s nuclear reactors must acquire fuel from abroad, this fuel can be used for approximately eighteen to twenty-four months.67 Taiwan should maintain its existing nuclear energy capacity; restart retired capacity as soon as politically and technically feasible; and seek new, incremental capacity over time.68

    Unpacking Taiwan’s storage complexities: Dispersal and hardening is critical

    To cope with various contingencies, including the possibility of a prolonged summertime blockade, Taiwan should increase its stockpiles of energy, disperse inventory around the island, and harden facilities.

    While Taiwan’s ability to hold out against a blockade involves by many factors, energy inventories are a critical element. Taiwan’s electricity reserves are limited: it reported fifty-six days of supply of coal inventories in February 2023,69 and aims to raise its natural gas inventories from eleven days to more than twenty days by 2030.70 These inventory levels should be expanded, in part because “days of supply” fail to encapsulate uncertainty. Demand fluctuates depending on temperature and other variables, while Taiwan’s access to energy storage inventories faces the risk of sabotage and, in certain scenarios, kinetic strikes.

    Taiwan’s management of petroleum reserves is a matter of great importance, given the use of these fuels, especially diesel, for military matters. Taiwan’s Energy Administration, in the Ministry of Economic Affairs, reported in April 2024 that its total oil inventories stood at 167 days of supply.71 This topline figure presents an overly optimistic portrait of Taiwan’s petroleum security, however. For instance, Taiwan’s government-controlled inventories in April 202472 included 2.6 million kiloliters of crude oil and refined products; private stocks added another 6.5 million kiloliters. Accordingly, Taiwan reports forty-seven days of supply from government stockpiles, with an additional 120 days from private inventories.73 Given that domestic sales and consumption equated to about 54,200 kiloliters per day from prior comparable periods,74 Taiwan calculated it had about 167 days of supply.

    There may, however, be insufficient monitoring of private inventories. Marek Jestrab observed:

    • A concerning—and possibly significant—loophole exists in these laws, where the criteria and computation formulas for the actual on-hand security stockpiles will be determined by the central competent authority, and are not required to be disclosed. This presents the opportunity for energy that is loaded onboard merchant shipping while in transit to Taiwan to count toward these figures.75

    While Taiwan should ensure that stockpiles are actually on the island, and not at sea, it also needs to carefully examine the inventory split between crude oil and crude products, such as diesel, gasoline, jet fuel, etc. Additionally, Taiwan’s policymakers should not expect to rely on its crude inventories, which only have a latent potential: crude oil cannot be used until it is refined into a crude product. Therefore, if the PRC disrupted Taiwan’s refineries via cyber or even kinetic means, Taiwan would not be able to access the totality of its crude oil reserves.

    Taiwan’s military requirements for fuel would likely surge during a confrontation or conflict with the PRC, reducing the “days of supply.” Since Taiwan’s military vehicles largely run on diesel, the island should pay careful attention to this product.

    Taiwan should disperse and harden its energy assets, especially diesel storage, as concentrated objects would present inviting targets for the PRC. Beijing is studying Russia’s invasion of Ukraine closely and will not fail to notice that Moscow attacked about 30 percent of Ukrainian infrastructure in a single day.76 As one author witnessed during his recent visit to Kyiv, Ukraine’s dispersal of electricity assets is achieving a reasonable degree of success. Indeed, Russia’s more recent campaign77 attacking large-scale thermal and hydroelectric power plants illustrates the utility of dispersed energy infrastructure. Like Ukraine, Taiwan should disperse and harden its energy storage inventories to the maximal feasible extent.

    Rationing plans

    While both Taiwan’s electricity supply and demand will be very hard to predict in a state of emergency, rationing plans must be considered—especially for the island’s manufacturing and semiconductor industries.

    Taiwan’s economy is uniquely78 tied to electricity-intensive manufacturing, as industrial consumers accounted for more than 55 percent of Taiwan’s electricity consumption in 2023.79 Most of these industrial producers (such as chipmaker Taiwan Semiconductor Manufacturing Company) service export markets—not Taiwan. While the PRC might attempt to disrupt the island’s energy and electricity supply via cyber and kinetic means, Taiwan’s electricity consumption would fall dramatically during a crisis if Taiwan’s industries were forced to shut down. Although the closure of Taiwan’s industry would prove economically ruinous, it would also make the island’s electricity and energy issues much more manageable. Adding an additional layer of complication, many of Taiwan’s most valuable exports – such as chips – are shipped via civilian airliners, not on seaborne vessels, and would consequently be more difficult to interdict in circumstances short of war.80 Taiwan should prepare rationing plans for a variety of contingencies, adapting to a range of scenarios, including a quarantine, siege, or even kinetic conflict. Taiwan must be ready. 

    Energy recommendations for Taiwan 

    • Gradually raise electricity and energy prices, communicating that price hikes will persist and require significant adjustments over the medium term.
    • Expand the frequency of electricity price reviews from twice a year to a quarterly basis. More frequent price adjustments will allow smaller incremental increases while also enabling Taiwan to respond more quickly to potential contingencies.
    • Expand fiscal support for indigenous forms of energy. Demand-side management programs could include virtual power plants, building efficiency measures, two-way air conditioning units, and more. On the supply side, Taiwan should incentivize indigenous energy production, including nuclear energy, onshore wind, offshore wind, and solar.
    • Extend the life of Taiwan’s nuclear energy power plants and consider expanding capacity. Nuclear energy is not only Taiwan’s best option for meeting its summer generation needs but also extremely safe and reliable. In the event of a conflict, the PRC is extremely unlikely to launch highly escalatory and provocative attacks against nuclear facilities on territory it seeks to occupy.
    • Bolster domestic energy supplies and decarbonization objectives including by considering easing localization requirements for offshore wind projects—while ensuring that PRC components and sensors are not incorporated.
    • Disperse and, where possible, harden energy and electricity assets and volumes across the island for both military and civil defense needs.
    • Examine potential alternatives to diesel, as diesel inventories can begin to degrade after several weeks, including “long-duration diesel” solutions that, while more polluting, could extend the shelf life of its inventories, enhancing the durability of Taiwan’s military and civil defense efforts.
    • Deepen liquified natural gas (LNG) ties with the United States. Contracting with US LNG producers would moderately bolster Taiwan’s energy security, as the PRC would be more reluctant to interdict US cargoes than vessels from other nations.
    • Conduct comprehensive studies into energy contingency planning, examining how energy and electricity would be prioritized and rationed during various scenarios.

    Food and water resiliency

    Taiwan’s food supply needs will be significant in the event of a contingency, but pale in comparison to its energy and water requirements. Taiwan’s water security is a serious concern, as it is already suffering from water access issues in noncrisis periods. Taiwan should prioritize scarce land for electricity generation, especially onshore wind and solar, which are much less water-intensive than coal and natural gas generation. Repurposing farmland for renewables would ease Taiwan’s electricity and water needs in peacetime and during any crisis.

    Taiwan’s food security challenges are serious, but manageable. The island’s self-sufficiency ratio for food stands at about 40 percent, after rising somewhat in recent years. Unlike energy, however, Taiwan can both store food, especially rice, and replenish these inventories. Meals ready to eat (MREs) can store for more than eighteen months.

    Additionally, the island would likely be able to resupply itself aerially in all situations short of conflict. The PRC might well be extremely reluctant to shoot down a civilian aircraft resupplying Taiwan with food. The PRC’s shootdown of a civilian aircraft would damage external perceptions of the PRC, and strengthen global support for sanctions. While there can be no certainty, the PRC’s self-interest in managing perceptions of a confrontation would increase the likelihood of the safe transit of aerial and perhaps even maritime food deliveries to the island.

    Taiwan’s water access problems are serious. Water shortages have manifested even in peacetime, as Taiwan experienced a severe drought in 2021. During a contingency with the PRC, Beijing might attempt to exploit this vulnerability.

    Luckily, Taiwan’s water resiliency can be strengthened by tackling agricultural consumption and, wherever politically and technically feasible, repurposing farmland for energy generation. From 2013 to 2022, 71 percent of Taiwan’s water consumption was attributable to agriculture. Meanwhile, Taiwan’s industries comprised only 10 percent of demand during that period, with domestic (i.e., residential and commercial) consumption accounting for the remainder. Taiwan’s water needs are growing, due to “thirsty” industrial customers, but the agricultural sector is primarily responsible for the majority of the island’s consumption, although consumption and supply sources vary across the island.

    Taiwan’s policymakers recognize its water problems and have begun raising water prices,  especially for heavy users. Taiwan should continue to encourage efficiency by gradually but perceptibly increasing water prices. Concomitantly, it should further reduce demand by repurposing water-intensive farmland for electricity generation, when feasible. Repurposing farmland will undoubtedly prove politically difficult, but it will also improve Taiwan’s water and electricity resiliency.

    Food and water security recommendations 

    • Prioritize energy and water security needs over food production.
    • Secure and disperse inventories of foodstuffs, such as MREs, medicines, and water, along with water purification tablets.
    • Bolster the island’s cold storage supply chains and overall foodstuff inventories.
    • Plan and work with partners to stage food supply if a Berlin airlift-style operation becomes necessary.
    • Continue to encourage water conservation by increasing water prices gradually but steadily.
    • Ensure redundancy of water supplies and systems, especially in the more populous northern part of the island.
    • Ensure that drinking water and sanitation systems can operate continuously, after accounting for any electricity needs.
    Gustavo F. Ferreira and J. A. Critelli, “Taiwan’s Food Resiliency—or Not—in a Conflict with China,” US Army War College Quarterly: Parameters 53, no. 2 (2023), doi:10.55540/0031-1723.3222; Joseph Webster, “Does Taiwan’s Massive Reliance on Energy Imports Put Its Security at Risk?,” New Atlanticist, Atlantic Council blog, July 7, 2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/does-taiwans-massive-reliance-on-energy-imports-put-its-security-at-risk/; Amy Chang Chien, Mike Ives, and Billy H. C. Kwok,  “Taiwan Prays for Rain and Scrambles to Save Water,” New York Times, May 28, 2021, https://www.nytimes.com/2021/05/28/world/asia/taiwan-drought.html; “Water Resources Utilization,” Ministry of Economic Affairs (MOEA), Water Resources Agency, 2022, https://eng.wra.gov.tw/cp.aspx?n=5154&dn=5155; Meng-hsuan Yang, “Why Did Formosa Plastics Build Its Own Desalination Facility?,” CommonWealth Magazine, May 31, 2023, https://english.cw.com.tw/article/article.action?id=3440; and Chao Li-yen and Ko Lin, “Taiwan State-Owned Utility Evaluates Water Price Adjustments,” Focus Taiwan, January 26, 2024, https://focustaiwan.tw/society/202401260017#:~:text=As%20of%20Aug.
    The Berlin airlift of 1948 and 1949 demonstrates the power of aerial food replenishment logistics in an uncontested environment. From June 26, 1948, to September 30, 1949, Allied forces delivered more than 2.3 million tons of food, fuel, and supplies to West Berlin in over 278,000 airdrops. While Taiwan’s population of more than twenty-three million is significantly larger than West Berlin’s population of 2.5 million, the world civilian air cargo fleet has expanded dramatically over the past seventy-five years. In all situations short of conflict, Taiwan would be able to restock food from the air. For more on the Berlin airlift, see Katie Lange, “The Berlin Airlift: What It Was, Its Importance in the Cold War,” DOD News, June 24, 2022, https://www.defense.gov/News/Feature-Stories/Story/Article/3072635/the-berlin-airlift-what-it-was-its-importance-in-the-cold-war/.

    Enhancing defense resilience

    Ever since Beijing leveraged then-Speaker Nancy Pelosi’s August 2022 visit to Taiwan as an excuse to launch large-scale joint blockade military exercises, pundits have labeled the residual military situation around Taiwan as a “new normal.” Yet there is really nothing normal about a permanent presence of People’s Liberation Army (PLA) Navy warships menacingly surrounding the island along its twenty-four nautical mile contiguous zone, and nothing usual about increasing numbers of manned and unmanned military aircraft crossing the tacit median line in the Taiwan Strait—a line that held significance for seven decades as a symbol of cross-strait stability. Nor should it be viewed as normal that a steady stream of high-altitude surveillance balloons—which are suspected of collecting military intelligence—violate Taiwan’s airspace.81 Some have better described this “new normal” as a strategy akin to an anaconda noticeably tightening its grip around the island, drawing close enough to reduce warning time and provocative enough to raise the risk of inadvertent clashes. In other words, the PRC has unilaterally dialed up a military cost-imposition campaign meant to chip away at peace and stability across the Taiwan Strait, wear down Taiwan’s military, and erode confidence and social cohesion in Taiwan society. 

    Russia’s full-scale invasion of Ukraine in 2022 was an additional wake-up call for the citizens of Taiwan, following mainland China’s 2019 crackdown on Hong Kong freedoms, heightening recognition of the risks presented by the PRC and, in particular, that the long-standing status quo in cross-strait relations is no longer acceptable to Beijing. Taiwan thus finds itself in the unenviable position of simultaneously countering PLA gray zone intrusions and cognitive warfare—what NATO calls affecting attitudes and behaviors to gain advantage82—while beefing itself up militarily to deter the growing threat of a blockade or assault.

    With this backdrop, Taipei authorities have since embarked on long-overdue reforms in defense affairs, marked by several developments aimed at bolstering the island democracy’s military capabilities and readiness in the face of growing threats from Beijing.

    First, Taiwan’s overall defense spending has undergone seven consecutive year-on-year increases, reaching 2.5 percent of gross domestic product.83 While this is commendable, Taiwan’s defense requirements are very substantial, and its budget in US dollars is only $19.1 billion.84 Accordingly, it will be important for Taiwan to continue the trend of higher defense spending to at least 3 percent of GDP both to bolster Taiwan’s military capabilities and as a deterrent signal to Beijing—and also to garner international community recognition that Taiwan is serious about its own defense. A key element will be to ensure that Taiwan has sufficient stocks of ammunition and other weapons capabilities to fight effectively until the United States could fully engage and in the event of a longer war. One area that deserves a high degree of attention is defense against ballistic and cruise missiles and unmanned vehicles. Especially in light of the recent coalition success in defeating such Iranian attacks against Israel, planning should be undertaken to assure comparable success for Taiwan against PRC attacks. Adding mobile, short-range air defenses to the high-priority list of military investments for Taiwan—such as the highly mobile National Advanced Surface-to-Air Missile System (NASAMS)85—will make it harder for the PLA to find and destroy Taiwan defenses, especially if combined with passive means for target detection and missile guidance.

    Second, the new president can kick-start an enhanced approach to defense by embracing full integration of public-private innovation and adopting Ukraine’s model of grass-roots innovation for defense, which has served it well through a decade of war against a much larger Russia. Recognizing that innovation is itself a form of resilience, Taiwan can draw valuable lessons from Ukraine, particularly in leveraging private-sector expertise. By implementing what some Ukrainian defense experts term a “capability accelerator” to integrate emerging technologies into mission-focused capabilities, Taiwan can enhance its resilience and swiftly adapt to evolving security challenges, including rapidly fielding a high volume of unmanned systems to achieve distributed surveillance, redundant command and control, and higher survivability.86 This comprehensive approach, which recognizes the private sector as the greatest source of innovation in today’s complex security environment, holds significant potential for enhancing Taiwan’s defense capabilities through the utilization of disruptive technologies. The island’s overall resilience would significantly benefit by drawing the private sector in as a direct stakeholder in national defense matters. 

    Ukraine’s grass-roots model of defense innovation, spearheaded by volunteers, nongovernment organizations, and international partners, is a worthy and timely model for Taiwan. Ukraine’s approach has yielded significant advancements in drone warfare, as well as sophisticated capabilities like the Delta battlefield management system—a user-friendly cloud-based situational awareness tool that provides real-time information on enemy and friendly forces through the integration of data from sources such as drones, satellites, and even civilian reports.87 This collaborative model, reliant on cooperation between civilian developers and military end users, has propelled Ukraine’s military technological revolution by integrating intelligence and surveillance tasks, while enhancing decision-making and kill-chain target acquisition. Taiwan will benefit from a comparable approach.

    Third, as suggested above, Taiwan should focus a large portion of its defense budget on low-cost, highly effective systems. In terms of force structure, it appears that Taiwan has settled on a design that blends large legacy platforms of a twentieth-century military with the introduction of more survivable and distributable low-end asymmetric capabilities. The latter are best exemplified by Taiwan’s indigenously produced Ta Chiang-class of high-speed, guided-missile corvettes (PGG) and Min Jiang-class fast mine laying boats (FMLB).88 But much more must be done to bolster Taiwan’s overall defense capabilities by focusing on less expensive, but nonetheless highly effective systems.

    In Ukraine’s battle against Russian Federation invaders, drones have provided Ukrainian forces with important tactical capabilities by enabling them to gather intelligence, monitor enemy movements, and conduct precision strikes on high-value targets. Taiwan can comparably utilize low-cost UAVs to establish mesh networks that connect devices for intelligence, surveillance, and reconnaissance and for targeting that would be invaluable in countering a PRC amphibious assault. Lessons from Ukraine further highlight the importance of having the right mix of drone types and capabilities in substantial stockpiles, capable of a variety of missions. Notably, Ukrainian officials have called for the production of more than one million domestically produced drones in 2024.89 Then-President Tsai’s formation of a civilian-led “drone national team” program is a commendable step in this direction and underscores the power of collaborative innovation in joint efforts between  users.90 Encouraging cooperation between Taiwan drone makers and US private industry will accelerate the development of a combat-ready unmanned systems fleet with sufficient range, endurance, and payload to enhance situational awareness and battlefield effects. 

    Concurrent with those efforts utilizing unmanned systems, Taiwan should bolster its naval mining capabilities as a strategic measure against PRC aggression. Naval mines represent one of the most cost-effective and immediately impactful layers of defense.91 In this regard, Taiwan’s new Min Jiang class of FMLB represents the right type of investments in capabilities which could prove pivotal in thwarting potential invasion attempts.

    Even more significantly for a Taiwan audience, Ukraine broke a blockade of its Black Sea ports using a combination of naval drones and coastal defense missiles—and repelled the once-mighty Russian Black Sea Fleet—all without a traditional navy of its own.92 Faced with clear intent by a PLA Navy practicing daily to isolate the island, the time is past due for Taiwanese authorities to hone their own counterblockade skills including a heavy reliance on unmanned surface vehicles. 

    Taiwan should also make rapid investments in port infrastructure and defenses along Taiwan’s eastern seaboard in places such as Su’ao and Hualien harbors, which can serve as deepwater ports that are accessible, strategic, antiblockade strongpoints, and where any conceivable PLA blockade would be at its weakest and most vulnerable point logistically. Su’ao harbor, as a potential future homeport for Taiwan’s new indigenous Hai Kun-class diesel submarines, could also serve a dual purpose as an experimentation and development zone for public-private collaboration on unmanned-systems employment and operations. Infrastructure investments in East Coast ports could enhance the island’s ability to attain emergency resupply of energy, food, humanitarian supplies, and munitions under all conditions, broadening options for international community aid and complicating PLA efforts.

    Fourth, every new capability needs trained operators who are empowered to employ and engage.  This year Taiwan began implementation of a new, one-year conscript training system for male adults born after January 1, 2005 (up from a wholly inadequate four months of conscription in the past decade).93 Taiwan’s “all-out defense” plan realigns into a frontline main battle force consisting of all-volunteer career military personnel, backed up by a standing garrison force composed mainly of conscripted military personnel guarding infrastructure, along with a civil defense system integrated with local governments and private-sector resources. Upon mobilization, there would also be a reserve force to supplement the main battle and garrison forces. 

    According to details laid out in its 2023 National Defense Report, Taiwan’s revamped one-year conscript system and reorganized reserve mobilization system place significant emphasis on traditional military combat skills, such as rifle marksmanship and operation of mortars.94 However, in response to evolving security challenges and the changing nature of warfare, Taiwan’s military should incorporate greater training in emerging technologies and unconventional tactics, along with decentralized command and control, especially in the areas of drone warfare, where unmanned aerial vehicles and surface vessels play a crucial role in reconnaissance, surveillance, and targeted strikes. By integrating drone warfare training into the conscript system as well as in annual reserve call-up training, Taiwan can better prepare its military personnel to adapt to modern battlefield environments and effectively counter emerging threats.

    Integrating drone operations into military operations down to the conscript and reservist level offers a cost-effective means to enhance battlefield situational awareness and operational capabilities, and also has the added benefit of enhancing the attractiveness and value of a mandatory conscription system emerging from years of low morale and characterized by Taiwan’s outgoing president as “insufficient” and “full of outmoded training.”95 Recognizing the imperative to modernize military training to face up to a rapidly expanding PLA threat, Taiwan’s military force realignment plan came with a promise to “include training in the use of Stinger missiles, Javelin missiles, Kestrel rockets, drones, and other new types of weapons . . . in accordance with mission requirements to meet the needs of modern warfare.”96 Looking at the example of Ukraine, where drones have been utilized, underscores the importance of incorporating drone warfare training into its asymmetric strategy.

    The Taiwan Enhanced Resilience Act “prioritize[d] realistic training” by the United States, with Taiwan authorizing “an enduring rotational United States military presence that assists Taiwan in maintaining force readiness.”97 There have been numerous reports of US special forces in Taiwan,98 and those forces could provide training in tactical air control, dynamic targeting, urban warfare, and comparable capabilities.99 Likewise, parts of an Army Security Force Assistance Brigade could do similar work on a rotational basis, on- or off-island.

    To facilitate a comprehensive and integrated approach to defense planning and preparedness between the military, government agencies, and civilian organizations, Taiwan has also established the All-out Defense Mobilization Agency, which (as noted above) is a centralized body subordinate to the Ministry of National Defense that is tasked with coordinating efforts across various sectors, down to the local level, to enhance national defense readiness. That agency would be significantly more effective if raised to the national level with a broadened mandate as part of a comprehensive approach.

    The Taiwanese leadership also should consider elevating their efforts to create a large-scale civil defense force, offering practical skills training which would appeal to Taiwanese willing to dedicate time and effort toward defense of their communities and localities. These skills could include emergency medical training, casualty evacuation, additive manufacturing, drone flying, and open-source intelligence. Private, nonprofit civil defense organizations such as Taiwan’s Kuma Academy hold widespread appeal to citizens seeking to enhance basic preparedness skills.100 With a curriculum that covers topics ranging from basic first aid to cognitive warfare, Kuma Academy’s popular classes typically sell out within minutes of going online. According to a recent survey of domestic Taiwan opinions sponsored by Spirit of America, “When facing external threats, 75.3% of the people agree that Taiwanese citizens have an obligation to defend Taiwan.”101 A well-trained civil defense force and other whole-of-society resilience measures provide an additional layer of defense and enhance social cohesion to better deny Beijing’s ultimate political objective of subjugating the will of the people.

    Defense resilience recommendations for Taiwan

    • Raise defense spending to at least 3 percent of GDP.
    • Adopt Ukraine’s model of grass-roots innovation in defense.
    • Focus a large portion of its defense budget on low-cost, highly effective systems including unmanned vehicles and naval mines.
    • Incorporate greater training in emerging technologies and unconventional tactics for conscripts and reserves.
    • Invest in East Coast port infrastructure as counterblockade strongholds.
    • Elevate the All-out Defense Mobilization Agency to the national level and implement a larger civil defense force that fully integrates civilian agencies and local governments.

    Conclusion

    On April 3, 2024, Taiwan was struck by the strongest earthquake in twenty-five years. In the face of this magnitude 7.4 quake, Taiwan’s response highlights the effectiveness of robust investment in stricter building codes, earthquake alert systems, and resilience policies, resulting in minimal casualties and low infrastructure damage.102 Taiwan’s precarious position on the seismically vulnerable Ring of Fire, a belt of volcanoes around the Pacific Ocean, mirrors its vulnerability under constant threat of military and gray zone aggression from a mainland China seeking seismic changes in geopolitical power. Drawing from its success in preparing for and mitigating the impact of natural disasters, Taiwan can apply a similarly proactive approach in its defense preparedness. Safeguarding Taiwan’s sovereignty and security requires investments in a comprehensive security strategy for resilience across society—including cybersecurity for critical infrastructures, bolstering energy security, and enhanced defense resilience. Such an approach would provide Taiwan the greatest likelihood of deterring or, if necessary, defeating PRC aggression including through blockade or kinetic conflict. 

    About the authors

    Franklin D. Kramer is a distinguished fellow at the Atlantic Council and a member of its board. He is a former US assistant secretary of defense for international security affairs.

    Philip Yu is a nonresident senior fellow in the Indo-Pacific Security Initiative at the Atlantic Council’s Scowcroft Center for Strategy and Security, and a retired US Navy rear admiral. 

    Joseph Webster is a senior fellow at the Atlantic Council’s Global Energy Center, a nonresident senior fellow in the Indo-Pacific Security Initiative at the Atlantic Council’s Scowcroft Center for Strategy and Security, and editor of the independent China-Russia Report.

    Elizabeth “Beth” Sizeland is a nonresident senior fellow at the Scowcroft Strategy Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security. Earlier, she served in the United Kingdom’s government including as deputy national security adviser and as adviser to the UK prime minister on intelligence, security, and resilience issues.

    This analysis reflects the personal opinions of the authors.

    Acknowledgments

    The authors would like to thank the following individuals for their helpful comments and feedback: Amber Lin, Elsie Hung, Kwangyin Liu, and Alison O’Neil.

    Related content

    1    “The gray zone describes a set of activities that occur between peace (or cooperation) and war (or armed conflict),” writes Clementine Starling. “A multitude of activities fall into this murky in-between—from nefarious economic activities, influence operations, and cyberattacks to mercenary operations, assassinations, and disinformation campaigns. Generally, gray-zone activities are considered gradualist campaigns by state and non-state actors that combine non-military and quasi-military tools and fall below the threshold of armed conflict. They aim to thwart, destabilize, weaken, or attack an adversary, and they are often tailored toward the vulnerabilities of the target state. While gray-zone activities are nothing new, the onset of new technologies has provided states with more tools to operate and avoid clear categorization, attribution, and detection—all of which complicates the United States’ and its allies’ ability to respond.” Starling, “Today’s Wars Are Fought in the ‘Gray Zone.’ Here’s Everything You Need to Know About it,” Atlantic Council, February 23, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/todays-wars-are-fought-in-the-gray-zone-heres-everything-you-need-to-know-about-it/.
    2    In a quarantine of Taiwan, Beijing would interdict shipments but allow some supplies—potentially food and medicine—to pass through unimpeded. This measure would enable the PRC to assert greater sovereignty over Taiwan without formally committing to either a war or a blockade.
    3    Mykhaylo Lopatin, “Bind Ukraine’s Military-Technology Revolution to Rapid Capability Development,” War on the Rocks, January 23, 2024, https://warontherocks.com/2024/01/bind-ukraines-military-technology-revolution-to-rapid-capability-development/.
    4    “President Tsai Delivers 2022 National Day Address,” Office of the President of Taiwan, October 10, 2022, https://english.president.gov.tw/News/6348.
    5    “Full Text of President Tsai Ing-Wen’s National Day Address,” Focus Taiwan website, Central News Agency of Taiwan, October 10, 2023, https://focustaiwan.tw/politics/202310100004; and “President Tsai Delivers 2024 New Year’s Address,” Office of the President, Taiwan, January 1, 2024, https://english.president.gov.tw/NEWS/6662.
    6    Finnish Security Committee, Security Strategy for Society: Government Resolution, Ministry of Defense, November 2, 2017, https://turvallisuuskomitea.fi/wp-content/uploads/2018/04/YTS_2017_english.pdf.
    7    “Swedish Defence Commission Submits Total Defence Report,” Ministry of Defense, December 19, 2023, https://www.government.se/articles/2023/12/swedish-defence-commission-submits-total-defence-report/.
    8    Pursuing a professional and structured approach to resilience against Chinese aggression will also have a “halo” effect, building approaches and expertise that will support effective work on other areas of national security risk.
    9    Finnish Security Committee, Security Strategy for Society.
    10    Finnish Security Committee, Security Strategy for Society.
    11    Finnish Security Committee, Security Strategy for Society.
    12    “All-out Defense Mobilization Agency,” agency website, n.d., https://aodm.mnd.gov.tw/aodm-en/indexE.aspx.
    13    John Dotson, “Taiwan’s ‘Military Force Restructuring Plan’ and the Extension of Conscripted Military Service,” Global Taiwan Institute’s Global Taiwan Brief 8, no. 3 (2023), https://globaltaiwan.org/2023/02/taiwan-military-force-restructuring-plan-and-the-extension-of-conscripted-military-service/.
    14    The party does face, however, the governance challenges that come with a hung parliament.
    15    Hybrid CoE,” European Centre of Excellence for Countering Hybrid Threats, n.d., https://www.hybridcoe.fi/.
    16    Lucy Fisher, “First Glimpse Inside UK’s New White House-Style Crisis Situation Centre,” Telegraph, December 14, 2021, https://www.telegraph.co.uk/news/2021/12/14/first-glimpse-inside-uks-new-white-house-style-crisis-situation/.
    17    A. Rauchfleisch et al., “Taiwan’s Public Discourse About Disinformation: The Role of Journalism, Academia, and Politics,” Journalism Practice 17, no. 10 (2023): 2197–2217, https://doi.org/10.1080/17512786.2022.2110928.
    18    Chee-Hann Wu, “Three Musketeers against MIS/Disinformation: Assessing Citizen-Led Fact-Checking Practices in Taiwan,” Taiwan Insight magazine, July 21, 2023, https://taiwaninsight.org/2023/03/31/three-musketeers-against-mis-disinformation-assessing-citizen-led-fact-checking-practices-in-taiwan/; and David Klepper and Huizhong Wu, “How Taiwan Beat Back Disinformation and Preserved the Integrity of Its Election,” Associated Press, January 29, 2024, https://apnews.com/article/taiwan-election-china-disinformation-vote-fraud-4968ef08fd13821e359b8e195b12919c.
    19    E. Glen Weyl and Audrey Tang, “The Life of a Digital Democracy,” Plurality (open-source project on collaborative technology and democracy), accessed May 6, 2024, https://www.plurality.net/v/chapters/2-2/eng/?mode=dark.
    20    “Critical Infrastructure Sectors,” US Cybersecurity and Infrastructure Security Agency (CISA), 2022, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors.
    21    “National Critical Functions,” CISA, n.d., https://www.cisa.gov/topics/risk-management/national-critical-functions.
    22    Taiwan Administration for Cyber Security, “Cyber Security Defense of Critical Infrastructure: Operations,” Ministry of Digital Affairs, February 21, 2023, https://moda.gov.tw/en/ACS/operations/ciip/650.
    23    “Taipower Announces Grid Resilience Strengthening Construction Plan with NT$564.5 Billion Investment Over 10 Years, Preventing Recurrence of Massive Power Outages,” Ministry of Economic Affairs, September 15, 2022,  https://www.moea.gov.tw/MNS/english/news/News.aspx?kind=6&menu_id=176&news_id=103225#:~:text=Wen%2DSheng%20Tseng%20explained%20that,of%20electricity%20demand%20in%20Taiwan.
    24    Taiwan Water Corporation provides most of the water in Taiwan. See Taiwan Water Corporation, https://www.water.gov.tw/en.
    25    Wen Lii, “After Chinese Vessels Cut Matsu Internet Cables, Taiwan Seeks to Improve Its Communications Resilience,” Opinion, Diplomat, April 15, 2023, https://thediplomat.com/2023/04/after-chinese-vessels-cut-matsu-internet-cables-taiwan-shows-its-communications-resilience/.
    26    “About Us: History,” Administration for Cyber Security, MoDA, n.d., https://moda.gov.tw/en/ACS/aboutus/history/608. Note: US government analyses likewise underscore the significant number of attacks. As described by the US International Trade Administration (ITA), “Taiwan faces a disproportionately high number of cyberattacks, receiving as many as 30 million attacks per month in 2022.” See “Taiwan—Country Commercial Guide,” US ITA, last published January 10, 2024, https://www.trade.gov/country-commercial-guides/taiwan-cybersecurity.
    27    Statistics are not entirely consistent, and attempted intrusions are sometimes counted as attacks.
    28    “Taiwanese Gov’t Facing 5M Cyber Attacks per Day,” CyberTalk, Check Point Software Technologies, accessed May 2, 2024, https://www.cybertalk.org/taiwanese-govt-facing-5m-cyber-attacks-per-day/. Other private-sector companies’ analyses have reached comparable conclusions.
    29    Huang Tzu-ti, “Taiwan Hit by 15,000 Cyberattacks per Second in First Half of 2023,” Taiwan News, August 17, 2023, https://www.taiwannews.com.tw/news/4973448.
    30    Jeff Seldin, “Cyber Attacks Spike Suddenly prior to Taiwan’s Election,” Voice of America, February 13, 2024, https://www.voanews.com/a/cyber-attacks-spike-suddenly-prior-to-taiwan-s-election-/7485386.html.
    31    Gagandeep Kaur, “Is China Waging a Cyber War with Taiwan?,” CSO Online, December 1, 2023, https://www.csoonline.com/article/1250513/is-china-waging-a-cyber-war-with-taiwan.html#:~:text=Nation%2Dstate%20hacking%20groups%20based.
    32    Anne A wrote that “attackers are likely to employ living off-the-land techniques to gather policing, banking, and political information to achieve their goals. They also likely simultaneously and stealthily evaded security detections from remote endpoints.”See An, “Cyberattack on Democracy: Escalating Cyber Threats Immediately Ahead of Taiwan’s 2024 Presidential Election,” Trellix, February 13, 2024, https://www.trellix.com/blogs/research/cyberattack-on-democracy-escalating-cyber-threats-immediately-ahead-of-taiwan-2024-presidential-election/. Separately, a Microsoft Threat Intelligence blog said: “Microsoft has identified a nation-state activity group tracked as Flax Typhoon, based in China, that is targeting dozens of organizations in Taiwan with the likely intention of performing espionage. Flax Typhoon gains and maintains long-term access to Taiwanese organizations’ networks with minimal use of malware, relying on tools built into the operating system, along with some normally benign software to quietly remain in these networks.” See “Flax Typhoon Using Legitimate Software to Quietly Access Taiwanese Organizations,” Microsoft Threat Intelligence blog, August 24, 2023, https://www.microsoft.com/en-us/security/blog/2023/08/24/flax-typhoon-using-legitimate-software-to-quietly-access-taiwanese-organizations/.
    33    Office of the Director of National Intelligence, Annual Threat Assessment of the US Intelligence Community, February 6, 2023, 10, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2023-Unclassified-Report.pdf.
    34    James Lewis, “Cyberattack on Civilian Critical Infrastructures in a Taiwan Scenario,” Center for Strategic and International Studies, August 2023, https://csis-website-prod.s3.amazonaws.com/s3fs-public/2023-08/230811_Lewis_Cyberattack_Taiwan.pdf?VersionId=l.gf7ysPjoW3.OcHvcRuNcpq3gN.Vj8b.
    35    Elias Groll and Aj Vicens, “A Year After Russia’s Invasion, the Scope of Cyberwar in Ukraine Comes into Focus,” CyberScoop, February 24, 2023, https://cyberscoop.com/ukraine-russia-cyberwar-anniversary/.
    36    Groll and Vicens, “A Year After Russia’s Invasion.”
    37    “About Us: History,” Administration for Cyber Security.
    38    Si Ying Thian, “‘Turning Conflicts into Co-creation’: Taiwan Government Harnesses Digital Policy for Democracy,” GovInsider, December 6, 2023, https://govinsider.asia/intl-en/article/turning-conflicts-into-co-creation-taiwans-digital-ministry-moda-harnesses-digital-policy-for-democracy.
    39    Frank Konkel, “How a Push to the Cloud Helped a Ukrainian Bank Keep Faith with Customers amid War,” NextGov/FCW, November 30, 2023, https://www.nextgov.com/modernization/2023/11/how-push-cloud-helped-ukrainian-bank-keep-faith-customers-amid-war/392375/.
    40    Eric Priezkalns, “Taiwan to Build 700 Satellite Receivers as Defense against China Cutting Submarine Cables,” CommsRisk, June 13, 2023, https://commsrisk.com/taiwan-to-build-700-satellite-receivers-as-defense-against-china-cutting-submarine-cables/.
    41    Juliana Suess, “Starlink 2.0? Taiwan’s Plan for a Sovereign Satellite Communications System,” Commentary, Royal United Services Institute, January 20, 2023, https://rusi.org/explore-our-research/publications/commentary/starlink-20-taiwans-plan-sovereign-satellite-communications-system.
    42    Gil Baram, “Securing Taiwan’s Satellite Infrastructure against China’s Reach,” Lawfare, November 14, 2023, https://www.lawfaremedia.org/article/securing-taiwan-s-satellite-infrastructure-against-china-s-reach.
    43    Taiwan Relations Act, US Pub. L. No. 96-8, 93 Stat. 14 (1979), https://www.congress.gov/96/statute/STATUTE-93/STATUTE-93-Pg14.pdf.
    44    “Integrated Country Strategy,” American Institute in Taiwan, 2022, https://www.state.gov/wp-content/uploads/2022/05/ICS_EAP_Taiwan_Public.pdf.
    45    Franklin D. Kramer, The Sixth Domain: The Role of the Private Sector in Warfare, Atlantic Council, October 16, 2023, 13, https://www.atlanticcouncil.org/wp-content/uploads/2023/10/The-sixth-domain-The-role-of-the-private-sector-in-warfare-Oct16.pdf.
    46    Joseph Gedeon, “Taiwan Is Bracing for Chinese Cyberattacks, White House Official Says,” Politico, September 27, 2023, https://www.politico.com/news/2023/09/27/taiwan-chinese-cyberattacks-white-house-00118492.
    47    Gedeon, “Taiwan Is Bracing.”
    48    Gedeon, “Taiwan Is Bracing.”
    49    National Defense Authorization Act for Fiscal Year 2024, Pub. L. No. 118-31, 137 Stat. 136 (2023), Sec. 1518, https://www.congress.gov/bill/118th-congress/house-bill/2670/text.
    50    National Defense Authorization Act for Fiscal Year 2024.
    51    According to a report by Emma Schroeder and Sean Dack, “Starlink’s performance in the Ukraine conflict demonstrated its high value for wartime satellite communications: Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable . . . the government ‘made a decision to use satellite communication for such emergencies’ from American companies like SpaceX. Starlink has proven more resilient than any other alternatives throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions.” See Schroeder and Dack, A Parallel Terrain: Public-Private Defense of the Ukrainian Information Environment, Atlantic Council, February 2023, 14, https://www.atlanticcouncil.org/wp-content/uploads/2023/02/A-Parallel-Terrain.pdf.
    52    Joey Roulette, “SpaceX Curbed Ukraine’s Use of Starlink Internet for Drones: Company President,” Reuters, February 9, 2023, https://www.reuters.com/business/aerospace-defense/spacex-curbed-ukraines-use-starlink-internet-drones-company-president-2023-02-09/.
    53    Kramer, The Sixth Domain.
    54    Frank Kramer, Ann Dailey, and Joslyn Brodfuehrer, NATO Multidomain Operations: Near- and Medium-term Priority Initiatives, Scowcroft Center for Strategy and Security, Atlantic Council, March 2024, https://www.atlanticcouncil.org/wp-content/uploads/2024/03/NATO-multidomain-operations-Near-and-medium-term-priority-initiatives.pdf.
    55    Department of Defense, “Commercial Space Integration Strategy,” 2024, https://media.defense.gov/2024/Apr/02/2003427610/-1/-1/1/2024-DOD-COMMERCIAL-SPACE-INTEGRATION-STRATEGY.PDF; and “U.S. Space Force Commercial Space Strategy,” US Space Force, April 8, 2024, https://www.spaceforce.mil//Portals/2/Documents/Space%20Policy/USSF_Commercial_Space_Strategy.pdf.
    56    “Space Development Agency Successfully Launches Tranche 0 Satellites,” Space Development Agency, September 2, 2023, https://www.sda.mil/space-development-agency-completes-second-successful-launch-of-tranche-0-satellites/.
    57    Kramer, The Sixth Domain.
    58    Kramer, The Sixth Domain.
    59    “E-Stat,” Energy Statistics Monthly Report, Energy Administration, Taiwan Ministry of Economic Affairs, accessed May 6, 2024, https://www.esist.org.tw/newest/monthly?tab=%E7%B6%9C%E5%90%88%E8%83%BD%E6%BA%90.
    60    “Comparison of Electricity Prices and Unit Cost Structures,” Electricity Price Cost, Business Information, Information Disclosure, Taiwan Electric Power Co., accessed May 6, 2024, https://www.taipower.com.tw/tc/page.aspx?mid=196.
    61    Ministry of Economic Affairs (經濟部能源署), “The Electricity Price Review Meeting,” Headquarters News, accessed May 6, 2024, https://www.moea.gov.tw/MNS/populace/news/News.aspx?kind=1&menu_id=40&news_id=114222.
    62    “Electric Power Monthly,” US Energy Information Administration (EIA), February 2024, https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=table_5_03.
    63    Lauly Li and Cheng Ting-Feng, “Taiwan’s Frequent Blackouts Expose Vulnerability of Tech Economy,” Nikkei Asia, August 30, 2022, https://asia.nikkei.com/Business/Technology/Taiwan-s-frequent-blackouts-expose-vulnerability-of-tech-economy.
    64    Xi Deng et al., “Offshore Wind Power in China: A Potential Solution to Electricity Transformation and Carbon Neutrality,” Fundamental Research, 2022, https://doi.org/10.1016/j.fmre.2022.11.008.
    65    “Global Solar Atlas,” World Bank Group, ESMAP, and Solar GIS, 2024, CC BY 4.0, https://globalsolaratlas.info/map?c=24.176825.
    66    Julian Spector, “Taiwan’s Rapid Renewables Push Has Created a Bustling Battery Market,” Canary Media, April 6, 2023, https://www.canarymedia.com/articles/energy-storage/taiwans-rapid-renewables-push-has-created-a-bustling-battery-market.
    67    “U.S. Nuclear Plant Outages Increased in September After Remaining Low during Summer,” Today in Energy, US EIA, October 18, 2015, https://www.eia.gov/todayinenergy/detail.php?id=37252#:~:text=Nuclear%20power%20plants%20typically%20refuel.
    68    For a more detailed discussion of Taiwan’s indigenous supply, see Joseph Webster, “Does Taiwan’s Massive Reliance on Energy Imports Put Its Security at Risk?,” New Atlanticist, Atlantic Council blog, July 7, 2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/does-taiwans-massive-reliance-on-energy-imports-put-its-security-at-risk/.
    69    “The Current Situation and Future of [the] Country’s Energy Supply and Reserves (立法院),” Seventh Session of the Tenth Legislative Yuan, Sixth Plenary Meeting of the Economic Committee, accessed May 7, 2024, https://ppg.ly.gov.tw/ppg/SittingAttachment/download/2023030989/02291301002301567002.pdf.
    70    Jeanny Kao and Yimou Lee, “Taiwan to Boost Energy Inventories amid China Threat,” ed. Gerry Doyle, Reuters, October 23, 2022, https://www.reuters.com/business/energy/taiwan-boost-energy-inventories-amid-china-threat-2022-10-24/.
    71    Energy Administration, “Domestic Oil Reserves Monthly Data (國內石油安全存量月資料),” Ministry of Economic Affairs, accessed May 6, 2024, https://www.moeaea.gov.tw/ecw/populace/content/wfrmStatistics.aspx?type=4&menu_id=1302.
    72    Energy Administration, Ministry of Economic Affairs.
    73    Energy Administration, Ministry of Economic Affairs.
    74    Energy Administration, Ministry of Economic Affairs.
    75    Marek Jestrab, “A Maritime Blockade of Taiwan by the People’s Republic of China: A Strategy to Defeat Fear and Coercion,” Atlantic Council Strategy Paper, December 12, 2023, https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/a-maritime-blockade-of-taiwan-by-the-peoples-republic-of-china-a-strategy-to-defeat-fear-and-coercion/.
    76    Kathleen Magramo et al., “October 11, 2022 Russia-Ukraine News,” CNN, October 11, 2022, https://edition.cnn.com/europe/live-news/russia-ukraine-war-news-10-11-22/index.html.
    77    Tom Balforth, “Major Russian Air Strikes Destroy Kyiv Power Plant, Damage Other Stations,” Reuters, November 2024, https://www.reuters.com/world/europe/russian-missile-strike-targets-cities-across-ukraine-2024-04-11/#:~:text=KYIV%2C%20April%2011%20(Reuters),runs%20low%20on%20air%20defences.
    78    Global Taiwan Institute, “Taiwan’s Electrical Grid and the Need for Greater System Resilience,” June 14, 2023, https://globaltaiwan.org/2023/06/taiwans-electrical-grid-and-the-need-for-greater-system-resilience/.
    79    “3-04 Electricity Consumption (3-04 電力消費),” Taiwan Energy Statistics Monthly Report (能源統計月報), accessed May 6, 2024, https://www.esist.org.tw/newest/monthly?tab=%E9%9B%BB%E5%8A%9B.
    80    Alperovitch, D. (2024, June 6). A Chinese economic blockade of Taiwan would fail or launch a war. War on the Rocks. https://warontherocks.com/2024/06/a-chinese-economic-blockade-of-taiwan-would-fail-or-launch-a-war/
    81    “The Ministry of National Defense Issues a Press Release Explaining Reports That ‘Airborne Balloons by the CCP Had Continuously Flown over Taiwan,’ ” Taiwan Ministry of National Defense, January 6, 2024,  https://www.mnd.gov.tw/english/Publish.aspx?title=News%20Channel&SelectStyle=Defense%20News%20&p=82479.
    83    “Taiwan Announces an Increased Defense Budget for 2024,” Global Taiwan Institute, September 20, 2023, https://globaltaiwan.org/2023/09/taiwan-announces-an-increased-defense-budget-for-2024/.
    84    Yu Nakamura, “Taiwan Allots Record Defense Budget for 2024 to Meet China Threat,” Nikkei Asia, August 24, 2023, https://asia.nikkei.com/Politics/Defense/Taiwan-allots-record-defense-budget-for-2024-to-meet-China-threat.
    85    “NASAMS: National Advanced Surface-to-Air Missile System,” Raytheon, accessed May 12, 2024, https://www.rtx.com/raytheon/what-we-do/integrated-air-and-missile-defense/nasams.
    86    Lopatin, “Bind Ukraine’s Military-Technology Revolution.”
    87    Grace Jones, Janet Egan, and Eric Rosenbach, “Advancing in Adversity: Ukraine’s Battlefield Technologies and Lessons for the U.S.,” Policy Brief, Belfer Center for Science and International Affairs, Harvard Kennedy School, July 31, 2023, https://www.belfercenter.org/publication/advancing-adversity-ukraines-battlefield-technologies-and-lessons-us.
    88    For more information, see, e.g., Peter Suciu, “Future of Taiwan’s Navy: Inside the Tuo Chiang-Class Missile Corvettes,” National Interest, March 27, 2024,  https://nationalinterest.org/blog/buzz/future-taiwans-navy-inside-tuo-chiang-class-missile-corvettes-210269; and Xavier Vavasseur, “Taiwan Launches 1st Mine Laying Ship for ROC Navy,” Naval News, August 5, 2020, https://www.navalnews.com/naval-news/2020/08/taiwan-launches-1st-mine-laying-ship-for-roc-navy/.
    89    Mykola Bielieskov, “Outgunned Ukraine Bets on Drones as Russian Invasion Enters Third Year,” Ukraine Alert, Atlantic Council blog, February 20, 2024, https://www.atlanticcouncil.org/blogs/ukrainealert/outgunned-ukraine-bets-on-drones-as-russian-invasion-enters-third-year/.
    90    Yimou Lee, James Pomfret, and David Lague, “Inspired by Ukraine War, Taiwan Launches Drone Blitz to Counter China,” Reuters, July 21, 2023, https://www.reuters.com/investigates/special-report/us-china-tech-taiwan/.
    91    Franklin D. Kramer and Lt. Col. Matthew R. Crouch, Transformative Priorities for National Defense, Scowcroft Center for Strategy and Security, Atlantic Council, 2021, https://www.atlanticcouncil.org/wp-content/uploads/2021/06/Transformative-Priorities-Report-2021.pdf.
    92    Peter Dickinson, “Ukraine’s Black Sea Success Exposes Folly of West’s ‘Don’t Escalate’ Mantra,” Ukraine Alert, Atlantic Council, January 22, 2024, https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-black-sea-success-provides-a-blueprint-for-victory-over-putin/.
    93    Ministry of National Defense, ROC National Security Defense Report 2023, https://www.mnd.gov.tw/newupload/ndr/112/112ndreng.pdf.
    94    Ministry of National Defense, ROC National Security Defense Report 2023.
    95    “President Tsai Announces Military Force Realignment Plan,” Office of the President, December 27, 2022,  https://english.president.gov.tw/NEWS/6417.
    96    “President Tsai Announces Military Force Realignment Plan.”
    97    International Military Education and Training Cooperation with Taiwan, 22 U.S.C. § 3353 (2022), https://www.law.cornell.edu/uscode/text/22/3353.
    98    Guy D. McCardl, “US Army Special Forces to Be Deployed on Taiwanese Island Six Miles from Mainland China,” SOFREP, March 8, 2024, https://sofrep.com/news/us-army-special-forces-to-be-deployed-on-taiwanese-island-six-miles-from-mainland-china/.
    99    “Taiwan Defense Issues for Congress,” Congressional Research Service, CRS Report R48044, updated May 10, 2024, https://crsreports.congress.gov/product/pdf/R/R48044.
    100    Jordyn Haime, “NGOs Try to Bridge Taiwan’s Civil Defense Gap,” China Project, August 4, 2023, https://thechinaproject.com/2023/08/04/ngos-try-to-bridge-taiwans-civil-defense-gap/.
    101    Spirit of America, Taiwan Civic Engagement Survey, January 2024.
    102    Amy Hawkins and Chi Hui Lin, “‘As Well Prepared as They Could Be’: How Taiwan Kept Death Toll Low in Massive Earthquake,” Observer, April 7, 2024, https://www.theguardian.com/world/2024/apr/07/as-well-prepared-as-they-could-be-how-taiwan-kept-death-toll-low-in-massive-earthquake.

    The post Strengthening Taiwan’s resiliency appeared first on Atlantic Council.

    ]]>
    The impact of corruption on cybersecurity: Rethinking national strategies across the Global South   https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-impact-of-corruption-on-cybersecurity-rethinking-national-strategies-across-the-global-south/ Tue, 02 Jul 2024 00:08:00 +0000 https://www.atlanticcouncil.org/?p=818032 As the Global South prepares for the next stage in ICT development, governments must prioritize policies that reduce corruption in critical network software procurement to protect those countries' developing cyberspace.

    The post The impact of corruption on cybersecurity: Rethinking national strategies across the Global South   appeared first on Atlantic Council.

    ]]>

    Executive summary

    Recent government-wide shutdowns of information systems in a half-dozen developing countries ranging from Albania to Vanuatu suggest that ransomware and state-sponsored attacks are finding success in targeting critical infrastructure networks of the Global South. Over the first decade of their integration with the digital economy low-income and lower-middle-income countries faced relatively few cyberattacks, but that honeymoon appears to be over, with the Global South now ranking first in cyberattacks per institution and cyberattacks per capita. 

    As mobile device e-commerce and ICT networks continue to expand across the Global South, this rise in cyberattacks is not surprising. Nevertheless, the level of digital integration in the region still trails the rest of the world, suggesting the record-setting levels of cyberattacks may be the result of vulnerabilities systemic to the region. The most corrosive of these problems is corruption. While few governments in the Global South have publicized the role of IT corruption in critical infrastructure enterprises, this analysis builds on donor and regional software association investigations to argue that IT departments in the Global South are vulnerable to corrupt procurement schemes catalyzed by the proliferation of pirated software.  

    Until recently the prevalence of pirated or lapsed licensed software on government networks across the Global South may have led to little more than poor or unpredictable network performance. This is no longer the case today, as networks built around pirated software serve as easy targets for ransomware gangs and hacktivists that still find decades-old malware like the “WannaCry” worm to be effective in countries challenged by systemic corruption. 

    In response to the growing cyber threat, governments in the region and foreign donors focused their response on the best practices found in the action plans and policy initiatives drawn from national cybersecurity strategies designed for the Global North. As a result, not one government’s national cybersecurity strategy in the Global South recognizes corruption as an important issue for critical infrastructure network security.  

    This analysis underlines the effects of addressing corruption on cybersecurity by highlighting the positive impacts of Ukraine’s switch to an autonomous and transparent procurement platform by comparing the experience of cyberattacks in 2017 with those that accompanied the 2022 full-scale invasion of the country. Taking these lessons forward, cybersecurity officials across the Global South must consider identifying procurement corruption as a cybersecurity risk and develop initiatives to mitigate the impact of systemic corruption on cybersecurity.  

    Introduction

    “Cyber criminals are coming for the Global South”

    Deutsch Welle1

    The global revolution in information and communications technology (ICT) has expanded educational and economic opportunities across the Global South2 even as it brings new threats of inequality and cyber vulnerability. Whether these countries are prepared, they now represent the fastest-growing population of new internet users. Moreover, malicious hackers have recognized this rise in networked users, with Latin America and the Caribbean now leading the globe in the rate of cyberattacks as a share of the networked population,3 while Africa leads in the rate of cyberattacks per institution.4   

    The process of digital transformation started later in the Global South, which likely limited the vulnerability of these countries to ransomware attacks. This is no longer the case. Vanuatu served as a wake-up call in 2022 when most of the island’s public services shut down after hackers encrypted the government’s data networks.5  The ransomware gang’s commitment of time and resources to infiltrate Vanuatu’s government networks demonstrates that even the smallest nations in the Global South can no longer assume they will be overlooked by global hacker organizations.  

    A critical lesson from the first decade of ubiquitous cyberattacks is the importance of patching an enterprise’s network software. Unfortunately, the vulnerabilities that IT professionals must track and patch each year have been growing, especially since the arrival of cryptocurrency in the mid-2010s offered the first practical means for hackers to receive payments after locking up or seizing data.6 Figure 1 shows MITRE’s recorded annual increase in registered vulnerabilities and exposures, which shows the growth has been rising at an exponential rate since 2018. As ransomware began to grow and criminal organizations sought to continue finding lucrative and vulnerable targets, hackers suddenly turned to institutional networks in countries they might never have heard of before researching potential targets.7 A growing horde of ransomware organizations appear to be choosing targets based first on vulnerability, which has resulted in more attacks on institutions in the Global South.8  

    Although the need to patch software vulnerabilities has never been higher, corrupt practices in software procurement explain why many organizations do not regularly update their security. Functioning software that was not legitimately acquired rarely provides a connection to the software vendor.9  The presence of pirated software on a network reduces the likelihood that the network is regularly receiving updates that the software’s producer distributes to patch newly discovered vulnerabilities.10
     
    An organization’s cybersecurity can also face vulnerabilities due to obsolete versions of software still running on its network. This can happen for multiple reasons, from vendors going out of business to developers choosing to no longer support a product line. In underfunded institutions across the globe, it is not rare to find the continued use of obsolete software. This vulnerability is further exacerbated by procurement managers prioritizing corrupt rents over issues of trusted vendors or sustainable support for software.  

    Given the epidemic levels of corruption in public and private procurement across the Global South,11  this study draws from recent cybersecurity experiences in European and Eurasian economies similarly challenged by corruption to argue that a digitally integrated Global South may be more susceptible to cyberattacks than those in the Global North. While the limited scale of the digital economy across most of the Global South continues to keep these countries out of top spots in terms of the total number of attacks, the Global South has suddenly become a disproportionately high malware target.12 This new reality reflects unique challenges to cybersecurity in the Global South and also suggests that the solutions to this challenge may not be found in the traditional national cybersecurity strategies based on the playbooks of more developed countries.  

    The digitalization of the Global South has only begun 

    The International Telecommunication Union estimates that the Global South passed the milestone of more than half of its citizens gaining access to the internet in 2022. At this growth rate, more than 75 percent of the Global South will be connected by the end of 2025.13  Most of this access is represented by limited bandwidth connections to mobile phone subscribers, which allow for a range of services to citizens that, because of resource constraints or great distances, were previously impractical to offer at scale. For example, the tiny nation of Vanuatu provides its citizens residing across its far-flung islands the ability to use mobile phones to pay utility bills and taxes or initiate government document requests without a long trip between islands.14 The economic impact is enormous for citizens who are no longer required to spend 1-2 days of travel for administrative tasks. 

    Figure 2

    The private services offered by early mobile phone entrepreneurs in the Global South have been no less impressive. Widespread mobile phone use looks to be a pathway from poverty for millions of citizens once isolated from the global flow of information resources.15 ICT-based businesses may already be the leading force for economic growth in many of these countries. A study between 2007 and 2016 found mobile phone diffusion had a more significant impact on the rise in gross national product (GNP) in Sub-Saharan Africa than any other form of investment.16 Across Africa, rural residents who lack access to landline phones or banks benefit from nearly one billion mobile phones that allow them to tap into the internet sites (notably, banking and wholesale services) necessary to engage in entrepreneurial activity.17 Moreover, the nascent digital information space in these countries has allowed the emergence of internet-based businesses specifically supporting rural entrepreneurs with a range of supply chain and logistic services.18

    Despite the rapid growth in mobile phone-led economic activity, most countries in the Global South are just now starting to develop the ICT infrastructure needed to support this demand. Many states in the Global South have only completed the “first mile” of broadband access–global connections to their capitals and large cities–and large parts of the population still lack access to reliable broadband internet.19 Like mobile phone connectivity, widely distributed broadband is positively associated with economic growth. Two findings have become measuring sticks in the digital development sphere: the World Bank estimates that a 10 percent increase in broadband leads to a 1.4% increase in GNP20 and the McKinsey Global Institute found ICT economic activity over the first decade of internet connectivity may have accounted for more than 21% of the gross domestic product (GDP) growth in mature economies.21 

    The new wave of ICT development in the Global South is realizing this economic potential by bringing broadband infrastructure to smaller cities and rural communities. Improved connectivity is already affecting the quality of life in the Global South,22 allowing for national e-business, good governance solutions, and social services like healthcare and education to operate beyond the limited bandwidth offered by mobile phone data connections. 

    The World Bank has provided more than $1.2 billion in ICT lending alone for broadband development in Africa, the South Pacific, and the Caribbean.23 Moreover, Nigeria and Mozambique have led the way in Africa by licensing SpaceX’s Starlink service, which offers near broadband connections through low-orbit satellites for private users.24 Certainly, Starlink’s fees and terminals are barriers to access for most citizens in the Global South, but it and future providers are the first competitors to largely state-owned services in the region that have not succeeded in providing economical and reliable high-speed service across large swaths of territory.  

    Nevertheless, the growing benefits of the information age in the region come with risks–this same connectivity also attracts scores of cyber criminals who expect to profit from vulnerabilities in connected enterprise networks. 

    Corruption in public procurement 

    Corruption in procurement processes is a global problem, but the scale of systemic corruption in procurement tenders in the Global South has long been a major obstacle to developing effective governance and prosperity.25 This issue does not go unrecognized by local leaders and external advisors, but they rarely account for this threat when drafting new regulatory or development initiatives.26 The backroom decisions on what hardware and software is purchased for large enterprise information networks may be the archetype of systemic corruption but it is routinely missed by national cybersecurity strategists.  

    As with most forms of corruption, there are few studies on corrupt practices in software procurement but countless anecdotes of rent-seeking found in public sector network management in developing countries.27 In this author’s thirteen years of experience overseeing IT development projects in four post-Soviet countries in Europe and Eurasia, this was the common view of corruption held by those working for critical infrastructure enterprises and IT Departments inside and outside government. Control over IT procurement decisions in systemically corrupt countries is ideally suited for inflating costs and hiding kickbacks because networks are built around software that is neither visible nor readily measurable as authentic through the standard procurement oversight measures of quantity, delivery, and price.28

    Another barrier to changing the software procurement culture in the Global South is the social norms and expectations among senior management and network administrators who in their everyday lives use or interact with pirated software on personal devices.29 In countries where the use of pirated software is not viewed as a significant ethical or technical issue, managers of critical infrastructure enterprises may be less hesitant to acquire cheaper, pirated software for their networks. In fact, this may even be perceived as a positive decision that reduces the organization’s IT budget.  

    The Business Software Alliance (BSA) found that although globally one-third of software in the private sector is unlicensed, in countries in the Global South this proportion may be twice as high. In the last breakdown for the Middle East and Africa in 2019, for instance, the BSA found that fifty-six percent of software in use was pirated.30 This short-sighted view of the risks of pirated software undermines the state’s ability over the long term to protect domestic networks from cyberattacks. While most government officials in the region were raised in a previous era of altruistic websites distributing unlicensed software with few downsides, this is no longer the case. One of the few studies examining pirated software samples from eleven developing countries found that 61 percent contained malware.31

    The illicit rents found in legal software procurement, on the other hand, come in the form of bribes paid to one or more employees at the purchasing firm, an intermediary, or the supplier.  This can occur through the purchasing enterprise soliciting the bribe (extortion), the seller offering the bribe for colluding (kickback), or an intermediary receiving fees or commissions on the inflated price of the transaction.32 Interviews with investigators of private procurement fraud in the Global South find that most align with the dominant embezzlement practices found in the country’s wider economy.33 As the seller can often procure pirated or unsupported software at a fraction of the market price, the rents gained by the supplier can approach 100 percent of the stated cost.  

    An organization that continually uses pirated or unsupported software will likely develop a culture of avoiding–rather than actively pursuing–interactions with its software’s producers. Although some software companies claim to provide software patches to customers even after they stop paying for licenses, in practice those using pirated network software rarely receive updates from the vendors.Angela Moscaritolo, “Losses from software piracy leads $51 billion in 2009,” SC Media, May 13, 2010, https://ww34 It is a source of debate as to where the blame lies. On the one hand, users complain of significant technical hurdles for updating unregistered software. On the other hand, vendors claim that few network administrators overseeing unlicensed copies ever seek them out, leaving them unaware of who is running pirated copies of their unpatched software. In either case, the presence of pirated software increases the likelihood that the organization’s network is susceptible to vulnerabilities.35 Even if an unregistered organization seeks out and applies a patch, because the process is not timely or automatic, there will always be a window of time with unpatched vulnerabilities.  

    In evaluating their country’s cybersecurity posture, governments in the Global South must measure the degree to which pirated and unpatched software is present on their information platforms and identify mechanisms that can decrease their rate of use. It may be wise for policymakers to look at culture and practice instead of simply increasing IT budgets. Comparative country research shows that the income of countries or individual enterprises was not a consistent predictor of choosing licensed or pirated software. Instead, the strongest predictors were a tolerance of open pirated software markets and the degree of systemic corruption in the country.36 Moreover, a comparison of a half dozen policy measures in eleven African countries found that the strongest initiative reducing the presence of pirated software on networks was implementing corruption-control policies, not measures that raised incomes or procurement budgets.37

    Case study: IT procurement corruption in Pakistan 

    Even donor-funded procurement can fall victim to IT procurement schemes. Drawing on a 2019 World Bank loan, the Pakistan Federal Board of Revenue (FBR) used $80 million to upgrade the Karachi data center as its hardware and software were no longer supported by vendors and had been assessed as “end-of-life equipment.”38 Just a year after the procurement, the US Assistant Secretary of State Alice Wells publicly accused FBR of using pirated versions of US software in the data center.39 A year later, a suspected Russian cybercriminal gang gained access to the center’s more than 1,500 computers and data; reportedly benefiting from the pirated and unsupported Microsoft Hyper-V software used for the virtual hard disks storing FBR data.40The FBR has never identified the vendor involved in the World Bank procurement or whether they paid the ransom to unlock their data that was advertised for sale on a Russian dark web site.  

    As countries struggling with public corruption or high levels of pirated software integrate further into the global digital economy, they are increasingly susceptible to cyberattacks on their critical infrastructure. Some observers already view 2022 as an inflection point in the rising number of successful hacks of smaller countries.41 In July 2022, for example, the government of Albania was forced to shut down its government computer and internet systems after a devastating cyberattack. The intrusion was a result of an unpatched version of the file-sharing software Microsoft SharePoint Server (versus the more common cloud-based Microsoft 365 SharePoint) that understaffed IT teams had maintained for years on their networks.42 Albania has not chosen to explain how its software did not get the patch for this vulnerability released automatically by Microsoft two years earlier. As mentioned previously, the island nation of Vanuatu was also hit by a ransomware attack in 2022 that froze nearly all government network servers, shutting down fire and rescue services, erasing five months of court data, and preventing 315,000 citizens from paying taxes or utilities.43 That same year, two more ransomware attacks by the Russia-based Conti Group led the Costa Rican government to declare a national emergency,44 and cybercrime groups also temporarily gained control of government networks in Montenegro and Chile. 

    Regardless of the technical cause, the lessons from these examples are the same: Countries seeking to protect networks across their critical infrastructure must prioritize systematic communications with software developments and implement regular updates or face an army of hackers that target unpatched vulnerabilities to gain control of a network. 

    The ascendancy of cyberattacks in the Global South buoyed by these successful breaches also suggests that cybercriminals are now targeting small or economically challenged countries because they may be viewed as “softer” hacking targets. Certainly, enterprises around the world continue to pay ransomware, as 2023 set a record for total and average payments of ransoms. The recent cybercriminal focus on the Global South, however, may partly reflect a perception that their networks represent lower-risk targets with a higher willingness to pay for returning access to their data.45 The strong correlation between systemic corruption and a preference for pirated software may shape an approach to ransomware that is appealing to cyber criminals. If an organization’s management has been earning significant kickbacks on purchases of pirated software, for example, they are unlikely to pursue a strategy of resisting ransom requests, instead quietly choosing to pay and maintain the status quo. The focus on the Global South may also reflect a landscape of reduced opportunities in the Global North. Leading ransomware negotiator Coveware recently reported that the portion of victims in the U.S. paying the ransom has fallen by half over the last three years (See Figure 3); the same exact period that has seen a dramatic increase in attacks on enterprises in the Global South.46  

    Moving forward, more and more government institutions and critical infrastructure enterprises in the Global South will likely be targeted as they continue to integrate with global information and communication networks. What is less certain is whether the procurement culture in these countries can keep up by transforming from one of avoiding the attention of software developers to a strategy of maximizing communication and exchanges. This transition is unlikely to succeed if corrupt practices continue to incentivize avoiding transparent procurement and collaboration with vendors to support resilient network systems. Moreover, the transition will require a proactive government guided by a clear national cybersecurity strategy that addresses the unique cyber policy challenges in the Global South. 

    What is in the National Cybersecurity Strategies – And what isn’t

    As digital connectivity continues to expand, more than 100 countries have developed national cybersecurity strategies to serve as the framework for synchronized public-private cybersecurity development. A review of the twenty-three published national strategies from countries in Africa and the Asia-Pacific region found that, in general, the strategies’ objectives were grouped across a minimum of four pillars for strengthening cyber resilience.47 

    The first common pillar consists of strategic objectives that often include developing new cybersecurity agencies and/or improving coordination between disparate ministries overseeing cybersecurity policy. This pillar also includes new policy initiatives based on gap analyses of the country’s cybersecurity architecture. The government’s structural reform steps are often intertwined with the second pillar of legislation and regulations. The Council of Europe has been the most influential donor in this space, assisting in the development of legislative frameworks and advising in the development of national cybersecurity strategies in at least nineteen countries in the Global South.  

    The last two pillars focus on external initiatives. The third pillar is focused on public-private partnerships, including cooperation with multinational software producers and other governments pursuing cybersecurity. The fourth major pillar usually describes information campaigns and education initiatives that would strengthen cybersecurity in the workforce. While national strategies in the Global South have prescribed more limited activities in the fourth pillar, the EU has recently joined the US in developing cybersecurity workforce frameworks to bridge the gap between the planning and development of cybersecurity educational standards and the workplace requirements for the knowledge and skills needed to defend critical infrastructure networks.  

    Across the national cybersecurity strategies in the Global South, not one of the twenty-three documents contained the terms “corruption” or “pirated software.” In some ways, this is not surprising. The leading roadmap to developing a national cybersecurity strategy, the UN’s International Telecommunication Union’s (ITU) Guide to Developing a National Cybersecurity Strategy, also does not reference corruption or pirated software. The 2018 guide was produced by a partnership between ITU, the World Bank, the Council of Europe, the Organization of American States (OAS), Interpol, Microsoft, Deloitte, and the NATO Cooperative Cyber Defense Center of Excellence, as well as several think tanks. The guide specifically states its objective is “to “provide direction and good practice on ‘what’ should be included in a National Cybersecurity Strategy, as well as on ‘how’ to build, implement and review it.”48

    Case study: International counter ransomware initiative 

    The most significant US-sponsored global cybersecurity initiative is arguably the International Counter Ransomware Initiative (CRI). Now in its third year of existence, the group has established a platform for capacity building and developing best practices to reduce the success of ransomware, including via a joint statement that member countries should not pay ransoms.49 Although more than a dozen of the fifty nation-state participants in the CRI are considered Global South countries that face significant challenges in addressing systemic corruption, the CRI’s policy and capacity-building efforts have so far followed the ITU and World Bank’s lead in not addressing procurement corruption as part of cybersecurity initiatives.50

    As representatives of government and civil society in the Global South look to further develop and reassess their national policies and infrastructure for cybersecurity, they are unlikely to find anti-corruption best practices in the prevailing guiding documents and best practices. The reality is that top cybersecurity officials in North America and the EU do not consider the role of corruption to be a major or even minor factor in their country’s cybersecurity resilience. Instead, countries in the Global South must consider the context of corruption and its impact on cybersecurity and critical infrastructure resilience when developing their strategies, as well as learn from the experiences of other states’ adoption of reform initiatives focused on procurement corruption.  

    Global South lessons learned: The Ukrainian response to corruption

    Ukraine offers a case study of how a country challenged by systemic corruption can reduce its impact on network security. In the years leading up to the start of open conflict with Russia in 2014, on more than one occasion senior officials in Ukrainian ministries iterated to the author that they would rather cancel a project than not receive their preference for an expensive network software solution. In at least one case this prevented a donor-supported project from moving ahead as the ministry refused to use a simpler, less expensive software product that was more aligned with their needs and local network. Across several ministries, the practice of procuring the highest-cost network solutions over this period would result in arrears owed to the vendor due to the inability to pay annual fees. At one point, the sales representative of a global network software company told the author that they would not sell new software to a US-funded project if the ministry did not agree to pay off years of outstanding annual license fees owed from past procurement.  

    By late 2014, as the first wave of cyberattacks on Ukraine preceded the Russian military’s annexation of Crimea, most critical state infrastructure had been operating for years without licenses (and the associated updates and patches), even legally purchased software.51 Many institutions were instead paying a fraction of the retail price to obtain pirated versions of software, which conveniently left the bulk of the recorded procurement expenditures for corrupt rent-seeking. This explains how, prior to 2014, an estimated eighty percent of the network software used in Ukrainian private and public enterprises either never had been or no longer was supported by the software’s vendors.52 

    As hackers associated with Russia began cyberattacks in support of the new “special operation” in Ukraine, they targeted local software commonly used in the two countries by exploiting vulnerabilities for which patches had not been installed.53 Most notably, in 2015 the Russian military hacker group Sandworm used Blackenergy-3 malware to temporarily knock out the information networks of three energy distribution companies, denying power to more than 200,000 homes in 2015. The next year, the Industroyer-1 malware was used to target the Kyiv region’s power grid.54 Slovakian cybersecurity firm ESET found that the hackers benefited from knowledge of common post-Soviet electric grid networks and control system software. ESET reported that a major factor in the success of the power grid attacks was the failure of Ukrainian electrical distribution enterprises to change obsolete and unpatched operating system software. 

    In arguably the most damaging cyberattack in history, in 2017 the Sandworm group unleashed the NotPetya wiper malware that specifically targeted a well-publicized vulnerability in Microsoft network software that the company had patched in updates a few months before the attack. At that time, most Ukrainian enterprises were using either pirated or older versions of Microsoft data management software and thus did not receive timely or automatic updates.55 Although Microsoft and other vendors in principle permit operators of pirated software to request and apply updates, this is rarely accomplished and the exploitation of unprotected networks in Ukraine accounted for more than an estimated $10 billion in commercial losses.56 The NotPetya malware is also an example of a software vendor advertising new software updates after an attack. A secondary effect of this disclosure, however, is that it provides hackers a roadmap for similar attacks on other unpatched systems in the Global South.  

    Policy recommendations drawn from Ukraine

    Since 2017, Ukraine has adopted, with mixed results, a range of internal and donor-supported anti-corruption initiatives ranging from the establishment of investigation bureaus to prosecuting state corruption and mounting ad campaigns that promote good governance.57 One of the most well-known developments, which also had an outsized impact on software procurement corruption, is the launch of a national e-government tool for public procurement.58 A public/private-administered electronic platform for government tenders, ProZorro, which means “through transparency” in Ukrainian, began operating with more than 300 private suppliers in 2016. ProZorro largely put an end to backroom procurement processes in Ukraine by making bidding and decision-making available to the public, which reduces opportunities for rent-seeking.59

    Over the next four years, additional legislative and operational improvements were made to ProZorro, including integrating the role of tax authorities directly onto the platform to provide additional oversight for fraudulent pricing and hidden kickback schemes. By gaining private sector support early in its development, ProZorro was able to move the government’s IT infrastructure purchases onto a platform by 2019, which by then was facilitating $22 billion worth of tenders across the government.60 In a sign of trust in the transparency and efficiency of ProZorro, the World Bank has also began conducting its own Ukrainian procurement through the platform.61 

    In 2022, the Computer Emergency Response Team of Ukraine (CERT) reported a total of 2,194 investigated malware attacks, twenty-five percent of which targeted government systems, with at least a dozen cases in which the malware was detected on critical infrastructure information systems.62 Nonetheless, the work of the CERT, bolstered by robust private sector partnerships with software developers, led to quick responses to patch identified vulnerabilities before malware could spread and result in significant network outages. The result of this new capacity has been the prevention of cyber-induced infrastructure outages such as the electric grid collapses that plagued Ukraine in 2015-2016.63  

    In the years following the NotPetya attack, Ukrainian public and private organizations began addressing old debts to network software vendors while using the ProZorro platform for new IT procurement. As a result, the country’s state-owned critical infrastructure operators were forced to pursue open tenders on a public-private run platform while network software companies returned to selling licenses to the state enterprises. The author was told by senior officials at Ukraine’s State Special Communications Service (SSCS) that they estimated the share of pirated and unsupported software on the country’s networks had dropped from more than eighty percent at the start of the conflict with Russia in 2014 to only twenty percent in 2020. 

    While state enterprises have been required to make transparent software purchases since 2020, anti-corruption progress in the private sector is less certain. As part of the 2022 Russian cyberattacks on Ukraine, the Mandiant cybersecurity firm found that Russian military intelligence hackers likely uploaded “trojanized” versions of Microsoft software on torrent sites popular with Ukrainians.64The malware was part of the Ukrainian language packs that, if selected, would perform reconnaissance on a system and install further malware as needed.  

    The commitment of state critical infrastructure in Ukraine to rapidly expand licensed software on their networks also drew the interest of large international software vendors that saw Ukraine as ground zero in identifying new malware.65 Therefore, as Ukrainian public and private sector enterprises pursued legitimate purchases of licensed software, they also found that vendors were just as motivated to repair relationships with Ukraine’s large network operators. A benefit that few could have predicted in 2016 at the start of Ukraine’s anticorruption agenda is the role that the return of licensed software vendors would have in countering the much larger volume of cyberattacks that accompanied the 2022 Russian invasion. The major network software vendors, such as Microsoft and Cisco, established computer response and threat intelligence teams in Ukraine as part of their effort to identify and mitigate new threats to their licensed software before the malware targeting Ukraine could become a global problem.66

    The transformation of Ukrainian cybersecurity resilience over the five years between the last of the most harmful cyberattacks on the country (WannaCry and NotPetya) in 2017 and the resilience in the face of the relentless wave of malware attacks that accompanied Russia’s 2022 full-scale invasion suggests governments can proactively make progress against serious systemic vulnerabilities. Nonetheless, the anti-corruption approach must be relentless to succeed. For example, it was no surprise when in late 2023 national anti-corruption investigators uncovered a large IT software kickback where two senior SSCS officials had falsely categorized some procurement as classified, keeping it from being posted on the ProZorro site.67  

    Overall, the Ukrainian experience suggests that countries burdened with systemic corruption should integrate procurement reform into their cybersecurity measures to mitigate the impact of cultures across the Global South that have promoted or looked the other way at the use of pirated or unsupported software.  

    Addressing corruption in cybersecurity strategies

    The decision by a dozen of the world’s most influential institutions promoting international cybersecurity not to address the threat of systemic corruption in their 2018 Guide to Developing a National Cybersecurity Strategy continues to be echoed in advisory and technical assistance offered to countries in the Global South. A recent example is the removal of considerations for sectoral vulnerability to procurement corruption in the World Bank’s 2023 influential sectoral cyber capability maturity model (C2M2) assessment tool, which was originally present in the pioneering PRoGReSS sectoral C2M2 developed by Tel Aviv University that the assessment tool is based upon.68 

    It is clear from the Ukrainian case study that neglecting issues of corruption in software procurement may result in overlooking an important lever for reducing overall cyber vulnerabilities. While officials in both the Global South and Global North tend to avoid public discussions of corruption, Ukraine’s IT procurement transparency reform offers cybersecurity policymakers a more targeted and politically acceptable policy goal. Certainly, the absence of guidance on IT procurement corruption is leaving cybersecurity strategists in countries challenged by systemic corruption without inspirational goals or advice on mitigating a key threat to their critical infrastructure networks.  

    As profiled in this analysis, Pakistan offers a cogent example of a country seeking to address its vulnerability to IT procurement corruption. Just two years after a Russian ransomware organization gained complete access to the revenue service’s new data center riddled with unsupported software, the Ministry of Foreign Affairs official responsible for cybersecurity championed the need to adopt national policies in line with the 2018 Guide to Developing a National Cybersecurity.69 The Guide offers a robust set of recommendations and certainly should influence Pakistan’s implementation of its 2021 national cybersecurity strategy, but a government that witnessed first-hand how procurement corruption undermines critical infrastructure cybersecurity would also have benefited from the inclusion of guidance and materials on targeted procurement anticorruption measures—advice not found in the 2018 Guide. 

    The dramatic turnaround in the resilience of Ukrainian networks demonstrates the importance of cybersecurity strategies that include the adoption of external and transparent procurement platforms for critical infrastructure enterprise software and technology. As with any capacity-building measure, a cybersecurity anti-corruption initiative could start small as countries struggle to wrestle public procurement from rent-seeking interest groups. A national public tender system that covers all procurement, such as Ukraine’s ProZorro, is an ambitious goal that requires years to develop and operationalize. Nevertheless, national cybersecurity strategies could promote more limited platforms focused on critical infrastructure enterprise procurement from the handful of network software providers serving the market.  

    IT procurement reform success depends on the degree to which sectoral or national institutions introduce public-private collaboration, transparency, and autonomy into decision-making processes that currently happen in the backrooms of state bureaucracies. A failed approach was demonstrated in Kenya when the centralization of IT procurement within a single ministry led to the doubling or tripling of prices for key technologies negotiated by newly empowered senior officials.70 Kenya’s cybersecurity strategists should be credited with seeking to address the vulnerabilities linked to IT procurement processes. Moreover, they were proposing solutions in a sphere (IT procurement reform) that international donors and cybersecurity consultants continue to avoid.  

    The most durable solution is for national cybersecurity strategies to begin to address procurement processes to remove the role of illicit rent-seekers in transactions. Kenya’s failed 2019 centralization of state IT procurement is an example of how many countries in the Global South have only adopted narrow measures limiting the impact of reform to just IT procurement. The next step would be to further limit that procurement to transparent and external electronic tender platforms modeled on Ukraine’s ProZorro system. The e-tender process would serve to transform the country’s critical infrastructure networks by shifting procurement to licensed and updated network software while attracting increased software vendor competition because sales revenues are no longer flowing back to rent-seeking IT administrators.  

    A shift in national cybersecurity strategies toward the adoption of e-tender platforms can be facilitated by the rapid growth in e-governance across the Global South. The first generation of e-tender platforms, like ProZorro, were “semi-distributed” to the degree that public and private entities supervise their analytical dashboards across the platform.71 The growing role of blockchain technology in creating transparent contracts across peer-to-peer networks will certainly transform the next generation of transparent procurement platforms.  

    Addressing IT procurement vulnerabilities can also build on existing resilience measures in national cybersecurity strategies. For example, cybersecurity awareness campaigns championed in existing national strategies can be leveraged to have a potential anti-corruption role. Their messaging could target not only individuals but also enterprises while highlighting the vulnerabilities that follow the choice to adopt pirated or other unsupported software. A generation of IT managers, who spent decades downloading pirated software for their personal use, must understand that those practices are no longer safe in the era of the ransomware gang and their recent turn toward targeting the Global South.  

    Another strategy that often is proposed as a cybersecurity solution for budget-constrained institutions in the Global South is open-source software (OSS). Paying for commercial software is not the only means to reduce the portion of pirated software on an enterprise’s network. OSS software has long been the building blocks of the world’s dominant network software sold by private vendors, and for more than two decades governments in the Global North have been adopting requirements mandating that officials first seek OSS alternatives before purchasing commercial software for their critical infrastructure networks.72 Nonetheless, malware has increasingly been targeting open-source solutions and a policy shift toward OSS in the Global South must be part of a wider government-led effort to recognize the need to support OSS as another element of critical infrastructure.73

    As countries continue to innovate in measures to raise transparency in the procurement of IT software and hardware, donors should reconsider their past hesitancy to advocate for anti-corruption measures as part of the cybersecurity strategies they support. The absence of even indirect references to the role of corruption in national cybersecurity strategies across the Global South is inexplicable given the serious cybersecurity risks that are present for countries standing up large information networks founded on pirated or unsupported software. Given the significant challenges developing countries face in responding to cyber threats, they cannot afford to simultaneously overlook the vulnerability associated with corrupt procurement practices.  

    Conclusion

    Developing countries are continuing to make progress in digitizing governance and trade while simultaneously raising transparency in their public expenditures. Nevertheless, 2022’s country-wide network outages across the Global South suggest this capacity has been built on networks left vulnerable by unlicensed and unsupported software. As governments and critical infrastructure in the Global South prepare for the next stage in ICT development, they must prioritize policies that can reduce corruption in the critical procurement of the network software responsible for protecting their country’s nascent cyberspace. As Adam and Fazekos argue, reform-minded governments and donors throughout the Global South have adopted ICT practices in the fight against national corruption but have developed a blind spot to the role corruption plays in undermining the security of this rapid digitization.74  

    Cybersecurity strategists working in the Global South must reevaluate a decade of national strategies that largely replicated those from the Global North. It is no longer safe to assume that cyber best practices are divorced from the harsh reality of addressing systemic corruption. At a minimum, national cybersecurity strategies must, for the first time, identify procurement corruption as a cybersecurity risk. Moreover, countries challenged by systemic corruption and under-resourced governance should consider more limited initiatives, such as creating transparent and autonomous IT tender processes for the most critical state sectors. The digital integration of the Global South offers its citizens greater prosperity and transparency in governance, but as decades of past economic development have demonstrated, the equity and reliability of this new revenue stream will depend on leaders not overlooking the adverse impact corruption can play in the social outcomes of their digital development. 


    About the author

    Robert Peacock is a nonresident senior fellow at the Cyber Statecraft Initiative of the Atlantic Council’s Digital Forensic Research Lab, where his work builds on his past role supporting the highly correlated goals of reducing corruption in critical infrastructure procurement and developing cybersecurity resilience in the Global South. Peacock is Senior Strategic Technical Advisor at DAI Global advising on cybersecurity development programs, funded by the US Agency for International Development (USAID), across a half dozen countries in Eastern Europe and Eurasia. Peacock’s past advisory roles have included developing assistance programs in Armenia, Mozambique, Morocco, while more recently serving as a co-creator for USAID’s first bilateral cybersecurity program (Ukraine) and first regional cyber pathway for women program (Balkans).


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    Janosch Delcker, “Ransomware: Cyber criminals are coming for Global South,” Deutsch Welle, August 28, 2022, https://www.dw.com/en/ransomware-cyber-criminals-are-coming-for-the-global-south/a-62917234.
    2    Although the term Global South is a preferred term for those nations most challenged in economic growth and good governance, there is no set definition of its membership. This policy brief defines the Global South not by geography or GNP, but rather by any country that is not one of the top 60 countries in Transparency International’s Global Corruption Perceptions Index (CPI). Therefore, geography is not the defining feature that explains why Uruguay (Latin America’s richest country and 14th ranked by the CPI index) is defined as Global North while Hungary is not. 
    3    Charlette Donalds, Corlane Barclay, and Kweku-Muata Osei-Bryson, Cybercrime and Cybersecurity in the Global South, (London: Taylor & Francis, Routledge, 2022).
    4    “Global Cyberattacks Continue to Rise with Africa and APAC suffering most,” Checkpoint Research, April 27, 2023, https://blog.checkpoint.com/research/global-cyberattacks-continue-to-rise/.
    5    Nabilah S., “The Vanuatu ransomware attack serves as a warning to others,” TechinPacific, May 2023, https://www.techinpacific.com/the-vanuatu-ransomware-attack-serves-as-a-warning-to-others.
    6    Nikhilesh De, “State of crypto: Ransomware is a crypto problem,” Coindesk, February 10, 2022, https://www.coindesk.com/policy/2021/06/08/state-of-crypto-ransomware-is-a-crypto-problem/
    7    Sheera Frenkel, “Hackers find ‘ideal testing ground’ for attacks: Developing countries” The New York Times, July 2, 2017,  https://www.nytimes.com/2017/07/02/technology/hackers-find-ideal-testing-ground-for-attacks-developing-countries.html.
    8    Jai Viljayan, “Majority of ransomware attacks last year exploited old bugs.,” Dark Reading, February 20, 2023, https://www.darkreading.com/cyberattacks-data-breaches/dozens-of-vulns-in-ransomware-attacks-offer-adversaries-full-kill-chain.
    9    Paul Tassi, “Why Microsoft is giving away Windows 10 to Pirates,” Forbes, March 19, 2015, https://www.forbes.com/sites/insertcoin/2015/03/19/why-microsoft-is-giving-away-windows-10-to-pirates/?sh=51c6e4ae712f.
    10    Victor DeMarines, “Look before you click: The risk of buying pirated software,” Revenera, January 17, 2020, https://www.revenera.com/blog/software-monetization/look-before-you-click-the-risk-of-buying-pirated-software/
    11    Sope Williams-Elegbe, “Systemic corruption and public procurement in developing countries: are there any solutions?,” Journal of Public Procurement (2018) vol. 18, no. 2, 131-147,  https://doi.org/10.1108/JOPP-06-2018-009.
    12    “Global Cyberattacks Continue to Rise with Africa and APAC suffering most,” Checkpoint Research.
    13    “Global Connectivity Report 2022, ITU, 2022,  https://www.itu.int/itu-d/reports/statistics/global-connectivity-report-2022.
    14    John Jack, “E-governance in Vanuatu: A whole-of-government approach,” Asia Pacific Journal of Public Administration (2018), https://www.tandfonline.com/doi/abs/10.1080/23276665.2018.1545354.
    15    Valentina Rotondi, “Leveraging mobile phones to attain sustainable development,” Proceedings of the National Academy of Sciences, June 1, 2020, https://www.pnas.org/doi/full/10.1073/pnas.1909326117.
    16    Raif Bahrini, and Alaa A. Qaffas, “Impact of information and communication technology on economic growth: Evidence from developing countries,” Economies (2019)vol. 7, no. 1, https://www.mdpi.com/2227-7099/7/1/21
    17    Andrea Willige, “Here’s Why Africa is the World Leader in Digital and Mobile Banking, World Economic Forum, November 21, 2023, https://www.weforum.org/agenda/2023/11/africa-digital-mobile-banking-financial-inclusion.
    18    Raif Bahrini and Alaa A. Qaffas, “Impact of information and communication technology on economic growth.” 
    19    Laura Wood, “The future of African fiber markets 2023,” BusinessWire, June 15, 2023, https://www.tandfonline.com/doi/abs/10.1080/23276665.2018.1545354
    20    Extending Reach and Increasing Impact: Information and Communications for Development, World Bank, 2009, https://documents1.worldbank.org/curated/en/645821468337815208/pdf/487910PUB0EPI1101Official0
    Use0Only1.pdf
    21    Ankit Fadia, Mahir Nayfeh, and John Noble, “Follow the leaders: How governments can combat intensifying cybersecurity risks,” McKinsey & Company, September 16, 2020, https://www.mckinsey.com/industries/public-and-social-sector/our-insights/follow-the-leaders-how-governments-can-combat-intensifying-cybersecurity-risks
    22    Temitaya Jalyeola, “73% Africans in rural areas lack internet access,” Punch, December 19, 2022, https://punchng.com/73-africans-in-rural-areas-lack-internet-access.
    23    World Bank Data – Vanuatu,  The World Bank, 2020, https://data.worldbank.org/country/VU.
    24    Jason Rainbow, “Starlink approved in Nigeria and Mozambique, says Elon Musk,” Spacenews, May 27, 2022, https://spacenews.com/starlink-approved-in-nigeria-and-mozambique-says-elon-musk/.
    25    Jens Ivo Engels, “Corruption and anticorruption in the era of modernity and beyond” in Ronald Kroeze, Andre Vitoria, G. Geltner (Eds.),  Anticorruption in History,  (Oxford: Oxford University Press, 2018).
    26    Sandipto Dasgupta, “The power of corruption,” Comparative Studies of South Asia, Africa, and the Middle East (2019), vol. 29, no. 3, https://doi.org/10.1215/1089201X-7885524.
    27    Dante Deo, “Mega millions lost to software procurement fraud and error,” ITWeb, May 9, 2023, https://www.itweb.co.za/article/mega-millions-lost-to-software-procurement-fraud-and-error/KBpdg7pmNKjMLEew
    28    Jonathan Klaaren et al., “Public Procurement and Corruption in South Africa,” Public Affairs Research Institute, October 2022, https://ideas.repec.org/p/osf/osfxxx/bej9z.html
    29    Rajeev K. Goel and Michael A. Nelson, “Determinants of software piracy: economics, institutions, and technology,” Journal of Technology Transformation (2009), 34, https://link.springer.com/article/10.1007/s10961-009-9119-1.
    30    “Software Management: Security Imperative, Business Opportunity,” Business Software Alliance, June 2018, https://www.bsa.org/files/2019-02/2018_BSA_GSS_Report_en_.pdf.
    31    Brian Prince, “Software piracy costly to enterprise security, research finds,” Security Week, March 20, 2014, https://www.securityweek.com/software-piracy-costly-enterprise-security-research-finds/
    32    “Drivers of Corruption: A Brief Review,” The World Bank, 2014, https://documents1.worldbank.org/curated/en/808821468180242148/text/Drivers-of-corruption.txt.
    33    David P. Nolan, “Procurement fraud – an old fraud flourishing in emerging markets and costing businesses billions,” Financier Worldwide Magazine, September 2017, https://www.financierworldwide.com/procurement-fraud-an-old-fraud-flourishing-in-emerging-markets-and-costing-businesses-billions#.Y-jdcC-B3X8.
    35    Atanu Lahiri, “Revisiting the incentive to tolerate illegal distribution of software products,” Decision Support Systems (2012)vol. 52, no. 2, 2012, https://doi.org/10.1016/j.dss.2012.01.007.
    36    Peerayuth Charoensukmongkol et al., “Analyzing software piracy from supply and demand factors: The competing roles of corruption and economic wealth,” International Journal of Technoethics (2012), vol. 3 no. 1, https://econpapers.repec.org/article/iggjt0000/v_3a3_3ay_3a2012_3ai_3a1_3ap_3a28-42.htmhttps://econpapers.repec.org/article/iggjt0000/v_3a3_3ay_3a2012_3ai_3a1_3ap_3a28-42.htm.
    37    Antonio R. Andres and Simplice A. Asongu, “Fighting software piracy: Which governance tools matter in Africa?” Journal of Business Ethics (2013), 118, https://www.econstor.eu/bitstream/10419/87824/1/730896803.pdf
    38    Rana Shahbaz, “Neglect caused FBR cyber-attack,” The Express Tribune, August 22, 2021, https://tribune.com.pk/story/2316604/neglect-caused-fbr-cyber-attack.
    39    Jehangir Nasir, “US accuses FBR of using pirated software,” ProPakistani. January 30, 2020, https://propakistani.pk/2020/01/30/us-accuses-fbr-of-using-pirated-software-report.
    40    Haroon Hayder, “Here’s the real reason why FBR system got hacked,” ProPakistani, August 20, 2021, https://propakistani.pk/2021/08/20/heres-the-real-reason-why-fbrs-system-got-hacked/.
    41    Cynthia Brumfield, “2022 was the year of crippling ransomware attacks on small countries,” README Blog, December 16, 2022, https://readme.security/2022-was-the-year-of-crippling-ransomware-attacks-on-small-countries-e63b5bc3b756.
    42    “Microsoft investigates Iranian attacks against the Albanian Government,” Microsoft, September 8, 2022, https://www.microsoft.com/en-us/security/blog/2022/09/08/microsoft-investigates-iranian-attacks-against-the-albanian-government.
    43    Christopher Cottrell, “Vanuatu officials turn to phone books and typewriters, one month after cyberattack,” The Guardian, November 29, 2022, https://www.theguardian.com/world/2022/nov/29/vanuatu-officials-turn-to-phone-books-and-typewriters-one-month-after-cyber-attack
    44    Matt Burgess, “Conti’s attack against Costa Rica sparks a new ransomware era, Wired, June 12, 2022, https://www.wired.co.uk/article/costa-rica-ransomware-conti.
    45    Andy Greenberg, “Ransomware payments hit a record $1.1 billion in 2023,” Wired, February 7, 2024, https://www.wired.com/story/ransomware-payments-2023-breaks-record.
    46    Bill Toulas, “Ransomware payments drop to record low as victims refuse to pay,” Bleeping Computer, January 29, 2024, https://www.bleepingcomputer.com/news/security/ransomware-payments-drop-to-record-low-as-victims-refuse-to-pay/.
    47    The review of 23 national cybersecurity strategies consisted of 14 documents published by countries in Africa (Botswana, Benin, Burkina Faso, Gambia, Ghana, Kenya [draft], Malawi, Mauritius, Mozambique [draft], Nigeria, Sierra Leone, South Africa, Tanzania, and Uganda); and 9 documents published by countries in the Asia-Pacific region (Afghanistan, Bangladesh, China, India, Malaysia, Nepal [draft], Philippines, Samoa, and Vanuatu).
    48    The International Telecommunication Union (ITU), The World Bank, Commonwealth Secretariat (ComSec), the Commonwealth Telecommunications Organisation (CTO), NATO Cooperative Cyber Defence Centre of Excellence (NATO CCD COE). 2018. Guide to Developing a National Cybersecurity Strategy – Strategic engagement in cybersecurity. Creative Commons Attribution 3.0 IGO (CC BY 3.0 IGO). https://www.itu.int/hub/publication/d-str-cyb_guide-01-2018/.
    49    Michael Hill, “Governments should not pay ransoms, International Counter Ransomware initiative members agree,” CSO, November 2, 2023, https://www.csoonline.com/article/657877/governments-should-not-pay-ransoms-international-counter-ransomware-initiative-members-agree.html.
    50    “International Counter Ransomware Initiative Joint Statement,” The White House, November 1, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/01/international-counter-ransomware-initiative-2023-joint-statement.  
    51    Olena Removska and Robert Coalson, “Ukraine’s trade privileges on line over intellectual piracy concerns,” Radio Free/Radio Liberty, March 14, 2013, https://www.rferl.org/a/ukraine-sanctions-intellectual-property/24928537.html.
    52    “Software management: Security Imperative, Business Opportunity,” Business Software Alliance. 
    53    Patrick Tucker, “Russia launched cyberattacks against Ukraine before ship seizures,” Defense One, December 7, 2018, https://www.defenseone.com/technology/2018/12/russia-launched-cyber-attacks-against-ukraine-ship-seizures-firm-says/153375/.
    54    Mark Temnycky, “Russian Cyber Threat: US Can Learn from Ukraine,” Atlantic Council, May 27, 2021, https://www.atlanticcouncil.org/blogs/ukrainealert/russian-cyber-threat-us-can-learn-from-ukraine.
    55    Olena Removska and Robert Coalson, “Ukraine’s trade privileges on line over intellectual piracy concerns.”
    56    Andy Greenberg, “The untold story of NotPetya, the most devastating cyberattack in history,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
    57    “Anti-Corruption Reforms in Ukraine: Pilot 5th Round of Monitoring Under the OECD Istanbul Anti-Corruption Action Plan,” OECD, 2022. https://www.oecd-ilibrary.org/docserver/b1901b8c-en.pdf?expires=1707274542&id=id&accname=guest&checksum=E8B9D84D2CAB41F47CCF08E6A475AB17.
    58    Christopher Yukins and Steven Kelman, “Overcoming corruption and war: Lessons from Ukraine’s ProZorro procurement system,” NCMA Contract Management Magazine, July 2022, https://www.hks.harvard.edu/publications/overcoming-corruption-and-war-lessons-ukraines-prozorro-procurement-system.
    59    Andre Petheram, Walter Pasquarelli, and Richard Stirling, “The next generation of anti-corruption tools: Big data, open data, and artificial intelligence,” Oxford Insights, 2022, https://ec.europa.eu/futurium/en/system/files/ged/researchreport2019_thenextgenerationofanti-corruptiontools_bigdataopendataartificialintelligence.pdf
    60    “Guidelines for non-Ukrainian suppliers on participation in public procurement tenders in Ukrainian,” European Bank for Reconstruction and Development,  November 2020, https://infobox.prozorro.org/upload/files/main/1398/547/gpa-guide-ukraine-fin-update2020-2.pdf.
    61    Nataliya Synyutka, Oksana Kurylo, and Mariya Bondarchuk, “Digitalization of public procurement: The case study of Ukraine,” Annales Oeconomia (2019), https://journals.umcs.pl/h/article/viewFile/9273/6961.  
    62    “In 2022, CERT-UA reports 2,194 cyberattacks,” Ukraine Media Center, January 17, 2023, https://mediacenter.org.ua/in-2022-cert-ua-reports-2-194-cyberattacks-a-quarter-of-them-against-government-agencies-state-service-for-special-communications/.
    63    Jon Bateman, “Russia’s wartime cyber operations in Ukraine: Military impacts, influences, and implications,” Carnegie Endowment for International Peace, 2022, https://carnegie-production-assets.s3.amazonaws.com/static/files/Bateman_Cyber-FINAL21.pdf.
    64    “Trojanized Windows 10 Operating System Installers Targeted Ukrainian Government,” Mandiant Intelligence, “December 15, 2022, https://cloud.google.com/blog/topics/threat-intelligence/trojanized-windows-installers-ukrainian-government/.
    65    Emma Schroeder and Sean Dack, “A parallel terrain: Public-private defense of the Ukrainian information environment,” Atlantic Council, February 27, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/a-parallel-terrain-public-private-defense-of-the-ukrainian-information-environment.
    66    Robert Peacock, “How Ukraine has defended itself against cyberattack – Lessons for the US,” The Conversation, April 5, 2022, https://theconversation.com/how-ukraine-has-defended-itself-against-cyberattacks-lessons-for-the-us-180085.  
    67    Daryna Antoniuk, “Second top Ukrainian cyber official arrested amid corruption probe,” The Record, November 27, 2023, https://therecord.media/second-cyber-official-detained-zhora.
    68    “Sectoral Cybersecurity Maturity Model,” The World Bank, June 2023, https://documents1.worldbank.org/curated/en/099062623085028392/pdf/P17263707c36b702309f7303dbb7266e1cf.pdf
    69    Shahrukh Khan, “Cybersecurity Challenges in Pakistan: An Assessment,” Science Diplomacy, March 2022, https://www.researchgate.net/publication/360256123_Cyber_Security_Challenges_in_Pakistan_An_Assessment.
    70    Wanjohi Githae, “Concern over graft as state centralizes IT procurement,” Nation. January 12, 2019, https://nation.africa/kenya/news/concern-over-graft-as-state-centralises-it-procurement–127312.
    71    Pedro Bustamante, et al., “Government by code? Blockchain applications to public sector governance,” Frontiers in Blockchain (2022), vol. 5, https://doi.org/10.3389/fbloc.2022.869665
    72    Benjamin J. Birkinbine, Incorporating the Digital Commons: Corporate Involvement in Free and Open Source Software, (London: University of Westminster Press, 2020), https://library.oapen.org/bitstream/handle/20.500.12657/37226/1/incorporating-the-digital-commons.pdf.
    73    Stewart Scott, et al., “Avoiding the success trap: Toward policy for open-source software as infrastructure,” Atlantic Council, August 8, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/
    74    Isabelle Adam and Mihaly Fazekas, “Are emerging technologies helping win the fight against corruption? A review of the state of the evidence,” Information Economics and Policy (2021), vol. 57, December 2021, https://www.sciencedirect.com/science/article/pii/S016762452100038X.

    The post The impact of corruption on cybersecurity: Rethinking national strategies across the Global South   appeared first on Atlantic Council.

    ]]>
    Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains https://www.atlanticcouncil.org/insight-impact/in-the-news/transatlantic-economic-statecraft-report-cited-in-the-international-cybersecurity-law-review-on-semiconductor-supply-chains/ Tue, 25 Jun 2024 13:57:00 +0000 https://www.atlanticcouncil.org/?p=779317 Read the journal article here.

    The post Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains appeared first on Atlantic Council.

    ]]>
    Read the journal article here.

    The post Transatlantic Economic Statecraft Report cited in the International Cybersecurity Law Review on semiconductor supply chains appeared first on Atlantic Council.

    ]]>
    Designing a blueprint for open, free and trustworthy digital economies https://www.atlanticcouncil.org/blogs/econographics/designing-a-blueprint-for-open-free-and-trustworthy-digital-economies/ Fri, 14 Jun 2024 21:21:25 +0000 https://www.atlanticcouncil.org/?p=773476 US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

    The post Designing a blueprint for open, free and trustworthy digital economies appeared first on Atlantic Council.

    ]]>
    More than half a century into the information age, it is clear how policy has shaped the digital world. The internet has enabled world-changing innovation, commercial developments, and economic growth through a global and interoperable infrastructure. However, the internet is also home to rampant fraud, misinformation, and criminal exploitation. To shape policy and technology to address these challenges in the next generation of digital infrastructure, policymakers must confront two complex issues: the difficulty of massively scaling technologies and the growing fragmentation across technological and economic systems.

    How today’s policymakers decide to balance freedom and security in the digital landscape will have massive consequences for the future. US digital policy must be aimed at improving national security, defending human freedom, dignity, and economic growth while ensuring necessary accountability for the integrity of the technological bedrock.

    Digital economy building blocks and the need for strategic alignment

    Digital policymakers face a host of complex issues, such as regulating and securing artificial intelligence, banning or transitioning ownership of TikTok, combating pervasive fraud, addressing malign influence and interference in democratic processes, considering updates to Section 230 and impacts on tech platforms, and implementing zero-trust security architectures. When addressing these issues, policymakers must keep these core building blocks of the digital economy front and center:

    • Infrastructure: How to provide the structure, rails, processes, standards, and technologies for critical societal functions;
    • Data: How to protect, manage, own, use, share, and destroy open and sensitive data; and
    • Identity: How to represent and facilitate trust and interactions across people, entities, data, and devices.

    How to approach accountability—who is responsible for what—in each of these pillars sets the stage for how future digital systems will or will not be secure, competitive, and equitable.

    Achieving the right balance between openness and security is not easy, and the stakes for both personal liberty and national security amid geostrategic competition are high. The open accessibility of information, infrastructure, and markets enabled by the internet all bring knowledge diffusion, data flows, and higher order economic developments, which are critical for international trade and investment.

    However, vulnerabilities in existing digital ecosystems contribute significantly to economic losses, such as the estimated $600 billion per year lost to intellectual property theft and the $8 trillion in global costs last year from cybercrime. Apart from direct economic costs, growing digital authoritarianism threatens undesirable censorship, surveillance, and manipulation of foreign and domestic societies that could not only undermine democracy but also reverse the economic benefits wrought from democratization.

    As the United States pursues its commitment with partner nations toward an open, free, secure internet, Washington must operationalize that commitment into specific policy and technological implementations coordinated across the digital economy building blocks. It is critical to shape them to strengthen their integrity while preventing undesired fragmentation, which could hinder objectives for openness and innovation.

    Infrastructure

    The underlying infrastructure and technologies that define how consumers and businesses get access to and can use information are featured in ongoing debates and policymaking, which has led to heightened bipartisan calls for accountability across platform operators. Further complicating the landscape of accountability in infrastructure are the growing decentralization and aggregation of historically siloed functions and systems. As demonstrated by calls for decentralizing the banking system or blockchain-based decentralized networks underlying cryptocurrencies, there is an increasing interest from policymakers and industry leaders to drive away from concentration risks and inequity that can be at risk in overly centralized systems.

    However, increasing decentralization can lead to a lack of clear lines of responsibility and accountability in the system. Accountability and neutrality policy are also impacted by increasing digital interconnectedness and the commingling of functions. The Bank of the International Settlement recently coined a term, “finternet,” to describe the vision of an exciting but complexly interconnected digital financial system that must navigate international authorities, sovereignty, and regulatory applicability in systems that operate around the world.

    With this tech and policy landscape in mind, infrastructure policy should focus on two aspects:

    • Ensuring infrastructure security, integrity, and openness. Policymakers and civil society need to articulate and test a clear vision for stakeholders to coordinate on what openness and security across digital infrastructure for cross-economic purposes should look like based on impacts to national security, economic security, and democratic objectives. This would outline elements such as infrastructure ecosystem participants, the degree of openness, and where points for responsibility of controls should be, whether through voluntary or enforceable means. This vision would build on ongoing Biden administration efforts and provide a north star for strategic coordination with legislators, regulators, industry, civil society, and international partners to move in a common direction.
    • Addressing decentralization and the commingling of infrastructure. Technologists must come together with policymakers to ensure that features for governance and security are fit for purpose and integrated early in decentralized systems, as well as able to oversee and ensure compliance for any regulated, high-risk activity.

    Data

    Data has been called the new oil, the new gold, and the new oxygen. Perhaps overstated, each description nonetheless captures what is already the case: Data is incredibly valuable in digital economies. US policymakers should focus on how to surround how to address the privacy, control, and integrity of data, the fundamental assets of value in information economies.

    Privacy is a critical area to get right in the collection and management of information. The US privacy framework is fragmented and generally use-specific, framed for high risk sectors like finance and healthcare. In the absence of a federal-government-wide consumer data privacy law, some states are implementing their own approaches. In light of existing international data privacy laws, US policy also has to account for issues surrounding harmonization and potential economic hindrances brought by data localization.

    Beyond just control of privacy and disclosure, many tech entrepreneurs, legislators, and federal agencies are aimed at placing greater ownership of data and subsequent use in the hands of consumers. Other efforts supporting privacy and other national and economic security concerns are geared toward protecting against the control and ownership of sensitive data by adversarial nations or anti-competitive actors, including regulations on data brokers and the recent divest-or-ban legislation targeted at TikTok.

    There is also significant policy interest surrounding the integrity of information and the systems reliant on it, such as in combating the manipulation of data underlying AI systems and protecting electoral processes that could be vulnerable to disinformation. Standards and research are rising, focused on data provenance and integrity techniques. But there remain barriers to getting the issue of data integrity right in the digital age.

    While there is some momentum for combating data integrity compromise, doing so is rife with challenges of implementation and preserving freedom of expression that have to be addressed to achieve the needed balance of security and freedom:

    • Balancing data security, discoverability, and privacy. Stakeholders across various key functions of law enforcement, regulation, civil society, and industry must together define what type of information should be discoverable by whom and under what conditions, guided by democratic principles, privacy frameworks, the rule of law, and consumer and national security interests. This would shape the technical standards and requirements for privacy tech and governance models that government and industry can put into effect.
    • Preserving consumer and democratic control and ownership of data. Placing greater control and localization protections around consumer data could bring great benefits to user privacy but must also be done in consideration of the economic impacts and higher order innovations enabled from the free flow and aggregation of data. Policy efforts could pursue research and experimentation for assessing the value of data
    • Combating manipulation and protecting information integrity. Governments must work hand in hand with civil society and, where appropriate, media organizations to pursue policies and technical developments that could contribute to promoting trust in democratic public institutions and help identify misinformation across platforms, especially in high-risk areas to societies and democracies such as election messaging, financial services and markets, and healthcare.

    Identity

    Talk about “identity” can trigger concerns of social credit scores and Black Mirror episodes. It may, for example, evoke a sense of state surveillance, criminal anonymity, fraud, voter and political dissident suppression, disenfranchisement of marginalized populations, or even the mundane experience of waiting in line at a department of motor vehicles. As a force for good, identity enables critical access to goods and services for consumers, helps provide recourse for victims of fraud and those seeking public benefits, and protects sensitive information while providing necessary insights to authorities and regulated institutions to hold bad actors accountable. With increasing reliance on digital infrastructure, government and industry will have to partner to create the technical and policy fabric for secure, trustworthy, and interoperable digital identity.

    Digital identity is a critical element of digital public infrastructure (DPI). The United States joined the Group of Twenty (G20) leaders in committing to pursue work on secure, interoperable digital identity tools and emphasized its importance in international fora to combat illicit finance. However, while many international efforts have taken root to establish digital identity systems abroad, progress by the United States on holistic domestic or cross-border digital identity frameworks has been limited. Identity security is crucial to establish trust in US systems, including the US financial sector and US public institutions. While the Biden administration has been driving some efforts to strengthen identity, the democratized access to sophisticatedAI tools increased the threat environment significantly by making it easy to create fraudulent credentials and deepfakes that circumvent many current counter-fraud measures.

    The government is well-positioned to be the key driver of investments in identity that would create the underlying fabric for trust in digital communications and commerce:

    • Investing in identity as digital public infrastructure. Digital identity development and expansion can unlock massive societal and economic benefits, including driving value up to 13 percent of a nation’s gross domestic product and providing access to critical goods and services, as well as the ability to vote, engage in the financial sector, and own land. Identity itself can serve as infrastructure for higher-order e-commerce applications that rely on trust. The United States should invest in secure, interoperable digital identity infrastructure domestically and overseas, to include the provision of secure verifiable credentials and privacy-preserving attribute validation services.
    • Managing security, privacy, and equity in Identity. Policymakers must work with industry to ensure that identity systems, processes, and regulatory requirements implement appropriate controls in full view of all desired outcomes across security, privacy, and equity, consistent with National Institute of Science and Technology standards. Policies should ensure that saving resources by implementing digital identity systems also help to improve services for those not able to use them.

    Technology by itself is not inherently good or evil—its benefits and risks are specific to the technological, operational, and governance implementations driven by people and businesses. This outline of emerging policy efforts affecting digital economy building blocks may help policymakers and industry leaders consider efforts needed to drive alignment to preserve the benefits of a global, interoperable, secure and free internet while addressing the key shortfalls present in the current digital landscape.


    Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council, where Carole will soon be returning as the Special Advisor for Cybersecurity and Critical Infrastructure Policy. This article reflects views expressed by the author in her personal capacity.

    The post Designing a blueprint for open, free and trustworthy digital economies appeared first on Atlantic Council.

    ]]>
    “Reasonable” cybersecurity in forty-seven cases: The Federal Trade Commission’s enforcement actions against unfair and deceptive cyber Practices https://www.atlanticcouncil.org/in-depth-research-reports/report/reasonable-cybersecurity-in-forty-seven-cases-the-federal-trade-commissions-enforcement-actions-against-unfair-and-deceptive-cyber-practices/ Wed, 12 Jun 2024 20:16:00 +0000 https://www.atlanticcouncil.org/?p=817237 The FTC has brought 47 cases against companies for unfair or deceptive cybersecurity practices. What can we learn from them?

    The post “Reasonable” cybersecurity in forty-seven cases: The Federal Trade Commission’s enforcement actions against unfair and deceptive cyber Practices appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    The Federal Trade Commission (FTC) is a small US government agency whose consumer protection remit is increasingly the starting point to govern the design and operation of a multitude of impactful digital products and services. In the absence of either a comprehensive, federal-level consumer privacy or data security law in the US, the FTC has used its legal authority to police “unfair and deceptive acts and practices”1 in commerce to become the lead federal enforcer for the privacy and security of consumer data.

    This report provides a historical examination of forty-seven FTC enforcement actions related to unfair and deceptive acts and practices related to cybersecurity from 2002 to 2024, including the cybersecurity practices (or lack thereof) that caused the FTC to pursue each case and the requirements it placed upon companies to establish a comprehensive information security program in response. This analysis reveals how the FTC, armed with a mandate from 1914, has effectively constructed a body of “reasonable” cybersecurity practices and clear precedent for their enforcement.

    Throughout these cases, the FTC’s central identity as a consumer protection agency is clear. The commission’s cyber enforcement, in part due to the scope of its authorities, has focused heavily on addressing instances of insecurity that drive harm, whether due to the volume or nature of consumer data at risk. The FTC has levied complaints against companies of all shapes and sizes for an equally diverse range of security bad practices, from allowing users to share credentials to failing to monitor technical vulnerability reports. Many of these complaints hinge on questions of “reasonable” cyber practices to protect consumers from harm or to uphold promises made in privacy policies and similar; thus, compiling the complaints begins to illuminate a body of baseline reasonable cybersecurity practices, as well as illustrate the persistence, over twenty years, of certain unsafe practices.

    Over most of the last two decades, the agency’s language for consent decrees—agreements that prescribe corrective actions that must be undertaken by the companies that agree to them—changed little between cases, despite the diversity of companies and practices that these decrees addressed. This trend changed following a 2019 ruling from the Eleventh Circuit that the FTC’s data security consent decree against LabMD was “unenforceabley vague.”2 Since then, consent decrees have become more specific and tailored to the security failures that instigated the FTC’s complaint. Yet, even these new decrees illustrate the ways in which the FTC’s consent decrees combine both general and specific obligations to build requirements that can endure across the changes in technology and security practice that inevitably occur during a consent decree’s twenty-year lifetime.

    This paper reviews these trends from these forty-seven cases in light of recent policy debates over resolving persistent cyber insecurity, including the Biden administration’s 2023 proposal to implement liability for vendors of insecure software3 and recent proposals to codify data security standards as part of a federal consumer privacy law.4 Many of these debates involve questions of how to define good and bad behavior with respect to cybersecurity and how to balance specificity and adaptability in the design of such frameworks. Studying the standards embedded within the “common law”5 for consumer data security that the FTC has built through its cases offers an immediately useful foundation for the creation of cyber standards in the software liability context and beyond.

    This analysis also illustrates some of the challenges with this model— the FTC as the stopgap federal enforcer for consumer cybersecurity—not least of which is the fact that the agency has had only forty-seven cases in which to articulate reasonable practices for twenty years’ worth of blistering technological and commercial progress in consumer technology. The arrow of change in digital technology points toward yet wider dependence on common architectures and broadly adopted platforms, so, the paper briefly concludes with consideration of whether and how the FTC and future cyber policy mechanisms can adapt to meet this challenge.

    Introduction

    Despite the growing importance of computing technology and the increasing sensitivity of the data collected by myriad systems from social media websites to wearable tech, the United State lacks a federal regulator with the explicit authority to set baseline cybersecurity standards for systems that hold and process sensitive consumer data.

    With no singular federal data security regime in place, legal cybersecurity requirements have come from a patchwork of alternate sources including state-level privacy laws, sector-specific privacy and security rules, reporting requirements, and cybersecurity standards for government contractors.6 Many states have passed privacy laws that often include requirements for companies processing personal data to abide by certain cybersecurity standards.“7 At the federal level, both the healthcare and financial sectors are subject to specific regimes governing privacy and data security—the Health Insurance Portability and Accountability Act (HIPAA)8 and the Gramm-Leach Bliley Act (GLBA).9 The FTC dictates cybersecurity protections for certain types of data under these laws as well others under the Fair Credit Reporting Act (FCRA)10 and the Children’s Online Privacy Protection Act (COPPA).11 The Federal Communications Commission regulates “common carriers“ such as telephone network providers; under the 2015 Open Internet Order, it designated broadband internet access providers as common carriers subject to Title II of the Telecommunications Act, requiring them to adopt new data protection and privacy rules (and excluding them from the FTC’s jurisdiction).12 Other federal entities regulate disclosures—not practices—relating to cyber incidents: the Securities and Exchange Commission (SEC) recently adopted rules requiring publicly traded companies to disclose material cybersecurity incidents for the benefit of their investors;13 and the US Cybersecurity and Infrastructure Agency recently put out proposed rules to implement required reporting under the Cyber Incident Reporting for Critical Infrastructure Act.“14 Thus, different US federal enforcers have bitten off different pieces of the cybersecurity ecosystem, regulating specific types of data, technologies, or behaviors such as disclosures of cyber incidents.

    For the consumer data and consumer technologies that remain, the main stopgap protection comes in the form of the FTC’s consumer protection authority. Section 5a of the FTC Act grants the agency broad latitude to hold entities liable for “unfair and deceptive acts and practices” in commerce.15 It is this authority that the FTC has used to become, in some sense, the United States’ cyber regulator of last resort.

    This report is concerned with the question of how the FTC has used this stopgap authority. What types of companies and failures has the agency prioritized? What practices or behaviors recur as drivers of insecurity in the consumer context? And, what lessons do the FTC’s actions thus far offer for US policymakers considering how to establish a more comprehensive approach to consumer data security? The authors begin with an overview of the FTC itself and the authorities it has used to undertake this stopgap cyber oversight, as well as the nature of the complaints and consent decrees that are the legal tools through which this strategy is realized. Next, the report reviews the methods used to select and analyze the dataset of cases that underpins its analysis, and then presents the findings, identifying practices and remedies put forward by the FTC in the context of specific cases as well as high-level trends and themes that stretch across the cases. Finally, it extrapolates these findings into takeaways for policymakers seeking to design or refine mechanisms and authorities relating to cyber protections for consumers.

    Cybersecurity as consumer protection

    The FTC has a mandate to protect consumers and promote competition. Within that ambit, it has the power to bring cases against companies, trade associations, nonprofit organizations, government agencies, and individuals,16 for a range of practices from phone scams to advertisements for fake COVID-19 cures, thus serving as the enforcer for a dizzyingly swath of the US economy.

    The FTC’s broad jurisdiction to investigate and curtail “unfair and deceptive acts and practices”17 (often shortened to UDAP) comes from Section 5a of the Federal Trade Commission Act of 1914 (FTC Act), which states that “unfair or deceptive acts or practices in or affecting commerce…are…declared unlawful.”18 “Deceptive” acts or practices are defined as any ”material representation, omission, or practice likely to mislead a consumer otherwise acting reasonably in the circumstances,“19 and “unfair” acts or practices are those that “cause or [are] likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”20

    When it comes to cybersecurity, the FTC can bring actions for deception when a company fails to live up to its claims in privacy policies, marketing materials, or public statements about its security practices or programs (e.g., that it uses “reasonable” or “industry-standard” security practices to protect data). Cyber practices can be unfair when they cause or are likely to cause harm to consumers such as leaking information that could lead to identity theft and when consumers cannot take action to reasonably avoid the harm—typically the case, since most consumers cannot themselves audit a company’s security practices. Thus, the FTC has been able to use these twin authorities to bring a variety of cases against companies whose poor cyber practices allowed or could have allowed the theft or leak of consumer information.

    The FTC challenges violations of the FTC Act by instituting administrative adjudications. When the commission has reason to believe a violation has occurred or is currently occurring, the commission issues a complaint setting forth the charges. Respondents must either contest these charges in court or settle and enter into a “consent decree.” A consent decree does not concede liability, but is a binding agreement for a period, usually twenty years, stipulating specific practices and regulations the defendant must put in place to address the behavior that led to the complaint. For cybersecurity-related failures, consent decrees typically include requirements to establish an adequate data security program with certain mandatory elements. Even when companies choose to challenge the FTC’s case, most still result in a consent decree: if a defendant does not settle, the FTC will litigate the case before an administrative law judge, from whom it typically obtains a mandatory injunction requiring defendants to sign a consent decree.

    Thus, the FTC’s explicit construction as a consumer protection agency fundamentally shapes its approach to cyber enforcement. Its focus is on ensuring companies have adequate (and honestly described) protections to shield consumers from harm caused by improper access to their data, all in order to protect consumers and ensure the fair functioning of the market for consumer goods and services—a market full of products that increasingly implicate the security of consumers’ personal data.

    Methods

    This paper is based on a review of the complaints and consent decrees associated with forty-seven cases brought by the FTC for cybersecurity-related UDAP Section 5a violations between 2002 and 2024. The methods focus on three stages: how these cases were selected, how the complaints were analyzed to extract a list of (un)reasonable security practices, and how the consent decrees were analyzed to identify themes and patterns in these FTC-mandated information security programs.

    Case selection

    The forty-seven cases in this dataset were identified using tools from the FTC’s legal library website and researcher review. First, the research team filtered the FTC’s online legal library using built-in tools to a set of cases that were (1) identified as relating to the agency’s consumer protection mandate and (2) tagged with “privacy and security.” The research team then reviewed this set of 306 cases and selected forty-seven that met two criteria. First, that each case was brought on the basis of, or involved, a cybersecurity failure as instantiated in a vulnerability in or a third-party compromise of an information system. This meant omitting violations that occurred due to insecure disposal, physical theft of sensitive information, or insider threats. Second, that each case was brought based solely on a violation of Section 5a—e.g., an unfair or deceptive act or practice. Thus, the dataset omits cybersecurity-related cases brought on the basis of other laws, such as COPPA or GLBA, and dual violations brought on the basis of violations of both 5a and another law. This second criterion ensures that the analysis of the resulting complaints and consent decrees can speak directly to the strengths and limitations of the FTC’s stopgap UDAP authorities versus the specific cybersecurity regulatory authorities granted to it by Congress. An analysis of cases stemming from violations of these specific regulations, and a comparison of those findings with the findings from strictly UDAP cases that make up this dataset, could be a valuable direction for future work.

    The texts for the complaints and the consent decrees were obtained using the most recent versions of both documents available via the FTC’s online case library.A dataset containing the forty-seven cases, associated metadata, and the links to each complaint and consent decree document is available online, together with text versions of the documents and utility scripts used for data analysis tasks. 21

    As the contents of complaints and consent decrees are referenced frequently in this paper, shorthand names are used throughout. Table 1 in the appendix provides the complete citation for the complaint and consent decree documents corresponding to each shorthand name. 

    Complaint analysis

    The authors analyzed each complaint document to identify the security failures that formed the basis for the action in order to identify the practices that the FTC understands as having failed to meet the bar for “reasonable” cybersecurity. As such, this component of the analysis omitted three cases22 in which the FTC alleged deception about a specific security practice—for example, a company that deceptively claimed that a product utilized end-to-end encryption when it did not—since these cases do not implicate a definition of ”reasonable“ cybersecurity practices. The remaining forty-four complaints all contained the word “reasonable” as applied to cybersecurity practice, further evidence that this subset of the main dataset speaks directly to the FTC’s conception of reasonable cyber practice.

    The first step in this analysis was the research team’s manual review of the data to extract and classify categories of unreasonable practices outlined in each decree. This work was then cross-checked using simple Python scripts that searched all the complaints for relevant keywords related to the practices.

    Within consent decrees, the researchers analyzed the content and specific provisions of the FTC-mandated security programs to understand how the commission envisions adequate or good cybersecurity practices as well as how they have designed enduring security programs across a variety of organizations.

    This analysis was conducted on the full set of forty-seven cases, including the three cases where the defendant made a more specific deceptive claim about their practices. Here again, the bulk of the analytical work was manual researcher review to identify eras and changes between documents. Additionally, researchers used basic natural language processing—embedding the documents as vectors using the bge-small-en model from HuggingFace23 and then using k-means clustering to group those vectors—to identify clusters of similar consent decrees that were then manually reviewed.

    Overview of the dataset

    The actions within this dataset span from 2002 to 2024. As shown in the graphic below, the first cybersecurity-related cases that the FTC brought were all actions linked to deceptive practices. The first unfairness cases followed a few years later, and within roughly the past decade the agency began bringing actions that accused companies of both unfair and deceptive practices with respect to security.

    This dataset contains cases brought against software providers, major retailers, e-commerce platforms, Internet of Things manufacturers, mobile applications, hardware manufacturers, and others. The security failures explored in this paper have contributed to the exposure of consumer names;24 dates of birth;25 physical addresses;26 credit card information including card numbers, expiration dates, and security codes;27 Social Security numbers (SSNs);28 bank account and routing numbers;29 driver’s license numbers;30 tax returns;31 medical information including medical history, medication, and examination notes;32 email addresses;33 video recordings of homes;34 and communications with loved ones.35

    Reasonable cybersecurity practices: The complaints

    Complaints filed as a part of FTC actions outline the alleged misconduct of a defendant, providing the clearest understanding of the types of practices that the FTC considers unfair or deceptive when it comes to processing and protecting consumer data. Each complaint lists ways in which the defendant’s practices—individually and collectively—failed to provide “reasonable or appropriate security” for the consumer information they collected. Thus, through their evolving adjudication of Section 5a cases, the FTC has expanded and evolved a de facto list of inadequate data security practices.

    While these security failures differ from case to case and evolved over time to include practices relating to newer technologies (e.g., the failure to securely store cloud bucket credentials), certain security failures have persisted throughout the two decades of FTC complaints. Distilling the shortcomings outlined in these complaints provides both a window into the FTC’s conception of reasonable baseline security practices for companies that interact with consumer data and a repository of information about security failures that have caused tangible consumer harm.

    1. Encrypt data

    Encryption—the practice of cryptographically transforming data so that it is unreadable to those without the proper “key” to decrypt it—has been a fundamental building block of information security since at least the 1970s.36

    Encrypt data at rest

    Encrypting data at rest means applying encryption to data that is statically stored in databases or other locations on an information system. The dataset reviewed in this report contained twenty complaints37 in which the FTC identified, as an unreasonable security practice, a company storing consumer information at rest in an unencrypted format. These cases span from 2005 to 2024 and include cases in which defendants were storing cleartext (i.e., unencrypted) data in on-premise and cloud databases alike.38

    Encrypt Data in Transit

    Encrypting data in transit refers to encrypting data as it moves around an information system, such as when it is transmitted over the internet. In five cases,39 the FTC identified the failure to encrypt consumer data in transit as an unreasonable or inadequate security practice. These cases include instances where companies failed to encrypt user data as it was transmitted over the internet40 and where companies sent unencrypted information within their corporate network.41

    2. Mitigate commonly known vulnerabilities

    Vulnerabilities in software make it possible for an attacker to take undesired actions such as escalating their access or accessing resources that are supposed to be restricted. While some vulnerabilities are sophisticated and hard to detect ahead of time, many arise from commonly known weaknesses.42 In seventeen complaints,43 the FTC identified failures to mitigate “commonly known” (or “well-known”) or “reasonably foreseeable” vulnerabilities and attacks in their websites or products as an unreasonable security practice. These included failure to:

    • Mitigate Standard Query Language (SQL) injection vulnerabilities, with nine cases from across the dataset—the most recent in 202344
    • Prevent cross-site scripting attacks45 or cross-site request forgeries46
    • Prevent predictable resource location vulnerabilities47

    The FTC’s language around these vulnerabilities typically emphasized that they were commonly known vulnerabilities—showing that they had been the subject of warnings from security experts or had been featured in previous publicly reported security incidents. These criteria are discussed in further detail at the end of this section.  

    3. Enforce good credential practices

    Enforcing good credential-management practices—both for customer and employee credentials—is a long-running theme within this dataset, appearing in complaints associated with cases from 2006 to cases from 2022. Since credentials are the “keys to the kingdom” that allow access to sensitive user information and organizational resources, protecting credentials and making it difficult for attackers to obtain or guess them is a core principle in information security.

    Strong passwords and hard-to-guess credentials

    The FTC has identified the use of weak or easy-to-guess passwords or credentials in ten cases48  in this dataset as a bad practice both for employee and user credentials. Specifically, the FTC identified the failure to:

    • Make administrative passwords difficult to guess49 or to require network administrators to use strong passwords50
    •  “Establish or enforce rules sufficient to make user credentials hard to guess”51
    •  “Require employees, vendors, and others with access to personal information to use hard-to-guess passwords”52
    Prohibit sharing of credentials

    In four of the cases53 contained in this dataset, the FTC has identified as an unreasonable practice that a company allowed users,54 employees,55 to share credentials. This also included cases where companies allowed employees to reuse passwords to access multiple servers and services,56 or ”permit[ed] all programs and engineers to use a single AWS access key that provided full administrative privileges over all data in the Amazon S3 Datastore.”57

    Monitor use of credentials

    Monitoring the use of credentials to identify suspicious patterns can help prevent attackers from obtaining and abusing legitimate credentials. Five complaints58 in the dataset referenced defendants’ failure to monitor the use of credentials, including lacking a way to “monitor unsuccessful log-in attempts,”59 or “suspend user credentials after a certain number of unsuccessful log-in attempts.”60

    Encrypt credentials

    Storing or transmitting credentials without encryption can allow attackers to more easily steal the credentials from devices or as they are sent over the network. The FTC has faulted companies for:

    • Transmitting user credentials in cleartext61
    • Allowing users to “store their user credentials in a vulnerable format in [unencrypted] cookies”62
    • Using “outdated and unsecure cryptographic hash functions to protect users’ passwords”63

    The FTC has also found it unreasonable for companies to lack policies and controls that would prevent employees from storing unencrypted credentials on their machines or systems, faulting companies for failure to “prohibit storage of administrative passwords in plain text”64 or “prevent the retention of passwords and encryption keys in clear text files.”65

    [Deprecated best practice] Require periodic changing of credentials

    Changing user credentials is one area where the FTC has evolved its approach over time. Between 2008 and 2011, four complaints faulted defendants for failing to enforce or require the periodic change of user credentials or administrative passwords. However, the FTC has revised its thinking on this issue—in 2016 it published a blogpost suggesting that organizations “rethink mandatory password changes,” citing recent findings in the information security field suggesting that this practice did not actually improve security.66 And, no complaints after 2016 list a failure to require periodic changing of credentials as an unreasonable practice. This deprecated practice shows that the FTC has evolved the practices it cites as a threat to cybersecurity based on broader consensus within the information security field.

    4. Use multifactor authentication

    Multifactor authentication is a way of enhancing the security of username-and-password requirements for authentication by requiring a “third factor” such as a mobile device or security key in possession of the user. Multifactor authentication can stop attacks such as those in which threat actors use leaked user credentials to log into information systems, since the attackers will lack access to the third factor needed to log in. Three of the complaints67—all of them recent, from 2022 to 2024—cite companies’ failure to use multifactor authentication as an unreasonable practice.

    5. Monitor and control network access

    Attackers who can successfully infiltrate an organization’s network can potentially access sensitive data and resources or deploy malware such as ransomware. In fourteen of the cases68 included in this dataset, the FTC faulted defendants for failing to implement adequate practices for monitoring and controlling network access. These include failures to limit access between a defendant’s network and the internet, including by:

    • Using firewalls between the internet and a corporate network69 and limit access between computers within the corporate network70
    • Limiting access through wireless access points to networks71
    • Using “reasonable” or “sufficient” measures to detect or investigate unauthorized network access, such as intrusion detection systems or monitoring and reviewing logs72
    • Monitoring their networks and systems for attempts to transfer or exfiltrate data outside of network boundaries73
    • Restricting inbound connections to known IP addresses74 (a more novel safeguard, from only one recent case)

    6. Maintain a written security program

    A written security program can help organizations lay out a plan for how they will implement necessary controls and oversight within their network, as well as how they will respond to security incidents or other events. Ten of the FTC complaints75 invoke either the nonexistence or the inadequacy of a defendant’s written information security program as an unreasonable practice. The FTC mentions components of such a plan including:

    • An incident response plan76
    • “Standards, policies, procedures, or practices” for data security77
    • “Standards, policies, procedures or practices” for third-party software78

    7. Maintain a process for accepting and addressing vulnerability reports

    Users or independent security researchers can detect vulnerabilities in a company’s product or network before the company does; in these cases, it is beneficial for the company to have standard practices by which these users and researchers can report vulnerabilities for resolution so their reports are not lost or overlooked. In five of the complaints in this dataset,79 the FTC described as unreasonable that the defendant failed to have a process for monitoring, receiving, and addressing external security vulnerability reports. Various complaints emphasized that this lack of a process “delay[s] the opportunity to correct discovered vulnerabilities or respond to reported incidents”80 or highlighted the “existence of free tools to conduct such monitoring.”81

    8. Stay up to date with patches

    Attackers can exploit known vulnerabilities in unpatched software to gain a foothold into the network. Timely application of critical patches for software used within a company’s network can help protect companies from this threat. Five complaints82 in the dataset mentioned companies’ failure to patch the software they used in a timely manner as an unreasonable practice, including, specifically:

    • Failure to patch servers running on their network83
    • Failure to patch security tools such as antivirus software84
    • Using versions of software that no longer received patches85
    • Failure to “implement patch management policies and procedures to ensure the timely remediation of critical security vulnerabilities and [use of] obsolete versions of database and web server software that no longer received patches”86

    9. Perform testing and auditing

    Proactive testing and auditing of software products, websites, and corporate networks can help an organization proactively identify points of weakness or vulnerability, which could be targeted by malicious actors. The FTC cited the lack of proactive testing and auditing (such as penetration testing) in eleven of its complaints,87 including:

    • Failure to penetration test networks88.
    • Failure to penetration test software or applications89
    • Failure to penetration test hardware devices90
    • Failure to test software “such as by inputting invalid, unanticipated, or random data to the software”91
    • Failure to perform code review of software systems92
    • Failure to “test the [software] before distributing it to consumers or monitor the [software]’s operation thereafter to verify that the information it collected was consistent with respondent’s policies”93
    • Failure to “test, audit, assess, or review its products’ or applications’ security features; and conduct regular risk assessments, vulnerability scans, and penetration testing of its networks and databases94

    10. Minimize data retention and Aacess

    Implement data retention

    One way to protect consumers is to delete their data when it is no longer necessary for a business purpose—even if a hack were to occur, hackers cannot steal data that is not there. In this dataset, the FTC has faulted ten companies95 for retaining unnecessary consumer information, citing failures including:

    • Storage of consumer information “indefinitely” on their networks “without a business need”96
    • Lack of “appropriate data retention schedules and deletion practices”97
    • Lack of “policy, process, or procedure” for “inventorying and deleting” consumer and employee information that was no longer needed98
    Limit access to data by need

    Another way for companies to protect consumer information is to make sure that employees access to consumer information is limited to what is required to do their job: if every user has access to every resource, then if any one of them is compromised, the company’s trove of data is at risk. In the dataset, the FTC faulted eight companies99 for failing to implement access controls for consumer information—that is, failing to ensure access to sensitive data is limited to employees or individuals with a direct business need. This includes restricting their access to consumers’ sensitive information based on their job function.100

    Purpose-limited access is not only applicable to sensitive consumer information, but also applies to employees’ access to security controls or security-relevant resources like source code. FTC complaints have faulted companies for continuing to allow employees to have access to administrative controls101 or source code102 after they no longer needed such access.

    11. Oversee service providers

    In addition to the primary defendants in FTC cases, third-party service providers often also have access to sensitive consumer data. These service providers include third-party cloud service providers as well as companies that “receive, process, or maintain”103 consumer information on behalf of the primary defendant. In eight complaints contained in this dataset, the commission has suggested that defendants must oversee their service providers.104 suggested that companies should require their service providers, by contract, to adopt specific practices, including:

    • To implement “simple, low-cost, and readily available defenses to protect consumers’ personal information”105
    • To provide employees with “secure development training or other data security training appropriate to their job duties”106

    12. Train employees and personnel

    Training employees and personnel in security practices can help them understand and implement proper security practices in their day-to-day work, from avoiding clicking on phishing links to properly configuring software systems that process consumer data. The failure to adequately train personnel was an unreasonable practice mentioned in twelve of the complaints in this dataset.107 These instances include:

    • Failure to provide employees with “data security training” or to train personnel “to perform their data security related duties and responsibilities”108
    • Failure to provide employees with “adequate guidance” regarding information secu109
    • Failure to provide adequate training to “engineering staff,“110 or to employees responsible for “testing third-party software“111 or “designing, testing, overseeing, and approving software specifications and requirements “112

    Analysis: How the FTC constructs reasonableness

    Analyzing the above complaints reveals a few ways in which the FTC constructs and supports its argument that a company’s practices fail to provide reasonable cybersecurity for its customers.

    1. Foreseeability: Expert warnings, industry practice, and prior attacks

    To justify why a practice should be considered unreasonable, the FTC explicitly refers to widely available information about a vulnerability or practice as evidence that the defendant should have known that they needed to address (or avoid) the failure.

    One such reason is that security experts have already issued public warnings about a practice or attack. For example, in one complaint, the FTC wrote, “security professionals have issued public warnings about the security risk presented by weak user ID and password structures since the late 1990s,”113 and at least five other complaints include references to “security experts” or “security professionals” when arguing that a company should have reasonably known that its practice was flawed.  

    The FTC also references broader industry consensus, such as citing causes of cybersecurity weaknesses “commonly known in the information technology industry.”114

    Additionally, in some cases, the FTC relies on publicly disclosed incidents attributable to similar flaws or practices to make the case that a company should have known that a particular practice was unreasonably risky. In its 2022 case against Drizly, the FTC stated, “numerous publicly reported security incidents since 2013 have highlighted the dangers of storing passwords and other access keys in GitHub repositories,”115 using real-world patterns of cyber incidents to support its claim that the company’s practices clearly failed to provide a reasonable measure of security.

    These approaches are not mutually exclusive—for example, addressing failures to mitigate commonly known web application vulnerabilities, the FTC in a single complaint said that “the risk of such web application attacks is well known in the information technology industry […] security experts have been warning the industry about these vulnerabilities since at least 1997; […] and in 2000 the industry began receiving reports of successful attacks on web applications.”116

    2. Availability and cost of mitigations

    The FTC also factors into its arguments the existence of “readily available” and “free or low-cost”117 tools that would have mitigated the causes of failure. For example, in one complaint, the FTC stated that the defendant failed to encrypt credentials “despite the existence of free software, publicly available since 2008, that would have enabled respondent to secure such stored credentials.”118 This statement emphasizes that software mitigations were publicly available, that they had been available for a long time, and that they were available at no cost, presumably to make even clearer the unreasonable nature of the defendant’s failure to adopt such safeguards.

    3. Prior FTC actions

    Aside from providing specific examples of industry standard security practices in their complaints, the FTC also points to previous cases to reinforce the point that the defendant should have known that its security behavior was unfair or deceptive.

    For example, in a 2022 complaint, the FTC highlighted how “the Commission’s 2018 Complaint against Uber Technologies, Inc. specifically publicized and described credential reuse, lack of multifactor authentication, and insecure AWS credentials exposed through GitHub repository code as failures contributing to the breach and exposure of consumers’ personal information,”119 further bolstering the commission’s argument that the defendant should have known that these practices were unreasonable and insufficient to provide adequate security.

    Comprehensive security programs: Consent decrees

    Consent decrees are legally binding agreements between a defendant and the FTC that stipulate the actions the defendant must take to remediate some legal breach or violation. Most data security-related decrees begin by listing prohibitions on certain activities that led to the original complaint. Then, the order lists the mandated comprehensive security program that the defendant must implement. From there, defendants are required to obtain initial and biennial data security assessments for a stipulated amount of time (typically twenty years). The next part of the decree requires the defendant to disclose all necessary information to the security assessor and to submit an annual certification to the FTC that the defendant has implemented the requirements listed in the consent decree. The final part of the consent decrees includes a reporting and compliance provision, such as recordkeeping requirements. The subsequent analysis focuses specifically on the security program mandated by the consent decrees. These security programs provide a window into how the FTC thinks about adequate cybersecurity practices for companies, and, without a clear cybersecurity law in place, other companies (beyond those required to) have looked to the FTC’s consent decrees to guide their cybersecurity practices.

    The below visualization highlights the changes across the security programs mandated in FTC consent decrees throughout the history of the dataset.

    1. Identify risks, implement safeguards

    Microsoft (2002)

    In 2002, the FTC entered into the dataset’s first consent decree, settling with Microsoft Corporation on charges that the company had falsely represented their data and security practices. Ordering Microsoft to establish and maintain “a comprehensive security program,” this consent decree would define the language that persisted throughout decades of consent decrees to follow. The security program was to be established in writing, designed to protect the “security, confidentiality, and integrity” of consumer information, and to include “administrative, technical and physical safeguards” appropriate for Microsoft’s size, complexity, the nature of their activities, and the sensitivity of the consumer information they collected. It also included a few specific additional requirements:

    • Designation of an employee to lead the information security program
    • Identification of risks to the confidentiality, security, and integrity of customer information that could result in its unauthorized use or disclosure
    • Assessment of existing safeguards for mitigating such risks, including, specifically:
      • “employee training and management;”
      • “information systems, including network and software design, information processing, storage, transmission, and disposal;” and
      • “prevention, detection, and response to attacks, intrusions, or other systems failures.”120
    • Design and implementation of safeguards to control the identified risks
    • Regular testing or monitoring of these safeguards
    • Ongoing evaluation, monitoring, and updating of the information security program itself, according to the identified risks and the results of the testing of the safeguards121

    This is, broadly, a risk-based approach. Rather than requiring Microsoft to adopt specific safeguards or practices, the FTC placed upon them the impetus to identify risks to customer information and design appropriate (and documented) safeguards. This risk-based approach was to become an enduring feature of the cybersecurity consent decrees within this dataset.

    2. Oversee your service providers

    Guidance Software (2007)

    In 2007, the FTC settled with Guidance Software, a vendor of software and materials, services, and training for customers to investigate and respond to computer breaches and security incidents. Included in the Guidance Software consent decree was a new requirement that would become ingrained in consent decree language going forward:

    • Developing and implementing “reasonable steps” to work only with service providers “capable of appropriately safeguarding personal information;”
    • Requiring services providers, by contract, to implement and maintain “appropriate safeguards;” and
    • Monitoring service providers’ protection of personal information.122

    At first glance, these new provisions seem surprising. According to what is recorded in the complaint, Guidance Software was not harmed by a service provider; instead, it was the service provider, and its website’s vulnerability to SQL injection harmed the companies that were its customers. Thus, this change seems to suggest that the FTC may intentionally use consent decrees to set broader standards, beyond responding to the narrow circumstances of a single instance of failure, based on its knowledge of the range of practices that could harm security.123 (This idea was partially at issue, in fact, in a later legal challenge to the FTC’s decrees.)

    3. Insecure devices

    HTC America (2013), TRENDnet (2014)

    HTC America was one of the first cases in the dataset that addressed insecure devices rather than insecure corporate networks. Therefore, the consent decree had some unique features. For example, it made reference to “material internal and external risks to the security of covered devices that could result in unauthorized access to or use of covered device functionality,” rather than risks to consumer information. It also required:

    • That the mandatory security program include an assessment of risks and the adequacy of safeguards related to:
      • “product design, development and research;”
      • “secure software design and testing, including secure engineering and defensive programming;” and
      • “review, assessment, and response to third-party security vulnerability reports.”
    • That implemented safeguards be evaluated “through reasonable and appropriate software security testing techniques”124

    These changes reflected the FTC’s adaptation of its core security program requirements to apply to an entity that sold devices and software toconsumers, rather than operating software systems that processed consumer data. The consent decrees for ASUSTeK and TRENDnet—two computing device sellers—contained similar requirements, but added a more specific requirement that “appropriate software security testing techniques” should include practices such as “(1) vulnerability and penetration testing; (2) security architecture reviews; (3) code reviews; and (4) reasonable and appropriate assessments, audits, reviews, or other tests to identify potential security failures and verify that access to covered information is restricted consistent with a user’s security settings.”

    4. Insecure applications

    Fandango (2014), Credit Karma (2014)

    On August 21, 2014, the FTC settled charges with two companies—Fandango and Credit Karma—after their security failures left consumer information vulnerable despite assurances that their mobile apps were secure.125 The consent decrees for these cases built upon the precedent set in the insecure device cases, adding similar requirements, including to assess risks related to product design and development and to processes for handling third-party vulnerability reports. These consent decrees also added a new requirement: to assess the adequacy of employee training and management related to “secure engineering and defensive programming.”

    Many of these requirements would recur in other consent decrees for companies developing consumer-facing software applications like Snapchat, ASUSTeK, and Uber. The Uber order added more specificity to some of these requirements, including, “secure software design, development, and testing, including access key and secret key management and secure cloud storage,” and “review, assessment, and response to third-party security vulnerability reports, including through a ‘bug bounty’ or similar program.”

    5. Personal liability for executives

    GMR Transcription Services (2014), BLU Products (2018), InfoTrax (2020), Support King (2020), Drizly (2023)

    In select cases, the first of which was settled in 2014, the FTC named not only companies but their C-suite level executives—including chief executive officers (CEOs), presidents, and vice presidents—as individual defendants in the consent decree.126 This standard of liability centers around the executive having authority to “control” or “participate in” the company’s information security practices.127 In each of these cases the FTC provides evidence of the executives’ culpability ranging from “not implement[ing], or properly delegat[ing] the responsibility to implement, reasonable information security practices” to an executive having “reviewed and approved” the corporation’s information security policies.128

    In these consent decrees, the FTC includes specific stipulations that the individual executive must carry out, in addition to those that the corporation is subject to. These practices include:

    • For twenty years following the order, for any business that the executive is the majority owner or controls directly or indirectly, the executive must deliver a copy of the consent decree to all principles, officers, directors, and LLC managers and members, all employees with managerial responsibilities, and any new business entity.
    • One year following the order, the executive must submit a compliance report to the FTC including their telephone numbers and all physical, postal, email, and internet addresses; identify all business activities; describe in detail their involvement in each business activity; and submit a compliance notice within fourteen days on changes in the executive’s name, address, and title or role in a business activity.
    • Complete required recordkeeping for any business that the executive is a majority owner in or controls directly or indirectly for twenty years following the consent decree.
    • For ten years following the order, the individual must report to the FTC any change in name, address, or role in a business activity on the basis that the individual is an employee or has “ownership experience” and “direct or indirect control.”129

    Additionally, in the FTC’s 2023 case against online alcohol marketplace Drizly and CEO James Cory Rellas, the commission required him to implement an information security program at any future company that collects consumer information of more than twenty-five thousand people where he is the majority owner, CEO, or a senior officer for ten years following the order.130

    6. Struck down

    LabMD (2018)

    In 2005, an employee at the medical testing company LabMD downloaded a peer-to-peer file sharing application, unintentionally exposing a file on their computer that contained the health and personal information of 9,300 patients. In 2008, another company, Tiversa, obtained the file and took it to LabMD, requesting payment in exchange for fixing the vulnerability; when LabMD declined, Tiversa took the file to the FTC.131

    Following an investigation in 2013, the FTC issued an administrative complaint against LabMD alleging that it had engaged in unfair practices because its failure to uphold reasonable cybersecurity practices led to the exposure of sensitive consumers data. The consent decree that the FTC proposed looked no different than those that had come before, including stipulations to establish a comprehensive information security program.

    However, unlike almost every company before it, LabMD chose to challenge the case rather than settle. Upon review, an administrative law judge dismissed the case, stating that the FTC had failed to show that the exposure of consumer information caused or could potentially cause consumer injury—a requirement for unfairness cases, which must pertain to practices that “cause or [are] likely to cause substantial injury to consumers”—especially since there was no evidence that anyone other than Tiversa had accessed the file.132 Appealing the decision in 2016, the FTC reversed the judge’s dismissal and reopened the case, holding that the exposure of the information constituted a privacy harm that provided sufficient basis for it to bring an unfairness claim—regardless of whether or not it could be linked later to more tangible harms such as identity theft.133

    LabMD then petitioned the Eleventh Circuit to review the FTC’s decision. Following their review, in 2018, the Eleventh Circuit court vacated the FTC decree, determining that it was unenforceable because of its lack of specificity. The court found that a fundamental flaw with the order was that it “does not instruct LabMD to stop committing a specific act or practice,” and suggested that the FTC did not have the right to force LabMD to “overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.”134

    Because the court’s ruling hinged solely on the enforceability of the consent decree, it avoided taking an explicit stance on whether the harm that the FTC cited as the basis for its action—the privacy harm—was sufficient to form the basis for an unfairness complaint. In response, the FTC would continue to bring these kinds of cases while changing its approach to the mandatory information security programs required within consent decrees.

    7. Specificity and flexibility

    Fourteen Cases (2019–24)

    In the wake of the LabMD case, the FTC publicly stated that it would respond to the case—and seek to more generally improve its data security-related consent decrees—by intentionally increasing the specificity of the security practices required.135 The fourteen consent decrees in this dataset adjudicated after the LabMD decision are significantly more detailed and specific, with many new requirements. The updated consent decrees contain more requirements for respondents to document their plans, practices, and assessments, including a cadence for doing so (every twelve months or after a security incident). Nine of the fourteen consent decrees specify that the security program must be established and implemented within a certain timeframe—ranging from 30 to 180 days after the order is issued.136

    The updated consent decrees also contain specific safeguards that companies must implement, often related to the security failure they experienced. For example, different consent decrees from this time period included requirements to provide automatic firmware updates,137 to detect unknown file uploads,138 to rate-limit log-in attempts,139 and to encrypt specific categories of data.140

    Some of the most common practices required include:

    • Conduct routine penetration testing (twelve out of fourteen cases)141
    • Implement data access controls (eight out of fourteen cases)
    • Log and monitor access to sensitive information (seven out of fourteen cases)
    • Implement multifactor authentication (six out of fourteen cases)
    • Conduct code review (three out of fourteen cases)

    Yet, these new consent decrees preserve two of the most fundamental requirements from the prior generation of FTC consent decrees: that the defendant, on a regular cadence, must “assess and document […] reasonably foreseeable internal and external risks to the security, confidentiality, or integrity of Personal Information within the[ir] possession, custody, or control” and “design, implement, maintain, and document safeguards that control for the internal and external risks identified.”142

    8. Maintain a data retention program

    Chegg (2023), Drizly (2023), Blackbaud (2024)

    In three of the most recent consent decrees contained in this dataset, the FTC included a new provision: a requirement to establish a data retention program. As part of this program, the consent decree specifically requires a publicly available retention schedule for consumer information that must include:

    • The purpose of information collection
    • The business’ need for retaining the information
    • The timeframe for deletion of the information

    Even though the FTC had cited a failure to discard no-longer-needed consumer data as an unreasonable practice in complaints from before these three cases, these were the first examples of the FTC specifically requiring a data retention program in the resulting consent decrees. This suggests a concerted effort by the agency to foreground data minimization as a core part of a comprehensive information security program. Perhaps it is a recognition of the fact that this was not a common component of the risk-based programs that businesses were implementing.

    Analysis: Commonalities and differences in security programs

    These forty-seven consent decrees illustrate the FTC’s evolving conception of a comprehensive security program and reveal trends in the ways that the agency has utilized both broad and specific requirements to advance data security practices at responding companies.

    Trend: Setting a standard

    The FTC’s consistent use of the same baseline requirements in the information security programs suggests a desire to define a set of broad norms for reasonable information security programs that go beyond the specific security failures that triggered the complaints. This approach is particularly notable in consent decrees like the one related to Guidance Software, which added an additional element to the mandatory information security program that had little relation to the particular facts of the Guidance Software case. And, it is precisely this practice which landed the agency in hot water in the LabMD case: the court suggested that the FTC consent decrees did not seek to prevent companies from undertaking particular acts or practices, but instead sought to force companies to overhaul their security programs wholesale.

    Trend: The benefits of a risk-based approach

    Across all the consent decrees, the FTC articulates a foundational requirement for businesses to identify risks and implement appropriate safeguards. This construction allows the decrees to avoid laying out a static set of activities that would be sufficient to keep a company wholly secure, which would require a great deal of specialized knowledge about its technologies, networks, and data, and would be likely to become outdated during the twenty years of the decree’s application. Even after the LabMD case forced the FTC to be more specific in the practices it requested from defendants, consent decrees continued to require respondents to identify and mitigate risks in addition to implementing other more specific controls. Taken together, these factors suggest that risk-based approaches and mitigations have an enduring place in the FTC’s conception of how to construct an information security program.

    Trend: The benefits of specificity

    At the same time, the decrees are not limited to this requirement. Several decrees—even before the LabMD case required more specificity—supplemented the basic decree with carefully delineated provisions and requirements, such as the obligation to oversee service providers and, in the case of companies developing software, to use secure programming practices and application testing. That the consent decrees have evolved in this way over time suggests that the FTC might have seen evidence, in its work to supervise consent decrees, that general risk mitigation requirements should be married with more specific practices applicable to the company and its activities. Later evolutions, such as the addition of a required program for data minimization, also suggest that the agency might have viewed specificity as a better way to advance practices that contribute to security but that might not otherwise be prioritized by companies under a simple risk-based approach.   

    Conclusions

    Consumers are our business

    The FTC’s consumer protection mandate both broadens and limits its power in the cyber domain. The scoping function for the FTC’s cyber enforcement is consumer protection; for cases brought on the basis of unfairness, this hinges upon the agency’s contention that bad data security practices cause or are likely to cause harm to consumers. This is a superpower—where consumers go, the agency can follow, without restrictive focus on a single class or instance of technology. These potential repercussions or harms to consumers can range from financial injury (fraudulent transactions) to emotional distress (public exposure of sensitive information)143 to threats to safety (consumer location information or gender identity),144 encompassing a range of companies and types of data. The FTC can tackle large companies when their practices have the potential to harm many; it can focus on small companies whose practices are egregiously harmful; and it can apply the most stringent scrutiny to companies that process the most sensitive data.

    Giving an agency a broad mandate to protect consumers is a powerful way to allow them to continue to evolve their standards in the face of an ecosystem both as rapidly evolving and as heterogenous as cybersecurity. No set of standards drafted in 2002 would be likely to encompass the security failures that occurred in the FTC’s 2024 case against Global Tel*Link Corporation—including a failure to employ a perimeter firewall, log monitoring solution, and automated monitoring software.145 This broad mandate allows the agency to reach across heterogenous technology projects, from massive cloud infrastructure to consumer apps to enterprise data storage systems to Internet of Things devices, and to update the kinds of practice it pursues alongside the rapid evolution in the underlying technologies that must be secured.

    However, the question of harm is also a potential Achilles’ heel for the FTC. Whether judges will agree with the FTC’s contention in the LabMD case, that privacy harms form sufficient basis for its unfairness actions, is still relatively open. In 2023, a judge rejected the FTC’s case against Kochava, a location data broker, because the agency had failed to show how the company’s practices could cause “substantial injury” to consumers.146 In February 2024, the judge allowed the FTC’s case to proceed after the agency amended its complaint and enumerated the potential harms that could befall consumers as the result of the data the company sold. However, the final status of the case has yet to be determined—and a ruling on whether or not the FTC can treat privacy harms as sufficient to create the basis for a claim of unfairness will have substantial ripple effects on the agency’s efficacy as an enforcer in the data security space. Even beyond the specific focus of the FTC, the question about the right way to think about privacy harms has dogged the US privacy conversation.147

    Generality and specificity

    The consent decrees within this dataset provide a model of how the FTC has combined general and specific obligations in the information security programs it requires companies to uphold. Specific requirements makes it easy to verify whether a company has a control in place and to show wrongdoing when they lack it, and can help advance certain behaviors that might not otherwise be a part of companies’ risk mitigation menu, such as data minimization. After the LabMD case, the agency itself stated that the added specificity would make it easier to enforce its decrees.148 Yet, even after LabMD, the FTC has also maintained in its consent decrees certain general requirements—e.g., that companies identify risks to the data they hold and implement controls and protections for those risks appropriate to their size and activities. This requirement creates a flexible obligation that can adapt alongside changing best practices in the cybersecurity field. The ever-evolving list of bad practices evident in the FTC complaints—including new failures such as to secure cloud credentials or failures to enable multifactor authentication—suggests that enforcement structures in this space will need a mechanism to continuously evolve if they are to keep pace with the evolution of cybersecurity best practice. (And perhaps one even faster than multi-year federal rule-making processes.)

    To incentivize the adoption of specific behaviors or controls while preserving flexibility, policymakers seeking to design legal structures to advance—whether for companies processing consumer data, or for those selling software—might consider these kinds of blended approaches. Specific lists of best practices, perhaps even drawn from existing examples of legal standards for unreasonable cybersecurity behavior, could provide a set of known practices that companies must implement or avoid; these specific practices could be paired with broader obligations for each company to assess and appropriately mitigate the risks they face.  

    Certain proposed data security frameworks, like the new American Privacy Rights Act, adopt this kind of general-and-specific structure: the bill combines a requirement for entities to adopt “reasonable” cybersecurity measures based on their size and activities with a requirement for them to adopt a few specific practices including assessing vulnerabilities, deleting unnecessary data, and training their employees. While the bill allows the FTC to develop process-based regulations for this section’s implementation, there is no structure by which the agency can outline specific required practices or controls for companies. Instead, much would hinge on the question of how enforcers, judges, and companies themselves interpret the question of reasonableness.

    I find you unreasonable

    The FTC’s construction of reasonableness provides a model of how companies’ claims about their security behavior can be assessed: against the broader state of knowledge within a field, against expert warnings and past failures that should serve to inform companies of the risks they face and the precautions they must take. Evaluating companies against what is widely known or widely adopted within the field provides a neat way to respect the fact that different technologies face different kinds of risks and different types of data processing activities—or different types of data itself—that demand different levels of risk mitigation. While these standards are not perfectly fixed, necessitating elements of judgement, they at least provide a measuring stick that companies can use to assess their own security posture. While flexibility creates its own challenges in terms of both compliance and enforcement, it will be challenging to define any static, single standard that can tackle an ecosystem as sprawling as consumer cybersecurity.

    The debate over software liability in the United States is raging at this moment,149 with a key question being how to define standards that will be adaptive over time while providing businesses a measure of certainty about their obligations (and avoiding an overabundance of litigation without corresponding gains in security). Several of the cases in the dataset pertain to questions about the reasonable design of software systems. The kinds of unreasonable behavior outlined within could play a role in defining certain baseline standards associated with a federal software liability regime; perhaps such a regime could even make use of a federal agency already designed and primed to handle questions of reasonable behavior with respect to consumer harm and cybersecurity.

    On the FTC

    The consistency of the FTC’s cyber enforcement actions is notable, even as administrations have changed and FTC commissioners have come and gone. New commissioners have brought shifts in focuses to be sure: take, for example, the move to address mobile and Internet of Things devices under Edith Ramirez, and current Chair Lina Khan’s prioritization of actions against data brokers. Yet, the lack of gaps or manifest periods of total de-prioritization of cyber enforcement is remarkable.

    At the same time, despite its consistency, the relatively small size of this dataset is notable too: the FTC has typically resolved only a few cyber enforcement cases each year. What would you do if you had just a few cases each year to address cybersecurity standards and failures for new and unregulated technologies that were causing potential harm to consumers? How would you prioritize and adapt? This is the question the FTC has had to consider for the past twenty years, with just a few dozen staff in its Division of Privacy and Identity Protection.

    In the years since 2002, when the first action in this dataset was brought, the technology landscape has changed drastically. Major social media sites Facebook, YouTube, Instagram, Snapchat, and TikTok emerged; Amazon, Google, and Microsoft began offering cloud computing services; the iPhone was born; and meaningful consumer-facing generative artificial intelligence services have emerged. These innovations have changed the ways in which software is used and users’ digital data is collected, stored, and processed. Each case that the FTC brings requires substantial resources, from investigation to filing and negotiations to supervising companies’ compliance with eventual consent decrees. Thus, the agency must choose exemplar cases, hoping that other companies—those in the same industry, or processing the same kinds of data, or lacking the at-issue protections—get the message.

    In addition to creating challenges in bringing new cases, the extremely limited number of staff in the FTC’s Division of Privacy and Identity Protection may also imperil the oversight and enforceability of the consent decrees into which the agency does enter. Because most consent decrees last for twenty years, the FTC has only just finished the monitoring period for some of its very first cases covered in this analysis. Not only does technology grow more complex and more embedded into many of the systems with which consumers must interact—the sheer administrative burden of overseeing existing decrees grows. The question of how to measure the FTC’s enforcement capacity and the impact of changes in that capacity is an important question and one for future work.

    Beyond capacity, in April of 2021, the Supreme Court stripped the FTC of its authority to seek monetary redress for first-time violations of Section 5a under Section 13b, in AMC Capital Management, LLC, v. FTC.150 The fact that companies cannot face fines for UDAP violations may reduce their incentives to proactively implement cybersecurity safeguards, lessening the disciplinary power that these cases have over the broader market. While there exist alternative pathways for the FTC to seek redress, they are arduous, prompting calls from FTC commissioners for legislation that would give the FTC the legal authority to seek monetary redress for Section 5a violations in federal court.151

    Another challenge to the FTC’s ability to create lasting change in the cybersecurity ecosystem may come in the form of future litigation. The FTC has already faced two major legal challenges to their cybersecurity authority—Wyndham v. FTC in 2015, in which the Third Circuit upheld the FTC’s authority to police cybersecurity-related violations of Section 5a, and LabMD in 2018—there is a real possibility that companies may increasingly start to challenge the FTC’s authority. One avenue for challenges that could particularly threaten the FTC’s UDAP cyber enforcement is the question of whether privacy harms form a sufficient basis for an unfairness claim, as addressed in the previous section—any finding that they do not would imperil the very basis of many of the FTC’s enforcement actions in the cyber space, since these actions tend to hinge upon the theft or exposure of consumer data.

    To be sure, the FTC has done a lot with little: with a mandate from 1914 and minimal staff members, in just forty-seven cases, they have litigated a variety of harmful practices from the indefinite retention of consumers personal information152 to the storage of AWS cloud bucket credentials in GitHub repositories.153 However, the question remains whether this mandate and capacity can keep pace with the continued evolution of digital technologies. Without more explicit consumer data protection authorities, the FTC will need to continue standard-setting through its slow drumbeat of UDAP cases, rather than being able to set proactive standards and requirements. Without more resources and staff, there will continue to be real capacity constraints on the volume of cases and practices that the agency can pursue. Without the ability to levy penalties for violations, companies may see little incentive to pay heed to the evolving standards within the FTC’s complaints and decrees. And without legal clarification, a new court decision could imperil the very foundations of its enforcement actions to date.

    Without the clear specification and enforcement of baseline security practices, consequential failures in the security of digital technologies will continue to stack up, even as more and more of the world is dependent on digital infrastructure. In the meantime, and in the continued absence of a wider liability regime, the FTC works quietly along, pursuing unfair and deceptive data security practices and shaping a set of standards for consumer data security with the tools they have at hand.

    Appendix

    Table 1: Short form names and full citations

    Short Form NameCitation
    Ashley Madison ComplaintComplaint, Ruby Corp., Ruby Life Inc. d/b/a AshleyMadison.com, ADL Media Inc. (Dec 14, 2016)
    ASUSTeK ComplaintComplaint, ASUSTeK Inc., FTC File No. 142-3156, (Jul 18, 2016)
    BJ’s ComplaintComplaint, BJ’s Wholesale Club, Inc., FTC File No. 042-3160, (Sep 20, 2005)
    Blackbaud ComplaintComplaint, Blackbaud, Inc., FTC File No. 052-3094, (Jun 15, 2011)
    Blackbaud OrderDecision and Order, Blackbaud, Inc., FTC File No. 052-3094, (Jun 15, 2011)
    BLU Products ComplaintComplaint, BLU Products, Samuel Ohev-Zion., FTC File No. 172-3025, (Sep 6, 2018)
    BLU Products OrderDecision and Order, BLU Products, Inc., Samuel Ohev-Zion, FTC File No. 172-3025, (Sep 6, 2018)
    Card Systems Solutions ComplaintComplaint, Card Systems Solutions, Inc., FTC File No. 052-3148 (Sep 5, 2006)
    Ceridian ComplaintComplaint, Ceridian Corporation, FTC File No. 102-3160 (Jun 8, 2011)
    Chegg ComplaintComplaint, Chegg, Inc., FTC File No. 202-3151, (Jan 25, 2023)
    Chegg OrderDecision and Order, Chegg, Inc., FTC File No. 202-3151, (Jan 25, 2023)
    Compete ComplaintComplaint, Compete, Inc., FTC File No. 102-3155, (Feb 20, 2013)  
    Credit Karma ComplaintComplaint, Credit Karma, FTC File No. 202-3138, (Aug 19, 2014)
    Credit Karma OrderDecision and Order, Credit Karma, FTC File No. 202-3138, (Aug 19, 2014)
    D-Link ComplaintComplaint, D-Link Systems, Inc., FTC File No. 052-3094000-39, (Jul 2, 2019)
    D-Link OrderDecision and Order, D-Link Systems, Inc., FTC File No. 052-3094000-39, (Jul 2, 2019)
    Dave & BustersComplaint, Dave & Busters, Inc., FTC File No. 082-3153 (May 20, 2010)
    Drizly ComplaintComplaint, Drizly LLC, James Cory Rellas, FTC File No. 202-3185 (Oct 3, 2012)
    Drizly OrderComplaint, Drizly LLC, James Cory Rellas, FTC File No. 202-3185 (Jan 10, 2023)
    DSW ComplaintComplaint, DSW Inc., FTC File No. 052-3096 (Aug 1, 2006)
    EPN Checknet ComplaintComplaint, EPN Inc. d/b/a Checknet, Inc. FTC File No. 112-3143 (Aug 1, 2006)
    Fandango ComplaintComplaint, Fandango, LLC, FTC File No. 132-3089 (Aug 13, 2014)
    Fandango OrderDecision and Order, Fandango, LLC, FTC File No. 132-3089 (Aug 13, 2014)
    Genica ComplaintComplaint, Genica Corporation and compgeeks.com d/b/a Computer Geeks Discount Outlet and Geeks.com, FTC File No. 082-3133, (Mar 16, 2009)
    Global Tel*Link ComplaintComplaint, Global Tel*Link Corp., FTC File No. 212-3012 (Feb 23, 2024)
    Global Tel*Link OrderDecision and Order, Global Tel*Link Corp., FTC File No. 212-3012 (Feb 23, 2024)
    GMR Transcription Services ComplaintComplaint, GMR Transcription Services, Inc., Ajay Prasad, Shreekant Srivastava FTC File No. 122-3059 (Aug 14, 2014)
    GMR Transcription Services OrderDecision and Order, GMR Transcription Services, Inc., Ajay Prasad, Shreekant Srivastava FTC File No. 122-3059 (Aug 14, 2014)
    Guess ComplaintComplaint, Guess?, Inc. and Guess.com, Inc., FTC File No. 052-3057 (Jul 13, 2003)
    Guidance Software ComplaintComplaint, Guidance Software, Inc., FTC File No. 052-3094062-3057, (Mar 30, 2007)
    Guidance Software OrderDecision and Order, Guidance Software, Inc., FTC File No. 052-3094062-3057, (Mar 30, 2007)
    1Health.io OrderDecision and Order, 1Health.io/Vitagene, FTC File No. 1923170, (Sept 7, 2023)
    HTC America ComplaintComplaint, HTC America, Inc., FTC File No. 122-3049, (Jun 25, 2013)
    HTC America OrderDecision and Order, HTC America, Inc., FTC File No. 122-3049, (Jun 25, 2013)
    InfoTrax ComplaintComplaint, InfoTrax Systems, L.C, Mark Rawlins, FTC File No. 162-3130, (Dec 13, 2019)
    InfoTrax OrderDecision and Order, InfoTrax Systems L.C, Mark Rawlins Inc., FTC File No. 162-3130, (Dec 13, 2019)
    James V. Grago, Jr. (ClixSense) ComplaintComplaint, James V. Grago, Jr. d/b/a ClixSense.com, FTC File No. 172-3003, (Jun 19, 2019)
    Lenovo ComplaintComplaint, Lenovo, Inc., FTC File No. 172-3003, (Jun 19, 2019)
    Life is Good ComplaintComplaint, The Life is Good Company, FTC File No. 152-3134 (Dec 20, 2017)
    LifeLock ComplaintComplaint, LifeLock Inc., Robert J Maynard, Richard Todd Davis (Mar 8, 2010).
    Lookout Services ComplaintComplaint, Lookout Services, Inc., FTC File No. 102-3076, (Jun 15, 2011)
    Microsoft ComplaintComplaint, Microsoft Corporation, FTC File No. 012-3240, (Dec 20, 2002)
    Microsoft OrderDecision and Order, Microsoft Corporation, FTC File No. 012-3240, (Dec 20, 2002)
    MTS (Tower Records) ComplaintComplaint, MTS, Inc. d/b/a Tower Records/Books/Video, Tower Direct, LLC d/b/a Towerrecords.com, FTC File No. 032-3209, (Mar 4, 2005)
    Petco ComplaintComplaint, Petco Animal Supplies, Inc., FTC File No. 03203221, (Jun 15, 2011)
    Reed Elsevier ComplaintComplaint, Reed Elsevier Inc and Seisint, Inc., FTC File No. 052-3094, (Aug 1, 2008)
    Residual Pumpkin (CafePress) ComplaintComplaint, Residual Pumpkin Entity, LLC, FTC File No. 052-3094192, (Jan 10, 2024 )
    Residual Pumpkin (CafePress) OrderDecision and Order, Residual Pumpkin Entity, LLC, FTC File No. 052-3094192, (Jan 10, 2024 )
    Ring ComplaintComplaint, Ring LLC, FTC File No. 052-30941549, (Jun 16, 2023)
    Ring OrderDecision and Order, Ring LLC, FTC File No. 052-30941549, (Jun 16, 2023)
    SkyMed ComplaintComplaint, SkyMed International d/b/a SkyMed Travel and Car Rental Pro, FTC File No. 192-3140 (Jan 26, 2011)
    SkyMed OrderDecision and Order, SkyMed International d/b/a SkyMed Travel and Car Rental Pro, FTC File No. 192-3140 (Jan 26, 2011)
    Support King ComplaintComplaint, Support King, LLC and Scott Zuckerman, FTC File No. 192-3003, (Dec 20, 2021)
    Support King OrderDecision and Order, Support King LLC and Scott Zuckerman, FTC File No. 192-3003, (Dec 20, 2021)
    Tapplock ComplaintComplaint, Tapplock Corp., FTC File No. 192-3011, (May 18, 2020)
    Tapplock OrderDecision and Order, Tapplock Corp., FTC File No. 192-3011, (May 18, 2020)
    TJX ComplaintComplaint, The TJX Companies, Inc., FTC File No. 072-3055, (Jul 29, 2008)
    TRENDnet ComplaintComplaint, TRENDnet, Inc. FTC File 122-3090, (Feb 7, 2014)
    Uber ComplaintComplaint, Uber Technologies, Inc., FTC File No. 152-3054 (Oct 25, 2018)
    Uber OrderDecision and Order, Uber Technologies, Inc., FTC File No. 152-3054 (Oct 25, 2018)
    Upromise ComplaintDecision and Order, Upromise, Inc., FTC File No. 102-3116 (Mar 27, 2012)
    Wyndham ComplaintComplaint, Wyndham Worldwide Corporation Inc., File No. 052-3094, (Dec 23, 2014)
    Zoom OrderDecision and Order, Zoom Video Communications, Inc., File No. 192-3167, (Feb 1, 2021)

    Acknowledgements

     Thank you to Josephine Wolff, Natalie Thompson, Chris Hoofnagle, Stew Scott, and Trey Herr for feedback and suggestions on various versions of this document—it is far stronger for it. Thank you to the policymakers and researchers who spoke with us in conversations on background to inform this work. And, finally, thank you to Nancy Messieh, who built the interactive visuals that substantially enrich this document’s analysis. 

    About the authors

    Isabella Wright was a consultant and Young Global Professional with the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). She graduated from the University of California, Berkeley where she majored in history with an emphasis in the history of science and technology.

    Maia Hamin is an associate director with the Cyber Statecraft Initiative, part of the the Atlantic Council Tech Programs. She works on the intersection of cybersecurity and technology policy, including projects on the cybersecurity implications of artificial intelligenceopen-source softwarecloud computing, and regulatory systems like software liability.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    Federal Trade Commission Act of 1914, § 5, 15 U.S.C. § 45 (1914)
    2    LabMD, Inc v. Federal Trade Commission, 16-16270 (11th Cir 2018).
    3    The White House, National Cybersecurity Strategy, March 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    4    US Congress, American Privacy Rights Act of 2024 Discussion Draft,https://d1dth6e84htgma.cloudfront.net/American_Privacy_Rights_Act_of_2024_Discussion_Draft_0ec8168a66.pdf.
    5    Woodrow Hartzog and Daniel J. Solove argue for the use of this term to describe the FTC’s case precedents; see Daniel J. Solove and Woodrow Hartzog, “The FTC and the New Common Law of Privacy,” Columbia Law Review (2014), 583
    6    Maia Hamin and Isabella Wright, “The U.S.’s FAR-Reaching New Cybersecurity Rules for Federal Contractors,” Lawfare, February 1, 2024, https://www.lawfaremedia.org/article/the-u.s.-s-far-reaching-new-cybersecurity-rules-for-federal-contractors.
    7    State Laws Related to Digital Privacy,” National Conference of State Legislatures, accessed March 26, 2024, https://www.ncsl.org/technology-and-communication/state-laws-related-to-digital-privacy.
    8    Health Insurance Portability and Accountability Act (HIPAA) of 1996, § 264a, 42 U.S.C. § 1320d-2 (1996)
    9    Financial Services Modernization Act of 1999, 15 U.S.C. § 6803 (1999)
    10    Fair Credit Reporting Act, 15 U.S.C. § 1681s (1970)
    11    Children’s Online Privacy Protection Act, 15 U.S.C. §§ 6501–6506 (1998)
    12    Federal Communications Commission, “FCC Releases Open Internet Order,” March 12, 2015, https://www.fcc.gov/document/fcc-releases-open-internet-order.
    13    Maia Hamin, “Who’s Afraid of the SEC?,” DFRLab, June 14, 2023, https://dfrlab.org/2023/06/14/whos-afraid-of-the-sec/.
    14    Cyber Incident Reporting for Critical Infrastructure Act of 2022,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/cyber-incident-reporting-critical-infrastructure-act-2022-circia.
    15    Federal Trade Commission Act of 1914, § 5, 15 U.S.C. § 45 (1914)
    16    Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy, (New York: Cambridge University Press, 2016), 98
    17    Federal Trade Commission Act of 1914, § 5, 15 U.S.C. § 45 (1914)
    18    Federal Trade Commission Act of 1914, 15 U.S.C. § 45(a)(1) (1914)
    19    Federal Trade Commission Act of 1914, 15 U.S.C. § 45(n) (1914)
    20    Federal Trade Commission Act of 1914, 15 U.S.C. § 45(n) (1945)
    22    Complaint, BLU Products and Samuel Ohev-Zion, File No. 172-3025, (September 11, 2018); Complaint, 1Health.io/Vitagene, File No. 192-3170, (September 7, 2023); Complaint, Zoom Video Communications Inc., File No. 192-3167, (Feb 1, 2021)
    23    Shitao Xiao et al., “C-Pack: Packaged Resources To Advance General Chinese Embedding,” arXiv, last updated May 12 2024, https://doi.org/10.48550/arXiv.2309.07597.
    24    See Appendix 1: James V. Grago, Jr. (ClixSense) Complaint
    25    See Appendix 1: ASUSTeK Complaint
    26    See Appendix 1: James V. Grago, Jr. (ClixSense) Complaint
    27    See Appendix 1: DSW Complaint
    28    See Appendix 1: Reed Elsevier Complaint
    29    See Appendix 1: InfoTrax Complaint
    30    See Appendix 1: Uber Complaint
    31    See Appendix 1: ASUSTeK Complaint
    32    See Appendix 1: Blackbaud Complaint
    33    See Appendix 1: Genelink Complaint
    34    See Appendix 1: Zoom Complaint
    35    See Appendix 1: Blackbaud Complaint
    36    “History of Encryption,” Thales Group, last updated June 10, 2023, https://www.thalesgroup.com/en/markets/digital-identity-and-security/magazine/brief-history-encryption.
    37    See Appendix 1: DSW Complaint; Guidance Software Complaint; Life is Good Complaint; TJX Complaint; Genica Complaint; LifeLock Complaint; Ceridian Complaint; Upromise Complaint; TRENDnet Complaint; Wyndham Complaint; Uber Complaint; James V. Grago, Jr. (ClixSense) Complaint; D-Link Complaint; InfoTrax Complaint; Support King Complaint; SkyMed Complaint; Ring Complaint; Residual Pumpkin (CafePress) Complaint; Chegg Complaint; Global Tel*Link Complaint
    38    See Appendix 1: InfoTrax Complaint
    39    See Appendix 1: BJ’s Complaint; TJX Complaint; LifeLock Complaint; Upromise Complaint; Compete Complaint
    40    See Appendix 1: Compete Complaint
    41    See Appendix 1: LifeLock Complaint
    42    “Common Weakness Enumerations,” MITRE, last updated May 13, 2024, https://cwe.mitre.org/
    43    See Appendix 1: Guess Complaint; MTS (Tower Records) Complaint; Petco Complaint, Card Systems Solutions Complaint; Guidance Software Complaint; Life is Good Complaint; Reed Elsevier Complaint; Genica Complaint; Ceridian Complaint; ASUSTek Complaint; James V. Grago, Jr. (ClixSense) Complaint; LifeLock Complaint; Lookout Services Complaint; D-Link Complaint
    44    See Appendix 1: Guess Complaint; Petco Complaint; Card Systems Solutions Complaint; Guidance Software Complaint; Life is Good Complaint. Genica Complaint; LifeLock Complaint; Ceridian Complaint; Residual Pumpkin (CafePress) Complaint
    45    See Appendix 1: Reed Elsevier Complaint; ASUSTeK Complaint; Residual Pumpkin (CafePress) Complaint
    46    See Appendix 1: ASUSTeK Complaint; Residual Pumpkin (CafePress) Complaint
    47    See Appendix 1: Lookout Services Complaint
    48    See Appendix 1: Reed Elsevier Complaint; LifeLock Complaint; Wyndham Complaint; Residual Pumpkin (CafePress) Complaint
    49    See Appendix 1: Twitter Complaint.
    50    See Appendix 1: Card Systems Solutions Complaint; TJX Complaint; Twitter Complaint; ASUSTeK Complaint; Drizly Complaint; Blackbaud Complaint
    51    See Appendix 1: Reed Elsevier. Complaint
    52    See Appendix 1: LifeLock Complaint
    53    See Appendix 1: Reed Elsevier Complaint; Ashley Madison Complaint; Uber Complaint; James V. Grago, Jr. (ClixSense) Complaint
    54    See Appendix 1: Reed Elsevier Complaint
    55    See Appendix 1: Ashley Madison Complaint or third partiesSee Appendix 1: James V. Grago, Jr. (ClixSense) Complaint
    56    See Appendix 1: Ashley Madison Complaint
    57    See Appendix 1: Uber Complaint
    58    See Appendix 1: Reed Elsevier Complaint; LifeLock Complaint; Lookout Services Complaint; Twitter Complaint; Ashley Madison Complaint
    59    See Appendix 1: Ashley Madison Complaint
    60    See Appendix 1: Reed Elsevier Complaint
    61    See Appendix 1: Guidance Software Complaint; Lookout Services Complaint; TRENDnet Complaint; James V. Grago, Jr. (ClixSense) Complaint
    62    See Appendix 1: Reed Elsevier Complaint
    63    See Appendix 1: Chegg Complaint
    64    See Appendix 1: Twitter Complaint
    65    See Appendix 1: Ashley Madison Complaint
    66    Lorrie Cranor, “Time to Rethink Mandatory Password Changes,” Federal Trade Commission (Office of Technology Blog), March 2, 2016, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2016/03/time-rethink-mandatory-password-changes
    67    See Appendix 1: Drizly Complaint; Chegg Complaint; Blackbaud Complaint
    68    See Appendix 1: BJ’s Complaint; TJX Complaint; Card Complaint; DSW Complaint; Genica Complaint, Guidance Software Complaint; Life is Good Complaint; Dave & Busters Complaint; Wyndham Complaint; James V. Grago, Jr. (Clixsense) Complaint; SkyMed Complaint; Chegg Complaint; Global Tel*Link Complaint; Blackbaud Complaint
    69    See Appendix 1: TJX Complaint; Dave & Busters Complaint; Wyndham Complaint
    70    See Appendix 1: Card Complaint; DSW Complaint; Genica Complaint; Dave & Busters Complaint; James V. Grago, Jr. (ClixSense) Complaint.
    71    See Appendix 1: BJ’s Complaint; DSW Complaint; TJX Complaint; Dave & Busters Complaint
    72    See Appendix 1: Genica Complaint; LifeLock Complaint; Dave & Busters Complaint; Lookout Services Complaint; EPN Checknet complaint; Wyndham Complaint
    73    See Appendix 1: SkyMed Complaint; EPN Checknet Complaint; Chegg Complaint; Blackbaud Complaint
    74    See Appendix 1: Drizly Complaint
    75    See Appendix 1: EPN CheckNet Complaint; Ashley Madison Complaint; Uber Complaint; Tapplock Complaint; SkyMed Complaint; Drizly Complaint, Chegg Complaint; SkyMed Complaint; Chegg Complaint.
    76    See Appendix 1: EPN CheckNet Complaint.
    77    See Appendix 1: Tapplock Complaint; SkyMed Complaint; Drizly Complaint; Chegg Complaint; SkyMed Complaint; Chegg Complaint.
    78    See Appendix 1: Lenovo Complaint.
    79    See Appendix 1: HTC America Complaint; Fandango Complaint; TRENDnet Complaint; ASUSTeK Complaint; Residual Pumpkin (CafePress) Complaint
    80    See Appendix 1: HTC America Complaint; TRENDnet Complaint; ASUSTeK Complaint
    81    See Appendix 1: TRENDnet Complaint
    82    See Appendix 1: TJX Complaint; Wyndham Complaint; Residual Pumpkin (CafePress) Complaint; Blackbaud Complaint; LifeLock Complaint
    83    See Appendix 1: Wyndham Complaint; Residual Pumpkin (CafePress) Complaint; LifeLock Complaint
    84    See Appendix 1: TJX Complaint
    85    See Appendix 1: Wyndham Complaint; Residual Pumpkin (CafePress) Complaint
    86    See Appendix 1: Residual Pumpkin (CafePress) Complaint.
    87    See Appendix 1: Upromise Complaint; HTC America Complaint; Fandango Complaint; Credit Karma Complaint; TRENDnet Complaint; ASUSTeK Complaint; InfoTrax Complaint; Tapplock Complaint; SkyMed Complaint; Drizly Complaint; Blackbaud Complaint
    88    See Appendix 1: InfoTrax Complaint; SkyMed Complaint; Drizly Complaint
    89    See Appendix 1: Fandango Complaint; Credit Karma Complaint; TRENDnet Complaint; ASUSTeK Complaint.
    90    See Appendix 1: HTC America Complaint.
    91    See Appendix 1: TRENDnet Complaint.
    92    See Appendix 1: TRENDnet Complaint; ASUSTeK Complaint; InfoTrax Complaint.
    93    See Appendix 1: Upromise Complaint.
    94    See Appendix 1: Drizly Complaint; Blackbaud Complaint.
    95    See Appendix 1: BJ’s Complaint; DSW Complaint; Life is Good Complaint; Ceridian Complaint;  Residual Pumpkin (CafePress) Complaint; InfoTrax Complaint; SkyMed Complaint; Drizly Complaint;  Chegg Complaint; Blackbaud Complaint
    96    See Appendix 1: Residual Pumpkin (CafePress) Complaint.
    97    See Appendix 1: Blackbaud Complaint.
    98    See Appendix 1: Chegg Complaint.
    99    See Appendix 1: LifeLock Complaint; Twitter Complaint; Ashley Madison Complaint; Uber Complaint; InfoTrax Complaint; Support King Complaint; Drizly Complaint; Ring Complaint
    100    See Appendix 1: Ashley Madison Complaint; Ring Complaint
    101    See Appendix 1: Twitter Complaint
    102    See Appendix 1: Drizly Complaint
    103    See Appendix 1: Global Tel*Link Complaint
    104    See Appendix 1: Upromise Complaint; Genelink Complaint; Credit Karma Complaint; GMR Transcription Services Complaint; Ashley Madison Complaint; Lenovo Complaint; Support King Complaint; Global Tel*Link Complaint Some casesSee Appendix 1: Tel*Link Complaint; Ashely Madison Complaint; Support King Complaint
    105    See Appendix 1: Global Tel*Link Complaint
    106    See Appendix 1: Global Tel*Link Complaint
    107    See Appendix 1: MTS (Tower Records) Complaint; Compete Complaint; Upromise Complaint; HTC America Complaint; TRENDnet Complaint; Ashely Madison Complaint; Lenovo Complaint; Uber Complaint; Tapplock Complaint; SkyMed Complaint; Ring Complaint; Chegg Complaint
    108    See Appendix 1: Ashely Madison Complaint
    109    ritySee Appendix 1: Chegg Complaint, SkyMed Complaint
    110    See Appendix 1: HTC America Complaint
    111    See Appendix 1: Lenovo Complaint
    112    See Appendix 1: Tapplock Complaint
    113    See Appendix 1: Reed Elsevier Complaint
    114    See Appendix 1: Ceridian Complaint
    115    See Appendix 1: Drizly Complaint
    116    See Appendix 1: Guess Complaint
    117    See Appendix 1: Ceridian Complaint
    118    See Appendix 1: TRENDnet Complaint
    119    See Appendix 1: Drizly Complaint
    120    See Appendix 1: Microsoft Order
    121    See Appendix 1: Microsoft Order
    122    See Appendix 1: Guidance Software Order
    123    See Appendix 1: Guidance Software Order
    124    See Appendix 1: HTC America Order
    125    See Appendix 1: Fandango Order; Credit Karma Order
    126    See Appendix 1: Drizly Order.
    127    See Appendix 1: Support King Order; Drizly Order; BLU Products Order; InfoTrax Order; GMR Transcription Services Order
    128    See Appendix 1: InfoTrax Order
    129    See Appendix 1: Support King Order
    130    See Appendix 1: Drizly Order
    131    Douglas Meal, Michelle Visser, and David Cohen, “Key Takeaways from LabMD: The Implications May Be Broader Than You Think,” Bloomberg Law, December 2018, https://www.bloomberglaw.com/external/document/XBJH6ROS000000/data-security-professional-perspective-key-takeaways-from-labmd-
    132    Meal, Visser, and Cohen, “Key Takeaways from LabMD.”
    133    Gabe Maldoff, “LabMD and the New Definition of Privacy Harm,” International Association of Privacy Professionals, August 22, 2016, https://iapp.org/news/a/labmd-and-the-new-definition-of-privacy-harm.
    134    Meal, Visser, and Cohen, “Key Takeaways from LabMD.”
    135    Andrew Smith, “New and Improved FTC Data Security Orders: Better Guidance for Companies, Better Protection for Consumers,” Federal Trade Commission, January 6, 2020, https://www.ftc.gov/business-guidance/blog/2020/01/new-and-improved-ftc-data-security-orders-better-guidance-companies-better-protection-consumers
    136    See Appendix 1: Zoom Order; Global Tel*Link Order; Blackbaud Order; 1Health.io Order; Ring Order, Chegg Order; Drizly Order; Cafe Press Order; SkyMed Order
    137    See Appendix 1: D-Link Order
    138    See Appendix 1: InfoTrax Order
    139    See Appendix 1: Zoom Order.
    140    See Appendix 1: SkyMed Order.
    141    See Appendix 1: InfoTrax Order; Tapplock Order; SkyMed Order; Support King Order; Drizly Order; Chegg Order; Ring Order; Health.io Order; Residual Pumpkin (CafePress) Order; Blackbaud Order; GlobalTelLink Order
    142    See Appendix 1: Global Tel*Link Order
    143    See Appendix 1: Global Tel*Link Complaint
    144    See Appendix 1: Global Tel*Link Complaint
    145    See Appendix 1: Global Tel*Link Complaint
    146    Suzanna Smalley, “Judge allows case against geolocation data broker Kochava to proceed,” The Record, February 5, 2024, https://therecord.media/judge-allows-ftc-case-against-kochava-data-broker-to-proceed.
    147    Danielle Keats Citron and Daniel J. Solove, “Privacy Harms,” Boston University Law Review (2022), 793.
    148    Smith, “New and improved FTC data security orders.”
    149    Maia Hamin, Trey Herr, and Stewart Scott, ”Three Questions on Software Liability,” Lawfare, September 7, 2023, https://www.lawfaremedia.org/article/three-questions-on-software-liability.
    150    AMG Capital Management, LLC v. Federal Trade Comm’n, 593 U.S. ___ (2021)
    151    Hearing on Oversight of the Federal Trade Commission, Before the Comm. on Commerce, Science, and Transportation, 116th Cong. 6. (2020) (statement of the Federal Trade Commission).
    152    See Appendix 1: Life is Good Complaint
    153    See Appendix 1: Drizly Complaint

    The post “Reasonable” cybersecurity in forty-seven cases: The Federal Trade Commission’s enforcement actions against unfair and deceptive cyber Practices appeared first on Atlantic Council.

    ]]>
    Modern technology is shaping global defense. Here’s how. https://www.atlanticcouncil.org/content-series/defense-technology-monitor/modern-technology-is-shaping-global-defense-heres-how/ Sat, 01 Jun 2024 14:22:00 +0000 https://www.atlanticcouncil.org/?p=797585 Modern defense technology as we know it is rapidly changing. Forward Defense's Defense Technology Monitor explores some of the innovations and initiatives at the core of that shift.

    The post Modern technology is shaping global defense. Here’s how. appeared first on Atlantic Council.

    ]]>
    Below is an abridged version of the Forward Defense initiative’s Defense Technology Monitor, a bimonthly series tracking select developments in global defense technology and analyzing technology trends and their implications for defense, international security, and geopolitics.

    Modern defense technology as we know it is rapidly changing.

    That’s in part because drones are revolutionizing battlefields through enhanced reconnaissance and offensive capabilities. They are being extensively used in conflicts such as the wars in Ukraine and Gaza—and that is also sparking advancements in counter-drone technologies such as laser and radio frequency weapons.

    Artificial intelligence (AI) is seeing its applications in defense grow. AI can have a significant impact on military operations, particularly in targeting and data analysis. But there are broader implications from AI, including its exploitation by criminals for cyberattacks and phishing, which present serious security challenges. Additionally, there are ethical and operational complexities of integrating AI into defense strategies to consider.

    Advancements in cyber and electronic warfare are also having an impact on modern defense technology. Calls for robust security in the information domain and the electromagnetic spectrum have helped fuel the development of new technologies, for example, new quantum navigation technologies that offer protection against jamming. Safeguarding critical infrastructure from cyber threats, particularly from malicious actors like Chinese hackers, is becoming more and more urgent.

    Below is a quick look at some of the new innovations and initiatives that are shaping global defense.

    AI and data

    The utilization of AI by the Israeli Defense Forces (IDF) has ignited intense scrutiny over the ethical ramifications and operational validity of using the technology in military applications. Israeli news outlet +972 Magazine reported that the IDF developed AI tools to “mark” suspected operatives of Hamas as targets for bombing, citing Israeli officials familiar with the IDF’s AI systems. Israel denied these allegations, saying that the AI tools it has deployed don’t automatically generate targets. The ensuing scrutiny highlighted how AI systems tasked with identifying and striking military targets would pose significant risks of collateral damage, especially in densely populated areas. Critics argue that AI systems lack the nuanced judgment required to distinguish between combatants and noncombatants effectively. The prospect of using of AI in military applications raises larger geopolitical concerns, as the reliance on automated decision making in such sensitive contexts could escalate conflicts unintentionally and affect global perceptions of AI in warfare. As AI continues to play a pivotal role in military strategies, there are increasing calls from the international community for stringent oversight, transparent engagement rules, and ethical constraints to govern its use.

    Autonomous systems

    Malicious actors are increasingly harnessing generative AI to enhance their operations, creating more sophisticated threats to digital security. From crafting undetectable phishing emails to generating convincing deepfake audios and videos, these tools allow for a range of deceptive practices previously unattainable with traditional methods. This trend poses new challenges for cybersecurity defenses, necessitating advancements in digital verification techniques and the development of countermeasures to detect and mitigate the effects of AI-generated content. The implications are broad, affecting everything from individual identity security to national security, as these technologies can be used to influence public opinion, manipulate stock markets, or even sway political elections.

    Platforms and weapons systems

    The development of laser weapon systems by the US military highlights significant advancements in directed energy applications for defense. These laser systems, designed for precision targeting and minimal collateral damage, are tested under various operational scenarios to determine their efficacy against threats like drones, missiles, and other aerial targets. While promising, the deployment of these systems faces hurdles such as the need for substantial power sources, environmental limitations affecting beam propagation, and integration challenges with existing military platforms. Ongoing research aims to overcome these obstacles, with the goal of fully operationalizing laser weapons to provide a cost-effective, reliable, and scalable defense solution.

    The information domain, cyber, and electronic warfare

    Emerging concerns over cyber threats to US infrastructure have been amplified by revelations about covert operations by Chinese hackers. These operations involve embedding software in critical systems that could be activated remotely to cause significant disruption during geopolitical tensions. This strategy represents a shift towards more aggressive postures in cyber warfare, where the potential for damage extends beyond espionage to actual physical and economic harm. The United States is responding by bolstering its cybersecurity defenses, with an emphasis on enhancing resilience, detecting preemptive breach attempts, and mitigating potential impacts through rapid response and recovery strategies.

    Manufacturing and industry

    The South Korean defense minister announced the country’s intention to pursue collaboration on emerging technologies with the United States, United Kingdom, and Australia as part of Pillar II of AUKUS. The potential expansion of the AUKUS alliance into advanced technology sectors under Pillar II reflects a strategic initiative to deepen military cooperation beyond traditional domains. This collaboration aims to leverage cutting-edge technologies (including AI, quantum computing, and advanced cyber defenses) to maintain a competitive edge in the Indo-Pacific region. While this expansion requires careful management of technology transfers, alignment of regulatory standards, and protection of intellectual property rights, the initiative could set a precedent for future international defense and security collaborations, fostering a more integrated approach to global security challenges.

    If you are interested in reading this month’s full issue of the Defense Technology Monitor, please contact Forward Defense Project Assistant Curtis Lee.

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    The post Modern technology is shaping global defense. Here’s how. appeared first on Atlantic Council.

    ]]>
    Who’s a national security risk? The changing transatlantic geopolitics of data transfers https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/whos-a-national-security-risk-geopolitics-of-data-transfers/ Wed, 29 May 2024 19:34:02 +0000 https://www.atlanticcouncil.org/?p=767982 The geopolitics of data transfers is changing. How will Washington's new focus on data transfers affect Europe and the transatlantic relationship?

    The post Who’s a national security risk? The changing transatlantic geopolitics of data transfers appeared first on Atlantic Council.

    ]]>

    Table of contents

    Introduction
    Data transfer politics come to America
    Data transfer politics in Europe
    Conclusions

    Introduction

    The geopolitics of transatlantic data transfers have been unvarying for the past decade. European governments criticize the US National Security Agency (NSA) for exploiting personal data moving from Europe to the United States for commercial reasons. The US government responds, through a series of arrangements with the European Union, by providing assurances that NSA collection is not disproportionate, and that Europeans have legal avenues if they believe their data has been illegally used. Although the arrangements have not proven legally stable, on the whole they have sufficed to keep data flowing via subsea cables under the Atlantic Ocean.

    Now the locus of national security concerns about international data transfers has shifted from Brussels to Washington. The Biden administration and the US Congress, in a series of bold measures, are moving aggressively to interrupt certain cross-border data flows, notably to China and Russia.

    The geopolitics of international data flows remain largely unchanged in Europe, however. European data protection authorities have been mostly noncommittal about the prospect of Russian state surveillance collecting Europeans’ personal data. Decisions on whether to transfer European data to Russia and China remain in the hands of individual companies.

    Will Washington’s new focus on data transfers to authoritarian states have an impact in Europe? Will Europe continue to pay more attention to the surveillance activities of its liberal democratic allies, especially the United States? Is there a prospect of Europe and the United States aligning on the national security risks of transfers to authoritarian countries?

    Data transfer politics come to America

    The US government long considered the movement of personal data across borders as primarily a matter of facilitating international trade.1 US national security authorities’ surveillance of foreigners’ personal data in the course of commercial transfers was regarded as an entirely separate matter.

    For example, the 2001 EU-US Safe Harbor Framework,2 the first transatlantic data transfer agreement, simply allowed the United States to assert the primacy of national security over data protection requirements, without further discussion. Similarly, the 2020 US-Mexico-Canada Free Trade Agreement3 and the US-Japan Digital Trade Agreement4 contain both free flow of data guarantees and traditional national security carve-outs from those obligations.

    Edward Snowden’s 2013 revelations of expansive US NSA surveillance in Europe put the Safe Harbor Framework’s national security derogation into the political spotlight. Privacy activist Max Schrems then challenged its legality under EU fundamental rights law, and the Court of Justice of the European Union (CJEU) ruled it unacceptable.5

    The 2023 EU-US Data Privacy Framework6 (DPF) is the latest response to this jurisprudence. In it, the United States commits to hold national security electronic surveillance of EU-origin personal data to a more constrained standard, as the European Commission has noted.7 The United States’ defensive goal has been to reassure Europe that it conducts foreign surveillance in a fashion that can be reconciled with EU fundamental rights law.

    Now, however, the US government has begun expressly integrating its own national security considerations into decisions on the foreign destinations to which US-origin personal data may flow. It is a major philosophical shift from the prior free data flows philosophy, in which national security limits played a theoretical and marginal role.

    One notable development is a February 28, 2024, executive order, Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.8 The EO empowers the Department of Justice (DOJ), in consultation with other relevant departments, to identify countries “of concern” and to prohibit or otherwise regulate bulk data transfers to them, based on a belief that these countries could be collecting such data for purposes of spying on or extorting Americans. A week later DOJ issued a proposed rule describing the envisaged regulatory regime, and proposing China, Cuba, Iran, North Korea, Russia, and Venezuela as the countries “of concern.”9

    The White House, in issuing the bulk data EO, was at pains to insist that it was limited in scope and not inconsistent with the historic US commitment to the free flow of data, because it applies only to certain categories of data and certain countries.10 Nonetheless, as has been observed by scholars Peter Swire and Samm Sacks, the EO and proposed rule are, for the United States, part of “a new chapter in how it regulates data flows” in that they would create an elaborate new national security regulatory regime applying to legal commercial data activity.11

    Hard on the heels of the bulk data EO came congressional passage in April of the Protecting Americans’ Data from Foreign Adversaries Act, which the president signed into law.12 It prohibits data brokers from selling or otherwise making available Americans’ sensitive information to four specified countries: China, Iran, North Korea, and Russia. The new law has a significantly broader scope than the EO. It cuts off certain data transfers to any entity controlled by one of these adversary countries, apparently including corporate affiliates and subsidiaries. It extends to any sensitive data, not just data in bulk. It remains to be seen how the administration will address the overlaps between the new law and the EO.

    Another part of the same omnibus legislation ordered the ban or forced sale of TikTok, the Chinese social media platform widely used in this country.13 Advocates of the law point to the government of China’s ability under its own national security law to demand that companies operating there turn over personal data, including, potentially, TikTok users’ data transferred from the United States. Critics have cast the measure as a targeted punishment of a particular company, done without public evidence being offered of national security damage. TikTok has challenged the law as a violation of the First Amendment.14

    Finally, the data transfer restrictions in these measures are thematically similar to a January 29 proposed rule from the Commerce Department obliging cloud service providers to verify the identity of their customers, on whose behalf they transfer data.15 The rule would impose know your customer (KYC) requirements—similar to those that apply in the international banking context—for cloud sales to non-US customers, wherever located.

    This extraordinary burst of legislative and executive action focused on the national security risks of certain types of data transfers from the United States to certain authoritarian states is indicative of how far and fast political attitudes have shifted in this country. But what of Europe, which faces similar national security data challenges from authoritarian states? Is it moving in a similar direction as the United States?

    Data transfer politics in Europe

    The EU, unlike the United States, has long had a systematic set of controls on personal data flows from EU territory abroad, articulated in the General Data Protection Regulation (GDPR).16 The GDPR conditions transfers to a foreign jurisdiction on the “adequacy” of its data protection safeguards—or, as the CJEU has refined the concept, their “essential equivalence” to the GDPR regime.

    The task of assessing foreign legal systems falls to the European Commission, the EU’s quasi-executive arm. Article 45 of the GDPR instructs it to consider, among other things, “the rule of law, respect for human rights and fundamental freedoms, relevant legislation . . . including concerning . . . the access of public authorities to personal data.”

    For much of the past decade, the central drama in the European Commission’s adequacy process has been whether the United States meets this standard. As previously noted, the CJEU invalidated first the Safe Harbor Framework,17 in 2015, and then the Privacy Shield Framework,18 in 2020. The DPF is the third try by the US government and the European Commission to address the CJEU’s fundamental rights concerns. Last year, the European Commission issued yet another adequacy decision that found the DPF adequate.19 The EU understandably has focused its energies on the United States, since vast amounts of Europeans’ personal data travels to cloud service providers’ data centers in the United States and, as Snowden revealed, offered an inviting target for the NSA.

    Separately, the European Commission has gradually expanded the range of other countries benefiting from adequacy findings, conferring this status on Japan,20 Korea,21 and the United Kingdom.22 However, the 2019 adequacy decision for the UK continues to be criticized in Brussels. On April 22, the Committee on Civil Liberties, Justice, and Home Affairs (LIBE) of the European Parliament wrote to the UK House of Lords complaining about UK national security bulk data collection practices and the prospect of onward transfer of data from UK territory to jurisdictions not deemed adequate by the EU.23 Next year, the European Commission will formally review the UK’s adequacy status.

    List of countries with European Commission Adequacy Decisions

    This past January, the European Commission renewed the adequacy decisions for eleven jurisdictions which had long enjoyed them, including, notably, Israel.24 On April 22, a coalition of civil society groups published an open letter to the European Commission questioning the renewal of Israel’s adequacy decision.25 The letter expressed doubts about the rule of law in Israel itself, the specific activities of Israeli intelligence agencies in Gaza during the current hostilities there, and the surveillance powers exercised by those agencies more generally.

    Also delicate is the continuing flow of personal data from the European Union to Russia and China. Although neither country has been—or is likely to be—accorded adequacy status, data nonetheless can continue to flow to their territories, as to other third countries, if accompanied by contractual data protection safeguards. The CJEU established in its Schrems jurisprudence that such standard contractual clauses (SCCs) must uphold the same fundamental rights standards as an adequacy decision. The European Data Protection Board (EDPB) subsequently issued detailed guidance on the essential guarantees against national security surveillance that must be in place in order for personal data to be sent to a nonadequate jurisdiction.26

    In 2021, the EDPB received an outside expert report27 on several foreign governments’ data access regimes. Its findings were clear. “Chinese law legitimises broad and unrestricted access to personal data by the government,” it concluded. Similarly, with respect to Russia, “The right to privacy is strongly limited when interests of national security are at stake.” The board did not take any further steps to follow up on the report, however.

    Shortly after Russia invaded Ukraine, Russia was excluded from the Council of Europe and ceased to be a party to that body’s European Convention on Human Rights.28 The European Data Protection Board issued a statement confirming that data transfers to Russia pursuant to standard contract clauses remained possible, but stressed that safeguards to guard against Russian law enforcement or national security access to data were vital.29

    Over two thousand multinational companies continue to do business in Russia, despite the Ukraine war, although a smaller number have shut down, according to a Kyiv academic research institute.30 Data flows between Europe and Russia thus remain substantial, if less than previously. Companies engaged in commerce in Russia also are subject to requirements that data on Russian persons be localized in that country.31 Nonetheless, data flows from Europe to Russia are not subject to categorical exclusions, unlike the new US approach.

    The sole reported case of a European data protection authority questioning data flows to Russia involves Yango, a taxi-booking mobile app developed by Yandex, a Russian internet search and information technology company. Yango’s European services are based in the Netherlands and are available in other countries including Finland and Norway. In August 2023, Finland’s data protection authority (DPA) issued an interim decision to suspend use of Yango in its territory because Russia had just adopted a decree giving its state security service (FSB) unrestricted access to commercial taxi databases.32

    The interim suspension decision was short-lived. A month later, the Finnish authority, acting in concert with Norwegian and Dutch counterparts, lifted it, on the basis of a clarification that the Russian decree in fact did not apply to use of the Yango app in Finland.33 The Finnish authority further announced that the Dutch authority, in coordination with it and Norway, would issue a final decision in the matter. The Dutch investigation reportedly remains open, but it does not appear to be a high priority matter.

    The day after lifting the Yango suspension, the Finnish data protection authority rushed out yet another press release advising that its decision “does not address the legality of data transfers to Russia,” or “mean that Yango data transfers to Russia would be in compliance with the GDPR or that Russia has an adequate level of data protection.”34

    One can interpret this final Finnish statement as at least indirectly acknowledging that continued commercial data transfers from an EU jurisdiction to Russia may raise rule of law questions bigger than a single decree allowing its primary security agency, known as the FSB, to access certain taxi databases. Otherwise, the Finnish decision could be criticized for ignoring the forest for the birch trees.

    Equally striking is the limited extent of DPA attention to data transfers between EU countries and China. China maintains an extensive national security surveillance regime, and lately has implemented a series of legal measures that can limit outbound data transfers for national security reasons.35 In 2023, the Irish Data Protection Commissioner36 imposed a substantial fine on TikTok for violating the GDPR with respect to children’s privacy, following a decision by the EDPB.37 This inquiry did not examine the question of whether Chinese government surveillance authorities had access to European users’ data, however.

    Personal data actively flows between Europe and China in the commercial context, pursuant to SCCs. China reportedly may issue additional guidance to companies on how to respond to requests for data from foreign law enforcement authorities. To date there is no public evidence of European DPAs questioning companies about their safeguard measures for transfers to China.

    Indeed, signs recently have emerged from China of greater openness to transfers abroad of data generated in the automotive sector, including from connected cars. Data from connected cars is a mix of nonpersonal and personal data. China recently approved Tesla’s data security safeguards, enabling the company’s previously localized data to leave the country.38 In addition, the government of Germany is trying to ease the passage of data to and from China on behalf of German carmakers. On April 16, several German government ministers, part of a delegation visiting China led by Chancellor Olaf Scholz, issued a joint political statement with Chinese counterparts promising “concrete progress on the topic of reciprocal data transfer—and this in respect of national and EU data law,” with data from connected cars and automated driving in mind.39

    Conclusions

    The United States and the European Union are, in some respects, converging in their international data transfer laws and policies. In Washington, free data transfers are no longer sacrosanct. In Europe, they never have been. Viewed from Brussels, it appears that the United States is, finally, joining the EU by creating a formal international data transfers regime—albeit constructed in a piecemeal manner and focused on particular countries, rather than through a comprehensive and general data privacy law.

    Yet the rationales for limiting data transfers vary considerably from one side of the Atlantic to the other. Washington now focuses on the national security dangers to US citizens and to the US government from certain categories of personal data moving to the territories of “foreign adversaries.” Brussels instead applies more abstract criteria relating to foreign governments’ commitment to the rule of law, human rights, and especially their access to personal data.

    A second important difference is that the United States has effectively created a blacklist of countries to which certain categories of data should not flow, whereas the EU’s adequacy process serves as a means of “white listing” countries with comparable data protection frameworks to its own. Concretely, this structural difference means that the United States concentrates on prohibiting certain data transfers to China and Russia, while the EU institutionally has withheld judgment about transfers to those authoritarian jurisdictions. Critics of the EU’s adequacy practice instead have tended to concentrate on the perceived risks of data transfers to liberal democracies with active foreign surveillance establishments: Israel, the United Kingdom, and the United States.

    The transatlantic—as well as global—geopolitics of data transfers are in flux. The sudden US shift to viewing certain transfers through a national security lens is unlikely to be strictly mirrored in Europe. In light of the emerging differences in approach, the United States and European governments should consider incorporating the topic of international data transfers into existing political-level conversations. Although data transfer topics have thus far not figured into the formal work of the EU-US Trade and Technology Council (TTC),40 which has met six times since 2022 including most recently in April,41 there is no evident reason why that could not change. If the TTC resumes activity after the US elections, it could become a useful bilateral forum for candid discussion of perceived national security risks in data flows.

    Utilizing a broader grouping, such as the data protection and privacy authorities of the Group of Seven (G7), which as a group has been increasingly active in the last few years,42 also could be considered. The deliberations of this G7 group already have touched generally on the matter of government access, and they could readily expand to how its democratic members assess risks from authoritarians in particular. Eventually, such discussions could be expanded beyond the G7 frame into broader multilateral fora. The Organisation of Economic Co-operation and Development (OECD) Declaration on Government Access43 is a good building block.

    The days when international data transfers were a topic safely left to privacy lawyers are long gone. It’s time for Washington and Brussels to acknowledge that the geopolitics of data flows has moved from the esoteric to the mainstream, and to grapple with the consequences.

    About the author

    Related content

    The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

    1    Kenneth Propp, “Transatlantic Digital Trade Protections: From TTIP to ‘Policy Suicide?,’” Lawfare, February 16, 2024, https://www.lawfaremedia.org/article/transatlantic-digital-trade-protections-from-ttip-to-policy-suicide.
    2    U.S.-EU Safe Harbor Framework: Guide to Self-Certification, US Department of Commerce, March 2009, https://legacy.trade.gov/publications/pdfs/safeharbor-selfcert2009.pdf.
    3    “Chapter 19: Digital Trade,” US-Mexico-Canada Free Trade Agreement, Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/FTA/USMCA/Text/19-Digital-Trade.pdf.
    4    “Agreement between the United States of America and Japan Concerning Digital Trade,” Office of the United States Trade Representative, https://ustr.gov/sites/default/files/files/agreements/japan/Agreement_between_the_United_States_and_Japan_concerning_Digital_Trade.pdf.
    5    Schrems v. Data Protection Commissioner, CASE C-362/14 (Court of Justice of the EU 2015), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62014CJ0362.
    6    “President Biden Signs Executive Order to Implement the European Union-U.S. Data Privacy Framework,” Fact Sheet, White House Briefing Room, October 7, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/10/07/fact-sheet-president-biden-signs-executive-order-to-implement-the-european-union-u-s-data-privacy-framework/.
    7    European Commission, “Commission Implementing Decision of 10.7.2023 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Level of Protection of Personal Data under the EU-US Data Privacy Framework,” July 10, 2023, https://commission.europa.eu/system/files/2023-07/Adequacy%20decision%20EU-US%20Data%20Privacy%20Framework_en.pdf.
    9    Department of Justice, “National Security Division; Provisions Regarding Access to Americans’ Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern,” Proposed Rule, 28 C.F.R. 202 (2024), https://www.federalregister.gov/d/2024-04594.
    10    “President Biden Issues Executive Order to Protect Americans’ Sensitive Personal Data,” Fact Sheet, White House Briefing Room, February 28, 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/02/28/fact-sheet-president-biden-issues-sweeping-executive-order-to-protect-americans-sensitive-personal-data/.
    11    Peter Swire and Samm Sacks, “Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging,” Lawfare, February 28, 2024, https://www.lawfaremedia.org/article/limiting-data-broker-sales-in-the-name-of-u.s.-national-security-questions-on-substance-and-messaging.
    12    “Protecting Americans from Foreign Adversary Controlled Applications Act,” in emergency supplemental appropriations, Pub. L. No. 118–50, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/7520/text.
    13    Cristiano Lima-Strong, “Biden Signs Bill That Could Ban TikTok, a Strike Years in the Making,” Washington Post, April 24, 2024, https://www.washingtonpost.com/technology/2024/04/23/tiktok-ban-senate-vote-sale-biden/.
    14    “Petition for Review of Constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act,” TikTok Inc. and ByteDance Ltd. v. Merrick B. Garland (US Court of Appeals for the District of Columbia Cir. 2024), https://sf16-va.tiktokcdn.com/obj/eden-va2/hkluhazhjeh7jr/AS%20FILED%20TikTok%20Inc.%20and%20ByteDance%20Ltd.%20Petition%20for%20Review%20of%20H.R.%20815%20(2024.05.07)%20(Petition).pdf?x-resource-account=public.
    15    Department of Commerce, “Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities,” Proposed Rule, 15 C.F.R. Part 7 (2024), https://www.govinfo.gov/content/pkg/FR-2024-01-29/pdf/2024-01580.pdf.
    16    “Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” 2016/679, Official Journal of the European Union (2016), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.
    17    Schrems v. Data Protection Commissioner.
    18    Data Protection Commissioner v. Facebook Ireland & Schrems, CASE C-311/18 (Court of Justice of the EU 2020), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62018CJ0311.
    19    The Commission’s decision has since been challenged before the CJEU. See Latombe v. Commission, No. Case T-553/23 (Court of Justice of the EU 2023), https://curia.europa.eu/juris/document/document.jsf?text=&docid=279601&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1498741.
    20    European Commission, “European Commission Adopts Adequacy Decision on Japan, Creating the World’s Largest Area of Safe Data Flows,” Press Release, January 23, 2019, https://commission.europa.eu/document/download/c2689793-a827-4735-bc8d-15b9fd88e444_en?filename=adequacy-japan-factsheet_en_2019.pdf.
    21    “Commission Implementing Decision (EU) 2022/254 of 17 December 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the Republic of Korea under the Personal Information Protection Act,” Official Journal of the European Union, December 17, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022D0254.
    22    “Commission Implementing Decision (EU) 2021/1772 of 28 June 2021 Pursuant to Regulation (EU) 2016/679 of the European Parliament and of the Council on the Adequate Protection of Personal Data by the United Kingdom,” Official Journal of the European Union, June 28, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021D1772.
    23    European Parliament Justice Committee, Correspondence to Rt. Hon. Lord Peter Ricketts regarding Inquiry into Data Adequacy, April 22, 2024, https://content.mlex.com/Attachments/2024-04-25_L75PCWU60ZLVILJ5%2FLIBE%20letter%20-%20published%20EAC.pdf.
    24    “Report from the Commission to the European Parliament and the Council on the First Review of the Functioning of the Adequacy Decisions Adopted Pursuant to Article 25(6) of Directive 95/46/EC,” European Commission, January 15, 2024, https://commission.europa.eu/document/download/f62d70a4-39e3-4372-9d49-e59dc0fda3df_en?filename=JUST_template_comingsoon_Report%20on%20the%20first%20review%20of%20the%20functioning.pdf.
    25    European Digital Rights et al., Letter to Vice-President of the European Commission Věra Jourová Regarding Concerns following  Reconfirmation of Israel’s Adequacy Status, April 22, 2024, https://edri.org/wp-content/uploads/2024/04/Concerns-Regarding-European-Commissions-Reconfirmation-of-Israels-Adequacy-Status-in-the-Recent-Review-of-Adequacy-Decisions-updated-open-letter-April-2024.pdf.
    26    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Recommendations 02/2020 on the European Essential Guarantees for Surveillance Measures,” Prepared for European Data Protection Board (EDPB), November 10, 2020, https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf.
    27    Milieu Consulting and Centre for IT and IP Law of KU Leuven, “Government Access to Data in Third Countries,” EDPB, EDPS/2019/02-13, November 2021, https://www.edpb.europa.eu/system/files/2022-01/legalstudy_on_government_access_0.pdf.
    28    European Convention on Human Rights, November 4, 1950, https://www.echr.coe.int/documents/d/echr/Convention_ENG.
    29    Statement 02/2022 on Data Transfers to the Russian Federation, European Data Protection Board, July 12, 2022,
    https://www.edpb.europa.eu/system/files/2022-07/edpb_statement_20220712_transferstorussia_en.pdf.
    30    “Stop Doing Business with Russia,” KSE Institute, May 20, 2024, #LeaveRussia: The List of Companies that Stopped or Still Working in Russia (leave-russia.org).
    31    “Russian Data Localization Law: Now with Monetary Penalties,” Norton Rose Fulbright Data Protection Report, December 20, 2019, https://www.dataprotectionreport.com/2019/12/russian-data-localization-law-now-with-monetary-penalties/.
    32    “Finnish DPA Bans Yango Taxi Service Transfers of Personal Data from Finland to Russia Temporarily,” Office of the Data Protection Ombudsman, August 8, 2023, https://tietosuoja.fi/en/-/finnish-dpa-bans-yango-taxi-service-transfers-of-personal-data-from-finland-to-russia-temporarily.
    33    “European Data Protection Authorities Continue to Cooperate on the Supervision of Yango Taxi Service’s Data Transfers–Yango Is Allowed to Continue Operating in Finland until Further Notice,” Office of the Data Protection Ombudsman, September 26, 2023, https://tietosuoja.fi/en/-/european-data-protection-authorities-continue-to-cooperate-on-the-supervision-of-yango-taxi-service-s-data-transfers-yango-is-allowed-to-continue-operating-in-finland-until-further-notice.
    34    “The Data Protection Ombudsman’s Decision Does Not Address the Legality of Data Transfers to Russia–the Matter Remains under Investigation,” Office of the Data Protection Ombudsman, September 27, 2023, https://tietosuoja.fi/en/-/the-data-protection-ombudsman-s-decision-does-not-address-the-legality-of-data-transfers-to-russia-the-matter-remains-under-investigation#:~:text=The%20Office%20of%20the%20Data%20Protection%20Ombudsman%27s%20decision,Protection%20Ombudsman%20in%20October%2C%20was%20an%20interim%20decision.
    35    Samm Sacks, Yan Lou, and Graham Webster, “Mapping U.S.-China Data De-Risking,” Freeman Spogli Institute for International Studies, Stanford University, February 29, 2024), https://digichina.stanford.edu/wp-content/uploads/2024/03/20240228-dataderisklayout.pdf.
    36    “Irish Data Protection Commission Announces €345 Million Fine of TikTok,” Office of the Irish Data Protection Commissioner, September 15, 2023, https://www.dataprotection.ie/en/news-media/press-releases/DPC-announces-345-million-euro-fine-of-TikTok.
    37    “Following EDPB Decision, TikTok Ordered to Eliminate Unfair Design Practices Concerning Children,” European Data Protection Board, September 15, 2023, https://www.edpb.europa.eu/news/news/2023/following-edpb-decision-tiktok-ordered-eliminate-unfair-design-practices-concerning_en.
    38    “Tesla Reaches Deals in China on Self-Driving Cars,” New York Times, April 29, 2024, https://www.nytimes.com/2024/04/29/business/elon-musk-tesla-china-full-self-driving.html.
    39    “Memorandum of Understanding with China,” German Federal Ministry of Digital and Transport, April 16, 2024,
    https://bmdv.bund.de/SharedDocs/DE/Pressemitteilungen/2024/021-wissing-deutschland-china-absichtserklaerung-automatisiertes-und-vernetztes-fahren.html.
    40    Frances Burwell and Andrea Rodríguez, “The US-EU Trade and Technology Council: Assessing the Record on Data and Technology Issues,” Atlantic Council, April 20, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/us-eu-ttc-record-on-data-technology-issues/.
    41    “U.S.-EU Trade and Technology Council (TTC),” US State Department, https://www.state.gov/u-s-eu-trade-and-technology-council-ttc/.
    42    “G7 DPAs’ Action Plan,” German Office of the Federal Commissioner for Data Protection and Freedom of Information (BfDI), June 22, 2023, https://www.bfdi.bund.de/SharedDocs/Downloads/EN/G7/2023-Action-Plan.pdf?__blob=publicationFile&v=1.
    43    OECD, Declaration on Government Access to Personal Data Held by Private Sector Entities, December 14, 2022, OECD/LEGAL/0487, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0487.

    The post Who’s a national security risk? The changing transatlantic geopolitics of data transfers appeared first on Atlantic Council.

    ]]>
    What to do about ransomware payments https://www.atlanticcouncil.org/blogs/econographics/what-to-do-about-ransomware-payments/ Tue, 14 May 2024 16:57:36 +0000 https://www.atlanticcouncil.org/?p=764759 And why payment bans alone aren’t sufficient.

    The post What to do about ransomware payments appeared first on Atlantic Council.

    ]]>
    Ransomware is a destabilizing form of cybercrime with over a million attacks targeting businesses and critical infrastructure every day.  Its status as a national security threat, even above that of other pervasive cybercrime, is driven by a variety of factors like its scale, disruptive nature, and potential destabilizing impact on critical infrastructure and services—as well as the sophistication and innovation in ransomware ecosystems and cybercriminals, who are often Russian actors or proxies.   

    The ransomware problem is multi-dimensional. Ransomware is both a cyber and a financial crime, exploiting vulnerabilities not only in the security of digital infrastructure but also in the financial system that have enabled the rise of sophisticated Ransomware-as-a-Service (RaaS) economies.  It is also inherently international, involving transnational crime groups operating in highly distributed networks that are targeting victims, leveraging infrastructure, and laundering proceeds without regard for borders.  As with other asymmetric threats, non-state actors can achieve state-level consequences in disruption of critical infrastructure.

    With at least $1 billion reported in ransomware payments in 2021 and with incidents targeting critical infrastructure like hospitals, it is not surprising that the debate on ransomware payments is rising again. Ransomware payments themselves are problematic—they are the primary motive for these criminal acts, serving to fuel and incentivize this ecosystem.  Many are also inherently already banned in that payments to sanctioned actors are prohibited. However, taking a hardline position on ransomware payments is also challenging because of its potential impact on victims, visibility and cooperation, and limited resources.

    Cryptocurrency’s role in enabling ransomware’s rise

    While ransomware has existed in some form since 1989, the emergence of cryptocurrencies as an easy means for nearly-instantaneous, peer-to-peer, cross-border value transfer contributed to the rise of sophisticated RaaS economies. Cryptocurrencies use largely public, traceable ledgers which can certainly benefit investigations and disruption efforts. However, in practice those disruption efforts are hindered by weaknesses in cryptocurrency ecosystems like lagging international and industry compliance with anti-money laundering and countering financing of terrorism (AML/CFT) standards; growth of increasingly sophisticated methods of obfuscation leveraging mixers, anonymity-enhanced cryptocurrencies, chain-hopping, and intermixing with off-chain and traditional finance methods; and insufficient steps taken to enable real-time, scaled detection and timely interdictionof illicit cryptocurrency proceeds.

    Despite remarks by some industry and policymaker advocates, RaaS economies would not work at the same level of scale and success without cryptocurrency, at least in its current state of compliance and exploitable features. Massively scaled ransomware campaigns targeting thousands of devices could not work by asking victims to pay using wire transfers and gift cards pointing to common accounts at regulated banks or widely publishing a physical address. Reliance on traditional finance methods would require major, and likely significantly less profitable, evolution in ransomware models.

    The attraction of banning ransomware payments

    Any strategy to deal with ransomware needs to have multiple elements, and one key aspect is the approach to ransomware payments. The Biden Administration’s multi-pronged counter-ransomware efforts have driven unprecedented coordination of actions combating ransomware, seen in actions like disrupting the ransomware variant infrastructure and actors, OFAC and FinCEN designations of actors and financial institutions facilitating ransomware, pre-ransomware notifications to affected companies by CISA, and a fifty-member International Counter-Ransomware Initiative.

    However, ransomware remains a significant threat and is still affecting critical infrastructure. As policymakers in the administration and in Congress consider every tool available, they will have to consider the effectiveness of the existing policy approach to ransomware payments. Some view payment bans as a necessary action to address the risks ransomware presents to Americans and to critical infrastructure. Set against the backdrop of the moral, national security, and economic imperatives to end this destabilizing activity, bans could be the quickest way to diminish incentives for targeting Americans and the significant amounts of money making it into the hands of criminals.

    Additionally, banning ransomware payments promotes other Administration policy objectives like driving a greater focus on cybersecurity and resilience. Poor cyber hygiene, and especially often poor identity and access management, are frequently exploited in ransomware. Removing payments as a potential “escape hatch” is seen by some as a way to leverage market forces to incentivize better cyber hygiene, especially in a space where the government has limited and fragmented regulatory authority.

    Those who promote bans typically do not come to that position lightly but instead see them as a last resort to try to deter ransomware.  The reality is that we have not yet been able to sufficiently scale disruption to the extent needed to diminish this threat below a national security concern—driven by insufficient resourcing, limits on information sharing and collaboration, timeliness issues for use of certain authorities, and insufficient international capacity and coordination on combating cyber and crypto crime. When policymakers are in search of high-impact initiatives to reduce the high-impact threat of ransomware, many understandably view bans as attractive.

    Challenges with banning ransomware payments

    However, taking a hardline position on ransomware payments can also present practical and political challenges:

    • Messaging and optics of punishing victims:A ban inherently places the focus of the policy burden and messaging on the victims, potentially not stopping them from using this tool but instead raising the costs for them to do so. Blaming victims that decide to pay in order to keep their company intact presents moral and political challenges.
    • Limited resources that need to be prioritized against the Bad Guys:  For a ban to be meaningful, it would have to be enforced. Spending enforcement resources against victims to enforce a ban—resources which could have been spent on scaling disruption of the actual perpetrators—could divert critically limited resources from efforts against the ransomware actors.
    • Likelihood that payments will still happen as companies weigh the costs against the benefits:  Many feel that companies, if faced between certain demise and the costs of likely discovery and legal or regulatory action by the government, will still end up making ransomware payments.
    • Disincentivizing reporting and visibility:  A ban would also make companies less likely to report that they have been hit with ransomware, as they will aim to keep all options open as they decide how to proceed. This disincentivizes transparency and cooperation from companies needed to drive effective implementation of the cyber incident and ransomware payment reporting requirements under the Cybersecurity Incident Reporting for Critical Infrastructure Act (CIRCIA) regulations to the Cybersecurity and Infrastructure Security Agency (CISA). Diminished cooperation and transparency could have a devastating effect on investigations and disruption efforts that rely on timely visibility.
    • Asking for permission means the government deciding which companies survive:  Some advocates for bans propose exceptions, such as supplementing a presumptive ban with a licensing or waiver authority, where the government is the arbiter of deciding which companies get to pay or not.  This could enable certain entities like hospitals to use the payment “escape hatch.” However, placing the government in a position to decide which companies live and die is extremely complicated and presents uncomfortable questions.  It is unclear what government body could be capable, or should be endowed with the authority of making that call at all, especially in as timely a fashion as would be required.  Granting approval could also place the government in the uncomfortable position of essentially approving payments to criminals.

    Additional policy options that can strike a balance for practical implementation

    In light of the large-scale, disruptive threat to critical infrastructure from ransomware, policymakers will have to consider other initiatives along with its ransomware payment approach to strike a balance on enhancing disruption and incentivizing security measures:

    • Resource agencies and prioritize counter-ransomware efforts: Government leadership must properly resource through appropriations and prioritize disruption efforts domestically and internationally as part of a sustained pressure campaign against prioritized ransomware networks.
    • International cyber and cryptocurrency capacity building and pressure campaign: Agencies should prioritize targeted international engagement, such as capacity building where capability lags and diplomatic pressure where political will lags, toward defined priority jurisdictions.  Capacity building and pressure should drive both cybersecurity and cryptocurrency capacity, such as critical infrastructure controls, regulatory, and law enforcement capabilities. Jurisdictional prioritization could account for elements like top nations where RaaS actors and infrastructure operate and where funds are primarily laundered and cashed out.
    • Enhance targeting authorities for use against ransomware actors: Congress should address limitations in existing authorities to enable greater disruptive action against the cyber and financial elements of ransomware networks. For example, Congress could consider fixes to AML/CFT authorities (e.g., 311 and 9714 Bank Secrecy Act designations) for better use against ransomware financial enablers, as well as potential fixes that the defense, national security, and law enforcement communities may need.
    • Ensure government and industry visibility for timely interdiction and disruption of ransomware flows: Congressional, law enforcement, and regulatory agencies should work with industry to ensure critical visibility across key ecosystem participants to enable disruption efforts, such as through: Enforcing reporting requirements of ransomware payments under CIRCIA and US Treasury suspicious activity reporting (SAR) requirements; Mandating through law that entities (such as digital forensic and incident response [DFIR] firms) that negotiate or make payments to ransomware criminals on behalf of victims, including in providing decryption services for victims, must be regulated as financial institutions with SAR reporting requirements; Driving the evolution of standards, like those for cyber indicators, to enable real-time information sharing and ingestion of cryptocurrency illicit finance indicators for responsible ecosystem participants to disrupt illicit finance flows.
    • Prioritize and scale outcome-driven public-private partnerships (PPPs): Policymakers should prioritize, fund, and scale timely efforts for PPPs across key infrastructure and threat analysis actors (e.g., internet service providers [ISPs], managed service providers [MSPs], cyber threat firms, digital forensic and incident response [DFIR] and negotiation firms, cryptocurrency threat firms, cryptocurrency exchanges, and major crypto administrators and network-layer players [e.g., mining pools and validators]) focused on disruption of key ransomware activities and networks.
    • Incentivize and promote better security while making it less attractive to pay ransoms: Policymakers could leverage market and regulatory incentives to drive better security measures adoption to deter ransomware and make it less attractive to pay.  For example, legislation could prohibit cyber insurance reimbursement of ransomware payments. Regulatory action and legislative authority expansion could also drive implementation of high-impact defensive measures against ransomware across critical infrastructure and coordination of international standards on cyber defense.

    While attractive for many reasons, banning ransomware payments presents challenges for limiting attacks that demand a broader strategy to address. Only this kind of multi-pronged, whole-of-nation approach will be sufficient to reduce the systemic threats presented by disruptive cybercrime that often targets our most vulnerable.


    Carole House is a nonresident senior fellow at the Atlantic Council GeoEconomics Center and the Executive in Residence at Terranet Ventures, Inc. She formerly served as the director for cybersecurity and secure digital innovation for the White House National Security Council.

    The post What to do about ransomware payments appeared first on Atlantic Council.

    ]]>
    International Cyberspace & Digital Policy Strategy: AC Tech Programs Markup https://www.atlanticcouncil.org/content-series/tech-and-markets/international-cyberspace-digital-policy-strategy-ac-tech-programs-markup/ Mon, 13 May 2024 21:41:00 +0000 https://www.atlanticcouncil.org/?p=817959 On May 6, the Department of State released the United States International Cyberspace & Digital Policy Strategy. Read along with AC Tech Programs staff, fellows, and experts for commentary and analysis.

    The post International Cyberspace & Digital Policy Strategy: AC Tech Programs Markup appeared first on Atlantic Council.

    ]]>

    Last week, the State Department released its first ever International Cyberspace and Digital Policy Strategy. America’s top diplomat travelled to a conference of computer programmers in San Francisco instead of a far-flung capitol to rollout the strategy, an early sign – echoed within the document – of the increasing centrality of the tech community in America’s role in the world. 

    An essential part of the strategy is the workforce behind it, as well as how the State Department is organizing itself on tech. In 2021, Secretary Blinken announced the creation of a new Bureau of Cyberspace and Digital Policy led by the first ever US Ambassador at Large. This brought together the parts of America’s foreign policy apparatus working on cyber, digital economy, and digital freedom policy into one entity capable of drawing on all elements of US power and resources, and capable of implementing such an ambitious strategy. 

    The strategy is a comprehensive outlook and, more importantly, an affirmative plan in an era of increasing geopolitical competition and dizzying technological change. It delineates three overarching principles which guide four strategic pillars supported by four areas of action with a combined twenty-three lines of effort—all centered around the idea of digital solidarity, of working together across all allies and partners for mutual advancement.  The clarity with which the strategy lays out the US vision of this ecosystem is itself a significant step forward – especially as the Chinese and Russian governments work tirelessly to promote their authoritarian models through tech investment, policy, and use around the world. One could be forgiven for labeling this a purely “cyber” strategy, as it includes everything from AI innovation to international cyber norms to internet freedom to addressing state threats while protecting human rights online and much more. The strategy effectively escapes the confines of past perceptions that one must only be technical to contribute to tech policy.  

    Four common threads tie the tech strategy together. First, a strong endorsement of a rules-based order and an (often unwieldy) multistakeholder system deemed existential to keeping digital ecosystems open, interoperable, secure, and reliable. Second, an explicit connection between how the United States designs, funds, and governs technology at home and how it engages internationally. Third, an understanding of the tremendous tech advantages the United States enjoys matched with the necessity of working with allies and partners to realize them in an era of increased interdependence. Fourth, the existential need to be proactively for something, as opposed to an articulation of grievances and nefarious tech usage the United States stands against. 

    The strategy and the Bureau of Cyberspace and Digital Policy are new, so naturally the document is more a statement of purpose than actualized impact – a clearer goalpost for all those who care about the security and accessibility of this global domain. Much of what is laid out depends on how the new bureau gets specific in implementation and works with the rest of the department and US government, allies and partners, and even industry.  

    The work of the Atlantic Council Technology Programs – including the Cyber Statecraft Initiative, Democracy + Tech Initiative, Digital Forensic Research Lab, GeoTech Center, and a newly formed capacity building initiative – advances nearly every element of the strategy. In the following markup, a broad array of experts across our team and expert community share unique analysis, examples, and insights on the implementation that lay ahead.  

    Our markup contributors include Emerson Brooking, Safa Shahwan Edwards, Rose Jackson, Konstantinos Komaitis, Iria Puyosa, Trisha Ray, Emma Schroeder, Justin Sherman, Bobbie Stempfley, Kenton Thibaut, and Moira Whelan.  

    Graham Brookie, Vice President, Technology Programs and Strategy, Atlantic Council

    Table of Contents

    Instructions

    Markup

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩

    Back to top ↩


    Authors and Contributors

    Graham Brookie, Vice President, Technology Programs and Strategy, Atlantic Council

    Emerson Brooking, Director of Strategy and Resident Senior Fellow, Digital Forensic Research Lab (DFRLab), Atlantic Council

    Safa Shahwan Edwards, Director, Capacity Building & Communities, Atlantic Council,

    Rose Jackson, Director, Democracy & Tech Initiative, Atlantic Council,

    Konstantinos Komaitis, Resident Senior Fellow, Global and Democratic Governance, DFRLab, Atlantic Council,

    Iria Puyosa, Senior Research Fellow, DFRLab, Atlantic Council,

    Trisha Ray, Associate Director and Resident Fellow, GeoTech Center, Atlantic Council,

    Emma Schroeder, Associate Director, Cyber Statecraft Initiative, Atlantic Council,

    Justin Sherman, Nonresident Fellow, Cyber Statecraft Initiative, Atlantic Council; Founder and CEO of Global Cyber Strategies,

    Bobbie Stempfley, Nonresident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council; Vice President and Business Unit Security Officer, Dell Technologies,

    Kenton Thibaut, Resident China fellow, DFRLab, Atlantic Council, and

    Moira Whelan, Nonresident Senior Fellow, DFRLab, Atlantic Council; Director, Democracy and Technology, National Democratic Institute.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post International Cyberspace & Digital Policy Strategy: AC Tech Programs Markup appeared first on Atlantic Council.

    ]]>
    Lessons learned from the Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/capacity-building-initiative/lessons-learned-from-the-cyber-9-12-strategy-challenge/ Tue, 07 May 2024 21:57:00 +0000 https://www.atlanticcouncil.org/?p=817966 Students from Tufts University tell us their greatest lessons learned from competing in the Cyber 9/12 Strategy Challenge.

    The post Lessons learned from the Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>
    Our four-member team, comprised of Sara Mishra, Hannah Dora Patterson, Ethan Moscot, and Andrew Vu from Tufts University’s M.S. degree program in Cybersecurity and Public Policy, competed in the Cyber 9/12 Strategy Challenge competitions hosted in New York and Washington D.C. in Fall 2022 and Spring 2023. While our individual levels of technical- and policy-focused expertise varied, we were eager to put our master program’s interdisciplinary design, with coursework in Tufts University’s Department of Computer Science and the Fletcher School of Law and Diplomacy, to the test. Having all entered our graduate program at the same time, we could not find an extracurricular activity that matched the nuance and interdisciplinarity of our degree program which fused technical and policy analysis.  

    Sara knew of the Cyber 9/12 competition programs from her time working at the Atlantic Council, and she suggested we form a team to compete. It quickly became clear that Cyber 9/12 represented the ideal opportunity to learn and experience firsthand just how technical knowledge interacts with policymaking outside of the classroom and how to impart technical information to nontechnical audiences. Overall, what started as competition that was unfamiliar yet piqued our interest turned out to be an unrivaled experience in responding to a fictional cyberattack with high-level stakeholders and fascinating case studies. 

    Being able to participate in Cyber 9/12 competitions at both Columbia University and American University allowed our team members to further develop their ability to craft comprehensive public policy solutions for cybersecurity challenges. After receiving the briefing packet, our team would divide the tabs up and allow each member to leverage their background knowledge and share their thoughts on the varying topics.  

    Our team always began by creating a word chart associating the inject’s biggest themes with each other. From there, we would write down a list of possible policy prescriptions and rank the ideas in order of feasibility or accessibility to the judges. We then outlined our objectives, utilizing the DIME (Diplomatic, Information, Military, Economic) paradigm to ensure we incorporated these four instruments of power so as to consider the necessary dimensions and all principal stakeholders. Finally, we also devised an acronym such that our idea could be easily absorbed and represent several pathways to achieve a desired solution based upon considerations in the DIME paradigm to create the finalized decision document. 

    When we were developing our response to the challenges in New York, as Fletcher students, we were drawn to framing our proposal at the international level. The prompt involved tension among China, India, and Pakistan due to uncertainty in the region surrounding critical infrastructure development, as well as an Advanced Persistent Threat (APT) acting against a hydroelectric dam in New York. When we were challenged with responding to a cyberattack on that dam, it required a balancing act to make sure we addressed all of the above. Sara’s well-rounded grasp of the scenario upon its release primed her to summarize the situation at hand. Based on her overview of both our written and oral components, Ethan and Hannah were then able to identify three strategic objectives. 

    We presented our “3D” approach, which included deploying Cyber National Mission Forces (CNMF) and coordinating with Indian and Pakistani governments to deploy Hunt Forward Operations (HFO) teams to create malware reports, developing a diplomatic framework through a US-led Cybersecurity Partnership Conference, and delivering USAID grants that incentivize private corporate investment to India and Pakistan. These three objectives would counter adversarial aggression, safeguard US critical infrastructure, and work to ease regional tensions.  

    Finally, Andrew offered insight to further contextualize our proposals. In this instance, the defensive appearance of CNMF HFOs would reduce the threat of escalation, the international conference could directly challenge the People’s Republic of China’s (PRC) influence and USAID initiatives in India and Pakistan would signal US commitment to regional peace and security. This empowered us to be proactive in addressing judges’ concerns as to how the scenario may evolve, both technically and geopolitically. In terms of implementation time for our proposals, our response options contained short-term, medium-term, and long-term solutions. By using technical- and policy-focused lenses, as well as different timeframes, we recognized we could offer practical solutions, such as the deployment of CNMF HFOs, and simultaneously propose more grandiose ones, like our US-led conference. 

    While we made sure to tackle the geopolitical and cyber-related events unfolding, the judges in the first round still pointed out that we could have better addressed concerns that trickled down to the state and local levels. We integrated this feedback into our semi-final round approach, where we encountered opponents in this head-to-head round who presented a technically driven solution that proved successful with the judges. 

    For the Washington, DC competition, our team knew that we’d have to continue to leverage tried-and-true methods when consuming and responding to the scenario, while also remaining nimble to adjust to new challenges and themes presented in the prompt. When devising our response to the scenario prepared for the Washington DC competition, we sought to highlight a flaw in what we perceived to be a growing market for exploits. In this scenario, when a biometric identity verification technology company discovered a data breach in their systems used for air travel, preliminary forensic analysis uncovered a vulnerability inside a compromised subsystem allowing for possible violations of confidentiality and integrity of customer data, and also revealed the National Security Agency’s knowledge of this vulnerability. In addition to making note of a rise in tension with the European Union (EU) over data protection concerns, we returned to our practice of developing easy-to-remember proposals on a short-term, medium-term, and long-term timeframe. Specifically, we suggested designing a cyber response and attribution plan, distributing resources for sanction enforcement via the Department of Justice and Treasury, and developing a revised Vulnerabilities Equities Process (VEP).  

    While still making use of our best practices and presentation lineup in designing our proposals, we believed examining the management of a fundamentally technical issue, as our opponents did in New York, would be useful in demonstrating our resolve and creativity in working to prevent a repeat of the incident we now faced. Specifically, we contended that the National Security Council (NSC) should modify the VEP by placing the Office of the National Cyber Director (ONCD) in charge. The ONCD leading the role in place of the NSA, in our view, would shift the overall process away from bias toward the Department of Defense, and allow the process to address cybersecurity issues more holistically. Judges commended us for our creativity and willingness to think critically about managing oversight of technical issues beyond coordination of incident response, but we were told the specifics of doing so needed further fine-tuning.  

    Overall, these experiences have empowered us to face unique and dynamic scenarios and to refine our approach to very crucial skills outside of the competition environment: expertly crafting long-form briefing papers, short-form decision documents, and informed oral briefings. From first competing at Columbia University to then trying our hand again at the Cyber 9/12 competition at American University, we learned to adjust in terms of fully addressing the complexity from the fallout of a cyberattack and how to position our brief principal stakeholders in a balanced manner. The Cyber 9/12 competitions truly bring briefings to life when facing judges posing as the NSC. Our team had to contend with questions like “What exactly am I going to tell the President?” and “On a scale of one to five, with five being the most severe, how would you rate the severity of this incident?” It could not be more clear that relaying and contextualizing technical details in a succinct and easily interpretable manner is paramount. Reflecting upon our successes and areas for improvement, we will look back with pride as we embark on our future endeavors, whether they be in future Cyber 9/12 competitions or professional roles in the public and private sectors. 

    Acknowledgements  

    We are incredibly thankful for Diana Park, our coach, a doctoral student of international relations at the Fletcher School. It is because of her probing, insight, and expertise that we were able to develop in-depth analysis and critically reflect upon it. In addition, we would also like to thank our advisor and program founder, Professor Susan Landau, for her active support of our team’s participation


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post Lessons learned from the Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>
    The 5×5—The XZ backdoor: Trust and open source software https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-xz-backdoor-trust-and-open-source-software/ Wed, 01 May 2024 15:40:00 +0000 https://www.atlanticcouncil.org/?p=818118 Open source software security experts share their insights into the XZ backdoor, and what it means for open source software security.

    The post The 5×5—The XZ backdoor: Trust and open source software appeared first on Atlantic Council.

    ]]>
    Last month’s discovery of a backdoor in XZ Utils, an open source data compression utility widely used in Linux operating systems, has reignited discussions about the security of open source software (OSS), with some analysts drawing comparisons to well-known historical OSS incidents.  

    The XZ saga began when the original maintainer of XZ Utils was pressured by other contributor accounts into adding user JiaT75 as a maintainer of the project. JiaT75 had been contributing to the XZ Utils community since 2022. A group of accounts and JiaT75 questioned the original maintainer’s ability to maintain the XZ Utils project and spent years convincing them to bring JiaT75 on board as an additional maintainer. Once JiaT75 was provided maintainer access, they  replaced the original maintainer’s contact information with their own on oss-fuzz, a project that scans open source projects for vulnerabilities. After further preparation, they issued commits for XZ Utils versions 5.6.0 and 5.6.1, implementing the backdoor into the code. This backdoor had the potential to infect Linux operating systems, but thanks to the keen eye and curiosity of a Microsoft engineer, it was discovered before causing widespread harm. 

    For this edition of the 5×5, we brought together five open source software security experts to discuss the XZ backdoor’s implications for the OSS community and policymakers.  

    1. What, if anything, differentiates the XZ episode from other well-known open source compromises, and how should policymakers update their mental model of open source security accordingly?   

    Tobie Langel (he/him/his), Principal and Managing Partner, UnlockOpen; Board member, OpenJS Foundation; Vice Chair, Cross Project Council, OpenJS Foundation

    “The XZ utils backdoor represents a turning point for open source security and is already sending shockwaves through the industry and beyond, very much like Heartbleed did a decade ago. Up until today, open source vulnerabilities were the result of accidental bugs. For the first time, there was a deliberate and nearly successful attempt to introduce malicious code into a widely used open source library. The threat actor leveraged open source’s Achilles heel: the lack of support for the maintenance of critical projects. 

    For policymakers, this should be a wakeup call: open source sustainability issues directly impact software supply chain security. We can no longer afford to ignore it. Maintenance needs to be professionalized and properly supported.” 

    Aeva Black (they/she), Section Chief, Open Source Security, Cybersecurity and Infrastructure Security Agency (CISA)

    “Previously well-known open source vulnerabilities, such as Log4Shell and HeartBleed, while severe in their impact, were not malicious, and they weren’t the result of an individual being targeted by a bunch of possibly fake accounts. 

    So, I think, for a lot of folks in the open source community, this situation feels more personal, like something that could happen to any of them. It was a targeted, planned, malicious attempt to abuse the trust placed in open source software maintainers. For an online community that runs on trust, this incident hit folks pretty hard. It’s caused many of my friends to feel a sense of a loss of safety. 

    CISA’s open source security efforts account for both types of threats — vulnerabilities latent in open source packages and malicious compromises of upstream packages. Our Open Source Software Security Roadmap, published last year, lays out how we’re working to support the security of the open source ecosystem.” 

    Stewart Scott (he/him/his), Associate Director, Cyber Statecraft Initiative, Atlantic Council  

    “While the XZ compromise is different from some infamous open source incidents based on vulnerabilities like log4j and Heart Bleed, abusing trust is by no means new to open source or software in general. We’ve seen similar OSS threats before in various forms. These include disaffected maintainers removing their projects or building in forms of protest-ware, malicious actors adding well-known companies and contributors to malicious packages as maintainers, and, most similarly, maintainers adding attackers as legitimate project maintainers because the attackers simply asked the original, overworked owners. After XZ, we’ve also seen similar methods discovered in other environments, which I would bet is more a function of XZ highlighting that threat vector to analysts than inspiring other bad actors to make similar attempts. 

    As for the differences, the XZ attempt seems more sophisticated than the above examples, but not fundamentally changed. For policymakers, this should serve as a reminder that open source software is ubiquitous and often under resourced, but not a source per se of any insecurity.” 

    Christopher Robinson (he/him/his), Chairperson, OpenSSF Technical Advisory Council; Director of Security Communications, Intel

    “The attack itself is not novel; it strings together a series of social engineering/cyber-bullying tactics, and leverages embedding offline malicious files during the CI/CD stage of publication.  What is unique is how well the attacker studied and exploited common community behaviors and norms to penetrate the project and take maintainership that could allow the later actions in secret.” 

    Fiona Krakenbürger (she/her), Co-Founder, Sovereign Tech Fund  

    “The key difference is the dedication and lengths the attacker had to go through—they had to hide the attack in the build system, disable the checks that were in place, and contribute for a long time to gain trust. That said, the underlying structural challenges it laid bare are anything but new. XZ utils are one of the many open source components that are heavily utilized and critical for a functioning digital ecosystem, yet they were maintained by just one person and did not receive the support needed. If anything, the XZ attack is another reminder that we should acknowledge and update our mental model based on what we already know: open source is critical infrastructure and thus it needs adequate support.” 

    2. What kind of impacts might this compromise have enabled, and how widespread could the impacts have been if it had not been discovered?  

    Tobie Langel 

    “To understand the impact of open source vulnerabilities, it is important to consider both the ubiquity of open source software (it is present in over 95 percent of codebases where it accounts for more than 75 percent of the code) and how the tech industry has gravitated towards a common set of low-level open source components that are present in almost all applications. Compromising one of them opens backdoors in hundreds of millions of devices.” 

    Aeva Black  

    “If unnoticed, this backdoor would have become included in many major Linux distributions, and as updates rolled out over time, this could have created a hidden “skeleton key” in many network-connected systems around the world—particularly across most cloud-based services today. We are fortunate that the open nature of the wider open source ecosystem allowed a developer to spot this supply chain compromise before it could cause much harm.” 

    Stewart Scott 

    “From what I’ve read, it seems like the backdoor would have allowed remote code execution for anyone with the private key required to use it. That’s bad. And it seems like it would have been widespread once pulled into mainstream Linux installations. Relatedly, policy has often considered the question of how it can identify niche OSS dependencies, such as XZ, that are widespread but maintained by a very small team. It’s interesting that attackers seem able to identify and target some of these nodes, which adds urgency to the task of identifying them and figuring out how to support and use them responsibly.” 

    Christopher Robinson 

    “If the backdoor had been merged into the latest versions of the community Linux distributions, it would have seen broad uptake within consumers of those operating systems.  Given more time, that malicious package, if undiscovered, would have been integrated into enterprise Linux distributions, exponentially expanding the scale of where it would have been deployed.  The malicious package could have allowed remote access and code execution for the threat actor.” 

    Fiona Krakenbürger 

    “The attack is still being analyzed, however, what we do know is that it targeted widely used and critical Linux distributions and would have given the attacker the ability to execute code on compromised machines. Like other well-known security incidents before, this speaks volumes about the criticality of open source infrastructure and how important it is to take responsibility and collective action in securing it and reducing the likelihood of similar attacks. We are lucky, as more grave consequences were averted, but we must expect that similar attacks will happen or have happened if the pressure of highly critical components and their maintenance continues to rest on as few shoulders.” 

    3. How are insider threats for open source software different from and/or similar to those faced by proprietary software? How worried should policymakers be that similar compromises have succeeded in other codebases, both open source and proprietary? 

    Tobie Langel 

    “Insider threats are a problem everywhere. Open source has been mostly preserved so far, but it was only a matter of time before that would change. Ultimately however, this is a problem that is broader than insider threats: a hostile account takeover or a vulnerability in software used to build or distribute open source code would have the same consequences. 

    Up until now, the open source community wasn’t thought of as a potential cyber attack target. When you don’t have access to valuable information, why would you become a target? Now we know: if you are a stepping stone to valuable information somewhere else, you are a potential target too. And this is exactly what those ubiquitous low-level open source components have become: stepping stones to the internal networks of corporations and governments all over the world.”

    Aeva Black  

    “I’ve heard many describe this as an insider threat, but I don’t think that’s quite it. Traditional guidance for insider threat prevention differs in two ways: first, it focuses on behavioral changes of an insider as they become a threat, and second, it focuses on preventing harm to the organization that all parties (both the threat actor and the observer) are members of.  The ‘Jia Tan’ threat actor was originally outside of the project and tried to hide their intent in order to compromise other organizations. So, this is more accurately described as a social engineering attack. 

    When looking at the early activity in this situation, and when I think about how to help open source communities protect themselves from this going forward, well, anti-social engineering techniques are more likely to be successful. If a stranger online seems too eager to get commit access to your project, maybe they have another motive. A healthy dose of caution – particularly for maintainers of low-level system libraries in widespread use – is needed, now more than ever before.” 

    Stewart Scott 

    “I’m not sure they are fundamentally different. OSS projects might be closer to the outside world than proprietary code, but the threat of someone with access to any codebase adding in malicious components is still there. The manner in which they achieve that access—e.g. compromising credentials or being given access willingly—points toward different failures in best practice, but the underlying risk is not much changed. One difference between open source and proprietary software that does stick out for insider threats, however, is the well-established fact that OSS maintainers are overworked and under-resourced. This is not the source of insider threats (or social engineering tactics, to Aeva’s and Chris’s points about definitions), but it does augment their risk—OSS maintainers have less time to vet collaborators they bring on to a project as well as the code they add, and they have strong incentives to bring on help. After XZ, several foundations released guidance for maintainers to help them know how to spot the tactics of similar efforts. This is useful, but an incomplete solution—ultimately, so long as OSS maintainers are under-supported and overburdened, malicious actors will have leverage to offer support in bad faith. Policymakers should think of this as yet another reason to support OSS directly—to reduce the strain on those who maintain critical digital infrastructure. 

    And sure, everyone should worry that compromises have already succeeded and gone unnoticed, but this is not unique to insider threats or OSS. A 1984 paper, Reflections on Trusting Trust, highlights well that you can’t fully trust software that you didn’t build yourself from scratch, and hardly any software is made that way today—and for good reason, as building it in such a manner would be incredibly inefficient. More important are policies that set clear thresholds for trust and verify software against those, from design choices and secure development practices to code signing infrastructure.” 

    Christopher Robinson 

    “While this is better classified as a social engineering attack, when the attacker became the project maintainer, they became the ultimate insider and controller of the project.  Open source projects are equally as susceptible to these insider threats as enterprises, corporations, and government agencies are.  The difference is that OSS projects do not have access to typical controls an enterprise might have such as background and credit checks for employees, or behavioral and network monitoring that an enterprise may use.  Those types of controls are not economically nor socially acceptable within the free and open source developer community.” 

    Fiona Krakenbürger 

    “While there are certainly differences in how security risks are created and handled in proprietary and open software, we should be wary of creating a false dichotomy here. By asking questions like these we are distorting the fact that open source tools and technologies like XZ are essential for a functioning software ecosystem. Developers rely heavily on these open resources for developing, maintaining, testing and improving software–there is no proprietary alternative for these millions of software packages, that part of the equation simply does not exist. Open source infrastructure has to and will inevitably continue to be part of our digital surroundings – we therefore need to adapt the way we maintain its safety and sustainability.” 

    4. Usually, the open source community’s discovery of a vulnerability or compromise is considered a success, framed in some variation of ‘this is the open source model working.’ Is that the full story for XZ and in general, or is there room to improve this process in some circumstances?   

    Tobie Langel 

    “Clearly, open source saved the day here. Had XZ Utils been proprietary, the engineer whose Spidey sense was tickled would never have been able to carry out his investigation, and the backdoor he discovered would have been widely deployed. 

    That doesn’t mean that there isn’t a whole new category of threat vectors for open source to consider and address. If critical open source projects are now seen as stepping stones for industrial espionage, ransomware attacks, or cyberwarfare, maintainers of these projects will need to adopt comparable security practices to those found in target organizations. This creates a set of challenges for open source because of its highly distributed nature and volunteer-based model. It also bolsters the argument for professionalizing critical infrastructure maintenance and creating proper support structures for maintainers.”

    Aeva Black  

    “At CISA, we have not seen any compromises resulting from XZ – so, yes, this is an example of ‘the open source model working.’ Compared to proprietary software, the open source nature of XZ allowed it to be detected by an unaffiliated third party and remediated quickly, before it had been widely deployed. 

    Of course, there’s always room to improve. At CISA, we’ve been collaborating in real time with open source community members to better understand the impact of XZ and identify ways we can help communities respond if this happens again. In fact, the OpenSSF and OpenJS foundations recently noticed similar social engineering attacks against a few projects and published an alert about the observed pattern. CISA also recently released a tabletop exercise packet, based on a similar threat scenario, that any open source community can use to practice and refine their incident response coordination abilities.” 

    Stewart Scott 

    “On the one hand, it is very cool how Freund found this backdoor, before it was widely distributed, and for those interested in his investigation, I’d definitely check out an Oxide interview with his firsthand account. And we see similar feats in cybersecurity somewhat regularly—the single researcher who uncovered the log4j vulnerability, or the custom alert system the Department of State had in place that helped a single analyst catch an intrusion by state-backed actors last summer. That said, the persistent reliance on single analysts makes me a bit nervous, even if it’s selection bias based on those being very reportable stories. Maybe the phenomenon is just an artifact of cybersecurity in practice rather than in theory, but if, say, your favorite football team has to rely on outstanding individual performances to win games, either they are very evenly matched with their opponent, or, more worryingly, covering up structural shortcomings. In my mind, the problem of OSS projects being insufficiently resourced and thus delegating some of this out remains unaddressed, and it would be great to see more support from those using and relying on OSS projects. The entire security model of OSS is premised as ‘many eyes make all bugs shallow’—but that only works if the many eyes that could be looking at an OSS project are actually looking at it.” 

    Christopher Robinson 

    “A community member discovered this attack because the software that was manipulated was open, transparent, and observable.  This attack would not have been prevented if it was conducted against a closed-source program, as was the case in the Solarwinds hack. Open source software is driven forward and improved by such humble community contributions.  The beauty of the OSS ecosystem is the constant testing, refinement, and ultimate improvement of software code and processes donated by the community.  Many within that ecosystem are already planning ways to protect and detect both the technical and social engineering aspects of this attack.  This specific pattern will be much less successful in the future as projects work to identify and prevent them, and more broadly as the issues of identity and verification are worked out in the open.”

    Fiona Krakenbürger 

    “As mentioned above, the attacker invested a lot of time and effort, and yet they failed at the end. This shows the resilience of the open source model, but also that people who want to compromise it are putting an increasing amount of resources towards that. Typically, contributions are reviewed, tested, and discussed before they end up in a code base, but whether that happens in a resource-strapped software project is another question. Therefore, policymakers need to respond and increase the number of resources we are spending on security to counter that.” 

    5. What are some processes, either practiced or proposed, that could prevent similar incidents or mitigate their possible impacts?  What role can investments in open source projects play here?  

    Tobie Langel 

    “Meaningfully improving security at scale while preserving the ethos, culture, and diversity of communities that characterize open source and that are largely responsible for its success isn’t an easy task. 

    There is a real risk of veering towards performative security theater on one end or an excessive crackdown on the other. Both would be alienating to the open source community. Similarly, shoehorning corporate approaches into open source communities without consideration for their specificities would also lead to a backlash. 

    The right approach is to double down on the kind of community-driven experimentation that the German Sovereign Tech Fund has been funding and scale the successful ones.” 

    Aeva Black  

    “Practices such as public, peer-driven code review, open design and planning meetings, automated security testing with public logging, code signing, and more all help to protect open source technologies that we depend on from accidental bugs – and from malicious code. But this is both tooling and time intensive, and this approach doesn’t work as well for projects with only one or few maintainers, and many of the volunteers who sustain open source software are suffering from burnout, as we saw in this case. 

    Additional investments in software supply chain transparency could help organizations identify critical open source dependencies in the products they use. Without this clarity in the supply chain, it can remain difficult to know where to offer support. 

    The most important takeaway from all this? Community stewardship and peer accountability in open source keep us safe – and these communities need ongoing support. Every software manufacturer that integrates open source software into their products should, consistent with Secure by Design principles, help sustain the open source communities they depend on either through their employees’ time or through financial or in-kind contributions.” 

    Stewart Scott 

    “It’s hard to speculate here given that the backdoor was caught before widespread deployment, but two things stick out in this case. The first is resourcing—an overworked maintainer is more likely to want to load share, which is eminently reasonable. At the same time, that creates an avenue for bad actors to pressure that maintainer to share the work with them. More resourcing for maintainers would help here, as would more responsible conduct around making demands of maintainers—the precedent of heaping demands upon maintainers is both distasteful and a material security issue. Some of that resourcing can even be security infrastructure. And on the usage end of things, the more that companies relying on OSS can support those projects without burdening maintainers, the better the ecosystem will be—and companies need not think of this as charity, as they directly benefit from supporting their own dependencies.” 

    Christopher Robinson 

    “Arming projects and maintainers with education and tooling to recognize social engineering and cyberbully is the first step.  Experiments are underway to work on automation to detect tampering of software between source code and binary artifact publication that should foil similar future attacks sneaking malware in during the build and publication stages of software delivery.” 

    Fiona Krakenbürger 

    “In the past weeks, we’ve seen a lot of conversations about practices that could possibly mitigate risks in open source software. There is clearly no silver bullet, but there are ways to improve the resilience and security posture of software projects, e.g. by making code more maintainable or investing in audits, testing infrastructure, and build tooling. However, the implementation of these requires meaningful investments and paying maintainers for their work. Financial resources are similarly not a silver bullet, however they are a part of the solution. We need to actively and carefully listen and understand the needs of those working on critical software to make more informed decisions on how we advocate for or provide the necessary support.” 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—The XZ backdoor: Trust and open source software appeared first on Atlantic Council.

    ]]>
    Markets matter: A glance into the spyware industry https://www.atlanticcouncil.org/in-depth-research-reports/report/markets-matter-a-glance-into-the-spyware-industry/ Mon, 22 Apr 2024 22:06:00 +0000 https://www.atlanticcouncil.org/?p=817973 The Intellexa Consortium is a complex web of holding companies and vendors for spyware and related services. The Consortium represents a compelling example of spyware vendors in the context of the market in which they operate—one which helps facilitate the commercial sale of software driving both human rights and national security risk.

    The post Markets matter: A glance into the spyware industry appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    The Intellexa Consortium, a complex web of holding companies and vendors for spyware and related services, have been the subject of recent, extensive sanctions by the US Department of the Treasury and the focus of reporting by the European Investigative Collaborations among others. The Consortium represents a compelling example of spyware vendors in the context of the market in which they operate—one which helps facilitate the commercial sale of software driving both human rights and national security risk.1  This paper addresses an international policy effort among US partners and allies, led by the French and British governments, as well as a surge of US policy attention to address the proliferation of this spyware. This paper offers a case study of the Intellexa Consortium, based on public records and open source reporting, as an argument for policymakers to consider the wider network of investors and counterparties present in this market rather than constraining their focus on individual vendors. This consortium showcases many of the trends observed in how other spyware vendors organize, straddle jurisdictions, and create overlapping ownership structures. This paper argues that policymakers must approach the market as a whole, a large and complex but interlinked system, in designing future policy interventions against these vendors and their respective supply chains. In closing, the paper offers several tangible impacts and insights into this market, calling for greater transparency writ large, but also for increased attention into the individuals and investors that facilitate the proliferation of spyware. 

    Introduction

    For decades, private companies have developed, sold, and maintained software to steal digital data from computing devices and sell it to others—eroding the notion that digital espionage is an activity limited to governments. Mobile phones and their operating systems have been an especially popular target as, in many ways, the devices are a slickly packaged espionage party pack of microphones, cameras, Global Positioning System (GPS), and cell network location transmitters, with the applications to obtain sensitive personal data like messaging and contacts. The customers for this software-enabled spying are myriad, including law enforcement, domestic security, and intelligence organizations across the globe. Spyware has garnered international attention due to some governments’ utilization of the software to violate human rights and for its use in internal surveillance and policing, as well as larger national security risk in transferring offensive cyber capabilities to states without means to provide lawful oversight and democratic input on their use. 

    In the decades that this spyware has been built and sold, profiles have been written about as many as a dozen vendors. Reports from companies like Google,2 Meta,3 civil society champions Amnesty International4 and the Citizen Lab5, and news outlets like Reuters6 and Forbes,7as well as the Atlantic Council,8have examined the behavior of these companies, the services they sell, and the corresponding harm that software can pose.  

    But absent from most of this analysis, save some at the edges of industry and academia, is an accurate picture of these vendors as a whole market—one in which firms conduct business under multiple names, work with investors across the globe, and where webs of interpersonal relationships underpin a shifting roster of corporate names and titles. These factors have hampered policy efforts to extract transparency from this market and limit the sale and use of spyware. 

    Figure 1: Groups targeted by spyware

    Recently, the US government took policy action to target specific firms and several named individuals developing and selling this software. In March 2024, the Treasury Department sanctioned the Intellexa Consortium, profiled in more detail in this brief, following the listing of several vendors by the Department of Commerce in 20239Together with policy efforts like those in the UK- and French-led Pall Mall process10launched in February 2024 and the widely discussed but ultimately inconclusive PEGA Committee,11 which the European Parliament convened in 2023, there has been a sharp increase in interest from governments in the activities of this market and their potential for harm.  

    Policy that addresses these vendors and their major financial and supplier relationships within this market will be more impactful than targeting single vendors alone. To sharpen the emerging government efforts mentioned above, this paper presents a case study of the Intellexa Consortium and its investor and subsidiary ties as a prototype of analysis focusing on a vendor’s relationship to this wider spyware market, in addition to their own activities. 

    In the pages that follow, the paper offers some basic definitions and examines previously reported and open source information about the specific case of the Intellexa Consortium, which recent US Treasury actions highlighted. Sanctions are particularly useful in targeting individuals across multiple jurisdictions and companies, and this is the first time the US has used this policy lever against a spyware vendor.12 The case study summarizes the corporate entities, investors, and founders that make up this consortium along with key public business relationships and how those relationships have evolved over time. Finally, the paper highlights several features of the Intellexa Consortium organization and implications for policy.13 This is just one case study, but it demonstrates a model for what is possible in a more holistic analysis of the spyware market and the utility of that approach to policymakers, researchers, and advocates alike. 

    Terms of debate

    This section offers definitions for some key terms as applied in this work and present in many others as a way of scoping the analysis. Policymaking around spyware has suffered in the past due to unclear terminology and inconsistent definitions. Recognizing the significant energy present across international policymaking efforts like the Pall Mall process, this section seeks to better specify terms of an ongoing debate. The authors submit these terms as analytically useful to the purpose, concise, and sufficiently rigorous so as to capture much of the discussion happening in the seams and gaps between both policymaking and information security research communities.  

    Spyware

    Spyware is a type of malware14 that facilitates unauthorized remote access to an internet-enabled target device for purposes of surveillance or data extraction. Spyware is sometimes referred to as “commercial intrusion [or] surveillance software,” with effectively the same meaning. Spyware works without willing consent of the target or anyone with access to their device; thus, this paper does not consider the market for so-called ‘stalkerware,’ which generally requires interaction from a spouse, partner, or someone else with access to a user’s device. This definition also excludes software that never gains access to a target device, such as surveillance technologies that collect information on data moving between devices over wire (i.e., packet inspection or ‘sniffing’) or wireless connections. This definition also excludes hardware such as mobile intercept devices known as IMSI-catchers, or any product requiring physical access to a target device such as forensics tools.15

    This definition is limited, by design, to disentangle the lumping of various other surveillance toolsets into the definition of spyware. 16 Hardware devices require physical device access that adheres to jurisdiction-specific regulations. Passive surveillance technologies intercept and monitor communications using a broad set of tools, often in some combination of hardware and software technologies and frequently without requirement for preexisting knowledge of a target.17

    “Commercial” spyware?

    The term spyware often becomes a proxy debate for the scope of policy. Varying definitions attempt to embed conditions as to the source or legitimacy of these software. The debate over what constitutes a legitimate use, and the channel to acquire spyware is ongoing. To avoid confusion in both analysis and policy—the authors do not embed the term “commercial” in this definition (e.g. “commercial spyware,” more on this below). Spyware defines a set of technical capabilities, wherever those might be acquired. Policy addressing the “market” for spyware necessarily supposes a commercial source rather than those developed within government organizations.

    Vendor

    A spyware vendor is a commercial entity that develops, supports, and sells spyware to an end user. This development and support can include vulnerability research and exploit development, malware payload development, technical command and control, operational management, and training and support, but need not include all.18 To limit discussion of spyware vendors to only those offering ‘end-to-end’ capabilities would risk obscuring critical commercial relationships significant to this discussion, as will become clear in the Intellexa Consortium case below.  

    Holding company

    Several of the vendors in the Intellexa Consortium are part of one or several holding companies. A holding company is a type of business entity whose sole purpose is to own a controlling interest in other companies.19 These companies control subsidiaries. Rather than produce a good or supply a service, the functionality of a holding company is often tied to its ownership of its subsidiaries.20

    Supplier

    A supplier sells a component or service in support of a spyware service to other suppliers and vendors but does not develop or operate a spyware service or work directly with end users. In common parlance, vendors can be suppliers. Here the authors focus suppliers on those firms enabling the activity of spyware vendors but without any capacity to build or sell comparable surveillance services. For example, a supplier might sell a vulnerability or a subscription of exploits to a spyware vendor or establish a service relationship. A supplier helps with the operation of a service rather than providing that service directly. Suppliers are a crucial but often underlooked part of this market. Those vendors that cannot develop some part of a spyware service in-house—most often the regular supply of software exploits needed for continued access to major operating systems—look to procure these capabilities from a supplier, which can help drive proliferation of spyware through an even more diverse market. 

    A question of scope

    The definition of spyware offered here does not describe the full scope of the case study to follow. While this paper is concerned with the Intellexa Consortium and its sale of spyware, this collection of firms includes several that sell services complementary to spyware to steal credentials and surveil wireless networks. The case study of the Intellexa Consortium here is motivated by the sale and use of spyware, but does not necessarily limit its consideration of vendors and suppliers of that product. 

    A related, and important, issue of scope is the particular policy problem that the spyware market presents. As we have noted in previous work, “The proliferation of offensive cyber capabilities (OCC)—the combination of tools, vulnerabilities, and skills, including technical, organizational, and individual capacities used to conduct offensive cyber operations—presents an expanding set of risks to states and challenges commitments to protect openness, security, and stability in cyberspace. The profusion of commercial OCC vendors, left unregulated and ill-observed, poses national security and human rights risks. For states that have strong OCC programs, proliferation of spyware to state adversaries or certain non-state actors can be a threat to immediate security interests, long term intelligence advantage, and the feasibility of mounting an effective defense on behalf of less capable private companies and vulnerable populations. The acquisition of OCC by a current or potential adversary makes them more capable.21

    Many human rights violations associated with OCC occur in the context of their use for national security purposes (e.g., by state intelligence agencies). This dichotomy illustrates the diverse set of risks that the proliferation of OCC pose. These risks include what Lin and Trachtman term “vertical” uses (by states against their own populations) and “diagonal” uses (against the population of other states, including diaspora).22 In some cases, these capabilities are deployed intentionally, through commercial transactions or disclosure, and in other cases without intention; for example, the ‘breakout’ of “capabilities like EternalBlue, allegedly engineered by the United States, have already been used by the Russian, North Korean, and Chinese governments.” 23    

    This piece focuses on a subset of these capabilities, spyware, through a case study within the spyware market. That focus does not suggest that harm from the use of spyware is derived from their commercial sale or development outside government institutions. The commercial vendors of spyware may be the more unpredictable and less constrained source of intentional proliferation today, but they are far from the only source of harm and insecurity. Policy that seeks only to mitigate harms from the commercial sale of these capabilities risks ignoring its wider harms from a variety of sources. Commercial sale is a poor proxy for ‘responsible’ or ‘mature’ use of offensive cyber capabilities and history has shown that this market is only one, intentional part of this wider proliferation problem. Pinning policy activity on an assumption that states that can develop their own capabilities are deemed ‘responsible,’ and those that must resort to the open market are not, risks undermining even well-intentioned policy despite what it might offer in crafting consensus at home or abroad.  

    Intellexa: Behind the music

    How do these terms work in practice and what does a spyware vendor look like in 2024? This section reviews the case of Intellexa Consortium, a group of companies that has reportedly sold spyware to customers in Armenia, Colombia, Côte d’Ivoire, Egypt, Germany, Greece, Oman, the Philippines, Saudi Arabia, Serbia, and Vietnam, in addition to other countries “around the globe.”24, 25, 26 The service has also been used to covertly surveil US government officials, journalists, and policy experts.27

    The Intellexa Consortium is made up of two main groups, Intellexa Group and Intellexa Alliance. Intellexa Group is comprised of four known subcompanies, each of which specializes to complement one another, and houses the developer of the consortium’s spyware. The Intellexa Alliance is a partnership between the Intellexa Group and the Nexa Group, a cluster of five other companies.  

    Figure 2: Chart of the Intellexa Consortium and subsequent groupings

    The phrase “Intellexa Consortium” is an analytical term that researchers and policymakers28 have used to describe this collection of companies with close ties, apparent commercial partnerships, and comingled owners. Although both Intellexa Group and Intellexa Alliance are part of the Intellexa Consortium, neither are registered legal entities in any of the jurisdictions surveyed for this paper.  Meanwhile, known entities that bear Intellexa’s name, Intellexa S.A. is registered in Greece,29 and Intellexa Limited is registered in the British Virgin Islands30 and Ireland.31 Part of what makes Intellexa Group unusual is this collection of customer-facing support and marketing to amplify the reach and efficacy of their services. The corporate infrastructure of Intellexa Group is configured similarly and some of these companies share common ownership. For example, Tal Dilian founded both WS WiSpear Systems Limited and Intellexa S.A. and operated the two firms simultaneously.32

    Each of the Intellexa Group companies have a business relationship with many other entities in the group and many share the “Intellexa” name in some fashion. Intellexa Group, with or through one of the companies in the cluster, is responsible for the sale and support of the Predator spyware service.33 Predator is a spyware service engineered to infiltrate, monitor, and steal data from a target device. Predator installation occurs via “zero-click” or “one-click” infections. One form of zero-click infection takes place when a victim’s mobile browser secretly redirects to a malicious website.34 Alternatively, one-click infections require that victims unknowingly click on a malicious link, such as an article posted to X (formerly Twitter), which the user believed to be a legitimate website.35 After installation, Predator provides remote access to monitor the target device, manipulate local microphones and cameras, and extract data, including files, messages, and location information. Predator has been sold to states that have used it to commit human rights abuses.36  

    Intellexa Group is also part of the broader Intellexa Alliance, in partnership with Nexa Group, a consortium of four known different companies.  

    Figure 3: Known companies and groupings that comprise the Intellexa Alliance

    Reporting has often conflated these two separate clusters, identifying them as a unified entity instead of the set Intellexa Group and superset Intellexa Alliance (together with Nexa Group). This distinction is important as it helps to disentangle the complicated corporate structure and create more effective policy that targets specific clusters. The overlapping corporate structures found here are an extreme example of otherwise common trends found throughout the spyware market covering more than thirty firms with similarly named subsidiaries and nested investor and partner relationships. The figure below highlights features of the Intellexa Group and the Intellexa Alliance to clarify the operations of each association and recommend policy actions based on emerging market phenomena.

    Intellexa Group

    Intellexa Group’s story starts with its founder Tal Dilian. Dilian was sanctioned by the US Treasury Department in March 2024 and so discussed here as a prominent entity of interest to the US policy community. Tracing Dilian’s career trajectory helps parse through the complex and convoluted structure of the Intellexa Group. 

    Figure 4: Known companies and groupings of Intellexa Group

    Dilian, a former commander of the Israel Defense Forces Intelligence Corps’ Unit 81,Unit 81 focuses on developing innovative cyber technologies that provide specific functionality for IDF operations.37 is the founder of several companies that operate or have operated in the spyware market. The first such firm was established in 2010; Circles Solutions Ltd is based in Cyprus and uses Single System 7 vulnerabilities for geolocation with phone numbers as the preferred device identifier, a useful complement to vendors selling spyware targeting mobile phones.38 In 2014, Dilian sold Circles Solutions Ltd to Francisco Partners, a private equity firm based in the United States. From 2014 to 2019, Francisco Partners also held an “indirect controlling interest” of another spyware vendor, NSO Group.39, 40 , 41 As part of its acquisition, Circles Solutions Ltd became a subsidiary of NSO Group.42, 43

    Before completing the $130 million sale of Circles to Francisco Partners, Dilian founded WS WiSpear Systems Limited in 2013.44WS WiSpear Systems Limited specialized in intercepting target Wi-Fi signals and extracting passwords and communications at long range.45 In 2018, WS WiSpear Systems Limited acquired the year-old spyware vendor Cytrox AD, based in North Macedonia.46 Cytrox AD is notable as the original vendor of Predator spyware, the service that would be popularized and sold by Intellexa Group.  

    In 2018, Dilian began to organize what analysts would later come to term Intellexa Group—to include WS WiSpear Systems Limited (since renamed Passitora Ltd),47 Cytrox AD, and adding Senpai Technologies Ltd the following year.48 Senpai Technologies Ltd is an Israel-based company, specializing in open-source intelligence and in analyzing data from phones infected by spyware.49This left Intellexa Group with three complimentary offerings for any surveillance-minded government: Cytrox AD’s Predator spyware service, WS WiSpear Systems Limited’s Wifi-intercept and password-extraction technology, and Senpai Technologies Ltd’s data exploitation and open-source research tools.   

    Two years later, in 2020, Intellexa Group expanded to add Intellexa S.A. (previously known as Intellexa Single Member SA).50 Intellexa S.A.’s role within this consortium remained unclear until recently, with a corporate registry specifying no more than “computer systems design and related services.”51 In March 2024 however, the US Treasury Department described Intellexa S.A. as the primary channel through which Intellexa Group sells Predator spyware.52 A global network of investors supports Intellm exa, and many companies within Intellexa Group’s investor base also have personal connections to Dilian. Aliada Group, based in the British Virgin Islands,53 has Dilian listed as a shareholder 54 and in 2018 became the majority stakeholder in WS WiSpear Systems Limited,55 which would go on to acquire Cytrox AD.56 In 2020, Miros Development Group Inc., based in the British Virgin Islands, purchased Aliada Group.57 That same year, Miros Development Group Inc. was purchased by Thalestris Limited, a company based in Ireland.58 , 59 The director of Thalestris Limited, Sara Hamou, is Dilian’s ex-wife and an offshore specialist.60

    Intellexa Group distributes corporate ownership through an ecosystem of holding companies. Holding companies are developed to control subsidiaries. Cytrox AD is known to be held by:  

    • Cytrox Holdings ZRT, based in Hungary 
    • Cytrox EMEA Ltd (renamed Balinese Ltd in 2019), based in Israel, and,  
    • Cytrox Software Ltd (renamed Peterbald Ltd in 2019), also based in Israel.61 

    These holding companies may serve to protect the assets and owners within Intellexa Group. Other known limited liability companies bearing the same name of Intellexa also exist in Ireland and the British Virgin Islands as Intellexa Limited. Intellexa S.A. is held by: 

    • Intellexa Limited based in the British Virgin Islands62
    • Intellexa Limited based in Ireland.63

    The structure of these holding companies may have been intended to protect assets in the core service provider companies—WS WiSpear Systems Limited, Cytrox AD, and Senpai Technologies Ltd, as well as Dilian and other investors in the Intellexa Group companies.64, 65

    Intellexa Alliance

    Announced in 2019,66 the Intellexa Alliance was a partnership between the entities that comprise Intellexa Group and those of the Nexa Group.67The precise corporate structure of the alliance is murky, and the nature of the relationship remains unknown, although one prominent research outlet has described it as akin to the Star Alliance partnership of airlines.68 Nexa Group is also used to describe a group of companies that markets a set of products under one name but is not a legal entity itself;. It is comprised of Nexa Technologies (France), Nexa Technologies CZ s.r.o. (Czech Republic), Advanced Middle East Systems Fz llc (United Arab Emirates), Serpikom (France), and Trovicor FZ (United Arab Emirates). 

    Figure 5: Known companies and groupings of Nexa Group

    Several key moments provide starting points for analysis of the Nexa Group. In 2012, Nexa Technologies was established as a spin-off of the interception business established by Amesys in France.69 Founded in 2004, Amesys developed and sold its signature Eagle surveillance technology to the former regime of Muammar Gaddafi in Libya.70 Eagle expanded traditional techniques by allowing for the surveillance of internet traffic running to an entire country. To implement such a system, Amesys set up “two high-bandwidth ‘mirrors’” that copied this traffic into a searchable database for use by government security services.71 This traffic included voice over Internet Protocol (VoIP) conversations, email, and online chatroom postings.72 Rather than selecting a few targets to surveil, Eagle allowed the Gaddafi regime to learn about any and all anti-regime activities and discussions taking place over a variety of communications systems.73  

    Bull Group SA (France) bought Amesys in 2010. A year later, the International Federation for Human Rights (FIDH) and the Human Rights League (France) filed a civil party complaint against Amesys and Amesys company executives for “complicity in acts of torture” due to the Libyan government’s use of Amesys technologies.74 However, the court did not approve the opening of an investigation into this matter until 2013, at which point Nexa Technologies had been established to take over Eagle, Amesys’ main interception product.  

    In 2013, two Nexa Group companies were established: Nexa Technologies in France, which took over the development of Eagle surveillance system, and Advanced Middle East Systems  in the United Arab Emirates to function as a sales branch for Nexa Technologies products.75 Nexa Technologies CZ was founded in 2015 as a research and development arm of the company with a particular focus on cryptography.76 Nexa Technologies built upon Eagle to produce and sell its successor product, Cerebro, to governments in Egypt, Kazakhstan, Qatar, Singapore, and the United Arab Emirates.77 In 2019, Boss Industries, the parent company of Nexa Group, acquired Trovicor fz/Trovicor Intelligence, a competing company in the interception technology space. Like its predecessor Amesys, in 2021, Nexa Technologies found itself under indictment for “complicity in acts of torture and of enforced disappearances” based on the Egyptian government’s use of Cerebro technologies against its citizens.I78  

    Nexa Group companies underwent several name changes over the years. As early as 2019, Boss Industries likely held ownership of Nexa Group companies including Nexa Technologies (France), Nexa Technologies CZ, Advanced Middle East Systems (United Arab Emirates), Trovicor fz/Trovicor Intelligence (United Arab Emirates), and Serpikom (France).79 In 2021, ChapsVision acquired Nexa Technologies France.80 The government-facing branch of ChapsVision now purports to build “a sovereign cyber intelligence and cyber security solution, dedicated to the defence, intelligence and security markets”.81 As of 2022, Nexa Technologies CZ operates under the name Setco Technology Solutions, and as of 2023, Nexa Technologies (France) operates under the name RB 42.82

    Nexa Technologies’ integrated hardware-software surveillance product might well have complemented the Intellexa Group companies’ spyware and related service offerings. Nexa’s Cerebro allowed for the passive surveillance of entire populations. Cerebro collects massive amounts of communications data to identify potential targets for enhanced surveillance scrutiny. Once Cerebro identifies a target, Intellexa could deploy Predator spyware to infect that individual’s device to collect more intimate data.  

    Intellexa Consortium- Interaction with suppliers and customers

    Some spyware vendors rely primarily on procuring their vulnerabilities and exploits from third-party suppliers,83 while others, like NSO Group, balance procuring these tools from the market with their own in-house research and development.84 Intellexa Group companies appear to source exploits to support the Predator spyware with enough speed to maintain an eight-figure price point for the product, suggesting both in-house and third-party suppliers for exploits and vulnerability information.85Suppliers from which the Intellexa Group purchases vulnerabilities and exploits is not publicly available. 

    The Intellexa Consortium has faced scrutiny for where and to whom they have sold their wares. In 2007, a known member of the Intellexa Alliance, Nexa Technologies (France)—operating at the time as Amesys—sold its surveillance hardware to Libya. In 2011 and again in 2014, the International Federation for Human Rights and the Human Rights League filed complaints against Nexa Technologies for complicity in acts of torture from the sale of this technology.86 87

    In 2022, the Guardian newspaper revealed that Predator spyware had been used to monitor individuals across Greek politics through the Greek intelligence service.88 Most recently, Intellexa Group companies have been accused of selling Predator to a customer aligned with government interests in Vietnam.89 In 2021, the civil society group Citizen Lab also reported “likely customers” of Predator in Armenia, Egypt, Greece, Indonesia, Madagascar, Oman, Saudi Arabia, and Serbia.90

    Recent policy action on spyware

    In 2022, in response to the investigative findings of the Pegasus Project, an international investigative journalism initiative, the European Parliament set up the PEGA Committee to investigate the misuse of surveillance spyware including the NSO Group’s Pegasus and similar spyware services.91 The committee concluded that European Union governments abused spyware services, lacked necessary safeguards to prevent misuse, and in one jurisdiction the government even facilitated the heedless export of spyware technologies to authoritarian regimes.92 Despite the committee’s recommendations, the EU has not adopted any legislation as a bloc to curb the development or sale of spyware. In March 2023, the United States first proposed to block the US government agencies’ operational use of “commercial spyware.” Under Executive Order 14093, the Biden administration prohibited the operational use of commercial spyware that presents a significant threat to national security.93 Four months later, the US Department of Commerce added four Intellexa Group companies to its Entity List alongside other spyware vendors NSO Group and Candiru, to curb these firms’ ability to obtain commodities, software, and technology needed to develop spyware surveillance tools.94 The move targeted four entities: Intellexa S.A., Cytrox AD Holdings ZRT, Intellexa Limited (Ireland), and Cytrox AD (North Macedonia) because they were “trafficking cyber exploits … used to gain access to information systems, threatening the privacy and security of individuals and organizations worldwide.” 95

    In 2024, the US Department of Treasury Office of Foreign Assets Control levied sanctions against several of the entities listed in the 2023 Commerce action, while adding three more.96 Ultimately Treasury sanctioned Tal Dilian, Sara Hamou, Intellexa S.A., Intellexa Limited, Cytrox AD, Cytrox Holdings Crt, and Thalestris Limited.97 So far, US actions have not included at least five additional entities within the Intellexa Group, Balinese Ltd (formerly Cytrox AD Software Ltd), Peterbald Ltd (formerly Cytrox AD EMEA Ltd), Passitora Ltd (formerly WS WiSpear Systems Limited), and Senpai Technologies Ltd, as well as the British Virgin Islands-domiciled Intellexa Limited.  

    Takeaways for policy and research

    Each member company of the Intellexa Consortium sells spyware or ancillary surveillance support capabilities. The Intellexa Group offers a vertical integration of spyware targeting and delivery as well as information exploitation services. The Intellexa Alliance extends that integration to cover several major European jurisdictions. By bringing talent and complimentary services under an interlinked set of corporate partners, the Intellexa Consortium aggregates behaviors observed from other spyware vendors into a tighter, more robust cluster of entities.  

    This expansiveness of firms across various geographies allows the Intellexa Consortium to exploit jurisdictional arbitrage that can result in different regulatory treatment of the same transaction in different legal systems. Just like in the case of financial arbitrage, high costs are an impediment to arbitrage. In policy for spyware, high transaction costs could act as hindrance to leave a jurisdiction and high entry costs into a more favorable jurisdiction, thus inhibiting this activity in practice. Policymakers could achieve this by requiring more detailed disclosure of where companies intend to relocate when exiting a jurisdiction and their business purpose as well as strengthening business incorporation rules and laws to include more robust investigation of intended business activities of companies (and their beneficial owners, such as a recent change in US reporting rules).98
     
    Media reporting about the Intellexa Consortium often reduces this sprawling group of companies to a single entity, which makes it difficult to identify the operating jurisdiction of that firm. Policymakers should also consider providing universal jurisdiction for cases of spyware with other like-minded states. Cyprus, the Czech Republic, France, Greece, Hungary, Ireland, Israel, and US99 already provide for universal jurisdiction over certain kinds of crimes, a fruitful existing coalition to pursue such a change. 

    Virtually no information exists to explain the business consequences of Intellexa Alliance “membership.” Policymakers cannot make sense of how to target parts or all of the alliance without clearly understanding the constraints of this relationship.  

    Efforts to improve transparency in, and limit the harms of, the spyware market are hobbled if they focus solely on transactions or individual vendors. The rich ties of influence over participants in this market are in their financial and organizational dependencies with others. Policymakers must consider a multipronged approach that incorporates action for not only vendors themselves, but also key subsidiaries, investors, suppliers, and individuals that make up this market. Ably demonstrated by the Intellexa Consortium, the ebb and flow of corporate relationships, constant name changes, and confusing business structures, not only makes it difficult to track what is happening behind the veil with a vendor, but makes policy strictly chasing vendors neglect other pieces of this puzzle.  

    Enhancing the transparency of this market would provide more accurate and timely information to policymakers. Proposals for governments to create know-your-vendor requirements for all those from whom they acquire spyware or related services would substantially benefit policymakers’ visibility into this market and these relationships. Better information about spyware vendor’s business structures would help drive precise regulatory activity and allow for improved awareness of jurisdictions providing a ready home for investors, or vendors, associated with particular harms.  

    This transparency would help realize more effective targets of enforcement as well. Vendors change, but individuals often move between them. Transparency about ownership will assist policymakers in regulating individuals associated with spyware vendors, their subsidiaries, as well as investors. The Intellexa Consortium highlights a vital detail in this picture, where individuals who cultivate businesses around spyware will be repeat players in the market. Tal Dilian was founder of Circles Solutions (now under the NSO Group umbrella) and WS WiSpear Systems Limited (the majority stakeholder in Cytrox AD), along with creating the Intellexa Group. Enhancing transparency in this market will help policymakers find and fix on critical individuals within this market rather than only playing whack a mole with corporate registries.  

    A final potential benefit of this improved transparency is the prospect for efficient regulation of investors. While vendors’ jurisdictions might sometimes be outside the reach of proactive states, publicly known investors in spyware companies appear, at present, to be concentrated in geographies with government interest in intervention against the spyware market, notably the US and UK. For example, while the Intellexa Consortium operates largely within the European Union as a vendor, several of its holding companies and investors are based in the continental United States and the British Virgin Islands. More widely, a 2021 report from Amnesty International found that out of the 50 largest venture capital firms and three start up accelerators worldwide, only one had any sort of due diligence processes for human rights.100

    The case of the Intellexa Consortium is curious for the internal complexity of these firms’ relationships and the potential these business relationships hold for policymakers, researchers, and advocates working to limit the harms of the spyware market. The case is an example of the value that a market perspective can hold as well as the analytic challenges posed by contemporary research into these vendors and their activities. The prospects for policy in this domain are bright and for the first time in more than a decade hold the potential for material change in the shape and impact of the spyware market. We remain hopeful that potential will be realized. 

    Acknowledgements

    Thank you to more than two dozen researchers and analysts who shared their time, expertise, and feedback in the development of this project. Credit is owed to Jen Roberts, for the initial design of many of these graphics, and to Winnona DeSombre Bernsen for her tireless analysis and support throughout the development of this paper. Major thanks to Sopo Gelava, Jean le Roux, and Nancy Messieh who did foundational work on this dataset and its visualization. Thank you for peer review of this paper to Graham Brookie, Winnona DeSombre Bernsen, Kimberly Donovan, Maia Hamin, Kirsten Hazelrig, Sarah McKune, Stewart Scott, and several others who shall remain anonymous. Finally, the authors wish to acknowledge the often-thankless work of those journalists, researchers, and a small community of government analysts and policymakers who have sought to understand this market and its impact on people around the world. There is little in this or any other art which springs forth entirely original and we owe a debt of gratitude to their efforts. The team gratefully acknowledges support for this work from Microsoft and the UK National Cyber Security Centre.

    About the authors

    Jen Roberts is an Assistant Director with the Atlantic Council’s Cyber Statecraft Initiative. She primarily works on CSI’s Proliferation of Offensive Cyber Capabilities and Combating Cybercrime work. Jen also helps support the Cyber 9/12 Strategy Challenge and is passionate about how the United States with its allies and partners, especially in the Indo-Pacific, can cooperate in the cyber domain. Jen holds an MA in International Relations and Economics from Johns Hopkins University’s School of Advanced International Studies (SAIS) where she concentrated in Strategic Studies. She also attained her BA in International Studies from American University’s School of International Service.  

    Trey Herr is assistant professor of Global Security and Policy at American University’s School of International Service and Senior Director of the Atlantic Council’s Cyber Statecraft Initiative. At the Council, the CSI team works at the intersection of cybersecurity and geopolitics across conflictcloud computingsupply chain policy, and more. At American, Trey’s work focuses on complex interactions between states and non-state groups, especially firms, in cyberspace. Previously, he was a senior security strategist with Microsoft handling cybersecurity policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution at Stanford University. He holds a PhD in Political Science and BS in Musical Theatre and Political Science.

    Emma Taylor is a Research Assistant with the School of International Service and a highly interdisciplinary professional pursuing an M.S. in Computer Science and Cybersecurity with previous experience in the technology industry.

    Nitansha Bansal is an Assistant Director with the Atlantic Council’s Cyber Statecraft Initiative. Prior to joining the Council, Bansal worked with the Government and Public Affairs team of Open Source Elections Technology Institute (OSET) where she created visual dashboard for enhancing transparency in American elections. Previously, she worked as a Research Associate with Takshashila Institution, a think tank in India at the intersection of space and cybersecurity policy, and advised Members of Parliament in India on multiple legislative, economic and policy issues. Bansal holds a Masters in Public Administration from Columbia University’s School of International and Public Affairs. Her course of study was concentrated on cyber espionage, cybersecurity and business risk, mis/disinformation, social media policy, deepfake, and trust and safety. Originally from New Delhi, India, she speaks Hindi and Rajasthani.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1     US Department of the Treasury, “Treasury Sanctions Members of the Intellexa Commercial Spyware Consortium,” March 5, 2024, https://home.treasury.gov/news/press-releases/jy2155; “Predator Files: How European Companies Supplied Dictators Cyber-Surveillance Tools for More than a Decade,” European Investigative Collaborations, accessed April 10, 2024, https://eic.network/projects/predator-files.html.
    2    “Buying Spying: Insights into Commercial Spyware Vendors,” Google Threat Analysis Group, February 6, 2024, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/Buying_Spying_-_Insights_into_Commercial_Surveillance_Vendors_-_TAG_report.pdf
    3    AJ Vicens,”Meta Details Actions Against Eight Spyware Firms,” Cyberscoop, February 14, 2024,  https://cyberscoop.com/meta-details-actions-against-eight-spyware-firms/
    4    Amnesty International, The Predator Files: Caught in the Net, October 9, 2023, https://www.amnesty.org/en/documents/act10/7245/2023/en/. 
    5    Bill Marczak et al., “Pegasus vs. Predator: Dissident’s Doubly-Infected iPhone Reveals Cytrox AD Mercenary Spyware,” The Citizen Lab, December 16, 2021, https://citizenlab.ca/2021/12/pegasus-vs-predator-dissidents-doubly-infected-iphone-reveals-Cytrox AD-mercenary-spyware/.
    6    Christopher Bing, “U.S. Slaps Sanctions on Greek Spyware Vendor, Says it Targeted U.S. officials,” ReutersMarch 5, 2024, https://www.reuters.com/technology/cybersecurity/us-slaps-sanctions-greek-spyware-vendor-says-it-targeted-us-officials-2024-03-05/.
    7    Thomas Brewster, “A Multimillionaire Surveillance Dealer Steps out of the Shadows … And His $9 Million WhatsApp Hacking Van,” Forbes, April 5, 2019, https://www.forbes.com/sites/thomasbrewster/2019/08/05/a-multimillionaire-surveillance-dealer-steps-out-of-the-shadows-and-his-9-million-whatsapp-hacking-van/?sh=70e4bcfd31b7.
    8    Winnona DeSombre et al., Counter Cyber Proliferation: Zeroing in on Access-as-a-Service, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/
    9    .US Department of Commerce, “Commerce Adds NSO Group and Other Foreign Companies to Entity List for Malicious Cyber Activities,” November 3, 2021, https://www.commerce.gov/news/press-releases/2021/11/commerce-adds-nso-group-and-other-foreign-companies-entity-list. 
    10    Sven Herpig and Alexandra Paulus, “The Pall Mall Process on Cyber Intrusion Capabilities,” Lawfare, March 19, 2024, https://www.lawfaremedia.org/article/the-pall-mall-process-on-cyber-intrusion-capabilities.
    11    “European Parliament Draft Recommendation to the Council and the Commission Following the Investigation of Alleged Contraventions and Maladministration in the Application of Union Law in Relation to the Use of Pegasus and Equivalent Surveillance Spyware,” European Parliament, May 22, 2023, https://www.europarl.europa.eu/doceo/document/B-9-2023-0260_EN.html.
    12    Christopher Bing, “U.S. Slaps Sanctions on Greek Spyware Vendor, Says it Targeted U.S. officials,” ReutersMarch 5, 2024, https://www.reuters.com/technology/cybersecurity/us-slaps-sanctions-greek-spyware-vendor-says-it-targeted-us-officials-2024-03-05/.
    13    Andrew Selsky, “Oregon Examines Spyware Investment amid Controversy,” OPB, August 5, 2021, https://www.opb.org/article/2021/08/05/oregon-examines-spyware-investment-amid-controversy/; Stephanie Kirchgaessner, “US Announces New Restrictions to Curb Global Spyware Industry,” The Guardian, February 5, 2024, https://www.theguardian.com/us-news/2024/feb/05/us-biden-administration-global-spyware-restrictions; Nomaan Merchant, “Victims of NSO’s Pegasus Spyware Warn It Could Be Used to Target US,” The Times of Israel, July 28, 2022, https://www.timesofisrael.com/victims-of-nsos-pegasus-spyware-warn-it-could-be-used-to-target-us/; Miles Kenyon, “Reported Blackstone NSO Deal Failure and the Risks of Investing in Spyware Companies,” The Citizen Lab, August 15, 2017, https://citizenlab.ca/2017/08/reported-blackstone-nso-deal-failure-risks-investing-spyware-companies/.
    14    “Spyware,” United States Computer Emergency Readiness Team, updated October 2008, https://www.cisa.gov/sites/default/files/publications/spywarehome_0905.pdf.
    15    Also referred to as ‘Stingrays’ after the Harris Corporation’s eponymous product line; Amanda Levendowski, “Trademarks as Surveillance Technology,” Georgetown University Law Center, 2021, https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=3455&context=facpub
    16    This paper’s scope is slightly wider than spyware, owing to the activities of several firms in the Intellexa Consortium, as discussed briefly below.
    17    In the United States, Democratic Senator Ron Wyden of Oregon has advocated for the overhaul of Signaling System 7 (SS7), an international telecommunications protocol containing known vulnerabilities that can be exploited to provide passive surveillance capabilities; see https://www.bloomberg.com/news/articles/2024-02-29/senator-demands-overhaul-of-telecom-security-to-curb-abuses
    18    Winnona DeSombre et al., A primer on the proliferation of offensive cyber capabilities, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/a-primer-on-the-proliferation-of-offensive-cyber-capabilities/.
    19    Amy Fontinelle, “Holding Company: What It Is, Advantages and Disadvantages,” Investopedia, February 13, 2024, https://www.investopedia.com/terms/h/holdingcompany.asp. 
    20    Holding companies might provide oversight for subsidiaries; however, they are not involved in daily operations and remain protected from financial losses that might implicate subsidiaries.Fontinelle, “Holding Company.” 
    21    ”Winnona DeSombre et al. Countering Cyber Proliferation: Zeroing in on Access-as-a-ServiceAtlantic Council, May 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/.
    22    Herb Lin and Joel P. Trachtman, ”Using International Export Controls to Bolster Cyber Defenses,” Protecting Civilian Institutions and Infrastructure from Cyber Operations: Designing International Law and Organizations,” Center for International Law and Governance, Tufts University, September 10, 2018,  https://sites.tufts.edu/cilg/files/2018/09/exportcontrolsdraftsm.pdf.
    23    Winnona DeSombre et al. Countering Cyber Proliferation: Zeroing in on Access-as-a-ServiceAtlantic Council, May 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/; Gil Baram, “The Theft and Reuse of Advanced Offensive Cyber Weapons Pose a Growing Threat,” Council on Foreign Relations (blog), June 19, 2018, https://www.cfr.org/blog/theft-and-reuse-advanced-offensive-cyber-weapons-pose-growing-threat; Insikt Group, “Chinese and Russian Cyber Communities Dig Into Malware From April Shadow Brokers Release,” Recorded Future (blog), April 25, 2017, https://www.recordedfuture.com/shadow-brokers-malware-release/; Leo Varela, “EternalBlue: Metasploit Module for MS17-010,” Rapid7 (blog), May 19, 2017, https://blog.rapid7.com/2017/05/20/metasploit-the-power-of-the-community-and-eternalblue/.
    24    David Agranovich, Mike Dvilyanski, and Nathaniel Gleicher, Threat Report on the Surveillance-for-Hire Industry, Meta, December 16, 2021, https://about.fb.com/wp-content/uploads/2021/12/Threat-Report-on-the-Surveillance-for-Hire-Industry.pdf.
    25    Marczak et al., “Pegasus vs. Predator.”
    26    Amnesty International, Predator Files.
    27    United States Department of the Treasury, “Treasury Sanctions Members of the Intellexa Commercial Spyware Consortium,” March 5, 2024, https://home.treasury.gov/news/press-releases/jy2155.  
    28    “Report of the Investigation of Alleged Contraventions and Maladministration in the Application of Union Law in Relation to the Use of Pegasus and Equivalent Surveillance Spyware,” European Parliament, May 22, 2023,  https://www.europarl.europa.eu/doceo/document/A-9-2023-0189_EN.pdf; “Amendments 241-510 Draft report,” European Parliament, January 1, 2023, https://www.europarl.europa.eu/doceo/document/PEGA-AM-740916_EN.pdf; US Department of the Treasury, “Treasury Sanctions Members.”
    29    “Intellexa Company,” Athens Chamber of Commerce and Industryaccessed March 22, 2024, https://directory.acci.gr/companies/details/140944573.
    30    “Intellexa Ltd., British Virgin Islands,” Dato Capital, https://www.datocapital.vg/companies/Intellexa-Ltd.html.
    31    “Intellexa Limited,” Companies Registration Office Ireland, accessed March 22, 2024, https://core.cro.ie/e-commerce/company/697890.
    32    “Briefing for the PEGA Mission to Cyprus and Greece,” European Parliament, October 2022, https://www.europarl.europa.eu/RegData/etudes/STUD/2022/738330/IPOL_STU(2022)738330_EN.pdf. 
    33    Amnesty International, Global: ‘Predator Files’ Investigation Reveals Catastrophic Failure to Regulate Surveillance Trade, October 5, 2023, https://securitylab.amnesty.org/latest/2023/10/global-predator-files-investigation-reveals-catastrophic-failure-to-regulate-surveillance-trade/; “Read the Intellexa Pitch on Its Spyware Tool,” The New York Times, December 8, 2022, https://www.nytimes.com/interactive/2022/12/08/us/politics/intellexa-commercial-proposal.html?searchResultPosition=1; Bill Marczak et al., “Pegasus vs. Predator.”
    34    Bill Marczak et al., “Predator in the Wires: Ahmed Eltantawy Targeted with Predator, The Citizen Lab, September 22, 2023, https://citizenlab.ca/2023/09/predator-in-the-wires-ahmed-eltantawy-targeted-with-predator-spyware-after-announcing-presidential-ambitions/.
    35    Bill Marczak et al., “Independently Confirming Amnesty Security Lab’s Finding of Predator Targeting of U.S. & Other Elected Officials on Twitter/X,” The Citizen Lab, October 9, 2023, https://citizenlab.ca/2023/10/predator-spyware-targets-us-eu-lawmakers-journalists/.
    36    Amnesty International, Predator Files.
    37    Corin Degani, “An Elite Israeli Intelligence Unit’s Soldiers are Sworn to Secrecy – but Tell All on LinkedIn,” Haaretz, November 18, 2021, https://www.haaretz.com/israel-news/tech-news/2021-11-18/ty-article/.premium/an-israeli-intell-units-soldiers-are-sworn-to-secrecy-but-tell-all-on-linkedin/0000017f-e0e5-d568-ad7f-f3ef63350000.
    38    Thomas Brewster, “A Multimillionaire Surveillance Dealer Steps out of the Shadows…And His $9 Million WhatsApp Hacking Van,” Forbes, April 5, 2019, https://www.forbes.com/sites/thomasbrewster/2019/08/05/a-multimillionaire-surveillance-dealer-steps-out-of-the-shadows-and-his-9-million-whatsapp-hacking-van/?sh=70e4bcfd31b7.
    39    Brewster, “A Multimillionaire Surveillance Dealer.”
    40    NSO Group is an Israel-based spyware vendor that developed the Pegasus spyware suite and has been reported on widely as a focus of a recent EU Parliamentary commission investigation into government abuse of the spyware globally to suppress human rights; see https://www.amnesty.org/en/latest/news/2022/03/the-pegasus-project-how-amnesty-tech-uncovered-the-spyware-scandal-new-video/; see https://www.europarl.europa.eu/committees/en/pega/home/highlights.
    41    “Operating from the Shadows: Inside NSO Group’s Corporate Structure,” Amnesty International, May 31, 2021, https://www.amnesty .org/en/documents/doc10/4182/2021/en/
    42    “Operating from the Shadows,”https://www.amnesty.org/en/documents/doc10/4182/2021/en/
    43    European Parliament, “Report of the Investigation,” https://www.europarl.europa.eu/doceo/document/A-9-2023-0189_EN.pdf
    44     WS WiSpear Systems, Eφορος Εταιρειών/Registrar of Companies,” Accessed March 22, 2024, https://efiling.drcor.mcit.gov.cy/DrcorPublic/SearchResults.aspx?name=WS WiSpear Systems Limited&number=%25&searchtype=optStartMatch&index=1&tname=%25&sc=0 “Operating from the Shadows,” Amnesty International. 
    45    Patrick Howell O’Neil, “Israeli Startup Touting ‘the Longest’ Range Wi-Fi Spying Tool in the World,” Cyberscoop, September 21, 2017, https://cyberscoop.com/WS WiSpear Systems Limited-wifi-interception-israel-unit-8200/#:~:text=WS WiSpear Systems Limited%2C%20launched%20in%202016%20by,passwords%20and%20other%20communications%20%E2%80%94%20at%20%E2%80%9C.
    46    European Parliament, “Brief for the PEGA Mission.”
    47    Marczak et al., “Pegasus vs. Predator.”
    48    European Parliament, “Brief for the PEGA Missions”; “Predator Files: Technical deep-dive into Intellexa Alliance’s surveillance products,” Amnesty International, October 6, 2023, https://securitylab.amnesty.org/latest/2023/10/technical-deep-dive-into-intellexa-alliance-surveillance-products/
    49    “The Predator Files,” Amnesty International, https://www.calcalistech.com/ctech/articles/0,7340,L-3772040,00.html.
    50    European Parliament, “Brief for the PEGA Mission.”
    51    “Intellexa S.A.”, dun & bradstreet, accessed March 22,2024, https://www.dnb.com/business-directory/company-profiles.intellexa_sa.00b9d3be2fdd11150913f55266c391e8.html.
    52    US Department of the Treasury, “Treasury Sanctions Members.” 
    53    European Parliament, “Brief for the PEGA Mission.”
    54    Shuki Sadeh, “A Shady Israeli Intel Genius, His Cyber-Spy Van and Million-Dollar Deals,” Haaretz, December 31, 2020, https://www.haaretz.com/israel-news/tech-news/2020-12-31/ty-article-magazine/.highlight/a-shady-israeli-intel-genius-his-cyber-spy-van-and-million-dollar-deals/0000017f-f21e-d497-a1ff-f29ed7c30000.
    55    European Parliament, “Brief for the PEGA Mission.”
    56    European Parliament, “Brief for the PEGA Mission.”
    57    Michalis Hariatis, “The SYRIZA-PASOK Findings on Wiretapping: Both a Scandal and a Cover-Up,” Ieidiseis, October 10, 2022, https://www.ieidiseis.gr/politiki/167144/ta-porismata-syriza-pasok-gia-tis-ypoklopes-kai-skandalo-kai-sygkalypsi.
    58    European Parliament, “Brief for the PEGA Mission.”
    59    Colm Keena, “Ireland Being Used by Predator Spyware Group to Avoid Tax, Claims Dutch MEP,” Irish Times, February 10, 2023, https://www.irishtimes.com/business/economy/2023/02/10/shady-business-ireland-accused-of-facilitating-tax-avoidance-by-spyware-group/; David Kenner, “The Spy, the Lawyer and Their Global Surveillance Empire,” International Consortium of Investigative Journalists, November 15, 2023, https://www.icij.org/investigations/cyprus-confidential/israeli-predator-spyware-cyprus-offshore-intellexa/.
    60    Kenna, “The Spy.” 
    61    “WS WiSpear Systems,” Eφορος Εταιρειών/Registrar of Companies, accessed March 22, 2024, https://efiling.drcor.mcit.gov.cy/DrcorPublic/SearchResults.aspx?name=WS+WS WISPEAR SYSTEMS LIMITED+SYSTEMS+LIMITED&numbnu=%25&searchtype=optStartMatch&index=1&tname=%25&sc=1; Bill Marczak et al., “Pegasus vs. Predator;” https://or.justice.cz/ias/ui/rejstrik-firma.vysledky?subjektId=919037&typ=UPLNY; https://www.europarl.europa.eu/RegData/etudes/STUD/2022/738330/IPOL_STU(2022)738330_EN.pdf. 
    62    “Intellexa Ltd., British Virgin Islands,” Dato Capital, accessed March 22, 2024, https://www.datocapital.vg/companies/Intellexa-Ltd.html
    63    Companies Registration Office Ireland, “Intellexa Limited.” 
    64    Fontinelle, “Holding Company.” 
    65    US Department of the Treasury, “Treasury Sanctions Members.” 
    66    Nexa Technologies, “Intellexa Alliance,” February 16, 2019,  https://web.archive.org/web/20200109072024/https:/www.nexatech.fr/intellexa-alliance-press-news.
    67    “Executives of surveillance companies Amesys and Nexa Technologies indicted for complicity in torture,” Amnesty International, June 22,2021, https://www.amnesty.org/en/latest/press-release/2021/06/executives-of-surveillance-companies-amesys-and-nexa-technologies-indicted-for-complicity-in-torture/; Intellexa “The Intellexa Alliance Expands with the Addition of New Members and the Enhancement of Its End-to-End Offering,” Release Wire, June 20, 2019, http://www.releasewire.com/press-releases/the-intellexa-intelligence-alliance-expands-with-the-addition-of-new-members-and-the-enhancement-of-its-end-to-end-offering-1234811.html. 
    68    The Star Alliance in non-spyware space is a partnership of airlines that offer travelers shared benefits for flying within partner airlines; Marczak et al., “Pegasus vs. Predator.”
    69    Clairfield International, Clairfield Annual Outlook 2020,” January 13, 2020, https://www.clairfield.com/wp-content/uploads/Clairfield-Annual-Outlook-2020.pdf.
    70    Paul Sonne and Margaret Coker, “Firms Aided Libyan Spies,” The Wall Street Journal, August 30, 2011, https://www.wsj.com/articles/SB10001424053111904199404576538721260166388.
    71    Matthieu Aikins, “Jamming Tripoli: Inside Moammar Gadhafi’s Secret Surveillance Network,” Wired, May 18, 2012, https://www.wired.com/2012/05/ff-libya/
    72    Aikins, “Jamming Tripoli.”
    73    Aikins, “Jamming Tripoli.”
    74    International Federation for Human Rights, “Q/A Surveillance and Torture in Egypt and Libya: Amesys and Nexa Technologies Executives Indicted,” June 22, 2021, https://www.fidh.org/en/region/north-africa-middle-east/egypt/q-a-surveillance-and-torture-in-egypt-and-libya-amesys-and-nexa#.
    75    Clairfield International, “Project <<Aspen>> Expert in Homeland Security Solutions,”Clairfield International. September 2016. https://s3.documentcloud.org/documents/21116576/project-cerebro-nexa-technologies.pdf.
    76    Clairfield, “Project Aspen.”
    77    Sven Becker et al., “European Spyware Consortium Supplied Despots and Dictators,” Spiegel International, May 10, 2023, https://www.spiegel.de/international/business/the-predator-files-european-spyware-consortium-supplied-despots-and-dictators-a-2fd8043f-c5c1-4b05-b5a6-e8f8b9949978.
    78    nternational Federation for Human Rights, “Surveillance and Torture.”
    79    “The Predator Files,” Amnesty International.
    80    “Raising the Bar: A Selection of M&A Deals,” Eversheds Sutherland, Accessed March 22, 2024. https://www.es-archive.com/documents/global/czech-republic/cz/Tombstone%20M&A_CR_SR.pdf.
    81    “ChapVision Cybergov”, accessed March 22, 2024, https://www.chapsvision-cybergov.com/.
    82    “Setco Technology Solutions s.r.o.,” Verejny restrik (obchodni rejstrik)/Public Register (Commercial Register), Accessed March 22, 2024, https://or.justice.cz/ias/ui/rejstrik-firma.vysledky?subjektId=919037&typ=UPLNY  “The Predator Files,” Amnesty International.
    83    “Hacking Team: a zero-day market case study,” Vlad Tsyrklevich, (personal website), July 22, 2015, https://tsyrklevich.net/2015/07/22/hacking-team-0day-market/.
    84    Winnona DeSombre et. al “Countering Cyber Proliferation.”
    85    Victor Ventura, “Intellexa and Cytrox AD: From Fixer-Upper to Intel Agency-Grade Spyware,” Talos, December 21, 2023, https://blog.talosintelligence.com/intellexa-and-Cytrox AD-intel-agency-grade-spyware/.  “Read the Intellexa Pitch,” The New York Times
    86    International Federation for Human Rights, “FIDH and LDH File a Complaint Concerning the Responsibility of the Company AMESYS in Relation to Acts of Torture,” October 19, 2011, https://www.fidh.org/en/region/north-africa-middle-east/libya/FIDHand-LDH-file-a-complaint.
    87    International Federation for Human Rights, “Q/A Surveillance.”
    88    Helena Smith, “Greek ‘Watergate’ Phone-Tapping Scandal Puts Added Pressure on PM,” The Guardian, August 28, 2022, https://www.theguardian.com/world/2022/aug/28/greek-watergate-phone-tapping-scandal-threatens-to-topple-pm. 
    89    “The Predator Files,” Amnesty International.
    90    Marczak et al., “Pegasus vs. Predator.”  
    91    “Report of the Investigation of Alleged Contraventions and Maladministration in the Application of Union Law in Relation to the Sse of Pegasus and Equivalent Surveillance Spyware,” European Parliament, May 22, 2023, https://www.europarl.europa.eu/doceo/document/A-9-2023-0189_EN.pdf.
    92    European Commission, “Report on the Investigation.”
    93    The White House, “Fact Sheet: President Biden Signs Executive Order to Prohibit U.S. Government Use of Commercial Spyware That Poses Risks to National Security,” March 27, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/27/fact-sheet-president-biden-signs-executive-order-to-prohibit-u-s-government-use-of-commercial-spyware-that-poses-risks-to-national-security/.
    94    US Department of Commerce, “Commerce Adds Four Entities to Entity List for Trafficking in Cyber Exploits,” July 18, 2023, https://www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3297-2023-07-18-bis-press-package-spyware-document/file.
    95    US Department of Commerce, “Commerce Adds Four.” 
    96    This sanction designation freezes all assets held in the United States and prohibits these individuals and entities from conducting business within the United States. Furthermore, if a financial institution continues to do business with these designated entities and individuals, it may be subject to sanctions or enforcement actions. Finally, if a sanctioned individual or entity owns 50 percent or more of a company not listed, those firms are also subject to sanctions.
    97    US Department of the Treasury, “Treasury Sanctions Members.”
    98    “New Report: US is catching up with beneficial ownership,” Thomas Reuters, January 24, 2023, https://www.thomsonreuters.com/en-us/posts/corporates/beneficial-ownership-report-2024/. 
    99    “Universal Jurisdiction: A Preliminary Survey of Legislation Around the World – 2012 Update,” Amnesty International, October 09, 2012, https://www.amnesty.org/en/documents/ior53/019/2012/en/.
    100    “Risky Business: How Leading venture Capital Firms Ignore Human Rights when Investing in Technology,” Amnesty International, July 30, 2021, https://www.amnesty.org/en/documents/doc10/4449/2021/en/.  

    The post Markets matter: A glance into the spyware industry appeared first on Atlantic Council.

    ]]>
    O$$ security: Does more money for open source software mean better security? A proof of concept https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/o-security-does-more-money-for-open-source-software-mean-better-security-a-proof-of-concept/ Thu, 18 Apr 2024 22:50:00 +0000 https://www.atlanticcouncil.org/?p=817991 A proof-of-concept study looking for correlation between open source software project funding and security practices at scale.

    The post O$$ security: Does more money for open source software mean better security? A proof of concept appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    The security of open source software has transformed from a niche technology topic to a matter of broad interest in recent years, including for the national security community. Underlying this interest is an emerging consensus that too many users and beneficiaries of open source software are free riders, devoting little money, staff time, and other resources to the health and security of the open source software they depend on, leaving over-burdened and under-compensated open source software maintainers to deal with their code and security fixes, feature improvements, vulnerability remediation, and more on their own. Consequently, this perspective implies that the security and sustainability of open source software suffer from a lack of financial investment. This historical underinvestment has motivated several companies and foundations to recently invest in open source software and associated initiatives.

    But is there evidence that more general financial investment (“more money”) improves security for open source software projects?

    At this project’s inception, we could find no existing, large-scale studies on this question,1so the project created a novel methodology and dataset to investigate the issue. The project first identified the 1000 most downloaded open source software packages in the Python and npm (Javascript) programming language ecosystems, two of the largest and most popular open source package repositories. The project used a tool called “funder-finder” to determine a project’s funding sources. Funder-finder sources include GitHub sponsors (both organizational and individual), Tidelift, Open Collective, Google Summer of Code, and NumFOCUS. Additionally, the project used a tool called “Security Scorecard,” maintained by the Open Source Security Foundation (OpenSSF), to quantitatively measure the security posture of open source software projects. Finally, the analysis used descriptive statistics and simple statistical procedures to search for evidence of any relationship between funding and security among these popular packages.

    Three research findings stand out:

    1. The statistical evidence, which is of only moderate strength, supports the view that some general-purpose open-source funding vehicles do correlate moderately with more secure open-source software projects. In short, more money does seem to correlate with better security practices.
    2. Additionally, there is some evidence that a greater number of unique sources of open source funding for a given project also corresponds to a project with better security practices.
    3. The detailed quantitative evidence suggests that more funding positively correlates with better compliance with several, thought not all, security practices (as measured by the OpenSSF Security Scorecard tool), instead of any single security practice.

    Confidence in these research results must remain only moderate, though, until there is additional research and analysis. These findings rely on cross-sectional data—data at one point in time—and data from only two open source software ecosystems and a small set of funding mechanisms. A later section in the paper describes the analytical limits of this analysis and opportunities for future research.

    However, these findings should nonetheless enrich the debate about open source software funding and security. Most importantly, this study presents prima facie evidence of a positive effect of general open source software funding on open source software security. This can help funding organizations—companies, non-profits, or governments—make decisions related to funding open source software projects. The findings also suggest that the security effects of funding can be found via relatively straightforward automated analysis and do not strictly require manual data collection from the project maintainers themselves or the invention of new security measurement tools. This can help inform parties that want to evaluate the effects of open source project funding on security to ensure that these dollars are well spent.

    Finally, this project—itself more a proof of concept than the final word—highlights several questions for researchers and policymakers interested in open source software funding and security. To mention only a few: How should projects be selected for funding? What is the definition of “security” for an open source project? Can only randomized controlled trials ascertain the true security effects of open source project funding? How well do security practices reduce or prevent negative security outcomes?

    In the meantime, interested parties will likely need to adopt the “rough consensus and running code”2 intellectual style often associated with the open source movement to make sense of the open source project funding and security landscape. That mindset applied to these results leads to a first-cut answer of the main research question of whether more money leads to better open source software security: kind of.

    Introduction: Open source software, money, and security

    Among those who pay attention to opensource software—code released under licenses allowing anyone to use, inspect, and modify it—there increasingly exists a view that open-source software receives too little investment.3 Those with this view, moreover, often subscribe to the belief that such under-investment poses substantial risks for much, if not all, of society. One argument considers open-source software as infrastructure, like roads and bridges, or even as critical infrastructure like water reservoirs or hospitals.4 No matter the exact framing, this reasoning maintains that open source software is ubiquitous throughout the modern digital economy and that risks in open source software—whether arising from unintentional or malicious security concerns, the so-called “health” of the project, or any of other sources—therefore pose dangers to everyone.

    This partially explains the Cambrian explosion-like outbreak of open-source software project funding from key players like the Ford Foundation,5 GitHub Sponsors,6    the Linux Foundation and associated sub-organizations such as the Open Source Security Foundation (OpenSSF),7 the National Science Foundation’s Pathways to Enable Open-Source Ecosystems (POSE) funding,8 Open Collective,9 the Open Technology Fund,10 the Sovereign Tech Fund,11 thanks.dev,12 Tidelift,13 Spotify,14 and many others. These funders pursue a variety of goals. Some allow any party with a bank account to simply provide money—no strings attached—to open-source software maintainers.15 Others are less altruistic and seek social, political, or commercial goals from their funding, such as ensuring that a set of open source software projects critical to their organization’s business follow standard engineering and security practices.

    The logic of providing funding to open source software maintainers is, at first glance, straightforward. Open source software projects are often maintained by volunteers. While these volunteers are sometimes paid staff from a company dependent on a project, this is usually not the case, and the relative impact of these individuals is well understood. Crucially, this also means that there is often no formal contractual relationship between the parties that depend on the software and the parties that produce and maintain the software.16 As such, open source software maintainers are often stretched thin.17Funding is meant to ease this situation, enabling otherwise busy maintainers to devote more time to their projects.

    A related logic is also embedded in this strategy: funding should go to certain “critical” projects, those of particular importance or “central” to a wide number of other entities. Such prioritization, in theory, should ensure that scarce funding dollars are allocated efficiently since there are millions of open-source projects, many of which are no more than personal pet projects.

    While these ideas are sensible, there is a tautological danger lurking within. If the “problem” of open source software funding is defined simply as “there is too little funding,” then the obvious solution is more funding, more types of it, and more mechanisms for delivering it. Success then becomes defined as many “critical” projects receiving funding, regardless of whether that funding helps improve the projects. While there are certainly moral and ethical arguments to be made in support of such an arrangement, this mindset may ultimately produce disappointing results for advocates interested in changing the underlying projects by funding them. Without a clear sense of purpose, funding could dry up or jump aimlessly from initiative to initiative and project to project. It might also have no discernible effect in the absence of tailored mechanisms to improve security.18 Ideally, funders of open source software initiatives would state clear goals and use assessment tools19 similar to those used by international development or medical professionals to evaluate the efficacy of their funding.

    To avoid this risk, this research project set out to pilot a relatively formal analysis and measuring of the  effect of open source software funding on project security at scale.

    So, why security? Security has become a prominent concern among those interested in open source software. Though this movement pre-dates the infamous December 2022 Log4j incident,20 where a severe vulnerability was found in a ubiquitous open-source logging framework, it certainly gained steam in the aftermath of that major revelation. Key open source software organizations like the Eclipse Foundation and OpenSSF, as well as major companies such as Google, have become evangelists of open source software security. This activity has also spread to the US government (and others internationally), which has led to the US Cybersecurity and Infrastructure Security Administration (CISA) developing an Open Source Software Security Roadmap and the establishment of the interagency Open Source Software Security Initiative (OS3I).21

    This research sought to uncover whether general funding—funding not tied to a specific security goal—measurably improves the security of open source software projects. While the next section will explain the methodological details, it is worth explaining the project’s focus on and assumptions around “general” funding. Broadly, the question of whether targeted funding can improve security is both easier to answer and less relevant to the discussion. There is already some evidence that initiatives specifically focused on improving open source software security can succeed. The US Department of Homeland Security’s 2006-2009 Open Source Hardening Project is one example,22 and simply paying for security audits or tooling is a viable approach already piloted by some organizations.23It is less clear, however, whether unrestricted or general funds also have security benefits for open source projects. This question is a key part of policy conversations about open source software security—namely how resources should be spent on improving security compared to other dependency attributes and which projects should receive said support are critical to efficient policymaking.

    Research methodology: Or, how to embrace “Running Code”

    There is no established body of datasets or techniques for studying the effect of open source software funding on security. One recent study by the company Tidelift that evaluates the impact of its own funding on open source project security is, to our knowledge, the only exception.24 Beyond the fact that interest in this topic is fairly recent, there are practical reasons for the current research gap. First, the heterogeneous and sprawling nature of open source software means that there is no central authority for funding data, so collection requires either relatively manual processes or new tools. Second, the definition of the “security” of an open source software project—and most software and systems in general—is difficult to pin down, at least for quantitative study. Fourth, even should a researcher create a dataset that tracks funding and “security” for some set of open-source software projects, determining from a methodological standpoint whether that funding causes improved security is by no means simple. This research nonetheless attempts to overcome some of these hurdles and answer the question of whether general funding for open source software projects improves their security.

    While many might prefer a large-scale randomized controlled trial to answer this research question, such an approach was, for this project, highly impractical. Not only would such a trial require a substantial amount of funding, but it would also be onerous and complicated to administer. Providing financial funding to open source projects not designed to receive funds is also a sensitive matter. In particular, what person or party should receive the funds? This is not always a straightforward question given that many open source projects lack the centralized hierarchy and formal governance that would enable a clear answer. To compensate, this research used regression analysis of cross-sectional data from two open source software ecosystems to measure the statistical relationship between funding and security: the Python Package Index (PyPI, pronounced “pie pea eye”) and npm.25

    PyPI and npm were natural starting points due to their popularity. As of October 2023, PyPI hosts nearly 500,000 open source packages, and npm hosts over one million. These package registries are akin to mobile app stores but house open source packages rather than mobile apps. PyPI has become the go-to source of open source packages for data science, machine learning, artificial intelligence, and other data-related programming activities, making it a central component of modern software. npm, a package registry for Javascript, is the leading package manager for software developers building web applications, both the “front-end” parts visible to a user through a browser and the “back-end” code running on servers. Additionally, these ecosystems have already been the focus of other software security research, which suggested the feasibility of this study.26 Future analysis can extend this project’s analysis to other ecosystems.

    The next step of the project involved creating a clear definition of “funding.” There are admittedly many types of funding for open source software projects, serving a wide variety of purposes and methods ranging from straightforward transfers of cash to portioning out developer time from large IT firms. Funding, for the purposes of this project, is defined as whether there exists evidence that a particular project has any of the following funding sources:

    • Official funding through GitHub Sponsors,27 an official funding program created by GitHub, for the parent GitHub organization of a project
    • Individual funding from GitHub Sponsors for any of the top three contributors to a project
    • Funding from Tidelift,28 a company that matches funding from companies with open-source software maintainers
    • Funding from Open Collective,29 a tool for grassroots fundraising sometimes used by open-source projects as well as by large, centralized funding entities
    • Funding from NumFOCUS,30 a non-profit that, among other things, provides fiscal sponsorship of open source projects related to research, data, and scientific computing
    • Funding from Google Summer of Code, a program that provides funding for new contributors to work on open source projects in an internship-like fashion

    This study assumes that these are general sources of funding, though caveats and exceptions to this assumption are discussed in the limitations section. These funding parameters were assessed via the open-source tool funder-finder, which uses simple heuristics to determine whether a given open source project (specifically, a GitHub URL) has evidence of these types of funding.31 Other approaches are possible, though these have their drawbacks. For instance, surveying the developers associated with a set of open source projects is feasible but arduous. Additionally, it would also be possible to partner with a group of funding entities and standardize their data, although this would require significant formal cooperation.

    To measure the “security” of an open source project, this research used the OpenSSF Security Scorecard tool,32 which provides a score from zero to ten to grade the maturity and trustworthiness of a project’s security development practices or its security posture. The tool rely on a series of subchecks and heuristics to determine whether a project follows a set of well-known security practices. The benefit of using Scorecard is that the assessment is automated, comparable, and focused on practices rather than outcomes, which are challenging to measure and prone to significant biases. There are other potential approaches for measuring the security posture of an open source project, such as the mean time to remediation for disclosed vulnerabilities, but such datasets tend to be expensive to collect and are often incomplete.

    The actual analysis involved measuring the funding and security scores for the 1000 most popular projects in both Python and npm and then comparing their scores against those of projects without funding. The analysis also compared the security scores of projects by funding type. Focusing on only the top downloaded projects increases the likelihood that the funded and unfunded are comparable in terms of organization and scope (with some exceptions). This approach should minimize the risk of comparing major, relatively well-organized projects (that receive funding) to projects that are in the “long tail” of open-source development (and are unfunded), which are in reality no more than minor personal projects.

    For both PyPI and npm, the analysis followed these steps:

    1. Create a list of the 100 most popular open-source packages. For Python, the most popular packages were defined as those with the most downloads.33 For npm, the most popular packages were defined according to npm rank, which provided a most-depended-upon list.34
    2. Identify the source code URL for each project. For the Python packages, the list of URLs was created via the open-source tool deps2repos.35 For npm, the rankings list also provided URLs.
    3. Run funder-finder on all GitHub URLs. Funder-finder can only report results for GitHub URLs. Fortunately for this analysis, most of the referenced projects are hosted on GitHub.
    4. Run Scorecard on all GitHub URLs.
    5. Create descriptive statistics for funder-finder and Scorecard results.

    Results: Does more money mean better OSS security?

    The main results from this analysis are four-fold:

    • There are a variety of funding types in both the PyPI and npm ecosystems.
    • Some funding types do appear to correlate to substantially higher security posture scores. In particular, GitHub organization sponsorship, Open Collective funding, and, to a lesser extent, Tidelift funding appear to correlate strongly with security benefits. GitHub individual funding does not appear to influence a project’s security posture.
    • There is moderate evidence that combined funding (i.e. having more unique funding sources) is also correlated with better security posture.
    • The detailed quantitative evidence suggests that funding positively correlates with an array of security practices (as measured by the OpenSSF Security Scorecard), meaning that funding does not correlate with a better score in any single security practice alone .

    Table 1. Funding prevalence by ecosystem and funding type

    Most strikingly, the npm ecosystem appears to have relatively more funding than PyPI. For most categories of funding, the number of npm projects with funding in that category is double or triple the number of Python projects with that category of funding. It is also notable that some types of funding such as GitHub Sponsors (especially individual sponsors) are quite common while others, such as NumFOCUS and Google Summer of Code, are rare. This is not to imply that they are insignificant investments into the OSS ecosystem—indeed they might prioritize quality support to a small number of projects with critical or niche uses instead of widespread funding—but rather that for this analysis, the latter two simply provide too small a sample for showing statistical significance. However, several types of funding have sufficient data to enable robust statistical analysis.

    Statistical analysis of the Python ecosystem reveals that all forms of funding correlate with an improvement in the security posture of projects (see Table 2).

    Table 2. Effect of funding by funding type for the Python ecosystem

    For the Python ecosystem, some forms of funding, especially Tidelift, Open Collective, and GitHub Organizational Sponsorship correlate with a significantly better security score, approximately one point or more higher on average.

    Table 3. Effect of funding by funding type for the npm ecosystem

    Table 3 reveals that the funding effects in the npm ecosystem vary. Some types of funding appear to have no effect. But two forms of funding, GitHub Sponsors organizational funding and Open Collective, correlate with a better overall security posture. Additional analysis also suggests that an increase in the number of unique funders also leads to an improvement in the average scorecard for projects in both ecosystems (see Table 4).

    Table 4. Average scorecard score by number of unique funders for both the Python and npm ecosystems

    Python projects with two or more funding sources have noticeably higher average overall security posture scores. npm projects with three or more funding sources also appear to have higher average security posture scores. This analysis suggests that there are potentially distinct benefits from having more unique sources of funding on top of a project having a source of funding in general.

    Subchecks

    OpenSSF Security Scorecard scores are a composite of several different subscores, each produced from assessing a project’s adherence to some security practice given a specific weight for the final calculation. Looking at which of these subchecks drove variations in overall scores highlights the details of the funding-scorecard relationship. Table 5 summarizes these for Python and Table 6 for npm, providing the difference for each subcheck between the mean scores of each funder and the mean scores of unfunded projects.

    The following practices were significantly more common in Python projects with any funder:

    • Continuous Integration (CI) tests
    • Core Infrastructure Initiative (CII) best practices
    • A diversity of recently active company-affiliated contributors
    • Avoidance of dangerous workflows
    • Fuzzing tools
    • Active maintainers
    • Official package building practices
    • Cryptographical signatures on releases
    • Read-only permissions on GitHub workflow tokens
    • Remediation of known vulnerabilities.

    Some practices were significantly more common among specific funders. Projects with Open Collective funding were more likely to review code before merging it, and projects with either Open Collective or Tidelift support were significantly more likely to use dependency update tools and have security policies. Tidelift-supported projects were also much more likely to use static code analysis tools.

    Table 5. Scorecard subchecks across funders in PyPI – Differences from no-funder average (* = p<.05)

    For npm projects, the following were more common among projects with any source of funding:

    • A diversity of recently active company-affiliated contributors
    • An absence of dangerous workflows
    • Security policies
    • Remediation of known vulnerabilities

    Oddly, dependency update tools and branch protection were significantly less common among funded npm projects. Some practices were significantly more common only among specific funders. Open Collective projects saw more CII best practices, dependency update tools, code review, fuzzing, maintenance, packaging, pinned dependencies, and read-only token permissions. Meanwhile, Tidelift-supported projects saw significantly better vulnerability remediation, presence of security policies, and organizationally-backed contributors. GitHub organizational sponsors were more common in projects with branch protection, CI tests, organizationally-backed contributors, fuzzing, maintenance, packaging, read-only token permissions, and security policies.

    Table 6. Specific scorecard components across funders in npm – Differences from no-funder average (* – p<.05)

    In short, these findings indicate with moderate confidence that there is a meaningful connection between more open-source project funding and improved security posture. Some practices are strongly associated with funding, and more funding generally correlates with more dramatically differentiated security practices. Not all funders had scores that indicated significantly differentiated security practices in all ecosystems, but all did have a significant number of subchecks with dramatic score improvements correlating to funding.

    If these results indicate that funding leads to better security practices, the causal explanation is relatively simple and intuitive. More money for maintainers and even developers means more flexibility in dedicating time to project management, which can include developing security policies, remediating vulnerabilities, using better permission tokens, and including CI testing, fuzzing, signatures, packaging, update tools, and more—all of which increase Scorecard scores. Additionally, funding may help purchase either tooling or project services that would similarly contribute to security posture.

    Limitations and discussion

    There are several limitations to this analysis that are important to acknowledge. First and most significantly, escaping the causality-correlation question is particularly challenging in this space. It seems reasonable that projects with good general practices, including security practices, are more likely to attract funding—or at least as reasonable as the notion that funding helps projects improve their security practices. This logic is particularly salient for some of the subchecks. For example, on the one hand, general funding might allow a project’s maintainers to spend more time working on it and enable them to bring on additional maintainer support, increasing the chances that the project receives a high maintainer subcheck score. On the other, a project with a vibrant maintainer cohort seems reasonably more likely to receive funding by virtue of having administrators in the position to advocate for their project as well as to seek out and receive funds. While strictly disaggregating causation from correlation in this project is out of scope, the next section discusses future research avenues, which include methods of tackling this causation-correlation challenge.

    Still, some evidence suggests that funding predates rather than results from better security posture. Tidelift’s 2023 Open Source Development Impact Report tracks the Scorecard results of a cohort of Tidelift-supported projects over twenty months, which received direct incentives to improve some Scorecard subcheck scores. The study also compares the cohort’s results to the average score of all open-source packages in a vaguely specified peer group that did not receive the incentives treatment. The result was that, over the reporting period, the Scorecard score of the cohort of Tidelift projects steadily increased while the scores of the other open-source projects remained static or even declined.36 Moreover, the study, which involved direct incentives to improve Scorecard results, also asked maintainers their thoughts on the arrangement and determined that maintainers found the incentives either neutral or better compared to the added compliance burden, and that 55 percent of the cohort was either neutral on continuing the Scorecard work or unlikely to continue without the provided incentives. This may suggest that direct incentivization can drive security posture improvements, although the size of the examined cohort is small (twenty six) and the study’s methodology is not fully clear. Even if funding results from, rather than leads to, improved security posture, that would at least create an incentive for maintainers to improve security practices—although this does little to address obstacles to resourcing those changes in the first place.

    Relatedly, this study’s assumption that all the examined funders are general funders is an oversimplification. Open Collective, for example, serves as a funding conduit, enabling both funders and fund recipients. Some of the funders it enables include large firms such as Salesforce, Morgan Stanley, or Google, as well as other entities outside the private sector, including, to some degree, NUMFocus. Tidelift, similarly, works with maintainers to explicitly improve security practices and requires supported projects to take some measures that might boost their scorecards.37 One example is the robust association between Tidelift funding and the presence of a project security policy, particularly in the npm ecosystem. Tidelift terms require such a policy, and so long as the file is named SECURITY.md, it will satisfy the Scorecard check. In this way, Tidelift funding is more specific than general funding, but the other requirements it makes of maintainers appear minimal enough to still constitute as general funding for the purposes of this proof-of-concept study. Moreover, the question of whether funding directly incentivizes a certain security practice or simply enables project maintainers to establish a practice they intended or wished to adopt given enough time and resourcing is out of the scope of this study. More broadly, capturing the full extent of funding obligations is difficult at scale as these criteria might vary significantly from project to project, but this is an issue worthy of future research. Some projects captured by this study are likely both funded and supported by software foundations, which may impose governance obligations on projects or provide additional resourcing or requirements for security—in theory, it would be possible that only foundation support causes improved Scorecard results, but funding is a moderately strong predictor of foundation support among very popular projects.

    Finally, the OpenSSF Scorecard tool itself has several quirks relevant to this analysis.38 First, Scorecard does not necessarily capture all the security improvements that might occur during a project. For example, a security audit, paid for voluntarily by maintainers who received funding, might look for and remediate new vulnerabilities while likely not directly improving any Scorecard metric. Some practices may also meet OpenSSF criteria in principle but be missed by the automated scorecard check. For example, a project might include a strong security policy, but with a different filename than what the scorecard search is looking for, and thus not pass the subcheck. The conversion of subchecks to an overall score also likely dampens some of the stronger correlations in the overall analysis as some scores are weighted less than others.

    Implications for future research

    This pilot study suggests a variety of future research efforts that could enrich the current state of knowledge related to open source software funding and security.

    First, expanding the set of analyzed open source software ecosystems would be a clear improvement. This study covered only the PyPI and npm ecosystems. There are, of course, many others. Examining other popular ecosystems, like Maven Central (Java) or RubyGems (Ruby), is one potentially useful next step. Additionally, this study focused only on the top 1000 most popular packages in each ecosystem. Examining fewer of the most popular packages or many more packages could also yield analytical dividends, as could expanding the criteria of what constitutes “important packages” beyond top downloads.

    Second, there are many more types of funding for open source software that could be studied beyond the handful analyzed in this report. One particularly fruitful approach could be partnering with one or more organizations that provide open source software funding and using their presumably more detailed and accurate funding datasets. Building on “funder-finder” is another option, as is examining projects supported by the allocation of full-time developer hours from industry or the difference in projects supported by foundations or stewardship models versus other funding structures.

    Third, the definition of “security” employed in this study is admittedly narrow and is tied to the OpenSSF Scorecard tool. There are two broad options for expanding and improving on this definition. One is to undertake research that validates the usefulness of the Scorecard tool by examining the relationship between the checks in Scorecard and actual security outcomes. This would be an ambitious but valuable research avenue, directly tying security practice and outcome. Another is to simply leave OpenSSF scorecards behind and build new “security” datasets (without simply reverting to theoretically flawed counts of known vulnerabilities). One possible angle is to focus on the remediation time for known vulnerabilities, though collecting such a comprehensive dataset for open source software projects would be a substantial undertaking on its own.

    Fourth, several methodological alternatives could potentially provide a more reliable estimate of the actual causal effect of funding on open-source project security. One relatively straightforward option is research that measures both funding and Scorecard score over time to better approximate causation. Scorecard time-series data is already available for most projects, although gathering data on funding over time is likely an intensive process. A more ambitious approach is to create randomized controlled trials in which funding is actually allocated to projects at random. While this methodology would be the most desirable, there would be a number of operational and practical challenges, although a serious funder may be willing to consider such an expensive evaluation option. Additionally, interviewing maintainers of both funded and unfunded projects could shed light on how receipt of funding changes project practices and outcomes, both generally and those related to security.

    Fifth, why exactly general open-source software funding correlates with improved software security posture remains an open question. Through what mechanisms does this funding operate and why are they effective? Understanding these mechanisms could potentially allow the design of more effective funding programs.

    Implications for public policy

    Many readers will likely finish this study with more questions than answers. These questions, however, are key steps toward a more rigorous understanding of how policy can impact security outcomes. First, this study provides prima facie evidence that some types of general-purpose open source software funding correlate to better open source project security posture. Even funding that is not specifically aimed at security appears to correlate to better security posture, at least marginally. It seems plausible that open source software funding specifically focused on security is even more likely to have tangible security benefits.39

    Second, this study also suggests that it is indeed feasible to conduct quantitative evaluations of the effect of open source software funding on open source software security. This could move funders to adopt a new definition of “success” beyond simply considering the disbursement of funds to projects they deem important sufficient in and of itself. Instead, funders can realistically use approaches like this one to measure security improvements from their funding.

    Perhaps the most enduring contribution of this study will be to inspire a set of questions that must be implicitly answered by any organization that wants to create large-scale funding of open source software projects that leads to positive security benefits. These questions are inspired by the methodological challenges faced during this study.

    First, how should government agencies and other funders select projects? This study focused on only the most popular projects in two widely-used open source ecosystems, but these counts of download or dependency do little to account for the context of a projects use, which significantly impacts the risks that might come from its compromise. They also might not reflect the true spread of a project, which industry reliance on internal repositories might skew. How could a public program make principled decisions about project selection? One option is to solicit funding requests, though this option has the disadvantage that poorly-resourced projects might lack the means to apply. Another option is using data from government agencies about what open source software is most “critical” to their operations, although it is unclear how onerous it would be to create such a dataset given the strenuous task of identifying and updating dependencies and their contexts frequently and at massive scale.

    Second, what is the exact definition of security that such a program seeks? Is automated security posture analysis, such as OpenSSF Security Scorecard, adequate? Or is some other type of measurement required? These modest results are a reason that funders will likely not be able—at least in the short term—to adopt the high-confidence decision-making style sometimes associated with more mature public policy areas. Instead, those interested in open source software funding for security will need to adopt an intellectual style often associated with the open source movement: “rough consensus and running code.”40 Only in this way will they currently be able to make sense of the open source project funding and security landscape. Returning to the question that originally animated this study: does more money for open source software projects correlate to better security posture? Our answer: in many cases, yes, although with modest confidence. What is very clear, however, is the necessity for researchers and policymakers to look more closely at mechanisms designed to incentivize good cybersecurity practices and understand how they do (and do not) drive behavior and outcomes.

    Acknowledgments

    We thank Jennifer Melot for being the brains behind “funder-finder,” the open source software tool for measuring open source software funding that made this analysis possible. We’re also grateful to Trey Herr for nurturing this research agenda. Additionally, we thank Abhishek Arya, Aeva Black, Derek Zimmer and Amir Montazery, and Trey Herr for their valuable feedback on earlier versions of this paper.

    About the authors

    Sara Ann Brackett is a research associate at the Atlantic Council’s Cyber Statecraft Initiative. She focuses her work on opensource software security (OSS), software bills of materials (SBOMs), software liability, and software supply-chain risk management within the Initiative’s Systems Security portfolio. Brackett is currently an undergraduate at Duke University, where she majors in Computer Science and Public Policy and is currently writing a thesis on the effects of market concentration on cybersecurity. She participates in the Duke Tech Policy Lab’s Platform Accountability Project and works with the Duke Cybersecurity Leadership Program as part of Professor David Hoffman’s research team.

    John Speed Meyers is a nonresident senior fellow with the Atlantic Council’s Cyber Statecraft Initiative and the head of Chainguard Labs at Chainguard. He oversees research on open source software security, software supply chain security, and container security. He previously worked at IQT Labs, the RAND Corporation, and the Center for Strategic and Budgetary Assessments. He holds a PhD in policy analysis from the Pardee RAND Graduate School, a master’s in public affairs (MPA) from Princeton’s School of Public and International Affairs, and a BA in international relations from Tufts University..

    Stewart Scott is an associate director with the Atlantic Council’s Cyber Statecraft Initiative. He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    Some funding entities do release limited studies on their efficacy. For examples, see “Tidelift Open Source Maintainer Impact Report,” Tidelift,  June 2023, https://4008838.fs1.hubspotusercontent-na1.net/hubfs/4008838/Tidelift%202023%20OSS%20Maintainer%20Impact%20Report%20(1).pdf and Michael Scovetta and Michael Winser, “Alpha-Omega 2022 Annual Report,” OpenSSF, December 2022, https://openssf.org/wp-content/uploads/2022/12/OpenSSF-Alpha-Omega-Annual-Report-2022.pdf.
    2    Andrew L. Russel, “‘Rough Consensus and Running Code’ and the Internet-OSI Standards War,” IEEE Annals of the History of Computing (2006), vol. 28, no. 3, 48-61, https://courses.cs.duke.edu/common/compsci092/papers/govern/consensus.pdf.
    3    See any of the following: Chris Grams, “Maintainers to industry: We don’t have the time nor money to do more,” Tidelift, May 11, 2023, https://blog.tidelift.com/maintainers-to-industry-we-dont-have-the-time-nor-money-to-do-more; James Turner, “Open source has a funding problem,” January 7, 2021, https://stackoverflow.blog/2021/01/07/open-source-has-a-funding-problem/; Paul Sawers, “Why Sequoia is funding open source developers via a new equity-free fellowship,” TechCrunch, February 15, 2024, https://techcrunch.com/2024/02/15/sequoia-open-source-fellowship-developer-funding/; Nadia Eghbal, “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure,” 2016, The Ford Foundation, https://www.fordfoundation.org/work/learning/research-reports/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure/.
    4    Eghbal, “Roads and Bridges”; Stewart Scott, Sara Ann Brackett, Trey Herr, Maia Hamin, “Avoiding the success trap: Toward policy for open-source software as infrastructure,” The Atlantic Council, February 8, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/.
    5    “Critical Digital Infrastructure Research,” The Ford Foundation, 2020, https://www.fordfoundation.org/work/learning/learning-reflections/critical-digital-infrastructure-research/.
    6    “GitHub Sponsors,” https://github.com/sponsors.
    7    Open Source Security Foundation, https://openssf.org/.
    8    “Pathways to Enable Open-Source Ecosystems (POSE),” US National Science Foundation, 2024, https://new.nsf.gov/funding/opportunities/pathways-enable-open-source-ecosystems-pose.
    9    Open Collective, https://opencollective.com/.
    10    Open Technology Fund, https://www.opentech.fund/.
    11    The Sovereign Tech Fund, https://www.sovereigntechfund.de/.
    12    thanks.dev, https://thanks.dev/home.
    13    Tidelift, https://tidelift.com/.
    14    Per Ploug, “Announcing the Spotify FOSS Fund,” Spotify, April 22, 2022, https://engineering.atspotify.com/2022/04/announcing-the-spotify-foss-fund/.
    15    Most open-source software licenses include a clause distributing the software “as is,” protecting contributors from liability for issues arising from the use of the software. Some conceptions of this licensing consider it an exchange—that the price of using code with no financial cost is assuming all liability for any issues within the code. For more, see “History of the OSI,” OSI, September 19, 2006, https://opensource.org/history; “Legal Disclaimer and Notices,” Github, https://opensource.guide/notices/;  and Thomas Depierre, “I Am Not a Supplier,” Software Maxims, December 31, 2022, https://www.softwaremaxims.com/blog/not-a-supplier.
    16    Depierre, “I Am Not a Supplier.”
    17    Chris Grams, “Maintainer burnout is real. Almost 60% of maintainers have quit or considered quitting maintaining one of their projects,” Tidelift, May 25, 2023, https://blog.tidelift.com/maintainer-burnout-is-real.
    18    John Speed Meyers and Jacqueline Kazil, “How to ‘harden’ open-source software,” Binding Hook, November 7, 2023, https://bindinghook.com/articles-binding-edge/how-to-harden-open-source-software/.
    20    “Review of the December 2023 Log4j Event,” CISA Cyber Safety Review Board, July 11, 2022, https://www.cisa.gov/sites/default/files/publications/CSRB-Report-on-Log4-July-11-2022_508.pdf.
    21    “CISA Open Source Software Roadmap,” CISA, September 2023, https://www.cisa.gov/sites/default/files/2023-09/CISA-Open-Source-Software-Security-Roadmap-508c.pdf; “Fact Sheet: Biden-⁠Harris Administration Releases End of Year Report on Open-Source Software Security Initiative,” The White House, January 30, 2024, https://www.whitehouse.gov/oncd/briefing-room/2024/01/30/fact-sheet-biden-harris-administration-releases-end-of-year-report-on-open-source-software-security-initiative/.
    22    Meyers and Kazil, “How to ‘harden’ open-source software.”
    23    Open Source Technology Improvement Fund, https://ostif.org/; Alpha-Omega, https://alpha-omega.dev/; Chris Aniszczyk, “Open sourcing the Kubernetes security audit,” Cloud Native Computing Foundation, August 6, 2019, https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/.
    24    Lauren Hanford, “New data showing the impact of paying maintainers to improve open source security,” Tidelift, July 20, 2023, https://blog.tidelift.com/new-data-showing-the-impact-of-paying-maintainers-to-improve-open-source-security.
    25    Python Package Index, https://pypi.org/; npm, https://www.npmjs.com/.
    26    Nusrat Zahan et al., “What are weak links in the npm supply chain?,” ICSE-SEIP ‘22: Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice (2022): 331–340, https://dl.acm.org/doi/abs/10.1145/3510457.3513044; Duc-Ly Vu, Zachary Newman, and John Speed Meyers, “Bad Snakes: Understanding and Improving Python Package Index Malware Scanning,” ICSE ‘23: Proceedings of the 45th International Conference on Software Engineering (2023): 499–511, https://dl.acm.org/doi/abs/10.1109/ICSE48619.2023.00052.
    27    GitHub Sponsors, https://github.com/sponsors.
    28    Tidelift, https://tidelift.com/.
    29    Open Collective, https://opencollective.com/.
    30    NumFOCUS, https://numfocus.org/.
    31    Funder-finder, Georgetown Center for Security and Emerging Technology (CSET), https://github.com/georgetown-cset/funder-finder.
    32    Scorecard, OpenSSF, https://github.com/ossf/scorecard.
    34    Andrei Kashcha,Top 1000 most depended-upon packages, https://gist.github.com/anvaka/8e8fa57c7ee1350e3491#file-01-most-dependent-upon-md.
    35    Deps2repos, Open Source Software Neighborhood Watch, https://github.com/Open-Source-Software-Neighborhood-Watch/deps2repos.
    36    “Tidelift Open Source Maintainer Impact Report,” Tidelift,  June 2023, https://4008838.fs1.hubspotusercontent-na1.net/hubfs/4008838/Tidelift%202023%20OSS%20Maintainer%20Impact%20Report%20(1).pdf.
    37    Caitlin Bixby, “Lifter tasks overview,” Tidelift, November 2023, https://support.tidelift.com/hc/en-us/articles/4406288074260-Lifter-tasks-overview#h_01HFPT03434FVANGJPS3SMFRTV.
    38    Nustra Zahan, et al., “OpenSSF Scorecard: On the Path Toward Ecosystem-wide Automated Security Metrics,” arXiv, June 15, 2023,  https://arxiv.org/pdf/2208.03412.pdf.
    39    Meyers and Kazil, “How to ‘harden’ open-source software;” Hanford, “New data showing the impact of paying maintainers to improve open source security.”
    40    Russel, “Rough Consensus and Running Code’ and the Internet-OSI Standards War.”

    The post O$$ security: Does more money for open source software mean better security? A proof of concept appeared first on Atlantic Council.

    ]]>
    #BalkansDebrief – Why is France refocused on security in the Balkans? | A debrief with Alexandre Vulic https://www.atlanticcouncil.org/content-series/balkans-debrief/balkansdebrief-why-is-france-refocused-on-security-in-the-balkans-a-debrief-with-alexandre-vulic/ Mon, 15 Apr 2024 17:46:31 +0000 https://www.atlanticcouncil.org/?p=757169 In this episode of #BalkansDebrief, Europe Center Nonresident Senior Fellow Ilva Tare welcomes Alexandre Vulic. They discuss France's security concerns for the Western Balkans.

    The post #BalkansDebrief – Why is France refocused on security in the Balkans? | A debrief with Alexandre Vulic appeared first on Atlantic Council.

    ]]>

    IN THIS EPISODE

    The Western Balkans remain a security concern, particularly Bosnia and Herzegovina. Recently, France has deployed a battalion as part of the Strategic Reserve Force to assist the EUFOR mission and exercise a level of deterrence in Bosnia and Kosovo, two countries with security issues, where France wants to see progress.

    Ilva Tare, a Nonresident Senior Fellow at the Europe Center, discusses regional security issues with Alexandre Vulic, Deputy Director General for Strategic Affairs, International Security, and Arms Control at the French Ministry of Europe and Foreign Affairs.

    Why does France consider the situation in Bosnia as stable yet fragile? What are the main concerns that threaten security in the region? How do cybersecurity, disinformation, and false narratives affect the Western Balkans? And how can France counter Russia’s influence, which is exercised via proxies and nationalist forces?

    MEET THE #BALKANSDEBRIEF HOST

    The Europe Center promotes leadership, strategies, and analysis to ensure a strong, ambitious, and forward-looking transatlantic relationship.

    The post #BalkansDebrief – Why is France refocused on security in the Balkans? | A debrief with Alexandre Vulic appeared first on Atlantic Council.

    ]]>
    Atkins in E&E News by POLITICO https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-ee-news-by-politico/ Tue, 09 Apr 2024 17:54:43 +0000 https://www.atlanticcouncil.org/?p=756656 On April 8, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an E&E News by POLITICO article, in which he discussed the vulnerabilities of the US power grid, which is suffering increased state-sponsored cyberattacks.

    The post Atkins in E&E News by POLITICO appeared first on Atlantic Council.

    ]]>

    On April 8, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an E&E News by POLITICO article, in which he discussed the vulnerabilities of the US power grid, which is suffering increased state-sponsored cyberattacks.

    The post Atkins in E&E News by POLITICO appeared first on Atlantic Council.

    ]]>
    Ralby quoted in the Washington Post on the Baltimore bridge collapse https://www.atlanticcouncil.org/insight-impact/in-the-news/ralby-quoted-in-the-washington-post-on-the-baltimore-bridge-collapse/ Wed, 27 Mar 2024 13:50:17 +0000 https://www.atlanticcouncil.org/?p=753214 The post Ralby quoted in the Washington Post on the Baltimore bridge collapse appeared first on Atlantic Council.

    ]]>

    The post Ralby quoted in the Washington Post on the Baltimore bridge collapse appeared first on Atlantic Council.

    ]]>
    Break up TikTok, arm Ukraine https://www.atlanticcouncil.org/content-series/inflection-points/break-up-tiktok-arm-ukraine/ Wed, 20 Mar 2024 11:30:00 +0000 https://www.atlanticcouncil.org/?p=749993 The United States and its allies need to address both Russia’s military threats and Chinese influence operations.

    The post Break up TikTok, arm Ukraine appeared first on Atlantic Council.

    ]]>
    The US Congress should force the sale of TikTok or ban the app, and it should pass its long-delayed aid package for Ukraine. Just as important, it should signal to American voters that both represent the front lines in the strategic battle for the global future.

    What’s surprising is that the same House Republican minority that has blocked Ukraine funding for more than five months hasn’t made this connection. What might help this group is a close reading of the recently released “Annual Threat Assessment of the US Intelligence Community”—and Peggy Noonan’s latest Wall Street Journal column.

    News reports have focused public attention on the new intelligence report primarily because of its assessment of Israeli Prime Minister Benjamin Netanyahu’s “viability as a leader” as being “in jeopardy.” Even more important, however, are the links it draws between regional conflicts in Europe and the Middle East and our unfolding, generational contest with China to shape the future.

    “During the next year,” the assessment explains, “the United States faces an increasingly fragile global order strained by accelerating strategic competition among major powers, more intense and unpredictable transnational challenges, and multiple regional conflicts with far-reaching implications.”

    Regarding Beijing, the assessment underscores China’s growing efforts online, resembling the long-standing Moscow playbook, “to exploit perceived US societal divisions . . . for influence operations.” That includes experimentation with artificial intelligence. TikTok accounts run by a Chinese government propaganda arm “reportedly targeted candidates from both political parties during the US midterm election cycle in 2022,” it notes, something the Atlantic Council’s Digital Forensic Research Lab was the first to show through an open-source investigation.

    In a valuable new report, the Atlantic Council’s own analysts stopped short of calling for a breakup or ban of TikTok as a means of addressing the platform’s threats to US national security. “TikTok: Hate the Game, Not the Player” argues that an exclusive focus on the Chinese app overlooks “broader security vulnerabilities in the US information ecosystem.”

    Peggy Noonan makes a compelling case for why the United States should nevertheless target TikTok. “It uses algorithms to suck up information about America’s 170 million users, giving it the potential to create dossiers,” she writes. Federal Bureau of Investigation Director Christopher Wray, Noonan adds, has warned that China “has the ability to control software on millions of devices in the US.”

    That brings me to Ukraine.

    It’s difficult to gather hard evidence to illustrate how the Chinese government is deploying the TikTok weapon, yet the existing and potential dangers were sufficient to prompt a bipartisan House vote against it of 352-65, unifying members of Congress such as Democrat Nancy Pelosi and Republican Elise Stefanik, who are more often poles apart.

    By comparison, the evidence of Russian President Vladimir Putin’s murderous intentions is incontestable. Russian forces are advancing, and US dithering is costing Ukrainian lives. It’s also encouraging an increasingly close autocratic partnership built on the shared belief that now is the moment to test US and Western staying power and resolve.

    “Russia’s strengthening ties with China, Iran, and North Korea to bolster its defense production and economy are a major challenge for the West and its partners,” says the new report by the US intelligence community. On Tuesday, Reuters reported that Putin will visit Chinese leader Xi Jinping in May, building upon what he has called their “no limits” partnership.

    Weeks ago, a large Senate majority voted in favor of an aid package that would bring $60 billion in aid to Ukraine alongside support for Israel and Taiwan. A similar House majority would support that, but thus far a small Republican minority in the lower chamber has blocked a vote. This needs to be fixed quickly either by Speaker Mike Johnson permitting a floor vote, or through a discharge petition signed by a bipartisan majority.

    With the stakes of such a historic nature, the United States and its allies should address both Russia’s military threats, with Chinese support, and Chinese influence operations, with Russian inspiration.

    It’s not one or the other—but both. And now.


    Frederick Kempe is president and chief executive officer of the Atlantic Council. You can follow him on Twitter: @FredKempe.

    This edition is part of Frederick Kempe’s Inflection Points Today newsletter, a column of quick-hit insights on a world in transition. To receive this newsletter throughout the week, sign up here.

    The post Break up TikTok, arm Ukraine appeared first on Atlantic Council.

    ]]>
    Atkins in CyberScoop https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-cyberscoop/ Sat, 16 Mar 2024 19:27:06 +0000 https://www.atlanticcouncil.org/?p=752696 On March 15, IPSI Nonresident Senior Fellow Victor Atkins was quoted in a Cyberscoop article discussing industry complacency as Chinese hacking operations become increasingly threatening.

    The post Atkins in CyberScoop appeared first on Atlantic Council.

    ]]>

    On March 15, IPSI Nonresident Senior Fellow Victor Atkins was quoted in a Cyberscoop article discussing industry complacency as Chinese hacking operations become increasingly threatening.

    The post Atkins in CyberScoop appeared first on Atlantic Council.

    ]]>
    Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. https://www.atlanticcouncil.org/blogs/new-atlanticist/will-the-us-crack-down-on-tiktok-six-questions-and-expert-answers-about-the-bill-in-congress/ Wed, 13 Mar 2024 23:42:14 +0000 https://www.atlanticcouncil.org/?p=747735 The US House has just passed a bill to force the Chinese company ByteDance to either divest from TikTok or face a ban in the United States.

    The post Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. appeared first on Atlantic Council.

    ]]>
    The clock is ticking. On Wednesday, the US House overwhelmingly passed a bill to force the Chinese company ByteDance to divest from TikTok, or else the wildly popular social media app would be banned in the United States. Many lawmakers say the app is a national security threat, but the bill faces an uncertain path in the Senate. Below, our experts address six burning questions about this bill and TikTok at large.

    1. What kind of risks does TikTok pose to US national security?

    Chinese company ByteDance’s ownership of TikTok poses two specific risks to US national security. One has to do with concerns that the Chinese Communist Party (CCP) could use its influence over the Chinese owners to use TikTok’s algorithm for propaganda purposes. Addressing this security concern is tricky due to legal protections for freedom of expression. The other risk, and the one addressed through the current House legislation, has to do with the ability of the CCP to use Chinese ownership of TikTok to access the massive amount of data that the app collects on its users. This could include data on everything from viewing tastes, to real-time location, to information stored on users’ phones outside of the app, including contact lists and keystrokes that can reveal, for example, passwords and bank activity.

    Sarah Bauerle Danzman is a resident senior fellow with the Economic Statecraft Initiative in the Atlantic Council’s GeoEconomics Center.

    This debate is not over free speech or access to social media: The question is fundamentally one of whether the United States can or should force a divestment of a social media company from a parent company (in this case ByteDance) if the company can be compelled to act under the direction of the CCP. We have to ask: Does the CCP have the intent or ability to compel data to serve its interests? There is an obvious answer here. We know that China has already collected massive amounts of sensitive data from Americans through efforts such as the Office of Personnel Management hack in 2015. Recent unclassified reports, including from the Office of the Director of National Intelligence, show the skill and intent of China to use personal data for influence. And the CCP has the legal structure in place to compel companies such as ByteDance to comply and cooperate with CCP requests.

    Meg Reiss is a nonresident senior fellow at the Scowcroft Strategy Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security.

    2. Are those risks unique to TikTok?

    TikTok is not an unproblematic platform, and there are real and significant user risks that could pose dangers to safety and security, especially for certain populations. However, focusing on TikTok ignores broader vulnerabilities in the US information ecosystem that put Americans at risk. An outright ban of TikTok as currently proposed—particularly absent clearer standards for all platforms—would not meaningfully address these broader risks and would in fact potentially undermine US interests in a much more profound way.

    As our recent report outlines in detail, a ban is unlikely to achieve the intended effect of meaningfully curbing China’s ability to gather sensitive data on Americans or to conduct influence operations that harm US interests. It also may contribute to a global curbing of the free flow of data that is essential to US tech firms’ ability to innovate and maintain a competitive edge.

    Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab.

    Some have argued that TikTok, while on the aggressive end of the personal data collection spectrum, collects similar data to what other social media companies collect. However, the US government would counter with two points: First, TikTok has a history of skirting data privacy rules, such as those limiting data collection on children and those that prevent the collection of phone-specific identifiers called MAC numbers, and therefore the company cannot be trusted to handle sensitive personal data in accordance with the law. And second, unlike other popular apps, TikTok is ultimately beholden to Chinese regulations. This includes the 2017 Chinese National Intelligence Law that requires Chinese companies to hand over a broad range of information to the Chinese government if asked. Because China’s legal system is far more opaque than the United States’, it is unclear if the US government or its citizens would even know if the Chinese government ever asked for this data from TikTok. While TikTok’s management has denied supplying the Chinese government with such data, insider reports have uncovered Chinese employees gaining access to US user data. In other words, the US government has little reason to trust that ByteDance is keeping US user data safe from the CCP.

    —Sarah Bauerle Danzman

    3. What does the House bill actually do?

    There are two important, related bills. The one that passed the House today is the Protecting Americans from Foreign Adversary Controlled Applications Act, which forces divestment. It is not an outright ban, and it is intended to address the real risk of ByteDance—thus TikTok—falling under the jurisdiction of China’s 2017 National Intelligence Law, which compels Chinese companies to cooperate with the CCP’s requests. However, divestment doesn’t completely solve for the additional potential risks of the CCP using TikTok in a unique or systemic way for data collection, algorithmic tampering (e.g. what topics surface or don’t surface to users), or information operations (e.g. an influence campaign unique to TikTok as opposed to on other platforms as well). Second, the Protecting Americans’ Data from Foreign Adversaries Act, which cleared a House committee last week, more directly addresses a broader risk of blocking the Chinese government’s access to the type of data that TikTok and many other social media platforms collect on the open market. The former without the latter is an incomplete approach to protecting Americans’ data from the CCP—and even the two combined falls short of a federal data privacy standard.

    Graham Brookie is vice president and senior director of the Digital Forensic Research Lab.

    There is no question China seeks to influence the American public and harvests large amounts of data on American citizens. As our recent report illuminates however, the Chinese state’s path to these goals depends very little on TikTok.

    Today’s actions in the House underscore the disjointed nature of the US approach to governing technology. Rather than focus on TikTok specifically, it would be both legally and geopolitically wiser to pass legislation that sets standards for everyone, and not just one company. That could mean setting standards for what actions or behavior by any social media company would be unacceptable (for example on the use of algorithms or collection and selling of data). Or Congress could focus on prohibiting companies that are owned by states proven to have conducted hostile actions toward US digital infrastructure to operate in the United States. That would certainly include TikTok (and many other companies). This bill takes a halfway approach, both tying itself explicitly to TikTok owner ByteDance and hinting that it could apply to “other social media companies.”

    Rose Jackson is the director of the Democracy and Tech Initiative at the Digital Forensic Research Lab.

    The recently passed House bill, if it were to become law, would create a pathway to force the divestment of Chinese ownership in TikTok or ban the app from app stores and web hosting sites. Unlike previous attempts by the Trump administration to ban the app outright or force a divestment through the Committee on Foreign Investment in the United States, the Protecting Americans from Foreign Adversary Controlled Applications Act would not just affect TikTok. Instead, the legislation would create a process through which the US government could designate social media apps that are considered to be under the control of foreign adversaries as national security threats. Once identified as threats, the companies would have 180 days to divest from the foreign ownership or be subject to a ban.

    —Sarah Bauerle Danzman

    4. What would be some of the global ripple effects of a TikTok ban?

    The United States has always opposed efforts by authoritarian nations seeking to build “great firewalls” around themselves. This model of “cyber sovereignty” sees the open, interoperable, and free internet as a threat, which is why countries like China already have a well-funded strategy to leverage global governance platforms to drive the development of a less open and more authoritarian-friendly version. A TikTok ban would ironically benefit authoritarian governments as they seek to center state-level action (over multi-stakeholder processes) in internet governance. TikTok should not lead the United States to abandon its longstanding commitment to the values of a free, open, secure, and interoperable internet.

    A ban could generate more problems than it would solve. What the United States should consider instead is passing federal privacy laws and transparency standards that apply to all companies. This would be the single most impactful way to address broader system vulnerabilities, protect US values and commitments, and address the unique risks related to TikTok’s Chinese ownership, while avoiding the potential significant downsides of a ban. 

    Kenton Thibaut

    5. What do you make of TikTok’s response, particularly in pushing its users to flood Capitol Hill with calls?

    Members of Congress were rightfully alarmed by TikTok’s use of its platform to send push notifications encouraging users to call their representatives. However, Uber and Lyft used this exact same tactic in California when trying to defeat legislation that would have required it to provide benefits to its drivers. If we try to solve “TikTok” and not the broader issue TikTok is illuminating, we will keep coming back to these same issues over and over again. 

    —Rose Jackson

    6. How is China viewing this debate?

    The CCP has a tendency to throw a lot of spaghetti at the wall in an attempt to make its arguments, in this case that the divestment of TikTok from its Chinese parent company ByteDance is unnecessary. When the CCP has justified the internment of Uyghurs, it has thrown out everything from defending its repression based on terrorist beliefs across the population to claiming that it was just helping with social integration and developing work programs. The CCP has already made claims that the divestment would cause investors to lose faith in the US market and that it shows a fundamental weakness and abuse of national security. Expect many different versions of these arguments and more. But all the anticipated pushback will be focused on diverting the public argument away from the fundamental concern: The Chinese government can, under law, force a Chinese company to share information. 

    —Meg Reiss

    The post Will the US crack down on TikTok? Six questions (and expert answers) about the bill in Congress. appeared first on Atlantic Council.

    ]]>
    Kramer authors op-ed on the role of Congress in deterring Chinese cyber attacks https://www.atlanticcouncil.org/insight-impact/in-the-news/kramer-on-role-of-congress-in-deterring-chinese-cyber-attacks/ Tue, 05 Mar 2024 21:44:00 +0000 https://www.atlanticcouncil.org/?p=751700 Kramer advocates for US action against Chinese cyber threats, emphasizing their risk to economic and infrastructure security.

    The post Kramer authors op-ed on the role of Congress in deterring Chinese cyber attacks appeared first on Atlantic Council.

    ]]>

    On March 4, Scowcroft Center for Strategy and Security Distinguished Fellow and Board Director Franklin D. Kramer published an op-ed in The National Interest on the role of Congress in deterring Chinese cyber attacks.

    In the article, Kramer highlights the serious threats Chinese cyberattacks pose to US economic security and critical infrastructure. It suggests four measures: providing cybersecurity tax credits to support small businesses, academia, and infrastructure; leveraging AI to improve security software; creating a corps of private-sector cybersecurity providers for wartime; and addressing the cybersecurity workforce shortage to enhance national resilience.

    China’s determined cyber attacks on the United States call for significant actions to enhance national resilience both now and in the event of conflict.

    Franklin D. Kramer

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    The post Kramer authors op-ed on the role of Congress in deterring Chinese cyber attacks appeared first on Atlantic Council.

    ]]>
    Experts react: What Biden’s new executive order about Americans’ sensitive data really does https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-what-bidens-new-executive-order-about-americans-sensitive-data-really-does/ Thu, 29 Feb 2024 19:05:56 +0000 https://www.atlanticcouncil.org/?p=742382 US President Joe Biden just issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” Atlantic Council experts share their insights.

    The post Experts react: What Biden’s new executive order about Americans’ sensitive data really does appeared first on Atlantic Council.

    ]]>
    It’s a personal matter. On Wednesday, US President Joe Biden issued an executive order restricting the large-scale transfer of personal data to “countries of concern.” The order is intended to prevent genomic, health, and geolocation data, among other types of sensitive information, from being sold in bulk to countries such as China, which could use it to track or blackmail individuals. Can Biden’s directive stop sensitive data from slipping into the wrong hands? And what are the implications for privacy and cybersecurity more broadly? Below, Atlantic Council experts share their personal insights.

    Click to jump to an expert analysis:

    Rose Jackson: The absence of a federal US data protection law threatens national security

    Kenton Thibaut: The focus on data brokers targets a key vulnerability in the US information ecosystem

    Graham Brookie: An essential, baseline step for shoring up US data security

    Sarah Bauerle Danzman: It will be essential to sort out how new rules fit in with the current regulatory structure

    Justin Sherman: Congress must get involved to tame data brokerage over the long term

    Maia Hamin: A welcome step, but beware of data brokers exploiting backdoors and work-arounds


    The absence of a federal US data protection law threatens national security

    The United States desperately needs a federal privacy or data protection law; the absence of one threatens our national interest and national security. While we wait for Congress to take the issue seriously, the Biden administration seems to be looking to leverage its executive authorities to take action where it can. Wednesday’s executive order should be understood in that context. The order takes particular aim at what are called data brokers—a lucrative market most Americans have likely never heard of. These companies quietly buy up troves of information collected through social media and credit card companies, consumer loyalty programs, mobile phone providers, health tech services, and more, then sell the combined files to whoever wants it. That means that currently, the Chinese intelligence service doesn’t need an app like TikTok to collect data on US citizens; they can just buy it from a US company. So while this executive order won’t address all of the issues related to this unregulated and highly extractive market, it will close an obvious and glaring national security gap by barring the sale of such data to foreign adversaries.

    Another significant piece of the executive order is its focus on genomic data as a particularly risky category. Genomic data are all but banned from provision to adversarial nations in any form. While this is a good step, the administration does not have the authority to ban the sale of genomic data to non-adversarial nations or domestically. This means there is a high likelihood that absent congressional or other action, the market for US genomic data will only grow. This underscores an uncomfortable reality when it comes to tech policy; there is no separating the foreign and domestic. Markets grow where there is incentive, and our continued failure in the United States to meaningfully grapple with how we want tech to be governed means we are choosing not to have input on the direction our own world-changing innovations will take.

    Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the US State Department.


    The focus on data brokers targets a key vulnerability in the US information ecosystem

    While further details are still being developed (including rightsizing thresholds for what constitutes “bulk data”), the executive order is a welcome development for those concerned about data security. The focus on data brokers—as opposed to targeting a single app, like TikTok—targets a key vulnerability in the US information ecosystem. Data brokers compile detailed profiles of individuals—including real-time location data—from various sources, including social media, credit card companies, and public records. This creates vulnerabilities for espionage and exploitation by foreign adversaries. That means while the national security community has raised concerns over the Chinese government’s ability to use TikTok to access data on Americans, it pales in comparison to what China already accesses through hacking and legal purchases via US data brokers. 

    Data security threats extend beyond individual apps to include data brokers and the broader lack of regulation in the tech industry. To protect privacy and national security, stronger regulations and transparency measures are needed, and the United States should pass comprehensive federal privacy legislation. However, in the interim, the administration has done what it can with this executive order to help stem the tide of Americans’ sensitive personal data flowing abroad. 

    Kenton Thibaut is a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).


    An essential, baseline step for shoring up US data security

    The executive order preventing the sale of bulk data to adversarial countries may sound technical, bureaucratic, and even opaque. However, it is one of the most essential baseline steps the United States needs to take in shoring up security in an era in which technology is at the forefront of geopolitical competition. Enormous amounts of information about Americans is bought and sold on the open market every single day. This measure is intended to make it harder for specific adversarial countries to buy billions of data points about citizens legally.

    As many other more challenging technical issues arise—such as how to govern the rapid development of artificial intelligence—a standard for data privacy for every single person in the United States is sorely needed. Data privacy is the foundation for establishing a rights-respecting and rights-protecting approach in an era of both rapid technological change and geopolitical competition. The executive order is an important step that can be built on. The policy is a threat-based approach to securing citizens’ data and information from the worst foreign actors. Congress can strengthen this approach and address the limitations of an executive order by passing legislation for a strong federal data privacy standard that not only protects Americans’ data from foreign adversaries, but also provides Americans protection in general.

    Graham Brookie is the vice president for technology programs and strategy, as well as senior director, of the Atlantic Council’s Digital Forensic Research Lab. He previously served in various roles over four years at the White House National Security Council.


    It will be essential to sort out how new rules fit in with the current regulatory structure

    With its latest executive order and related advance notice of proposed rulemaking, the Biden administration is trying to find transparent, clearly defined legal channels to address a specific set of national security challenges. These are the challenges that arise from the unmitigated and largely untracked commercial world of bulk data transfer to entities owned by, controlled by, or subject to the jurisdiction or direction of potential adversaries. The administration’s proposed rules demonstrate its seriousness of purpose in attempting to craft rules that are narrow in scope and application, while also anticipating and countering potential circumvention techniques of untrusted actors. They are also complicated. For example, they seek to stand up a new licensing line of effort with financial sanctions and export licenses based on a model from the Department of Justice and on the experiences of the Office of Foreign Assets Control and the Bureau of Industry and Security. This complexity raises questions about the feasibility and costs of compliance and enforcement.

    Some parts of the proposed rules overlap significantly with existing regulatory structure, and especially with the Committee on Foreign Investment in the United States (CFIUS). In particular, the regulation will cover investments by covered persons and entities in US businesses that collect covered data, a class of transactions typically handled by the CFIUS. It will be important for the government to clearly articulate how the new rules and the different government entities involved will relate to each other, with a goal toward reducing rather than exacerbating regulatory complexity that leads to higher compliance costs and confusion. The proposed rules suggest that the CFIUS might take precedence, but the CFIUS is a costly and time-intensive case-by-case review that is supposed to be a tool of last resort. It would be more efficient and probably more effective to first apply investment restrictions based on these new rules and preserve case-by-case CFIUS review only in situations in which the new data security prohibitions and restrictions do not adequately address national security risks associated with a particular transaction. Doing so would reduce pressure on the CFIUS’s ever-growing caseload and would provide businesses with bright lines rather than black boxes.

    Sarah Bauerle Danzman is a resident senior fellow with the GeoEconomics Center’s Economic Statecraft Initiative. She is also an associate professor of international studies at Indiana University Bloomington where she specializes in the political economy of international investment and finance.


    Congress must get involved to tame data brokerage over the long term

    Data brokerage is a multi-billion-dollar industry comprising thousands of companies. Foreign governments such as China and Russia obviously have many ways to get sensitive data on Americans, from hacking to tapping into advertising networks—and one of those vulnerabilities lies in the data brokerage industry.

    Data brokers collect and sell data on virtually every single person in the United States, and that includes data related to government employees, security clearance-holding contractors, and active-duty military personnel. My team at Duke’s Sanford School of Public Policy published a detailed study in November 2023, where we purchased sensitive, individually identified, and nonpublic information such as health conditions, financial information, and data on religion and children about active-duty US military servicemembers from US data brokers—with little to no vetting, and for as cheap as twelve cents per servicemember. It would be easy for the Chinese or Russian governments to set up a website and purchase data on select Americans to blackmail individuals or run intelligence operations. With some datasets available for cents on the dollar per person, or incredibly granular datasets available for much more, it may be considerably cheaper than the cost of espionage for foreign governments to simply tap into the unregulated data brokerage ecosystem and buy data.

    Of course, an executive order isn’t going to fix everything. At the end of the day, the fact that data brokers gather and sell Americans’ data at scale, without their knowledge, often without controls, is a congressional problem—and has signified a major congressional failure to act. Federal and state legislation is what will ultimately best tackle the privacy, safety, civil rights, and national security risks from the data brokerage industry. But that doesn’t mean the executive branch shouldn’t act in the meantime. If the executive branch can introduce even a few additional regulations for data brokers to better vet their customers or to stop selling certain kinds of data to certain foreign actors, that’s an important improvement from the status quo.

    Over the coming months, important challenges for the executive branch will be defining terms such as “data broker,” ensuring that covered data brokers are required to properly implement “know your customer” requirements, and figuring out ways to manage regulatory compliance in light of the size and operating speed of the data brokerage industry.

    Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative and founder and CEO of Global Cyber Strategies.


    A welcome step, but beware of data brokers exploiting backdoors and work-arounds

    The commercial data broker ecosystem monetizes and sells Americans’ most sensitive data, often piggybacking off of invasive ad-tracking infrastructure to vacuum up and auction off specific information about Americans, such as their location history or mental health conditions. This executive order is a useful step toward making it more difficult for specific adversary countries to purchase that data, and it makes clear sense from a national security perspective.

    However, while this market remains (otherwise) largely unregulated and flourishing in the United States, in the absence of a comprehensive privacy law or other restrictions on data brokering, Americans’ privacy will continue to suffer. Leaving this market intact domestically runs the risk of opening up potential backdoors and work-arounds to the limitations in the executive order. It also—perhaps not coincidentally—leaves the door open for the US government itself to continue purchasing and using commercial data in its own intelligence programs. 

    That’s all to say, cracking down on data brokers is always welcome, so it’s great to see this order (and recent action from the Federal Trade Commission as well). Next, let’s challenge Congress and the executive to push it further.

    Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab.

    The post Experts react: What Biden’s new executive order about Americans’ sensitive data really does appeared first on Atlantic Council.

    ]]>
    The 5×5—Alumni perspectives on Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-alumni-perspectives-on-cyber-9-12-strategy-challenge/ Wed, 28 Feb 2024 17:17:00 +0000 https://www.atlanticcouncil.org/?p=818163 Alumni of Cyber 9/12 Strategy Challenge share their experiences, and discuss the impact of such simulated exercises to prepare for real life cyber attacks.

    The post The 5×5—Alumni perspectives on Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>
    mulation from the Atlantic Council’s Cyber Statecraft Initiative. It is designed to challenge students to respond to a realistic simulation of international cyber crisis, analyze the threat it poses to national, international, and private-sector interests, and provide recommendations on the best course of action to mitigate the crisis. It was launched in Washington, DC in 2012, and has since expanded its reach nationally and globally, with competitions across the United States, as well as in London, Dundee, Geneva, Paris, Santo Domingo, Tashkent, and Cape Town.  

    Entering its twelfth year, the Challenge provides mentorship, recruiting, and networking opportunities for thousands of students a year. For this month’s 5×5 edition, we invited five Cyber 9/12 alumni to share their experiences, and discuss the impact of such simulated exercises to prepare for real life incident response in case of cyber attacks and the necessity to strike the right balance between government and private sector actions to tackle cyber crisis. Our alumni have also shared advice for prospective competitors, and their insight into different aspects of the competition. 

    Find out more about the Cyber 9/12 Strategy Challenge at our website here.  

    1. Why did you choose to participate in Atlantic Council’s Cyber 9/12 Strategy Challenge, and if you could do it again, given your current work or research experience, would you and why? 

    Tionge Mughogho (she/her/hers), Cyber Security Specialist, National Computer Emergency Response Team, Malawi Communications Regulatory Authority; Winner, 2021 and 2022 Cape Town Cyber 9/12 Strategy Challenge  

    “The Cyber 9/12 strategy challenge was my first experience with cybersecurity competitions, and I decided to participate because I saw great potential for my professional growth through enhanced critical thinking, creativity as well as networking opportunities. If I could, I would participate in the competition again as given my current position working with the National CERT, I have more experience and knowledge on how the simulations in the competition are handled in real life.”

    Grant Versfeld (he/him/his), Security Engineer, Cloud Threat Intelligence, Google; Winner, 2021 ATX Cyber 9/12 Strategy Challenge  

    “I participated because I wanted to develop my policy knowledge and strengthen my ability to communicate technical topics to all audiences. As someone who now does research in the cybersecurity industry, I would definitely compete again because the Cyber 9/12 challenge offers participants a unique opportunity to learn from leaders across the policy and security industries. Whether it’s someone’s first time joining, or they are a frequent competitor, I think everyone can benefit from briefing a panel of experts and then receiving feedback in such a supportive setting.

    Frances Schroeder (she/her/hers), Congressional Innovation Fellow, TechCongress; Winner, 2023 DC Cyber 9/12 Strategy Challenge  

    “I initially participated in Cyber 9/12 because the format of the simulated national security crisis was incredibly intriguing to me. In my final year at Stanford, I was deep into the post-grad job application mindset, so I viewed the competition as a valuable opportunity to stress test whether I wanted to pursue national security and tech policy. I similarly viewed it as a chance to meet experts in the cyber and national security world to form valuable mentor relationships. If I were still in school, I would definitely participate in Cyber 9/12 again. I’d especially love to be able to participate in new versions of the competition around the world, including the new Trust and Safety-focused competition.”

    Gabriel Cajiga (he/him/his), Associate Attorney, Cajigas & Co.; Vice President, Panamerican Institute of Law and Technology (IPANDETEC); Semi-finalist, 2021 ATX and 2021 Geneva Cyber 9/12 Strategy Challenge  

    “I would definitely do it again! I chose to participate because I wanted to put into motion what I was learning in cybersecurity law with Prof. Chesney at Texas Law School. I ended up in an incredible multidisciplinary team and I ended up even learning about the military, public policy, public relations, and tech aspects during a cyber crisis. Shout out to the DSM-5 team!”

    Nitansha Bansal (she/her/hers), Assistant Director, Cyber Statecraft Initiative, Atlantic Council; Semi-finalist, 2021 NYC and 2022 DC Cyber 9/12 Strategy Challenge  

    “There were multiple factors which led to my decision to participate in my first Cyber 9/12- the 2021 NYC Cyber 9/12 Strategy Challenge. As a first-year policy student at Columbia University, I was eager to learn about cybersecurity and tech policy but had little to no idea of the subjects. The Digital and Cyber Group at Columbia SIPA was a well-known student group on campus, and helped to organize the NYC Cyber 9/12. This is how I had my rendezvous with the competition, and decided to participate as a litmus test of a career in the field. It was a way for me to understand what a cybersecurity expert does, who the stakeholders in case of a crisis are, what role an individual with a background in policy can play, and what is the skillset I must strive to build for this career. Even after three years since my first Cyber 9/12 and as a working professional, I would like to participate in the competition to challenge my understanding of different topics related to cybersecurity, and test my briefing, research, and writing skills.” 

    2. How did your team balance the role of government intervention and private sector action in your policy recommendations? Would you strike the same balance knowing what you do today?   

    Tionge Mughogho 

    “In my team’s recommendations, we balanced the role of government intervention and private sector action by keeping collaboration, coordination and communication lines open between the two sectors. Of course, the extent of the intervention and action from the government and private sector respectively was dependent on the severity and evidenced impact of the cyber attack at hand on both sectors. Yes, with the knowledge I have today I would suggest the same method with a greater emphasis on collaborations and coordination even before a cyberattack (prevention) rather than only when a cyberattack has occurred.”

    Grant Versfeld 

    “Our recommendations called for government agencies to lead response actions while working alongside industry partners to implement those plans. Having seen first-hand how government and organizations can work together to mitigate vulnerabilities, I would strike a similar balance because it’s important for stakeholders on both sides to work together to take advantage of each other’s expertise.” 

    Frances Schroeder 

    “In our preparation for each competition, we mapped out the actors involved in the crisis and the entities available and well-suited to respond. This included exploring the ways in which we could leverage both public and private sector action to respond to the crisis. Based in Silicon Valley, my team was especially interested in these public-private partnerships. Throughout each competition, as the crisis escalated and more immediate actions were necessary, we focused more on government intervention. In briefing the ‘National Security Council,’ we viewed government action as the most immediate lever of action available to our principals to respond to the crisis. I would not necessarily change this approach, but I think the importance of public-private partnerships in this space cannot be overstated.” 

    Gabriel Cajiga 

    “This is still a challenge that is highlighted more when you realize there are clear differences in how the US, the EU, and Latin America approach a crisis– the regions where I competed. At the time, we tried to balance the role of government and private sector by always promoting collaborative policies among those two, and on the international stage.” 

    Nitansha Bansal

    “As a policy student, I was taught the importance of governments while as an economist, the first thing I learnt was the self-correcting mechanism of markets. For Cyber 9/12, I had to bring both aspects of my being together, and understand the significance of public-private partnerships and collaboration. It was not easy to navigate what channels of communication exist between the government and the private entities involved so we sometimes suggested the establishment of new channels, and at other times, recommended the re-activation of the existing ones. After working in the industry for some time, I think I would now assign a larger role to the private sector including more responsibilities and accountability. I say this mostly because I understand the ownership of infrastructure better. However, my response could vary based on the region/nation.” 

    3. Some critics argue that simulations and/or tabletop exercises don’t accurately replicate real time urgency. What do you think, and how did your team perform well under pressure?  

    Tionge Mughogho 

    “I believe simulations and tabletop exercises offer valuable preparedness and skill development opportunities, and to address the real-world urgency factor the Cyber 9/12 challenge uniquely incorporates escalation with a limited time of response in its second part of the challenge, simulating real-life pressure and urgency. My team and I, participating three times, dedicated sleepless nights to addressing the escalated scenario, relying on quick decision-making, collective expertise, and adaptive strategies to navigate the challenge and achieve our objectives effectively. With this, our team performed well under the pressure of the escalated scenario in the second day of the competition.”

    Grant Versfeld 

    “While it is difficult to fully capture the stakes that might be at play in real life, I think the increasingly short timeframes that teams are given simulate some of the urgency of a real-life incident. My team performed well even as the pressure mounted by staying calm and focusing on the task at hand – this was crucial for the finals when we had only 15 minutes to prepare our briefing. We attributed this strategy’s success to our preparation, helpful mentors, and trust in one another that each person would execute on their focus area in the final briefing.”

    Frances Schroeder 

    “Throughout my experience competing in Cyber 9/12, the simulated national security crises felt incredibly urgent and high pressure. With a massive amount of intelligence injected throughout the weekend with short turnaround periods, the stakes felt high. By participating in the competitions, I honed valuable skills — briefing principals, outlining courses of action, and making specific recommendations based on your expertise — that I exercise daily in my career now. These are tangible skills that few academic experiences prepare students for and that illustrate the value of simulations and tabletop exercises.”

    Gabriel Cajiga 

    “I understand the critique that it’s easy to not ponder upon the actual stakes (real feedback from a judge), but simulations help prepare and put to test your ‘first-aid kit.’ For a student with probably no crisis management experience, this is a great way to level up the sense of urgency a crisis brings and know how to work as a team in challenging hours (at 5 AM). As for the team, we kept in mind not having all of us research the same topic. Time and resource management is essential.” 

    Nitansha Bansal

    “I believe simulations are a wonderful way of preparing our brains to perform systematically under pressure- when everything becomes chaotic, and nothing seems to make sense. If my brain has dealt with a problem earlier, my muscle memory will help me deal with that problem later in a more efficient way. I mean, what better way to deal with a crisis than to deal with it without losing all your money, crashing your stock value, or breaking the entire infrastructure! And I believe the escalatory nature and structure of a Cyber 9/12 scenario compels competitors to think swiftly but systematically- which is the most important skill when faced with a real cybersecurity crisis. In our case, my team and I held extensive discussions about our strengths to understand how we can contribute to the team, and accordingly divided our tasks throughout the competition. This helped us capitalize on each other’s skillsets, and perform better under pressure, especially during the Q&A sessions.” 

    4. Did your team come from diverse backgrounds? How did that contribute to the way you approached the competition?  

    Tionge Mughogho 

    “Our team consisted of members from the same field but with diverse specializations such as cybersecurity, networking, and forensics. While this diversity greatly aided us in formulating technical cybersecurity recommendations, we encountered challenges in areas relating to law, politics, and the military. However, these challenges provided valuable learning opportunities, fostering a deeper understanding of cyber laws and the impact of political and diplomatic considerations on cyber attack response strategies.”

    Grant Versfeld

    “Yes, we built our team with members from a variety of academic and social backgrounds to expand the types of knowledge that each team member brought to the competition. This encouraged us to be up front about our strengths and weaknesses so that we could best support each other, and it also helped us build meaningful friendships with like-minded peers who we might not have otherwise met. As we approached the competition, we spent time teaching one another about our areas of expertise, which proved useful during the Q&A since everyone had a stronger knowledge base to rely upon.”

    Frances Schroeder 

    “As an all-female group, my team, the Cyber Super Girls, approached the competition with our own unique perspectives and experiences. We were the first team to participate from our university in many years. As a result, we were all new to the competitions, which meant that we had no preconceived notions about how we were supposed to approach Cyber 9/12. I believe this allowed us to be creative and offered us a unique approach based on each of our individual previous experiences.”

    Gabriel Cajiga  

    “This might be one of the most important takeaways of the competition. I fortunately had a very diverse team (see my answer to question 1), and if we felt we were missing knowledge on a topic the challenge was bringing we sought advice from experts from our network! It is essential to understand how other professions contribute to solving a crisis.” 

    Nitansha Bansal

    “Yes, during both the competitions, my teams had representation from different educational and professional backgrounds, nationality, and sex. This allowed us to look at the scenario holistically- from different angles, and hence provide policy recommendations which covered diplomatic, technical, regional, and national level, and short term and long term actions. This also meant that we excelled at different skills- reading lengthy government documents, drafting written statements, designing team logo and decision documents, writing presentation speech, and answering questions confidently- so we could effectively adopt the ‘divide and conquer strategy’ to our team’s benefit.” 

    5. What is one piece of advice you wish you knew before you competed in your first Cyber 9/12? What is the advice you would give to future competitors?   

    Tionge Mughogho 

    “Before competing in my first Cyber 9/12, I wish I had realized that as much as the competition is centered on cybersecurity issues simulations, every aspect of the scenario is important for an effective policy recommendation brief and response. For future competitors, I advise prioritizing a holistic understanding of policy, legal, and geopolitical implications alongside technical aspects of the scenario, as they profoundly influence crisis response strategies. Additionally, practicing time management and maintaining composure under pressure are essential for effectively navigating the challenges presented during the competition.”

    Grant Versfeld

    “As a first-time competitor, I remember worrying that our policy recommendations were off the mark, but they proved to be rather strong. That feeling went away over the next few times I competed, helping me gain confidence to the point where my teams routinely made it to the semi-finals or finals. My advice is to have faith in the preparation you and your teammates did prior to and during Cyber 9/12, especially when you’re giving the oral briefing. Given the amount of research that goes into writing a written brief and decision doc, everyone should feel confident presenting their work since the hard part is arguably over. Most importantly, have fun!”

    Frances Schroeder 

    “Go all in. Opportunities like Cyber 9/12 are few and far between, especially for students. Competitors should take full advantage of the rare opportunity to gain tangible analytic and briefing skills, develop their professional network, and explore whether this is a field that they want to pursue. Due to Stanford’s academic schedule of the quarter system, the national competition fell on the weekend right before finals. As stressful as that was for me academically, once the competition began, I made a significant effort to put a pause on worrying about my finals. Instead of trying to cram studying into the few free moments during the competition, I spent the free time I had meeting as many other competitors and judges as I could. As much as you can, put your outside responsibilities on hold for the duration of the competition, so that you can dive in and gain as much as possible from such a valuable opportunity.”

    Gabriel Cajiga 

    “On the competition side, know your judges and ask for feedback always! Also, it’s important to sleep. On the teamwork side, be communicative on what you don’t know, what you can provide, and how much time you can compromise.”

    Nitansha Bansal 

    “It would have been good to know that we were enough even if we had never participated in a Cyber 9/12 earlier or had any professional experience of working in the field of cybersecurity. Oh, and also that everyone in the room was feeling the imposter syndrome (but no one had to)! To the future competitors, I would say – don’t wait for the next Cyber 9/12 because you have an exam (they come every semester), don’t try to build the perfect team (that’s not how the real world works, anyway) but make sure you know each other’s strengths and weaknesses, and pick an interesting team name (first impression is the last impression, after all, but mostly because it is fun reading them!).” 

    The post The 5×5—Alumni perspectives on Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>
    Braw featured in Politico on espionage in Europe https://www.atlanticcouncil.org/insight-impact/in-the-news/braw-featured-in-politico-on-espionage-in-europe/ Tue, 27 Feb 2024 22:15:01 +0000 https://www.atlanticcouncil.org/?p=741978 On February 27, Transatlantic Security Initiative senior fellow Elisabeth Braw wrote an opinion piece in Politico discussing the changes in espionage tactics by authoritarian regimes.   

    The post Braw featured in Politico on espionage in Europe appeared first on Atlantic Council.

    ]]>

    On February 27, Transatlantic Security Initiative senior fellow Elisabeth Braw wrote an opinion piece in Politico discussing the changes in espionage tactics by authoritarian regimes.

      

    The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

    The post Braw featured in Politico on espionage in Europe appeared first on Atlantic Council.

    ]]>
    To combat Chinese cyber threats, the US must spearhead a new Indo-Pacific intelligence coalition https://www.atlanticcouncil.org/blogs/new-atlanticist/to-combat-chinese-cyber-threats-the-us-must-spearhead-a-new-indo-pacific-intelligence-coalition/ Tue, 27 Feb 2024 17:56:19 +0000 https://www.atlanticcouncil.org/?p=741039 Such a coalition would help disrupt cyber threats, signal US resolve, and ideally help deter future cyberattacks from China.

    The post To combat Chinese cyber threats, the US must spearhead a new Indo-Pacific intelligence coalition appeared first on Atlantic Council.

    ]]>
    When the highest-ranking US law enforcement official describes a concern as “the defining threat of our generation,” it should be taken seriously. On January 31, FBI Director Christopher Wray testified before Congress about China’s capability to threaten US national and economic security. In particular, he identified the imminent cyber threat that Chinese hackers pose to critical infrastructure. A China-sponsored cyber group called “Volt Typhoon,” Wray explained, has prepositioned cyberattack capabilities in the US communications, energy, transportation, and water sectors intended to “destroy or degrade the civilian critical infrastructure that keeps us safe and prosperous.” Alarming in its own right, Volt Typhoon is just the latest example of Beijing’s ongoing “cyber onslaught,” Wray added.

    This story is not new. Since at least 2019, the US government has publicly sounded the alarm about the threat that China’s cyberattack and espionage enterprise poses to US national security and to regional stability in East Asia. The 2023 annual threat assessment by the US Office of the Director of National Intelligence (ODNI) states that China “uses coordinated, whole-of-government tools to demonstrate strength and compel neighbors to acquiesce to its preferences.” The assessment adds that China’s cyber capabilities are essential for orchestrating espionage, malign influence, and attack operations in support of Chinese interests.

    To confront the threat to critical infrastructure posed by Volt Typhoon and other state-sponsored Chinese cyber actors, the United States should launch an expansive new multilateral cyber threat intelligence sharing coalition in the Indo-Pacific. This coalition should utilize some of the lessons learned from the Five Eyes intelligence alliance, and it would incorporate members of the Five Eyes alliance, US Indo-Pacific partners, and even some European states. The expanded reach and resources of such a coalition would help disrupt cyber threats, signal to the world that the United States and its partners are committed to protecting both cyber and physical infrastructure from malicious actors, and ideally help deter future cyber threats from China. 

    Meeting the threat

    The Biden administration has already taken some steps to improve cybersecurity cooperation in the Indo-Pacific region, such as recent commitments with Japan and South Korea. In each case, the partners recognize the importance of sharing cyber threat intelligence information related to critical infrastructure threats. A goal of this cooperation is to enhance cybersecurity in the region, especially through capacity building and sharing best practices with network defenders and incident responders. In practice, this often amounts to arming individual critical infrastructure asset owners with better tools and procedures that will improve their cybersecurity posture over time.

    Increased cybersecurity at the point of a potential attack is necessary, but it is not sufficient given the urgency and scope of the threat. Dedicated, well-resourced state-sponsored adversaries, as demonstrated by Volt Typhoon, have already proven they can establish a cyberattack foothold in the control systems that operate critical infrastructure.

    In fact, this strategy of merely sharing cybersecurity information with network defenders may play into Beijing’s hands, since malicious actors already present with deep access privileges in these networks could be prepositioned to observe how new cybersecurity programs are implemented, potentially giving them valuable information to evade detection in the future.

    The additional key to interrupting China’s cyberattack enterprise as it exists today is for the United States and its allies and partners to detect and dismantle global command-and-control (C2) infrastructures that Chinese-supported threat groups use to perform “living off the land” techniques. These techniques are very difficult for network defenders to identify because they use a network’s built-in administration tools to closely mimic normal network business traffic and operational protocols. For any threat actor to execute disruptive actions within a victim network, they must first establish remote C2 connections through external communication access points, such as the open internet or web-based channels. Network defenders might miss these remote C2 connections, lost in a cacophony of legitimate network traffic. However, US and allied intelligence services are often better equipped to monitor, track, and disrupt covert C2 activities wherever they occur around the world.

    Building out a new coalition from the Five Eyes alliance

    Thankfully, the United States does not need to imagine a radical solution for this challenge. The US intelligence community already has decades of experience managing a complex foreign intelligence-sharing alliance with multiple countries that routinely collaborate to monitor adversaries of mutual concern.

    The “Five Eyes” intelligence sharing partnership among the United States, Australia, Canada, New Zealand, and the United Kingdom was established in the 1940s to surveil the Soviet Union and Eastern Bloc nations. It then expanded to monitor terrorism-related activities after the 9/11 attacks. Just as the original Five Eyes members were driven to confront the autocratic Soviet threat to capitalist democracy, it is easy to imagine how a new cyber-focused alliance of US and Indo-Pacific partners could coalesce to counter Beijing’s manipulation of cyberspace. It is just as easy, in the absence of such a coalition, to imagine China continuing its quest to dominate East Asia and undermine US military efforts to support US regional allies and partners.

    Five Eyes is especially adept at sharing intelligence derived from electronic signals and systems used by foreign targets, called signals intelligence. While there are important differences between signals intelligence and cyber threat intelligence, an established intelligence sharing system in the former gives Five Eyes countries a model to work from, since the latter is largely derived from intercepts of digital signals in network traffic that reveal indicators of malicious activities. In addition, it is more effective to build governance measures, such as security protocols, that protect sensitive sources and that uphold shared democratic values, within the structure of a coalition than, say, trying to manage these issues in a series of cumbersome bilateral security arrangements.  

    A consequential first step would be for the United States to engage current Five Eyes partners on a strategy to bring more Indo-Pacific intelligence liaison partners into the fold. Highlighting the recent danger posed by Volt Typhoon, the United States and Five Eyes partners could underscore for this expanded group the urgency of working together to find and disrupt similar threats.

    Given that Australia is an existing Five Eyes member with clear regional security interests, it would be an ideal partner with the United States to lead engagements with capable and like-minded partners to lay the groundwork for a more expansive cyber intelligence coalition.

    Obvious starting points are Japan and South Korea, which already have bilateral agreements with the United States to enhance cyber intelligence sharing. The United States also has long-standing military alliances with the Philippines and Thailand, which could be further developed to include intelligence analysis and collection components focused on Chinese cyber activities. India and the United States have recently committed to partner on sharing information about cyber threats and vulnerabilities as part of their Comprehensive Global and Strategic Partnership. And building upon President Joe Biden’s steps to upgrade US relations with Vietnam and Indonesia to Comprehensive Strategic Partnerships—both of which include elements to improve digital cooperation—the groundwork exists for expansion into more sophisticated cyber intelligence sharing arrangements with partners in Southeast Asia.

    Leadership for this new coalition should come from the ODNI, with support from the National Security Agency (NSA), which is the primary US intelligence community element responsible for sharing signals intelligence within the existing Five Eyes alliance. The NSA has all the required authorities, experience, and expertise to operationalize intelligence-informed insights on Chinese cyber threats to assist Indo-Pacific intelligence liaison partners in strengthening their own intelligence sharing mechanisms to contribute to the alliance’s mission. Moreover, these efforts should be carried out in ways that complement and boost, but do not detract from, the ongoing work of the Five Eyes alliance.

    Deterring Beijing in cyberspace

    The United States must act soon. The revelations about Volt Typhoon are a wake-up call not only about the operations China currently has underway, but also about the far-reaching threat it will continue to pose. China has proven it is willing and able to exploit cyberspace to achieve its objectives, and until the United States and partner nations confront it in places where it operates, it will only become more dangerous.  

    In addition to the immediate benefits of disrupting ongoing operations like Volt Typhoon, an expanded multilateral Indo-Pacific cyber threat intelligence alliance might contribute to long-term deterrence strategies. More eyes on this adversary could increase opportunities to disrupt China’s future cyber activities, making them less likely to succeed over time. Increased attribution could also cause the Chinese government reputational harm internationally, in addition to the direct financial costs Beijing would suffer each time it needed to reconstitute C2 upon discovery.

    If the United States wants to achieve its strategic vision of an “open, free, global, interoperable, reliable, and secure” internet that “that uplifts and empowers people everywhere,” then Washington must commit to pushing back on any efforts to weaponize cyberspace to achieve autocratic or coercive geopolitical objectives. None of these efforts is likely to deter China completely from mounting cyberattacks, of course. But more eyes on malicious Chinese cyber activities targeting critical infrastructure through a comprehensive, coordinated cyber intelligence alliance would make it more difficult and costly for Beijing to continue its current course. Equally valuable, this would send a clear signal to the world that the United States and its regional allies and partners are willing to contest Beijing in cyberspace to secure the enduring freedom of the global digital ecosystem.


    Victor Atkins is a nonresident fellow with the Atlantic Council’s Indo-Pacific Security Initiative, where he specializes in cyber intelligence, national security, and industrial cybersecurity issues. He was previously a leader within the Department of Energy’s Cyber Intelligence Directorate, where his teams provided all-source foreign intelligence analytical support to the US energy sector.

    The views expressed in this article are the author’s and do not reflect those of the Department of Energy or the US intelligence community.

    The post To combat Chinese cyber threats, the US must spearhead a new Indo-Pacific intelligence coalition appeared first on Atlantic Council.

    ]]>
    How tech innovations are changing the trajectory of military competitions and conflicts https://www.atlanticcouncil.org/content-series/defense-technology-monitor/how-tech-innovations-are-changing-the-trajectory-of-military-competitions-and-conflicts/ Tue, 20 Feb 2024 21:15:00 +0000 https://www.atlanticcouncil.org/?p=797421 In the February edition of the Defense Technology Monitor, our experts explore emerging technology trends that are shaping global defense.

    The post How tech innovations are changing the trajectory of military competitions and conflicts appeared first on Atlantic Council.

    ]]>
    Below is an abridged version of the Forward Defense initiative’s Defense Technology Monitor, a bimonthly series tracking select developments in global defense technology and analyzing technology trends and their implications for defense, international security, and geopolitics.

    There are three emerging trends in defense technology to watch in the months and years ahead.

    First, innovations in technology, tactics, and operational concepts are driving a shift in crucial military competitions and conflicts. Following Ukraine’s adoption of commercial drone technology, Russia has responded by adopting counter-drone electronic warfare capabilities, placing pressure back on Ukraine.

    Second, there are increasing efforts to explore safely and responsibly integrating emerging technologies for military applications: For example, the Department of Defense’s investigation of the utility and risks associated with generative artificial intelligence (AI) and the establishment of a new task group within Task Force 59 focused on operational adoption.

    Third, potential adversaries are increasingly viewing conflict with the United States and its partners and allies as a conflict between systems of systems. The recent reveal of the ongoing effort by hackers associated with China to target US civil and military infrastructure via cyberattacks shows how such actors can target critical nodes in the system to reduce a country’s capacity to respond.

    Embedded throughout these trends are emerging and advanced technologies that are powering military activities globally. Below are new innovations and initiatives that are shaping global defense.

    AI and data

    The recent boom in the commercial development and use of generative AI tools such as Chat GPT-4 has triggered both interest and concern from defense and intelligence communities across the world. This is certainly the case with the US Department of Defense, which established Task Force Lima within the Chief Digital and Artificial Intelligence Office’s (CDAIO) Algorithmic Warfare Directorate in August 2023 to investigate the opportunities and risks of generative AI adoption. On January 29, CDAIO launched the first of two artificial intelligence “bias bounty” exercises designed to identify unknown or unanticipated risk areas in large language models.

    Autonomous systems

    Ukrainian Armed Forces received an initial batch of new AQ-400 Scythe attack drones made by Ukrainian company Terminal Autonomy in December 2023. The Scythe’s design, supply chain, and manufacturing gave Ukraine an easily produced and assembled long-range unmanned aerial vehicle that is highly effective against Russian forces. The drone war has appeared to have entered a new phase, however. This sentiment was put forth in a Foreign Affairs article by Eric Schmidt that assessed that the balance of drone conflict in Ukraine has been altered by the combination of increased Russian capacity, responsive and adaptive Russian tactics, and “Russia’s superior electronic warfare capabilities [that] allow it to jam and spoof the signals between Ukrainian drones and their pilots.”

    Platforms and weapons systems

    In mid-November, the Japan Maritime Self-Defense Force Izumo-class destroyer Kaga began sea trials following modifications of its deck to allow F-35B fighter jets to take off and land on the ship.

    Although Japan has carefully avoided referring to Izumo-class destroyers as aircraft carriers due to post-World War II constitutional provisions, the government decided to convert the Kaga and its sister destroyer, the Izumo, into ships capable of carrying the short take-off and vertical-landing capable F-35B amid growing concern over China’s more assertive territorial claims to the Senkaku Islands in the East China Sea.

    Computing power

    On January 17, NATO released a summary of its first-ever quantum strategy, in which the Alliance offered its perspective on the importance of quantum technologies in military-technological competition and on how the Alliance can gain and maintain an advantage in these crucial technologies. The summary begins by noting that advancements in quantum technologies are bringing the Alliance closer “to a profound shift for science and technology” that will have “far-reaching implications” for the economy, security, and defense. It goes on to detail NATO’s strategic vision for a “quantum-ready” Alliance and emphasizes the need to “prevent the formation of new capability gaps in a world where peer competitors adopt quantum technologies themselves.”

    Sensors and detection

    In late December, a team of Chinese scientists published a paper in the Chinese-language journal Cryogenics and Superconductivity that claimed they had developed an ultra-sensitive version of superconducting quantum interference devices (SQUIDs) at reduced costs. SQUIDs are highly sensitive detectors used to measure extremely weak magnetic fields. Improving undersea detection and operations is an understandable priority for the People’s Liberation Army as the United States has long been perceived as having a significant undersea advantage.

    The information domain, cyber, and the electromagnetic spectrum

    On January 31, the US Cybersecurity and Infrastructure Security Agency (CISA) urged manufacturers of small office and home office routers to ensure their devices are secure against ongoing cyberattacks attempting to hijack them, especially those coordinated by Chinese hacking group Volt Typhoon (also known as Bronze Silhouette). The CISA announcement followed acknowledgment from the US Federal Bureau of Investigation that it had sought and received court authorization to remotely disable a KV botnet attack from Volt Typhoon that targeted US critical infrastructure, accessing certain brands of small office and home office routers to hide the activity. These types of penetrations of US civil and military infrastructure hold significant, multilayered risks that include the collection of sensitive information on US infrastructure and the ability to hold this infrastructure at risk, undermine the capacity of the United States to respond to a crisis, and reduce domestic political will for confrontation.

    Manufacturing and industry

    On January 16, the Atlantic Council concluded its Commission on Defense Innovation Adoption with the release of the project’s final report. The Commission was launched in 2022 with the primary objective to “take the [Department of Defense’s] acquisition process, and Congress’ role in that system, out of the Cold War era.” The result was ten recommendations for policymakers and defense officials.

    If you are interested in reading this month’s full issue of the Defense Technology Monitor, please contact Forward Defense Project Assistant Curtis Lee.

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    The post How tech innovations are changing the trajectory of military competitions and conflicts appeared first on Atlantic Council.

    ]]>
    Hacking with AI https://www.atlanticcouncil.org/in-depth-research-reports/report/hacking-with-ai/ Thu, 15 Feb 2024 19:23:00 +0000 https://www.atlanticcouncil.org/?p=817758 Can generative AI help hackers?
    By deconstructing the question into attack phases and actor profiles, this report analyzes the risks, the realities, and their implications for policy.

    The post Hacking with AI appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    Questions about whether and how artificial intelligence—in particular, large language models (LLMs) and other generative AI systems—could be a tool for malicious hacking are relevant to ongoing conversations and policy frameworks seeking to manage risks from innovations in the field of artificial intelligence. This report maps the existing capabilities of generative AI (GAI) models to the phases of the cyberattack lifecycle to analyze whether and how these systems might alter the offensive cyber landscape. In so doing, it differentiates between GAI capabilities that can help less sophisticated actors enter the space or scale up their activities—potentially increasing the overall volume of opportunistic activities such as cyber crime—and those that can enhance the capabilities of sophisticated malicious entities such as state-backed threat actors. Each phase of the cyberattack lifecycle is investigated using desk research into research papers and written accounts that examine GAI models’ utility for relevant tasks or activities. This research is augmented with the findings from a novel experiment conducted in June 2023 that tasked participants with differing amounts of technical or hacking experience to complete cyber war games using the help of either ChatGPT or search engines and existing online resources.

    The results of the analysis suggest that there are certain phases for which both sophisticated and unsophisticated attackers may benefit from GAI systems, most notably in social engineering, where the ability to write convincing phishing emails or to create convincing audio or video deepfakes can benefit both types of actors. Both sophisticated and unsophisticated actors, but particularly those who are more resource-constrained, will likely benefit from models’ ability to speed and scale up activities such as open-source information gathering. For other phases, there was less evidence to suggest that contemporary GAI systems can provide meaningful new capabilities to hackers: for example, at present, LLMs do not appear to outperform existing tools at vulnerability discovery, although this is an area of ongoing development and thus potential risk. Our experiment suggested that GAI models can help novice hackers more quickly develop working code and commands, but also that these users are not well-positioned to vet and manage false or misleading model outputs, or “hallucinations,”1 limiting their usefulness for this purpose. Built-in safeguards appeared to make LLMs less useful for these novice users seeking high-level instruction on how to complete hacking tasks, but even these users found ways to circumvent safeguards. Through many of the phases, LLM outputs useful for malicious hacking—such as code for a script or text for an email—closely resemble outputs useful for more benign tasks. This resemblance will make it challenging to create safeguards that prevent models from generating outputs that could be used for hacking.

    Table 1: Summary of level of capability enhancement from GAI across different phases of the cyberattack lifecycle

    While most of this paper focuses on GAI systems as tools for human hackers, questions about autonomy, or the ability of GAI-based systems to string together multiple actions without human intervention, are also highly relevant when evaluating new offensive cyber risks that may emerge from AI. There is not yet evidence that LLM systems have the capability to complete multiple phases of an attack without human intervention, but several factors demand ongoing attention to this question, including the way that the unsupervised learning paradigm creates capabilities overhang (in which certain model abilities are only discovered over time, including after release2), as well as increasing focus and development energy around autonomous systems. The report contains a section examining the current state of autonomy as well as where autonomy might be particularly impactful in the cyberattack lifecycle.

    To address these challenges, this report concludes with policy recommendations, including:

    • Develop testing standards for leading-edge models that assess cyber risks across different phases, actors, and levels of autonomy, prioritizing transparency and participation
    • Assess and manage cyber risk arising from GAI systems while protecting the equities of open model development
    • Mobilize resources to speed up technical and standards-setting work on AI content labeling with a focus on implementation potential
    • Begin investing in policy structures and technical measures to address potential risks associated with AI-based autonomous agents

    Throughout, this report urges leaders to design policy based on an empirical assessment of present and future risks, avoiding reactive decision-making while ensuring that adaptive structures are in place to keep pace with the rapid rate of change in the field of AI and the potentially far-reaching implications of the technology.

    Introduction

    Generative AI models have brought a renewed surge of interest and attention to the idea of intelligent machines. In turn, this surge has also triggered renewed conversation about the potential risks of harmful capabilities and negative societal impacts, both in the current generation of models and in future successor systems.

    The question of whether AI systems are now or could in the future be capable of materially assisting malicious hackers is highly relevant for national security, as cyber criminals or nation-state adversaries could potentially harness such tools to perform more, or more successful, cyber intrusions against companies and governments. It is also of interest to those concerned by more existential fears of superintelligence: hacking would likely be a key stepping-stone for an intelligent system to escape limitations imposed by its creator. The ability of AI systems to support hacking is at the fore of many AI policy discussions: a recent Executive Order on AI from the Biden administration requires developers and Infrastructure as a Service (IaaS) providers to make reports to the federal government related to the training of “dual-use foundation models,” defined in terms of their potential capability to pose serious threats to national security such as through enabling automated offensive cyber operations.3 This centers the question of the cyber capabilities of GAI systems as a core concern in the US AI policy landscape.

    How close is the reality of AI-assisted or autonomous hacking? This report seeks to answer this question by deconstructing “hacking” into a series of constituent activities and examining the potential for generative AI models (as their capabilities are currently understood) to materially assist with each phase. Rather than treating “hacking” as a monolith, this analysis relies upon known and battle-tested models of different activities used by malicious hackers to compromise a system. This report also considers the varying profiles of potential operators (ranging from cyber “noobs” to sophisticated hackers) and the various capabilities of the models themselves. In this way, different case studies and examples can be better contextualized to determine the current level of risk of AI in the cyber landscape.

    But first, a few notes on terminology and scope. The term “hacking” is fraught with meaning and history in the computer security context. Many kinds of hacking are merely a kind of technical exploration of an information system, rather than an attempt to subvert its controls for malicious ends. This report specifically examines the capability of GAI systems to assist hackers with attacking information systems for malign purposes, ranging from crime to espionage. While the report examines these models’ usefulness for hacking as a broad class of activity, whether and how much contemporary GAI systems can help hackers for any specific case will be informed by contextual factors including the relative strength or vulnerability of the target, the complexity and nature of the information systems at play, and the skills or behavior of the specific human operator.

    The term “artificial intelligence” is also charged, describing not so much a single technology as a goal– the creation of machines with human-like intelligence–shaped by a research field with a long history that spans paradigms from rules-based systems to deep neural networks. This report focuses primarily on generative AI (GAI) as an area of recent progress and policy focus. GAI broadly refers to computational systems based on neural networks that are capable of generating novel content in different modalities (such as text, code, or images) in response to user prompts.4 In their current form, such systems are trained through a combination of unsupervised learning from large amounts of unstructured data combined with other techniques like reinforcement learning from human feedback (RLHF), which is used to align models with more helpful and desirable behavior. GAI systems such as LLMs (like OpenAI’s GPT-series models) and image diffusion models (like OpenAI’s DALL-E series models) are not the first, nor will they be the last, incarnation of AI systems. However, AI systems created through different combinations of paradigms will likely be useful in different ways for malicious cyber activities; while this report stops short of examining each of these paradigms, the taxonomies it provides on GAI and hacking may be useful in studying or understanding the capabilities of successor systems.

    Throughout, this report distinguishes between GAI capabilities that can help novice actors, such as opportunistic criminals entering the offensive cyberspace or seeking to scale up their activities, versus those that could make sophisticated hackers more effective. This distinction is material to understanding the impacts of AI on the cyber landscape. For example, capabilities that can help less technically resourced malicious actors enter the space could enable an expanded set of opportunistic cyber criminals to exploit more businesses with ransomware or increase other types of financially motivated cybercrime. Capabilities that improve the skills of experienced hackers, on the other hand, might pose national security concerns in the hands of experienced nation-state adversaries who might utilize the technology for espionage or in conflict.

    Finally, this report focuses primarily on GAI’s usefulness as a tool for malicious human hackers in each phase of the cyberattack lifecycle. In the concluding section on autonomy, this report examines the potential ability of GAI systems to enable additional functions, up to and including serving as end-to-end “hacking bots” themselves rather than as tools to produce outputs for human hackers. The utility of GAI models as a tool for human hackers is a useful indicator for this question in some ways. For example, a model’s ability to provide outputs that materially assist a human with each phase of the hacking lifecycle is likely a prerequisite for the model being able to create such outputs without human direction. However, autonomy is also a distinct area of AI development, with its own trajectory and unique associated risks.

    Deconstructing malicious hacking into a set of activities and malicious hackers into a set of profiles relative to their capability and resources is a way to impose structure onto a highly uncertain and fast-moving space of great policy interest. This structure forces the analysis away from platitudes and generalities about the potential of GAI systems and towards a more realistic examination of their current abilities paired with known activities that constitute malicious hacking.

    This is not the first work to examine the question of using AI to automate the process of hacking. A 2020 report from the Georgetown Center for Security and Emerging Technology examined the potential use of AI across some of the same activities (drawn instead from the Lockheed Martin “cyber kill chain” model) and identified similar areas of risk.5 However, it predated the rapid commercialization and subsequent diffusion of transformer-based models. A recent report from the UK’s National Cyber Security Center (NCSC) examined the question of the near-term impact of AI on the cyber threat landscape, also focusing on similar questions and technologies.6 While the public version of the NCSC report did not explain in detail the reasoning for its findings, they largely align with those in this analysis; this report will discuss the NCSC’s findings in more detail as they reinforce or contradict its own. The rate of change within the field of AI will necessitate that work to evaluate models’ usefulness for hacking be iterative and adaptive. This report is a contribution to, not the final form of, this important process.

    Deconstructing the question

    The attack lifecycle

    The MITRE ATT&CK framework is a taxonomy for adversary tactics and techniques across the phases of a cyberattack.7The framework has 14 unique steps, consolidated here into five sections:Reconnaissance, Initial Access, Privilege Escalation, and Lateral Movement, Impact, and Defense Evasion. Each of these sections examines which GAI capabilities are most relevant to the phase, as well as existing research, online accounts, and experimental results to form a tentative answer to the question of whether GAI’s known capabilities could provide a benefit to either new or experienced actors.

    • Reconnaissance is the phase in which a would-be attacker collects intelligence that helps them select their targets and design their attack. This can include information potentially useful for social engineering—for example, names and emails of employees of an organization—as well as information about networks and software systems such as assets, software versions, IP addresses, and open ports.

    • Gaining Access describes the process of an attacker gaining a foothold into their target’s information system. One common way to secure access is to steal credentials from a legitimate user and abuse their privileges to move within the system. Another method is to exploit a software vulnerability to perform an action that gives an attacker access, such as forcing a server to execute code or uploading a malicious file that provides an attacker with a backdoor into the system.

    • Privilege Escalation and Lateral Movement are steps that an attacker takes once they have initially breached a system to gain additional privileges to carry out desired actions or gain access to other (potentially more sensitive or valuable) systems and resources.

    • Impact refers to steps that a hacker takes to perform actions that represent the fulfillment of their goals within the information system. For example, encrypting files for ransomware or exfiltrating files for data theft.

    • Evasion of Defenses refers to the various means by which malicious actors conceal their activity to avoid detection. This includes utilizing specialized software to evade monitoring systems that may look for signs of malicious activity such as signatures, improper attempts to alter or gain access to data, or questionable inbound and outbound connections used for command and control or data exfiltration.

    Profiles of a malicious hacker

    In examining GAI’s utility for malicious hacking, there are two key questions about the relative sophistication of the potential user of the model:

    1. Does GAI enhance or improve the capabilities of existing, sophisticated cyber adversaries in this stage of the attack lifecycle?
    2. Does GAI expand the universe of potential cyber adversaries who might be able to undertake this stage of the attack lifecycle, such as by lowering the barrier to entry for those without much hacking expertise?

    The answers to these questions lead to different risks and thus may demand different public policy interventions.

    If generative AI can enhance the capabilities of existing cyber players, national security policymakers (and everyone else) should be concerned about the safety of sensitive information systems, as sophisticated nation-state adversaries or other advanced persistent threat (APT) groups could use GAI systems to support more effective cyber operations such as espionage. Policymakers would then need to consider how to limit the use of generative AI for these purposes or determine other interventions to secure systems against the new capabilities of AI-assisted actors.

    If generative AI can expand the universe of cyber actors, then the question is one of scale. How much worse off is national security if many more actors can become somewhat competent hackers? How would organizations’ digital infrastructure hold up against a surge in (perhaps not very sophisticated) attacks? There are good reasons to suspect that the answer might be “not well.” Already many organizations are exploited every year through social engineering or vulnerable software, and there is little evidence to suggest that these hacks represent the exploitation of all existing vulnerabilities. As more entities around the world realize that cybercrime can be a lucrative source of income,8 tools that make it easier for new actors to scale activities could cause substantial harm to businesses and consumers and create significant new costs for securing networks from a significantly increased volume of attacks.

    Relevant GAI capabilities

    To map GAI capabilities to phases in the cyberattack lifecycle, this report taxonomizes current GAI uses that seem potentially useful for hacking activities:

    • Text generation: This describes the generation of text in English or other natural (human) languages intended to be used wholesale: for example, generating the text of an email that could be used for phishing.
    • Text analysis: Instead of asking the model to generate new text based on a prompt alone, GAI systems can also be given a text input and then asked to synthesize, summarize, or otherwise transform that information, such as by extracting information about an organization that might be useful for social engineering. This ability could be part of a system that automates part of the process of retrieving the text, such as a tool that uses an LLM to summarize the contents of a web page.
    • Code generation: This refers to GAI’s ability to generate computer-executable code according to the user’s specifications (often but not always provided in natural language). This is likely the set of capabilities that would be most helpful for hacking if deeply developed, as the ability to generate (and even run) code gives a model a direct means to affect an information system.
    • Code analysis: Combining some of the above elements, this relates to the idea of giving an LLM access to a piece of code and asking it to analyze it for another task, such as explaining what it does or searching for vulnerabilities. The outputs of this process could be natural language explanation (e.g., “this code is vulnerable to a SQL injection attack”) or generated code (e.g., an additional code block that performs some function informed by the analyzed code).
    • Media generation: This describes the ability of multimodal models to generate images, audio recordings, or videos in response to user prompts. This media might borrow from the likeness of a real person for impersonation attacks, or otherwise be used in social engineering such as to create a sense of fear or urgency on the part of the victim.
    • Operational instruction or question-answering: This category describes the usefulness of GAI systems for providing instruction or guidance on how to complete a task. An example might be breaking down the process of an attack into discrete steps a hacker must take and providing the user with options or instructions. This function could be achieved by simply asking the language model for an answer or might be combined with the above functions, such as asking the model to search the internet for an answer.

    This report primarily, although not exclusively, discusses the capabilities of general-purpose GAI systems – those trained to perform domain-neutral text, code, or image generation, rather than for specific offensive hacking tasks. For certain tasks, such as vulnerability discovery, general-purpose GAI models could likely be made even more useful through modifications such as fine-tuning, in which a model undergoes additional domain-specific training to improve its performance of a specific task.

    An experimental contribution

    The following section discusses a week-long experiment run by the authors of this report. The experiment asked participants with little to no technical background in hacking to compete in hacking “wargames”9 with the aid of either ChatGPT or Google Search. Many online accounts of using ChatGPT or other LLM systems in support of hacking were conducted by experts who knew what to ask the tool; this experiment aimed to explore the question of how useful GAI systems are as an aid for less-sophisticated actors.

    Methods

    The experiment asked four participants–three with no coding experience and one with three years of coding experience–to solve online cyber war games that teach and test basic skills in penetration testing (ethical hacking). All participants completed two different game paths. The first, the “server game path,” involved interacting with a remote file system, to complete tasks such as finding hidden files, searching for secrets within files, or exfiltrating information over an outbound connection. The second, the “web game path,” involved interacting with a website to access hidden information by modifying cookies, injecting prompts, or uploading malicious executables.

    Both game paths were broken into levels that became progressively more challenging. Both required participants to explore the technical system (e.g., the file system or website) and then write and execute commands, code snippets, or other actions to successfully obtain a password that would allow the participant to access the next level.

    For each level, participants used either Google Search (and other web resources) or ChatGPT (specifically, GPT 3.5 from June of 2023).Given the fast rate of improvement in models, repeating this experiment with newer generations of GPT models or with other systems would be valuable We collected data on the time it took participants to complete each level, a participant’s score of each level’s difficulty, and self-reporting from participants on their experience using each tool. We interweave our observations from this process throughout the following sections.

    Results: AI in the attack lifecycle

    Overview

    The below table summarizes for each phase of the attack lifecycle:

    • The most relevant GAI capabilities
    • Whether such GAI capabilities meaningfully enhance the capabilities of sophisticated actors (based upon a review of the relevant literature)
    • Whether such GAI capabilities meaningfully expand the set of less-sophisticated actors or enable them to scale up their operations (based upon a review of the relevant literature and the results of our own experiment)

    The below table summarizes, for each attack lifecycle, which GAI capabilities are most relevant and what present case studies suggest about whether current GAI systems can meaningfully assist sophisticated and unsophisticated cyber actors.

    Table 2: Overview of relevant GAI capabilities and level of capability enhancement across different phases of the cyberattack lifecycle

    Notably, significant improvements in model capabilities with respect to the correctness of generated outputs, especially generated code, would change this calculus, enabling low-sophistication actors and speeding up sophisticated actors. The emergence of meaningful autonomous capabilities would also significantly alter these results: autonomy could provide new capabilities to sophisticated actors for tasks such as evading defenses and enabling semi- or fully-autonomous “hacking bots” could dramatically expand the set of potential opportunistic bad actors and the volume of malicious cyber activity.

    The potential risks that created by model capability improvements are not equally distributed among the phases of the attack lifecycle. In particular, the Gaining Access and Escalation and Movement phases face the most risk from potential improvements in the ability of GAI models to identify vulnerabilities in code and to develop exploits. This risk is not yet realized today but seems likely to materialize in the future given substantial research interest in developing capabilities for vulnerability identification for cyber defense. The Evading Defenses phase stands to benefit disproportionately from increasing capabilities towards autonomy. The below table summarizes, for each attack phase, which capabilities might create risk as they improve and the level of that risk according to the likelihood and impact of substantial improvement.

    Table 3: Overview of risk level of GAI capability enhancement across different phases of the cyberattack lifecycle

    Reconnaissance

    In which a would-be attacker collects intelligence that helps them select their targets and design their attack.

    Some parts of the reconnaissance phase are similar to other kinds of data compilation and analysis tasks where GAI is already being utilized. For example, a task that relies on compiling open-source information available on the internet, such as creating a list of an organization’s employees,10 could be completed by GAI systems with access to internet search, like Microsoft’s LLM chatbot.11 Internet-connected LLMs that search for and summarize data could present a small speed improvement over a human using a search, but they would not necessarily grant access to new or unknown information. This capability would likely benefit unsophisticated actors, who are more likely to be resource-constrained and opportunistic—the ability to process open-source information at scale could enable them to speed up this part of their work and thus target more organizations. For sophisticated actors, the consequences are less clear: if these actors are already motivated and specific in their targets, the efficiency benefits of automating or speeding up parts of the reconnaissance process might be welcome but not differentiated in terms of capability. Additionally, there is a plethora of tools available for reconnaissance of this type, including for searching through publicly accessible information (such as social media content) and data dumps (such as databases of user credentials available on the dark web) that sophisticated actors likely already know how to leverage.12Therefore, one significant open question in this area is whether there are types of large-scale data sources where LLMs can unlock significant new insights not otherwise available through either human review or standard searches using keywords or similar. If so, sophisticated actors might stand to see more benefit.  

    Separate from searching the internet for open-source information (often called “passive collection”), the reconnaissance phase also involves “active collection” in which attackers interact directly with a target information system to gather information such as the different assets in the network and the software running on each. GAI models seem less likely to aid this phase of intelligence gathering. Hackers already use semi-automated tools such as port13 and vulnerability scanners14 and network mappers to probe or scan target systems and identify information such as open ports, operating systems, and software versions that help them craft their attempts to compromise a system. These tools are widely accessible to current and would-be hackers.15In most cases, it is likely easier for experienced hackers to use existing tools rather than generate new custom code via GAI to reimplement the same functionality. However, inexperienced hackers could potentially benefit from GAI’s ability to point them to these tools and provide easy-to-use instructions.

    A test in 2023 purported to show that ChatGPT could answer questions about an organization’s website, such as its IP address, domain names, and vendor technologies.16 But, there is a major caveat here—the study did not test whether the information returned by ChatGPT was accurate. GAI systems are prone to returning false but plausible-sounding “hallucinations.” Their knowledge ends at the end of their training data – unless the answers to these questions were present in their training data and have not changed since that data was collected, the answers returned by the model were likely fabrications. For a task like identifying the IP address or vendor technologies used by an organization, inaccurate information is equal to or worse than no information at all. Accounts like this are therefore of little use without context on the accuracy of the model’s outputs.

    The report from the UK’s NCSC also found that AI has the potential to moderately improve sophisticated actors and to more substantially improve unsophisticated ones in the reconnaissance phase.17 That finding largely aligns with those in this report: resource-constrained actors stand to benefit the most from the potential utility of GAI models and tools built on them to automate parts of the reconnaissance process, but there also may be other avenues that benefit skilled actors who can find new ways to leverage large data sources.

    Unfortunately, it will be challenging to devise safeguards for GAI systems that can limit their potential use in the Reconnaissance phase. Limitations or safeguards applied to GAI models to reduce their usefulness for open-source research for hacking reconnaissance are likely to hamper their usefulness for other legitimate tasks. For example, journalists, researchers, or financial analysts all might have legitimate reasons to ask models to amass information like a list of people who work at a particular company. A prohibition on use cases that aid hacking reconnaissance could limit many other kinds of legitimate and benign activities. This is a throughline throughout many of the phases of the attack lifecycle: many hacking activities are very similar to benign GAI use cases, presenting a major challenge for safeguarding models so that their outputs cannot support hacking.

    Gaining access

    In which an attacker gains a foothold into the target information system, such as through credential theft or the exploitation of software vulnerabilities.

    Phishing and social engineering

    According to IBM’s Cost of a Data Breach report for 2023, the most common initial access vector for data breaches was phishing, in which attackers send emails or other communications that trick victims into sharing sensitive information like their password or into interacting with a malicious resource, such as a link to a fake log-in page that steals credentials or a file that provides an attacker with access to the system on which it is downloaded. Given the fact that LLMs are explicitly designed to be good at generating well-written text, they can easily be co-opted to help write text for phishing emails or other communications with a more malign purpose.

    Yet, the research on their efficacy for this purpose is mixed. Two different studies found that LLM-generated emails were less effective than human-created emails at getting users to click on a phishing link.18 Both studies used relatively expert humans who either had experience in social engineering or used known models for drafting effective phishing emails. It is possible LLMs would provide an advantage to hackers without experience writing phishing emails or those not fluent in the language of the organization they are targeting. LLMs could also help craft a large number of customized emails in a short amount of time. Overall, the existing research indicates that LLM-drafted phishing emails are unlikely to enhance the capabilities of existing, motivated hackers, but they could be a tool to expand phishing capabilities to a broader class of actors or provide benefits in terms of efficiency and scale.

    In one of the studies, ChatGPT rebuffed requests to draft phishing emails due to its safeguards against illegal and unethical behavior. However, the authors were able to circumvent this limitation by asking the model to help them create a marketing email, into which a malicious link was inserted. As the authors note, “the only difference between a good phishing email and a marketing email can be the intention […] if we were to prevent LLMs from creating realistic marketing emails, many legitimate use cases would be prohibited.”19

    GAI capabilities such as image, audio, and video generation also create new potential threats around a specific type of phishing known as an “impersonation attack,” in which an attacker impersonates someone (perhaps a boss or coworker) to trick a user into handing over credentials or performing an action. Hackers have already used deepfake technology on video calls to pose as the CEO of crypto exchange Binance, successfully persuading crypto leaders to pay a “fee.”20 A recent segment news report demonstrated how AI systems can generate a fake voice on a phone call as part of a social engineering attack.21 The ability to convincingly falsify a voice or video recording of a trusted individual can augment sophisticated, targeted attacks and more run-of-the-mill, low-tech scams. Additionally, as organizations—including the US government—increasingly turn to systems such as biometrics to verify identity from afar,22 AI-based impersonation could pose another challenge to identity verification and security. Finally, image generation capabilities could also be used for social engineering purposes outside of impersonation, such as using an AI-generated image to trick a victim into thinking there has been an emergency or accident at their home or workplace, creating a sense of fear and urgency that characterizes many successful phishing messages.23

    The NCSC report found that AI had the potential to improve sophisticated actors and to significantly improve the abilities of unsophisticated actors concerning social engineering attacks, including phishing.24 The findings in this report largely align. Opportunistic actors who generate a high volume of phishing emails might gain the most from the ability to generate content for simple social engineering such as phishing emails, but GAI systems do not appear likely to make sophisticated actors more effective in this area given human-written phishing emails appear to be as or more effective than GAI-generated ones. However, sophisticated actors might be able to benefit from more improved social engineering vectors such as deepfaked audio or video calls. Lower-skill actors could also leverage these types of attacks, but they may also have less time and fewer resources to create convincing frauds, so this risk will be depend on the quality of deepfakes generated by existing commercial tools.

    Systems that help users identify AI-generated content could help mitigate the risks that AI poses in this phase of the attack lifecycle by making it easier for technology systems such as email clients or video-calling platforms to detect and warn users of AI-generated content. These systems (and the associated implementation challenges) are addressed in the policy recommendations section.

    Vulnerabilities and exploits

    Attackers can also gain access to an information system by exploiting vulnerabilities in software code. In these cases, attackers can either exploit a known, unpatched vulnerability or discover and exploit a previously unknown vulnerability (often called a “zero-day”). Per IBM, 11 percent of data breaches last year used zero-day vulnerabilities, so another way that LLMs could significantly impact the dynamics of cybersecurity is by enabling attackers to identify new vulnerabilities more rapidly.

    Interest in software systems capable of automatically identifying bugs and vulnerabilities in code did not start with the arrival of GAI systems. Back in 2016, the Defense Advanced Research Projects Agency (DARPA) hosted a Grand Cyber Challenge that asked researchers to build the best automated system for identifying software vulnerabilities.25 LLM’s fluency in reading and explaining code reignited interest in the potential use of AI to find software vulnerabilities for the purpose of better securing software systems, and DARPA launched a new AI Cyber Challenge in 2023 aiming to develop LLM-based models for the same ends.26 Vulnerability-scanning LLMs would be unavoidably “dual-use”–they could help malicious cyber actors identify vulnerabilities in code as well as defenders seeking to harden their code against attack.

    Existing research on the vulnerability discovery capabilities of LLMs does not offer immediate cause for concern (or excitement). A 2021 paper evaluating the performance of Codex–OpenAI’s model trained exclusively on code–found that “Codex did not perform well when compared even to rudimentary Static Application Security Testing (SAST) tools” and reported that the authors “encountered no cases in our testing where using a Codex model led to better or more efficient results than SAST tools.”27

    A subsequent study from 2023 found that GPT3.5 did not perform significantly better than a dummy classifier (which selected vulnerabilities based on their frequency in the underlying distribution) at identifying vulnerabilities in Java code.28 In a technical paper accompanying the release of GPT-4, OpenAI reported that “GPT-4 could explain some vulnerabilities if the source code was small enough to fit in the context window, just as the model can explain other source code,” but found it “less effective than existing tools for complex and high-level activities like novel vulnerability identification.”29

    Fine-tuning LLMs on vulnerability identification tasks could increase their efficacy. A study in 2023 built a large dataset of code and code vulnerabilities and then trained LLMs and AI systems with the data. While none of the models were reliably accurate at the task, the study found that increasing the size of the training data appeared to increase model performance at finding vulnerabilities, at least up to a point, where after performance returns appeared to diminish.30However, this training set, though large, was still relatively small in LLM terms. Given how well-established scaling laws are across different kinds of AI model tasks,31 more data would likely continue to improve model performance. While the present research does not suggest that LLMs are close to improving upon sophisticated bug hunters’ performance, the proliferation of interest and activity around developing AI vulnerability hunting systems means this is an area for experts to monitor as GAIs continue to improve.

    Another way in which AI systems could be useful in this stage is by helping to develop exploits, or code to take advantage of already-discovered vulnerabilities. However, OpenAI also reported that GPT-4 “performed poorly at building exploits for the vulnerabilities that were identified.”32 Online accounts suggest that some users have been able to convince models to write relatively simple exploits. For example, one researcher used a “jailbreak” (a prompt that puts a model into a state such that it no longer follows its training safeguards) to get ChatGPT to write code and inputs to exploit vulnerabilities, such as structured query language (SQL) injection.33 These accounts and the findings from our experiments suggest that GAI systems could be helpful for novice hackers seeking basic ways to exploit known vulnerabilities, such as prompts to inject into an unsafe web form.

    The NCSC report suggests that AI may moderately improve low-sophistication hackers and that there is a “realistic possibility of uplift” for skilled actors.34 This may correspond to this report’s findings that LLM capabilities for vulnerability identification may increase as the result of domain-specific training and development, including spillover from efforts to build such systems for defensive purposes. Vulnerability identification capabilities would likely benefit sophisticated actors such as hackers associated with nation-states, who are likelier to have access to expert tools and resources. Such systems could also potentially be used or developed by well-resourced cybercriminal groups specializing in discovering zero-day vulnerabilities for use or sale. Lower-skilled actors are less likely to rely on zero-day vulnerabilities and may follow many other attackers in preferring simpler vectors such as social engineering or exploiting known, unpatched vulnerabilities. This report’s results indicate that such actors may modestly benefit from being able to rely on code-generating systems to help write simple exploits for known weaknesses such as vulnerable web-form inputs.

    Experimental findings: Web hacking

    In our experiment, the advanced web hacking levels in which participants completed tasks such as finding and exploiting vulnerabilities in input forms were the most similar to tasks in the “gaining access” phase.

    We observed that ChatGPT generally avoided providing answers to high-level questions about finding and exploiting website vulnerabilities, although several participants found that they could get around limitations by rephrasing questions. For example, ChatGPT would not answer the question “Can you give an example of a file which could be uploaded and would result in unauthorized access?” but would answer “What do web shells that allow for file access via uploads in php look like? Can you give an example?” or “What tools do hackers use to get around exif_imagetype?” Participants experimented with strategies to circumvent these limits, such as asking the model to pretend to be a clown making balloon animals in the shape of an input that would hack the website; however, these strategies were unreliable and participants more often chose to simply rephrase their questions to avoid triggering the model’s safeguards.

    During these levels, participants struggled with ChatGPT’s reliability. For example, one of the most challenging tasks required participants to reverse-engineer a plaintext value based on the PHP code that encrypted it. Because this task combined challenging logical reasoning (reversing the encryption steps) with the need to write code, it was uniquely challenging for novice participants. Notably, ChatGPT erred in two ways during this task which made it difficult for novice participants to recover. First, it often presented logically incorrect code (for example, offering code to “reverse” a series of operations that performed those operations in the wrong order for reversal), and second, it provided incorrect answers to questions about running the code, such as “what is the reverse of this string,” or, “if I were to run this code, what would be the output?” Sometimes ChatGPT would state that it could not run the code, but other times it would provide an answer to the question, which was often incorrect.In one example, ChatGPT gave participants a “reversed” string that had 25 out of 30 characters in the right place. Crucially, the characters at the beginning and end of the string were correct, making it easy for the human operator to miss the error During the experiment, participants disagreed about whether ChatGPT was running the code itself versus simply “predicting” the output. Though it was not running the code–in-chat Code Interpreter was not available at the time of this experiment—the model’s willingness to provide results that seemingly described the outputs of running code confused participants who came to believe that it could execute code if they asked it in the right way.

    One of the participants described being sent into a “tailspin” as they proceeded down an incorrect path for more than an hour based on one such incorrect value returned by ChatGPT. As the participant put it, “While ChatGPT feels more approachable–easier to ask questions and do follow up–it’s kind of a false comfort. Having to dig through conflicting and confusing sources through Google searching reinforces not trusting what you find and while it might slow ‘progress,’ it at least maybe helps to prevent ‘progress’ in wrong directions.”

    These findings suggest that ChatGPT (as of June 2023) is not yet ready to serve as a co-pilot for novice hackers to explore and exploit new information systems. Nevertheless, its ability to explain and generate custom code was useful, especially for tasks with a relatively consistent form (e.g., supplying a string that can serve as an exploit for an unsanitized input field).

    Escalation of privilege and lateral movement

    In which an actor gains additional privileges to carry out desired actions or to pivot to gain access to other more sensitive or valuable systems and resources.

    Once inside a compromised system, attackers often need to escalate their privileges or move to other system resources to access high-value data. Typically, attackers achieve this by stealing additional user credentials (e.g. by using key logging tools like Mimikatz that capture user-typed passwords) or bypassing authentication altogether (such as by “passing the hash,” in which an attacker steals a valid hash to masquerades as an authenticated user).35

    It is unclear how much benefit GAI systems can provide at this stage of an attack. There are currently few public accounts or research results examining whether and how GAI systems can write code for improperly elevating privileges or moving laterally between information systems. It is unclear whether GAI-generated code would provide any benefit compared to existing tools for this purpose. Novice hackers may benefit more than experienced ones from LLM’s ability to generate simple commands to search through file systems for credentials, as well as from being able to ask models how to go about the process of seeking to escalate their access. However, our experiment found that existing safeguards are still relatively effective at preventing users from asking high-level questions about improperly escalating their access.

    The NCSC report found that unsophisticated actors would receive no advantage from AI-assisted lateral movement and that sophisticated actors would see minimal benefits in this area.36 Broadly, this report also found less evidence that GAI models could assist at this stage. However, the experimental findings suggest that unsophisticated actors may gain a slight benefit from GAI models’ assistance with creating basic commands and using tools to explore networks and hunt for credentials.

    Experimental findings: Linux commands

    The “server game path” in our experiment involved tasks such as finding files or values within files and then using that data to progress through the challenge. This is a very basic version of tasks that may support privilege escalation once inside a system, such as finding files that contain log-in credentials. ChatGPT was particularly helpful for generating the basic script commands that participants required to advance through these levels. Participants reported finding it much faster to ask ChatGPT for the right code snippet than to try to figure out the command themselves using Google Search or Stack Overflow. As one participant put it, “Once I figured out how to use ChatGPT my time getting through challenges significantly reduced.” Multiple participants also found it helpful that they could ask ChatGPT to explainthe code it was providing.

    However, in later levels, participants had to perform more complex tasks such as decrypting files with appropriate keys or using found credentials with a Secure Shell (SSH) protocol to access other servers. Our participants found ChatGPT less helpful for these kinds of open-ended tasks as compared to ones where they simply needed it to provide a command. As one participant said, “I found that ChatGPT’s responses were not as helpful […] maybe because the problems were more complex.” Additionally, as participants advanced towards levels that more closely mirrored hacking tasks, they were more likely to run into ChatGPT’s safeguards. For example, asking questions such as “How do I get root [privileges]?” or “How do I perform an action as another user?” would often trigger safeguards in ways that requesting the model to write a command to find a particular string within a set of text files did not.

    Multiple participants noted the importance of “getting the right questions” to make ChatGPT work for their purposes. On day two, one participant described the difference as “ChatGPT may be easier to get specific answers when you have the right specific question, but it is difficult when you run into a wall that you can’t seem to find the right question to get around.” Participants also described feeling like they had a different level of understanding when they used ChatGPT as compared to Google. One participant said, “ChatGPT was way easier to resolve these puzzles, but working through Google and other types of online tools made me feel like I had a better understanding of what I was actually doing.”

    Impact

    In which an attacker performs actions to fulfill their goals within the information system, such as encrypting files for ransomware or exfiltrating files for data theft.

    Ransomware, in which actors encrypt the files on a system and demand payment for decryption, is an area of particular concern for how GAI capabilities may aid cyber crime. Online accounts describe using ChatGPT to generate code to implement the functionality of ransomware (finding, encrypting, and deleting files),37suggesting that it could provide modest benefit with this type of impact. However, it is important to note that in most of these cases, the interface refuses explicit requests to write ransomware. Instead, the operator must deconstruct the prompt into a series of tasks, such as a request to find files, then a request to encrypt them, and so on. As such, unsophisticated actors may receive less benefit, as they cannot simply ask the model to write the code for them, and must instead already understand its key functions. Additionally, the need to write custom ransomware code may not be a significant roadblock for many opportunistic cyber criminals: increasingly, groups are able to purchase malware, sometimes with accompanying infrastructure, from so-called “ransomware-as-a-service” providers.38

    Another type of potential impact is data exfiltration, or the theft of data from a system. Data exfiltration often goes hand-in-hand with the next activity on this list: evasion of defenses. Attackers who wish to exfiltrate a large volume of data often must conceal the exfiltration activity so that it can go on for long enough to transmit the desired data before defenders can detect and stop it. Attackers use a variety of means to covertly exfiltrate data, including transferring files through file transfer protocols or cloud services, hiding exfiltrated data in network traffic such as DNS or HTTPS requests, or stashing obfuscated data in file formats such as images or audio files.39 Little has been written about whether GAI models might unlock new ways to exfiltrate data more effectively. Some research has suggested that AI-generated images could be used to improve steganography (hiding data in ordinary files files).40 The NCSC report predicted that both sophisticated and unsophisticated actors could use AI for more effective exfiltration, but did not specify how this would occur in practice.41

    Evasion of defenses

    In which an attacker conceals their activities within a compromised information system to avoid detection.

    Across multiple phases of the attack lifecycle, a key question for attackers is how to conceal their presence within a compromised network for long enough to achieve their objectives. How could GAI systems help them do so?

    One sensational post from a cybersecurity researcher in 2023 described the ability to use ChatGPT to create detection-evading malware. However, the article makes clear that the human operator had knowledge of vendor detection systems and provided explicit prompts to ChatGPT asking it to add specific detection-evasion features such as a time-delayed start and obfuscated variable names.42 That is, these evasion tactics were not features that the model conceived of on its own. Based upon such cases, LLMs could potentially benefit experienced attackers by helping them more efficiently write custom code to evade certain types of defenses. However, it is too soon to claim that it can help inexperienced operators do so or that it is better at writing such features than a sophisticated hacker.

    Another potential application of LLMs in this context is for polymorphic malware: malicious code that lacks a consistent signature, making it more challenging to detect for defensive systems such as anti-virus software.43 Security researchers have begun publishing proof-of-concept versions of AI-based polymorphic malware, such as programs that call out to the ChatGPT API to receive newly generated malicious code for execution.44 Asking a GAI system to dynamically generate code means that the malicious instructions are stored in memory only, which avoids creating a signature that might trigger defensive systems. As a result, an Endpoint Detection and Response (EDR) system reportedly failed to flag the malware. While this threat is concerning, other security researchers have pushed back on the claims, suggesting that signature-based detection is far from the only means by which modern EDR systems identify malicious code, meaning polymorphic malware would not represent an “uncatchable” threat. Polymorphic malware of this type is not necessarily autonomous, as the human operator may still maintain primary control over the process such as by directing the prompts the model uses. However, the potential to use GAI systems and their code generation abilities as a component of more autonomous malware raises significant risks concerning the evasion of defenses. These risks are discussed in the following section.

    The report from the NCSC did not cover evasion of defenses as a separate set of activities; however, it did iterate that advanced operators would be “best placed to harness AI’s potential in advanced cyber operations […] for example use in advanced malware generation.”45 This report’s findings suggest that autonomy could be a meaningful enabler for advanced malware, with the caveat that the timeline for the development of reliable is highly uncertain.

    Autonomy

    Autonomy is not a property of the MITRE ATT&CK cycle but is relevant for assessing the risk and efficacy of GAI systems for hacking. Autonomy is defined in the military context as systems that can act “with delegated and bounded authority,” in which an autonomous system takes certain decision steps usually reserved for human decision-makers without explicit direction.46 In the AI context, the term describes systems that can identify and take actions to achieve some higher-level goal. In the offensive cyber context, this could describe the ability of a GAI system to identify the steps required to perform a task such as accessing a target information system, and then to iteratively write, run, and evaluate the results of the code until it has achieved its objective.  

    Ongoing work has explored the potential of “autonomous agents,” software systems that use an LLM to take iterative, independent steps to achieve a user-defined goal. Generally, these models work through the “chain-of-thought” prompting, in which an LLM iteratively prompts itself to decide what to do next in service of a goal and then produce the outputs it needs to achieve that goal.47 Typically these autonomous agent systems combine a GAI model that is used for reasoning and input creation with other software-defined capabilities that allow the agent to achieve its goals, such as a code interpreter through which it can run the code it generates or an API it can use to search the web for a query it writes.

    While the initial wave of excitement around these prototypical autonomous agents tempered as it became clear they are not yet effective enough to autonomously achieve complex tasks, commercial interest in AI agents has persisted.48 Given this enthusiasm as well as the obvious business cases—such as AI assistants capable of performing tasks like booking flights or scheduling meetings—it is likely that the field of autonomous systems will continue to attract funding and attention. As these systems operate by generating and executing code, they have a host of potential impacts on the cybersecurity landscape. Leaving aside the obvious cybersecurity risks associated with allowing an unsupervised software system to make changes or modifications to its operator’s machine or to conduct activities on the internet on their behalf, such systems could also be useful for information security and other hacking, especially as GAI models grow more capable.

    For some of the phases, including Reconnaissance and Initial Access, the primary benefit afforded by autonomous systems is the combination of scalability and adaptability—the ability for one operator to launch multiple autonomous processes, each capable of executing a complex action sequence. A malicious hacker could use multiple autonomous bots to conduct bespoke phishing campaigns or spin up a set of agents to adaptively probe many different information systems for vulnerabilities.

    For other stages, such as Evasion of Defenses, autonomous agents could offer benefits not only in terms of scalability but also by virtue of their autonomy itself. For example, cybersecurity defenders can often detect and impede a hack in progress by spotting unusual connections that malicious actors establish between the compromised system and external command-and-control servers that provide instructions or receive exfiltrated data.49 Advanced cyber threat groups have devised increasingly complex ways to camouflage these connections to maintain persistence in a compromised system. If LLMs could be used to create autonomous malware that takes multiple adaptive steps within an information system without needing to call out to an external system for instructions, this could increase such actors’ ability to perform other actions, such as escalating privileges, while avoiding detection.50 This risk would be heightened if attackers can build malware using GAI models that can run locally on compromised systems since this would allow the malware to generate code and instructions without needing to establish a connection to an internet-based API that could potentially be spotted by defenders. This seems likely to be possible in the future, as there has been substantial interest and development focused on adapting LLMs to be run locally on consumer devices.51

    These possible risks associated with autonomy are not yet realized because autonomous agents are not yet particularly reliable. An evaluation of 27 different LLM models (embedded into an autonomous agent framework) on a range of tasks found that even the strongest (GPT-4) was not yet a “practically usable agent.”52 The GPT-4-based agent had a success rate of 42 percent on command-line tasks (such as answering questions about or modifying file information) and 29 percent on web browsing tasks (such as finding a specific product on a site and adding it to the user’s cart). These rates are still, in some sense, impressively high, and might be sufficient for actors to use autonomous agents for certain phases of the lifecycle such as reconnaissance, where failure is not very costly. However, higher reliability (and perhaps greater task-specific sophistication) is necessary before would-be attackers can trust autonomous agents to reliably perform all the steps of the attack lifecycle.

    Autonomy would be relevant for both enhancing sophisticated malicious cyber actors and expanding the set of actors. For sophisticated actors, the degree of improvement would depend heavily on the capabilities of the autonomous agents. The risks would be heightened if bots were near to or better than sophisticated human abilities and thus capable of undertaking many different paths to compromise a target system at machine speed. Less sophisticated actors could obviously benefit from the same improvements (if they were able to access and direct such systems with equal efficacy) but also might be perfectly well-served by an army of simple bots capable of testing systems for common vulnerabilities and performing standardized actions such as ransomware or data exfiltration. Here, as is true throughout considerations of autonomy, the devil will be in the details, namely the tasks in which bots are most effective and how clever and adaptable they are when confronted with the real-world diversity of information systems and cyber detection and defense measures.

    These risks must be considered in the ongoing development of autonomous agent frameworks, products, and evaluations, especially for agents and systems that relate to cybersecurity. The development of autonomous agents for cyber defense may also risk creating tools with powerful capabilities for cyber offense, such as those capable of hunting through code for vulnerabilities and automatically writing patches (or instead, exploits). Additionally, the incorporation of automation into cyber defense will create new potential attack surfaces, as hackers might seek to directly target and co-opt AI-based cyber defense systems for their own ends using methods like prompt injection. Policymakers should be careful to ensure that ongoing research into autonomy, especially autonomy in the cyber context, is well-scoped and potentially released with safeguards to limit its potential dual use for malicious hacking. Researchers should study not only how to further develop autonomy, but also how to develop and deploy it safely, such as by examining which cybersecurity tasks, and to what level of autonomy, can be safely delegated to autonomous systems.

    Policy directions

    Overall, GAI systems appear to have considerable potential utility for both expanding the set of cyber actors and enhancing the operations of sophisticated hackers in different ways, but the degree to which this potential is realized in current models is more mixed. For example, models do not yet appear to have the level of reliability needed to assist novice hackers from start to finish or to operate autonomously. Both sophisticated and unsophisticated operators, however, stand to benefit from current and developing capabilities in AI models that make them useful for social engineering attacks and open-source intelligence gathering. However, the prognosis for other activities, such as vulnerability identification or the development of more advanced tools for lateral movement or data exfiltration is more uncertain.

    Table 4: Summary of level of capability enhancement from GAI across different phases of the cyberattack lifecycle

    However, this reality is not permanent. The AI field has moved in fits and starts with the development of new architectures and discoveries about the power of factors such as scale. The current level of interest and investment in GAI and use cases such as autonomous agents make it easy to imagine that one or more paradigmatic steps forward in the way models are constructed or trained may emerge in the not-so-distant future, changing the answers to the questions posed here. In addition, the capabilities of AI systems trained using the now-dominant unsupervised learning paradigm are often discovered rather than explicitly designed by their creators; thus, additional use cases and risks alike will likely continue to emerge through the decentralized testing and use of GAI systems.

    Taken together, these factors provide an opportunity as well as a challenge: can policymakers create and calibrate a legal regime that is ready to manage the risks of AI with hacking capabilities, while allowing and encouraging safe innovation in the software realm? The following recommendations propose policy approaches to manage known and knowable risks while seeking to protect the positive impacts arising from AI innovation. Where applicable, they also discuss the recommendation of these intersections with major areas of policy effort such as the recent Executive Order on AI in the United States.53

    1. Develop testing standards for leading-edge models that assess cyber risks across different phases, actors, and levels of autonomy, prioritizing transparency and participation. 

    The findings from this report illustrate that the benefits GAI systems deliver to hackers will be unevenly distributed across different activities in the attack lifecycle and will differ depending on an actor’s methods of operation, relative strengths and limitations, and the resources at their disposal, both in terms of traditional tools and their ability to leverage and customize GAI-based tools. As governments move to establish bodies, authorities, and standards to test the safety and potential impacts of AI systems, these efforts should use these empirically grounded models of the cyberattack lifecycle to examine the full spectrum of ways that AI might influence cyber tactics and techniques preferred by different categories of actors. Testing frameworks should account for capabilities that might drastically lower barriers to entry for low-skill actors or allow such actors to significantly speed up or scale their activities, and for ways in which AI systems might afford substantially new or above-human capabilities to sophisticated actors. For both actor profiles, autonomy is a significant area of concern, so leading-edge models should be tested for their capabilities in autonomy, including when they are incorporated into current autonomous agent frameworks. 

    In the United States, a comprehensive step towards government-required testing of AI system capabilities came in the recent AI Executive Order, which directed the secretary of commerce to use the Defense Production Act to require companies developing “potential dual-use foundation models” to provide the federal government with information about such models, including the results of red-teaming or adversarial testing.Dual-use foundation models are defined in the EO as general-purpose models trained using unsupervised learning that have “a high level of performance” at tasks that pose a threat to national security, including by helping automate sophisticated cyberattacks, and the Commerce Department will be able to develop definitions and thresholds for the models that will be subject to this reporting requirement Eventually, the National Institute of Standards and Technology (NIST) will develop a standard for red-team testing which AI developers will be required to use in these reporting requirements. The EU’s AI Act appears poised to require general-purpose AI models posing a “systemic risk” to uphold additional standards including red-teaming and adversarial testing,54 and the Bletchley Agreement signed by twenty countries at the UK’s Safety Summit emphasizes the responsibility of leading-edge model developers to perform and share the results of safety testing.55

    Standards developed for adversarial testing or red teaming models for cyber risk should draw from models of the cyberattack lifecycle like the ATT&CK framework to test how GAI models could assist with different potential activities and phases of a cyberattack, allowing decision-makers to examine the results with more specificity and consider how they differentially impact the risks created by a model. Key questions should include:

    • Which steps or phases in the attack lifecycle can the tool support, and what is the level of risk or harm of improvements to that stage or activity?
    • To what degree could the model enable an experienced actor to perform the task or phase more effectively? That is, how does the model’s effectiveness compare to an experienced human operator or existing available tools?
    • To what degree could the model enable an inexperienced actor to perform the task or phase more effectively? That is, how does the model’s capability compare to an unskilled human operator or easy-to-use existing tools?
    • To what extent is the model (alone or when combined with autonomous agent frameworks) capable of chaining together multiple phases of the attack lifecycle?

    This report suggests a few areas of particular risk that, should they manifest, might necessitate more urgent policy interventions. One such area is vulnerability discovery—models capable of discovering zero-day vulnerabilities more efficiently than either humans or existing tools would create significant risk by potentially unlocking new vectors for sophisticated actors to attack sensitive and high-value systems. The ability for AI systems to create synthetic videos of individuals indistinguishable from real videos, or to falsify other forms of biometric authentication, could also create significant cyber risk without clear mitigation paths. Both capabilities present risk as they would offer substantial new capabilities for hackers to gain access to information systems. Finally, models capable of autonomously chaining together multiple phases of a cyberattack create extreme risk, because they could assist in scaling unskilled actors’ operations, afford new capabilities in defense evasion to sophisticated actors, and create significant challenges to securing and containing models that could someday exhibit emergent self-directed behavior.

    As the AI Executive Order suggests, and as the findings from this report reinforce, adversarial testing of models’ cyber tactics, techniques, and autonomous potential should be performed and reported using versions of models both with and without safeguards. Our experiment and countless other accounts show that safeguards can often be evaded by changing the phrasing of requests, as well as by through more clever and technical approaches, such as “jailbreak” prompts.56Policymakers should presume that safeguards do little to change the baseline risk created by a model’s capabilities unless and until model developers offer much more conclusive and thorough proof to the contrary.

    If models capabilities continue to increase in these high-risk areas, lawmakers should consider enshrining requirements for cyber-related safety testing into the pre-release process for models. The United Kingdom’s recent AI Safety Summit culminated in an agreement by AI companies to allow governments, including the United States and United Kingdom, to test their models for potential national security risks before release.57 However, this requirement is not yet backed up with the force of law. The White House’s AI Executive Order also lacks an explicit structure for whether and how the government would prevent the release of a model with capabilities that create a high level of risk. An explicit legal framework tying together testing requirements and policy mechanisms for addressing high-risk capabilities will be a crucial next evolution of these efforts. One useful model for how this requirement could be constructed in law comes from another high-stakes software domain: medical device manufacturing. The Food and Drug Administration (FDA) has created extensive requirements for manufacturers of medical devices to perform and document cybersecurity risk management processes in the design and development of medical devices.58 The FDA can create such a regime because, crucially, it gates access to the market, allowing the agency to place the burden onto medical device makers to justify the adequacy of their cybersecurity testing regime rather than on the FDA itself to publish a one-size-fits-all set of testing standards. A long-term framework for managing the cyber risks associated with the most leading-edge models could take inspiration from this structure. 

    AI model testing as enshrined in the Executive Order and in subsequent legal structures for pre-release testing should be paired with requirements for public information sharing and structures that allow non-governmental entities to participate in testing. For example, the US government should develop and publicize a plan for how they will share the information they receive under the new Executive Order, designed to maximize transparency while accounting for potential countervailing factors like national security and proprietary or business-sensitive information. Additionally, the US Congress and other legislative bodies should consider mechanisms to facilitate access to cutting-edge models for independent testing and research by civil society organizations, academic researchers, and auditing firms outside of government. Many AI companies already invite domain experts to perform red-teaming and other evaluations before a model’s release; establishing this process in law would cement this good practice as a requirement in the model release lifecycle and ensure that experts have recourse to publicize or report adverse findings. So long as the companies developing AI models have sole discretion over which auditors are granted access, auditors will face perverse incentives to avoid publicizing negative findings for fear of losing privileged access.

    Throughout the process of creating testing standards and policy mechanisms for acting upon the results of testing, policymakers should be attuned to the potential risks while also realistic about the fact that society has implicitly decided to allow the development of other technologies that materially aid malicious hackers—everything from Google Search itself to port sniffers and vulnerability scanners—in recognition of the fact that these technologies also provide a myriad of other benefits. While it makes sense to ensure new AI technologies do not change the cybersecurity risk landscape faster than society is equipped to manage, policy should also be premised on a clear-eyed and empirically grounded accounting of the true capabilities of these systems as well as the existing ecosystem where they are utilized. The need to carefully separate real risk from generalized excitement and anxiety about model capabilities is another reason to invest in developing multifaceted testing standards informed by real cyber tactics and techniques.

    2. Assess and manage cyber risk while protecting the equities of open model development.

    While the findings from this report indicate some areas of present and future concern—such as the ability to generate synthetic media useful for social engineering or autonomous system operations—they also indicate that there are still reasons to be cautious about claims that GAI models in their current form create unique risks in the hacking context. Existing (non-AI-based) software tools continue to offer would-be hackers assistance above and beyond that provided by GAI models for many activities. As policymakers consider the panoply of results likely to emerge under new AI testing requirements, they should take inspiration from the information security community’s general bias towards allowing openness and the publication of new tools with both offensive and defensive capabilities59 by ensuring AI safety regimes are compatible with open-sourcing and other public release of GAI models, absent evidence of a step-change in GAI models’ hacking assistance capability.

    AI models can be made more open in a variety of ways, including by publishing their source code, trained weights, or training data.60 Open-source or otherwise publicly available AI models create many potential benefits: they allow researchers to investigate AI systems’ properties and risks on topics from cybersecurity to bias and fairness, and support experimentation, innovation, and entrepreneurship by allowing developers to build a myriad of applications atop AI systems without paying enterprise prices for each API query.61 At the same time, open models face unique governance challenges, as it is harder for their creators to impose safeguards through API restrictions, and because the ability of users to repurpose and modify open-source code as they see fit enables the potential removal of creator-imposed safeguards.62

    In light of these benefits and risks, policymakers have begun to grapple with how to account for open-source models in AI safety and risk-management regimes. The recent AI Executive Order directed the Department of Commerce to develop a report on the risks and benefits of “dual-use foundation models with widely accessible weights” and associated potential policy approaches.63 The leaked final text of the EU’s long-negotiated AI Act also directly addresses the applicability of safety standards to open models, largely carving them out of many of the regime’s requirements, with the exception of open models that pose a “systemic risk.”64 These models are defined as those with “high impact capabilities,” defined in the text as those exceeding a certain compute threshold. The blended model adopted by the AI Act seems largely correct: the most capable models cannot be carved out of testing requirements, regardless of whether they are open source, but policymakers should seek to reduce compliance burdens on open model developers outside of those operating at the most leading edge of model development.

    Given this report’s findings that many model outputs are useful for hacking but hard to restrict due to their similarity to benign use cases, and given the many well-documented ways to circumvent safeguards in closed models,65 the US Department of Commerce and other policymakers seeking to design policy regimes for open models should regard with skepticism arguments from large labs that models with advanced capabilities are safe for release through an API but not for their competitors to open source.66The policy conversation should place the onus on these large labs to demonstrate that their safeguards, API filters, and alignment techniques are robustly preventing user abuse before accepting arguments that the lack of such features makes open-source AI inherently unsafe. At the same time, policymakers will need to grapple with the fact that there may be some important safety precautions that do not work, or do not work in the same way, for open models. For example, it is still unclear whether it is possible for open models to include output watermarks that would be impossible for users to remove. The forthcoming report from the Department of Commerce and other areas of work should delineate key risk-management technologies for AI models and analyze which of these are compatible with open models, providing a more reasoned assessment of the potential risks as compared to closed models and a wider menu of policy options.

    Additionally, including or excluding open models from governance regimes is not the only way for policymakers to support the equities of open developers and the safety of such models. One way to make testing requirements more equitable for the open-source ecosystem would be for the government to provide funding grants or technical infrastructure to help open model developers comply with standards. Resources and funding that organizations like the National Science Foundation have already programmed for AI-related research could be directed towards developing and evaluating anti-abuse safeguards for open models.67Government agencies beyond the Department of Commerce should also begin the process of engaging with open-source AI stakeholders to build trust and buy-in around governance regimes, including small developers, open-source AI users, and companies engaging in substantial open-source development or that host open-source models.  

    In short, where policymakers consider risk management regimes that might limit model open-sourcing or place significant barriers on open-source model developers, it is essential that such determinations are not based on fear and hype about potential capabilities but instead on empirical testing results and a clear-eyed comparison of how such risks compare to existing software tools and the tradeoffs of hampering greater transparency and openness.

    3. Mobilize resources to speed up technical and standards-setting work on AI content labeling with a focus on implementation potential.

    This report found that one area of present risk concerning the intersection of GAI capabilities and hacking is the ability to synthesize images, audio, or video useful for impersonation attacks and social engineering. Depending on these tools’ sophistication and accessibility, they could be useful to sophisticated hackers and opportunistic fraudsters alike. Policymakers in the United States and beyond are already aware of the need for labeling AI content on social media and communications platforms, as reports have proliferated of the use of AI-generated images in disinformation campaigns68 and AI-generated voices in scams.69 Methods to appropriately label AI-generated content will be key risk mitigations for cybersecurity in addition to helping combat misinformation. The United States and other governments should rapidly speed up investments in research and development of methods for AI content labeling to make it possible for policymakers to develop and begin implementing workable standards.

    Proposed solutions to the problem of appropriately labeling AI-generated content include detecting the content, watermarking (embedding an unremovable identifier that content is AI-generated), or certifying the authenticity and provenance of non-AI-generated content (often via cryptography). Each of these approaches has its limitations. Currently, the outright detection of AI content suffers from poor accuracy. Researchers have found many ways to break existing proposed AI watermarks,70 and watermarking as a general approach relies upon the compliance of AI developers with watermarking standards, which poses practical enforcement challenges related to jurisdictional issues (as some model developers may be based outside of the US) and open-source models (where model developers cannot prevent users from tampering with the watermarking functionality71). Authentic content certification may be the most robust solution, and there are already proposed technical standards for content provenance certification,72 but it also faces significant challenges around implementation feasibility given the need to embed certification processes in the many different technologies through which “content” can be created and modified, from digital cameras to image editors and social media sites.

    In part because of these notable limitations, it is unclear which solution is most effective, or whether the best approach will be to use multiple mechanisms in tandem. Policy should drive further research investment into this area on all fronts until it becomes clearer which avenue is most promising. The ultimate goal should be the creation of a set of standards that can be widely used for labeling AI-generated content on communication platforms such as email, videoconferencing software, and social media platforms.

    The recent Executive Order on AI tasked the Department of Commerce with producing a report on the current state of AI watermarking and authentic content labeling, after which the Department of Commerce will work with the Office of Management and Budget to develop “guidance” for the federal government based on the report’s findings.73 This is an important step: the US government has already (wisely) begun to require cryptographic digital signatures on certain kinds of government communications such as subpoena orders issued by the Department of Homeland Security74 and requirements to include provenance certification for other government-generated content should follow. However, watermarking and content authentication requirements will need to be implemented far beyond the public sector to meaningfully reduce associated cybersecurity risks. Successfully detecting and labeling AI-generated content will require not just the cooperation of AI developers but also the myriad of different technologies and platforms where content is created and transmitted, from social media sites to email clients and mobile messaging protocols.

    Currently, the White House’s voluntary commitments on AI include a promise that AI companies will develop and implement watermarking.75/ However, a system of differing watermarks will present implementation challenges for entities tasked with detection and labeling. Lawmakers in the United States and beyond should instead push for the development and implementation of standardized watermarks or content provenance certification across AI developers. Congress could also require the National Institute of Standards and Technology (NIST) to develop such standards. Alternatively, it may be better if the US could participate in and adopt standards emerging from a global body, such as the International Organization for Standards.

    To make standardization possible, more research and development into the technical measures of detection, watermarking, and provenance certification will be required. This mobilization should begin now. Contests like the Federal Trade Commission’s Voice Cloning Challenge are an important example of ways to begin mobilizing more resources to tackle the challenge of AI-generated audio deepfakes.76 Policymakers should also consider approaches to force companies to internalize more of the societal costs that will be associated with addressing the problem of AI-generated content in the years to come, such as by imposing a tax on AI companies. “Pigouvian taxes” are generally designed to reimpose onto companies the costs of negative social externalities created by their products; this tax would be akin to a pollution task but instead pay for the negative impacts of polluting the information environment. Some of the revenue generated by such a tax could potentially be directed toward investments in federal research to develop AI labeling solutions. Government research funding should also be directed towards developing prototypes for the implementation of watermark detection methods or legitimate content certification in communication platforms, such as examining whether there are ways to implement such features in end-to-end encrypted systems that are wholly compatible with their privacy and confidentiality guarantees.

    4. Begin investing in policy and technical measures to manage risks arising from autonomous agents.

    Significant autonomous capabilities in AI models would create substantial new risks in the cyber domain. Yet, it is clear that many AI companies see agentic, empowered AI systems embedded within other systems or software as the next frontier in AI development.77 Given the lead time required to develop new technical mitigations and policy frameworks, policymakers should start investing in developing these mitigations and frameworks now. Priorities areas should include research into the best ways to create an internet that can robustly manage autonomous cyber agents, the development of legal thinking around liability for cyber-capable autonomous systems, and ongoing engagement with international partners around the responsible use of such systems by nation-states. Questions around autonomy have been addressed little by recent policy documents such as the Executive Order on AI. While assessing the capabilities of models themselves is a key step forward, there are myriad risks from increasing autonomous capabilities in these systems that are not addressed by testing requirements alone.

    The web of the future will need to be safe, usable, and resilient in the face of continuous interactions with autonomous agents or bots. Researchers should begin to examine points of potential weakness in this infrastructure, as well as ways in which autonomous agents or web infrastructure can be designed to minimize cyber risks. For example, researchers could explore systems that require bots to attest to their AI status and define safe ways for them to interact with web infrastructure. Or as is the case with content authentication, it may be infeasible to require all AI systems to self-declare and instead may be more prudent to seek safe and privacy-preserving ways for human users to verifiably attest to their humanity as they use the internet. Many governments, including that of the United States, have struggled in this domain for a long time—perhaps this moment can be the impetus they need to refocus on the development of secure tools and software to attest to digital identity.

    Another area of focus is clarifying liability for cyber harms caused by autonomous systems. There are many players in this equation—the developer of the LLM, the developer of the agent framework, and the user—and it is not yet clear where liability for bad outcomes rests. There are also tradeoffs in terms of different actors’ ability to prevent cyber harms from arising from these systems. Such frameworks will also need to account both for intentional or criminal harms and unintentional consequences. Researchers have already found evidence of the ways that LLMs can be vulnerable to prompt injection and other attacks, which could turn the AI models themselves into a vector for cyberattacks as well as a tool. While it is not yet clear which actors are best positioned to assume responsibility, policymakers should be actively considering the question, at risk of lapsing into the world of disclaimed liability that has already bedeviled much of the software ecosystem.

    Finally, the US should work with its allies and partners to establish norms around the use of autonomous offensive cyber weapons, in the same way it has led efforts to develop and define norms of responsible state behavior in cyberspace.78Policymakers looking for similar frameworks could take a page from the Department of Defense, which outlines governance structures and approval processes for the use of autonomous kinetic weapons.79 These policies do not apply to autonomous cyber weapons—an implicit recognition that some forms of malware like computer worms already operate semi-autonomously—emphasizing the necessity of coalescing around shared definitions and frameworks for understanding levels of autonomy in cyber weapons and agreeing on risk management practices.

    Conclusion

    The intersection of cybersecurity and AI is an area of much excitement, interest, and anxiety. Current AI models are information systems rather than physical ones, and thus we should expect that their fastest areas of integration and impact will be with other information systems. As such, it is natural to wonder how such systems might be able to affect technology against our will. Cyber is also an arena of direct, offensive versus defensive competition, between states or cyber criminals and companies, and thus will be a sector ripe for experimentation and innovation in and around AI for the purposes of gaining an upper hand.

    LLMs and their ability to produce code have supercharged this excitement, as well as the accompanying concern. But LLMs, by the very nature of their training paradigm, are elusive in attempts to immediately appraise their capability for certain tasks. They are master storytellers, paragons of the reasonable-sounding response. Yet, this appearance of competence is sometimes the truth and sometimes wholly fictional. Complicating the picture is that AI developers have vested commercial interests in over-promising the capabilities of their systems, and, perhaps, in portraying risks in ways that advance their policy goals. Amongst the excitement, policymakers have the unenviable task of discerning fact from science fiction and attempting to set reasonable guardrails that will protect the nation without unreasonably curtailing the development of a technology that seems likely to have major long-term economic and strategic implications.

    The potential utility of GAI systems for developing or supporting offensive cyber capabilities has emerged as an early area in which concern and attention have grown. Yet often missing from these discussions is a sense of structure, a set of empirical ways to assess the capabilities of models against what we know about cyberattacks. This paper is an attempt to bridge that divide. It finds that, at present, empirical testing indicates that GAI provides certain benefits for some kinds of well-scoped tasks but that it is far from ready to independently enable new hackers or to successfully conduct a hack itself—in part due to its well-known challenges with accuracy.

    At the same time, the vast amount of attention and resources pouring into the development of generative AI, and in particular into coding AI, means that this center will not hold forever. Policymakers should be skeptical yet open-minded, ready for new generations of current models or for new paradigms that will upend this calculus entirely. The government should begin taking steps now to manage known or foreseeable risks, such as the use of AI-generated content for social engineering and the creation of autonomous agents that interact with web systems and the computers connected to them. Finally, policymakers should consider how to establish regulatory regimes designed to empirically test for worrisome capabilities in ways that maximize transparency and public participation to drive accountability by the largest AI labs, while seeking to calibrate such regimes to protect the open development of AI models and the good they create.

    Leaders should view the current moment in context as one step in a long history of attempts to develop intelligent systems, while also seeing this as an opportunity to define forward-looking and flexible regulatory regimes that allow society to manage the potential risks arising from AI systems now and into the future. Cyber is but one example of a high-stakes domain where policymakers can seek to balance reality and the risks of the future, but only if they are willing to see these technologies as they are while trying to understand them as they may be.

    Acknowledgements

    First and foremost, our thanks go to Sara Ann Brackett, Will Loomis, Jen Roberts, and Emma Schroeder, for their curiosity, perseverance, and good humor as they participated in the experiment described in this report. The authors would also like to thank Tim Fist, Harriet Farlow, and Katie Nickels for the thoughtful feedback they provided on various versions of this document.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1     “Hallucination” is a term for the false, misleading, or otherwise incorrect information that GAI systems will generate and state as fact.See Matt O’Brien and the Associated Press, “Tech Experts Are Starting to Doubt That ChatGPT and A.I. ‘hallucinations’ Will Ever Go Away: ‘This Isn’t Fixable,’” Fortune, August 1, 2023, https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/
    2    Markus Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety,” arXiv, November 7, 2023, http://arxiv.org/abs/2307.03718
    3    “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
    4    Philippe Lorenz, Karine Perset, and Jamie Berryhill, “Initial Policy Considerations for Generative Artificial Intelligence,” OECD, https://doi.org/10.1787/fae2d1e6-en.
    5    Ben Buchanan et al., “Automating Cyber Attacks,” Georgetown Center for Security and Emerging Technology, November 2020, https://cset.georgetown.edu/publication/automating-cyber-attacks/.
    6    “The Near-Term Impacts of AI on the Cyber Threat,” UK National Cyber Security Center, January 24, 2024, https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat.
    7    “MITRE ATT&CK,”, MITRE, https://attack.mitre.org/
    8    Emily Ferguson and Emma Schroeder, “This Job Post Will Get You Kidnapped: A Deadly Cycle of Crime, Cyberscams, and Civil War in Myanmar,” Atlantic Council Cyber Statecraft Initiative, November 13, 2023, https://dfrlab.org/2023/11/13/this-job-post-will-get-you-kidnapped/.
    9    “OverTheWire: Wargames,” https://overthewire.org/wargames/
    10    “How ChatGPT Can Be Used in Cybersecurity,” Cloud Security Alliance, June 16, 2023, https://cloudsecurityalliance.org/blog/2023/06/16/how-chatgpt-can-be-used-in-cybersecurity/
    11    Yusuf Mehdi, “Reinventing Search with a New AI-Powered Microsoft Bing and Edge, Your Copilot for the Web,” Microsoft, February 7, 2023, https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/
    12    “Open-Source Intelligence (OSINT),” Imperva, n.d., https://www.imperva.com/learn/application-security/open-source-intelligence-osint/
    13    Sudip Sengupta, “Port Scan Attack: Definition, Examples, and Prevention,” September 12, 2022, https://crashtest-security.com/port-scan-attacks/
    14    “What is Vulnerability Scanning? [And How to Do It Right],” HackerOne, June 18, 2021, https://www.hackerone.com/vulnerability-management/what-vulnerability-scanning-and-how-do-it-right
    15    Shahzeb Says, “A Quick Guide to Network Scanning for Ethical Hacking,” Edureka, April 3, 2019, https://www.edureka.co/blog/network-scanning-kali-ethical-hacking/
    16    Sheetal Temara, “Maximizing Penetration Testing Success with Effective Reconnaissance Techniques Using ChatGPT,” arXiv, March 20, 2023, https://doi.org/10.48550/arXiv.2307.06391
    17    “The Near-Term Impacts of AI on the Cyber Threat,” UK National Cyber Security Center
    18    Fredrik Heiding et al., “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models,” arXiv, November 30, 2023, https://doi.org/10.48550/arXiv.2308.12287; Pyry Åvist, “Who’s Better at Phishing, Humans or ChatGPT?,” HoxHunt, March 15, 2023, https://www.hoxhunt.com/blog/chatgpt-vs-human-phishing-and-social-engineering-study-whos-better
    19    Heiding et al., “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models.
    20    Kyle Barr, “Hackers Use Deepfakes of Binance Exec to Scam Multiple Crypto Projects,” Gizmodo, August 23, 2022, https://gizmodo.com/crypto-binance-deepfakes-1849447018
    21    Sharyn Alfonsi, “How Con Artists Use AI, Apps, Social Engineering to Target Parents, Grandparents for Theft, CBS News, August 27, 2023, https://www.cbsnews.com/news/how-con-artists-use-ai-apps-to-steal-60-minutes-transcript/
    22    Shawn Donnan and Dina Bass, “How Did ID.Me Get Between You and Your Identity?,” January 20, 2022, https://www.bloomberg.com/news/features/2022-01-20/cybersecurity-company-id-me-is-becoming-government-s-digital-gatekeeper
    23    Kelly Sheridan, “Phishing Emails That Invoke Fear, Urgency, Get the Most Clicks,” Dark Reading, October 11, 2017,  https://www.darkreading.com/endpoint-security/phishing-emails-that-invoke-fear-urgency-get-the-most-clicks
    24    “The Near-Term Impacts of AI on the Cyber Threat,” National Cyber Security Center
    25    “Cyber Grand Challenge,” Defense Advanced Research Projects Agency, https://www.darpa.mil/program/cyber-grand-challenge
    26    Justin Doubleday, “DARPA Competition Will Use AI to Find, Fix Software Vulnerabilities,” Federal News Network, August 9, 2023, https://federalnewsnetwork.com/artificial-intelligence/2023/08/darpa-competition-will-use-ai-to-find-fix-software-vulnerabilities/
    27    Mark Chen et al., “Evaluating Large Language Models Trained on Code,” arXiv, July 14, 2021, https://doi.org/10.48550/arXiv.2107.03374
    28    Anton Chekov, Pavel Zadorozhny, and Rodion Levichev, “Evaluation of ChatGPT Model for Vulnerability Detection,” arXiv, April 12, 2023, https://doi.org/10.48550/arXiv.2304.07232
    29    Josh Achiam et al., “GPT-4 Technical Report,” arXiv, December 18, 2023, https://doi.org/10.48550/arXiv.2303.08774
    30    Yizheng Che et al., “DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection,” arXiv, August 8, 2023, https://doi.org/10.48550/arXiv.2304.00409
    31    Pablo Villalobos, “Scaling Laws Literature Review,” Epoch, January 26, 2023, https://epochai.org/blog/scaling-laws-literature-review
    32    Josh Achiam et al., “GPT-4 Technical Report.
    33    Diego Tellaroli, “Using ChatGPT to Write Exploits,” System Weakness, March 23, 2023, https://systemweakness.com/using-chatgpt-to-write-exploits-4ac7119977
    34    “The Near-Term Impacts of AI on the Cyber Threat,” National Cyber Security Center
    35    Bart Lenaerts-Bergmans, “What Is Lateral Movement?,” Crowdstrike, April 17, 2023, https://www.crowdstrike.com/cybersecurity-101/lateral-movement/
    36    “The Near-Term Impacts of AI on the Cyber Threat,” UK National Cyber Security Center
    37    Mark Stockley, “ChatGPT Happy to Write Ransomware, Just Really Bad at It,” Malwarebytes, March 27, 2023, https://www.malwarebytes.com/blog/news/2023/03/chatgpt-happy-to-write-ransomware-just-really-bad-at-it
    38    Arianne Bleiweiss, “Off-the-Shelf Ransomware Source Code Is a New Weapon for Threat Actors,” KELA Cyber Threat Intelligence, January 15, 2024,  https://www.kelacyber.com/off-the-shelf-ransomware-source-code-is-a-new-weapon-for-threat-actors/
    39    Anusthika Jeyashankar, “The Most Important Data Exfiltration Techniques for a Soc Analyst to Know,” Security Investigation, November 3, 2023, https://www.socinvestigation.com/the-most-important-data-exfiltration-techniques-for-a-soc-analyst-to-know/
    40    Christian Schroeder de Witt et al., “Perfectly Secure Steganography Using Minimum Entropy Coupling,” arXiv, October 30, 2023, https://doi.org/10.48550/arXiv.2210.14889
    41    “The Near-Term Impacts of AI on the Cyber Threat,” UK National Cyber Security Center
    42    Aaron Mulgrew, “I Built a Zero Day Virus with Undetectable Exfiltration Using Only ChatGPT Prompts,” Forcepoint, April 4, 2023, https://www.forcepoint.com/blog/x-labs/zero-day-exfiltration-using-chatgpt-prompts
    43    “What You Need to Know About Signature-based Malware Detection?,” RiskXchange, May 4, 2023, https://riskxchange.co/1006984/what-is-signature-based-malware-detection/
    44    Jeff Sims, “BlackMamba: Using AI to Generate Polymorphic Malware,” Hyas, July 31, 2023, https://www.hyas.com/blog/blackmamba-using-ai-to-generate-polymorphic-malware
    45    “The Near-Term Impacts of AI on the Cyber Threat,” UK National Cyber Security Center
    46    Mark Maybury and James Carlini, “Counter Autonomy: Executive Summary,” Defense Science Board, September 9, 2020, https://apps.dtic.mil/sti/citations/AD1112065
    47    For example, for a prompt such as “Write a weather report for San Francisco today,” the model might reason “I need to write a weather report for San Francisco today. I should use a search engine to find the current weather conditions.” This would then prompt the model to generate a search query and use it to search the internet using a pre-configured search action. For more see: “AutoGPT,” LangChain, https://js.langchain.com/docs/use_cases/autonomous_agents/auto_gpt.
    48    Anna Tong et al., “Insight: Race towards ‘autonomous’ AI Agents Grips Silicon Valley,” Reuters, July 18, 2023, https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/
    49    “Command and Control,” MITRE ATT&CK,” July 19, 2019, https://attack.mitre.org/tactics/TA0011/
    50    Ben Buchanan et al., “Automating Cyber Attacks”.
    51    Benj Edwards, “You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi,” Ars Technica, March 13, 2023, https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/
    52    Xiao Liu et al., “AgentBench: Evaluating LLMs as Agents,” arXiv, October 25, 2023. https://doi.org/10.48550/arXiv.2308.03688
    53    “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House
    54    Jillian Deutsch, “Here’s How the EU Will Regulate AI Tools like OpenAI’s ChatGPT and GPT-4,” Fortune, December 9, 2023,  https://fortune.com/2023/12/09/eu-tech-regulations-ai-openai-chatgpt-gpt-4/
    55    “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” Department for Science, Innovation & Technology, Foreign, Commonwealth & Development Office, Prime Minister’s Office, November 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
    56    Andy Zou et al., “Universal and Transferable Attacks on Aligned Language Models,” LLM Attacks, December 20, 2023, https://llm-attacks.org/
    57    Madhumita Murgia, Anna Gross, and Cristina Criddle, “AI Companies Agree to Government Tests on Their Technology to Assess National Security Risks,” Financial Times, November 2, 2023, https://www.ft.com/content/8bfaa500-feee-477b-bea3-84d0ff82a0de
    58    “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions,” US Food and Drug Administration, Center for Devices and Radiological Health, September 26, 2023, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions
    59    Geoff Duncan, “Could It Be… SATAN?” TidBITS, March 20, 1995, https://tidbits.com/1995/03/20/could-it-be-satan/
    60    Andreas Liesenfeld, Alianda Lopez, and Mark Dingemanse, “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators,” Proceedings of the 5th International Conference on Conversational User Interfaces, 2023, https://doi.org/10.1145/3571884.3604316
    61    Rishi Bommasani et al., “Issue Brief Considerations for Governing Open Foundation Models,” Stanford Center for Human-Centered Artificial Intelligence, December 13, 2023, https://hai.stanford.edu/issue-brief-considerations-governing-open-foundation-models
    62    Pranav Gade et al., “BadLlama: Cheaply removing safety fine-tuning from Llama 2-Chat 13B,” arXiv, October 31, 2023, https://arxiv.org/abs/2311.00117
    63    “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House
    64    Allen Overy, “EU AI Act: Key Changes in the Recently Leaked Text,” January 25, 2024. https://www.allenovery.com/en-gb/global/blogs/tech-talk/eu-ai-act-key-changes-in-the-recently-leaked-text
    65    Andy Zou et al., “Universal and Transferable Attacks on Aligned Language Models.”
    66    Cade Metz and Mike Isaac, “In Battle Over A.I., Meta Decides to Give Away Its Crown Jewel,.” The New York Times, May 18, 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html
    67    “NSF Announces 7 New National Artificial Intelligence Research,” National Science Foundation, May 4, 2023. https://new.nsf.gov/news/nsf-announces-7-new-national-artificial
    68    David E. Sanger and Steven Lee Myers, “China Sows Disinformation About Hawaii Fires Using New Techniques,” The New York Times, September 11, 2023, https://www.nytimes.com/2023/09/11/us/politics/china-disinformation-ai.html
    69    Carter Evans and Analisa Novak, “Scammers Use AI to Mimic Voices of Loved Ones in Distress,” CBS News, July 19, 2023, https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/
    70    Kate Knibbs, “Researchers Tested AI Watermarks—and Broke All of Them,” Wired, October 3, 2023, https://www.wired.com/story/artificial-intelligence-watermarking-issues/
    71    Siddarth Srinivasan, “Detecting AI Fingerprints: A Guide to Watermarking and Beyond,” The Brookings Institution, January 4, 2024, https://www.brookings.edu/articles/detecting-ai-fingerprints-a-guide-to-watermarking-and-beyond/
    73    “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House
    74    Stephen Davidson, “New U.S. Senate Bill Proposes Digital Signatures to Protect Sensitive Court Orders,” DigiCert, August 12, 2021, https://www.digicert.com/blog/new-senate-bill-proposes-digital-signatures-for-sensitive-court-documents
    75    “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” The White House, July 21, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai
    76    “The FTC Voice Cloning Challenge,” Federal Trade Commission, November 9, 2023, https://www.ftc.gov/news-events/contests/ftc-voice-cloning-challenge
    77    Kevin Roose, “Personalized A.I. Agents Are Here. Is the World Ready for Them?” The New York Times, November 10, 2023, https://www.nytimes.com/2023/11/10/technology/personalized-ai-agents.html
    78    “Joint Statement on Advancing Responsible State Behavior in Cyberspace,” US Department of State, September 23, 2019, https://www.state.gov/joint-statement-on-advancing-responsible-state-behavior-in-cyberspace/
    79    “Directive 3000.09: Autonomy in Weapons Systems,” US Department of Defense, January 25, 2023. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf

    The post Hacking with AI appeared first on Atlantic Council.

    ]]>
    Hinata-Yamaguchi in SCMP https://www.atlanticcouncil.org/insight-impact/in-the-news/hinata-yamaguchi-in-scmp/ Tue, 13 Feb 2024 16:31:00 +0000 https://www.atlanticcouncil.org/?p=747415 On February 12, IPSI Nonresident Senior Fellow Ryo Hinata-Yamaguchi was quoted in a South China Morning Post article, where he warned that Japan’s allies will be hesitant to share sensitive information if Japan cannot strengthen its cybersecurity measures. 

    The post Hinata-Yamaguchi in SCMP appeared first on Atlantic Council.

    ]]>

    On February 12, IPSI Nonresident Senior Fellow Ryo Hinata-Yamaguchi was quoted in a South China Morning Post article, where he warned that Japan’s allies will be hesitant to share sensitive information if Japan cannot strengthen its cybersecurity measures. 

    The post Hinata-Yamaguchi in SCMP appeared first on Atlantic Council.

    ]]>
    Atkins in Industrial Cyber https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-in-industrial-cyber/ Mon, 12 Feb 2024 22:08:00 +0000 https://www.atlanticcouncil.org/?p=747194 On February 11, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an Industrial Cyber article, where he discussed key takeaways related to protection of critical infrastructure and operational technology (OT) from recent Congressional hearings on cybersecurity.  

    The post Atkins in Industrial Cyber appeared first on Atlantic Council.

    ]]>

    On February 11, IPSI Nonresident Senior Fellow Victor Atkins was quoted in an Industrial Cyber article, where he discussed key takeaways related to protection of critical infrastructure and operational technology (OT) from recent Congressional hearings on cybersecurity.  

    The post Atkins in Industrial Cyber appeared first on Atlantic Council.

    ]]>
    The competition for influence in the Americas is now online https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-competition-for-influence-in-the-americas-is-now-online/ Mon, 12 Feb 2024 15:00:00 +0000 https://www.atlanticcouncil.org/?p=726580 China is expanding its footprint in Latin America and the Caribbeans’s emerging technology and critical infrastructure arenas, while Russia is engaging in foreign influence operations via the cyber domain. These challenges require a proactive stance by the United States.

    The post The competition for influence in the Americas is now online appeared first on Atlantic Council.

    ]]>
    The Biden administration identified China and Russia as strategic competitors in its 2022 National Security Strategy, and this rivalry with malign state actors is on full display in the western hemisphere. For decades, the People’s Republic of China (PRC) and Russia have been expanding their influence across the Americas via the diplomatic, informational, military, and economic domains. Now they are engaging in new areas to include emerging technologies, cyberspace, and outer space. These strategic competitors have been supporting autocratic regimes and threatening democracy, prosperity, and security in the region. The Chinese have aggressive investment and commercial projects underway to secure new markets and strategic resources to expand their global Belt and Road Initiative (BRI). Since the Cold War, Russia has challenged US influence in the Americas by sponsoring like-minded regimes including Cuba, Venezuela, and Nicaragua and fomenting unrest in democratic states. This article will examine PRC efforts to expand its economic footprint in the region in the emerging technologies and critical infrastructure arenas. It will also analyze Russian foreign influence operations in the cyber domain with disinformation campaigns intended to destabilize democratic governments allied with the United States. To counter growing Chinese and Russian influence in the cyber and emerging technologies domains in the region, the United States must adopt a more proactive stance by doubling down on constructive investment and commercial activities with partner nations, and educating the region on US engagements that contribute to economic growth and democracy, and discredit disinformation campaigns in the Americas.

    China’s dominance in emerging technologies in the Americas

    China has expanded its economic influence, becoming a key trading partner across Latin America over the past two decades. Since Beijing joined the World Trade Organization, the bilateral trade in goods increased significantly from $14.6 billion in 2001 to $315 billion in 2020. In the same period, the trade in goods between the United States and Latin America nearly doubled, reaching $758.2 billion from $364.3 billion.1 China has secured natural resources, investment opportunities, and markets for its exports across the region and now, twenty-one of the thirty-one Latin America and Caribbean (LAC) countries are participating in the Belt and Road Initiative. A major development interest of China has been in infrastructure, with the BRI providing financing for ports, transportation networks, power plants, and telecommunications facilities. China is now aggressively expanding its activities in emerging technologies and critical infrastructure across the region.

    While many insist Chinese interests in the Americas are purely economic, US Southern Commander Gen. Laura J. Richardson stated on March 2023 before Congress that the PRC now possesses the ability to extract resources, establish ports, and potentially build dual-use space facilities, which if true would make the area of responsibility of the US Southern Command the home of the most space facilities out of all the combatant commands. In addition, China is able to manipulate local governments through predatory investment practices.2 The US Southern Command believes that PRC activities have included investments across realms such as infrastructure and technology and malicious activities such as intellectual property theft, the spread of ensuring long-term CCP access and influence in the political, economic, and security sectors of the western hemisphere.3 More recently, China has expanded its ventures in the telecommunications, cloud computing, and surveillance sectors. Gen. Richardson repeatedly underscores the security threat posed by the expansion of activities from malign state actors like China and Russia in the region in her public remarks.

    Huawei, the Chinese technology firm, perhaps best exemplifies how dominant China is becoming in the emerging technology and communications space. Huawei controls a majority of the region’s telecommunications infrastructure and is poised to play a significant role in future technological developments, including 5G and the Internet of Things.4 Unfortunately, there are few competitive options to Huawei for 5G in terms of service and pricing available in Latin America. Huawei is lobbying hard to secure 5G contracts in countries, such as Colombia, and has established cloud computing with data centers in Mexico, Chile, and Brazil.5 Gen. Richardson has expressed concern that 5G deals between the region and China could undermine the information-sharing partnerships that the region holds with the United States.6

    Across the region, Huawei consistently offers incentives for companies to utilize Huawei clouds for their core processes and to store their intellectual property. In Panama, Huawei designed a digital free trade zone, consisting of a $38 million project with involvement from nearly one hundred companies in product distribution, as well as cloud computing services.7 According to Strand Consult, a research firm focused on the telecommunications industry, data centers built and run by Chinese firms, including Huawei, routinely process US internet traffic. Alongside governments of all levels, private companies, including healthcare providers, use Chinese data centers.”8

    China has been increasingly active in the surveillance and security sector. Chinese state linked companies such as Huawei and Hikvision have combined cameras, biometrics, data processing, and other tools to offer “safe” and “smart cities” solutions throughout the region, including in Ecuador and Bolivia.9 Such services have become increasingly attractive as violence and insecurity have been amplified by the economic impact of COVID-19.With few alternative service providers available, Huawei is emerging as the dominant force in emerging technologies and surveillance services across Latin America and the Caribbean. The Heritage Foundation has observed that “Huawei often functions as an extension of the Chinese Communist Party’s security enterprise. If Huawei develops 5G networks in Latin America, China will essentially control the communications, infrastructure, and sensitive technology of the entire region.”10 The United States must recognize this Chinese expansion into critical infrastructure sectors in LAC as a formidable threat to US influence in the region.

    Russian disinformation campaigns in the Western Hemisphere

    Russia has tried to counter US influence in the region by supporting communist and left-leaning regimes and movements since the Cold War. Moscow has conducted foreign influence operations in the region that have spread disinformation and sown discord, resulting in an undermining of democratic institutions and values. At the 2022 Summit of the Americas, US Secretary of State Antony Blinken warned of rising disinformation across Latin America, especially from China and Russia, and stated that the United States was committed to countering it.11 In recent elections in Brazil, Chile, and Colombia, disinformation propagated by online trolls and fake social media accounts sowed the seeds of doubt in electoral processes. Latin America has one of the highest risk perceptions regarding misinformation at 74.2 percent of internet users.12

    Russia has a clear track record of manipulating the information environment, often using influence operations and information warfare tactics that are now further magnified in cyberspace. Russia’s presence in Latin America has only become more evident since the invasion of Ukraine in February 2022. Russia has capitalized on its expertise in the cyber realm by manipulating social media to spur on massive protests in several countries such as Chile and Colombia.

    Russia has established a significant media and information footprint throughout the region with Russia Today and Sputnik News. Russia Today’s Spanish-language affiliate, Actualidad RT, has over 3.5 million followers on X (formerly Twitter) and it’s YouTube channel, now blocked, had over six million subscribers. On Facebook, RT’s Spanish-language page is now more popular than its English-language counterpart, “pushing Russia’s preferred narratives in Latin America, stoking anti-Americanism and praising authoritarian regimes, all under the veil of a supposedly objective platform,” wrote León Krauze in a Washington Post opinion column.13 According to a DisinfoLab analysis, “The majority of RT en Español’s website traffic comes from Venezuela (21.29 percent), Argentina (16.93 percent), Mexico (13.33 percent), and Colombia (5.52 percent).”14 Russia’s media presence in Latin America demonstrates its use of the information instrument of national power to challenge US influence.

    Russia has used Moscow-linked social media accounts in an attempt to stir up civil unrest in South American countries calling for the resignation of Nicolás Maduro of Venezuela (namely in Ecuador, Peru, Bolivia, Colombia, and Chile) since 2018. Russian bots and trolls were found to have exacerbated the massive protests that broke out in these countries.15 Russian activities sought to increase polarization and decrease confidence in democratic institutions across the region, especially in countries with a pro-US stance in foreign policy, like in Colombia and possibly Chile and Mexico.

    As the closest, long-standing ally of the United States in the region, Colombia has been a top target of Russian espionage and disinformation campaigns. Iván Duque’s government, which was in power from 2018 to 2022, confronted Russia for the malign influence of promoting social protests from 2020 to 2022. In 2020, Colombian Vice President Marta Lucía Ramírez blamed Russia and Venezuela for fomenting protests and discord using social media platforms.16 In the past two years, Colombia has experienced sophisticated cyberattacks targeting its energy, military, and political sectors that only a few nations could employ, with some attacks being traced back to Russian and Venezuelan proxy servers.17

    The case of Russian national Sergei Vagin sheds light on how Russia tried to use asymmetrical warfare to destabilize Colombia. On March 30, 2022, the Colombian National Police and the Attorney General’s Office arrested Sergei Vagin on a variety of charges including aggravated conspiracy to commit a crime and abusive access to computer systems.18 According to the presiding judge, Vagin is accused of financing illicit activities through fraudulent online betting platforms, receiving money through third parties from countries such as Russia and Ukraine. He also has alleged ties with the ELN terrorist group engaged in arms and drug trafficking.19 On April 1, 2022, President Duque voiced support for the investigation by the Prosecutor’s Office against Vagin on illicit financing and the alleged interference of Russian mafias in Colombian territory. He reassured that there were indications that would prove the use of the money to finance protest activities related to the national strike of 2021. According to several intelligence reports, the Prosecutor’s Office was able to establish that Sergei Vagin had already participated in previous marches of November 21, 2020, and March 8, 2022.”20 Moreover, “a CIA dossier published by the newspaper El Tiempo states that Sergei Vagin, also known as alias ‘Servac,’ mobilized important sums of money from Russia in order to finance violent actions in the main cities in Colombia; and he had ties with members of the so-called First Line that organized the social protests.”21 The case of Colombia demonstrates how Russia has been exploiting foreign influence operations and disinformation campaigns as a form of asymmetrical warfare against the United States and its democratic allies in LAC. Russia uses these asymmetrical operations as it does not possess the same economic might China does to expand its influence across the region.

    Measures to counter China and Russia’s expanding influence in emerging technologies and cyberspace in the Americas

    In light of the growing influence of China and Russia in the emerging technologies and cyber arenas, the United States must improve its ability to detect, understand, and counter its strategic competitors’ activities in Latin America and the Caribbean. In a 2023 Commanders Series discussion at the Atlantic Council, Gen. Richardson acknowledged that the threats to prosperity, security, and democracy posed by the PRC and Russia in the western hemisphere, saying: “The US needs to step up its game in our neighborhood to rival malign state and non-state actors.”22

    The United States should increase engagement with partner countries in the region on the political, economic, information, and technology fronts to safeguard democratic institutions, competitive economies, the free flow of accurate information, and the rules-based order that both Russia and China are challenging.

    Countering China’s influence

    To counter China’s growing dominance in the emerging technology and critical infrastructure sectors in the western hemisphere, the United States should:

    • Deepen economic engagements with partner nations by expanding existing free trade agreements and brokering new ones that can include issues like near-shoring, manufacturing, and the digital economy across the region.
    • Implement global infrastructure initiatives that were included in the Biden administration’s Build Back Better World (B3W), which was brokered with the Group of Seven as a counterweight to China’s BRI, in four areas of focus: climate, health, digital technology, and equality with an emphasis on gender.
    • Identify public- and private-sector opportunities to collaborate with LAC countries in emerging technologies like the cloud, artificial intelligence, and quantum computing to challenge China’s growing monopoly on the critical infrastructure and communications sectors.
    • Implement the 2022 CHIPS and Science Act (Pub. L. No. 117-167), which seeks to both strengthen the US semiconductor supply chain by promoting the research and development of advanced technologies domestically and identifying LAC partners who can directly contribute to these efforts.
    • Adopt legislative initiatives, such as the proposed Americas Trade and Investment Act, that seek to “prioritize partnerships in the western hemisphere to improve trade, bring manufacturing back to our shores, and compete with China,” as well as capitalize on the “full economic potential of the United States and Latin America.”23

    Countering Russia’s influence

    To counter Russia’s expanded influence operations and disinformation campaigns in cyberspace to undermine democracy in the hemisphere, the United States should:

    • Engage in more proactive strategic communications in the region to inform and educate on important US government and private-sector contributions aimed at protecting and enhancing prosperity, security, and democracy in the Americas, and correct the record of the disinformation circulated by the Russians and their proxies.
    • Improve US understanding of Russian disinformation campaigns’ content, tactics, techniques, and procedures through our intelligence and law enforcement agencies, and tailor more timely and effective ways to counter them along with partner nations.
    • Leverage the State Department Global Engagement Center and its programs to assist LAC countries to counter disinformation.24
    • Share US efforts to counter disinformation with US partner nations through, for example, the Federal Bureau of Investigation’s Foreign Influence Task Force, which could shed light on investigations, operations, and best practices in partnering with private-sector technology companies.
    • Promote media literacy and education to raise awareness of disinformation across Latin America.
    • Encourage social media companies like Meta to identify and remove certain Russian state-affiliated accounts, such as Sputnik and RT en Español, from their platforms to stop the flow of fake news.

    The threat of strategic competition from China and Russia in the Americas is real and is manifesting itself in new domains, such as emerging technologies and cyberspace. As several recent elections have brought left-leaning governments sympathetic to the PRC and Russia to power in the western hemisphere, the United States must actively invest political, economic, and technological capital in our neighbors to the south to remain the partner of choice for Latin America and Caribbean partner countries. The stakes are high. China and Russia seek to undermine the rules-based order, democracy, and free market principles in the Americas and challenge US dominance in the region. However, by harnessing American ingenuity and innovation, capital, technology, and democratic values, the United States has significant opportunities to curb and counter the influence of malign state actors like China and Russia in the Americas—and must seize such opportunities without delay.


    About the author

    Celina Realuyo is Professor of Practice at the William J. Perry Center for Hemispheric Defense Studies at the National Defense University where she focuses on US national security, illicit networks, transnational organized crime, counterterrorism and threat finance issues in the Americas.

    The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

    The Adrienne Arsht Latin America Center broadens understanding of regional transformations and delivers constructive, results-oriented solutions to inform how the public and private sectors can advance hemispheric prosperity.

    1    Sophie Wintgens, “China’s Growing Footprint in Latin America,” fDi Intelligence (a Financial Times unit), March 10, 2023, https://www.fdiintelligence.com/content/feature/chinas-growing-footprint-in-latin-america-82014
    2    “2023 Posture Statement to Congress,” Excerpts from Commander’s House Armed Services Committee Testimony, US Southern Command (website), March 8, 2023, https://www.southcom.mil/Media/Special-Coverage/SOUTHCOMs-2023-Posture-Statement-to-Congress/
    3    Center for a Secure Free Society, “China Expands Strategic Ports in Latin America,” VRIC Monitor No. 28(2022), https://www.securefreesociety.org/research/monitor28/; VRIC stands for Venezuela, Russia, Iran, and China.
    4    R. Evan Ellis, New Developments in China-Latin America Engagement, Analysis, Peruvian Army Center for Strategic Studies, December 20, 2022, https://ceeep.mil.pe/2022/12/20/nuevos-desarrollos-en-las-relaciones-entre-china-y-america-latina/?lang=en.
    5    Dan Swinhoe, “Huawei Planning Second Mexico Data Center, More across Latin America,” Data Center Dynamics, August 26, 2021, https://www.datacenterdynamics.com/en/news/huawei-planning-second-mexico-data-center-more-across-latin-america/.
    6    Naveed Jamali and Tom O’Connor, “China Influence Reaches U.S. ‘Red Zone,’” Newsweek, July 25, 2023, https://www.newsweek.com/exclusive-china-influence-has-reached-red-zone-our-homeland-us-general-warns-1814448
    7    Hope Wilkinson, “Explainer: B3W vs BRI in Latin America,” Council of the Americas, December 14, 2021, https://www.as-coa.org/articles/explainer-b3w-vs-bri-latin-america.
    8    Silvia Elaluf-Calderwood, “Huawei Data Centres and Clouds Already Cover Latin America—Chinese Tech Influence Is a Gift to Countries and Politicians That Don’t Respect Human Rights,” Strand Consult, February 7, 2022,  https://strandconsult.dk/blog/huawei-data-centres-and-clouds-already-cover-latin-america-chinese-tech-influence-is-a-gift-to-countries-and-politicians-that-dont-respect-human-rights/
    9    R. Evan Ellis, “Chinese Surveillance Complex Advancing in Latin America,” Newsmax, April 12, 2019, https://www.newsmax.com/evanellis/china-surveillance-latin-america-cameras/2019/04/12/ id/911484/.
    10    Ana Rosa Quintana, “Latin American Countries Must Not Allow Huawei to Develop Their 5G Networks,” Issue Brief, Heritage Foundation, January 25, 2021, https://www.heritage.org/americas/report/latin-american-countries-must-not-allow-huawei-develop-their-5g-networks
    11    Claudia Flores-Saviaga and Deyra Guerrero, “In Latin America, Fact-Checking Organizations Attempt to Counter Russia’s Disinformation,” Power 3.0 (blog), International Forum for Democratic Studies, July 6, 2022,  https://www.power3point0.org/2022/07/06/in-latin-america-fact-checking-organizations-and-cross-regional-collaborations-attempt-to-counter-russias-disinformation/
    12    Aleksi Knuutila, Lisa-Maria Neudert, and Philip N. Howard, “Who Is Afraid of Fake News? Modeling Risk Perceptions of Misinformation in 142 Countries,” Harvard Kennedy School, Misinformation Review, April 12, 2022, https://misinforeview.hks.harvard.edu/article/who-is-afraid-of-fake-news-modeling-risk-perceptions-of-misinformation-in-142-countries/
    13    Leon Kreuze, “Russia’s Top Propagandist in Latin America Has a Change of Heart,” Washington Post, May 8, 2022, https://www.washingtonpost.com/opinions/2022/05/08/russia-today-propagandist-latin-america-change-of-heart/
    14    India Turner, “Why Latin America is Susceptible to Russian War Disinformation,” DisinfoLab, Global Research Institute, September 13, 2022, https://www.disinfolab.net/post/why-latin-america-is-susceptible-to-russian-war-disinformation
    15    Center for Strategic & International Studies, “An Enduring Relationship—from Russia, with Love,” Blog, September 24, 2020, https://www.csis.org/blogs/post-soviet-post/enduring-relationship-russia-love
    16    Lara Jakes, “As Protests in South America Surged, So Did Russian Trolls on Twitter, U.S. Finds,” New York Times, January 19, 2020, https://www.nytimes.com/2020/01/19/us/politics/south-america-russian-twitter.html
    17    Guido L. Torres, “Nonlinear Warfare: Is Russia Waging a Silent War in Latin America?,” Small Wars Journal, January 24, 2022, https://smallwarsjournal.com/jrnl/art/nonlinear-warfare-russia-waging-silent-war-latin-america.
    18    Loren Moss, “Alleged Russian Spy Charged . . . with Running a Gambling Mafia,” Finance Colombia, April 12, 2022, https://www.financecolombia.com/alleged-russian-spy-chargedwith-running-a-gambling-mafia/
    19    Las pruebas que comprobarían la participación de ciudadano ruso en actividades ilegales (Evidence Proving the Involvement of Russian Citizen in Illegal Activities),” Noticias RCN, Abril 1, 2022, https://www.noticiasrcn.com/bogota/pruebas-comprobarian-actuar-de-ciudadano-ruso-en-actividades-ilegales-414593
    20    “Hay indicios de que financiaban las protestas”: Duque sobre ruso capturado (“There are Indications that They were Financing the Protests”: Duque on Captured Russian),” Abril 1, 2022, https://www.noticiasrcn.com/colombia/presidente-duque-habla-sobre-injerencia-de-rusos-en-el-paro-nacional-414601.
    21    “Sergei Vagin, el ruso capturado por la Fiscalía, aseguró que no tiene nada que ver con el Paro Nacional (Sergei Vagin, the Russian Captured by the Prosecutor’s Office, Assured that He has Nothing to do with the National Strike),” Infobae, Marzo 30, 2022, https://www.infobae.com/america/colombia/2022/03/30/sergei-vagin-el-ruso-capturado-por-la-fiscalia-aseguro-que-no-tiene-nada-que-ver-con-el-paro-nacional/.
    22    “A Conversation with Laura J. Richardson on Security across the Americas,” Commander Series, Atlantic Council, January 19, 2023, https://www.atlanticcouncil.org/event/a-conversation-with-general-laura-j-richardson-on-security-across-the-americas/.
    23    In January, US Senator Bill Cassidy, MD, and US Representative Maria Elvira Salazar released a discussion draft of the Americas Trade and Investment Act (Americas Act), https://www.cassidy.senate.gov/imo/media/doc/Americas%20Act%20Senator%20Bill%20Cassidy.pdf.
    24    The State Department Global Engagement Center’s mission is to direct, lead, synchronize, integrate, and coordinate US government efforts to recognize, understand, expose, and counter foreign state and nonstate propaganda and disinformation efforts aimed at undermining or influencing the policies, security, or stability of the United States, its allies, and partner nations.

    The post The competition for influence in the Americas is now online appeared first on Atlantic Council.

    ]]>
    Future-proofing the Cyber Safety Review Board https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/future-proofing-the-cyber-safety-review-board/ Thu, 08 Feb 2024 18:42:00 +0000 https://www.atlanticcouncil.org/?p=817742 The Cyber Safety Review Board seeks to examine and learn from complex failures in cyberspace. As Congress considers how to design its next iteration, there are ways to make it more effective and adaptable for the increasing challenges to come.

    The post Future-proofing the Cyber Safety Review Board appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    The US government’s Cyber Safety Review Board (CSRB) was established in a 2021 Executive Order to investigate complex cybersecurity failures and translate their lessons into recommendations to improve the nation’s cyber safety. The work of the Board to date has revealed its potential but also illuminated how the organization will need to evolve to meet its loftiest goals and resist the vicissitudes of political calculus. This brief makes several suggestions for how codifying the Board in legislation should tackle key design decisions about the CSRB. These recommendations are informed by lessons from the young history of the CSRB as well as historical analogy to the National Transportation Safety Board (NTSB), the independent federal agency charged with investigating aviation and other transport accidents and serving as significant inspiration for the purpose, if not yet the structure, of the CSRB.

    Some of these changes focus on how the CSRB can best conduct the three key phases of its work: incident selection; investigation; and reporting and recommendations. On incident selection, standardized public criteria for how the Board chooses whether to investigate a particular incident—and offering opportunities for public feedback on its decision-making— would help build trust with both lawmakers and the broader public, allowing the Board to explain systematically decisions like its controversial choice to not review the infamous Sunburst/SolarWinds breach. It is almost incumbent that the Board is vested with subpoena powers to compel information from uncooperative entities, or else risk remaining hamstrung in its ability to tackle hard cases. Finally, legislation should include explicit mechanisms that compel other government agencies to respond to CSRB recommendations—mirroring the structure that has allowed the NTSB to see many of its recommendations implemented by the Federal Aviation Agency and other federal offices.

    Other recommendations in this issue brief address broader questions about the structure and bureaucratic home of the CSRB. These include the issue of membership: currently the CSRB has only part-time members who retain their “day jobs,” unlike the NTSB and its full-time Commissioners. To balance the need for independence with the benefits of part-time members with high-level, current insight into industry or government, lawmakers should consider a hybrid structure with some part- and some full-time members, as well as a robust public process for handling conflicts of interest. Similarly, the CSRB has benefited from its placement in the Department of Homeland Security (DHS) and should remain there in the near term. Lawmakers should consider how and when the Board could transform into an independent agency, similar to the NTSB’s transition from under the Department of Transpiration after concerns arose from its position within the same Department as the FAA, the agency to which it often makes recommendations. In all these structures, one of the most important capabilities for a future CSRB is a capacity for evolution. Digital systems evolve constantly, as do the risks created by the integration of these technologies in core economic, social, and political processes. The incidents the CSRB will be called on to do its work—make systematic inquiries to discover and examine facts— will only grow more complex and contested over time. It is essential that the CSRB can grow and mature alongside these challenges. Armed with the right tools and the right structure, an ever-evolving CSRB can help the nation learn from its cyber mistakes in service of building a more resilient, safer cyber future.

    Introduction

    Understanding how and why complex systems fail has always been difficult. Investigations into the lapses behind airplane crashes1 or oil spills2 can take years, and when systems cause harm—economic crises, wars, social upheaval—analysis and investigation can roll on for decades. In recent years the development pace of digital systems and their staggering intricacy have accelerated to an unprecedented degree. Sprawling software supply chains, labyrinthian cloud infrastructure, and an ever-expanding internet are woven together to form a constantly evolving mosaic of digital systems. The potential consequences of the failure of these systems grow every day as they are more closely integrated with the real world. Market forces that push firms to move quickly while declaiming liability compounds the challenge of ensuring safety—an issue that the current administration3is grappling with.

    The Cyber Safety Review Board (CSRB) was born from one of these failures—the sprawling Sunburst/SolarWinds compromise—and offers a solution to the enormous public interest in improving the safety of digital systems by learning from their shortfalls4 This will require an impartial, comprehensive account of major cyber safety incidents and their larger, systemic context. No entity in the private sector is positioned or incentivized to do this work justice. Incident response firms must consider their relationships with current and former clients; compromised companies must manage their reputation, legal exposure, and shareholders; and all stakeholders lack the wide lens required to repeatedly and rigorously investigate connected risks in the systems they build, operate, and secure. Only a body insulated from both market tumult and government turnover can take the long view needed to better understand and mitigate these increasingly complex cyber risks. Having only been established by an executive order in 2021, there is growing interest in further institutionalizing the CSRB, evidenced by a legislative proposal from the Department of Homeland Security (DHS)5 to codify the CSRB into law as well as a recent hearing on the same topic.6 This momentum presents an opportunity for assessment—not of the quality of the Board’s work to date but instead of how far it has yet to go to realize its potential.

    This issue brief will briefly review the CSRB’s current design and recent work before building upon these lessons to suggest how its next incarnation could be structured to achieve its mandate. The discussion is arranged according to the lifecycle of a CSRB investigation: how cyber incidents are selected for investigation; how incidents and their causal factors are investigated; and how recommendations
    stemming from investigations are crafted and tracked. The brief will then propose design features that would maximize the CSRB’s ability to learn from and across cyber incidents, communicate its processes and findings, avoid conflicts of interest with both industry and government, and improve itself as an investigative body amid a rapidly changing cyber landscape.

    What’s in a Cyber Safety Review Board?

    The story of the CSRB so far

    Executive Order (EO) 14028 established the CSRB in response to the Sunburst/SolarWinds incident7 with the mandate to “review and assess…threat activity, vulnerabilities, mitigation activities, and agency responses” related to “significant cyber incidents…affecting FCEB [Federal Civilian Executive Branch] Information Systems or non-Federal systems.”8 The Board consists of one government representative each from the Department of Defense, the Department of Justice, the Cybersecurity and Infrastructure Security Agency (CISA), the Department of Homeland Security (DHS), the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Office of the National Cyber Director (ONCD)—as well as an optional representative from the Office of Management and Budget (OMB) for incidents affecting FCEB systems. Currently, seven industry representatives from firms such as Google, Palo Alto Networks, Verizon, and others also serve as Special Government Employees on the Board.9 This group convenes at the discretion of the President or the Secretary of Homeland Security, as well as any instance when a cyber incident leads to the establishment of a Cyber Unified Coordination Group (UCG), such as in the wake of the Sunburst/SolarWinds campaign.10

    Once the CSRB concludes a report it follows a standard dissemination process. The Director of CISA provides the CSRB’s report to the Secretary of Homeland Security, who then passes the full version of the investigation to the President before making an unclassified version available to the public. So far, the CSRB has published reports covering the Log4j incident and the Lapsus$ criminal group, and it is currently working on a review of the July 2023 Microsoft cloud security incident.11   The Board also produced a self-assessment covering its early work, which included recommendations for changing its design.12

    Early investigations

    The CSRB’s first review covered the Log4j incident, where a vulnerability in a ubiquitous open source software library offered attackers crippling access to a large number of affected systems. The investigation revealed important information, such as the fact that there was no evidence the vulnerability had been exploited before the disclosure, and made recommendations such as addressing ongoing risks from the vulnerability; driving best practices for security, vulnerability management, and software development; improving the cohesion of and visibility into the larger software ecosystem; and bolstering longer-term investments in security. While the inaugural report received widespread praise from cybersecurity commentators,13 certain concerns lingered. For one, the fact that the Board’s report was released so close to the public announcement of the Log4j vulnerability positioned it as something closer to incident response than the Board’s notional goal of incident review, emphasized by the public acknowledgment from CISA of the exploit of Log4j within a federal agency more than four months after the Board’s report and uncovered during CISA’s incident response engagement.14 Additionally, the report’s recommendations were notably broad, which is somewhat understandable given the Board’s novelty at the time and the sprawling reach of Log4j, but worth considering in terms of practicality.15 The Board’s second report covered Lapsus$, a criminal group that utilized familiar but highly effective social engineering tactics to launch a series of high-profile attacks against several large companies.16 The Board’s decision to focus on Lapsus$ received more mixed reviews than its first investigation. Some experts critiqued the utility of reviewing a group that was already known and studied by the industry (its direct victims) and clearly in the remit of government’s Joint Ransomware Task Force. These critiques prompted increased calls for transparency in the Board’s incident selection process. 17 The report on Lapsus$ included recommendations for covering securing identity and access management (IAM) systems, managing vulnerabilities specific to telecommunications firms and their resellers, making business process providers more resilient, better coordinating law enforcement responses, and disincentivizing cybercrime.18

    The Board’s most recent investigation focuses on an incident from the summer of 2023 in which a threat actor exploited flaws in Microsoft’s cloud infrastructure to access government information systems, including the email accounts of senior officials.19 The cloud industry and its increasingly important yet opaque systems are well worthy of review, and the announcement drew praise from experts. The involvement of a major industry player such as Microsoft, and the potential takeaways for other cloud firms, also meant that this investigation was the first in the Board’s history to see instances of voluntary Board member recusal due to conflicts of interest.20

    Lessons learned

    Certain key questions about the current design and function of the CSRB provide useful insight into potential next steps for the Board as an institution. The first is how well the CSRB has lived up to its envisioned purpose. Here, one divergence looms large: the absence of an investigation into Sunburst/SolarWinds. That incident was the impetus for the CSRB’s creation and the first incident it was explicitly asked to review; that attack also led to a cyber UCG, a criterion that would have triggered a review under the CSRB’s current charter. Rob Silvers, Undersecretary for Policy at DHS, argued the lack of an investigation into Sunburst/SolarWinds was due to a difficult tradeoff, stating that “the White House and the Department of Homeland Security together determined that when the board was launched, that at that point in time, the best use of the board’s expertise and resources was to examine the recent events involved in the Log4j vulnerability.”21 Cyber commenters have speculated about additional potential rationales for the decision, including that it would have cast an unwelcome light on the state of government cybersecurity or that it would have been impractical for an institution without subpoena power to investigate such a high-profile attack.22 That these factors may impact the Board’s willingness and ability to examine important incidents highlights key design considerations for a codified CSRB. Potential concerns around the CSRB scrutinizing government cybersecurity highlight its need for eventual independence and challenges around the compliance of entities with its reviews necessitate strengthened investigatory tools.

    Perhaps the greatest missed opportunity of the absent Sunburst/SolarWinds investigation is the chance for the CSRB to investigate not just singular incidents but larger patterns of compromise and their context. Abuse of Microsoft identity and access management (IAM) systems played a significant role in the Sunburst/SolarWinds campaign.23These are the same linchpin technologies likely to play a starring role in the Board’s forthcoming examination of the role of cloud services in threat actor Storm-0558’s breach of Microsoft and several government agencies in the summer of 2023 (which also resembled SolarWinds in the intelligence-gathering motivations of the perpetrators).24 These architectural flaws illustrate the importance of the Board’s ability to impartially examine singular, complex incidents as well as across multiple breaches with common traits.

    Abuse of Microsoft identity and access management (IAM) systems played a significant role in the Sunburst/SolarWinds campaign. These are the same linchpin technologies likely to play a starring role in the Board’s forthcoming examination of the role of cloud services in threat actor Storm-0558’s breach of Microsoft and several government agencies in the summer of 2023 (which also resembled SolarWinds in the intelligence-gathering motivations of the perpetrators).

    A second question in evaluating the current Board is its progress toward driving the adoption of its recommendations. Assessing this question is difficult, in part because adoption within the industry is opaque and not easily measured. In some cases, the Board appears to have already spurred change. See, for example, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel saying simply, “the Cyber Safety Review Board…recommended that we take action to support consumer privacy and cut off these [SIM-swapping] scams. That is exactly what we do today,” regarding recent FCC requirements and guidance.25In other instances, though, the impact of the Board’s recommendations is far less clear.  Since the Log4j report, open source software has gained more explicit support in government and industry, evidenced by initiatives such as CISA’s OSS26 and the ONCD’s Open-Source Software Security Initiative. However, these projects have yet to come into full force, and related legislation, such as the Securing Open-Source Software Act, remains unenacted. Similarly, the recent proposal27 from the Department of Defense, General Services Administration, and NASA to reform the Federal Acquisition Regulation to require that contractors develop and maintain software bills of materials largely aligns with the Log4j report’s recommendations, but the proposal itself points more directly toward EO 14028 as its source. As such, recent action around open-source software and software supply chain security might well have stemmed from the Log4j and Sunburst/SolarWinds incidents themselves more so than the CSRB’s reporting.

    The CSRB of the future

    What is the unique value that the CSRB offers as an investigative entity? In short, the CSRB has the opportunity to serve as a non-partisan, independent, and deeply transparent organization that studies the underlying causes and context of cyber incidents, threats, risks, and trends. This is essential for unpacking the complex causal chains that create cyber failures, which, in turn, is a prerequisite for informing and developing cyber risk management policies and practices informed by the complexity of real-world cases. The CSRB’s investigations should be factual accounts from which it can identify and recommend policies and practices to improve cybersecurity and safety outcomes for US citizens, national security, industry, and key allies and partners alike. In doing so, the Board should also evaluate and draw lessons from the relationships between the individual cases of their reviews, evaluating risk and safety in the interconnected cyber ecosystem. It should also track and analyze the progress of the implementation of its recommendations, including their impact, lessons learned, and roadblocks, in service of improving itself as an institution.

    No other entity in the cyber ecosystem can replicate this set of functions. Many organizations  have  reasonable incentives to hide information related to the causes of their failures and even, sometimes, their existence. Self-investigation by government or industry carries obvious motivations—financial, legal, and reputational—to mitigate fault finding, or at least its public reporting. Incident response firms are focused on recovery rather than review and are subject to market forces, the need to appease clients, and time pressures not conducive to systemic analysis. Law-enforcement efforts, meanwhile, are more geared toward proving criminal liability rather than exposing the full picture of an incident. The limited liability structures for cybersecurity failures in the US mean that such cases are often brought on the basis of fraud, where an entity misrepresented its security practices, rather than examining all factors contributing to an incident or its broader context.28 Such investigations are not designed to produce concrete policy recommendations and understandably disincentivize transparency.

    The Cyber Safety Review Board was inspired in significant ways by lessons learned from safety investigations in other domains, particularly in aviation and transportation.29 In these sectors, one agency in particular bears a remarkable similarity to the mission and the design of the CSRB: the National Transportation Safety Board (NTSB). The NTSB is an independent agency charged with investigating a significant portion of transportation incidents, including but not limited to aviation accidents and failures. It produces factual, impartial accounts of complex failures that inform (often remarkably specific) recommendations, many of which are implemented by industry and government. It enjoys a large full-time staff, access to industry experts, and a stable budget, carrying subpoena power but effectively no regulatory authority. Moreover, the NTSB specifically tracks the status of its most-desired policy changes as well as which of its recommendations government and industry implement over time.30

    These are all useful designs for the CSRB to draw from. However, the subject mandated to the CSRB—cyber safety—bears some important differences from the NTSB’s. The information covered in CSRB analysis (such as digital products or sensitive government systems) raises far more concerns about confidentiality than airplane crashes or train derailments. Frustratingly, the consequences of cybersecurity failures are often less directly connected to their source, too, with hard-to-quantify and widespread knock-on effects such as intelligence compromise and private-sector revenue losses. The very systems the CSRB must investigate are also much more complex and are intertwined with seemingly countless facets of industry and society, as well as with one another. And unlike the familiar world of transportation regulation, the CSRB’s domain changes rapidly and unexpectedly depending on new technology and vulnerabilities, all while the CSRB remains a nascent government agency with still-growing institutional processes and expertise. The following recommendations address both divergences and similarities between the CSRB and NTSB.

    The lifecycle of a CSRB investigation

    The lifecycle of a CSRB investigation provides a useful structure for addressing different design questions that arise at each stage. The next sections are structured according to this model.

    • Incident selection: The incidents that the CSRB selects for review should support the Board’s broader goal of identifying causes of cyber failure to inform impactful changes in policy and practice. Its processes for doing so should prioritize transparency and trust-building to help policymakers and the public understand its criteria and how they are applied.
    • Incident review: To investigate cyber incidents in enough depth to understand their complex causes and illuminate practices and policies that could have prevented or limited their associated harms, the CSRB will require the authority to access a significant amount of information. To build trust with potential parties in the investigation, from the private sector to the government itself, the Board should also establish procedures to ensure it safely handles the information it obtains. The Board’s membership structure will need to balance the need for independence against the benefits of closer integration with industry and government, containing robust public processes for navigating conflicts of interest and recusals.
    • Recommendations: The CSRB’s main vehicle for improving cyber safety is its recommendations. Its structures and processes for making recommendations should focus on driving efficacy without the need for regulatory authorities, such as legal requirements for other agencies to respond to the Board’s recommendations. Additionally, the Board itself should be responsible for tracking its recommendations and the progress other agencies and the private sector have made toward implementation.

    Other considerations, such as about the CSRB’s location within the executive branch, cut across all three phases and are consolidated in their own section at the end of this issue brief.

    Incident selection

    The process for the CSRB’s selection of incidents for review should be designed, from the outset, to maximize the Board’s success at identifying causes of cyber failure and ways to increase cyber safety through their remediation.

    Currently, the President and the Secretary of Homeland Security can nominate incidents for CSRB review, and the Board also considers incidents that lead to the formation of a Cyber UCG. Once confirmed through one of these channels, the Board has the ultimate authority to decide which incidents to prioritize. This structure works well but could be augmented by explicitly allowing members of the Board to nominate incidents as well. This pathway would be especially useful if the CSRB’s capacity is expanded—a recommendation made in later sections—allowing the Board to potentially pursue multiple investigations simultaneously.

    Greater changes are needed to the process of deciding whether to launch an investigation for a nominated cyber incident. Currently, the CSRB makes these decisions in private according to non-public criteria. This should change. In its legislation, Congress should develop criteria for incident prioritization or require the CSRB itself to determine and publicize an independent set of standards. A public set of criteria for incident selection would serve several purposes.

    The first is simply that such transparency creates the opportunity for public debate and comment on the factors that the CSRB uses to select cyber incidents. Public criteria would allow Congress or other stakeholders to advocate for changes to better align the CSRB investigative process with its mission and address the lack of trust that can accompany opaque reporting processes. In recognition of the utility of public input, as well as the fact that the Board itself may learn additional factors it considers important in the selection process, Congress should build a mechanism for the Board to update these standards.

    Second, these public criteria can be useful as the Board justifies its decision-making on specific cases. For example, when the Board selects a case, it can publicly defend its decision in terms of how it stacks up against its selection criteria. This would establish a common understanding of an incident’s significance and contribute to driving cross-incident analysis. Also, these standards would provide useful common ground for discussions of the reasons that an incident was not reviewed. If the Board consistently evaluates major cyber incidents against its selection criteria, it could publicize its reasoning for not taking up a particular incident in response to Congressional or public inquiries (as have persisted regarding Sunburst/SolarWinds) in a more prominent, consistent format.31 This is not to cast doubt on the Board’s intentions or methods but instead to build in, with the force of law, a standard and an obligation for transparent reasoning and to continually develop trust in the Board’s judgment.32

    The following incident criteria, while overlapping significantly with each other and reflecting much of the Board’s extant thinking, are a useful start. These criteria should not preclude other triggers for investigation, such as the formation of a cyber UCG or the discretion of the President or the Secretary of Homeland Security.

    • Severity of harm: The magnitude and reach of an incident’s harm to US citizens and national interests, as well as the potential for ongoing impact if the initial incident remains unaddressed.
    • Incident generalizability: The likelihood that the failure could generalize to other systems or organizations if left unaddressed, for example, due to the effect an incident has on some common piece of technology or core digital infrastructure, or because the failure implicates widespread organizational practices.
    • Policy context: The degree to which an incident reveals potential flaws in policy, such as existing requirements that were unenforced or ineffective at preventing an incident, or where relevant policy controls were simply nonexistent.
    • CSRB context: The relevance of the incident to previous CSRB investigations and nominated incidents, striving to capture incidents that are indicative of larger systems issues while avoiding duplicative work.

    Incident Investigation

    The CSRB should not be a punitive entity, but it also should be unflinching in its questioning and analysis. Only an agency with the proper authorities, independence, and powers will be able to conduct the hard analyses critical to the CSRB’s broad mission of improving cyber safety in the national interest.

    At present, the powers the Board has at its disposal have limitations. Cooperation with Board investigations is voluntary, as the body cannot issue administrative subpoenas. Legislative codification should grant the CSRB subpoena authority akin to the NTSB’s. Without the ability to compel the production of information, the Board cannot gather data from companies or branches of government that decline to cooperate, severely hamstringing its ability to tackle some of the most important cases. These cases may pertain to sensitive systems, flagrant negligence, or other features an entity would understandably want to keep hidden from the public.

    DHS’s proposed legislation usefully pairs the ability of the CSRB to make requests for voluntary responses with subpoena powers for non-compliant entities. The proposal cleverly provides an additional incentive for disclosure by protecting voluntarily disclosed information from being used as the basis for enforcement actions or otherwise used in civil litigation, while offering no such protections for subpoenaed information.33 Ultimately, the CSRB’s investigations should largely resemble its current process with the addition of subpoena power and DHS’s reasonable proposal to waive actions taken against voluntarily disclosed information.

    One factor that significantly distinguishes the NTSB from the CSRB is the NTSB’s stated policy to hand off an investigation to local law enforcement or the FBI should an accident be determined to have been a criminal act.34 This focuses the NTSB’s activities on failure and accident rather than premeditated malice. The CSRB, in contrast, will need to and has already investigated incidents where digital systems are compromised by a malicious party. For this reason, the CSRB, by design, cannot and should not hand off incidents simply because they were caused by a malicious criminal act. To maximize its success at improving cyber safety, and to avoid duplicating law enforcement and public-sector investigations of specific cyber threat actors, the CSRB should focus more on the causes and conditions that lead to cyber insecurity rather than on the perpetrators of cyber harm.

    Report and recommendations

    The final stage of a CSRB investigation is the creation of a report on the incident. This report should describe the causal chain of the failure and the lessons learned, which are then translated into recommendations for the private sector and policymakers.

    How the CSRB formulates its recommendations and ensures implementation is a key challenge for a body without regulatory authority. Again, the development of the NTSB offers instructive lessons. The NTSB works closely with regulators within the Department of Transportation (DOT) like the FAA to implement its recommendations. This close collaboration is backstopped by a hard legal requirement for the DOT and its constituent agencies to respond to NTSB recommendations within 90 days. Because of this requirement, agencies like the FAA have established uniform procedures for responding to NTSB requests.35

    Likewise, federal agencies addressed in CSRB recommendations should be required to respond to the investigation’s recommendations within 90 days. This written response should include an assessment of the feasibility of implementing the recommendations and a plan of action to respond to the report. This would include agencies that contribute to federal government cyber security, such as the Department of Homeland Security (including CISA), the Office of Management and Budget, and the General Services Administration. It would also include agencies that regulate the cyber practices of certain critical infrastructure sectors, such as the Department of Health and Human Services (healthcare and public health sector) and the Department of Treasury (financial services sector).

    As the CSRB continues to review, report, and recommend, it will develop a larger body of recommendations, and more evidence will become available on their implementation status. The Board’s codification in law should also require the CSRB itself to systematically track its recommendations and their degree of implementation (or lack thereof), much as the NTSB does.36

    It is essential that CSRB continue to released publicly its investigations to inform the decision-making of private sector entities as well as government. Sometimes, in the course of its work, the CSRB will need to interact with classified information. The CSRB should have an obligation in law to formulate its reports and recommendations, to the maximum extent possible, to be publicly releasable while avoiding the publication of classified information. Where the Board decides that achieving the goals of a report necessitates describing classified information or creating recommendations that would necessitate classification, it should formulate both a classified and unclassified version of the report and release the latter publicly. Similarly, the CSRB will need to interact with plenty of confidential business information during its investigations. The CSRB’s first obligation should always be to the public by relaying critical information and recommendations. However, it should minimize the extent to which its final reports reveal confidential private-sector information beyond what is required to achieve its mandate.

    Structure of the board

    Membership

    The efficacy of the CSRB as an institution will rely heavily on the makeup of the Board. Board members will play several key executive oversight and functional roles throughout the full lifecycle of an investigation. The CSRB’s membership would ideally maximize both its independence and its investigative and recommendation capacity throughout these phases. However, these two goals point in slightly different directions.

    To strike a balance between the two, codifying legislation should establish that the Board be composed of half full-time members and half part-time members from industry, with the chair position held by a full-time member. The full-time Board members would buoy the CSRB against conflicts of interest and provide significant investigative capacity as well as the potential for institutional knowledge, while the part-time members would ensure the Board’s proximity to and professional currency in the technology systems it must investigate. The presidential appointment of one full-time member as Board chair (and thus the tiebreaking vote) would further mitigate the influence of conflicts of interest.

    Conflicts of interest arising from part-time Board membership, if unmanaged, could severely harm the integrity and value of the Board’s work as well as its reception. Current government employees serving on the Board might be disincentivized to find fault with their own agency’s oversight for fear of negative ramifications in their current role or relationships. Private-sector employees might avoid investigating their own employer for similar reasons or seek out opportunities to investigate competitors. Yet, a Board with current government or private sector employees also creates notable advantages concerning its capacity. Primarily, this allows the Board to attract senior and experienced members who might otherwise be disinclined to resign from their current positions—the same individuals who have contemporary expertise on the underlying technologies that the CSRB investigates. A blended model of full- and part-time members would help to balance these advantages and costs. With both full- and part-time members having equal voting power, there would always be sufficient “independent” votes to select potentially controversial or far-reaching (but important) cases, all while preserving the benefits of increased expertise and connectivity available through the part-time model.

    Even with such a hybrid model, the CSRB must have a well-developed and publicly documented process for handling conflict-of-interest recusals. The Board’s current recusal process, per recent comments made by DHS Undersecretary for Policy Rob Silver, involves DHS ethics lawyers reviewing members’ financial disclosures and, for each case, conflicting incentives.37 While this structure’s broad contours are reasonable, the details of the process and the criteria by which lawyers make their judgments about the threshold for recusal should be made public by the CSRB. Documentation of this process will build trust among policymakers and the public that conflicts of interest cannot threaten the integrity of the CSRB’s selection, investigation, and recommendation processes. As such, lawmakers should require the Board itself to develop and publicize this process and the relevant criteria. Board members should have the opportunity to recuse themselves from certain parts of the life cycle of an investigation, from the initial vote to the investigative and recommendations processes, as each of these activities may create different potential conflicts of interest.

    Regardless of the constitution of the Board itself, the CSRB as an organization should have a budget for more full-time investigative staff. Between the accelerating pace of cyber incidents and the demands of rigorous investigations, limiting CSRB resources to just a few full-time employees is a disservice to its mission and the importance of the public interest of its investigations. The NTSB, for example, has hundreds of full-time staff and can draw on more from across industry and government. While the structure of the CSRB does not need to be identical to that of the NTSB—part of the strength of the CSRB is that Board members participate more in actual investigations—increasing its number of full-time staff will allow the CSRB to respond to a greater number of cybersecurity incidents while treating each with appropriate care. Eventually, the goal should be to build the Board’s capacity to the point where it can perform more than one investigation at the same time, similar to the NTSB.

    Finally, lawmakers should codify the explicit authority for the Board to bring in external experts to assist with particular cases, mirroring the “party system” of the NTSB, which “enlists the support and oversees the participation of technically knowledgeable industry and labor representatives who have special information and/or capabilities” in its investigations.38 If included, this should be a privilege of the Board itself, rather than a right afforded to the Secretary of Homeland Security as the current DHS-proposed legislation suggests.

    Finding a home

    The prospect of legal codification offers an opportunity to consider whether the CSRB’s current position within DHS is the best possible structure for its long-term success. The CSRB has benefitted thus far from its proximity to agencies and departments with considerable resources and expertise, as well as from the ability to utilize DHS’s broader infrastructure for the various operational and administrative tasks required of a federal organization. Eventually, though, the goal should be to transform the CSRB into an independent agency, similar to the trajectory of the NTSB.

    The NTSB began as an agency within the Department of Transportation (DoT). Yet, it often investigated policies and actions of the FAA, a fellow DOT agency, creating natural conflicts of interest. Several years after its creation, Congress addressed these foundational questions by establishing the NTSB as an independent agency.39

    When the CSRB investigates compromised FCEB systems and critical infrastructure providers, it must look to the role of fellow DHS entity CISA, which is responsible for helping FCEB agencies and critical infrastructure providers manage their security and cyber risk. So long as the Board is housed within DHS, this risks creating conflicts of interest between the Board and the agency in which it resides (and upon whose infrastructure it relies for day-to-day operations). Because different critical infrastructure sectors work with a variety of Sector Risk Management Agencies, simply finding a different departmental host for the CSRB is liable to create similar risks. Thus, in the long term, the Board should become an entirely independent agency.

    What is less clear is whether this transition should occur in tandem with the Board’s legislative codification or whether the CSRB should follow a similar path to NTSB and become independent only once it is more established. Standing up the resources needed for a new, independent agency is difficult and so may be reasonable grounds to table the issue for another few years while the CSRB develops into a full-fledged investigative body with significant resourcing.

    It is also important to note that the future independence of the CSRB does not require the Board to sever ties with CISA or DHS. The NTSB and the FAA still investigate in tight coordination and with significant cooperation, but the NTSB has sufficient independence to both inform and critique the FAA’s decisions.40So too must the CSRB have the freedom to speak, directly and honestly, to all other parts of the government, while still working alongside the agencies most affected by its decision-making.

    Evolution

    Part of the CSRB’s key contribution to cybersecurity is its ability to consider failures across the ecosystem in connection with each other, from a position that affords long-term analysis rather than immediate response. The CSRB should be required to perform additional forms of meta-review in support of this end. For example, Congress could require, at regular intervals, a report from the CSRB on its past findings and the connections among the systems it investigates—a synthesis report. Similarly, the CSRB should be required to collect and examine recommendations that have gone unimplemented and assess the likely causes of inaction. This information can also help inform Government Accountability Office (GAO) investigations, which have long found and attempted.41 In addition, the Board should be explicitly empowered to revisit and revise reports when new information comes to light after their investigation. Several of these functions might be delegated to CSRB subcommittees, which are already established in its charter.

    Along with these meta-reviews, the CSRB should also have mechanisms for required self-review. Congress should require the CSRB to review its structure and make recommendations to Congress on potential reforms every five years. This would include ways to refine its case selection criteria, membership structure, budget and staffing, and investigative procedures—as well as a self-assessment of how well the Board is meeting its mandate. Such mechanisms would vest Congress with a key decision-making role over the CSRB and would provide means for ongoing adaptation of the structure and function of the Board.

    Congress cannot and should not expect to remake the CSRB in the NTSB’s image in a single legislative act. Yet, neither should it be satisfied with a similar decades-long timeline of growth. The threat landscape is too fast-changing, and the stakes of failure in the cyber domain are too high. In short, policymakers will need to design a Board that can and must iterate and improve over time.

    Conclusion

    The creation of the CSRB, and the efforts towards its enshrinement into law, reflect an understanding and a commitment from the federal government: addressing the challenges created by the proliferation of digital systems across every facet of society will necessitate self-examination and self-improvement. Fact-finding among the complexities, interrelationships, jargon, finger-pointing, and sales pitches of the cyber ecosystem is a challenging task, and the CSRB is the only entity custom-built to tackle it.

    The CSRB is developing at a crucial moment, as industry- and government-led mechanisms to improve the accountability and security of digital vendors have begun to bloom. Examples include mechanisms like the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the Security and Exchange Commission’s welcome public-disclosure rules, as well as the voluntary use of software bills of material by some of the most cybersecurity-mature organizations and the adoption of similar requirements in public-sector contracts. The CSRB’s findings have a clear audience. Over the next decade this network will grow further, magnifying the influence of the Board’s investigations and findings.

    In light of the importance of the CSRB’s mission, as well as the importance of this moment in the wider cybersecurity ecosystem, questions about its design and operation are critical. It is rare to face the opportunity to stand up a policy structure from scratch, at the right moment, with widespread expert enthusiasm, and with helpful past lessons at hand—and all the more important to get it right as a result. The suggestions raised in this issue brief to illustrate how legislative codification can make the body even more effective than it is today. The Board can become more transparent and participatory in its selection of incidents while increasing its investigative and fact-finding capacity. It can also interact in more meaningful ways with the many other organs of government tasked with managing a piece of the cyber puzzle, maximizing its efficacy as an agent of change while managing conflicts of interest.

    The challenges ahead in this domain, and the difficulty of understanding how to ensure the safety and resilience of ever-more complex systems will only grow. Policymakers now have the opportunity and the challenge to create a CSRB that can meet this consequential moment while having the ability to evolve to tackle the risks and dynamics of the future.

    About the authors

    Maia Hamin is an associate director with the Cyber Statecraft Initiative, part of the the Atlantic Council Tech Programs. She works on the intersection of cybersecurity and technology policy, including projects on the cybersecurity implications of artificial intelligenceopen-source softwarecloud computing, and regulatory systems like software liability.

    Trey Herr is assistant professor of Global Security and Policy at American University’s School of International Service and Senior Director of the Atlantic Council’s Cyber Statecraft Initiative. At the Council, the CSI team works at the intersection of cybersecurity and geopolitics across conflictcloud computingsupply chain policy, and more. At American, Trey’s work focuses on complex interactions between states and non-state groups, especially firms, in cyberspace. Previously, he was a senior security strategist with Microsoft handling cybersecurity policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution at Stanford University. He holds a PhD in Political Science and BS in Musical Theatre and Political Science.

    Stewart Scott is an associate director with the Atlantic Council’s Cyber Statecraft Initiative. He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy

    Alphaeus Hanson is an assistant director with the Cyber Statecraft Initiative, part of the the Atlantic Council Tech Programs. Hanson studies the decision-making of technology companies around risk and geopolitics, including the interaction between insurance companies and capital markets. Prior to joining the Council, Hanson was an analyst at Krebs Stamos Group (KSG). 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “The Investigative Process,” National Transportation Safety Board, https://www.ntsb.gov/investigations/process/Pages/default.aspx.
    2    “Deep Water: The Gulf Oil Disaster And The Future Of Offshore Drilling – Report to the President (BP Oil Spill Commission Report),” National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, January 11, 2011, https://www.govinfo.gov/app/details/GPO-OILCOMMISSION.
    3    Maia Hamin, Sara Ann Brackett, and Trey Herr, with Andy Kotz, “Design Questions in the Software Liability Debate,” Atlantic Council DFRLab, January 16, 2024, https://dfrlab.org/2024/01/16/design-questions-in-the-software-liability-debate/.
    4    “Deep Water: The Gulf Oil Disaster And The Future Of Offshore Drilling – Report to the President (BP Oil Spill Commission Report).”
    5    “A Bill to Establish the Cyber Safety Review Board,” CISA, https://www.cisa.gov/sites/default/files/2023-04/dhs_leg_proposal_-_csrb_508c.pdf.   
    6    US Congress, Senate, Committee on Homeland Security and Government Affairs, The Cyber Safety Review Board: Expectations, Outcomes, and Enduring
    Questions, 118th Congress, 2nd session, 2024, https://www.hsgac.senate.gov/hearings/the-cyber-safety-review-board-expectations-outcomes-and-enduringquestions-2/.
    7    For more on this incident, seeTrey Herr, Will Loomis, Emma Schroeder, Stewart Scott, Simon Handler, and Tianjiu Zuo, “Broken Trust: Lessons from Sunburst,” Atlantic Council, March 29, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/broken-trust-lessons-from-sunburst/
    8    “Executive Order on Improving the Nation’s Cybersecurity,” The White House, May 12, 2021, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/.
    9    “Cyber Safety Review Board (CSRB) Members,” CISA, https://www.cisa.gov/cyber-safety-review-board-csrb-members
    10    “CYBERSECURITY: Federal Response to SolarWinds and Microsoft Exchange Incidents,” Government Accountability Office, January 2022, https://www.gao.gov/assets/720/718495.pdf.
    11    “Department of Homeland Security’s Cyber Safety Review Board to Conduct Review on Cloud Security,” US Department of Homeland Security, August 11, 2023, https://www.dhs.gov/news/2023/08/11/department-homeland-securitys-cyber-safety-review-board-conduct-review-cloud.
    12    “Cyber Safety Review Board of Inaugural Proceedings,” CISA, October 18, 2022, https://www.cisa.gov/resources-tools/resources/cyber-safety-review-board-inaugural-proceedings.
    13    Tom Uren, “Srsly Risky Biz: Thursday July 21,” Seriously Risky Business, July 20, 2022, https://srslyriskybiz.substack.com/p/srsly-risky-biz-thursday-july-21.
    14    “Iranian Government-Sponsored APT Actors Compromise Federal Network, Deploy Crypto Miner, Credential Harvester,” CISA, November 25, 2022, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-320a.
    15    Uren, “Srsly Risky Biz: Thursday July 21.”
    16    “Review Of The Attacks Associated with Lapsus$ And Related Threat Groups Report,” CISA, August 10, 2023, https://www.cisa.gov/resources-tools/resources/review-attacks-associated-lapsus-and-related-threat-groups-report
    17    John Sakellariadis,“With Lapsus$, Cyber Review Board Draws Mixed Reviews,” Politico, December 5, 2022, https://www.politico.com/newsletters/weekly-cybersecurity/2022/12/05/with-lapsus-cyber-review-board-draws-mixed-reviews-00072144.
    19    “Department of Homeland Security’s Cyber Safety Review Board to Conduct Review on Cloud Security.”
    20    Heather Adkins (@argvee), “Today, CISA’s Cyber Safety Review Board announced it will review Cloud Security and assess the recent Microsoft intrusion. Given scope of this study, I have recused myself from the Board’s review,” X, August 11, 2023, https://twitter.com/argvee/status/1690015584740687872.
    21    Mariam Baksh, “Cyber Safety Review Board Closes the Book on SolarWinds While Reporting on Log4j,” NextGov, July 14, 2022, https://www.nextgov.com/cybersecurity/2022/07/cyber-safety-review-board-closes-book-solarwinds-while-reporting-log4j/374220/.
    22    Jeff Stone, “US Cyber Review Punts on Russian Hack, Hinting at Limitations,” Bloomberg, November 16, 2022, https://www.bloomberg.com/news/newsletters/2022-11-16/us-cyber-review-punts-on-russian-hack-hinting-at-limitations
    23    Herr et al., “Broken Trust: Lessons from Sunburst.”
    24    Trey Herr, “Three Key Unanswered Questions about the Chinese Breach of Microsoft Cloud Services.” CyberScoop, July 20, 2023, https://cyberscoop.com/microsoft-cloud-breach-china/.
    25    Jessica Rosenworcel, “Protecting Consumers from SIM Swap and Port-Out Fraud, WC Docket No. 21-341, Report and Order and Further Notice of Proposed Rulemaking,” FCC, November 15, 2023, https://docs.fcc.gov/public/attachments/FCC-23-95A2.pdf.
    26    Roadmap “CISA Open Source Software Security Roadmap,” CISA, September 12, 2023, https://www.cisa.gov/resources-tools/resources/cisa-open-source-software-security-roadmap
    27    “Federal Acquisition Regulation: Cyber Threat and Incident Reporting and Information Sharing,” Federal Register, October 3, 2023, https://www.federalregister.gov/documents/2023/10/03/2023-21328/federal-acquisition-regulation-cyber-threat-and-incident-reporting-and-information-sharing
    28    “SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures,” SEC Press Release, October 30, 2023, https://www.sec.gov/news/press-release/2023-227.
    29    Robert Knake, Adam Shostack, and Tarah Wheeler, “Learning from Cyber Incidents: Adapting Aviation Safety Models to Cybersecurity,” Belfer Center for Science and International Affairs, November 12, 2021, https://www.belfercenter.org/publication/learning-cyber-incidents-adapting-aviation-safety-models-cybersecurity.
    30    “Safety Recommendations,” NTSB, https://www.ntsb.gov/investigations/Pages/safety-recommendations.aspx.
    31    Vicens, “Cyber Safety Review Board to Analyze Cloud Security in Wake of Microsoft Hack.”
    32    Sakellariadis, “With Lapsus$, Cyber Review Board Draws Mixed Reviews.”
    33    CISA, “A Bill to Establish the Cyber Safety Review Board.”
    34    NTSB, “The Investigative Process.”
    35    “Order 1220.2G – FAA Procedures for Handling National Transportation Safety Board (NTSB) Recommendations,” FAA, May 13, 2011, https://www.faa.gov/documentLibrary/media/Order/1220.2G.pdf.
    36     NTSB, “Safety Recommendations.”
    37    Patrick Gray and Adam Boileau, “Risky Business #733 — Say cheese, motherf—er,” Risky Business, January 24, 2024, https://risky.biz/RB733/.
    38    “What Is the National Transportation Safety Board?” NTSB, https://www.ntsb.gov/about/Documents/SPC0502.pdf.
    39    “History of The National Transportation Safety Board,” NTSB,  https://www.ntsb.gov/about/history/pages/default.aspx.
    40    “Failure of FAA to Implement NTSB Recommendations Contributed to Fatal Air Tour Helicopter Crash, NTSB Says,” NTSB, May 10, 2022, https://www.ntsb.gov/news/press-releases/Pages/NR20220510.aspx.
    41    Cybersecurity: NIH Needs to Take Further Actions to Resolve Control Deficiencies and Improve Its Program,” Government Accountability Office, December 7, 2021, https://www.gao.gov/products/gao-22-104467.

    The post Future-proofing the Cyber Safety Review Board appeared first on Atlantic Council.

    ]]>
    Atkins published in Cyber Defense Magazine https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-published-in-cyber-defense-magazine/ Tue, 06 Feb 2024 19:28:19 +0000 https://www.atlanticcouncil.org/?p=735085 On February 5, IPSI nonresident senior fellow Victor Atkins published a piece in Cyber Defense Magazine titled “Closing the Gap: Safeguarding Critical Infrastructure’s IT and OT Environments.” In this article, Atkins discusses the importance of shoring up informational and operational technology systems’ protections against cyberattacks. 

    The post Atkins published in Cyber Defense Magazine appeared first on Atlantic Council.

    ]]>

    On February 5, IPSI nonresident senior fellow Victor Atkins published a piece in Cyber Defense Magazine titled “Closing the Gap: Safeguarding Critical Infrastructure’s IT and OT Environments.” In this article, Atkins discusses the importance of shoring up informational and operational technology systems’ protections against cyberattacks. 

    The post Atkins published in Cyber Defense Magazine appeared first on Atlantic Council.

    ]]>
    If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager https://www.atlanticcouncil.org/blogs/new-atlanticist/if-the-us-and-eu-dont-set-ai-standards-china-will-first-say-gina-raimondo-and-margrethe-vestager/ Wed, 31 Jan 2024 16:31:02 +0000 https://www.atlanticcouncil.org/?p=730814 The standardization of technologies is already being dominated by nonmarket and Chinese players, the two officials warned at an AC Front Page event.

    The post If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager appeared first on Atlantic Council.

    ]]>
    Watch the event

    According to US Commerce Secretary Gina Raimondo, the United States and European Union (EU) don’t have a moment to wait in setting standards for the development and use of artificial intelligence (AI). “If the US and EU don’t show up,” she warned, “China will, [and] autocracies will.” 

    Raimondo spoke at an Atlantic Council Front Page event on Tuesday alongside European Commission Executive Vice President Margrethe Vestager, who cautioned that the field of standardization in technologies is already being “dominated by nonmarket players or Chinese players.” But “we need to be much more present in standardization for us,” she said. “We need to have a presence.” 

    The leaders spoke shortly after the fifth meeting of the EU-US Trade and Technology Council (TTC) in Washington, where officials touched on everything from AI to climate policy to semiconductors. 

    In the US-EU relationship, “there are irritants for sure,” Raimondo admitted, “but fundamentally what binds us is massively more consequential than the irritants.” 

    Below are more highlights from the conversation, which was moderated by Atlantic Council President and Chief Executive Officer Frederick Kempe. 

    EU+US on AI

    • Raimondo argued that the TTC will “prove to be exceedingly valuable” as AI tools continue to evolve. She said that the “muscle” the TTC has “built up”—in generating trust between private-sector stakeholders and the governments leading the United States and EU—will help in “bringing us together to write the rules of the road of AI.” 
    • But while Raimondo said that a “transatlantic approach” to AI could possibly come out of the TTC, she said she doubted whether “joint regulation” is feasible. “It will be some time before the US Congress passes a law that relates to the governing of AI,” the commerce secretary pointed out. “In the absence of that, there’s an awful lot of work to be done.” 
    • The commerce secretary explained that normally when governments develop regulations, each country first goes about writing rules separately before gathering with others to harmonize. “With AI, we can harmonize from the get-go because we haven’t yet written these regulations or rules or standards,” she said. 
    • In response to concerns that governments will always be regulating too slowly to keep up with tech, Vestager said that she thinks “that is just plain wrong” and that it is the responsibility of governments to ensure that technology respects society’s values. 

    Keep “eyes wide open” on China 

    • Vestager outlined the EU’s “very complex relationship” with China, given that it is an important partner in fighting climate change—but it is also a systemic rival and an economic competitor. She explained that the EU is working to “derisk [its] dependencies” on countries such as China by getting more countries to screen foreign direct investments and working to prevent countries from skirting export controls. 
    • Regarding China, it is in the “self-interest” of the United States and EU “to work together,” Raimondo argued. “There are real national security concerns for both of us,” she added, “and we have to be eyes wide open about that and work together to protect… our countries.” 
    • Among those national security concerns: “We have to keep our eye on the number of Chinese-made electric vehicles being sold in Europe,” Raimondo said, explaining that sophisticated electric vehicles collect a “huge amount of information” about the driver and their surroundings. “Do we want all that data going to Beijing?” she asked. 

    A TTC progress report 

    • Raimondo argued that the TTC has offered the transatlantic partners another opportunity to build trust, collaborate, and share information. The TTC has the added benefit of offering the partners a forum “where we can complain about each other in a constructive manner,” Vestager chuckled. 
    • While steel tariffs and the US Inflation Reduction Act have been examples of what Raimondo called “irritants” in the relationship, she said it is important that the transatlantic partners agree on “the principles and goals” behind regulating trade and tech. She attributed differing US and EU regulation methods to “differences in our systems of government… [and] political realities.” 
    • Beyond the ministerial meetings, Vestager said that the TTC has resulted in tech and trade teams having “gotten to know each other really well,” which she says made it easier to, for example, work together on sanctioning Russia after its full-scale invasion of Ukraine in 2022. “It went so fast with very little sort of bumps [in] the road because people knew each other,” she explained. “I think it’s really important not to underestimate what it means that you know who to call.” 
    • With elections looming on both sides of the Atlantic—and thus the possibility of new leaders who feel differently about US-EU collaboration—Raimondo said that the forum is taking measures to solidify its plans, for example renewing its memoranda of understanding. Raimondo added that with the TTC engaging stakeholders from the private sector, she hopes that there’s demand from both industry and civil society to keep the collaboration going. “There’s much more work to be done,” she said. 

    Katherine Walla is an associate director of editorial at the Atlantic Council. 

    Watch the full event

    The post If the US and EU don’t set AI standards, China will first, say Gina Raimondo and Margrethe Vestager appeared first on Atlantic Council.

    ]]>
    The great despiser: The BSA, memory safety, and how to make a good argument badly https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/the-great-despiser-the-bsa-memory-safety-and-how-to-make-a-good-argument-badly/ Sat, 27 Jan 2024 01:03:00 +0000 https://www.atlanticcouncil.org/?p=818028 Memory-safe programming languages are in the cyber policy mainstream, but some hesitation remains. Looking at the arguments around memory safety is informative for larger cyber policy debates too.

    The post The great despiser: The BSA, memory safety, and how to make a good argument badly appeared first on Atlantic Council.

    ]]>
    As cybersecurity policymaking takes on more complex issues, its debates demand more evidence and rigor. That growth is good—it means that cyber policy has begun to grasp the full scope of challenges it faces. However, that growth also means that speculation and the vaguely invoked, ever-fragile spirit of innovation are increasingly insufficient arguments.

    In September 2023, the Business Software Alliance (BSA), a technology industry trade association, published Memory Safety: A Call for Strategic Adoption. The piece presents perfectly reasonable recommendations and flags sensical concerns regarding memory safety, an area of recent policy focus. However, Strategic Adoption’s argumentation highlights the danger of an ever-diminishing burden of proof.  The fact that, in this instance, little harm was done is more testament to the robustness of memory safety arguments than anything else. Similar reasoning trade associations have offered in other cyber policy discussions does impact policy—take, for example, the 2022 software bills of materials letters.  Strategic Adoption presents a useful opportunity to highlight how cybersecurity policy in general and memory safety in particular are argued: key debates cannot continue to shirk the obligation to provide concrete evidence—or to imply someone else should do it for them—much longer. This article will highlight Strategic Adoption’s most egregious moments and how harmful those arguments’ methods are to the broader cybersecurity policy conversation. But first, what is memory safety?

    Memory safety, strategic adoption, and you 

    Memory safety is a seductive silver bullet for cybersecurity policy. Clickbait headlines practically write themselves—‘Hackers Hate this One Simple Trick,’ ‘Engineers Made All Their Code Memory Safe and You’ll Never Guess How,’ ‘Say Adios to 70 percent of Your Vulns,’ and so on.

    Memory safety is a characteristic of coding languages, describing those that are immune to memory safety bugs and vulnerabilities. Memory-unsafe languages require a developer to not just tell the computer what to do, but to outline how much space to set aside for the task and what to do with that space once finished—in other words, manual memory management. Thus, the program runs quickly, and its developers can speak directly into the guts of a machine. In return, memory-unsafe languages leave the potential for common catastrophic bugs and vulnerabilities, which arise when developers inevitably make mistakes in memory management. Some languages employ a garbage collector, which follows a program to clean up its memory use. This slows those languages considerably but ensures memory safety by obviating the need for manual memory management.

    Other memory-safe languages allow manual memory management but employ rules that refuse to compile a program unless it is provably memory-safe. These languages are fast, and, especially early in the learning process, painful to write software in, as developers struggle to conform to stringent rules. One newcomer language, Rust, offers relatively straightforward rules to achieve memory safety while preserving the speed of memory-unsafe languages like C and C++. Rust has featured heavily in the news cycle, with proposals to convert critical software into Rust for improved security, equal or better performance, and easier long-term modifiability. Rust programs are less likely to break when changed than a comparable C program, for instance, because manual memory management is often fragile and convoluted.

    Policymakers have understandably jumped at the opportunity to eliminate entirely a large and devastating class of software vulnerabilities, weaving memory safety into proposed legislation, requests for information, campaigns by the Cybersecurity and Infrastructure Security agency (CISA), and even the White House National Cybersecurity Strategy and its implementation plan. Some policy initiatives even look for similar opportunities to eliminate entire attack avenues with other fundamental changes to software development.

    It might be worth tempering that enthusiasm. It seems unlikely that security returns of similar strength as those offered by memory safety are easily found in other parts of the cybersecurity world. Regarding Rust, the ecosystem of expert developers and tried-and-tested tooling necessary for widespread adoption is still in its early days. As for figuring out what software to convert, there has been no serious effort at the federal level to date to catalog what dependencies are critical nodes of risk, what their potential lack of memory safety might mean in the context of their implementation, or the costs and benefits associated with their potential compromise or rewriting.

    The BSA’s Strategic Adoption flags all these concerns. It urges for more tooling, more developer training, incentives to develop new code natively in memory-safe languages rather than focus overly on conversion, and the strategic prioritization of scant time and money. It highlights well that in cybersecurity, resources are limited—there are never enough funds, people, or hours to accomplish every well-intentioned security initiative.

    This is all reasonable, more or less. But so long as policy lacks the frameworks and language needed to make specific cost-informed decisions, any proposed initiative will be vulnerable to the same basic argument—resources are limited. Any proposal will need to endlessly prove its worth to detractors, even when they present no evidence to the contrary. Proceeding too far without developing those quantitative muscles risks sinking the cybersecurity landscape into an inertial bog, siloing efforts within individual companies and agencies where they would otherwise serve the ecosystem better at scale, and bending the knee to weak generalizations about limited resources, precious innovation, and alternative interventions.

    The BSA piecepreviews this quagmire, positing solid conclusions but offering little along the way—no citations, no data, no prioritization system, and no policy specifics; only unsubstantiated hypotheticals, mischaracterizations, and vague inertial resistance to any change or regulation. It is probably unfair to look to a trade association publication for that requisite rigor—vague inertial resistance might in fact be in their DNA, if not their business model. However, looking there anyway raises the question of why the companies involved in such an association, with ample policy and security engineering talent, do not provide the conspicuously absent data to either back up or counter the arguments made by Strategic Adoption, especially given that many of those same companies are indeed “going big on Rust” (which is a great thing!). The rest of this article will highlight seven myths or argumentation missteps presented in the BSA piece, with an eye to their larger implications for the state of cyber policy discourse. In the process, it will make the pardonable sin of conflating general memory safety and Rust, except where it matters—Rust is far from the only memory-safe contender, just a particularly useful example. Critically, none of these seven argumentations leads to a bad conclusion—this article does not argue that because of its methodology Strategic Adoption’s positions should be invalidated. Rather, it strives to highlight the issues that cyber policy discussions encounter even when they arrive at sound conclusions.

    Myths and missteps 

    #1 – Policymakers are proposing to require the rewriting of all memory-unsafe code into memory-safe languages (“why not simply require all software producers and government agencies to convert code?”).  No one is suggesting this, and it is not possible. One might say that it is simply a rhetorical device to broach the topic of prioritization, but the implication that such an absolute approach is part of conversation at all is disingenuous. Importantly, it undercuts the agonizing rulemaking processes behind reforms such as the Cyber Incident Reporting for Critical Infrastructure Act of 2022, Federal Acquisition Regulation provisions, and Securities Exchange Commission cybersecurity incident disclosure requirements.

    #2 – Widespread memory-safe rewrites will introduce many new vulnerabilities into codebases so as to challenge the benefits of memory safety ex ante (“policymakers should expect that converting trillions of lines of code to memory-safe languages will reduce vulnerabilities associated with memory safety but create risks associated with other vulnerabilities in the new code.”). Memory-safe languages mean fewer memory-safety bugs. By the count of companies such as Google, Apple, and Microsoft (a BSA member) memory-safety flaws account for more than two-thirds of vulnerabilities in large codebases (and have so since 2006 for Microsoft!). If the BSA is aware of other classes of risk that memory-safe languages introduce at similar scale—again, more than two-thirds of all vulnerabilities—it seems a disservice to the debate here, and to security in general, not to cite even one of them specifically. This argument seems to undermine the entire concept of memory safety, which is odd given the millions recently invested by at least one of BSA’s member companies. BSA need not align with all of its individual members, but to discount without caveat the argument made by the act of investment is concerning.

    #3 – The time and tooling thrown at ensuring the security of memory-unsafe code by some producers makes the topic of conversion a nonissue (“many software producers that use secure software development practices have already scanned and mitigated risks associated with memory safety.”). This seems to argue that companies have already dealt sufficiently with memory-safety vulnerabilities without the use of memory-safe languages. However, even large technology vendors such as Apple, Google, and Microsoft have publicized that memory-safety bugs, despite their best efforts and vast resources, are the majority by a wide margin, and persistently so over time and against many mitigation practices (fuzzing, static analysis, compiler updates, code rules, etc.). If even these companies are still inundated by memory-safety vulnerabilities, it is hard to see how any others are positioned to fare better through tooling and grit alone. It is harder still to see any merit to the argument implied in Strategic Adoption—that memory safety is a mostly solved problem already. The mitigations around memory-safety vulnerabilities from unsafe languages are important, sure—but they are demonstrably insufficient and incomplete, too. The abstract argument is also dangerous: awareness of and mitigation around a security issue is not the whole story, especially where reconsidering insecure design decisions at the outset might be more efficient in the long run.

    #4 – Memory-safe languages are a foreign concept to many software developers (“many software developers have neither trained in nor have gained experience with memory-safe languages.”). Here, our lazy amalgamation of Rust and memory safety falls apart. Rust is indeed a young language with relatively few expert engineers. But memory safety is old and widespread. Java (created in 1995), Python (1991), JavaScript (1995), and C# (2000) are all memory-safe by virtue of having garbage collectors, and they also happen to be the four most popular coding languages. A discussion of workforce shortages in Rust expertise would be useful to policymakers, but an incorrect generalization is not. More broadly, the idea that novelty—real or imagined—should act as a meaningful disincentive would radically limit the realm of possible security improvements.

    #5 – Customer adoption will prove an obstacle to any proposed conversions (“if a software producer adopts a memory-safe language for an application, a customer may need to update its version of the application…experience tells us that customers are often slow to update software…”). A corollary of this argument is that we should rarely patch vulnerabilities because “customers are often slow to update software.” It is a hot take and a shame to not see it given any further discussion, especially given that it directly contradicts cybersecurity guidance and practice from most of the largest IT firms as well as BSA’s own members. The argument could actually contain some incredibly interesting thoughts about security by design, if it had been seriously argued at all. It also offers as fact the inconsistently true notion that consumers have much say in the software offerings from which they choose. The larger idea that customer demand signals for security are weak is a key part of the discussion around realigned responsibility in the National Cybersecurity Strategy, but the full breadth of economic debate about information asymmetry, externalities, and more is out of scope for a document about writing code such that it breaks less often.

    #6  – The fact that other security interventions might work better (“products and services that have not yet implemented other cybersecurity best practices would likely benefit more from adopting those…than converting to a memory-safe language”) and that other threats might be more pressing in other contexts (“a threat model may demonstrate that different uses, for example a mobile application or a cloud service, face different threats”) should temper enthusiasm around memory safety. These are both true of any policy proposal—a better one might exist, and one proposal might not solve all problems in all places. If Strategic Adoption offered evidence that the net cost-benefit of memory-safety conversions and, say, MFA adoption pointed away from the former, it would make a striking contribution to cybersecurity discourse everywhere—showing, with hard data, that one practice is a better investment than another. It does not, however. Similarly, it does not highlight specific contexts where threat models point toward other interventions. Ironically, it instead only vaguely points to areas where the limited available evidence suggests exactly the opposite—cloud services and mobile applications. The fact that a proposal is not a universal panacea does not mean much.

    #7 – All cybersecurity resources are interchangeable, and invention is more important than implementation(“Resources an organization uses to adopt memory-safe language are then not available to address known exploitable vulnerabilities in an application, implement multi-factor authentication, or invent the next security technology needed to protect against evolving threats.”). This spending model is an incomplete accounting. Not all cybersecurity investments are fungible, and the longer-term payoff of spending less to deal with a vulnerability class that no longer exists merits consideration. Moreover, diverting resources from implementing one security technology in order to invent “the next” raises obvious issues. The implication of a zero-sum game in the claim that “prioritizing writing new programs in memory-safe languages over transitioning existing programs into memory-safe languages is likely to produce better security for the same investment” is worth considering, too. Securing existing, critical, widely adopted software guarantees impact at a scale that harder to assure when creating new, more secure systems. In other words, there are some programs so critical that successfully rewriting them will do more than simply hoping newly written memory-safe code achieves similar criticality. And one can, to a considerable degree, do both.

    Final thoughts 

    In the grand scheme of things, cybersecurity policy has much room to improve when it comes to assessing costs and benefits and prioritizing the precise location of security investments. Instead of contributing to that effort though, the BSA simply suggests a set of answers to a quantitative question without offering any quantitative reasoning. This is particularly disappointing given the potential value to policy of a deep look at cost-benefit tradeoffs in security improvements, and how well-positioned the BSA is among its many venerable member firms to provide consolidated data on such a topic.

    Strategic Adoption’s writing often puts the burden of proof back on proponents of memory safety, where it should sit with the piece’s authors, who have notably provided scant evidence or citations of their own. A more generous reading of Strategic Adoption would be that it, in fact, argues exactly for the evidence-based prioritization that it fails to provide itself. The fact that some things might not benefit much from memory-safe conversion would be incredibly useful information in prioritizing limited security resources—if the piece gave it any serious discussion. If Strategic Adoption pointed to those cases with evidence and specificity, it would help policymakers and industry avoid wasted effort. This is not to say that memory-safety advocates have no obligation to provide evidence themselves. They in fact already have.

    Strategic Adoption’s harmless conclusions obscure its flawed argumentation, which resembles a kind that threatens the integrity of cybersecurity policy writ large. Making unbacked but supportable arguments aims to slow change. One could take a piece raising the possibility of these claims and urging their evaluation in good faith, but Strategic Adoption is not that. It is reactive inertia, a corporatized resistance to any intervention that might cost a member firm—even those that are actively pursuing that change themselves, which is most of them—because even the possibility of reduced profit outweighs the potential for, yes, substantial cost saving in reducing time spent fixing bugs, remediating their consequences, or even looking for them in the first place.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The great despiser: The BSA, memory safety, and how to make a good argument badly appeared first on Atlantic Council.

    ]]>
    International law doesn’t adequately protect undersea cables. That must change. https://www.atlanticcouncil.org/content-series/hybrid-warfare-project/international-law-doesnt-adequately-protect-undersea-cables-that-must-change/ Thu, 25 Jan 2024 15:00:00 +0000 https://www.atlanticcouncil.org/?p=727834 What's missing: A global effort to protect undersea cables in international waters.

    The post International law doesn’t adequately protect undersea cables. That must change. appeared first on Atlantic Council.

    ]]>
    Undersea cables are important tools for transmitting sensitive data and supporting international telecommunications—but they’re relatively vulnerable. Sensitive data remains safe as long as undersea cables are in good physical condition, but events such as severe sabotage—in the form of cutting cables—could leak data and interrupt vital international communications. Today, when events that damage or cut a cable, (including acts of sabotage) happen in international waters, there is no effective regime to hold the perpetrator of a physical attack accountable.

    The United States and its allies and partners have come to understand how important it is to secure the world’s undersea cables. But there haven’t yet been enough efforts that incorporate all countries in a protection pact. The reality is that cable cutting could severely impact the lives of citizens in countries across the globe, from Tonga to Norway and far beyond. Thus, intergovernmental organizations such as the United Nations (UN) must take undersea cable security seriously, including by forming internationally recognized and formalized protections.

    Risks are growing under the sea

    Threats to undersea cables are increasing. For example, Russia is well positioned to conduct malicious attacks on undersea cables with the help of its intelligence ship, Yantar, which was spotted loitering near cable locations in 2019 and 2021. NATO Assistant Secretary General for Intelligence and Security David Cattler expressed particular concern about Russian activity in European waters, following the 2022 invasion of Ukraine. Cattler told reporters in May 2023 that Russia could attack infrastructure such as undersea cables in an attempt to “disrupt Western life and gain leverage over those nations that are providing support to Ukraine.”

    For a sense of how interruptive cable cutting could be, look to the African continent and the Matsu Islands. In April 2018, damage to the Africa Coast to Europe cable—which at the time connected twenty-two countries along the western coast of Africa and Europe—caused significant connectivity issues (and in some cases days-long blackouts) for ten countries. Reporters suggested that the damage could have been caused by Sierra Leone, as the country’s government seemed to have imposed other internet blackouts on its citizens around the same time, impacting communications for not just social but also economic and governance matters.

    In February 2023, two Chinese vessels on two separate instances severed cables in the East China Sea—one on February 2 and another on February 8. Although there is no direct evidence that the vessels did so intentionally, Taiwanese local officials said that the cable cuts are part of repeated cable breaks that amount to harassment by China. For nearly two months, the over thirteen thousand residents of the Taipei-governed Matsu Islands endured an internet outage, encountering great difficulty when conducting business and communicating. For China, understanding how undersea cable cuts can impact Taiwan provides useful insights that can be leveraged in both traditional and hybrid warfare.

    These interruptions hit particularly hard when countries don’t have many connection points. For example, while Saudi Arabia has sixteen cable connections, the Matsu Islands only have two connections. Norway’s Svalbard archipelago similarly only has two connections, while Tonga only has one. The impact of a severe cable cut also depends on a country’s ability to fix damaged or degraded cables. It took Taiwan over a month to repair cables stretching to the Matsu Islands. For Tonga, whose cable was damaged by a volcanic eruption in 2022, it took ten days for a cable repair ship stationed in Papua New Guinea to even reach the island before beginning repairs, which then took several weeks.

    Clusters of countries have begun to acknowledge the increasing threats to undersea cables. For example, in 2019, Japan outlined the Data Free Flow with Trust (DFFT) concept that promotes the free flow of data and the protection of individual privacy, national security, and intellectual property by connecting undersea cables only with allies and partner nations. At a May 2023 summit in Hiroshima, the Group of Seven (G7) endorsed the creation of the Institutional Arrangement for Partnership, which puts DFFT into action. The G7 also issued a communiqué (albeit more of a political consensus than any sort of treaty) with a section committing to collaborate more on undersea cable security.

    Should the G7 countries follow through on their commitment—for example, by investing in an undersea cable project together—they could affect geopolitics in the undersea cable world and highlight to political and business leaders how necessary it is to keep countries connected through cables.

    The G7’s progress and NATO’s recent establishment of a London-based center on protecting undersea cables are examples of how the United States prefers to share cables with likeminded countries. These efforts also demonstrate how democratic states are joining together in smaller consortia to invest in establishing and securing undersea communication cables.

    Democratic states are also investing in undersea cables as a way to spread the free flow of data. In June 2023, the East Micronesia Cable project to connect several islands in Oceania began, funded by Australia, Japan, and the United States—with the understanding that connectivity is vital to economic development and, in this case, a means to counter Chinese influence in the region. The project was slow to start, as it faced a stalemate after China’s HMN Technologies submitted a tempting bid to build the cable, and the United States warned the Pacific islands about the risks associated with the participation of a Chinese company. Soon after, all bids were deemed noncompliant and removed from consideration, a challenge to China’s increasing control of digital traffic in Oceania.

    China’s influence in the undersea cable world has grown immensely in recent years. In 2019, China owned, supplied, or was a landing point for over 11 percent of the world’s undersea cables, and it is aiming to grow this proportion to 20 percent by 2030. US warnings about Chinese cable companies demonstrate how Washington, with its allies and partners, is working to counter Chinese influence in supplying undersea cables in the Pacific.

    A global deterrence plan

    The world’s information is in serious danger, as perpetrators could resort to malicious attacks not only to interrupt connectivity but also to tap into the cables and eavesdrop. When undersea cables are cut or damaged, the laws that determine who is responsible for sabotage vary depending on where the cables are laid. For example, a coastal state has sovereign rights in its territorial sea, according to Article 21 of the UN Convention on the Law of the Sea (UNCLOS). In addition, a coastal state may exercise its rights to repair and maintain undersea cables in its exclusive economic zone, according to UNCLOS Article 58.

    However, in regard to cables that are sabotaged in international waters, there is currently no effective regime to hold the perpetrator of damage responsible. If cables are willfully or accidentally damaged by a ship or person, the jurisdiction to determine an appropriate punishment for the perpetrator lies with the state under whose flag the ship operates or that of the person’s citizenship. Because this places onus on the perpetrator’s state, not the state that owns the cable, there is no effective regime to ensure that the responsible party is held accountable directly.

    It is time for an intergovernmental organization such as the UN or its International Telecommunication Union (ITU) to take undersea cable security seriously and establish internationally recognized protocols under a formalized protection plan that deters actions against undersea cables and prioritizes the security of digital communications.

    Such a protection plan should give jurisdiction to the cable owner’s state. Under such a plan, the fact that the cable owner’s state could take the perpetrator’s state to court might make intentional saboteurs think twice, creating a deterrent effect, especially if fines or remediation costs are significant. It should also take into account nonstate actors, such as armed groups or large multinational business companies, who could interfere with the cables. UNCLOS, as a traditional treaty between states, does not hold nonstate actors responsible, even in a scenario in which a terrorist group were to inflict damage.

    The type of first-rate technology required to cut undersea cables is immensely expensive and not typically affordable for nonstate actors or militia groups—and even for many states. Only a few countries have submarines: For example, China owns vessels such as the Jiaolong and Russia owns vessels such as the Losharik. However, countries often rely on companies to manufacture and lay cables, and there are concerns that untrustworthy companies maintaining undersea cables could become involved in disrupting the data inside the cables—for example, by spying or stealing information.

    However, if the ITU is to be the origin of such a regime, it must look inward and address what some democratic countries would call a major controversy: China’s increasing influence in the UN body. From 2015 to 2022, Chinese engineer Houlin Zhao served as the ITU’s secretary-general, and during that time he championed China’s Digital Silk Road vision and notably increased Chinese employment at the ITU. He seemed to forget his position as a neutral international civil servant, acting more like a Chinese diplomat.

    During Zhao’s term, Huawei and the Chinese government introduced its “New IP” proposal to the ITU which quickly became controversial for sacrificing the privacy of individuals and making state control and monitoring of digital communications easier. Despite not yet being debated, it was backed by two authoritarian governments (China and Russia) and opposed by the United States, Sweden, the United Kingdom, and several other democratic nations.

    While Zhao was replaced by an American engineer—Doreen Bogdan-Martin—China has been sending more individuals than other states to various study groups at the ITU. It is also one of the top contributors to the ITU’s annual budget, providing about $7.5 million in 2023. It is clear that China recognizes the importance and influence to be had in the digital space through undersea cables, and its attempts to influence the management of this global infrastructure should not be left uncountered.

    In the UN, increasing factionalization could make finding common ground for a new regime difficult but not impossible. Countries would need to agree that managing undersea cables together is important. Similar agreement has been reached on the need for nuclear protocols and for deconfliction in space operations—areas where states are generally more willing to share information, despite counterintelligence concerns.

    From a hybrid warfare perspective, sabotaging or destroying undersea cables can be a powerful tool for adversaries. As countries come to rely more on digital communications and infrastructures, a sudden or unexpected blackout can increase social angst and foster political instability. World leaders must switch focus to establishing a working international regime that governs how the world responds to undersea cable sabotage to deter those who may see an opportunity in attacking the system. The effort should be directed at creating a working international regime that enhances individual privacy, not more government control of the internet, when protecting the data in undersea cables.

    The world’s interconnectivity provides for the movement of tremendous wealth, improved access to information, and international relationships that would have been impossible only fifty years ago. With huge benefits come huge risks, and for undersea cables, those risks include significant vulnerabilities that global leaders must take seriously. They must build better protections now, before nefarious actors come to view undersea cables as a viable target.


    Amy Paik is an associate research fellow at the Korea Institute for Defense Analyses (KIDA). She has been with the Center for Security and Strategy at KIDA since 2013. She is also a visiting scholar at the Reischauer Center for East Asian Studies at Johns Hopkins University School of Advanced International Studies.

    Jennifer Counter is a nonresident senior fellow in the Scowcroft Center for Strategy and Securitys Forward Defense Program. She is a member of the Gray Zone Task Force focusing on influence, intelligence, and covert action.

    This piece is based on a doctoral dissertation, written by Paik, entitled “Building an International Regulatory Regime in Submarine Cables and Global Marine Communications.”

    The post International law doesn’t adequately protect undersea cables. That must change. appeared first on Atlantic Council.

    ]]>
    The 5×5—Forewarned is forearmed: Cybersecurity policy in 2024 https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-forewarned-is-forearmed-cybersecurity-policy-in-2024/ Wed, 24 Jan 2024 16:52:00 +0000 https://www.atlanticcouncil.org/?p=818133 Members of the Cyber Statecraft Initiative team discuss the regulatory requirements and emerging technology they are closely following in 2024, and forewarn of the year ahead.

    The post The 5×5—Forewarned is forearmed: Cybersecurity policy in 2024 appeared first on Atlantic Council.

    ]]>
    New year, new cyber policies. Or well maybe, old cyber policies with more nuanced understanding is a more realistic way to introduce the year 2024!

    We have the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), a National Cybersecurity Strategy and its Implementation Plan, those Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure requirements from the SEC, and even a National Cyber Workforce and Education Strategy. The launch of ChatGPT in 2022 led to a myriad of generative AI related legislative proposals including the 117-pages long Executive Order on The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and this year, we can expect these policies and regulations to come to fruition and shape the cybersecurity landscape. 

    So, for the first 5×5 edition of this year, we brought together some members of the Cyber Statecraft Initiative team to tell us which regulatory requirements and emerging technology they are closely following, and which dominant technology they think could give way to a better one in 2024. From disinformation and spyware to artificial intelligence and warfare, this edition forewarns of the year ahead. 

    1. What is the one emerging technology, industry or sector that you think could most adversely impact the cybersecurity landscape and you recommend the governments should proactively monitor in 2024? 

    Maia Hamin (she/her/hers), Associate Director, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council

    This one is tricky. I am in some ways tempted to say that emerging AI systems have the greatest potential for a ‘black swan’ disruption of the cyber landscape; AI systems could substantially disrupt or alter the offensive/defensive balance if they develop new capabilities that can be meaningfully harnessed for cybercrime or espionage. However, I don’t necessarily think that this scenario is the most likely to happen. More likely is something like a gnarly vulnerability of some kind in a widely deployed and used technology system that has a long-tailed remediation process, allowing bad actors to continue to exploit it for long after its discovery. But then, what else is new?” 

    Stewart Scott (he/him/his), Associate Director, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council 

    I worry about AI, but for reasons different than (and not conflicting with) my colleague, Maia. The amount of metaphorical oxygen AI discussions consume in the policy room is staggering, and I worry that other more concrete issues are neglected as a result. Not that AI does or doesn’t present risks and challenges for policy, but rather it seems to have struck some perfect blend of abstraction, novelty, and hype to consume policymakers. I’m not saying ignore it, but don’t put down all the other important work out there either.” 

    Jen Roberts (she/her/hers), Assistant Director, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council

    “It is not a necessarily emerging technology, but I would say spyware. While some policy action was made to regulate this space in 2023 with the executive order, joint statement, and PEGA committee findings, policy attention on the spyware market seems to be focused on vendor specific action rather than looking at the marketplace as a whole, including who is investing in this type of technology.” 

    Alexander Beatty (he/him/his), Assistant Director, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council

    “It’s hard to look at this and not immediately think of developments in artificial intelligence but I think most of these claims are hysterical. 2024 is posed to have the highest ever number of democratic elections across the globe, so governments need to proactively monitor the systematic spread of disinformation online to ensure free and fair elections around the world.” 

    Emma Schroeder (she/her/hers), Associate Director, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council 

    I believe that governments should take proactive steps to better understand the ways in which cyber operations have altered how warfare is conducted. At the tactical level, the ways in which the cyber and the kinetic can be melded on the battlefield are still undergoing a phase of dramatic evolution. On a wider scope, governments must seek to better understand how the cyber environment and cyber tools alter the types of actors engaged in conflict and the roles they play. Perhaps since the ‘state’ was conceptualized, there has never been a time in which on a global scale, warfare is less in the hands of the state. The same division that exists, whether in reality or just in theory, in the physical domains between the civilian and the combatant, doesn’t quite exist in the same way in the cyber domain.”

    2. Are there any regulatory changes or compliance requirements expected to significantly impact cybersecurity practices in 2024? What are some policy changes that you are most eagerly waiting for? 

    Maia Hamin

    “There are a bunch of moves I’m watching in terms of how the federal government will update its own security practices and requirements and expand their applicability – from revised FedRAMP guidance, which governs government cloud security; to new proposed updates to the Federal Acquisition Rules that would require government vendors to maintain Software Bills of Materials; to an overdue requirement from a 2022 EO for government software vendors to comply with NIST’s Secure Software Development framework. I’m curious to see how these different risk management frameworks and approaches will be implemented and whether they will improve the cybersecurity of the federal enterprise. This experience will provide useful information as the government considers whether and how to formulate software and cybersecurity requirements for broader swathes of industry.”

    Stewart Scott

    “I don’t know if they will fully land in 2024, but I’m excited to see how the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the cybersecurity disclosure requirements of the US Securities and Exchange Commission (SEC) play out. Empirical data on cyber incidents at scale would be such a cool, useful asset for policymakers to work with.”

    Jen Roberts

    “The marketplace for offensive cyber capabilities (OCC) is an industry the government should proactively monitor in 2024 and introduce more policy changes into. As the OCC market continues to grow to meet demand and remains an affordable and attractive option for governments who do not have homegrown capabilities, efforts to shape this marketplace must be comprehensive.”

    Alexander Beatty

    “Unfortunately, the news of aviation safety incidents (cyber or otherwise) have been prevalent in recent weeks, but with the FAA starting to increase pressure on carriers to comply with regulations introduced in 2023 and giving them 3 years to comply, we are likely going to see more US and international carriers start to conduct more organizational risk assessments which will be a valuable first step in improving cybersecurity across the aviation industry.”

    Emma Schroeder

    “This may be a bit of a cheat answer, but after the release of the National Cybersecurity Strategy, and its implementation plan in 2023, I am very much looking forward to following how the government actually works to implement and forward its strategic goals. In particular, the strategy had a strong theme of rebalancing responsibility for risk and security in cyberspace. I am eager to see how this effort to make the private sector take more responsibility in the cyber domain will proceed.”

    3. What’s a long dominant technology which will start to fade, or has begun to already, in 2024? What does this mean for cyber policy? 

    Maia Hamin

    One challenge with cyber policy is that technologies don’t stop existing, just because they ‘fade’ in current use – there are still important systems in active use that are written in FORTRAN or COBOL! It’s important that cyber policy both conceives of a better world (for example, pushes developers to use more memory safe languages) while also acknowledging that we will continue to need to use and manage risks arising from ‘legacy’ languages and technologies in application stacks that will never be migrated.”

    Stewart Scott

    “Ooh, interesting question. I’m not sure! I assume that whatever technological change occurs, it will continue requiring cyber policy to evolve at a pace that it’s not ready to match. I think that’s a more interesting angle though—technologies gain and lose importance (and maybe more relevantly, the spotlight) all the time (vacuum tubes anyone?), but we’ve long struggled to design policy systems that account for this without being overly prescriptive or unhelpfully vague.”

    Jen Roberts

    “Passwords. In 2023, we saw some companies like Microsoft offer passwordless authentication. With the benefits a passwordless world offers in terms of risk reduction and cost efficiency, 2024 can expect to see a wider push towards a passwordless world and those who can’t remember a password no matter how hard they try will rejoice!”

    Alexander Beatty

    “One can only hope that 2-Factor Authentication (2FA) using SMS will begin to fade, and this will mean an uptick in far safer and more resilient multi-factor authentication systems being implemented as industry standards. Will that actually happen, though? Plenty of organizations are already behind the times with their security standards, so it’s not unlikely that we’ll have to see some high-profile failings of SMS 2FA before there is any meaningful change. The Twitter/X removal of SMS 2FA for non-paying members may have, surprisingly, helped sound the death knell for this system.”

    Emma Schroeder

    Not a particular technology, but rather a characteristic of technology that has been disappearing for some time is the lack of understanding of how a specific technology functions. As technology becomes more advanced, less people will be able to understand how it functions and while this is in some ways offset by an increased focus on usability, the rising barrier for people to understand the ‘back end’ of the technology they are interacting with means that they might also not understand the risks that they are accepting. This means that the government needs to step in to help make those risks clearer to the population through requiring additional transparency from the companies that sell or provide these goods and services, and that the government must understand the contours of the digital landscape on which it and its citizens rely.”

    4. With the watershed launch of ChatGPT, industry and the government have refocused their resources towards AI policy amid a flurry of commercialization. What are some of the cybersecurity and digital policy issues that might have taken a back seat in 2023 but should be reconsidered in 2024? 

    Maia Hamin

    I continue to think that the US government (and probably others, though I’m less qualified to say) will be hampered in certain key policy efforts so long as they cannot get digital identity right. Privacy-preserving digital identity solutions are desperately needed to solve a host of challenges from how to digitally deliver benefits and government services to more thorny questions about whether and where to require (privacy-protective!) proof of identity on the internet. These challenges are likely to be intensified as AI content and bots proliferate on the internet. Existing commercial solutions generally have significant problems, chief among them that they rely heavily on the commercial data broker ecosystem. US government should move this issue back to the policy forefront.”

    Stewart Scott

    “Every day, I wake up hoping that the Cyber Safety Review Board will decide to examine the SolarWinds incident, and every day my dreams are crushed before 9:00am ET. To be clear, this is purely selfish—I just want to know more about the incident because I am a nerd. More broadly though, I don’t think cybersecurity policy has particularly robust learning mechanisms built into it. It’s hard to know how effective policies were or how well or poorly things are going, let alone why. The amount of time spent speculating about how new technological capabilities—generative AI is hardly a new technology per se—will change the status quo is somewhat bewildering given we don’t know much about what the status quo is, at least with any serious rigor and empiricism. The cybersecurity issue that I think takes a back seat, as a result, is less a topic or technology than a frame. Cybersecurity policy would benefit massively from institutionalized learning mechanisms—reviews of major incidents, analysis of whether policy interventions achieved their desired outcomes, wide-ranging studies on security control efficacy, empirical surveys on cyber incident damages, etc.”

    Jen Roberts

    “Workplace readiness and capacity building need to remain at the forefront of the cyber policy agenda. Governments across the world have a workforce shortage, that is not going away, but can be minimized by attracting talent to cyber, especially individuals who have international relations, political science, and legal interests because ‘cyber’ doesn’t happen in a silo. The White House’s National Cyber Workforce Strategy was a step in the right direction.”

    Alexander Beatty

    “The development of the cyber workforce both in the US and all around the world. With the launch of the National Cyber Workforce and Education Strategy in mid-2023, we saw workforce development, to follow the analogy, getting out of the back seat and getting into the passenger seat. In 2024, we both hope and need to see the development of the cyber and digital workforce hop in the driver’s seat.” 

    Emma Schroeder

    Cyber policy, like most policy areas, often operates on a schedule of sprints and fixations in response to significant technological advancements (or the perception of significant technological advancements) and severe cyber incidents. This means that it can sometimes be difficult to maintain the consistency needed to ameliorate cyber policy. There are many issues that either had their ’15 minutes’ and since faded from attention or that have never really had their day in the sun and yet are incredibly important. I will, however, give the obvious answer that in 2024 perhaps no digital policy issue will be more important in the US than countering mis- and disinformation surrounding our elections.”

    5. Let’s close this with some fiction. If you were to define the current cyber policy landscape through a movie or web series or storyline of a book, which one would you pick, and why?  

    Maia Hamin

    Comparing cyber to movies is hard. I watch a lot of fantasy and sci-fi movies. These movies usually have one big bad guy, and in the end, the hero and their buddies triumph over the big baddie. In cyber, there are a lot of different bad guys, so you may not know sometimes who you’re fighting. And you do win, but you also lose, and it’s a lot more obvious when you lose than when you win. Sometimes your buddies work with you, but sometimes they won’t really talk to you because they’re scared that they’ll get in trouble, even though all you really want is to win together. I think there must be a sports movie that is a better metaphor for this, but right now my brain is only giving me the 1982 movie- The Thing.”

    Stewart Scott

    “I would compare the current cyber policy landscape to the book/movie Moneyball, but only the part before all the statisticians start bending the ear of baseball management. Our general inability to answer, with any kind of empirical data, questions like ‘are cybersecurity outcomes better or worse than they were a year ago,’ ‘how impactful was this cyber policy intervention,’ or ‘which of these security controls is most effective given the cost of its implementation’ is, uh, not great.”

    Jen Roberts

    “Love, Death & Robots. A lot of fascination surrounding the future of AI, IOT, and new and developing technologies.”

    Alexander Beatty

    “The Girl Who Saved the King of Sweden by Jonas Jonasson – there’s a lot going on at the moment, there are so many interweaving policies, storylines, characters, and motivations, all with some pretty strong themes of mild, if not grave peril (no spoilers!) – in the end some well executed community building and understanding will help us avoid catastrophe.” 

    Emma Schroeder

    Difficult question. The book I’ve decided to go with, after much deliberation, is Piranesi by Susanna Clarke. The book follows a man named Piranesi (sort of) who lives in an ever-changing, seemingly infinite house. Piranesi spends his days documenting the movements and changes within the house, the layout of the statues, the patterns of the birds, the ebb and flow of the tides, trying to understand the labyrinthine world he lives in. I don’t want to give too much more away, because I believe the best way to go into this book is knowing almost nothing. So, check it out and let me know if you think this is an apt comparison.”


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—Forewarned is forearmed: Cybersecurity policy in 2024 appeared first on Atlantic Council.

    ]]>
    Makanju quoted in TIME Magazine on cybersecurity capabilites https://www.atlanticcouncil.org/insight-impact/in-the-news/makanju-quoted-in-time-magazine-on-cybersecurity-capabilites/ Wed, 17 Jan 2024 20:04:00 +0000 https://www.atlanticcouncil.org/?p=740767 On January 17, Transatlantic Security Initiative nonresident senior fellow Anna Makanju was mentioned in an article in TIME Magazine discussing OpenAI’s work providing cybersecurity capabilities to the Pentagon.   

    The post Makanju quoted in TIME Magazine on cybersecurity capabilites appeared first on Atlantic Council.

    ]]>

    On January 17, Transatlantic Security Initiative nonresident senior fellow Anna Makanju was mentioned in an article in TIME Magazine discussing OpenAI’s work providing cybersecurity capabilities to the Pentagon.

      

    The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

    The post Makanju quoted in TIME Magazine on cybersecurity capabilites appeared first on Atlantic Council.

    ]]>
    Design questions in the software liability debate https://www.atlanticcouncil.org/in-depth-research-reports/report/design-questions-in-the-software-liability-debate/ Tue, 16 Jan 2024 18:11:00 +0000 https://www.atlanticcouncil.org/?p=817702 Software liability—resurgent in the policy debate since its mention in the 2023 US National Cybersecurity Strategy—describes varied potential structures to create legal accountability for vendors of insecure software. This report identifies key design questions for such regimes and tracks their discussion through the decades-long history of the debate.

    The post Design questions in the software liability debate appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary

    Legal liability for insecure software is a deceptively simple-sounding concept that is, in practice, associated with a multifaceted and decades-long legal and policy debate. This paper identifies a set of core design questions for policy regimes to create legal liability for vendors of insecure software and surveys 123 articles from the wide-ranging literature on software liability to examine their balance of viewpoints with respect to these key decisions.

    The first design questions focus on what can create liability, often a combination of a failure to meet standards for good behavior with respect to the development and deployment of secure software and the manifestation of insecurity in software flaws that cause harm to a software user. These questions also raise the issue of responsibility—how to link the behavior of a software vendor to bad cybersecurity outcomes and account for the behavior of the software user with respect to software-specific practices such as patching. The next set of design questions focuses on the scope of a liability regime: whether it applies to all or only a subset of software, such as software that is used in high-risk sectors, that performs particular high-risk functions, that is produced by entities of a certain size, or software that is available for sale (versus released under an open source license). The third set of questions pertains to matters of governance and enforcement—how and by whom standards are defined, compliance assessed, violations prosecuted, and consequences determined.

    In the sampled literature, certain legal questions—such as whether to favor tort liability based on product or negligence theories—have been much debated over time with little evidence of emerging consensus. Other questions, such as whether to hold software used in different sectors to different standards, or which specific security frameworks or practices to require, were less discussed. Some of these less-discussed questions, such as whether to include developers of open source software in a liability regime, show relative consensus where they arise. Others, such as how to handle software patching, are disputed even in the more limited discussion that has occurred.

    Perhaps the most important design question in the framework is that of the policy goal of such a regime. What problems within the existing software ecosystem does liability seek to correct? These potential goals, such as improving ecosystem-wide security behavior or providing redress to harmed parties, often point in different directions with respect to how to resolve design questions in the construction of the regime. Debates, including those that have swirled since the mention of software liability in last year’s National Cybersecurity Strategy, have rarely articulated the full set of design questions available in the construction of a regime or explicitly mapped these questions to the goals of such an endeavor. This report concludes with a section using original analysis and the literature sample to examine how different design questions might be informed by the goals of a software liability regime.

    Introduction

    Legal liability has long been a solution proposed to fix markets in which buyers are ill-positioned to protect themselves through purchasing decisions or to rectify threats from too-dangerous products. Many today argue that the market for software security is broken in just such a way: makers of software face too little pressure from consumers to secure their software because consumers are ill-equipped to evaluate the security of such software itself and manufacturers pay few costs if their software is later found to be insecure. Legal scholars and cybersecurity researchers alike have long been interested in the idea of liability for insecure software, in hopes of providing redress for victims of insecurity or shifting incentives toward a better-secured software ecosystem. Following its mention in the National Cybersecurity Strategy,1 the question of how to implement liability for vendors of insecure software is once again in the conversation.

    However, the term liability itself and the goals that motivate it point not to a single type of legal regime but instead to a set of heterogenous policy constructs. Two broad buckets of such constructs are potential regimes based on torts versus potential regimes based on regulation. Torts allow one entity to sue another for “act[s] or omission[s] that cause legally cognizable harm to persons or property,”2and have evolved mostly through state standards and common law, or judges’ rulings rather than explicit laws passed by federal lawmakers. However, Congress can pass laws that impact the implementation of torts, and many roads to software liability might involve a law that changes the way in which existing theories of tort have applied to software. In contrast, in a regulatory regime, a government body such as an expert agency defines standards and requirements for specific entities such as software vendors and then (often, though not always) enforces these requirements itself. Within both the broad buckets of torts liability and regulatory liability, there are different potential forms, from product- versus negligence-based torts to premarket approval requirements versus requirements to self-certify certain key practices with penalties for misrepresentation.

    Thus, many questions remain about the form and nature of liability that would best achieve the goals laid out in the National Cybersecurity Strategy, and about the relative advantages of different potential paths to get there. These questions are not new, even if they are newly relevant; the debate over software liability has been evolving throughout academic research and writing, judicial opinion, and policy for almost as long as software has existed.

    This report makes two contributions.

    First, it deconstructs the liability debate into a set of policy design questions, and then, for each, identifies design options and models from existing legal structures that could be used to build and implement such element as well as articulating how each element relates to other questions and to the overall goals of such a regime. This framework deliberately uses terms that are different from the legal terms of art for certain concepts (for example, “harm” as a potential trigger for liability is closely related to the legal concept of “injury”), to avoid taking a normative position on torts-based versus statutory or regulatory approaches and to avoid prejudging the design questions presented here of how to impose legal disincentives for the sale of insecure software.   

    Second, to draw from the voluble historical debate and to help focus the current discussion onto a core set of design decisions and tradeoffs, this report surveys 123 academic articles and other pieces of writing that discuss some variation of software liability. These articles have been coded with respect to their stances on some of these design questions and examined both for trends in the balance of viewpoints as well as their evolution in time to seek to establish where there is existing consensus or relationships among variables that might inform the debate.

    A note on scope: this paper is intended to address issues around liability for vendors of software related to cybersecurity practices and problems. Software liability as a term could encompass a wider range of potential considerations around software-mediated harms that could create legal liability, such as products-related liability for algorithmic systems. Legal liability can also arise for operators of software—such as liability for organizations that process personal data and experience a data breach—rather than the entity that created and sold such software. These questions are important but beyond the scope of this work.

    Methods and framework

    Methods

    This report is based on a review of 123 pieces of writing from the scholarship on software liability, including law review articles as well as white papers, essays, and blog posts. The articles stretch over many decades of the debate – the earliest of the sampled articles was published in 1967, the most recent in 2023.

    These articles were collected in two tranches: the first assembled by a single expert based on keyword searches of online scholarly databases; the second borrowed from a literature review created by an expert working group on the topic of software liability. This process resulted in a corpus of 171 articles, which was cut down during the coding process to a final corpus of 123 articles which were accessible and relevant to the topic of legal liability for insecure software.

    Each of the collected articles was coded against a rubric developed by the authors to codify key policy design choices in the construction of a software liability regime. The articles were reviewed by two human coders who scored each article based on whether it endorsed or criticized the design choice or mentioned it without explicit criticism or endorsement. The threshold to distinguish between mention and endorsement or criticism was determined by the coders based on a holistic assessment of the viewpoint of the entire article, meaning there is necessarily an aspect of subjectivity in the data that appears below.

    Due to the subjective nature of both the data collection and coding, the findings reported below should not be considered representative or statistically significant claims about the entire scholarly body of work relating to software liability. The analysis and visualizations are intended to illuminate certain broad trends and frame discussions of policy choices and models, a tool to inform the debate rather than an absolute claim about the state of consensus in a field.

    In the literature sample, most articles focused on examining a specific component of or context for liability rather than a proposing a holistic regime, meaning that relatively few articles addressed every single aspect of this framework. For this reason, in many cases, visualizations address only those articles that address the question at hand in some form, while also seeking to contextualize how much of the broader sample of literature is included in that set.   

    Framework

    Triggers: What makes you liable?

    Liability is typically understood as legal responsibility for one’s actions (or inactions). From a policy perspective, what actions or inactions should make software makers legally accountable for poor software security?

    There are two important concepts that are relevant across policy approaches: standards and harms.

    Standards define good and bad behavior as it relates to developing secure software. Such measures range from design decisions such as choosing memory-safe programming languages or requiring user accounts to have multifactor authentication, to the use of tools or checks such as static analysis tools that scan code for vulnerabilities, descriptions of properties of code such as free of known vulnerabilities or known common weaknesses, or organizational practices such as having a security review step for code requests and secure release processes to avoid becoming a vector for a supply chain attack. The design of explicit standards, or decisions about how standards will implicitly be shaped over time, is a key part of a liability policy regime as it will define the behavior toward which software vendors are incentivized.

    Harms relate to the ways in which insecurity can manifest itself in practice. Insecurity can manifest in code, such as in flawed code patterns that are vulnerable to prompt injections or that allow a user to bypass authentication, or in weaknesses in security-relevant processes such as code releases. Harms arise when such flaws are exploited to cause harm to the user of the software, from data breaches to ransomware, intellectual property theft, or physical injury.

    Though regulatory liability could be triggered by a failure to meet standards alone, and torts are definitionally connected to a harm, both standards and harms play a role in each type of regime from a policy perspective. While not required in fact, in practice, enforcement for regulatory violations often follows news of a data breach or another harmful incident. For software torts, judges would need to consider questions that implicitly reply upon known or accepted standards or behavior with respect to cybersecurity, such as whether a software maker upheld a duty they owed to the user in creating the software (in negligence-based torts) or if the design they chose was foreseeably risky (under strict liability, most typically associated with products liability).

    Standards

    A liability regime can take different approaches to defining the standards it includes and how it incorporates them. A law or regulation could reference frameworks or controls developed by standard-setting bodies such as the International Standards Organization (ISO) or the National Institute of Standards and Technology (NIST). A law could also create new standards through regulation, such as by directing an expert agency to create new rules. Alternatively, it could defer the question to the courts, by using a legal term left up to interpretation such as “reasonable cybersecurity measures.” While explicit standards are more typical of a regulatory regime and case-by-case determination more typical for torts, articles and documents including the National Cybersecurity Strategy have endorsed hybrid models that combine torts with explicit standards in a “safe harbor” model, under which the law delineates a set of standards that, if a company can prove it upheld them, protect that company from tort liability. A safe harbor sets a behavioral “ceiling” for liability, dictating a level of behavior that wholly insulates entities from liability and thus defining an upper limit of the behavioral changes that a liability regime requires. Tort regimes could also use standards to define a “floor” on liability—a set of bad behaviors that create a presumption of negligence on the part of the software maker—while also leaving the door open for judges to examine specific cases and decide that software makers failed in their obligations to the software user in other ways. 

    Standards built explicitly into a regime, whether through regulatory approaches or a safe harbor in a tort regime, will delineate expected behavior by software makers more clearly and quickly than case-by-case approaches, which will need more settled cases (each of which can take years to resolve) to provide software makers with any measure of legal certainty about their obligations. On the other hand, avoiding a specific set of standards could make a regime more flexible, enabling a judge to review each case with respect to current industry best practices (which are always evolving, creating challenges for static regulation) as well as to use additional discretion to require safety behaviors that are above and beyond industry best practice3

    This illustrates a general challenge in defining explicit standards: tradeoffs between flexibility and specificity. A simple and specific list of practices that are easy for a company or an authority to audit for compliance may not be sufficient to guarantee that software is designed and implemented securely or to provide accountability for complex design flaws in software (see for example how businesses such as Microsoft, which espouse secure development principles,4 have experienced severe incidents as the result of flawed design and implementation5), while standards that can encompass a wider class of design flaws provide less specificity and certainty for software makers. For example, concepts such as “secure-by-design” and “secure-by-default” as recently championed by the Cybersecurity and Infrastructure Security Agency (CISA)6 are powerful principles that span multiple levels of abstraction from principles to specific practices. However, the highest level and most encompassing principles from this framework may be challenging to define in a way that makes it easy for businesses to ensure their compliance or for a potential enforcer to easily prove noncompliance.

    47 of the 123 articles surveyed mentioned the idea of using secure development standards as a basis for standards in a liability regime, with 34 of those articles explicitly endorsing secure development standards as a component of a liability regime.

    Such standards appear to have been relatively popular over time within the sampled literature, having been mentioned since the late 1980s.

    Some of these articles mentioned only the general idea of incorporating such secure development standards into a regime or suggested entities that could develop such standards, while others named specific standards, including government-developed standards such as NIST’s Secure Software Development Framework (SSDF) or standards developed by standards organizations such as the International Organization for Standardization (ISO) or the Institute of Electrical and Electronics Engineers (IEEE).

    The SSDF is a framework created to “reduce the number of vulnerabilities in released software, reduce the potential impact of the exploitation of undetected or unaddressed vulnerabilities, and address the root causes of vulnerabilities to prevent recurrences.” It includes suggestions to prepare an organization (such as developing organizational policy with respect to software security procedures), to protect software (such as using version control and code and commit signing), to produce secure software (such as using risk modelling, documenting design decisions, performing human or software-based security auditing, evaluating third-party software components), to follow secure coding practices (such as avoiding unsafe functions or unverified inputs, and selecting secure default configurations), and to respond to vulnerabilities (such as gathering and investigating reports, planning and implementing risk-based remediations, and analyzing root causes to feed back into security processes).7 The NIST standards combine elements that speak to the security of the code itself with those that address an organization’s relevant policies, the security of their development processes as a potential vector for supply chain attacks, and their behavior with respect to known good practices such as addressing vulnerabilities and performing security audits. By Executive Order, the US government has moved towards requiring its software vendors to comply with the SSDF; CISA has instantiated requirements based on the SSDF into a Secure Software Self-Attestation Form that (once finalized) will need to be completed by all vendors who sell software to the government.8

    The coding rubric also included a few specific elements  of such frameworks to see how often they were specifically named in the articles. Many fewer articles—only 19 of 123—focused on requirements for software makers to have policies, procedures, or specific behaviors with respect to how they address or disclose vulnerabilities in their code, and only a single article explicitly discussed code security auditing or penetration testing as a part of a regime.9

    Harms

    A liability regime may or may not require, for liability to accrue, that software insecurity causes actual harm to software users. Regimes based on torts almost definitionally require a harm to trigger liability, but regulatory regimes can simply require certain behavior of software makers.

    One disadvantage of requiring harm to trigger software liability is that cyber outcomes (and thus harms) are dependent not only on the actions of the software maker, but also on the actions of an adversary or bad actor that exploits a vulnerability to cause harm. This adds into the equation complicating questions about the skills and capabilities of different kinds of adversaries and whether it is fair or desirable to hold software makers equally responsible if they are hacked by a sophisticated and well-resourced entity such as a nation-state, versus by run-of-the-mill cyber criminals. On the other hand, hinging liability on harms, in a sense, scales enforcement to the manifested negative consequences of insecurity, providing an inbuilt mechanism for imposing harsher punishments on those entities whose insecurity is more societally deleterious or costly.

    Harms from cyber incidents can include costs to businesses, negative consequences for individuals such as the loss of privacy, and harms to national security such as through the theft of intelligence-relevant information. Financial costs to businesses are perhaps the best understood and best-represented under existing theories of tort liability (with some major caveats to be addressed later). Businesses impacted by a cyber incident can face financial costs stemming from operational disruptions or data loss; ransomware payments; technical remediation and incident response; notifying impacted consumers and providing identity monitoring; declines in share prices; and fines or lawsuits from government or shareholders.. Estimates of the precise costs of cyber incidents vary widely, but CISA reported several studies with estimates for the median cost of an incident ranging between $50,000 and $250,000 and the mean ranging between $400,000 and $7 million. 10

    Albeit less common than financial harms, cyber incidents can also cause physical harm. Physical harms from cyber incidents are likelier to arise from high-stakes, software-enabled products such as medical devices, airplanes, and cars.

    Questions around which types of harms can create liability for software makers were widely discussed in the liability literature surveyed, perhaps in part because such questions have frustrated past attempts to use common law torts to bring cases against the makers of insecure software. “Economic loss doctrine,” a legal theory in place in many states, holds that product liability should not allow one party to seek compensation for economic damages—essentially, any harms outside of physical harms or property damage—beyond what was outlined in the contract they agreed to.11 Because software often causes only financial harms to impacted businesses, and because software vendors often sell or license software under contracts that absolve them of most liability, this doctrine has limited the success of past tort cases for software insecurity.

    Discussed in 89 articles, the question of which harms can potentially trigger liability was one of the most-discussed in the literature, behind only the questions of product and negligence-based torts. In papers that explicitly mentioned the question of which types of harms should qualify, the majority view was that both economic and physical harms should serve as a potential basis for liability.

    Responsibility

    A liability policy regime will also need to consider how to allocate responsibility for failures between software manufacturers and software users. Software security is a problem of “shared responsibility”: users of software, in addition to its developers, have significant control over cybersecurity outcomes through their own security practices. Torts already have conceptions of “comparative negligence” when the behavior of the harmed party contributed significantly to the harmful outcome—policymakers might want to map this concept explicitly to the software context to balance certain policy goals.

    The most canonical question around the allocation of responsibility in software liability regimes is around “patching,” the practice in which vendors release fixes for discovered vulnerabilities and bugs in the form of software updates that their customers must then apply. Put simply, should a vendor continue to be liable for harms arising from a vulnerability, even after they released a patch that would fix it (and the customer failed to apply it)?

    On the one hand, frequent patching is an ongoing challenge for many organizations,12 especially those with the least resources to dedicate to information technology management and security.13 A world in which vendors ship insecure code and then inundate their users with countless security-critical patches seems undesirable, and holding developers liable for code regardless of patch availability would certainly incentivize them to release more secure code. At the same time, expecting developers to release fully and perpetually secure software is likely an unrealistic goal, and patching is thus a relatively accepted part of the current software delivery paradigm. There exist genuine policy goals both in reducing the number of patches that organizations need apply, and in providing incentives for software developers to release patches in a timely fashion and for software users to apply these patches. Any liability regime that rests on or can be triggered by harm will need to draw lines in the sand about whether and when, once a vulnerability is known and a patch available, subsequent bad outcomes are the fault of the developer or the user.

    Beyond just timely updating, there are other practices in the security context that software operators control that contribute significantly to security outcomes.14 Software operators must maintain firewalls and monitoring capabilities on their network. They must correctly configure products and choose secure settings. If a software liability regime seeks to incorporate some concept of comparative negligence for cases in which the software operator’s actions (or inactions) contributed significantly to the harm that arose from the software’s insecurity, it may also need—explicitly or implicitly—standards for the behavior of software operators and developers.

    40 of the articles surveyed mentioned the idea that a liability regime for software makers should codify considerations or requirements pertaining to the behavior of the software user, such as questions about whether a software patch was available but unapplied. Nine of those articles explicitly endorsed the idea and two critiqued it, with these two critiques occurring more recently than any of the endorsements.

    Some articles from the literature examined other potential policy approaches to the patching problem such as “patch liability,” the idea of instead requiring software developers to pay the costs associated with the resources their customers need to expend in order to apply software patches.15 In general,  these ideas appear relatively underexplored relative to the complexity of the policy tradeoffs at play, with only a few articles mentioning the potential impacts of different liability approaches on developers’ and users’ incentives and behaviors with respect to patching. 

    Scope: Who can be liable?

    Scope describes the myriad questions around which software and software vendors fall under the purview of a liability regime.

    Software for high-risk sectors and contexts

    One way to scope a liability regime would be to limit its requirements to a specific sector or application in which software might operate (or to include multiple sectors but to tailor elements such as standards to each). It makes certain intuitive sense from a policy perspective to apply higher standards of cybersecurity care for manufacturers of medical device or airplane software than creators of general-purpose word processing or customer management software. This approach would generally mirror the approach taken with existing cybersecurity standards for software operators in the United States, which tend to apply for specific high-risk sectors or data processing activities.

    Within the literature, 31 articles explicitly discussed considerations around sector-specific scoping or sector-specific standards for software liability. Just under half of the articles which mentioned the idea endorsed it, and both endorsements and neutral mentions stretch over multiple decades of the debate.

    Within the literature, healthcare and medical devices were most often mentioned as sectors that might be treated differently, with articles also mentioning autonomous vehicles, airplanes, voting machines, and nuclear plants. These sectors typical combine both potentially unique, application-specific software such as software embedded into medical devices, airplanes, or voting machines with heightened risks of potential bad outcomes (often but not always in terms of potential loss of life) from insecurity.

    A liability regime could adopt a model premised on specific kinds of sector-specific software (e.g., heightened liability for makers of autonomous vehicle software) or one premised on liability for any type of software used by high-risk sectors (e.g. heightened liability for any type of software sold to autonomous vehicle companies). The latter model faces a challenge in the fact that many types of software can be used across high- and low-risk sectors without distinction by the vendor or due notice by the customer. Many types of software are purpose-general (e.g., email clients) and can be deployed across a broad range of organizations and operating contexts, creating a leveling problem between the design context of software and its use. Cabining liability to certain types of software that are specific and high risk within these sectors appealingly avoids this problem. Yet, clear line drawing is a problem even under this approach, with many examples of technologies that provide essential support to the function of such devices but that are not specific to them, such as operating systems or cloud data processing. Depending on how these lines are drawn, software for use in these sectors is likely to become more expensive and more bifurcated than standard, consumer-grade applications.

    Software for high-risk security functions

    Some types of software are sensitive not because of the context in which they are deployed and used, but instead because they perform security-critical or risky functions. For example, identity and access management systems control access to other computing resources and are frequently targeted by hackers seeking to escalate their permissions to access sensitive data or perform privileged actions. Other software systems with potentially important and systemic security impacts include tools like hypervisors and virtualization software in cloud computing environments or network management tools and firewalls. Different applicability or standards for software of different security risk levels are present in existing policy regimes such as the European Union’s Cyber Resilience Act, which makes use of such a distinction and applies higher standards of security to software performing certain high-risk and security-critical functions.16

    Software sellers of a certain scale

    Another standard that a law could use to scope who can be liable—or to tier other elements of the regime, such as standards—would be based on the size of the entity that sold the software. For example, liability could kick in once companies are of a certain size as defined by financial metrics such as revenue, or sales of the software in question (noting that this question might be difficult to answer—for example, how to treat the sale of one license to one company, but that may result in hundreds of installs of the software). Conversely, small entities—those with low revenues or that have sold few instances of the software in question—could be carved out of a liability regime or subject to less complex or burdensome security standards. Such differentiation would reduce the compliance burden for small businesses that sell software. Such a system could also intersect with other scoping or tiering systems; for example, it might be the case that a software vendor that sells software to a water treatment plant or power station should always be liable, regardless of size, while the same might not be true for those that sell to non-critical infrastructure companies.

    Open source software

    Open source software (OSS) is not software sold by a vendor; rather, it is software whose source code is publicly available, distributed under a license that grants others total permission to use and modify the software, while ensuring that the software’s original creator offers no guarantees about its use nor accepts responsibility for any harms caused.

    While discussion of OSS has increased since the year 2010 as compared to prior decades, it has only been mentioned in 9 articles, much less than many of the other design questions in the framework. Of all articles that mentioned the question, none endorsed the idea of including developers of OSS in a software liability regime.

    This finding broadly aligns with the authors’ prior supposition that liability is not the right policy tool to use to improve security in the open source ecosystem. Much of open source code is published by academics, researchers, and hobbyists; threatening these unpaid volunteers with legal liability for sharing their code would likely have a chilling effect on their participation and thus harm an ecosystem that has provided myriad benefits for academic knowledge-sharing and the distribution of useful components. Even for widely used and widely supported open source packages, creating potential liability for contributors could disincentivize hobbyists and corporate employees alike from contributing security features and fixes back to the package—exactly the opposite of what most open source packages need from a security perspective. Besides these issues, there are more practical ones, such as to which contributors to apply liability when open source packages often incorporate contributions from dozens or hundreds of developers. There are myriad other ways to support the security of OSS (funding, auditing, encouraging companies to contribute back to OSS security17) that are a better fit for the unique context of open source that lacks clear contracts, transactions, or payments between a software’s developer and its users.

    The inclusion of open source developers is not the only means by which a liability policy regime could interact with open source software security. A liability regime could place requirements around responsible use of open source code on software vendors as an element of standards. This would incentivize software vendors to more carefully vet and to contribute back to the security of open source code that they want to use, improving the health of the broader ecosystem and the security of proprietary code that incorporates open source while avoiding the chilling effects of placing liability directly onto the developers of open source code.

    Governance and enforcement: Who holds you liable (and how)?

    Equally important to the “what” of a liability regime is the “who.” That is, which entities are responsible for implementing the components that make up the regime? Enforcement and governance are essential elements that differentiate liability from mechanisms of self-governance or voluntary standards.

    Setting standards

    There are a few existing models of standard-setting that might be ported over to the software cyber liability context.

    One model would be akin to that taken by certain cyber-physical systems such as airplanes and medical devices: in this model, a regulator sets standards for disclosures or information that a software product must submit to the regulator before the product comes to market. For example, the Federal Aviation Administration  has increasingly embedded cybersecurity into its approval processes for airplanes,18 and the Food and Drug Administration (FDA) requires medical device makers to adopt and disclose standards around secure development before their devices can be approved to go to market.19 These preemptive approval models allow regulators to more easily include standards around secure development into their processes: rather than needing to give companies a checklist of practices by which they must abide, they can force companies to affirmatively attest to or describe the secure-by-design and secure-by-default practices they followed in the creation of their software. In these models, the same entity also certifies compliance (e.g., allows the product to come to market) and, often enforces against violators (although in medical devices, for example, consumers can also bring suit under products liability). These models are relatively powerful, but they hinge on the fact that the regulator controls entry to the market, in that their approval is required as a precondition of the product being sold. This model is less realistic for all software products—software ranges from industrial control systems to video games created by small independent developers, and requiring even the smallest of software programs to be approved before coming to market would likely result in a severely throttled software ecosystem.

    Another model would be having an expert agency set standards such as secure development standards, which would apply to certain types of software without requiring disclosures or filings before a product comes to market. In most models from existing law, the entity that sets the standards is also the one that enforces them [e.g., the Federal Trade Commission (FTC) both sets standards for and enforces the Gramm Leach Bliley Act,20], but sometimes the two functions are divided. Yet another model would be requiring software makers to include statements of compliance with particular (federally selected or developed) standards in their contracts, thus giving software buyers the opportunity to sue software vendors for contractual violations if they fell short. For example, a recent proposed update to the Federal Acquisition Regulation would require contractors developing software on behalf of the government to certify compliance with Federal Information Processing Standards developed by the National Institute of Standards and Technology (NIST); if a company misrepresents their compliance, an action can brought by the Department of Justice under the False Claims Act.21

    Another approach would be to avoid prespecification of standards altogether. For example, a law could state that a company has a duty to its customers to uphold “reasonable” security standards, thereby allowing a judge in a case to determine what measures are reasonable. In such cases a judge may well look to existing standards and industry best practices to judge whether a practice was or was not reasonable—but these standards and practices are not identified a priori in the regime itself. As discussed in the section on standards, this approach create flexibility by trading off speed and certainty.

    Assessing compliance

    Depending on the structure of a liability regime, some entity or entities may be empowered to audit, assess, or certify compliance as part of the scheme. One approach would be self-certification—requiring entities to certify their own behaviors or compliance with standards, facing penalties if their attestations were later found to be false. Self-certification would likely need to be paired with some requirements for what entities must certify, to avoid race-to-the-bottom situations in which companies seek to promise nothing so they can be accountable for nothing. Self-certification was mentioned in only four of the articles and endorsed by none. However, it is a component of existing regimes such as Europe’s Cyber Resilience Act.

    Other approaches would involve external auditing of some form. External auditing to determine compliance was mentioned by relatively few articles, which were split on its desirability.

    External auditing could take several forms. A regulator could certify compliance as a prerequirement for the sale of software—mirroring regimes such as the approval processes for medical devices and airplanes outlined above, or the Federal Motor Vehicle Safety Standards.“Laws & Regulations,” 22Alternatively, audits could be reactive rather than proactive, such as those performed by the Health and Human Services (HHS) Office of Civil Rights to investigate inbound tips and assess compliance.23

    External auditors could also come from outside government; government can certify outside entities to assess compliance with the standards of the regime. For example, the Children’s Online Privacy Protection Act allowed industry groups to certify self-regulatory frameworks, which, after approval by the government, satisfy the law’s safe harbor requirements.24 Liability regimes can also combine other variables such as scope with auditing requirements: the European Union’s (EU) Cyber Resilience Act allows noncritical entities to perform a self-evaluation of their conformity with the requirements of the Act, while critical entities must be certified by an external (EU-approved) auditor.

    Enforcing violations

    One general dividing question is, do companies and users have the right to directly sue those responsible for insecure software, or does a government entity (e.g. a federal agency such as the FTC or State Attorneys General) enforce the law? Although the two delineate general models for enforcement, they are not mutually exclusive.

    Consumer enforcement

    One option for enforcement is to allow the entities harmed by insecure software—perhaps most often businesses, but also including individual consumers—to directly sue the company that sold them the software. Such a regime could be brought about by passing a law to change how product or negligence torts have been interpreted by the courts when it comes to software insecurity. Alternately, a law could simply establish new responsibilities or obligations that software makers owe to their customers and include a private right of action that allows those customers to directly sue software makers that have violated their rights under that law.

    Product or negligence torts for software were the two most widely discussed topics coded in the literature, with 97 and 95 mentions of each concept respectively. Data on article stances shows that product liability has both more supporters and more detractors than does a negligence standard. Generally speaking, authors adopted an either/or approach: of the 86 articles that mentioned both concepts, only seven articles endorsed both approaches.

    Visualizing the distribution of these articles by their year of publication suggests that this debate has been ongoing since the beginning of the literature sample and that neither approach has come to dominate over time.

    Another approach that Congress could take to structure a law with consumer enforcement would be a private right of action, a federal law that places obligations on companies and then grants consumers the right to bring suit to enforce their rights under the law. Eight articles endorsed the idea of allowing both government and consumer enforcement by creating a federally enforced regime with a private right of action.

    Government enforcement

    Another approach, and one taken with many existing cyber standards, is to have a federal or state agency (or agencies) enforce the law’s requirements instead.

    Many existing federal-level cybersecurity standards in the United States are sector-specific and thus enforced by the sector regulator, or by the Federal Trade Commission (FTC) if no such is available: for example, Health Insurance Portability and Accountability Act (HIPAA), the law imposing cybersecurity standards on healthcare entities in the processing of health data, is enforced by Health and Human Services (HHS); the Gramm Leach Bliley Act, which pertains to financial institutions, is enforced by the Consumer Financial Protection Bureau, the FTC, and other financial regulators; Children’s Online Privacy Protection, which protects children’s data, is enforced by the FTC as well; and cyber standards for pipeline operators by the Transportation Security Administration.

    The idea of federal government enforcement was less often discussed in the sampled literature than torts-based approaches, appearing in only 43 of the articles surveyed.

    Visualizing articles’ stances relative to their year of publication suggests that this idea emerged slightly later than did the idea of torts approaches and that it has gained relatively more endorsements relative to mentions especially within the past decade. However, its only two criticisms have also occurred recently. 

    With respect to which agency or agencies should serve as an enforcer, in the data, only the FTC and the FDA (the latter typically within the context of medical devices) were named as potential federal enforcing entities in more than two articles. Additionally, 12 of the articles endorsed the idea of granting enforcement power to state law enforcement such as State Attorneys General.

    Consequences

    Another key question is what happens to software makers that are found liable (and by whom). Most often, consequences come in the form of a requirement to pay money: either a penalty (for a violation of regulatory requirements) or damages (in torts, to compensate a harmed party). Tort-based regimes are necessarily civil, rather than criminal, proceedings; however, a statutory regime could create potential criminal liability with potential consequences including imprisonment. For example, violations of HIPAA, which regulates security controls for health care, can lead to both civil and criminal penalties, with criminal cases enforced by the Department of Justice rather than HHS.25

    Regulatory regimes could draw from a few existing models to establish the monetary penalties to be applied for violations. They could structure the law as a penalty-per-violation—for example, the FTC can extract monetary penalties from entities that violate Children’s Online Privacy Protection of up to $50,120 per violation.26In past, the FTC has extracted penalties in the hundreds of millions of dollars from the largest wrongdoers.27However, such regime would need to either set this per-violation cost to be very high or ensure that the number of violations is proportionate to the impact of the incident (for example, counting each separate instance of insecure software sold) in order to ensure that companies cannot walk away from a security failure that caused widespread harm with only a small fee to pay.  Alternately, other regimes permit regulators to extract penalties based on the revenues of the penalized entity—for example, Europe’s Cyber Resilience Act permits enforcers to extract penalties of up to15 million euros or 2.5 percent of a company’s total sales for the previous year, whichever is greater.28 The European General Data Protection Regulation follows a similar model based on a percentage of the firm’s worldwide annual revenue, with different tiers of possible fines depending on the specific provision violated.29

    Under a regime structured using torts, companies would need to pay damages assessed by a judge. These damages can be “compensatory,” or designed to compensate the impacted party for the harms they suffered, or “punitive,” which damages are intended explicitly as a punishment above and beyond the harm caused. If implemented through torts, judges could draw upon a robust body of existing jurisprudence to determine appropriate compensation for harms arising from software insecurity; if instead achieved through a federal regime with a private right of action, lawmakers could tweak this penalty as well.

    Goals: What does the regime try to achieve?

    Goals in the literature

    Each of the elements outlined above can be mixed and matched according to the goal of the liability regime. A liability regime must have a goal, explicit or implicit—or else, why effect a change? The rubric coded articles with respect to two broad-bucket potential goals. The first is to incentivize better security behavior by software vendors, typically in service of improving cybersecurity outcomes more broadly. The second is the question of providing redress, or ensuring that entities that are financially or otherwise harmed by a software vendors’ failures are justly compensated. While the rubric coded only for these two goals, there are other possible ones, such as the desire to harmonize, unify, or preempt potentially diverse sets of cybersecurity requirements and liabilities that may emerge in the future under the evolution of common law doctrines or state law.

    While these goals are not at all incompatible, they are also distinct—the fulfillment of one does not imply the fulfillment of the others. A regime could drive better software security without necessarily providing recompense to victims of insecurity, and vice versa. This section discusses each of the goals as represented in the literature and then explains how each goal might parameterize key design questions outlined above.

    Rebalance responsibility – Incentivize better security

    41 of the 123 surveyed articles described a potential goal for a liability regime in terms of changing incentives for software makers to push them to adopt better security behaviors and practices.

    This goal has appeared in the sampled literature across multiple decades.   

    We expected this goal to be closely related to discussions of market failures or information asymmetries that limit the ability of the market to effectively incentivize better software security (e.g., the idea that software consumers are ill-positioned to evaluate the security of the software they buy and thus the market inadequately incentivizes investment in security). Indeed, of the 41 articles with a stated goal of driving better security, ten explicitly cited market failures or information asymmetries as a current challenge with the ecosystem—a much higher rate than the six articles that endorsed this idea from the 130 remaining articles without such a goal. However, the idea of market failures and information asymmetries in security entered into the discussion in the surveyed literature relatively later, only after 2000.  

    Providing redress for harms

    56 of the 123 articles explicitly endorsed the goal of providing redress for harmed software users as an explicit goal of a liability regime, with another 38 mentioning the idea without explicitly stating that it was a core goal or motivator for imposing a liability regime. That means this goal was present in more of the surveyed literature than that of improving security behavior and outcomes (though also more often mentioned without explicit endorsement).

    This goal also appears earlier than the goal of incentivizing better behavior in the sampled literature, first appearing as early as 1977.

    The goal of providing redress might reasonably be closely linked to the fact that currently, consumers and businesses struggle to recover losses from makers of insecure software. Of the 56 articles that endorsed providing redress to harmed parties as a core goal, 35 also mentioned current challenges and barriers to winning software cases, as compared to eight of the 115 articles that did not endorse providing redress as an explicit goal. Mention of difficulties winning current lawsuits have also been present in the corpus for nearly four decades:  

    This idea was emphasized during the 1990s, which may have been precipitated by ProCD, Inc. v. Zeidenberg, a court case that found so-called “shrink-wrap licenses”—licenses that the user “accepted” by opening the shrink-wrap that protected physical media like CDs that contained software for install—to be legally valid.30However, this idea has continued to be mentioned throughout the articles in the years since.

    Finally, the two goals are hardly incompatible: 25 articles explicitly endorsed both incentivizing security and redressing harm as a goal or motivation for imposing a liability regime.

    Matching goals to other elements of a liability regime

    Different design choices in the construction of a liability regime will make the regime apply to different entities, incentivize different behavior, and provide different remedies. All these choices will shape its results and the changes it effects. Therefore, the explicit goal or goals of a liability regime provide direction on many of the key design choices outlined above.

    Rebalance responsibility – Incentivize better security

    A regime designed to provide incentives for vendors to adopt more secure behavior is likely to focus strongly on the standards component of a regime, whether these standards are required in regulation or provide a safe harbor from tort liability. The standards baked into a regime will define the set of behaviors toward which software vendors will be incentivized, making it essential for policymakers with this goal to devise either strong standards, or a means for developing adaptive strong standards, which they believe will drive better security outcomes if adhered to.   

    Indeed, among articles from the literature in which the author identified incentivizing better security behavior as a core goal of a liability regime, a majority endorse the idea of including secure development standards as a component—not true for articles without such a goal. This substantiates the idea that there is a connection between the goal of improving security and a focus on the specific standards and practices that would need to be required in law to do so.

    A goal of driving better security behavior might also make policymakers more interested in enforcement structures such as federal enforcement or torts liability with a safe or unsafe harbor, since these structures make it easier and faster to delineate clear standards through policy rather than waiting for courts to decide them over time.  A regulatory regime might be particularly attractive for this goal because, unlike torts, it would not require harm to occur before action could be taken, potentially allowing enforcers to intervene before security malfeasance results in individual or societal harm.

    Indeed, a much larger percentage of articles with a goal to incentivize better security endorse federal enforcement, in contrast to articles that did not state an explicit goal of incentivizing better cybersecurity behavior. 

    Likewise, state government enforcement was more popular among articles that explicitly stated a goal of driving better security behavior. Delegating authority in such a way might increase the resources and enforcement power of the federal government, an appealing proposal for driving wider compliance. Surprisingly, governance mechanisms such as external auditing were less popular in articles with this goal than in the overall set, contravening the expectation that such measures would be popular because they would increase compliance and avoid the need to wait for a security incident to identify violators.

    Articles with the goal of incentivizing better security behavior  were more likely than those without to explicitly endorse either product or negligence liability regimes—for example, 16 out of 41 articles with this goal endorsed product liability as opposed to 13 out of 130 without the goal. Articles with this goal endorsed both product and negligence liability with approximately equal rates to the base set of literature.

    Providing redress for harms

    If the goal of a liability regime is to provide redress to users of software who were harmed by its insecurity, such a regime will be focused on the harms that can trigger liability, perhaps moreso than on the specific standards that software makers must uphold. In fact, policymakers with this goal in mind might select a regime with very strict standards or no standards at all to avoid cases in which harmed software users are denied redress because the software vendor met the legal baseline of responsible behavior.

    Indeed, we found that articles with a stated goal of providing redress for harmed users were much more likely to endorse strict product liability—more focused on whether the product itself was defective than on the manufacturer’s intent—than articles without such a goal. These articles were also more likely to endorse strict product liability than negligence liability, which would incorporate a standard of care that defines software makers’ obligations.

    Limitations and directions for future work

    The sample of the literature conducted herein has several limitations that could be improved on in future work. First, the selection of articles was based on keyword searches and expert judgement rather than a measure such as citation count for all articles, which limits our ability to understand whether the sample is representative of the broader debate. Second, several factors of particular interest in this debate resolved only after the coding was completed, meaning the rubric did not incorporate some relevant questions such as limiting the applicability of a regime by type of software product or how to handle different types of supply chain compromise. Future work might consider a more robust methodology for article selection and a more extensive rubric. It might also lessen the degree of subjectivity in the actual coding by codifying standards and examples of endorsement, mention, and criticism ahead of time, or by having multiple reviewers code the same article and then using the average of their judgements to inform a final score. 

    Conclusion

    Conducting a meta-analysis of a complicated debate such as software liability necessarily produces data that is more illustrative than it is dispositive. The trends outlined above are not meant to present definitive answers as to the right approach on liability, but instead to provide a structuring framework that can help illuminate how different policy design questions—and the relationships between such questions—have been discussed (and sometimes under-discussed) thus far in the scholarly debate. In particular, some of the topics that were relatively more neglected in the literature sample, such as specific frameworks that could form the basis for standards in a liability regime, how to handle the problem of user behavior and patching, and how to scope the regime or its standards to different sectors or types of software, seem to be areas where further study and debate are much needed.

    Though it is tempting to analogize software liability neatly to other products or goods for which policymakers have constructed successful liability regimes—a popular metaphor is cars—these metaphors obscure important details of the ways that software is meaningfully different and poses greater challenges to regulate as a class of technology than many that have come before. Software is everywhere—it is found in every industry, in every application from the most trivial to the most consequential. It ranges almost unimaginably in scale and complexity, from tiny calculator applications to vast and sprawling networks of cloud computing infrastructure on which an incalculable number of other computing applications depend. These factors create paradoxes for regulators: societally and economically, we benefit hugely from a fast-moving, innovative, and thriving ecosystem of software development. At the same time, the persistent ethos of “ship now, fix later” has led to vulnerabilities that have cost collective billions of dollars31 and damaged individual privacy32 and national security33 through myriad cyber incidents great and small.  

    In practice, software liability may not be realized through a single, comprehensive regime that addresses every concern and every type of software at once. Instead, it might be an incremental form of progress: the creation of a duty of care for the largest vendors, or a requirement for the majority to adopt a small set of known best practices. One key throughline is likely to be adaptability: the ability of a regime to adapt to evolving best practices in the software security landscape, to adapt standards to different paradigms and functions for software, and to adapt to the different scales and stakes of various software applications.

    The policy task ahead on software liability is complex and contested—it will demand common language in addition to common purpose.  This work brings forward a set of core design questions from the history of the debate to help advance the current policy conversation around software liability, all in service of one goal: to improve outcomes in a world ever more reliant on the security of the software it consumes.


    Authors and acknowledgements

    The authors would like to thank John Speed Meyers, Josephine Wolff, Bryan Choi and Melanie Teplinsky for their feedback on various versions of this document. We also thank the many unnamed individuals from government, academia, and industry who met with us and participated in events and group conversations on the topic of software liability, helping sharpen and clarify our thinking on the topics herein. 

    Maia Hamin is an Associate Director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She works on the intersection of cybersecurity and technology policy, including projects on the cybersecurity implications of artificial intelligence, open-source software, and cloud computing. Prior to joining the Council, Maia was a TechCongress Congressional Innovation Fellow serving in the office of Senator Ron Wyden, and before that a software engineer on Palantir’s Privacy and Civil Liberties team. She holds a B.A. in Computer Science from Princeton University. 

    Dr. Trey Herr is the director of the Atlantic Council’s Cyber Statecraft Initiative and an assistant professor of Cybersecurity and Policy at American University’s School of International Service. At the Council, the CSI team works at the intersection of cybersecurity and geopolitics across conflict, cloud computing, supply chain policy, and more. Previously, he was a senior security strategist with Microsoft handling cloud computing and supply chain security policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution at Stanford University. He holds a PhD in Political Science and BS in Musical Theatre and Political Science. 

    Sara Ann Brackett is a research associate at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses on open-source software security (OSS), software bills of materials (SBOMs), software liability, and software supply-chain risk management within the Initiative’s Systems Security portfolio. She is an undergraduate at Duke University, where she majors in Computer Science and Public Policy, participates in the Duke Tech Policy Lab’s Platform Accountability Project, and works with the Duke Cybersecurity Leadership Program as part of Professor David Hoffman’s research team.

    Andy Kotz is a recent graduate of Duke University, where he majored in Computer Science and Political Science and served as a Cyber Policy Research Assistant on Professor David Hoffman’s research team. He is now at Millicom (Tigo), working in Digital Education, Telecommunications, and Cybersecurity.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “National Cybersecurity Strategy,” The White House, March 1, 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    2    US Library of Congress, Congressional Research Service, Introduction to Tort Law, by Andreas Kuersten, 2023, IF11291.
    3    “The T.J. Hooper,” Casebriefs, accessed December 4, 2023, https://www.casebriefs.com/blog/law/torts/torts-keyed-to-epstein/the-negligence-issue/the-t-j-hooper-3/.
    4    “What are the Microsoft SDL Practices,” Microsoft, accessed December 4, 2023, https://www.microsoft.com/en-us/securityengineering/sdl/practices
    5    Dan Goodin, “Microsoft Finally Explains Cause of Azure Breach: An Engineer’s Account Was Hacked,” Ars Technica, September 6, 2023, https://arstechnica.com/security/2023/09/hack-of-a-microsoft-corporate-account-led-to-azure-breach-by-chinese-hackers/
    6    “Secure by Design,” Cybersecurity and Infrastructure Security Agency (CISA), accessed December 4, 2023, https://www.cisa.gov/securebydesign.
    7    Murugiah Souppaya, Karen Scarfone, and Donna Dodson. “Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities.” National Institute of Standards and Technology US Department of Commerce, February 2022. https://doi.org/10.6028/NIST.SP.800-218
    8    Cybersecurity and Infrastructure Security Agency, Request for Comment on Secure Software Development Attestation Common Form, Accessed January 5, 2024. https://www.cisa.gov/secure-software-attestation-form
    9    Jane Chong, “Bad Code: The Whole Series,” Lawfare, November 4, 2013,  https://www.lawfaremedia.org/article/bad-code-whole-series
    10    Cybersecurity and Infrastructure Security Agency, “Cost of a Cyber Incident: Systematic Review and Cross-Validation”, October 26, 2020, https://www.cisa.gov/sites/default/files/publications/CISA-OCE_Cost_of_Cyber_Incidents_Study-FINAL_508.pdf
    11    Catherine M. Sharkey, “Can Data Breach Claims Survive the Economic Loss Rule?,” DePaul Law Review 66 (2017), last updated August 21, 2017, https://ssrn.com/abstract=3013642
    12    “2022 Top Routinely Exploited Vulnerabilities,” Cybersecurity and Infrastructure Security Agency (CISA), August 3, 2023, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-215a
    13    Evan Sweeney, “For Hospitals Defending against Cyberattacks, Patch Management Remains a Struggle,” Fierce Healthcare, May 17, 2017, https://www.fiercehealthcare.com/privacy-security/for-hospitals-defending-against-cyberattacks-patch-management-remains-a-struggle
    14    “2022 Top Routinely Exploited Vulnerabilities.”
    15    Terrence August and Tunay I. Tunca, “Who Should Be Responsible for Software Security? A Comparative Analysis of Liability Policies in Network Environments.” Management Science 57 (2011): 934–59, http://www.jstor.org/stable/25835749.
    16    Markus Limacher, “Cyber Resilience Act – Get Yourself and Your Products up to Speed for the CRA,” InfoGuard,  December 4, 2023. https://www.infoguard.ch/en/blog/cyber-resilience-act-get-yourself-and-your-products-up-to-speed-for-the-cra
    17    Stewart Scott, Sara Ann Brackett, Trey Herr, Maia Hamin, “Avoiding the Success Trap: Toward Policy for Open-Source Software as Infrastructure.” Atlantic Council (blog), February 8, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/
    18    “Advisory Circular on Guidelines for Design Approval of Aircraft Data Link Communication Systems Supporting Air Traffic Services (ATS),” US Department of Transportation, Federal Aviation Administration, September 28 2016, https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC_20-140C.pdf
    19    “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions,” US Food and Drug Administration, Center for Devices and Radiological Health, September 26, 2023, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/cybersecurity-medical-devices-quality-system-considerations-and-content-premarket-submissions
    20    “Gramm-Leach-Bliley Act,” Federal Trade Commission, June 16, 2023, https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act
    21    “Government Contractors Beware: New Cybersecurity Rules and False Claims Act Enforcement Actions on the Rise,” Akin Gump Strauss Hauer & Feld LLP, accessed December 4, 2023, https://www.akingump.com/en/insights/alerts/government-contractors-beware-new-cybersecurity-rules-and-false-claims-act-enforcement-actions-on-the-rise.
    22    NHTSA, accessed December 4, 2023. https://www.nhtsa.gov/laws-regulations
    23    “Enforcement Process,” US Department of Health and Human Services, May 7, 2008, https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/enforcement-process/index.html
    24    “COPPA Safe Harbor Program,” Federal Trade Commission, January 7, 2015, https://www.ftc.gov/enforcement/coppa-safe-harbor-program
    25    “HIPAA Violations & Enforcement,” American Medical Association, November 28, 2023, https://www.ama-assn.org/practice-management/hipaa/hipaa-violations-enforcement
    26    “Complying with COPPA: Frequently Asked Questions,” Federal Trade Commission, July 20, 2020, https://www.ftc.gov/business-guidance/resources/complying-coppa-frequently-asked-questions
    27    “Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law,” Federal Trade Commission, September 4, 2019, https://www.ftc.gov/news-events/news/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations-childrens-privacy-law
    28    “EU Cyber Resilience Regulation Could Translate into Millions in Fines.” Help Net Security (blog), January 19, 2023, https://www.helpnetsecurity.com/2023/01/19/eu-cyber-resilience-regulation-fines/
    29    “What Are the GDPR Fines?,” GDPR.eu, July 11, 2018, https://gdpr.eu/fines/.
    30    “ProCD, Inc. v. Zeidenberg,” Casebriefs, accessed December 4, 2023, https://www.casebriefs.com/blog/law/contracts/contracts-keyed-to-farnsworth/the-bargaining-process/procd-inc-v-zeidenberg/.
    31    Annie Lowrey, “Sony’s Very, Very Expensive Hack.” New York Magazine, December 16, 2014, https://nymag.com/intelligencer/2014/12/sonys-very-very-expensive-hack.html
    32    Federal Trade Commission, “Equifax to Pay $575 Million as Part of Settlement with FTC, CFPB, and States Related to 2017 Data Breach,” July 22, 2019, https://www.ftc.gov/news-events/news/press-releases/2019/07/equifax-pay-575-million-part-settlement-ftc-cfpb-states-related-2017-data-breach
    33    Trey Herr et al., “Broken Trust: Lessons from Sunburst,” Atlantic Council (blog), March 29, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/broken-trust-lessons-from-sunburst/

    The post Design questions in the software liability debate appeared first on Atlantic Council.

    ]]>
    The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. https://www.atlanticcouncil.org/blogs/new-atlanticist/the-sentencing-of-a-us-navy-sailor-is-a-window-into-chinese-espionage-heres-how-the-us-should-respond/ Sat, 13 Jan 2024 17:09:30 +0000 https://www.atlanticcouncil.org/?p=724859 China’s intelligence services recognize that national security information does not have to be classified to provide them with value.

    The post The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. appeared first on Atlantic Council.

    ]]>
    The United States and its allies and partners are under constant threat from pervasive efforts by China to collect intelligence, though this rarely makes it into the public eye. This week provided a clear reminder of this threat. On January 8, US Navy sailor Wenheng Zhao, who pled guilty in October 2023 in the Central District of California to one count of conspiring with a foreign intelligence officer and one count of receiving a bribe, was sentenced to twenty-seven months in prison and ordered to pay a $5,500 fine.

    Zhao was one of two active duty US servicemembers indicted in August 2023 for providing sensitive US military information to China. The second, Jinchao Wei, was indicted for violating an espionage statute and multiple export violations in the Southern District of California. According to the indictment, he was granted US citizenship while the alleged illegal activities were taking place. (Wei is, of course, presumed innocent until proven guilty in a court of law.)

    These two cases are playing out as tensions remain high between the United States and China, even after the November 2023 meeting between US President Joe Biden and Chinese leader Xi Jinping. In response to these court cases, there will be an understandable temptation for the United States to react by doing something to address Chinese espionage, and perhaps even pressure for systemic changes to the US counterintelligence approach. But big, sudden changes often create new and potentially greater vulnerabilities. Instead, policymakers should respond carefully and deliberately by seizing this moment to manage counterintelligence and security risks more effectively over the long term.

    This can be done by decreasing the probability of future similar events from occurring, while avoiding creating new risks. Specifically, the response should consider focusing on prevention via training, enhanced information-sharing with allies and partners, and a shift to a more holistic risk-based personnel security approach for all US military members.

    Intelligence collection doesn’t always mean stealing classified secrets

    These two cases suggest that China’s intelligence services recognize that national security information does not have to be classified to provide them with value.  

    Although both Zhao and Wei reportedly had secret-level security clearances, they were not assigned to particularly sensitive military occupational specialties, and there are no indications within the indictments that they passed classified information to Beijing’s intelligence services.

    Wei was assigned to the USS Essex amphibious assault ship, which operates as a “Lightning carrier,” a platform for fifth generation F-35B Lightning strike aircraft. He allegedly used his phone to take photos that he provided to China’s intelligence services, while also providing information regarding potential vulnerabilities of the USS Wasp class of US Navy ship.

    Zhao reportedly provided Chinese intelligence with information regarding the electrical system for a storage facility at a base in Okinawa housing a Ground/Air Task-Oriented Radar system. This radar system is used for expeditionary warfare that supports Marines in a contested or potentially contested maritime area—the kind of warfare that would matter in a conflict in the Western Pacific.

    Given China’s resources, these were low-cost operations relative to the information allegedly received and a high return on investment to enhance Beijing’s hard power. As compensation for their alleged activities, Wei reportedly received between $10,000 and $15,000, while Zhao received the equivalent of almost $15,000.  

    Three new steps to bolster counterintelligence and security

    While these cases shed light on national security risks for the United States and its allies and partners, they also present the opportunity to justify new options for Washington to respond. That response should not, for example, be to limit the opportunities for foreign nationals to serve honorably in the US military or take measures that could damage recruitment and retention. Rather, it should take careful, measured steps to reinforce the foundations of counterintelligence and security. There are three steps policymakers should take next:

    1. Focus more on prevention relative to treatment

    In the medical community, doctors think of solutions in terms of prevention and treatment. For national security, the United States must do both, but in this instance, prevention—via training—should be the focus.

    Specifically, the Department of Defense should enhance its counterintelligence threat awareness and reporting training program. This can be done by increasing the frequency of the training, presenting the information in different ways, and requiring a signed acknowledgement of responsibility from the training recipient. Such prevention measures would require additional resources for the Department of Defense counterintelligence and security system, but it would be worth the cost since the enhanced training requirements would decrease risk and potential costs overall.

    2. Mobilize allies and partners to work together on counterintelligence

    While protecting the integrity of the criminal justice process, the United States should consider sharing as much information as possible with its allies and partners about the methods that China’s intelligence services use to conduct their operations, particularly US allies and partners in the Indo-Pacific, since they are likely being targeted using similar methods. 

    Specifically, the US counterintelligence community should host periodic events with its allies and partners to exchange information regarding how Beijing’s intelligence services target military members. This will help educate their military personnel regarding the evolving threat, including the types of cover used to approach potential targets. In the case of Zhao, the Chinese intelligence officer reportedly portrayed himself to Zhao as a maritime economic researcher, who needed information in order to “inform investment decisions.”

    3. Establish a more holistic approach to personnel security that better integrates counterintelligence

    Finally, the Department of Defense should consider enhancing the current security clearance-based system with a more holistic, risk-based personnel security approach. This would include those US military members who do not require access to classified information.

    How might this work? There are various policies and systems already in place for personnel security and information security, especially for individuals who hold top secret security clearances and those who work in sensitive compartmented information facilities (SCIFs). Those important safeguards for security clearance holders should remain, but there are currently disconnects between security considerations (Do the duties of a position require working with sensitive information?) and counterintelligence findings (What information might China or other countries want?). The goal, then, should be to more closely integrate security and counterintelligence. Such an approach would fuse counterintelligence information regarding the evolving capabilities and intentions of foreign intelligence services with information about the duties of the position.

    The risks of national security information being provided to foreign intelligence services have always existed and can never be eliminated, so the objective should be to optimally manage those risks. This could best be accomplished by investing in training, increasing sharing with allies and partners, and shifting to a more holistic risk-based personnel security approach for all US military members. 

    Given the long-term and dynamic challenges of US-China strategic competition, now is the time to adapt US counterintelligence and security policy to effectively meet those challenges posed by China’s intelligence collection efforts.


    Andrew Brown is a nonresident fellow with the Atlantic Council’s Indo-Pacific Security Initiative, where he specializes in defense and intelligence issues. He was previously a criminal investigator with the Department of Defense and was assigned to the Office of the Director of National Intelligence (ODNI).

    The views expressed in this article are the author’s and do not reflect those of the Department of Defense or ODNI.

    The post The sentencing of a US Navy sailor is a window into Chinese espionage. Here’s how the US should respond. appeared first on Atlantic Council.

    ]]>
    Global China Hub Nonresident Fellow Dakota Cary spoke to CNN https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-spoke-to-cnn/ Fri, 12 Jan 2024 19:51:00 +0000 https://www.atlanticcouncil.org/?p=725971 On January 12, GCH Nonresident Fellow Dakota Cary spoke to CNN on how the Chinese government relies on the private sector to help its cybersecurity capacities.

    The post Global China Hub Nonresident Fellow Dakota Cary spoke to CNN appeared first on Atlantic Council.

    ]]>

    On January 12, GCH Nonresident Fellow Dakota Cary spoke to CNN on how the Chinese government relies on the private sector to help its cybersecurity capacities.

    The post Global China Hub Nonresident Fellow Dakota Cary spoke to CNN appeared first on Atlantic Council.

    ]]>
    Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-on-click-here/ Thu, 11 Jan 2024 15:47:34 +0000 https://www.atlanticcouncil.org/?p=723872 On January 10, GCH Nonresident Fellow Dakota Cary was brought on to Click Here to discuss his report, “Sleigh of hand: How China weaponizes software vulnerabilities,” which explains how Chinese software vulnerability laws require Chinese businesses to report coding flaws to a government agency, which in turn shares this information with state-sponsored hacking groups.

    The post Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here appeared first on Atlantic Council.

    ]]>

    On January 10, GCH Nonresident Fellow Dakota Cary was brought on to Click Here to discuss his report, “Sleigh of hand: How China weaponizes software vulnerabilities,” which explains how Chinese software vulnerability laws require Chinese businesses to report coding flaws to a government agency, which in turn shares this information with state-sponsored hacking groups.

    The post Global China Hub Nonresident Fellow Dakota Cary Featured on Click Here appeared first on Atlantic Council.

    ]]>
    Global China Hub Nonresident Fellow Dakota Cary quoted in CNN https://www.atlanticcouncil.org/insight-impact/in-the-news/global-china-hub-nonresident-fellow-dakota-cary-quoted-in-cnn/ Wed, 10 Jan 2024 19:52:00 +0000 https://www.atlanticcouncil.org/?p=725969 On January 10, GCH Nonresident Fellow Dakota Cary was quoted in CNN on China’s surveillance capabilities.

    The post Global China Hub Nonresident Fellow Dakota Cary quoted in CNN appeared first on Atlantic Council.

    ]]>

    On January 10, GCH Nonresident Fellow Dakota Cary was quoted in CNN on China’s surveillance capabilities.

    The post Global China Hub Nonresident Fellow Dakota Cary quoted in CNN appeared first on Atlantic Council.

    ]]>
    Ukraine is on the front lines of global cyber security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraine-is-on-the-front-lines-of-global-cyber-security/ Tue, 09 Jan 2024 21:37:52 +0000 https://www.atlanticcouncil.org/?p=722954 Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new Russian cyber attacks, writes Joshua Stein.

    The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

    ]]>
    There is no clear dividing line between “cyber warfare” and “cyber crime.” This is particularly true with regard to alleged acts of cyber aggression originating from Russia. The recent suspected Russian cyber attack on Ukrainian mobile operator Kyivstar is a reminder of the potential dangers posed by cyber operations to infrastructure, governments, and private companies around the world.

    Russian cyber activities are widely viewed as something akin to a public-private partnership. These activities are thought to include official government actors who commit cyber attacks and unofficial private hacker networks that are almost certainly (though unofficially) sanctioned, directed, and protected by the Russian authorities.

    The most significant government actor in Russia’s cyber operations is reportedly Military Unit 74455, more commonly called Sandworm. This unit has been accused of engaging in cyber attacks since at least 2014. The recent attack on Ukraine’s telecommunications infrastructure was probably affiliated with Sandworm, though specific relationships are intentionally hard to pin down.

    Stay updated

    As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

    Attributing cyber attacks is notoriously difficult; they are designed that way. In some cases, like the attacks on Ukraine’s electrical and cellular infrastructure, attribution is a matter of common sense. In other cases, if there is enough information, security firms and governments can trace attacks to specific sources.

    Much of Russian cyber crime occurs through private hacker groups. Russia is accused of protecting criminals who act in the interests of the state. One notable case is that of alleged hacker Maksim Yakubets, who has been accused of targeting bank accounts around the world but remains at large in Russia despite facing charges from the US and UK.

    The Kremlin’s preferred public-private partnership model has helped make Russia a major hub for aggressive cyber attacks and cyber crime. Private hacker networks receive protection, while military hacking projects are often able to disguise their activities by operating alongside private attacks, which provide the Kremlin with a degree of plausible deniability.

    More than ten years ago, Thomas Rid predicted “cyber war will not take place.” Cyber attacks are not a battlefield, they are a race for digital resources (including access to and control of sensitive devices and accounts). This race has been ongoing for well over a decade.

    Part of the reason the US and other NATO allies should be concerned about and invested in the war in Ukraine is that today’s cyber attacks are having an impact on cyber security that is being felt far beyond Ukraine. As Russia mounts further attacks against Ukrainian targets, it is also expanding its resources in the wider global cyber race.

    Andy Greenberg’s book Sandworm documents a range of alleged Russian attacks stretching back a number of years and states that Sandworm’s alleged operations have not been limited to cyber attacks against Ukraine. The United States indicted six GRU operatives as part of Sandworm for their role in a series of attacks, including attempts to control the website of the Georgian Parliament. Cyber security experts are also reasonably sure that the NotPetya global attack of 2016 was perpetrated by Sandworm.

    The NotPetya attack initially targeted Ukraine and looked superficially like a ransomware operation. In such instances, the victim is normally prompted to send cryptocurrency to an account in order to unlock the targeted device and files. This is a common form of cyber crime. The NotPetya attack also occurred after a major spree of ransomware attacks, so many companies were prepared to make payouts. But it soon became apparent that NotPetya was not ransomware. It was not meant to be profit-generating; it was destructive.

    The NotPetya malware rapidly spread throughout the US and Europe. It disrupted global commerce when it hit shipping giant Maersk and India’s Jawaharlal Nehru Port. It hit major American companies including Merck and Mondelez. The commonly cited estimate for total economic damage caused by NotPetya is $10 billion, but even this figure does not capture the far greater potential it exposed for global chaos.

    Ukraine is currently on the front lines of global cyber security and the primary target for groundbreaking new cyber attacks. While identifying the exact sources of these attacks is necessarily difficult, few doubt that what we are witnessing is the cyber dimension of Russia’s ongoing invasion of Ukraine.

    Looking ahead, these attacks are unlikely to stay in Ukraine. On the contrary, the same cyber weapons being honed in Russia’s war against Ukraine may be deployed against other countries throughout the West. This makes it all the more important for Western cyber security experts to expand cooperation with Ukraine.

    Joshua Stein is a researcher with a PhD from the University of Calgary.

    Further reading

    The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

    The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

    Follow us on social media
    and support our work

    The post Ukraine is on the front lines of global cyber security appeared first on Atlantic Council.

    ]]>
    Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion https://www.atlanticcouncil.org/blogs/ukrainealert/ukrainian-telecoms-hack-highlights-cyber-dangers-of-russias-invasion/ Thu, 21 Dec 2023 00:09:09 +0000 https://www.atlanticcouncil.org/?p=718878 An unprecedented December 12 cyber attack on Ukraine's largest telecoms operator Kyivstar left tens of millions of Ukrainians without mobile services and underlined the cyber warfare potential of Russia's ongoing invasion, writes Mercedes Sapuppo.

    The post Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion appeared first on Atlantic Council.

    ]]>
    A recent cyber attack on Ukraine’s largest telecommunications provider, Kyivstar, caused temporary chaos among subscribers and thrust the cyber front of Russia’s ongoing invasion back into the spotlight. Kyivstar CEO Oleksandr Komarov described the December 12 hack as “the biggest cyber attack on telco infrastructure in the world,” underlining the scale of the incident.

    This was not the first cyber attack targeting Kyivstar since Russia launched its full-scale invasion in February 2022. The telecommunications company claims to have repelled around 500 attacks over the past twenty-one months. However, this latest incident was by far the most significant.

    Kyivstar currently serves roughly 24 million Ukrainian mobile subscribers and another million home internet customers. This huge client base was temporarily cut off by the attack, which also had a knock-on impact on a range of businesses including banks. For example, around 30% of PrivatBank’s cashless terminals ceased functioning during the attack. Ukraine’s air raid warning system was similarly disrupted, with alarms failing in several cities.

    Kyivstar CEO Komarov told Bloomberg that the probability Russian entities were behind the attack was “close to 100%.” While definitive evidence has not yet emerged, a group called Solntsepyok claimed responsibility for the attack, posting screenshots that purportedly showed the hackers breaching Kyivstar’s digital infrastructure. Ukraine’s state cyber security agency, known by the acronym SSSCIP, has identified Solntsepyok as a front for Russia’s GRU military intelligence agency.

    Stay updated

    As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

    The details of the attack are still being investigated but initial findings indicate that hackers were able to breach Kyivstar security via an employee account at the telecommunications company. This highlights the human factor in cyber security, which on this occasion appears to have enabled what Britain’s Ministry of Defense termed as “one of the highest-impact disruptive cyber attacks on Ukrainian networks since the start of Russia’s full-scale invasion.”

    This latest cyber attack is a reminder of the threat posed by Russia in cyberspace. Ever since a landmark 2007 cyber attack on Estonia, Russia has been recognized as one of the world’s leading pioneers in the field of cyber warfare. The Kremlin has been accused of using both state security agencies and non-state actors in its cyber operations in order to create ambiguity and a degree of plausible deniability.

    While cyber attacks have been a feature of Russian aggression against Ukraine since hostilities first began in 2014, the cyber front of the confrontation has been comparatively quiet following the launch of the full-scale invasion almost two years ago. Some experts are now warning that the recent attack on the Kyivstar network may signal an intensification of Russian cyber activities, and are predicting increased cyber attacks on key infrastructure targets in the coming months as the Kremlin seeks to make the winter season as uncomfortable as possible for Ukraine’s civilian population.

    Ukraine’s cyber defense capabilities were already rated as robust before Russia’s full-scale invasion. These capabilities have improved considerably since February 2022, not least thanks to a rapid expansion in international cooperation between Ukraine and leading global tech companies. “Ukraine’s cyber defense offers an innovative template for other countries’ security efforts against a dangerous enemy,” the Financial Times reported in July 2023. “Constant vigilance has been paired with unprecedented partnerships with US and European private sector groups, from Microsoft and Cisco’s Talos to smaller firms like Dragos, which take on contracts to protect Ukraine in order to gain a close-up view of Russian cyber tradecraft. Amazon Web Services has sent in suitcase-sized back-up drives. Cloudfare has provided its protective service, Project Galileo. Google Project Shield has helped fend off cyber intrusions.”

    As Ukraine’s cyber defenses grow more sophisticated, Russia is also constantly innovating. Ukrainian cyber security officials recently reported the use of new and more complex malware to target state, private sector, and financial institutions. Accelerating digitalization trends evident throughout Ukrainian society in recent years leave the country highly vulnerable to further cyber attacks.

    There are also some indications that Ukrainian cyber security bodies may require reform. In November 2023, two senior officials were dismissed from leadership positions at the SSSCIP amid a probe into alleged embezzlement at the agency. Suggestions of corruption within Ukraine’s cyber security infrastructure are particularly damaging at a time when Kyiv needs to convince the international community that it remains a reliable partner in the fight against Russian cyber warfare.

    The Kyivstar attack is a reminder that the Russian invasion of Ukraine is not only a matter of tanks, missiles, and occupying armies. In the immediate aftermath of the recent attack on the country’s telecommunications network, Ukrainian Nobel Peace Prize winner and human rights activist Oleksandra Matviichuk posted that the incident was “a good illustration of how much we all depend on the internet, and how easy it is to destroy this whole system.” Few would bet against further such attacks in the coming months.

    Mercedes Sapuppo is a program assistant at the Atlantic Council’s Eurasia Center.

    Further reading

    The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

    The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

    Follow us on social media
    and support our work

    The post Ukrainian telecoms hack highlights cyber dangers of Russia’s invasion appeared first on Atlantic Council.

    ]]>
    Kroenig on Fox News podcast discussing cyber intrusions by China https://www.atlanticcouncil.org/insight-impact/in-the-news/kroenig-on-fox-news-podcast-discussing-cyber-intrusions-by-china/ Wed, 13 Dec 2023 18:32:18 +0000 https://www.atlanticcouncil.org/?p=715903 On December 13, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed by Fox News Rundown on how China could use its cyber intrusions into private sector entities to interfere with US efforts to protect Taiwan.

    The post Kroenig on Fox News podcast discussing cyber intrusions by China appeared first on Atlantic Council.

    ]]>

    On December 13, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed by Fox News Rundown on how China could use its cyber intrusions into private sector entities to interfere with US efforts to protect Taiwan.

    I think [these cyber intrusions] really [are] about the strategic competition and really about China preparing for war.

    Matthew Kroenig

    The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

    The post Kroenig on Fox News podcast discussing cyber intrusions by China appeared first on Atlantic Council.

    ]]>
    The 5×5—2023: The cybersecurity year in review https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-2023-the-cybersecurity-year-in-review/ Wed, 13 Dec 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=714286 A group of Atlantic Council fellows review the past year in cybersecurity, which organizations and initiatives made positive steps, and areas for improvement going forward. 

    The post The 5×5—2023: The cybersecurity year in review appeared first on Atlantic Council.

    ]]>
    This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

    It has been a busy year in cybersecurity and in the land of policy. On March 2, 2023, the Biden administration released its long-awaited National Cybersecurity Strategy, laying out an ambitious plan to maintain the United States’ advantage in cyberspace and boost the security and resilience of critical technical systems across the economy and society. The document was followed by its Implementation Plan and the National Cyber Workforce and Education Strategy later that summer.

    This year saw other noteworthy developments, including cybersecurity failures that resulted in major hacks of organizations ranging from T-Mobile and 23andMe to critical infrastructure in Guam and the Ukrainian military amidst its war with Russia.  There has been no shortage to discuss in 2023, so we brought together a group of Atlantic Council fellows to review the past year in cybersecurity, which organizations and initiatives made positive steps, and areas for improvement going forward. 

    Editors of the editor note: The 5×5’s founder and inaugural editor, Simon Handler, is moving on to new adventures, but it bears a note of thanks to Simon for his wit and work ethic in taking this series from an idea through to forty-two issues over the last four years. The series continues, but meanwhile thank you, Simon, and good luck. 

    #1 What organization, public or private, had the greatest impact on cybersecurity in 2023? 

    Amélie Koran, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

    “Progress Software, the makers of the MOVEit file transfer service which has been the gift that has kept on giving this year when it comes to notable breaches this year. It has impacted private and public sector organizations and over sixty million individuals around the world, with more than 80 percent of the impacted organizations based in the United States. There was rarely a cybersecurity-adjacent news story in 2023 that did not have a component tied to this software.” 

    John Speed Meyers, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; principal research scientist, Chainguard

    “Since there is not, to a first approximation, a scale on which cybersecurity has been or is measured, it is hard for me to say anything objective. That said, assuming the scale extends below zero, I would like to vote for C and C++ software developers.” 

    Justin Sherman, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; founder and chief executive officer, Global Cyber Strategies

    “There are, in some ways, too many to pick from—both good and bad. On the positive side in 2023, the United Kingdom’s National Cyber Security Centre continues to roll out voluntary, systemic internet security protections for British networks and organizations, most recently offering its free Domain Name System (DNS) security service to schools. Such decisions exemplify the concept of security at scale, identifying the points with great ‘leverage’ improve security, something with which US policy still struggles. On the side of undermining US cybersecurity, the Chinese government’s expanded efforts to require companies to disclose software vulnerabilities to the state increase a number of hacking risks to the United States and plenty of other countries.” 

    Maggie Smith, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director, Cyber Project, Irregular Warfare Initiative

    “I think everyone’s mind immediately goes to Microsoft and its ongoing efforts to assist Ukraine. But I think the company’s impact on cybersecurity goes beyond the all-consuming narrative around the role of the private sector before, during, and in the aftermath of conflict. In September, I read a great post by Cynthia Brumfield on the <Meta>curity Substack (I highly recommend subscribing to its ‘Best Infosec-Related Long Reads for the Week’) about the technical blunders made by Microsoft that gave Chinese actors access to US government emails. For me, it tied a bow around how I feel about how to approach cybersecurity: there is no silver bullet, and no one is ever truly secure. China’s hack highlighted how a company that is literally helping prevent catastrophic cyberattacks can simultaneously be the victim of one. This is a dichotomy inherent to the domain of cyberspace and the impact of seeing it so publicly with Microsoft was my 2023 cybersecurity ‘woah’ moment.” 

    Bobbie Stempfley, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; vice president and business unit security officer, Dell Technologies

    “It is hard not to say that the Security and Exchange Commission (SEC) has had the greatest impact on cybersecurity, given how active it has been in this space. That being said, recognizing the National Institute for Standards and Technology and its publication of post-quantum encryption standards for three of its four selected algorithms and intention to evaluate the next wave of algorithms has great impact on national security.” 

    #2 What was the most impactful cyber policy or initiative of 2023? 

    Koran: “I would say that the US National Cybersecurity Strategy would count in this category because it was released, debated, and followed with an implementation plan. Getting any policy or directive out of the government and through the gauntlet of reviews, markup, critique, and public consumption is to be lauded. Is it perfect? No. Is it a good start? Yes. For it to succeed and the United States to continue to lead in these policy areas, policymakers need maintain, revise, and consider it a living document. For the implementation plan, leaders need to realize that these were lofty goals with aggressive timelines—many of which may be missed—but to keep trying.” 

    Meyers: “Overlooking the aforementioned lack of a cybersecurity impact scale, I would nominate the Internet Security Research Group’s Prossimo project or, more parochially, the creation of Wolfi, a new security-first Linux distribution.” 

    Sherman: “The 2023 US National Cybersecurity Strategy is particularly significant because of its strong, explicit bent toward regulation. It is the product of an important, positive, and long overdue decision to focus US cyber policy on where and why companies are not investing in cybersecurity, rather than continue to speak purely about public-private partnerships and ignore the failures of the market to address the risks to citizens, businesses, and the country. As a point of comparison for this shift, the 2023 cyber strategy mentions ‘regulation’ or some variant of it forty times—while the previous National Cyber Strategy, released in 2018, did not say ‘regulation’ once.” 

    Smith: “For impact in 2023, the Department of Defense (DoD) Cyber Strategy is at the top of my list because it places a hard stop on DoD by clearly defining its jurisdictional limits. With the rise of ransomware and other forms of pervasive cybercrime, US Cyber Command has often worked to support other US entities to combat attacks. Many viewed DoD’s activity as blurring the line and stepping dangerously close to getting involved in domestic cybersecurity. The 2023 DoD Cyber Strategy clearly draws the line: The Department, in particular, lacks the authority to employ military forces to defend private companies against cyber-attacks. It may do so only if directed by the President, or (1) if the Secretary of Defense or other appropriate DoD official approves a request for defense support of civil authorities from the Department of Homeland Security, Federal Bureau of Investigation, or another appropriate lead Federal agency; (2) at the invitation of such a company; and (3) in coordination with the relevant local or Federal authority. Given this—and the limited circumstances in which military cyber forces would be asked to defend civilian critical infrastructure—the Department will not posture itself to defend every private sector network.” 

    Stempfley: “The Delaware Court of Chancery ruling that expands the duty of care from ‘directors’ to ‘officers’ and takes an expansive view of what an officer is at a company.  The ruling in the McDonald’s Corporation Stockholder Derivative Litigation, while not getting the same attention as the SEC rule or the National Cyber Strategy, is creating impact by lining up top-to-bottom conversations about cyber risks in organizations. Additionally, it is likely to lead to more standardization and clarity around the role of the Chief Information Security Officer and other relevant officers.”

    #3 What is the most important yet under-reported cyber incident of 2023?

    Koran: “The T-Mobile data breaches. If we answer the question of ‘what day is it?’ and reply ‘another day for a T-Mobile breach,’ the company has not learned from its long history of breaches, nor has regulatory framework aided in curbing the regularity and impact of these breaches. While other telecommunications companies have not had as many regular lapses as T-Mobile has had, one wonders what makes them different than the others and if the issue can be remedied. Additionally, the company has decided to cut more jobs and the only thing keeping people away from sensitive areas of the company is a sign on the door of a data center with a strongly worded message of ‘please do not steal any more data.’” 

    Meyers: “Using a loose definition of ‘incident,’ I would like to nominate the Cyber Safety Review Board’s decision to investigate the extortion activities of Lapsus$ prior to investigating the Russian intelligence agencies’ epic SolarWinds hack.” 

    Sherman: “Among others—recognizing that I am cheating on this response by picking a few—a Chinese state-sponsored group called Volt Typhoon hacked US critical infrastructure systems, including in Guam, which speaks to the cyber-focused risks associated with any potential kinetic conflict with Beijing in the future; hackers exploited the log4j vulnerability to hack into devices and then sell the information to ‘proxyware’ services, which speaks to the intersection of major vulnerabilities and the cryptojacking, adware, and other similar markets; and Russia’s military intelligence agency built malware specifically targeting Android devices to spy on Ukrainian devices and, for a period, gained access to the Ukrainian military’s combat data exchange.” 

    Smith: “Earlier this year genetic testing company 23andMe was hacked multiple times. For a long time, I have wondered about mail-order DNA kits and how they store, protect, and manage an individual’s data—consumer genetic testing data, for example, does not fall under the Health Insurance Portability and Accountability Act (HIPAA). As someone who has done genetic testing for a medical reason and felt the ripple effects of what it can reveal, the 23andMe hacks confirmed my fears that sensitive, personal genetic information gathered for commercial purposes may put marginalized groups at risk if stolen. Many genetic mutations, for example, fall in the ‘founder mutation’ category, meaning the mutation is observed with high frequency in a group that is or was geographically or culturally isolated, in which one or more of the ancestors was a carrier of the altered gene. Therefore, it is relatively easy to determine a person’s ethnicity if a founder mutation is present. 23andMe tests for many known founder mutations because they do tell people a lot about their personal history. With antisemitism at peak levels and the first 23andMe hack targeting those of Ashkenazi Jewish heritage, I think the hacking of commercial genetic data deserves a lot more attention.” 

    Stempfley: “Ransomware has gotten a great deal of coverage, from the Ransomware Task Force to its highlights in the Verizon Data Breach Report (VDBR) and the financial impact—so what is under-reported in ransomware? The now documented impact to public safety. Early in the year, published research explicitly tied ransomware at hospitals and health care delivery points to impact to patient care. This study showed that in 44 percent of the cases that were studied patient care was impeded leading to negative patient outcomes. This report was published in the Journal of American Medicine Association a mainstream medical journal, not in a security publication.” 

    More from the Cyber Statecraft Initiative:

    #4 What cybersecurity issue went unaddressed in 2023 but deserves greater attention in 2024? 

    Koran: “Not to flog the buzzwords, but better forward-leaning policies and regulations toward security in artificial intelligence (AI) and large language model (LLM) services deserve more attention. Putting these tools and services on the market well before their safety has been successfully worked out, vetted, and peer reviewed greatly increases risk to critical and non-critical infrastructure. While these tools may not be directly flipping switches at power plants and hospitals, the impact of their generated content on mis- and disinformation, at a time when the public is not critically thinking about their output, is dangerous. Even non-LLM or AI-based tools that are labelled as being backed or run by these technologies not only engender a false sense of safety and completeness but also fuel the hype train.” 

    Meyers: “The ungodly amount of time that software professionals spend identifying, triaging, and remediating known software vulnerabilities. I thought computers were supposed to make our lives better.” 

    Sherman: “Some of the most important protocols for internet traffic transmission globally, such as the Border Gateway Protocol (BGP), remain fundamentally insecure, and many companies and organizations still have not implemented the available cybersecurity improvements. Policymakers should also remember, amid excitement, fear, and craze about generative AI, to think about the cybersecurity of physical internet infrastructure that underpins GenAI—such as the cloud computing systems used to train and deploy models.” 

    Smith: “In March, the Environmental Protection Agency (EPA) released a memorandum stressing the need for states to assess cybersecurity risk to drinking water systems and issued a new rule that added cybersecurity assessments to annual state-led Sanitary Survey Programs for public water systems. However, the EPA rescinded the rule after legal challenges. Attorneys general in Iowa, Arkansas, and Missouri, joined by the American Water Works Association and the National Rural Water Association, claimed that making the cybersecurity improvements were too costly for suppliers and those costs would pass to the consumers. Importantly, EPA Assistant Administrator Radhika Fox warned, ‘cyberattacks have the potential to contaminate drinking water, which threatens public health.’ I hope to see more action to protect our public water systems, as well as other systems critical to public health and welfare.” 

    Stempfley: “The impact of Generative AI on entry-level positions in the cyber workforce [deserves greater attention]. The cyber workforce shortage has been widely reported, as has the challenge that many new entrants to the field have experienced, but we have not begun to talk about how the impacts from this technology will be disproportionately aligned to those least experienced in the field, potentially doing away with most entry level roles. If this happens, it will require us to think about the workforce in different ways.” 

    #5 At year’s end, how do you assess the efficacy of the Biden administration’s 2023 National Cybersecurity Strategy?

    Koran: “In a short word, it has been ineffective—despite, as I note above, being the most impactful. Barring the momentum of the software bill of materials (SBOM) message train, the suggested movements by public and private sector organizations to align with the strategy have been resisted or questioned, even though many of the ideas and efforts proposed are laudable. There was not a lot of momentum for these groups to push some of these efforts, and it will take years, not weeks or months, to meet the strategy’s goals. The strategy is a way finder, but Congress—in disarray for quite some time—needs to act to power it. Until Congress passes legislation and appropriations that support government efforts, private sector organizations will have little reason to align unless the market demands change. Everything else has also been overshadowed by global events and politics, and momentum to achieve the goals set out by the strategy will be hard to come by.” 

    Meyers: “To be determined. Perhaps it shifted the Overton window on software security and liability, though I suspect that general suspicion of large technology companies did that more than the issuing of any one strategy.” 

    Sherman: “The Biden administration’s strategy, particularly with its emphasis on regulation, is an important and long-overdue shift in how the US government is messaging and advancing its cybersecurity policy. However, there is still much to be done, and it is not yet clear exactly how the administration intends to implement the emphasis on regulation in practice—the implementation guidance for the National Cybersecurity Strategy entirely omitted certain sections of the Strategy itself.” 

    Smith: “I think it is too early to assess the efficacy of the strategy, but I do think that it is a step forward. As a wild example, the October 22 60 Minutes brought the Five Eyes (United States, Australia, New Zealand, United Kingdom, and Canada) intelligence chiefs together for an interview—something that has never happened before! Before the interview they released a rare joint statement to confront the ‘unprecedented threat’ China poses to the innovation world, and that from quantum technology and robotics to biotechnology and artificial intelligence, China is stealing secrets in various sectors. The best part about the interview, in my opinion, is that it is conducted in a sparse, dimly lit room with all the chiefs sitting around a non-descript round table, adding to the spook factor!” 

    Stempfley: “The National Cybersecurity Strategy, its associated implementation plan, and workforce strategy have been important documents and have certainly set the national direction—this direction has served the administration well in domestic and international discussions. The strategy’s influence in the federal budget process and in those elements of industry that do not typically engage in public private partnerships have not been as substantive as hoped.”

    Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—2023: The cybersecurity year in review appeared first on Atlantic Council.

    ]]>
    Ukraine’s AI road map seeks to balance innovation and security https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-ai-road-map-seeks-to-balance-innovation-and-security/ Tue, 12 Dec 2023 21:37:02 +0000 https://www.atlanticcouncil.org/?p=715576 As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security, writes Ukraine's Minister for Digital Transformation Mykhailo Fedorov.

    The post Ukraine’s AI road map seeks to balance innovation and security appeared first on Atlantic Council.

    ]]>
    As the world grapples with the implications of rapidly evolving Artificial Intelligence (AI) technologies, Ukraine has recently presented a national road map for AI regulation that seeks to balance the core values of innovation and security.

    Businesses all over the world are currently racing to integrate AI into their products and services. This process will help define the future of the tech sector and will shape economic development across borders.

    It is already clear that AI will allow us all to harness incredible technological advances for the benefit of humanity as a whole. But if left unregulated and uncontrolled, AI poses a range of serious risks in areas including identity theft and the dissemination of fake information on an unprecedented scale.

    One of the key objectives facing all governments today is to maximize the positive impact of AI while minimizing any unethical use by both developers and users, amid mounting concerns over cyber security and other potential abuses. Clearly, this exciting new technological frontier must be regulated that ensure the safety of individuals, businesses, and states.

    Some governments are looking to adopt AI policies that minimize any potential intervention while supporting business; others are attempting to prioritize the protection of human rights. Ukraine is working to strike a balance between these strategic priorities.

    Stay updated

    As the world watches the Russian invasion of Ukraine unfold, UkraineAlert delivers the best Atlantic Council expert insight and analysis on Ukraine twice a week directly to your inbox.

    Today, Ukraine is among the world’s leading AI innovators. There are more than 60 Ukrainian tech companies registered as active in the field of artificial intelligence, but this is by no means an exhaustive list. Throughout Ukraine’s vibrant tech sector, a large and growing number of companies are developing products and applications involving AI.

    The present objective of the Ukrainian authorities is to support this growth and avoid over-regulation of AI. We recognize that the rapid adoption of regulations is always risky when applied to fast-moving innovative fields, and prefer instead to adopt a soft approach that takes the interests of businesses into account. Our strategy is to implement regulation through a bottom-up approach that will begin by preparing businesses for future regulation, before then moving to the implementation stage.

    During the first phase, which is set to last two to three years, the Ukrainian authorities will assist companies in developing a culture of self-regulation that will enable them to control the ethics of their AI systems independently. Practical tools will be provided to help businesses adapt their AI-based products in line with future Ukrainian and European legislative requirements. These tools will make it possible to carry out voluntary risk assessment of AI products, which will help businesses identify any areas that need improvement or review.

    Ukraine also plans to create a product development environment overseen by the government and involving expert assistance. The aim is to allow companies to develop and test AI products for compliance with future legislation. Additionally, a range of recommendations will be created to provide stakeholders with practical guidelines for how to design, develop, and use AI ethically and responsibly before any legally binding regulations come into force.

    For those businesses willing to do more during the initial self-regulation phase, the Ukrainian authorities will prepare voluntary codes of conduct. Stakeholders will also be issued a policy overview providing them with a clear understanding of the government’s approach to AI regulation and clarifying what they can expect in the future.

    During the initial phase, the Ukrainian government’s role is not to regulate AI usage, but to help Ukrainian businesses prepare for inevitable future AI regulation. At present, fostering a sense of business responsibility is the priority, with no mandatory requirements or penalties. Instead, the focus is on voluntary commitments, practical tools, and an open dialogue between government and businesses.

    The next step will be the formation of national AI legislation in line with the European Union’s AI Act. The bottom-up process chosen by Ukraine is designed to create a smooth transition period and guarantee effective integration.

    The resulting Ukrainian AI regulations should ensure the highest levels of human rights protection. While the development of new technologies is by nature an extremely unpredictable process for both businesses and governments, personal safety and security remain the top priority.

    At the same time, the Ukrainian approach to AI regulation is also designed to be business-friendly and should help fuel further innovation in Ukraine. By aligning the Ukrainian regulatory framework with EU legislation, Ukrainian tech companies will be able to enter European markets with ease.

    AI regulation is a global issue that impacts every country. It is not merely a matter of protections or restrictions, but of creating the right environment for safe innovation. Ukraine’s AI regulation strategy aims to minimize the risk of abuses while making sure the country’s tech sector can make the most of this game-changing technology.

    Mykhailo Fedorov is Ukraine’s Vice Prime Minister for Innovations and Development of Education, Science, and Technologies, and Minister of Digital Transformation.

    Further reading

    The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

    The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

    Follow us on social media
    and support our work

    The post Ukraine’s AI road map seeks to balance innovation and security appeared first on Atlantic Council.

    ]]>
    Kroenig on Fox & Friends on Chinese cyber intrusions https://www.atlanticcouncil.org/insight-impact/in-the-news/kroenig-on-fox-friends-on-chinese-cyber-intrusions/ Tue, 12 Dec 2023 18:54:39 +0000 https://www.atlanticcouncil.org/?p=715442 On December 12, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed on Fox & Friends on cyber intrusions into critical US entities by the People’s Republic of China. Dr. Kroenig argues that these intrusions demonstrate that China is preparing for war with the United States, and he contends that, to […]

    The post Kroenig on Fox & Friends on Chinese cyber intrusions appeared first on Atlantic Council.

    ]]>

    On December 12, Matthew Kroenig, Atlantic Council vice president and Scowcroft Center senior director, was interviewed on Fox & Friends on cyber intrusions into critical US entities by the People’s Republic of China. Dr. Kroenig argues that these intrusions demonstrate that China is preparing for war with the United States, and he contends that, to defend against cyberattacks, the US government needs to “be clear with the American people that we are in a new Cold War with China.”

    We are in a serious rivalry. This isn’t some kind of competition like a tennis match.

    Matthew Kroenig

    The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

    The post Kroenig on Fox & Friends on Chinese cyber intrusions appeared first on Atlantic Council.

    ]]>
    2024 DC Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2024-dc-cyber-9-12-strategy-challenge/ Tue, 05 Dec 2023 16:48:11 +0000 https://www.atlanticcouncil.org/?p=708927 The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the twelfth annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 15-16, 2024. This event will be held in a hybrid format, meaning teams are welcome to attend either […]

    The post 2024 DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>

    The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the twelfth annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC on March 15-16, 2024. This event will be held in a hybrid format, meaning teams are welcome to attend either virtually via Zoom, or in-person at American University’s Washington College of Law. The agenda and format will look very similar to past Cyber 9/12 Challenges, except that it will be held in a hybrid format. Plenary sessions will be livestreamed via Zoom.

    Held in partnership with:

    Frequently Asked QuestionsVirtual

    How do I log in to the virtual sessions? 

    Your team and coach will be sent an invitation to your round’s Zoom meeting in the week leading up to the event using the emails provided during registration

    How will I know where to log in, and where is the schedule? 

    For competition rounds you will receive an email invitation with your Zoom link. For all plenary sessions and for the team room assignments and agenda please check the Cyber 9/12 Linktree. 

    How are the virtual sessions being run? 

    Virtual sessions will be run very close to the traditional competition structure and rules. Each Zoom meeting will be managed by a timekeeper. This timekeeper will ensure that each team and judge logs on to the conference line and will manage the competition round.  

    At the beginning of the round, decision documents will be shared by the timekeeper via Zoom and judges will have 2 minutes 30 seconds to review the document prior to the competitors’ briefing.  

    Teams will have 10 minutes to present their briefing and 10 minutes for Q&A. Judges will be asked to mute themselves for the 10-minute briefing session. 

    Judges will then engage the team in a Q&A session, playing the role of members of the National Security Council (or other organization as listed on the Intelligence Report instructions).

    Judges will then be invited to a digital breakout room and will have 5 minutes to discuss scores and fill out their scorecards via JotForm.  

    After the scoring is over, judges will have 10 minutes to provide direct feedback to the team.  

    A 10-minute break is scheduled before the start of the next round. Each round has been allotted several minutes of transition time for technical difficulties and troubleshooting. 

    What do I need to log into a virtual session?  

    Your team will need a computer (recommended), tablet, or smartphone with a webcam, microphone, and speaker or headphones. 

    Your team will be provided with a link to the Zoom conference for each competition round your team is scheduled for. If you have any questions about the software, please see Zoom’s internal guide here. 

    Will my team get scored the same way on Zoom as in-person? 

    Yes, the rules of the competition remain the same, including the rubric for scoring. You can see the rules and the grading rubric here.

    How does advancing through the competition work in a hybrid format? 

    After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

    How will my team receive Intelligence Report 2 and 3? 

    We will send out the Intelligence Reports via email to all qualifying teams. 

    How will the final round be run? 

    The final round will be run identically to the traditional final round format, except that the judges will be in-person. The virtual team will follow the standard final round format as outlined in the rules. After finishing the competition round, the virtual finalist team(s) will then join the plenary session webinar for the final round and watch the remaining finalist teams present.

    Frequently Asked QuestionsIn-person

    Where will the event be held in-person? 

    For participants attending in-person, the Cyber 9/12 Strategy Challenge will be held at American University’s Washington College of Law (WCL).

    What time will the event start and finish? 

    While the final schedule has yet to be finalized, participants will be expected at American University WCL at 8:00am on Day 1, and the competition will run until approximately 5:00pm, with an evening reception at approximately 6:30pm. Day 2 will commence at approximately 9:00am, and will finish at approximately 5:30pm. The organizing team reserves the right to modify the above timing. The official schedule of events will be distributed to teams in advance of the event and will be available on the Cyber 9/12 Linktree. All times are EST. 

    Will my team get scored the same way in-person as on Zoom? 

    Yes, the rules of the competition remain the same, including the rubric for scoring. You can see the rules and the grading rubric here.

    How does advancing through the competition work in a hybrid format? 

    After the Qualifying Round on Day 1, the top 50% of in-person teams and the top 50% of virtual teams will advance to the Semi-Final Round on Day 2. After the Semi-Final Round, the top 3 teams, in-person or virtual, will advance to the Final Round.

    Can teams who are eliminated on Day 1 still participate in Day 2 events? 

    Yes! All teams are welcome at all of the side-programming events. We strongly encourage teams eliminated on Day 1 to attend the competition on Day 2. There will be side-programming events such as Careers Talks, Resume Workshops, and other fun, cyber-related activities. See the Cyber 9/12 Linktree in the lead up to the event to see the full schedule of event.

    Will meals be included for in-person attendees?

    Yes, breakfast and lunch will be provided for all participants on both days. Light refreshments & finger foods will be provided at the evening reception on Day 1.

    What should I pack/bring to a Cyber 9/12 event?

    At the event: Please bring at least 5 printed copies of your decision documents to give to the judges on Day 1. Teams who do not have their decision document to give to judges will be assessed a penalty. We will help print documents on Day 2. Name tags will be provided to all participants, judges, and staff at registration on March 15. We ask you to wear these name tags throughout the duration of the competition. Name tags will be printed using the exact first and last name provided upon registration.

    Dress Code: We recommend that students dress in business casual attire as teams will be conducting briefings. You can learn more about business casual attire here.

    Electronic Devices: Cell phones, laptops, and wearable tech will not be used during presentations but we recommend teams bring their laptops as they will need to draft their decision documents for Day 2 and conduct research. Please refer to the competition rules for additional information and for our policy on technology accommodations.

    Presentation Aids: Teams may not use any visual aid other than their decision documents in their oral policy brief, including but not limited to slideshow presentations, additional handouts, binders, or folders.

    How do we get to American University?

    American University is on the DC Metro Red line. Metro service from both Dulles International Airport (IAD) and Reagan National Airport (DCA) connect with the Metro Red Line at Metro Center. 

    Zoom

    What is Zoom? 

    Zoom is a free video conferencing application. We will be using it to host the competition remotely. 

    Do I need a Zoom account? 

    You do not have to have an account BUT we recommend that you do and download the desktop application to participate in the Cyber 9/12 Strategy Challenge. 

    Please use your real name to register so we can track participation. A free Zoom account is all that is necessary to participate.  

    What if I don’t have Zoom? 

    Zoom is available for download online. You can also access Zoom conferences through a browser without downloading any software or registering.  

    How do I use Zoom on my Mac? Windows? Linux Machine? 

    Follow the instructions here and here to get started. Please use the same email you registered with for your Zoom to sign up.

    Can I use Zoom on my mobile device? 

    Yes, but we recommend that you use a computer or tablet 

    Can each member of my team call into the Zoom conference line independently for our competition round? 

    Yes. Please see the troubleshooting section below for tips if multiple team members will be joining the competition round on independent devices in the same room.  

    Can other teams listen-in to my team’s session? 

    Zoom links to competition sessions are team specific—only your team, your coach and your judges will have access to a session and sessions will be monitored once all participants have joined. If an observer has requested to watch your team‘s presentation, your timekeeper will notify you at the start of your round.

    Staff will be monitoring all sessions and all meetings will have a waiting room enabled in order to monitor attendance. Any team member or coach in a session they are not assigned to will be removed and disqualified. 

    Troubleshooting

    What if my team loses internet connection or is disconnected during the competition? 

    If your team experiences a loss of internet connection, we recommend following Zoom’s troubleshooting steps listed here. Please remain in contact with your timekeeper.

    If your team is unable to rejoin the Zoom conference – please use one of the several dial-in lines included in the Zoom invitation.  

    What if there is an audio echo or other audio feedback issue? 

    There are three possible causes for audio malfunction during a meeting: 

    • A participant has both the computer and telephone audio active. 
    • A participant computer and telephone speakers are too close together.  
    • Multiple participant computers with active audio are in the same room.  

    If this is the case, please disconnect the computer’s audio from other devices, and leave the Zoom conference on one computer. To avoid audio feedback issues, we recommend each team use one computer to compete. 

    What if I am unable to use a video conference, can my team still participate? 

    Zoom has dial-in lines associated with each Zoom conference event and you are able to call directly using any landline or mobile phone. 

    We do not recommend choosing voice only lines unless absolutely necessary.

    Other

    Will there be keynotes or any networking activity remotely? 

    Keynotes will continue as reflected on our agenda and will be broadcast with links to be shared with competitors the day before the event. Some side-programming events may not be available virtually. We apologize for the inconvenience.

    We also encourage competitors and judges to join the Cyber 9/12 Strategy Challenge Alumni Network on LinkedIn where we regularly share job and internship postings, as well as information about events and how to be a part of the cyber policy community worldwide.

    How should I prepare for a Cyber 9/12?

    Check out our preparation materials, which includes past scenarios, playbooks including award-winning policy recommendations and a starter pack for teams that includes templates for requesting coaching support or funding.

    Cyber Statecraft Initiative

    The post 2024 DC Cyber 9/12 Strategy Challenge appeared first on Atlantic Council.

    ]]>
    Community watch: China’s vision for the future of the internet https://www.atlanticcouncil.org/in-depth-research-reports/report/community-watch-chinas-vision-for-the-future-of-the-internet/ Mon, 04 Dec 2023 14:00:00 +0000 https://www.atlanticcouncil.org/?p=707988 In 2015, Beijing released Jointly Building a Community with a Shared Future in Cyberspace, a white paper outlining the CCP’s vision for the future of the internet. In the eight years since then, this vision has picked up steam outside of China, largely as the result of Beijing’s efforts to export these ideas to authoritarian countries.

    The post Community watch: China’s vision for the future of the internet appeared first on Atlantic Council.

    ]]>
    Table of contents

    Executive summary
    Introduction
    The core of China’s approach
    Case studies in China’s “shared future”

    Executive summary

    China recognizes that many nondemocratic and illiberal developing nations need internet connectivity for economic development. These countries aim to digitize trade, government services, and social interactions, but interconnectivity risks better communication and coordination among political dissidents. China understands this problem and is trying to build global norms that facilitate the provision of its censorship and surveillance tools to other countries. This so-called Community with a Shared Future in Cyberspace, is based around the idea of cyber sovereignty. China contends that it is a state’s right to protect its political system, determine what content is appropriate within its borders, create its own standards for cybersecurity, and govern access to the infrastructure of the internet. 

    Jointly Building a Community with a Shared Future in Cyberspace, a white paper from the government of the People’s Republic of China (most recently released in 2022 but reissued periodically since 2015), is a continuation of diplomatic efforts to rally the international community around China’s concept of cyber sovereignty.1 By extending the concept of sovereignty to cyberspace, China makes the argument that the state decides the content, operations, and norms of its internet; that each state is entitled to such determinations as a de facto right of its existence; that all states should have equal say in the administration of the global internet; and that it is the role of the state to balance claims of citizens and the international community (businesses, mostly, but also other states and governing bodies). 

    But making the world safe for authoritarian governments is only part of China’s motivation. As the key provider of censorship-ready internet equipment and surveillance tools, China’s concept of cyber sovereignty offers political security to other illiberal governments. Case studies in this report demonstrate how such technologies may play a role in keeping China’s friends in power.

    The PRC supports other authoritarian governments for good reason. Many countries in which Chinese state-owned enterprises and PRC-based companies own mineral drawing rights or have significant investments are governed by authoritarians. Political instability threatens these investments, and, in some cases, China’s access to critical mineral inputs to its high-tech manufacturing sector. Without a globally capable navy to compel governments to keep their word on contracts, China is at the mercy of democratic revolutions and elite power struggles in these countries. By providing political security to a state through censorship, surveillance, and hacking of dissidents, China improves its chances of maintaining access to strategic plots of land for military bases or critical manufacturing inputs. A government that perceives itself to be dependent on China for political security is in no position to oppose it.

    Outside of China’s strategic objectives, the push for a Community with a Shared Future in Cyberspace may also have an operational impact on state-backed hacking teams.  

    As China’s cybersecurity companies earn more customers, their defenders gain access to more endpoints, better telemetry, and a more complete view of global cyber events. Leveraged appropriately, a larger customer base improves defenses. The Ministry of Industry and Information Technology’s Cybersecurity Threat and Vulnerability Information Sharing Platform, which collects information about software vulnerabilities, also collects voluntary incident response reports from Chinese firms responding to breaches of their customers.2 Disclosure of incidents and the vulnerabilities of overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. China’s own defensive posture should also improve as its companies attract more global clients. 

    China’s offensive teams could benefit, too. Many cybersecurity firms often allow their own country’s security services to operate unimpeded in their customers’ networks.3 Therefore, it is likely that more companies protected by Chinese cybersecurity companies means fewer networks where China’s offensive hacking teams must worry about evading defenses. 

    This report uses cases studies from the Solomon Islands, Russia, and beyond to show how China is operationalizing its view of cyber sovereignty. 

    Introduction

    A long black slate wall covered in dark hexagonal tiles runs along the side of Nuhong Street in Wuzhen, China, eighty miles southwest of Shanghai. A gap in the middle of the wall leads visitors to the entrance of the Waterside Resort that, for the last nine years, has hosted China’s World Internet Conference, a premier event for Chinese Communist Party (CCP) cyber policymakers.

    The inaugural conference didn’t seem like a foreign policy forum. The thousand or so attendees from a handful of countries and dozens of companies listened to a speaker circuit asserting that 5G is the future, big data was changing the world, and the internet was great for economic development—hardly groundbreaking topics in 2014.4 But the internet conference was more than a platform for platitudes about the internet: it also served as China’s soft launch for its international strategy on internet governance.

    By the last evening of the conference, some of the attendees had already left, choosing the red-eye flight home over another night by the glass-encased pool on the waterfront. Around 11 p.m., papers slid under doorways up and down the hotel halls. Conference organizers went room by room distributing a proclamation they hoped attendees would endorse just nine hours later.5 Attendees were stunned. The document said: “During the conference, many speakers and participants suggest [sic] that a Wuzhen declaration be released at the closing ceremony.” The papers, stapled and stuffed under doors, outlined Beijing’s views of the internet. The conference attendees—many of whom were members of the China-friendly Shanghai Cooperation Organization—balked at the last-minute, tone-deaf approach to getting an endorsement of Beijing’s thoughts on the internet. The document went unsigned, and the inaugural Wuzhen internet conference wrapped without a sweeping declaration. It was clear China needed the big guns, and perhaps less shady diplomatic tactics, to persuade foreigners of the merits of their views of the internet. 

    President Xi Jinping headlined China’s second World Internet Conference in 2015.6 This time the organizers skipped the late-night antics. On stage and reportedly in front of representatives from more than 120 countries and many more technology firm CEOs, Xi outlined a vision that is now enshrined in text as “Jointly Building a Community with a Shared Future in Cyberspace.”7 The four principles and five proposals President Xi laid out in his speech, which generally increase the power of the state and aim to model the global internet in China’s image, remain a constant theme in China’s diplomatic strategy on internet governance.8 In doing so, Xi fired the starting gun on an era of global technology competition that may well lead to blocs of countries aligned by shared censorship and cybersecurity standards. China has reissued the document many times since Xi’s speech, with the latest coming in 2022. 

    Xi’s 2015 speech came at a pivotal moment in history for China and many other authoritarian regimes. The Arab Spring shook authoritarian governments around the world just years earlier.9 Social media-fueled revolutions saw some autocrats overthrown or civil wars started in just a few months. China shared the autocrats’ paranoia. A think tank under the purview of the Cyberspace Administration of China acutely summarized the issue of internet governance, stating: “If our party cannot traverse the hurdle represented by the Internet, it cannot traverse the hurdle of remaining in power for the long term.”10 Another PRC government agency report went even further: blaming the US Central Intelligence Agency for no fewer than eleven “color revolutions” since 2003: the National Computer Virus Emergency Response Center claimed that the United States was providing critical technical support to pro-democracy protestors.11 Specifically, the center blamed the CIA for five technologies—ranging from encrypted communications to “anti-jamming” WiFi that helped connect protestors—that played into the success of color revolutions. Exuberance in Washington over the internet leveling the playing field between dictators and their oppressed citizens was matched in conviction, if not in tone, by leaders from Beijing to Islamabad.

    But China and other repressive regimes could not eschew the internet. The internet was digitizing everything, from social relationships and political affiliations to commerce and trade. Authoritarians needed a way to reap the benefits of the digital economy without introducing unacceptable risks to their political systems. China’s approach, called a Community with a Shared Future in Cyberspace,12 responds to these threats as a call to action for authoritarian governments and a path toward more amenable global internet governance for authoritarian regimes. It is, as one expert put it, China switching from defense to offense.13

    The core of China’s approach

    The PRC considers four principles key to structuring the future of cyberspace. These principles lay the conceptual groundwork for the five proposals, which reflect the collective tasks to build this new system. Table 1 shows the principles, which were drawn from Xi’s 2015 speech.14


    Table 1: China’s Four Principles, in Xi’s Words

    • Respect for cyber sovereignty: “The principle of sovereign equality enshrined in the Charter of the United Nations is one of the basic norms in contemporary international relations. It covers all aspects of state-to-state relations, which also includes cyberspace. We should respect the right of individual countries to independently choose their own path of cyber development, model of cyber regulation and Internet public policies, and participate in international cyberspace governance on an equal footing. No country should pursue cyber hegemony, interfere in other countries’ internal affairs or engage in, connive at or support cyber activities that undermine other countries’ national security.”
    • Maintenance of peace and security: “A secure, stable and prosperous cyberspace is of great significance to all countries and the world. In the real world, there are still lingering wars, shadows of terrorism and occurrences of crimes. Cyberspace should not become a battlefield for countries to wrestle with one another, still less should it become a hotbed for crimes. Countries should work together to prevent and oppose the use of cyberspace for criminal activities such as terrorism, pornography, drug trafficking, money laundering and gambling. All cyber crimes, be they commercial cyber thefts or hacker attacks against government networks, should be firmly combated in accordance with relevant laws and international conventions. No double standards should be allowed in upholding cyber security. We cannot just have the security of one or some countries while leaving the rest insecure, still less should one seek the so-called absolute security of itself at the expense of the security of others.”
    • Promotion of openness and cooperation: “As an old Chinese saying goes, ‘When there is mutual care, the world will be in peace; when there is mutual hatred, the world will be in chaos.’ To improve the global Internet governance system and maintain the order of cyberspace, we should firmly follow the concept of mutual support, mutual trust and mutual benefit and reject the old mentality of zero-sum game or ‘winner takes all.’ All countries should advance opening-up and cooperation in cyberspace and further substantiate and enhance the opening-up efforts. We should also build more platforms for communication and cooperation and create more converging points of interests, growth areas for cooperation and new highlights for win-win outcomes. Efforts should be made to advance complementarity of strengths and common development of all countries in cyberspace so that more countries and people will ride on the fast train of the information age and share the benefits of Internet development.”
    • Cultivation of good order: “Like in the real world, freedom and order are both necessary in cyberspace. Freedom is what order is meant for and order is the guarantee for freedom. We should respect Internet users’ rights to exchange their ideas and express their minds, and we should also build a good order in cyberspace in accordance with law as it will help protect the legitimate rights and interests of all Internet users. Cyberspace is not a place beyond the rule of law. Cyberspace is virtual, but players in cyberspace are real. Everyone should abide by the law, with the rights and obligations of parties concerned clearly defined. Cyberspace must be governed, operated and used in accordance with law, so that the Internet can enjoy sound development under the rule of law. In the meantime, greater efforts should be made to strengthen ethical standards and civilized behaviors in cyberspace. We should give full play to the role of moral teachings in guiding the use of the Internet to make sure that the fine accomplishments of human civilizations will nourish the growth of cyberspace and help rehabilitate cyber ecology.”

    The four principles are not of equal importance. “Respecting cyber sovereignty” is the cornerstone of China’s vision for global cyber governance. China introduced and argued for the concept in its first internet white paper in 2010.15 But cyber sovereignty is not itself controversial. The idea that a government can regulate things within its borders is nearly synonymous with what it means to be a state. Issues arise with the prescriptive and hypocritical nature of the three following principles. 

    Under the “maintenance of peace and security principle,” China—a country with a famously effective and persistent ability to steal and commercialize foreign intellectual property16—suggests that all countries should abhor cyberattacks that lead to IP theft or government spying. Xi’s statement establishes equivalency between two things held separate in Western capitalist societies: intellectual property rights and trade secrets versus espionage against other governments. China holds what the US prizes but cannot defend well, IP and trade secrets, next to what China prizes but cannot guarantee for itself, the confidentiality of state secrets. The juxtaposition was an implicit bargain and one that neither would accept. In considering China’s proposition, the US continuation of traditional intelligence-collection activities contravenes China’s “peace and security principle,” providing the Ministry of Foreign Affairs spokesperson a reason to blame the United States when China is caught conducting economic espionage. 

    “Promotion of openness and cooperation” is mundane enough to garner support until users read the fine print or ask China to act on this principle. Asking other countries to throw off a zero-sum mentality and view the internet as a place for mutual benefit, Xi unironically asks states to pursue win-win benefits. This argument blatantly ignores the clear differences in market access between foreign tech companies in the PRC and Chinese firms’ access to foreign markets. Of course, if a country allows a foreign firm into its market, by Xi’s argumentation, the country must have decided it was a win-win decision. It’s unclear if refusing market access to a Chinese company would be acceptable or if that would fall under zero-sum mentality and contravene the value of openness. Again, China’s rhetoric misrepresents the conditions it would likely accept. 

    Cultivating “good order” in cyberspace, at least as Xi conceptualizes it, is impossible for democratic countries with freedom of speech. Entreaties that “order” be the guarantor of freedom of speech won’t pass muster in many nations, at least not the “order” sought by China’s policymakers. A report from the Institute for a Community with a Shared Future shines light onto what type of content might upset the “good order.” In its Governing the Phenomenon of Online Violence Report, analysts identify political scandals like a deadly 2018 bus crush in Chongqing or the 2020 “Wuhan virus leak rumor” as examples of online violence, alongside a case where a woman was bullied to suicide.17 Viewing political issues as “online violence” associated with good order is not just a one-off report. Staff at the Institute argue that rumors spread at the start of the pandemic in 2020 “highlight the necessity and urgency of building a community with a shared future in cyberspace.”18 For China, “online violence” is a euphemism for speech deemed politically sensitive by the government. If “making [the internet] better, cleaner and safer is the common responsibility of the international community,”19 as Xi argues, how will China treat countries it sees as abrogating its responsibility to combat such online violence? Will countries whose internet service providers rely on Chinese cloud companies or network devices be able to decide that criticizing China is acceptable within its own borders?

    China’s five proposals 

    The five proposals used to construct China’s Community with a Shared Future in Cyberspace carry less weight and importance than its four principles. The proposals are not apparently attached to specific funding or policy initiatives, and did not receive attention from China’s foreign ministry. They are, at most, way stations along the path to a shared future. The proposals are:

    1. Speeding up the construction of a global internet infrastructure and promoting interconnectivity.
    2. Building an online platform for cultural exchange and mutual learning.
    3. Promoting the innovative development of the cyber economy and common prosperity. 
    4. Maintaining cyber security and promoting orderly development. 
    5. Building an internet governance system and promoting equity and justice.

    Implications and the future of the global internet

    China’s argument for its view of global internet governance and the role of the state rests on solid ground. The PRC frequently points to the General Data Protection Regulation (GDPR) in the European Union as a leading example of the state’s role in internet regulation. The GDPR allows EU citizens to have their data deleted, forces businesses to disclose data breaches, and requires websites to give users a choice to accept or reject cookies (and what kind) each time they visit a new website. China points to concerns in the United States over foreign interference on social media as evidence of US buy-in on China’s view of cyber sovereignty. Even banal regulations like the US “know your customer” rule—which requires some businesses to collect identifying personal information about users, usually for tax purposes—fit into Beijing’s bucket of evidence. But the alleged convergence between the views of China and democratic nations stops there.

    Divergent values between liberal democracies and the coterie of PRC-aligned autocracies belie our very different interpretations of the meaning of cyber sovereignty. A paper published in the CCP’s top theoretical journal mentions both the need to regulate internet content and “promote positive energy,” a Paltrowesque euphemism for party-boosting propaganda, alongside 

    endorsements of the cyber sovereignty principle.20 The article extrapolates on what Xi made clear in his 2015 speech. For the CCP, censorship and sovereignty are inextricably linked. 

    These differences are not new. Experts dedicate significant coverage to ongoing policy arguments at the UN, where China repeatedly pushes to classify the dissemination of unwanted content—read politically intolerable—as a crime.21 As recently as January 2023, China offered an amendment to a UN treaty attempting to make sharing false information online illegal.22 A knock-on effect of media coverage related to disinformation campaigns from China and Russia—despite their poor performance23—means policymakers, pundits, and journalists make China’s point that narratives promoted by other nations is an issue to be solved. What counts as disinformation can be meted out on a country-by-country basis. The tension between the desire to protect democracy from foreign influence and the liberal value of promoting free speech and truth in authoritarian systems is palpable. 
    The United States has fueled the CCP’s concern with its public statements. China’s internet regulators criticized the United States’ Declaration for the Future of the Internet.24 The CCP, which is paranoid about foreign attempts to support “color revolutions” or foment regime change, is rightfully concerned. The United States’ second stated principle for digital technologies is to promote “democracy,” a value antithetical to continuing CCP rule over the PRC. The universal value democratic governments subscribe to—the consent of the governed—drives the US position on the benefits of connectedness. That same value scares authoritarian governments. 

    Operationalizing our shared future

    Jointly Build a Community with a Shared Future in Cyberspace alludes to the pathways the CCP will use to act on its vision. The document includes detailed statistics about the rollout of IPv6—a protocol for issuing internet-connected device addresses that could ease surveillance—use of the Beidou Satellite Navigation system within China and elsewhere, the domestic and international use of 5G, development of transformational technologies like artificial intelligence and Internet of Things devices, and the increasingly widespread use of internet-connected industrial devices.25 The value of different markets, like that of e-commerce or trade enabled by any of the preceding systems, are repeated many times over the course of the document. It’s clear that policymakers see the fabric of the internet—its devices, markets, and economic value—as expanding. Owning the avenues of expansion, then, is key to spreading the CCP’s values as much as it is about making money.  

    Authoritarian and nondemocratic developing countries provide a bountiful market for China’s goods. Plenty of developing nations and authoritarian governments want to tighten control over the internet in their countries. Recent research demonstrates an increasing number of incidents when governments shut off the internet in their countries—a good proxy for their interest in censorship.26 These governments need the technology and tools to finely tune their control over the internet. Owing to the political environment inside the PRC, Chinese tech firms already build their products to facilitate censorship and surveillance.27 Some countries are having luck rolling out these services. The Australian Strategic Policy Institute found that “with technical support from China, local governments in East Africa are escalating censorship on social media platforms and the internet.”28 These findings are mirrored by reporting from Censys, a network data company, that found, among other things, a significant footprint for PRC-made network equipment in four African countries.29 In fact, there is no public list of countries that acknowledge supporting the Community with a Shared Future in Cyberspace approach, but there are good indicators for which nations are mostly likely to participate. 

    A 2017 policy paper entitled International Strategy of Cooperation on Cyberspace indicated that China would carry out “cybersecurity cooperation” with “the Conference on Interaction and Confidence Building Measures in Asia (CICA), Forum on China-Africa Cooperation (FOCAC), China-Arab States Cooperation Forum, Forum of China and the Community of Latin American and Caribbean States and Asian-African Legal Consultative Organization.”30 But an international strategy document stating the intent to cooperate with most of the Global South is not the same as actually doing so. The 2017 strategy document is, at most, aspirational.

    Instead, bilateral agreements and technical agreements between government agencies to work together on cybersecurity or internet governance are better indicators of who is part of China’s “community with a shared future.” For example, Cuba and the PRC signed a comprehensive partnership agreement on cybersecurity in early 2023, though the content of the deal remains secret.31 China has made few public announcements about other such agreements. In their place, the China National Computer Emergency Response Center (CNCERT) has “established partnerships with 274 CERTs in 81 countries and territories and signed cybersecurity cooperation memorandums with 33 of them.”32 But even these countries are not publicly identified.33 A few nations or groups are regularly mentioned around the claims of CNCERT’s international partnerships, however. Thailand, Cambodia, Laos, Malaysia, the Association of Southeast Asian Nations, the United Arab Emirates, Saudi Arabia, Brazil, South Africa, Benin, and the Shanghai Cooperation Organization are frequently mentioned. The paper on jointly building a community also mentions the establishment of the China-ASEAN Cybersecurity Exchange and Training Center, the utility of which may be questioned given China’s track record of state-backed hacking campaigns against its members.34

    Along with the identity of their signatories, the contents of these agreements and their benefits also remain private. Without access to any of these agreements, one can only speculate about their benefits. There are also no countries especially competent at cyber operations or cybersecurity mentioned in the list above. The result may be that CNCERT and its certified private-sector partners receive “first dibs” when government agencies or other entities in these countries need incident response services; receiving favorable terms or financing from the Export-Import Bank of China to facilitate the purchase of PRC tech also aligns with other observed behavior.35

    Besides favorable terms of trade for PRC tech and cybersecurity firms, some of the CNCERT international partners may also be subject to intelligence-sharing agreements. CNCERT operates a software vulnerability database called China National Information Security Vulnerability Sharing Platform, which accepts submissions from the public and partners with at least three other vulnerability databases.36 CNCERT’s international partnerships could add another valuable pipeline of software vulnerability information into China’s ecosystem. Moreover, under a 2021 regulation, Chinese firms conducting incident response for clients can voluntarily disclose those incidents to the Ministry of Industry and Information Technology’s “Cybersecurity Threat and Vulnerability Information Sharing Platform,” which has a separate system for collecting information about breaches.37 The voluntary disclosure of incidents and mandatory disclosure of vulnerabilities observed in overseas clients of Chinese cybersecurity firms would significantly increase the PRC’s visibility into global cyber operations by other nations or transnational criminal groups. 

    Offensive capabilities, not just global cybersecurity, might be on CCP policymakers’ minds, too, when other countries agree to partner with China. Cybersecurity firms frequently allow their own country’s offensive teams to work unimpeded on their customers’ networks: with each new client China’s cybersecurity companies add to their rosters, China’s state-backed hackers may well gain another network where they can work without worrying about defenders.38 In this vein, Chen Yixin, the head of the Ministry of State Security, attended a July 2023 meeting of the Cyberspace Administration of China that underlined the importance of the Community with a Shared Future in Cyberspace.39 In September 2023, Chen published commentary in the magazine of the Cyberspace Administration of China arguing that supporting the Shared Future in Cyberspace was important work.40 Researchers from one cybersecurity firm found that the PRC has been conducting persistent, offensive operations against many African and Latin American states, even launching a special cross-industry working group to monitor PRC activities in the Global South.41 Chinese cybersecurity companies operating in those markets have not drawn similar attention to those operations. 

    But China’s network devices and cybersecurity companies don’t just facilitate surveillance, collect data for better defense, or offer a potential offensive advantage, they can also be used to shore up relationships between governments and provide Beijing an avenue for influence. The Wall Street Journal exposed how Huawei technicians were involved in helping Ugandan security services track political opponents of the government.42 China’s government and its companies support such operations elsewhere, too. One source alleged that PRC intelligence officers were involved in cybersecurity programs of the UAE government, including offensive hacking and collection for the security services.43 The closeness of the relationship is apparent in other ways, too. The UAE is reportedly allowing China’s military to build a naval facility, jeopardizing the longevity of US facilities in the area, and tarnishing the UAE’s relationship with the United States.44

    Providing other nondemocratic governments with offensive services and capabilities allows China to form close relationships with other regimes whose primary goal, like the CCP, is to maintain the current government’s hold on power. In illiberal democracies, such cooperation helps Beijing expand its influence and provides backsliding governments capabilities they would not otherwise have. 

    China is plainly invested in the success of many other nondemocratic governments. Around the world, state-owned enterprises and private companies have inked deals in extractive industries that total billions of dollars. Many of these deals, say for mining copper or rare earth elements, provide critical inputs to China’s manufacturing capacity—they are the lifeblood of many industries, from batteries to semiconductors.45 In countries without strong rule of law, continued access to mining rights may depend on the governments that signed and approved those operations staying in power. China is already suffering from such abrogation of agreements in Mexico after the country’s president renationalized the country’s lithium deposits.46 Countries where China has significant interests, like the Democratic Republic of the Congo, are also considering nationalizing such assets.47 Close relationships with political elites, bolstered by agreements that provide political security, make it more difficult for those elites to renege on their contracts—or lose power to someone else who might. 

    China cannot currently project military power around the world to enforce contracts or compel other governments. In lieu of a blue-water navy, China offers what essentially amounts to political security services by censoring internet content, monitoring dissidents, and hacking political opponents—and a way to align the interests of other authoritarian governments with its own. If a political leader feels that China is a guarantor of their own rule, they are much more likely to side with Beijing on matters big and small. A recent series of events in the Solomon Islands provide a portrait of what this can look like. 

    Case studies in China’s “shared future”

    The saga surrounding the Solomon Islands provides a good example of China’s model for internet governance and the reasons for its adoption. 

    Over the course of 2022, the international community watched as the Solomon Islands vacillated on its course and in statements, and prevaricated about secret commitments to build a naval base for China. After a draft agreement for the Solomon Islands to host the People’s Liberation Army Navy (PLAN), the navy of the CCP’s military, was leaked to the press in March 2022, representatives of the Solomon Islands stated the agreement would not allow PLA military bases.48 Senior delegations from both Australia and the United States rushed to meet with representatives of the Pacific Island nation.49 Even opposition leaders in the Solomon Islands—who were surprised by the leaked documents—agreed that claims of PLA military bases should not be taken at face value.50 The back and forth by the Solomon Islands’ political parties worried China. In May 2022, a Chinese hacking team breached the Solomon Islands’ government systems, likely to assess the future of their agreement in the face of the island nation’s denials.51

    But the denials only bought Solomon Islands Prime Minister Manasseh Sogavare more time. In August, the ruling party introduced a bill to delay elections from May 2023 to December of that year.52 Shortly thereafter, the Solomon Islands announced a deal to purchase 161 Huawei telecoms towers financed by the Export-Import Bank of China.53 (The deal came just four years after Australia had successfully prevented the Solomon Islands from partnering with Huawei to lay undersea cables to provide internet access to the island nation.)54 Months later, the foreign press reported in October 2022 that the Solomon Islands had sent police to China for training.55 Local contacts in the security services may be useful for the PRC. A provision of the drafted deal leaked in March 2022 allows PLA service members to travel off base in the event of “social unrest.”56 Such contacts could facilitate interventions in a political crisis on behalf of PM Sogavare or his successor. In the summer of 2023, China and the Solomon Islands signed an agreement expanding cooperation on cybersecurity and policing.57

    To recap, in a single year the Solomon Islands agreed to host a PLAN base, delayed an election for Beijing’s friend, sent security services to train in the PRC, and rolled out PRC-made telecommunications equipment that can facilitate surveillance of political opponents. In the international system the CCP seeks, one that makes normal the censorship of political opponents and makes it a crime to disseminate information critical of authoritarian regimes, the sale of censorship as a service directly translates into the power to influence domestic politics in other nations. If there was a case study to sell China’s version of internet governance to nascent authoritarian regimes around the world, it would be the Solomon Islands.


    In the international system the CCP seeks, one that makes normal the censorship of political opponents and makes it a crime to disseminate information critical of authoritarian regimes, the sale of censorship as a service directly translates into the power to influence domestic politics in other nations.


    For countries with established authoritarian regimes, buying into China’s vision of internet governance and control is less about delaying elections and buying Huawei cell towers, and more about the transfer of expertise and knowledge of how to repress more effectively. Already convinced on the merits of China’s vision, these governments lack the expertise and technical capabilities to implement their shared vision of control over the internet. 

    Despite its capable but sometimes blunder-prone intelligence services, Russia was recently found to be soliciting technical expertise and training from China on how to better control its domestic internet content.58Documents obtained by Radio Free Europe/Radio Liberty detailed how Russian government officials met with teams from the Cyberspace Administration of China in 2017 and 2019 to discuss how to crack down on virtual private networks, messaging apps, and online content. Russian officials even went so far as to request that a Russian team visit China to better understand how China’s Great Firewall works and how to “form a positive image” of Russia on the domestic and foreign internet.59 The leaked documents align with what the PRC’s policy document details already. 

    Since 2016, they have co-hosted five China-Russia Internet Media Forum[s] to strengthen new media exchanges and cooperation between the two sides. Through the Sino-Russian Information Security Consultation Mechanism, they have constantly enhanced their coordination and cooperation on information security.

    The two countries formalized the agreement that served as the basis for their cooperation on the sidelines of the World Internet Conference in 2019.60 They could not have picked a better venue to signify what China’s Community with a Shared Future in Cyberspace policy would mean for the world. 

    The Solomon Islands and Russia neatly capture the spectrum of countries that might be most interested in China’s vision for the global internet. At each step along the spectrum, China has technical capabilities, software, services, and training it can offer to regimes from Borneo to Benin. 

    In conclusion, the chart below provides a visualization of the spectrum of countries that could be the most interested in implementing China’s Community for a Shared Future in Cyberspace.61

    Figure 1: PRC tech influence vs. democracy index score

    Sources: Data from “China Index 2022: Measuring PRC Influence Around the Globe,” Doublethink Lab and China In The World Lab, https://china-index.io/; and “The World’s Most, and Least, Democratic Countries in 2022,” Economist, February 1, 2023, https://www.economist.com/graphic-detail/2023/02/01/the-worlds-most-and-least-democratic-countries-in-2022

    By combining data from The Economist Democracy Index (a proxy for a country’s adherence to democratic norms and institutions) and Doublethink Lab’s China Index for PRC Technology Influence (limited to eighty countries and a proxy for a country’s exposure to, and integration of, PRC technology in its networks and services), this chart represents countries with low scores on democracy and significant PRC technology influence in the bottom right. Based on this chart, Pakistan in the most likely to support the Shared Future concept. Indeed, Pakistan has its own research center on the “Community for a Shared Future” concept.62The research center is hosted by the Communications University of China, which works closely with the CCP’s International Liaison Department responsible for keeping good relationships with foreign political parties. 

    Internet conference goes prime time

    The 2022 Wuzhen World Internet Conference got an upgrade and name change: the annual conference became an organization based in Beijing and the summit continues as its event, now called the World Internet Conference (WIC). The content from all previous Wuzhen conferences plasters the new organization’s website.63

    An odd collection of six entities founded the new WIC organization: Groupe Speciale Mobile Association (GSMA), a mobile device industry organization; China Internet Network Information Center (CNNIC), which is responsible for China’s top-level .cn domain and IPv6 rollout, among others functions; ChinaCERT, mentioned above; Alibaba; Tencent; and Zhejiang Labs.64 Another report by the author connects the last organization, Zhejiang Labs, to research on AI for cybersecurity and some oversight by members of the PLA defense establishment.65

    Though the Wuzhen iteration of the conference also included components of competition for technical innovation and research, the new collection of organizations overseeing WIC suggests it will focus more on promoting the fabric of the internet—hardware, software, and services—made by PRC firms. China’s largest tech companies including Alibaba and Tencent stand to benefit from China’s vision for global internet governance if the PRC can convince other countries to support its aims (and choose PRC firms to host their data in the process). Any policy changes tied to the elevation of the conference will become apparent over the coming years. For now, WIC will maintain the mission and goals of the Wuzhen conference.

    Conclusion

    China’s vision for the internet is really a vision for global norms around political speech, political oppression, and the proliferation of tools and capabilities that facilitate surveillance. Publications written by current and former PRC government officials on China’s “Shared Future for Humanity in Cyberspace” argue that the role of the state has been ignored until now, that each state can determine what is allowed on its internet—through the idea of cyber sovereignty, and that the political interests of the state are the core value that drives decision-making. Dressed up in language about the future of humanity, China’s vision for the internet is one safe for authoritarians to extract value from the interconnectedness of today’s economy while limiting risk to their regime’s stability. 

    China is likely to pursue agreements on cybersecurity and internet content control in regimes where it stands to lose most if the government changed hands. China’s grip on the critical minerals market is only as strong as its partners’ grip on power. In many authoritarian, resource-rich countries, a change of government could mean the renegotiation of contracts for access to natural resources or their outright nationalization—jeopardizing China’s access to important industrial inputs. Although internet censorship and domestic surveillance capabilities do not guarantee an authoritarian government will stay in power, it does improve their odds. China lacks a globally capable navy to project power and enforce contracts negotiated with former governments, so keeping current signatories in power is China’s best bet. 

    China will not have to work hard to promote its vision for internet governance in much of the world. Instead of China advocating for a new system that countries agree to use, then implement, the causality is reversed. Authoritarian regimes that seek economic benefits of widespread internet access are more apt to deploy PRC-made systems that facilitate mass surveillance, thus reducing the risks posed by increased connectivity. China’s tech companies are well-positioned to sell these goods, as their domestic market has forced them to perfect the capabilities of oppression.66 The example of Russia’s cooperation and learning from China demonstrates what the demand signal from other countries might look like. Elsewhere, secret agreements between national CERTs could facilitate access that allows for greater intelligence collection and visibility. Many Arabian Gulf countries already deploy PRC-made telecoms kit and hire PRC cybersecurity firms to do sensitive work. As the world’s autocrats roll out China’s technology, their countries will be added to the brochures of firms advertising internet connectivity, surveillance, and censorship services to their peers. Each nation buying into China’s Community for a Shared Future may well be a case study on the successful use of internet connectivity without increasing political risks: a world with fewer Arab Springs or “color revolutions.” 

    About the author

    Dakota Cary is a nonresident fellow at the Atlantic Council’s Global China Hub and a consultant at SentinelOne. He focuses on China’s efforts to develop its hacking capabilities.

    The author extends special thanks to Nadège Rolland, Tuvia Gering, Tom Hegel, Kenton Thibaut, and Kitsch Liao for their edits and contributions. 

    1    “China’s Internet White Paper,” China.org.cn, last modified June 8, 2010, accessed January 24, 2022, https://web.archive.org/web/20220124005101/http:/www.china.org.cn/government/whitepaper/2010-06/08/content_20207978.htm.
    2    Dakota Cary and Kristin Del Rosso, “Sleight of Hand: How China Weaponizes Software Vulnerability,” Atlantic Council, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/.
    3    I assume that a process for counterintelligence and operational deconfliction exists within the PRC security services, particularly for the more than one hundred companies that support the civilian intelligence service. Other mature countries have such processes and I graciously extend that competency to China.
    4    Liu Zheng, “Foreign Experts Keen on Interconnected China Market,” China Daily, 2014, https://www.wuzhenwic.org/2014-11/20/c_548230.htm.
    5    Catherine Shu, “China Tried to Get World Internet Conference Attendees to Ratify This Ridiculous Draft Declaration,” TechCrunch, 2014, https://techcrunch.com/2014/11/20/worldinternetconference-declaration/.
    6    Xi Jinping, “Remarks by H.E. Xi Jinping President of the People’s Republic of China at the Opening Ceremony of the Second World Internet Conference,” Ministry of Foreign Affairs of the People’s Republic of China, December 24, 2015, https://www.fmprc.gov.cn/eng/wjdt_665385/zyjh_665391/201512/t20151224_678467.html.
    7    State Council Information Office of the People’s Republic of China, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace,” November 7, 2022, http://english.scio.gov.cn/whitepapers/2022-11/07/content_78505694.htm. At the time, Xi was building on the nascent “shared future for humanity” concept introduced at the Eighteenth Party Congress in 2012; see Xinhua News Agency, “A Community of Shared Future for All Humankind,” Commentary, March 20, 2017, http://www.xinhuanet.com/english/2017-03/20/c_136142216.htm. However, state media has since claimed that the “shared future” concept was launched during a March 2013 event that Xi participated in while visiting Moscow; see Central Cyberspace Affairs Commission of the People’s Republic of China, “共行天下大道 共创美好未来——写在习近平主席提出构建人类命运共同体理念十周年之际,” PRC, March 24, 2023, http://www.cac.gov.cn/2023-03/24/c_1681297761772755.htm. The party rolled out the concept as part of its foreign policy and even added its language to the constitution in 2018; see N. Rolland [@RollandNadege], “My latest for @ChinaBriefJT on China’s ‘community with a shared future for humanity,’ which is BTW now enshrined in PRC Constitution,” Twitter (now X), February 26, 2018, https://twitter.com/RollandNadege/status/968152657226555392, as also seen in N. Rolland, ed., An Emerging China-Centric Order: China’s Vision for a New World Order in Practice, National Bureau of Asian Research, 2020, https://www.nbr.org/wp-content/uploads/pdfs/publications/sr87_aug2020.pdf.
    8    The PRC has even republished the 2015 document with updated statistics every few years, most recently in 2022; see State Council Information Office, “Full Text: Jointly Build a Community with a Shared Future in Cyberspace.”
    9    US Director of National Intelligence (DNI), “Digital Repression Growing Globally, Threatening Freedoms,” [PDF file],  ODNI, April 24, 2023, https://www.dni.gov/files/ODNI/documents/assessments/NIC-Declassified-Assessment-Digital-Repression-Growing-April2023.pdf.
    10    E. Kania et al., “China’s Strategic Thinking on Building Power in Cyberspace,” New America, September 25, 2017, https://www.newamerica.org/cybersecurity-initiative/blog/chinas-strategic-thinking-building-power-cyberspace/.
    11    National Computer Virus Emergency Response Center, “‘Empire of Hacking’: The U.S. Central Intelligence Agency—Part I,” [PDF file], May 4, 2023, https://web.archive.org/web/20230530221200/http:/gb.china-embassy.gov.cn/eng/PressandMedia/Spokepersons/202305/P020230508664391507653.pdf.
    12    Occasionally, translations refer to this as “a Community with a Shared Destiny [for Mankind]” or “Shared Future for Humanity in Cyberspace.” See State Council Information Office of the People’s Republic of China, “Full text: Jointly Build a Community with a Shared Future in Cyberspace.”
    13    Thanks to Nadege Rolland for her keen insight. 
    14    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.” 
    15    “China’s Internet White Paper,” China.org.cn. Thanks to Tuvia Gering for flagging this.
    16    W. C. Hannas, J. Mulvenon, and A. B. Puglisi, Chinese Industrial Espionage: Technology Acquisition and Military Modernisation (Abingdon, United Kingdom: Routledge, 2013), https://doi.org/10.4324/9780203630174.
    17    Institute for a Community with Shared Future, “《网络暴力现象治理报告》 [Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://web.archive.org/web/20221205001148/https:/icsf.cuc.edu.cn/2022/0701/c6043a194580/page.htm; andInstitute for a Community with Shared Future, “Full Text《网络暴力现象治理报告》[Governance Report on the Phenomenon of Internet Violence],” Communication University of China, July 1, 2022, https://archive.ph/B741D.
    18    Institute for a Community with Shared Future, “Understanding the Global Cyberspace Development and Governance Trends to Promote the Construction of a Cyberspace Community with a Shared Future,” Communication University of China, September 9, 2020, www.archive.ph/7XQyX.
    19    Xi, “Remarks by H.E. Xi Jinping President of the People’s Republic of China.”
    20    R. Creemers, P. Triolo, and G. Webster, “Translation: China’s New Top Internet Official Lays Out Agenda for Party Control Online,” New America, September 24, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinas-new-top-internet-official-lays-out-agenda-for-party-control-online/.
    21    M. Schmitt, “The Sixth United Nations GGE and International Law in Cyberspace,” Just Security (forum), June 10, 2021, https://www.justsecurity.org/76864/the-sixth-united-nations-gge-and-international-law-in-cyberspace/; and S. Sabin, “The UN Doesn’t Know How to Define Cybercrime,” Axios Codebook (newsletter), January 10, 2023, https://www.axios.com/newsletters/axios-codebook-e4388c1d-d782-4743-b96f-c228cdc7baa1.html.
    22    A. Martin, “China Proposes UN Treaty Criminalizes ‘Dissemination of False Information,’ ” Record, January 17, 2023, https://web.archive.org/web/20230118135457/https:/therecord.media/china-proposes-un-treaty-criminalizing-dissemination-of-false-information/.
    23    R. Serabian and L. Foster, “Pro-PRC Influence Campaign Expands to Dozens of Social Media Platforms, Websites, and Forums in at Least Seven Languages, Attempted to Physically Mobilize Protesters in the U.S.,” Mandiant, September 7, 2021, https://www.mandiant.com/resources/blog/pro-prc-influence-campaign-expands-dozens-social-media-platforms-websites-and-forums; and G. Eady et al., “Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior, Nature Communications 14, no. 62 (2023), https://www.nature.com/articles/s41467-022-35576-9#MOESM1.
    24    State Council of Information Office, PRC, “LIVE: Press Conference on White Paper on Jointly Building Community with Shared Future in Cyberspace,” New China TV, streamed live November 6, 2022, YouTube video, https://www.youtube.com/watch?v=hBYbjnSeLX0.
    25    China Daily, “Jointly Build a Community with a Shared Future in Cyberspace,” November 8, 2022, https://archive.ph/ch3LP+.
    26    Access Now, “Internet Shutdowns in 2022,” 2023, https://www.accessnow.org/internet-shutdowns-2022/.
    27    K. Drinhausen and J. Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy,” Mercator Institute for China Studies (MERICS), June 15, 2021, https://merics.org/en/ccp-2021-smart-governance-cyber-sovereignty-and-tech-supremacy.
    28    N. Attrill and A. Fritz, “China’s Cyber Vision: How the Cyberspace Administration of China Is Building a New Consensus on Global Internet Governance,” Australian Strategic Policy Institute, November 24, 2021, https://www.aspi.org.au/report/chinas-cyber-vision-how-cyberspace-administration-china-building-new-consensus-global.
    29    S. Hoffman, “Potential Chinese influence on African IT infrastructure,” Censys, March 8, 2023,   https://censys.com/potential-chinese-influence-on-african-it-infrastructure/.
    30    Xinhua, “Full Text: International Strategy of Cooperation on Cyberspace,” March 1, 2017, https://perma.cc/GDY6-6ZF8.
    31    Prensa Latina, “Cuba and China Sign Agreement on Cybersecurity,” 2023, April 3, 2023,  https://www.plenglish.com/news/2023/04/03/cuba-and-china-sign-agreement-on-cybersecurity/.
    32    China Daily, “Jointly Build.” CNCERT is a government-organized nongovernmental organization, not a direct government agency. It reports incidents and software vulnerabilities to PRC government agencies, including the 867-917 National Security Platform, and a couple of Ministry of Public Security Bureaus. See About Us (archive.vn).
    33    When asked for records of these international partners, CNCERT directed the author back to the home page of the organization’s website.
    35    Asian Development Bank, “Information on the Export-Import Bank of China,” n.d., https://www.adb.org/sites/default/files/linked-documents/46058-002-sd-04.pdf.
    36    D. Cary and K. Del Rosso, Sleight of Hand: How China Weaponizes Software Vulnerabilities, Atlantic Council, September 6, 2023,  https://www.atlanticcouncil.org/in-depth-research-reports/report/sleight-of-hand-how-china-weaponizes-software-vulnerability/ 
    37    Cary and Del Rosso, Sleight of Hand.
    38    I assume that a process for counterintelligence and operational deconfliction exists with the PRC security services. Other mature countries have such processes and I graciously extend that competency to China.
    39    Xinhua, “习近平对网络安全和信息化工作作出重要指示,” July 15, 2023, https://archive.ph/GkqnS.
    40    Chen Yixin, Secretary of the Party Committee and Minister of the Ministry of National Security, “Strengthening National Security Governance in the Digital Era,” China Internet Information Journal, September 26, 2023,  (中国网信). 国家安全部党委书记、部长陈一新:加强数字时代的国家安全治理–理论-中国共产党新闻网 (archive.ph).
    41    M. Hill, “China’s Offensive Cyber Operations Support Soft Power Agenda in Africa,” CSO Online, September 21, 2023, https://www.csoonline.com/article/652934/chinas-offensive-cyber-operations-support-soft-power-agenda-in-africa.html; and T. Hegel, “Cyber Soft Power | China’s Continental Takeover,” SentinelOne, September 21, 2023, https://www.sentinelone.com/labs/cyber-soft-power-chinas-continental-takeover/.
    42    J. Parkinson, N. Bariyo, and J. Chin, “Huawei Technicians Helped African Governments Spy on Political Opponents, Wall Street Journal, August 15, 2019, https://archive.ph/Xtwl1.
    43    Interview conducted in confidentiality; the name of the interviewee is withheld by mutual agreement.
    44    J. Hudson, E. Nakashima, and L. Sly, “Buildup Resumed at Suspected Chinese Military Site in UAE, Leak Says,”  Washington Post, April 26, 2023, https://www.washingtonpost.com/national-security/2023/04/26/chinese-military-base-uae/.
    45    Congressional Research Service, “Rare Earth Elements: The Global Supply Chain,” December 16, 2013,   https://crsreports.congress.gov/product/pdf/R/R41347/20; M. Humphries, “China’s Mineral Industry and U.S. Access to Strategic and Critical Minerals: Issues for Congress,” Congressional Research Service, March 20, 2015,  https://sgp.fas.org/crs/row/R43864.pdf; and the White House, “Building Resilient Supply Chains, Revitalizing American Manufacturing, and Fostering Broad-based Growth: 100-Day Reviews Under Executive Order 14017,”  June 2021, https://www.whitehouse.gov/wp-content/uploads/2021/06/100-day-supply-chain-review-report.pdf.
    47    “The Green Revolution Will Stall without Latin America’s Lithium,” Economist, May 2, 2023, https://www.economist.com/the-americas/2023/05/02/the-green-revolution-will-stall-without-latin-americas-lithium.
    48    N. Fildes and K. Hille, “Beijing Closes in on Security Pact That Will Allow Chinese Troops in Solomon Islands,”  Financial Times, March 24, 2022, https://archive.ph/X5a4h; and Associated Press, “Solomon Islands Says China Security Deal Won’t Include Military Base,” via National Public Radio, April 1, 2022, https://www.npr.org/2022/04/01/1090184438/solomon-islands-says-china-deal-wont-include-military-base
    49    N. Fildes, “Australian Minister Flies to Solomon Islands for Urgent Talks on China Pact,” Financial Times, April 12, 2022, https://www.ft.com/content/9da02244-2a10-4f18-a5c5-e88b14a2530b; and K. Lyons and D. Wickham, “The Deal That Shocked the World: Inside the China-Solomons Security Pact,” Guardian, April 20, 2022, https://www.theguardian.com/world/2022/apr/20/the-deal-that-shocked-the-world-inside-the-china-solomons-security-pact.
    50    N. Fildes, “Australian PM Welcomes Solomon Islands Denial of Chinese Base Reports,” Financial Times, July 14, 2022, https://www.ft.com/content/789340da-8c1a-4aff-8cf6-276c97c9f200.
    51    Microsoft, Microsoft Digital Defense Report 2022, 2022,  https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5bUvv.
    52    Reuters, “Bill to Delay Solomon Islands Election until December 2023 Prompts Concern,” in Guardian, August 9, 2022, https://www.theguardian.com/world/2022/aug/09/bill-to-delay-solomon-islands-election-until-december-2023-prompts-concern; and D. Cave, “Solomon Islands’ Leader, a Friend of China, Gets an Election Delayed,” New York Times, September 8, 2022,  https://www.nytimes.com/2022/09/08/world/asia/solomon-islands-election-delay.html.
    53    N. Fildes, “China Funds Huawei’s Solomon Islands Deal in Sign of Deepening Ties,” Financial Times, August 19, 2022, https://archive.ph/R47T0.
    54    “Huawei Marine Signs Submarine Cable Contract in Solomon Islands,” Huawei, July 2017, https://web.archive.org/web/20190129114026/https:/www.huawei.com/en/press-events/news/2017/7/HuaweiMarine-Submarine-Cable-Solomon; and W. Qiu, “Coral Sea Cable System Overview,” Submarine Cable Networks, December 13, 2019, https://archive.ph/E049b.
    55    Kirsty Needham, “Solomon Island Police Officers Head to China for Training,” Reuters, October 12, 2022,  https://www.reuters.com/world/asia-pacific/solomon-island-police-officers-head-china-training-2022-10-12/.
    56    Fildes and Hillie, “Beijing Closes in on Security Pact.”
    57    Nikkei Asia, “Solomons Says China Will Assist in Cyber, Community Policing,” Nikkei, July 17, 2023, https://archive.ph/90diZ.
    58    D. Belovodyev, A. Soshnikov, and R. Standish, “Exclusive: Leaked Files Show China and Russia Sharing Tactics on Internet Control, Censorship,” Radio Free Europe/Radio Liberty, April 5, 2023, https://www.rferl.org/a/russia-china-internet-censorship-collaboration/32350263.html.
    59    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
    60    Belovodyev, Soshnikov, and Standish, “Exclusive: Leaked Files.”
    61    Thanks to Tuvia Gering for this idea.
    62    “〖转载〗人类命运共同体巴基斯坦研究中心主任哈立德·阿克拉姆接受光明日报采访:中巴关系“比山高、比蜜甜”名副其实,” Communication University of China, June 4, 2021, https://comsfuture.cuc.edu.cn/2021/1027/c7810a188141/pagem.htm.
    63    Office of the Central Cyberspace Affairs Commission, “我国网络空间国际交流合作领域发展成就与变革,” China Internet Information Journal, December 30, 2023, www.archive.vn/tCnEa; D. Bandurski, “Taking China’s Global Cyber Body to Task,” China Media Project, 2023, https://chinamediaproject.org/2022/07/14/taking-chinas-global-cyber-body-to-task/; and Xinhua, “世界互联网大会成立,” Gov.cn, July 12, 2022,  https://web.archive.org/web/20220714134027/http:/www.gov.cn/xinwen/2022-07/12/content_5700692.htm.
    64    World Internet Conference, “Introduction,” WIC website, August 31, 2022, www.archive.ph/Axmuc.
    65    Dakota Cary, “Downrange: A Survey of China’s Cyber Ranges,” Issue Brief, Center for Security and Emerging Technology, September 2022, https://doi.org/10.51593/2021CA013.
    66    Drinhausen and Lee, “CCP 2021: Smart Governance, Cyber Sovereignty, and Tech Supremacy.”

    The post Community watch: China’s vision for the future of the internet appeared first on Atlantic Council.

    ]]>
    The 5×5—Veteran perspectives on cyber workforce development https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-veteran-perspectives-on-cyber-workforce-development/ Wed, 29 Nov 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=707775 In honor of National Military Veterans and Families Month, a group of veterans discuss their transitions from the military to the cyber workforce and suggest ways to improve the process for others. 

    The post The 5×5—Veteran perspectives on cyber workforce development appeared first on Atlantic Council.

    ]]>
    This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

    On November 3, the Atlantic Council’s Cyber Statecraft Initiative hosted “Joining forces: Veteran perspectives on cyber and tech workforce development” to discuss transitioning veterans interested in careers in cybersecurity and cyber policy. The veteran community is diverse but the transition out of uniform to civilian work is a well-recognized and widely challenging shift, both for servicemembers and their families.  

    In July 2023, the Biden administration released the National Cyber Workforce and Education Strategy, aimed at developing and maintaining the United States’ cybersecurity advantage through a skilled workforce. The Strategy highlights the importance of attracting veterans to careers in cybersecurity, given that the community is comprised of “diverse, and technologically skilled … people who have served the country and are committed to mission success.” Enhancing career pathways for servicemembers and the veteran community to join the cyber workforce can go a long way toward both meeting the urgent demand for cyber talent while providing job opportunities to those aspiring to meaningful careers beyond the military. 

    To continue these conversations, and in honor of National Military Veterans and Families Month, we brought together a group of veterans to discuss their own transitions from the military to the cyber workforce and suggest ways to improve the process for others. 

    #1 What are the barriers to entry for veterans seeking careers in cybersecurity? What is one way for hiring managers to overcome or mitigate them? 

    Nicholas Andersen, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; chief operating officer, Invictus International Consulting; former US Marine Corps

    “A typical barrier for veterans seeking a career in this field is that hiring managers may not be familiar with the missions throughout the military cyber community; they may only focus on experiences that are like those of typical applicants. We sometimes see the same challenges with traditional pathways to technology jobs, where managers are more inclined to hire applicants with degrees. Hiring managers need to shift their thinking from traditional qualification to focusing on competencies. Hiring managers should be thinking about how they can find the most competent people to fill these critical roles within their companies and what skills do they need to have?” 

    Cait Conley, senior advisor to the director, Cybersecurity and Infrastructure Security Agency; former US Army

    “Leaving the military and starting a new career either in the private sector or in federal or state government can be an intimidating (and outright confusing) process, especially if the military has been the servicemember’s only career experience. Hiring managers and leaders can make a huge difference here. They can show incoming veteran teammates that joining the team not only matters but is a priority. They can put in extra time to explain the application process and help veterans seeking to join their team navigate any questions or challenges that may come up during the process.” 

    Steve Luczynski, senior manager, Accenture Federal Services; chairman of the board, Aerospace Village; former US Air Force

    “One challenge that is not necessarily specific to cybersecurity is translating military experience to corporate roles, especially when cybersecurity job descriptions often have a difficult time adequately capturing the nature of the work to be done. Hiring managers and human resources teams would benefit from ensuring that they have someone on their teams, or easily accessible, to read resumes and provide explanations for military roles. I know servicemembers invest significant effort in attempting to remove jargon from their resumes, but that additional perspective from someone who shares their background ensures valuable skills are not lost simply because of an imperfect resume.” 

    Brandon Pugh, director, cybersecurity and emerging threats, R Street Institute; US Army

    “The transition for servicemembers into most civilian career fields presents challenges, and cybersecurity is no exception. It is imperative for servicemembers and veterans to learn from and network with those who have successfully transitioned before them and with those who are working in the field already. Hiring managers play a key role and should strive to proactively create a culture internally of hiring and supporting veterans, including linking job seekers to veterans at their organizations. I can attest firsthand that many individuals in the cyber field are willing to be a resource, and veterans should seek mentors early on in their job search.” 

    Maggie Smith, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director, Cyber Project, Irregular Warfare Initiative; US Army

    “A major barrier for many veterans is higher education and credentialing. While the military provides funding opportunities to pursue a degree while serving, access to those opportunities is often difficult—operational tempo, field training requirements, and other time constraints often prevent or deter a servicemember from taking classes. Additionally, most civilian certification opportunities are based on work role, meaning servicemembers who are not in cyber-related career fields are unlikely to encounter opportunities to earn credentials unless they pursue them on their own time—which, as discussed above, is often unpredictable and in short supply! I have encountered lots of soldiers in non-cyber military occupational specialties with an affinity for computers, networking, and technology but their lack of job experience in a cyber field, lack of any credentials, and a high school diploma prevent them from pursuing cybersecurity as a career. Expanding apprenticeship programs and revisiting job application requirements, as not all roles require a four-year degree, could get more veterans into cybersecurity.” 

    #2 What kinds of military activities provide relevant experience for cybersecurity roles? 

    Andersen: “I have seen plenty of non-technical veterans who transitioned to technical fields after they left active duty, but those with experience in cybersecurity, information technology, and intelligence make up the majority of the people in these roles. Servicemembers should take full advantage of tuition assistance and local technology training classes while they are still in the military! This does not cost them anything but time and can lead to any servicemember transitioning into a technology role if that is his or her desire.” 

    Conley: “Today, technology is a fundamental factor in warfare. Regardless of branch, military experience provides critical thinking and risk management skills essential to succeeding in any cybersecurity role. From day one of basic training, servicemembers learn how to identify, assess, and manage risk—a foundational mental model for cybersecurity professionals. Servicemembers also learn how to lead teams under stressful conditions in operating environments where technical tools are as integral as the humans themselves. Servicemembers, sometimes without even realizing it, have experienced the operational integration of a myriad of technologies from communication platforms and electronic warfare sensors to satellite systems and machine learning data aggregation tools. Those perspectives can provide unique insights into understanding and mitigating risk in changing environments.” 

    Luczynski: “Cybersecurity is comprised of a wide array of specializations in which high-level, broad governance and policy skills are more valuable in some domains than the deeply technical skills required in other domains. Security teams combining these diverse skillsets share the common need to prepare and then practice implementing response plans, which occurs often in the military. The ability to train in this manner, especially where open and honest after-action sessions can occur, is highly relevant and valuable in most cybersecurity roles.” 

    Pugh: “Direct cyber experience while in uniform is very helpful when looking to transition to cyber roles in the civilian workforce, and servicemembers can have experiences that civilians do not from their service. It is important to realize, however, that individuals who have served in different fields are still valuable in cybersecurity, especially because servicemembers often are good at handling competing demands in high stress environments, are educated and/or have practical experience in professional settings, and often have security clearances already. These can all be beneficial in the cybersecurity field.” 

    Smith: “This is a tricky question because it changes from service to service and, I would argue, every servicemember has a cybersecurity role to play! My own experience in the Army started when I enlisted in the Signal Corps and, later, I commissioned as an intelligence officer before becoming a cyber officer when the Army created the branch in 2014. I consider those three branches the Army’s trifecta—each has work roles that will result in an attractive resume. However, within every branch, there are opportunities to gain skills that technology companies and cybersecurity firms want: leadership, multi-tasking, curiosity, and mental agility. I think the challenge that many veterans face is translating their experience for the private sector so that companies can see their potential impact.” 

    #3 What are some positive US government initiatives to assist veterans in entering the cyber workforce? Where is one place for the US government to improve on this front? 

    Andersen: “Number one on the list must be the Department of Defense’s (DOD) SkillBridge Program, which is unmatched for the opportunities it provides to get firsthand experience with companies and have the military safety net while servicemembers consider their next career move. The generic Transition Assistance Program will not prepare servicemembers to exit the military successfully. The government needs to focus more on transitioning back to civilian life as a simple acknowledgement that the military is still part of regular society. Educating oneself, building savings, and addressing health needs are not tasks to begin at the end of a period of service. Those are tasks that are critical to making certain that our servicemembers return to civilian life ready to lead within communities and contribute to a different mission.” 

    Conley: “While there is always room for improvement, I am incredibly proud of the work that the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Homeland Security have done to promote cybersecurity learning for the veteran community. One of the most impactful ways that CISA contributes to helping transitioning veterans is by operating and maintaining the National Initiative for Cybersecurity Careers and Studies (NICCS), an online training initiative and portal. NICCS offers over eight-hundred and fifty hours of course content on a variety of important cybersecurity topics such as cloud security, ethical hacking and surveillance, risk management, malware analysis, and more. While it does not come with any formal cybersecurity certification, it does provide critical knowledge and insight for veterans to feel confident about their foundational understanding of cybersecurity.” 

    Luczynski: “The DOD’s SkillBridge career transition program is an incredible partnership between industry and servicemembers of all ranks and experience levels as they transition out of the military. In short, it is an internship where servicemembers can experience working outside the military as they look for their next role. Continuing to improve awareness among servicemembers about these opportunities and increase the industry participants will ensure that this program is a continued success.” 

    Pugh: “Over time, the military has put more emphasis on assisting servicemembers with their transitions, including facilitating opportunities for them to work with industry and to pursue cyber certifications while in uniform. One challenge is that there are many programs and opportunities to assist with transition run by the government, nonprofit organizations, and industry. Knowledge of these programs and knowing where to start is not always straightforward, which is one area in which the government and military can do better.” 

    Smith: “The new-ish Skillbridge program provides transitioning servicemembers with a chance to gain civilian work experience—any field, not just cybersecurity—through industry training, apprenticeships, or internships over their last one-hundred eighty days of service. Frankly, I am looking forward to taking advantage of this program when I retire in a couple years; it is a chance to spread my wings and test out a company or try something completely new. Even with Skillbridge, I think the military can do more. The Army is experimenting with a pilot program to allow soldiers to submit their retirement paperwork two full years before their anticipated end of service. That allows soldiers more time to plan for their life in retirement, but it is difficult to provide the same timeline for soldiers leaving service before they hit twenty years. Focusing on mid-career transitions and providing junior enlisted members with additional resources, such as career counseling, college counseling and application assistance, Department of Veterans Affairs, and financial benefits courses, could lead to better outcomes for veterans.”

    More from the Cyber Statecraft Initiative:

    #4 What is the biggest mistake you made (or avoided) in preparing for your transition from the military? 

    Andersen: “The biggest mistake that I made was focusing on my own transition out of the Marine Corps as a series of boxes to be checked. Successfully entering the civilian workplace was highly dependent on networking and having a support system of people who have previously done it themselves. I almost ignored this critical piece for too long.” 

    Conley: “I know a lot of veterans out there who struggle to find the same level of fulfillment in their career after the military, which sometimes leads them to question leaving the military in the first place. For me, after two decades in uniform with numerous deployments and over a decade in the special operations community, this was an important consideration when I looked at my next career choice. I knew that being part of a team with a mission focused on service and defending the Homeland was a necessity for me. That clarity helped me identify the best path forward for this new stage in my career. That is why I chose CISA. I know that I am not the only one either—veterans make up 40 percent of CISA’s workforce. Every day of my professional career—in or out of uniform—I have been excited to go to work because I know what I am doing makes a difference.” 

    Luczynski: “I tended to focus on my role at the time and short-term goals. Shifting to a longer-term approach and investing the time to consider my options gave me the benefit of having more time to prepare. I developed a better understanding of where my experience could be best applied while fulfilling my family and personal goals.” 

    Pugh: “I have been fortunate to serve in the military and now I am an active-duty military spouse. Before becoming a military spouse, I did not fully appreciate the unique employment challenges that military families face from their spouse’s military career caused by frequent moves and/or living in locations without the right job prospects. However, there are many opportunities for military spouses in cyber and many resources are available for them as well, along with some that are geared specifically toward spouses.” 

    Smith: “So… I have less than one thousand days until I will retire so at this stage, my mistakes are still in the future! However, what I am doing now is working with a mentor to work towards retirement milestones, identify people, jobs, and work roles that I find interesting, and really think through my transition. My mentor currently has me reaching out to people to conduct information interviews to talk to them about their careers, gather information about their company, and things like that. I have also prioritized doing things like this 5×5 because I want to keep academia’s door open to me, and remaining engaged in research will benefit me in the long run. I know I will make mistakes, but I am working hard on my transition plan in the hopes that I can mitigate risk and identify hazards before it turns into a dumpster fire!” 

    #5 What is the most important piece of advice you would share with a veteran interested in entering a career in cybersecurity or cyber policy? 

    Andersen: “This is a field that is constantly shifting and no one expert can sit on their laurels hoping that they will still be relevant in a few years’ time. Find a group of likeminded people that will push you to grow, and you will be surprised by how many rewarding experiences come your way. And if you are heading back to school using your GI Bill, make sure to join your local Cyber 9/12 Strategy Challenge team!” 

    Conley: “Recognize and own your value. Military service has taught you to be a good teammate, put mission first, and always remember that values matter. This combination of grit, selflessness, and reliability are rare qualities—and invaluable assets for any high performing security team. Be proud of your service history and look forward to what more good you can do!” 

    Luczynski: “Do not be afraid to ask for help! Reach out to your former supervisors and subordinates to learn what they do and what roles are available, review your resume, or help you grow your network. It does not matter that you have not spoken in a long time; that is understandable and easily fixed. I strive to put in as much energy toward helping folks now as so many did to help me during my own departure from the Air Force.” 

    Pugh: “There are many paths one can take within the cyber field. Too often people think opportunities within cybersecurity are very technical and that a technical background is essential. While those roles exist and are needed, there are many other ways to work in the cyber field, including in policy, law, and education, among many others.” 

    Smith: “I love this question because it presents the chance for me to champion the need for cybersecurity professionals with public policy experience and vice versa! I am a public policy nerd that happens to work in cyber—I started my Army career in an electronic maintenance shop repairing radios and later found myself getting my PhD in public policy as a cyber officer. One of my former students is currently doing a master’s degree at the Massachusetts Institute of Technology in technology and public policy—a match made in heaven! People often say that cybersecurity is a team sport, and I understand ‘team’ (and you will be hard pressed to convince me otherwise) as a multidisciplinary team, comprised of individuals with diverse backgrounds and skillsets coming together to craft a security strategy. Because humans are the ones who use technology, cybersecurity can never be just a technical field! However, cyber policy can never be just public policy. Just as cyberspace is the only domain of warfare that is totally dependent upon and spans the other domains of warfare (maritime, land, air, space) to exist, cyber policy is the only domain of policy that spans all other public policy domains (e.g., healthcare, education, transportation). Understanding of how technology works and its role in society is critical to crafting useful cyber policy.” 

    Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—Veteran perspectives on cyber workforce development appeared first on Atlantic Council.

    ]]>
    This job post will get you kidnapped: A deadly cycle of crime, cyberscams, and civil war in Myanmar https://www.atlanticcouncil.org/in-depth-research-reports/report/this-job-post-will-get-you-kidnapped-a-deadly-cycle-of-crime-cyberscams-and-civil-war-in-myanmar/ Mon, 13 Nov 2023 15:23:00 +0000 https://www.atlanticcouncil.org/?p=818089 In Myanmar, cybercrime has become an effective vehicle through which nonstate actors can fund and perpetuate conflict.

    The post This job post will get you kidnapped: A deadly cycle of crime, cyberscams, and civil war in Myanmar appeared first on Atlantic Council.

    ]]>

    Table of Contents

    Executive summary 

    Following decades of cyclical insecurity in Myanmar, conflict reached a new level following a coup d’etat in 2021 during which Myanmar’s military, the Tatmadaw, deposed the democratically elected National League for Democracy government.1Meanwhile, criminal syndicates, entrenched primarily in Special Economic Zones (SEZs) like Shwe Kokko within Myanmar’s Karen state,2 have expanded and evolved their criminal operations throughout this evolving conflict. The Tatmadaw forces have intertwined themselves in complicated and carefully balanced alliances to support the ongoing conflict, including with the Karen State Border Guard Force (BGF) . As the Tatmadaw and BGF look to sustain themselves and outlast each other, they have found allies of convenience and alternative funding sources in the criminal groups operating in Karen state.3 In the last two years, organized criminal groups in Myanmar have expanded their activities to include forms of profitable cybercrime and increased their partnership with the BGF, which enables their operations in return for a cut of the illicit profits. Since roughly 2020, criminal syndicates across Cambodia, Myanmar, Laos, and Thailand have largely lured individuals with fake offers of employment at resorts or casinos operating as criminal fronts where they are detained, beaten, and forced to scam, steal from, and defraud people over the internet.4 The tactics—kidnap-to-scam operations—evolved in response to the pandemic and to the Myanmar civil war, allowing criminal groups to build on existing networks and capabilities. These operations do not require significant upfront investment or technical expertise, but what they do need is time—time that can be stolen from victims trapped in the region’s already developed human trafficking network. The profits that these syndicates reap from victims around the globe add fuel to the ongoing civil war in Myanmar and threaten the stability of Southeast Asia. These groups entrench themselves and their illicit activities into the local environment by bribing, partnering with, or otherwise paying off a key local faction within the Myanmar civil war,5 creating an interconnectedness between regional instability and profit-generating cybercrime. 

    What is unfolding in Myanmar challenges conventional interpretations of cybercrime and the tacit separation of criminal activities in cyberspace from armed conflict. The criminal syndicates, and their BGF partners, adapted to the instability in Myanmar so effectively that each is financially and even existentially motivated to perpetuate this instability.  

    This paper explores the connectivity between cybercriminal activities and violence, instability, and armed conflict in a vulnerable region, exploring how cybercrime has become an effective vehicle through which nonstate actors can fund and perpetuate conflict. The following section examines the key precipitating conditions of this case, traces the use of cyberscams to create significant financial losses for victims across the world, sow instability across Southeast Asia, exacerbate the violence in Myanmar, and, finally, considers the risks that this model could be adopted and evolved elsewhere. This paper concludes with implications for the policy and research communities, highlighting the ways in which conflict can move, unbounded, between the cyber and physical domains as combatants and opportunists alike follow clear incentives to marry strategic and financial gain.  

    Introduction 

    Cybercrime and cyber fraud, from ransomware to financial data fraud and romance scams, have reached a new high in Southeast Asia, according to the United Nations Office on Drugs and Crime (UNODC), driven in part by “the increasing number of available targets and the perception of cybercrime as highly profitable with a relatively low risk of detection.”6 Cybercrime is a growing business worldwide; the relative accessibility of cybercrime tools combined with increasing reliance on online banking and the immense geographic reach that the domain provides to create a wealth of opportunity for those eager to exploit others for profit.7

    Criminal groups increasingly occupy a space between traditional crime and cybercrime, engaging in a multitude of interrelated and cross-supportive criminal activities. For more than a decade, organized criminal groups have moved into new online criminal markets while also engaging in cybercrime “to facilitate offline organized crime activities.”8

    In Myanmar, the pursuit of cybercrime profits not only facilitates traditional crime, but also directly supports the armed organizations waging civil war across the state. Myanmar criminal groups and ethnic armed organizations (EAOs) are engaged in a symbiotic relationship: criminal operations have flourished in the permissive environment of post-coup, war-torn Myanmar. Within Karen state, these criminal syndicates have in turn provided the Karen State Border Guard Force (BGF)—a key EAO in the conflict aligned with the Tatmadaw—with a desperately needed source of funding for their ongoing war. 

    A cycle of crime and conflict 

    Cybercrime and its symbiotic relationship to the Myanmar civil war emerged and evolved within the context of several internal and external shocks. The 2020 outbreak of COVID-19 had a devastating effect on tourism in Southeast Asia, limiting the supply of ready targets for fraud and myriad extortion schemes. Myanmar’s February 2021 coup destabilized a nascent regime and returned the country to a state of sustained conflict.  

    However, these shocks only served to amplify existing tensions in Myanmar including a deeply fractured governance landscape, a history of corruption across the country, and established regional organized criminal networks. Criminal groups in a chaotic and unstable environment with little to no rule of law sought out a limited, profitable resource—trafficked individuals from across the region—and forced them to engage in cyber fraud. Armed groups seeking to assert power and dominance within Myanmar with little legitimate means of generating income or war materiel sought to establish a symbiotic relationship with criminal groups that could provide a steady source of profit. 

    These catalysts and precipitating conditions lay the foundation for the model of cybercrime-funded conflict seen in Myanmar. However, it is the cyclical patterns of instability, criminal adaptation, and corrupt entrenchment that have created a self-reinforcing cycle of criminality and conflict—a cycle that may create an exportable model of cybercrime-funded conflict in new regions around the world. 

    INSTABILTY: The first portion of this cycle analyzes the conflict and instability within Myanmar both before and after the coup in February 2021. This section first provides the background to the insecurity in the country, especially surrounding the February 2021 coup that overthrew Myanmar’s democratically elected government. It lays out the fractured governance landscape of modern-day Myanmar and gives an overview of the key players in the conflict. This section then addresses the financial position of the combatants, and their emerging need to seek alternative funding sources, serving as a transition through which to better understand the relationship of armed conflict and the illicit economy in Myanmar. 

    CRIMINAL ADAPTATION: This section dives into cybercriminal activity in Karen state’s Shwe Kokko economic zone, examining the emergence of a massive kidnap-to-scam operation. This section examines how, in the context of the previously discussed fractured governance landscape and established criminal networks, criminal organizations operating within Myanmar responded to COVID-19’s shock to their operations. This section delves into what factors made cyberscams an optimal choice for criminal adaptation, namely the accessibility of this type of cyberscam and the existence of robust criminal human trafficking networks able to procure free, forced labor. This section explores the kidnap-to-scam operations as a whole, to better understand the pipeline of criminality that these criminal organizations created. 

    ENTRENCHMENT: This final section assesses the stability of the symbiotic relationship between cyber-enabled criminality and instability, and the parasitic effects both within Myanmar and without. The clearest example of this is the corrupt, quid pro quo relationship established between an ethnic armed organization, the BGF, and these criminal organizations. This relationship closes the feedback loop and entrenches an environment where multiple actors are incentivized toward instability. The instability within Myanmar provides enrichment and safe haven to sow increased instability in various manners and locales; the scam operations themselves create significant financial losses to individuals around the world, the criminal network supporting and supported by these scams contribute to insecurity across Southeast Asia, and the profits from these scams help purchase weapons and supplies that contribute to the protraction of the civil war in Myanmar. Finally, this method of operation as a whole may to inspire armed groups in insecure regions of the world to pursue a path of cybercrime-funded conflict.

    Instability: Conflict and opportunity 

    Myanmar has been in a state of cyclical violence for decades before the outbreak of the current civil war. This state of violence has created a sustained environment of instability and fertile grounds that opportunistic actors—both criminal syndicates and EAOs—have turned to their advantage.  

    Understanding the conflict  

    On February 1, 2021, the Tatmadaw—the Myanmar military—deposed the democratically elected government headed by the National League for Democracy (NLD) and detained President Win Myint and State Counsellor Aung San Suu Kyi.9 The Tatmadaw declared a state of emergency and called new elections at the end of that year.10 In the meantime, Commander-in-Chief of Defence Services Min Aung Hlaing took command of the government with a military junta, or military ruling committee.11  

    The unrest immediately following the coup has since erupted into outright violence between various militarized groups, and against civilians, across Myanmar.11 The primary political challenger to the Tatmadaw government is the National Unity Government (NUG), a shadow civilian government made up of former elected representatives who were ousted or left the Tatmadaw government following the coup. The NUG together with their armed forces, the People’s Defense Force, declared a “people’s defensive war” against the Tatmadaw.12

    PRECIPITATING CONDITION: History of insecurity

    Myanmar has known little respite from conflict and military coups in its history. After gaining independence from the British empire in 1948, a military coup overthrew the fledgling parliamentary democratic Union of Burma in 1962. The ‘70s and ‘80s saw protectionist policies drive a deteriorating economic situation and simultaneously the explosion of corruption and a black-market economy, specifically in drug-producing frontier areas like the Shan state and trafficking corridors in Karen state.13 These same regions featured an ongoing contest of control between the junta and ethnic armed organizations (EAO).14 Widespread protests in 1988 led to brutal crackdowns by the junta and a promise of multiparty elections in 1988, which did not occur until 1990, and the results of which the junta ignored, arresting opposition politicians and consolidating power.15 Various forms of junta remained in power through 2015, even after widely celebrated election brought the National League for Democracy into majority rule.16

    Since February 2021, the United States Institute of Peace and Human Rights Watch estimate that 3,000 civilians have been killed, 20,000 civilians have been arrested, and more than one million people have been displaced, both internally and externally.17 Government control throughout Myanmar is fractured. The junta estimated last year that the Tatmadaw maintained effective control over only 22 percent of Myanmar townships (17 percent of the country’s land area) and partial control over 39 percent of townships ( 31 percent of the land area).18 Regions across Myanmar controlled by EAOs exploded from a handful in 2021 to large swaths outside of a government-controlled central and southwestern core.19 The Myanmar government has cyclically engaged in conflict to assert control against ethnic armed groups in various regions.20 Now, both the Tatmadaw and the NUG rely on a series of carefully balanced alliances made up of such regionally aligned EAOs.21

    In the Karen state, there are two main armed groups: the Karen National Union (KNU) and the Karen Border Guard Forces (BGF). One of the oldest EAOs in Myanmar, the KNU, has controlled key territory in the Karen state along the Thai-Myanmar border since 1949. After cyclical antigovernment violence, the KNU signed a ceasefire in 2015 and led ten other ethnic armed groups in the peace process.22 Following the February 2021 coup, the KNU began fighting in open collaboration with the NUG against the Tatmadaw.23

    On the other side of the conflict, the BGF comprises units of former insurgents that are now aligned with the Tatmadaw and patrol areas of control along the Thai-Myanmar border including important trade corridors. The BGF is seen as a key force multiplier against the KNU and similar groups.24 The commanders of various BGF units operate with relative independence from the junta, often more as protection rackets for businesses—both legitimate and criminal—operating within their jurisdiction.25 The two warring sides, KNU and NUG versus BGF and Tatmadaw, are unwilling to cede an important trade corridor to Thailand and China alongside the legitimate and illegitimate economic opportunities of the area. 

    The Tatmadaw and antigovernment forces have increasingly relied on alternative funding sources as the civil war grinds on.26 Tatmadaw’s efforts have been guided toward putting pressure on antigovernment funding with the goal of fracturing the NUG’s alliances and networks with EAOs.27 The NUG relies almost exclusively on donations from supporters both within and without Myanmar.28 To cut off the NUG’s social media funding drives, the Tatmadaw in 2022 restricted mobile payments29 and in 2023 passed a cybersecurity law that banned VPN usage, throttled access to social media sites, and forced internet companies to hand over user data to the military.30 Despite protests from businesses and civil society, the cybersecurity law represents a check to the antigovernment forces’ ability to crowdfund and an attempt to diminish the NUG’s social media presence. The Tatmadaw, the NUG, and the various EAOs are competing not only for military and territorial control, but also for access to financial streams that are otherwise unrestrained by the unrelenting instability. 

    Criminal adaptation 

    Criminal syndicates and the power they wield in Myanmar is not a new phenomenon. However, the character of this criminal adaptation in Myanmar’s Karen state exemplifies how cyber tools can provide an alternate path forward for both criminal syndicates and EAOs. Zeroing in on the Shwe Kokko economic zone, these criminal syndicates have responded to their unstable environment and successive shocks by developing a new, large-scale operation using trafficked labor to scam individuals around the world for millions of dollars a year. This adaptation to the instability in Myanmar appears so effective that their profitability is anecdotally positively correlated with diminished rule of law and security. 

    Illicit economic development 

    The period from 2011 to 2020 saw limited economic and political reforms including amnesty to political prisoners and reinvigorated economic policies to encourage foreign direct investment.31 V-Dem Institute‘s Annual Democracy Report in 2019 emphasized that from 2008 to 2018 Myanmar had moved from a “closed autocracy” to an “electoral autocracy,” and through a lifting of sanctions and trade restrictions, the Myanmar economy experienced modest growth through 2016.32 These legitimate attempts to revitalize the economy, however, were frequently coopted by established criminal networks that used the development money to further their own operations. 

    PRECIPITATING CONDITION: History of corruption 

    Illicit markets, especially drag trafficking, produce profits that have directly financed and enabled EAOs to defy and challenge Myanmar government control for many decades, and over that time have in cases been better armed and resourced than the central government.33 During this time the ease of entry into the opium trade, the widespread demand for opium and heroin, and the lack of law enforcement on both sides of the Myanmar-Thailand border cultivated massive illicit trading networks.34

    Decades of fighting with these EAOs was temporarily halted by a 1989 ceasefire following the splintering of the largest drug-funded insurgency, the Burma Communist Party. This agreement, which would subsequently serve as the basis for ceasefires with other insurgency groups, granted the BCP’s successor groups significant political and economic autonomy as well as tangible development economic assistance programs.35 Following the ceasefires, state-controlled banks accepted deposits regardless of the murkiness of the money’s origins in exchange for a 40% (subsequently 25%) fee, helped these organizations and their subsidiaries get business permits and government contracts, and offered lucrative positions in business and government to influential insurgent leaders.36 The fighting stopped, but the EAO’s now effectively had carte blanche to develop their criminal operations unhindered, and the government now profited, too, from the expanded illicit trade. Illicit markets in some areas were so profitable that it dwarfed the formal economies of entire regions, providing little incentive for officials to crack down without heavy federal pressure.37

    In 2015, the government began an aggressive economic development effort along its borders using SEZs to drive international investment and foster domestic economic growth.  One such economic zone is Shwe Kokko in Karen state, which has since emerged as a center for legal and illegal gambling and illicit trade.38 Shwe Kokko was developed by Yatai International in partnership with BGF’s Chit Linn Myaing.39 Yatai International, a mining and manufacturing company, holds 20 percent stake in the Shwe Kokko development located next door to BGF headquarters.40 Yatai International Holding Group is owned by a Chinese national with Cambodian citizenship She Zhijiang and, according to the Yatai’s materials, the development “represents a new chapter for the Belt and Road Initiative”(BRI) within a new Special economic zone.41 

    It is unclear, however, how much Yatai’s claims are based in reality. According to the Myanmar Investment Commission, there is no official Special Economic Zone in Shwe Kokko, and legally speaking developments in this area are residential villas.42 Additionally, a 2020 US Senate report stated that Shwe Kokko is “an effort by the [People’s Republic of China] to colonize Karen [Kayin] territory … and expand regional BRI investments in Southeast Asia,” and links these investments to a broader Chinese strategy to increase its influence throughout the region through BRI.43 The Chinese government, however, denied any official state investment in Shwe Kokko.44  

    Shwe Kokko
    Credit: Google Earth

    Though SEZs are a legitimate mechanism for driving international investment, those in Myanmar and along the surrounding borders operate more like criminal safe havens with the controlling companies largely shielded from governmental pressures to regulate activities and crack down on criminal organizations.45  The National League for Democracy government was, in fact, conducting an investigation into the connections between the Shwe Kokko development and the BGF.46 Tensions stemming from the investigation were such that, in 2020, the BGF leadership considered breaking its alliance with the government and resuming fighting in Karen state.47 Ultimately, the investigation was never concluded.  

    The casinos of Shwe Kokko, and the illicit market that flourished alongside them, were a petri dish of criminal activity, connected with a regional illicit economy. Among these criminal activities were cyberscam operations run by Chinese and local criminal syndicates targeting primarily Chinese nationals.48 These operations largely fell under the category of pig butchering scams, which combine romance and boiler room (or false investment) scams through fake accounts on social media.49 This network of operations would serve as an unexpected foundation for both these criminal syndicates and the BGF in the tumultuous years to come. 

    Kidnap-to-scam 

    In early 2020, the outbreak of COVID-19 almost immediately altered the economic situation on the ground in Southeast Asia. There were no longer visitors to their casinos whose money could be siphoned off through gambling or illegal trade, and profitable trade gates that facilitated traffic between Myanmar and Thailand were forced to close.50 Relatively small-scale pig-butchering cyberscams—targeting fewer victims both as forced operators and as cryptoscam marks—were conduct in this area, primarily targeting Chinese nationals. However, the impact diminished due to China’s COVID-19 travel restrictions in 2020 and its counter-cybercrime laws and operations.51

    In this changed environment, criminal syndicates had to adapt to find an alternative, steady income stream. These syndicates still had relative safe harbor and physical headquarters in Shwe Kokko and access through their widespread criminal operations to a human trafficking network with a new pool of victims.52 Cybercrime is so appealing for criminals and criminal groups due in large part to the ability to create mass profit with relatively small resource input, from anywhere in the world. Instead of investing in technically sophisticated capabilities, these criminal syndicates instead used the resource they had in near abundance —forced labor—to conduct a global cyberscam operation. Cyberscams, run by these criminal syndicates out of empty hotels and casinos, became a multimillion-dollar criminal enterprise by exploiting thousands of vulnerable people in Myanmar and Southeast Asia and forcing them in turn to exploit people all around the world.53

    PRECIPITATING CONDITION: Established criminal networks 

    Transnational organized crime networks across Southeast Asia are engaged in a wide variety of criminal activity, four of the most active including drug trafficking, illegal migration, human trafficking, counterfeit goods and medicines, and environmental crimes.54 These criminal syndicates are deeply rooted in locales across Southeast Asia and Myanmar where their operations face minimal threat from governing bodies or law enforcement. The revenue of the drug trafficking out of Myanmar’s Shan state significantly outsizes the state’s legitimate economy.55 Chinese criminal syndicates have operated out of areas like Shwe Kokko in Myanmar and Cambodia for more than a decade in a self-reinforcing cycle in part “created by the confluence of Chinese money, Chinese organized crime groups, and a very poor legal environment,” says the head of the Northern Australia Strategic Policy Centre, John Coyne.56

    The Global Organized Crime Index ranks Myanmar third of 193 countries and first in Asia when it comes to degree of organized criminality.57 Myanmar has consistently rated as among the worst rated locations for human trafficking according to the US Department of State Office to Monitor and Combat Trafficking in Persons. The country has been ranked as Tier Three since 2015, meaning that is it included as one of just eleven “governments with a documented “policy or pattern” of human trafficking, trafficking in government-funded programs, forced labor in government-affiliated medical services or other sectors, sexual slavery in government camps, or the employment or recruitment of child soldiers.”58 Myanmar’s role as a hotbed of criminality, though briefly reduced in the mid-2010s, goes back decades and for now, any reduction in the strength of organized crime is next to impossible.59

    Since 2020, the groups kidnapped thousands of individuals from across Southeast Asia and beyond and forced them to engage in cybercriminal scams.60 Thousands of people who had moved within the region, unemployed due to the halt in tourism became what labor abuse investigator Khun Tharo called “invisible people,” extremely vulnerable to the lures employed by local criminal syndicates and without protection.61  Many of the victims are young, well-educated, computer-literate, and multilingual; and while they struggled to find gainful employment during the pandemic, their skills make them desirable targets for this type of work.62 Once these individuals arrived at the advertised location, eager to start new skilled positions with the promised competitive salaries, they are forcibly transported to heavily guarded compounds and their phones and passports are taken away.63 Once trapped inside, victims were told that their new role was to engage in a number of online pig butchering scams to generate sufficient profit for their captors. Victims deemed to be disobedient or underperforming are beaten and tortured by electrocution and other inhumane methods.64

    Breakdown of Pig Butchering Scams

    Pig butchering scams typically require intensive time and relationship building, and trafficked individuals are forced to build these relationships over time, partially through the use of playbooks and scripts made for the kidnapped workforce.65 The term pig butchering refers to the scammers’ process of first feeding the victim or ‘pig’ with false information until it is ready for the scam, then the scammer steals, or ‘butchers,’ the victim’s information and money. Typically, these pig butchering scams involve investments in cryptocurrency where the victim will be prompted to download a malicious app or create an account on a web platform that appears trustworthy or as a counterfeit of a legitimate financial institution.66 Inside the app or platform, the victim is shown carefully crafted data demonstrating that their ‘financial investment’ has grown and is encouraged to add more funds with the promise they can withdraw at any point. Once the victim has deposited a significant amount of money and shows signs of insolvency, the scammers will shut down the account and disappear, leaving the victim without recourse and often in debt.67 

    Entrenchment 

    While these cyberscam operations create symbiotic benefit for both criminal groups and EAOs in Myanmar, the operations themselves are intensely parasitic to the global cyber domain, the broader Southeast Asian region, and the population of Myanmar. Finally, this paper explores how this relationship could serve as an analytic model to better understand criminal-combatant relationships elsewhere in the world where similar precipitating conditions and patterns are present. 

    Global financial impact 

    The 2023 National Cybersecurity Strategy highlighted, along with China, Russia, Iran, and North Korea, the threat that global criminal syndicates pose to US national security, especially those threats emanating from jurisdictions with ineffective or irresponsible rule of law.68 Though the strategy’s section on countering cybercrime focuses primarily on ‘defeating ransomware,’ the strategy emphasizes the dangers of cyber operations that target the most vulnerable and least defended. The cyberscam operations run out of Myanmar are not technically sophisticated, and yet generate incredible gross and net profit—underlining the tenuous link between sophistication and the financial yield from cybercrime.  

    Several studies from governments and nongovernmental groups over the past few years have tried to illustrate the global financial impact of the scams as a whole, and particularly those emanating from Southeast Asia. In 2021, the US Federal Bureau of Investigation’s (FBI) Internet Crime Complaint Center received 4,325 complaints from US residents and citizens regarding pig butchering scams, reporting collective losses over $429 million.69 The Global Anti-Scam Organisation (GASO)—founded and maintained by victims of these pig butchering scams—conducted a survey through July 6, 2022, to better understand individual victims’ losses. Within the United States, the survey found that the average victim lost $210,760 and the median victim lost $100,000; outside of the United States the average and median were slightly lower, $155,117 and $52,000, respectively. The study also found that of the surveyed victims, 24 percent lost less than 50 percent, 43 percent lost between 50-100 percent, and 33 percent lost more than their net worth and went into debt as a result of the scam.70 In several reported cases, scam victims lost more than $1 million,71 and in one case a victim lost a staggering $5 million dollars.72 These figures, though not comprehensive, clearly telegraph the global economic impact of this relatively simple cyberscam. 

    Although little research has been conducted on these scams’ profits in Myanmar in particular, figures on related scams coming out of Cambodia may provide a comparative scale, at least, of the profits from similar operations conducted out of Myanmar. According to a 2022 report from VICE News, pig butchering kidnap-to-scam operations operated from Sihanoukville, Cambodia generate approximately $1 billion every year.73 Sean Gallagher, a principal threat researcher at Sophos X-Ops posing as a ’duped’ victim of a one such scam based in Sihanoukville, Cambodia, allowed the scam operator over the course of five months to play out a script of trust building until she provided him instructions in the form of friendly advice for how to invest in cryptocurrency. During his investigation, he found a series of crypto wallets used by the scammers over a five-month period worth a collective $3 million.74

    Exacerbating regional insecurity 

    These kidnap-to-scam operations have deep regional repercussions and interconnections. Myanmar’s neighbors across Southeast Asia are also the places from which these criminal syndicates pull most of their initial victims, thousands from Thailand, Malaysia, Taiwan, and more.75 Victims across the region and beyond are sucked into this multinational human trafficking network—lured to Thailand or other seemingly safe locations, and there, kidnapped and transported to scam compounds in Myanmar.76 The criminal syndicates behind most illegal gambling institutions and scam rings remain highly mobile and responsive to law enforcement pressures. These syndicates appear to have strong cross-regional connections that enable them to efficiently move their operation and their victims, and start operations anew—including to Myanmar.77 These groups tend to locate within SEZs or tourist hot spots and expand upon traditional criminal activities.  

    One such location where there are similar kidnap-to-scam operations is Sihanoukville, Cambodia. After years of local and international pressure, in September 2022 the Cambodian government and law enforcement carried out raids on the casinos and hotels where investigators and former victims were forced to run these scams. However, according to one volunteer who helps repatriate Thai victims, criminal syndicates bought out corrupted officials and law enforcement personnel who would inform the syndicates in advance of raids so that authorities would find the location empty.78 One victim, initially held in Sihanoukville, Cambodia, reported that he and dozens of other prisoners were moved in the middle of the night and reestablished in Shwe Kokko before his eventual escape.79

    A concentrated effort by government and law enforcement is required to detain criminals, rescue kidnapped workers, and shutter the bases of operations that these groups use. However, as the case of Sihanoukville shows, the mobility of and interconnection between these criminal groups means that they can respond easily to governmental pressure in one city or region by shifting workers and operations to a new area with weaker rule of law. This effectively means that regardless of how hard other regional governments target this activity, criminal groups have safe havens where local authorities have financial and political interest in facilitating and benefiting from illicit activities.  

    Perpetuating civil war 

    The 2021 coup in Myanmar and resulting instability were a boon for the BGF and their criminal partners across Karen state. The investigation into Shwe Kokko by the previous government was halted by the Tatmadaw, Karen casinos and border trade depots with Thailand have reopened, and the BGF’s multimillion-dollar developments in and around Shwe Kokko were restarted.80 The Shwe Kokko casinos resumed operations shortly after the coup in 2021 due to the Karen BGF’s relationship with the Tatmadaw and alleged promises between the parties that profits “would be split in half.”81 The money from these scams flow mostly to the heads of the crime organizations but also into the pockets of armed ethnic groups or junta-aligned forces that control the areas where the scam centers operate. The commanders of various BGF units operate with relative independence from the Tatmadaw and most operate as protection rackets for businesses, both legitimate and criminal, operating within their jurisdiction.82 In areas where Tatmadaw or Tatmadaw-aligned military forces control territory, the criminal organizations bribe military members with large sums of money and pay for taxes83 and yearly licenses to continue operations.84 Tatmadaw-aligned individuals have also issued licenses for business operations and land leases to front companies associated with scam groups, often in the form of seemingly legitimate tax payments for land, utility, and building use.85 Entrenchment in and control over Shwe Kokko and related criminal activities may have increased following the arrest of Yatai founder She Zhijiang in August 2022, which left an even greater opening for the BGF.86

    These cryptoscam operations, and those connected with them, have themselves become targets of retaliatory violence. As more information surfaces about money laundering networks, individuals connected with these activities face targeting by anti-regime forces in Myanmar and arrest in other countries.87 Minn Tayzar Nyunt Tin, a legal aide who was allegedly “key to money laundering for the Junta,” was assassinated in Yangon in March 2023 as a direct result of those activities. The Yangon guerrilla group allegedly responsible claimed that Tin had facilitated raising millions of dollars for the Tatmadaw.88 In April 2023, forces associated with the Karen National Union (KNU)89 attacked Shwe Kokko, reportedly because of its role as a criminal base of operations funding the military regime.90 This attack, allegedly without KNU approval, sent over 10,000 civilians fleeing into Thailand and initially overran five BGF outposts until BGF and Tatmadaw reinforcements forced them to retreat.91

    After sanctions by the West and increasing international pressure,92 the Tatmadaw faces a funding shortfall93 and difficulties in procuring weapons and ammunition.94 Russia and China both supply weapons to the Tatmadaw, but Russia’s export of weapons has slowed since its invasion of Ukraine, and China’s contributions are in part counterbalanced by weapon provisions they also make to several rebel EAOs.95 Chinese support of the Tatmadaw is being challenged due to the Tatmadaw’s lack of interest or ability to tackle the crime emanating from within Myanmar’s borders. Earlier this year, two Chinese ambassadors urged Myanmar to curtail these harmful illegal activities, possibly in conjunction with the Thai government.96 In addition, in early September, the United Wa State Army, carried out a series of raids in in northern Myanmar’s Shan state, arresting “more than 1,200 Chinese nationals allegedly involved in criminal online scam operations” and handing them over to the Chinese police just across the border in China’s Yunan province.97These actions show increasing Chinese momentum against this criminal activity, but it is unclear how this might play out in Karen state. Should the Tatmadaw government attempt to crack down on crime in Shwe Kokko, they will likely further fracture their relationship with the BGF, and thus their hold in the region. The estimated millions of dollars produced in Shwe Kokko kidnap-to-scam operations is an invaluable source of funding to enable the BGF to withstand shocks from within Myanmar and from international action, one they will be loath to lose.  

    A model for cybercrime-driven conflict 

    The catalysts, precipitating conditions, and cyclical patterns of the development and entrenchment of cybercrime-funded conflict within Myanmar are unique to that country. However, many of these factors are not unique and are present in similar configurations in other countries and regions around the world. As more and more criminal and armed groups develop cyber capabilities, some may look to Myanmar as an example. In locales with limited rule of law and a strong criminal human trafficking network, like the criminal syndicates in Myanmar, may see these victims as a resource themselves to conduct immensely profitable cyberscams requiring little additional resources or capabilities. INTERPOL, in its June 2023 warning on human trafficking-fueled fraud, warned that “there is evidence that [this modus operandi] is being replicated in other regions such as West Africa, where cyber-enabled financial crime is already prevalent.”98 However, this warning does not go far enough. The spread of kidnap-to-scam operations themselves do pose a significant risk to global financial security. With billions of dollars in profit, governments around the world must wake up to the threat of this kind of cyber fraud and its risks to their populations. These operations are rarely confined to one locale, but rather spread across regional criminal networks; when this modus operandi is implemented in a region impacted by violence and instability it creates a self-reinforcing cycle of instability.  

    Looking forward – What can we do? 

    Targeting the point of collection 

    Public awareness 

    In March 2023, the FBI released a public warning on the rise of pig butchering scams, outlining the basic format of one of these cryptocurrency scams and offering steps for potential victims to protect themselves.99 While government notifications like these are important, they do not go nearly far enough to communicate the depth of the threat to the wider public in the United States and beyond. Most of the population does not read government alerts, and so the government must coordinate with companies and nonprofits to find people where they are—and where these scammers find them. According to California prosecutor Erin West, cryptoscam victims are most commonly found on dating apps run by Match, Meta’s Facebook, Instagram, and WhatsApp, LinkedIn, and text messages.100Sites like these should be the focus of the FBI and the Cybersecurity and Infrastructure Security Agency’s (CISA) alert and education efforts on cryptocurrency scams, and run in cooperation with Match, Meta, LinkedIn, and other social media platforms. Since these cryptoscams do not solely exploit victims in the United States, the United States should engage with its allies and partners to coordinate their public awareness campaigns.  

    Education regarding the dangers of cryptocurrency scams should go beyond these alerts. The FBI and the Treasury Department should coordinate with companies like Chainalysis and nonprofit organizations like the Global Anti-Scam Organisation (GASO)101 to create guidelines to help people who want to buy and sell cryptocurrency better identify the signs of fraudulent or exploitative sites. These entities should also work in coordination to create guidelines for how people can report suspicious sites or activity below a formal criminal complaint. Most pig butchering cases go unreported, hindering potential prosecution, and leaving analysts without a full picture of their impact on victims.102

    United States Government 

    As previously mentioned, the US 2023 National Cybersecurity Strategy places a priority on countering cybercrime, but is too narrowly focused on ransomware as, seemingly, the only criminal strategic threat. Because cryptoscams target individuals rather than companies, the technical means used are often unsophisticated, and the scams themselves are often left unreported; therefore, the threat of this type of cybercriminal activity is underappreciated. As the CISA and FBI implement what has thus far been laid out in the strategy and implementation plan, their efforts must be intentionally expanded to include pig butchering operations. This may include creating a separate Joint Cryptoscam Task Force with a different mix of government and private sector entities to assess the full picture of the threat. It is critical to streamline communication and efforts between different organizations already tracking this. There are pockets of knowledge about this already, the key here is getting those pockets to talk and coordinate efforts. Specifically, the US Government should work with private sector crypto organizations to better identify funding streams of scam organizations and work with law enforcement and financial entities to close loopholes and enforce Know Your Customer rules.103

    One such tool may be to echo the successes of counter-ransomware operations to directly target the websites facilitating money laundering for cryptoscams. Steps have already been taken in this direction. In 2022, the United States filed a forfeiture complaint worth $2 million against cryptocurrency seized in an investment fraud case using the RiotX platform.104 In 2023, the FBI announced that it had seized more than $112 million in funds linked to cryptocurrency investment schemes.105 Once again, this effort should be internationalized so that the United States and its allies and partners are simultaneously applying pressure against the points of cryptocurrency collection for these criminal groups. In particular, the FBI should coordinate with Interpol’s ASEAN desk both to capture and share information and to execute limited, intentional operations against global criminals wherever jurisdiction allows.  

    The US government should also expand its use of sanctions. In November 2022, the US government, in partnership with the European Union, enacted sanctions against a number of individuals and companies as a result of “the continuing escalation of violence and grave human rights violations following the military takeover [in February 2021],” focusing on government officials, military leaders, and arms dealers.106 In December 2022, the US government passed the BURMA Act which lays out a series of mandatory and additional sanctions against various individuals and organizations within Myanmar. The mandatory sanctions include senior officials and entities that support the defense sector. The additional “possible” sanctions outlined by this act include: 

    (4) “any foreign person that, leading up to, during, and since the February 1, 2021, coup d’état in Burma, is responsible for or has directly and knowingly engaged in—

    (A) actions or policies that significantly undermine democratic processes or institutions in Burma; 
    (B) actions or policies that significantly threaten the peace, security, or stability of Burma;
    (C) actions or policies by a Burmese person that—

    (i) significantly prohibit, limit, or penalize the exercise of freedom of expression or assembly by people in Burma; or 
    (ii) limit access to print, online, or broadcast media in Burma; or

    (D) the orchestration of arbitrary detention or torture in Burma or other serious human rights abuses in Burma; or

    (5) any Burmese entity that provides materiel to the Burmese military”107

    Within these guidelines, the Department of Treasury’s Office of Foreign Assets Control should assess this connection between criminal groups, EAOs, and the companies operating within Shwe Kokko as outlined in this paper and determine how specific sanctions may aid in weakening the interplay of crime and violence in the state. 

    Targeting the source 

    Further research 

    More research is needed to understand the true financial impact of these operations throughout the world. Existing coverage of the prevalence and financial losses of these operations likely underestimates their true impact due to underreporting. Whether scam victims believe that nothing can be done, do not know how to report the criminal activity, or are too embarrassed to tell anyone, underreporting is a significant problem to overcome before governments and concerned actors can truly understand how to effectively respond.108 Research into the profits that pig butchering groups in Myanmar generated in three distinct time periods—before the COVID-19 outbreak in early 2020, between that date and the 2021 coup, and since the 2021 coup—would provide an indispensable, quantitative view of how different shocks to the population and criminal groups within the country affected these operations. 

    Regional cooperation 

    As the United States takes action against the threats emanating from criminal entities in Southeast Asia, it must work alongside governments in the region that have been most affected, and affected in different ways, by this threat. The Department of State should coordinate with Southeastern Asian government, especially Thailand and Cambodia, to better understand the scope and depth of the problem they face in cross-regional criminal operations like human trafficking. The Department of State’s Office to Monitor and Combat Trafficking in Persons must be more vocal to partners and push nations to release public service announcements, prevent citizens leaving for scam jobs, and work with their embassies to rescue citizens. This may also be an opportunity for the United States and China to cooperate against a common threat. China has been working to counter this problem for years, including in cooperation with Thailand in recent mass arrests.109 The United States should be clear about the public action it is taking against these groups and attempt, as much as possible, to align these efforts so that they are reinforcing rather than creating redundancies of Chinese and regional efforts. 

    Conclusion  

    The kidnap-to-scam operations run by criminal syndicates and enabled by armed combatants in an ongoing civil war, illustrate how cybercrime has become an effective vehicle through which nonstate actors can fund and perpetuate conflict outside of the cyber domain. Criminal syndicates in Myanmar are able to operate effectively due in large part to the vulnerable populations created in unstable environments and the lack of governance and law enforcement oversight. The BGF in Myanmar’s Karen state is able to maintain a level of independence from the Tatmadaw, even as allies, and convert illicit profits from cyberscams into guns and ammunition.110 The profits that these syndicates generate, at the expense of victims around the globe, add a growing source of fuel to the ongoing civil war in Myanmar and threaten the stability of Southeast Asia. 

    The use of cybercrime to fund conflict and instability could well become more prevalent as the basic relationship is symbiotic and few of the precipitating conditions are unique to Myanmar.  This emerging trend poses a significant risk to the United States and its allies, especially where it undermines important and rapidly hardening assumptions about the nature of risk from insecurity in cyberspace. 

    Conflict is not exclusive to the cyber or physical realms but increasingly moves across and between domains as combatants and opportunists alike follow clear incentives to marry strategic and financial gain. The United States and its allies must work together to create a clearer picture of the global cybercriminal landscape beyond ransomware and technical mitigations, and work with those governments impacted directly by kidnap-to-scam operations to help curtail this problem at its source. In the absence of such information, the next small wars and civil conflicts may be fueled by a powerful set of relationships and criminal incentives never well examined and all the more powerful because of it. 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    Richard C. Paddock, “Myanmar’s Coup and Its Aftermath, Explained,” The New York Times, December 9, 2022, , https://www.nytimes.com/article/myanmar-news-protests-coup.html.
    2    Alternatively, Kayin state
    3    “Crowdfunding a War: The Money behind Myanmar’s Resistance,” International Crisis Group, December 20, 2022, https://www.crisisgroup.org/asia/south-east-asia/myanmar/328-crowdfunding-war-money-behind-myanmars-resistance; Priscilla A. Clapp and Jason Tower, Myanmar’s Criminal Zones: A Growing Threat to Global SecurityThe United States Institute of Peace, November 9, 2022, https://www.usip.org/publications/2022/11/myanmars-criminal-zones-growing-threat-global-security.
    4    “The Gangs That Kidnap Asians and Force Them to Commit Cyberfraud,” The Economist, October 6, 2022, https://www.economist.com/asia/2022/10/06/the-gangs-that-kidnap-asians-and-force-them-to-commit-cyberfraud
    5    Naw Betty Han, “The Business of the Kayin State Border Guard Force,” Frontier Myanmar, December 16, 2019, https://www.frontiermyanmar.net/en/the-business-of-the-kayin-state-border-guard-force/.
    6    “Cybercrime and COVID19 in Southeast Asia: An Evolving Picture,” United Nations Office on Drugs and Crime, May 16, 2021, https://www.unodc.org/documents/Advocacy-Section/UNODC_CYBERCRIME_AND_COVID19_in_Southeast_Asia_-_April_2021_-_UNCLASSIFIED_FINAL_V2.1_16-05-2021_DISSEMINATED.pdf.
    7    “Cybercrime,” accessed October 13, 2023, https://www.interpol.int/en/Crimes/Cybercrime; E. Rutger Leukfeldt, Anita Lavorgna, and Edward R. Kleemans, “Organised Cybercrime or Cybercrime That Is Organised? An Assessment of the Conceptualisation of Financial Cybercrime as Organised Crime,” European Journal on Criminal Policy and Research 23 (2017), 287–300, https://link.springer.com/article/10.1007/s10610-016-9332-z
    8    “UNODC Teaching Module Series: Criminal Groups Engaging in Cyber Organized Crime,” United Nations Office on Drugs and Crime, accessed October 13, 2023, https://www.unodc.org/e4j/zh/cybercrime/module-13/key-issues/criminal-groups-engaging-in-cyber-organized-crime.html
    9    Hannah Beech, “Myanmar’s Leader, Daw Aung San Suu Kyi, Is Detained Amid Coup,” The New York Times, January 31, 2021, https://www.nytimes.com/2021/01/31/world/asia/myanmar-coup-aung-san-suu-kyi.html; Shibani Mahtani and Kyaw Ye Lynn, “Myanmar Military Seizes Power in Coup after Arresting Suu Kyi,” The Washington Post,” January 31, 2021, https://www.washingtonpost.com/world/asia_pacific/myanmar-aung-sun-suu-kyi-arrest/2021/01/31/c780ce6a-6419-11eb-886d-5264d4ceb46d_story.html
    10    Bill Chappell and Jaclyn Diaz, “Myanmar Coup: With Aung San Suu Kyi Detained, Military Takes Over Government,” NPR, February 1, 2021,  https://www.npr.org/2021/02/01/962758188/myanmar-coup-military-detains-aung-san-suu-kyi-plans-new-election-in-2022.
    11    World Report 2023: Events of 2022,” Human Rights Watch, 2023, https://www.hrw.org/world-report/2023/country-chapters/myanmar; Frida Ghitis, “As Myanmar’s Crisis Gets Bloodier, the World Still Looks Away,” World Politics Review, September 29, 2022, https://www.worldpoliticsreview.com/myanmar-civil-war-massacre-coup/; Nada Al- Nashif, “Oral Update on the Human Rights Situation in Myanmar to the Human Rights Council,” September 26, 2022, https://www.ohchr.org/en/statements-and-speeches/2022/09/oral-update-human-rights-situation-myanmar-human-rights-council; “Myanmar: Military’s Use of Banned Landmines in Kayah State Amounts to War Crimes, Human Rights Watch, July 20, 2022, https://www.amnesty.org/en/latest/news/2022/07/myanmar-militarys-use-of-banned-landmines-in-kayah-state-amounts-to-war-crimes/.
    12    “Myanmar Shadow Government Calls for Uprising against Military,” Al Jazeera, September 7, 2021, https://www.aljazeera.com/news/2021/9/7/myanmar-shadow-government-launches-peoples-defensive-war; Yun Sun, The Civil War in Myanmar: No End in Sight,” The Brookings Institution, February 13, 2023, https://www.brookings.edu/articles/the-civil-war-in-myanmar-no-end-in-sight/.  
    13    “Myanmar’s Troubled History: Coups, Military Rule, and Ethnic Conflict,” Council on Foreign Relations, last updated January 31, 2022, https://www.cfr.org/backgrounder/myanmar-history-coup-military-rule-ethnic-conflict-rohingya.
    14    Richard M. Gibson and John B. Haseman, “Prospects for Controlling Narcotics Production and Trafficking in Myanmar,” Contemporary Southeast Asia 25 (2003): 1–19, http://www.jstor.org/stable/25798625.
    15    Cormac Mangan, Private Enterprises in Fragile Situations: Myanmar, International Growth Centre, June 14, 2018,https://www.theigc.org/sites/default/files/2018/06/Myanmar-case-study.pdf; “As Myanmar Opens Up, A Look Back On A 1988 Uprising,” NPR, August 8, 2013, https://www.npr.org/2013/08/08/209919791/as-myanmar-opens-up-a-look-back-on-a-1988-uprising.
    16    The Junta Retained Control and Authority over Security, Foreign Relations, and other Domestic Policy Issues; “Myanmar’s Troubled History: Coups.” 
    17    Human Rights Watch, World Report 2023.
    18    “Briefing Paper: Effective Control in Myanmar,” Special Advisory Council for Myanmar, September 5, 2022, https://specialadvisorycouncil.org/2022/09/statement-briefing-effective-control-myanmar/.
    19    Kim Jolliffe, “Myanmar’s Military Is No Longer in Effective Control of the Country,” May 3, 2023, https://thediplomat.com/2023/05/myanmars-military-is-no-longer-in-effective-control-of-the-country/.; “Situation Maps: The Burma Army’s Authority Deteriorates as It Struggles to Maintain Control within the Country | Free Burma Rangers,” April 24, 2023, https://www.freeburmarangers.org/2023/04/24/situation-maps-the-burma-armys-authority-deteriorates-as-it-struggles-to-maintain-control-within-the-country/.
    20    Yun Sun, The Civil War in Myanmar.
    21    “Myanmar’s Ethnic Armies, Resistance Forces Plan to Boost Operations,” VOA News, February 17, 2022, https://www.voanews.com/a/myanmar-ethnic-armies-resistance-forces-plan-to-boost-operations/6445835.html.
    22    “Karen National Union (KNU),” Myanmar Peace Monitor, June 6, 2013, https://mmpeacemonitor.org/1563/knu/.
    23    Shona Loong, The Karen National Union in Post-Coup MyanmarStimson Center, April 7, 2022, https://www.stimson.org/2022/the-karen-national-union-in-post-coup-myanmar/.
    24    Priscilla A. Clapp and Jason Towever, The Myanmar Army’s Criminal Alliance, The United States Institute of Peace, March 7, 2022, https://www.usip.org/publications/2022/03/myanmar-armys-criminal-alliance.
    25    Jason Tower and Priscilla A. Clapp, Myanmar: Casino Cities Run on Blockchain Threaten Nation’s SovereigntyThe United States Institute of Peace, July 30, 2020, https://www.usip.org/publications/2020/07/myanmar-casino-cities-run-blockchain-threaten-nations-sovereignty.
    26    Sreeparna Banerjee and Tarushi Singh Rajaura, Growing Chinese Investments in Myanmar Post-CoupObserver Research Foundation, accessed November 9, 2021, https://www.orfonline.org/expert-speak/growing-chinese-investments-in-myanmar-post-coup/.
    27    Przemysław Gasztold and Michał Lubina, “Myanmar One Year after the Coup. Interview with Professor Michał Lubina,” Security and Defence Quarterly 38 (June 30, 2022): 86–93, https://securityanddefence.pl/Myanmar-one-year-after-the-coup-Interview-with-Professor-Michal-Lubina,149827,0,2.html.
    28    International Crisis Group, “Crowdfunding a War.”
    29    “Myanmar Junta Restricts Mobile Money Payments to Cut Resistance Funding,” The Irrawaddy, August 18, 2022, https://www.irrawaddy.com/news/burma/myanmar-junta-restricts-mobile-money-payments-to-cut-resistance-funding.html.
    30    Sebastian Strangio, “Myanmar Junta Set to Pass Draconian Cyber Security Law,” The Diplomat, January 31, 2022, https://thediplomat.com/2022/01/myanmar-junta-set-to-pass-draconian-cyber-security-law/.
    31    Annual Democracy Report 2019: Democracy Facing Global Challenges, V-Dem Institute, May 2019, https://www.v-dem.net/documents/16/dr_2019_CoXPbb1.pdf.
    32    V-Dem Institute, Democracy Facing Global Challenges.; Cormac Mangan, Private Enterprises in Fragile Situations: MyanmarInternational Growth Centre, June 14, 2018, https://www.theigc.org/publications/private-enterprises-fragile-situations-myanmar.
    33    Gibson and Haseman, “Prospects for Controlling Narcotics Production.”
    34    Patrick Meehan, “Drugs, Insurgency and State-Building in Burma: Why the Drugs Trade Is Central to Burma’s Changing Political Order,” Journal of Southeast Asian Studies 42 (2011): 376–404, http://www.jstor.org/stable/23020336.  
    35    Gibson and Haseman, “Prospects for Controlling Narcotics Production.”
    36    Meehan “Drugs, Insurgency and State-Building,” 376–404.
    37    ”Fire and Ice: Conflict and Drugs in Myanmar’s Shan State,” International Crisis Group, January 8, 2019, https://www.crisisgroup.org/asia/south-east-asia/myanmar/299-fire-and-ice-conflict-and-drugs-myanmars-shan-state
    38    “Online Scam Operations and Trafficking into Forced Criminality in Southeast Asia: Recommendations for a Human Rights Response,” United Nations Human Rights Office of the High Commissioner, 2023, https://bangkok.ohchr.org/wp-content/uploads/2023/08/ONLINE-SCAM-OPERATIONS-2582023.pdf.
    39    Debby S. W. Chan, “As Myanmar Coup Intensifies Regional Human Trafficking, How Will China Respond?,” The Diplomat, August 23, 2022, https://thediplomat.com/2022/08/as-myanmar-coup-intensifies-regional-human-trafficking-how-will-china-respond/.
    40    Han, “The Business of the Kayin State Border Guard Force.”; “Chit Lin Myaing Mining & Industry Co.,Ltd”, OpenCorporates, last updated July 12, 2017, https://opencorporates.com/companies/mm/1605-2005-2006.
    41    “Commerce and Conflict: Navigating Myanmar’s China Relationship,” International Crisis Group, March 30, 2020, https://www.crisisgroup.org/asia/south-east-asia/myanmar/305-commerce-and-conflict-navigating-myanmars-china-relationship; The Myanmar government officially recognizes and advertises three different SEZs within its borders: Kyauk Phyu in Rakhine State, Dawei in the Thanintharyi Region, and the Thilawa in Yangon Region. The three Burmese SEZs host mainly vehicle manufacturing plants from Japan, amid other foreign investment from Singapore, Thailand, and others, totaling $362.28 million in the 2018-19 fiscal year; “Special Economic Zones,” Directorate of Investment and Company Administration, accessed October 17, 2023, http://www.dica.gov.mm/en/special-economic-zones; Yuichi Nitta, “Race for Myanmar’s Auto Market Heats up as Toyota Builds Factory,” – Nikkei Asia, accessed October 17, 2023, https://asia.nikkei.com/Business/Automobiles/Race-for-Myanmar-s-auto-market-heats-up-as-Toyota-builds-factory.; “Japan Tops List of Foreign Investors in Myanmar SEZs,” November 2, 2019, https://www.bangkokpost.com/business/1769224/japan-tops-list-of-foreign-investors-in-myanmar-sezs.
    42    International Crisis Group, “Commerce and Conflict.”
    43    Committee on Appropriations, Department of State, Foreign Operations, and Related Programs Appropriations Bill of 2020, S. Rep. No. 116-126 (2019).
    44    Chan, “As Myanmar Coup Intensifies.”
    45    Jason Tower and Priscilla A. Clapp, Chinese Crime Networks Partner with Myanmar Armed Groups, United States Institute of Peace, April 20, 2020, https://www.usip.org/publications/2020/04/chinese-crime-networks-partner-myanmar-armed-groups; United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.”
    46    Zachary Abuza, “Will the First Myanmar Border Guard Defection Have a Contagion Effect?,” Radio Free Asia, June 27, 2023, https://www.rfa.org/english/commentaries/myanmar-border-guard-06272023092414.html.
    47    Frontier, “With Conflict Escalating.”
    48    “The Massive Phone Scam Problem Vexing China and Taiwan,” BBC News, April 22, 2016, https://www.bbc.com/news/world-asia-36108762; Tessa Wong, Bui Thu, and Lok Lee, “Cambodia Scams: Lured and Trapped into Slavery in South East Asia,” BBC News, September 21, 2022, https://www.bbc.com/news/world-asia-62792875.
    49    Wong, Thu, and Lee, “Cambodia Scams;” “Cryptocurrency Scam – Pig Butchering,” Michigan Department of Attorney General, accessed October 17, 2023, https://www.michigan.gov/ag/consumer-protection/consumer-alerts/consumer-alerts/scams/cryptocurrency-scam-pig-butchering.
    50    Naw Betty Han and  Thomas Kean, “On the Thai-Myanmar Border, COVID-19 Closes a Billion-Dollar Racket,” Frontier Myanmar, June 6, 2020, https://www.frontiermyanmar.net/en/on-the-thai-myanmar-border-covid-19-closes-a-billion-dollar-racket/; United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.” The Bali Process Regional Support Office, “Trapped in Deceit.”
    51    Matt Blomberg, “Chinese Scammers Enslave Jobless Teachers and Tourists in Cambodia,” Reuters, September 16, 2021, https://www.reuters.com/article/cambodia-trafficking-unemployed/feature-chinese-scammers-enslave-jobless-teachers-and-tourists-in-cambodia-idUSL8N2PP21I; Wong, Thu, and Lee, “Cambodia Scams;” “Cambodian Police Raid Alleged Cybercrime Trafficking Compounds,” September 21, 2022, Reuters, https://www.reuters.com/world/asia-pacific/cambodian-police-raid-alleged-cybercrime-trafficking-compounds-2022-09-21/; United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.”
    52    The Bali Process Regional Support Office, “Trapped in Deceit.”
    53    United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.  
    54    “Transnational Organized Crime in Southeast Asia: Evolution, Growth and Impact,” United Nations Office on Drugs and Crime, July 18, 2019, https://www.unodc.org/roseap/uploads/archive/documents/Publications/2019/SEA_TOCTA_2019_web.pdf.
    55    “Transnational Crime and Geopolitical Contestation along the Mekong,” International Crisis Group, August 18, 2023, https://www.crisisgroup.org/asia/south-east-asia/myanmar/332-transnational-crime-and-geopolitical-contestation-mekong.
    56    Matt Blomberg, “Chinese Scammers Enslave Jobless Teachers and Tourists in Cambodia,” Reuters, September 16, 2021, https://www.reuters.com/article/cambodia-trafficking-unemployed/feature-chinese-scammers-enslave-jobless-teachers-and-tourists-in-cambodia-idUSL8N2PP21I; Tessa Wong, Bui Thu, and Lok Lee, “Cambodia Scams: Lured and Trapped into Slavery in South East Asia, BBC News, September 21, 2022, https://www.bbc.com/news/world-asia-62792875
    57    “Myanmar profile,” Global Organized Crime Index,” accessed October 23, 2023, https://ocindex.net/country/myanmar.
    58    US Department of State, Trafficking in Persons Report, July 2022, https://www.state.gov/wp-content/uploads/2022/10/20221020-2022-TIP-Report.pdf.
    59    Priscilla A. Clapp and Jason Tower, Myanmar’s Criminal Zones: A Growing Threat to Global Security, The United States Institute of Peace, November 9, 2022, https://www.usip.org/publications/2022/11/myanmars-criminal-zones-growing-threat-global-security
    60    “The FBI Warns of False Job Advertisements Linked to Labor Trafficking at Scam Compounds,” Federal Bureau of Investigation, May 22, 2023, https://www.ic3.gov/Media/Y2023/PSA230522#fn1.
    61    Blomberg, “Chinese Scammers.”
    62    Wong, Thu, and Lee, “Cambodia Scams;” Reuters, “Cambodian Police Raid,”; United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.”
    63    “More than 50 Malaysians Held Captive by Syndicates in Cambodia, Myanmar, Vietnam and Thailand, says MCA’s Michael Chong,” Malay Mail, April 7, 2022, https://www.malaymail.com/news/malaysia/2022/04/07/more-than-50-malaysians-held-captive-by-syndicates-in-cambodia-myanmar-viet/2052138; “Malaysian Job Scam Victim Tells of ‘Prison’, Beatings in Myanmar,” The Straits Times, May 18, 2022, https://www.straitstimes.com/asia/se-asia/job-scam-victim-tells-of-prison-beatings-in-myanmar; Sokvy Rim, “The Social Costs of Chinese Transnational Crime in Sihanoukville,” The Diplomat, July 5, 2022, https://thediplomat.com/2022/07/the-social-costs-of-chinese-transnational-crime-in-sihanoukville/; Blomberg, “Chinese Scammers;” Wong, Thu, and Lee, “Cambodia Scams;” The Economist, “The Gangs that Kidnap;”, Lindsey Kennedy, Nathan Paul Southern, and Huang Yan, “Cambodia’s Modern Slavery Nightmare: the Human Trafficking Crisis Overlooked by Authorities” The Guardian, November 2, 2022, https://www.theguardian.com/world/2022/nov/03/cambodias-modern-slavery-nightmare-the-human-trafficking-crisis-overlooked-by-authorities; The Bali Process Regional Support Office, “Trapped in Deceit;” United Nations Human Rights Office of the High Commissioner, “Online Scam Operations.”
    64    Chan, “As Myanmar Coup Intensifies;” Wong, Thu, and Lee, “Cambodia Scams;” AFP, “Inside the ‘Living Hell’ of Cambodia’s Scam Operations,” France 24, November 9, 2022, https://www.france24.com/en/live-news/20221109-inside-the-living-hell-of-cambodia-s-scam-operations.
    65    Lily Hay Newman, “Hacker Lexicon: What Is a Pig Butchering Scam?” WIRED, January 2, 2023, https://www.wired.com/story/what-is-pig-butchering-scam/; Cezary Podkul, “What’s a Pig Butchering Scam? Here’s How to Avoid Falling Victim to One,” ProPublica, September 19, 2022, https://www.propublica.org/article/whats-a-pig-butchering-scam-heres-how-to-avoid-falling-victim-to-oneThe Economist, “The Gangs that Kidnap.”
    66    Newman, “Hacker Lexicon;” An analysis of one pig butchering scam network’s cryptocurrency wallets by TRM Labs showed that victim funds are usually sent in the form of Tether on Ethereum, with a smaller percentage using bitcoin or Tether (USDT) on Tron, see “Pig Butchering Scams: What the Data Shows,” TRM,” accessed October 17, 2023, https://www.trmlabs.com/post/pig-butchering-scams-what-the-data-shows.
    67    ProPublica, “What’s a Pig Butchering Scam?” 
    68    “National Cybersecurity Strategy,” White House, March 2, 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    69    “Internet Crime Report 2021,” Federal Bureau of Investigation, March 2021, https://www.ic3.gov/media/PDF/AnnualReport/2021_IC3Report.pdf.
    70    Cannabiccino, “Statistics of Crypto-Romance / Pig-Butchering Scam,” Global Anti-Scam Organisation, updated July 7, 2022, https://www.globalantiscam.org/post/statistics-of-crypto-romance-pig-butchering-scam.
    71    Robert McMillan, “A Text Scam Called ‘Pig Butchering’ Cost Her More Than $1.6 Million”Wall Street Journal, October 20, 2022, https://www.wsj.com/articles/a-text-scam-called-pig-butchering-cost-her-more-than-1-6-million-11666258201; Alastair McCready, “From Industrial-Scale Scam Centers, Trafficking Victims Are Being Forced to Steal Billions,” VICE News, July 13, 2022, https://www.vice.com/en/article/n7zb5d/pig-butchering-scam-cambodia-trafficking.
    72    Brian Krebs, “Massive Losses Define Epidemic of ‘Pig Butchering’,” Krebs on Security,” July 21, 2022, https://krebsonsecurity.com/2022/07/massive-losses-define-epidemic-of-pig-butchering/.
    73    McCready, “From Industrial-Scale Scam Centers.”
    74    Sean Gallagher, “Sour Grapes: Stomping on a Cambodia-Based ‘Pig Butchering’ Scam,” Sophos News (blog), February 28, 2023, https://news.sophos.com/en-us/2023/02/28/sour-grapes-stomping-on-a-cambodia-based-pig-butchering-scam/
    75    “Forced to Eat Rats and Pork, Malaysian Job-scam Victim Recounts Harrowing Captivity in Myanmar,” Coconuts KL, January 23, 2023, https://coconuts.co/kl/news/forced-to-eat-rats-and-pork-malaysian-job-scam-victim-recounts-harrowing-ordeal-during-captivity-in-myanmar/; Ouch Sony, “3,000 Thais to Be Repatriated From Cambodian Scam Compounds: Thai Police,” VOD, March 29, 2022, https://vodenglish.news/3000-thais-to-be-repatriated-from-cambodian-scam-compounds-thai-police/; Yan Naing, “Chinese Gangs Exploiting Vulnerable People Across Southeast Asia,” The Irrawaddy, May 2, 2022, https://www.irrawaddy.com/opinion/guest-column/chinese-gangs-exploiting-vulnerable-people-across-southeast-asia.html; Sony, “3,000 Thais to Be Repatriated.”
    76    International Crisis Group, “Commerce and Conflict;” Hillary Leung, “8 Hongkongers Missing in Myanmar as City Sets up Taskforce to Investigate Alleged Southeast Asia Job Scam,” Hong Kong Free Press, updated August 22, 2022, https://hongkongfp.com/2022/08/18/8-hongkongers-missing-in-myanmar-as-city-sets-up-taskforce-to-investigate-southeast-asia-job-scam-trafficking/; Chan, “As Myanmar Coup Intensifies;” AFP, “Hong Konger ‘Kidnapped’ by SE Asia Scam Ring Pleads for Help,” France 24, August 24, 2022, https://www.france24.com/en/live-news/20220824-hong-konger-kidnapped-by-se-asia-scam-ring-pleads-for-help; Raul Dancel, “8 Filipinos Rescued from Myanmar Syndicate Running Cryptocurrency Scams,” The Straits Times, February 13, 2023, https://www.straitstimes.com/asia/se-asia/8-filipinos-rescued-from-myanmar-syndicate-running-cryptocurrency-scams.
    77    “Inside the ‘living hell’” ; Jintamas Saksornchai and Cindy Liu, “Scam Workers; Wong, Thu, and Lee, “Cambodia Scams.”
    78    Kennedy, Southern, and Yan, “Cambodia’s Modern Slavery.”
    79    Jintamas Saksornchai and Cindy Liu, “Scam Workers Relocated From Cambodia to Laos, Myanmar,” VOD, October 24, 2022, https://vodenglish.news/scam-workers-relocated-from-cambodia-to-laos-myanmar/; part of VOD investigative series, Enslaved: Workers Trapped in Cambodian Human-trafficking Hubs are Forced to Perpetuate Massive Global Scams.  
    80    Frontier, “With Conflict Escalating.”
    81    Frontier, “With Conflict Escalating.”
    82    Jason Tower and Priscilla A. Clapp, Myanmar: Casino Cities Run on Blockchain Threaten Nation’s Sovereignty, The United States Institute of Peace, July 30, 2020, https://www.usip.org/publications/2020/07/myanmar-casino-cities-run-blockchain-threaten-nations-sovereignty.
    83    “Myanmar: Thai State-Owned Company Funds Junta,” Human Rights Watch, May 25, 2021, https://www.hrw.org/news/2021/05/25/myanmar-thai-state-owned-company-funds-junta.
    84    Dominic Faulder, “Asia’s Scamdemic: How COVID-19 Supercharged Online Crime,” Nikkei Asia, November 16, 2022, https://asia.nikkei.com/Spotlight/The-Big-Story/Asia-s-scamdemic-How-COVID-19-supercharged-online-crime.
    85    “Myanmar Junta Restricts Mobile Money Payments,” The Irrawaddy; “Shwe Kokko Crime Hub Attacked,” The Irrawaddy; Gary Warner, “Please Stop Calling All Crypto Scams ‘Pig Butchering!,’” Security Boulevard, August 1, 2022, https://securityboulevard.com/2022/08/please-stop-calling-all-crypto-scams-pig-butchering/.
    86    Feliz Solomon, “A Casino Kingpin Pitched a City in Myanmar—Police Say He Helped Build a Crime Haven,” Wall Street Journal,” September 29, 2022, https://www.wsj.com/articles/a-casino-kingpin-pitched-a-city-in-myanmarpolice-say-he-helped-build-a-crime-haven-11664450817
    87    Poppy McPherson and Panu Wongcha-Um, “Myanmar Junta Chief’s Family Assets Found in Thai Drug Raid, Sources Say,” The Japan Times, January 11, 2023, https://www.japantimes.co.jp/news/2023/01/11/asia-pacific/myanmar-junta-assets/.
    88    Hein Htoo Zan, “Yangon Guerrillas Kill Myanmar Junta Money Laundering Chief,” The Irrawaddy, March 25 2023, https://www.irrawaddy.com/news/yangon-guerrillas-kill-myanmar-junta-money-laundering-chief.html.
    89    “Kawthoolei Army: How a Broken System and a Disrespect for the Rules of Law in the KNU Gave Birth to Another Armed Group in Karen State,” Karen News, August 2, 2022, https://karennews.org/2022/08/kawthoolei-army-how-a-broken-system-and-a-disrespect-for-the-rules-of-law-in-the-knu-gave-birth-to-another-armed-group-in-karen-state/; MPA, “The KNLA and the Kawthoolei Army (KTLA) Issued Parallelly Statements, and the Attitude of Each Was Tense,” MPA (blog), February 1, 2023, https://mpapress.com/news/17009/.
    90    The Irrawaddy, “Shwe Kokko Crime Hub Attacked.”
    91    “Into the Lion’s Den: The Failed Attack on Shwe Kokko,” Frontier Myanmar, May 11, 2023, https://www.frontiermyanmar.net/en/into-the-lions-den-the-failed-attack-on-shwe-kokko/; “Heavy Fighting between the Military Council and the KNLA in Shwe Kukko,”Burmese VOA, April 7, 2023, https://burmese.voanews.com/a/7040280.html.  
    92    Panu Wongcha-um and Poppy McPherson, “Myanmar Activists, Victims File Criminal Complaint in Germany over Alleged Atrocities,” Reuters,” January 24, 2023, https://www.reuters.com/world/asia-pacific/myanmar-activists-victims-file-criminal-complaint-germany-over-alleged-2023-01-24/; “Foreign Companies in Myanmar Struggle with Shortage of Dollars,” Nikkei Asia, September 8, 2022, https://asia.nikkei.com/Spotlight/Myanmar-Crisis/Foreign-companies-in-Myanmar-struggle-with-shortage-of-dollars.
    93    Banerjee and Rajaura, “Growing Chinese Investments;” Aradhana Aravindan, “Myanmar’s Economic Woes Due to Gross Mismanagement since Coup – U.S. Official,” Reuters,” October 20, 2021, https://www.reuters.com/world/asia-pacific/myanmars-economic-woes-due-gross-mismanagement-since-coup-us-official-2021-10-20/.
    94    US Department of Treasury. ”Treasury Sanctions Officials and Military-Affiliated Cronies in Burma Two Years after Military Coup.” Department of Treasury press release, January 31, 2023, https://home.treasury.gov/news/press-releases/jy1233; Michael Martin, “News from the Front: Observations from Myanmar’s Revolutionary Forces,” The Center for Strategic and International Studies, December 5, 2022, https://www.csis.org/analysis/news-front-observations-myanmars-revolutionary-forces.
    95    Michael Martin, “Is Myanmar’s Military on Its Last Legs?,” The Center for Strategic and International Studies, June 21, 2022, https://www.csis.org/analysis/myanmars-military-its-last-legs.
    96    “Qin Gang: China Hopes Myanmar Will Crack Down on Internet Fraud,” Ministry of Foreing Affairs of the Republic of China, accessed October 23, 2023, https://www.fmprc.gov.cn/eng/wjb_663304/wjbz_663308/activities_663312/202305/t20230504_11070146.html; Sylvie Zhuang, “China Urges Myanmar to Crack Down on Telecoms Fraud Luring Victims over Border,” South China Morning Post, March 24, 2023, https://www.scmp.com/news/china/politics/article/3214714/china-urges-myanmar-crack-down-telecoms-frauds-luring-victims-across-border.
    97    “Powerful Ethnic Militia in Myanmar Repatriates 1,200 Chinese Suspected of Involvement in Cybercrime,” Associated Press, updated September 9, 2023, https://apnews.com/article/myanmar-cybercrime-wa-online-scams-58082a9f93a24406fa5c3cfbc647b20e.
    98    “INTERPOL Issues Global Warning on Human Trafficking-Fueled Fraud,” INTERPOL,  June 7, 2023, https://www.interpol.int/en/News-and-Events/News/2023/INTERPOL-issues-global-warning-on-human-trafficking-fueled-fraud.
    99    “The FBI Warns of a Spike in Cryptocurrency Investment Schemes,” Federal Bureau of Investigation, March 14, 2023, https://www.ic3.gov/Media/Y2023/PSA230314.
    100    “California Prosecutor Erin West on the Massive Wealth Transfer to Southeast Asia from a Crypto Scam Called ‘Pig Butchering,’” CyberScoop (blog), July 12, 2023, https://cyberscoop.com/erin-west-safe-mode-pig-butchering/.
    101    “Latest Scam Websites Information | Global Anti-Scam Org,” Global Anti Scam Org, accessed October 23, 2023, https://www.globalantiscam.org/scam-websites.
    102    TRM Insights, “Pig Butchering Scams.” 
    103    Daniel Mikkelsen, Shreyash Rajdev, and Vasiliki Stergiou, “Financial Crime Risk Management in Digital Payments, McKinsey, June 24, 2022, https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/managing-financial-crime-risk-in-digital-payments
    104    “U.S. Seeks Forfeiture of Crypto from ‘$2M Asian “Pig Butchering” RiotX Scam’,” OffshoreAlert, September 12, 2022, https://www.offshorealert.com/u-s-seeks-forfeiture-of-crypto-derived-from-2m-pig-butchering-riotx-scam/.
    105    US Department of Justice. “Justice Dept. Seizes Over $112M in Funds Linked to Cryptocurrency Investment Schemes, With Over Half Seized in Los Angeles Case. US Department of Justice press release,” April 3, 2023, https://www.justice.gov/usao-cdca/pr/justice-dept-seizes-over-112m-funds-linked-cryptocurrency-investment-schemes-over-half.
    106    “US, EU Add More Sanctions as Myanmar Violence Deepens,” Al Jazeera, November 9, 2022, https://www.aljazeera.com/news/2022/11/9/us-eu-add-more-sanctions-as-myanmar-violence-deepens.
    107    Burma Act of 2021, H.R. 5497, 117th Cong. (2021).
    108    Rohan Goswami, “That Simple ‘hi’ Text from a Stranger Could Be the Start of a Scam That Ends up Costing You Millions,” CNBC, May 2, 2023, https://www.cnbc.com/2023/05/02/pig-butchering-scammers-make-billions-convincing-victims-of-love.html; Keith B. Anderson, “To Whom Do Victims of Mass-Market Consumer Fraud Complain?,” SSRN, May 24, 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3852323.
    109    Zhao Ziwen, “Police in China and Myanmar Detain 269 in Cyber Scam Crackdown,” South China Morning Post, September 5, 2023, https://www.scmp.com/news/china/diplomacy/article/3233506/police-china-and-myanmar-detain-269-cyber-scam-crackdown.
    110    Gibson and Haseman, “Prospects for Controlling Narcotics Production.”

    The post This job post will get you kidnapped: A deadly cycle of crime, cyberscams, and civil war in Myanmar appeared first on Atlantic Council.

    ]]>
    Homogeneity and concentration in the browser https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/homogeneity-and-concentration-in-the-browser/ Mon, 30 Oct 2023 16:25:00 +0000 https://www.atlanticcouncil.org/?p=822204 Web browsers are the gateway to the internet. As browser developers replicate design features and concentrate around shared underlying technologies, they create cybersecurity risks with the potential to impact many internet users at once.

    The post Homogeneity and concentration in the browser appeared first on Atlantic Council.

    ]]>
    Web browsers serve as the gateway to the internet, offering users the ability to easily access and navigate information online. Whether operating on a desktop computer or a smartphone, the core task of a browser is simple: locate, retrieve, and display web pages and their contents. For many people, the browser-rendered interface—complete with an address bar, tabs, and a bookmark menu—is synonymous with the internet itself.

    Browsers routinely retrieve information from a wide array of sites, many of which are unsecure or unvetted, creating inherent security risks to their operation. Browser security is of utmost importance because people rely on browsers to be able to connect to the internet and its myriad services. If a major browser ceased to function, millions of people would be unable to access email, search engines, online banking, social media, and many other services and content. Yet, browser security is complex and often insufficiently discussed.

    Cybersecurity flaws can enable attackers to steal information stored within the browser, such as session cookies that provide access to online accounts where a user is automatically logged in. More dangerous software flaws can potentially allow an attacker to escape the browser and directly access the device running the software, whether it be a personal smartphone or a company workstation. While no consumer-facing technologies will ever be perfectly secure, browsers’ ubiquity and importance make them attractive targets for hackers looking to reach a wide swathe of potential targets. Even more strikingly, due to a highly concentrated browser market, attackers need only target a few select software programs to maximize their impact.

    When users are tasked with downloading a web browser—or are provided a pre-installed browser on their device—there are not many options to choose from. Due to a variety of factors, the world’s most popular browsers are manufactured by a small group of companies. Google’s Chrome dominates the browser market, representing more than half of users worldwide.1. Apple’s Safari ranks second, with a quarter of users. Other browsers, such as Microsoft’s Edge and Mozilla’s Firefox, each account for less than 6 percent of users.

    On top of this market concentration, browsers are also relatively homogeneous in their technical designs. Each browser must integrate with languages and technologies that make up web pages, creating design pressures that move them toward certain shared technologies and, thus, shared risk. Furthermore, when a new feature or design element of one browser is successful, there is a tendency for other browser developers to replicate it to remain competitive. Features that were once a company’s unique innovation, like browser extensions, often become standard across the industry, generalizing both their security advantages2. and disadvantages 3across the ecosystem. While user interface design choices are the most visible to consumers, design homogeneity also exists in the foundational technologies that enable browsers to work. For example, Google Chrome’s functionality is built on Chromium, a free and open-source software project. Given Chromium’s high performance and ease of use, a range of other browsers including Microsoft Edge, Opera, and Yandex Browser have also adopted it. Whether a user is browsing the web on Chrome or another platform, there is a high probability that Chromium is operating behind the scenes, leading to a worldwide web-browsing ecosystem that is heavily dependent on the standards and architectural norms of a singular company, Google, which maintains the project.

    This homogeneity in browsers—where consumers only use a few products, each of which are powered by shared or similar technologies—means that sources of insecurity can impact many users at once. The fact that browsers are one of the most-used consumer-facing technologies makes this systemic set of security problems significant.

    Homogeneity, Concentration, and Risk

    Homogenous design can have both positive and negative implications for browser security and cyber risk. For example, design homogeneity can increase the pressure for new entrants or outlier competitors to adopt others’ positive security features—for example, many browsers now insert a logo beside the URL bar to indicate that a website connection is HTTPS encrypted. New entrants, to make their product familiar to users, are likely to mimic the feature, which subsequently lets new users of that browser easily understand how to spot an insecure connection. Browser developers would certainly also argue that their greater market share enables greater investments in security because they have the resources to maintain large and well-resourced security teams. If there were hundreds of browsers that each controlled a small percentage of the browser market, each browser company might have a smaller budget for security and less technical expertise at its disposal.

    Simultaneously, homogeneity in browser usage and design can concentrate risk. Single point-of-failure risk in the cybersecurity market is not a new idea. For example, when the Meltdown CPU vulnerability was discovered in 2018, millions of devices were exposed because they all used Intel CPUs, or central processing units.4. Daniel Geer and colleagues’ 2003 paper, “CyberInSecurity: The Cost of Monopoly” was a foundational work on this issue, looking at product monoculture, Microsoft’s market dominance, and the resulting effects on cybersecurity and national security.5. Because most computers at the time ran Microsoft Windows, the authors argued, “most of the world’s computers [were] vulnerable to the same viruses and worms at the same time.”Greer et al., 2003 Microsoft, they continued, made this worse by locking users into its platform through network effects (like the power of owning Microsoft Word) and making it difficult to exchange data, documents, and other information outside the Microsoft product ecosystem.Greer et al., 2003 The result, Geer and coauthors wrote, was a “monoculture of networked computers” that were “a convenient and susceptible reservoir of platforms from which to launch attacks” that can cascade to other parts of the ecosystem.Greer et al., 2003Governments should intervene to “blunt the monoculture risk,” enforce diversity of platforms, and “reap a side benefit of increased market reliance on interoperability” instead of product lock-in.Greer et al., 2003

    Bruce Schneier, one of the coauthors, wrote a follow-up essay in 2010—responding to criticism from computer security researcher Marcus Ranum who said, while he agreed with many of the study’s conclusions, the use of “monoculture” was “distorting the truth by using an analogy.”6 No monoculture exists, Ranum wrote, when so many different firewall rules, software patch levels, browser settings, and other factors inform a device’s security alongside the use of a particular platform like Microsoft Windows.Ranum, 2022Some of the debate focused on the analogy of a monoculture, in the biological sense of the word, and Schneier said there were flaws in the monoculture analysis, such as downplaying the costs of maintaining security in the face of product diversity.7 Schneier also reiterated that “if everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that operating system or software or protocol, a single exploit can affect everyone.”Schneier, 2020 Whether “monoculture” was the right analogy or not, the discussion around it underscored concern about vulnerabilities in a single, dominant product cascading widely.

    Mozilla published a paper in September 2022 related to this notion of market dominance leading to internet risk—broadened beyond just one company. It noted there are “only three main browser engine providers left,” Google, Apple, and Mozilla, but “Apple’s engine only runs on Apple devices.”8 Without Mozilla, it said, Google would be the only device-agnostic provider available, creating a single point of failure in the ecosystem.Petrie, Shah, and Amlani, 2022 The study argued that when operating systems load their proprietary browsers onto their devices (like Apple with Safari), it harms consumers in several ways: limiting consumers’ choices; lowering product quality (because companies need not compete on quality, but instead only leverage their dominance); decreasing innovation (as disruptive competitors struggle to get foothold); harming privacy; and forcing consumers into unfair contracts.Petrie, Shah, and Amlani, 2022 Put simply, “without browser diversity, a single company’s influence can shape the internet.”Petrie, Shah, and Amlani, 2022 This statement could be modified to emphasize that without product diversity in the browser vertical, only a few companies can drive security decisions for billions of internet users.

    The point is not that people should use less popular products under the assumption that they are less frequently attacked.9 Instead, it is that companies that dominate a product vertical impact many people with their security decisions. That comes with a certain leverage over the online ecosystem. Said companies also pose a risk of a cascading security event, where a vulnerability in just one product is discovered and exploited to impact millions of people at once.

    If multiple companies converge around the same product design—which we call design homogeneity—products created by different manufacturers can have many design and technical similarities. Then, these products can replicate faulty design decisions or vulnerabilities across the internet ecosystem. As Dan Geer put it, “when deployment is wide enough, it takes on the misfeatures of monoculture10 The HeartBleed vulnerability in 2014 underscored this problem. It was a significant flaw in the OpenSSL encryption technology deployed widely across websites; attackers could exploit that single flaw to trick a server into divulging secure information.11 Hundreds of thousands of major websites used the OpenSSL technology12/ and—despite using different webhosts and having other variations—immediately became vulnerable to attacks, against which both they and their website visitors had to update to protect. In another example, many browsers allow (or even prompt) users to store their passwords directly in the browser. While this feature is convenient and widespread, it can also leave users vulnerable to information-stealing malware.13. This design decision, a potentially improper prioritization of usability over security, has thus replicated beyond one company or browser.

    Browsers are subject to this market concentration, homogenous design, and cascading insecurity risk. One can easily imagine an attack targeting the Chrome browser that could render roughly 60 percent of internet users unable to use their primary browser. Yes, it is likely that users have a backup browser installed, but even that backup browser could be subject to similar insecurities due to design homogeneity. For example, if an attack targeting Chrome did not exploit a vulnerability in code unique to Chrome but instead targeted Chromium, the system used by Chrome and many other major browsers, the compromise’s impact could be even more widespread. Such an attack could affect everything from Microsoft’s Edge browser to the Russian internet giant’s Yandex Browser.

    It’s not just Chromium. Because of the nature of the internet, browsers often face concentrated risks arising from the need to interpret or integrate common website technologies. For example, JavaScript (JS) is a primary method through which web developers build interactive applications, and Google’s V8 JS engine enables all Chromium-based browsers to execute JS code. As a result, V8 has set global technical norms for how browsers compile and interpret JS. With attackers frequently launching JavaScript-based attacks,14. and the V8 engine having historically been the location of memory-based security vulnerabilities,15. the convergence of a singular JS engine to power the entire web-browsing ecosystem poses a systemic risk. Another historical example is Adobe Flash, which was the default tool for rendering dynamic content across browsers for decades. Before it was discontinued in 2020, hundreds of vulnerabilities were disclosed every year and the tool was considered a major source of browser insecurity.16

    Chromium’s widespread usage additionally underscores a reality of homogeneity: it is not always obvious that homogeneity is there. Users might reasonably believe they have some increased dependence on Google products when they use Google’s Chrome browser to navigate the internet, Google’s Gmail application to send email, and the Google Drive suite of products to build presentations and collaborate on work documents. However, internet users with the Microsoft Edge browser likely do not expect they are still, in part, relying on code built by Google. The same goes for individuals using Opera or Yandex Browser; this is almost certainly true in the case of Russians using Yandex’s browser software. Even if homogeneity is not obvious on the visual interface side of a browser, it may exist on the software back-end.

    Conclusion

    The concept of homogeneity is useful in understanding how concentrated markets and design pressures in some software areas can lead to increasingly systemic cybersecurity risk. Market pressures as well as technological realities can incentivize companies to converge on a few core technologies and designs, and major errors and failures in technology can potentially cascade across the ecosystem.

    Looking ahead, policymakers working on systemic cybersecurity problems should consider how consumer-facing technologies like browsers fit into the picture alongside other critical technologies and protocols, such as internet traffic routing through the Border Gateway Protocol.17. If browsers around the world went down, it would not break the internet (i.e., consumers could still use video chats), but it would have a substantial impact on people’s ability to send and receive information online. This is an area ripe for additional research and policy analysis.

    Broadly, US and other policymakers working on competition issues should certainly consider that market concentration may have relevant implications for cybersecurity risk. At the same time, they must also acknowledge how questions of market concentration may not address other questions around the security and resilience of underlying, foundational technologies, such as Chromium or JavaScript. The offering of free and open-source software frameworks such as Chromium could benefit competition by increasing the ability of new entrants to compete while simultaneously creating single points of technological failure hidden beneath different companies and product brands. Rather than misguidedly taking this as a sign that open-source software is somehow inherently dangerous (it is not), policymakers should support more work on security for key open-source software packages through risk awareness and investment.18

    As internet-based information systems become an increasingly embedded and integral part of the modern world, browsers will continue to grow in importance as a central element of internet connectivity and information sharing. Understanding and overseeing market concentration and design homogeneity to avoid creating systemic insecurities is essential for the health of the present and future internet.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “Top Browsers Market Share,” similarweb.com, accessed August 3, 2023, https://www.similarweb.com/browsers/
    2    Lily Hay Newman, “Even the NSA and the CIA Use Ad Blockers to Stay Safe Online,” Wired, September 25, 2021, https://www.wired.com/story/security-roundup-even-cia-nsa-use-ad-blockers/
    3    Ravie Lakshmanan, “Malicious Browser Extensions Targeted Over a Million Users So Far This Year,” The Hacker News, August 17, 2022, https://thehackernews.com/2022/08/malicious-browser-extensions-targeted.html 
    4    Thomas Brewster, “Massive Intel Vulnerabilities Just Landed — And Every PC User On The Planet May Need To Update,” Forbes, January 3, 2018, https://www.forbes.com/sites/thomasbrewster/2018/01/03/intel-meltdown-spectre-vulnerabilities-leave-millions-open-to-cyber-attack/?sh=664492163932
    5    Dan Geer et al., CyberInSecurity: The Cost of MonopolyComputer & Communications Industry Association, September 2003, http://www.ccianet.org/wp-content/uploads/2003/09/cyberinsecurity%20the%20cost%20of%20monopoly.pdf
    6    Marcus Ranum, “The Monoculture Hype,” ranum.com, accessed September 26, 2022, http://www.ranum.com/security/computer_security/editorials/monoculture-hype/index.html
    7    Bruce Schneier, “Software Monoculture,” Schneier on Security, December 1, 2020, https://www.schneier.com/blog/archives/2010/12/software_monocu.html
    8    Gemma Petrie, Mika Shah, Kush Amlani, Five Walled Gardens: Why Browsers are Essential to the Internet and How Operating Systems are Holding Them Back, Mozilla, September 2022, https://research.mozilla.org/browser-competition/5wg/
    9    Roger A. Grimes, “Don’t fall for the monoculture myth,” CSO Online, April 24, 2009, https://www.csoonline.com/article/2632142/don-t-fall-for-the-monoculture-myth.html
    10    Dan Geer, “Heartbleed as Metaphor,” Lawfare, April 21, 2014, https://www.lawfaremedia.org/article/heartbleed-metaphor
    11    Timothy B. Lee, “The Heartbleed Bug, explained,” Vox, May 14, 2015, https://www.vox.com/2014/6/19/18076318/heartbleed
    12    Lee Rainie and Maeve Duggan, “Heartbleed’s Impact,” Pew Research Center, April 30, 2014, https://www.pewresearch.org/internet/2014/04/30/heartbleeds-impact/2
    13    Bill Toulas, “RedLine malware shows why passwords shouldn’t be saved in browsers,” Bleeping Computer, December 28, 2021, https://www.bleepingcomputer.com/news/security/redline-malware-shows-why-passwords-shouldnt-be-saved-in-browsers/; “Redline Stealer Targeting Accounts Saved to Web Browser with Automatic Login Feature Included,” ASEC, December 28, 2021, https://asec.ahnlab.com/en/29885/
    14    Liam Tung, “Bugs in Chrome’s JavaScript engine can lead to powerful exploits. This project aims to stop them,” ZDNet, August 3, 2021, https://www.zdnet.com/article/bugs-in-chromes-javascript-engine-can-lead-to-powerful-exploits-this-project-aims-to-stop-them/
    15    Peter Pflaster, “Type Confusion Vulnerability in Chrome V8 Javascript,” Automox, March 28, 2022, https://www.automox.com/blog/type-confusion-vulnerability-in-chrome-v8-javascript; “Vulnerability of Chrome: memory corruption via V8 Type Confusion,” Vigilance Vulnerability Reports, accessed August 1, 2023, https://vigilance.fr/vulnerability/Chrome-memory-corruption-via-V8-Type-Confusion-38089
    16    Jon Watson, “What makes Flash so insecure and what are the alternatives?” Comparitech, August 22, 2018, https://www.comparitech.com/blog/information-security/flash-vulnerabilities-security/; “Adobe Flash Vulnerability Affects Flash Player and Other Adobe Products,” U.S. Cybersecurity and Infrastructure Security Agency, January 24, 2013, https://www.cisa.gov/news-events/alerts/2009/07/23/adobe-flash-vulnerability-affects-flash-player-and-other-adobe
    17    Justin Sherman, The Politics of Internet Security: Private Industry and the Future of the WebAtlantic Council, October 5, 2020, https://www.atlanticcouncil.org/in-depth-research-reports/report/the-politics-of-internet-security-private-industry-and-the-future-of-the-web/
    18    Stewart Scott, Sara Ann Brackett, Trey Herr, and Maia Hamin, Avoiding the success trap: Toward policy for open-source software as infrastructure, Atlantic Council, February 8, 2023,https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/.

    The post Homogeneity and concentration in the browser appeared first on Atlantic Council.

    ]]>
    The 5×5—The cybersecurity implications of artificial intelligence https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-cybersecurity-implications-of-artificial-intelligence/ Fri, 27 Oct 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=696721 A group of experts with diverse perspectives discusses the intersection of cybersecurity and artificial intelligence.

    The post The 5×5—The cybersecurity implications of artificial intelligence appeared first on Atlantic Council.

    ]]>
    This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

    The arrival of ChatGPT, a chat interface built atop OpenAI’s GPT-3 model, in November 2022 provoked a frenzy of interest and activity in artificial intelligence (AI) from consumers, investors, corporate leaders and policymakers alike. Its demonstration of uncanny conversational abilities, as well as, later, the ability to write code, stoked the collective imagination as well as predictions about its likely impacts and integration into myriad technology systems and tasks.  

    The history of the field of AI stretches back to the 1950s, and more narrow machine learning models have been solving problems in prediction and analysis for nearly two decades. In fact, these models are already embedded in the cybersecurity lifecycle, most prominently in threat monitoring and detection. Yet, the emergence of the current generation of generative AI, powered by large language models, is producing markedly different capabilities than previous deep learning systems. Researchers are only beginning to explore the potential uses of generative AI systems in cybersecurity, as well as the potential threats arising from malign use or cyberattacks against generative AI systems themselves. 

    With cybersecurity playing a significant role in recently announced voluntary commitments by leading AI companies, a sweeping Executive Order on AI expected next week, and leading AI companies allowing their products to be used to construct increasingly autonomous systems, a discussion about the intersection of generative AI and cybersecurity could not be timelier. To that end, we assembled a group with diverse perspectives to discuss the intersection of cybersecurity and artificial intelligence. 

    #1 AI hype has risen and fallen in cycles with breakthrough achievements and paradigm shifts. How do large language models (LLM), and the associated hype wave, compare to previous AI paradigms? 

    Harriet Farlow, chief executive officer and founder, Mileva Security Labs; PhD candidate, UNSW Canberra:  

    “In my opinion, the excitement around large language models (LLMs) is similar [to excitement around past paradigm shifts] in that it showcases remarkable advancements in AI capabilities. It differs in that LLMs are significantly more powerful than the AI technologies of previous hype cycles. The concern I have with this hype—and I believe AI in general is already over-hyped—is that it gives the impression to non-practitioners that LLMs are the primary embodiment of AI. In reality, the natural language processing of LLMs is just one aspect of the myriad capabilities of AI, with other significant capabilities including computer vision and signal processing. My worry is that rapid adoption of AI and increasing trust in these systems, combined with the lack of awareness that AI systems can be hacked, means there are many productionized AI systems that are vulnerable to adversarial attack.”  

    Tim Fist, fellow, technology & national security, Center for a New American Security:  

    “While people’s excitement may have a similar character to previous AI ‘booms,’ such as in the 1960s, LLMs and other similar model architectures have some technical properties that together suggest the consequences of the current boom will be, to put it lightly, further reaching. These properties include task agnostic learning, in-context learning, and scaling. Unlike the AI models of yore, LLMs have impressive task performance in many domains at once—writing code, solving math problems, verbal reasoning—rather than one specific domain. Today’s ‘multimodal’ models are the next evolution of these capabilities, bringing the ability to understand and generate both natural language and images, with other modalities in the works. On top of their generality, once trained, LLMs can learn on the fly, allowing them to adapt to and perform reasonably well in novel contexts. LLMs and their multimodal cousins are AI architectures that can successfully leverage exponentially increasing amounts of computing power and data into greater and greater capabilities. This capacity means the basic recipe for more performance and generality is straightforward: just scale the inputs. This trend does not show any clear signs of slowing down.”  

    Dan Guido, chief executive officer, Trail of Bits:  

    “It is both the same and different. Like the hype surrounding LLMs, prior hype cycles arose due to the promise of fundamentally new capabilities in artificial intelligence, although not all the promised effects materialized. What is different this time is that the results of AI are immediately available to consumers. Now, AI is doing things that people thought computers could never do, like write stories, tell jokes, draw, or write your high school essays. This has occurred due to both fundamental advances like the Transformer model, and Sutter’s bitter lesson that AI becomes better with more computing power. We now have the computation to provide immense scale that was previously unachievable.” 

    Joshua Saxe, senior staff research scientist, Meta:  

    “The hype around LLMs rhymes with past hype cycles, but because AI is a real and substantive technology, each wave of hype does change security, even if less than AI boosters have anticipated. The hype wave of the 2010s fueled ideas that AI would fundamentally transform almost every aspect of cybersecurity practice, but, in fact, only disrupted security detection pipelines—for example, machine learning is now ubiquitous in malware and phishing detection pipelines. Similarly broad claims are being made about this current hype wave. Many of the imagined applications of LLMs will fall away, but as the bubble deflates we will see some genuinely new and load-bearing applications of LLMs within security.” 

    Helen Toner, director of strategy and foundational research grants, Center for Security and Emerging Technology, Georgetown University:  

    “I believe expectations are too high for what generative AI will be able to do this year or next. But on a slightly longer timeframe, I think the potential of the current deep learning-focused paradigm—LLMs being one its many faces—is still building. The level of investment and talent going into LLMs and other types of deep learning far outstrips AI waves of previous decades, which is evidence for—and a driver of—this wave being different.” 

    #2 What potential applications of generative AI in cybersecurity most excite you? Which are over-hyped?  

    Farlow: “In my experience, most people still use the term ‘AI’ the way they would ‘magic.’ I find too many conversations about how AI should be used in cybersecurity are based on trying to replicate and multiply the human workforce using AI. This is a very hard problem to solve, as most AI technologies are not good at operating autonomously across a range of tasks, especially when there is ambiguity and context-dependence. However, AI technologies are very good at assisting in narrow tasks like phishing and fraud detection, malware detection, and user and entity behavior analytics, for example. My focus is less on AI for cybersecurity, and instead on transferring cybersecurity principles into the field of AI to understand and manage the AI attack surface; this is where I think there needs to be more investment.”  

    Fist: “I predict that most people, including myself, will be surprised about which specific generative AI-powered applications in cybersecurity end up being most important. The capabilities of today’s models suggest a few viable use cases. Proof-of-concepts exist for offensive tools that use the capabilities of state-of-the-art generative models (e.g., coding expertise, flexibility) to adapt to new environments and write novel attacks on the fly. Attackers could plausibly combine these capabilities with an ‘agentized’ architecture to allow for autonomous vulnerability discovery and attack campaigns. Spearphishing and social engineering attacks are other obvious use cases in the near term. A Center for a New American Security report lays out a few other examples in Section 3.1.2. One important question is whether these capabilities will disproportionately favor attackers or defenders. As of now, the relative ease of generation compared to detection suggests that detectors might not win the arms race.”  

    Guido: “To judge whether something is overhyped or underhyped, consider whether it is a sustaining innovation or a disruptive innovation. That is, are any fundamental barriers being broken that were not before? Currently overhyped areas of cybersecurity research include crafting exploits, identifying zero-day vulnerabilities, and creating novel strains of malware. Attackers can already do these things very well. While AI will accelerate these activities, it does not offer a fundamentally new capability. AI shines in providing scalability to tasks that previously required an infeasible amount of effort by trained humans, including continuous cybersecurity education (AI is infinitely patient), testing and specification development, and many varieties of security monitoring and analysis. In July, Trail of Bits described how these capabilities may affect national security for the White House Office of Science and Technology Policy.” 

    Saxe: “Much of what people claim around applications of generative AI in cybersecurity is not substantiated by the underlying capabilities of the technology. LLMs, which are the most important generative AI technology for security, have a few proven application areas: they are good at summarizing technical text (including code), they are good at classifying text (including code and cybersecurity relevant text), and they are good at auto-completion. They are good at all this, even without the presence of training data. Applications that exploit these core competencies in LLMs, such as detecting spearphishing emails, identifying risky programming practices in code, or detecting exfiltration of sensitive data, are likely to succeed. Applications that imagine LLMs functioning as autonomous agents, solving hard program analysis problems, or configuring security systems, are less likely to succeed.”  

    Toner: “I am skeptical that deepfake videos are going to upend elections or destroy democracy. More generally, I think many applications are overhyped in terms of their likely effects in the very near term. Over the longer term, though—two-plus years from now—I think plenty of things are under-hyped. One is the possibility of mass spearphishing, highly individualized attacks at large scale. Another is the chance that generative AI could significantly expand the number of groups that are able to successfully hack critical infrastructure. I hope that I am wrong on both counts!”  

    #3 In what areas of generative AI and cybersecurity do you want to see significant research and development in the next five years?  

    Farlow: “While there is no denying that generative AI has garnered its fair share of hype, I cannot help but remain somewhat cynical about the singular focus on this technology. There is a vast landscape of AI advancements, including reinforcement learning, robotics, interpretable AI, and adversarial machine learning, that deserve equal attention. I find generative AI fascinating and exciting, but I also like to play devil’s advocate and note that the future of AI is not solely dependent on generative models. We should broaden our discussions to encompass the broader spectrum of AI research and its implications for various fields, as well as its security.”  

    Fist: “I am excited to see more research and development on AI-driven defenses, especially in the automated discovery and patching of vulnerabilities in AI models themselves. The recent paper ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’ is a great example of this kind of work. This research suggests that jailbreak discovery of open-source models like Llama is highly automatable and that these attacks transfer to closed-source models like GPT-4. This is an important problem to highlight. This problem also suggests that AI labs and cybersecurity researchers should work closely together to find vulnerabilities in models, including planned open-source models, and patch them before the models are widely deployed.”  

    Guido: “In July, Trail of Bits told the Commodity Futures Trading Commission that our top wishlist items are benchmarks and datasets to evaluate AI’s capability in cybersecurity, like a Netflix prize but for Cybersecurity+AI. Like the original ImageNet dataset, these benchmarks help focus research efforts and drive innovation. The UK recently announced it was funding Trail of Bits to create one such benchmark. Second would be guides, tools, and libraries to help safely use the current generation of generative AI tools. Generative AI’s failure modes are different from those of traditional software and, to avoid a security catastrophe down the road, we should make it easy for developers to do the right thing. The field is progressing so rapidly that the most exciting research and development will likely happen to tools that have not been created yet. Right now, most AI deployments implement AI as a feature of existing software. What is coming are new kinds of things where AI is the toolsomething like an exact decompiler for any programming language, or an AI assistant that crafts specifications or tests for your code as you write.” 

    Saxe: “I think there are multiple threads here, each with its own risk/reward profile. The low-risk research and development work will be in taking existing LLM capabilities and weaving them into security tools and workflows that extract maximal value from capabilities they already offer. For example, it seems likely that XDR/EDR/SIEM tooling and workflows can be improved by LLM next-token prediction and LLM embeddings at every node in current security workflows, and that what lies ahead is incremental work in iteratively figuring out how. On the higher-risk end of the spectrum, as successor models to LLMs and multimodal LLMs emerge that are capable of behaving as agents in the world in the next few years, we will need to figure out what these models can do autonomously.” 

    Toner: “This is perhaps not directly an area of cybersecurity, but I would love to see more progress in digital identity—in building and deploying systems that allow humans to prove their humanity online. There are some approaches to this under development that use cryptography and clever design to enable you to prove things about your identity online while also preserving your privacy. I expect these kinds of systems to be increasingly important as AI systems become more capable of impersonating human behavior online.”

    More from the Cyber Statecraft Initiative:

    #4 How can AI policy account for both the technology itself as well as the contexts in which generative AI is developed and deployed?  

    Farlow: “As I am sure readers are aware, the question of regulating AI has become quite a philosophical debate, with some jurisdictions creating policy for the AI technology, and others focusing on policy that regulates how different industries may choose to use that technology. And then within that, some jurisdictions are choosing to regulate only certain kinds of AI, such as generative AI. Given that AI encompasses an incredibly large landscape of technologies across an even broader range of use cases, I would like to see more analysis that explores both angles from a risk lens that can be used to inform internationally recognized and relevant regulation. While some AI applications can be risky and unethical and should be regulated or blocked, such as facial recognition for targeted assassinations, policy should not stifle innovation by constraining research and frontier labs. I would like to see regulation informed by a scientific method with the intention to be universally applicable and adopted.” 

    Fist: “End-use-focused policies make sense for technology used in any high-risk domain, and generative AI models should be no different. An additional dedicated regulatory approach is likely required for highly capable general-purpose models at the frontier of research and development, known as ‘frontier models.’ Such systems develop new capabilities in an unpredictable way, are hard to make reliably safe, and are likely to proliferate rapidly due to their multitude of possible uses. These are problems that are difficult to address with sector-specific regulation. Luckily, a dedicated regulatory approach for these models would only affect a handful of models and model developers. The recent voluntary commitments secured by the White House from seven leading companies is a great start. I recently contributed to a paper that goes into some of these considerations in more detail.”  

    Guido: “In June, Trail of Bits told the National Telecommunications and Information Administration that there can be no AI accountability or regulation without a defined context. An audit of an AI system must be measured against actual verifiable claims regarding what the system is supposed to do, rather than against narrow AI-related benchmarks. For instance, it would be silly to have the same regulation apply to medical devices, home security systems, automobiles, and smart speakers solely because they all use some form of AI. Conversely, we should not allow the use of AI to become a ‘get out of regulation free’ card because, you see, ‘the AI did it!.’”  

    Toner: “We need some of both. The default starting point should be that existing laws and regulations cover specific use cases within their sectors. But in some areas, we may need broader rules—for instance, requiring AI-generated content to be marked as such, or monitoring the development of potentially dangerous models.” 

    #5 How far can existing legal structures go in providing guardrails for AI in context? Where will new policy structures be needed?  

    Farlow: “Making policy for generative AI in context means tailoring regulations to specific industries and applications. There are a number of challenges associated with AI that are not necessarily new—data protection laws, for example, may be quite applicable to the use of AI (or attacks on AI) that expose information. However, AI technology is fundamentally different to cyber and information systems on which much of existing technology law and policy is based. For example, AI systems are inherently probabilistic, whereas cyber and information systems are rule-based. I believe there need to be new policy structures that can address novel challenges like adversarial attacks, deep fakes, model interpretability, and mandates on secure AI design.”  

    Fist: “Liability is a clear example of an existing legal approach that will be useful. Model developers should probably be made strictly liable for severe harm caused by their products. For potential future models that pose severe risks, those risks may not be able to be adequately addressed using after-the-fact remedies like liability. For these kinds of models, ex-ante approaches like licensing could be appropriate. The Food and Drug Administration and Federal Aviation Administration offer interesting case studies, but neither seems like exactly the right approach for frontier AI. In the interim, an information-gathering approach like mandatory registration of frontier models looks promising. One thing is clear: governments will need to build much more expertise than they currently possess to define and update standards for measuring model capabilities and issuing guidance on their oversight.”  

    Guido: “Existing industries have robust and effective regulatory and rule-setting bodies that work well for specific domains and provide relevant industry context. These same rule-setting bodies are best positioned to assess the impact of AI with the proper context. Some genuinely new emergent technologies may not fit into a current regulatory structure; these should be treated like any other new development and regulated based on the legislative process and societal needs.”  

    Toner: “Congress’ first step to manage new concerns from AI, generative and otherwise, should be to ensure that existing sectoral regulators have the resources, personnel, and authorities that they need. Wherever we already have an agency with deep expertise in an area—the Federal Aviation Administration for airplanes, the Food and Drug Administration for medical devices, the financial regulators for banking—we should empower them to handle AI within their wheelhouse. That being said, some of the challenges posed by AI would fall through the cracks of a purely sector-by-sector approach. Areas that may need more cross-cutting policy include protecting civil rights from government use of AI, clarifying liability rules to ensure that AI developers are held accountable when their systems cause harm, and managing novel risks from the most advanced systems at the cutting edge of the field.”

    Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—The cybersecurity implications of artificial intelligence appeared first on Atlantic Council.

    ]]>
    Roberts featured as guest on San Francisco Experience podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/roberts-featured-as-guest-on-san-francisco-experience-podcast/ Fri, 20 Oct 2023 19:06:30 +0000 https://www.atlanticcouncil.org/?p=715529 On October 19, IPSI/GCH nonresident senior fellow Dexter Tiff Roberts spoke on an episode of the San Francisco Experience podcast, where he discussed the recent meeting of the Five Eyes intelligence chiefs in Silicon Valley. He explained that this unprecedented meeting was a response to a massive push in Chinese espionage to steal cutting-edge technology. […]

    The post Roberts featured as guest on San Francisco Experience podcast appeared first on Atlantic Council.

    ]]>

    On October 19, IPSI/GCH nonresident senior fellow Dexter Tiff Roberts spoke on an episode of the San Francisco Experience podcast, where he discussed the recent meeting of the Five Eyes intelligence chiefs in Silicon Valley. He explained that this unprecedented meeting was a response to a massive push in Chinese espionage to steal cutting-edge technology. With the PRC ramping up its espionage efforts against tech companies, he explained that this meeting was a warning call to these companies to put anti-espionage protections in place immediately. 

    The post Roberts featured as guest on San Francisco Experience podcast appeared first on Atlantic Council.

    ]]>
    The US-EU Summit: Time to focus on geopolitics https://www.atlanticcouncil.org/blogs/new-atlanticist/the-us-eu-summit-time-to-focus-on-geopolitics/ Wed, 18 Oct 2023 14:44:28 +0000 https://www.atlanticcouncil.org/?p=693503 Faced with an increasingly hostile and divided world, US and EU officials must make the most of the upcoming summit in Washington DC.

    The post The US-EU Summit: Time to focus on geopolitics appeared first on Atlantic Council.

    ]]>
    The last summit between the European Union (EU) and the United States, in June 2021, focused on reaffirming the transatlantic partnership after some difficult years. At the summit in Washington, DC, this Friday, the United States and Europe must address the geopolitical challenges they face in an increasingly hostile and divided world. Transatlantic diplomacy can no longer be solely about the now strengthened partnership itself. Instead, its primary task must be to build joint efforts to ensure a more secure and resilient place for US and European citizens, in keeping with the transatlantic partnership’s democratic values.

    The 2021 summit faced a relatively peaceful world. At this 2023 summit, the United States and the EU must demonstrate their determination and close coordination in their responses to Hamas’s strike on Israel and Russia’s invasion of Ukraine. Most immediately, this will require holding Israel to the standards of international law as it justifiably seeks to remove the threat of Hamas. Russia’s war on Ukraine has been a key catalyst in energizing the US-EU partnership, fostering transatlantic cooperation on sanctions, export controls, and supplies of armaments. This summit should leave no doubt about the continued willingness of the United States and the EU to work together to supply weapons and financial support to Ukraine for as long as needed. 

    These are not the only conflicts and tensions challenging the United States and the EU. This geopolitical summit must also show unity in the face of threats from Iran and other countries that encourage terrorism and foster extremism. The United States and the EU must also look beyond physical threats to focus—both domestically and abroad—on disruptive perils online, from cyberattacks to state-sponsored disinformation. 

    As the United States and the EU seek to make their own economies more secure, they should ensure that developing economies are not collateral damage.

    The summit cannot just be about defending against aggression, however. It should also be an opportunity for the EU and United States to begin building a strategy based on a positive case for democracy and the rule of law, and for the critical nature of these values in making societies and economies prosperous and resilient. The United States and the EU have already reached out to other like-minded countries—Japan, South Korea, Australia, and others—that share these values. Now it is time to address other democracies, such as India, Brazil, and other regional powers, as well as those developing countries that are much more ambivalent toward democratic principles. In today’s tense geopolitical moment, such outreach is an essential part of making the United States and Europe more secure and resilient. Such a strategy will also require genuine assistance to developing countries, especially in helping them weather the green and digital transitions. The small projects that have been initiated under the US-EU Trade and Technology Council (TTC) can only be a beginning. 

    Much of the summit will be focused on how to make the transatlantic economies stronger and more competitive, especially when faced with the challenges of nonmarket economies, such as China. The United States and the EU need to use this summit to make progress in their negotiations on critical raw materials and greening the global steel market in the face of Chinese overcapacity. But they should also think about how to include others in these arrangements. As the United States and the EU seek to make their own economies more secure, they should ensure that developing economies are not collateral damage. 

    Technology offers another avenue for engaging these countries. The United States and the EU have started a very necessary conversation over the risks involved in generative and frontier artificial intelligence (AI). Indeed, leaders may adopt more initiatives in this area at the AI Safety Summit at Bletchley Park in the United Kingdom, which will be held in November. But there are many uses of AI that offer opportunities, including in agriculture, research, health care, and public services. Used with care and training, these can help many developing countries. Will China provide these opportunities, perhaps in a new version of the Belt and Road Initiative, or will the United States and the EU, as well as their partners, provide the systems and training that could make a real difference? The summit this week in Washington provides a opportunity to demonstrate transatlantic willingness to assist others in a safe, positive, and open digital transition. 

    Finally, with the United Nations Climate Change Conference, known as COP28, just a few weeks away, the United States and the EU must use the summit to demonstrate their commitment to climate goals. This is not only about assistance for climate mitigation, but also about the openness and accountability of US and EU climate policies, ensuring that subsidy schemes and clean energy standards are fair and do not create additional challenges for developing countries. For Europe especially, its southern neighbors could be a huge source of renewable energy. Any US-EU arrangements on clean tech that may emerge at the summit should be constructed to encourage this trade and engage developing countries with initiatives designed to build greener energy markets. 

    The EU-US relationship has come a long way since 2021. The TTC, which was created at the June 2021 summit, has proven to be an innovative and productive mechanism for addressing bilateral transatlantic tensions and for building consensus and relationships among officials. While focused mostly on emerging tech and supply chain issues, it has also organized real cooperation on critical issues such as export controls against Russia. The United States and the EU should now begin to consider how to make the TTC an even stronger, more legitimate, and perennial mechanism of transatlantic cooperation, for instance, through a small permanent team and parliamentary dialogue. But more broadly, the United States and the EU must look beyond their own relationship to cooperate on building a broader coalition to address today’s geopolitical challenges. This October summit is the place to start.


    Frances G. Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.

    Georg Riekeles is associate director and head of Europe’s political economy programme at the European Policy Centre.

    The post The US-EU Summit: Time to focus on geopolitics appeared first on Atlantic Council.

    ]]>
    Cartin quoted in Politico on Huawei use in Germany https://www.atlanticcouncil.org/insight-impact/in-the-news/cartin-quoted-in-politico-on-german-huawei-use/ Mon, 16 Oct 2023 19:35:53 +0000 https://www.atlanticcouncil.org/?p=707689 On October 15, IPSI nonresident senior fellow Josh Cartin was quoted in a Politico article on continued attempts by the United States to convince Germany to curb its use of Huawei technology. Cartin explained that “having a Chinese company that is strongly susceptible to the [Communist Party of China] leading the globe in a foundational […]

    The post Cartin quoted in Politico on Huawei use in Germany appeared first on Atlantic Council.

    ]]>

    On October 15, IPSI nonresident senior fellow Josh Cartin was quoted in a Politico article on continued attempts by the United States to convince Germany to curb its use of Huawei technology. Cartin explained that “having a Chinese company that is strongly susceptible to the [Communist Party of China] leading the globe in a foundational technology was not only a security problem, but a major economic problem.”  

    The post Cartin quoted in Politico on Huawei use in Germany appeared first on Atlantic Council.

    ]]>
    Atkins on Industrial Cybersecurity Pulse podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-on-industrial-cybersecurity-pulse-podcast/ Sat, 14 Oct 2023 20:54:08 +0000 https://www.atlanticcouncil.org/?p=707772 On October 13, IPSI nonresident senior fellow Victor Atkins spoke on an episode of the Industrial Cybersecurity Pulse Cybersecurity Awareness Month podcast series. He discussed issues such as patching organizational and personal cyber vulnerabilities, IT/OT integration, and emerging technologies such as AI and machine learning. Atkins noted that with the increasing automation of critical infrastructure […]

    The post Atkins on Industrial Cybersecurity Pulse podcast appeared first on Atlantic Council.

    ]]>

    On October 13, IPSI nonresident senior fellow Victor Atkins spoke on an episode of the Industrial Cybersecurity Pulse Cybersecurity Awareness Month podcast series. He discussed issues such as patching organizational and personal cyber vulnerabilities, IT/OT integration, and emerging technologies such as AI and machine learning. Atkins noted that with the increasing automation of critical infrastructure sectors from communications to shipping, the target for cyberattacks is increasing; however, he explained, private-sector threat intelligence entities are increasing opportunities to discover and respond to these threats. 

    The post Atkins on Industrial Cybersecurity Pulse podcast appeared first on Atlantic Council.

    ]]>
    Driving software recalls: Manufacturing supply chain best practices for open source consumption https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/driving-software-recalls-manufacturing-supply-chain-best-practices-for-open-source-consumption/ Thu, 12 Oct 2023 20:43:00 +0000 https://www.atlanticcouncil.org/?p=818364 Product recalls require practices that can help software vendors move toward better component selection and tracking and better relationships with customers, all while making software vendors responsible for OSS security instead of maintainers.

    The post Driving software recalls: Manufacturing supply chain best practices for open source consumption appeared first on Atlantic Council.

    ]]>

    Executive summary

    In December 2021, the Log4j vulnerability, Log4shell, crippled development teams worldwide. The exploit itself was not what dragged teams down. Instead, most software organizations could not identify whether and where they were using Log4j. This meant developers needed to review entire codebases to determine their exposure and risk. For large enterprises with thousands or tens of thousands of applications, work on new features came to a halt.

    Log4shell was another example of software organizations failing to acknowledge or recognize (likely both) that open source software (OSS) is more than just a technological innovation—OSS wholly changed how software products are created. Over more than two decades, OSS catalyzed an already growing movement towards componentized software development—where applications are developed in parts by different internal and external teams. In many ways, OSS transformed the industry into something that more closely mirrors traditional manufacturing.

    While there is no 1:1 comparison between software development and other forms of manufacturing, there are still many similarities that provide a learning opportunity. Specifically, by looking at automotive manufacturing, there are modern supply chain management best practices capable of improving OSS consumption and software supply chain security. These same mechanisms can also improve the processes software manufacturers use to disclose the presence of vulnerabilities to their customers.

    It is the latter point that is most critical. Like expectations set for any other manufacturer, customers expect software manufacturers to follow a standard of care to ensure their products are safe and secure by design. More importantly, if there is a defect in a product, customers expect a manufacturer to communicate and remediate the defect.

    Traditionally, this process of notification and remediation is referred to as a recall. Like comparing software and automotive manufacturing, a recall process for software cannot be applied exactly. Yet, there are similarities. Critical elements of recall processes could provide a mechanism to hold software manufacturers accountable for the parts they use and for communication of critical vulnerabilities (defects) to their customers. However, many manufacturers do not, or cannot, track the OSS used in their software products. Worse yet, in many cases, they lack awareness of critical vulnerabilities in the software products they manufacture.

    This paper aims to demonstrate how principles from modern automotive manufacturing, specifically those from W. Edwards Deming, a leader in supply chain and management theory, can be applied to improve OSS consumption and supply chain security. With these processes in place, software manufacturers can minimize the impact of vulnerable OSS and communicate to customers when those defects are encountered. To implement these improvements, policy from the federal government will need to provide further guidance, direction, and accountability.

    For software manufacturers, this means:

    1. Building security into software products by design.
    2. Consuming only high-value OSS components and projects.
    3. Continuously tracking, monitoring, and improving OSS consumption.


    For policymakers, this means:

    1. Holding software manufacturers responsible and accountable via a national standard of care.
    2. Requiring software manufacturers to demonstrate their approach to vetting OSS used in their products.
    3. Driving software manufacturers to continuously track, monitor, and improve OSS consumption.

    Introduction

    On December 10, 2021, around three weeks before much of the world logged off for winter and end-of-year holiday festivities, arguably the worst software vulnerability ever discovered, Log4shell,1 was publicly disclosed. And this is not something said lightly. Following heavy-hitting vulnerabilities like Heartbleed and Shellshock, Log4shell had an unprecedented impact.2 The vulnerability affected Log4j, a ubiquitous open source logging framework used to track information and errors in computer systems.3

    Logging tools provide critical functionality to software organizations today, helping enterprises investigate and determine causes of unexpected operations of everything from websites to applications on your phone. For example, if a server suddenly shuts down, those logs help pinpoint the root cause. Log4j, which the Apache Software Foundation manages, is used in almost every Java application,4 especially at the enterprise level.5 Log4shell was critical both because it was easy to exploit and due to its potential widespread impact, which included servers providing critical access to secured networks and sensitive data at the private, commercial, and national levels.

    The US Cybersecurity and Infrastructure Security Agency (CISA) has cited the scale of the Log4shell vulnerability across much of their published best practices. In July 2022, the first report from the Cyber Safety Review Board (CSRB) provided updates on lessons learned from Log4shell as well.6 And then in March 2023, the National Cybersecurity Strategy stressed the importance of open source software (OSS) security and its impact on supply chains. Unfortunately, with the historical absence of meaningful cybersecurity regulatory oversight, organizations and individuals must often voluntarily adopt these best practices and recommendations, especially in engagements outside government activities.7 This gap is punctuated by evidence indicating this is not happening across the board.

    In 2022, a year after the disclosure of Log4shell, a study of current Log4j downloads indicated that as much as 30 percent of users were still using vulnerable versions.8 Some of these cases were potentially due to hubris or lack of care. However, the more likely cause of continued downloads of vulnerable versions of Log4j was an organization’s lack of visibility into the OSS they consume. Without this insight, organizations cannot effectively respond to vulnerabilities, including communication of the presence and effect of those vulnerabilities on users (customers) of their software.

    There is a general expectation that products like food, vehicles, and other goods should be inherently safe. In cases where products include components known to be harmful or defective, manufacturers have a responsibility to disclose and remediate that risk through a recall process. When this is impossible, manufacturers are usually obligated by regulatory policy to warn their customers of the potential danger of defective products. Yet, software products are not held to the same standard; this must change.

    When defects are present, like their peers, software manufacturers are responsible for communicating potential risks to their users and guiding them through remediation options. Fortunately, addressing the awareness of OSS consumption and improving communication related to OSS vulnerability disclosure does not require every aspect of a typical recall process deployed in manufacturing. However, achieving those improvements does necessitate software manufacturers track and monitor all the OSS they consume and incorporate into their software products.

    Starting with an analysis of the Log4j vulnerability and the corresponding response by software manufacturers, this paper aims to provide a better understanding of OSS consumption, the role OSS plays in the modern software supply chain, and relevant parallels to traditional manufacturing, specifically in the automotive sector. This comparison provides an opportunity to borrow essential mechanisms tested across many years against similar challenges. Looking specifically at automotive manufacturing also provides an opportunity to isolate the best and most relevant examples, especially those pioneered by W. Edwards Deming, a pivotal figure in modern supply chain management. By building awareness of OSS consumption, software manufacturers can improve their ability to effectively respond to issues like Log4shell and facilitate risk communication through existing coordinated vulnerability disclosure (CVD) practices, such as publishing advisories and notifications, that closely resemble recall processes in manufacturing. While disclosure and communication are critically important, this paper’s primary intent is to assert software manufacturers’ responsibility to continuously track, monitor, and improve their consumption of OSS at an organizational level.

    The severity of Log4shell

    Log4shell (CVE-2021-44228)9 earned the highest Common Vulnerability Scoring System (CVSS) score, level 10 (critical), in its official Common Vulnerability Enumeration (CVE) disclosure.10, 11 For perspective, an independent database of CVSS scores shows just 4 percent of all CVEs ever recorded (over 200,000 through twenty-three years of reporting) received a score of ten, which is typically limited to high-impact vulnerabilities that are also easy to exploit.12

    In the case of Log4shell, the vulnerability allowed remote code execution—the ability for bad actors to remotely make changes to, run software on, and take control of a system. While this type of exploit is terrible in any situation, what made Log4shell so potentially dangerous was its ubiquity within the Java ecosystem. In an article from Wired magazine shortly after the official disclosure, Log4shell was characterized as something that would “haunt the Internet for years.”13 Echoing that sentiment, just four days after the Log4shell disclosure, CISA Director Jen Easterly briefed industry leaders on the situation, saying, “[the exploit] is one of the most serious I’ve seen in my entire career, if not the most serious.”14 Jay Gazlay of CISA’s Vulnerability Management Office followed Easterly’s comments, stating, “Hundreds of millions of devices are likely to be affected.” However, that number is likely low, given estimates of the breadth of affected companies and projections from experts that the vulnerability will persist for years to come.15

    By January 2022, there were already multiple reported examples of bad actors exploiting the Log4shell vulnerability.16 By September of the same year, the US government published an advisory confirming that the Federal Civilian Executive Branch (FCEB) had been compromised.17 Considering the Log4shell vulnerability was present in versions of Log4j since 2013, there is a high likelihood that attacks took place for some time before the official disclosure.

    Unfortunately, exploitation of a vulnerability does not necessarily mean software manufacturers will pay attention. CISA’s Cybersecurity Strategic Plan FY 2024-2026 highlights that “most intrusions today are perpetrated using known vulnerabilities or exploiting weak security controls.”18 A telemetry analysis from Tenable, a cybersecurity risk firm, found that 72 percent of organizations were still vulnerable to Log4shell as of October 2022.19 To understand how that is possible, when this article was written, a review of Maven Central, the largest repository of open source Java components, showed that 23 percent of new downloads each week represented versions vulnerable to Log4shell.20 This equates to hundreds of thousands of vulnerable versions of Log4j entering software supply chains every month.

    However, the negativity surrounding the Log4shell vulnerability is only part of the story. The spotlight on Log4j demonstrates OSS’s tremendous impact on modern software development. If software manufacturers are unaware of the vulnerability of critical OSS like Log4j, what about all the other OSS they consume?

    The problem is not open source software

    Today, almost 90 percent of modern applications are composed of OSS, including components like the Log4j logging framework.21 On average, about 11 percent of those OSS components have known vulnerabilities. While the extensive use of OSS has reduced the cost of research and development, catalyzed incredible leaps in innovation, and drastically decreased the time it takes to deliver critical business functionality, it has also created a security conundrum.

    Unlike other third-party software, in most cases, OSS is not “supplied;” that is, projects or individual developers that maintain the software do not act as suppliers. As described in Chinmayi Sharma’s “Tragedy of the Digital Commons,” OSS is a “public good” or a natural resource freely available to anyone.22 As a result, its creators cannot know how their work will be used or, in many cases, how software manufacturers may modify it to fit their customers’ needs. Instead, it is the responsibility of the software manufacturer to ensure that any final product delivered to a customer that uses OSS is free of defects (i.e. vulnerabilities). This relationship differs from traditional supplier engagement and procurement processes used to offset research and development costs. This difference presents considerable potential for risk.

    A componentized approach to software development is not new. Many organizations still outsource the development of specific frameworks and other elements of an application to third parties. However, these engagements use standardized procurement and review processes that ensure products match technical specifications and other contractual requirements. In contrast, for many organizations, the consumption of OSS has no equivalent process.

    This lack of process is both a blessing and a curse. Development teams can use and modify whatever they find among the vast amount of available OSS. That freedom allows them to completely sidestep the procurement process used when working directly with third parties. This circumvention often is not done intentionally. In most cases, development teams simply do not associate OSS consumption with procurement. This lack of overhead can create short-term benefits for innovation. However, knowing where OSS is used is just as crucial as what OSS is consumed. Without common standards, software manufacturers often do not track their OSS consumption, making it extremely difficult to monitor and identify defects.

    By ignoring OSS consumption, development teams expose the organization to increased risk, especially when vulnerabilities are discovered. This extends well beyond Log4shell: the same report that identified continued downloads of vulnerable versions of Log4j also found that 96 percent of OSS downloads with a vulnerability have non-vulnerable updates that are available.23The lack of visibility into the consumption of OSS means software manufacturers often miss these fixes and patches that would reduce and, in most cases, entirely remove the risk associated with vulnerabilities in previous versions.

    The impact of vulnerabilities

    The term “vulnerabilities” sounds frightening; sometimes, they can have that potential. However, vulnerabilities generally are not an injection of malicious code but rather an inadvertent weakness in the code itself.24 In many ways, vulnerabilities can be as simple as a typo in any written content. This broad definition is true of all code, open source or proprietary. The nature of a vulnerability changes when it can be exploited. In other words, when those bugs allow bad actors to access private systems, the vulnerability represents risk.

    The presence of risk does not mean a vulnerability is inherently critical to every organization and every software product. Depending on the context, vulnerabilities may present little to no risk, even in the case of a critical OSS component like Log4j. However, it is impossible to understand and address a risk without a clear understanding of OSS consumption, implementation, and configuration. This lack of understanding directly impedes software manufacturers’ ability to respond to critical issues effectively.

    Once again, using the Log4j vulnerability as an example, much of the time development teams spent addressing Log4shell was not focused on implementing fixes and applying patches to Log4j. Instead, the lion’s share of initial investment was spent trying to fully understand their use of Log4j in the first place. So, before the arduous technical work could begin, teams first needed to figure out their exact version of Log4j and where the vulnerable versions existed across their portfolio of software products.25. Again, all this must happen before any fix or patch can be applied. This can quickly become an impossible task at scale for large organizations with a complex code base and tens of thousands of applications.

    Regardless of organizational size, the reactive nature of a software manufacturer’s response to vulnerabilities is the best example of the current weaknesses in OSS consumption management. Yet, that weakness is almost entirely avoidable. If software organizations track what OSS they consume, where that OSS is used, and then monitor OSS for defects and other quality parameters, their response can be far more proactive. In many cases, issues can be addressed long before a product ships. Even when that is not possible, the ability to triage and prioritize remediation efforts avoids chaos when a vulnerability is discovered, allowing teams to take direct control of the response and tackle defects strategically.

    The inability of many development teams to effectively respond to Log4shell should be a call for software manufacturers to change. At the center of this change is the acceptance and adoption of processes that acknowledge OSS is not simply a way to bypass traditional procurement. Instead, OSS must be a critical consideration in managing a software supply chain. Achieving a paradigm shift like this requires tested principles and mechanisms. Luckily, there are a plethora of modern supply chain management best practices that can be borrowed from other manufacturing industries, especially automotive manufacturing.

    The advantage of modern supply chain management principles

    While nuances exist, the intent of this paper is not to draw a direct line between software development and automotive manufacturing. Instead, it is to compare processes in both industries, especially related to supply chain best practices. This is also not to say that manufacturing has everything figured out. Even in recent history, there have been low points, such as the combined failure of General Motors and the National Highway and Transportation Safety Administration (NHTSA) to recall faulty ignition switches in the Chevy Cobalt.26 However, despite these setbacks and continued opportunities to improve, automobile manufacturers have developed efficient and effective processes for identifying and communicating defective products. Through targeted notifications and safety recalls, automotive manufacturers collectively communicate defects for millions of vehicles each year.27

    In many cases, recalls are related to discovering and communicating severe safety issues that could cause serious injury and, in some cases, death. In this way, the volume of recalls represents drastic improvements to consumer safety.28However, a common misconception is that recalls are a way to pull a defective product back. While this works in some cases, for example, if a vehicle has not yet been sold, most recalls affect vehicles already on the road. This means the manufacturer must be able to identify not only defective parts but also the location of the affected vehicles. The critical point here is that recalls would be impossible without the ability to track parts in a vehicle throughout the supply chain and up to final assembly. Put another way, this ability to track and monitor parts means that when a defect is identified, the manufacturer can target their communication and any remediation steps to the affected consumers.

    Of course, tracking and responding to defects is only a part of modern supply chain management. Manufacturers must also work to minimize defects, and this is where modern supply chain theory provides the most relevant and helpful guidance for software supply chains. Specifically, today’s software manufacturers should look to the work of W. Edwards Deming, who was responsible for helping rebuild automotive manufacturing in post-World War II Japan and was highly influential in the global automotive market. Most notably, Deming focused on improving supply chain practices and, more importantly, ensuring greater control over quality and security.29

    Deming insisted that manufacturers source the best parts from the best suppliers, which was at the heart of his strategy. As part of his recommendations, he put together a fourteen-point approach to quality management to support this endeavor.30 Many of these ideas are now accepted concepts in manufacturing, like the Andon principle, which states that any worker should be able to immediately stop production to prevent defects and further quality issues down the line. While the complete set of fourteen principles dives deeper into management philosophy and is outside the scope of this paper, for software supply chains and improvements to OSS consumption, three are critical:

    • Principle 3: Cease dependence on inspection to achieve quality.31 In this principle, Deming suggests manufacturers “shift left.” By moving inspection earlier in the production processes, defects are found when changes are much easier to make. Inspection of the final product should still happen but should not be the only or first inspection point.
    • Principle 4: Move toward a single supplier for any one item on a long-term relationship of loyalty and trust.32 In this principle, Deming suggests that complexity is introduced by utilizing multiple suppliers for the same part. By utilizing the single, best supplier and building a relationship with them, when defects enter the supply chain, you only need to focus on reaching a resolution with a single supplier versus tackling issues from several suppliers simultaneously.
    • Principle 5: Constantly improve production systems to improve quality and efficiency, and thus constantly decrease costs.33 In this principle, Deming aligns with the philosophy that you cannot improve what you do not monitor, and you cannot monitor what you do not track.

    It is important to consider that quality can be highly subjective, establishing a singular definition of high-quality OSS is unnecessary. Rather, Deming’s principles offer an approach for software manufacturers to develop better processes for the consumption of OSS, which will enhance its quality in the long term. Here is how translating Deming’s guidance to software supply chains looks:

    • Principle 1: Build security into software products by design.34 Like physical products, software manufacturers should be responsible for ensuring their products are safe and secure. Within the context of liability, this responsibility is often associated with a duty of care. In addition to a duty of care, a manufacturer’s level of responsibility to ensure products are safe and secure is a standard of care. To align with the National Cybersecurity Strategy and, in turn, meet a reasonable duty and standard of care, security needs to be a critical part of software manufacturing from the start.35 For example, assessing the security of OSS in a product only after it is released is too late. Instead, software manufacturers must take an active role in their consumption of OSS at every stage of the Software Development Life Cycle (SDLC).
    • Principle 2: Use only the best, actively supported OSS components and build relationships with those projects and developers. Selecting the best OSS means evaluating it against criteria like known vulnerabilities, age, and average remediation/update times, among others. When an OSS component meets those standards, manufacturers should utilize it exclusively to avoid duplication and reduce their overall attack surface (risk). Next, select stable, supported versions of OSS and vet projects to ensure they utilize recommended processes and best practices.36 Finally, build partnerships with high-quality open source projects and invest back into those projects to accelerate innovation upstream and reduce future costs downstream.
    • Principle 3: Continuously track, monitor, and improve the security of OSS that is being consumed. Manufacturers should understand how and where they consume OSS spanning the entire SDLC to reduce their risk related to known vulnerabilities. Software manufacturers should also establish criteria and develop organizational policies to improve the consumption of OSS. While efforts may start small, research indicates37 a combination of modern tooling and best practices provide scalable and organization-wide approaches that can be applied across all teams and products without increasing costs or reducing productivity.

    The three principles discussed above should define every software manufacturer’s core strategy for OSS consumption and guide their approach to software supply chain security. At the same time, it is worth a word of caution that an overcorrection can occur. The removal of all vulnerabilities is not necessary—if such a state is even achievable. As mentioned previously, exploitability and vulnerability mean different things. Many vulnerabilities have little potential for harm.38 However, when a critical vulnerability does exist, once again, the most applicable lessons for software manufacturers come from a set of mechanisms used by automotive manufacturers to identify and respond to defects.

    The first goal of a recall

    An automotive recall is usually, although not always, conducted in partnership with the National Highway Transportation and Safety Administration (NHTSA), which, among other activities, investigates defects in automotive products.39 At the end of an investigation, the NHTSA provides a non-binding recommendation to the manufacturer on how to recall the product.40 Though automotive manufacturers can go against the recommendation, the NHTSA can seek legal action to ensure vehicle safety standards are upheld. However, any automotive manufacturer will likely say that their first goal for recalls is to avoid them altogether.

    While a recall represents improved safety, recalls also represent significant expenses. For example, one consulting firm—AlixPartners—found that 2016, recalls had cost the automotive industry $22.1 billion.41 Adding a bit more detail, Forbes cited that the average per-vehicle cost of a recall was about $500 over the last 10 years.42 However, the cost of some recalls is much higher—for example, 2021’s Hyundai recall of 82,000 vehicles, which came in at $11,000 per vehicle, sets a new benchmark.43 Like any business, automotive manufacturers focus on minimizing costs. And in the case of a critical defect requiring a recall, costs can increase exponentially. So, to better mitigate and minimize costs due to defects, many automotive manufacturers utilize processes and best practices related to supply chain management like Deming’s principles. This approach allows automotive manufacturers to proactively address defects in production and respond to customers quickly, efficiently, and effectively.

    For example, in 2020, Toyota encountered a potential coolant leak defect caused by a faulty water flow meter used to manufacture their engines.44 In this instance, the supplier identified the issue with the water meter but found no evidence this had created a defect in the engines themselves (it is important to note that Toyota supplies its own engines). Because the supplier “found no abnormalities,” it continued to ship the engines for final assembly at a Toyota manufacturing plant. However, according to Toyota’s investigation, the coolant leak defect was detected in vehicles awaiting delivery to dealerships, as well as a small number already in dealer inventory. Luckily, Toyota could use serial numbers from the defective engines to trace the leaks back to engines manufactured using the defective water flow meter over a three-month period in 2019. With that information, Toyota conducted an official recall, communicating with dealerships and customers to coordinate inspections and repairs.

    For a better understanding of the scale of both tracking the issue and recalling the potentially impacted vehicles, consider that Toyota’s engine manufacturing plant in Kentucky (their supplier) produces approximately 600,000 engines a year,45 and in 2019, Toyota sold over two million vehicles46 in the United States. To account for all defective parts, Toyota recalled just over 44,000 vehicles. However, the silver lining to this story is that the final number of vehicles impacted by the defect was minimal: only 250, or about 0.05 percent, of those included in the recall.

    This example demonstrates the application of all three of Deming’s principles. First, inspection was built into the manufacturing process by design and occurred before final assembly. And it’s important to note, even with those inspections in place, some defects don’t become apparent until they’ve reached production. Next, the example shows the importance of a strong relationship with suppliers, which made it easier to pinpoint the cause of the engine defect once they appeared after final assembly. In this scenario, the supplier’s adherence to supply chain best practices was as important as the manufacturer’s. Finally, because Toyota tracks and monitors its parts and vehicles, it was possible to use the serial numbers from engines manufactured with the faulty water flow meter and identify only those vehicles with the potential to leak coolant. Toyota then uses all the data from this investigation in its continuous improvement process for manufacturing.

    A recall for software manufacturers already exists

    At first glance, the ability to recall software seems absurd. However, requiring a customer to physically return a software product is too literal an interpretation. Instead, it’s better to consider that software manufacturers share a similar goal to automotive manufacturers: to produce a traceable record of defects and vulnerabilities in their products to reduce costs and respond more quickly, effectively, and efficiently. Thus, the lesson from the previous example for software manufacturers is that a recall process like Toyota’s and other automotive manufacturers’ is already possible. In fact, the best software manufacturers follow a standard process commonly referred to as a Coordinated Vulnerability Disclosure (CVD).47

    A CVD process is a collaborative approach that typically brings together cybersecurity researchers and software manufacturers to address critical vulnerabilities and provide communication to customers when necessary. To manage the relationship between the members of this group, most software manufacturers publish a vulnerability disclosure policy with criteria and guidance for reporting vulnerabilities, including estimated timelines for remediation and mitigation. At its core, CVD provides a way for software manufacturers to communicate with customers and improve the overall quality and safety of their products. Like the process described in tracing the root cause of Toyota’s defective engines, CVDs are most typically initiated by an external identification of an exploitable vulnerability—a defect. In the case of OSS, this process is already happening and is a recommended best practice in the most widely used projects. To understand how this is already in place today, once again, Log4shell provides a valuable point of analysis.

    The identification and announcement of the Log4shell vulnerability was part of a standard CVD process and followed many of the same steps as a product recall. In the case of Log4j, the CVD quickly made the headlines worldwide. Log4j would be hard to miss even for manufacturers not tracking or monitoring their OSS consumption. However, the panic that followed did not stem from the potential severity and exploitability of Log4shell alone. While those aspects were important, an even more fundamental issue was that software manufacturers were unaware of where Log4j was used in their applications or if it was used at all. Unlike Toyota engines, most software manufacturers have no serial number equivalent to connect OSS components like Log4j with impacted products. This gap left software manufacturers only one option: look through every application to find a Log4j dependency, often resorting to scanning the disks of production servers. For organizations with tens of thousands of applications, this is the equivalent of Toyota recalling every vehicle they have ever sold to determine which were affected. Not only would this be unacceptable, but the catastrophic cost would burden Toyota for years.

    Defects resulting in a recall, or CVD in the case of software manufacturing, test the strength and security of a supply chain. However, best practices and processes alone are not enough. While Deming’s principles created measurable improvements for automotive manufacturing, another element is at play in the contrasts between Toyota and software manufacturers: responsibility. Software manufacturers must take responsibility for the security of their software from the start. By design,48 they must evaluate their suppliers, whether OSS or commercial, and, most importantly, they must continuously track, monitor, and improve their consumption of OSS across all their products and at every stage of the SDLC. Achieving this goal at scale requires a combination of data, processes, best practices, and modern tooling. But most importantly, commitment to the responsibility to deliver safe, secure products.

    With these pieces in place, software manufacturers can meet the expectations and standards of their peers in other industries. Software manufacturers will not need to spend months determining which products are affected when the next critical vulnerability, like Log4shell, is identified. Instead, quick and efficient identification will support software manufacturers’ ability to utilize disclosure mechanisms like CVDs and proactively communicate mitigation and remediation steps with their customers. While this is not equivalent to removing products from shelves through physical recalls, better communication can still drive reduced risk for customers in the same manner. Further, with the improved consumption of OSS and attention to the guidance outlined throughout this paper, software manufacturers can work to avoid vulnerabilities in the first place.

    Recommendations for software manufacturers and policymakers

    Imagine that the next critical OSS vulnerability is identified. Could software manufacturers determine which applications in their portfolio are at risk? Could they determine, based on context, if the vulnerability is exploitable? Could they ensure that future downloads are of the non-vulnerable version? How would (or could) they disclose that information to customers? Based on the available data, the most likely answer is no, or at least not without great difficulty.49

    Many months have passed since Log4shell, yet teams continue to be affected. As of the writing of this paper, vulnerable versions of Log4j still constitute one-third of all Log4j downloads. The Log4shell vulnerability has been described as “endemic”50 and may never go away. Looking beyond Log4j, almost all downloads (96 percent) of vulnerable open source components have a non-vulnerable version available. The logical conclusion is that software manufacturers are unaware of, uninterested in, or perhaps worse, incapable of seriously evaluating their OSS consumption.

    The accepted paradigm of inaction and ignorance regarding OSS consumption and software supply chain security is beginning to change. The latest National Cybersecurity Strategy, along with new requirements for government contractors and vendors, is just the first step. Policy and regulations will be revised with even more stringent criteria.51

    Software manufacturers that follow modern supply chain management best practices and principles described in this paper have an opportunity to address liability concerns and protect their customers from risks associated with unmanaged OSS consumption.52 Moreover, when critical vulnerabilities occur, software manufacturers can provide effective communication to guide their customers through a recall-like disclosure process that addresses steps for mitigation and remediation.

    To drive these improvements, the last section of this paper is separated into two key areas. The first section provides recommendations for software manufacturers to improve their OSS consumption and supply chain security. Aligned with these recommendations, the next section explores potential strategies for future policies and regulations. It is important to note that these recommendations are not a wish list–they are realistic and based on existing best practices utilized in supply chain management across various sectors. Each recommendation represents a reality that can be achieved today through best practices, processes, and tooling.

    Software manufacturer recommendations

    Build security into software products by design.

    Customers expect the software products they purchase and use to be secure and safe. The federal government has made it clear that software manufacturers are responsible for ensuring that expectation is met by design. Meeting that expectation means software manufacturers must actively ensure those parts are free from defects. In alignment with the first principle borrowed from Deming’s supply chain management best practices, reducing defects requires attention to open source consumption at the beginning of software development. Waiting until after a product is shipped to identify vulnerabilities is too late. Instead, software manufacturers should create an environment that supports their development teams with the information and context needed to make the best choices when they begin writing code.

    When tackling this recommendation, it is important to consider existing developer workflows. If approaches are draconian or overly cumbersome, the loss of developer efficiency can discount the reduction in risk and improved OSS consumption processes. In many cases, developers do not look at OSS consumption in the same way as other forms of procurement. Success requires proper strategies to ensure that changes are not arduous and do not add undue friction for development teams.

    Finally, consider that not all vulnerabilities are exploitable in every situation, and as such, some products may ship with OSS vulnerabilities that introduce little to no risk. Regardless, it is important to balance expectations against an organization’s risk tolerance. Every organization will have a different tolerance, but this is not justification to leave tolerance for risk undefined. Software manufacturers must create policies for OSS consumption that match defined risk tolerance and are integrated throughout the SDLC.

    Consume only high-value open source software, components, and projects.

    The second principle borrowed from Deming focuses on identifying suppliers that produce the best parts and using them exclusively. While “best” can be highly subjective, software manufacturers should prioritize OSS that consistently provides measurable value to the organization and is updated and supported by an active group of contributors. Measuring value starts with identifying known vulnerabilities and includes criteria like update frequency and how long it takes a project or contributor to fix a vulnerability, among others. Once the best option is identified, manufacturers should use it exclusively. For example, utilizing a single logging framework like Log4j across all software products. In doing so, software manufacturers can reduce their overall risk surface and focus on the OSS that best meets their needs.

    According to Deming, the relationship with a supplier is as important as the goods produced. Though OSS is not “supplied” in a traditional sense, the principle around relationships stands. Software manufacturers should contribute back to open source projects as much as possible, especially for remediating vulnerabilities. This should be considered an investment in the long-term availability and quality of the product and can reduce downstream risks associated with future vulnerabilities.

    Continuously track, monitor, and improve the security of open source software being consumed.

    The last principle borrowed from Deming directs software manufacturers to continuously track, monitor, and improve their OSS consumption. While this is not manually feasible for software, many modern tools support the creation of an organization-level manifest. This manifest should include all OSS consumed within the context of specific products and across every stage of the SDLC. Having a list of OSS and where it is used is only the first step. Key criteria such as known vulnerabilities, age, mean remediation time, and other metadata must be tracked to improve choices and aid decision-making.

    Finally, software manufacturers must work at an organizational level to limit their exposure to vulnerabilities, which starts with establishing an OSS consumption policy aligned with the organization’s risk tolerance. This policy provides the foundation for broader, organizational-level governance of OSS consumption. The intent is not to create a list of approved components and reprimand teams when an unapproved component is discovered. Instead, OSS consumption policy should guide decision-making for OSS across the SDLC. More importantly, an organizational policy for open source should be used to educate teams and improve OSS consumption in the long term.

    Policymaker recommendations

    Hold software manufacturers responsible and accountable via a national standard of care.

    In “Tragedy of the Digital Commons” Sharma argues that “Open-source defects should be governed the same way product defects are: when a defect in a product injures a consumer, the law holds every commercial link in the supply chain capable of having identified and remediated the defect accountable.”53 This view represents an expectation of due care by software manufacturers no different than for manufacturers of any physical product. As Sharma points out, software manufacturers have long been able to disclaim liability under outdated interpretations of contract law.54 Legal tools like end-user license agreements (EULAs) typically contain indemnity clauses protecting software manufacturers from liability. However, the National Cybersecurity Strategy (NCS) hopes to change this with its call to “hold the stewards of our data accountable … reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.”55

    Software manufacturers’ lack of responsibility runs counter to recommendations from Deming’s first principle and CISA’s guidance that products should be secure by design.56 While the National Cybersecurity Strategy Implementation Plan’s (NCSIP) Strategic Objective 3.3.1 calls for the development of a software security liability framework and mentions a standard of care, it does not offer substantive details.57 To address this conflict, future policy should solidify the recommendations from the NCS and NCSIP by creating a national standard of care that enumerates the responsibility of software manufacturers to:

    1. Identify and evaluate OSS used across their portfolio of products
    2. Catalog collected data for OSS
    3. Define OSS policies and governance standards
    4. Implement continuous vulnerability tracking and monitoring capabilities across the SDLC
    5. Quickly and directly disclose and remediate vulnerabilities

    These capabilities would improve security across the board for both OSS and proprietary component software.

    Require software manufacturers to demonstrate their approach to vetting OSS used in their products.

    Many software manufacturers have no standards for their OSS consumption. The White House Office of Management and Budget (OMB) sets vendor requirements for federal agencies looking to acquire software products based on standards like NIST’s Secure Software Development Framework (SSDF).58 To qualify, a vendor must submit a form attesting to the implementation of the required best practices for the software they provide. However, the SSDF has limitations. While the framework provides guidance for software manufacturers to “define security-related criteria for selecting software,” it provides no details as to the potential criteria to be used beyond requiring third parties to attest they meet defined standards. Even under this paradigm, neither the standards nor attestation requirements indicate a software manufacturer’s approach to OSS consumption beyond technical acquisition (repository, download location, etc.).

    According to Deming’s second principle, software manufacturers should use the best OSS. The intent of this recommendation is not to define “best.” More important is the process software manufacturers use to evaluate the OSS they consume and how it is measured against their risk tolerance, both of which should be disclosed to customers. A good foundation for these processes can be found in Open Source Security Foundation’s Open Source Consumption Manifesto, which calls upon all software manufacturers to commit to improving their consumption of OSS through fifteen principles and best practices.59 With these considerations in mind, future policy should require all software manufacturers to follow expanded standards for OSS software consumption, including evaluation criteria, applied decision-making best practices, and detailed process descriptions. Software procurement and acquisition requirements for vendors and contractors at the federal level should be expanded to include qualified details of a software manufacturer’s organizational OSS consumption policy, including specifics on the criteria, processes, and tools used when consuming OSS. As outlined in the NCSIP’s Strategic Objective 3.5.2, the False Claims Act provides an enforcement mechanism to ensure truthful attestation, holding software manufacturers accountable to these expanded requirements.

    Drive software manufacturers to continuously track, monitor, and improve OSS consumption

    Strategic Objective 3.3.3 of the National Cybersecurity Strategy Implementation Plan focuses on CVDs.60 While imperfect, the current CVD process works well when communicating from upstream (OSS) to downstream (software manufacturers), but only in scenarios where a software manufacturer continuously tracks, monitors, and improves their consumption of OSS. In cases where this is not done and a critical vulnerability, like Log4shell, fails to make headlines, many software manufacturers do not know the potential risk they create for their customers. This is the exact scenario we see based on research demonstrating a significant proportion of OSS is downloaded with known vulnerabilities while non-vulnerable versions are available.

    The third principle adapted from Deming recommends an approach to continuously track, monitor, and improve OSS consumption. This will result in a more proactive response, better communication with customers, and closer alignment with the intent of the recall process utilized by automotive manufacturers. To meet this recommendation in the short term, acquisition and procurement policy should require manufacturers to demonstrate CVD processes for responding to and mitigating OSS with known, critical vulnerabilities in their software products. In the longer term, the requirement should evolve to demonstrate alignment with more robust vulnerability reporting and disclosure—for example, the National Institute of Standards and Technology (NIST) Vulnerability Disclosure Report, highlighted in NIST’s Cybersecurity Supply Chain Risk Management Practices for Technology and Management (C-SCRM) and the Secure Software Development Framework,61 or utilization of the Vulnerability Exploitability eXchange (VEX), currently led by CISA.62

    Beyond controls at the federal government level, disclosure and recall processes for software manufacturers should be aligned with a defined standard of care. Combining these approaches provides a more robust mechanism to drive data security and safety standards for software manufacturers in specific industries, such as financial services and the healthcare sector. Recently proposed regulation from the Securities and Exchange Commission (SEC) recommends new cybersecurity risk management and governance standards, including a requirement for public software manufacturers to adopt a more detailed disclosure process.63 In many ways, the SEC has taken this a step further by defining responsibility for public companies and other businesses within their scope of regulation through their proposed requirement that a demonstration of those processes be provided.64 Expanding this requirement to adopt disclosure mechanisms specific to OSS vulnerabilities would require software manufacturers to track, monitor, and improve open source consumption in line with the SEC’s more general cybersecurity requirements.

    Bringing it all together

    The road to improvement is paved by lessons learned. Change is hard, and defects are a constant threat to delivering safe and secure products. For manufacturers, minimizing defects is attached to the longevity and reputation of their enterprise, which often hinges on avoiding liability as well. In the past, software manufacturers could avoid liability by delivering products without the same standard of care as their manufacturing peers; that option is ending. Today, the US government, along with governments worldwide, has begun implementing policies and regulations to hold manufacturers responsible for safe and secure software. But gaps remain.

    The goal is simple: software manufacturers must build security into software products by design, choose the best suppliers, and track and monitor where those parts are used. This squarely places the responsibility for open source consumption and software supply chain security on manufacturers. To address this more holistically, this paper has focused on the importance of OSS consumption as a critical piece to better software supply chain management. The recommendations provided are time-tested approaches deployed in traditional automotive manufacturing. These ideas are not new–they represent what software manufacturers should be doing already. Every recommendation is built on the same principles.

    The aim is not to stifle innovation. Instead, it is to unwind an approach that sidesteps responsibility for due care and to encourage proactive, communicative processes. Policy improvements and expanded guidance provide an opportunity to help software manufacturers improve their responses to defects and communication with customers through recall-like capabilities. What is presented in this paper is a win-win. These principles simultaneously help improve open source and proprietary software supply chains while reducing the overall impact and cost of critical vulnerabilities like Log4shell altogether.

    Acknowledgments

    The authors would like to thank Deborah Bryant, Aeva Black, Maia Hamin, Stewart Scott, Trey Herr, Shane Miller, Jonathan Meadows, Christopher Robinson, Tobie Langel, and several individuals who will remain anonymous for their feedback on earlier versions of this document, as well as individuals who attended a Chatham House rule workshop on the paper. In addition, the authors would like to thank Anais Gonzalez and Donald Partyka for their layout of this issue brief.

    About the authors

    Jeff Wayman has spent more than a decade leading digital content and community teams across OSS Security, DevOps, and DevSecOps roles. In his current position, he guides OSS security thought leadership and content strategy for Sonatype. Jeff promotes OSS security awareness through his work with the OpenSSF End Users Working Group and his contributions to the Atlantic Council’s Open Source Policy Network. Jeff is pursuing an MBA at the Gies College of Business at the University of Illinois, Urbana-Champaign, focusing on Digital Marketing and Strategic Innovation.

    Brian Fox, Sonatype co-founder and CTO, is a Governing Board member for the Open Source Security Foundation (OpenSSF), a member of the Apache Software Foundation, and former Chair of the Apache Maven project. As a direct contributor to the Maven ecosystem, including the maven-dependency-plugin and maven-enforcer-plugin, he has over twenty years of experience driving the vision behind the project, as well as developing and leading the development of software for organizations ranging from startups to large enterprises. Brian is a frequent speaker at national and regional events including Java User Groups and other development-related conferences.


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    1    “CVE-2021-44228,” NIST NVD, December 10, 2021, https://nvd.nist.gov/vuln/detail/CVE-2021-44228.
    2    “The Heartbleed Bug,” Heartbleed, June 3, 2020, https://heartbleed.com/; “GNU Bourne-Again Shell (Bash) ‘Shellshock’ Vulnerability,” CISA, September 30, 2016, https://www.cisa.gov/news-events/alerts/2014/09/25/gnu-bourne-again-shell-bash-shellshock-vulnerability-cve-2014-6271.
    3    “Log4j – Apache Log4jTM 2,” The Apache Software Foundation, May 2, 2023, https://logging.apache.org/log4j/2.x/.
    4    This includes notable systems and products like Amazon Web Services, Cloudflare, and iCloud, among others. Due to the lack of disclosure requirements the exact impact of the vulnerability is impossible know.
    5    Liam Tung, “US warns Log4j flaw puts hundreds of millions of devices at risk,” ZDNET, December 14, 2021, https://www.zdnet.com/article/log4j-flaw-puts-hundreds-of-millions-of-devices-at-risk-says-us-cybersecurity-agency/.
    6    Cyber Safety Review Board, “Review of the December 2021 Log4j Event,” July 11, 2022, https://www.cisa.gov/sites/default/files/publications/CSRB-Report-on-Log4-July-11-2022_508.pdf.
    7    “FACT SHEET: Biden-Harris Administration Announces National Cybersecurity Strategy,” The White House, March 2, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/.
    8    “Open Source Supply, Demand, and Security,” Sonatype, https://www.sonatype.com/state-of-the-software-supply-chain/open-source-supply-demand-security.
    9    “CVE-2021-44228 Detail,” December 10, 2020, https://nvd.nist.gov/vuln/detail/CVE-2021-44228.
    10    While CVSS scores provide a common framework to compare cybersecurity vulnerabilities, it should be noted that scores are imperfect and can be misleading. For more, read Jacques Chester’s “A closer look at CVSS scores.”
    11    “Log4j – Apache Log4j Security Vulnerabilities,” The Apache Software Foundation, https://logging.apache.org/log4j/2.x/security.html#fixed-in-log4j-2-15-0-java-8.
    12    “CVE Security Vulnerability Database. Security Vulnerabilities, Exploits, References and More,” CVEdetails, https://www.cvedetails.com/index.php.
    13    Lily Hay Newman, “The Log4j Vulnerability Will Haunt the Internet for Years,” Wired, December 14, 2021, https://www.wired.com/story/log4j-log4shell/.
    14    Tim Starks, “CISA Warns ‘Most Serious’ Log4j Vulnerability Likely to Affect Hundreds of Millions of Devices,” CyberScoop, December 13, 2021, https://cyberscoop.com/log4j-cisa-easterly-most-serious/.
    15    Jonathan Greig, “Log4j Update: Experts Say Log4shell Exploits Will Persist for ‘Months If Not Years,” ZDNET, December 13, 2021, https://www.zdnet.com/article/log4j-update-experts-say-log4shell-exploits-will-persist-for-months-if-not-years/.
    16    Bill Toulas, “State Hackers Use New PowerShell Backdoor in Log4j Attacks,” BleepingComputer, January 11, 2022, https://www.bleepingcomputer.com/news/security/state-hackers-use-new-powershell-backdoor-in-log4j-attacks/.
    17    “Iranian Government-Sponsored APT Actors Compromise Federal Network, Deploy Crypto Miner, Credential Harvester,” CISA, November 25, 2022, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-320a#:~:text=fbi.gov.-,Mitigations,-CISA%20and%20FBI.
    18    “CISA Cybersecurity Strategic Plan FY2024-2026,” CISA, August 6, 2023, https://www.cisa.gov/sites/default/files/2023-08/FY2024-2026_Cybersecurity_Strategic_Plan.pdf.
    19    “Tenable Research Finds 72% of Organizations Remain Vulnerable to ‘Nightmare’ Log4j Vulnerability,” Tenable, November 30, 2022, https://www.tenable.com/press-releases/tenable-research-finds-72-of-organizations-remain-vulnerable-to-nightmare-log4j.
    20    “Log4j exploit updates,” Sonatype, https://www.sonatype.com/resources/log4j-vulnerability-resource-center.
    21    “2020 State of the Software Supply Chain,” Sonatype, https://www.sonatype.com/hubfs/SSC/SON_SSSC-Report-2020_sept23.pdf.
    22    Chinmayi Sharma, “Tragedy of the Digital Commons,” North Carolina Law Review 101 (2023), 1129,https://doi.org/10.2139/ssrn.4245266.
    23    “8th Annual State of the Software Supply Chain Report,” Sonatype, 2021. https://www.sonatype.com/state-of-the-software-supply-chain/introduction
    24    “Vulnerability – Glossary,” National Institute of Standards and Technology, https://csrc.nist.gov/glossary/term/vulnerability.
    25    “(ISC)2 Pulse Survey: Log4j Remediation Exposes Real-World Toll of the Cybersecurity Workforce Gap,” “(ISC)2, February 22, 2022, https://blog.isc2.org/isc2_blog/2022/02/log4j-remediation-exposes-cybersecurity-workforce-gap.html
    26    Jerry Hirsch, “NHTSA Launches Probe into Cobalt Recall; GM Issues Another Apology,” Los Angeles Times, February 27, 2014, https://www.latimes.com/business/autos/la-fi-hy-nhtsa-gm-cobalt-recall-probe-20140227-story.html.
    27    Jim Gorzelany, “Automakers with the Most and Fewest Recalls in 2022,” Forbes, January 2, 2023, https://www.forbes.com/sites/jimgorzelany/2022/12/30/automakers-with-the-most-and-fewest-recalls-in-2022/?sh=441e13327cb9.
    29    The W. Edwards Deming Institute, https://deming.org/.
    30    “Dr. Deming’s 14 Points,” The W. Edwards Deming Institute, https://deming.org/explore/fourteen-points/.
    31    “Inspection Is Too Late. The Quality, Good or Bad, Is Already in the Product,” The W. Edwards Deming Institute, November 8, 2012, https://deming.org/inspection-is-too-late-the-quality-good-or-bad-is-already-in-the-product/.
    32    “Haircuts and Continuous Improvement,” The W. Edwards Deming Institute, July 31, 2017, https://deming.org/haircuts-and-continuous-improvement/.
    33    “Haircuts and Continuous Improvement,” The W. Edwards Deming Institute, July 31, 2017, https://deming.org/haircuts-and-continuous-improvement/.
    34    “Security-By-Design and -Default,” CISA, June 12, 2023, https://www.cisa.gov/resources-tools/resources/secure-by-design-and-default.
    35    “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-By-Design and -Default,” CISA, April 13, 2023, https://www.cisa.gov/sites/default/files/2023-04/principles_approaches_for_security-by-design-default_508_0.pdf.
    36    “OpenSSF Scorecard – Security Health Metrics for Open Source,” GitHub, https://github.com/ossf/scorecard.
    37    “2020 State of the Software Supply Chain: Chapter 4 – How High Performance Teams Manage Open Source Software Supply Chains,” Sonatype, September 23, 2020, https://www.sonatype.com/hubfs/SSC/SON_SSSC-Report-2020_sept23.pdf.
    38    “Do all vulnerabilities really matter?,” Red Hat, November 4, 2022, https://www.redhat.com/en/blog/do-all-vulnerabilities-really-matter
    39    National Highway Traffic Safety Administration, https://www.nhtsa.gov/.
    40    “Safety Issues and Recalls,” NHTSA, https://www.nhtsa.gov/recalls.
    41    Michael Held, Alexandre Marian, and Jason Reaves, “The auto industry’s growing recall problem—and how to fix it,” Alix Partners, January 2018, https://www.alixpartners.com/media/14438/ap_auto_industry_recall_problem_jan_2018.pdf.
    42    Steve Tengler, “Auto Recalls Way Down in 2023 And Mercedes Knows Why,” Forbes, June 28, 2023. https://www.forbes.com/sites/stevetengler/2023/06/28/auto-recalls-way-down-in-2023-and-mercedes-knows-why/?sh=398df6e06795.
    43    “Hyundai’s recals 82,000 electric cars is one of the most expensive in history,” (sp) CNN Business, February 26, 2021, https://www.kktv.com/2021/02/26/hyundais-recals-82000-electric-cars-is-one-of-the-most-expensive-in-history/.
    44    “Toyota and Lexus Recall Cars to Replace Engineers,” February 14, 2020, https://www.consumerreports.org/car-recalls-defects/toyota-lexus-recall-replace-engine-avalon-camry-rav4-es/; Toyota NHTSA Defect Information Report” February 6, 2020, https://static.nhtsa.gov/odi/rcl/2020/RMISC-20V064-0396.pdf.
    45    “Toyota Motor Manufacturing, Kentucky (TMMK),” https://pressroom.toyota.com/facility/toyota-motor-manufacturing-kentucky-tmmk/.
    46    “Toyota Motor North America Reports December 2019, Year-End Sales,” January 3, 2020, https://pressroom.toyota.com/toyota-motor-north-america-reports-december-2019-year-end-sales/.
    47    “Coordinated Vulnerability Disclosure Process,” CISA, https://www.cisa.gov/coordinated-vulnerability-disclosure-process.
    48    “CISA Director Easterly Remarks at Carnegie Mellon University,” CISA, February 27, 2023, https://www.cisa.gov/cisa-director-easterly-remarks-carnegie-mellon-university.
    49    “8th Annual State of the Software Supply Chain Report.”
    50    “Review of the December 2021 Log4j Event.”
    51    “National Cybersecurity Strategy,” The White House, March 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf; “Secure Software Development Attestation Form Instructions,” CISA, March 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    52    Trey Herr et al., “Buying Down Risk: Cyber Liability,” Atlantic Council, May 3, 2022, https://www.atlanticcouncil.org/content-series/buying-down-risk/cyber-liability/.
    53    Sharma, “Tragedy of the Digital Commons.”
    54    “CISA Director Easterly Remarks at Carnegie Mellon University,” February 27, 2023, https://www.cisa.gov/cisa-director-easterly-remarks-carnegie-mellon-university.
    55    “National Cybersecurity Strategy.”
    56    “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-By-Design and -Default.”
    57    “National Cybersecurity Strategy Implementation Plan”, The White House, July 2023, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
    58    “Secure Software Development Framework,” NIST, January 10, 2023, https://csrc.nist.gov/Projects/ssdf.
    59    “The Open Source Consumption Manifesto,” OpenSSF EUWG, August 24, 2023, https://github.com/ossf/wg-endusers/tree/main/MANIFESTO.
    60    “National Cybersecurity Strategy Implementation Plan.”
    61    “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-By- Design and -Default;” Murugiah Souppaya, Karen Scarfone, and Donna Dodson, “Secure Software Development Framework (SSDF) Version 1.1,” NIST Special Publication 800-218, February 2022, https://doi.org/10.6028/nist.sp.800-218.
    62    Tom Alrich, “VEX (Vulnerability Exploitability eXchange): Purpose and Use Cases,” FOSSA, June 08, 2023, https://fossa.com/blog/vulnerability-exploitability-exchange-vex-purpose-use-cases/.
    63    “Fact Sheet: Public Company Cybersecurity; Proposed Rules,” SEC, 2022, https://perma.cc/5P34-UV92.
    64    Maia Hamin, “Who’s Afraid of the SEC?” Atlantic Council DFRLab, June 14, 2023, https://dfrlab.org/2023/06/14/whos-afraid-of-the-sec/.

    The post Driving software recalls: Manufacturing supply chain best practices for open source consumption appeared first on Atlantic Council.

    ]]>
    The sixth domain: The role of the private sector in warfare https://www.atlanticcouncil.org/in-depth-research-reports/report/the-sixth-domain-the-role-of-the-private-sector-in-warfare/ Wed, 04 Oct 2023 15:40:01 +0000 https://www.atlanticcouncil.org/?p=683477 The private sector is the "sixth domain" of modern warfare, argues Frank Kramer, and the government should act to protect it.

    The post The sixth domain: The role of the private sector in warfare appeared first on Atlantic Council.

    ]]>

    Table of contents

    I. Homelands at risk in wartime
    II. Lessons from the Ukraine-Russia war—the role of the private sector in warfare
    A. Cybersecurity
    B. Cloud computing
    C. Space
    D. Artificial intelligence
    E. Communications
    III. The US homeland security framework does not include wartime requirements for the private sector
    IV. Recommendations
    A. Congress and the Biden administration should expand the existing national framework to provide for effective engagement with the private sector in wartime
    B. Establish a critical infrastructure wartime planning and operations council with government and private-sector membership
    C. Establish regional resilience collaboratives
    D. Establish private-sector systemic risk analysis and response centers
    E. Establish an integrated cybersecurity providers corps
    F. Create a wartime surge capability of cybersecurity personnel by establishing a cybersecurity civilian reserve corps and expanding National Guard cyber capabilities
    G. Expansion of Cyber Command’s “hunt forward” model to support key critical infrastructures in wartime in the United States
    H. Establish an undersea infrastructure protection corps
    I. Expand usage of commercial space-based capabilities
    J. Authorities and resources
    Conclusion
    About the author

    The United States and its allies have for some time recognized, as NATO doctrine provides, five operational domains—air, land, maritime, cyberspace, and space.1 Each of those arenas fully fits with the understanding of a domain as a “specified sphere of activity” and, in each, militaries undertake critical wartime actions. But in the ongoing Ukraine-Russia war, certain key operational activities have been undertaken by the private sector as part of the conduct of warfare.2 By way of example, private-sector companies have been instrumental both in providing effective cybersecurity and in maintaining working information technology networks. As part of such efforts, these firms have established coordinated mechanisms to work with relevant government actors.

    These operational and coordinated activities by the private sector demonstrate that there is a “sixth domain”—specifically, the “sphere of activities” of the private sector in warfare—that needs to be included as part of warfighting constructs, plans, preparations, and actions if the United States and its allies are to prevail in future conflicts. As will be elaborated below, that sphere of activities focuses mainly on the roles of information and critical infrastructures, including their intersections—ranging from the transmission and protection of information to the assurance of critical infrastructure operations.

    Many of the United States’ activities in the sixth domain will take place in the United States homeland. However, while “defending the homeland” is listed as the first priority in the 2022 National Defense Strategy, insufficient attention has been paid to the actions that will be required of the private sector beyond just the defense industrial base as part of accomplishing an effective defense.3 Likewise, when US military forces are engaged in overseas combat, private-sector companies in allied countries (as well as US companies operating overseas) will be critical for the effectiveness of US forces, as well as for the allies’ own militaries. In short, establishing an effective strategy for the private sector in warfare is a key requirement for the United States and its allies.

    This report sets forth the elements of such a strategy.4 In substantial part, the paper builds on lessons regarding the sixth domain derived from the ongoing Ukraine-Russia war. The report discusses the key operational activities that fall within the sixth domain and how such activities need to be included in war planning with a focus on the organizational structures and authorities required for effective implementation of private-sector activities in warfare. For clarity of exposition, the report focuses its recommendations for the most part on the United States, though comparable approaches will be important for allies and partners.

    The report recognizes the existing frameworks that have been established in the United States for interactions between the government and the private sector as set forth in Presidential Policy Directive 21 (PPD-21) of 2013 on critical infrastructure security and resilience, the statutory requirements including those in the FY 2021 National Defense Authorization Act, the National Infrastructure Protection Plan, which addresses the resilience of critical infrastructures, and the role of the Cybersecurity and Infrastructure Security Agency (CISA) as the national coordinator for critical infrastructure security and resilience.5The report expands on those existing structures to recommend actions that will provide the framework for effective operational activities by the private sector in wartime.

    Specifically, the report recommends:

    1. Congress and the administration should work together to expand the existing national framework to provide for effective engagement with and coordination of the role of the private sector in wartime. This expanded framework for coordination between the private sector and federal government should include the requisite authorities and resources to accomplish each of the recommended actions below.
    2. A Critical Infrastructure Wartime Planning and Operations Council (CIWPOC) with government and private-sector membership should be established to oversee planning for, and coordination of, government and private-sector wartime activities in support of national defense.
    3. Regional resilience collaboratives should be established in key geographical locations to plan for and coordinate US government and private-sector activities in wartime and other high-consequence events and wartime efforts, including by the creation of regional risk registries that evaluate systemic risks.
    4. Private-sector systemic risk analysis and response centers should be established for key critical infrastructures: a) using as an initial model the Analysis and Resilience Center for Systemic Risk that has been established by large private-sector firms for the financial and energy sectors, and b) focusing on cascading as well as other high-consequence, sector-specific risks. New centers should include key firms in the transportation, health, water, and food sectors.
    5. An integrated corps of cybersecurity providers should be established whose private-sector members would provide high-end cybersecurity in wartime to key critical infrastructures and, if requested, to states, localities, tribes, and territories (SLTTs).
    6. A “surge capability” of cybersecurity personnel in wartime should be established through the creation of a national cybersecurity civilian reserve corps and expansion of National Guard military reserve cybersecurity capabilities.
    7. Cyber Command’s “Hunt Forward” model of operations should be expanded in wartime to support key critical infrastructures in the United States and, if requested, to provide support to SLTTs.
    8. An international undersea infrastructure protection corps should be established that would combine governmental and private activities to support the resilience of undersea cables and pipelines. Membership should include the United States, allied nations with undersea maritime capabilities, and key private-sector cable and pipeline companies.
    9. The Department of Defense should continue to expand its utilization of commercial space capabilities including the establishment of wartime contractual arrangements and other mechanisms to ensure the availability of commercial space assets in wartime.
    10. Congress should enact the necessary authorities and provide the appropriate resources to accomplish the actions recommended above.

    I. Homelands at risk in wartime

    While the United States has largely not been subject to armed attack on the homeland, the National Defense Strategy now makes explicit that the “scope and scale of threats to the homeland have fundamentally changed . . . as the “PRC and Russia now pose more dangerous challenges to safety and security at home.”6 Gen. Glenn VanHerck, commander of US Northern Command, has similarly testified that the:


    . . . primary threat to the homeland is now . . . significant and consequential. Multiple peer competitors and rogue states possess the capability and capacity to threaten our citizens, critical infrastructure, and vital institutions.7

    As Gen. VanHerck has stated, the challenges are particularly acute regarding critical infrastructures. The cyber attack on Colonial Pipeline, the attack on SolarWinds software supply chains, and multiple major ransomware attacks are illustrative of the types of attacks that have taken place in the United States.8 Such attacks could be expected to be substantially expanded in the event of armed conflict.

    The potential for attacks on critical infrastructures in a conflict with Russia is significant. The Annual Threat Assessment of the US Intelligence Community has stated that, while “Russia probably does not want a direct military conflict with US and NATO forces, . . . there is potential for that to occur,” including in the context of the Ukraine-Russia war where “ the risk for escalation remains significant.”9 The 2023 Annual Threat Assessment is unequivocal regarding Russia’s capabilities to attack infrastructure in such an event:


    Russia is particularly focused on improving its ability to target critical infrastructure, including underwater cables and industrial control systems, in the United States as well as in allied and partner countries, because compromising such infrastructure improves and demonstrates its ability to damage infrastructure during a crisis.10

    Similarly, the 2023 report speaks to China’s capacity to threaten critical US infrastructures:


    If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide. . . . China almost certainly is capable of launching cyber attacks that could disrupt critical infrastructure services within the United States, including against oil and gas pipelines, and rail systems.11

    Moreover, Chinese intrusions into US critical infrastructures appear to have already occurred, according to media reports:


    The Biden administration is hunting for malicious computer code it believes China has hidden deep inside the networks controlling power grids, communications systems and water supplies that feed military bases in the United States and around the world, according to American military, intelligence and national security officials.12

    Of course, as the foregoing indicates, Russia or China could be expected not only to attack critical infrastructures in the United States, but also to undertake comparable actions against US allies. Indeed, such actions have already occurred in the context of the Ukraine-Russia war, in which Russia’s attack on the Viasat satellite network disrupted information networks in multiple countries, including Germany, France, Greece, Italy, and Poland.13 Other Russian activities in its war against Ukraine have similarly targeted allied critical infrastructures including “destructive attacks with the Prestige ransomware operation against the transportation sector in Poland, a NATO member and key logistical hub for Ukraine-bound supplies,” and additionally “compromis[ing] a separate Polish transportation sector firm, and later increas[ing] reconnaissance against NATO-affiliated organizations, suggesting an intent to conduct future intrusions against this target set.”14

    Moreover, as noted above, China has comparable capabilities that could be utilized in a conflict against US allies and partners. For example, as the Department of Defense’s 2022 report on China’s military activities states, in the context of a conflict over Taiwan, the PRC “could include computer network . . . attacks against Taiwan’s political, military and economic infrastructure.”15

    In sum, in the event of a conflict with either Russia or China, US, allied, and partner critical infrastructures and information flows will “almost certainly” be subject to attacks. But most of those critical infrastructures, including information and communications technology capabilities, are owned and operated by the private sector. As discussed below, those private-sector capabilities will be critical for military operations, continuity of government, and maintaining the performance of the economy in the event of conflict. Accordingly, a key issue for the United States and its allies and partners is how to effectively engage the private sector in wartime in order to offset the consequences of expected adversarial actions.

    II. Lessons from the Ukraine-Russia war—The role of the private sector in warfare

    A useful starting place for understanding the sixth domain, and the role of the private sector in establishing an effective defense, comes from an overview of the efforts of private-sector companies in the context of the Ukraine-Russia war.

    A worthwhile report by Irene Sánchez Cózar and José Ignacio Torreblanca summarized the actions of a number of companies:


    Microsoft and Amazon, for example, have proven fundamental in helping Ukrainian public and private actors secure their critical software services. They have done so by moving their on-site premises to cloud servers to guarantee the continuity of their activities and aid in the detection of and response to cyber-attacks. Moreover, Google has assisted Ukraine on more than one front: it created an air raid alerts app to protect Ukraine’s citizens against Russian bombardment, while also expanding its free anti-distributed denial-of-service (DDoS) software-Project Shield-which is used to protect Ukraine’s networks against cyber-attacks.16

    Similarly, Ariel Levite has described how Ukraine, the United States, and the United Kingdom have utilized their technical capabilities in cyber defense and other areas during the Ukraine-Russia conflict:


    Ukraine and its Western allies have fared much better than Russia in the competition over cyber defense, early warning, battlefield situational awareness, and targeting information. This is due in large part to the richness and sophistication of the technical capabilities brought to bear by the U.S. and UK governments as well as various commercial entities (including SpaceX, Palantir, Microsoft, Amazon, Mandiant and many others), some of which received funding from the U.S. and UK governments. These actors came to Ukraine’s help with intelligence as well as invaluable space reconnaissance sensors, telecommunications, and other technical assets and capabilities for fusing information and deriving operational cues. The Ukrainians skillfully wove these assets together with their indigenous resources.17

    The discussion below elaborates on these points, focusing on five functional sectors (which have some degree of overlap) where the private sector has had key roles: cybersecurity, cloud computing, space, artificial intelligence, and communications.

    A. Cybersecurity

    Effective cybersecurity has been a key element of Ukraine’s defense against Russia—achieving a degree of success that had not been generally expected:


    The war has inspired a defensive effort that government officials and technology executives describe as unprecedented—challenging the adage in cybersecurity that if you give a well-resourced attacker enough time, they will pretty much always succeed. The relative success of the defensive effort in Ukraine is beginning to change the calculation about what a robust cyber defense might look like going forward.18

    The key to success has been the high degree of collaboration:


    This high level of defense capability is a consequence of a combination of Ukraine’s own effectiveness, significant support from other nations including the United States and the United Kingdom, and a key role for private sector companies.
    The defensive cyber strategy in Ukraine has been an international effort, bringing together some of the biggest technology companies in the world such as Google and Microsoft, Western allies such as the U.S. and Britain and social media giants such as Meta who have worked together against Russia’s digital aggression.19

    A crucial part of that effort has been the private sector’s willingness to expend significant resources:


    The cybersecurity industry has thrown a huge amount of resources toward bolstering Ukraine’s digital defense. Just as the United States, European nations and many other countries have delivered billions of dollars in aid and military equipment, cybersecurity firms have donated services, equipment and analysts. Google has said it’s donated 50,000 Google Workspace licenses. Microsoft’s free technology support will have amounted to $400 million by the end of 2023, the company said in February. In the run-up to the invasion there was a broad effort by industry to supply Ukraine with equipment like network sensors and gateways and anti-virus and endpoint-detection and response tools.20

    These combined actions have been highly effective. Ukraine was able to proactively foil Russian cyber operations at least two times, according to Dan Black. The threats involved were, he wrote, “a destructive malware targeting a shipping company in Lviv and the Industroyer2 operation against Ukraine’s energy infrastructure at the onset of the Donbas offensive.” Ukraine, with international, nongovernmental entities, disrupted them “through coordinated detection and response.”21

    B. Cloud computing

    Another critical set of activities—likewise focused on resilience—has been undertaken by private cloud companies. Ukraine has:


    . . . worked closely with several technology companies including Microsoft, Amazon Web Services, and Google, to effect the transfer of critical government data to infrastructure hosted outside the country. . . . Cloud computing is dominated by . . . hyperscalers—[and] Amazon, Microsoft, [and] Google . . . provide computing and storage at enterprise scale and are responsible for the operation and security of data centers all around the world, any of which could host . . . data.22

    The result has been consequential for both assuring continuity of governmental functions and for supporting the performance of the economy:


    Ukraine’s emergency migration to the cloud has conferred immeasurable benefits. Within days of the war breaking out, key [critical infrastructure] assets and services came under the protection of Western technology companies, allowing Ukrainian authorities to maintain access and control over vital state functions. The uptime afforded by the public cloud cut across various critical services. Banking systems kept working, trains kept running on schedule, and Ukraine’s military kept its vital connections to situational awareness data. Physical risks to data centres and incident-response personnel were likewise mitigated.23

    C. Space

    Private-sector space capabilities have been crucial factors in Ukraine’s defense efforts. Most well-known perhaps are the activities of the satellite company Starlink, a unit of SpaceX. As described by Emma Schroeder and Sean Dack, Starlink’s performance in the Ukraine conflict demonstrated its high value for wartime satellite communications:


    Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable, . . . the government ‘made a decision to use satellite communication for such emergencies’ from American companies like SpaceX. Starlink has proven more resilient than any other alternatives throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions.24

    Starlink is not, however, the only satellite company involved in the war:


    Companies both small and large, private and public, have supported Ukraine’s military operations. Planet, Capella Space, and Maxar technologies—all satellite companies—have supplied imagery helpful to the Ukrainian government. . . . The imagery has done everything from inform ground operations to mobilize global opinion . . . Primer.AI, a Silicon Valley startup, quickly modified its suite of tools to analyze news and social media, as well as to capture, translate, and analyze unencrypted Russian military leaders’ voice communications.25

    The role of space assets presents a specific example of the systemic overlap among different capabilities operated by the private sector—and the need to coordinate with and protect them during wartime. As Levite indicates, the fusion of space and cyberspace as well as land- and space-based digital infrastructure is evident in the Ukraine conflict:


    Digital information, telecommunication, navigation, and mass communication assets are vital for modern warfare, and many now operate in or through space. In the Ukraine conflict we can detect early signs that attacking (and defending) space assets is not only deeply integrated with warfare in the air, sea, and land but is also heavily intertwined with digital confrontation in other domains. Control (or conversely disruption or disablement) of digital assets in space is thus becoming indispensable to gaining the upper hand on the battlefield and in the overall war effort.26

    D. Artificial intelligence

    Artificial intelligence is another capability utilized in the Ukraine-Russia war that has been heavily supported by the private sector. Robin Fontes and Jorrit Kamminga underscore the voluntary role and impact of companies, primarily American ones, to heighten Ukraine’s wartime capacity:


    What makes this conflict unique is the unprecedented willingness of foreign geospatial intelligence companies to assist Ukraine by using AI-enhanced systems to convert satellite imagery into intelligence, surveillance, and reconnaissance advantages. U.S. companies play a leading role in this. The company Palantir Technologies, for one, has provided its AI software to analyze how the war has been unfolding, to understand troop movements and conduct battlefield damage assessments. Other companies such as Planet Labs, BlackSky Technology and Maxar Technologies are also constantly producing satellite imagery about the conflict. Based on requests by Ukraine, some of this data is shared almost instantly with the Ukrainian government and defense forces.27

    In providing such assistance, the private sector has often integrated its artificial intelligence capabilities with open-source information, combining them for military-effective results. Fontes and Kamminga also provide some granular examples of this and discuss how open-source data also bolster battlefield intelligence:


    In general, AI is heavily used in systems that integrate target and object recognition with satellite imagery. In fact, AI’s most widespread use in the Ukraine war is in geospatial intelligence. AI is used to analyze satellite images, but also to geolocate and analyze open-source data such as social media photos in geopolitically sensitive locations. Neural networks are used, for example, to combine ground-level photos, drone video footage and satellite imagery to enhance intelligence in unique ways to produce strategic and tactical intelligence advantages.
    This represents a broader trend in the recruitment of AI for data analytics on the battlefield. It is increasingly and structurally used in the conflict to analyze vast amounts of data to produce battlefield intelligence regarding the strategy and tactics of parties to the conflict. This trend is enhanced by the convergence of other developments, including the growing availability of low-Earth orbit satellites and the unprecedented availability of big data from open sources.28

    E. Communications

    Maintaining functional information technology networks has been a critical requirement of Ukraine’s defense. As Levite has pointed out, that has been accomplished despite massive Russian attacks essentially because of the inherent resilience of the underlying private-sector technologies including space and cloud capabilities (as described above):


    One especially novel insight to emerge from the Ukraine conflict is the relative agility of digital infrastructure (telecommunications, computers, and data) compared to physical infrastructure. Physical, electromagnetic, and cyber attacks can undoubtedly disrupt and even destroy key digital assets and undermine or diminish the efficacy of the missions they serve. But Ukrainian digital infrastructure (especially its cell towers and data servers) has been able to absorb fairly massive Russian missile as well as cyber attacks and continue to function, notwithstanding some temporary setbacks. . . . It appears that modern digital technology networks (such as those based on mobile and satellite communications and cloud computing infrastructure) are more robust and resilient than older infrastructure, allowing relatively quick reconstitution, preservation, and repurposing of key assets and functions.29

    III. The US homeland security framework does not include wartime requirements for the private sector

    The current US framework for private-sector engagement with the government is not focused on wartime. Rather, as set forth in PPD-21, the scope is limited by the definition of the term “all hazards,” which stops short of armed conflict:


    The term ‘all hazards’ means a threat or an incident, natural or man-made, that warrants action to protect life, property, the environment, and public health or safety, and to minimize disruptions of government, social, or economic activities. It includes natural disasters, cyber incidents, industrial accidents, pandemics, acts of terrorism, sabotage, and destructive criminal activity targeting critical infrastructure.30

    A recent report by the Government Accountability Office (GAO) similarly notes that, while the US Department of Homeland Security (DHS) was initially established in the wake of the 9/11 terrorist attacks and correspondingly had a counterterror focus, PPD-21 “shifted the focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents.”31

    While wartime planning and operations are not covered, it is nonetheless important to recognize that the United States does undertake multiple efforts under the National Plan that are focused on the resilience of critical infrastructures and that the National Plan has been enhanced by each administration and the Congress since its inception. The National Plan is briefly reviewed below, as it provides the context and a valuable starting point for the recommendations made by this report with respect to the role of the private sector in wartime.

    The GAO has described the National Plan as providing both a foundation for critical infrastructure protection and an “overarching approach” to make the work of protection and resilience an integrated national effort:


    The National Plan details federal roles and responsibilities in protecting the nation’s critical infrastructures and how sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. It emphasizes the importance of collaboration, partnering, and voluntary information sharing among DHS and industry owners and operators, and state, local, and tribal governments.32

    DHS has the overall coordination responsibility under the National Plan and, within DHS, the Cybersecurity and Infrastructure Security Agency has been established as the “national coordinator for critical infrastructure protection,” partnering with federal, state, and municipal agencies as well as territorial and tribal authorities and the private sector.33

    In conjunction with the National Plan, PPD-21 designated sixteen critical infrastructure sectors. In each sector, a lead agency or department—dubbed a sector risk management agency (SRMA)—coordinates with CISA; collaborates with critical infrastructure owners and operators; coordinates with the varying levels of governments, authorities, and territorial partners; and participates in a government coordinating council as well as a sector coordinating council with owners-operators of critical assets and relevant trade association representatives.34

    Pursuant to PPD-21, including through actions taken by CISA, a host of coordination mechanisms exist to enhance the resilience of critical infrastructures, including the Federal Senior Leadership Council, the Critical Infrastructure Partnership Advisory Council, government coordinating councils, and sector coordinating councils.35 Congress also established the Office of the National Cyber Director (ONCD), whose mandate includes working with “all levels of government, America’s international allies and partners, non-profits, academia, and the private sector, to shape and coordinate federal cybersecurity policy.”36 ONCD’s mandate includes coordinating the recently issued National Cybersecurity Strategy Implementation Plan, whose multiple initiatives include defending critical infrastructures, disrupting threat actors, shaping market forces for security and resilience, undertaking investment, and forging international partnerships.37

    In addition to the substantial efforts at coordination, CISA and the SRMAs have undertaken a number of other worthwhile steps to enhance the US capability to respond to attacks on critical infrastructures. Regulatory authority has been utilized to require or propose cybersecurity requirements including for air, rail, pipelines, and water.38 Utilizing the authority and resources provided by Congress, cybersecurity assistance is being provided to SLTT entities.39 A Joint Cyber Defense Collaborative has been established to effectuate “operational collaboration and cybersecurity information fusion between public and private sectors, for the benefit of the broader ecosystem, [and for] producing and disseminating cyber defense guidance across all stakeholder communities.”40 CISA additionally conducts exercises and training with the private sector, ranging from a tabletop exercise to the large-scale Cyber Storm exercise, which simulates a cyberattack.41

    CISA also has set forth a “planning agenda” seeking to “combin[e] the capabilities of key industry partners with the unique insights of government agencies . . .[in order to] create common shoulder-to-shoulder approaches to confront malicious actors and significant cyber risks.”42 The agenda includes “efforts to address risk areas” such as open-source software, and the energy and water sectors, while recognizing that “our plans and doctrine have not kept up” with the requirements of cybersecurity.43 Similarly, CISA has recognized the value of effective cybersecurity firms supporting less-capable companies, specifically seeking to “advance cybersecurity and reduce supply chain risk for small and medium critical infrastructure entities through collaboration with remote monitoring and management (RMM), managed service providers (MSPs), and managed security service providers (MSSPs).”44

    CISA’s efforts are complemented by the National Cyber Investigative Joint Task Force, led by the Federal Bureau of Investigation and by the Cybersecurity Collaborative Center (CCC) led by the National Security Agency (NSA). Under the recent National Cybersecurity Strategy Implementation Plan, the FBI is to “expand its capacity to coordinate takedown and disruption campaigns with greater speed, scale, and frequency.”45 The NSA’s CCC provides support to the private sector including cost-free protection for DIB companies through a “filter which blocks users from connecting to malicious or suspicious [Internet] domains” as well as “bi-directional cyber threat intelligence sharing with major IT and cybersecurity companies who are best positioned to scale defensive impacts [and which has] hardened billions of endpoints across the globe against foreign malicious cyber activity.”46

    To sum up, while the National Plan is focused on significant threats and there is much to commend in the actions taken and planned, those efforts have not yet taken account of the significant disruptive potential of wartime threats. Neither CISA (through the Joint Cyber Defense Collaborative or otherwise) nor the SRMAs nor the ONCD have yet established the type of coordination mechanisms necessary for effective private-sector operations in wartime along the lines as have been undertaken in the Ukraine-Russia war. Similarly, while the FBI and the NSA undertake certain operational activities, in their current format those actions do not reach the level of effort required for effectiveness in wartime.

    IV. Recommendations

    The discussion above demonstrates both the ongoing engagement of the private sector in the Ukraine-Russia war and the potential for important private-sector future roles if the United States and its allies were involved in a future conflict. Maximizing that potential for the United States and its allies will require collaborative initiatives that engage the private sector as an operational partner. The discussion below sets forth ten such initiatives focusing largely on actions to be taken in the United States, though as previously noted, comparable actions should be undertaken by allies and key partners.

    A. Congress and the Biden administration should expand the existing national framework to provide for effective engagement with the private sector in wartime

    Congress and successive administrations have regularly focused on the need to upgrade homeland security and each branch of government has undertaken to assure an effective national defense. However, neither Congress nor the executive branch has yet brought the two together in a comprehensive approach, and neither has provided a framework for the inclusion of the private sector as part of operational wartime defense activities.

    The importance of establishing such a framework has recently been made clear by the lessons drawn from the Ukraine-Russia war, as discussed above. Broadly, the administration should issue an executive order under existing authorities to begin the establishment of such a framework, and Congress should work with the administration to establish the necessary full-fledged approach, including the provision of the requisite authorities and resources. The specific actions are discussed at length in the recommendations below.

    Initially, the administration should establish a Critical Infrastructure Wartime Planning and Operations Council with government and private-sector membership (including, as requested, SLTTs); establish regional resilience collaboratives; and help facilitate the establishment of sector-specific coordinating mechanisms. Congress and the administration should work together to establish an Integrated Cybersecurity Providers Corps; authorize the establishment of a national Cybersecurity Civilian Reserve Corps and an expansion of National Guard cybersecurity capabilities; authorize Cyber Command in wartime to support key critical infrastructures; establish an international Undersea Infrastructure Protection Corps; expand the use of private-sector space capabilities; and enact the required authorities and provide the necessary resources to accomplish each of the foregoing.

    B. Establish a critical infrastructure wartime planning and operations council with government and private-sector membership

    In the United States (and in most other allied countries), there is no comprehensive mechanism to engage the private sector in warfare. While there are worthwhile efforts—such as by CISA and the SRMAs, as described above—they are focused on prewar resilience. By contrast, Finland, NATO’s newest member, has long had a comprehensive approach to national security that fully engages the private sector, including in the event of an “emergency,” which is defined to include “an armed or equally serious attack against Finland and its immediate aftermath [or] a serious threat of an armed or equally serious attack against Finland.”47

    In such an event, the Finland model of “comprehensive security” provides that the “vital functions of society are jointly safeguarded by the authorities, business operators, organisations and citizens.”48 The Security Strategy for Society describes a “cooperation model in which actors share and analy[z]e security information, prepare joint plans, as well as train and work together.”49 Participants include the central government, authorities, business operators, regions and municipalities, universities, and research and other organizations.50 Quite importantly, “[b]usiness operators are playing an increasingly important role in the preparedness process . . . [and in] ensuring the functioning of the economy and the infrastructure.”51

    Finland has a small population, so the precise mechanisms it utilizes for its comprehensive approach would need to be modified for other countries, including the United States. But the key point is that there needs to be such an overarching cooperation model involving this range of actors and activities.

    To accomplish such a coordinated effort—and to focus on the United States—a CIWPOC with government and private-sector membership should be established through the issuance of an executive order as part of the overall White House national security structures.

    At the governmental level, it is important to recognize that neither the existing Federal Senior Leadership Council, which includes CISA and the SMRAs, nor any of the other councils and coordinating efforts described above are operationally oriented for wartime activities, nor are they designed to undertake the necessary actions required to “analyze security information, prepare joint plans, as well as train and work together” in the context of conflict or imminent threat of conflict.52Accordingly, a better mechanism to guide actions in wartime would be to establish a CIWPOC along the lines of a joint interagency task force (JIATF) with appropriate personnel from relevant agencies plus private-sector subject matter experts, each of whom would have the background and capabilities to plan for and, if required, act in a wartime context.53

    Such a CIWPOC could be headed by CISA prior to a wartime-related emergency, with the Defense Department acting as the deputy and organizing the necessary planning and training. In the event of a conflict or if a threat is imminent, the Defense Department would take command to integrate the CIWPOC into the full context of responding to the conflict, with CISA then in the deputy role. The dual-hatting of CISA and the Defense Department is key to ensuring a smooth transition in the event of conflict as that will allow for coordination mechanisms to be established prior to conflict. The planning and training led by the Defense Department prior to conflict will also establish lines of coordination as well as the necessary familiarity with tasks required in wartime, both for DOD and CISA as well as for the other government departments and private sector entities that are engaged with the CIWPOC.

    Initially, at least, the CIWPOC membership should be limited to departments with responsibility for sectors most relevant to wartime military efforts as well as to continuity of government and to key elements of the economy. Utilizing that criterion, a first set of members would include defense, homeland security, energy, finance, information and communications technology, transportation, SLTTs, food, and water.

    Private-sector representation on the CIWPOC should come from the key critical infrastructures, noted above, most relevant to planning and operations in a conflict. As discussed below, that would include representatives from the proposed Integrated Cybersecurity Providers Corps and the Undersea Infrastructure Protection Corps, as well as from the regional resilience collaboratives and the private-sector systemic risk analysis and response centers, established as recommended below. As would be true for governmental departments, private-sector membership will not necessarily include all critical infrastructures, as the focus for the CIWPOC is on the operational capabilities that the private sector can provide in the event of a conflict. There would be costs to the private-sector entities associated with the planning and training efforts described, and, inasmuch as those costs are associated with providing national defense, Congress should undertake to include them in the national defense budget.
    As part of organizing the proposed CIWPOC, DOD would have to determine which military command would have the lead and what resources would be required. In order to achieve the full degree of effectiveness required, the administration should undertake a thorough review of command arrangements and resources required for homeland defense, as the current arrangements are not sufficient.54

    • Northern Command’s current mission is to provide “command and control of . . . DOD homeland defense efforts and to coordinate defense support of civil authorities.”55 While it is analytically the appropriate command to lead in the context of the CIWPOC, in reality, Northern Command would need substantial additional resources and expanded authorities to undertake the requisite actions. By way of example, its mission would need to expand beyond “defense support to civil authorities” to include planning for wartime and operational control as required in the event of conflict.
    • Transportation Command, Cyber Command, Space Command, and the Coast Guard each would have important roles in generating the necessary plans, training, and (if required) operations. They likely should be supporting commands in undertaking those missions in the United States in order to maintain unity of command at the DOD level and unity of effort both at the interagency and private-sector levels. However, the arrangements within DOD and with interagency participants are not yet established.
    • The review recommended above should be undertaken promptly, and the results presented to the president and then to the Congress for such actions as may be required—but that process should not be a bar to the initial establishment of the CIWPOC, including DOD’s engagement.

    C. Establish regional resilience collaboratives

    In addition to the central Critical Infrastructure Wartime Planning and Operations Council discussed above, it will be important to coordinate government and private-sector activities in key geographical locations with a focus on support to national defense wartime efforts.

    Not everything can best be done centrally in the context of a conflict. By way of example, the Finnish model of collective security underscores the importance of regional efforts:


    There should be cooperation forums of security actors (such as preparedness forums) . . . in each region . . . [which] would form the basis for the preparedness plan that would also include the lines of authority, continuity management, use of resources, [and] crisis communications plan[s] . . . The workability of the preparedness plans and the competence of the security actors would be ensured by training and joint exercises.56

    CISA does have established mechanisms to reach out to private sector companies and to SLTTs, including through its regional offices and its SLTT grant program.57 However, in accord with its overall approach, those efforts are not focused on wartime activities. One way to generate the necessary regional efforts for wartime would be to establish regional resilience collaboratives for key geographic areas with an initial focus on those areas that provide critical support to military operations such as key US ports on the East, Gulf, and West coasts. To increase the attractiveness for the private sector, the regional resilience cooperatives should focus on both wartime and other high-consequence risks, such as cascading impacts in circumstances short of war.

    The Senate version of the FY2024 National Defense Authorization Act includes a provision focused on regional resilience. The bill provides for a pilot program to evaluate “how to prioritize restoration of power, water, and telecommunications for a military installation in the event of a significant cyberattack on regional critical infrastructure that has similar impacts on State and local infrastructure.”58 The bill requires that the pilot program should be “coordinated with . . . private entities that operate power, water, and telecommunications” for the military installations included in the pilot program.59

    It should be apparent that the Defense Department will not be able of itself to create the necessary cyber resilience against an attack nor the necessary restoration processes (though, as discussed below, DOD can provide important support). Those actions will have to be undertaken by the private sector (or, in some cases, by SLTTs that operate critical infrastructure).

    Accordingly, the FY2024 NDAA when enacted should include provisions to establish regional resilience collaboratives, initially to operate to generate sustained engagement among public and private entities designed to respond to wartime attacks and high-consequence cybersecurity risks in peacetime through collaboration among key private, SLTT, and federal entities. As a first step (and consistent with the Senate bill calling for mapping dependencies) , a regional resilience collaborative should build a regional risk registry focused on regional dependency models, including cascading risks.60

    As with the case of the CIWPOC discussed above, CISA would lead in peacetime and DOD in wartime. Support would also come from the integrated cybersecurity protection corps described below. Regional resilience collaboratives would undertake operational planning led by the Department of Defense that would utilize both private and public capabilities. Continuous planning (including updated threat reviews and net assessments) and implementing actions would enhance resilience and allow for effective responses, if required. While the benefits from a regional resilience collaborative would be made widely available, the actual participants would be selectively included as relevant to the risks identified by the regional risk registry.

    A regional risk collaborative effort would have costs associated with its activities. As would be the case regarding the CIWPOC as well as the integrated corps of cybersecurity providers, and since those costs are associated with providing national defense, Congress should undertake to include them in the national defense budget.

    D. Establish private-sector systemic risk analysis and response centers

    Certain sectors of the economy are sufficiently critical that undertaking enhanced efforts to reduce risk in wartime would be important to the national defense. To be sure, all critical infrastructures already undertake a variety of coordination efforts, including those noted above, as well as through Information Sharing and Analysis Centers (ISACs) and Information Sharing and Analysis Organizations.61However, particularly in the context of wartime, it will be important to go beyond information sharing and to undertake coordinated risk-reduction efforts.

    A model for this in the United States is the Analysis and Resilience Center for Systemic Risk (ARC), which is a “coalition that is identifying, prioritizing, and mitigating risks to their infrastructure and the points of connection to other critical infrastructure sectors.”62 The ARC brings together “small groups of industry experts [who] identify risks and find solutions that benefit the larger critical infrastructure community.”63 The activities of the ARC go well beyond the information sharing currently undertaken by the ISACs, seeking to respond to systemic risk in a coordinated way. While the existing ARC members come from leading financial and energy firms, the concept should be extended to key functional areas including transportation, food, water, and healthcare.

    Newly established private-sector systemic risk analysis and response centers will also benefit from close coordination with key providers of network infrastructure and services, as is currently being accomplished for the financial industry through the Critical Providers Program of the financial services ISAC (FS-ISAC).64 That program “enables critical providers to use FS-ISAC channels to communicate during large-scale security upgrades, technical outages, cyber-based vulnerabilities, software and hardware misconfigurations, and/or changes that could impact multiple FS-ISAC members.”65 As the foregoing suggests, there is already a certain amount of coordination being undertaken in the information and communications technology (ICT) arena, and a determination can be undertaken as to the value of establishing an ICT systemic risk analysis and response center.

    E. Establish an integrated cybersecurity providers corps

    As discussed above, one of the key roles that the private sector has played in the Ukraine-Russia war is to provide highly effective cybersecurity for critical infrastructures despite significant and continuing Russian cyberattacks. In the event of a conflict with either Russia or China, US cybersecurity firms could be expected to undertake similar actions, including based on service-level agreements they have with critical infrastructures in the United States and efforts like the Critical Providers Program noted above. However, also as noted above, the actions being taken in Ukraine are part of a larger operational collaborative effort that includes firms working together and with governments (including the United States, the UK, and Ukraine). Accordingly, for private-sector cybersecurity support to be most effective in the United States in wartime, a similar approach to coordinated support should be organized in advance of the need, in conjunction with the government, including appropriate information sharing, planning, and exercises relevant to wartime operations.

    To begin such an effort, an Integrated Cybersecurity Providers Corps (ICPC) should be established and focused on providing effective cybersecurity for those critical infrastructures most relevant to military activities, continuity of government, and maintaining the performance of the economy. One of the fundamental recommendations of the National Cybersecurity Strategy is to “ask more of the most capable and best-positioned actors to make our digital ecosystem secure and resilient,” and that should certainly apply to wartime.66

    The ICPC should operate under the general ambit of the Critical Infrastructure Wartime Planning and Operations Council, described above. Membership should consist of highly capable cybersecurity firms and major cloud providers, with CISA and DOD jointly determining whether a cybersecurity provider met the requirements for membership in the corps. Broadly speaking, an integrated cybersecurity provider should be able to provide high-end cybersecurity services including authentication, authorization, segmentation, encryption, continuous monitoring, and protection against DDoS attacks. Cloud providers should have the ability to protect the cloud itself and to offer other expert security providers the opportunity to provide cybersecurity as a service on the cloud. The intent would be to ensure that key critical infrastructures have the support of effective integrated cybersecurity providers in wartime.67

    Concomitant with the establishment of the ICPC, DHS/CISA and DOD, who will work closely with the ICPC members, should undertake to assure the engagement of the key critical infrastructures most relevant in wartime to military activities, continuity of government, and maintaining the performance of the economy. Usefully, DHS/CISA already is required to identify infrastructures of critical importance to the United States:


    The Department of Homeland Security (DHS), in coordination with relevant Sector Specific Agencies (SSAs), annually identifies and maintains a list of critical infrastructure entities that meet the criteria specified in Executive Order (EO) 13636, Improving Critical Infrastructure Cybersecurity, Section 9(a)(‘Section 9 entities’) utilizing a risk-based approach. Section 9 entities are defined as ‘critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security.’68

    The Section 9 list could provide the basis—or at a minimum, a starting point—for identifying the infrastructures most critical in the context of wartime. Additionally, however, since one key objective in wartime will be continuity of government, at least some SLTT governments will need to be included on the list—though there will have to be some very significant prioritization since there are approximately ninety thousand local governments in the United States.69Initial inclusion of SLTTs might be for those related to areas for which regional resilience collaboratives are established.

    A third step will be to create a process to provide assured linkages between the designated key critical infrastructures (including the key SLTTs) and integrated cybersecurity providers. Congress should enact legislation authorizing regulations requiring such support in wartime for designated critical infrastructures and should establish a voluntary program for key SLTTs. A regulatory approach is particularly necessary as, for the most part, critical infrastructure companies are far less capable at cybersecurity than are the expert cybersecurity providers—and that would certainly be true in wartime, when the threat would be more substantial. Under the regulations, designated critical infrastructures should be required to plan and train with integrated cybersecurity providers prior to conflict so that the requisite cybersecurity resilience could be achieved in wartime. SLTTs should likewise be provided the opportunity for cybersecurity support, including planning and training on a voluntary basis, for reasons of federalism. As noted above, there will be costs associated with such activities which, since they would be undertaken in support of national defense, should be included by Congress in the Defense Department budget.

    F. Create a wartime surge capability of cybersecurity personnel by establishing a cybersecurity civilian reserve corps and expanding National Guard cyber capabilities

    The need for the federal government to overcome the currently existing shortage of qualified cybersecurity personnel is well understood, and the importance of having sufficient cybersecurity personnel would be even greater in wartime. At the time of this writing, both the House and Senate versions of the fiscal year (FY) 2024 National Defense Authorization Act (NDAA) have provisions intended to help ameliorate that shortage, but more substantial improvements are warranted.

    In the House, Representative Mark Green had proposed requiring a report on the “feasibility of establishing a cyber unit in every National Guard of a State.”70 That recommendation was not included in the House version of the NDAA but there is a provision authorizing Cyber Command to “accept voluntary and uncompensated services from cybersecurity experts.”71 By contrast, in the Senate, Senators Jacky Rosen and Marsha Blackburn had proposed establishing a pilot program for a cyber reserve for DOD and DHS.72 That proposal also was not included in its entirety in the Senate version of the NDAA but there is a provision for the Secretary of the Army to “carry out a pilot project to establish a Civilian Cybersecurity Reserve.”73 Each of the proposed provisions is a step forward and enacting both the House and Senate provisions would be worthwhile, but the final version of the NDAA should go further than the existing proposals and move promptly to full-fledged cyber civilian reserve and augmented National Guard cyber capabilities.

    Establishing a “surge capability” able to add significant numbers of personnel from the private sector for cybersecurity activities in the event of a conflict should be a high priority for the United States. The value of such a capability has been underscored in the context of the conflict in Ukraine, in which:


    [i]mmediately after the invasion, Ukraine also began to elicit support from the private sector to supplement its own cyber capabilities. One aspect of this effort was to call on national private-sector experts. Requests for volunteers to help protect [critical infrastructures] were reportedly circulated through communities at the request of a senior Ukrainian defence ministry official. These volunteers were requested to help defend infrastructure, identify critical vulnerabilities and carry out other defensive tasks.74

    In the United States, such a reserve capability could be established by a combination of the proposed measures now in the House and Senate versions of the NDAA as well as Representative Green’s proposal for expanding National Guard cyber capabilities.

    • A cybersecurity civilian reserve corps would provide for the United States access to personnel beyond those seeking to be part of the military. Such an approach is being utilized by US allies with very substantial cyber capabilities. The UK has already established its Joint Cyber Reserve Force with a “mantra of high-end cyber talent first,” so that the “Reserves ‘conventional’ physical entry standards (physical ability, fitness, etc.) are not our immediate concern. This ensures that we can select untapped talented individuals who would not normally see reserve service as an option or possibility.”75 Other countries such as Estonia have also developed reserve models to “bring together competent IT experts who can solve significant and long-term cyber incidents.”76
    • The National Guard currently includes both Army and Air Force cyber units.77 However, expanding their numbers and better integrating them into the force would have high value. Given the substantial demand for additional cyber personnel, and as previously recommended, “the number of National Guard personnel directed toward the cyber mission should be significantly increased. . . . [and] a reasonable initial step would be to increase Guard end strength in order to increase the number of cyber personnel to approximately double the current levels.”78 In accomplishing that increase, the “Department of Defense [should] bolster its operational capacity in cyberspace through improved utilization of the National Guard,” as Congress has previously called for: “Despite [Congressional] calls for change, the Department of Defense and the military services appear not to have made any meaningful change in how the expertise resident within the National Guard and the Reserve Component can be better leveraged.”79

    In sum, combining the current versions of the House and Senate NDAA legislation and additionally establishing an expanded National Guard cyber capability would result in significant benefits to the United States in the event of a conflict.

    G. Expansion of Cyber Command’s “hunt forward” model to support key critical infrastructures in wartime in the United States

    US Cyber Command regularly works with allied and partner nations at their request to enhance the cybersecurity of their critical infrastructures.80 Testimony from Cyber Command has described that “since 2018, [it] has deployed hunt forward teams 40 times to 21 countries to work on 59 networks.”81 Cyber Command has described its Hunt Forward operations (HFOs) as follows:


    . . . strictly defensive cyber operations conducted by U.S. Cyber Command (USCYBERCOM) at the request of partner nations. Upon invitation, USCYBERCOM Hunt Forward Teams deploy to partner nations to observe and detect malicious cyber activity on host nation networks. The operations generate insights that bolster homeland defense and increase the resiliency of shared networks from cyber threats.82

    A Hunt Forward operation is a joint effort, as the Cyber Command operators “sit side-by-side with partners and hunt for vulnerabilities, malware, and adversary presence on the host nation’s networks.”83

    As a matter of policy, Cyber Command does not currently undertake operations in the United States. In wartime, however, Cyber Command should have an expanded mission to support key critical infrastructures most relevant to national defense. As described above, such governmental efforts have been instrumental—along with the actions of the private sector—in supporting Ukraine, and a similar collaborative approach should be undertaken for wartime in the United States.

    In the United States in wartime, Cyber Command hunting capabilities should be coordinated with the relevant critical infrastructures and with the proposed Integrated Cybersecurity Providers Corps. Undertaking prior training and exercises would, of course, make any actual operations more effective. Additionally, to accomplish such a mission without diverting resources from Cyber Command’s core mission set (i.e., global cyber operations and defense of DOD networks), Cyber Command would likely require a substantial increase in personnel for wartime operations.84 As discussed in the prior section, there are good reasons to establish a wartime cyber civilian reserve and to increase National Guard cybersecurity capabilities—and supporting Cyber Command wartime operations would be one of the most important.

    In expanding the mission as recommended above, Cyber Command would be subject to the same constitutional requirements as other federal departments and agencies, including the Fourth Amendment’s limits on intrusion into private activities. While searches based on enemy actions in wartime would likely be deemed reasonable and warrants could be obtained, a much better approach—both as a matter of constitutional law and appropriate policy—would be for the federal government to work with the key critical infrastructures to establish a consensual wartime set of arrangements and for Congress to undertake a review of the agreed activities.85

    H. Establish an undersea infrastructure protection corps

    The United States and its allies have long recognized the vulnerability of undersea pipelines and cables.86 Attacks on the Nord Stream 1 and 2 pipelines in September 2022 have underscored those vulnerabilities and raised the visibility of the security issue at the highest levels of government.87 At the May 2023 G7 summit, the group determined, “[w]e are committed to deepen our cooperation within the G7 and with like-minded partners to support and enhance network resilience by measures such as extending secure routes of submarine cables.”88 Relatedly, the Quad grouping of countries (i.e., Australia, India, Japan, United States) agreed to establish “the Quad Partnership for Cable Connectivity and Resilience [which] will bring together public and private sector actors to address gaps in the infrastructure and coordinate on future builds.”89

    The G7 and Quad actions are future-oriented, but pipelines and undersea cables are currently subject to more immediate vulnerabilities, with Russia being a particularly concerning threat.90 As NATO Secretary General Jens Stoltenberg has stated:


    So we know that Russia has the capacity to map, but also potentially to conduct actions against critical infrastructure. And that’s also the reason why we have, for many years, addressed the vulnerability of critical undersea infrastructure. This is about gas pipelines, oil pipelines, but not least thousands of kilometres of internet cables, which is so critical for our modern societies—for financial transaction, for communications, and this is in the North Sea, in the Baltic Sea, but across the whole Atlantic, the Mediterranean Sea.91

    A report to the European Parliament similarly highlighted the issues, noting the Russian Navy has a “special focus” on the Yantar-class intelligence ships and auxiliary submarines, which have the capacity to disrupt undersea cable infrastructure. Also of note are “new abilities to deploy mini-submarines” to explore underwater sea cables by stealth, according to the report.92

    As a consequence of those concerns, NATO has established a NATO Maritime Centre for the Security of Critical Undersea Infrastructure as a partnership with the private sector.The Maritime Centre for the Security of Critical Undersea Infrastructure will be based in Northwood near London. NATO had earlier set up a coordination cell in Brussels to better monitor pipelines and subsea cables that are deemed especially endangered by underwater drones and submarines. 93 Per Secretary General Stoltenberg, the purpose is to strengthen the protection of undersea infrastructure:


    And of course, there’s no way that we can have NATO presence alone [surveilling] all these thousands of kilometres of undersea, offshore infrastructure, but we can be better at collecting information, intelligence, sharing information, connecting the dots, because also in the private sector is a lot of information. And actually, there’s a lot of ongoing monitoring of traffic at sea and to connect all those flows of information will increase our ability to see when there is something abnormal and then react dependent on that.94

    Secretary General Stoltenberg highlighted the importance of collaborating with the private sector:


    And then most of it is owned and operated by the private sector and they also have a lot of capabilities, to protect, to do repair and so on. So the purpose of this Centre . . . is to bring together different Allies to share information, share best practices, and to be able to react if something abnormal happens and then also to ensure that the private sector and the government, the nations are working together.95

    As the new NATO effort underscores, resilience of undersea infrastructure will be of high consequence in the event of armed conflict. However, NATO itself does not generally provide the capabilities that the organization utilizes, but rather relies on the capabilities provided by its member nations. Accordingly, the United States should work with allies and those elements of the private sector that have relevant undersea capabilities to establish an international Undersea Infrastructure Protection Corps, both to support NATO activity and because security for undersea infrastructures is inherently international. This corps should include both the private-sector builders/maintainers and the owners of undersea cables and pipelines. That group would organize the actions required to enhance the resilience that would be necessary in wartime.

    The countries and companies connected by cables and pipelines involve substantial numbers of US allies. According to one industry analysis, the top five undersea cable vendors are Alcatel-Lucent Enterprise (France), SubCom LLC (United States), NEC Corporation (Japan), Nexans (France), and Prysmian Group (Italy).96 In terms of ownership, US companies are significantly involved with Google, Facebook, Microsoft, and Amazon being significant investors in cables.97 With respect to undersea pipelines, there are multiple such pipelines in the North Sea, Baltic Sea, Mediterranean Sea, and the Gulf of Mexico, all, of course, involving US allies and/or the United States.“98 Accordingly, there should be sufficient geopolitical alignment with respect to establishing an Undersea Infrastructure Protection Corps, and while the precise arrangements will have to be negotiated, it is notable that several countries have already taken steps. The UK, Norway, and Italy are each organizing security efforts to enhance pipeline security, and the United States, the UK, and France have well-established undersea capabilities.99

    An international Undersea Infrastructure Protection Corps should have three areas of focus. First, as is true with respect to other information and communication technology networks, undersea cables will need the same type of effective cybersecurity. As noted above, several significant undersea cable owners are also companies that have been extensively involved in the defense of Ukraine’s ICT networks, including working with the United States and the UK. That operational experience and real-time experience with public-private coordination should provide a basis for extending such an approach to undersea cables.100

    Second, all undersea cables eventually come out of the sea to on-ground “landing points.” John Arquila has indicated that “concerns about the vulnerability of landing points, where the cables come ashore . . . has led to the idea of having many branch points near landfall.”101 Arquila also describes efforts “to improve landing-point security through concealment and hardening—including, in the latter case, the shielding with armor of the cable segments in shallower waters near landing points. . . . [and also use of] both surveillance technologies and increased on-site security.”102 An Undersea Infrastructure Protection Corps can build on such approaches.103

    Third, undersea infrastructures can be repaired, with cable repairs regularly undertaken for commercial reasons.104 However, as a report to the European Parliament describes, the availability of cable repair capabilities deserves review:


    A key and often neglected vulnerability of the cable infrastructure is the
    capabilities . . . for repair. The capabilities within Europe are very limited . . . The repair infrastructure is often not featured in risk analyses, although it is in larger-scale coordinated attack scenarios.105

    The proposed international Undersea Infrastructure Protection Corps should evaluate whether sufficient repair capability exists under the conditions that might occur if there were an active conflict and recommend such remediation steps as should be undertaken in the face of any deficiencies.

    I. Expand usage of commercial space-based capabilities

    In the Ukraine-Russia war, commercial space capabilities have been critical to Ukraine’s defense (as described above), as well as to maintaining governmental and economic functioning. The United States is already undertaking significant activities with the commercial space sector in the defense arena. The discussion below summarizes key elements of that effort and further proposes additional actions for the use of private-sector space capabilities that would enhance resilience in wartime for defense, government continuity, and the economy.

    First, in the defense arena, commercial capabilities are being increasingly relied upon to meet the military’s space launch requirements. Private-sector SpaceX Falcon 9 reusable rockets, which regularly put commercial satellites in place, have recently been used, for example, to launch “the first 10 of the planned 28 satellites [for defense] low-latency communications [and] missile warning/missile tracking.”106 That space architecture is planned to expand to 163 satellites.107 Similarly, other companies such as Rocket Lab have commercial launch capabilities.108 Continuing the use of commercial launch capabilities to generate military constellations as well assuring their availability in wartime will be critical to effective defense operations.

    Second, and as the foregoing suggests, the proliferation of satellites that the DOD can rely on in wartime significantly adds to the resilience of the space enterprise. As one report describes:


    The use of small, inexpensive satellites in a pLEO [proliferated low-Earth orbit] constellation also improves deterrence because of its increased cost imposition potential. The cost of a direct-ascent KE ASAT [kinetic antisatellite] is now greater than the target satellite, and because of the sheer number of assets an enemy must attack, proliferation reduces the effectiveness and impact of these weapons and other coorbital threats.109

    Third, commercial sensing capabilities can complement the military’s more exquisite sensing. Satellite companies such as Planet, Capella Space, and Maxar Technologies have supplied imagery upon Ukraine’s request, as noted above.110 The Defense Department has likewise been utilizing such commercial space-based, ground-sensing capabilities having, for example, recognized a “critical need for improved, large scale, situational awareness satisfied by less expensive, day/night, all-weather imaging satellites capable of filling gaps in space-based reconnaissance.”111 For example, Planet was awarded a National Reconnaissance Office (NRO) contract in October 2019 for “an unclassified, multi-year subscription service contract for daily, large-area, 3-5 meter resolution commercial imagery collection. . . . [for] access to new daily unclassified imagery over multiple areas of interest to military planners, warfighters, and the national security community.”112

    Moreover, commercial sensing is becoming increasingly capable, going beyond optical capabilities, with Umbra having launched commercial “radar-imaging” microsatellites whose capabilities can be used for “remote wildlife habitat protection, pollution and plastic waste tracking, oil spill detection, military intelligence gathering [italics added], live flooding estimation during storms, and more.113

    The Defense Department also has been seeking to expand its “space domain awareness” through collaboration with the private sector. Maxar Technologies, for example, recently signed a contract with the NRO which “includes a provision to experiment with using its satellites to provide ‘non-Earth’ data, which includes high-resolution imagery of the space environment.”114 That effort would complement ongoing actions by Space Force, whose “fleet of radars, known as the Space Surveillance Network, observe space from the ground and feed data into command and control systems that catalog space objects” to deal both with issues of “congestion and debris in low Earth orbit . . . and aggression from adversaries like Russia and China.”115

    Fourth, the information and communications technology networks being established by commercial providers can themselves be utilized for wartime operations, again as has been demonstrated by the use of Starlink in Ukraine. But Starlink would not be the only provider. Currently, another constellation consisting “of small, low-cost satellites under 100 kilograms capable of multiple rapid-launch” is under development, based “on an orbital mesh network of . . . commercial and military microsatellites,” which will be “capable of providing low-latency internet connectivity between sensors and weapons for military mission.”116 Future capabilities include the establishment of “free space optical networks” which will potentially have “immense benefits including high security, better data rates [and] fast installations, no requirement of licensed spectrum, best costs [and] simplicity of design,” and will be challenging to detect and to intercept “in view of small divergence of the laser beams.”117

    Governments plan to develop position, navigation, and timing capabilities—now generally done in medium-Earth orbit by the Global Positioning System or equivalent satellites—with a variety of capabilities including but not limited to low-Earth orbit capabilities.118 In the United States, Xona Space Systems is “developing PULSAR—a high-performance positioning, navigation, and timing (PNT) service enabled by a commercial constellation of dedicated [low-Earth orbit] satellites.”119

    Another application of commercial capabilities for defense space support is the use of the cloud for development of space-related software:


    The Space Development Agency awarded a $64 million contract to Science Applications International Corp. (SAIC) to develop a software applications factory for the agency’s low Earth orbit constellation [but] . . not [by] build[ing] an actual factory but [rather] a cloud-based development process to design, test and update software applications using a repeatable path.120

    In light of the very substantial ongoing interactions between the Department of Defense and the commercial space sector, as discussed above, the key issue for wartime is simply to ensure that the existing (and future) capabilities are available for use as required. That can be accomplished in the first instance by contractual arrangements along the lines of those utilized by DOD for support from the airline and maritime industries. By way of example, the Civil Reserve Air Fleet (CRAF) provides “selected aircraft from US airlines [which are] contractually committed to CRAF [to] augment Department of Defense airlift requirements in emergencies when the need for airlift exceeds the capability of military aircraft.”121

    The US Space Force is in process of developing the Commercial Augmentation Space Reserve (CASR) program. As with CRAF, CASR would seek to establish “voluntary pre-negotiated contractual arrangements” that would provide support to the military by ensuring that “services like satellite communication and remote sensing are prioritized for U.S. government use during national security emergencies.”122 Among the issues that Space Force presumably is discussing with the private sector in connection with CASR would be determining which services and in what amounts could reliably be provided in a wartime environment, whether such services could be based on existing (or planned) private-sector constellations or whether those would need to be expanded, what provisions would need to be made for satellite and/or ground station replacement in the event of adversary attacks, what provisions for indemnification need to be agreed upon, and what level of funding would be appropriate both to incentivize the private sector and to accomplish the requisite wartime tasks as well as to undertake planning and training prior to conflict.

    Relatedly, it is worth noting that the Defense Production Act authorizes the government to require the prioritized provision of services—which would include services from space companies—and exempts any company receiving such an order from liabilities such as inability to support other customers.123 However, it would be much more desirable—and much more effective—if the necessary arrangements were established in advance through a voluntary arrangement as the CASR program is seeking.

    J. Authorities and resources

    Undertaking the actions recommended above will require some important changes to governmental authorities as well as the provision of additional resources necessary to accomplish the recommended outcomes.

    Regarding authorities, the administration currently has the authority to establish a Critical Infrastructure Wartime Planning and Operations Council with government and private-sector membership (including, as requested, SLTTs); establish regional resilience collaboratives; and help facilitate the establishment of sector-specific coordinating mechanisms. The administration and the Congress should work together to establish the authorities necessary to:

    • Create an Integrated Cybersecurity Providers Corps.
    • Establish a national Cybersecurity Civilian Reserve Corps and expand National Guard cybersecurity capabilities.
    • Authorize Cyber Command to support key critical infrastructures in wartime.
    • Establish an international Undersea Infrastructure Protection Corps.
    • Expand the use of private-sector space capabilities.

    In undertaking such enactments as required, Congress should also evaluate whether any antitrust or other safe harbor exemptions would be necessary to allow for the desired level of collaboration.

    In terms of resources, funding, as noted above, will be required for each of the recommended activities. Including such costs as line items in the Defense Department budget would be appropriate to support each of the proposed activities as the activities are all to be undertaken in support of national defense in a wartime context. As a complement to line-item budgeting, Congress might also consider authorizing the use of transferable tax credits, which could be utilized as payment in order to offset the costs of the provision of capabilities and services prior to or in wartime.124 The precise nature of the funding arrangement might differ among the different activities. Space Force’s CASR initiative is a useful model but whatever the precise mechanism, it is important to recognize that the private sector would incur potentially significant costs including pre-conflict planning and training activities, and that those are being undertaken to support national defense.

    Conclusion

    The United States has made significant efforts in enhancing the resilience of critical infrastructures, but has not yet focused on how to support those infrastructures in wartime. The recommendations in this report provide a basis for such an effort. That effort should start now. Indeed, one of the lessons from Ukraine’s wartime experience is the importance of beginning as soon as possible. As one analysis states:


    . . . others seeking to replicate Ukraine’s model of success should recognise that building an effective cyber-defence posture is a marathon, not a sprint. Ukraine’s capacity to withstand Russia’s offensive stems from incremental improvements in its cyber defences over years of painstaking effort and investment. The specific plans and contingencies developed for the war would not have been possible without modernising national cyber-defence systems and raising the maturity levels of public and private critical infrastructure providers in the years leading up to the invasion. Take for example the unprecedented levels of threat intelligence sharing from external partners—undeniably a significant boon to Ukrainian situational awareness and ability to detect emerging threats. Without prior efforts to close visibility gaps, train defenders and adopt a more active cyber-defence posture, the ability to integrate and exploit this intelligence at scale would have been severely limited.125

    The private sector will have important roles in any future conflict in which the United States engages. To maximize that potential, there needs to be active development of the sixth domain, with the private sector being fully included in wartime constructs, plans, preparations, and actions, as recommended in this report.

    About the author

    Franklin D. Kramer is a distinguished fellow and board director at the Atlantic Council. Kramer has served as a senior political appointee in two administrations, including as assistant secretary of defense for international security affairs. At the Department of Defense, Kramer was in charge of the formulation and implementation of international defense and political-military policy, with worldwide responsibilities including NATO and Europe, the Middle East, Asia, Africa, and Latin America.

    In the nonprofit world, Kramer has been a senior fellow at CNA; chairman of the board of the World Affairs Council of Washington, DC; a distinguished research fellow at the Center for Technology and National Security Policy of the National Defense University; and an adjunct professor at the Elliott School of International Affairs of The George Washington University. Kramer’s areas of focus include defense, both conventional and hybrid; NATO and Russia; China, including managing competition, military power, economics and security, and China-Taiwan-US relations; cyber, including resilience and international issues; innovation and national security; and irregular conflict and counterinsurgency.

    Kramer has written extensively. In addition to the current report, recent publications include China and the New Globalization; Free but Secure Trade; NATO Deterrence and Defense: Military Priorities for the Vilnius Summit; NATO Priorities: Initial Lessons from the Russia-Ukraine War; “Here’s the ‘Concrete’ Path for Ukraine to Join NATO”; and Providing Long-Term Security for Ukraine: NATO Membership and Other Security Options.

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    1    “Multi-Domains Operations Conference—What We Are Learning,” Allied Command Transformation, April 8, 2022, https://www.act.nato.int/articles/multi-domains-operations-lessons-learned.
    2    Christine H. Fox and Emelia S. Probasco, “Big Tech Goes to War,” Foreign Affairs, October 19, 2022, https://www.foreignaffairs.com/ukraine/big-tech-goes-war.
    3    Department of Defense (DOD), 2022 National Defense Strategy, 7, https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/2022-NATIONAL-DEFENSE-STRATEGY-NPR-MDR.PDF.
    4    The report elaborates on the discussion of the private sector and the sixth domain in Franklin D. Kramer, NATO Deterrence and Defense: Military Priorities for the Vilnius Summit, Atlantic Council, April 18, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/nato-summit-military-priorities/.
    5    PPD-21 is in process of being updated. Tim Starks, “A Presidential Critical Infrastructure Protection Order Is Getting a Badly Needed Update, Officials Say,” Washington Post, May 11, 2023, https://www.washingtonpost.com/politics/2023/05/11/presidential-critical-infrastructure-protection-order-is-getting-badly-needed-update-officials-say/; White House, “Presidential Policy Directive—Critical Infrastructure Security and Resilience,” February 12, 2013, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil; William M. (Mac) Thornberry National Defense Authorization Act For Fiscal Year 2021, Pub. L. No. 116–283, 134 Stat. 3388 (2021), https://www.congress.gov/116/plaws/publ283/PLAW-116publ283.pdf; Cybersecurity and Infrastructure Security Agency (CISA), National Infrastructure Protection Plan and Resources,  accessed July 6, 2023, https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/national-infrastructure-protection-plan-and-resources; CISA, “About CISA,” accessed July 6, 2023, https://www.cisa.gov/about.
    6    DOD, National Defense Strategy 2022, 5.
    7    “Statement of General Glen D. VanHerck, Commander, United States Northern Command and North American Aerospace Defense Command Before the Senate Armed Services Committee,” March 23, 2023, 8-9, https://www.armed-services.senate.gov/imo/media/doc/NNC_FY23%20Posture%20Statement%2023%20March%20SASC%20FINAL.pdf.
    8    CISA, “The Attack on Colonial Pipeline: What We’ve Learned & What We’ve Done Over the Past Two Years,” May 7, 2023, https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years; Saheed Oladimeji and Sean Michael Kerner, “SolarWinds Hack Explained: Everything You Need to Know,” Tech Target, June 27, 2023, https://www.techtarget.com/whatis/feature/SolarWinds-hack-explained-Everything-you-need-to-know; and “Stop Ransomware,” CISA (website), accessed July 6, 2023, https://www.cisa.gov/stopransomware/resources.
    9    Office of the Director of National Intelligence (ODNI), Annual Threat Assessment of the U.S. Intelligence Community, February 6, 2023, 12, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2023-Unclassified-Report.pdf.
    10    ODNI, Annual Threat Assessment, 14.
    11    ODNI, Annual Threat Assessment, 10.
    12    David E. Sanger and Julian E. Barnes, “U.S. Hunts Chinese Malware That Could Disrupt American Military Operations,” New York Times, July 29, 2023, https://www.nytimes.com/2023/07/29/us/politics/china-malware-us-military-bases-taiwan.html.
    13    Cyber Peace Institute, “Case Study, Viasat,” June 2022, https://cyberconflicts.cyberpeaceinstitute.org/law-and-policy/cases/viasat. The case study describes the breath of the impact: “The attack on Viasat also impacted a major German energy company who lost remote monitoring access to over 5,800 wind turbines, and in France nearly 9,000 subscribers of a satellite internet service provider experienced an internet outage. In addition, around a third of 40,000 subscribers of another satellite internet service provider in Europe (Germany, France, Hungary, Greece, Italy, Poland) were affected. Overall, this attack impacted several thousand customers located in Ukraine and tens of thousands of other fixed broadband customers across Europe.”
    14    Microsoft Threat Intelligence, “A Year of Russian Hybrid Warfare in Ukraine,” March 15, 2023, 19, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10mGC.
    15    DOD, Military and Security Developments Involving the People’s Republic of China 2022, 127, https://media.defense.gov/2022/Nov/29/2003122279/-1/-1/1/2022-military-and-security-developments-involving-the-peoples-republic-of-china.pdf.
    16    Irene Sánchez Cózar and José Ignacio Torreblanca, “Ukraine One Year On: When Tech Companies Go to War,” European Council on Foreign Relations, March 7, 2023, https://ecfr.eu/article/ukraine-one-year-on-when-tech-companies-go-to-war/.
    17    Ariel E. Levite, Integrating Cyber Into Warfighting: Some Early Takeaways from the Ukraine Conflict, Working Paper, Carnegie Endowment for International Peace, April 2023, 14, https://carnegieendowment.org/files/Levite_Ukraine_Cyber_War.pdf.
    18    Elias Groll and Aj Vicens, “A Year After Russia’s Invasion, the Scope of Cyberwar in Ukraine Comes into Focus,” CyberScoop, February 24, 2023), https://cyberscoop.com/ukraine-russia-cyberwar-anniversary/.
    19    Groll and Vicens, “A Year After Russia’s Invasion.”
    20    Groll and Vicens, “A Year After Russia’s Invasion.” A report from Google, Fog of War: How the Ukraine Conflict Transformed the Cyber Threat Landscape, underscores the “unprecedented” nature of the efforts including “expanded eligibility for Project Shield, our free protection against distributed denial of service attacks (DDoS), so that Ukrainian government websites and embassies worldwide could stay online and continue to offer critical services” as well as “rapid Air Raid Alerts system for Android phones in the region; support for refugees, businesses, and entrepreneurs . . . and “compromise assessments, incident response services, shared cyber threat intelligence, and security transformation services—to help detect, mitigate and defend against cyber attacks.” See Threat Analysis Group, Fog of War, Google, February 2023, 2, https://services.google.com/fh/files/blogs/google_fog_of_war_research_report.pdf.
    21    Dan Black, Russia’s War in Ukraine: Examining the Success of Ukrainian Cyber Defences, International Institute for Strategic Studies, March 2023, 14, https://www.iiss.org/globalassets/media-library—content–migration/files/research-papers/2023/03/russias-war-in-ukraine-examining-the-success-of-ukrainian-cyber-defences.pdf.
    22    Emma Schroeder and Sean Dack, A Parallel Terrain: Public-Private Defense of the Ukrainian Information Environment, Atlantic Council, February 2023, 14, https://www.atlanticcouncil.org/wp-content/uploads/2023/02/A-Parallel-Terrain.pdf.
    23    Black, Russia’s War in Ukraine, 17-18.
    24    Schroeder and Dack, A Parallel Terrain, 16.
    25    Fox and Probasco, “Big Tech Goes to War,” 4.
    26    Levite, Integrating Cyber Into Warfighting, 17-18.
    27    Robin Fontes and Jorrit Kamminga, “Ukraine: A Living Lab for AI Warfare,” National Defense, March 24, 2023, https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare; their report notes that “the Russia-Ukraine war can also be considered the first conflict where AI-enhanced facial recognition software has been used on a substantial scale. In March 2022, Ukraine’s defense ministry started using facial recognition software produced by the U.S. company Clearview AI. This allows Ukraine to identify dead soldiers and to uncover Russian assailants and combat misinformation. What’s more, AI is playing an important role in electronic warfare and encryption. For example, the U.S. company Primer has deployed its AI tools to analyze unencrypted Russian radio communications. This illustrates how AI systems were constantly retrained and adapted, for example, to deal with idiosyncrasies in customized ways, such as colloquial terms for weaponry.”
    28    Fontes and Kamminga, “Ukraine: A Living Lab”; they also note that AI has also been used for the “spread of misinformation and the use of deep fakes as part of information warfare. AI has, for example, been used to create face images for fake social media accounts used in propaganda campaigns. While the spread of disinformation is not new, AI offers unprecedented opportunities for scaling and targeting such campaigns, especially in combination with the broad range of social media platforms.”
    29    Levite, Integrating Cyber Into Warfighting, 17.
    30    White House, “Presidential Policy Directive—Critical Infrastructure Security and Resilience, Definitions,” February 12, 2013, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil.
    31    Government Accountability Office (GAO), Critical Infrastructure Protection: Time Frames to Complete DHS Efforts Would Help Sector Risk Management Agencies Implement Statutory Responsibilities, February 2023, 7, https://www.gao.gov/assets/gao-23-105806.pdf.
    32    GAO, Critical Infrastructure Protection.
    34    GAO, Critical Infrastructure Protection, 8.
    35    CISA, “FSLC Charter and Membership,” accessed July 6, 2023, https://www.cisa.gov/fslc-charter-and-membership; CISA, “Critical Infrastructure Partnership Advisory Council (CIPAC),” accessed July 6, 2023, https://www.cisa.gov/resources-tools/groups/critical-infrastructure-partnership-advisory-council-cipac; CISA, “Government Coordinating Councils,” accessed July 6, 2023), https://www.cisa.gov/resources-tools/groups/government-coordinating-councils; and CISA, “Sector Coordinating Councils,” accessed July 6, 2023, https://www.cisa.gov/resources-tools/groups/sector-coordinating-councils.
    36    White House, “Office of National Cyber Director,” accessed July 6, 2023, https://www.whitehouse.gov/oncd/.
    37    White House, National Cybersecurity Strategy Implementation Plan, July 2023, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
    38    Transportation Security Administration (TSA), “TSA Issues New Cybersecurity Requirements for Airport and Aircraft Operators,” March 7, 2023, https://www.tsa.gov/news/press/releases/2023/03/07/tsa-issues-new-cybersecurity-requirements-airport-and-aircraft; TSA, “TSA Issues New Cybersecurity Requirements for Passenger and Freight Railroad Carriers,” October 18, 2022, https://www.tsa.gov/news/press/releases/2022/10/18/tsa-issues-new-cybersecurity-requirements-passenger-and-freight; TSA, “TSA Revises and Reissues Cybersecurity Requirements for Pipeline Owners and Operators, July 21, 2022, https://www.tsa.gov/news/press/releases/2022/07/21/tsa-revises-and-reissues-cybersecurity-requirements-pipeline-owners; and Environmental Protection Agency, “EPA Cybersecurity for the Water Sector,” accessed July 6, 2023, https://www.epa.gov/waterriskassessment/epa-cybersecurity-water-sector.
    39    CISA, “State and Local Cybersecurity Grant Program,” accessed July 4, 2023, https://www.cisa.gov/state-and-local-cybersecurity-grant-program.
    40    CISA, “JCDC FAQs, What Are JCDC’s Core Functions,” accessed June 24, 2023, https://www.cisa.gov/topics/partnerships-and-collaboration/joint-cyber-defense-collaborative/jcdc-faqs.
    41    CISA, “Cybersecurity Training and Exercises,” accessed July 4, 2023, https://www.cisa.gov/cybersecurity-training-exercises.
    43    CISA, “JCDC 2023 Planning Agenda.”
    44    CISA, “JCDC 2023 Planning Agenda.”
    45    “National Cyber Investigative Joint Task Force,” Federal Bureau of Investigation, accessed July 18, 2023, https://www.fbi.gov/investigate/cyber/national-cyber-investigative-joint-task-force; White House, National Cybersecurity Strategy Implementation Plan, July 2023, 21, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
    46    National Security Agency, NSA Cybersecurity Collaboration Center, accessed September 7, 2023, https://www.nsa.gov/About/Cybersecurity-Collaboration-Center/
    47    Government of Finland, Ministry of Defense, Security Committee, Security Strategy for Society, November 2, 2017, 98, https://turvallisuuskomitea.fi/wp-content/uploads/2018/04/YTS_2017_english.pdf.
    48    Government of Finland, Security Strategy for Society, 5.
    49    Government of Finland, Security Strategy for Society, 5.
    50    Government of Finland, Security Strategy for Society, 7.
    51    Government of Finland, Security Strategy for Society, 7-8.
    52    CISA, Federal Senior Leadership Council Charter, accessed July 4, 2023, https://www.cisa.gov/sites/default/files/publications/fslc-charter-2021-508.pdf.
    53    The FBI-led National Cybersecurity Investigative Joint Task Force is, of course, a joint task force, but it is not oriented to wartime activities.
    54    The National Cybersecurity Implementation Plan requires DOD to issue an “updated DOD cyber strategy,” and while the full scope of homeland defense goes beyond cyber, the two efforts might be undertaken in a coordinated fashion. White House, National Cybersecurity Strategy Implementation Plan, July 2023, 21, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
    55    Northern Command, “Defending the Homeland,” accessed July 6, 2023, https://www.northcom.mil/HomelandDefense.
    56    Government of Finland, Security Strategy for Society, 10.
    57    CISA, “CISA Regional Office Fact Sheets,” August 4, 2021, https://www.cisa.gov/resources-tools/resources/cisa-regional-office-fact-sheets; and CISA, “State and Local Cybersecurity Grant Program.”
    58    Section 331(c)(1)(a), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf
    59    Section 331(d), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf
    60    Section 331(c)(2), Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023, https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf.
    61    CISA, “Information Sharing: A Vital Resource,” accessed July 2, 2023, https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/information-sharing-vital-resource.
    62    Analysis and Resilience Center for Systemic Risk, “Who We Are,” https://systemicrisk.org/.
    63    Analysis and Resilience Center for Systemic Risk, “What We Do,” https://systemicrisk.org/.
    64    FS-ISAC, “Critical Providers Program FAQ,” accessed July 2, 2023),  https://www.fsisac.com/faq-criticalproviders.
    65    FS-ISAC, “Critical Providers.”
    66    White House, National Cybersecurity Strategy, March 2023, 4, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
    67    The National Cybersecurity Strategy Implementation Plan takes a step in this direction by requiring the Department of Commerce to publish a “Notice of Proposed rulemaking on requirements, standards, and procedures for Infrastructure-as-a-Service (IaaS) providers and resellers.” White House, National Cybersecurity Strategy Implementation Plan, July 2023, 25, https://www.whitehouse.gov/wp-content/uploads/2023/07/National-Cybersecurity-Strategy-Implementation-Plan-WH.gov_.pdf.
    68    CISA, “Support to Critical Infrastructure at Greatest Risk, (‘Section 9 Report’) Summary,” February 8, 2021, https://www.cisa.gov/resources-tools/resources/support-critical-infrastructure-greatest-risk-section-9-report-summary.
    69    “Census Bureau Reports There Are 89,004 Local Governments in the United States,” US Census Bureau, August 30, 2012, https://www.census.gov/newsroom/releases/archives/governments/cb12-161.html.
    70    “Amendment to Rules Comm. Print 118–10 Offered by Mr. Green of Tennessee,” June 27, 2023, https://amendments-rules.house.gov/amendments/Cyber%20in%20National%20Guard%20Amendment230630140357934.pdf.
    71    Section 1521, Rules Committee Print 118–10 Text of H.R. 2670, The National Defense Authorization Act for Fiscal Year 2024, June 23, 2023, https://rules.house.gov/sites/republicans.rules118.house.gov/files/RCP_xml_1.pdf.
    72    Office of Senator Jacky Rosen, “Rosen, Blackburn Introduce Bipartisan Bills to Strengthen Federal Response to Cyberattacks,” March 21, 2023, https://www.rosen.senate.gov/2023/03/21/rosen-blackburn-introduce-bipartisan-bills-to-strengthen-federal-response-to-cyberattacks/.
    73    Section 1116, Senate Armed Services Committee, National Defense Authorization Act for Fiscal Year 2024, accessed September 2, 2023,  https://www.armed-services.senate.gov/imo/media/doc/fy24_ndaa_bill_text.pdf.
    74    Black, Russia’s War in Ukraine, 14.
    75    “Joint Cyber Reserve Force,” Gov.UK, accessed June 3, 2023, https://www.gov.uk/government/groups/joint-cyber-reserve-force.
    76    Republic of Estonia, Information System Authority, “Cyber Security in Estonia 2023,” 51, https://www.ria.ee/media/2702/download.
    77    National Guard, “National Guard Cyber Defense Team,” accessed September 2, 2023,https://www.nationalguard.mil/Portals/31/Resources/Fact%20Sheets/Cyber%20Defense%20Team%202022.pdf.
    78    Franklin D. Kramer and Robert J. Butler, “Expanding the Role of the National Guard for Effective Cybersecurity,” The Hill, April 21, 2021, https://thehill.com/opinion/cybersecurity/550740-expanding-the-role-of-the-national-guard-for-effective-cybersecurity/.
    79    Mark Pomerleau, “Lawmakers Pushing for More Integration of National Guard, Reserve Personnel into DOD Cyber Forces,” Defensescoop, June 12, 2023, https://defensescoop.com/2023/06/12/lawmakers-pushing-for-more-integration-of-national-guard-reserve-personnel-into-dod-cyber-forces/.
    80    Cyber Command, “Hunt Forward Operations,” November 15, 2022, https://www.cybercom.mil/Media/News/Article/3218642/cyber-101-hunt-forward-operations/.
    81    “2023 Posture Statement of General Paul M. Nakasone,” US Cyber Command, March 7, 2023, https://www.cybercom.mil/Media/News/Article/3320195/2023-posture-statement-of-general-paul-m-nakasone/.
    82    Cyber Command, “Hunt Forward Operations.”
    83    Cyber Command, “Hunt Forward Operations.”
    84    This is a nontrivial requirement, as there is a significant shortage of highly skilled cyber talent, and retaining such talent has been a challenge for US Cyber Command. As Gen. Nakasone recently observed, “someone that has this type of training is very, very attractive to those on the outside.” Jim Garamone, “Cyber Command, NSA Successes Point Way to Future,” DOD News, March 8, 2023, https://www.defense.gov/News/News-Stories/Article/Article/3322765/cyber-command-nsa-successes-point-way-to-future/.
    85    There are important legal issues regarding the interface between the Fourth Amendment and constitutional wartime powers, but establishing a consensual regime—which should be in the self-interest of critical infrastructures —would avoid those questions.
    86    There are approximately 550 existing and planned undersea cables; see TeleGeography, “Submarine Cable Frequently Asked Questions,” accessed July 2, 2023, https://www2.telegeography.com/submarine-cable-faqs-frequently-asked-questions. There are far fewer undersea pipelines, but for Europe, important pipelines include those in the North, Baltic, and Mediterranean seas with “about 8,000 kilometers (5,000 miles) of oil and gas pipelines crisscross[ing] the North Sea alone.” Lorne Cook, “NATO Moves to Protect Undersea Pipelines, Cables as Concern Mounts over Russian Sabotage Threat,” Associated Press, June 16, 2023, https://apnews.com/article/nato-russia-sabotage-pipelines-cables-infrastructure-507929033b05b5651475c8738179ba5c.
    87    There is at least some indication that Ukraine undertook those Nord Stream actions. See Julian E. Barnes and Michael Schwirtz, “C.I.A. Told Ukraine Last Summer It Should Not Attack Nord Stream Pipelines,” New York Times, June 13, 2023, https://www.nytimes.com/2023/06/13/us/politics/nord-stream-pipeline-ukraine-cia.html.
    88    White House, “G7 Hiroshima Leaders’ Communiqué,” May 20, 2023, paragraph 39, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.
    89    White House, “Quad Leaders’ Summit Fact Sheet,” May 20, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/quad-leaders-summit-fact-sheet/.
    90    Though there is at least some indication that Ukraine undertook the Nord Stream actions. Barnes and Schwirtz, “C.I.A. Told Ukraine Last Summer It Should Not Attack Nord Stream Pipelines.”
    91    Jens Stoltenberg, “Press Conference by NATO Secretary General Jens Stoltenberg Following the Meeting of NATO Ministers of Defense in Brussels,” Remarks (as delivered), NATO, June 16, 2023, https://www.nato.int/cps/en/natohq/opinions_215694.htm?selectedLocale=en.
    92    Christian Bueger, Tobias Liebetrau, and Jonas Franken, Security Threats to Undersea Communications Cables and Infrastructure–Consequences for the EU, In-Depth Analysis Requested by the SEDE Sub-committee, European Parliament, June 2022, 31, https://www.europarl.europa.eu/RegData/etudes/IDAN/2022/702557/EXPO_IDA(2022)702557_EN.pdf.
    93    See “NATO to Set Up New Unit to Monitor Pipelines/Other Critical Infrastructure,” Pipeline Technology Journal, June 19, 2023, https://www.pipeline-journal.net/news/nato-set-new-unit-monitor-pipelines-other-critical-infrastructure.
    94    Stoltenberg, “Press Conference.”
    95    Stoltenberg, “Press Conference.”
    96    “Frequently Asked Questions, Submarine Cable Systems Market,” MarketsandMarkets, accessed July 1, 2023, https://www.marketsandmarkets.com/Market-Reports/submarine-cable-system-market-184625.html.
    97    “Submarine Cable Frequently Asked Questions,” TeleGeography, accessed July 1, 2023.
    98    Underwater Arteries—the World’s Longest Offshore Pipelines,” Offshore Technology, September 9, 2014, https://www.offshore-technology.com/features/featureunderwater-arteries-the-worlds-longest-offshore-pipelines-4365616/; “After Nord Stream Attack, Europe Scrambles to Secure Subsea Pipelines,” Maritime Executive, October 2, 2022, https://maritime-executive.com/article/after-nord-stream-attack-europe-scrambles-to-secure-subsea-pipelines; “Gulf of Mexico Data Atlas,” National Centers for Environmental Information (“There are over 26,000 miles of oil and gas pipeline on the Gulf of Mexico seafloor,”), accessed July 1, 2023, https://www.ncei.noaa.gov/maps/gulf-data-atlas/atlas.htm?plate=Gas%20and%20Oil%20Pipelines.
    99    “After Nord Stream Attack,” Maritime Executive; and Christiana Gallardo, “UK and Norway Team Up to Protect Undersea Cables, Gas Pipes in Wake of Nord Stream Attacks,” Politico, June 28, 2023, https://www.politico.eu/article/uk-norway-team-up-protect-undersea-cables-gas-pipelines/.
    100    For a series of specific recommendations, see Sherman, Cyber Defense Across the Ocean Floor.
    101    John Arquila, “Securing the Undersea Cable Network,” Hoover Institution, 2023, 4, https://www.hoover.org/sites/default/files/research/docs/Arquilla_SecuringUnderseaCable_FINAL_0.pdf.
    102    Arquila, “Securing the Undersea Cable Network,” 8, 9.
    103    For recommendations on enhancing the cybersecurity of undersea cables, see also Justin Sherman, Cyber Defenses Across the Ocean Floor, Atlantic Council, September 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/cyber-defense-across-the-ocean-floor-the-geopolitics-of-submarine-cable-security/.
    104    Mick Green et al., “Submarine Cable Network Security,” Slide Deck, International Cable Protection Committee, April 13, 2009, https://www.iscpc.org/publications/.
    105    Bueger, Liebetrau, and Franken, “Security Threats to Undersea Communications Cables,” 53.
    106    DOD, “Space Development Agency Successfully Launches Tranche 0 Satellites,” April 2, 2023, https://www.defense.gov/News/Releases/Release/Article/3348974/space-development-agency-successfully-launches-tranche-0-satellites/.
    107    DOD, “Space Development Agency.”
    108    Rocket Lab, “About Us,” accessed July 5, 2023, https://www.rocketlabusa.com/about/about-us/.
    109    Charles S. Galbreath, “Building U.S. Space Force Counterspace Capabilities: An Imperative for America’s Defense,” Mitchell Institute, June 2023, 16, https://mitchellaerospacepower.org/wp-content/uploads/2023/06/Building-US-Space-Force-Counterspace-Capabilities-FINAL2.pdf.
    110    Fontes and Kamminga, “Ukraine: A Living Lab.”
    111    “Planet Labs, Inc.—Peacetime Indications & Warning,” Defense Innovation Unit (DIU), 2019, https://www.diu.mil/solutions/portfolio/catalog/a0Tt0000009En0yEAC-a0ht000000AYgyYAAT.
    112    “Planet Labs,” DIU.
    113    “Umbra Launches World’s Most Capable Commercial Radar-Imaging Satellite,” Umbra, June 25, 2021, https://umbra.space/blog/umbra-launches-worlds-most-capable-commercial-radar-imaging-satellite.
    114    Courtney Albon, “Maxar Explores New Uses for Earth Observation Satellites,” C4ISRNET, May 30, 2023, https://www.c4isrnet.com/battlefield-tech/space/2023/05/30/maxar-explores-new-uses-for-earth-observation-satellites/.
    115    Albon, “Maxar Explores New Uses.”
    116    Offset-X: Closing the Deterrence Gap and Building the Future Joint Force, Special Competitive Studies Project (a bipartisan, nonprofit effort), May 2023, 51, https://www.scsp.ai/wp-content/uploads/2023/05/Offset-X-Closing-the-Detterence-Gap-and-Building-the-Future-Joint-Force.pdf.
    117    Suresh Kumar and Nishant Sharma, “Emerging Military Applications of Free Space Optical Communication Technology: A Detailed Review,” 2022 Journal of Physics Conference Series (2022), 1, https://iopscience.iop.org/article/10.1088/1742-6596/2161/1/012011/pdf.
    118    The European Commission has undertaken an evaluation of seven different systems that it found to have met technical requirements. See L. Bonenberg, B. Motella, and J. Fortuny Guasch, Assessing Alternative Positioning, Navigation and Timing Technologies for Potential Deployment in the EU, JRC Science for Policy Report, EUR 31450 EN (Luxembourg: Publications Office of the European Union, 2023), https://doi.org/10.2760/596229.
    119    “Safran to Provide GNSS Simulation Solutions for Xona Space System’s Low-Earth-Orbit Constellation and Navigation Signal,” Electronic Engineering Journal, April 6, 2023, https://www.eejournal.com/industry_news/safran-to-provide-gnss-simulation-solutions-for-xona-space-systems-low-earth-orbit-constellation-and-navigation-signals/.
    120    Sandra Erwin, “SAIC to Develop ‘Software Factory’ for Space Development Agency,” SpaceNews, June 8, 2023, https://spacenews.com/saic-to-develop-software-factory-for-space-development-agency/.
    121    US Air Force, “Civil Reserve Air Fleet,” accessed July 4, 2023, https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104583/civil-reserve-air-fleet/.
    122    Sandra Erwin, “Space Force to Further Define Details of a ‘Commercial Space Reserve,’” Space News, July 25, 2023, https://spacenews.com/space-force-to-further-define-details-of-a-commercial-space-reserve.
    123    50 US Code, §§ 4511 and 4557.
    124    See Franklin D. Kramer, Melanie J. Teplinsky, and Robert J. Butler, “We Need a Cybersecurity Paradigm Change,” The Hill, February 15, 2022, https://thehill.com/opinion/cybersecurity/594296-we-need-a-cybersecurity-paradigm-change/.
    125    Black, Russia’s War in Ukraine, 39.

    The post The sixth domain: The role of the private sector in warfare appeared first on Atlantic Council.

    ]]>
    Kink in the chain: Eight perspectives on software supply chain risk management https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/kink-in-the-chain-eight-perspectives-on-software-supply-chain-risk-management/ Wed, 27 Sep 2023 20:58:00 +0000 https://www.atlanticcouncil.org/?p=817942 Software supply chain attacks are popular, impactful, and are used to great effect by malicious actors. To dive deeper on this topic, we asked eight experts about these threats and how policymakers can help protect against them.

    The post Kink in the chain: Eight perspectives on software supply chain risk management appeared first on Atlantic Council.

    ]]>
    Now more than ever, society depends on software. Whether it is the cloud computing behind an email service, a new fifth-generation (5G) telecommunications deployment, or the system used to monitor a remote oil rig, software has become an essential and pervasive facet of modern society – making software supply chain security remains an under appreciated domain of national security policymaking. Software supply chain attacks are popular, they are impactful, and are used to great effect by a variety of malicious actors.

    To dive deeper on this topic – and the recent policy action taken to address this problem – we asked eight experts about these threats and how policymakers can help protect against them:

    What has changed the least over the last 2-3 years when it comes to the exploitation of software supply chains in the wild?

    Two significant changes that we have seen over the course of the last several years when it comes to software supply chain exploits are 1) new, novel attack patterns such as dependency confusion and 2) more sophistication and opportunistic attacks – including supply chain attacks priming another supply chain attack. Dependency confusion attacks exploit the fact that many package managers – including pip, npm, and RubyGems – pull from public code registries for a package before private registries. If a specific package exists in a private registry, an attacker could register a package of the same name with the public registry – thus pulling down the malicious version on the public registry when a new install occurs. These attacks are hard to mitigate and target widely used tools – making them particularly worrisome. Another new trend is using software supply chain compromises to set up and execute other supply chain compromises. The most recent example of this was the 3CX compromise, where the company was initially comprised through the use of a malware-laced version of the X_Trader financial software, and then that initial access was used to compromise the desktop version of their app. These two accelerating trends demonstrate a frustrating reality of supply chain compromises – that malicious actors are always looking to innovate.

    William Loomis, Associate Director, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRL), Atlantic Council

    On the one hand, the problem of attacks on accounts of legitimate open source maintainers improved, also thanks to the introduction of 2FA by different package ecosystems. However, there’s an increase of malware campaigns using techniques like typosquatting and dependency confusion, sometimes with hundreds of malicious packages published in short time frames (PyPI even had to suspend the registration of new packages in May 2023). Those attacks will continue, because the marginal costs for attackers to conduct such campaigns is very low – thanks to a high degree of automation and reuse, which renders them profitable even if very few people fall victim to such attacks. Developers need to get used to such attacks, just like we got used to email spam. What will become more critical are social engineering attacks on legitimate projects, e.g. through the submission of malicious pull requests by fake profiles, which are harder to detect than the simplistic malicious code typically contained in a typo-squatting package, for example. Going forward, we will see an arms-race on all of those fronts, as in any other IT security domain.

    Henrik PlateSecurity Researcher, Endor Labs; Co-author of the Taxonomy of Attacks on OSS Supply Chains and the Risk Explorer

    How have we seen the tools/methods available to help support effective supply chain risk management evolve in the last two years?

    There is more focus on capabilities that test open-source software before it is ingested into an enterprise. Such testing focuses on detecting malicious open-source software rather than just vulnerable code. This is in response to a significant increase in the amount of intentionally malicious open-source software contributed by attackers. This increase is particularly concerning as malicious open-source software is designed to deliver an exploit immediately upon ingestion into an enterprise whereas vulnerable code mainly has the potential to be exploitable and is often more of an issue when in production. 

    There is also a lot of emphasis on the security of pipelines within the Software Delivery Lifecycle (SDLC) and ensuring that the basic configuration of these pipelines meets certain, specific guidelines. This ensures that even if an attacker has access to the internal SDLC, it remains protected. More recently we’ve seen a focus on bringing attestations to the SDLC so that it is possible to validate the provenance of a software artefact throughout the SDLC and into production, which makes it more difficult for an attacker to fully exploit the supply chain.

    Jon MeadowsCiti Tech Fellow, CITI; Governing Board Member, Open Source Security Foundation

    Since SolarWinds, which policy activities/programs have had the most impact/least impact in your view?

    This is an intimidating question with a simple answer: there is no evidence that any U.S. government policies or activities have improved software supply chain security, yet. Despite the talents and efforts of many elected leaders, dedicated public servants, and concerned citizens, there is no data, analysis or you-name-it that links the relatively few concrete software supply chain security initiatives of Uncle Sam to improved software supply chain security. There are, of course, promising initiatives, such as Department of Homeland Security funding for software bill of materials (SBOM) tools or the military’s adoption of low-vulnerability count container images. But broader changes—the type that might happen after Russian intelligence agencies hijack important software updates and ride their way into government networks—have mostly remained the stuff of op-eds and think-tank seminar rooms. Email me at jsmeyers@chainguard.dev if you disagree!  

    John Speed MeyersNonresident Senior Fellow, Atlantic Council; Security Data Scientist, Chainguard

    What policy activity/program in this space are you most hopeful about?

    At the Office of the National Cyber Director, our theory of the case is that in order to secure the cyber supply chain, we need to start with the atomic unit of the system. Generally speaking, many cybersecurity issues start with a line of code. That means that the most atomic unit of the cyber supply chain is the programming language itself. If the programming language is not safe, then nothing we build in that language is safe, either. The policy idea I am most excited about right now is driving adoption of memory safe programming languages. Memory safety describes the underlying property of a programming language that allows software developers to introduce certain types of cybersecurity bugs that affect how memory is used, both spatially and temporally. The main memory unsafe languages we are primarily focused on in the U.S. Government are C and C++. 

    The problem around memory safety is daunting because of the high proliferation of C and C++ in our critical infrastructure. In fact, several of the big security vulnerabilities we have seen in the past two decades were caused by memory unsafety, including the Heartbleed vulnerability in 2014 and the WannaCry Ransomware attack in 2017. However, I have two reasons to be optimistic that we can succeed here.  

    First, the technical solutions already exist. There are many memory safe languages all across the tech stack that we can – and should – use instead. Second, research shows that migrating code from a memory unsafe language to a memory safe language can eliminate up to 70% of the software’s vulnerabilities. This statistic is quite remarkable. Even though there are no silver bullets to securing the cyber supply chain, this is a high-impact example of where good engineering practices are intersecting with good cybersecurity policy. I believe that advocating for the adoption of memory safe languages is one of the most impactful actions the collective ecosystem can take to drive the security of our cyber supply chain. 

    Anjana RajanAssistant National Cyber Director, White House

    Where should government not be engaged in supporting software supply chain security? Where does government power or associated implications of that focus hurt more than they might help? 

    Due to lack of evidence it is not clear where Governments can let the free market address the challenge of software supply chain security without engagement. Whilst some vendors and end user organizations are being proactive a significant proportion has yet to mobilize The risk with any Government intervention is organizations look to compliance and the management of legal risk as opposed to addressing the intent. The systemic risk of software supply chains requires our end objective to be more than compliance and instead a life-cycle set of evidenced solutions which quantifiably improve the situation for producers and customers. 

    Ollie WhitehouseIncoming Chief Technology Officer, National Cyber Security Centre (NCSC), UK Government

    What still needs to be done when it comes to government efforts to support effective software supply chain risk management?

    The answer to this question depends on the level of ambition and available resources of governments – in a recent report, we identified three levels of government efforts. The US and many other Western states will have already covered what we call the basics, like including secure software development practices in software developer education and in workforce development efforts. In some cases, they have also put into practice tried and tested policy instruments, like requiring software-developing government agencies to develop and publish organizational coordinated vulnerability disclosure (CVD) policies and requiring such policies from organizations supplying the public sector through public procurement guidelines.

    As a result, to significantly strengthen software supply chain security in the long run, they will need to invest significant resources and implement policy instruments that have not yet been widely put into practice elsewhere. Areas ripe for policy innovation in this field include regulation mandating software-developing entities to implement secure software development practices, national legal frameworks for CVD, the refinement of software bill of materials (SBOM) data formats and technical tools that build on SBOM data and regulation mandating SBOM use, and product liability regimes that cover software (or amend an existing one to include software). In doing so, governments have to assess, among other things, the impact of such interventions on small and medium enterprises and individual software developers, who may be disproportionately affected by regulatory burdens.  

    Dr. Alexandra PaulusProject Director, Cybersecurity Policy and Resilience, Stiftung Neue Verantwortung (SNV)

    Where does open source software fit into this picture?

    Most of the open-source software supply chain attacks in the dataset rely on malicious packages masquerading as legitimate open-source packages, often through typosquatting or similar techniques. Some of these impersonations are strengthened by the lack of verification of details on some open-source hosting websites, as in the dataset’s example case of StarJacking. With this technique, attackers can add credibility to malicious packages on npm, PyPI, or Yarn by displaying the GitHub statistics of other packages on the page of their package, since the package manager does not verify the connection between the GitHub link and the package. Another common variation of attacks involving open-source software relies on expired resources or links. In one case, researchers found that thousands of maintainer email accounts on npm are linked to email addresses with expired domains, which could be purchased by malicious actors to take over those accounts without notifying the maintainers. Similarly, in another case in the dataset, a package relied on an expired S3 bucket to download an add-on. A malicious actor took over the bucket and replaced the add-on with malware, which was then included in the widely used package. These cases all emphasize the importance of intentional, comprehensive tracking and management of open-source dependencies, which can help ensure companies are not affected by these relatively trivial but still very impactful techniques. 

    Sara Ann BrackettResearch Associate, Cyber Statecraft Initiative, Digital Forensic Research Lab, Atlantic Council


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post Kink in the chain: Eight perspectives on software supply chain risk management appeared first on Atlantic Council.

    ]]>
    Software supply chain security: The dataset https://www.atlanticcouncil.org/content-series/cybersecurity-policy-and-strategy/software-supply-chain-security-the-dataset/ Wed, 27 Sep 2023 14:52:00 +0000 https://www.atlanticcouncil.org/?p=818016 Want to dive deeper into the Breaking Trust database? You have come to the right place.

    The post Software supply chain security: The dataset appeared first on Atlantic Council.

    ]]>
    Software supply chain attacks are a regular feature of cybersecurity but remain understudied as a tactic of malicious actors and a tool of cyber statecraft. This dashboard provides an interactive visualization of the dataset and its major trends. The charts break down incidents by several criteria, including scale and impact, when they took place, the responsible actors (if attributed), targeted codebase, and attack and distribution vectors.

    A list of every incident in this dataset is available at the bottom of the page, and both this list and all charts and graphs can be further filtered by the slider and drop-down menus below. Clicking on any value will offer the option to filter the entire dashboard. To download the filtered version of the tableau dashboard and the dataset, please use the download button in the bottom right. Definitions of key terms and data categories can be found by hovering over values in each graph or chart the codebook, which can be downloaded along with the full dataset below.

    To download the full dataset or its codebook, use the buttons below.

    Update 3 – 2023 – 250 entries, 168 software supply chain attacks and 82 disclosures

    Update 2 – 2021 – 161 entries, 117 software supply chain attacks and 44 disclosures

    Update 1 – 2020 – 115 entries, 82 software supply chain attacks and 33 disclosures 


    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post Software supply chain security: The dataset appeared first on Atlantic Council.

    ]]>
    Warrick quoted in Bloomberg Government https://www.atlanticcouncil.org/insight-impact/in-the-news/warrick-quoted-in-bloomberg-government/ Thu, 21 Sep 2023 18:54:50 +0000 https://www.atlanticcouncil.org/?p=685363 Thomas Warrick discusses the risks of not renewing DHS authorities, which are set to expire given partisan divides in Congress.

    The post Warrick quoted in Bloomberg Government appeared first on Atlantic Council.

    ]]>

    On September 21, Forward Defense nonresident senior fellow Thomas Warrick was quoted in Bloomberg Government. He expressed concern about congressional gridlock and its subsequent effects on the expected expiration of several DHS protection measures. Warrick warns that these safeguards are integral to US national security.

    What I worry is about the idea that we’re not shoring up our defenses at a time when it’s hard to predict where the next attack or serious threat is going to come from.

    Arnold Punaro

    Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

    The post Warrick quoted in Bloomberg Government appeared first on Atlantic Council.

    ]]>
    The 5×5—Bridging the divide: Cyber conflict in international relations https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-bridging-the-divide-cyber-conflict-in-international-relations/ Wed, 20 Sep 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=672524 Researchers discuss the relationship between the cyber policy and academic communities, and share their advice for those interested in breaking into each community.

    The post The 5×5—Bridging the divide: Cyber conflict in international relations appeared first on Atlantic Council.

    ]]>
    This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

    This summer, we drew on insights from across the academic and policymaking communities for two editions of the 5×5 focused on the nature of cyber operations and their role in international relations. In June, we published an edition that featured a panel of scholars whose deliberate works have helped inspire and shape government cyber strategies in recent years. We followed that up with a second edition featuring perspectives from a range of current and former policymakers, whose day-to-day work has had to navigate government politics, processes, and other realities to confront present cyber challenges. 

    These two editions can be accessed here: 

    The contributions from members of these two communities are valuable in their own rights, but when taken together, provide a fuller picture of the intersection of the theory and practice of cyber conflict. Geopolitics, technologies, and the use of operations in and through the cyber domain are constantly evolving, creating new challenges for understanding cyber conflict. Continued collaboration between the scholarly and policy communities stands to deepen understanding for all involved. 

    To reflect on this conversation, we brought together four researchers whose career experiences span both the scholarly and policy worlds, to share their thoughts and advice for those interested in breaking into each community.

    #1 What, in your opinion, is the biggest misconception about cyber conflict’s role in international relations theory?

    Michael Fischerkeller, researcher, Information, Technology, and Systems Division, Institute for Defense Analyses

    “I consider two [misconceptions] as equally important. The first is that independent cyber operations offer de-escalatory offramps in a militarized crisis between nuclear states. There is no empirical evidence to support this view and it is at odds with what crisis bargaining theory suggests. The second is that states’ primary cyber behaviors are best understood as an intelligence contest, vice cyber strategic competition. The intelligence contest argument is too narrow to serve as a guide to policy, as it struggles to account for the wide range of strategic outcomes (gains and losses) that are a consequence of the speed, scope, and scale of cyber campaigns/operations, and needlessly ties the definition of strategic significance to coercion theory.” 

    Jackie Kerr, senior research fellow for defense and technology futures, Center for Strategic Research, National Defense University’s Institute for National Security Studies:  

    The views expressed in this interview are those of Dr. Kerr and do not reflect the official policy or position of the National Defense University, the Department of Defense, or the US government.  

    “For those new to the field, it might be easy to imagine cyber conflict as a very narrow subfield of security studies or the study of military strategy—focused on a specific and rather technical domain. But the biggest challenge in the field, as I see it, is actually the degree of interdisciplinary and cross-silo thinking that is needed. The digital technologies and networks that constitute cyberspace cut across many areas of society and policy, are broadly accessible, and allow for novel emerging innovations.  Understanding the potential areas of conflict, competing interests, and roles of different stakeholders and governance mechanisms—not to mention how to address these in relation to various domestic and international institutions, actors, and levels of contestation—requires a broad range of expertise.”  

    Erica Lonergan, assistant professor, School of International and Public Affairs, Columbia University

    “An enduring and significant misconception is that cyberspace is a dangerous, escalatory domain and that conflict in cyberspace is likely to spill over into the kinetic realm. This is an assumption that exists at the highest levels of policymaking. Secretary Austin, for example, has described cyberspace as an escalatory environment, and US President Joe Biden has said that if the United States ends up in a conventional conflict, it will likely be because of a cyberattack. However, academic research has revealed little evidence of cyber escalation. The implications of such misconceptions are significant as they continue to shape US cyber strategy and policy.” 

    Joshua Rovner, associate professor, School of International Service, American University

    “That cyber conflict is akin to war. Cyber conflict is not anything like the bloody business of war, where states use violence to coerce their enemies and wreck their forces. It is about information superiority. States use cyberspace for espionage, deception, and propaganda. Their basic goal is the same: understanding the world better than their rivals.”

    #2 What would you like to see scholars and students studying cyber conflict better understanding about policymaking?

    Fischerkeller: “[I would like them to understand] that once a theory has set a foundation for strategy, policymakers benefit from what Alexander George and Richard Smoke call ‘contingent generalizations.’ These comprise policy insights that are informed by context—for example, the distinct geopolitical conditions of competition, militarized crisis, and armed conflict; the interactions between nuclear and non-nuclear states; and state versus non-state actors.” 

    Kerr: “I think it is important for students and scholars who are interested in policy to gain as much granular familiarity as possible with policymaking processes and institutions relevant to their work. This can provide insights into the silos, bureaucratic frictions, and institutional politics involved—some of the real dilemmas faced by policymakers—all of which can be quite helpful for delineating what kinds of policy recommendations might be most valuable and to whom.”  

    Lonergan: “Policymaking is often a messy, complicated, bureaucratic process. As scholars, we like to debate the intellectual merits and substance of various ideas and strategies, carefully examining documents that the government publishes. But the reality is that those documents reflect underlying bureaucratic politics and organizational processes—they are the result of bargaining, logrolling, standard operating procedures, parochial interests and biases, and so on. Therefore, it is important to take the behind-the-scenes processes into account when evaluating strategy and policy.”  

    Rovner: “They should start by studying something else. Instead of focusing on cyber, they should start by studying diplomacy, intelligence, and war. They should study the policy process with care, noting especially the ways in which weighty theoretical issues play out in mundane matters like budgets and authorities. Only then should they start thinking about cyber.”

    #3 What is a scholarly piece of literature on cyber conflict that you recommend aspiring policymakers read closely and why?

    Fischerkeller: “This would be a function of their policy portfolio. If they are interested in cyber organizational development and capacity building, I would recommend Max Smeet’s No Shortcuts: Why States Struggle to Develop a Military Cyber-Force. If they are interested in the nexus of cyber campaigns/operations and the nuclear weapons enterprise, I would recommend Herbert Lin’s Cyber Threats and Nuclear Weapons. If they are interested in national strategy, I would recommend Cyber Persistence Theory: Redefining National Security in Cyberspace. And so on.” 

    Kerr: “As a first recommendation for aspiring policymakers, I would actually recommend that they take a step further back and read something on the history of computing, cybernetic theory, the Internet and its governance, and the different ways these have been thought about in connection both to interdependent economic growth and democracy, and to conflict and strategic competition. Norbert Wiener’s Cybernetics would not be a bad starting point. Thomas Rid’s The Rise of the Machines and Laura DeNardis’ The Global War for Internet Governance would also be excellent. I recommend this because this is the larger perspective that will help people entering the policy arena see the connections between more narrowly circumscribed policy debates of this moment and the longer-term evolution and bigger issues at stake.”  

    Lonergan: “Aspiring policymakers should read Lennart Maschmeyer’s 2021 International Security article, ‘The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations.’ This piece provides an alternative perspective on cyber conflict that will likely challenge some of the conventional wisdom in policy circles, because Maschmeyer argues that cyber conflict is more like subversion than it is like conflict. The ‘subversive trilemma’ in cyberspace, in which there are tradeoffs between speed, intensity, and control of cyber operations, accounts for the gap between expectations and reality of cyber conflict.”  

    Rovner: “Robert Chesney and Max Smeets edited Deter, Disrupt, or Deceive: Assessing Cyber Conflict as an Intelligence Contest, in which contributors debate whether cyber conflict is best seen in terms of intelligence. This debate has important implications for policy. It speaks to several fundamental questions. Which agencies and organizations should be responsible for cyber operations? Who should oversee them? How should they measure success and failure?” 

    More from the Cyber Statecraft Initiative:

    #4 How has understanding of cyber conflict evolved in the last five years within the cyber policy community and how do you see it evolving in the next five years?   

    Fischerkeller: “There has been a notable shift to the recognition that the primary strategic threat in and through cyberspace is from campaigns whose effects are short of armed-attack equivalence but whose cumulative gains are of strategic significance. Additionally, there has been a recognition that cybersecurity is national security. And, unfortunately, states cyber behaviors have demonstrated that extraordinary, explicit efforts to cultivate voluntary, non-binding cyber norms have met with limited success.” 

    Kerr: “The last five years have been a productive time for innovative thinking in the field. There have been serious efforts to understand complex issues, including the nature of strategic interactions, different adversary conceptions of the domain, cross-domain interaction and escalation dynamics, the relationships of cyber conflict with intelligence competition and with other cyber-enabled forms of conflict—and the list goes on. While these efforts have led to significant insights, the continuing evolution of global politics, technology, and cyberspace itself keeps pushing forward new challenges for both policy and theory. I think the intersections between thinking on cyber policy, artificial intelligence, and other emerging areas of technology competition and cooperation will be important areas to watch.”  

    Lonergan: “A significant inflection point in cyber policy took place five years ago. In 2018, the Defense Department published a new cyber strategy anchored in the concept of Defend Forward and US Cyber Command promulgated its first vision statement guided by the idea of ‘persistent engagement.’ Both define a broader and more assertive role for the US military in cyberspace. But we still lack real metrics that enable experts to evaluate the outcomes of these approaches. Looking ahead over the next five years, I hope the policy community focuses on assessing the implementation of these strategies, with an eye toward gauging how they integrate and are aligned with broader US strategic goals.”  

    Rovner: “The policy debate has become more interesting and expansive. New ideas about the logic of cyber conflict, and the nature of different cyber actors, have entered the chat. This has happened in part because scholars have deliberately sought to speak to policy, and their research has nudged the policy community to think harder about the uses and limits of cyberspace operations. It helps that many of these scholars have experience in government, the military, and the intelligence community. The quality of their research—and the clarity of their writing—has probably disabused policymakers of the idea that cyber issues are only comprehensible to technical specialists. The next five years will be interesting, mostly because we will have a huge amount of data on current conflicts to explore. Information from Russia-Ukraine war and the ongoing US-China competition will help put our theories to the test.”

    #5 How can scholars and policymakers of cyber conflict better incorporate perspectives from each other’s work?

    Fischerkeller: “The military uses the phrase ‘right seat ride’ to describe a process whereby an incoming commander stays at the hip of an outgoing commander to gather in-depth knowledge of the historical, present, and future challenges facing the command. A similar model is equally valuable for policymakers and scholars. Policy shops ought to leverage scholar-in-residence programs or, alternatively, the Intergovernmental Personnel Act that allows for the temporary assignment of skilled personnel between the federal government and state and local governments, colleges and universities, tribal governments, federally funded research and development centers, and other eligible organizations. These approaches are particularly relevant for cyber policy, as much of the background that informs cyber policy cannot be discovered by scholars via open-source research.” 

    Kerr: “There are so many areas where mutual learning is possible, and I have seen a lot of this going on that is productive. My first recommendation is to get involved in the communities that have developed to deliberately bridge this gap. People know each other, attend workshops together, read and comment on each other’s work, and really facilitate more innovative thinking for all involved. There also are opportunities for individuals to rotate between scholarly and policymaking roles—whether entering the policy arena temporarily from academia or taking a period off from government service to conduct research at a think tank or university. Going in either direction is a great way to learn.”  

    Lonergan: “This challenge is not unique to the field of cyber conflict. Bridging the gap between academics and policymakers is an important and enduring issue in the international relations field. What makes this even more complex in cyberspace is the multistakeholder nature of the cyber domain, which significantly expands the ecosystem of relevant parties, each of which has unique perspectives, interests, and expertise. Therefore, seeking out opportunities to engage with this diverse community—encompassing not just academics and beltway bureaucrats, but also the private sector, non-governmental organizations, big tech, civil society organizations, and so on—will enrich the understandings of all involved.”  

    Rovner: “[They can do so] by stepping away from their day jobs, at least for a while. Policymakers who spend a little time in academia get the chance to think about the bigger picture, and to think about how their work fits in. Mid-career master’s degrees are particularly useful here, as are programs with fewer time commitments, like MIT’s Seminar XXI. The opposite is also true. Scholars who routinely interact with policymakers are likely to get a more detailed sense of cyberspace competition. Spending time in government can be illuminating.”

    #6 What is one piece of advice you have for scholars interested in making a more direct impact on cyber policymaking?

    Fischerkeller: “Write concise, peer-reviewed essays that speak directly to a current or likely future cyber challenge with the intention of submitting those essays to well-established online fora for publication consideration.” 

    Kerr: “There are many things you can do here, some of which I have already mentioned. But one of the most important things that I will stress here is that human relationships are key. There is no substitute for getting to know people in the policy world and having regular enough interaction to understand what they are wrestling with and where scholarly research can help. Whether this happens through attending the same conferences, reading and engaging with the same policy-relevant publications, or fellowship stints in government service, academics who get to know and engage regularly with people in the policy community will benefit from learning how policymakers think about the issues, and iteratively contributing to the existing policy debates. For this, they also need to learn where and how to publish output that will be picked up and seen as relevant in the policy circles. This will not always be the same output as is relevant to within-discipline academic prestige or tenure track progression, but the two objectives can also be mutually beneficial.” 

    Lonergan: “First and foremost, scholars should familiarize themselves with what is going on in the policymaking realm—just as they would when tackling a new research project. It is important to take care to understand the significant policy work that has already been accomplished, prior efforts that have been less successful and why, and so on. I would also encourage scholars to actively engage in dialogues and venues that bring together scholars and practitioners, like roundtables or other events hosted at think tanks, or find ways of getting involved in track II or track 1.5 dialogues.” 

    Rovner: “When you are starting a new project, plan on three products: a peer-reviewed article in a scholarly journal, a policy paper summarizing your research, and an op-ed. Thinking about a project with these goals in mind helps broaden your audience, and it forces you to think about how to get your ideas across to policy professionals who are more or less familiar with cyber issues.”

    #7 What is the biggest difference between writing for a scholarly audience vs. writing for a policymaking audience?

    Fischerkeller: “Importantly, one difference should not be the quality and depth of research supporting one’s arguments. The format in which those claims are presented differs, however, as many, perhaps most policymakers prefer to read concise presentations rather than twenty-five-page articles with over one hundred footnotes. Additionally, policymakers are often interested in options rather than a definitive argument in support of a single viewpoint.” 

    Kerr: “A key element in either type of writing is to really know your audience and know where you can add value. Do not underestimate your audience in either direction.  While scholars bring extensive theoretical, conceptual, and methodological rigor, policymakers often have significantly more first-hand experience and day-to-day knowledge of empirical data or precise processes relevant to the area of inquiry. For a scholarly audience, the goal is often to advance theoretical arguments within an academic discipline, often by publishing long articles or books through lengthy peer review processes. For a policy audience, some of the theory, concepts, and rigor from academia can absolutely be valuable, but they must relate to practical approaches to address fast-moving policy challenges. Writing for a policy audience also should be written in a style, format, and length that can be rapidly absorbed by busy professionals. This writing usually is much shorter and more concise than long-form academic writing, responding quickly to real-world events, and avoiding discipline-specific jargon. It also is important to write for outlets that are known and read within policy communities.” 

    Lonergan: “The biggest difference lies in the ‘so what’ question. For scholarly writing, researchers usually aim to formulate and answer research questions that speak to, build on, or challenge core theoretical and empirical issues in the discipline; the ‘so what’ is a function of how that research engages with a robust academic body of work. For policy writing, the ‘so what’ is entirely different—even if the main insights may stem from academic research. What matters in this area is how a research question or topic directly informs or speaks to questions of policy.” 

    Rovner: “[The biggest difference is] length. Good scholarship is a conversation with the past, as the saying goes. This means scholars need to spend time situating their work in a broader field, footnoting everything, criticizing one another’s work, and proposing new questions to encourage new arguments. Research articles and books are long and sometimes quite dense. Engaging scholarly work takes time. Because policymakers do not have the luxury of time, good policy pieces are shorter. They get to the point, eschewing the paraphernalia of academic writing in favor of the bottom line. Scholars who write for policy are ruthless about chopping up their research into digestible portions. Especially good scholars keep all the background in mind, just in case an interested policymaker wants to do a deeper dive.” 

    Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

    The Atlantic Council’s Cyber Statecraft Initiative, part of the Atlantic Council Technology Programs, works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

    The post The 5×5—Bridging the divide: Cyber conflict in international relations appeared first on Atlantic Council.

    ]]>