
When Reuters reported last week that U.S. cybersecurity officials are weighing a proposal to cut the deadline for patching actively exploited vulnerabilities from roughly two to three weeks down to three days, it was not hard to imagine how the federal security community received it. Those focused on the accelerating threat landscape probably welcomed it: the pressure to move faster is real and well documented. Those responsible for actually executing patches across complex, distributed civilian agency environments likely had a very different read: for systems built on legacy infrastructure, where every patch requires testing before deployment, three days may be less a deadline than a wishful benchmark.

Both sides are right. Both sides are also missing the point.

The proposal deserves credit for naming something real: frontier AI models—Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber among them—have compressed the window between disclosure and exploitation from weeks to hours. That compression is not hypothetical. We’ve tracked it in live threat data. The defensive response does need to accelerate. But accelerating the clock on the same broken workflow doesn’t solve the underlying problem—it just makes the failure faster.

The real problem with the federal government’s cyber defense isn’t the timeline. The underlying architecture is broken. And until that architecture changes, no timeline mandate will close the gap between when threats emerge and when agencies are actually protected.

The Fundamental Mismatch

Let me describe the system most federal civilian agencies are still operating inside, because it matters for understanding why the three-day proposal lands the way it does.

A vulnerability is disclosed. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) adds it to the Known Exploited Vulnerabilities (KEV) catalog, often days or weeks after the first public evidence of exploitation. A Binding Operational Directive is issued. Agency teams receive notification through their intelligence feeds, which may be a curated briefing that runs on a weekly cycle. Analysts manually query their configuration management database (CMDB) and vulnerability scanners to determine whether affected software is present in their environment. That process requires pivoting between tools that don't talk to each other. The scan data may be hours or days stale. Shadow IT and legacy components may not appear at all.

Once exposure is confirmed, escalation happens through ticketing systems and Microsoft Teams channels. Leadership asks for context on financial and mission impact. Someone assembles a business case manually. A change management process begins. In most federal environments, emergency patches still require testing windows.

Now compress all of that to three days. The defenders aren’t slow because they lack urgency. They’re slow because the systems they depend on weren’t designed for the speed adversaries now operate at. The challenge is compounded by the fact that accelerating timelines and modernizing intelligence workflows are competing priorities for agencies already managing constrained budgets and staffing. This is a structural reality that any serious proposal must account for.

Cutting the clock is not the same as fixing the clock.

What AI Changes—And What It Doesn’t

The AI angle in this debate deserves more precision than it typically gets. The latest frontier AI models represent a meaningful step change in offensive capability. They can identify previously unknown vulnerabilities, accelerate exploit development, and adapt attack chains in near real time. Exploitation timelines that were once measured in weeks are now measured in hours for a growing share of disclosed vulnerabilities. That compression is real, and it’s accelerating.

But there’s a less-discussed dimension: AI is also being deployed offensively to perform reconnaissance at scale. Attackers are quietly probing exposed systems, cataloging vulnerable configurations, and identifying high-value targets before any formal disclosure exists. In our analysis of the Cisco Catalyst SD-WAN vulnerability cluster disclosed earlier this year, we observed malicious IPs systematically probing one of the affected CVEs across a seven-week window. No finished intelligence report described that activity in real time, because finished reports aren’t written at the speed scanners operate.

The recognition that AI speeds exploitation is correct. But relying on faster patching mandates as a policy response only addresses the back half of the problem. The front half, the reconnaissance and early-signal phase where the attack actually begins, remains structurally invisible to most federal intelligence architectures. This is the gap that CISA’s proposal cannot close by itself.

The Federal Government’s Unique Position

The federal government faces cyber defense challenges that differ materially from even the most complex private-sector environments. This is not a criticism. It is simply the context that has to shape how we think about what “faster” actually requires.

Federal civilian agencies operate enormous portfolios of legacy systems, many running software that was never designed to be patched on short cycles. FedRAMP authorization processes and FISMA compliance frameworks create necessary but time-consuming prerequisites for change. Many agencies lack dedicated threat intelligence functions and rely heavily on CISA’s centralized guidance for situational awareness. Agency security teams are distributed across departments with widely varying levels of maturity, tooling, and staffing. And the national security stakes—for systems holding sensitive citizen data, supporting critical infrastructure, or enabling government continuity—mean that a botched patch can create its own category of harm.

Any serious proposal for accelerating cyber defense in this environment has to start with these realities, not abstract them away. What federal agencies need is not a faster version of the same reactive process. They need a fundamentally different model where the answer to “are we affected by this threat?” isn’t something analysts have to reconstruct from scratch every time a vulnerability drops.

Reframing the Risk Model

In our earlier work on reframing cyber risk, my colleague Jerry Caponera proposed a simple but meaningful shift in how we express the problem: Cyber Risk = (Threat × Exposure × Impact) ÷ Preemption.

The traditional model — Threat × Likelihood × Impact — assumes a reactive posture. Something happens, and you respond. The preemption variable changes that equation. It represents the organization’s ability to detect emerging adversary activity earlier, understand how those threats map to internal exposures more completely, and remediate weaknesses before attackers exploit them. The stronger that capacity, the lower the probability that a threat becomes an incident.
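To make the arithmetic concrete, here is a minimal sketch of the reframed model with hypothetical, normalized scores. The values are illustrative assumptions only, not an actual scoring methodology.

```python
# Illustrative only: hypothetical 0-10 scores for a single internet-facing
# system during active exploitation of a newly disclosed vulnerability.

def preemptive_risk(threat: float, exposure: float, impact: float,
                    preemption: float) -> float:
    """Reframed model: Cyber Risk = (Threat x Exposure x Impact) / Preemption."""
    return (threat * exposure * impact) / preemption

threat, exposure, impact = 9, 8, 7

# Weak preemption: stale scans, manual correlation, weekly briefing cycles.
print(preemptive_risk(threat, exposure, impact, preemption=1))  # 504.0

# Strong preemption: early-signal intelligence, continuous exposure validation.
print(preemptive_risk(threat, exposure, impact, preemption=4))  # 126.0
```

The threat term is largely outside an agency's control; the denominator is the variable defenders can actually build. That is the point of the reframe.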

For the federal government, this reframe has concrete implications. Cutting the patching deadline addresses impact mitigation after the threat is confirmed. Preemption addresses whether you ever get to the point where you need emergency patching in the first place.

The distinction matters because the two require completely different investments. Shortening deadlines requires faster execution of existing processes. Building preemptive capacity requires changing the architecture of how threat intelligence, exposure data, and business risk are connected, and how quickly that connection happens.

Three Things Federal Agencies Actually Need

If the goal is to make the federal government genuinely more defensible against AI-accelerated threats, three capabilities need to come online, not as separate programs, but as a connected system.

No. 1: Real-Time, Client-Tailored Threat Intelligence Delivered Into Working Environments

Most federal agencies rely on centralized threat intelligence, such as CISA advisories, ISAC feeds, and vendor briefings, consumed through portals separate from the tools where analysts actually work. The result is a structural pivot: analysts have to leave their SIEM, EDR, or ticketing environment, navigate to an intelligence portal, search for context, determine relevance to their specific agency environment, and return with findings. That process takes time that disappears when exploitation windows shrink from weeks to hours.

The alternative is intelligence that arrives already contextualized to the agency’s environment, answering not just “is this threat real?” but “does this threat matter to our specific systems, configurations, and attack surface?” And it needs to arrive inside the tools where investigation and response already happen, not in a separate tab.

This isn’t an incremental improvement to existing intel distribution. It requires AI systems continuously correlating external threat signals with internal asset data, alert history, and control posture, and delivering that correlation as a unified picture, not a set of artifacts to be manually assembled.
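As a rough sketch of what that correlation looks like, consider the join below between an external threat signal and an internal asset inventory. Every name, field, and record here is a hypothetical placeholder for illustration, not a description of any specific product's pipeline.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    product: str
    version: str
    internet_facing: bool
    mission_critical: bool

# Hypothetical external signal: a product under active probing, observed
# before any advisory or KEV entry exists.
signal = {
    "product": "ExampleWebGateway",
    "affected_versions": {"7.0.1", "7.0.2"},
    "activity": "mass scanning from known-malicious infrastructure",
}

# Hypothetical internal inventory, kept current by continuous discovery.
inventory = [
    Asset("proxy-01", "ExampleWebGateway", "7.0.2", True, True),
    Asset("proxy-02", "ExampleWebGateway", "6.9.9", True, False),
    Asset("build-07", "OtherPackage", "1.4.0", False, False),
]

# The correlation an analyst would otherwise assemble by hand: which assets
# match the signal, surfaced where investigation and response already happen.
matched = [a for a in inventory
           if a.product == signal["product"]
           and a.version in signal["affected_versions"]]

for asset in matched:
    print(f"{asset.hostname}: exposed to {signal['activity']} "
          f"(internet-facing={asset.internet_facing}, "
          f"mission-critical={asset.mission_critical})")
```

The hard part is not the join itself; it is keeping both sides of it continuously current and pushing the result into the tools analysts already use.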

No. 2: Operationalized Intelligence That Compounds Across the Threat Lifecycle

One of the most persistent inefficiencies in federal threat intelligence programs is the rate at which intelligence gets rebuilt from scratch. A major vulnerability drops. Analysts assemble context: threat actor history, related CVEs, MITRE ATT&CK techniques, prior targeting of federal infrastructure. That work is often done independently across agencies. Then the next vulnerability drops, and the process repeats.

What federal agencies need instead is intelligence that accumulates, where the work of understanding one threat informs the response to the next, where detection engineering, incident response, and reporting are fed by a shared, continuously updated knowledge base rather than parallel investigations. This is what intelligence operations maturity actually looks like in practice. It’s the difference between intelligence as a consumption activity and intelligence as an organizational capability.
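A toy illustration of the difference, assuming a simple shared store keyed by threat actor; the structure and field names below are invented for this sketch.

```python
from collections import defaultdict

class ThreatKnowledgeBase:
    """Shared context that grows with each investigation instead of being
    reassembled from scratch when the next vulnerability drops."""

    def __init__(self):
        self.by_actor = defaultdict(dict)  # actor name -> accumulated findings

    def record(self, actor: str, cve: str, techniques: list[str], note: str):
        entry = self.by_actor[actor]
        entry.setdefault("cves", set()).add(cve)
        entry.setdefault("techniques", set()).update(techniques)
        entry.setdefault("notes", []).append(note)

    def context_for(self, actor: str) -> dict:
        """Prior work on this actor, available to detection engineering,
        incident response, and reporting alike."""
        return self.by_actor.get(actor, {})

kb = ThreatKnowledgeBase()
kb.record("ExampleActor", "CVE-0000-0001", ["T1190"], "targeted federal web apps")

# Months later a related vulnerability drops; prior context is already there.
kb.record("ExampleActor", "CVE-0000-0002", ["T1190", "T1505.003"], "reused infrastructure")
print(kb.context_for("ExampleActor")["techniques"])  # both techniques, no rework
```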

At the scale of the federal civilian agency enterprise, this isn’t an aspiration. It’s a requirement. Individual agencies cannot independently sustain the analyst hours required to produce finished intelligence at the pace AI-accelerated threats are generating it.

No. 3: Continuous Exposure Management Grounded in Control Validation and Financial/Mission Risk

The CISA KEV catalog is genuinely valuable. But it describes a reactive inventory: vulnerabilities that are known and already being exploited. For an agency trying to make defensible decisions about where to prioritize limited remediation resources, “known exploited” is only one input among many, and it arrives late in the timeline.

What’s missing is continuous validation of control effectiveness, not periodic assessment or quarterly scans. Continuous measurement of whether the defenses that are supposed to be in place are actually functioning as designed across the systems that matter most to mission continuity.

When a new vulnerability drops, the question shouldn’t be “do we run this software?” It should be “which of our assets run this software, which are internet-facing, which compensating controls are actually active and verified, and what is the operational impact if this exploit lands here?” That question requires live telemetry, not static configuration management databases. And the answer needs to reach decision-makers in language they can act on. Not just CVSS scores, but mission impact and, for appropriate stakeholders, financial exposure.
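As a hedged sketch of what answering that question continuously might look like, the prioritization function below scores affected assets using live telemetry fields. The fields, weights, and thresholds are illustrative assumptions, not a standard.

```python
def exposure_priority(asset: dict) -> int:
    """Rank an asset against a newly disclosed vulnerability.
    Fields and weights are illustrative placeholders only."""
    score = 0
    if asset["runs_affected_software"]:
        score += 40
    if asset["internet_facing"]:
        score += 30
    # A documented compensating control only lowers priority if validation
    # telemetry confirms it is actually active, not merely on paper.
    if not asset["compensating_control_verified"]:
        score += 20
    score += asset["mission_impact"]  # e.g., 0-10 from a mission dependency map
    return score

fleet = [
    {"name": "portal-gw-01", "runs_affected_software": True, "internet_facing": True,
     "compensating_control_verified": False, "mission_impact": 9},
    {"name": "lab-test-04", "runs_affected_software": True, "internet_facing": False,
     "compensating_control_verified": True, "mission_impact": 2},
]

for asset in sorted(fleet, key=exposure_priority, reverse=True):
    print(asset["name"], exposure_priority(asset))  # portal-gw-01 ranks far higher
```

The same output, translated into mission and dollar terms, is what needs to reach decision-makers.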

The GAO and OMB have both documented the gap between how federal agencies model cyber risk and how that risk connects to actual mission outcomes. Closing that gap requires exactly this kind of continuously updated, evidence-backed exposure management.

The Architecture That Makes the Shift Real

The frameworks for this shift already exist. Gartner’s Unified Cyber Risk Intelligence (UCRI) model describes the architecture: fusing external threat signals with internal exposure data and business risk into a shared intelligence fabric that informs decisions across the entire security function. The Continuous Threat Exposure Management (CTEM) framework describes the operating rhythm: not periodic assessment cycles, but continuous identification, validation, and prioritization of exposure as conditions change.

Both frameworks have been taking hold in private-sector security programs over the past several years. The federal government, for all the structural reasons outlined above, has been slower to adopt them. The CISA proposal is, inadvertently, a forcing function: if the alternative to a three-day patching mandate is an architecture that can actually identify and begin mitigating risk before formal disclosure, which is demonstrably achievable, then the policy conversation shifts from “how do we move faster?” to “how do we build the system that makes speed possible?”

That’s the right conversation.

What We’ve Seen Work

This isn’t theoretical. We’ve been tracking what happens when organizations, including some with federal sector overlap, build this kind of architecture. The results are measurable.

In the case of the Fortinet FortiWeb vulnerability (CVE-2025-64446), Dataminr’s AI systems detected early exploit activity originating from a specific IP address on October 7, 2025. There was no published CVE, no vendor advisory, no KEV entry. The signal came from where threat actors actually operate. Customers who received that intelligence had 38 days before CISA added the vulnerability to the KEV catalog on November 14.

During those 38 days, they investigated for early signs of intrusion, hardened defenses, and adjusted detection rules. By the time the broader security community was beginning to triage the risk, those organizations had already reduced their exposure.

Based on our proprietary cyber loss data, a 10-day reduction in the detection-to-containment gap translates to more than $8 million in potential avoided loss per incident. That’s not an abstract metric. It’s the financial value of a connected system versus a disconnected one.

For federal agencies operating under appropriated budgets and facing oversight from OMB and the Hill on cyber program effectiveness, that kind of quantified risk reduction is exactly what defensible investment decisions require.

What the Federal Government Should Take From This Moment

The CISA proposal will likely move forward in some form. The pressure to accelerate is real, the AI threat trajectory is not reversing, and the signal that faster action is required is legitimate. That signal should be heard.

But leaders in the federal cyber ecosystem should use this moment to push for something more consequential than a compressed timeline: a genuine architectural shift in how threat intelligence, exposure management, and risk quantification are connected across civilian agencies.

That means investing in intelligence systems that deliver relevance at the agency level, not just at the enterprise catalog level. It means building or acquiring the capability to continuously validate control effectiveness, not assume it. It means creating a shared model for expressing cyber risk in mission and financial terms that can inform decisions from agency CISO to OMB to Congress. And it means recognizing that the human analysts doing this work, who are already stretched across a federal security workforce that has seen significant attrition, can’t bear this load manually. The systems that connect threat intelligence to exposure to business risk need to operate autonomously, at the speed of the threat, not the speed of the weekly briefing cycle.

The goal isn’t to comply with a three-day mandate. The goal is to build a defense architecture so well-connected that three days is sufficient, because the work of understanding exposure and prioritizing action has already been done before the mandate clock starts.

Author
Tim Miller, Global Field CTO & Chief Cybersecurity Strategist
May 8, 2026