What Glasswing reveals about the speed of vulnerabilities, and who should be responsible for fixing them.
When Anthropic introduced Project Glasswing in April 2026, it surfaced thousands of high-severity vulnerabilities across foundational systems, including a 27-year-old flaw in OpenBSD’s firewall stack, a 17-year-old remote code execution issue in FreeBSD, and multiple privilege-escalation paths in the Linux kernel. These weren’t edge cases; they sat in the components modern infrastructure runs on.
Equally telling is who joined: AWS, Google, Microsoft, Palo Alto Networks, CrowdStrike, Zscaler, and the Linux Foundation. When a coalition of that scale converges on the same conclusion, it’s no longer a fringe view.
The real significance of Glasswing is what it reveals about speed. Vulnerability discovery is no longer bound by human effort; it is continuous, scalable, and increasingly autonomous. The window between “vulnerability exists” and “vulnerability is exploitable” has collapsed from months to minutes. The limiting factor is no longer detection. It is how fast software can be upgraded.
Remediation is the new bottleneck
Enterprise environments aren’t blind to risk; if anything, they’re saturated with it. They have more signals, alerts, and intelligence than ever, yet breach frequency and impact haven’t declined proportionally.
Remediation in infrastructure is governed by operational realities: changes must be validated, coordinated, and introduced carefully to avoid disruption. Maintenance windows exist because they’re necessary. But that same discipline introduces delay, and when vulnerabilities can be exploited within minutes, remediation cycles measured in days or weeks create a persistent gap. That gap is where attackers operate.
Why the network is uniquely exposed
Traditional network infrastructure was built for persistence and reliability, not continuous high-frequency change. Updates are deliberate and infrequent because a single misstep can affect an entire campus, site, or business.
At the same time, networks sit in the path of all communication, enforce access decisions, and are often reachable, making them critical to operations and highly attractive targets. The result is a structural mismatch: high-value risk, low remediation velocity. In a slower threat environment, that trade-off was acceptable. In a faster one, it stops being defensible.
When ownership becomes liability
What should it mean to own a network? Real ownership is about intent and outcomes: deciding what the network should do, who should reach what, how it should behave, and what risk posture is acceptable. That is the part of ownership the business cares about, and it should never be outsourced.
But traditional ownership bundled intent with operations: design, deploy, patch, upgrade, debug, troubleshoot. In a slower threat environment, that bundle was workable. As exploitation timelines compress, the operational layer has to move continuously and without delay, and most enterprise environments aren’t structured for that. The same operational responsibility that once represented control increasingly resembles a liability.
The shift isn’t about giving up ownership. It is about separating what ownership should mean (intent and outcomes) from what it has historically been forced to include: operations that no longer scale.
Why true NaaS is the defensible answer
NaaS is often framed as a financial or operational choice: subscription instead of capex, simpler deployment, lower overhead. Those benefits are real, but they’re not the primary reason it matters today.
When the vendor that builds the software also operates the network, responsibility for patching shifts to the entity best equipped to execute. Software updates are no longer discrete events tied to customer change cycles; they become continuous, automated processes embedded into the service. Vulnerabilities can be addressed as they’re identified, not when the next maintenance window allows.
This is the same delivery model every modern SaaS vendor already operates on. Software ships continuously through CI/CD pipelines, rolls out behind the scenes, and places no installation burden on the customer. There is no good reason networking should be the exception.
NaaS is the operational expression of the intent/operations split. Customers continue to define policy, security intent, access requirements, and compliance expectations. The vendor automates infrastructure operation, manages the lifecycle, delivers upgrades, and remediates vulnerabilities. The customer retains authority over what the network should do; the vendor is accountable for keeping it secure, reliable, and current.
What to demand from your networking vendor
A subscription alone doesn’t change the underlying risk profile. If customers still own software patching, upgrade cycles, or exposed management interfaces, the constraints remain. Whatever the label, the right model is defined by accountability, not pricing. The vendor owns the software lifecycle end-to-end, operates the control and management planes to minimize exposure, and delivers continuous, non-disruptive updates. That accountability needs to be explicit in the contract, not implied by the service model.
This reshapes procurement. Performance, coverage, features, and cost still matter, but they no longer capture the full picture of risk. The more important question is how quickly a deployed network can reduce exposure once a vulnerability is known, and who is contractually responsible for making it happen.
The bottom line
Vulnerability discovery is accelerating. Exploitation timelines are compressing. The cost of delay is rising. Traditional models, where customers own infrastructure and vendors hand off software patches to schedule and deploy, are increasingly misaligned with reality.
The first test is simple. Ask your current vendor one question: who is contractually responsible for patching the next zero-day, and how quickly can the fix be deployed across every site?