On the morning of February 3, 2026, a trader at Jefferies coined a word: "SaaSpocalypse." He used it to describe the mood on his desk - "very much 'get me out' style selling" - as $285 billion of market capitalization disappeared from software, financial services, and asset management stocks in a single session.[1] Thomson Reuters shed $8.2 billion. LegalZoom dropped 20%. India's Nifty IT index posted its worst month since October 2008.[1]
The trigger was a product announcement: Anthropic had released Claude Cowork, a set of plugins for legal, financial, and sales workflows.[1] The market's logic was brutally simple. If one AI agent can handle an entire workflow end-to-end, why maintain ten separate software licenses to do it piecemeal?
Markets move faster than facts warrant. But the underlying pressure is real, and the repricing it set in motion reveals something important: the transition from generative AI to agentic AI is not a capability upgrade. It is a structural economic disruption. And the institutions tasked with governing that disruption are, at best, three steps behind.
The SaaS boom of the 2010s was built on a specific economic logic: software replaced spreadsheets, charged per seat, and enjoyed near-zero marginal cost. Margins of 75 to 80 percent were routine. Retention compounded. The model was nearly frictionless - until it met something with an even lower marginal cost.[1]
AI agents cost pennies per task, operate continuously, and can read documentation, Slack threads, and legacy macros that no SaaS interface was ever designed to accommodate. The interface - the login page, the dashboard, the carefully designed UX - was the product. Now the outcome is the product, and the interface is optional.[1] That repricing, as the Fintech Brainfood analysis observed, "happened in an afternoon."
By early March, the damage across major SaaS incumbents was substantial. Salesforce was down roughly 26% year-to-date. ServiceNow had shed 28%. Workday had dropped 25%. Adobe, 22%.[1] IT budgets overall are up 8% in 2026 - but AI budgets are up an estimated 100%, functioning as a capital black hole absorbing spend that once flowed reliably to traditional software vendors.[1]
Not everyone is losing. The sell-off revealed a fault line between companies whose value lives in their data and those whose value lives in their interface. Thomson Reuters, initially punished, staged a dramatic recovery in late February after announcing that its CoCounsel legal AI assistant had surpassed one million active users across more than 100 countries and that it had formalized a deep integration with Anthropic's Claude Agent SDK.[2] Shares surged 14% in a single session. The market had decided: proprietary data is a moat. A polished UI is not.
While equity markets adjudicated the agentic question in hours, regulators operate on a timeline measured in years. They are, at least, beginning to move.
On January 22, 2026, Singapore unveiled what its Infocomm Media Development Authority (IMDA) describes as the world's first comprehensive governance framework specifically designed for agentic AI systems.[3] Announced at the World Economic Forum in Davos, the Model AI Governance Framework for Agentic AI is the third iteration of Singapore's broader AI governance effort, following editions addressing traditional AI in 2020 and generative AI in 2024.[4]
The framework is organized around four pillars: assessing and bounding risks before deployment; ensuring meaningful human accountability; implementing technical controls including sandboxing, privilege limitation, and safety testing; and promoting end-user responsibility through transparency and the ability to intervene or deactivate agents.[3] It is, notably, nonbinding - a voluntary guide rather than an enforceable regulation. Singapore's IMDA is accepting feedback and case studies as the framework evolves.
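What those pillars mean in engineering terms is easier to see in miniature. The sketch below is purely illustrative and assumes a hypothetical agent harness - none of the names or structures are drawn from IMDA's framework or any vendor SDK. A tool allowlist stands in for privilege limitation, a human approval gate for accountability checkpoints, and a deactivation flag for the end user's ability to intervene.

```python
# Hypothetical sketch of privilege limitation, approval checkpoints, and a
# deactivation switch in an agent harness. Illustrative only; not drawn from
# IMDA's framework or any real SDK.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentPolicy:
    allowed_tools: set[str]                                   # explicit allowlist (privilege limitation)
    require_approval: set[str] = field(default_factory=set)   # tools gated behind a human checkpoint
    active: bool = True                                       # operator kill switch (deactivation)

class GovernedAgent:
    def __init__(self, policy: AgentPolicy, approver: Callable[[str, dict], bool]):
        self.policy = policy
        self.approver = approver  # human-in-the-loop: returns True only on explicit approval

    def invoke_tool(self, tool: str, payload: dict):
        if not self.policy.active:
            raise RuntimeError("agent deactivated by operator")
        if tool not in self.policy.allowed_tools:
            raise PermissionError(f"tool '{tool}' is outside this agent's granted privileges")
        if tool in self.policy.require_approval and not self.approver(tool, payload):
            raise PermissionError(f"human approver declined '{tool}'")
        return self._run_sandboxed(tool, payload)

    def _run_sandboxed(self, tool: str, payload: dict):
        # Real deployments would isolate execution (container, VM, seccomp);
        # this placeholder only marks where that boundary would sit.
        return {"tool": tool, "status": "executed in sandbox"}

# Usage: a contracts agent may read documents freely but needs sign-off to file.
agent = GovernedAgent(
    AgentPolicy(allowed_tools={"read_contract", "file_motion"},
                require_approval={"file_motion"}),
    approver=lambda tool, payload: False,  # default-deny until a human signs off
)
```

The point of the sketch is not the code but the shape: every one of the framework's technical recommendations ultimately resolves to a check that sits between an agent's intent and its effect.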
The European Union's posture is more prescriptive but slower to apply. The EU AI Act classifies high-risk AI applications and imposes conformity requirements, but its provisions were written with narrower automation in mind. The proliferation of general-purpose autonomous agents - capable of initiating transactions, updating databases, and taking cascading actions across systems - strains a risk taxonomy built around more predictable deployment contexts.[5] Legal analysts have noted that the Act's liability framework, which distributes responsibility across builders, deployers, and controllers, was designed for AI-as-tool rather than AI-as-agent.[5]
The United States presents a more complicated picture than either Singapore or the EU - not because it lacks a position on agentic AI, but because it holds two contradictory ones simultaneously, depending on the context.
At the commercial level, the federal posture is explicitly deregulatory. On January 20, 2025, President Trump revoked the Biden administration's Executive Order on AI safety.[7] Six months later, the White House released its AI Action Plan, "Winning the Race," organized around three pillars: accelerating innovation, building infrastructure, and leading in international diplomacy and security. Its domestic commercial agenda centers on removing regulatory friction.[8] A December 2025 executive order went further, directing the Justice Department to establish an AI Litigation Task Force to identify and challenge state laws inconsistent with a "minimally burdensome national policy framework for AI," and authorizing the withholding of federal funds from states deemed to impose onerous AI regulations.[9]
States have responded by legislating anyway. Over 1,000 AI-related bills were introduced across state legislatures in 2025 alone.[9] California's Transparency in Frontier AI Act, effective January 1, 2026, requires developers of large frontier models to publish risk frameworks and report safety incidents, with penalties up to $1 million per violation.[9] Colorado's AI Act, which took effect February 1, 2026, bans algorithmic discrimination in high-stakes decisions including hiring, education, and banking - and was cited by name in the December executive order as an example of the "ideological bias" the administration intends to preempt.[9] The result is an accelerating preemption battle whose outcome courts have not yet determined.
Within the defense and national security perimeter, the calculus is reversed. The FY2026 National Defense Authorization Act created an AI Futures Steering Committee within the Pentagon, charged with developing the Department of Defense's long-term AI strategy and assessing adversary AI development trajectories.[10] The NDAA also mandates a comprehensive cybersecurity framework for AI and machine learning systems - governance infrastructure that the administration has simultaneously argued is unnecessary for commercial AI development.
The U.S. position, in other words, is not an absence of governance thinking. It is a deliberate bifurcation: maximum permissiveness for commercial AI development, structured oversight for national security applications. Whether that distinction holds as autonomous agents increasingly operate at the boundary between commercial enterprise and critical infrastructure remains an open question - one that neither the AI Action Plan nor the NDAA has yet been forced to answer.
The deeper problem is not that regulators are slow - they always are, relative to technology. The problem is that agentic AI introduces a specific governance challenge that existing frameworks were not designed to address: the diffusion of accountability across long, automated action chains.
When an AI agent makes a consequential decision - denying a loan application, executing a trade, flagging a legal document - it is rarely acting alone. It is acting as a node in a pipeline of other agents, each with its own configuration, permissions, and failure modes. Determining who is responsible when such a pipeline produces harm is not a question current law handles cleanly.
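To see why attribution is hard, consider what even a well-instrumented pipeline would have to record. The sketch below is a hypothetical provenance log - the schema is invented for illustration, not mandated by any framework cited here. Each action carries the agent that took it and a pointer to the upstream action that triggered it, the minimum a post-incident review would need to walk the chain backward.

```python
# Hypothetical provenance log for a multi-agent pipeline. The schema is
# illustrative only; no regulator or vendor cited in this article mandates it.

import json
import time
import uuid

def record_action(log, *, agent_id, tool, inputs, triggered_by=None):
    """Append one link of the action chain. `triggered_by` points at the
    upstream action that caused this one, keeping the chain reconstructable."""
    entry = {
        "action_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,          # which agent acted
        "tool": tool,                  # what it did
        "inputs": inputs,              # with what data
        "triggered_by": triggered_by,  # which upstream action caused it
    }
    log.append(entry)
    return entry["action_id"]

# Example: three agents behind a single loan denial. Liability could plausibly
# attach at any link - the parser, the scorer, or the final decision agent.
log = []
a = record_action(log, agent_id="intake-agent", tool="parse_application",
                  inputs={"doc": "application.pdf"})
b = record_action(log, agent_id="scoring-agent", tool="credit_score",
                  inputs={"application_ref": a}, triggered_by=a)
record_action(log, agent_id="decision-agent", tool="deny_loan",
              inputs={"score_ref": b}, triggered_by=b)

print(json.dumps(log, indent=2))  # the full chain a reviewer would need
```

Even with such a log, the legal question of which link bears responsibility stays open. Instrumentation makes the question answerable in principle; it does not answer it.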
Singapore's framework gestures at this problem with its emphasis on "clear allocation of responsibilities and approval checkpoints," but acknowledges it cannot resolve it through guidance alone.[3] The EU's approach of distributing liability across the value chain is theoretically sound but practically complex when that chain includes dozens of model providers, orchestration platforms, tool integrations, and enterprise deployers.[5] And the U.S. federal government, having declined to set commercial guardrails, has left the accountability question to a patchwork of state laws whose enforceability is itself in litigation.
Meanwhile, the enterprise adoption data from Thomson Reuters' own research is striking: organization-wide AI use in professional services nearly doubled to 40% in 2026, and most professionals are now actively using generative AI tools. Yet only 18% of organizations track the ROI of their AI tools, and fewer still measure AI's impact on client satisfaction or revenue outcomes.[6] Deployment has outrun measurement - and measurement has outrun governance.
The gap between financial markets and regulatory institutions is not new. What is new is the stakes. When software companies were simply repriced, the harm was borne by shareholders. When autonomous agents are making decisions with direct consequences for individuals - in legal, financial, medical, and governmental contexts - the asymmetry between market speed and governance speed becomes a public risk, not just an investor one.
Singapore's framework, voluntary as it is, represents the most serious attempt so far to name the specific risks of agentic systems and suggest architectural responses. Its influence on peer regulators in the EU, the UK, and the United States will likely exceed what its nonbinding status might suggest: in emerging technology governance, the jurisdiction that frames the question first often shapes the global vocabulary.
But framing the question and answering it are different things. The $285 billion that evaporated in February was a market answering a question about economic disruption. The governance question - who is accountable when an agent causes harm, and what constraints should bind its autonomy - remains, for now, genuinely open.