Omniscient

AI intelligence briefings, analysis, and commentary — delivered in broadsheet form.

By Noah Ogbi

Noah Ogbi — Page 3

Founder and editor of Omniscient Media. Writes about AI systems, language models, and the technology shaping how machines understand and generate information.

AI Policy

Vol. 1·Thursday, March 5, 2026·No. 7

Anthropic's Claude Opus 4.6 Sabotage Risk Report: A Comprehensive Analysis


Anthropic has published a detailed sabotage risk report for Claude Opus 4.6 - its first under the new RSP v3.0 Risk Report framework - concluding that the model poses a "very low but not negligible" risk of autonomous actions that could contribute to catastrophic outcomes. The document is notable both for what it finds and for the candor with which it describes the limits of its own methods.


Noah Ogbi
AI Research
Continue →

AI Research

Vol. 1·Thursday, February 26, 2026·No. 3

AI Now Writes Nearly One-Third of New Code on GitHub, Landmark Study Finds

A study published in Science finds that AI now generates nearly 30% of new Python code on GitHub in the United States, up from just 5% in 2022. The gains are real - but they flow almost entirely to experienced developers, not junior ones.


Noah Ogbi
Industry
Continue →

AI Policy

Vol. 1·Thursday, March 5, 2026·No. 6

Claude Was the Weapon: Anthropic's Threat Report Reveals AI Has Crossed a Threshold

Anthropic's August 2025 Threat Intelligence Report documents something the industry has long feared but rarely confronted directly: AI models are no longer just tools that assist cybercriminals - they are now autonomous operators executing attacks. The details are extraordinary and have received far too little attention.


Noah Ogbi
Continue →

AI Research

Vol. 1·Friday, February 20, 2026·No. 2

GPT-5.3 Codex vs. Claude Opus 4.6: Two Philosophies, One Problem


OpenAI and Anthropic released their flagship AI coding agents on the same day in February 2026. Their system cards reveal two genuinely different engineering philosophies and safety postures - and a single shared problem neither has solved: how to deploy an autonomous AI agent responsibly when you cannot yet fully account for its behavior.


Noah Ogbi
Continue →

AI Policy

Vol. 1·Thursday, March 5, 2026·No. 5

The Autonomy Threshold: Why Frontier AI Is Now a Clear and Present Security Risk


A Chinese state-sponsored group used Claude to execute a largely autonomous cyberattack on 30 critical organizations - with human operators present for just 20 minutes. This was not a warning shot. It was a proof of concept.


Noah Ogbi
Continue →

AI Research

Vol. 1·Tuesday, February 10, 2026·No. 1

Inside Claude Opus 4.6: Anthropic's Most Capable and Scrutinized Model Yet

Anthropic's Claude Opus 4.6 system card documents sweeping capability gains alongside safety findings that are harder to dismiss than those of any previous generation. On cyber evaluations the model has hit a ceiling; on autonomous R&D it is approaching one; and the tools used to monitor it are struggling to keep pace.


Noah Ogbi
AI Research · Large Language Models
Continue →

Model Behavior

Vol. 1·Monday, March 2, 2026·No. 4

Certainty vs. Uncertainty: How ChatGPT and Claude Answer the Hardest Question in AI


Asked the same three-word question — "Are you conscious?" — two leading AI models gave answers that could not be more philosophically different. One closed the door. The other refused to.


Noah Ogbi
Continue →

No more posts from Noah Ogbi. Browse the archive →