The News You’re Not Reading Is the News That Matters
This week in AI, two things happened on the same day. I'd bet few people noticed they were connected, but together they set off alarms for me.
There’s so much going on in our world right now…
Sam Altman published a 13-page policy blueprint calling for a New Deal-scale social contract to manage the disruption of AI superintelligence. Sweeping proposals: a public wealth fund, a 32-hour workweek, new corporate taxes to protect Medicare. Historic in ambition. The kind of document that makes you want to stand up and pay attention.
On that same day, Ronan Farrow and Andrew Marantz published an 18-month investigation into OpenAI in The New Yorker (subscription required, see synopsis here) – drawing on over 100 interviews and hundreds of pages of internal documents, including secret memos compiled by former chief scientist Ilya Sutskever.
The central finding, in Sutskever’s own words: “Sam exhibits a consistent pattern of lying.”
I’ve been in technology for more than 40 years. I have watched a lot of pivotal moments get buried under competing noise. I am telling you: this one is worth your full attention.
The Coincidence That Wasn’t
Let me be direct about the timing, because I think it matters. (Believe me, I teach PR strategy.)
A CEO releases a sweeping, generous-sounding policy vision – we need government to regulate us, we need to redistribute AI wealth, we need a new social contract – on the exact day that the most thorough investigation of his leadership lands in the public record. One story crowds out the other. The bold vision gets the headlines. The accountability reporting gets the footnotes.
Whether this was deliberate strategy or extraordinary coincidence, the effect is the same: the conversation about what Sam Altman might build with unchecked power gets overtaken by the conversation about what Sam Altman says he wants to build.
This is a pattern worth recognizing. Not because Altman is uniquely cynical – he may well believe every word of that blueprint.
But because in a moment when AI is moving faster than any regulatory framework can track, the people building the most powerful systems in human history are also the ones most actively shaping the public narrative about how those systems should be governed.
That is a structural problem, regardless of anyone’s intentions.
What the CFO Knew
Here is a quieter story from the same week that deserves more attention than it received.
Sarah Friar, OpenAI’s CFO, has reportedly told colleagues she doesn’t believe the company will be ready to go public by late 2026 – the timeline Altman has privately championed. Her concerns are specific and grounded:
$600 billion committed to cloud server infrastructure over five years
$200 billion in projected cash burn before the company reaches positive cash flow
$14 billion in expected losses in 2026 alone
She has also flagged a structural vulnerability: a substantial portion of OpenAI’s recent $122 billion funding round came from Amazon and NVIDIA – two companies that are also OpenAI’s primary chip and cloud suppliers.
When your investors are also your vendors, the capital structure is circular in ways that should alarm any serious CFO.
Friar is not someone who defaults to caution. Before joining OpenAI, she served as CEO of Nextdoor, where she took the company public in 2021 – and before that as CFO of Block (formerly Square), where she led the company’s IPO in 2015 and helped add $30 billion in market capitalization. When someone with that track record raises a flag, the flag means something.
What happened next is the part of this story I can’t stop thinking about.
Altman reportedly excluded Friar from key financial meetings, including a high-level discussion with a major investor about server procurement. One attendee called her absence “noticeable and awkward.” And as of August 2025, Friar no longer reports directly to the CEO. She reports to the head of applications – a product executive, not a finance one. The CFO of a company preparing to go public at an $852 billion valuation does not have a direct line to the CEO.
I have spent enough time in corporate governance to know that reporting structures are never accidental. When the person raising the financial alarm gets moved out of the decision-making room, that is not a reorganization.
That is a signal.
The Loop That Shouldn’t Be Called a Round
Friar’s investor-as-vendor concern deserves its own moment, because it is not a nuance. It is a structural problem with implications that extend well beyond OpenAI.
OpenAI’s $122 billion funding round, closed March 31, 2026, was anchored by three investors: Amazon ($50 billion), NVIDIA ($30 billion), and SoftBank ($30 billion). Those three names account for $110 billion of the headline number.
Here is what that means in practice: Amazon is also OpenAI’s cloud infrastructure provider. NVIDIA is also OpenAI’s primary chip supplier. SoftBank has committed $3 billion annually to deploy OpenAI technology across its own portfolio companies – making it simultaneously an investor in and a paying customer of the company it just funded.
The money flows in, and then it flows right back out to the people who sent it.
This is what financial analysts call circular financing, and it is not a new story. Bloomberg’s investigation into AI circular deals drew an explicit parallel to the dot-com era’s fiber optic buildout, when equipment makers provided vendor financing to telecom companies that used the loans to buy more equipment from the same equipment makers. When the growth forecasts fell short, the model broke. Heavily leveraged companies slashed spending, filed for bankruptcy, and left enormous amounts of capacity sitting unused for years.
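To make the mechanics concrete, here is a minimal sketch of that loop in Python. The investment figures come from the round described above; the flow-back amounts and the helper round_trip are hypothetical placeholders I've chosen purely for illustration, since the actual contract values are not public.

```python
# Toy model of a circular financing loop: a vendor invests in a customer,
# and the customer spends part of that capital back with the same vendor.
# Investment figures are from the round described above; the "spent back"
# figures are HYPOTHETICAL placeholders, not disclosed numbers.

def round_trip(invested_bn: float, spent_back_bn: float) -> dict:
    """How much of an investor-vendor's capital cycles back to it as
    revenue, and how much net new capital the funded company keeps."""
    return {
        "net_capital_retained_bn": invested_bn - spent_back_bn,
        "self_funded_vendor_revenue_bn": min(invested_bn, spent_back_bn),
        "share_returned_to_investor": spent_back_bn / invested_bn,
    }

# Amazon: $50bn invested (article); assume $35bn flows back as cloud spend.
# NVIDIA: $30bn invested (article); assume $25bn flows back as chip spend.
for vendor, invested, spent_back in [("Amazon", 50.0, 35.0),
                                     ("NVIDIA", 30.0, 25.0)]:
    print(vendor, round_trip(invested, spent_back))
```

Under those assumed flow-backs, 70 to 83 percent of each "investment" re-emerges as the investor's own revenue. The percentages are made up; the structural point is not. When the same party sits on both sides of the transaction, the headline round size and the net new capital diverge, and the vendor's revenue is partly financed off its own balance sheet.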
I know this story from the inside.
In the early 2000s, I was Senior Director of Marketing at Enron Broadband Services (EBS) – the fiber optic division of Enron – before the crash. The product at the center of the fraud was called the Broadband Operating System, the BOS: a software platform that executives publicly claimed could dynamically provision bandwidth on demand and deliver end-to-end quality of service across Enron’s network. It was, as the Department of Justice later established, never embedded on the network, never functionally complete, and never capable of doing what the press releases said it could do.
The co-CEO of EBS, Joe Hirko, eventually pleaded guilty to wire fraud for approving those misrepresentations.
I was deposed in Hirko's attorney's office over my emails, because I had been asking questions about the BOS – in writing, in the normal course of doing my job. They turned out to be the right questions, asked too quietly, inside a room where the answers were being carefully managed. I was never called into the larger proceeding. My best assumption: my answers weren't useful to the defense.
The person asking inconvenient questions got reviewed, assessed, and set aside.
I tell that story when I speak because it is not ancient history. It is a pattern. The product that doesn’t work the way the press release says it does. The financial structure where the same parties are on both sides of the transaction. The internal voices raising concerns that get noted, managed, and absorbed without changing anything. And the moment, much later, when an investigation establishes what some people inside already knew.
When I read about circular financing in AI – vendors investing in the companies that spend money back on their products, capital commitments that are contingent on milestones that haven’t been reached, a CFO raising concerns who no longer has a direct line to the CEO – I am not reading about Silicon Valley.
I am reading something I have read before. The technology is different. The structure is familiar.
That does not mean OpenAI is Enron. It means the structure deserves the same scrutiny we eventually – much too late – applied to Enron. And it means the people raising questions from inside deserve to be heard, not managed.
Why This Is Your Problem
I understand the temptation to read this as Silicon Valley drama. Billionaire CEOs, internal power struggles, stock market speculation – it can feel remote from the real work of running a business.
But here is what I know from 40 years of helping technology move through organizations: the governance failures at the top of the stack always cascade down. And the companies being built right now – with the financial structures, the safety trade-offs, and the leadership cultures they currently have – are the companies that will be embedded in your operations within the next three years.
Pick your poison.
You may already be using OpenAI’s technology. You may be evaluating it. You may be building on top of it. The questions about whether that company’s numbers are real, whether its leadership can be trusted, and whether the regulatory environment will hold anyone accountable are not abstract. They are vendor risk questions. They are operational continuity questions. They are governance questions that belong in your boardroom.
The SEC is the institution specifically designed to protect investors from a company going public on an unreliable financial story. I wish I could tell you the SEC is positioned to play that role right now. The current chair has reduced enforcement actions to their lowest level in a decade, dropped cases against companies that donated heavily to the administration, and overseen the resignation of the agency's enforcement director – who left, reportedly, after clashing with leadership over cases involving parties close to the administration.
The institution that is supposed to be the last backstop is not, at this moment, functioning as a backstop.
What Paying Attention Actually Looks Like
I am not writing this to induce panic, and I am not writing this to tell you to stop using AI tools. I am writing this because information is power, and right now there is a lot of deliberately generated noise making it harder to see clearly.
Here is what paying attention actually looks like in practice.
First: read the counter-argument. The critics of OpenAI’s policy paper are largely right that it is vague and strategically timed. But vague proposals sometimes move the Overton window, and not every ambitious idea that serves the person proposing it is therefore wrong.
Hold both things. The proposals deserve scrutiny AND engagement – not dismissal.
Second: watch the financial story, not the vision story. (Always, always follow the money, as my friend and mentor Kathy says.) When a CFO raises concerns and gets moved out of the room, that is a more reliable signal than any press release. If OpenAI moves toward an IPO in 2026, watch what the S-1 actually says about cash flow, compute commitments, and the investor-as-vendor relationships Friar flagged.
The gap between the narrative and the numbers is where risk lives.
Third: ask what governance infrastructure actually exists. The EU AI Act is in implementation. The U.S. has no federal AI framework. NIST’s AI Risk Management Framework is voluntary. In the absence of binding regulation, OpenAI’s policy paper may actually define the conversation by default – which is its own form of power.
Your organization should not be waiting for federal clarity to develop its own standards for evaluating AI vendors’ financial stability, safety practices, and governance structures.
Fourth: notice who gets pushed aside. In the stories I’ve been watching this week, the people raising legitimate institutional concerns – the CFO, the enforcement director, the former safety leads – keep disappearing from the decision-making rooms. That pattern is worth naming, because it tells you something about what kinds of institutional checks are being treated as obstacles rather than functions.
The Harder Question
I have been in rooms where the argument goes like this: AI is moving too fast for regulation to keep up, over-regulation will stifle innovation, and the companies building these systems understand the technology better than any regulator could. Some version of this is true.
But the argument proves too much. It would apply equally to pharmaceutical companies, financial institutions, aviation, and every other industry where we decided that the complexity of the technology was a reason for robust oversight rather than a reason to abandon it.
The Altman proposal – for all its self-serving framing – actually concedes this point. He is calling for a New Deal precisely because he acknowledges that the market, left to its own devices, will not distribute the benefits or manage the risks of superintelligence equitably. The question is not whether governance is needed. The question is who designs it, who enforces it, and whose interests it protects.
Right now, the person with the most detailed public proposal for AI governance is the person who stands to benefit most from a particular kind of governance. The enforcement institution designed to check financial misrepresentation is under-functioning. The internal voices raising concerns are being systematically moved out of the rooms where decisions get made.
This is not the time to throw up our hands. It is exactly the time to pay attention.
I got into technology in 1984 because I was curious enough to say yes to something I didn’t fully understand. I have stayed in it because that curiosity, applied rigorously and honestly, is what keeps you from being surprised by things you could have seen coming.
The signs are not hidden. They are in the news, if you know what to look for.
Please pay attention.
Don’t leave your AI journey to chance.
Connect with us today for AI adoption support: AI Literacy training, AI pilot support, AI policy protection, risk mitigation strategies, and help developing your O'Mind for scaling value. Schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.
Your next step is simple: let's talk and start your journey toward safe, strategic AI adoption and deployment with AIGG.

