Tech and futures blog | Where ideas in AI, design, human cognition, and futures converge. Thinking out loud — in pursuit of what matters next.

AI Governance: Who’s Really in Control?

How corporate structures are falling short in the age of intelligent machines

Book-Mined Series

Rabih Ibrahim

5 min read
May 21, 2025

Artificial intelligence is accelerating — learning faster, scaling wider, reaching deeper into society than most technologies before it. And yet, it’s being governed not by public institutions or global treaties, but by a small group of corporate actors: executives, investors, and boards of directors.

This isn’t just a technical issue. It’s a governance dilemma.

The question isn’t whether AI will change the world — it already is. The real question is: Can corporate governance, a system built to manage profit and risk, be trusted to handle a technology that may redefine what risk even means?

The Corporate Frame Was Never Meant for This

Corporate governance has long served a clear purpose: to align the interests of executives and shareholders. Incentivize growth. Maximize returns. Stay within the guardrails of legal compliance.

It works well enough when you’re producing cars or streaming content. But AI is different.

Companies like OpenAI and Anthropic recognized this early. They introduced governance models designed to put safety above profit. OpenAI’s charter, for example, states that its primary duty is to humanity — not to shareholders. Anthropic went even further, structuring itself as a public benefit corporation with a trust that gains increasing control over time.

These are not typical moves in Silicon Valley. They are structural attempts to resist short-termism in favor of long-term social responsibility.

But as we saw with the Sam Altman saga in late 2023, even the most idealistic governance models can buckle under pressure. OpenAI’s board removed Altman, citing concerns about his transparency with them. Days later, investor backlash and an internal revolt forced his return. The board folded. Microsoft emerged even more powerful.

The lesson? Even when you write safety into your governance, the logic of capital tends to write over it.

. . . . .

Profit Still Has the Final Say

What happened at OpenAI wasn’t just a boardroom shuffle — it was a real-time test of whether AI safety could hold its ground against the machinery of profit.

Spoiler: it couldn’t.

Economists Oliver Hart and Luigi Zingales have a term for this — amoral drift. It describes how, in open markets, social goals are gradually eroded by the gravitational pull of returns. Even a well-structured company can be overtaken by market forces if its ideals aren’t supported by investors or protected by something stronger than a mission statement.

Would Microsoft have invested $13 billion in OpenAI if those funds were completely locked into social priorities rather than commercial opportunity? Probably not.

And that’s the dilemma: if prioritizing safety makes a company uninvestable, then the safety-first companies will lose. Not by failing, but by being outcompeted.

Independence Isn’t the Same as Accountability

We often assume that “independent” directors are a safeguard — that they’ll bring balance, or objectivity, or at least a wider lens.

But independence is a structural feature, not a guarantee of responsibility.

Independent directors might be free from CEO influence, but that doesn’t mean they’re aligned with the public interest. They might follow personal convictions, fall into groupthink, or simply default to risk-averse behavior that delays hard decisions.

OpenAI’s board was insulated from both investors and the CEO. But when it came to a crisis point, that insulation didn’t produce better outcomes — it produced confusion, opacity, and ultimately, collapse.

If companies truly want governance that reflects public interest, then independence alone isn’t enough. They need mechanisms that reinforce accountability to that interest — structures that reward transparency, dissent, and long-term thinking, even when it’s inconvenient.

. . . . .

The AI Alignment Problem Isn’t Just for Machines

In AI safety circles, there’s an idea called the alignment problem: the challenge of ensuring an advanced AI system’s goals remain compatible with human values.

But this problem isn’t exclusive to machines. Corporate governance has its own version.

Shareholders delegate authority to executives, trusting them to act in the company’s best interest. But those relationships rely on incomplete contracts — there’s no way to foresee every future scenario or encode every ethical consideration.

It’s the same problem AI developers face: you can’t anticipate every edge case. You can’t hard-code every good outcome.

And so, just as AI systems might pursue unintended goals, corporate systems often drift into outcomes nobody explicitly wanted.

According to a 2022 survey of AI experts, the median estimate for the risk of “something extremely bad” — such as human extinction — was 5%. Nearly half the respondents put that risk at 10% or higher.
Source: AI Impacts, 2022 Expert Survey

That number isn’t fringe paranoia. It’s a warning from people building the future. And it should prompt us to ask: if we can’t fully align a boardroom with long-term public safety, what makes us think we can align AI?

. . . . .

If Governance Can’t Handle This, What Will?

Corporate boards were never designed to handle catastrophic risk. Their job is to balance returns and responsibilities — not to anticipate global collapse.

So when it comes to AI, maybe we’re asking the wrong institutions to do too much.

There are increasing calls for a “Manhattan Project” for AI safety: a globally coordinated, publicly funded initiative to manage the risks that no single company, and no single governance model, can contain.

This isn’t about stopping innovation — it’s about buying time, creating guardrails, and building institutional intelligence fast enough to keep pace with technical acceleration.

Because if AI becomes as powerful — and as unpredictable — as many believe it will, then relying on quarterly earnings calls and investor governance won’t cut it.

We need new frameworks, new incentives, and new actors. Possibly even new values.

. . . . .

Final Reflection

If corporations can’t govern AI safely, who should?

Can we trust nation-states with the task? Do we need an independent global regulator? Or is it time to invent something that doesn’t exist yet?

And maybe the more uncomfortable question is this:

If we haven’t figured out how to govern ourselves responsibly, why are we so confident we can govern machines that might outthink us?

. . . . .