By now if you haven’t been drawn into the soap opera that has been OpenAI, you’ve clearly been disconnected from any sort of media or you’ve recently returned from a long time-travel trip to the past.
How could a company seemingly doing so many things right, and achieving so much success in the field of generative AI, struggle so mightily within its Board of Directors?
Generally, we see this type of strife and mayhem in companies struggling to stay relevant and stay alive, not within companies dominating within their respective industries. To answer the question above, we must glean insights from OpenAI’s nearly eight years of operation.
A Partial OpenAI Timeline
December 11, 2015
Sam Altman and Elon Musk found OpenAI and become co-chairs. Musk donates $100M to the initiative and reportedly commits to up to $1 billion. The stated goal of the company is "…to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
- It’s important to note from the above that the company is created as a not-for-profit entity.
- The stated goals also reinforce the notion of not-for-profit constraints on distributions. Clearly, investors should be wary that they are not investing for a return but rather for some greater good.
December 31, 2016
The OpenAI Board of Directors grows to include Elon Musk, Sam Altman, and Chris Clark.
Open Philanthropy donates $30M to OpenAI, and its founder joins OpenAI's board.
December 31, 2017
As of filings at the close of 2017, the Board consists of Elon Musk, Sam Altman, Chris Clark, Holden Karnofsky, Greg Brockman, and Ilya Sutskever.
February 20, 2018
Musk either leaves or is removed from the Board. Accounts vary as to the reason: Musk cited a conflict of interest with his AI work at Tesla, while others claimed the departure followed an attempted takeover by Musk.
Chris Clark is also now absent from the Board of Directors, though it is difficult to determine exactly when or why he left. Clark is reportedly a lawyer who has represented Musk in the past, so perhaps the two left the Board together.
Whatever the reason, Musk reneged on his planned donations, leaving OpenAI facing significant model-training costs with no funding to cover them.
March - October 2018
Reid Hoffman and Adam D'Angelo (CEO of Quora and former Facebook CTO) join the Board, along with Sue Yoon. At the end of 2018, the Board of Directors consists of:
- Sam Altman
- Sue Yoon
- Holden Karnofsky
- Greg Brockman
- Ilya Sutskever
- Adam D’Angelo
- Tasha McCauley
Over the course of the next five years, multiple people join and leave the Board. Perhaps the most significant departure is Reid Hoffman's in 2023.
Microsoft invests $1 billion in OpenAI and forms a partnership to build Azure AI technologies.
I believe it is important to note that this investment was likely made into a for-profit arm of OpenAI, which, as previously noted, was founded as a not-for-profit company.
November 30, 2022
ChatGPT is released for public use.
Microsoft subsequently announces a $10 billion investment in OpenAI.
Friday, November 17th, 2023
The OpenAI Board fires Sam Altman, indicating that he was not candid in his Board communications. The Board has shrunk and now consists of Ilya Sutskever, Adam D’Angelo, Tasha McCauley, and Helen Toner. Cofounder Greg Brockman is also removed from the Board.
Saturday, November 18th
OpenAI investors push to have Altman rehired. Emmett Shear (formerly CEO of Twitch) is named Interim CEO.
Monday, November 20th
Microsoft hires Altman and Brockman along with other OpenAI veterans. Shear issues a three-point, 30-day plan for the company. Nearly 500 employees indicate they will leave unless the Board resigns and rehires Altman. Marc Benioff offers to hire the departing OpenAI employees. The OpenAI Board approaches a competitor about a potential merger.
Tuesday, November 21st
Altman returns as OpenAI CEO. OpenAI indicates it will reform its Board, eliminating several Board members.
Astute observers will note the rotating door that the OpenAI Board of Directors became. In my experience, companies that achieve great success seldom see significant changes to their boards of directors; who wouldn't want to be associated with the incredible success the company has seen to date? While churn within a company may result from changing strategies or changes in performance, the most common reason for board churn within successful companies is a lack of alignment or agreement on either outcomes or approaches.

That observation is bolstered by the fight for control within the company between November 17th and November 21st, 2023, which was ultimately an indicator of great affective conflict. Readers of our blog will know that affective conflict (conflict regarding position, power, and control) is always detrimental to company performance, and nowhere is it more detrimental than in the board of directors.
Consistent with how we approach all our AKF Partners postmortems, we’ll leverage our established timeline to identify key learnings.
Key Learnings from Timeline
- Discontinuity in governance and a rotating board of directors are early warning signs of problems, even within successful companies. Buyer beware: such churn likely indicates a lack of alignment, which can damage company performance.
- The Board completely mishandled the communication to stakeholders. Additionally, it failed to properly meet the expectations of the stakeholder community, indicating it was completely out of touch with the stakeholders it represented. No other conclusion is possible given the investor and employee backlash.
- Sam Altman similarly failed to properly align the Board, employees, and other stakeholders. This is one of the most important tasks for any CEO and his surprise firing is a clear indication that he was not properly ensuring this alignment.
- Company governance can and must be implemented to help govern the use of AI within a company. There’s nothing in the timeline that directly indicates this, but the notion of governance relative to AI (the purpose of this story) invites the question of whether we are properly governing the use of this powerful new technology within our own companies.
- Company governance is insufficient to govern the use of a technology broadly; only countries can effectively do that, and even then, effectiveness requires all countries to agree on its usage and governance.
- A company can only govern itself.
- The theory of simultaneous discovery should have been enough for OpenAI to realize other similar technologies would emerge.
- These two points indicate OpenAI could not have governed other AI-related technologies developed by other companies, making its not-for-profit governance structure questionable in terms of value; why form a board to govern AI usage when you have so little power to do it broadly? The notion was noble, but clearly poorly implemented.
- Governance, when performed effectively, should never be about "control". Rather, it is about enablement while properly protecting stakeholders against known risks en route to a compelling vision.
Key Questions to Resolve
- What is the role of a non-profit board of directors when a portion of that company is a profit-seeking endeavor with investors who expect a return?
- I absolutely do not know the answer to this question and while I’m sure similar cases have arisen in the past potentially with some success, the probability of success seems low to me.
- I don’t think I ever would have been concerned about an investment like this in the past, but the OpenAI soap opera now makes me believe I would never invest in a company structured as a non-profit. I would absolutely consider donating to such a company – but I would never invest with an expected return.
- How much value did investors lose with the struggle for control?
- If Altman had joined Microsoft, one could argue Microsoft's past investments were nothing more than a long-duration hiring bonus, one that Altman never personally received.
- Now that Altman is apparently staying at OpenAI, perhaps the loss is just the cost of low productivity over roughly two percent of the year. Either way, the CEO and the Board are collectively responsible for the loss following from the points above.
What this Means to You
Perhaps the greatest takeaway from the OpenAI soap opera is for us to use it as a lens to view our own AI-related governance initiatives.
- Have you started an AI governance program in your company?
- How will you keep practitioners and stakeholders aligned regarding the use of AI within your company?
- How will you evaluate the efficacy of your governance over the usage of AI?
If you haven’t thought about these questions, or are just now starting to think about them, contact us. We can help!