Friday’s news of changing leadership at OpenAI may be the most newsworthy development in AI this year — even when factoring in actual advancements in AI.
As the story unfolds with twists and turns every hour, and as the AI world speculates wildly, there are a few things we do know:
- Altman’s firing and the subsequent departure of executives and staff will likely not only benefit competing companies, but also result in a new endeavor by Altman and company.
- Altman may also find his way back to the company in record time.
- Based on the makeup of the OpenAI non-profit board (which controls the for-profit entity), public statements, and concerns echoed within the company, the decision to fire Altman may have been motivated by AI safety concerns.
This event (and everything after), as well as Meta’s dissolution of their Responsible AI team (see below), more or less proves that those making generative AI tools should have nothing to do with leading the protection against them. If safety can be called into question, jettisoned from internal rosters entirely, or tossed aside in favor of unchecked model usage and company growth, then that safety was at best a half-hearted effort from the start.
Those serious about AI safety will treat it as non-negotiable and not an afterthought. At Reality Defender, AI safety is core to our very existence, not a hindrance or a nuisance. We ask those considering implementing and using AI models to take equal measures in protecting against them, ensuring there’s always a counterbalance protecting users and society as a whole from the unfathomable dangers advanced AI could bring.
We don’t know how this story will play out in the coming hours and days. I wrote this early Monday morning, and by the time you read it, it could already be horribly dated.
My only hope is that, in the end, safety is valued above all.