Imagine a world where your every thought is subtly shaped by AI gatekeepers. Where the vast repository of human knowledge is locked behind proprietary walls, doled out to you as someone else sees fit. Sound like science fiction? It's the future we're zombie-walking towards if we don't fight for open-weight AI models.
In a recent statement, the US Federal Trade Commission acknowledged the potential benefits of open-weight AI models, likening them to open-source software in their capacity to drive innovation and benefit the public. Chapeau, Chair Khan.
Meanwhile, OpenAI’s Mira Murati gravely warned us of “major risks” if we follow the FTC’s line of thinking.
Open-weight models are AI systems whose guts, the "weights" learned during training, are published for anyone to download. Anyone can tinker, improve, or build on them. It's democracy for machine intelligence. Proponents argue this transparency fosters innovation and democratizes AI development. Yann LeCun, Meta's chief AI scientist, nails it:
“In the future, our entire information diet is going to be mediated by [AI] systems. They will constitute basically the repository of all human knowledge. And you cannot have this kind of dependency on a proprietary, closed system… the future has to be open source, if nothing else, for reasons of cultural diversity, democracy, diversity. We need a diverse AI assistant for the same reason we need a diverse press.”
He's right. Letting a handful of companies and governments — no matter how well-intentioned — control the AI that might come to shape our thoughts is outsourcing the boundaries of our own intellects to a third party.
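To make "the weights are public" concrete: the weights are just files of numbers, and anyone can download and inspect them. A minimal sketch, assuming the Hugging Face transformers library, with the openly licensed GPT-2 standing in for any open-weight model:

```python
# Minimal sketch: fetch an open-weight model and look at its raw parameters.
# Assumes `pip install transformers torch`; "gpt2" is an illustrative choice,
# standing in for any openly licensed model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights download publicly

# The "weights" are ordinary tensors: free to inspect, modify, or fine-tune.
for name, param in list(model.named_parameters())[:3]:
    print(name, tuple(param.shape))
```

No gatekeeper, no API key, no terms that can be revoked later. That is the whole point.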
But what about Murati’s warnings of major risks? She professed concern about the ability of AI to persuade, influence, and control users. Just so. Any AI sufficiently powerful to be broadly useful will have these capabilities. The question isn’t “can I avoid the possibility of being influenced?” You can’t. The question is whether you want choice in, and understanding of, the influence to which you are subjecting yourself.
Murati implies that you ought to trust OpenAI to “persuade, influence, and control” you, but that it would be a “major risk” to allow influence from a transparently built open model that reflects the values of the community that built it.
There are practical challenges, too. AI model weights aren't physical objects; they are pure information, duplicable and shareable at the speed of light down a fibre-optic cable. The U.S. military and intelligence establishment, with all its resources, can't keep information perfectly secret. Remember Snowden? Manning? If they can't lock down information, what hope does Silicon Valley have?
Banning open weights is like thinking a "No Trespassing" sign will stop a determined thief. It's security theatre that serves only to impede good actors from building public goods. Projects like llama.cpp show that everyday geniuses can push AI forward as part of a broader ecosystem that embraces both closed and open approaches without prejudice.
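To give a flavour of how low the barrier has become, here is a minimal sketch of running an open-weight model on your own machine via the community llama-cpp-python bindings to llama.cpp. The model path is hypothetical; it assumes you have already downloaded an openly licensed model in GGUF format:

```python
# A sketch, not a tutorial. Assumes `pip install llama-cpp-python` and a
# locally downloaded, openly licensed GGUF model (the path is illustrative).
from llama_cpp import Llama

llm = Llama(model_path="./models/an-open-model.Q4_K_M.gguf")  # hypothetical file
out = llm("In one sentence, why do open weights matter?", max_tokens=64)
print(out["choices"][0]["text"])
```

Everything above runs offline, on commodity hardware, under your control. That is what a ban on open weights would take away.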
The FTC gets it. We need more transparency, not less. We need AI development that serves the many, not the few. We need a future where our thoughts aren't subtly nudged by black-box algorithms serving agendas that aren’t ours.
Do we want a future where AI empowers us, or one where it's used to keep us in line?
The choice is ours. For now.