In the wake of OpenAI's festive release of their full ‘o1’ and ‘o1-pro’ products, a curious phenomenon has emerged: groundbreaking technical achievements are being met with widespread indifference. This disconnect isn't just about marketing or unrealistic expectations—it reveals a fundamental paradox in how we develop and consume artificial intelligence.
The problem isn't that these new models aren't impressive. By many measures, they represent tangible advances in machine intelligence, capable of PhD-level reasoning and complex problem-solving that matches or exceeds human experts on some problems. Instead, we're witnessing AI's version of a “market for lemons,” the scenario George Akerlof laid out in his famous analysis of information asymmetry in markets.
What if most people can't tell the difference between a truly exceptional AI and one that's merely adequate? As AI pundit Ethan Mollick astutely notes, “people won't generally notice AGI-ish systems that are better than humans at most intellectual tasks.” Why? Because most of us rarely push up against the limits of human intelligence in our daily work. We just don't need an AI that can revolutionize algebraic topology to help us draft banal emails or summarize routine meetings.
This creates a problematic dynamic. When buyers can't distinguish between exceptional and adequate products, they naturally gravitate toward cheaper options. Why pay premium prices for capabilities you can't personally verify—or may never use? Clayton Christensen's “jobs to be done” framework supplies the consumer-level counterpart to the macro-level lemons dynamic: most users have tasks that can be accomplished perfectly well by more basic AI models. The extra horsepower of advanced systems, however impressive technically, often goes unused—like buying an ultra-high-performance car for your daily commute.
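To see why this erodes quality over time, consider a deliberately crude simulation of Akerlof's adverse-selection loop (my own toy sketch, not a model from his paper): sellers offer AI models of hidden quality, buyers will only pay the average quality they expect, and any seller whose product is worth more than that price exits the market.

```python
import random

# Toy adverse-selection loop in the spirit of Akerlof's "market for lemons".
# Assumption: buyers can't observe quality, so they offer a price equal to
# the average quality still on the market; sellers whose product is worth
# more than that price withdraw, and the "lemons" stay.

random.seed(42)
qualities = [random.uniform(0, 100) for _ in range(1000)]  # hidden quality

for round_num in range(1, 6):
    price = sum(qualities) / len(qualities)   # buyers pay expected quality
    qualities = [q for q in qualities if q <= price]  # premium sellers exit
    print(f"round {round_num}: price={price:.1f}, sellers left={len(qualities)}")
```

Each round, the price ratchets down and the best sellers leave first; the market converges on cheap mediocrity. That, in miniature, is the consumer layer of the AI market described above.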
The implications are nontrivial. Akerlof might argue that companies investing millions in pushing the boundaries of AI capability could find themselves in an untenable position: their technical achievements, while groundbreaking, may not translate into market success. Meanwhile, providers of “good enough” AI solutions could dominate the market, not because they're better, but because they're cheaper and sufficient for most users' needs.
This isn't necessarily a market failure. Rather, it might be the market efficiently matching capabilities to actual requirements. But it does raise interesting questions about the future of AI development. If the market won't reward technical excellence, will companies continue to pursue it? Are we risking a future where AI development plateaus at “good enough”?
I don’t think so.
While, at first glance, the AI market seems to mirror a “market for lemons,” this familiar economic story misses something crucial: the looming possibility of Artificial General Intelligence (AGI). This isn't just another product improvement—it's a potential paradigm shift that transforms the game entirely.
This observation suggests a fascinating dual-layer market: at the consumer level, we see the classic “lemons” dynamic, where average users gravitate toward cheaper, “good enough” solutions. But at the strategic level, we're witnessing something more akin to an arms race, with tech giants, well-funded startups, and governments pouring massive resources into advancing AI capabilities far beyond what today's users can appreciate or even notice.
Why? Because unlike traditional markets where being “better” yields modest gains, achieving AGI first could deliver exponentially growing returns¹. The winner wouldn't just capture a larger market share—they could potentially dominate all AI-dependent industries, from drug discovery to financial markets, from automated research to advanced robotics.
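To make “exponentially growing” concrete, here's a back-of-the-envelope sketch. The numbers are entirely hypothetical, and it assumes (per the footnote) that the leader's capability compounds multiplicatively through self-improvement while a rival improves only additively:

```python
# Toy compounding model: a self-improving leader multiplies its capability
# by a fixed factor each cycle, while a follower adds a fixed increment.
# Purely illustrative numbers, not a forecast.

leader, follower = 1.0, 1.0
GROWTH = 1.5   # leader: multiplicative self-improvement per cycle
STEP = 0.5     # follower: additive progress per cycle

for cycle in range(1, 11):
    leader *= GROWTH
    follower += STEP
    print(f"cycle {cycle:2d}: leader={leader:7.1f}, follower={follower:4.1f}")
```

After ten cycles, the leader sits at roughly 58 times its starting capability while the follower has merely sextupled, and the gap only widens from there. The specific numbers are arbitrary; the shape of the curve is the point. Under compounding, being first isn't a modest edge, it's the whole game.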
This strategic layer fundamentally changes the investment calculus. While everyday consumers might not pay extra for superior AI, deep-pocketed investors and institutions aren't making decisions based on current market demands. They're placing bets on future market control. The potential prize—whether monopolistic power, geopolitical advantage, or the ability to shape humanity's technological future—is so massive that it overwhelms the typical erosion of quality seen in standard market scenarios.
The result is a peculiar split in the AI market. The mass market may continue to be dominated by “good enough” solutions, leading to the appearance of stagnation or disappointment. Meanwhile, beneath this surface-level equilibrium, an intense competition rages on, driving capabilities far beyond what most users can currently appreciate or utilize.
This dynamic helps explain the lukewarm public reception to increasingly advanced AI models. The average user, whose needs are met by more basic systems, might see new releases as underwhelming. But for those engaged in the race to AGI, these incremental advances represent crucial steps toward a transformative goal.
While Clayton Christensen's “jobs to be done” framework helps explain current market behaviour, the strategic landscape suggests we're not headed toward a simple “good enough” plateau. Instead, we're likely to see continued investment in pushing AI capabilities to their limits, even if the immediate commercial benefits aren't obvious.
For the AI industry, this means navigating a complex dual reality: serving today's market while racing to tomorrow's transformation. For policymakers and the public, it suggests the need for a more nuanced understanding of AI progress. What looks like market inefficiency might actually be rational preparation for a winner-take-all future.
The disappointment surrounding cutting-edge AI models might say less about the technology itself and more about the gap between current utility and future potential. As we navigate this transition, the challenge isn't just technical—it's about understanding that we're witnessing not just a market evolution, but a strategic revolution.
¹ That is, if one believes in “singularity theory” and the ability of an AGI model to bootstrap itself to increasingly greater capability.