What we really need is a "deck chair configuration" committee...
Facing the uncomfortable AI research iceberg. (Or not.)
Researchers, for the most part, continue to nibble around the edges of AI adoption: a bit of proofreading here, a bit of reference management there, and maybe some help with number crunching. It’s just a tool, right?
If you only read the recent Wiley ExplanAItions study, you might agree, concluding that in a year or two a few improved software tools will help you summarize articles faster or generate citations more accurately. But that conclusion omits the much larger story, one that renders incremental improvements about as meaningful as rearranging deck chairs on the Titanic.
The reality is that AI systems are accelerating toward a future in which they can execute the entire research process end to end in some domains. We’re not just talking about “help with data cleaning” or “suggest some references for my next literature review.” We’re talking about AIs that will propose novel hypotheses, design and carry out complex experiments, analyze the data, and then generate written conclusions that stand up to peer review.
This isn’t a starry-eyed future prophecy. It is present reality: Joshua Gans used o1-pro, with about one hour of added human labour, to produce economics research with enough novelty and merit to be published in a peer-reviewed journal. Andrew Maynard took a shot at using o3 (via Deep Research) to write a PhD thesis.
The message? While the Wiley ExplanAItions study and countless others document researchers’ incremental comfort zones (translation, editorial tasks, etc.), AI capabilities are quietly breaking through those barriers. The question is whether university researchers and administrators will notice the impending sea change before it’s too late.
The Fallacy of Incremental Progress
The Wiley report highlights widespread concerns: a lack of policy guidelines, poor awareness of tools beyond ChatGPT, and uncertainty about ethical use. These challenges are real, and we do need guidelines and best practices. But focusing only on short-term fixes can blind us to the more disruptive implication: AI will soon exceed human abilities across broad swaths of research activities in certain (many?) fields.
Think of the stable owner at the turn of the 20th century who realized that maintaining better horseshoes and improving barn conditions was important… but failed to see that the automobile would soon render that entire operation obsolete. Addressing researchers’ immediate pain points around AI (like formatting references) without grappling with the broader reality of full-spectrum AI research capabilities is akin to upgrading the horse stable while Henry Ford is quietly perfecting assembly lines.
The AI ‘Iceberg’ in Full View
Today, the “above the waterline” portion of AI adoption might be tasks such as proofreading, summarizing literature, or verifying citations. But the far larger, more transformative aspects of AI are looming just beneath the surface:
Hypothesis Generation & Experimental Design: While the Wiley study indicates researchers still believe human intuition is necessary for these tasks, reasoning AI models like o1-pro/o3, are increasingly adept at scanning enormous datasets, identifying unrecognized patterns, and suggesting lines of inquiry that might never occur to a human.
Data Analysis & Discovery: Yes, we’ve seen AI do grunt work like cleaning datasets, but advanced models are already moving beyond that to glean novel insights, detect anomalies no human eye could catch, and propose follow-up experiments automatically.
Writing & Publication: The report acknowledges AI’s growing role in manuscript preparation, but few want to face a near future where AI-driven research not only writes the paper but conceives the theoretical framework and interprets the results. The fact that an AI-generated paper has already passed peer review should be a startling wake-up call.
The real “iceberg” is not about forging better guidelines for how to cite with ChatGPT or whether to permit AI-based translations in submitted manuscripts. Rather, it’s about recognizing that entire fields are on the cusp of transformation by increasingly capable, general-purpose AI systems.
Danger of Deferment and Denial
Researchers and leaders might take comfort in the idea that AI is still “emerging,” that we have a couple of years before broader adoption. Yet the past few years of breakthroughs should remind us that this technology leaps forward unpredictably. If we wait until next-generation systems have already begun conducting entire lines of research, we’ll have missed the critical window to develop policies, procedures, and structures to maintain academic integrity, research quality, and, frankly, a strategy for human researchers’ roles in this new landscape.
It’s not that humans will disappear from research. People who adapt to harness AI effectively will still play a vital part, but they need to pivot from “using AI here and there” to anticipating an entirely new ecosystem. Denial or half measures, like focusing solely on “AI skill-building” for performing minor tasks, risk leaving researchers unprepared for the bigger changes.
Embrace Creative Destruction
The Wiley ExplanAItions study offers a telling snapshot of where many researchers currently stand on AI adoption, but it barely addresses the true depth of the coming changes.
We need to broaden the conversation. AI is not just a new set of spurs and bridles; it’s a paradigm shift that, for certain disciplines, will open the door to near-complete automation of the research process. If we in academia truly want to lead, rather than be blindsided by unstoppable transformations, our collective responsibility is to prepare for a future in which AI’s role in research is far more pervasive than a fancy text editor or data wrangler.
It’s time to stop rearranging deck chairs and start confronting the iceberg head-on. Our moment calls for bold leadership, ethical foresight, and a willingness to rethink the entire structure of how research is conducted, evaluated, and rewarded: let’s train our eyes on the real horizon.
And, frankly, if we are serious, it calls for creative destruction in a truly Schumpeterian sense.