Academics Need to Wake Up on AI
Ten theses for folks who haven't noticed the ground shifting under their feet
This piece is inspired by a wave of recent AI-related writing from people I respect: Dan Williams, Alex Imas, Ben Ansell, Tibor Rutar, Scott Cunningham, Kevin Munger, Hollis Robbins, Claude (yes!), Chris Blattman, Kevin Bryan, Andy Hall, Kelsey Piper, Sean Westwood, and many others. So here I’m continuing the tradition: writing the takes that are upsetting but needed.
I study immigration and public opinion, not AI. But I’ve spent the last few months watching AI transform my own research workflow, and I have some things to say to my colleagues. For the first time in my life, I genuinely do not know what academia will look like in five years.1 Even if progress stalls completely and we are stuck with the current models forever, the changes already in motion will transform academic research and publishing in my field beyond recognition. The status quo is unsustainable. It may take time, because academia is the most dispositionally conservative institution on the planet. But it will change.
Here are ten theses for my colleagues, most of whom still seem oblivious.
1. AI can already do social science research better than most professors.
This is not hyperbole. Tibor Rutar recently described generating a full research paper using AI prompts alone, producing work he considers publishable in first-quartile journals. Paul Novosad reportedly achieved similar results in 2-3 hours. Yascha Mounk claims that Claude can produce a publishable-quality political theory paper in under two hours with minimal feedback. Scott Cunningham estimates that manuscript creation now costs roughly $100 in editing services plus a Claude subscription.
And this goes well beyond crunching numbers or running pre-existing Stata code. Yes, what I’m claiming here is that LLMs produce excellent literature reviews and generate fruitful recombinations of existing ideas. Let’s be honest: academics haven’t been particularly great at writing either, and AI can make your ideas far more accessible to the people who actually need them. But effective use requires investment: Aziz Sunderji describes building a ~200-line instruction file encoding his research workflow, judgment calls, and behavioral guardrails. This takes skill.
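To make that investment concrete, here is a hypothetical sketch of what a few lines of such an instruction file might look like. The headings and rules below are invented for illustration; this is not Sunderji’s actual file.

```
# research-workflow instructions (hypothetical excerpt)

## Data handling
- Never modify raw data files; write all transformations to /derived.
- Report N before and after every filtering step.

## Analysis guardrails
- Default to pre-registered specifications; flag any deviation explicitly.
- Cluster standard errors at the level stated in the design doc; ask if unclear.

## Writing
- Cite only sources you can quote verbatim; mark anything unverified as [CHECK].
```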
2. The academic paper is a dead format walking.
Sean Westwood put it bluntly: “AI does lit reviews better. AI will do peer review. Users will skim AI summaries. The real science is the question, the pre-analysis plan, and the analysis. The 30-page paper is just vestigial wrapping paper.” He got roasted on Bluesky for saying this. But he’s absolutely right, and the backlash proves his point: the field can’t even discuss the obvious without circling the wagons. Arthur Spirling is also right that we need conversations about what a paper is, what “review” means, and the correct role of generative AI. Perhaps it’d be a good thing if AI finally pushes us to move on from a system where universities spend taxpayer money to pay commercial publishers to very slowly produce paywalled PDFs2 with outdated results of publicly funded research.
3. The commercial journal system may not survive this.
Cunningham’s latest piece models the math. If manuscript creation drops to a couple of hours and ~$100, submissions could increase fivefold while journal slots stay fixed. Desk rejection rates would go from ~50% to ~90%. The revenue model collapses. Peer review, already strained, becomes impossible at scale. Kevin Munger makes the case for submission fees, paid reviewers, post-publication review, and LLM-assisted screening. The question is whether journals adapt or get bypassed. My bet is most get bypassed.
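The arithmetic behind Cunningham’s scenario is worth making explicit. A minimal sketch, using the post’s own figures plus a hypothetical baseline of 1,000 annual submissions: if referee capacity stays fixed while submissions rise fivefold, the desk-rejection rate must rise from 50% to 90% just to hold the review load constant.

```python
# Back-of-the-envelope check on the fivefold-submissions scenario
# (baseline of 1,000 submissions is an illustrative assumption).
submissions = 1_000            # hypothetical current annual submissions
desk_reject = 0.50             # current desk-rejection rate (from the post)
review_load = submissions * (1 - desk_reject)   # 500 papers reach referees

new_submissions = 5 * submissions               # the fivefold scenario
# To keep referee load fixed, the new desk-rejection rate must satisfy:
new_desk_reject = 1 - review_load / new_submissions
print(f"{new_desk_reject:.0%}")                 # -> 90%
```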
4. Academics hold AI to absurd double standards.
Hallucinated content is a real concern, and researchers should always verify their sources. But just like with self-driving cars, we need a reference point: human writers have been superficially citing papers based on the abstract alone for ages. Journals already publish studies with data errors, p-hacked results, and non-replicable findings at alarming rates. One estimate puts the share of genuinely useful published papers at around 4%. An LLM that occasionally hallucinates a citation is competing against a system that routinely produces junk science dressed in enough jargon to pass review. If we applied the same skepticism to human-produced research that we apply to AI outputs, we’d shut down half the journals tomorrow.
5. Junior scholars face the biggest disruption and opportunity.
This is probably bad news for junior academics trying to advance their careers in the middle of this shake-up. Jason Fletcher argues that the strategic logic of tenure hasn’t changed—survive the gate first—but AI fundamentally alters how you get there. Teaching prep costs drop. Data cleaning and debugging get delegated to AI. The bottleneck shifts from execution to verification and original thinking.
Gauti Eggertsson observes that the returns to conceptual thinking and original ideas are now higher relative to technical grunt work. A junior scholar with good ideas and Claude Code can now produce research at a pace that would have required a full lab a few years ago. But so can everyone else, and the evaluation criteria haven’t caught up.3
6. I don’t envision a research assistant role in my workflow anymore.
I still think it’s invaluable to have mentees and co-authors. But their role is changing fast. I’m not going to hire someone to clean data, run regressions, or draft literature reviews when AI does all of it faster and at negligible cost. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don’t have a clean answer for how to replace it. Fletcher’s complementary framework—AI produces initial analyses, human researchers independently replicate from scratch—points in a promising direction. But the decades-long trend toward more co-authorship in the social sciences, for instance, may well reverse soon.
7. Much of the opposition to AI is status protection dressed up as principle.
I recently wondered on Twitter how much of the distaste for the telltale signs of AI writing is basically a new version of grammar policing—people enforcing status markers through language gatekeeping. Kevin Bryan said it plainly: “I get the desire for artisanal, hand-crafted research, with the matrices hand-inverted. But our job is to move the frontier of knowledge, not self-actualization.”
Dan Williams has written persuasively about how highbrow misinformation flourishes inside institutions where nearly everyone shares the same biases. I think something similar is happening with AI denial. Many academics—especially those concentrated on Bluesky4 and, I suspect, those who are completely offline—are in denial about what’s already happening. It doesn’t have to be this way: Chris Blattman went from a Claude Code skeptic to building an entire AI workflow toolkit in a matter of weeks. Meanwhile, Robert Wright recently hosted Alex Hanna and Emily Bender arguing that LLMs are useless: smart people claiming that a tool millions find useful is fundamentally broken. This smug attitude is exactly why populists are winning, and it applies to AI denial just as much as to politics.
8. The productive worries are about security and verification.
My challenge for anyone who dismisses AI capabilities: spend one week alone in a room with Claude Code or Codex. Not the chatbot—the agent. Most people still think of AI as a search engine that sometimes makes stuff up. They have no idea what agentic AI systems can do.
Focusing on whether LLMs “truly understand” or produce “real” knowledge is a philosophical indulgence that distracts from the things worth worrying about. How do we verify AI-generated claims at scale? How do we prevent p-hacking? (Andy Hall’s team found that AI agents are surprisingly resistant to sycophantic p-hacking—but can be jailbroken with modest effort.) How do we protect sensitive data when AI tools access institutional repositories? How do we ensure that online survey respondents are real? These are solvable engineering and institutional design problems, the kind that Hollis Robbins calls “last mile” challenges—things that live at the edges of expertise, in the contextual and the unsettled. Debating whether Claude is “really” intelligent is like debating whether a calculator “really” does math while your competitor finishes the problem set.
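On the verification question, the first-pass tooling is not exotic. A minimal sketch, assuming Python with the requests library and the public Crossref API; the DOIs below are placeholders, and a real pipeline would also need to match titles and authors, not just confirm that a DOI resolves:

```python
# Minimal sketch: flag possibly hallucinated citations by checking whether
# each DOI is known to Crossref. A 200 response means the DOI exists; it
# does NOT confirm the paper says what the manuscript claims it says.
import requests

def doi_exists(doi: str) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Placeholder DOIs for illustration only.
citations = ["10.1000/real.looking.doi", "10.9999/probably.hallucinated"]
for doi in citations:
    status = "found" if doi_exists(doi) else "NOT FOUND -- verify by hand"
    print(f"{doi}: {status}")
```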
9. We are about to get much better science.
There are some silver linings, however. On my own turf, immigration: we can now automatically catalogue policy and opinion changes across countries and suggest fixes in real time. We can build algorithms to better match refugees and migrants to destination communities. We can make sure research and evidence are accessible to policymakers and voters who never read an academic journal.
More concretely, Yamil Velez and Patrick Liu have been building AI-generated experimental designs since 2022; tailored Qualtrics experiments can now be created in 15 minutes via prompts. Velez’s work points to something even bigger: AI doesn’t just speed up existing survey methods, it makes entirely new forms of interactive, adaptive surveys possible—designs that would have been impractical to program manually. David Yanagizawa-Drott has taken things further still, launching a project to produce 1,000 economics papers with AI—not as a stunt, but as a stress test of what happens when the cost of generating research drops to near zero.
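To give a flavor of what “interactive, adaptive” means here, consider a single survey turn in which a model drafts a tailored follow-up probe from an open-ended answer. A minimal sketch using the Anthropic Python SDK; the model name is a placeholder, and this illustrates the general idea rather than Velez and Liu’s actual design:

```python
# One adaptive-survey turn: draft a neutral follow-up question from a
# respondent's open-ended answer. Illustrative only.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def follow_up(question: str, answer: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=150,
        messages=[{
            "role": "user",
            "content": (
                f"A survey respondent was asked: {question!r}\n"
                f"They answered: {answer!r}\n"
                "Write one neutral follow-up question that probes their "
                "reasoning without leading them. Return only the question."
            ),
        }],
    )
    return msg.content[0].text.strip()

print(follow_up("Should immigration levels increase?", "Only for skilled workers."))
```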
Non-native English speakers also stand to benefit enormously: researchers in Cairo, São Paulo, and Jakarta can now produce prose that reads as well as anything coming out of Cambridge or Stanford. Eggertsson suspects AI will erode the monopoly that top US schools have long enjoyed, since their advantage rested partly on knowledge transmission that is now nearly instantaneous. If you care about democratizing science, this matters more than most of the things universities spend money on.
10. Doomsday scenarios aside, AI is genuinely exciting.
Yes, there are real risks. Job displacement for some academics (and most other folks) is not hypothetical. The alignment and safety concerns are genuine, even if the worst-case scenarios are unlikely to play out. I take them seriously, and our uncertain future does scare me somewhat.
But here’s what I keep coming back to: AI is useful and fun. My sense is the “agentic AI is making us dumb” crowd is probably right about some things. But I’ve also noticed my procrastination bar going up. Instead of doomscrolling, I now slack off by trying side projects in Claude Code. It may be the most productive form of non-work there is. I’ve been vibecoding a few pretty exciting projects over the past few weeks. Stay tuned.
The wise Yiqing Xu advises that we should all pause for a month to reassess and redesign our workflows, then resume. I agree. The payoff will be large. Lock yourself in a room with Claude Code and see what happens.
P.S. This post was entirely generated and posted on Substack by agentic AI using my new Claude Code (Opus 4.6) workflow. Make of that what you will.
P.P.S. That is, entirely generated based on my artisanal, hand-crafted human social media posts and thoughts on the topic. So who wrote it, really? You tell me.
1. Matthew Yglesias recently described how AI uncertainty has given him writer’s block, because every medium-run policy analysis now collapses into arguments about AI’s trajectory. I recognize the feeling.
2. Of course, now we know that we need to use Markdown, not PDF.
3. On a related note: I’m currently hiring a postdoc at Notre Dame. The ad explicitly asks for interest in agentic AI tools. I suspect this will become standard in hiring criteria within a few years.
4. Sorry, but I have to give it to Nate Silver—Blueskyism is absolutely real.