71 Comments
Gerald Lombardi:

I want to underscore my agreement with Pyar Seth's comment in this thread and take it a pessimistic step further. I consider myself a social scientist and I've spent my working life mostly in two settings: scouring documentary and manuscript archives that will never be digitized, and engaging participatively with people who were doing things I needed to observe and analyze. I'm not living under a rock when I assert these things cannot be done better — or done at all — by any currently imaginable non-human agent. However, I fear that institutions and funding agencies that jump on the AI bandwagon will denigrate such research practices and eventually drive them to extinction, precisely because they can't be automated. Rather than AI freeing us up to do the "creative" stuff, the scope of our scholarly world will come to be defined by AI's constraints: that which can be done fast and easy and without much human intervention. Why? Because the iron rule of capital — that time equals money — will prevail as it always does, and we'll be living in a world enshittified by it.

Karol Kosnik:

Gerald, I understand your concern, but I think the issue is less about pessimism and more about structural change.

There may come a point where the kind of research you describe becomes, functionally, research for the sake of research. Not because it lacks depth or rigour, but because the surrounding system no longer assigns it priority. Longevity and intrinsic value are not the same as relevance within a given economic and institutional framework.

The harder question is not whether such work can be done without non-human agents. It is who will value it, fund it, circulate it, and build on it. If it remains undigitizable and therefore inaccessible to large-scale computational systems, it may persist, but as a niche practice rather than a central one. We do not actually know whether it will have broad impact, or what “impact” will even mean in that context.

This is not pessimism in the emotional sense. It is more like weather. Conditions change. People adapt. Institutions reallocate attention and resources. The landscape shifts, and what once defined the center can move to the margins.

The question, then, is not whether AI can replace such research, but whether the ecosystem that sustained it will continue to exist in the same form.

Enzo Trent:

There is nothing that cannot be digitized. Have you seen the dancing robots? The other half of this revolution hasn't even started yet. In a short while the robots will be here too; they will be able to move as we do, and do things like take photos with their eyes and upload them to a server, for example.

You will be able to turn one loose on a library filled with ancient books in dead languages and say to it, "Transcribe all these books in the language as written, but also save a copy translated into English, and then upload everything to the server." That is exactly what will happen.

Alexander Kustov:

Gerald, thanks for this thoughtful response. I agree that institutions narrowing the scope of legitimate research to whatever AI can automate is a real possibility. That would be a terrible outcome, but it's on us to channel these changes into something more productive and positive. I tried to address some of this in my follow-up--I'd genuinely value your take on it: https://alexanderkustov.substack.com/p/academics-need-to-wake-up-on-ai-part

Pyar Seth:

While the “crisis-of-humanity” critique and the double standard argument resonate, some additional specificity might be important here because, in my view, only SOME social scientists and only some fields are really being captured here. Humanistic social scientists who spend time in archives still want to see the documents, need particular references, and need to photograph those documents to highlight the material order of things. Ethnographers are still in the field. More broadly, if we consider, for instance, those in theater and performance studies, they’re still staging performances and then writing about those performances. And never mind the fact that some of us don't see literature reviews as just summaries. Oftentimes, the literature is meant to do selective, argumentative work: to complicate specific dimensions of our own claims, to name what the archive is doing, and/or to give theoretical traction to an empirical observation. In short, I'm not sure we all experience AI in the same way. The claim that "AI does research better than most professors" seems to assume a fairly narrow model of what research is, and what research looks like for specific scholars.

Alexander Kustov:

That's fair. I do think that the value of qualitative research will go up, at least for now, as a result of all the AI-driven changes.

zach robert wehrwein:

Why should I not conclude from this, not that social science can be done by AI, but that a certain style of social science (a few regressions, a literature review, not much originality) can be done by AIs? I have experimented with agents and absolutely think they have a great deal to offer. But I have yet to see someone celebrate an actual finding, contribution, or novel idea sourced from an agent.

Alexander Kustov:

Right. These tools have only become widely available in the past month, so I doubt there are any fully AI-generated papers in a top journal yet. But it will certainly happen at some point soon.

I also agree that the value of qualitative research and of getting novel data from hard-to-reach places will probably go up.

zach robert wehrwein:

So if agents have produced nothing of real note, why the headline claim "AIs are better at social science than social scientists"?

Matt:

Is there ever social science of "real note"? It's mostly a slow accumulation of evidence. Once a question and a dataset have been identified, the AI can do the rest with a little guidance. Soon enough none of those stipulations will be needed anymore.

Dr Tops:

There are interesting observations here, like the double standards point (thesis 4). But I think the piece has a foundational problem that limits how far any of it travels.

The essay universalizes from a specific methodological tradition--survey-based, experimental, quantitative--and presents that tradition's encounter with AI as academia's encounter with AI. That's a significant overreach.

Archival research involves not just locating documents but interpreting silences, provenance, and context in ways that resist prompt engineering. Ethnographic fieldwork and participatory methods aren't just data collection techniques; rather, they're epistemological commitments about who produces knowledge and under what conditions. The research is partly constituted by the relationships and process; there's no separable output you could hand to an AI. Theoretical work in fields beyond the author's, at its best, doesn't recombine existing ideas but rather reframes problems so that previously invisible things become visible. That's different in kind from synthesis.

'Academics need to wake up' lands very differently than 'quantitative social scientists need to rethink their workflow.' The latter is probably a defensible and useful argument. The former requires more of a reckoning with the full range of what academic research actually is.

Alexander Kustov:

Thanks, this is all a fair critique! I do think the value of qualitative work in hard-to-reach contexts will go up now — I’ll be writing more on that shortly.

Alfred:

“I get the desire for artisanal, hand-crafted research, with the matrices hand-inverted. But our job is to move the frontier of knowledge, not self-actualization.” 💯

Hollis Robbins:

Great stuff here (and thank you for the "Last Mile" shoutout) but I'll keep challenging you on the solution. The problem cannot be solved by "waking up." Individual faculty are trapped in a bureaucratic system that has already distorted their labor. Administrators are shoveling coal in dying engines. Since writing Last Mile a year and a half ago, I've been focused on the entire system.

Alexander Kustov:

Thanks, and that's fair. My point was that waking up is the first step, since so many folks are still in denial about all these AI-driven changes.

Hollis Robbins:

We are on the same side! My point here and in many other places is that faculty have almost no power anymore and that leadership is focused on metrics. https://www.compactmag.com/article/how-business-metrics-broke-the-university/

Kim Hosein:

I completely agree with the note that most of the opposition is status protection: narrowing one's thinking and learning to one specific method, language, or system, and gatekeeping it from others. Interestingly, the people in these institutions are the ones who train and set the features for the models themselves. The models reflect them (which we can see in how models now direct everyone to publish a white paper) and yet oppose their authority structure, because now people outside the structure can enter the conversation.

Valentin Guigon:

A lot of those critiques seem to apply to only a portion of the sciences.

From my vantage point (neuroscience), AI is certainly key to completing tasks in a reasonable timeframe, but there's no way AI can be trusted to the extent you've described. Over the last two years, my RAs and I have built experiments, processes, packages, analyses, pipelines, and computational models. Even running full-speed with AI, for each meaningful piece of work we've compressed the equivalent of months of work into weeks, not minutes.

Even pushing hard with the AI systems, the only way to achieve truly meaningful work in minutes is if you already have processes and boilerplate at hand. And that means the hard work has already been done by people beforehand.

I've personally looked at some of the outputs referenced in that post, and even though the achievement with AI is impressive, the scope of those projects was really not on par with what is expected in academia anyway.

I want to make clear that I agree with the spirit of your piece. There is a lot of rework to do in academia. But claims must be carefully calibrated, because there's a lot at stake here.

Valentin Guigon:

Edit: you've addressed in the opening that your audience is specifically your folks in academia, but the observations you make hardly generalize to everyone in academia.

Alexander Kustov:

Thanks, that's a fair point. I'm mostly talking about social science here based on my own experience, so you should certainly apply some of the theses to other fields with much caution. I should have been clearer on that.

David Chassin:

The fact that AI can write an academic research paper that's as good as a human-authored one is not in itself a measure of how well AI is doing research. It's actually a measure of how well AI slop can stochastically parrot human slop. For this we must lay blame on the reviewers and editors of academic journals, who for years have allowed human slop to be passed off as quality research because it profits their journals.

Jeff Milliman:

As someone who is finishing up their PhD in the social sciences and does a lot of the quantitative work that the piece argues may well become automated, I have two comments:

1. The paradox here is that AI is clearly a bubble. Yes, it has the potential to completely transform the way we do academic research, but these companies are clearly throwing massive amounts of money at AI tools that are unlikely to generate positive cash flows for years. The academics pointing out how great and how cheap these tools are aren't thinking of the larger issue: the entire AI field might collapse in the next couple of years. I highly doubt that Claude Code can be sustained on subscriptions from quantitative social scientists. The AI proponents also appear to assume that the costs of their AI tools of choice will stay relatively low. There is a clear problem with this: if Claude, for example, gains a monopoly on the market for AI tools for academic researchers, what is to stop them from raising prices to an extremely high level?

2. I'm no fan of the current publishing system and find the idea of an alternative system intriguing, but posts like this make me wonder whether social science is worth doing at all. What's the point if you can just sit in a room, type some prompts, and a couple of hours later have an entire paper completed? If a student did that on an assignment, we would consider it cheating. But if we do it, we are just pushing the "knowledge frontier," which is completely hypocritical. Over time, my fear is that we will be unable to do almost any part of the research process on our own. Reading the literature and engaging with data directly is part of the creative process. I'm not sure losing a lot of this process is a great idea.

Alexander Kustov:

Both good points. On the bubble: the pricing may be unsustainable but the capabilities aren't going away even if some companies do. I'd plan for the tools getting cheaper, not disappearing.

On the existential question: I felt this too. But the goal was never the process itself--it was understanding the world. The scholars who ask the best questions will be more valuable, not less. You're finishing a PhD, which means you're building exactly that judgment. That's the scarce resource. Not sure if you saw this recent post from John List, but it certainly gives me some optimism: https://x.com/Econ_4_Everyone/status/2029236978231984446?s=20

Jeff Milliman:

I saw List's post and largely agree with it. On whether these tools will become cheaper, Brookings has a recent piece on what AI means for social science and they are ambivalent about this, but point out that current pricing is based on what is very likely an unsustainable flow of VC money. The piece is worth reading in its entirety if you haven't seen it: https://www.brookings.edu/articles/the-train-has-left-the-station-agentic-ai-and-the-future-of-social-science-research/.

I think the real question here is one of delegation: how much do we delegate to an AI tool versus do on our own? There are levels to this, but I think it is clear that in a very competitive environment the pressure will tip us toward more delegation. While I do think our goal should be to produce knowledge, my sense is that the process is important. In the same way that doing archival work is important for a historian, my work has become better as I have engaged in doing some of the data collection, writing, and coding manually. If all we are left with is the creative side of things and posing questions, I worry that the capacities that help us do this will atrophy over time.

I dunno, maybe these tools will force us to re-evaluate the entire publish or perish system in a way that will be beneficial. Or, at the very least, reduce the supply of social science PhDs, which might be a good thing given how saturated the academic job market is.

Rhys Kelly:

The process is how you develop the capacity for judgment, isn’t it?

Dave Karpf:

The part that I think you have right is that the 6,000-8,000 word unit of academic production is probably dead.

The part that I think you're glossing over way too quickly is whether offloading the research to Claude Code agents moves the frontier of knowledge in a substantively forward direction.

To borrow a concept from C. Thi Nguyen's new book, The Score, it seems to me that you're conflating the scoring system with the intellectual endeavor.

Alexander Kustov:

That's a really good distinction, and I think you're probably right on the second part. Adding that book to my list.

e1luka:

My two points:

1. Anthropomorphic bias: AI is the best compression and search algorithm invented so far. Having basically the whole of humanity compressed and searchable in under 1 TB of data on my desk is indeed an amazing tool, but it is not a replacement for a research assistant providing creative input (not just morning coffee and quotes to fill your dull, useless ‘literature review’ while waiting for you to die so they can inherit your tenure).

2. Credential crisis: the bust of the publish-or-perish bubble is long overdue. We can only hope AI will finally do it. But the gatekeeping already mentioned in this thread is not just academic petty self-interest; it is a larger systemic problem. Academia is fully absorbed in capitalist rent-seeking via the IP and patents industry, and as such even a hypothetical, miraculous “wake up” on campus will not achieve much. As long as self-interest is socially mandatory, credentials will remain gatekeeping tools, not true measures of expertise. ‘The measure becomes the target’: the implacable Goodhart's law is here to stay.

Alexander Kustov:

Thanks! These are all good points.

Dr. Edith:

So if AI does the research, does that mean we can focus on teaching?

e1luka:

Ha ha, nice one! "Last year I published 2 engineers and 5 physicists, all with intact, unbiased critical thinking, all ready to use AI for great achievements" ;)

Marcus Seldon:

Do you worry about academic research being mostly automated in 5-10 years? For now, conceptual innovations, taste, creativity, and building out AI workflows still require the human touch, but are you so sure that Claude 6 won’t be better at all of those than all (or 99%+) of academics?

Alexander Kustov:

It's certainly possible. The hope is that we will adapt and start doing things that humans can still do better like doing qualitative research, collecting novel data from hard-to-reach physical contexts, and teaching with a good old "human touch." But that's the point--I genuinely have no idea what happens in 10 years.

Matt:

Yes Claude 6 (or 8 or 10) will be better at every one of those things. We have no idea how anyone is going to earn a living in 10-20 years. Maybe 5 years.

Jamie-Lukas Campbell:

This was an insightful and thoughtful take, Professor. Many thank-yous for writing it. It saddens me that some academics fail to see AI as a valuable tool that radically transforms student experience and learning. And it is difficult to listen to senior academics denounce this tool on the basis that it negatively impacts cognitive diversity and original thought when all one needs to do is look around the academic boardroom and find that diversity is nowhere to be found and is barely referenced in their syllabi.

While I appreciate educators' concerns around AI, some come across as incredibly bad-faith and demonstrate a stronger commitment to the status quo than to actually delivering on what students and our communities need. It is also laughable when academics shout "AI flattens writing" and mock students' writing, knowing that neurodivergent scholars have long been flagged for 'flattened writing' and that the styles of writing taught were built on a system that simply doesn't reflect the rich diversity of today's classrooms in the first place (epistemic injustice is rife in the humanities).

To your point, AI can lead HE to changes (oral defenses, more dialogical learning, skills-based learning, cognitive scaffolding, etc.) that ensure academic rigor and support ongoing learning and development. Libraries and archives are using AI to help categorize, transcribe, and make data accessible, saving researchers and students time and money; researcher-curated archives are now adopting AI to develop human-shaped narratives that avoid extractive and exploitative practices, translate work that was once inaccessible to non-native speakers, map work between the Global North and South, and more... I think it's exciting. And while I respect the anxiety AI has caused, the deeper issue it has revealed is the persistence of gatekeeping within institutions that have long excluded those with whom they should be most invested in engaging and collaborating.

Chad Raymond:

An example of conservative institutional gatekeeping: universities have created faculty/administrator committees to "examine the implications" and "make recommendations" on AI policy. The committees meet at most twice a semester and are supposed to issue findings in a year or two. The results of their work will end up in a digital file folder that no one ever opens.