Discussion about this post

Gerald Lombardi:

I want to underscore my agreement with Pyar Seth's comment in this thread and take it a pessimistic step further. I consider myself a social scientist and I've spent my working life mostly in two settings: scouring documentary and manuscript archives that will never be digitized, and engaging participatively with people who were doing things I needed to observe and analyze. I'm not living under a rock when I assert these things cannot be done better — or done at all — by any currently imaginable non-human agent. However, I fear that institutions and funding agencies that jump on the AI bandwagon will denigrate such research practices and eventually drive them to extinction, precisely because they can't be automated. Rather than AI freeing us up to do the "creative" stuff, the scope of our scholarly world will come to be defined by AI's constraints: that which can be done fast and easy and without much human intervention. Why? Because the iron rule of capital — that time equals money — will prevail as it always does, and we'll be living in a world enshittified by it.

Pyar Seth:

While the “crisis-of-humanity” critique and the double-standard argument resonate, some additional specificity might be important because, in my view, only SOME social scientists and only some fields are really being captured here. Humanistic social scientists who spend time in archives still want to see the documents, need particular references, and need to photograph those documents to highlight the material order of things. Ethnographers are still in the field. More broadly, if we consider, for instance, those in theater and performance studies, they’re still staging performances and then writing about those performances. And never mind the fact that some of us don't see literature reviews as just summaries. Oftentimes, the literature is meant to do selective, argumentative work: to complicate specific dimensions of our own claims, to name what the archive is doing, and/or to give theoretical traction to an empirical observation. In short, I'm not sure we all experience AI in the same way. The claim that "AI does research better than most professors" seems to assume a fairly narrow model of what research is, and what research looks like for specific scholars.

