The Anxiety of Influence in Scholarly Writing
We had a lovely debate in the department yesterday about the use of artificial intelligence tools in scholarly research. Pretty much everyone agreed that AI tools are enormously helpful for coding, including data processing and statistical analysis, and that they will fundamentally change our workflows. Many colleagues use these tools, albeit sporadically, and in that regard they function almost like a research assistant. Yet when the topic switched to writing, several colleagues stated that they have never used AI in their scholarly writing, strongly emphasizing that they never will. Using AI in this way, the argument goes, is profoundly detrimental to scholarship.
The tension underlying this position seems to be the notion that writing is thinking. We often clarify our ideas to ourselves in the process of writing. If language is indeed the medium through which thinking becomes actualized, then the process of writing is the process of thinking, and assistance from AI tools in the writing process means we would lose the very act of clarification and understanding. Worse, because AI agents have a tendency to regress to the mean (Hintze, Proschinger Åström, and Schossau 2026), they would ultimately standardize our scholarly writing and, in doing so, reduce the quality of proper argumentation.
This position, however, rests on a basic assumption: that there is a fundamental difference between coding and writing. Writing, in this view, is sacrosanct in a way that coding is not: while the latter is simply a pipeline that takes certain inputs and processes them into outputs, the former is the genuine intellectual work, fundamentally different from the technical task. This difference between the work and the task is what makes writing the true property of The Author. It is also why it feels wrong to treat the work itself as a form of task, which is what AI assistance ultimately implies: if that is all writing is, how is it different from running a command-line tool?
There are many merits to this reasoning, and it resonates with many scholars who were trained under the writer-as-connoisseur framework. Its appeal is difficult to ignore.1
1 Obviously, the scholar is the connoisseur (that’s what being a scholar is). I mean the writer as connoisseur.
Yet writing and coding are not different kinds of intellectual activity. They are both parts of a single scholarly argument, which I will call The Project. The sentences you write and the models you estimate are expressions of the same underlying logic. Consider what writing actually involves in practice. You choose to frame your article around one debate rather than another. You decide which arguments deserve a response and which can be (or must be) ignored. You organize your discussion section to emphasize one implication at the expense of several others. These are not merely acts of intellectual clarification: they are choices over theoretical exploration, no different in kind from deciding which covariates belong in your model or which robustness check goes to the appendix. If you accept that AI can help you explore and construct one half of that logic, you need a much stronger case for why the other half must remain a solitary, transcendent activity of deep thinking.
Put differently, it seems doubtful to me that there is anything about writing, specifically, that makes it a privileged site of thinking. It is one mode of constructing an argument among several, and treating it as the only mode worth protecting from the influence of AI looks a lot like mystification.
An understanding of social scientific scholarship as a project changes this basic calculus. Think of how, in quantitative research, we have come to accept that exploring multiple paths is better practice than reporting only the one with the cleanest result. While we may disagree on whether multiverse analysis is good or bad, it is undeniable that we take many paths, and rather than presenting a single model as if it were the only possible one, we should probably show the full space of reasonable choices. Now think about this in the context of writing: the notion that we clarify and refine our ideas through the process of writing is a good one, but seen from the point of view of the project, it is just as likely that the finished text hides many paths not taken: alternative framings, different theoretical commitments, or arguments that almost worked but didn’t.2
2 How many times have you found it disconcerting that you ended up reframing your front end even though your results were exactly the same?
AI as a tool makes this kind of theoretical exploration and clarification much more feasible. If we set aside the attitude that writing is foundationally different, and treat the formulation of arguments as a task in itself, we can see that writing is curating as much as it is refining. It may indeed be true that writing clarifies thought. But we also think by talking to other people. We think by arguing at a workshop. We think by sketching things on a whiteboard or staring at a regression table. Talking through an argument with an AI is structurally closer to these activities than people want to admit. It lets you sketch alternative lines of reasoning, test your logic against different framing devices, and perhaps map a space of arguments rather than simply settling on one.
A scholar’s job is, in essence, judgment. And judgment does not much care which tools you used to get there. We have already incorporated many cultural technologies into this process. What is the fight here, really?