Public Engagement Is Good for Your Research
The case for social scientists who talk to people outside of the ivory tower
This piece is more personal than usual. After my recent posts on AI in academic writing, I received a wave of private messages from fellow academics who agreed with my hot takes but wouldn’t say so publicly. My first instinct was to write about self-censorship in the academy. But the problem runs deeper. Most academics don’t want to engage with the public at all. This piece is about why that’s self-defeating, and why many of my colleagues are getting it wrong.1
A few years ago, I gave a talk at a retiree center in Charlotte, North Carolina, about my research on public attitudes and making immigration popular. Before I could even start, an older woman in the back raised her hand. “Why,” she asked, “would we want to make immigration popular in the first place?” No academic colleague had ever asked me that question before. Although I wasn’t able to bring her to my side fully, it turned out to be one of the most productive conversations I’ve had about my research with anyone.
I am increasingly convinced that for social scientists and academics, engaging with the public is not a distraction from research but a direct input to it. The audiences you encounter outside the seminar room, the questions journalists ask, and the pushback from readers who have no stake in your theoretical framework: these are all important data. They reveal blind spots that insular academic communities systematically miss. Public engagement also forces you to justify, in plain language, why your work matters, which turns out to be a surprisingly effective filter for figuring out whether it actually does.
The standard academic view treats public engagement as a trade-off: time spent writing for popular audiences is time not spent on “real” research. I will argue the opposite here. My own experience and the experience of researchers I admire suggest that talking to non-academic audiences, writing for the public, and presenting research to people who might genuinely disagree with you make your scholarship sharper and more honest. They do this by stress-testing your ideas against the one audience that academic peer review systematically excludes: the very people researchers claim to study.
What engaging the public taught me that peer review didn’t
One of my most cited findings on immigration opinion came not from a faculty seminar but from conversations with policymakers in Washington. They all kept telling me the same thing: even when polls show majority support for more liberal immigration policies, politicians still will not touch the issue. The anti-immigration side simply seems to care more. This observation never came up in the academic literature I had been reading, where the focus was almost entirely on why people oppose immigration, not on how much they care about it in the first place, including the pro-immigration side.
That disconnect led me to a paper documenting what I called the “issue importance asymmetry” in academic-speak, describing the simple fact that anti-immigration voters are consistently several times more likely to rank immigration as their most important political issue compared to pro-immigration voters. This holds across decades in the US, the UK, and Europe. It is one of the most consistent findings in immigration research. And it started with listening to people outside academia who were closer to the political reality than most of my colleagues.
This was not a one-off. People outside the university often see things that people inside it miss, not because they are smarter, but because they are working from a different set of assumptions. When the overwhelming majority of your colleagues share the same political priors, certain questions never get asked. I have written about how well-meaning colleagues have suggested I soften findings that might “feed the far right,” even when the results were solid. That kind of filtering is invisible inside the academy. It becomes very visible the moment you share the unfiltered version with a public audience and discover they find honesty more credible, not less.
My piece arguing that Western countries do not “need” immigration grew directly from this. Voters who heard experts claim economies would collapse without immigration and saw their country functioning just fine concluded the experts were dishonest. The reframing came from paying attention to what skeptics actually found persuasive, not from academic theory. Similarly, when I wrote about community sponsorship, I highlighted polling showing that 73 percent of Republicans supported the Welcome Corps, a U.S. pilot of the sponsorship program, because it taps into conservative values of localism and faith. Most immigration scholars had not even thought to test whether the right might support refugee resettlement, because the academic framing treated it as exclusively a humanitarian, left-wing cause.
The wisdom of an old Charlottean
Let me tell you more about that Charlotte talk. The audience was conservative and very old, and the woman who challenged my premise was not the only skeptic. Before I could start, an agitated man asked, as a gotcha question, whether I believed Americans had the right to secure their border. I said yes. He almost seemed disappointed I did not proclaim “no human is illegal” or something to that effect. He sat back and calmed down.
After the woman said we did not need any foreigners, I agreed that immigration is a challenging issue and asked whether she thought we should also stop, say, German engineers from coming. She thought for a few seconds and then said, “Of course not.” Within a few minutes, we had moved past the headline positions and were having a genuinely productive conversation about which specific immigration policies she did and did not support, and why. For the rest of my presentation, folks tried to listen to me despite all the hearing problems in the audience.
No academic audience had ever forced me to defend the premise of my research in quite that way. It made me rethink some of the ways my colleagues and I frame our questions and answers. We often assume that the value of studying what makes immigration policies popular is self-evident. It is not, and discovering that in a room full of retirees was more useful than discovering it from a reviewer comment.
Engaging with the public also improved my writing, more than LLMs ever could. When you have to translate a complex finding into a sentence that a non-specialist can follow, you quickly discover whether you actually understand it yourself. The vagueness that academic peer reviewers sometimes wave through does not survive a comments section or even a relatively shallow journalist’s follow-up question.
When jargon replaces argument
This brings me to an uncomfortable observation about a certain kind of academic work that I think public engagement would cure. Some research, particularly in what is called “critical” or “postmodern” scholarship, has become so insulated from public scrutiny that it is almost impossible to explain what it is actually saying, or why it matters.
I recently attended a seminar by Charmaine Chua, a geographer now at Berkeley, who presented research from a forthcoming book based on fieldwork aboard a container ship. The underlying empirical work was genuinely fascinating, on top of her great photography: vivid, detailed observations about the enormous salary disparities among crew members based on national origin, and the daily mechanics of global shipping that most people never see.
But the framing was almost entirely directed at an audience of critical geographers and “abolitionists.” Every observation had to be routed through Marx or David Harvey. One framework had to be “connected” to another framework, which had to be “put in conversation” with a third. There is a real story here about global inequality and labor exploitation, and it was being buried under layers of disciplinary performance.
In fairness to Chua, she has also written for popular outlets like Boston Review and Jacobin, translating her shipping research into language that (at least highbrow, left-wing) non-academics can engage with. She is, in that sense, doing the kind of public-facing work I am arguing for here. But the gap between the seminar version and the public version was striking. Even though we may disagree politically, I suspect her public version was much better, and not just because it was more accessible: the discipline of writing for a general audience forced clearer thinking about what the research actually shows.
This is not an isolated case, and the problem is that the vast majority of critical and empirical scholars alike do not go beyond publishing their work in obscure journals nobody reads. When research is never exposed to audiences who might say “I don’t understand what you mean” or “why should I care?”, it can drift into a self-referential loop where the work exists primarily to satisfy disciplinary gatekeepers. Public engagement is a corrective. It forces you to answer the question that every taxpayer has the right to ask: What is this for?
But engagement is not activism
I want to draw a distinction here that often gets lost. Public engagement is not the same thing as political activism. Confusing the two has done real damage, particularly in fields like sociology and political science, where “activist scholarship” has become an identity rather than a practice.
The problem with activist scholarship is not that scholars have political views. Everyone does, after all. But when the scholarship itself is oriented toward a predetermined political conclusion, it stops being scholarship in any meaningful sense. And in practice, activist scholarship has tilted overwhelmingly in one ideological direction, which has undermined the credibility of entire fields. This includes hard science, too. Even the scholars doing this work would benefit from making their research more accessible to audiences beyond their own political coalition, because accessibility invites challenge, and challenge is what separates inquiry from advocacy.
What I am describing is closer to what Cyrus Samii calls the “problem-solving” approach to social science. Samii argues that social scientists should orient their work toward clearly defined societal problems, using normative analysis to identify what needs fixing, observational research to understand why, and experimental methods to test what works. This is distinct from both “disinterested” puzzle-solving (which often produces technically impressive work that no one outside the discipline reads or needs) and from activist scholarship (which knows the answer before the question is asked). Problem-solving research takes sides on the problem, not on the politics. It asks: Does this policy work? How do we know? What should we try instead?
That framework describes what I try to do with my own research and public writing. I’m sure I have my own biases and blind spots, but my Substack is decidedly not an advocacy project. It is an attempt to make research that is often locked behind paywalls and disciplinary jargon available to the people, including policymakers, journalists, and voters, who could actually use it. And the process of doing that has made my research better, not worse, because it forces me to change my mind on issues every once in a while.
This does not have to be an individual effort. Some departments have made public engagement part of their institutional identity. George Mason’s economics department, with probably the single highest concentration of influential bloggers, is a good example: serious, well-published researchers who also shape public discourse on the issues they study even when they disagree (e.g., compare Bryan Caplan and Garett Jones on immigration). More social science departments, and especially public policy schools, should follow that model. The infrastructure for combining fundamental research with public influence already exists. Most places just choose not to use it.
Taxpayer-funded research belongs to the public
There is also a straightforward accountability argument for public engagement that I think deserves more weight than it usually gets. Most social science research in universities is funded, directly or indirectly, by taxpayers. The National Science Foundation, the National Institutes of Health, and state legislatures fund the grants, the labs, and the salaries. Taxpayers underwrite the whole enterprise.
That creates an obligation. Not an obligation to oversimplify, or to produce findings that voters will find convenient, but an obligation to make the work legible. If you cannot explain to a non-specialist why your research question matters and what you found, that is worth examining. Sometimes the explanation is genuinely difficult because the work is methodologically complex, and that is fine. But you should at least be able to explain why the methodological complexity is necessary and what it is in service of.
I think this test is actually useful as a self-check. If I am working on something and I find that I genuinely cannot explain to a thoughtful non-academic why it matters, that is a signal that I should reconsider either the framing or the project itself. Not everything that is publishable is important. And not everything that is important is inaccessible. The exercise of translation is also an exercise in self-honesty.
There is a more basic point here that often gets lost. Academics are not just academics. They are also citizens, presumably interested in contributing to the public good. It makes sense to do that using your expertise rather than compartmentalizing it. When I see colleagues who study migration and its political implications but never comment on the topic publicly, while sharing their hot political takes on Facebook anyway, it strikes me as a missed opportunity. The idea that you can wear a professor hat and a citizen hat and never connect them does not hold up for most social scientists. You are already a citizen with political views. You might as well be one with informed political views who shares the basis for them.
Yes, it costs something. But you should do it anyway.
Many academics have been told, by colleagues or even their dean, not to spend too much time on public engagement, or warned not to say something publicly that would embarrass their college. If this is advice against posting on social media without any serious research work behind it, that may be quite sound. After all, unless you are at a public policy school, even a piece in The New York Times will not count for much in your annual review, let alone toward tenure. So I do not want to pretend that public engagement is costless.
The most obvious cost is time. Writing a Substack post or giving a public talk takes hours that could be spent on a paper. For junior scholars without tenure, your promotion committee probably will not count your Boston Review essay or your popular podcast appearance. The incentive structure of academia still largely rewards journal publications, grant funding, and citations from other academics.
Then there is the social cost. Colleagues who view public engagement as unserious can be quietly dismissive. I have experienced this myself. Not as direct criticism, but as a certain subtle skepticism, a sense from some peers that time spent writing for the public is time not spent on “real” work. The signals are usually indirect: a raised eyebrow, a conspicuous lack of interest, the faint suggestion that popular writing is something you do instead of scholarship rather than alongside it.
And there is the online environment, which can be genuinely toxic. Platforms like Bluesky, in particular, have become what I can only describe as a corrupting influence on academic discourse. The incentive structure rewards performative outrage and virtue signaling over substance.
Academics who engage there often find themselves dragged into pile-ons that have nothing to do with the quality of their ideas and everything to do with whether they said something that violated the platform’s ever-shifting ideological consensus. Compare this to long-form platforms like Substack, where the incentive structure at least partially rewards depth and evidence. Not all public engagement is equal, and choosing the right venues matters.
Having said all that, do it anyway. The alternative is worse. Matthew Yglesias recently argued that if you write publicly about contested topics, getting piled on is not a rare catastrophe but a predictable occupational hazard. The question is not whether it will happen but when. His advice is simple: accept the risk and keep writing. Do not let the possibility of a pile-on shape what you are willing to say.
The issue goes beyond managing pile-ons, though. Ruxandra Teslo has written persuasively about intellectual courage as a scarce resource. Her most striking observation is that academics regularly message her privately to say they agree with positions she has taken publicly but are unwilling to say so themselves. I have witnessed this firsthand recently with my hot takes on AI in academia.
The courage to state publicly what you believe privately, especially when it is unpopular within your professional community, is not a nice-to-have. It is an epistemic necessity. Truth emerges through open argument. If everyone self-censors, the entire discovery process breaks down. Matt Burgess has made a related case that tenured academics possess extraordinary free-speech protections and have a responsibility to actually use them. His own experience suggests that speaking honestly and across ideological lines actually improved his professional relationships and research collaborations. The fear of consequences was larger than the consequences themselves.
I have found this to be true in my own experience as well. After my recent pieces challenging pro-immigration misinformation from within the pro-immigration camp and prompting my AI-skeptical colleagues to lock themselves in a room with Claude Code, I received pushback from many quarters. But I was also struck by the number of academics, including left-of-center scholars, who publicly endorsed these pieces that challenged their own side’s orthodoxy. As I wrote at the time, tenured (and untenured) professors should do this more often.
DEI is what happens when no one talks to the public
Let’s talk about faculty hiring for a moment because this is somewhat personal. The standard explanation for why universities went so far off the rails on race-based hiring after 2020 is left-wing bias and self-censorship. People were truly afraid to speak up. There is truth to that. Even influential tenured Harvard professors like Steven Pinker and Jill Lepore found it difficult to challenge the new orthodoxies.
But the deeper problem was that academics simply did not talk to people outside their institutions. Many of the faculty and administrators who embraced racial balancing in hiring genuinely believed they were doing the right thing. They had spent years inside institutions where this logic was so normalized that it never occurred to them to ask whether the public supported it, whether it was legal, or whether systematically excluding qualified candidates on the basis of race and sex might be ethically wrong.
Had they asked, the answers would have been clear. Race-based affirmative action in hiring is extremely unpopular among the American public, and has been for decades. Taxpayers fund universities to advance science and the public good. Nobody is paying us to maintain a particular racial balance among the faculty.
The scale of what happened is now well documented. Jacob Savage’s viral “Lost Generation” essay highlighted that white men went from 49 percent of tenure-track hires in 2014 to 27 percent by 2024. At UC Irvine, just three of 64 tenure-track hires in the humanities and social sciences since 2020 were white men (4.7 percent). John Sailer of the National Association of Scholars obtained internal emails through hundreds of public records requests that showed the machinery plainly: at one NIH-funded program, an administrator wrote “I don’t want to hire white men for sure.” Aaron Sibarium at the Washington Free Beacon documented similar patterns across the country. I can also tell you about my first-hand experiences of search committee members who invited me to give a job talk telling me candidly that it was not going to happen because of my racial background (of course, most would be smarter than to invite me or say anything at all).
All in all, if you were a white or Asian man on the academic job market in 2020 or 2021, especially one from abroad, your marginal chances of landing a tenure-track position in many fields approached zero, all else equal. The fact that the existing stock of senior faculty was predominantly white and male was no consolation to an ambitious but broke thirty-year-old finishing a PhD. So many brilliant scientists with enormous potential have either become adjuncts with no future or, if they were lucky, left academia. The harm to science in terms of discoveries delayed or never made is staggering.
So, the collapse of public trust in higher education over this period was quite predictable. Academics knew what was happening. Many disagreed privately. But almost nobody was explaining it to the public or pointing out that these policies had no popular mandate. That silence left the field to culture warriors on both sides and made the eventual backlash worse than it needed to be. It also cost a generation of talented researchers their careers, which is not the kind of thing a healthy profession stays quiet about.
How to actually do it better
So, our grim collective action problems aside, if you are an academic considering more public engagement, here are three things I have found genuinely useful.
Have a functioning website. First, and foremost, for the love of all that is reasonable, have a website. An up-to-date, accessible academic website. I genuinely do not understand colleagues who don’t. The notion that good research will find its audience on its own is optimistic to the point of delusion in an age when people are bombarded with information from every direction.
If you have done the work, make it findable. Thanks to Claude Code, from now on, my own site will be available in a dozen global languages, because accessibility means nothing if it stops at the English-speaking world. This very piece will be available on it in all languages upon publication.
Present your research to people who might disagree with you. This sounds obvious, but surprisingly few people do it, so I cannot recommend this enough. Go to a retiree center or a community forum. Or Bluesky if you’re writing on AI or LGBT issues. The audiences in these spaces are far more politically and demographically diverse than those of any college seminar. They will ask you questions that your colleagues never would, and those questions will reveal whether your argument actually holds up outside the assumptions of your discipline. The woman in Charlotte who asked me why we would want to make immigration popular taught me more in five minutes than many peer review reports have.
Write for the public. Start a blog or a newsletter. It doesn’t have to be a Substack, even though all cool people in academia are increasingly here now. The discipline of writing regularly for a non-academic audience changes how you think. It improves your prose, which then improves your academic papers. It forces clarity. And it opens you up to feedback from people with real-world experience in the things you study. Some of the most useful responses I have received to my Substack have come from readers who challenged my research claims based on their own experience: voters, immigrants, local officials, business owners, and even anonymous randos from the internet. That is a form of peer review that the academy does not provide.
Give pre-recorded interviews and do science podcasts. Popular science and policy podcast hosts ask different questions than academics do. They want to know what your findings mean for people who are not specialists. They push you to be concrete and specific. And they often identify angles that you, embedded in your own literature, have overlooked. They are not interested in any gotchas, so they will often send you questions in advance. I have had those folks ask me questions that opened entirely new lines of inquiry, things no academic colleague had thought to raise because everyone in the field took the same assumptions for granted.
What not to do, or do with caution
Do not confuse social media arguments with public engagement. Getting into reply threads on X or Bluesky can feel like engaging with the public, but the incentive structure on those platforms rewards dunks and outrage, not depth. A 280-character exchange rarely changes anyone’s mind or improves your thinking. Long-form writing, in-person talks, and substantive interviews are where the real feedback loop happens. Use social media to share your work and find your audience, not to conduct your debates. And yes, I know I should follow this advice myself more.
Do not wing it on unfamiliar topics. The fastest way to undermine your credibility as a public-facing academic is to opine confidently on something you have not studied. One bad appearance on a topic outside your expertise can overshadow years of careful work within it. If you are asked about something adjacent, either redirect to what you actually know or say “I don’t know enough about that to give you a useful answer.” That sentence, rarely heard from pundits and academics alike, tends to earn more respect than a half-informed hot take.
I have personally been asked on multiple occasions to come on news shows and talk about the US-Mexico border crisis, but I politely declined because it is not my area of expertise. Similarly, I now mostly refuse to talk to journalists about AI despite my recent notoriety on the topic, because I am a novice.2 Knowing when to say “this is not my lane” is itself a form of intellectual honesty that builds credibility over time.
Be generally selective with media requests, especially live interviews. If a journalist you know and respect reaches out on a topic you have actually studied, you should absolutely talk to them. Just understand that you will spend a few hours prepping and talking, and you may not be acknowledged or, worse, may be misinterpreted when their piece comes out.
For live interviews in particular, the risks are higher: the time you are given is limited, and you may not know what you will be asked. If someone you have not heard of reaches out, or the topic is adjacent to your expertise rather than central to it, the answer should be a polite no in most cases. Unless you do want to become the maligned stereotypical “talking head,” of course.
I will write more on this soon, but my sense is that with the help of agentic AI tools, scientists and experts should increasingly be able to produce better popular pieces on their own topics than generalist journalists can.
What’s lost when researchers stay silent
The stakes of this argument go beyond individual careers. When researchers with genuine expertise refuse to engage with the public, they leave a vacuum. And that vacuum gets filled by journalists and pundits without relevant training and advocates with axes to grind, and eventually by politicians who find it convenient to misrepresent what the evidence shows. The result is a public discourse about science topics that is poorer, more polarized, and more detached from evidence than it needs to be.
I have written at length about how “highbrow misinformation” develops when academic research gets filtered through advocacy groups and media outlets that strip away caveats and complexity. One way to fight this is to cut out the intermediaries. Not by replacing them entirely, but by making sure that the original researchers are also in the room, in the comments section, on the newsletter, explaining what their findings do and do not show.
The false trade-off between “serious scholarship” and public engagement has real consequences. It keeps good research invisible and lets bad arguments go unchallenged. It deprives researchers themselves of the feedback that would make their work better. If you are a scientist sitting on findings that matter, and you are not making them accessible to the people they are about, you are leaving value on the table for your field and for the people your research claims to serve.
Initially, I wanted to note that my argument applied more to the social sciences than to pure STEM disciplines. I could see how a mathematician might contribute through a breakthrough paper without ever writing a newspaper column or engaging with the public. My friend Venkatesh V Ranjan (read him!), though, pointed out that much of this still applies to any scientists who have to justify their funding to the public.
With regard to AI, though, almost everyone is a novice, so I can make an exception for some folks I respect when I have something of value to say.