Smart People Are Especially Prone to Tribalism, Dogmatism and Virtue Signaling
The symbolic professions aggressively select for those who are highly educated and cognitively sophisticated. This is a key source of their dysfunction.
Symbolic capitalists – people who work in fields like education, consulting, finance, science and technology, arts and entertainment, media, law, human resources and so on – tend to have unusual political preferences and dispositions compared to most other Americans.
These idiosyncratic ways of talking and thinking about morality and politics have come to dominate symbolic capitalists’ preferred political party – the Democratic Party – leading to increased alienation from the party among “normie” voters.
These long-running trends (which date back to the late 1960s and accelerated during the Clinton years) were exacerbated as a result of the post-2010 “Great Awokening.” Symbolic capitalists consolidated themselves even more intensely into the Democratic Party, even as our own views grew more extreme relative to the rest of the public. During this period, the Democratic Party’s messaging and priorities shifted radically to reflect “our” desires, in ways that left the party increasingly out of step with the median voter. This generated electoral backlash among populations that are sociologically distant from “us,” such as ethnic and religious minorities, less affluent voters, less educated voters, and so on. These trends were reflected in every single midterm and general election after 2008 and culminated most recently with Donald Trump retaking the White House following the 2024 elections.
My book, We Have Never Been Woke, shows that the transformations the Democratic Party underwent during this period were not unique. After 2010, many other institutions, especially those connected to the symbolic economy, likewise grew more overtly moralistic and political, and became more morally and politically homogenous, in ways that put them out of step with most other Americans. However, they also grew more disdainful and intolerant towards public opinion on these matters, instead prioritizing the preferences and priorities of symbolic capitalists, who comprise their most valued employees, clientele, investors and audiences.
This has contributed to growing mistrust in “our” institutions among large swaths of the public, creating an opening for right-aligned political entrepreneurs to win support by running against “us,” and vowing to render institutions of knowledge and cultural production accountable and responsive to “the people” once more.
At present, most empirical indicators seem to suggest that the current “Great Awokening” has run its course: symbolic capitalists and aligned institutions are moderating (although they have not yet returned to antecedent baselines). Critically, however, even in “normal” times, the symbolic professions tend to be morally and politically unusual and parochial. Moreover, independent of the Awokenings, they’ve been growing increasingly unrepresentative of the public in many respects over the last half-century, with important consequences for the functioning and legitimacy of these institutions, and the quality of their knowledge and cultural outputs.
We Have Never Been Woke explains why Awokenings play out the ways they do (and why they occur when they occur, and so on). This essay is designed to explore why the symbolic professions are so unusual, even in “ordinary” times. But to get leverage on this question, we have to start by clarifying a few important points about how human beings tend to perceive and think about the world.1
Our cognitive and perceptual systems are fundamentally geared towards self-advancement and coalitional struggles
Enlightenment-era scholars, and many classical scholars before them, believed that human cognition was oriented towards truth, objectivity and logic. Deviations from this ideal were held to be caused by forces external to our minds, such as our physical appetites, social corruption, or malevolent supernatural entities. By striving to liberate ourselves from these external influences, we could approach the ideal of “pure” reason, and perceive reality in its “true” form.
The contemporary scientific consensus has coalesced around a completely different understanding of human rationality and perception.
For instance, contrary to earlier conceptions of cognition juxtaposing the body and the mind, it’s now understood that we always think with and through our physical bodies, and there isn’t any way around it.
However, thinking is not something that occurs exclusively within our individual corporeal forms – it occurs primarily in conjunction with others, in dialogue with our physical and social surroundings, and with the use of human-produced tools and resources alongside other cultural artifacts. At a fundamental level, we think with and through other people and our environments in much the same way as we think with and through our physical bodies.
Indeed, our rational capacities seem to have developed largely in response to social pressures – to mitigate practical issues related to living in collaborative groups – such as:
Persuading others to do what we want or need them to do
Avoiding getting manipulated by others in adverse ways
Maintaining or enhancing the standing of ourselves and the groups we identify with relative to “others”
Over time, the extraordinary capacities we developed to address these types of social coordination challenges evolved into two distinct but interrelated cognitive systems.
One system specializes in dealing with immediate, routine and necessary tasks and dilemmas. It works rapidly, automatically, and largely unconsciously – drawing on and producing tacit knowledge.
The other system is designed to deal with more difficult, novel, and high-stakes challenges. It is more deliberative. It is the system of our conscious thought. Its outputs are more easily translated into words to be conveyed to others.
This latter cognitive system is what people are generally referring to when they talk about “thinking” or “rationality.” However, the former system actually drives most of our understanding of the world, drives most of our behaviors, and informs much of our conscious thought. As social psychologist Jon Haidt emphasized:
“The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant… Human minds, like animal minds, are constantly reacting intuitively to everything they perceive, and basing their responses on those reactions. Within the first second of seeing, hearing or meeting another person, the elephant has already begun to lean toward or away, and that lean influences what you think and do next. Intuitions come first.”
These general characteristics of human cognition shape everything people do – including knowledge and cultural production.
On the one hand, our socially-evolved brains are capable of remarkable feats. Working and thinking together, we’ve developed means of travelling into space. We can split atoms. We can communicate instantaneously across continents. We can prevent people from contracting diseases like leprosy and polio that have long ravaged humankind. However, at the individual level, our ability to understand the world is inevitably constrained in some highly consequential ways.
Consider the process of perception:
At all times, we are bombarded with stimuli. In a lecture hall, for instance, a professor could pay attention to how many students seem to be right-handed versus left-handed, how many ceiling or floor tiles there are in the room, what ratio of students are wearing corrective lenses, and so on, ad infinitum. It’s impossible to attend to all the information that is available to us at any given time – and that’s fine, because most of it isn’t particularly valuable anyway.
And of the things we do pay attention to, we can’t remember every detail indefinitely – and it wouldn’t avail us much if we could.
Instead, we have to make decisions about what to focus on, and what to retain (this is called “selection”). We have to organize the details we observe into a relatively stable and coherent picture – often making determinations about how events and phenomena relate to one-another (this is called “organization”). And then we have to make inferences about what it all means, what the significance of the observed phenomena are, and how we should respond to the things we observe (this is called “interpretation”). This process unfolds instantaneously and largely unconsciously. And for good reason:
In the real world, we can’t sit and ruminate indefinitely. Decisions must be made about what to focus on, what to remember, how it all fits together, if and why things matter, if and how we should act, and so forth. We typically need to make these determinations quickly, with limited information, in dynamic conditions, with high degrees of ambiguity and uncertainty. Trying to gather more and more information, contemplating things over and over without acting – this can have high opportunity costs and often leads to worse decisions in any event. Under most conditions, our intuitive cognitive systems are reliable, quick and highly efficient.
However, the decisions we make about what to focus on (or not), what we remember and how we remember it (or don’t), and how to interpret ambiguous signals – these choices are not made in a random or disinterested way. Our brains are not designed to produce an “objective” and “true” picture of the world. Instead, reflecting their social origins, our cognitive capacities are oriented towards perceiving, interpreting and describing reality in ways that enhance our fitness and further our goals – often generating a distorted understanding of the world around us in the process.
We pay attention to, easily recall, and feel positive emotions towards things we deem interesting or useful. We dismiss, downplay, dump, and have negative emotional reactions to information that is threatening to our objectives or our self-image, or that conflicts with our expectations or pre-existing beliefs. Things that don’t seem particularly significant in either direction, we largely ignore (even though these neglected details often prove to be quite important in retrospect).
These types of systematic distortions are known as “biases.” They’re tied to the core information problems we have to solve in light of our cognitive and perceptual limitations (and our need to act in the world despite them).
Critically, it isn’t just our perception that’s biased. Our attributions (the causal stories we tell about why things are the way they are, why things happen the way they do, and so on) operate in a similar way:
When good things happen that could be plausibly laid at our feet, we attribute those positive outcomes to stable and internal factors that are within our control – i.e. positive characteristics we possess and wise actions we took. When bad things happen, we tell the opposite story. Adverse outcomes are attributed to contingent and fleeting circumstances – things external to us and outside of our control.
We don’t just tell these systematically skewed stories for ourselves, but also for groups, movements and institutions we identify with. And for folks we disidentify with, the dynamics are reversed: we attribute good outcomes to luck, privilege, fleeting circumstances, and so on – and we attribute bad outcomes to durable characteristics and willful actions.
In fact, most cognitive biases can be boiled down to a handful of self-serving assumptions like these:
People typically think their beliefs are correct – morally, factually, and otherwise. We further assume that we arrived at our beliefs in a good way, that we think the things we think for good reasons. We view our own experience as a good guide to how the world works. We perceive ourselves to be personally representative of the groups and institutions we identify with.
As a corollary of these assumptions, when we encounter others who hold beliefs that are incompatible with our own, or who engage in behaviors that defy our expectations and sensibilities, we rarely start by questioning our own views (and the means by which we arrived at them). Instead, we try to figure out what’s wrong with “those people” who disagree with us. Our first (and typically dominant) instinct is to explain deviance from our own views by appeal to deficits (ignorance, irrationality, a lack of cognitive sophistication, inexperience) and pathologies (“those people” are bigots, zealots, authoritarians, and so on).
Again, these are not tendencies that people exhibit in their daily lives but somehow set aside when engaging in their work as symbolic capitalists. The inclination towards tribalism, status seeking, identity reinforcement and other forms of motivated reasoning may be, in fact, the main purpose of our advanced cognitive faculties – it’s the thing they’re best at, and it influences everything else they do.
In principle, these well-documented and ubiquitous cognitive tendencies should inspire a sense of intellectual humility in symbolic capitalists – a deep awareness of our own fallibility and limits. In practice, however, people tend to think of themselves as exceptions to general rules.
Most view themselves as smarter, less biased, and more authentic and moral than average. We tend to think that the forces that bind and blind everyone else do not govern our own attitudes and behaviors to the same extent, if at all. Other people (especially those we don’t identify with) are driven by self-interest, ideology, and so on. We are motivated by strong ethical standards, including a principled commitment to the truth.
Sociologist Andrew Abbott referred to this as “knowledge alienation”: declining to apply information we have about the world to ourselves and the institutions and groups we identify with.
Indeed, even when people intellectually acknowledge that we are susceptible to bias and error, and we’re working from limited knowledge and experiences (and other people might have important information that reveals our own commitments to be erroneous) – it’s hard for us to actually feel that way, especially in moments of contestation. This is because, with respect to many cognitive distortions, our brains seem designed to avoid recognizing our biases. We have “bias blind spots” that interfere with our ability to recognize when our cognition is going astray.
Worse, even when we recognize that we may be engaging in motivated reasoning, the social motivations undergirding that reasoning often help us justify our biases to ourselves and others.
It should be emphasized again that these cognitive tendencies are not necessarily pathological. In general, our biases and heuristics allow us to process and respond to extraordinary amounts of information quite quickly in dynamic circumstances. We could scarcely function without these distortions. Even bracketing the cognitive and perceptual limitations and constraints we have to work within – if we truly saw the world as it actually was, if we saw ourselves and our loved ones as they actually are, if they saw us as we truly are… that would be unlikely to make us happy.
Likewise, if our actions were tightly calibrated to the statistical likelihood of success, there are many risks we wouldn’t take (that we currently do take), and we’d miss out on a lot in life as a result. Most businesses fail within six years. An overwhelming majority of romantic relationships end in less than a year. Most employment relationships end up not working out for one or more parties eventually (relative to the alternatives) – typically leading to resignations or termination within five years. Social movements rarely achieve their stated ends. Most innovations are maladaptive. The modal result of publication submissions is rejection. An overwhelming majority of published scientific findings are wrong, trivial, and/or non-impactful. If we allowed these types of probabilities to govern our attitudes and behaviors, we’d rarely invest ourselves in anything.
In reality, however, people defy the odds all the time. Ostensibly irrational levels of confidence, conviction, resilience and optimism often play an important role in these outcomes. Our biases and blind spots are, therefore, not just a product of our cognitive limitations – they empower us to accomplish things we otherwise may not. In Nietzschean terms, our cognitive distortions serve important life-enhancing functions.
That said, it is also an empirical reality that biases and limitations to our perceptions, experience and cognition often do cause practical problems – especially with respect to knowledge and cultural production, and particularly when it comes to contentious social topics. Our socially-oriented cognition (seeking status, tribal victories, and the like) often overrides even sincere attempts to pursue the truth wherever it leads.
After all, our natural impulses are to sort into groups with people who share our values, politics, and other identity commitments, to publicly bring ourselves and push others into conformity with the group, and try to suppress, exclude or dominate others with incompatible goals and perspectives. Our default inclination is to perceive and interpret the world in ways that flatter our self-image, advance our interests and reinforce our existing worldviews – while explaining others’ deviance from our preferred positions through appeals to deficits and pathologies.
It is not natural, in fact it’s often deeply unpleasant, to slow down our judgements and think more carefully – taking care to avoid biases, oversights or errors. It’s not natural to work amicably with people across lines of profound difference, making decisions about things like admissions, hiring and promotion primarily on the basis of merit. It is not natural – and in fact, it is very difficult (but also quite important) – to recognize and publicly acknowledge our error, and then revise our attitudes, beliefs and actions in accordance with the best available evidence – irrespective of our own expectations, interests and preferences.
Symbolic capitalists may be especially prone to bias and motivated reasoning
Here, a discerning reader might object that, while these conditions might hold for “normies,” symbolic capitalists are different from other people. For instance, we have far more knowledge about various topics than the average person (especially with respect to our areas of expertise). We’re cognitively sophisticated (i.e. we perform highly on assessments of intellectual ability). We’re highly conscientious (allowing us to maintain high GPAs throughout our educational careers and, ultimately, to see our credentials through to completion). We define ourselves in terms of our commitment to following the truth wherever it leads and acting in accordance with the best information. Surely, these must influence our susceptibility to bias and motivated reasoning, right?
Fortunately, we don’t have to speculate on this matter. There is a robust literature on the political psychology and cognition of people with high IQ scores, high GPAs, and high levels of education – i.e. the folks most likely to become symbolic capitalists. It turns out, the kinds of people who take part in the symbolic professions do tend to vary systematically from most other people . . . albeit not in the ways we like to think.
For instance, most of us who work in the symbolic professions assume that, although “others” may be driven primarily by prejudices, emotions, superstition, dogma and ignorance – the positions of people like “us” (highly educated, cognitively sophisticated, etc.) are shaped by logic, and “the facts.” We make decisions based on a careful consideration of the issues; we would readily change our minds if the facts were not “on our side,” or as the relevant circumstances evolved.
In reality, people who are highly educated, intelligent, or rhetorically skilled are significantly less likely than most others to revise their beliefs or adjust their behaviors when confronted with evidence or arguments that contradict their preferred narratives or preexisting beliefs. Precisely in virtue of knowing more about the world or being better at arguing, we are better equipped to punch holes in data or narratives that undermine our priors, come up with excuses to “stick to our guns” irrespective of the facts, or else interpret threatening information in a way that flatters our existing worldview. And we typically do just that.
In a decades-long set of ambitious experiments and forecasting tournaments, psychologist Philip Tetlock has demonstrated that—as a result of their inclinations toward epistemic arrogance and ideological rigidity—experts are often worse than laymen at anticipating how events are likely to play out . . . especially with respect to their areas of expertise. What’s worse, cognitively sophisticated people tend not to be very self-aware about our error rates either, because we excel at telling stories about how we were “basically right” even when we were, in fact, clearly wrong – inhibiting our ability to learn from mistakes and miscalculations.
In a similar vein, experts have been shown to perform a bit worse than laymen at predicting the likely effects of behavioral science interventions. Political practitioners have been found to be no better than laypeople at predicting which political messages are persuasive. Comparative and longitudinal studies have found that highly educated political leaders perform no better than less educated ones, and may even be a bit worse in some respects.
Rather than becoming more likely to converge on the same position, people tend to grow more politically polarized on contentious topics as their knowledge, numeracy, or reflectiveness increases, or when they try to think in actively open-minded ways.
These empirical patterns would be shocking and difficult to explain while operating under the assumption that humans’ cognitive and perceptual systems are primarily oriented towards objective truth. However, these tendencies are exactly what one might expect if we instead work from the premise that our cognitive capacities are fundamentally geared toward group building and coalitional struggles, and that we typically reason in ways that help us achieve our goals with and through other people.
On this understanding of how our brains work, we might likewise expect that the kinds of people the symbolic professions select for (cognitively sophisticated, academically high-performing, highly educated) may be especially prone to tribalism, virtue signaling and self-deception. And, unfortunately, there is a lot of empirical evidence that seems to pull in this direction.
For instance, highly educated Americans are also much more likely than others to know what positions they “should” hold in virtue of their partisan or ideological identities, and we’re more likely to align our beliefs to systematically accord with those identities. We are more likely to form positions on issues we didn’t previously have strong opinions about by looking to partisan cues—and to modify our existing positions to bring them into line with new messaging from party leaders. Politically sophisticated Americans are also more likely to systematically accord their political beliefs and preferences to their religion, race, gender, or sexuality—conforming themselves with what they “should” think or say on the basis of their identity characteristics (while their less politically sophisticated in-group peers tend to hold much more heterogeneous views and dispositions).
That is, in a literal sense, the kinds of people who comprise the symbolic professions strive to be “politically correct” in their views much more than other Americans. It should therefore not be surprising that college-educated Americans are more likely than other Americans to self-censor. Nor should it be shocking to discover that censorship in science is heavily driven by scientists themselves, typically against people whose moral and political views diverge from scientists’ own.
With respect to political engagement, highly educated voters are much more likely than most others to donate to political causes. They’re also more likely to have flexible work schedules that facilitate their higher rates of voting, protesting, and other political activities. Yet, although these constituents tend to be more politically engaged on average, their political involvement is also much less likely to be oriented toward pragmatic ends. Instead, Americans with high levels of education gravitate toward “political hobbyism” and “expressive voting”—that is, engaging in political research, discourse, and other activities for the purposes of self-aggrandizement, entertainment, validation of one’s identity, and so forth instead of trying to realize concrete and practical goals.
In virtue of these tendencies, highly educated people tend to follow political horse races much more closely than the general public, and are often much better versed in contemporary political gossip, drama, or scandals. Yet we tend to be little more informed than most with respect to more substantive facts—often lacking even rudimentary knowledge about core civic institutions and processes. And the more we think we know, the less self-aware we tend to become with respect to our own biases and ignorance. As social psychologist Keith Stanovich put it:
“If you are a person of high intelligence, if you are highly educated, and if you are strongly committed to an ideological viewpoint, you will be highly likely to think you have thought your way to your viewpoint. And you will be even less likely than the average person to realize that you have derived your beliefs from the social groups you belong to and because they fit with your temperament and your innate psychological propensities.”
Indeed, education and cognitive sophistication themselves seem to increasingly serve as a key basis for tribalism. Highly educated people tend to marry and socialize with others possessing similar levels of education as themselves. Even among the highly educated, those with high levels of academic performance tend to cluster together—and gradually abandon social ties with those of lower GPAs. And with respect to engaging with “others,” college graduates often look down on those with fewer (or worse) credentials than themselves.
One series of studies looking at college grads in the United States and western European countries found that people who graduated from college viewed less educated people more unfavorably than they did any other reference group. They were also less supportive of programs to help less educated people compared to other potential recipients. And while they often expressed some sense of shame or regret for prejudices expressed against other groups, they were unabashed in their bias against those less educated than themselves. In light of the ways educational attainment varies systematically along the lines of race, gender, and class, these forms of education-based prejudice often exacerbate and reinforce other forms of bias and discrimination.
Likewise, although highly educated and cognitively sophisticated Americans are less likely to express racially prejudicial attitudes on surveys, we tend to be far more prejudiced than most against those whose ideological views diverge from our own – despite the reality that the kinds of ideologies symbolic capitalists are drawn to are robustly associated with increased risk of neuroticism, anxiety and depression. Highly intelligent people are also much more prone to overreact to small shocks, challenges, or slights. That is, we are more sensitive to, and less tolerant of, many forms of conflict or disagreement.
Even the apparent racial and ethnic tolerance that highly-educated voters exhibit may be a function of the fact that, precisely in virtue of being more elite than other Americans, symbolic capitalists’ material interests, ambitions, and life prospects do not appear to be threatened by people from historically marginalized and disadvantaged groups. Research has found that when highly educated people do come to sense that their own interests or prospects are undermined or threatened by competition with racial or ethnic minorities, they often become significantly more hostile toward the groups in question despite their previous egalitarian leanings.
More generally, although cognitively sophisticated people are more likely than most to endorse racial equality in principle, they seem to be no more likely than others to support policies that would undermine relative advantages they personally enjoy—and their cognitive sophistication is part of what may allow them to justify this gap to themselves and others. High levels of creativity have likewise been found to be connected with higher levels of unethical behavior—in part because highly creative individuals excel at rationalizing harmful actions to others and themselves, and tend to think of themselves as exceptions to rules that others have to play by.
In part as a consequence of our ability to produce rationalizations of this nature, highly educated Americans tend to be less aware of our own sociopolitical preferences than most—typically describing ourselves as more left-wing than we actually seem to be. Studies consistently find that highly educated and cognitively sophisticated voters tend to gravitate toward a marriage of cultural liberalism and economic conservatism. However, we regularly understand ourselves as down-the-line leftists. As economist James Rockey put it:
“How does education affect ideology? It would seem that the better educated, if anything, are less accurate in how they perceive their ideology. Higher levels of education are associated with being less likely to believe oneself to be right-wing, whilst simultaneously associated with being in favour of increased inequality.”
In addition to our poor self-awareness with respect to how committed to egalitarianism we actually are, highly educated Americans tend to be much worse at gauging other people too — typically assuming others are more extreme or dogmatic (and closer to our own views) than they actually seem to be. This is perhaps a product of the reality that, as compared with the general public, the kinds of people who become academics (highly educated, cognitively sophisticated, academically high performing) tend to themselves be more ideological in their thinking, more dogmatic in their views, and more extreme in their ideological leanings than everybody else—and the process of attaining a college education seems to drive people even further in the direction of moral absolutism. A recent National Bureau of Economic Research study found that the Americans most prone to zero-sum thinking included people who lived in cities, those who have especially low or high levels of income, people who identify as strong Democrats, and those who possess postgraduate degrees.
Critically, even though symbolic capitalists tend to hold moral and political views that are out of step with most other Americans, we tend to operate under the false notion that we do, in fact, represent the will and interests of the public. And when we’re confronted with evidence that “the people” seem to be in disagreement with us, we often assert that this is not because we misunderstand other people’s interests, but rather, others don’t understand their own interests correctly: they’ve been indoctrinated, misinformed, or are otherwise operating under “false consciousness.”
In reality, there is very little evidence that “indoctrination” is a thing – and misinformation seems to be the rule rather than the exception in human affairs, including (and perhaps especially) among symbolic capitalists. As Orwell put it, many ideas are so patently ridiculous that only an intellectual would give them the time of day. Often, again, we adopt outlandish ideas in large part to signal our group membership and moral or intellectual virtues — and we promptly abandon many positions when they seem inadequate or counterproductive to these purposes, truth values be damned (did I mention the Great Awokening is rapidly winding down?).
In short, far from being independent thinkers who come to their positions on issues through a careful deliberation of “the facts,” who change their minds readily in accordance with “the facts,” and who make wise decisions by deferring to “the facts,” the kinds of people most likely to become symbolic capitalists (highly educated, academically high-performing, cognitively sophisticated) are more likely than most to be dogmatic ideologues or partisan conformists. Yet it is difficult for us to recognize these tendencies in ourselves due to our larger “bias blind spots” and increased capabilities for motivated reasoning.
Put another way, it is precisely because the symbolic professions contain especially high concentrations of smart and knowledgeable people that they are riven with bias, motivated reasoning, groupthink and ideological parochialism. Subject-matter expertise and cognitive sophistication don’t empower folks to overcome the general human tendencies towards bias and motivated reasoning. If anything, they can make it harder. And the institutions symbolic capitalists are embedded in often exacerbate these tendencies further.
Many systems and practices intended to mitigate biases function in counterproductive ways
To help illustrate the problems many symbolic professions face with respect to biases and motivated reasoning, consider the case of scientific research:
Scientists are not randomly assigned our areas of study. We gravitate towards the specific questions we investigate, and the specific methods and theories we use to investigate them, for all manner of personal and social reasons we may or may not be conscious of.
Moreover, upon selecting topics of interest, personal commitments and beliefs shape how we approach research questions at a fundamental level. For example, consider questions like:
Which phenomena or dimensions of a phenomenon are worthy of study?
Which methods are best suited to investigate a particular question?
What counts as ‘evidence’ for or against a hypothesis? In virtue of what?
How are key terms defined and operationalized?
What is a social problem (as opposed to being merely a social phenomenon)?
Which perspectives will we privilege? Who will we analyze and how?
How are findings best described?
In most cases, these are not questions for which there will be objectively ‘correct’ answers. Instead, researchers have nearly infinite degrees of freedom when designing, executing and reporting on investigations.
Studies consistently find that one can present sets of researchers with the exact same data, to investigate the exact same question, and they’ll typically deliver highly divergent results. This is not just true for contentious social and political questions – the same realities have been observed in the life sciences, technical fields, and beyond.
This is one of the main reasons studies often fail to replicate: not necessarily due to flaws in the original study or the replication – but because each party made slightly different but consequential choices that led them to different conclusions, even when using the same method to investigate the same question, using the same types of data (and sometimes even the exact same data).
Put simply: scientists cannot just “follow the data” and arrive at ‘big-T’ truths. In fact, even the act of converting messy and complicated things and people “in the world” into abstract and austere data that can be easily communicated, transformed and operationalized – this is itself a highly contingent process, deeply informed by the assumptions, limitations and desires of the data collector. And said data get subsequently analyzed and presented as a result of choices scholars make, driven by myriad “trans-scientific” factors – and there is really no way to avoid this.
In short, every single step in the knowledge production enterprise is riddled with opportunities for bias and motivated reasoning – and as we’ve previously explored, the kinds of people who do research tend to be especially susceptible to bias and motivated reasoning. They also tend to be adept at rationalizing their skewed approaches to studying, analyzing or describing phenomena to themselves and others. As Pierre Bourdieu put it, “Being professionals of discourse and explication… intellectuals have a much greater capacity to transform their spontaneous sociology, their self-interested vision of the social world, into the appearance of a scientific sociology.”
Critically, it has long been understood in many fields that individuals can’t achieve objectivity on their own. We all have partial and situated knowledge and experiences. We’re all fallible and often biased in our reasoning. Knowledge of biases doesn’t, itself, eliminate them – often it can instead lead to greater overconfidence in our own objectivity. And as we have seen, specialized training and cognitive sophistication can likewise render us more prone to epistemic rigidity and motivated reasoning.
But fortunately, we aren’t forced to contend with these problems by futilely trying to pull ourselves up by our own bootstraps. Instead, under the right circumstances, it’s possible to collectively check and transcend our own individual cognitive limitations and vices. In contexts where researchers approach questions with different sets of knowledge and experiences, different material and ideal interests, using different methods, and drawing on different theoretical frameworks and value systems, we can produce something together and over time that approaches objective, reliable and comprehensive knowledge.
With this possibility in mind, many institutions of knowledge and cultural production have been designed around institutionalized disconfirmation, adversarial collaboration, and consensus building. For instance, decisions about who to admit, hire and promote within academic departments are supposed to be made through diverse and rotating committees of scholars hashing out the merits of various candidates together. Decisions about what to publish in academic journals are supposed to be made by multiple (double-blinded) peer reviewers – themselves selected and checked by editors. And so on.
However, these systems only work as intended when there is genuine diversity within a field across various dimensions. In the absence of substantive diversity, the same systems, norms and institutions that are supposed to help us overcome our limitations and biases can instead exacerbate them. They can stifle dissent and innovation. They can lead to collective blind spots and misinformation cascades. It can become easier to discriminate against, or create a hostile atmosphere towards, those who diverge from the dominant view. In contexts like these, important details and possibilities can be right in front of people’s faces, but it can be almost impossible for anyone to “see” them.
This isn’t merely hypothetical; it’s a reality that many symbolic professions have been struggling with because they lack substantive diversity along many key dimensions. This was the case before the post-2010 Great Awokening, and it will remain a problem even as the Awokening continues to fade.
The U.S. professoriate, for example, is drawn from a narrow and highly idiosyncratic slice of society along virtually all dimensions. And many subfields are even more homogenous and parochial in many respects than the professoriate writ large.
The chart here simply visualizes “identity” characteristics, but when we look at factors like class background or region of origin, the same patterns hold – professors are extremely unrepresentative of America writ large, and have been growing increasingly less representative over time.
As I illustrate at length in We Have Never Been Woke, similar realities hold for most other institutions of knowledge and cultural production. In fact, despite the intense focus on diversity, equity and inclusion in these spaces, they are some of the most exclusive and unrepresentative institutions in society, and have been growing further removed from the rest of the country over time.
The increasingly idiosyncratic constitution of the symbolic professions doesn’t just undermine the quality of the work we produce, it also contributes to a growing legitimacy crisis. Ever-larger shares of the population feel that folks “like them” have little voice or stake in our institutions. They don’t believe that symbolic capitalists and aligned institutions understand, respect or reflect their values and interests. This matters because, when people come to feel like they are not represented in institutions – and especially when they perceive that said institutions (or the people who dominate them) are outright hostile towards people like themselves – the natural and rational response is to delegitimize, marginalize, defund or dismantle those institutions. This is a pattern that holds across institutional types, issue areas, and geographical or historical contexts.
Symbolic capitalists try to pathologize this resistance. We attribute it to misinformation, anti-intellectualism and the like. However, the uncomfortable reality is that Americans actually have good reason to be alienated from symbolic capitalists. Our institutions are, in fact, highly unrepresentative of the public as a whole. Symbolic capitalists often are disdainful or hostile towards the concerns and perspectives of “normies.” These dynamics grow especially pronounced during Great Awokenings, but they’re also present in “ordinary” times, and have grown gradually worse over time – perhaps owing to the reality that the symbolic professions have, themselves, grown more fiercely oriented around selecting on the basis of college degrees (especially from elite schools), standardized tests, and the like. The more institutions filter for these traits, the more they’re selecting for people who are extraordinarily prone to dogmatism, extremism, tribalism and virtue signaling – and the more “out of touch” and ideologically parochial we would expect these professions to grow as a result.
It is incumbent upon symbolic capitalists to find ways to bridge the rapidly growing sociological distance between themselves and everyone else in society – both to enhance the quality and impact of our work, and to preserve the viability and credibility of our institutions.
A version of this article was originally published on 11/20/2024 by Inquisitive Magazine.
This essay is intended to serve as a resource for understanding the origins and consequences of cognitive and perceptual divides. Consequently, throughout the remainder of this essay, links will near-exclusively direct readers to peer-reviewed studies and meta-analyses.