X-Risk Literate

16.4% of concerned respondents (N=328)

The most bipartisan and heaviest AI-using segment in the sample, yet the group most focused on catastrophic and existential risk.


Who They Are

X-Risk Literate are nearly evenly split between Democrats (36%) and Republicans (36%), with 24% identifying as Independent. This bipartisan composition is unique in the sample and unusual for technology attitudes more broadly. They are majority male (55%), with a median age around 40, and more likely than any other segment to be employed full-time (51%). More than half have children (56%). They are the most racially diverse segment in the sample, with notably higher Black representation (20%) than the study average, though this finding should be interpreted cautiously given the absence of racial quotas in recruitment.

The segment's relationship to AI is defined by proximity. Roughly one in four (24%) use AI at work daily or more frequently, the highest rate of any group. Another 21% use it several times per week. These are not respondents forming opinions from news coverage alone. They work with the technology routinely and have formed their views in the context of that direct experience.

A notable tension emerges in the literacy data. Despite being the heaviest users, X-Risk Literate scored the lowest of any segment on the AI understanding check: only 45% correctly identified that large language models work by predicting text patterns, compared to 64% of Progressive Alarmed, who are among the lightest users. They were also the most likely segment to describe AI as "a thinking agent that understands meaning like a human" (13%, compared to 6-8% in other groups).

This result admits two readings. The conventional interpretation is that daily exposure to AI's capabilities does not produce mechanistic understanding, and may work against it. The alternative is that next-token prediction is itself a contested account of what large language models do. Whether systems trained to predict text are "merely" pattern-matching or developing something closer to general reasoning remains an active debate in the field, and respondents who work with these systems daily may be registering something about the behavioral output that the textbook answer undersells. The distinction matters because the two interpretations produce different risk models. If these respondents are simply anthropomorphizing, their existential concern may be built on a misunderstanding. If they are tracking real capabilities that exceed the "mere pattern-matching" frame, their concern has a different epistemic basis than is typically assumed.


What Worries Them

Like every segment in the sample, X-Risk Literate are most concerned about practical, near-term harms. Job displacement leads their concern list at 48%, close to the sample average. Cybercrime and scams (23%), privacy and surveillance (27%), and large-scale harm from coordinated cyberattacks (23%) round out their top tier. On these bread-and-butter issues, they look broadly similar to the rest of the concerned public.

What distinguishes the segment is not what leads their list but what appears on it at all. Human extinction or mass casualties was selected by 16%, substantially above the sample average of 9%. AI developing goals that conflict with human interests reached 13%, compared to an 8% sample mean. Weapons and autonomous warfare scored 21%, well above the 15% average. These are minority positions within the group itself, but they are elevated relative to every other segment, and they give the X-Risk Literate concern profile its distinctive shape: a conventional base of economic and security concerns with an unusual tail of catastrophic risk.

The inverse pattern is equally telling. Misinformation and deepfakes, the leading concern for nearly every other segment, registers at just 23% here, roughly half the sample average of 46%. Environmental impact of data centers sits at 16%, well below the 24% mean and less than half the 44% reported by Progressive Alarmed. Bias and discrimination falls under 3%. X-Risk Literate select fewer total concerns than most groups (an average of 5.3 out of 14, the second-lowest in the sample), and the concerns they deprioritize are predominantly the near-term, systemic harms that drive mainstream public discourse. Their attention skews toward what happens as the technology scales, even as their day-to-day worries remain grounded in jobs and security.


How They See AI

The attitudinal profile of X-Risk Literate is more internally complex than other high-concern segments. Nearly six in ten (58%) describe themselves as more concerned than excited about AI, but this is the lowest rate among the three most concerned groups. Eleven percent report being more excited than concerned, the second-highest rate in the sample. Another 29% describe themselves as equally concerned and excited. Unlike the Alarm Maximalists, where concern is essentially uniform, this segment holds appreciation and apprehension simultaneously.

Trust in institutions is moderate rather than collapsed. A third (33%) trust tech companies "somewhat" to develop AI responsibly, and 8% trust them "a great deal." Compare this to Progressive Alarmed, where nearly two-thirds (64%) trust tech companies "not at all." Government trust follows a similar pattern: present but qualified, not absent.

On regulation, the group splits roughly 50% to 40% between those who want stronger government oversight and those who want development paused until safety is demonstrated. The 40% pause figure is the second-highest of any segment, exceeded only by the Alarm Maximalists at 60%. Only 6% believe current oversight is adequate.

Voting salience is relatively high. Fifty-six percent say a candidate's position on AI regulation would be a "significant factor" in their vote, and 18% call it "one of the most important factors." In their overall issue priorities, however, AI ranks fifth, behind the economy, healthcare, crime, and housing. The gap between how seriously they take the issue and where it falls in their priority stack reflects a broader pattern in the data: AI concern is intense but competes with more immediate material pressures.


What Resonates

Nine message frames about AI safety were tested across all five segments. The response pattern for X-Risk Literate is distinctive: they are broadly receptive across frames, and the existential arguments that fail elsewhere gain meaningful traction.

Children and Family was the most convincing frame at 60% "very convincing," consistent with the universal pattern across segments. Jobs and Economic Impact followed at 56%. Near-term, tangible frames outperform all others, even in the group most oriented toward long-term risk.

The divergence appears in the existential and geopolitical frames. The "pause development" message, which reaches only 13% "very convincing" among Progressive Alarmed and 7% among Trusting Pragmatists, registers 32% here. The international treaty frame reaches 36%. The US Competitiveness counter-frame, which most segments perceive as unconvincing or alarmist, scores 37% "very convincing," the highest of any group, with 51% perceiving it as "realistic."

The geopolitical framing is particularly notable. The argument that the US must lead on AI development to prevent authoritarian misuse was treated as plausible by a majority of this segment, consistent with a group already reasoning about AI in terms of large-scale, cross-national consequences.

However, the perception data on existential framing reveals genuine division within the segment. The pause message was perceived as "realistic" by 30% but "alarmist" by 38%. This group does not agree on the correct level of alarm. What distinguishes them from other segments is that the question is treated as worth debating rather than dismissed outright.


In Their Own Words

"You happened to find a scientist who sets guardrails in some prominent AI models. AI will change human society in ways we do not understand. Anyone purporting to know what will happen is either hubristic or is seeking gain. I am concerned. For instance, the 'polarization' groups feel now is nothing compared to the existential threats we may face. Politics is useless now."
— Independent Man, 44, New York · Parent · Graduate degree · Full-time worker

"I use it every hour in my job, and it will soon replace me."
— Independent Man, 66, Florida

"Its a monster. This is how civilizations end. Too much technology, too fast. Don't know how to control it, and it will be in the hands of very evil people. It is already."
— Republican Woman, 73, Oregon · College degree · Retired

"If regulations are not put on AI, I believe it will cause the demise of the world."
— Democrat Woman, 68, Florida · Parent · High school · Retired

"Ai will Destroy the world we must act now or face great consequences..."
— Republican Man, 35, Arkansas · High school · Full-time worker

Implications

The X-Risk Literate segment challenges several common assumptions about who worries about catastrophic AI risk and why. Its bipartisan composition (36% Democrat, 36% Republican) places existential AI concern in genuinely nonpartisan territory, a position that is rare for technology issues in the United States. Climate concern, which enjoyed comparable bipartisan support in the late 1990s, fractured within a decade as partisan elites began signaling positions. Whether AI concern follows a similar trajectory or retains its cross-partisan character is an open and consequential question.

The segment's combination of heavy AI use and low mechanistic literacy suggests a distinct epistemic pathway to concern. These respondents are not arriving at existential risk assessment through the arguments of the AI safety research community or through philosophical reasoning about superintelligence. They appear to be arriving at it inductively, from sustained firsthand observation of what AI systems can do. The gap between what they see the technology accomplish and what they can explain about how it works may itself function as a source of concern. Technologies that exceed their users' ability to account for them have historically triggered elevated risk perception, and AI fits that pattern for this group.

For researchers studying public risk perception of emerging technologies, X-Risk Literate presents an instructive case. The standard assumption in risk communication is that greater exposure and familiarity reduce perceived risk. This segment inverts that relationship. The respondents with the most direct, sustained exposure to AI's current capabilities are the ones most focused on its catastrophic potential. Whether this reflects a well-calibrated assessment or an availability bias driven by firsthand encounters with impressive capabilities is an empirical question the present data cannot resolve. What the data do establish is that heavy use and existential concern are not mutually exclusive orientations, and that the demographic profile of people who hold both is far more diverse, bipartisan, and working-class than prevailing narratives about AI risk would suggest.