Alarm Maximalists
20.0% of concerned respondents (N=400)
The most concerned segment across every measure: broadest worry, strongest demand for regulation, and lowest trust in any institution to act. They do not need to be convinced. They need to be shown that action is possible.
Who They Are
Alarm Maximalists are predominantly female (60%), with a mean age of 48 and the largest share of respondents over 65 (32%) among the high-concern segments. They lean Democratic (49%) but include a meaningful Republican minority (13%) and a substantial Independent bloc (32%). This is not a purely partisan group. Education levels are slightly below the sample average, with 50% holding a college degree. Just under half (47%) have children.
AI use is moderate. About a third (34%) have never used AI at work, while 13% use it daily or more. Their AI literacy score (60% correct on the understanding check) sits near the sample mean. They are not heavy users or technical experts. Their concern does not stem from personal experience with the technology so much as from a comprehensive assessment that its risks are severe across multiple domains simultaneously.
What Worries Them
The defining feature of Alarm Maximalists is breadth. They select an average of 12.6 concerns out of 14 options, far more than any other segment. Where other groups prioritize, this group checks nearly every box. Misinformation and job displacement lead, tied at 45% each, followed by privacy at 35%, environmental impact at 27%, and concentration of power at 25%. But the gap between their top and bottom concerns is far smaller than in other segments. Even their least-selected items register above most groups' averages.
This breadth extends to the catastrophic end of the spectrum. Extinction or mass casualties reaches 16%, tied with X-Risk Literate for the highest rate. AI developing conflicting goals hits 12%. But unlike X-Risk Literate, these existential concerns sit alongside, rather than instead of, the full range of near-term harms. Alarm Maximalists do not trade near-term for long-term worry. They hold both.
The concern profile can be read two ways. The sympathetic reading is that this group perceives AI risk holistically, recognizing the interconnection between near-term harms and longer-term dangers. The critical reading is that intensity without selectivity may reflect generalized anxiety rather than informed risk assessment. The data do not resolve this question, but the segment's high voting salience and policy engagement suggest the concern is not passive.
How They See AI
Concern is essentially at ceiling. Ninety-eight percent report being at least "very concerned" about AI's negative effects, with 69% selecting "extremely concerned." Eighty-seven percent say they are more concerned than excited. There is no ambivalence in this segment's emotional orientation toward AI.
Trust is at floor. More than two-thirds (68%) trust AI companies "not at all," and 57% say the same about government. These are the lowest trust scores in the sample on both measures. The combination of maximum concern and minimum trust produces a distinctive regulatory posture: 60% favor pausing AI development entirely until safety is demonstrated, the highest pause preference of any segment. Another 37% want stronger regulation. Fewer than 1% think current oversight is adequate.
The barrier data reveals a paradox. Despite having the strongest opinions and highest urgency, 62% say their actions would not make a difference, and 57% say AI is moving too fast for meaningful regulation. Half (50%) believe US regulations alone would be ineffective. This is a segment that perceives the problem as both urgent and intractable.
AI ranks higher among this group's national priorities (mean rank 4.2 of 8) than in any other segment, and 74% say a candidate's AI stance would be at least a significant factor in their vote.
What Resonates
Alarm Maximalists are the most responsive segment in the message experiment. Nearly every frame achieves high conviction rates. Children and Family leads at 72%, followed by Technical Safety (71%), Democratic Control (70%), Consumer Protection (67%), and Jobs (66%). Five of nine messages exceed 60% "very convincing." This level of across-the-board responsiveness is unique in the sample and mirrors the breadth of their concern profile.
The most notable finding is the performance of existential risk framing. The "pause development" message reaches 40% very convincing, the highest of any segment, and is perceived as "realistic" by 43%, the only group where the realistic perception exceeds the alarmist perception for this frame. The international treaty frame scores 51%. Alarm Maximalists are the only segment where existential arguments are received as credible mainstream positions rather than fringe claims.
The challenge this creates is one of differentiation. When a segment finds nearly everything very convincing, identifying the most effective message becomes difficult. The gap between the best-performing message (72%) and the fifth-best (66%) is only six points. Message selection for this segment may matter less than for others, though the existential frames still lag the near-term ones by a meaningful margin.
In Their Own Words
"All those movies involving AI could potentially come true. Could have been a warning that people are ignoring. It's going to get dangerous and we'll be replaced by robots."
— Democrat Woman, 41, Georgia · Parent · High school
"Every time there is new technology, governments around the world use it for evil. Nuclear energy became earth-ending weapons. Can you imagine what can happen with evil humans with AI on top of the evil weapons that already exist? It is terrifying to be on this planet already as it is!"
— Democrat Woman, 64, Colorado
"The American government has been totally captured by the companies developing AI. These men are wealthy beyond anyone's dreams and cannot be controlled. Regulation will be impossible unless wealth is confronted at the same time."
— Democrat Woman, 74, New Jersey
"I use AI daily as I have to for work, but I find it terrifying and think we will look back in a few decades with regret if we don't implement safeguards ASAP."
— Independent Man, 40, Oregon
Implications
Alarm Maximalists present a specific challenge for researchers studying public risk perception. Their across-the-board concern and across-the-board message responsiveness raise the question of whether this segment represents a distinct risk orientation or a generalized alarm response that attaches to any presented threat. The data cannot fully distinguish between these interpretations, though the segment's high policy engagement and voting salience suggest the concern is translating into political orientation rather than remaining abstract.
The trust collapse in this segment is the most severe in the sample. A group that simultaneously believes AI development should be paused and that no institution can be trusted to pause it occupies a position of informed helplessness. This distinguishes them from Progressive Alarmed, who share the distrust but channel it into a specific policy framework (environmental, labor, corporate accountability). Alarm Maximalists lack that channeling mechanism. Their concern is total but undirected.
For the politics of AI governance, this segment may function as an intensity floor: the constituency that would support essentially any regulatory proposal, from the mildest transparency requirement to a full development moratorium. Whether that broad support is an asset or a liability for policy advocates depends on whether breadth of agreement compensates for lack of prioritization. A coalition that supports everything may struggle to prioritize anything.