Progressive Alarmed
18.9% of concerned respondents (N=378)
The youngest, most technically literate, and most politically liberal segment in the sample, yet the group least likely to use AI at work.
Who They Are
Progressive Alarmed are the youngest segment in the study, with a median age in the mid-thirties and over a quarter (28%) under 30. They are the most politically homogeneous group: 61% Democrat, only 7% Republican. They are majority female (55%), the most educated segment (59% hold a bachelor's degree or higher), and the least likely to have children (40% do). Climate belief runs at 93%, the highest in the sample.
The defining tension in this group is the gap between knowledge and use. Progressive Alarmed scored the highest of any segment on the AI understanding check: 64% correctly identified that large language models work by predicting text patterns. Yet they are also the lightest users of AI at work. Half (50%) report never using AI in a professional context, the highest non-use rate in the sample. They understand the technology conceptually without engaging with it routinely. This is a segment whose concern is informed by literacy rather than experience.
What Worries Them
Progressive Alarmed have a focused concern profile anchored in near-term, systemic harms. Their top concern is misinformation and deepfakes at 67%, the highest rate of any segment and well above the 46% sample average. Job displacement follows at 54%. Environmental impact of data centers reaches 44%, nearly double the sample average and far above any other group. This environmental concern is the segment's signature: no other group comes close.
What is largely absent from their concern profile is existential risk. Only 2% select human extinction. AI developing conflicting goals with humans registers at 3%. Weapons and warfare sits at a modest 15%. This is a group that worries about what AI is doing now and in the near term, not about what it might become. Their concern is concrete and values-driven: the information environment, the climate, the labor market, corporate accountability.
They select a moderate number of total concerns (8 out of 14), placing them between the breadth of the Alarm Maximalists and the narrow focus of the X-Risk Literate. The concerns they select are internally coherent and map cleanly onto a progressive policy worldview.
How They See AI
Concern is high and nearly uniform. Eighty-six percent describe themselves as more concerned than excited about AI, the second-highest rate after Alarm Maximalists. On overall AI concern, 89% report being at least "very concerned." There is little ambivalence here.
The trust profile is where Progressive Alarmed stand out most sharply. Nearly two-thirds (64%) say they trust AI companies "not at all" to develop the technology responsibly, the lowest faith in industry of any segment. Government trust is also low, with 45% saying "not at all." This creates a distinctive posture: they strongly favor policy action but distrust every institution that might deliver it. On regulation, the group splits nearly evenly between wanting a full pause on development (47%) and wanting stronger government oversight (52%). None say current levels are adequate.
The barrier data reinforces this pattern. When asked why they have not taken more action on AI issues, 64% say "I don't think my actions would make a difference," the highest efficacy-doubt rate in the sample. This is a segment with strong opinions, clear values, and a deep sense that the systems responsible for acting are unlikely to do so.
Voting salience is moderate: AI ranks fifth in their overall issue priorities, behind the economy, healthcare, housing, and climate change. Their top two national priorities, the economy and healthcare, are near-universal, but climate change ranking above AI is distinctive to this group.
What Resonates
In the message experiment, Progressive Alarmed responded most strongly to frames that align with their concern profile. Children and Family and Democratic Control tied for the lead at 65% "very convincing," followed closely by Jobs and Economic (63%) and Technical Safety (63%). The top four messages cluster tightly, all above 60%, and all focus on concrete, near-term consequences.
The existential risk frames performed poorly. The "pause development" message reached only 13% "very convincing," the second-lowest rate of any segment, and 48% perceived it as "alarmist." The international treaty frame fared somewhat better at 25% but was still seen as alarmist by 26%. The US Competitiveness counter-frame registered at only 18% very convincing, the lowest of any group. Geopolitical framing does not connect with this segment.
The perception data is striking in its consistency. Messages about corporate power, jobs, children, and technical safety were perceived as "realistic" by 80-89% of the group. The pause message was perceived as "realistic" by only 15%. Progressive Alarmed draw a sharp line between near-term systemic arguments, which they find credible, and existential arguments, which they perceive as overblown.
In Their Own Words
"Many of my colleagues and I have lost a lot of our independent contractor digital writing work to AI. So I was introduced to significant problems involving AI early on." > — Democrat Woman, 67, Wisconsin · College degree
"I am the most worried about the environmental impact. I live in Colorado and data centers are trying to pop up here where we already have a lack of water." > — Democrat Woman, 33, Colorado
"I loathe AI and refuse to use it. It is both over-hyped regarding its capabilities and underestimated in how much damage it can do to society. We need much stronger regulation as soon as possible." > — Democrat Woman, 41, Michigan
"AI will have astronomically negative effects on cognitive functions that are already falling behind in our education system. Future generations will lose even more critical thinking and the ability to think independently." > — Independent Woman, 28, New York · College degree
Implications
Progressive Alarmed represent a segment whose concern is rooted in a coherent values framework rather than generalized anxiety. Their focus on misinformation, environmental harm, and corporate power maps onto an existing progressive policy agenda, and their near-total rejection of existential risk framing suggests they view catastrophic AI scenarios as a distraction from the systemic harms already underway.
The gap between their high technical literacy and low AI usage is notable. Unlike X-Risk Literate, whose concern appears to derive from daily exposure, the concern of Progressive Alarmed appears to derive from understanding the technology well enough to identify its structural risks without needing to use it personally. This distinction has implications for how the two groups process new information about AI: one updates from experience, the other from analysis.
The efficacy deficit is perhaps the most consequential finding for this segment. They are the most politically engaged of the concerned groups in terms of values clarity and issue coherence, yet they are also the most likely to say their actions would not make a difference. This combination of strong conviction and low perceived efficacy is a well-documented predictor of disengagement in the risk communication literature. The segment most prepared to act on AI concern is also the most skeptical that action would matter.