1. Almost three-quarters of Americans are concerned about AI

Seventy-three percent of our nationally representative sample reported being at least somewhat concerned about AI as a national issue. This places AI alongside climate change in overall concern levels, and well above immigration. The concern is not concentrated in any single demographic group: majorities of every age bracket, education level, gender, ethnicity, and political party report at least some worry.

2. AI concern is bipartisan in a way that almost nothing else is

The partisan gap on AI concern is 13 points (79% of Democrats, 65% of Republicans). Compare that to climate change (50 points) or immigration (32 points). AI is one of the few technology issues where you cannot reliably predict someone's concern level from their party registration. Whether this bipartisan character holds as political elites begin taking positions on AI governance is an open question, but for now, the window for nonpartisan engagement is wide.

3. People are concerned but not yet activated

Despite high concern, AI ranks last among eight national issues in personal salience. Only 4% of respondents ranked it as their top priority; the economy (29%) and healthcare (23%) dominated. Political scientists call this an activation gap: the distance between registering concern and treating an issue as urgent. Most Americans recognize AI as a problem without yet treating it as one that demands immediate action.

4. Jobs and misinformation are the universal concerns

Every segment in the study placed job displacement and misinformation in their top three AI concerns. These are not niche fears. They are the baseline from which all other concerns radiate. What distinguishes the segments is what comes after: Progressive Alarmed add environmental impact, X-Risk Literate add weapons and extinction, Trusting Pragmatists add cybercrime and scams.

5. The concerned public is not one group. It is five.

Using latent class analysis on 25 attitudinal indicators, we identified five distinct segments within the 73% who expressed concern. They differ not just in how much they worry, but in what they worry about, who they trust, and what they think should be done about it. A 28-year-old progressive worried about data center water usage and a 73-year-old Republican worried about civilizational collapse both register as "concerned about AI," but they are concerned about fundamentally different things.
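Latent class analysis of the kind described above can be sketched with a simple EM fit on binary indicators. This is an illustrative toy on simulated data, not the study's actual model: the item count, class count, and estimation details here are placeholders, and real analyses typically also compare model fit across candidate class counts (e.g. via BIC).

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model to binary indicators X (n x k) via EM.

    Returns (weights, probs, post): class mixing weights (g,),
    per-class item-endorsement probabilities (g, k), and each
    respondent's posterior class-membership probabilities (n, g).
    """
    rng = np.random.default_rng(seed)
    n, k = X.shape
    weights = np.full(n_classes, 1.0 / n_classes)
    probs = rng.uniform(0.25, 0.75, size=(n_classes, k))  # random start
    for _ in range(n_iter):
        # E-step: posterior probability each respondent belongs to each class
        log_lik = X @ np.log(probs).T + (1 - X) @ np.log(1 - probs).T
        log_post = np.log(weights) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate class sizes and item-endorsement rates
        weights = post.mean(axis=0)
        probs = np.clip((post.T @ X) / post.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return weights, probs, post

# Toy data: two latent groups with different endorsement patterns
rng = np.random.default_rng(1)
g1 = (rng.random((100, 10)) < 0.8).astype(float)  # high endorsers
g2 = (rng.random((100, 10)) < 0.2).astype(float)  # low endorsers
X = np.vstack([g1, g2])

weights, probs, post = fit_lca(X, n_classes=2)
labels = post.argmax(axis=1)  # hard segment assignment per respondent
```

Respondents are then assigned to the segment with the highest posterior probability, and the `probs` matrix describes what each segment tends to endorse.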

6. Young people are the most worried, not the least

Among 18-to-29-year-olds, 82% reported concern about AI, the highest of any age group; among adults 60 and older, the figure was 74%. The cohort most fluent in the technology and most likely to use it daily is also the most concerned about its trajectory. This inverts the common assumption that concern stems from unfamiliarity.

7. More people want a pause than want the status quo

Across the full concerned sample, a larger share of respondents favored pausing AI development until safety is demonstrated than believed current oversight levels are adequate. A strong majority wanted either a pause or significantly stronger regulation. Only a small fraction endorsed the status quo, and fewer still said there is too much regulation already. The demand for governance is not a fringe position. It is the overwhelming center of public opinion.

8. That holds even when you frame it as competing with China

Eighty percent of concerned respondents said the government should prioritize AI safety rules, even if it means developing AI more slowly than China. This preference held across every segment and every political party, including 68% of Republicans. The appetite for governance is broad, bipartisan, and far stronger than the prevailing policy narrative suggests.

9. Nobody trusts anyone to do it

The demand for regulation coexists with deep institutional distrust. Across all five segments, only a small minority expresses even moderate trust in AI companies or government to handle AI responsibly. The most alarmed groups trust the least: two-thirds of Progressive Alarmed and Alarm Maximalists trust tech companies "not at all." This creates a governance paradox: the public wants action from institutions it does not trust to act well.

10. One argument reaches everyone. Most others divide.

Of nine AI safety arguments tested, the children-and-family frame was the only one rated "very convincing" by a majority of every segment (53-76%). Near-term, concrete frames about jobs, consumer protection, and technical safety performed well across the board. Existential risk arguments activated Alarm Maximalists and the X-Risk Literate but were perceived as "alarmist" by 83% of Trusting Pragmatists. The same argument that mobilizes one group can alienate another.

What this means

The American public is not divided on whether AI is a concern. It is divided on what kind of concern it is, how urgent it is, and what should be done. The five segments we identify are not arranged on a simple spectrum from worried to unconcerned. They represent genuinely different mental models of what AI risk looks like: systemic versus catastrophic, near-term versus long-term, institutional versus personal.

Understanding these distinctions matters because the policy window for AI governance may be narrow. Climate concern started bipartisan and fractured within a decade. AI concern is bipartisan today. Whether it stays that way depends in part on whether public engagement meets people where they actually are, rather than where advocates assume they should be.