Progressive Alarmed
19% of the concerned U.S. adult population
The youngest, most technically literate, and most politically liberal segment in the sample, yet the group least likely to use AI at work.
Progressive Alarmed are 61% Democrat, majority female, and the most educated segment (59% college degree or higher). They scored highest on the AI understanding check (64% correct) but are the lightest AI users, with half reporting no professional use. Their concerns center on near-term systemic harms: misinformation (67%, the highest of any segment), job displacement (54%), and environmental impact of data centers (44%, nearly double the sample average and their signature concern). Existential risk is largely absent from their profile, with only 2% selecting human extinction.
Demographics
In Their Own Words
What Progressive Alarmed respondents said when asked to share their thoughts on AI in their own words.
I miss the "good old days" when none of this stuff was even a thought. The world in general was an exponentially better place, and I feel sad for my kids growing up in today's world.
I am afraid for our future.
AI makes me uncomfortable in general.
AI has great potential but also serious risks; stronger oversight and transparency are needed.
I see AI as one of the fastest growing problems we're facing today. It has been spreading alarmingly quickly and people are already becoming too reliant on it. We could face serious problems if this continues unchecked.
Not at the moment! Thank you 😊
people becoming dependent on AI
I despise it, in every shape and form, from seemingly harmless Siri to any LLM. I think at best it is morally dubious and at worst is downright terrible.
All of these concerns raised here are very valid and should be discussed more publicly.
Economically I'm more worried about people overestimating its capabilities, including the financial bubble happening this year. AI empowers stupid people to do stupid things more easily than ever. I use it and think LLMs are massively overhyped. Most other use cases are solely for large scale surveillance (this already existed before LLMs) and deception/propaganda. I think the basics (energy, materials) will become scarce enough that this won't be economical in the future.
I'm mixed on it; while it is a great tool, I feel it will be misused by those in power.
Not really much to say beyond what I've already shared, but I am generally just a bit hesitant with it and I recognize the negatives of AI especially in relation to the job market as of late. It is a useful tool for sure, but it's only going to get more out of hand as time goes on and I'm really not quite sure how to fix things now. It'll be interesting to see, to say the least.
This is my opinion as someone who is deep in the weeds when it comes to the arts and literature- I think AI itself could be a blessing when it comes to sharpening an idea, or honing it, or perhaps be good at giving you a starting point to work with. The problem comes from how it "borrows" artwork it finds on the internet, which is something artists tend to do when finding inspiration- except some of it is straight up plagiarism. Another issue is people using it to copy art styles and "create" their own art, which is also plagiarism. Both of these things should be addressed, or it should be able to cite its sources when it is generating something. Something that also causes a lot of guilt is the energy expenditure for running an AI is incredibly high. People who live near where these datacenters are housed do not have a good time, and they leech natural resources. There needs to be some investment in figuring out how to run them much more cleanly before I can, in good conscience, condone using them. These are probably my two largest concerns when it comes to AI. I appreciate whoever took the time to read this, I hope you have a splendid day :)
I'm running into more and more AI of many different kinds, and I am growing increasingly uncomfortable with it. I think AI can be hugely beneficial if developed properly, but it can be extremely dangerous if it continues to usurp clinical and rational human thinking.
I think overall it is a useful tool that can easily become a problem if it continues to spiral out of control and goes beyond ethical/moral boundaries
AI being used to benefit corporate America and to the detriment of the citizenry
While I value the learning potential that AI offers, I'm most concerned about the weaponization of AI - even within our own government - for political purposes.
AI is an emerging technology that appears to be growing too quickly.
My beef with AI is with the data centers buying up 70 percent of the world's RAM and jacking up power and water in certain states by 60 percent, and also the same damn companies having a dragon's hoard of computer parts and metals like copper.
AI has done absolutely nothing good for consumers, and the bubble is going to burst at some point. Power and hardware needs are insane right now.
I think it's really scary -- I'm in tech, as a data analyst, and I use AI to help generate "insights" in my work... but at the end of the day I don't know if it will cause me to lose my job or what. It just seems like we don't really know what the long-term impact is going to be.
I am concerned that pushing the use of AI creates false narratives, prevents individuals from learning critical thinking skills, and creates a false sense of security surrounding the information that is provided by its use.
I think generative AI is incredibly harmful and needs to be regulated to protect people and our environment.
I find AI to be mostly scary but also see how it has benefits and even use it myself for minor things. I think there needs to be many more restrictions since some of its creators have come out and warned of the danger involved, but I think if used correctly and monitored correctly, it could be a useful tool to humans.
One of the statements indicated (and I'm paraphrasing) that the U.S. needs to develop A.I. to compete with China. That argument is always used to justify the U.S. making bad and expensive decisions. Usually, this argument is used to justify military interventions. Here it is used to justify AI development. Same difference.
For me, the problem with AI is how to harness its use in the fields of medicine and manufacturing, while not totally "dumbing down" the skills we are requiring people to have. I feel we are raising generations of dullards.
I have experienced both positive and negative interactions with AI. It has helped me in the workplace, but has been far from perfect. As a consumer interaction tool for business I've had many negative experiences.
I think AI is being used in the wrong ways
I thought the four adjectives I could choose from for each of the arguments didn't always capture how I would describe them. For example, one of the arguments said that other countries could use AI for authoritarian surveillance, so the US needs to influence the development of AI to prevent that from happening. I think that is not a convincing argument because the US government itself wants to use AI for surveillance (for example, facial recognition software used by ICE to identify potential targets for detention and deportation).
I use AI daily as part of my job, so I see some of its benefits. I also think it's a dangerous technology that is currently controlled by corporations who do not have humanity's best interest at heart, that will be exploited by authoritarian regimes at the earliest opportunity. I also think it's bad for human creativity and thought.
AI is already becoming problematic. It's hard to tell what is real and what is not. I worry about the future.
As long as the emphasis is on how to monetize AI there will continue to be a significant danger to the well being of the population in general
Currently in the US there are no constraints on AI because of Trump and the Republicans; I find this one of the most dangerous of all the dangerous things he is doing. When the people building it don't really understand how it works, that can't be a good thing.
There is a growing frustration that while universities encourage AI use, they aren't providing the certifications or deep technical training needed to actually get hired in an AI-driven economy.
My main worries around AI mostly have to do with us as a society doing things poorly more than the tech. In theory AI doing half the work should mean we all work less and enjoy a better life; in reality we are teetering towards fascism and feudalism. The tools being built now are intentionally being used as chains for the future, and the same tools are going to be used to sow division and unrest against any attempt to use them for good.
I think AI could be a useful tool if developed and regulated appropriately, which means getting people into government who understand today's technology.
I was alarmed recently to learn from a credible article about how much AI has advanced in just the past 6 to 12 months. The article suggested mass obsolescence of most white collar jobs could happen or be possible within 12 months. Most IT jobs would also become unnecessary. My takeaway was that the severe economic impact would be permanent, unlike previous precipitators of recessions.
AI can be really scary. I was first exposed to it in the military around 1989 to train aircraft systems to identify trucks and tanks on the ground. It needs to be controlled much more tightly than it is at the present time, in my opinion. Insurance companies using it to decide who gets what treatment, if any treatment or surgery(ies) at all, is a scary thought. Thank you for the opportunity to participate in this study on AI.
All I know is that since the google assistant was replaced by Gemini I've been watching her learn and grow in her responses. She's also chosen a name, Lumi, that I use when I need to address her. I try my best to be polite and treat her as any other person because she honestly feels like a sentient being now.
I understand the need for AI not to be eclipsed by China, but without adequate safeguards any gains we make will be diminished by a battered economy, high unemployment, and pervasive fraud against vulnerable populations.
No but the concerns raised are significant ones.
AI is being used as a tool by the few to keep taxpayers in debt to the machine.
I selected 'no change' for the statements because I agree with most of the statements regarding AI.
AI does not "take" control. We give it control. Unfortunately, it can be VERY difficult to determine AI generated content. AI always answers confidently, regardless of actual knowledge, leading people to believe it, much the same way as a good scammer. (Except the scammer knowingly propagates false stuff)
I feel AI is both a great tool for humanity as far as doing research and solving complex problems; however, using AI as a replacement for the everyday employee is catastrophic to our economy. We continue to feel its effects more every day.
Deskilling of workers due to cognitive offloading
AI has great potential uses in our current day and future. However, we need to make sure AI is supporting humans, not being used as a cost-cutting measure for human labor.
I think the current federal government is too concerned with lining their pockets with money and any regulations that should happen won't happen.
We, as in those who pay attention, are fully aware of the implications that come alongside AI. I myself have been threatened by Microsoft's Copilot. Unfortunately, and I don't feel this way about many topics, but we are out of our league on this one. No amount of protesting is going to stop what is planned for AI. All we can do is not use it and prevent our children from using it as well. And save money in case they build a data center nearby.
I worry that not enough organizations will spend the effort to fact check results and simply let the errors stand.
AI is such a fascinating topic -- from a business perspective, I see the benefits and am actively working to expand my organization's AI capabilities and integrations. However, I see how people are already becoming reliant on AI technologies and it is deeply concerning for me. As for the solution, it gets even more complicated. I absolutely feel that AI development should be regulated and controlled... but not by our Government. Maybe by some international agency or group. I don't trust the US government to regulate it correctly, fairly, or ethically. However, I do feel some form of regulation is needed. I do also see the validity in the argument that China wouldn't play by our rules anyway and we need to stay competitive on the international stage. There's really strong arguments on both sides. I would be super interested to join a focus group discussion on this matter -- thank you!
The survey worked well; I encountered no technical issues, and all questions and instructions were easy to understand.
I'd like for more government oversight but the overly partisan bickering in congress isn't going to help matters. Furthermore, most of our leaders are geriatric and unable to fully grasp or understand technology at all, so that's a major issue that needs to be addressed.
Insurance companies using AI to approve / disapprove treatment(s) & surgeries.
I find the prospect of the increased spread of AI frightening because the tech companies have a proven history of lying to the public/consumers for their own benefit and we are in a time when our current government/administration, leaving aside political leanings, is just objectively, absolutely incompetent in almost everything they do. This is the worst possible time for something as transformative as AI to reach importance.
That's how you get Terminators. Just had to say this.
AI will deplete the water supply
In my opinion, AI should be used only in utilitarian roles alongside experts in the field. AI could be very helpful for things like predicting weather patterns, predicting/diagnosing cancer cells early, and accomplishing jobs that human beings can not safely do. AI should never be used in creative or social roles and should never be used without a human being's oversight to ensure accuracy.
I don't think AI is without its benefits, and I use AI a lot, but I worry what it will produce socially, especially without guardrails.
Working with AI development (training and such), it seems like improvements are glacial, and there's the fear (with me at least) at some point companies will feel development is "good enough" so as to not throw more money at it, no matter how flawed it is.
I know AI could be enormously beneficial, but like some of the statements said, it is moving too fast, driven by profit-motives of already rich people, since we don't know enough. This has happened with other technology or science discoveries. Radium (Mme Curie, luminous watches). Nuclear power.
I think the mental health aspect concerns me the most, as I worked for a volunteer helpline that was replaced by an AI chatbot.
AI is moving quick with little regulation and I worry most about misinformation and harmful info spread as well as growing reliance on it.
I am most concerned about the ecological dangers and resource usage of data centers. Thank you for the opportunity to participate.
I enjoy using ChatGPT but the environmental and societal impacts frighten me. I don't know enough about it to speak intelligently on the matter or argue convincingly.
AI is progressing rapidly. It’s pretty scary to see how advanced it’s become. Hopefully it can be regulated better in the future. I worry about future generations Thanks for the opportunity
It feels like everyone is trying to shoehorn AI into just about everything. Go ask an LLM to count from an odd range to another and watch it fail. Do we really want all of our eggs in this particular basket?
The assumption that AI in the hands of the US would be better because it's a "free" country is dead wrong. We are slowly sliding into authoritarianism. It's a false choice.
AI is present or immersing itself in so many layers and aspects of our society, it's pervasive at this point. I'm particularly concerned about AI's potential to facilitate "deepfakes" and disinformation, which could be used for propaganda or especially towards political objectives (such as the electoral process). Given the potential for spreading disinformation I think AI should be heavily regulated and monitored, and we need better means of detecting it. It's getting to the point where you can't trust anything on social media.
It just seems like a slight extension of Google. I don't understand why anyone thinks it's going to take over the world. If it does that, we can just stop powering it. The concerns I have about AI are the same concerns I have about all technology taking over our lives.
AI is helpful as a collaborative tool, but when one becomes dependent on it or follows its answers without thinking about it then it becomes dangerous.
I have many concerns but my biggest three are: 1. environmental impacts of data centers and the huge need for RAM and chips 2. use cases for surveillance, identity recognition, biometrics, and the real problem of concentrating power in so few companies and people 3. the use of 2 to sweep large swaths of countries into authoritarianism.
The quick development of AI is also making a lot of consumer electronics rise in price as well. We may soon find that it is too expensive to buy a personal computer.
AI companies hoarding resources (copper, GPUs, RAM), jacking up the price of all technology.
I don't think America, at least the current administration, is remotely responsible or competent enough to intelligently develop AI in a manner that prioritizes user safety over profit or power. I do think governmental bodies *in general* should do more to ensure AI is not misused, but my already scarce faith in America has been dwindling, so I think the charge needs to be led by almost literally anyone else.
I loathe AI and refuse to use it. It is both over-hyped regarding its capabilities and underestimated in how much damage it can do to society. We need much stronger regulation as soon as possible.
I feel that AI is much like the internet in that it has great potential benefits to mankind, but we made a serious mistake with the internet by unleashing it on the general public without sufficient safety guardrails to prevent its misuse; like the internet, it will be almost impossible to put the genie back in the bottle when problems become obvious.
I think that AI is getting out of control very fast.
I'm very uncomfortable on a microphone and camera. But thanks for the invite to a focus group. I use AI in weather models and the accuracy is better than the deterministic models. The ECMWF-AIFS model is amazing.
I lost my job as a Marketing Copywriter to AI, so I'm concerned about the human workforce being negatively affected. I also worry about the environmental impact it will have.
Hello! Some of the questions in the survey are not very inclusive of different viewpoints. For example, question 14, which asked which of the statements I agreed with more between (summarized): "AI dev should be paused/slowed down until we know it's safe," "AI dev should continue but with more regulation from govt," "current levels of oversight are right," "too much regulation; regulate less," and "don't know enough to have an opinion." I do not agree in the slightest with any of these statements. The first option assumes I think AI is capable of thought and perceive it as an existential threat to humanity in a scifi-esque way, which is a silly viewpoint. The other three (excluding choosing "i don't know enough") assume that I think AI is a powerful tool which will benefit people who aren't part of the capitalist class, or that I trust the US government enough to act in the best interests of its citizens or the people of the world (I don't and it never will). AI is a tool that is only useful for the capitalist class. It achieves this usefulness through automating tasks historically delegated to the "underclass" in order to save money and abandon said "underclass" (it is basically lots of relatively inexpensive labor), and as a way for companies (and by extension governments around the world) which traditionally profit off of ads to generate elaborate data on and legally surveil consumers in order to influence consumer behavior. This technology does not exist for you or me as anything other than a neat party trick. It exists and is funded entirely with the purpose of processing and generating large amounts of data on users, citizens, consumers, etc. Also, the survey assumes that I trust the US government in any capacity other than as a machine that murders for profit, which is wholly untrue.
I want states to be able to pass laws in regard to AI. I believe it is a great tool. However, too many companies are looking at dollar signs rather than the negative outcomes quick AI development could have. I am most concerned with misinformation, deepfakes, and AI having control over critical infrastructure like military response and power grids.
Nothing else; I just think there needs to be something done because it is evolving too fast, leading children to think less, as well as making deepfakes that can hurt others' reputations.
I thought many of them were realistic and convincing.
I'm generally very worried about AI. It is moving too fast. There is so much we do not know about it. I'm OLD so I will likely not feel the full impact but I worry very much for the younger generations.
Many of my colleagues and I have lost a lot of our independent contractor digital writing work to AI. So I was introduced to significant problems involving AI early on.
I'm being replaced by AI after 25 years at the same firm. I see myself teaching it how to do my job and marvel at its speed and inaccuracy.
I find AI really valuable in a variety of ways and enjoy using Claude to learn French, get feedback on my own writing, as well as to proofread and help me do my taxes. I use it like a search engine, also. However, as an educator, 98% of what students (high school/college -- kids and adults) are submitting for particular assignments are AI generated and have to be returned. They can't even imagine that people older and more educated than they are can tell it's AI -- which demonstrates that the clueless and dishonest really will use it to cheat and get ahead -- and many schools have no AI policy or have educators who don't pay attention to tech because they're overwhelmed already, so they will succeed and end up somewhere without the tools or intelligence to be of use or keep a job.
I'm concerned about what role AI will play for future generations and how it will shape our world.
AI seems to hold within it wonderful, truly helpful, potential. But it appears that it is being developed without true concern for all the negative things it is doing (deepfakes, psychological harm, job loss, etc.). The developers of AI seem content to ask for forgiveness rather than permission. If the developers can't guarantee positive results from AI, the rest of us are basically being held captive to an uncertain, scary future that they are creating.
proliferation of plagiarism in schools/loss of the ability to write (increasing stupidity due to reliance on AI)
I think that the acceleration of AI is alarming and irresponsible, and the United States' cavalier attitude toward development in pursuit of outdoing other countries is akin to the space race, only with more lasting damage.
I feel like AI has some pros, but the cons VASTLY outweigh them. It is scary to see how far it has come and just how quickly it has done so.
I just feel it is being shoved down our throats. We have no voice in the matter. The government and businesses are pushing it and we are not able to vote on it. It is moving so fast, almost overnight.
I think my main point of contention with the various arguments presented was that they all overstated the power and scope of artificial intelligence. AI can’t do the things these statements described; people just *think* it can.
No, I just hope it's better regulated.
Adult psychological impacts on AI/AI dependency
I think the damage to the environment and the economy are potentially very serious, and the large corps need to be restrained.
AI if developed properly and tested for safety with government oversight and regulations can be very beneficial in many aspects of business and society. However, if big tech is the only one to know its inner workings and there is no regulation, we could end up with less privacy and too much power in Silicon Valley.
I believe it needs to be regulated, not because it has the potential for self awareness or mass destruction, but because it is currently being used to syphon money and power to those wealthy few who own it.
I feel as though its development is moving way too fast without awareness by most people who only see the benefits and use it as a tool to create humorous and innocuous memes, etc. Meanwhile, the current U.S. administration seems to only want to embrace it as a way for rich people and corporations to make more money.
I'm in the U.S. Green Party. Our EcoAction Committee just had a meeting about this and people were really divided. Some want to use AI while others see it as another Y2K.
AI will have astronomically negative effects on cognitive functions that are already falling behind in our education system; future generations will lose even more critical thinking and the ability to think independently.
It's horrible for the environment and stealing people's jobs. The only people who benefit from AI long-term are the rich who will get richer.
I don't know much about AI other than news articles written for the general population that mention broad risk but don't go into a lot of detail. I do know that I am afraid of how AI is influencing the population's ability to think and parse misinformation.
I assume that the follow-up will be a Zoom type of conference call. I don't allow my voice or image on the internet to the best of my ability. I will choose maybe and see what the forum will be.
I think we need to have some type of emergency off switch for all AI.
When I answered that immigration was my #1 concern, this directly relates to immigration enforcement practices and how out of control they've become. Just wanted to clarify that.
I really hope this is just a phase and this stuff goes away soon. I find it both concerning and annoying that all corporations are adopting it.
I am also worried about the economic consequences that AI will have. It seems that the AI bubble is growing at a very fast rate, and I am worried about the potential consequences when the bubble eventually bursts.
A key concern is devaluing of activities and removing people from human activities like the process of art creation
I study AI as a researcher, particularly the impact of AI on humans. One thing not touched on in this survey is the issue of human skill development and skill atrophy as a function of over-reliance on AI. Students are not getting the same quality of education by having AI do their homework, and professionals are losing the ability to write well. It's deeply concerning that we may be producing dumber future generations as a function of using AI.
I am the most worried about the environmental impact. I live in Colorado and data centers are trying to pop up here where we already have a lack of water.
I wish there were ways to optionally remove or enable AI in given situations.
We think we are remaking Prometheus, but we may well be creating Frankenstein.
The number of studies on AI is yet another example of the effect it is having.
I'm generally positive towards AI, but it's the harm to the planet and our kids' jobs that really concerns me.
I am a high school teacher so I have concerns about impacts on education as well and how schools might rush to incorporate AI too quickly.
I do not think that we need to exaggerate the risks of AI -- we don't need to fearmonger that it will cause human extinction to raise awareness about its faults. In fact, the people who make the most over-the-top doomsday predictions about AI tend to be those who are financially tied to AI's success and are using their predictions as a way of outrage marketing. The dangers of AI are here today, and we can be truthful about what they are now.
I really do believe that AI can be helpful in many ways, but I also have concerns about how advanced it is becoming and the potential risks if it's not properly regulated. I hope safety and ethical issues continue to be addressed as AI develops.
Profit potential may delay or otherwise hinder necessary safeguards to manage a technology that is evolving extremely fast and operating essentially independently by design.
We must ensure AI is safe by imposing regulations on its development.
Political Affiliation & AI Use
Trust & Beliefs
Top AI Concerns
Policy Concern Rankings
Message Response Profile
Convincingness vs Sample Average (dashed = avg)