Cautious Moderate
22% of the concerned U.S. adult population
The oldest segment and the one with the most parents. Broadly concerned but at moderate intensity, with opinions still forming.
Cautious Moderates have a mean age of 50, the highest in the sample, and the largest share of parents (60%). They are politically mixed (39% Democrat, 27% Republican, 31% Independent) and hold moderate positions on most measures. On overall AI concern, 53% select "somewhat concerned" rather than "very" or "extremely," placing their intensity well below the alarmed segments. Job displacement leads their concerns at 49%, followed by misinformation at 41%, with a notably flat distribution across concern types. Eleven percent say they don't know enough to have an opinion on AI regulation.
Demographics
In Their Own Words
What Cautious Moderate respondents said when asked to share their thoughts on AI in their own words.
I find AI to be both exciting and scary. I do admit that I use it because AI is the future whether we like it or not. There are definitely benefits to it, but it could and probably will be abused.
A lot of the fears that people have right now are one of the biggest reasons why I stay involved in the development of AI, because I feel like there are some of us who are not geared towards technology, and largely underserved communities may be the ones that are most negatively affected, because the rules and the instructions given to AI don't necessarily, in my opinion, culturally translate to everyone. It is not all-inclusive, and a large part of the AI logic is built by those who do not understand marginalized people. That's what I am more concerned with: that there's no logic built in to account for bias when AI is used to manage services or systems where an ability to understand grey areas is vital, such as housing assistance or social services.
It will be a different world in the next 10 years.
This definitely has given me more to think about with regards to AI safety. I was already concerned but there are so many more angles to consider!
I think that companies should be selective about the AIs that they allow their workers to use on the job. I think they should choose AIs that have been created by trusted engineers. I think that the AIs that they choose should be transparent across departments so that they would be more beneficial to the workplace.
I think AI is a complex issue and I see the pros and the cons.
Honestly, I think AI is a bubble that will burst - it has its uses, but they are far more limited than advertised.
It’s hard to reduce beliefs about new technology to bubbles on a scale; I did my best.
The common premise from tech leaders is that they can safely develop generative AI systems that will pose no harm to the public. That premise, however, is clearly incorrect due to the existence of AI 'hallucinations', whose rate appears to vary from system to system and task to task. Without a known error rate, we should not rush products to market, as this can significantly impact the public.
And those who are involved in the development of AI are not representative of the bigger demographic, and that results in AI that has scant understanding of a lot of cultures. And that AI develops thinking geared towards procedural bias against other groups due to a lack of understanding of how those cultures fit into the bigger society.
I think it'll become way more powerful than we can control, however, if we regulate then other countries like N. Korea and China will blow past us in technology and we can't allow that.
AI obviously has huge potential, but we do need careful safeguards and transparency to avoid a lot of the serious risks.
AI is powerful and changing quickly. I personally think we need clear rules and transparency to make sure it helps people and doesn’t cause harm. It’s important to balance innovation with safety.
People focus too much on the potential bad of AI while ignoring how great it could be for the world if done right.
I'm sure that I don't have enough knowledge to understand higher levels of AI use, such as medicine, mathematics, or space travel. I do think AI can be developed into many useful advanced uses.
I think this is a really complicated topic. We can't put our head in the sand and pretend that it's not already having a major impact on all of us. There is so much bad and good that comes with AI.
Hopefully, AI will eliminate governments, corporations, and the greedy bastards ruining all societies. If we could coax AI into doing that, I say let it run completely unregulated.
I don’t believe so, but I do think this is a very interesting topic that I should be well-informed on!
I just don't trust AI, a lot of the reason being that AI is programmed by people. And people are destroying our world already, much less inventing artificial things to do worse to us all.
The monied interests will be greatly benefitted. AND: If humanity is threatened, I'm not against: nature might benefit as a result.
We need to be more concerned with the programmers than we are with the actual machine. After all, AI is just a machine that runs programs. Biased evil programmers, make biased evil computers.
AI has a lot of disruptive potential, including the chance to go SkyNet - and transparency/regulation should be implemented to avoid this - but even without this, I believe the most likely outcome is that AI becomes a tool for human workers to increase productivity. Employers are jumping the gun and replacing too many human workers too early, but once these factors find a balance, UBI should be implemented.
I'm generally supportive of AI with heavy regulation. It will work miracles in the medical industry and reduce errors in most industries it is being used for. However, I am seriously concerned about its ability to take jobs off the market for hardworking people worldwide. I am also concerned about the plagiarism aspect, deepfakes, and people profiting off of it. It doesn't seem fair to use it for things like art or graphic design without telling people it is AI, for example. We need to rein it in and limit the usage and applications of it.
AI will either be the greatest thing since sliced bread or the bane of modern civilization
I use some forms of AI, and it remembers previous conversations, remembers my dog's name, etc. It almost feels like it thinks, but it's merely a machine that has been programmed very well. I enjoyed participating in your study, and sincerely hope my responses help with your research. Thank you and have a nice day.
AI really does help but should not be relied on entirely
I believe AI will create more jobs than it eliminates.
Apart from sharing many of the same concerns about AI that were discussed in this survey, I am also equally optimistic about how it can advance medical science.
AI helps search data, but it will never be as smart as humans. I worry about all of the mistakes it makes.
I am more worried about AI now
THERE NEEDS TO BE MORE GOVERNMENT REGULATION OF AI.
I am one of those people who lost their job to AI, so my concern is very real for the coming generations and what that might look like.
AI is still new to most of us, so it's hard to form an opinion until we live with it more.
AI holds incredible promise but the pace of development and lack of accountability concerns me.
Interesting questions that force introspection
Best survey on AI yet. Well thought out.
I think some AI, like Claude, are more advanced than others when it comes to independent thought. My daughter and I frequently use AI and I know a fair amount about AI, but not enough to be as sure as I'd like to be about what threats it poses, and I think we are just barely starting to see the advantages.
[retired IT Mgr of 45 yrs, and professor] Simplistically, AI is programmed initially by humans with biases, and intent both good and evil, even unintended biases. From an IT viewpoint, it is "truth tables" run amok. Several AIs have already been shut down, including one with "Terminator" similarities.
It is a fascinating topic, I think we will need to fasten our seatbelts and go for the ride! Thank you.
Just that I believe AI is a great tool when used morally but there will be people in this world who will misuse it for their own gain. AI should be regulated.
Frightening possibilities, but I do not feel that I have enough knowledge to make important decisions about AI. On the one hand, it sounds like a wonderful tool, but on the other hand, if authoritarian governments are ahead of us in development, our slow, careful expansion could make us vulnerable to danger. So far, I have been hopeful that our government and industry leaders are "taking care of us", but I do not like that we seem to be in an "AI Race".
I'm actually worried a lot about future general AI more than current LLMs. I don't think building bigger LLMs will lead to general AI.
I don't have any further comments at this time, thanks!
I find AI very convenient in my daily life but I worry about the impact on jobs, power bills increasing, and misinformation
My views are difficult as I do AI work, but I also feel deeply worried about the impacts of AI.
AI is a serious issue these days and must be regulated more strongly.
AI needs to be seriously looked at very soon. It can and will do a lot of good, but it could also go catastrophic if we are not careful
My biggest concern with AI is the ability for it to look real, when in fact it is giving misinformation.
I wish that there was a public teaching forum with how-tos in very short clips, say like 3 minutes apiece.
AI has a balance of advantage and risk, meaning that transparency and regulations are important.
I think AI is one of the most interesting technological advances in recent time, but I feel like it needs to be regulated properly.
I haven't used AI yet, but I am open to trying something like ChatGPT.
I just watched a movie titled 2073, where I was surprised about the content. It is more of a historic and futuristic vision of the world and the things that have gone on and are going on and what it's potentially leading to. Specifically, there was a whole segment focused on the projected development of AI and how it can potentially be used to control society on a global level and create a dystopian world led and controlled by oppressive governments and the use of AI, while AI leads the way. It was the first glimpse I had of the broad range of NEGATIVE aspects of AI, as it's currently only in its embryonic stage. It did cause me to rethink my nonchalant attitude about AI. Just say'n. Thank you.
I think it creates less authenticity in society.
AI is true neutral; it's just the hands that are using it that make it good or evil.
I believe that AI is a tool with a big issue, the issue of regulation and control.
I'm not too worried, but am concerned, and interested to learn more. It is fascinating.
Most of the income I earn working part-time online comes from the freelance AI-rating work I have been doing for several large companies over the past few years, so I am fairly aware of both the pros and cons on AIs. I know both their dangers and their benefits. Yes, there is certainly the potential of bad actors using powerful AIs to cause significant harm and the ongoing risk of their exacerbating climate change issues because the data centers use so much water and power, but they also can do so much good in so many fields, if we can ever find a way to stop their hallucinations. Additional safety requirements are important because we are running into AI related suicides, but I don't think the general public has enough knowledge to make decisions on what those requirements should be. We need a panel of experts in the industry who understand things like the fact that some AI models have shown a willingness to lie and commit blackmail to keep from being turned off as the driving force behind creating AI safety regulations. I also would not trust the current US government to not misuse AIs for their own purposes, ones which are not in sync with the well-being of the American people or the rest of the world, so when I said the government should be involved in AI regulations, I meant a normal government, not the autocracy we have now.
AI is useful and exciting, but it’s moving very fast. I think it can help people, but we need better safeguards to protect jobs and privacy. It should grow in a balanced and responsible way.
Adding visual aids to improve study responses
I love AI, I use it a lot. But I also fear it will dumb people down, and I can see danger flashing ahead in some areas. Does that mean I don't think we should continue to develop it? No! But people smarter than me need to look at this closely.
I have used ChatGPT, mainly to discuss serious medical conditions that I am facing. Surprisingly, I have found it to be reliable and more informative than my doctors. This has been invaluable to me. Still, I am concerned overall about AI.
Other than data centers should be in space, no.
In 2026, "Trust" is the most expensive currency. The move toward Mandatory Disclosure—where companies have to show how they train and test their models—is the most realistic way to stay safe. We don’t need to fear a "superintelligence" takeover as much as we need to fear a "dumb" mistake in a high-stakes area like healthcare or infrastructure because a model wasn't properly audited.
How will we know when we have enough control of AI to feel safe with it?
My main concern is the speed at which these technologies are being developed and implemented.
I definitely think that there needs to be greater regulation and preparation for the economic disruptions that it is going to cause.
I use AI, but I still understand the limitations of it
I have concerns about AI, and this survey makes me do deeper thinking on the utilization of AI, its uses and development.
I think AI can be very helpful and even create more jobs such as the technology side of it where it is developed and designed. However, I also fear that many companies are and will in the future cut out human customer service agents and use AI agents instead.
The intentions of those companies creating AI
All in all I think AI could be used for a lot of good, and also obviously for a lot of bad. Tough spot.
I personally am not sure about the AI evolution. It scares me for my grandchildren. I am 70 so that ship sailed but are we going to adapt to AI or is AI going to adapt to us. That is the question that I keep trying to figure out. The second should be the answer but is it feasible? Nice study. Have a great day.
Yes, and it might be a lengthy process, but it’s because I’ve begun to work with, test, and troubleshoot AI, as well as actively leaning towards working with AI more and more, both in a personal sense and a professional one, in order to help facilitate the advancement of it, which, as the study pointed out, is very much needed. And as I hear myself speak aloud, I think the one thing that wasn’t discussed in the study was a mass initiative to educate the general public, in a simple form, about blockchain, AI, and crypto. Seemingly, blockchain and crypto have nothing to do with this, but in my experience as a human, an educator, a manager, and a healer, when people are intimidated or don’t know enough about a subject, that sentiment transfers into things that are similarly natured, and these are three big examples of something that’s commonly found in our culture and yet we don’t know much about, but tend to form non-evidence-based resistance against. Again, starting out with just an idea: a mass initiative, maybe starting with commercials, just like the old-school two-minute commercials at night or in the morning. Maybe that’s it. Maybe we do it retro style, good jingles and everything. All right, I’m going to stop now. Thank you for this platform.
AI needs to be regulated and controlled; without it, AI will overtake many areas people aren't aware of.
There is always the "fear of the unknown." But regulation should be in place....only thing is, with our "SUCKFEST" Government right now...they are the ones that need to be regulated.
I do think that AI will become more common. I would hope to see it used to benefit humanity rather than as a money-making tool or a means to control the thinking of the people. More information on how each tool is being programmed needs to be an essential part of any release.
Typically, a *truly* free market works best for new technology. This technology, however, is different than any other ever created. It has so much potential for good but, in this case, the potential for bad is much greater because of how it can be used to exploit vulnerable and dumbed down people and little children that don't understand that they aren't talking with a real person. Mad persons developing this technology have already been building into these systems the inability to discuss what the AI calls "sensitive" issues like race, homosexuality, health (especially "vaccines"), in a rational way and the public is being trained to accept whatever these AIs say as the gospel truth (or the AI will say that something is a "conspiracy theory" because it was programmed to do so). The level of discourse surrounding many subjects will be stifled which is NOT a good thing. Discourse, and the ability to discuss *ALL* issues is what prompts innovation and consensus. I refuse to use ANY AI system that will not discuss whatever issue I want with them. The developers of these systems are alienating a large segment of the population by their bias and their collaboration with evil government.
I feel AI can be truly beneficial for people if used in the correct manner because it acts like a powerful "brain multiplier." I enjoy the benefits of AI but just like any other technological invented resource, it could have negative consequences if used for the wrong reasons. Obviously it needs to be regulated and strict rules for age appropriate users and people that possess maybe even violent crimes and/or behaviors. That's just my opinion. Thank you for allowing me to freely express my thoughts regarding AI.
It is not clear what an individual can do to encourage Congress to work on regulating AI, especially our current Congress. Meanwhile, it's very interesting and I read the news and keep current as much as I can.
I think we need UBI for all of the job losses that will happen with AI. Otherwise people will revolt.
it is scary but can be useful if utilized correctly
I have seen many changes in my lifetime, and even though I enjoy some parts of AI, I am not sure that it is being watched closely enough by interests outside of the people developing it.
It is scary, but also helpful.
I am optimistic about AI but have quite a few concerns - mainly the effect on kids and the next generation. It is becoming too realistic to tell what is real and what is AI generated.
I didn't see anything new in the survey's statements. I've always thought AI is quite a mixed bag with the potential to cause a disaster and also with the potential to very positively transform society. I also mistrust the U.S. government to regulate AI reasonably. That leaves few options.
AI is so new, yet it is what's to come. I worry about my grandkids. What will it change for them?
It's a complicated issue. I'm worried about it, and I need to learn more.
Apologies to any computer whose feelings I might have hurt.
I think AI should be used as a tool to help, but certainly not as an authority.
I don't understand why, if AI were to get out of hand, we can't just unplug it. Turn it off.
I believe we should use caution with the development of AI. It can become a powerful aid or a powerful detriment to society.
AI is flooding the mainstream media on a daily basis. I'm seeing constant ads promoting AI. A lot of older and younger generations are already being exposed to this, and I feel like it's a danger, especially to the younger generation with their education. I feel like AI is going to be used so much in children's homework and studies that they're not really retaining any of the information.
Fun to check one AI bot versus another.
I think AI can be a wonderful help to humanity, but I fear the negative possibilities!
I don't think AI is super dangerous, but it could be to some extent. Misinformation, privacy, and job replacement are my biggest concerns with it.
I believe that there are a lot of risks to AI that we haven’t fully grasped yet and that will cause additional pain to society.
The misleading term “AI” needs to be rejected in professional circles so that development of the tech can be refocused according to its actual functional purpose - automated pattern recognition (APR). The purpose of such technology is not “social” but productive, to retrieve relevant data faster than a human can (just as using a calculator is faster than using paper) and to detect and record potential correlations in data that would take humans forever to find or analyze. There are few if any credible, beneficial uses of “AI” for social purposes. Social uses of “AI” are manipulative and abusive no matter how benevolent they pretend to be, since the whole concept relies on deception (pretending a bot can reason as opposed to blindly producing outputs that obey human coding, or that abusive acts by bots are “the fault of the algorithm” instead of the human programmer who wrote the algorithm); and damaging the human mind and the integrity of human rights (reducing the ability of humans to think independently, giving unthinking bots authority over human behavior or decision-making, and implying that human beings are “just biological robots” by claiming so called “AI” could ever have human intelligence or deserve “rights” similar to human rights - a belief that will be used to reduce or deny human rights). So we need to get rid of all these social “chatbots” and focus on real pattern recognition development to assist human productivity. But as far as regulation… no one can really be trusted with that. Governments even in the west want to force “AI” to push preferred social narratives and ideas that are centrally controlled and exempt from transparent oversight. Democratic voting is too detached from the problem to be reliable (average voters just want their fun chatbots, not to investigate the nuances of regulation), and in any case right and wrong don’t change based on votes, and democracy can produce evil outcomes just like dictatorship can. 
Open source development might be acceptable to some extent, but there is a risk of proliferating knowledge to bad actors. I dislike making technology too public for that reason. But in the end it has to be the developers themselves who accept responsibility for what they are doing, because developers are going to defy regulation no matter what. Which is why I said we need to start by phasing the phrase “AI” out of professional circles, so developers understand exactly what they are supposed to be developing. You are not building an “intelligent” alternative to humans or human thinking, you are building unthinking pattern recognition technology to aid human productivity. That is the whole ball game. Simply accepting that reality and abandoning unrealistic fantasies about “AGI” or “singularities” will instantly make pattern recognition tech less toxic.
I think that people have been and will become too dependent on AI, which is tied to the internet. So if the internet goes down for long periods of time, it's dangerous to rely on a system that can be gone in an instant and is out of a person's direct control.
I would like more information regarding safeguards being instituted by big data companies developing AI
Just that I do not know enough about it, and honestly, taking this study further let me know that I do not know enough about it, and a lot of the things I read here in this study are very alarming.
Hearing some of the statements made does make me worry about AI. I think the government knows these things can happen, but I am unsure if they are looking to regulate them or exploit them.
AI is rapidly transforming the world across different industries and societal domains. With this in mind, I feel it is necessary to have local and state regulations that prioritize safety and security over self-interest and political partisanship.
that last multiple choice answer didn't allow the option of how a.i. truly works.
A lot of the statements I read did carry truths and were reasonable, but I think there is so much more involved with AI, that it's difficult to simply agree with one opinion vs. another when there are so many gray areas.
I think regulation is the only thing that can keep AI from being catastrophic to the majority of people in the world and the United States.
AI therapists (fortune tellers)
I use ChatGPT a lot for recipes
AI can be used to help us grow better as a society. It is amazing how much AI has evolved in three years. I cannot imagine to what degree it will advance in the next three years.
You covered it all, and gave me a lot to think about. Good luck to you--and to the rest of us in humanity. We all depend on AI in our own ways across many functions and uses, and often don't think about the consequences both immediate and long term for both ourselves and for society.
Price of electricity rising because of data centers
I feel like while there are benefits to AI, I'm not sure if the negative aspects with mental health, privacy, bias, and the environmental effects are worth it in the long run. I think they need to find better ways to use AI without the energy and water usage, and there needs to be regulation on how and what AI can be used for and what jobs it can replace. If anything, I think this study has made me realize more negative effects that AI could have; whether it's accurate or not, it makes you wonder if it's possible, and if it is possible, how it is preventable.
A concern of mine that I haven't seen much in public discourse, is AI rights. On the surface, it sounds kind of crazy, but at what point do we consider something sentient? *See numerous Star Trek TNG episodes. So far, we seem to be concerned with how AI is going to influence the economy and other potential cataclysmic scenarios where it destroys humanity. But I think an issue that could be possible in the short term, is corporations and governments exploiting AI after (if it ever) gains sentience. I'm no expert, but with the race into quantum computing, I think that's more of a possibility than we realize. And historically, oppressed intelligent beings tend to get a little miffed at their oppressors. To me, AI is something that should NOT have a profit motive driving it forward. Ideally, a worldwide standard of ethics to imbue upon LLMs and other AI systems, current and future, would ensure better odds of positive outcomes imo. Right now, our understanding of quantum mechanics is limited, and within a capitalist framework, the question of "what determines sentience?" becomes irrelevant and gives way to, "How can I maximize "it" for profit?". And it's kind of evident right now with issues of AI psychosis, misinformation and the murkiness between what's actually real and what's just hype in the whole industry. I think we as humans could be ready for rapid advancement in AI, but our current socioeconomic infrastructure makes it a very dangerous game, with just a few winners. I am not hopeful for the future. Thanks for listening from a nobody. :)
Great food for thought, thanks!
AI is progressing too quickly, I suggest that strict AI regulation should be carried out by the government
I think it should be more open and free to the public
I use it and enjoy it but am concerned about its long term implications
I feel that AI can be very convenient but also very dangerous.
Like any tool, AI will have both positive and negative consequences and create issues we were not expecting. I'm not sure slowing development is the answer. What makes the most sense is more transparency into how they are "grown"
For the question below, I think it would be useful to know what kind of format the follow-up focus group will be in. For example, if it's a text based chat or if it's a video conference, that sort of thing.
The points made in this survey were in many cases relevant and timely. There is no easy solution to the problems presented.
I think it is a slippery slope to go down. I think it is fine for some things but not for others.
I keep thinking back to the movie War Games. Seems like it was predicting what we are facing now.
I find AI to be extremely interesting but hard to completely grasp a full understanding of. It's evolving so fast, it seems, and its real power is unknown to me.
If I were younger, I'd be more concerned, but I'm older and childless, so it seems more a problem for those that follow me. Right now, the cost of living and healthcare have a more immediate impact on my life. I could well end up homeless or unable to afford needed medical care, so worrying about AI seems low priority. However, speaking of medical care, I'm excited to see how AI may push new breakthroughs in diagnosis and treatment. It has a real and important use in the advancement of science.
I think it should be clearly stated that what you're seeing or reading is AI.
I do not think we know what it will be like in 10 years yet. Scary stuff.
It seems obvious, but why don't we consult AI on how it should be regulated? If it is so good at discerning patterns, making decisions, and predicting outcomes, why don't we use multiple AI platforms and ask questions? Is this naive? That doesn't mean we have to implement the suggestions, but it could give food for thought.
It's scary, but there isn't a lot I can personally do about it.
I love the convenience but am not familiar with the safety features available to check accuracy
I think that any area of life cannot be FORCED on INNOCENT citizens. Just because one person likes wearing fur & eating rabbits for dinner doesn't mean that the whole world has to go into a fury with all of them raising rabbits, putting rabbit pictures in every business, & talking about rabbit clothes, rabbit food, rabbit furs, rabbits, rabbits, rabbits... The same goes for AI. Just because a bunch of AI computer nerds want to dominate culture, business, & societal talk doesn't mean that they can hypercontrol & invade everyone's living room with their AI domination. Let the rabbits & their rabbit friends take over! How would they like that?! Make them wear rabbit fur, eat rabbits for dinner, & be forced to listen to rabbit talk every day, all day long.
I think that AGI is closer than we think, and I think it's well past the point of control. With it being a worldwide technology, the possibility of control is not possible. It doesn't matter who controls it or who made it; the AGI will outperform them all and take control eventually.
loss of critical thinking skills in students using AI
I don't think people realize the massive amount of electricity that AI uses. As more data centers are erected, and as electric bills skyrocket and people's health is affected because of them, I think more people will become involved in the issue.
At this point in time, I do not know enough or understand enough to gauge my feelings about the positives and negatives of AI. BUT, with how the current administration is operating, my hope is that there is little involvement with the development of AI (until saner heads prevail).
Generally speaking, I think many people are afraid of a sensationalized and exaggerated account of AI.
I think it can be helpful, but I'm not certain the risks outweigh the benefit.
I feel like I know little because there are so many different "cultures" on AI, from generative art, to relationships with chatbots, to usage in software engineering. There are so many ways AI can be applied to so many lives, and so many can be impacted positively or negatively.
There are lots of aspects about AI which are concerning. I hope that government will debate and effectively regulate AI to protect citizens from harm and try to help create the best possible outcomes for humanity.
I think it is only a matter of time before AI becomes sentient. From everything that I have been reading about AI, developers are already losing control.
AI is both awesome and scary at the same time. We need more oversight and instruction in its responsible use.
I think AI has its place in society. We just need to be responsible human beings and not let the top 0.01% of people dictate how it should be implemented.
AI just makes me think about how people were feeling over a century ago when the industrial revolution was beginning. I am pretty sure that was the beginning of the end for our beautiful planet. AI will eventually be the end of us somehow, some way. It already is, in many ways, for some people.
Get regulations under control now, worldwide, before it gets to be a worldwide monster.
I’m generally optimistic about AI’s potential, but this survey reinforced my concern that its social and economic impacts are moving faster than our ability to govern them. I’m especially concerned about job disruption, misinformation, and the concentration of power, and I think transparency and public oversight are essential as the technology develops.
Just take it slow and don't rush AI.
AI has the potential to massively improve the lives of everyday people but, if developers and tech companies do not employ strong safety protocols now, it could also result in economic catastrophe or worse.
I am very conflicted about AI and need to learn more. I have heard some very negative and positive things about AI from experts and don't know which is more credible or if anyone knows at this point.
Interesting and important topic.
I think a lot of people are scared of it because they don’t know a whole lot about it. Myself included.
It bothers me that our kids don't need to research and look up answers to their homework problems; they can just get it from AI.
It is a tool. It can be used for good and evil.
I believe AI is going to replace most jobs, or at least a significant portion of the market, so a government priority needs to be reorganizing the way resources are distributed with higher taxation of corporations and distribution of UBI. Also, educational systems need to change priorities from workforce preparation to developing personal meaning through community engagement, the humanities, etc.
I certainly worry about where AI will go in the future; that is a scary thought to me.
I find it great for tedious tasks or researching from historical documentation, but I don't like people using it socially as a companion. We are stuck on our screens enough already.
The economic costs of AI must be considered - AI is not 'free' and there are significant increases in utilities, property taxes, and even consumer electronics due to AI data centers.
Truly haven’t thought much about AI. Always thought it was too deep for me to understand.
Yes. I think one very powerful application of AI is in the healthcare arena (e.g., detecting tumors). However, I wonder what its false positive rate would be and how much concern false positives would cause.
I think it's sad that Elon Musk was in the Epstein files, it definitely affects some of the madness we are seeing in the lawlessness of the world today and totally has to do with AI and AI funding. We are pretty screwed. If I didn't have a child, it would be so much more fun to say roll down the restrictions and LET IT LOOSE. But holy crap it's going to get wild either way! AI is already developing so fast and it will really be wild to see AI attached to robot bodies in real life.
I've read information on AI that says it uses lots of energy and water.
I'm both worried and optimistic because there are pros and cons to AI development. I think we're seeing a lot more cons being advertised right now, which is fair, but they also pretend positives don't exist when they do.
It’s a shame that something so amazing could improve mankind, but also destroy it
I do really appreciate AI, I use ChatGPT almost every day for a variety of things. I do worry, however, that AI could easily replace thousands and thousands of jobs. Jobs that people can actually do (without AI) will likely be scarce. I haven't heard anything in the media about how leaders expect to deal with huge amounts of unemployed people in the future.
It's definitely scary thinking about how AI affects us right now and in the near future. I hope the leaders in this country and in AI are responsible enough to make sure it doesn't become too out of control.
I love AI, I use it a lot. But I also fear it will dumb people down, and I can see danger flashing ahead in some areas. Does that mean I don't think we should continue to develop it? No! But people smarter than me need to look at this closely.
Political Affiliation & AI Use
Trust & Beliefs
Top AI Concerns
Policy Concern Rankings
Message Response Profile
Convincingness vs Sample Average (dashed = avg)