Public adoption, sentiment, feelings, and regulatory demands on AI, Big Tech, and social media in Western societies

YouGov
January 21, 2026, 1:40 PM GMT+0

A YouGov research paper by Patrick English, Martha Posthofen, Frieder Schmid, Molly Fluet, Matthew Smith, Archie Lievesley

Outline and research overview

The topics of artificial intelligence (AI), technological advancement, and social media need little introduction, nor do their applications to, and influence within, political and social environments worldwide. Additionally, the relationship between so-called ‘Big Tech’ companies and democratic actors, systems, and institutions has come in for a great deal of criticism, scrutiny, and concern in recent years. Most recently, governments have been grappling with social media bans (such as in Australia) and with how, if at all, to regulate AI content generation on public platforms (such as the Grok explicit image generation controversy).

2025 was the year in which mass public engagement with AI through large language models (LLMs) truly exploded. Though ChatGPT was launched in November 2022 to great success, posting 1 million active users within days of launch, it took until the following November for that figure to reach 100 million. By March 2025, over 500 million people were using OpenAI’s ChatGPT tools per week. That rose again to 700 million by September of last year, and further to 810 million by November, representing near-200% growth in the number of active weekly users in less than one year, and a doubling of users compared to February of the same year.

Other providers were posting strong growth numbers across 2025, with Anthropic for example reporting a 40% increase in active Claude users in Q2 2025 compared to Q2 2024.

These figures reveal dramatic growth in global AI adoption in 2025. Nevertheless, significant segments of the global population do not intentionally use or interact with AI-powered services.

In this paper, we have used YouGov’s world-leading panel and research methodologies to assess public opinion toward AI, so-called ‘Big Tech’, and social media and their influences and effects on life, democracy, and society worldwide. Our data sheds light on who the AI adopters and enthusiasts are, including looking by age, gender, education level, and more, and who is hesitant and/or negative about this emerging technology.

By designing and fielding a high-quality survey to nationally and politically representative samples of almost 25,000 people across 12 countries, we can provide timely analysis and commentary on where publics stand regarding AI adoption and hesitation, tech and social media regulation, and the use of social media for political advertising.

The bulk of the data for this paper comes from our European Political Monthly polling service, run in September and December 2025, and is complemented by additional insights from previous international research data collected and published in August of the same year.1 Full tables of results for all data cited in this report can be found on the YouGov web archive.

The research here focuses mainly on AI, but touches on other elements of the technology space, including social media, Big Tech companies, and regulation of technologies and platforms and the organisations which host them. This study provides the most comprehensive and detailed international examination of public opinion on these critical topics to date.

Survey results – AI adoption and hesitation

AI adoption is not equal across countries

The European Political Monthly survey asked respondents to tell us about their AI usage in different contexts – and the frequency with which that usage happens. At this stage it is important to highlight the limitations of self-reporting behaviour; respondents may not necessarily recall their behaviour correctly, and many people might not know that they are interacting with or using AI, even when they are. It could also be that many respondents answer the questions in this survey in terms of intentional AI usage, while other question wordings might pick up accidental or incidental usage (for example, being given AI-generated replies on Google search results).

With that in mind, it is important to focus on the relative differences (namely, between countries, or between social groups within countries) rather than on the point estimates themselves.

That aside, our survey suggests that using AI at least once a week for personal and/or leisure activities is most common in the Netherlands (43%) and Spain (41%). Following closely are Canada (37%), the United States (37%), Romania (35%) and also Italy (35%).

Elsewhere, we see the highest percentage of the population reporting that they use AI for work at least once a week in the Netherlands (33%) and Spain (29%), with Canada (28%), the United States (27%), and Romania (27%) not far behind. Approaching half of people surveyed in Italy (46%) and Germany (45%) said they never use AI to assist in their work.

The self-reported rate of AI usage for work among adults across our study countries appears lower than usage for personal or leisure activities. However, this finding is clouded by the fact that the figures are representative of countries as a whole and not of working people specifically. AI usage among the workforce would, naturally, be much higher if we restricted the sample to workers only (as indeed we can see in the later section looking in depth at the case of Great Britain).

Younger generations and men are much more likely to use AI than older people and women

AI usage is higher among younger generations than older ones. In Spain for instance, our data suggests that while 70% of those aged 18-24 and 56% of those aged 25 to 34 say they use AI at least once a week for personal or leisure activities, this drops to just 29% of those aged 55 to 64 and 23% of those aged 65 and above. In the Netherlands, almost one-third (31%) of those aged 65 and above say they never use AI in their personal lives, but this drops down to 14% of those aged 18 to 34, and 16% of those aged 35 to 44.

As well, the data suggest there is a consistent gender gap across countries. Men are generally more likely than women to use AI at least once a week in their personal lives: for example, by 32% to 21% in France, 46% to 36% in Spain, 35% to 27% in Poland, 41% to 33% in both the United States and Canada, and 33% to 26% in Australia.

We also examine whether education predicts AI adoption. However, because age and education are strongly correlated (with university education more prevalent among working-age adults than among the youngest and oldest generations), we must analyse the data by age and education together to isolate their independent effects. Given the large sample size of over 2,000 respondents, it is possible to do this in Great Britain. The graphic below shows self-reported usage of AI among degree (or equivalent) holders and non-degree holders, broken down into four age groups.

There is no real difference in uptake between degree holders and non-degree holders at the aggregate level in Great Britain, with 24% of those holding degrees reporting using AI at least once a week for either work or personal use, and 25% of non-degree holders saying the same.

However, this masks a number of differences which we can see clearly when splitting by age. Degree holders appear to be more likely to use AI in the workplace (by a margin of 34% to 13%), while non-degree holders appear to be more likely to use AI in their personal lives (by 28% to 23%). The degree vs non-degree gaps are particularly wide for workplace use in the 30-49 category. Apart from the oldest age group, we see the same patterns consistently across age groups.
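
By way of illustration, this kind of age-by-education cross-tabulation might be computed along the following lines (a minimal sketch, not the actual analysis code; the data frame, column names, and the use of survey weights are all assumptions):

    # Sketch: weighted share of weekly AI users within each age-by-education cell.
    # Assumes a respondent-level pandas DataFrame with hypothetical columns:
    # age_group (str), has_degree (bool), uses_ai_weekly (0/1), weight (float).
    import pandas as pd

    def weekly_usage_by_age_and_degree(df: pd.DataFrame) -> pd.DataFrame:
        shares = df.groupby(["age_group", "has_degree"]).apply(
            lambda g: (g["uses_ai_weekly"] * g["weight"]).sum() / g["weight"].sum()
        )
        # Rows: age groups; columns: degree holders vs non-degree holders
        return (shares.rename("share_weekly_users")
                      .reset_index()
                      .pivot(index="age_group", columns="has_degree",
                             values="share_weekly_users"))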

The workplace AI uptake findings could be driven by AI as a work tool being more applicable in ‘white collar’ employment environments (for things like desk research, problem solving, design, and so on in office environments) versus ‘blue collar’ environments (which tend to be less desk and office based in nature, and more manual or practical focused).  It may also be the case that for many ‘white collar’ workers, AI usage is required or directed by their employers. Future research should explore these differences – and whether they hold in other country contexts.

Greater aggregate knowledge around AI does not seem to drive greater uptake

AI adoption levels do not appear to be driven by knowledge or familiarity, however. In other words, the data does not suggest that as people learn about AI, they become adopters. For instance, two-thirds (66%) of the French public report knowing a great deal or a fair amount about AI, yet only around a quarter use it at least once a week in their personal lives.

Similarly, uptake is low in Great Britain (20% using AI at least once a week at work, 23% in their personal life), but 65% would describe themselves as knowing a great deal or a fair amount about it. In other words, non-adopters are not uninformed; rather, they are making active choices to reject this technology despite their knowledge of it.

Supporters of parties on the political right are not more likely to use AI

There has been recent discussion about the politicisation of technology and the relationship between Big Tech and the political right in general. However, the data here does not suggest a stronger uptake of AI among supporters of parties on the political right. In fact, there is little evidence of a clear and consistent relationship between partisanship and AI adoption in general, and where we do see differences, they tend to point in the opposite direction to the above.

Supporters of the centre-left German Greens have significantly lower levels of AI hesitation than supporters of other parties, for instance, with only 30% of them saying they never use AI for work and 25% saying they never use it in their personal life. As well, we see much greater hesitation among supporters of Geert Wilders’ PVV (37% never using it for work, 30% never in their personal life) than among supporters of other major Dutch parties. Elsewhere, Vox voters are much more likely to never use AI than voters of other Spanish parties, as are supporters of Fratelli d’Italia compared with voters of all other parties in Italy, and second-round 2022 Le Pen voters compared with Macron voters in France.

It is also important to note that these examples could be driven in part by the age profiles of each party’s supporters. German Green supporters tend to be younger on average than supporters of other German parties, and supporters of the PVV tend to be older. That said, the changing profile of support for parties of the right – which traditionally attracted far more older voters but have recently been doing much better among younger voters – muddies this relationship elsewhere (for example, in the cases of France and Vox above).

Regression analysis suggests connections between AI usage and each of gender, age, education, class, and employment

As we saw above when looking at the relationship between education level and AI uptake crossed by age, it is important to consider how different factors intertwine or condition one another when exploring social data.

Owing to its larger sample size, we can use the data on AI uptake from Great Britain in a formal regression analysis to try and build a profile of AI adoption using a range of information about the respondents themselves. The findings are of course only generalisable directly to the British case, but speak to common themes noticed throughout the data across countries.

To conduct the analysis, a new variable was created capturing whether people reported using AI at least once a week for either work or personal tasks. According to the data, 33% of Britons fall into this category of weekly AI user. A Bayesian logit regression with weakly informative priors2 was fitted with this weekly AI user binary as the dependent variable. One of the key advantages of looking at regression results instead of descriptive statistics is that we are able to account for the effect of all variables on the outcome simultaneously and look for those which the model reports as ‘substantively important’.3
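
By way of illustration, such a model might be specified as follows (a minimal sketch, not the actual analysis code; the predictor set and column names are hypothetical, and the priors follow the specification in footnote 2):

    # Sketch of a weakly informative Bayesian logit regression for weekly AI usage.
    # Assumes a pandas DataFrame `df` with a 0/1 column `weekly_ai_user` and a set of
    # illustrative (hypothetical) predictors; priors are zero-centred with sd = 1.
    import pymc as pm

    def fit_weekly_ai_model(df):
        predictors = ["male", "age", "degree", "employed", "social_class"]
        X = df[predictors].to_numpy(dtype=float)
        y = df["weekly_ai_user"].to_numpy(dtype=int)

        with pm.Model():
            intercept = pm.Normal("intercept", mu=0, sigma=1)                 # weakly informative prior
            betas = pm.Normal("betas", mu=0, sigma=1, shape=len(predictors))  # weakly informative priors
            logit_p = intercept + pm.math.dot(X, betas)                       # linear predictor on the log-odds scale
            pm.Bernoulli("weekly_user", logit_p=logit_p, observed=y)          # logistic likelihood
            idata = pm.sample(draws=1000, tune=1000)                          # posterior draws via NUTS

        return idata  # e.g. arviz.summary(idata, hdi_prob=0.9) for 90% credible intervals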

The advantage of using regression analyses to investigate the data further, rather than relying on interpreting individual aggregate-level statistics, is that we can model the effect of all of our respondents’ characteristics simultaneously and draw out relationships which might not appear to be there at first glance – or, equally, invalidate suggestions from the aggregate data which prove spurious under statistical scrutiny.

The results, summarised in the table below, align (as we would expect) with what we can clearly see in the descriptive data, suggesting that men are significantly more likely to use AI in general than women. Again confirming appearances in the descriptive statistics, if we convert the log odds to a percentage-point change in likelihood, the analysis suggests that for each additional year of age there is around a half-percentage-point decline in the probability that a person uses AI at least once a week.

Elsewhere, there are other findings that we may not have necessarily drawn from analysing the top-level data alone. For instance, while according to the descriptives there is not much difference in AI uptake between degree holders and non-degree holders, when all other factors are controlled for in the regression scenario, we find that if someone holds a degree, the probability of them using AI at least once a week declines by around 12 percentage points (relative to someone who does not hold a degree). That is to say, the ‘null finding’ above in the descriptive data masks an effect which emerges once we control for other factors, with employment being the most likely mediator here.

Holding regular employment (at least part-time, i.e. more than 8 hours a week) is associated with a substantially higher probability of using AI at least once a week (an 11-percentage-point increase compared with those not in employment).

Similarly, the social class variable scales from 1 (the highest class) to 6 (the most working class), and the model coefficient indicates that each step along this scale toward the working-class end is associated with a decline in the probability of weekly AI usage of around four percentage points.
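
To illustrate the conversion from log odds to percentage-point changes referred to above, an average marginal effect can be approximated by comparing predicted probabilities before and after shifting a predictor by one unit (a rough sketch, continuing the hypothetical setup from the model above and using posterior-mean coefficients):

    # Rough sketch: convert a log-odds coefficient into an approximate percentage-point
    # change in the probability of weekly AI use, by comparing average predicted
    # probabilities before and after shifting one predictor by `delta` units.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def average_marginal_effect(X, intercept, betas, index, delta=1.0):
        """Average change in predicted probability when predictor `index` increases
        by `delta` (e.g. one extra year of age), holding other predictors fixed."""
        base = sigmoid(intercept + X @ betas)
        shifted_X = X.copy()
        shifted_X[:, index] += delta
        shifted = sigmoid(intercept + shifted_X @ betas)
        return (shifted - base).mean()  # in probability units; multiply by 100 for percentage points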

The results however do not indicate that current supporters (based on expressed vote intention) of parties on the British political right (namely – the Conservatives or Reform UK) are more likely to use AI once a week than those who currently support parties of other ideological persuasions (or none).4 This is consistent with the findings across countries above.

Survey results – public sentiment toward AI and its effect on democracy

Western publics are lukewarm at best in their feelings about AI

As well as AI adoption and hesitation, our data provides information on public sentiment towards AI and its (potential) impact on societies, economies, and politics and democracy. Put simply, we find that publics are, in general, lukewarm at best and concerned at worst about AI and its potential effects across these areas. We asked people, ‘in general, how positive or negative do you feel about artificial intelligence (AI)?’, and the results are visualised below.

Spain and Romania are the most positive countries toward AI in our study, with favourable views outpacing unfavourable ones by margins of 37% to 20% and 27% to 18%, respectively. Publics in Poland, Italy, and the Netherlands also lean positive, though the most common response in these countries is "neither positive nor negative." Canada and Germany show an even split between positive and negative views, while France and Australia tilt slightly more negative than positive. The United States and Great Britain exhibit the most scepticism, with negative views substantially outnumbering positive ones.

Elsewhere, people are more likely to think that AI has had a negative impact on society in general, but are more likely to think it will have a positive impact on economic growth moving forward, according to our data. The ‘net scores’ (positive impact – negative impact) for social impact range from –23 in Great Britain up to +20 in Spain, with each of (in order) the United States (-23), France (-13), Australia (-10), and Germany (-5) in negative territory. Canada (-2), Poland (+1), the Netherlands (+2), and Italy (+3) were essentially neutral, while Romania (+17) was the only other positive country (alongside Spain).
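
For clarity, the net score used throughout this section is simply the share saying AI’s impact has been positive minus the share saying it has been negative; the figures below are made up purely to illustrate the arithmetic:

    # Net score = % saying the impact is positive minus % saying it is negative.
    # Illustrative numbers only; they are not the underlying survey splits.
    def net_score(pct_positive: float, pct_negative: float) -> float:
        return pct_positive - pct_negative

    print(net_score(40, 20))   # -> 20, i.e. a net score of +20
    print(net_score(15, 38))   # -> -23, i.e. a net score of -23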

On the economy question, France was most negative (-14) and again Spain the most positive (+18). Also on the negative side were Great Britain (-10) and the United States (-6), with Australia (-2) split. Each of Italy (+5), Germany (+5), Canada (+7), the Netherlands (+10), Poland (+10) and Romania (+12) join Spain in the positive category.

The results show significant gender differences in assessments of AI’s effect on societies and economies. For example, there is a 20-point gap in the proportion of men (42%) who think that AI will have a positive effect on the economy in the Netherlands compared to the proportion of women (22%) who think the same. That same gap is 15 points in Poland, 14 points in Australia and Great Britain, and 13 points in Italy. Differences are slightly less pronounced regarding the impact of AI on society, ranging from 14 points in the Netherlands to seven points in Poland and Spain.

Publics are not positive about AI, but think it poses only as much of a threat to humanity as losing the bees

While we can conclude that publics across the globe are not particularly thrilled about AI, how much of a threat do they think it really poses? With discussion abounding online as to the potential for AI to ‘run riot’ and potentially spell the end of civilisation (and perhaps humanity) as we know it, do the general public across countries share such concerns?

Here, we can appeal to YouGov data from the summer of 2025 for some answers, in which we asked respondents from nine countries to pick up to three things from a predetermined list which they believed were most likely to cause human extinction. Across all countries, ‘nuclear war’ topped the results with an average of 61%, followed by ‘global warming/climate change’ at 40%, and then ‘a pandemic’ at 30%.

Also included on the list was ‘robots / artificial intelligence’, which scored 13% – one point behind ‘the bees dying out’ (14%). So, as lukewarm as global publics may feel about AI, they seem unlikely to consider it an existential threat.

In another question in the same survey, respondents were asked whether they believed that, in general, the benefits of AI outweighed the risks. In only two countries did we find evidence that the public believes the risks outweigh the benefits – the USA (by a margin of 36% to 25%) and Australia (by 33% to 25%). The public in Spain were the most positive, saying that the benefits were weightier by a margin of 49% to 18%. Denmark appears almost equally positive (42% to 14%), while Germany (35% to 24%) and Italy (36% to 22%) are more likely to think the benefits are worth the risks than not. Publics in Britain (28% to 31%), France (31% to 27%), and Canada (28% to 31%) are split.

Overall, then, we can describe the public mood toward AI in our study countries as mixed, at best. There is a fair degree of optimism and positivity about AI in some countries – most notably, Spain – but a large slice of pessimism and negativity in a larger number of others – including Great Britain, France, and the USA.

There is a high degree of negativity and dissatisfaction about the effect of AI on democracy and democratic systems

Though publics are more positive about the economic impacts of AI, and do not appear to really see it as a threat, they are significantly more negative about its social impacts. One of the key cornerstones of social activity in many countries is a fully functioning democratic system. Our YouGov international research data can shed light on how publics across the globe feel about the impact of AI on government, democracy, and the media.

Citizens in Western societies have mixed feelings about the impact of AI across different areas, according to our data. They are positive about the practical benefits of AI in some use cases: for example, almost half or more of citizens in each study country think that the impact of AI on healthcare and medicine has been positive (including 46% in the United States and 61% in Germany). These predominantly positive sentiments are notable given professional discussions regarding the use of AI in medicine and healthcare and the potentially grave impact misuse of AI might have in this area.5

Public views on the impact of AI in other areas, such as day-to-day workplace activities, daily life, or transportation, are also largely positive. Even for fighting crime – where the application of AI has been controversial, featuring prominently in ethical discourse as well as in literature and film – up to 55% in Germany see a positive impact.

On the topics of democracy and democratic functions, respondents were asked to tell us whether they thought that, overall, “AI has had a positive or negative impact” on each of “the running of local and national government”, “democracy in [your country]”, and “news and the media”. Publics across our study countries are much more likely to be negative than positive on each, with only Denmark and Spain providing split or positive opinions overall.

With respect to “news and the media”, a considerable share of the publics in Great Britain (57%), Australia and Germany (both 51%), the United States (49%), France (47%), Canada (44%), Denmark (43%), and Italy (42%) view AI's impact on the media as negative rather than positive.

Relatively few citizens believe that AI has been good for democracy (in general) in their country. Between just 6% in Britain and only 16% in Canada think that AI has had a positive impact on democracy. In contrast, from 21% in Denmark and Spain up to 41% in the United States think that AI has damaged democracy. Again, uncertainty is a factor: between 15% in Germany and 32% in Denmark are not sure whether the impact is positive or negative.

Finally, while Danes were slightly positive (+7) and the Spanish public evenly split (±0) about the impact of AI on the running of national and local government in their countries, all other publics landed on the negative side of this fence. Once again, we see the highest levels of negativity in the US (-27) and Great Britain (-20), followed by France (-20) and Australia (-15).

Survey results: public demands on tech regulation

Regulation of new and emerging technologies such as AI, and of social media platforms, is a particularly hot topic, with Australia having recently introduced a ban on under-16s accessing social media apps, and the United Kingdom now beginning a consultation on a similar ban, after X in particular was heavily criticised for allowing users to create non-consensual sexualised images of other users (and, in some particularly egregious circumstances, children).

Additionally, the European Union is in the middle of crafting an extensive piece of regulatory legislation on AI, having been a leading figure in the development of advanced data privacy legislation. In this context, our data suggests that publics indeed want more regulation of tech, AI, and social media, not less.

Demand for AI regulation far outpaces the desire for developmental freedom

Firstly, we asked respondents in our sample countries whether they thought it is more important that “governments put effective regulations in place, even if this slows down the development of AI”, or that “new technology such as AI can be developed freely, even if this means that the industry is less regulated”. A majority of the public in each country – even the most AI-positive – were in favour of governments erring on the side of regulation, rather than development.

Support for regulation over development freedom reached as high as 78% in Great Britain, 76% in Australia, and 71% in the United States. It even reached 73% in Spain – the country where we see some of the highest levels of positivity toward AI generally speaking. The lowest levels of support for government regulation were found in Romania (55%) and Germany (59%).

In no country did support for allowing more freedom in AI development reach even a quarter of the population. Just 8% of the public were in favour of developmental freedom in Great Britain, 10% in Australia, 14% in the United States, and 16% in Spain. Romania (20%) and Germany (19%) show the highest levels of such support.

There is broad brush support for greater regulation on Big Tech companies – even if this annoys Donald Trump

Demand for regulation of so-called “Big Tech” companies is also high among the populations of our surveyed countries. We asked publics whether they thought the EU/UK/Canada/Australia should enforce regulations on Big Tech companies even at the risk of irking Donald Trump, or whether they should cool their approach in order to improve relations with the US president. By clear and decisive margins, publics in each country said that regulation should be enforced even if it risked getting on the wrong side of Trump.

There was clear majority support for regulating Big Tech companies in Great Britain (75% to 9% who preferred a lighter touch), Australia (74% to 11%), the Netherlands (68% to 10%), Canada (66% to 15%), Spain (65% to 18%), and Germany (64% to 11%). Around half of the public support enforcement in each of France (53% to 17%) and Italy (53% to 15%). Romanians are less sure than the rest, but still prefer enforcement over improved relations with Trump by a margin of 43% to 29%. It is worth noting that a significant part of these strong signals is likely to do with the unpopularity of Trump himself, as well as the clear public demand for more regulation of technology.

Publics support tighter restrictions on social media, including banning political advertising, but supporters of right-wing political parties are less keen

We also asked the public in each country for their views on social media regulation. Firstly, we asked a general question regarding whether respondents thought that social media regulation in the EU/UK/U.S./Canada/Australia was too tight, too relaxed, or about right. In no country does the viewpoint that social media regulation is too tight reach even one in five of the population. Just 18% hold this view in Romania, as do 11% in Poland and 9% in Australia.

On the other hand, 27% of Romanians think regulations should be tighter, while the same proportion think things are about right. Almost half (47%) of Australians think there should be tighter regulation, while 30% think it should remain as it is; the equivalent figures in Poland are 27% and 32% respectively.

Demand for more regulation reaches as high as 63% in France (6% too tight, 17% about right), 61% in Great Britain (5% too tight, 17% about right), and 48% in the United States (5% too tight, 24% about right). In every single country, people are far more likely to believe that rules governing social media platforms should be tightened rather than loosened.

Lastly, we asked whether people would support or oppose a ban on political advertising on social media. The results here showed particularly dramatic splits along political axes, demonstrating the strong differences in online presences between the political left and right.

Overall, publics in each of our eleven study countries were generally far more likely to support, rather than oppose, a hypothetical ban on political adverts appearing on social media platforms. Net support for a ban reaches as high as +43 in Spain, down to –2 in the Netherlands. Net support figures were next highest in Australia (+36), Great Britain (+33), France (+27), and Poland (+26). Alongside the Netherlands, both Italy and Romania appear very divided (both with net scores of +2).

In terms of the political angles to these aggregate opinions, we see big differences in many country contexts. This includes Germany, where majorities or at least pluralities of 2025 SPD (net +7), Union (+26), and Green (+45) voters would support a ban, while voters who backed the far-right AfD oppose the idea (–11). Similarly, 64% of 2022 Macron voters (+38) would support a ban, to a much greater extent than 2022 Le Pen voters (+11). In Great Britain, net support for the ban ranges from +42 for voters of the left-wing Green Party down to +19 for supporters of the right-wing Reform UK. Elsewhere, net support for the ban is higher among 2024 Harris voters (+27) than Trump voters (+15), and among backers of the Green-Left in the Netherlands (+7) than the far-right PVV (-15).

The full picture

YouGov data provides a rich picture of what citizens in Western societies think about tech, AI and social media, whether they embrace new technologies or hesitate, and how they think society should cope with the change happening as we speak. All of this takes place in a volatile, almost febrile context of regulators rushing to keep pace with developers, and AI becoming ever more present in the generation of online (social media) content. We want to highlight three key takeaways from our research:

  1. How differing levels of uptake might create a challenging ‘AI literacy gap’ for governments, education establishments, and policymakers
  2. The contrast in public openness to AI in some areas, but their rather damning view on its impact on democracy and democratic functions
  3. The clear and persistent calls for greater regulation of AI, social media, and Big Tech among Western publics

Mind the gap: the AI literacy gap could become a new social divide

Enthusiasm for AI in the round is lukewarm at best among Western democratic publics. While the development of AI-driven technologies thrives and more and more applications emerge, a considerable share of citizens do not use AI regularly at work or in their personal lives (or at least they are not aware of using it).

This is far from trivial: if, as YouGov’s research shows, more than half of the publics in some countries are not (intentionally, or memorably) using AI regularly, and thus are not literate in operating such technologies, the consequences for democratic systems, consumer markets, and a labour market moving in a more AI-friendly (or AI-endorsing) direction could be severe even in the short term.

How can governments make sure that no one is left behind? The speed at which AI technologies are developing widens the gap between those who are adopting AI and those who are hesitant every day. This poses an enormous challenge to schools, universities, and employers: how can they enable students and employees to learn to use AI effectively and ethically?

This so-called ‘AI literacy’ challenge presents a conundrum to policymakers across many different areas. What does a curriculum look like that teaches AI literacy while also ensuring learners acquire important cognitive skills and competencies? Who should be targeted to reduce inclusion and usage gaps now, and into the future? How would such a programme of literacy square with public scepticism and greater demands for regulation?

Our data suggests that the workplace is an important factor when it comes to AI adoption – as in political education, the main challenge will be to include those who have little access to professional education or to opportunities in the labour market, or who have jobs that require a lower level of AI competency.

‘Not democracy’ – The public is concerned about the present and future of democracy in Western societies

The emergence of AI-driven technologies is not taking place in the shadows or behind the scenes: citizens in Western societies have had a front-row seat in observing how the Internet has changed the media, political landscapes and democratic discourse in the past 20 years. Our research shows that publics do not reject AI technology as such, but have become highly sceptical when it comes to the impact of AI on media, society, and democracies.

While, according to our data, Western citizens acknowledge the instrumental benefits of AI in everyday life and in the workplace (Convenience! Better healthcare! A safer world!), more citizens think that AI is negatively impacting their democratic systems, the media, and their country’s society in general than think that it is impacting these areas positively.

Perhaps more concerning for EU societies is that English-speaking publics express greater pessimism about AI's impact across multiple domains, not only social or political ones. Historically, Anglo-Saxon countries have often led societal, commercial, and political transformations, with European countries experiencing similar shifts several years later. If this pattern holds, public sentiment in Anglo-Saxon countries may serve as an early warning system for challenges that EU societies will soon confront.

There are positive arguments to be made for AI in these spaces: AI can help societies to organise more effectively and efficiently, and can enable governments to ensure that citizens’ expectations of good governance are met. As well, it can support campaigns and parties in communicating effectively, and can help voters stay informed. Nonetheless, Western publics are sceptical with respect to the effects AI has on democracy, local and national government, and the media. Developers of AI, the governments which regulate them, and the civil society stakeholders who use and promote them need to think about how to address these concerns.

Regulating tech could be a public instrument to seek shelter in a world in motion

Sentiment in Western societies shows that citizens are in favour of regulating AI and social media, with robust majorities (up to four out of five in Great Britain!) supporting regulation of AI technologies even if this slows down their development. But there is no immediate or obvious answer as to how to construct and enforce regulation in these rapidly developing, rapidly changing fields.

Equally, citizens want to see social media platforms regulated. There is almost no support for relaxing regulations, but quite considerable demand for tighter ones. In France, Great Britain, and the United States, around half of the public or more think that regulations for social media companies are too relaxed. These sentiments indicate that the public does perceive social media as a force in Western societies that needs to be contained, and they provide a robust basis for regulatory measures such as the European Union’s Digital Services Act.

Finally, Western publics are quite clear that if there is a tradeoff between increased regulation of Big Tech in general and the political or diplomatic risk this may bring with Donald Trump and the US, they have no hesitation in still calling for more regulation. This simultaneously showcases the strength of feeling regarding public demands for control, and the lack of sway that the current US President – with all his connections to Big Tech and social media in particular – might have on global affairs.

YouGov’s research also sheds light on the political dynamics which may shape conversations about AI, tech, social media, and regulation as we move forward. Voters of far-right parties are far more likely to think that regulations are too tight. When it comes to political advertising, the overall sentiment in most countries is in favour of banning political adverts from social media. When split by political preferences, voters of progressive and liberal parties support banning political advertising to a much higher extent than voters of right-wing parties do.

This suggests that citizens with progressive and liberal mindsets perceive the digital space currently provided by social media companies as politically too extreme – and thus lean towards regulating social media or banning content to fix the issue. Research suggests that algorithms favour extreme political content. In this context, political advertising has been one of the few effective and easy-to-use tools for progressive and liberal parties to counter the bias of the algorithms.6


Footnotes

1 Namely an August 2025 poll comprising (in Europe) the UK, Germany, France, Spain, Italy, and Denmark, as well as Canada, the USA, and Australia. This survey forms the basis of all data in this paper which was not part of the September and December European Political Monthly programmes.


2 Zero-centred priors with a standard deviation of one.


3 In this sense, we mean reported effects in the ‘Estimate’ column which do not include 0 between the two upper and lower ranges reported in the ‘Lower 90% CI’ and ‘Upper 90% CI’ columns. If 0 is included in this range, the effect for any such variable is not distinguishable from 0.


4 This result holds consistent if we reduce this variable to only looking at supporters of the most-right wing party: Reform UK.


5 For an overview, see the WHO guidance on Ethics & Governance of Artificial Intelligence for Health: https://www.who.int/publications/i/item/9789240029200


6 Most recently, a study by Universität Potsdam and Bertelsmann Stiftung showed that content by far-right and far-left parties was more likely to be shown on TikTok than content by centrist and liberal parties: https://www.bertelsmann-stiftung.de/de/unsere-projekte/engagement-junger-menschen-fuer-demokratie/projektnachrichten/algorithmen-im-wahlkampf
