
The US-China AI gap closed. The responsible AI gap didn’t

The assumption that the US holds a durable lead in AI model performance is not well-supported by the data, and that’s just one of the uncomfortable findings in Stanford University’s 2026 AI Index Report, published this week.

The report, produced by Stanford’s Institute for Human-Centered Artificial Intelligence, is a 423-page annual assessment of where artificial intelligence stands. It covers research output, model performance, investment flows, public sentiment, and responsible AI. The headline findings are striking.

But the more consequential insights sit in the sections most coverage has skipped, particularly on AI safety, where the gap between what models can do and how rigorously they are evaluated for harm has not closed but widened.

With that in mind, three findings deserve more attention than they are getting.

The US-China model performance gap has effectively closed

The framing that the US leads China in AI development needs updating. According to the report, US and Chinese models have traded the top performance position several times since early 2025. In February 2025, DeepSeek-R1 briefly matched the top US model. As of March 2026, Anthropic’s top model leads by just 2.7%.

The US still produces more top-tier AI models – 50 models in 2025 to China’s 30 – and retains higher-impact patents. But China now leads in publication volume, citation share, and patent grants. China’s share of the top 100 most-cited AI papers grew from 33 in 2021 to 41 in 2024. South Korea, notably, leads the world in AI patents per capita.

The practical implication is that the assumption of a durable US technological lead in AI model performance is not well-supported by the data. The gap that existed two years ago has closed to a margin that shifts with each major model release.

There is a further structural vulnerability the report identifies. The US hosts 5,427 data centres – more than ten times any other nation – but a single company, TSMC, fabricates virtually every leading AI chip inside them. The entire global AI hardware supply chain runs through one foundry in Taiwan, though a TSMC expansion in the US began operations in 2025.

AI safety benchmarking is not keeping pace, and the numbers show it

Almost every frontier model developer reports results on capability benchmarks. The same is not true for responsible AI benchmarks, and the 2026 Index documents the gap with some precision.

The report’s benchmark table for safety and responsible AI shows that most entries are simply empty. Only Claude Opus 4.5 reports results on more than two of the responsible AI benchmarks tracked. Only GPT-5.2 reports StrongREJECT. Across benchmarks measuring fairness, security, and human agency, the majority of frontier models report nothing.

Capability benchmarks are reported consistently across frontier models. Responsible AI benchmarks – covering safety, fairness, and factuality – are largely absent. Source: Stanford HAI 2026 AI Index Report

This does not mean frontier labs are doing no internal safety work. The report acknowledges that red-teaming and alignment testing happen, but that “these efforts are rarely disclosed using a common, externally comparable set of benchmarks.” The effect is that external comparison on AI safety dimensions is effectively impossible for most models.

Documented AI incidents rose to 362 in 2025, up from 233 in 2024, according to the AI Incident Database. The OECD’s AI Incidents and Hazards Monitor, which uses a broader automated pipeline, recorded a peak of 435 monthly incidents in January 2026, with a six-month moving average of 326.

Documented AI incidents rose to 362 in 2025, up from 233 the previous year and below 100 annually before 2022. Source: AI Incident Database (AIID), via Stanford HAI 2026 AI Index Report
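The OECD monitor’s six-month moving average is a simple trailing mean over the most recent months. As a minimal sketch – the monthly figures below are invented for illustration, not the monitor’s actual series:

```python
# Hypothetical monthly incident counts (illustrative only; the report
# cites a January 2026 peak of 435 and a six-month moving average of 326,
# but the full monthly series is not reproduced here).
monthly_incidents = [300, 280, 310, 340, 290, 435]

def trailing_average(series, window):
    """Mean of the last `window` observations: a simple moving average."""
    recent = series[-window:]
    return sum(recent) / len(recent)

print(round(trailing_average(monthly_incidents, 6), 1))  # -> 325.8
```

A moving average like this smooths out single-month spikes, which is why the monitor can report both a 435 peak and a much lower six-month average for the same period.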

The governance response at the organisational level is struggling to keep up. According to a survey conducted by the AI Index and McKinsey, the share of organisations rating their AI incident response as “excellent” dropped from 28% in 2024 to 18% in 2025. Those reporting “good” responses also fell, from 39% to 24%. Meanwhile, the share experiencing three to five incidents rose from 30% to 50%.

The report also identifies a structural problem in responsible AI improvement itself: gains in one dimension tend to reduce performance in another. Improving safety can degrade accuracy, for example, and improving privacy can reduce fairness. There is no established framework for managing such trade-offs, and in several dimensions, including fairness and explainability, the standardised data needed to track progress over time does not yet exist.
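The trade-off problem can be made concrete with a toy scoring exercise. In this hypothetical sketch – the dimension scores and weights are invented, not taken from the report – the ranking of two model versions flips depending on how the dimensions are weighted, which is exactly why a single composite ranking is hard to establish:

```python
# Illustrative only: these dimension scores and weights are invented,
# not drawn from the AI Index Report.
scores_v1 = {"safety": 0.70, "accuracy": 0.90, "privacy": 0.60, "fairness": 0.80}
scores_v2 = {"safety": 0.85, "accuracy": 0.70, "privacy": 0.75, "fairness": 0.72}

def composite(scores, weights):
    """Weighted sum across responsible-AI dimensions."""
    return sum(scores[d] * weights[d] for d in scores)

safety_first = {"safety": 0.4, "accuracy": 0.2, "privacy": 0.2, "fairness": 0.2}
accuracy_first = {"safety": 0.2, "accuracy": 0.4, "privacy": 0.2, "fairness": 0.2}

# v2 ranks higher when safety is weighted heavily...
print(composite(scores_v1, safety_first), composite(scores_v2, safety_first))
# ...but v1 ranks higher when accuracy is weighted heavily.
print(composite(scores_v1, accuracy_first), composite(scores_v2, accuracy_first))
```

With no agreed weighting – and, as the report notes, no standardised data for several dimensions – there is no neutral way to say which version “improved”.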

Public anxiety rises with adoption, and so does the expert-public gap

Globally, 59% of people surveyed say AI’s benefits outweigh its drawbacks, up from 55% in 2024. At the same time, 52% say AI products and services make them nervous, an increase of two percentage points in a single year. Both figures are moving upward simultaneously, which reflects a public that is using AI more while becoming more uncertain about where it leads.

The expert-public divide on AI’s employment effects is particularly sharp. According to the report, 73% of AI experts expect AI to have a positive impact on how people do their jobs, compared with just 23% of the general public – a 50-point gap. On the economy, the gap is 48 points (69% of experts are optimistic versus 21% of the public). On medical care, experts are considerably more optimistic at 84%, against 44% of the public.

Those gaps matter because public trust shapes regulatory outcomes, and regulatory outcomes shape how AI is deployed. On that dimension, the report flags something striking: the US reported the lowest level of trust in its own government to regulate AI responsibly of any country surveyed, at 31%. The global average was 54%. Southeast Asian countries were the most trusting, with Singapore at 81% and Indonesia at 76%.

Globally, the EU is trusted more than the US or China to regulate AI effectively. Among 25 countries in Pew Research Center’s 2025 survey, a median of 53% trusted the EU to regulate AI, compared with 37% for the US and 27% for China.

The report closes its public opinion chapter by noting that Southeast Asian countries remain among the world’s most optimistic about AI. In China, Malaysia, Thailand, Indonesia, and Singapore, more than 80% of respondents say AI will profoundly change their lives in the next three to five years. Malaysia posted the largest increase in this view from 2024 to 2025.

See also: IBM: How strong AI governance protects enterprise margins


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post The US-China AI gap closed. The responsible AI gap didn’t appeared first on AI News.
