The Truth Is Out There


DARPA Uses AI to Push Viral Pandemic Outbreak Modeling From Weeks to Days


Speed is being prioritized over scrutiny, with AI-generated models designed to justify interventions before they can be meaningfully challenged.

The U.S. military is funding artificial intelligence (AI) systems designed to drastically accelerate viral outbreak modeling—compressing a process that typically takes weeks into something that can be produced in days, then used to steer real-world interventions.

In other words, the faster the model, the less time there is to question whether the response is justified at all.

This acceleration follows DARPA’s already-documented pre-COVID pandemic infrastructure designed to turn digital genetic sequences into synthesized viruses and mass-produced mRNA countermeasures on a fixed timeline.


DARPA’s Stated Problem: Pandemic Models Were Brittle, Opaque, & Slow

According to a December Science publication:

As SARS-CoV-2 radiated across the planet in 2020, epidemiologists scrambled to predict its spread—and its deadly consequences. Often, they turned to models that not only simulate viral transmission and hospitalization rates, but can also predict the effect of interventions: masks, vaccines, or travel bans.

But in addition to being computationally intensive, models in epidemiology and other disciplines can be black boxes: millions of lines of legacy code subject to finicky tunings by operators at research organizations scattered around the world. They don’t always provide clear guidance. “The models that are used are often kind of brittle and nonexplainable,” says Erica Briscoe, who was a program manager for the Automating Scientific Knowledge Extraction and Modeling (ASKEM) project at the Defense Advanced Research Projects Agency (DARPA).

DARPA’s own program manager is conceding that the models used to steer COVID-era responses were fragile and difficult to interpret.

Meaning: they’re not trying to slow down or restrain model-driven policy after COVID.

They’re trying to make the same kind of decision machinery run faster.

There’s “real potential” for them to speed up model building during an outbreak, says Mohsen Malekinejad, an epidemiologist at the University of California San Francisco who helped evaluate the ASKEM products. “In a pandemic, time is always our biggest constraint. We need to have the information. We need to have it fast,” he says. “We simply don’t have enough data-skilled modelers for every single emergence, or every different type of public health need.”

The Program: AI-Generated Outbreak Models on Demand

“Launched in 2022, the $29.4 million program aims to develop artificial intelligence (AI)-based tools that can make model building easier, faster, and more transparent.”

DARPA funded infrastructure that standardizes and accelerates outbreak modeling.

The emphasis is on speed, reproducibility, and usability by non-specialists, allowing policy-relevant models to be generated quickly, even when underlying assumptions are incomplete or contested.

How It Works: Papers & Notebooks → Equations → Models

“The program’s AI tools automate that coding, allowing researchers to construct, update, and combine models at a higher level of abstraction.”

By removing much of the technical friction involved in model construction, these tools make it easier to generate outbreak models that carry institutional weight, even when the scientific grounding is thin or uncertain.

“ASKEM teams designed AI systems that can consume scientific literature… and extract the equations and knowledge needed to create or update a given model.”

Scientific literature is converted directly into reusable model components, giving machine-parsed interpretations of research the ability to propagate quickly into decision-making frameworks.

“One ASKEM project developed a way to ingest those notebooks, extract the underlying mathematical descriptions, and turn them into a model.”

Informal reasoning and exploratory notebook work can be elevated into deployable models at speed, reducing the distance between preliminary thinking and authoritative outputs.
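To make that pipeline concrete, here is a minimal sketch, in Python, of the "equations to models" step: equation strings of the kind the article says are machine-extracted from papers and notebooks are parsed symbolically and integrated as a runnable outbreak model. The SIR equations, parameter values, and names below are illustrative assumptions, not ASKEM code.

```python
# A minimal sketch of the "extracted equations -> runnable model" step.
# The SIR equations, parameter values, and names here are hypothetical
# placeholders, not taken from the ASKEM tools.
import sympy as sp
from scipy.integrate import solve_ivp

# Equation strings as they might be machine-extracted from a paper or notebook.
extracted = {
    "S": "-beta * S * I / N",             # susceptible
    "I": "beta * S * I / N - gamma * I",  # infected
    "R": "gamma * I",                     # recovered
}

S, I, R, beta, gamma, N = sp.symbols("S I R beta gamma N")
rhs = [sp.sympify(expr) for expr in extracted.values()]   # text -> math
f = sp.lambdify((S, I, R, beta, gamma, N), rhs, "numpy")  # math -> function

params = dict(beta=0.3, gamma=0.1, N=1e6)  # assumed, not fitted, values

def ode(t, y):
    return f(*y, **params)

# Integrate 160 days from a seed cluster of 10 infections.
sol = solve_ivp(ode, (0, 160), [params["N"] - 10, 10, 0])
print("peak infections:", int(sol.y[1].max()))
```

The hand-coding a modeler would normally do, turning the extracted equations into a solvable system, is exactly the step the article says the AI tools automate.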

Intervention-Focused Modeling

“The resulting model integrated the viruses’ different transmission and seasonal patterns, while gauging the effects of interventions such as wearing masks and testing.”

The system is designed to evaluate intervention scenarios alongside disease dynamics, embedding policy considerations directly into the modeling process.

“Testers were asked to model the impact of a vaccination campaign on the cost of hospitalization for hepatitis A in a state’s unhoused drug users.”

These tools are oriented toward applied governance questions—cost, targeting, and campaign impact—rather than purely descriptive epidemiology.
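Both quoted examples reduce to the same mechanics: the intervention enters the model as a parameter that scales transmission or uptake, and scenarios are compared by rerunning the system. Here is a hedged sketch with an invented 40% mask-effectiveness figure; nothing in it is drawn from the actual ASKEM models.

```python
# Hypothetical intervention comparison: rerun one outbreak model with
# transmission scaled by an assumed mask effectiveness. All parameter
# values are invented for illustration.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    S, I, R = y
    flow = beta * S * I / N  # new infections per day
    return [-flow, flow - gamma * I, gamma * I]

N, gamma, beta0 = 1e6, 0.1, 0.3  # assumed population and rates
mask_effect = 0.4                # placeholder: a 40% cut in transmission

for label, beta in [("baseline", beta0), ("masks", beta0 * (1 - mask_effect))]:
    sol = solve_ivp(sir, (0, 365), [N - 10, 10, 0],
                    args=(beta, gamma, N), max_step=1.0)
    print(f"{label}: peak infected = {int(sol.y[1].max()):,}")
```

The policy lever is just another number in the system, which is what lets the tooling turn governance questions about masking, testing, or vaccination campaigns into fast model runs.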

The Speed Claim: 83% Faster

“In the final results, testers found that the ASKEM tools, when pitted against standard modeling workflows, could create models 83% faster.”

Model generation is fast enough to fit within political and media timelines, reducing the opportunity for external review before results are acted upon.

“They were able to build practically useful models in a 40-hour work week for multiple problems.”

Once speed ceases to be the limiting factor, the pressure shifts toward rapid implementation rather than careful validation.

‘Transparency’ as an Internal Confidence Signal

“Because of the ASKEM models’ enhanced transparency, testers also found that decision-makers would be more confident in ASKEM’s outputs than in those of traditional models.”

Here, “transparency” functions less as a safeguard and more as a confidence amplifier for officials.

By making models legible enough to satisfy internal review, the system reduces friction within institutions, allowing officials to act more quickly while unresolved uncertainties remain embedded in the outputs.

Intended Users: Health, Defense, & Intelligence Agencies

“DARPA is working to find agencies or programs within the health, defense, and intelligence communities that might want to take advantage of ASKEM.”

Outbreak modeling is being positioned as a permanent national-security capability, integrated alongside defense and intelligence functions rather than treated as an ad hoc public-health exercise.

Bottom Line

DARPA is building a system that converts literature, assumptions, and exploratory analysis into outbreak models fast enough to guide interventions in near real time.

When speed is treated as the primary constraint, the window for scrutiny, dissent, and meaningful challenge necessarily collapses before those models are used to justify action.

Orwellian Tactics. WHO Instructs Governments to Track Online Anti-Vaccine Messaging in Real Time with AI: Journal ‘Vaccines’


Believe in vaccines or be targeted.

In a November Vaccines journal publication, the World Health Organization (WHO) demanded that governments surveil online information that questions the legitimacy of influenza vaccines and launch “countermeasures” against those who question the WHO’s vaccine dogma.

The WHO’s largest funders are the U.S. government (taxpayers) and the Bill & Melinda Gates Foundation.

In the November publication, the WHO representatives do not argue for their beliefs in vaccines.

They do not attempt to interact with arguments against vaccines.


Instead, they call for governments to use artificial intelligence (AI) to monitor online opposition to injectable pharmaceuticals, and to develop ways to combat such opposition.

There is no persuasion, only doctrine.

The WHO paper reads:

“Vaccine effectiveness is contingent on public acceptance, making risk communication and community engagement (RCCE) an integral component of preparedness. The research agenda calls for the design of tailored communication strategies that address local sociocultural contexts, linguistic diversity, and trust dynamics.”

“Digital epidemiology tools, such as AI-driven infodemic monitoring systems like VaccineLies and CoVaxLies, offer real-time insight into misinformation trends, enabling proactive countermeasures.”

The WHO starts from the assumption that all vaccine skepticism is inherently false, pushing surveillance tools to track and catalog online dissent from those rejecting that creed.

The goal is not finding middle ground or even fostering dialogue.

It’s increasing vaccinations.

“The engagement of high-exposure occupational groups as trusted messengers is recommended to improve uptake.”

To accomplish this, governments “should” align “all” their messaging with the WHO’s denomination of vaccine faith.

“All messaging should align with WHO’s six communication principles, ensuring information is Accessible, Actionable, Credible, Relevant, Timely, and Understandable, to strengthen public trust in vaccination programmes.”

The WHO’s faith system requires that not only its own followers but also non-followers inject themselves with drugs linked to injuries, diseases, hospitalizations, and deaths.

If your posts online oppose that faith system, they are targeted and labeled as “misinformation.”

You require “behavioural intervention.”

You must be “counter[ed].”

“Beyond monitoring misinformation, participatory communication models that involve local leaders, healthcare workers, and veterinarians have shown measurable improvements in vaccine uptake and trust. Evidence-based behavioural [interventions] can complement these approaches to counter misinformation.”

The WHO is outlining an Orwellian control system where dissent is pathologized, belief is enforced by surveillance, and governments are instructed to algorithmically police thought in service of pharmaceutical compliance.

WHO–Gates Blueprint for Global Digital ID, AI-Driven Surveillance, and Life-Long Vaccine Tracking for Every Person


Automated, cradle-to-grave traceability for “identifying and targeting the unreached.”

In a document published in the October Bulletin of the World Health Organization and funded by the Gates Foundation, the World Health Organization (WHO) is proposing a globally interoperable digital-identity infrastructure that permanently tracks every individual’s vaccination status from birth.

The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid tracking individuals from birth.

It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.

The proposed system:

  • integrates personally identifiable information with socioeconomic data such as “household income, ethnicity and religion,”
  • deploys artificial intelligence for “identifying and targeting the unreached” and “combating misinformation,”
  • and enables governments to use vaccination records as prerequisites for education, travel, and other services.

What the WHO Document Admits, in Their Own Words

To establish the framework, the authors define the program as nothing less than a restructuring of how governments govern:

“Digital transformation is the intentional, systematic implementation of integrated digital applications that change how governments plan, execute, measure and monitor programmes.”

They openly state the purpose:

“This transformation can accelerate progress towards the Immunization agenda 2030, which aims to ensure that everyone, everywhere, at every age, fully benefits from vaccines.”

This is the context for every policy recommendation that follows: a global vaccination compliance system, digitally enforced.

1. Birth-Registered Digital Identity & Life-Long Tracking

The document describes a system in which a newborn is automatically added to a national digital vaccine-tracking registry the moment their birth is recorded.

“When birth notification triggers the set-up of a personal digital immunization record, health workers know who to vaccinate before the child’s first contact with services.”

They specify that this digital identity contains personal identifiers:

“A newborn whose electronic immunization record is populated with personally identifiable information benefits because health workers can retrieve their records through unique identifiers or demographic details, generate lists of unvaccinated children and remind parents to bring them for vaccination.”

This is automated, cradle-to-grave traceability.

The system also enables surveillance across all locations:

“[W]ith a national electronic immunization record, a child can be followed up anywhere within the country and referred electronically from one health facility to another.”

This is mobility tracking tied to medical compliance.

2. Linking Vaccine Records to Income, Ethnicity, Religion, & Social Programs

The document explicitly endorses merging vaccine status with socioeconomic data.

“Registers that record household asset data for social protection programmes enable monitoring of vaccination coverage by socioeconomic status such as household income, ethnicity and religion.”

This is demographic stratification attached to a compliance database.

3. Conditioning Access to Schooling, Travel, & Services on Digital Vaccine Proof

The WHO acknowledges and encourages systems that require vaccine passes for core civil functions:

“Some countries require proof of vaccination for children to access daycare and education, and evidence of other vaccinations is often required for international travel.”

They then underline why digital formats are preferred:

“Digital records and certificates are traceable and shareable.”

Digital traceability means enforceability.

4. Using Digital Systems to Prevent ‘Wasting Vaccine on Already Immune Children’

The authors describe a key rationale:

“Children’s vaccination status is not checked during campaigns, a practice that wastes vaccine on already immune children and exposes them to the risk of adverse events.”

Their solution is automated verification to maximize vaccination throughput.

The digital system is positioned as both a logistical enhancer and a compliance enforcer:

“National electronic immunization records could transform how measles campaigns and supplementary immunization activities are conducted by enabling on-site confirmation of vaccination status.”

5. AI Systems to Target Individuals, Identify ‘Unreached,’ & Combat ‘Misinformation’

The WHO document openly promotes artificial intelligence to shape public behavior:

“AI… demonstrate[s] its utility in identifying and targeting the unreached, identifying critical service bottlenecks, combating misinformation and optimizing task management.”

They explain additional planned uses:

“Additional strategic applications include analysing population-level data, predicting service needs and spread of disease, identifying barriers to immunization, and enhancing nutrition and health status assessments via mobile technology.”

This is predictive analytics paired with influence operations.

6. Global Interoperability Standards for International Data Exchange

The authors call for a unified international data standard:

“Recognize fast healthcare interoperability resources… as the global standard for exchange of health data.”

Translated: vaccine-linked personal identity data must be globally shareable.
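“Fast healthcare interoperability resources” is FHIR, an HL7 data standard. As a sketch of why such records are “traceable and shareable,” here is roughly the shape of a FHIR R4 Immunization resource, written out in Python; every identifier and value below is invented for illustration.

```python
# Roughly the shape of a FHIR R4 "Immunization" resource, the format the
# document wants recognized as the global exchange standard. All values
# below are invented for illustration.
import json

immunization = {
    "resourceType": "Immunization",
    "status": "completed",
    "vaccineCode": {
        "coding": [{"system": "http://hl7.org/fhir/sid/cvx", "code": "03"}]
    },
    "patient": {"reference": "Patient/hypothetical-id-123"},  # unique ID link
    "occurrenceDateTime": "2025-01-15",
    "lotNumber": "LOT-0000",
}

# Ordinary structured data keyed to a unique patient reference: any system
# speaking the same standard can retrieve, verify, and share it.
print(json.dumps(immunization, indent=2))
```

The format itself enforces nothing; the point is that a common machine-readable record keyed to a unique identifier is what makes downstream checks, from school entry to travel, administratively trivial.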

They describe the need for “digital public infrastructure”:

“Digital public infrastructure is a foundation and catalyst for the digital transformation of primary health care.”

This is the architecture of a global vaccination-compliance network.

7. Surveillance Expansion Into Everyday Interactions

The WHO outlines a surveillance model that activates whenever a child interacts with any health or community service:

“CHWs [community health workers] who identify children during home visits and other community activities can refer them for vaccination through an electronic immunization registry or electronic child health record.”

This means non-clinical community actors participating in vaccination-compliance identification.

The authors also describe cross-service integration:

“Under-vaccinated children can be reached when CHWs and facility-based providers providing other services collaborate and communicate around individual children in the same electronic child health records.”

Every point of contact becomes a checkpoint.

8. Behavior-Shaping Through Alerts, Reminders, and Social Monitoring

The WHO endorses using digital messaging to overcome “intention–action gaps”:

“Direct communication with parents in the form of alerts, reminders and information helps overcome the intention–action gap.”

They also prescribe digital surveillance of public sentiment:

“Active detection and response to misinformation in social media build trust and demand.”

This is official justification for monitoring and countering speech.

9. Acknowledgment of Global Donor Control—Including Gates Foundation

At the very end of the article, the financial architect is stated plainly:

“This work was supported by the Gates Foundation [INV-016137].”

This confirms the alignment with Gates-backed global ID and vaccine-registry initiatives operating through Gavi, the World Bank, UNICEF, and WHO.

Bottom Line

In the WHO’s own words:

“Digital transformation is a unique opportunity to address many longstanding challenges in immunization… now is the time for bold, new approaches.”

And:

“Stakeholders… should embrace digital transformation as an enabler for achieving the ambitious Immunization agenda 2030 goals.”

This is a comprehensive proposal for a global digital-identity system, permanently linked to vaccine status, integrated with demographic and socioeconomic data, enforced through AI-driven surveillance, and designed for international interoperability.

It is not speculative, but written in plain language, funded by the Gates Foundation, and published in the World Health Organization’s own journal.

WHO Builds International Pandemic Command System Through New Pathogen-Sharing Agreement


The WHO’s new annex would establish a worldwide system for collecting, sharing, and redistributing pathogens—giving the agency a permanent role in directing future pandemic responses.

The World Health Organization (WHO) just took one of its most consequential steps toward centralized pandemic coordination, as governments around the world lab-engineer multiple chimeric bird flu viruses, the very pathogen the mainstream predicts will cause the next pandemic.

In a new announcement from Geneva published on Friday, the agency confirmed that countries are negotiating the first draft of the ‘Pathogen Access and Benefit-Sharing’ (PABS) annex.

This is a legally binding add-on to the WHO’s forthcoming ‘Pandemic Agreement’ that would create a permanent international mechanism for collecting, storing, and redistributing pathogen samples and genetic sequence data.

Across the short press release, the WHO used the word “pandemic” fourteen times, revealing the core justification for what it’s really building: a standing international command network for future pandemic response.

“Countries must be able to quickly identify pathogens that have pandemic potential and share their genetic information and material so scientists can develop tools like tests, treatments, and vaccines,” the WHO said.


A Permanent Infrastructure for Pandemic Coordination

The PABS annex operationalizes Article 12 of the Pandemic Agreement, transforming what was once voluntary information-sharing into a formal, legally binding system.

If adopted, countries will be required to submit both biological materials and genetic data on “pathogens with pandemic potential” into a WHO-coordinated system, effectively creating a multinational pathogen clearinghouse.

In return, the WHO promises “fair and equitable” access to the medical products developed from these materials.

But that access would be managed through the same centralized network, making the WHO not just an advisor, but a logistical coordinator for the entire chain of pandemic response: detection, data, research, and distribution.

‘Solidarity’ as the Framework for Centralized Control

WHO Director-General Tedros Adhanom Ghebreyesus called the move a victory for unity.

“Solidarity is our best immunity,” Tedros said. “Finalizing the Pandemic Agreement, through a commitment to multilateral action, is our collective promise to protect humanity.”

That message of solidarity sounds benevolent.

But in practice, it marks the institutionalization of transnational pandemic management under WHO authority, giving the agency standing powers to organize and direct the movement of pathogen data worldwide.

Risks of an International Pathogen Network

A centralized pathogen-sharing regime raises major risks:

  • Loss of Sovereignty: Countries could be legally obligated to transfer biological samples and genetic information to the WHO, diminishing national control over biosecurity.
  • Intellectual Property Exploitation: Data shared through the WHO may be commercialized by corporate or academic partners with no guaranteed benefit to source nations.
  • Security and Dual-Use Concerns: Centralized pathogen databases become high-value targets for theft or misuse.
  • Administrative Bottlenecks: Complex “benefit-sharing” rules could delay rapid response—the opposite of what’s promised.

From Agreement to Enforcement

The Intergovernmental Working Group (IGWG) met November 3–7 in Geneva to negotiate the annex, with co-chairs Ambassador Tovar da Silva Nunes (Brazil) and Matthew Harpur (UK) promising a finalized version for adoption at the 79th World Health Assembly in May 2026.

Once approved, national parliaments would begin ratifying the full Pandemic Agreement, paving the way for a unified international system of pathogen control and pandemic coordination.

All anchored in Geneva and legally binding across WHO member states.

Bottom Line

The WHO’s new PABS annex is more than a technical policy.

It’s the foundation of a permanent international pandemic infrastructure, one that centralizes biological data, pathogen access, and emergency response authority under the world’s largest unelected health agency.

Under the banner of “pandemic preparedness,” the WHO is building the system that will coordinate—and possibly control—the next worldwide outbreak response.

Free Speech Under Siege: How Europe is Becoming the New China


The Decline and Fall of Free Speech in Europe

In the middle of the twentieth century, Europe lay in ruins, having learned, or so we thought, the dark lesson that when speech is regulated, tyranny flourishes. That lesson has now been forgotten. A continent once hailed as the cradle of liberal democracy has become the laboratory of a new digital authoritarianism. This is not an exaggeration. It is, rather, the consequence of a steady drift toward control, clothed in the language of safety, decency, and order. And today, that drift has become an avalanche.

The United Kingdom’s Online Safety Act, France’s criminal investigation into X, and the European Union’s Digital Services Act (DSA) are not merely legislative developments. They are declarations of war against the free exchange of ideas. What unites them is a belief, common to technocrats in Brussels and bureaucrats in Whitehall, that ordinary people cannot be trusted with unfiltered information. To preserve democracy, it must be preemptively constrained.

This inversion of means and ends has accelerated since Elon Musk restored viewpoint neutrality to X, a platform that had, under previous management, cooperated with government actors to throttle disfavored speech. What Europe fears now is not misinformation, but competition. Competition of ideas. Under the Biden administration, that fear was shared. But with Trump back in the White House and Secretary Marco Rubio at the helm of the State Department, the US has become once again the principal global guarantor of free speech.

Examining the facts.

Britain’s Online Safety Act, which came into force yesterday, July 25th, is a case study in bureaucratic excess. Ostensibly designed to protect children from harmful content, the law extends far beyond illegal material. It empowers the Office of Communications (Ofcom) to police “legal but harmful” speech, a category so vague that it becomes a weapon. Among the priority targets of censorship are foreign influence, disinformation, and content deemed injurious to public health or electoral trust. All are, of course, euphemisms for political heterodoxy.

The mechanisms of enforcement are equally chilling. Platforms face fines of up to 10 percent of global revenue and criminal charges against executives who fail to comply. In response, X announced that it would default users to a restricted mode unless they verify their age, employing invasive AI-based identification to filter content. In practice, this amounts to algorithmic ghettoization: speech not officially banned but rendered invisible, unreachable, and unsearchable.

If Britain’s law is sweeping, France’s assault is surgical. In early 2025, French prosecutors launched a criminal probe into X, accusing the company of algorithmic manipulation to promote divisive political content, including material critical of the French government’s stance on immigration and LGBT issues. But the legal framework under which this charge was levied is what ought to alarm any student of liberty. Prosecutors invoked Articles 323-2 and 323-3 of the French Penal Code, which target cybercriminals who distort data systems. They did so while declaring the company an “organized crime group,” the very designation used against narcotics cartels.

In other words, France is treating the operation of a social media algorithm as a felony, and the platform’s executives and users as gangsters. It is difficult to imagine a clearer betrayal of liberal norms. The response from the Trump administration was swift and sharp. The State Department’s Bureau of Democracy, Human Rights, and Labor (DRL) issued a statement condemning the investigation as an affront to the speech rights of American citizens and companies, noting that governments must not suppress voices they disfavor under the guise of regulation.

That brings us to Brussels, where the European Union’s Digital Services Act has taken aim at the global information ecosystem. Under the DSA, platforms with more than 45 million users in the EU, including X, YouTube, Facebook, and TikTok, are designated as Very Large Online Platforms (VLOPs) and subjected to a draconian compliance regime. They must submit to algorithmic audits, provide content takedown systems, establish risk mitigation protocols, and grant data access to academic researchers. These may sound innocuous. They are not.

The risk assessments required under the DSA demand that platforms identify and reduce threats to democratic processes, public health, and civil discourse. But who defines these risks? Who decides what constitutes a threat to democracy? In practice, the answer is European regulators whose notion of democracy excludes populism, nationalism, and conservative dissent. The effect is predictable. As revealed in documents obtained by the House Judiciary Committee, platforms are modifying algorithms not to protect users, but to conform to a political orthodoxy that elevates some voices and buries others.

This is not merely an internal European affair. American citizens are affected. American companies are compelled to enforce rules that conflict with the First Amendment. European law is being globalized through the extraterritorial compliance of US-based firms, thereby exporting censorship to the last country in the West where speech remains constitutionally protected. The Trump administration has rightly characterized this as digital colonialism, and in response, it has begun to act.

Executive Order 14149, issued by President Trump on January 20, 2025, prohibits federal agencies from colluding in censorship and directs the Attorney General to prosecute such collaboration where found. More pointedly, Secretary Rubio has launched a campaign of diplomatic retaliation. In May, the State Department imposed visa bans on foreign officials who attempt to suppress American speech online. Among those targeted were members of France’s interior ministry and German regulators affiliated with the European Commission.

This is not just policy. It is philosophy. The Trump administration is reasserting a principle once understood and now forgotten: that the freedom to speak is not a gift from the government, but a right to be defended against it. That principle has been invoked in lawsuits filed by US firms against foreign judges, including the suit by Truth Social and Rumble against Brazilian Justice Alexandre de Moraes, who ordered censorship of American content. A federal judge recently ruled that US companies are under no obligation to comply with foreign censorship mandates, affirming the territorial integrity of the First Amendment.

These actions underscore a broader point. The war for speech is no longer domestic. It is international. And Europe, once the defender of Enlightenment values, has become the staging ground for a counter-Enlightenment led not by kings or priests, but by regulators and prosecutors. That is the novelty. The new censorship is procedural, not ideological. It hides behind audits, compliance regimes, and “safety by design” architectures. But the result is the same: fewer voices, less dissent, and a public square scrubbed clean of deviation.

Some will object: do we not have a responsibility to prevent harm? Certainly, but that is not what is happening. These regimes do not surgically remove incitement or criminality. They blur the line between disagreement and danger. A platform that allows a conservative view of gender to trend is now suspected of extremism. A politician who questions climate policy is accused of disinformation. The censor has changed outfits. He now wears a badge that says “compliance officer.”

What is to be done? First, the US must extend its protective umbrella over its citizens wherever they are. The principle that an American cannot be silenced by a foreign government must be codified, not merely asserted. Second, American platforms must be encouraged, even compelled, to defend their users against extra-constitutional demands. If that means declining to operate in censorious jurisdictions, so be it. Freedom has a price. Better to pay it now than to live forever in rented liberty.

Finally, it must be recognized that this is not just a legal conflict. It is a civilizational one. Europe has chosen managed democracy over free society. The US must not follow. We must lead. And if our allies bristle, let them. Better to be isolated and free than integrated and gagged. The American Revolution did not begin with gunfire. It began with speech. We owe it to our forebears, and our future, to keep that flame lit.

Bill Gates Launches Attack on ‘Insane’ Elon Musk, But He Clearly Didn’t Think It Through


Bill Gates is criticizing Elon Musk's involvement in politics.

Bill Gates has been incredibly involved in shaping politics and policy, not just in the United States, but all over the world.

But now he has a massive issue with Elon Musk doing the same.

The multibillionaire Microsoft co-founder bemoaned the sudden influence of Musk on American and European politics in an interview published Saturday by The Times.

The British newspaper pressed Gates in the wide-ranging conversation to react as Musk enters the global political fray. The paper, ironically enough, asked Gates if he wished that he “had got more involved in influencing politics like Elon Musk.”

“Not at all,” Gates responded.

“I thought the rules of the game were you picked a finite number of things to spout about that you cared for, focused on a few critical things, rather than telling people who they should vote for,” he told the outlet.

“For me it’s only ever about aid. I did think Brexit was a mistake, but I wasn’t tweeting every day,” Gates insisted.

Gates may not have the same style of political engagement as Musk’s off-the-cuff use of X, the social media platform he bought three years ago, or his appearance at rallies for Donald Trump.

But make no mistake. Gates is as political as they come.

Gates has thrown around his influence and his money, especially by means of the Gates Foundation, to move the policy conversation in his direction, particularly on topics like climate change and public health.

Just two years ago, for instance, the Gates Foundation dumped $40 million into highly controversial mRNA manufacturing projects in Africa.

Gates has also been involved with buying American farmland, seemingly to encourage meat alternatives and other purported climate-friendly agriculture activities.

But remember, Musk is the actual problem for supporting Trump, raising awareness of the groomer gangs in the United Kingdom, and encouraging Germany to be a sovereign nation, or so says Gates.

“I’m ultra-different. It’s really insane that he can destabilize the political situations in countries,” Gates claimed to The Times.

“I think in the U.S. foreigners aren’t allowed to give money; other countries maybe should adopt safeguards to make sure super-rich foreigners aren’t distorting their elections,” he continued.

Conservative commentator Victor Davis Hanson said what many Gates skeptics were thinking: “Is he joking, or simply completely misinformed?”

Beyond the long history of global activism from Gates, Hanson reminded social media that the billionaire was dead silent about various other forms of foreign political interference from the global left, including in the United States.

That includes Christopher Steele, the British ex-spy who “interfered in the 2016 presidential election by fabricating a venomous dossier to destroy the Trump campaign,” and much more recently, the fact that the Labour Party in Britain called for British activists to “swarm American swing states in service to the 2024 Kamala Harris campaign.”

Like other leftists who attack Musk and his affinity for the global right, Gates is not upset about a billionaire involving himself in politics.

Gates is only mad that the world’s richest man is not channeling his billions toward the global left instead.

“So, Mr. Gates, spare us your very selective outrage about Mr. Musk, given your prior deafening silence on hired foreign interference here and Democratic efforts to interfere in the elections of others,” Hanson added.

AI, Society, and Democracy: Maybe Just Relax.


This essay argues that law and regulation have never successfully diagnosed and prevented the social, political, and economic ills of new technology. AI is no different. AI regulation poses a greater threat to democracy than AI itself, as governments are anxious to use regulation to censor information. Free competition in civil society, media, and academia, not preemptive regulation, will address any ill effects of AI, as it has for previous technological revolutions.

“AI poses a threat to democracy and society. It must be extensively regulated.”
Or words to that effect are a common sentiment. They must be kidding.

Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology. Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?

Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus. 

Have people ever correctly forecast social and political changes, from any set of causes? Representative democracy and liberal society have, in their slow progress, waxed and waned, to put it mildly. Did our predecessors in 1910 see 70 years of communist dictatorship about to envelop Russia? Did they understand in 1925 the catastrophe waiting for Germany? 

Society is transforming rapidly. Birth rates are plummeting around the globe. The U.S. political system seems to be coming apart at the seams with unprecedented polarization, a busting of norms, and the decline of our institutions. Does anyone really know why?

“The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong.”

The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong. Thomas Malthus predicted, plausibly, that the technological innovations of the late 1700s would lead to widespread starvation. He was spectacularly wrong. Marx thought industrialization would necessarily lead to immiseration of the proletariat and communism. He was spectacularly wrong. Automobiles did not destroy American morals. Comic books and TV did not rot young minds.

Our more neurotic age began in the 1970s, with the widespread view that overpopulation and dwindling natural resources would lead to an economic and political hellscape, views put forth, for example, in the Club of Rome report and movies like Soylent Green. (2) They were spectacularly wrong. China acted on the “population bomb” with the sort of coercion our worriers cheer for, to its current great regret. Our new worry is global population collapse. Resource prices are lower than ever, the U.S. is an energy exporter, and people worry that the “climate crisis” from too much fossil fuel will end Western civilization, not “peak oil.” Yet demographics and natural resources are orders of magnitude more predictable than whatever AI will be and what dangers it poses to democracy and society. 

“Millenarian” stems from those who worried that the world would end in the year 1000, and that people had better get serious about repentance for their sins. They were wrong then, but much of the impulse to worry about the apocalypse, then to call for massive changes, usually with “us” taking charge, is alive today.

Yes, new technologies often have turbulent effects, dangers, and social or political implications. But that’s not the question. Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology? 

There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” (3) The Europeans moved in.

Genetic modification was feared to produce “frankenfoods,” or uncontrollable biological problems. As a result of vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans, including vitamin A-enhanced rice, which has saved the eyesight of millions, are tragically spreading to poorer countries. Most of Europe went on to ban hydraulic fracking. U.S. energy policy regulators didn’t have similar power to stop it, though they would have if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned research on new embryonic stem cell lines. Who knows what we might have learned. 

Climate change is, to many, the current threat to civilization, society, and democracy (the latter from worry about “climate justice” and waves of “climate refugee” immigrants). However much you believe the social and political impacts—much less certain than the meteorological ones—one thing is for sure: Trillion dollar subsidies for electric cars, made in the U.S., with U.S. materials, U.S. union labor, and page after page of restrictive rules, along with 100% tariffs against much cheaper Chinese electric cars, will not save the planet—especially once you realize that every drop of oil saved by a new electric car is freed up to be used by someone else, and at astronomical cost. Whether you’re Bjorn Lomborg or Greta Thunberg on climate change, the regulatory state is failing. 

We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of vague dangers of AI. We have almost certainly just experienced the first global pandemic due to a human-engineered virus. It turns out that gain-of-function research was the one needing regulating. Manipulated viruses, not GMO corn, were the biological danger. 

I do not deny potential dangers of AI. The point is that the advocated tool, the machinery of the regulatory state, guided by people like us, has never been able to see social, economic, and political dangers of technical change, or to do anything constructive about them ahead of time, and is surely just as unable to do so now. The size of the problem does not justify deploying completely ineffective tools. 

Preemptive regulation is even less likely to work. AI is said to be an existential threat, fancier versions of “the robots will take over,” needing preemptive “safety” regulation before we even know what AI can do, and before dangers reveal themselves. 

Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they came out, weighing costs against benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.

I do not claim that all regulation is bad. The Clean Air and Clean Water Acts of the early 1970s were quite successful. But consider all the ways in which they are so different from AI regulation. The dangers of air pollution were known. The nature of the “market failure,” classic externalities, was well understood. The technologies available for abatement were well understood. The problem was local. The results were measurable. None of those conditions is remotely true for regulating AI, its “safety,” its economic impacts, or its impacts on society or democratic politics. Environmental regulation is also an example of successful ex post rather than preemptive regulation. Industrial society developed, we discovered safety and environmental problems, and the political system fixed those problems, at tolerable cost, without losing the great benefits. If our regulators had considered Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.

“If our regulators had considered Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.”

Who will regulate? 

Calls for regulation usually come in the passive voice (“AI must be regulated”), leaving open the question of just who is going to do this regulating. 

We are all taught in first-year economics classes a litany of “market failures” remediable by far-sighted, dispassionate, and perfectly informed “regulators.” That normative analysis is not logically incorrect. But it abjectly fails to explain the regulation we have now, or how our regulatory bodies behave, what they are capable of, and when they fail. The question for regulating AI is not what an author, appointing him or herself benevolent dictator for a day, would wish to see done. The question is what our legal, regulatory, or executive apparatus can even vaguely hope to deliver, buttressed by analysis of its successes and failures in the past. What can our regulatory institutions do? How have they performed in the past? 

Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and defend profits accruing to their multibillion dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure. 

Even successful regulation, such as the first wave of environmental regulation, is now routinely perverted for other ends. People bring environmental lawsuits to endlessly delay projects they dislike for other reasons. 

The basic competence of regulatory agencies is now in doubt. On the heels of the massive failure of financial regulation in 2008 and again in 2021, (7) and the obscene failures of public health in 2020–2022, do we really think this institutional machinery can artfully guide the development of one of the most uncertain and consequential technologies of the last century?

And all of my examples asked regulators only to address economic issues, or easily measured environmental issues. Is there any historical case in which the social and political implications of any technology were successfully guided by regulation?

“Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators.”

It is AI regulation, not AI, that threatens democracy. 

Large Language Models (LLMs) are currently the most visible face of AI. They are fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”

The worriers often had a point. Gutenberg’s moveable type arguably led to the Protestant Reformation. Luther was the social influencer of his age, writing pamphlet after pamphlet of what the Catholic Church certainly regarded as “misinformation.” The church “regulated” with widespread censorship where it could. Would more censorship, or “regulating” the development of printing, have been good? The political and social consequences of the Reformation were profound, not least a century of disastrous warfare. But nobody at the time saw what they would be. They were more concerned with salvation. And moveable type also made the scientific journal and the Enlightenment possible, spreading a lot of good information along with “misinformation.” The printing press arguably was a crucial ingredient for democracy, by allowing the spread of those then-heretical ideas. The founding generation of the U.S. had libraries full of classical and enlightenment books that they would not have had without printing. 

More recently, newspapers, movies, radio, and TV have been influential in the spread of social and political ideas, both good and bad. Starting in the 1930s, the U.S. had extensive regulation, amounting to censorship, of radio, movies, and TV. Content was regulated, licenses given under stringent rules. Would further empowering U.S. censors to worry about “social stability” have been helpful or harmful in the slow liberalization of American society? Was any of this successful in promoting democracy, or just in silencing the many oppressed voices of the era? They surely would have tried to stifle, not promote, the civil rights and anti-Vietnam War movements, as the FBI did. 

Freer communication by and large is central to the spread of representative democracy and prosperity. And the contents of that communication are frequently wrong or disturbing, and usually profoundly offensive to the elites who run the regulatory state. It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge. 

“Regulating” communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether from shortwave radio, samizdat publications, text messages, Facebook, Instagram, and now AI, has always been a tool benefiting freedom. 

Yes, AI can lie and produce “deepfakes.” The brief era in which a photograph or video was, by itself, evidence that something happened, because photographs and videos were difficult to doctor, is over. Society and democracy will survive.

“Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.”

AI can certainly be tuned to favor one or the other political view. Look only at Google’s Gemini misadventure. (9) Try to get any of the currently available LLMs to report controversial views on hot-button issues, even medical advice. Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support eventually might win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify if something is real. If an algorithm is feeding people misinformation, as TikTok is accused of feeding people Chinese censorship, (10) count on its competitors, if allowed to do so, to scream that from the rafters and attract people to a better product. 

Regulation naturally bends to political ends. The Biden Executive Order on AI insists that “all workers need a seat at the table, including through collective bargaining,” and “AI development should be built on the views of workers, labor unions, educators, and employers.” (11) Writing in the Wall Street Journal, Ted Cruz and Phil Gramm report: “Mr. Biden’s separate AI Bill of Rights claims to advance ‘racial equity and support for underserved communities.’ AI must also be used to ‘improve environmental and social outcomes,’ to ‘mitigate climate change risk,’ and to facilitate ‘building an equitable clean energy economy.’” (12) All worthy goals, perhaps, but one must admit those are somewhat partisan goals not narrowly tailored to scientifically understood AI risks. And if you like these, imagine what the likely Trump executive order on AI will look like. 

Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.

Economics

What about jobs? It is said that once AI comes along, we’ll all be out of work. And exactly this was said of just about every innovation for the last millennium. Technology does disrupt. Mechanized looms in the 1800s did lower wages for skilled weavers, while providing a reprieve from the misery of farmwork for unskilled workers. The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives. Special regulations to help people displaced by AI, or China, or other newsworthy causes are counterproductive.

But after three centuries of labor-saving innovation, the unemployment rate is 4%. (13) In 1900, a third of Americans worked on farms. Then the tractor was invented. People went on to better jobs at higher wages. The automobile did not lead to massive unemployment of horse-drivers. In the 1970s and 1980s, women entered the workforce in large numbers. Just then, the word processor and Xerox machine slashed demand for secretaries. Female employment did not crash. ATMs increased bank employment. Tellers were displaced, but bank branches became cheaper to operate, so banks opened more of them. AI is not qualitatively different in this regard.

One activity will be severely disrupted: Essays like this one. ChatGPT-5, please write 4,000 words on AI regulation, society, and democracy, in the voice of the Grumpy Economist…(I was tempted!). But the same economic principle applies: Reduction in cost will lead to a massive expansion in supply. Revenues can even go up if people want to read it, i.e., if demand is elastic enough. (14) And perhaps authors like me can spend more time on deeper contributions. 
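For readers who want the elasticity condition spelled out, here is the standard one-line derivation; the notation is textbook convention, not from the essay itself.

```latex
R(p) = p\,q(p), \qquad
\varepsilon \equiv -\frac{dq}{dp}\,\frac{p}{q}
\quad\Longrightarrow\quad
\frac{dR}{dp} = q + p\,\frac{dq}{dp} = q\,(1 - \varepsilon)
```

When demand is elastic (ε > 1), dR/dp < 0, so a cost-driven fall in price raises total revenue: cheaper essays, more readers, and possibly more revenue.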

The big story of AI will be how it makes workers more productive. Imagine you’re an undertrained educator or nurse practitioner in a village in India or Africa. With an AI companion, you can perform at a much higher level. AI tools will likely raise the wages and productivity of less-skilled workers, by more easily spreading around the knowledge and analytical abilities of the best ones. 

AI is one of the most promising technical innovations of recent decades. Since the social media wave of the early 2000s, Silicon Valley has been trying to figure out what’s next. It wasn’t crypto. Now we know. AI promises to unlock tremendous advances. Consider only machine learning plus genetics, and ponder the huge advances coming in health. But nobody really knows yet what it can do, or how to apply it. It was a century from Franklin’s kite to the electric light bulb, and another century to the microprocessor and the electric car.

A broad controversy has erupted in economics: whether frontier growth is over, or dramatically slowing, because we have run out of ideas. (15) AI is a great hope that this is not true. Historically, ideas became harder to find within existing technologies. Then, just as growth seemed about to peter out, something new arrived. Steam engines plateaued after a century; then came diesel, electricity, and airplanes. As birthrates continue to decline, the issue is not too few jobs, but too few people. Artificial “people” may be coming along just in time!

“It’s fun to play dictator for a day when writing academic articles about what ‘should be regulated.’ But think about what happens when, inevitably, someone else is in charge.”

Conclusion 

As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes, 

We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow… 

We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for. (16) 

The first paragraph is correct. But the logical implication is the converse: if relations are “complex” and consequences “unforeseen,” the machinery of our political and regulatory state is incapable of doing anything about it. The second paragraph epitomizes the fuzzy thinking of the passive voice. Who is this “we”? How much more “attention” can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this “getting”? Who is to determine the “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellian in its autocracy. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? For “we” not to “leave it to tech entrepreneurs” means a radical appropriation of property rights and the rule of law.

What’s the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart? 

The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It’s going to be great.

Footnotes

(1) Angela Aristidou, Eugene Volokh, and an anonymous reviewer for helpful comments.

(2) Donella Meadows, Dennis Meadows, Jørgen Randers, and William Behrens, Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind (New York: Universe Books, 1972), https://www.donellameadows.org/wp-content/userfiles/Limits-to-Growth-digital-scan-version.pdf; Soylent Green, directed by Richard Fleischer (1973; Beverly Hills, CA: Metro-Goldwyn-Mayer).

(3) Angus Deaton, The Great Escape: Health, Wealth, and the Origins of Inequality (Princeton: Princeton University Press, 2013), https://press.princeton.edu/books/hardcover/9780691153544/the-great-escape.

(4) See Friedrich Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (September 1945): 519–30, https://www.jstor.org/stable/1809376.

(5) See George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (Spring 1971): 3–21, https://doi.org/10.2307/3003160.

(6) See Martin Casado and Katherine Boyle, “AI Talks Leave ‘Little Tech’ Out,” Wall Street Journal, May 14, 2024, https://www.wsj.com/articles/ai-talks-leave-little-tech-out-homeland-security-adversaries-open-source-board-46e3232d.

(7) See John H. Cochrane and Amit Seru, “Ending Bailouts, at Last,” Journal of Law, Economics and Policy 19, no. 2 (2024): 169–193, https://www.johncochrane.com/research-all/end-bailouts.

(8) Murthy v. Missouri, 603 U.S. _____ (2024).

(9) Megan McArdle, “Female Popes? Google’s Amusing AI Bias Underscores a Serious Problem,” Washington Post, February 27, 2024, https://www.washingtonpost.com/opinions/2024/02/27/google-gemini-bias-race-politics/.

(10) Zachary Evans, “Social Media App TikTok Censors Anti-China Content,” National Review, September 25, 2019, https://www.nationalreview.com/news/social-media-app-tiktok-censors-anti-china-content.

(11) Exec. Order No. 14110, 88 Fed. Reg. 75191 (October 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

(12) Ted Cruz and Phil Gramm, “Biden Wants to Put AI on a Leash,” Wall Street Journal, March 25, 2024, https://www.wsj.com/articles/biden-wants-to-put-artificial-intelligence-on-a-leash-progressive-regulation-45275102.

(13) “Unemployment Rate [UNRATE], May 2024,” U.S. Bureau of Labor Statistics, retrieved from FRED, Federal Reserve Bank of St. Louis, July 5, 2024, https://fred.stlouisfed.org/series/UNRATE.

(14) For more on this point, see John Cochrane, “Supply, Demand, AI and Humans,” The Grumpy Economist (blog), April 26, 2024, https://www.grumpy-economist.com/p/supply-demand-ai-and-humans.

(15) See the excellent, and troubling, analysis in Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War (Princeton: Princeton University Press, 2017); and Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb, “Are Ideas Getting Harder to Find?,” American Economic Review 110, no. 4 (April 2020): 1104–1144, https://www.aeaweb.org/articles?id=10.1257/aer.20180338.

(16) Daron Acemoglu, “Are We Ready for AI Creative Destruction?,” Project Syndicate, April 9, 2024, https://www.project-syndicate.org/commentary/ai-age-needs-more-nuanced-view-of-creative-destruction-disruptive-innovation-by-daron-acemog