Speed is being prioritized over scrutiny, with AI-generated models designed to justify interventions before they can be meaningfully challenged.
The U.S. military is funding artificial intelligence (AI) systems designed to drastically accelerate viral outbreak modeling—compressing a process that typically takes weeks into something that can be produced in days, then used to steer real-world interventions.
In other words, the faster the model, the less time there is to question whether the response is justified at all.
This acceleration builds on DARPA’s already-documented, pre-COVID pandemic infrastructure, designed to turn digital genetic sequences into synthesized viruses and mass-produced mRNA countermeasures on a fixed timeline.
DARPA’s Stated Problem: Pandemic Models Were Brittle, Opaque, & Slow
As SARS-CoV-2 radiated across the planet in 2020, epidemiologists scrambled to predict its spread—and its deadly consequences. Often, they turned to models that not only simulate viral transmission and hospitalization rates, but can also predict the effect of interventions: masks, vaccines, or travel bans.
But in addition to being computationally intensive, models in epidemiology and other disciplines can be black boxes: millions of lines of legacy code subject to finicky tunings by operators at research organizations scattered around the world. They don’t always provide clear guidance. “The models that are used are often kind of brittle and nonexplainable,” says Erica Briscoe, who was a program manager for the Automating Scientific Knowledge Extraction and Modeling (ASKEM) project at the Defense Advanced Research Projects Agency (DARPA).
DARPA’s own program manager is conceding that the models used to steer COVID-era responses were fragile and difficult to interpret.
Meaning: they’re not trying to slow down or restrain model-driven policy after COVID.
They’re trying to make the same kind of decision machinery run faster.
There’s “real potential” for them to speed up model building during an outbreak, says Mohsen Malekinejad, an epidemiologist at the University of California San Francisco who helped evaluate the ASKEM products. “In a pandemic, time is always our biggest constraint. We need to have the information. We need to have it fast,” he says. “We simply don’t have enough data-skilled modelers for every single emergence, or every different type of public health need.”
The Program: AI-Generated Outbreak Models on Demand
“Launched in 2022, the $29.4 million program aims to develop artificial intelligence (AI)-based tools that can make model building easier, faster, and more transparent.”
DARPA funded infrastructure that standardizes and accelerates outbreak modeling.
The emphasis is on speed, reproducibility, and usability by non-specialists, allowing policy-relevant models to be generated quickly, even when underlying assumptions are incomplete or contested.
How It Works: Papers & Notebooks → Equations → Models
“The program’s AI tools automate that coding, allowing researchers to construct, update, and combine models at a higher level of abstraction.”
By removing much of the technical friction involved in model construction, these tools make it easier to generate outbreak models that carry institutional weight, even when the scientific grounding is thin or uncertain.
“ASKEM teams designed AI systems that can consume scientific literature… and extract the equations and knowledge needed to create or update a given model.”
Scientific literature is converted directly into reusable model components, giving machine-parsed interpretations of research the ability to propagate quickly into decision-making frameworks.
“One ASKEM project developed a way to ingest those notebooks, extract the underlying mathematical descriptions, and turn them into a model.”
Informal reasoning and exploratory notebook work can be elevated into deployable models at speed, reducing the distance between preliminary thinking and authoritative outputs.
Intervention-Focused Modeling
“The resulting model integrated the viruses’ different transmission and seasonal patterns, while gauging the effects of interventions such as wearing masks and testing.”
The system is designed to evaluate intervention scenarios alongside disease dynamics, embedding policy considerations directly into the modeling process.
“Testers were asked to model the impact of a vaccination campaign on the cost of hospitalization for hepatitis A in a state’s unhoused drug users.”
These tools are oriented toward applied governance questions—cost, targeting, and campaign impact—rather than purely descriptive epidemiology.
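To make the mechanics concrete, here is a minimal sketch of the kind of compartmental model such pipelines assemble: a standard SIR system with a single intervention knob. This is an illustration with assumed parameter values, not ASKEM’s actual tooling or output.

```python
# Minimal SIR model with a hypothetical intervention parameter.
# A sketch of the kind of model ASKEM-style tools assemble from
# extracted equations -- not the ASKEM software itself.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, mask_effect):
    """Standard SIR dynamics; the intervention scales transmission."""
    S, I, R = y
    effective_beta = beta * (1.0 - mask_effect)  # assumed mask effect on beta
    dS = -effective_beta * S * I
    dI = effective_beta * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

t = np.linspace(0, 160, 161)   # days
y0 = [0.999, 0.001, 0.0]       # initial S, I, R population fractions
baseline = odeint(sir, y0, t, args=(0.3, 0.1, 0.0))
with_masks = odeint(sir, y0, t, args=(0.3, 0.1, 0.4))  # assume masks cut beta by 40%

print(f"Peak infected, no intervention: {baseline[:, 1].max():.1%}")
print(f"Peak infected, with masks:      {with_masks[:, 1].max():.1%}")
```

Note how little separates “extracted equations” from a policy-relevant claim: change one assumed parameter (here, how much masks cut transmission) and the projected case for the intervention changes with it.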
The Speed Claim: 83% Faster
“In the final results, testers found that the ASKEM tools, when pitted against standard modeling workflows, could create models 83% faster.”
Model generation is fast enough to fit within political and media timelines, reducing the opportunity for external review before results are acted upon.
“They were able to build practically useful models in a 40-hour work week for multiple problems.”
Once speed ceases to be the limiting factor, the pressure shifts toward rapid implementation rather than careful validation.
‘Transparency’ as an Internal Confidence Signal
“Because of the ASKEM models’ enhanced transparency, testers also found that decision-makers would be more confident in ASKEM’s outputs than in those of traditional models.”
Here, “transparency” functions less as a safeguard and more as a confidence amplifier for officials.
By making models legible enough to satisfy internal review, the system reduces friction within institutions, allowing officials to act more quickly while unresolved uncertainties remain embedded in the outputs.
“DARPA is working to find agencies or programs within the health, defense, and intelligence communities that might want to take advantage of ASKEM.”
Outbreak modeling is being positioned as a permanent national-security capability, integrated alongside defense and intelligence functions rather than treated as an ad hoc public-health exercise.
Bottom Line
DARPA is building a system that converts literature, assumptions, and exploratory analysis into outbreak models fast enough to guide interventions in near real time.
When speed is treated as the primary constraint, the window for scrutiny, dissent, and meaningful challenge necessarily collapses before those models are used to justify action.
Automated, cradle-to-grave traceability for “identifying and targeting the unreached.”
In a document funded by the Gates Foundation and published in the October Bulletin of the World Health Organization, the WHO is proposing a globally interoperable digital-identity infrastructure that permanently tracks every individual’s vaccination status from birth.
The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid tracking individuals from birth.
It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.
The proposed system:
integrates personally identifiable information with socioeconomic data such as “household income, ethnicity and religion,”
deploys artificial intelligence for “identifying and targeting the unreached” and “combating misinformation,”
and enables governments to use vaccination records as prerequisites for education, travel, and other services.
What the WHO Document Admits, in Their Own Words
To establish the framework, the authors define the program as nothing less than a restructuring of how governments govern:
“Digital transformation is the intentional, systematic implementation of integrated digital applications that change how governments plan, execute, measure and monitor programmes.”
They openly state the purpose:
“This transformation can accelerate progress towards the Immunization agenda 2030, which aims to ensure that everyone, everywhere, at every age, fully benefits from vaccines.”
This is the context for every policy recommendation that follows: a global vaccination compliance system, digitally enforced.
1. Birth-Registered Digital Identity & Life-Long Tracking
The document describes a system in which a newborn is automatically added to a national digital vaccine-tracking registry the moment their birth is recorded.
“When birth notification triggers the set-up of a personal digital immunization record, health workers know who to vaccinate before the child’s first contact with services.”
They specify that this digital identity contains personal identifiers:
“A newborn whose electronic immunization record is populated with personally identifiable information benefits because health workers can retrieve their records through unique identifiers or demographic details, generate lists of unvaccinated children and remind parents to bring them for vaccination.”
This is automated, cradle-to-grave traceability.
The system also enables surveillance across all locations:
“[W]ith a national electronic immunization record, a child can be followed up anywhere within the country and referred electronically from one health facility to another.”
This is mobility tracking tied to medical compliance.
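Mechanically, what the document describes is a simple query over a national registry. A toy sketch, with an invented schema and invented records:

```python
# Sketch of the "generate lists of unvaccinated children" query the
# document describes. The registry schema and records are invented.
registry = [
    {"id": "C-001", "dob": "2024-03-01", "district": "North", "doses": ["hepB"]},
    {"id": "C-002", "dob": "2024-04-11", "district": "North", "doses": []},
    {"id": "C-003", "dob": "2024-05-20", "district": "South", "doses": []},
]

def unvaccinated_in(district):
    """Everything needed for follow-up -- or enforcement -- is one filter away."""
    return [child["id"] for child in registry
            if child["district"] == district and not child["doses"]]

print(unvaccinated_in("North"))  # ['C-002']
```

Once such a registry exists, producing a list of non-compliant individuals is a one-line filter, which is precisely the document’s selling point and precisely what makes it a compliance instrument.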
2. Linking Vaccine Records to Income, Ethnicity, Religion, & Social Programs
The document explicitly endorses merging vaccine status with socioeconomic data.
“Registers that record household asset data for social protection programmes enable monitoring of vaccination coverage by socioeconomic status such as household income, ethnicity and religion.”
This is demographic stratification attached to a compliance database.
3. Conditioning Access to Schooling, Travel, & Services on Digital Vaccine Proof
The WHO acknowledges and encourages systems that require vaccine passes for core civil functions:
“Some countries require proof of vaccination for children to access daycare and education, and evidence of other vaccinations is often required for international travel.”
They then underline why digital formats are preferred:
“Digital records and certificates are traceable and shareable.”
Digital traceability means enforceability.
4. Using Digital Systems to Prevent ‘Wasting Vaccine on Already Immune Children’
The authors describe a key rationale:
“Children’s vaccination status is not checked during campaigns, a practice that wastes vaccine on already immune children and exposes them to the risk of adverse events.”
Their solution is automated verification to maximize vaccination throughput.
The digital system is positioned as both a logistical enhancer and a compliance enforcer:
“National electronic immunization records could transform how measles campaigns and supplementary immunization activities are conducted by enabling on-site confirmation of vaccination status.”
5. AI Systems to Target Individuals, Identify ‘Unreached,’ & Combat ‘Misinformation’
The WHO document openly promotes artificial intelligence to shape public behavior:
“AI… demonstrate[s] its utility in identifying and targeting the unreached, identifying critical service bottlenecks, combating misinformation and optimizing task management.”
They explain additional planned uses:
“Additional strategic applications include analysing population-level data, predicting service needs and spread of disease, identifying barriers to immunization, and enhancing nutrition and health status assessments via mobile technology.”
This is predictive analytics paired with influence operations.
6. Global Interoperability Standards for International Data Exchange
The authors call for a unified international data standard:
“Recognize fast healthcare interoperability resources… as the global standard for exchange of health data.”
Translated: vaccine-linked personal identity data must be globally shareable.
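FHIR (Fast Healthcare Interoperability Resources) is a real, published HL7 standard. For concreteness, here is a minimal sketch of what an interoperable immunization record looks like, written as a Python structure; every identifier, code, and reference below is invented for illustration:

```python
# A minimal sketch of an HL7 FHIR "Immunization" resource, the kind of
# globally shareable record the document endorses. All identifiers and
# values are invented; consult the FHIR specification for real usage.
immunization_record = {
    "resourceType": "Immunization",
    "status": "completed",
    "vaccineCode": {
        "coding": [{
            "system": "http://hl7.org/fhir/sid/cvx",  # CVX vaccine code system
            "code": "08",                              # hypothetical vaccine code
        }]
    },
    "patient": {"reference": "Patient/example-id-123"},  # link to a lifelong identity
    "occurrenceDateTime": "2025-01-15",
    "lotNumber": "LOT-0001",
}
```

Because the schema is standardized, any conforming system in any country can parse the same record, which is exactly what makes it “traceable and shareable.”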
They describe the need for “digital public infrastructure”:
“Digital public infrastructure is a foundation and catalyst for the digital transformation of primary health care.”
This is the architecture of a global vaccination-compliance network.
7. Surveillance Expansion Into Everyday Interactions
The WHO outlines a surveillance model that activates whenever a child interacts with any health or community service:
“CHWs who identify children during home visits and other community activities can refer them for vaccination through an electronic immunization registry or electronic child health record.”
This means non-clinical community actors participating in vaccination-compliance identification.
The authors also describe cross-service integration:
“Under-vaccinated children can be reached when CHWs and facility-based providers providing other services collaborate and communicate around individual children in the same electronic child health records.”
Every point of contact becomes a checkpoint.
8. Behavior-Shaping Through Alerts, Reminders, and Social Monitoring
The WHO endorses using digital messaging to overcome “intention–action gaps”:
“Direct communication with parents in the form of alerts, reminders and information helps overcome the intention–action gap.”
They also prescribe digital surveillance of public sentiment:
“Active detection and response to misinformation in social media build trust and demand.”
This is official justification for monitoring and countering speech.
9. Acknowledgment of Global Donor Control—Including Gates Foundation
At the very end of the article, the financial architect is stated plainly:
“This work was supported by the Gates Foundation [INV-016137].”
This confirms the alignment with Gates-backed global ID and vaccine-registry initiatives operating through Gavi, the World Bank, UNICEF, and WHO.
Bottom Line
In the WHO’s own words:
“Digital transformation is a unique opportunity to address many longstanding challenges in immunization… now is the time for bold, new approaches.”
And:
“Stakeholders… should embrace digital transformation as an enabler for achieving the ambitious Immunization agenda 2030 goals.”
This is a comprehensive proposal for a global digital-identity system, permanently linked to vaccine status, integrated with demographic and socioeconomic data, enforced through AI-driven surveillance, and designed for international interoperability.
It is not speculative, but written in plain language, funded by the Gates Foundation, and published in the World Health Organization’s own journal.
Microsoft-led study shows AI can design tens of thousands of toxin variants—including ricin and botulinum—that DNA company safety checks don’t catch, raising fears they could be purchased undetected.
A peer-reviewed Science study has revealed that artificial intelligence (AI) can design lethal toxin blueprints that slip past the safety systems used by DNA vendors—the very safeguards intended to stop bad actors from ordering genetic material for bioweapons.
Science published an article explaining the study’s findings, confirming: “DNA vendors typically use screening software to flag sequences that might be used to cause harm. But the researchers report that this software failed to catch many of their AI-designed genes—one tool missed more than 75% of the potential toxins.”
In simple terms, if someone today submitted an order to a gene synthesis company for one of these AI-designed toxin sequences, the system that’s supposed to block it would likely approve it.
The top gene synthesis companies with a major U.S. presence include Twist Bioscience, Integrated DNA Technologies (IDT), GenScript, Thermo Fisher Scientific’s GeneArt division, Azenta/Genewiz, ATUM (formerly DNA2.0), and Eurofins Genomics.
Twist Bioscience Spins ‘Leadership’ After Embarrassing Failure
In the wake of the Science revelations, one of the largest U.S. DNA synthesis companies, Twist Bioscience, rushed out a press release attempting to frame the debacle as proof of its “leadership” in biosecurity.
The company admitted the study was a “first-of-its-kind” red-team exercise showing that AI-designed toxins escaped detection by standard biosecurity screening software.
But instead of highlighting the alarming 75% failure rate, Twist described its role as “a proactive approach to safeguard public health, providing an example for other industries to follow.”
CEO Emily Leproust tried to reassure investors, insisting: “For known proteins and sequences, industry best practices for biosecurity screening are robust and highly effective. However, as AI capabilities evolve, screening practices must evolve just as quickly.”
That is the tell.
These screening systems only work against already-known toxins—the very ones that AI is now mutating into endless new forms.
In other words, the locks on the door are sturdy only if the burglar is polite enough to knock with a familiar key.
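A toy sketch makes that failure mode concrete. Screening that looks for overlap with known sequences degrades rapidly as a variant diverges, even if the variant’s function were preserved. The snippet below uses random strings rather than real sequences, and it is not any vendor’s actual screening logic:

```python
# Toy illustration of why known-sequence screening misses paraphrased
# variants. Random strings stand in for sequences; this is not any
# vendor's screening software.
import random

def kmers(seq, k=8):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=8):
    """Fraction of shared k-mers, a crude similarity score."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

random.seed(0)
known = "".join(random.choice("ACGT") for _ in range(300))  # stand-in "known" sequence

for rate in (0.02, 0.10, 0.25):
    variant = "".join(
        random.choice("ACGT") if random.random() < rate else base
        for base in known
    )
    print(f"mutation rate {rate:.0%}: similarity to known sequence {jaccard(known, variant):.2f}")
```

An AI model that rewrites a sequence while preserving its predicted function pushes variants into exactly this low-similarity regime, which is why match-based screens miss them.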
Microsoft’s own chief scientist Eric Horvitz admitted the problem plainly: “AI advances are fueling breakthroughs in biology and medicine, yet with new power comes the responsibility for vigilance and thoughtful risk management.”
The subtext is clear—these are weapons-grade blueprints, and the systems meant to stop them have failed.
Twist wants the public to believe that private “collaboration” with tech giants is enough to protect the world.
But the hard fact, buried beneath their press release optimism, is that the same study they co-authored proved their industry’s defenses could not prevent lethal toxin sequences from slipping through.
Instead of taking accountability, Twist shifted the narrative to “responsible innovation,” downplaying the reality that thousands of bioweapon blueprints could still be ordered undetected today.
How the Experiment Worked
The Science study was led by Microsoft bioengineer Bruce Wittmann.
“Wittmann and his Microsoft colleagues wanted to know what would happen if they ordered the DNA sequences that code for these proteins from companies that synthesize nucleic acids,” the article explains.
They designed more than 70,000 DNA sequences that mimicked notorious toxins like ricin, botulinum toxin, and Shiga toxin.
“Computer models suggested that at least some of these alternatives would also be toxic.”
Wittmann admitted: “The knowledge that I had access to, and stewardship over these proteins was, on a human level, a notable burden.”
Translation: with only AI tools, a single research team generated tens of thousands of potential bioweapon recipes—knowing some could be lethal if produced.
The Screening Failure
The group then tested whether DNA companies’ order-screening software would flag these toxin blueprints.
The results were devastating.
“The tools failed to flag many of these sequences as problematic. Their performance varied widely. One tool flagged just 23% of the sequences.”
That means nearly 8 out of 10 AI-engineered poisons could have been ordered and delivered without anyone noticing.
Even the most effective tool caught just 70%.
“One of the screening tools flagged 70% of the sequences, and its developer chose not to make any changes to improve the software.”
The others took months to quietly patch their systems.
“We were all very quiet about it,” said one expert quoted in the paper.
The ‘Fix’—But Still Failing
After upgrades, detection improved but remained incomplete.
“The systems flagged 72% of Wittmann’s AI-generated sequences, on average, including 97% of the sequences that models rated most likely to generate toxins.”
But that still leaves thousands of engineered toxin blueprints invisible to safeguards.
Even a 3% failure rate equals over 2,000 AI-generated poison sequences slipping through undetected.
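The arithmetic behind that figure, taken against the roughly 70,000 sequences generated in the study:

$$0.03 \times 70{,}000 = 2{,}100$$

sequences that would still pass undetected even at a 97% detection rate.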
A Gaping Hole in the Supply Chain
Even more alarming, the article confirms: “Some DNA vendors, accounting for perhaps 20% of the market, don’t screen their orders at all.”
That means roughly a fifth of the global synthetic DNA market may fill any order, no questions asked.
Expert Warnings
Jaime Yassif of the Nuclear Threat Initiative said: “It’s just the beginning. AI capabilities are going to evolve and be able to design more and more complex living systems, and our DNA synthesis screening capabilities are going to have to continue to evolve to keep up with that.”
In other words: AI is moving faster than the safeguards.
Stanford researcher Drew Endy went further: “I wish people would wake up a little bit… Today, nations are accusing one another of having offensive bioweapons programs… This is the historical pattern that happened 100 years ago that led to actual bioweapons programs. We have to de-escalate this.”
That’s a blunt warning that this is not just about terrorists—it’s about governments running clandestine bioweapons labs.
What It Means
The authors did not physically manufacture the toxins.
“That would have required ordering the genes from DNA vendors and inserting them into bacteria or yeast to produce the proteins of interest. And doing so could be considered a violation of the Biological Weapons Convention,” the article explains.
But the point is clear: if Microsoft researchers could design and slip tens of thousands of toxin blueprints past DNA vendor safeguards, others could too—and they might not stop at the design stage.
Bottom Line
The Science paper proves the locks on the door of biosecurity are broken.
AI can mass-generate toxin blueprints.
DNA vendors’ screening software misses more than 75% of them in the worst case.
Some companies don’t screen orders at all.
The implications are stark: ordering DNA for a custom-made bioweapon may already be possible through legitimate commercial suppliers, and the public would never know until it was too late.
The Federal Trade Commission (FTC), under Chair Lina Khan, has been transformed from an independent regulatory body into a tool for furthering the Biden-Harris regime’s ideological objectives—at least, that’s what the evidence shows. This shift in focus has fundamentally disrupted the landscape of startup innovation in Silicon Valley. That ecosystem once thrived because young startups could rely on eventual acquisition as a viable exit strategy; the FTC’s stringent anti-merger policies under Chair Khan have instead introduced what amounts to a tax on entrepreneurship. And this so-called tax is not just bureaucratic red tape; it’s a direct impediment to the dream of every startup: turning a novel idea into a profitable venture, even if the ultimate goal is simply to get acquired by an established player like Google, Amazon or Microsoft. Khan’s FTC, steered by Executive Order 14036, has taken on a European-style aversion to corporate size itself, substituting broad market condemnation for nuanced analysis.
FTC Chair Lina Khan’s Aggressive Stance on Antitrust Enforcement
FTC Chair Lina Khan’s aggressive stance on antitrust enforcement has sparked considerable criticism, especially from venture capitalists and tech advocates who argue that her policies discourage innovation and undermine the startup ecosystem. Khan’s approach prioritizes preemptive action against acquisitions—particularly in tech—focusing on preventing monopolies before they form. This shift marks a departure from traditional antitrust enforcement, which usually only intervenes after a monopoly’s power is well-established. Critics, including venture capital leaders like Marc Andreessen, argue that blocking acquisitions prevents startups from being acquired by larger companies, which often serve as their primary exit strategy. They contend that fewer acquisitions make startups less attractive investments, reducing funding and limiting opportunities for innovation.
The National Venture Capital Association (NVCA) and others in Silicon Valley have also highlighted that antitrust actions, such as those against Meta’s attempted acquisition of VR company Within, can deter investment and reduce valuations of startups due to diminished acquisition prospects. According to the NVCA, venture funding relies heavily on the potential for acquisition, with nearly 90% of venture-backed exits occurring through acquisitions rather than IPOs. The broader concern is that restricting these exit paths disincentivizes startups from entering the market in the first place, thereby curbing technological advancement and economic growth.
Moreover, critics argue that Khan’s policies could inadvertently increase market concentration by stifling small companies before they can scale, making them less likely to challenge established giants independently. With reduced venture capital investment and the departure of some smaller players due to regulatory barriers, Khan’s policies might unintentionally favor dominant firms rather than foster competition and consumer choice.
However, Khan and her supporters maintain that unchecked acquisitions often lead to “killer acquisitions,” where larger firms acquire startups solely to neutralize potential competition. She argues that her approach is a necessary corrective to the “winner-takes-all” dynamics prevalent in tech, aiming to ensure a competitive landscape that fosters genuine innovation rather than monopolistic control.
These perspectives reflect a complex and contentious debate over the FTC’s role in regulating competition. While proponents see Khan’s approach as protecting long-term market health, detractors warn it could stifle innovation and prevent the growth of future tech leaders.
The Acquisition Ecosystem: Once a Fountainhead of Innovation
Historically, Silicon Valley thrived not because every startup had to become a multi-billion-dollar business, but because every startup had options. Investors could pour money into new ventures knowing there were numerous paths to return—whether through growth into a unicorn or by being acquired. Founders could innovate boldly, focusing on niche solutions or enhancing existing products, with the knowledge that even modest success could lead to a rewarding acquisition—a “soft landing” that allowed them to contribute within larger companies and eventually build again. This model made sense: Small tech firms, brimming with ideas but short on scale, married perfectly with larger corporations that could apply their resources to scale up those ideas. Everyone won—investors, founders, consumers and even regulators who wanted thriving markets.
Chair Lina Khan, however, seemingly has other plans. Under her direction, the FTC has cast a chilling effect over mergers and acquisitions (M&A) within the tech sector. Even mergers that would create clear consumer benefits—providing resources to enhance existing products, or even, paradoxically, increasing competition with major international firms—are subjected to a labyrinthine review process, one seemingly crafted more for obstruction than for adjudicating antitrust concerns in good faith.
The House Oversight Committee Report: A Chronicle of FTC Overreach
James Comer (R-Ky.), the Chairman of the House Committee on Oversight and Accountability, delivered a resounding indictment of Lina Khan’s FTC in his recent report titled, “The Federal Trade Commission Under Chair Lina Khan: Undue Biden-Harris White House Influence and Sweeping Destruction of Agency Norms”. The report paints a damning picture of Khan’s tenure, showing an agency unmoored from its original purpose and openly compliant with the political will of the White House. As noted in the report, Khan has “trampled on principles of due process, respect for the rule of law, and ethical standards to achieve her ideologically fueled ends at the FTC.” This is no light accusation; the charge is that Khan has taken an institution that was supposed to be independent and turned it into an ideological battering ram against American entrepreneurship.
One critical example lies in Khan’s approach to mergers like that of Meta’s proposed acquisition of Within Unlimited, and Microsoft’s bid for Activision Blizzard. Both cases—designed to challenge “potential competition” and vertical integration respectively—ended in resounding losses for Khan’s FTC in federal court. It wasn’t just that the FTC lost; it’s that the courts found the commission had failed to provide even basic grounds for its arguments against the mergers. Judge Jacqueline Scott Corley, a Biden appointee, pointed out that the FTC had not even raised “serious questions regarding whether the proposed merger is likely to substantially lessen competition.” The failures are not merely tactical errors; they reveal the degree to which Khan’s approach deviates from legal norms and reflects ideological zealotry.
Prominent Deals Blocked or Challenged Under Khan’s FTC
Under Lina Khan’s leadership, the FTC has halted or delayed several high-profile acquisitions across various sectors, resulting in significant backlash from venture capitalists (VCs) and tech innovators alike. Here are some prominent deals Khan’s FTC has blocked or challenged, drawing criticism that her tactics stifle competition and innovation:
Meta’s Attempted Acquisition of Within (2022): The FTC filed to prevent Meta’s acquisition of Within, a VR fitness app, aiming to block what it deemed an anti-competitive move in the nascent virtual reality sector. For Meta, this acquisition was integral to its Metaverse ambitions, but Khan’s FTC argued it would restrict competition in VR fitness apps and stunt innovation in the VR space. Tech proponents argue this block punishes smaller companies needing investment to grow and denies the market innovative tech synergies in the VR landscape.
Illumina and Grail Deal: Illumina, a genetic sequencing giant, attempted a $7.1 billion acquisition of Grail, a cancer detection company. Khan’s FTC argued this merger would hinder competition in the emerging cancer screening market by consolidating too much control under Illumina. The legal fight took years, and Illumina ultimately abandoned the acquisition in 2024, largely because Khan’s FTC framed it as anti-competitive, even though Grail stood to benefit from Illumina’s resources and technology integration.
Kroger and Albertsons Merger: Recently, the FTC scrutinized Kroger’s $24.6 billion bid to acquire Albertsons, a merger that Kroger argued would enable it to compete better against retail behemoths like Walmart and Amazon. Khan’s FTC, however, is reviewing the merger, expressing concerns that it might raise grocery prices by reducing competition. Critics point out that stopping or delaying such mergers harms consumers who stand to benefit from lower costs in a consolidated operation. This case illustrates Khan’s expansive view on the anti-competitive risks of mergers, even in non-tech sectors.
Semiconductor and Defense Sector Acquisitions: Under Khan, the FTC also blocked or impeded multiple mergers in the semiconductor and defense sectors, though details of each deal remain confidential. Analysts argue that these sectors, which are critical for national security and innovation, suffer from regulatory overreach that may restrict tech advancement in microchips, which are crucial to industries worldwide. The approach fuels investor concerns, making VCs wary of funding startups in areas where acquisition by larger firms is the most viable exit strategy.
Khan’s FTC has arguably gone beyond traditional regulatory scope, blocking even speculative mergers to prevent potential monopolies before they materialize. This radical strategy, while aimed at preventing Big Tech consolidation, is viewed by detractors as throttling the innovation ecosystem, particularly for startups that depend on the potential of acquisition. VCs have criticized this “anti-innovation” approach, arguing that without acquisition options, startups lose their key growth pathways, a blow to both entrepreneurship and consumer choice in emerging markets.
Emulating Europe’s Bureaucratic Failures
Perhaps the most egregious example of this shift is the FTC’s partnership with European regulators in implementing the EU’s Digital Markets Act (DMA). The DMA is a distinctly anti-American, protectionist piece of legislation aimed at hobbling the success of U.S. tech giants like Google, Apple and Amazon in favor of European firms. Despite its thinly veiled animosity towards American companies, Khan’s FTC took the inexplicable step of actively assisting with the DMA’s implementation, sending FTC staffers to Europe to guide its adoption. According to Maria Coppola, Director of the FTC’s Office of International Affairs, the FTC undertook these actions partly because “legislative proposals… were, like, cut and paste from European legislation,” and they wanted to be prepared if similar proposals were adopted here in the United States. Essentially, the FTC appears to be helping Europe impose rules that its own courts, and even the White House, recognize as harmful to American business.
This pattern reflects an abdication of the FTC’s duty to American consumers and a disservice to the principles upon which American antitrust law is built. Our laws are intended to prevent harm to competition, not punish success or hamstring companies simply for being large. The European model, which imposes punitive measures on companies for being “gatekeepers,” is at odds with decades of American antitrust principles. By adopting this approach, the FTC under Khan isn’t just obstructing mergers; it’s sabotaging American firms in the global market, putting them at a disadvantage compared to foreign competitors.
The cumulative effect of Khan’s anti-merger crusade is a dramatic chilling effect on startup culture in Silicon Valley. Venture capitalists are less willing to take a gamble on a startup if the most likely exit—acquisition—is being systematically blocked by the FTC. The idea that every startup must turn into a standalone, billion-dollar enterprise or be considered a failure is not just laughable; it’s dangerous to the spirit of American innovation. Some of the most revolutionary technological features we use today came from smaller startups acquired by major players: Google acquiring YouTube, Facebook buying Instagram, and Amazon absorbing Twitch—all ventures that benefited from larger corporate backing.
Chair Khan’s direction has reversed a longstanding culture of innovation and transformation. Instead of bolstering companies with resources and scale, her FTC—under the influence of Biden-Harris policies—seems to prefer a European-style system of perpetual market fragmentation. Not surprisingly, the Silicon Valley that was once the envy of the world for its ingenuity and entrepreneurial daring is seeing fewer startups founded, fewer funded, and fewer able to make a mark on the world.
The Stakes for the 2024 Election: A Vote for Innovation
The upcoming 2024 election is crucial for determining the future of American innovation. If Kamala Harris is elected president, it is highly likely that Lina Khan’s destructive antitrust crusade will continue, further stifling the startup ecosystem and preventing the growth of new technologies that could enhance American competitiveness. In contrast, just this morning, Elon Musk pointed out that Donald Trump has committed to firing Lina Khan on his first day in office—a move that many see as essential to restoring sanity to the FTC and allowing American innovation to thrive again.
It’s time for voters to understand the stakes clearly: the choice is between an administration that supports a bureaucratic assault on entrepreneurship or one that aims to restore opportunities for innovators, founders and venture capitalists. The decision will determine whether America continues to be the global hub of innovation or slips into stagnation under the weight of needless regulation and ideological rigidity. A vote for Donald Trump is a vote for growth, opportunity and the restoration of an environment where the American dream—especially for entrepreneurs—can once again flourish.
The impact of these policies extends beyond just Silicon Valley; it affects every American who benefits from a thriving, competitive economy. The choice is simple: either continue with the Biden-Harris regime’s heavy-handed, anti-business approach or pivot toward policies that encourage growth and prosperity. If we want to see innovation thrive, the tech sector rebound and opportunities expand for all, it is crucial that Lina Khan and her destructive policies are shown the door. Let’s vote for innovation and a prosperous future—let’s vote to bring common sense back to Washington.
This essay argues that law and regulation have never successfully diagnosed and prevented the social, political, and economic ills of new technology. AI is no different. AI regulation poses a greater threat to democracy than AI itself, as governments are eager to use regulation to censor information. Free competition in civil society, media, and academia will address any ill effects of AI, as it has for previous technological revolutions—not preemptive regulation.
“AI poses a threat to democracy and society. It must be extensively regulated.” Words to that effect are a common sentiment. They must be kidding.
Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology. Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?
Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus.
Have people ever correctly forecast social and political changes, from any set of causes? Representative democracy and liberal society have, in their slow progress, waxed and waned, to put it mildly. Did our predecessors in 1910 see 70 years of communist dictatorship about to envelop Russia? Did they understand in 1925 the catastrophe waiting for Germany?
Society is transforming rapidly. Birth rates are plummeting around the globe. The U.S. political system seems to be coming apart at the seams with unprecedented polarization, a busting of norms, and the decline of our institutions. Does anyone really know why?
“The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong.”
The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong. Thomas Malthus predicted, plausibly, that the technological innovations of the late 1700s would lead to widespread starvation. He was spectacularly wrong. Marx thought industrialization would necessarily lead to immiseration of the proletariat and communism. He was spectacularly wrong. Automobiles did not destroy American morals. Comic books and TV did not rot young minds.
Our more neurotic age began in the 1970s, with the widespread view that overpopulation and dwindling natural resources would lead to an economic and political hellscape, views put forth, for example, in the Club of Rome report and movies like Soylent Green. (2) They were spectacularly wrong. China acted on the “population bomb” with the sort of coercion our worriers cheer for, to its current great regret. Our new worry is global population collapse. Resource prices are lower than ever, the U.S. is an energy exporter, and people worry that the “climate crisis” from too much fossil fuel will end Western civilization, not “peak oil.” Yet demographics and natural resources are orders of magnitude more predictable than whatever AI will be and what dangers it poses to democracy and society.
“Millenarian” stems from those who worried that the world would end in the year 1000, and that people had better get serious about repenting their sins. They were wrong then, but much of the impulse to worry about the apocalypse, then to call for massive changes, usually with “us” taking charge, is alive today.
Yes, new technologies often have turbulent effects, dangers, and social or political implications. But that’s not the question. Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology?
There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” (3) The Europeans moved in.
Genetic modification was feared to produce “frankenfoods,” or uncontrollable biological problems. As a result of vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans, including of vitamin A-enhanced golden rice, which could save the eyesight of millions, are tragically spreading to poorer countries. Most of Europe went on to ban hydraulic fracking. U.S. energy policy regulators didn’t have similar power to stop it, though they would have if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned federal funding for research on new embryonic stem cell lines. Who knows what we might have learned.
Climate change is, to many, the current threat to civilization, society, and democracy (the latter from worry about “climate justice” and waves of “climate refugee” immigrants). However much you believe in the social and political impacts—much less certain than the meteorological ones—one thing is for sure: Trillion-dollar subsidies for electric cars, made in the U.S., with U.S. materials, U.S. union labor, and page after page of restrictive rules, along with 100% tariffs against much cheaper Chinese electric cars, will not save the planet—especially once you realize that every drop of oil saved by a new electric car is freed up to be used by someone else, and at astronomical cost. Whether you’re Bjorn Lomborg or Greta Thunberg on climate change, the regulatory state is failing.
We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of vague dangers of AI. We have almost certainly just experienced the first global pandemic due to a human-engineered virus. It turns out that gain-of-function research was the one needing regulating. Manipulated viruses, not GMO corn, were the biological danger.
I do not deny potential dangers of AI. The point is that the advocated tool, the machinery of the regulatory state, guided by people like us, has never been able to see social, economic, and political dangers of technical change, or to do anything constructive about them ahead of time, and is surely just as unable to do so now. The size of the problem does not justify deploying completely ineffective tools.
Preemptive regulation is even less likely to work. AI is said to be an existential threat, fancier versions of “the robots will take over,” needing preemptive “safety” regulation before we even know what AI can do, and before dangers reveal themselves.
Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they came out, and judging costs vs. benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.
I do not claim that all regulation is bad. The Clean Air and Clean Water Acts of the early 1970s were quite successful. But consider all the ways in which they are so different from AI regulation. The dangers of air pollution were known. The nature of the “market failure,” classic externalities, was well understood. The technologies available for abatement were well understood. The problem was local. The results were measurable. None of those conditions is remotely true for regulating AI, its “safety,” its economic impacts, or its impacts on society or democratic politics. Environmental regulation is also an example of successful ex post rather than preemptive regulation. Industrial society developed, we discovered safety and environmental problems, and the political system fixed those problems, at tolerable cost, without losing the great benefits. If our regulators had considered Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.
“If our regulators had considered Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.”
Who will regulate?
Calls for regulation usually come in the passive voice (“AI must be regulated”), leaving open the question of just who is going to do this regulating.
We are all taught in first-year economics classes a litany of “market failures” remediable by far-sighted, dispassionate, and perfectly informed “regulators.” That normative analysis is not logically incorrect. But it abjectly fails to explain the regulation we have now, or how our regulatory bodies behave, what they are capable of, and when they fail. The question for regulating AI is not what an author, appointing him or herself benevolent dictator for a day, would wish to see done. The question is what our legal, regulatory, or executive apparatus can even vaguely hope to deliver, buttressed by analysis of its successes and failures in the past. What can our regulatory institutions do? How have they performed in the past?
Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and defend profits accruing to their multibillion dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure.
Even successful regulation, such as the first wave of environmental regulation, is now routinely perverted for other ends. People bring environmental lawsuits to endlessly delay projects they dislike for other reasons.
The basic competence of regulatory agencies is now in doubt. On the heels of the massive failures of financial regulation in 2008 and again in 2021, (7) and the obscene failures of public health in 2020–2022, do we really think this institutional machinery can artfully guide the development of one of the most uncertain and consequential technologies of the last century?
And all of my examples asked regulators only to address economic issues, or easily measured environmental issues. Is there any historical case in which the social and political implications of any technology were successfully guided by regulation?
“Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators.”
It is AI regulation, not AI, that threatens democracy.
Large Language Models (LLMs) are currently the most visible face of AI. They are fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”
The worriers often had a point. Gutenberg’s moveable type arguably led to the Protestant Reformation. Luther was the social influencer of his age, writing pamphlet after pamphlet of what the Catholic Church certainly regarded as “misinformation.” The church “regulated” with widespread censorship where it could. Would more censorship, or “regulating” the development of printing, have been good? The political and social consequences of the Reformation were profound, not least a century of disastrous warfare. But nobody at the time saw what they would be. They were more concerned with salvation. And moveable type also made the scientific journal and the Enlightenment possible, spreading a lot of good information along with “misinformation.” The printing press arguably was a crucial ingredient for democracy, by allowing the spread of those then-heretical ideas. The founding generation of the U.S. had libraries full of classical and enlightenment books that they would not have had without printing.
More recently, newspapers, movies, radio, and TV have been influential in the spread of social and political ideas, both good and bad. Starting in the 1930s, the U.S. had extensive regulation, amounting to censorship, of radio, movies, and TV. Content was regulated, licenses given under stringent rules. Would further empowering U.S. censors to worry about “social stability” have been helpful or harmful in the slow liberalization of American society? Was any of this successful in promoting democracy, or just in silencing the many oppressed voices of the era? They surely would have tried to stifle, not promote, the civil rights and anti-Vietnam War movements, as the FBI did.
Freer communication by and large is central to the spread of representative democracy and prosperity. And the contents of that communication are frequently wrong or disturbing, and usually profoundly offensive to the elites who run the regulatory state. It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge.
“Regulating” communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether from shortwave radio, samizdat publications, text messages, Facebook, Instagram, and now AI, has always been a tool benefiting freedom.
Yes, AI can lie and produce “deepfakes.” The brief era in which a photograph or video was, by itself, evidence that something happened, because photographs and videos were difficult to doctor, is over. Society and democracy will survive.
“Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.”
AI can certainly be tuned to favor one or the other political view. Look only at Google’s Gemini misadventure. (9) Try to get any of the currently available LLMs to report controversial views on hot-button issues, even medical advice. Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support eventually might win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify if something is real. If an algorithm is feeding people misinformation, as TikTok is accused of doing on behalf of Chinese censors, (10) count on its competitors, if allowed to do so, to scream that from the rafters and attract people to a better product.
Regulation naturally bends to political ends. The Biden Executive Order on AI insists that “all workers need a seat at the table, including through collective bargaining,” and “AI development should be built on the views of workers, labor unions, educators, and employers.” (11) Writing in the Wall Street Journal, Ted Cruz and Phil Gramm report: “Mr. Biden’s separate AI Bill of Rights claims to advance ‘racial equity and support for underserved communities.’ AI must also be used to ‘improve environmental and social outcomes,’ to ‘mitigate climate change risk,’ and to facilitate ‘building an equitable clean energy economy.’” (12) All worthy goals, perhaps, but one must admit those are somewhat partisan goals not narrowly tailored to scientifically understood AI risks. And if you like these, imagine what the likely Trump executive order on AI will look like.
Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.
Economics
What about jobs? It is said that once AI comes along, we’ll all be out of work. And exactly this was said of just about every innovation for the last millennium. Technology does disrupt. Mechanized looms in the 1800s did lower wages for skilled weavers, even as they provided a reprieve from the misery of farmwork for unskilled workers. The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives. Special regulations to help people displaced by AI, or China, or other newsworthy causes are counterproductive.
But after three centuries of labor-saving innovation, the unemployment rate is 4%. (13) In 1900, a third of Americans worked on farms. Then the tractor was invented. People went on to better jobs at higher wages. The automobile did not lead to massive unemployment of horse-drivers. In the 1970s and 1980s, women entered the workforce in large numbers. Just then, the word processor and Xerox machine slashed demand for secretaries. Female employment did not crash. ATMs increased bank employment. Tellers were displaced, but bank branches became cheaper to operate, so banks opened more of them. AI is not qualitatively different in this regard.
One activity will be severely disrupted: Essays like this one. ChatGPT-5, please write 4,000 words on AI regulation, society, and democracy, in the voice of the Grumpy Economist…(I was tempted!). But the same economic principle applies: Reduction in cost will lead to a massive expansion in supply. Revenues can even go up if people want to read it, i.e., if demand is elastic enough. (14) And perhaps authors like me can spend more time on deeper contributions.
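The textbook relation behind “elastic enough” (a standard microeconomic result, not specific to AI): with revenue $R = pq$ and price elasticity of demand $\varepsilon = \frac{dq}{dp}\cdot\frac{p}{q}$,

$$\frac{dR}{dp} = q + p\frac{dq}{dp} = q\,(1 + \varepsilon),$$

so a cost-driven price cut raises total revenue whenever $\varepsilon < -1$, that is, whenever demand is elastic.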
The big story of AI will be how it makes workers more productive. Imagine you’re an undertrained educator or nurse practitioner in a village in India or Africa. With an AI companion, you can perform at a much higher level. AI tools will likely raise the wages and productivity of less-skilled workers, by more easily spreading around the knowledge and analytical abilities of the best ones.
AI is one of the most promising technical innovations of recent decades. Since the social-media wave of the early 2000s, Silicon Valley has been trying to figure out what’s next. It wasn’t crypto. Now we know. AI promises to unlock tremendous advances. Consider only machine learning plus genetics and ponder the consequent huge advances coming in health. But nobody really knows yet what it can do, or how to apply it. It was a century from Franklin’s kite to the electric light bulb, and another century to the microprocessor and the electric car.
A broad controversy has erupted in economics: whether frontier growth is over or dramatically slowing down because we have run out of ideas. (15) AI is a great hope this is not true. Historically, ideas became harder to find in existing technologies. And then, as it seemed growth would peter out, something new came along. Steam engines plateaued after a century. Then diesel, electric, and airplanes came along. As birthrates continue to decline, the issue is not too few jobs, but too few people. Artificial “people” may be coming along just in time!
“It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge.”
Conclusion
As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes,
We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for. (16)
The first paragraph is correct. But the logical implication is the converse—if relations are “complex” and consequences “unforeseen,” the machinery of our political and regulatory state is incapable of doing anything about it. The second paragraph epitomizes the fuzzy thinking of passive voice. Who is this “we”? How much more “attention” can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this “getting”? Who is to determine “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellianly autocratic. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? “We” not “leave[ing] it to tech entrepreneurs” means a radical appropriation of property rights and rule of law.
What’s the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart?
The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It’s going to be great.
Footnotes
(1) Thanks to Angela Aristidou, Eugene Volokh, and an anonymous reviewer for helpful comments.
(2) Donella Meadows, Dennis Meadows, Jørgen Randers, and William Behrens, Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind (New York: Universe Books, 1972), https://www.donellameadows.org/wp-content/userfiles/Limits-to-Growth-digital-scan-version.pdf; Soylent Green, directed by Richard Fleischer (1973; Beverly Hills, CA: Metro-Goldwyn-Mayer).
(4) See Friedrich Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (September 1945): 519–30, https://www.jstor.org/stable/1809376.
(5) See George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (Spring 1971): 3–21, https://doi.org/10.2307/3003160.
(13) “Unemployment Rate [UNRATE], May 2024,” U.S. Bureau of Labor Statistics, retrieved from FRED, Federal Reserve Bank of St. Louis, July 5, 2024, https://fred.stlouisfed.org/series/UNRATE.
(15) See the excellent, and troubling, analysis in Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War (Princeton: Princeton University Press, 2017), and Nick Bloom, John Van Reenen, Charles Jones, and Michael Webb, “Are Ideas Getting Harder to Find?,” American Economic Review 110, no. 4 (April 2020): 1104–1144, https://www.aeaweb.org/articles?id=10.1257/aer.20180338.