Under Construction
Fundamentally, I’m committed to human excellence and prosperity. Technology and markets can empower humanity, extend our healthy life years, and solve some of our fundamental problems, but only if we are smart about it. My research primarily focuses on:
- Artificial intelligence public policy and the economics of automation;
- Vetocracy, good governance, and project management; and
- Attention markets and digital competition.
I’ve also recently written policy papers on regulation of over-the-counter COVID tests and other commercial diagnostic kits; the constitutional and legal challenges of regulating TikTok; and the efficacy of municipal broadband projects.
Artificial Intelligence and Automation
Artificial intelligence is a topic I’ve been covering for over a decade, but recently it has come to dominate my research. My writing covers topics such as:
- The CHIPS Act and semiconductor economics
- How LLMs might reform government
- The practical problems with California’s SB 1047 and issues in legislating for AI safety
- Four major fault lines in AI policy
- First Amendment concerns with regulating AI
- Nvidia’s blockbuster earnings and the value of compute
- The challenges in regulating for bias and fairness
- Problems with Biden’s Executive Order on Artificial Intelligence
- How regulatory frameworks are challenging AI-driven services
- Headwinds facing self-driving cars
If we want to understand how AI technology is likely to progress, how it will affect workers, and how it might impact productivity, we should focus on understanding its interdependencies. In other words, “To Understand AI Adoption, Focus on the Interdependencies.” I’ve also asked whether robots should be taxed and whether we’re living in interesting times or just another AI hype cycle.
I am fairly confident that in the next two years or so, artificial general intelligence (AGI), which is typically defined as an AI system that can match or exceed human-level performance across virtually any task, will become a reality. But I don’t think the disruption in the labor market is going to be as dramatic as people think. Ideation has always been cheap. Implementation is the real challenge.
In June, I wrote a two-part series on the economics of AI that discussed how emerging technologies are adopted and how human workers and AI systems can work together. I’m finding it all too common that people simply dismiss the effort that’s needed to transform a company, let alone transform an industry, with a technology like AI.
Moreover, people tend to couple robots with advanced AI tech. But when you look at the data, as I did, you learn that industries investing the most in robotics tend to be using AI the least. Manufacturing and retail trade spend the most on robotic equipment but they aren’t going big on machine learning, natural language processing, virtual agents, and the like.
When I’m asked how AI will change industries, no one likes to hear my answer: It is going to vary. Sometimes, highly productive companies slim their staffing while lower-end firms expand theirs. Other times, automation technologies will produce substantial output gains that reduce labor costs while still expanding net jobs. Or, a technology might lead to more job creation and higher wages, as was the case with banks and ATMs.
Adopting new technology can reshape how companies use their workforce and equipment by automating or enhancing specific tasks. This transformation, however, comes with costs. It requires significant investment in both implementation and adaptation. A firm’s decision to adopt new technology should be based on a simple calculus: Invest when benefits exceed costs. But successful implementation ultimately hinges on the technology’s integration with existing organizational structures and processes.
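The adoption calculus above can be made concrete with a toy net-present-value comparison. This is a minimal sketch with entirely hypothetical numbers (the figures, horizon, and discount rate are my illustrative assumptions, not estimates from the research); the point is simply that integration and adaptation costs belong in the calculation alongside the up-front price.

```python
# A minimal sketch of the adoption calculus: invest when benefits exceed costs.
# All dollar figures below are hypothetical placeholders for illustration.

def npv(cashflows, rate):
    """Net present value of annual cashflows, discounted at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical firm: $500k up-front adoption cost, plus $100k/year in ongoing
# integration and adaptation costs, against $300k/year in benefits over five years.
benefits = [0, 300_000, 300_000, 300_000, 300_000, 300_000]
costs = [500_000, 100_000, 100_000, 100_000, 100_000, 100_000]
net = [b - c for b, c in zip(benefits, costs)]

value_of_adoption = npv(net, rate=0.08)
print(f"NPV of adoption: ${value_of_adoption:,.0f}")
# Adopt only if the NPV is positive once integration costs are counted in.
```

Under these assumed numbers the investment clears the bar, but shaving the annual benefit or inflating the integration cost flips the sign quickly, which is the practical point: the implementation and adaptation terms, not the sticker price, often decide the outcome.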
Last February, I wrote about why the telephone switchboard took so long to become automatic. The interdependencies between call switching and other production processes within the firm presented an obstacle to change. The same is true today: the interdependencies between AI and other production processes within firms can be an obstacle to adopting it.
This year might be the year that marks a change in how businesses operate. OpenAI CEO Sam Altman writes in a new essay, “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.”
At the same time, there are serious limitations to making these changes.
“We need to regulate AI” and “We need to get ahead of this thing” are popular phrases among tech leaders, policy experts, and media commentators. But this framing misses a crucial point: Significant AI regulation is already happening, just not through Congress. Yes, Congress hasn’t produced AI legislation, but the executive branch and the judiciary are deeply involved in regulating this new tech.
Here are just a few of the things I’ve been tracking:
- The Biden administration issued an executive order on AI that imposed some 150 requirements on various agencies.
- Both states and the federal government, including the Federal Trade Commission (FTC), have reiterated that they will police unfair or deceptive acts and provide consumer protection over AI services.
- Federal agencies have issued more than 500 AI-relevant regulations, standards, and other governance documents, including the National Institute of Standards and Technology’s AI Risk Management Framework; the Equal Employment Opportunity Commission’s (EEOC) Artificial Intelligence and Algorithmic Fairness Initiative; the Food and Drug Administration’s Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative; and the Consumer Financial Protection Bureau’s Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, issued in conjunction with the Department of Justice, the EEOC, and the FTC, to name just a few of the big ones.
- Industry giants like OpenAI, Microsoft, Meta, Midjourney, and GitHub are currently embroiled in copyright disputes over the use of content for their models.
- Product recall authority gives entities like the National Highway Traffic Safety Administration, the Food and Drug Administration, and the Consumer Product Safety Commission the ability to regulate and mitigate risks posed by AI systems.
Given all this movement, I’m skeptical that a new regulatory regime is needed to ensure consumers are protected. Perhaps agencies need specific tools to collect information on harmful events in finance, housing, and health, but a lot of authority to do this already exists. Consumers are already protected in many ways; the burden of proof should be on the authors of new bills.
This is why I was so critical of California’s SB 1047, which I wrote about here and here. The bill did so much more than what was needed. Advocates of AI bills also tend to underappreciate the First Amendment concerns and the challenges in regulating for bias and fairness.
The story of AI at the beginning of 2025 is more complex than most headlines suggest. While we debate abstract questions about AGI and regulation, two parallel revolutions are reshaping our world: a hardware transformation that’s redrawing global supply chains and a software evolution that’s redefining what machines can do.
The real challenge will be in the unglamorous work of implementation, the careful consideration of existing regulations, and the thoughtful integration of AI into our institutions and businesses. As we navigate this transition, success won’t just come from technological breakthroughs or new laws, but from understanding how hardware constraints, software capabilities, economic incentives, and existing regulatory frameworks all fit together. The future of AI depends less on what AI can do, and more on how we choose to use it.
Vetocracy, Good Governance, and Bureaucratic Reform
The United States is increasingly becoming a vetocracy, a system ruled by excessive veto power. Vetocracy has paralyzed decision-making at every level, frustrating efforts to build homes, nuclear power plants, and solar and wind projects; add new hospital beds; lay railroad tracks; and start businesses. I’ve collected the evidence here. San Francisco is what happens when vetocracy comes to dominate a city. And I’m convinced the catastrophic effects of Hurricane Helene in Asheville were worsened because NEPA was used to stop a dam project in the 1970s. But the answer isn’t degrowth. While I am not completely wedded to the concept of the abundance agenda, it is a useful framing for what needs to be achieved. We should focus on economic growth. We need to go faster.
Regulatory notes and fiscal note reform; How to reverse the vetocracy; A reading list to understand bureaucracy; The costs of CEQA.
The Attention Economy and Digital Competition
Another major throughline of my research explores the rise of digital platforms and the public policy surrounding these tech giants. My piece titled “The attention economy: a history of the term, its economics, its value, and how it is changing politics” provides the intellectual basis for my work on digital competition. I’m also interested in phenomena like the majority illusion, which explains how a loud, small, partisan minority can disproportionately shape online narratives. I’m convinced that wokeness got its trial run on Tumblr and the Cambridge Analytica scandal was overplayed. I’ve also analyzed the hidden costs of privacy legislation.
I tend to think there is an overemphasis on what Schudson calls the informed citizen theory of democracy, which I tried to lay out in a series of posts at Cato Unbound, including “Fake News and Our Real Problems,” “Democracy as an Essentially Contested Concept,” and “Technologies of Freedom.”
Others: Making sense of content moderation, some foundations; Zuboff’s definition of surveillance capitalism commits a category error; From my recent CSPAN appearance: We need technological atonement; Notes on McLuhan’s The Gutenberg Galaxy; Social media research, a constantly updating bibliography; When an online community migrates; Musk wants to make Twitter open source. What does that mean?; MrBeast confirms the importance of Buchanan’s “Order Defined in the Process of its Emergence”; Every way of seeing (like a platform) is also a way of not seeing; There might be too much advertising, but not for the reasons you think; Inequality in the attention economy; Practical Problems with Regulating Tech in the Public Interest.
Everything Else
I’m also interested in the concept of time; the perils and promise of geoengineering; King Tut’s meteorite dagger; how real options analysis could help improve regulatory decisions; heuristics guiding my research into politics and policy; broadband buildout; and the moving goal posts of the net neutrality debate.
Space Policy and Economics
I have also become increasingly fascinated with the management of large government projects, especially those at NASA. My recent article on the cost overruns of NASA’s VIPER project is one part of this research stream, but I’m also finishing papers on the management lessons from Apollo and cost overruns in the James Webb Space Telescope. Notes on space capitalism; A list of space and defense projects.
Broadband and Internet Infrastructure
The rural broadband penalty; Estimates of broadband deployment costs; Is Internet access like electricity? (Is Internet access a general purpose technology?); A compilation of my recent broadband work and broadband maps!; Where broadband likely exists in the United States: an estimation of the broadband gap using logit models; Is cost stopping people from getting online?; Is Internet Access a Right?
In various places, I have written about the history of media regulation. In “The Real History of Title II and Investment,” I wrote a section on the history and investments made during the Dotcom Bubble.
Language, Rhetoric, and Reasoning
I still find it deeply ironic that Noam Chomsky signed the Harper’s Magazine letter; Some unstructured thoughts on Trump, Twitter, Parler and the last two weeks in tech; Is reasoning inherently adversarial? Some random thoughts, unequally distributed; Today’s one-dollar delay is worth $39 billion in the future; Some comments on Cowen’s Stubborn Attachments; On rhetoric; Defining knowledge, tacit knowledge, local knowledge, and others; My fundamental theorems of cognition, technology, and the social.
Economics
Notes on time: economics, discounting, innovation, etc.; Random research in politics, economics, firms, etc.; The Millennial wealth gap; Notes on economic short-termism; Where Modern Monetary Theory (MMT) goes awry; The 15-hour workweek: a dream in search of economic roots
Resources, bibliographies, etc.
Public policy cheatsheet; ChatGPT prompts + code for economists; Philosophy of tech, an outline of theories.
My Substack is called Exformation. It’s where I send updates on my work and publish personal essays. The blog on this site serves as a scratchpad to work out ideas before publishing them. My publications page lists all of my work: op-eds, essays, and more. All of my media appearances can be found here. You can find my bio and more information about me here.
For my research fellows, I created a syllabus of key texts for tech policy, as well as a public policy cheatsheet. If you’re interested in housing policy, I put together a housing and urbanism FAQ. For economists using ChatGPT, check out my prompts. For a full list of datasets, resources, technical manuals, and the like, check out my commonplace. I’m also constantly updating my working bibliographies on social media research, philosophy of technology, and theorems of cognition.
Some other works: Neoliberalism in the university: why the Marxists don’t take the analysis far enough; Instead of being eradicated, should mosquitoes be vaccinated?; What does it mean if the FDA made aging a disease?