AI Research

My work on AI policy is detailed below. It examines how technological capabilities, economic incentives, regulatory frameworks, and organizational structures will interact to shape the development of AI.

Selected Projects

My latest project is an extended analysis of AI laws: “How much might AI legislation cost in the U.S.?” After explaining two recent CCPA amendments and a federal AI rule in depth, I tested a novel approach: using LLMs (ChatGPT, Claude, and Grok) as virtual “compliance officers” to simulate compliance scenarios. Their cost estimates consistently exceeded official projections, suggesting that traditional regulatory impact assessments systematically underestimate compliance burdens.
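To give a flavor of the method, here is a minimal sketch of how one might prompt a model to act as a compliance officer. The prompt wording, firm profile, CCPA scenario, and model name are all illustrative assumptions, not the exact setup used in the analysis.

```python
# Minimal sketch: prompting an LLM to act as a virtual "compliance officer"
# and estimate annual compliance costs for a hypothetical firm profile.
# The prompt, firm details, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

firm_profile = (
    "A 200-employee California retailer that collects customer email, "
    "purchase history, and loyalty-program data."
)

prompt = (
    "You are a compliance officer. Given the firm below, estimate the "
    "annual cost (in USD) of complying with a new CCPA amendment on "
    "automated decision-making. List the main cost drivers (legal review, "
    "engineering, training, audits) and give a total.\n\n"
    f"Firm: {firm_profile}"
)

response = client.chat.completions.create(
    model="gpt-4o",            # any capable chat model works for the sketch
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,           # keep estimates relatively stable across runs
)

print(response.choices[0].message.content)
```

In practice, the same scenario would be run across multiple models and many firm profiles, with the resulting distribution of estimates compared against the official regulatory impact assessment.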

I also filed comments on the Trump Administration’s “AI Action Plan.” While there is a lot in these comments, I would draw your attention to three parts. First, they make the case that state-level AI regulation presents the greatest risk: “The White House would do well to push back against a tangle of conflicting state rules that make cutting-edge AI too costly or risk-laden to develop and deploy.” Second, the Administration should champion permitting reform modeled on the Prescription Drug User Fee Act (PDUFA), which has accelerated pharmaceutical approvals without compromising safety standards by allowing applicants to fund expedited reviews. Third, I call for OIRA to pilot a project that uses AI simulations to standardize how agencies model compliance burdens across diverse businesses.

Other recent projects:

Current Perspective

Artificial intelligence is a topic I’ve been covering for over a decade, but recently it has come to dominate my research.

I am fairly confident that within the next two years or so, artificial general intelligence (AGI), typically defined as an AI system that can match or exceed human-level performance across virtually any task, will become a reality. But I don’t think the labor-market disruption will be as dramatic as many expect. Ideation has always been cheap. Implementation is the real challenge.

In June, I wrote a two-part series on the economics of AI that discussed how emerging technologies are adopted and how human workers and AI systems can work together. I find it all too common that people dismiss the effort needed to transform a company, let alone an industry, with a technology like AI.

Moreover, people tend to lump robots together with advanced AI. But when you look at the data, as I did, you find that the industries investing the most in robotics tend to be using AI the least. Manufacturing and retail trade spend the most on robotic equipment, but they aren’t going big on machine learning, natural language processing, virtual agents, and the like.

When I’m asked how AI will change industries, no one likes to hear my answer: it is going to vary. Sometimes highly productive companies slim their staffing while lower-end firms expand theirs. Other times, automation technologies produce substantial output gains that reduce labor costs while still expanding net jobs. Or a technology might lead to more job creation and higher wages, as was the case with banks and ATMs.

Adopting new technology can reshape how companies use their workforce and equipment by automating or enhancing specific tasks. This transformation, however, comes with costs. It requires significant investment in both implementation and adaptation. A firm’s decision to adopt new technology should be based on a simple calculus: Invest when benefits exceed costs. But successful implementation ultimately hinges on the technology’s integration with existing organizational structures and processes.
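To make that calculus concrete, here is a minimal, hypothetical sketch; every figure is invented, and the point is only that adaptation costs and a gradual benefit ramp-up belong in the comparison alongside the up-front price tag.

```python
# Minimal sketch of the adoption calculus: invest when the discounted
# benefits of the technology exceed implementation plus adaptation costs.
# All figures are hypothetical and for illustration only.

def npv(cash_flows, rate):
    """Net present value of a stream of annual cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

implementation_cost = 500_000   # up-front licenses, integration, hardware
adaptation_cost = 300_000       # retraining staff, reworking processes
annual_benefits = [150_000, 250_000, 300_000, 300_000, 300_000]  # ramps up as integration matures
discount_rate = 0.08

benefits = npv(annual_benefits, discount_rate)
costs = implementation_cost + adaptation_cost

print(f"Discounted benefits: ${benefits:,.0f}")
print(f"Total costs:         ${costs:,.0f}")
print("Adopt" if benefits > costs else "Hold off")
```

Assuming full benefits from day one, or leaving the adaptation cost out entirely, is precisely the mistake described above.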

Last February, I wrote about why the telephone switchboard took so long to become automatic. The interdependencies between call switching and other production processes within the firm presented an obstacle to change. The same is true today: for firms considering AI, the interdependencies between AI and their other production processes can be an obstacle to change.

This year might be the year that marks a change in how businesses operate. OpenAI CEO Sam Altman writes in a new essay, “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.”

There are also serious limitations on making these changes.

“We need to regulate AI” and “We need to get ahead of this thing” are popular phrases among tech leaders, policy experts, and media commentators. But this framing misses a crucial point: Significant AI regulation is already happening, just not through Congress. Yes, Congress hasn’t produced AI legislation, but the executive branch and the judiciary are deeply involved in regulating this new tech.

Here are just a few of the things I’ve been tracking:

Given all this movement, I’m skeptical that a new regulatory regime is needed to ensure consumers are protected. Perhaps agencies need specific tools to collect information on harmful events in finance, housing, and health, but they already have a lot of authority to do this. Consumers are already protected in many ways, so the burden of proof needs to be on the bill authors.

This is why I was so critical of California’s SB 1047, which I wrote about here and here. The bill went well beyond what was needed. Advocates of AI bills also tend to underappreciate the First Amendment concerns and the challenges of regulating for bias and fairness.

The story of AI at the beginning of 2025 is more complex than most headlines suggest. While we debate abstract questions about AGI and regulation, two parallel revolutions are reshaping our world: a hardware transformation that’s redrawing global supply chains and a software evolution that’s redefining what machines can do.

The real challenge will be in the unglamorous work of implementation, the careful consideration of existing regulations, and the thoughtful integration of AI into our institutions and businesses. As we navigate this transition, success won’t just come from technological breakthroughs or new laws, but from understanding how hardware constraints, software capabilities, economic incentives, and existing regulatory frameworks all fit together. The future of AI depends less on what AI can do, and more on how we choose to use it.

Complete AI Research Portfolio

Regulatory Analysis

Economic Impact

Transportation & Innovation

Government & Policy