My work on AI policy is detailed below. It examines how technological capabilities, economic incentives, regulatory frameworks, and organizational structures will interact to shape the development of AI.
Selected Projects
My latest is an extended analysis of AI laws: “How much might AI legislation cost in the U.S.?” After explaining two recent CCPA amendments and a federal AI rule in depth, I tested a novel approach: using LLMs (ChatGPT, Claude, and Grok) as virtual “compliance officers” to simulate compliance scenarios. Their cost estimates consistently exceeded official projections, suggesting systematic underestimation in traditional regulatory impact assessments.
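The basic simulation loop might look something like the sketch below. The scenario, prompt wording, and dollar figures are all illustrative assumptions (not the actual study design), and `query_model` is a stub standing in for real API calls to the three models.

```python
from statistics import median

# Hypothetical scenario -- the firm type, rule, and cost figures below are
# illustrative assumptions, not figures from the actual analysis.
SCENARIO = {
    "firm": "50-employee ad-tech company",
    "rule": "CCPA amendment on automated decision-making",
}

PROMPT = (
    "You are a compliance officer at a {firm}. Estimate the first-year cost, "
    "in dollars, of complying with the following rule: {rule}. "
    "Reply with a single number."
)

def query_model(model: str, prompt: str) -> float:
    """Stand-in for a real LLM API call (e.g., to ChatGPT, Claude, or Grok).
    Returns a canned estimate so the sketch runs offline."""
    canned = {"chatgpt": 185_000.0, "claude": 210_000.0, "grok": 162_000.0}
    return canned[model]

def simulate(models: list[str], scenario: dict) -> dict:
    """Ask each model for a cost estimate and summarize with the median."""
    prompt = PROMPT.format(**scenario)
    estimates = {m: query_model(m, prompt) for m in models}
    return {"estimates": estimates, "median": median(estimates.values())}

result = simulate(["chatgpt", "claude", "grok"], SCENARIO)
official_projection = 90_000.0  # illustrative official estimate for contrast
print(result["median"], result["median"] > official_projection)
```

In a real run, `query_model` would call each provider's API with the same prompt and parse the number out of the response; aggregating with a median keeps one outlier model from dominating the estimate.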
I also filed comments on the Trump Administration’s “AI Action Plan.” While there is a lot to these comments, I would draw your attention to three parts. First, they make the case that state-level AI regulation presents the greatest risk: “The White House would do well to push back against a tangle of conflicting state rules that make cutting-edge AI too costly or risk-laden to develop and deploy.” Second, they argue the Administration should champion permitting reform modeled on the Prescription Drug User Fee Act (PDUFA), which has accelerated pharmaceutical approvals without compromising safety standards by allowing applicants to fund expedited reviews. Third, they call for OIRA to pilot a project that uses AI simulations to standardize how agencies model compliance burdens across diverse businesses.
Other recent projects:
- Is AI Moving Too Fast or Is Regulation? - A plethora of state AI bills could sacrifice the very technological preeminence that has defined our nation’s modern tech sector
- The value of waiting: What finance theory can teach us about the value of not passing AI Bills - Regulators possess a “regulatory real option” where the smart choice is often to wait for more information rather than rushing to regulate
- The CHIPS Act and Semiconductor Economics - Analyzing the economic implications of semiconductor policy
- AI and Government Reform - An op-ed in Fox News examining how large language models might transform government operations
- The Four Fault Lines in AI Policy - Identifying critical divisions in approaches to AI governance
- To Understand AI Adoption, Focus on the Interdependencies - Analyzing how organizational structures impact AI implementation
- Automation Isn’t Just One Thing - Research revealing divergent patterns between robotics and AI adoption across industries
Current Perspective
Artificial intelligence is a topic I’ve been covering for over a decade, but recently it has come to dominate my research.
I am fairly confident that in the next two years or so, artificial general intelligence (AGI), which is typically defined as an AI system that can match or exceed human-level performance across virtually any task, will become a reality. But I don’t think the disruption in the labor market is going to be as dramatic as people think. Ideation has always been cheap. Implementation is the real challenge.
In June, I wrote a two-part series on the economics of AI that discussed how emerging technologies are adopted and how human workers and AI systems can work together. I find it all too common for people to underestimate the effort needed to transform a company, let alone an industry, with a technology like AI.
Moreover, people tend to conflate robots with advanced AI. But when you look at the data, as I did, you learn that the industries investing the most in robotics tend to use AI the least. Manufacturing and retail trade spend the most on robotic equipment, but they aren’t going big on machine learning, natural language processing, virtual agents, and the like.
When I’m asked how AI will change industries, no one likes to hear my answer: It is going to vary. Sometimes, highly productive companies slim their staffing while lower-end firms expand their staffing. Other times, automation technologies will produce substantial output gains that reduce labor costs while still expanding net jobs. Or, a technology might lead to more job creation and higher wages, as was the case with banks and ATMs.
Adopting new technology can reshape how companies use their workforce and equipment by automating or enhancing specific tasks. This transformation, however, comes with costs. It requires significant investment in both implementation and adaptation. A firm’s decision to adopt new technology should be based on a simple calculus: Invest when benefits exceed costs. But successful implementation ultimately hinges on the technology’s integration with existing organizational structures and processes.
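That invest-when-benefits-exceed-costs calculus can be made concrete with a small net-present-value sketch. The discount rate and cash-flow figures below are illustrative assumptions, not data from any study.

```python
# A minimal sketch of the adoption calculus described above: adopt when the
# discounted benefits exceed the up-front implementation and adaptation costs.

def npv(cash_flows: list[float], rate: float) -> float:
    """Net present value of yearly cash flows; first entry is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def should_adopt(annual_benefit: float, years: int,
                 implementation_cost: float, adaptation_cost: float,
                 rate: float = 0.05) -> bool:
    # Implementation and adaptation costs are paid up front (year 0);
    # benefits accrue in years 1 through `years`.
    flows = [-(implementation_cost + adaptation_cost)]
    flows += [annual_benefit] * years
    return npv(flows, rate) > 0

# Illustrative numbers: $450k total up-front cost vs. $120k/year for 5 years.
print(should_adopt(annual_benefit=120_000, years=5,
                   implementation_cost=300_000, adaptation_cost=150_000))
```

The point of splitting out `adaptation_cost` is the one the paragraph makes: the price tag is not just the technology itself but the organizational change needed to integrate it, and a firm that budgets only for implementation can easily flip the decision’s sign.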
Last February, I wrote about why the telephone switchboard took so long to become automatic. The interdependencies between call switching and other production processes within the firm presented an obstacle to change. The same is true today: the interdependencies between AI and a firm’s other production processes can be an obstacle to adoption.
This year might be the year that marks a change in how businesses operate. OpenAI CEO Sam Altman writes in a new essay, “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.”
And sometimes there are serious limits on a firm’s ability to make these changes at all.
“We need to regulate AI” and “We need to get ahead of this thing” are popular phrases among tech leaders, policy experts, and media commentators. But this framing misses a crucial point: Significant AI regulation is already happening, just not through Congress. Yes, Congress hasn’t produced AI legislation, but the executive branch and the judiciary are deeply involved in regulating this new tech.
Here are just a few of the things I’ve been tracking:
- The Biden administration issued an executive order on AI that imposed some 150 requirements on various agencies.
- Both states and the federal government, including the Federal Trade Commission (FTC), have reiterated that they will police unfair or deceptive acts and provide consumer protection over AI services.
- Federal agencies have issued more than 500 AI-relevant regulations, standards, and other governance documents, including the National Institute of Standards and Technology’s AI Risk Management Framework; the Equal Employment Opportunity Commission’s (EEOC) Artificial Intelligence and Algorithmic Fairness Initiative; the Food and Drug Administration’s Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative; and the Consumer Financial Protection Bureau’s Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, made in conjunction with the Department of Justice, the EEOC, and the FTC, just to name a few of the big ones.
- Industry giants like OpenAI, Microsoft, Meta, Midjourney, and GitHub are currently embroiled in copyright disputes over the use of content for their models.
- Product recall authority gives entities like the National Highway Traffic Safety Administration, the Food and Drug Administration, and the Consumer Product Safety Commission the ability to regulate and mitigate risks posed by AI systems.
Given all this movement, I’m skeptical that a new regulatory regime is needed to ensure consumers are protected. Perhaps agencies need specific tools to collect information on harmful events in finance, housing, and health, but a lot of authority to do this already exists. Consumers are already protected in many ways; the burden of proof should rest on the authors of new bills.
This is why I was so critical of California’s SB 1047, which I wrote about here and here. The bill did so much more than what was needed. Advocates of AI bills also tend to underappreciate the First Amendment concerns and the challenges in regulating for bias and fairness.
The story of AI at the beginning of 2025 is more complex than most headlines suggest. While we debate abstract questions about AGI and regulation, two parallel revolutions are reshaping our world: a hardware transformation that’s redrawing global supply chains and a software evolution that’s redefining what machines can do.
The real challenge will be in the unglamorous work of implementation, the careful consideration of existing regulations, and the thoughtful integration of AI into our institutions and businesses. As we navigate this transition, success won’t just come from technological breakthroughs or new laws, but from understanding how hardware constraints, software capabilities, economic incentives, and existing regulatory frameworks all fit together. The future of AI depends less on what AI can do, and more on how we choose to use it.
Complete AI Research Portfolio
Regulatory Analysis
- The Practical Problems with California’s SB 1047
- Issues in Legislating for AI Safety
- Problems with Biden’s Executive Order on Artificial Intelligence
- How Regulatory Frameworks Challenge AI-Driven Services
- First Amendment Concerns with Regulating AI
- Challenges in Regulating for Bias and Fairness
- AI’s Automatic Stabilizers
- How Gun Shy Legislators Could Hamper AI
- California’s High Stakes AI Bill Lacks Legal Awareness
Economic Impact
- Nvidia’s Blockbuster Earnings and the Value of Compute
- To Understand AI Adoption, Focus on the Interdependencies
- Should Robots Be Taxed?
- May You Live in Interesting Times, or Just Another AI Hype Cycle
- Technological Disruption Takes Time
- AI’s Advent Doesn’t Spell Labor Doom and Gloom
- Automation Isn’t Just One Thing: Insights from Two Census Datasets