What the San Francisco Fed Gets Right — and Wrong — About AI and Productivity


The Federal Reserve Bank of San Francisco published a comprehensive research paper last year examining the relationship between AI adoption and productivity growth. It is among the most rigorous economic work on the topic, and it acknowledges several critical dynamics that many AI optimists conveniently ignore.

But it also misses something important. The San Francisco Fed team gets the J-curve adoption pattern right. They get the measurement lag right. What they underestimate is how severe the measurement gap itself has become, and what that means for interpreting the productivity statistics we rely on.

Understanding this distinction is not academic. It changes how investors, policymakers, and business leaders should interpret the absence of AI productivity gains and what it predicts about future growth.

- 40 years: historical lag from technology deployment to measurable productivity (Fed research)
- 60% of UK intangible investment is expensed rather than capitalised
- 92% of S&P 500 value is intangible assets, not captured in productivity statistics

What the San Francisco Fed Got Right

The Fed's research correctly identifies three critical dynamics that explain why AI productivity gains have not yet materialised in official statistics.

1. The J-Curve Adoption Pattern

The Fed notes, accurately, that major technology adoptions follow a J-curve trajectory. Investment and deployment occur first (the downstroke of the J), followed by a period where measured productivity actually declines as firms incur reorganisation costs without yet realising output gains. Only later (the upstroke of the J) do productivity statistics catch the gains.

Why this matters: This is historically accurate. When electric motors were introduced in factories, measured labour productivity declined for 15-20 years before the J-curve inflected upward. The lag was not a failure of the technology — it was the time required for complementary organisational transformation.

The Fed cites labour productivity growth data from 1920-1950 showing that measured productivity remained flat for 20 years after widespread electrification, before accelerating sharply. This is a powerful argument that the current flat AI productivity statistics do not mean AI will not eventually drive productivity growth. They mean we are early in the adoption curve.

Where this is cautious: The Fed implicitly assumes the same timeline — 20-30 years for major productivity gains. But AI deployment is occurring at a fundamentally faster pace than electrification. Firms can experiment with and scale AI in months, not decades. The question is whether the organisational transformation will also be faster.
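The J-curve dynamic described above can be sketched numerically. The sketch below is illustrative only: the dip length, dip rate, and post-dip growth rate are invented parameters chosen to produce a J-shaped path, not figures from the Fed's research.

```python
# Illustrative J-curve: measured productivity dips during the
# reorganisation phase, then compounds past the starting level.
# All parameters are hypothetical, chosen only for illustration.

def j_curve(years, base=100.0, dip_years=8, dip_rate=0.005, post_rate=0.03):
    """Return a productivity index path, one value per year.

    During the first `dip_years`, reorganisation costs drag measured
    productivity down by `dip_rate` per year; afterwards the
    complementary investments pay off at `post_rate` per year.
    """
    path = []
    level = base
    for t in range(years):
        rate = -dip_rate if t < dip_years else post_rate
        level *= 1 + rate
        path.append(round(level, 1))
    return path

path = j_curve(30)
trough = min(path)
print(f"trough of {trough} in year {path.index(trough) + 1}")
print(f"index after 30 years: {path[-1]}")
```

Under these toy parameters, the index spends eight years below its starting level before the upstroke carries it well past 100, which is the qualitative shape the Fed's electrification data shows, stretched over a much longer horizon.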

2. The Measurement Lag

The Fed explicitly acknowledges that national income accounting systems (GDP, labour productivity, TFP) lag the reality of technology deployment. Official statistics are backward-looking and often revised years later. A firm that deploys AI in January 2025 will see that activity reflected in national productivity statistics in mid-2026, at the earliest.

Moreover, the Fed notes that service industries (where much of the AI deployment is occurring) are notoriously difficult to measure for productivity. How do you quantify productivity improvement in knowledge work, customer service, or professional services? Traditional GDP accounting treats these sectors as labour-intensive, with limited productivity growth potential — which is precisely where significant AI deployment is happening.

Why this matters: This acknowledges that the productivity statistics are not capturing something real. Firms are experiencing measurable productivity improvements from AI deployment, even if those improvements have not yet percolated through to official statistics.

Where this is incomplete: The Fed treats the measurement lag as a timing issue that will resolve itself. They assume the statistics will eventually catch up, and when they do, we will see the productivity gains. But there is a deeper problem the Fed understates: the fundamental mismatch between how intangible assets are treated in national accounts and the reality of where modern economic value resides.

★ Key Takeaway

The San Francisco Fed correctly identifies the J-curve adoption pattern and measurement lag. Both are real. But the Fed assumes the measurement gap is a timing issue rather than a structural feature of how we count economic activity.


3. The Portfolio-of-Technologies View

The Fed notes, correctly, that "AI" is not a single technology with a single adoption curve. It is a portfolio of technologies — large language models, computer vision, recommendation systems, time series forecasting, robotics, and countless others — each with different adoption timelines, complementary requirements, and productivity implications.

Some of these technologies (OCR, machine translation) have been around long enough that productivity gains might be visible. Others (generative AI) are only months old. Analysing "AI productivity" as if it were a monolithic technology both overstates the near-term productivity impact and oversimplifies the underlying dynamics.

Why this matters: This is an important corrective to simplistic arguments that "AI will deliver productivity growth." The question is not whether AI can improve productivity, but which applications will, on what timeline, and with what complementary investments.


What the San Francisco Fed Misses

The Fed's analysis is sophisticated, but it contains three important blindspots.

Missing 1: The Severity of the Intangible Asset Measurement Gap

The Fed acknowledges that intangible investment is not fully captured in productivity statistics. But they underestimate how severe this gap has become.

When electrification occurred, intangible assets were small relative to tangible capital. A factory was primarily bricks, machines, and labour. You could measure productivity by physical output per worker. Intangible investment — new management practices, worker training, reorganisation — was real but was a smaller share of total investment.

Today, the situation is inverted. The OECD estimates that intangible investment is now comparable to tangible investment across advanced economies. The UK Office for National Statistics reports that intangible investment is roughly 60% of tangible investment. On the S&P 500, 92% of enterprise value is attributable to intangible assets.

When a company invests in AI, it is creating intangible assets: trained models, proprietary datasets, improved customer relationships, enhanced organisational processes. Under current accounting rules (IAS 38, ASC 350), most of this investment is expensed immediately. It does not appear on the balance sheet. It does not appear in capital stock calculations. And it does not appear in productivity statistics, which rely on the measured capital stock to estimate capital's contribution to output.

The implication: If 50-70% of AI spending creates intangible assets that are expensed rather than capitalised, then the productivity statistics are systematically understating both the investment and the potential returns. The Fed's 20-40 year timeline for J-curve resolution assumes a measurement system that eventually corrects these gaps. But without explicit reform to intangible asset accounting, the gaps may never be fully corrected.

✔ Example

A software company invests £2 million in training language models and building a proprietary dataset for customer service automation. Under current accounting, the entire £2 million is expensed. It reduces reported earnings but does not appear as capital investment. Labour productivity (output per hour) might improve 30% from automation, but national productivity statistics do not have a framework to capture the investment that drove it. The company has created genuine economic value — measurable in its own P&L — but the statistical system cannot see it.
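The accounting asymmetry in the example can be made concrete with a short sketch. The revenue and cost figures, and the 20% amortisation rate, are hypothetical numbers chosen only to illustrate expensing versus capitalising; they are not drawn from IAS 38, ASC 350, or any real company.

```python
# Hypothetical sketch: the same £2m outlay treated two ways.
# Expensed, it depresses year-one earnings and records no asset;
# capitalised, only the first amortisation charge hits earnings
# and the remainder sits on the balance sheet as capital.

SPEND = 2_000_000  # cash spent on model training and dataset building

def expensed(spend, revenue, other_costs):
    """Whole outlay hits the P&L in year one; no asset is recorded."""
    earnings = revenue - other_costs - spend
    capital_recorded = 0
    return earnings, capital_recorded

def capitalised(spend, revenue, other_costs, amort_rate=0.20):
    """Outlay is booked as an asset and amortised over time."""
    amortisation = spend * amort_rate
    earnings = revenue - other_costs - amortisation
    capital_recorded = spend - amortisation
    return earnings, capital_recorded

e_earn, e_cap = expensed(SPEND, 10_000_000, 7_000_000)
c_earn, c_cap = capitalised(SPEND, 10_000_000, 7_000_000)
print(e_earn, e_cap)   # expensed: lower earnings, zero recorded asset
print(c_earn, c_cap)   # capitalised: higher earnings, asset on the books
```

The cash outlay and the underlying economic value are identical in both branches; only the recorded earnings and the recorded capital stock differ, which is exactly the wedge that national accounts inherit when they build capital stock from company-level data.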

Missing 2: The Assumption That AI Will Follow Previous GPT Timelines

The Fed anchors its productivity timeline analysis on historical precedent: electrification (40 years), ICT revolution (20-30 years), computerisation (15-25 years). It implicitly assumes AI will follow a similar 20-40 year timeline to peak productivity impact.

But there are reasons to believe the AI timeline could be compressed. AI deployment is not geographically or organisationally constrained the way prior technologies were. A firm in Singapore can download and deploy the same models that a firm in Silicon Valley is using. The experimentation cycle is months, not years. The scalability is digital, not physical.

Alternatively, the timeline could be extended. The complementary organisational transformation required for AI to deliver productivity might be slower than for previous technologies because it affects more of the labour force and because it requires more sophisticated management capability.

The Fed's assumption: A middle-of-the-road 20-30 year timeline. The problem: This assumption is not derived from any analysis of what determines these timelines — it is simply anchored on precedent.

Missing 3: Confusing Measurement Lag With Measurement Absence

The Fed treats the current lack of visible AI productivity as a temporary gap that will resolve as the statistics catch up. But there is a possibility that is not discussed: the absence of productivity gains is real, not statistical, because the complementary investments required for AI to drive productivity have not been made.

The J-curve framework assumes that firms will eventually make the organisational investments required to extract productivity from technology. But there is no guarantee of this. Many firms deploy AI as a cost-reduction tool (replacing workers) rather than as a complement (augmenting workers and reorganising work). This may reduce headcount but does not necessarily improve productivity (output per hour).

If firms do not make the complementary organisational investments, then the productivity gains may never arrive. The J-curve becomes an asymptotic curve that approaches but never reaches the expected productivity uplift.

ℹ Note

The SNA 2025 update (System of National Accounts) represents the first major revision to explicitly capitalise certain categories of AI and data investment rather than expensing them. This will partially resolve the measurement gap for future activity. But it does not retroactively revalue the £500+ billion in AI spending that has already been expensed as costs rather than capital.


The Measurement Gap Is the Real Story

Here is what I think is the most important insight from the Fed research, and what the Fed itself underemphasises: the measurement gap is not just a statistical problem — it is a strategic and investment problem.

When 92% of S&P 500 value is intangible and the national accounting system is designed for an economy where 60% of value was tangible capital, the statistics are systematically missing the value that is being created.

The Fed's analysis suggests we should expect 20-40 years before AI productivity effects are visible in official statistics. My analysis — drawing on the SNA 2025 framework, OECD productivity compendium, and UK ONS research on intangible investment — suggests the measurement gap is even wider than the Fed acknowledges, and that the gap is not primarily a timing issue but a structural accounting issue.

The Real Productivity Paradox

Global AI investment has exceeded £500 billion cumulatively. At least £300 billion of that was expensed as costs rather than capitalised as investment, because the assets created (models, datasets, processes) do not fit neatly into accounting frameworks designed for tangible assets. The productivity gains from that £300 billion investment are real, but they are invisible in national accounts because the investment itself is invisible. The productivity paradox is partly real (we are early in adoption, waiting for J-curve upstroke) and partly illusory (the gains are being created but not measured).


What This Means for Investors, Policymakers, and Business Leaders

For investors: The absence of visible AI productivity in national statistics should not be interpreted as evidence that AI investment is failing. It is evidence that the statistical system is lagging the economic reality. But it also means that firms need to measure AI value creation within their own financial systems, rather than relying on macro statistics. The Opagio framework for measuring intangible asset creation is designed precisely for this — making visible the value that national accounting systems cannot yet capture.

For policymakers: The current productivity statistics are increasingly unreliable guides to economic growth. Policies should be designed around forward-looking intangible asset measures (R&D spending, training investment, data asset development) rather than lagging productivity statistics that may not reflect reality until they are revised 5-10 years later.

For business leaders: AI investment that creates measurable intangible assets (proprietary models, customer relationship improvements, organisational capital) is creating value even if that value is not yet visible in productivity statistics. The discipline is measuring that value within your own firm, not waiting for national statistics to validate it.

The San Francisco Fed's research is solid. But the most important insight from that research is not what it says explicitly — it is what it implies about the limitations of our measurement systems.

The productivity paradox will resolve. But it will resolve not when statistics catch up to reality, but when companies and investors have built better measurement systems to quantify the intangible value that traditional statistics cannot see.


David Stroll is Co-Founder and Chief Scientist at Opagio, specialising in productivity measurement frameworks and the economics of intangible capital. His work draws on SNA 2025, OECD, and ONS methodologies. He has published research on intangible asset data collection (ESCoE/ONS, 2021), innovation diffusion measurement (ISPIM, 2018), and intangible capital frameworks (Big Innovation Centre, 2017). Previously, he co-founded PayMode, the first B2B internet payment service, and held senior roles at Digital Equipment Corporation.

