Your company just dropped six figures on the latest AI tool. The CEO announced it in an all-hands meeting. IT rolled it out to every department. Everyone nodded enthusiastically.
Three months later, nobody's using it. Well, almost nobody—Karen in legal figured out it saves her 4 hours a day, but she's keeping that secret because she's terrified of looking incompetent or accidentally uploading the wrong data and getting fired.
Meanwhile, your CFO is presenting to the board with exactly one metric for AI success: "the amount of stuff we bought."
Welcome to the $700 billion enterprise AI spending spree—where 70% of leaders surveyed admit they're probably wasting money, but they have no system to figure out which 70%.
In a recent conversation on the a16z podcast, Russ Fradin, an early executive at comScore who later sold his company Adify for $300 million, revealed findings from interviews with 350 enterprise IT leaders. The data is striking: 85% said they believe they have exactly 18 months to either become an AI leader or fall permanently behind. So they're buying everything, deploying fast, and crossing their fingers.
The problem? We're asking the entire global workforce to retrain on tools that didn't exist 6 months ago—with zero infrastructure to measure if any of it works.
Sound familiar? It should. In his conversation with a16z General Partner Alex Rampell, Fradin drew direct parallels to internet advertising in the late '90s. Back then, advertisers were dumping money into banner ads with no clue whether they drove sales. Was it the Yahoo banner ad? The last Google click? The coupon site that stuffed a cookie on your machine?
The companies that solved that measurement problem—Nielsen, comScore, DoubleClick—unlocked trillions in advertising spend. Google and Facebook's revenue wouldn't have exploded without that "boring infrastructure" proving value to buyers.
AI needs the same thing. Not another chatbot. Not another coding assistant. The actual boring stuff: What tools did we buy? Are employees using them? Are the users more productive than non-users?
That's exactly what Fradin is building with Laridan—applying the same measurement playbook that turned comScore into a billion-dollar company to the AI productivity crisis.
Top Takeaways from the Episode
The AI "Gold Rush" & The Reality of Adoption
- (0:00) The 18-Month Panic Window: In a survey of enterprise leaders, 85% of companies stated they believe they only have the next 18 months to either become a leader in AI or fall behind permanently.
- (0:12) The Hype vs. Utility Gap: While some skeptics believe AI is "going to zero," practical application proves otherwise. There are individuals at every major company who have figured out how to do tasks in 1 minute that previously took 8 hours.
- (0:25) The "Global Call" Absurdity: Russ shares a story of a 28-year-old investment banker who used ChatGPT to create a 30-slide deck instantly. The bank's response was to host a global call where this junior employee taught the entire bank via Zoom—an absurd and unscalable method for adopting world-changing technology.
- (0:37) Cursor's Impact on Engineering: The coding tool Cursor has a bifurcated impact: It takes mediocre engineers and makes them "good," but it takes amazing engineers and makes them "Gods."
- (0:55) The Productivity Paradox: Companies thought they had reached maximum productivity (quotas were set, people were busy). AI revealed that what was considered "productive" was actually inefficient compared to what is now possible.
Ad Tech Parallels & The Measurement Infrastructure
- (1:54) The Attribution Problem: AI adoption parallels the Web 1.0 Ad Tech boom. In Ad Tech, the difficulty was attributing a sale to a specific banner ad or cookie. In AI, the technical challenge is making it work, but the business challenge is attribution: "Did this expensive AI tool actually yield a benefit?"
- (3:36) The Necessity of Boring Infrastructure: Just as TV had Nielsen and the internet had comScore and DoubleClick to justify ad spend, AI requires a new infrastructure stack for measurement and governance. Google and Facebook's revenue growth relied on this boring infrastructure to prove value to buyers.
- (4:17) The Rapid Budget Shift Thesis: Whenever budget shifts rapidly (e.g., Client-Server to Cloud, TV to Digital), infrastructure must be rebuilt. Laridan was founded on the thesis that enterprises need governance tools not to stop AI, but to accelerate it by answering "boring" questions about D&O insurance, security, and ROI.
- (5:30) The Best Friend to AI Companies: Measurement tools will ultimately be the "best friend" to AI vendors because large companies cannot retrain 35,000 employees instantly. They need governance to feel safe enough to unlock the budget.
Software Eating Labor & Budget Dynamics
- (6:46) Software Eating Labor Budgets: The opportunity isn't just optimizing the software budget; it's attacking the labor budget. If a company has a $10 billion labor budget and a $1 billion software budget, AI allows them to shrink labor to $8 billion while doubling software spend to $2 billion. The company becomes more profitable, even as software spend explodes.
- (7:53) The CFO's Resistance: The "Bull Case" for AI assumes Global IT spend will jump from $1 trillion to $10 trillion. However, CFOs (like at JPMorgan with ~$18B IT spend) will not arbitrarily 10x their budget without proof of return.
- (9:08) Shadow AI Usage: 80%+ of customers discover far more AI tools are being used by employees than the IT department has licensed or knows about.
- (10:35) The Fear of Looking Dumb: To drive enterprise adoption, you must address the psychology of the 42-year-old employee. Unlike a 22-year-old digital native, the mid-career employee is terrified of looking incompetent or of accidentally uploading prohibited data and getting fired.
Measuring Productivity & The "Harvey" Problem
- (12:10) The Methodology of Measurement: Effective measurement requires marrying behavioral data (who is actually logging in and using the tool) with survey data (perceived productivity). Surveys alone are flawed because employees rarely admit to being unproductive or criticize a tool their boss bought.
- (13:59) The Principal-Agent Problem in AI: Humans generally want to be "lazier and richer." If an AI tool allows a lawyer to do 8 hours of work in 4 hours, the agent (lawyer) wins by working less. The principal (company) only wins if they capture that efficiency gain.
- (17:16) "Raw Tonnage" of Work: To solve the efficiency paradox, companies must measure three things: 1) who is a heavy vs. light user, 2) whether heavy users are more productive, and 3) the "raw tonnage" (volume) of work produced, to ensure time saved is reinvested in output.
- (20:31) Goodhart's Law Risks: "When a measure becomes a target, it ceases to be a good measure." If you measure lines of code or emails sent, employees will game the metric (write sloppy code, send spam). Measurement must remain distinct from targets.
- (21:42) The Harvey Example (Usage Distribution): In a hypothetical team of six using the legal tool "Harvey," two never log in, two use it slightly, and two use it constantly. A generic survey averages this out and misses the insight. You must identify the "Zero Usage" group to understand ROI (a minimal sketch of this segmentation follows below).
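To make the Harvey distribution concrete, here is a minimal sketch in Python of segmenting seat-level usage before averaging it. Everything in it is a hypothetical illustration: the seat records, the session threshold, and the "docs reviewed" column standing in for raw tonnage are invented, not data or methodology from the episode.

```python
# Minimal sketch: segment seat-level usage instead of averaging it away.
# All names, thresholds, and numbers below are hypothetical illustrations.
from statistics import mean

seats = [
    {"user": "a", "weekly_sessions": 0,  "docs_reviewed": 0},   # never logs in
    {"user": "b", "weekly_sessions": 0,  "docs_reviewed": 0},   # never logs in
    {"user": "c", "weekly_sessions": 2,  "docs_reviewed": 5},   # light user
    {"user": "d", "weekly_sessions": 3,  "docs_reviewed": 6},   # light user
    {"user": "e", "weekly_sessions": 25, "docs_reviewed": 60},  # heavy user
    {"user": "f", "weekly_sessions": 30, "docs_reviewed": 75},  # heavy user
]

def tier(seat):
    """Bucket a seat into zero / light / heavy usage."""
    if seat["weekly_sessions"] == 0:
        return "zero"
    return "heavy" if seat["weekly_sessions"] >= 10 else "light"

# A blended average hides the fact that a third of the seats sit idle.
print("blended sessions per seat:", mean(s["weekly_sessions"] for s in seats))

# Segmenting first surfaces the zero-usage group and lets you compare
# output ("raw tonnage") between heavy and light users.
for t in ("zero", "light", "heavy"):
    group = [s for s in seats if tier(s) == t]
    print(t, "seats:", len(group),
          "| avg docs reviewed:", mean(s["docs_reviewed"] for s in group))
```

The blended average looks respectable, while the per-tier view immediately shows that a third of the seats are never touched and lets you check whether heavy users actually produce more output, not just save time.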
Responsiveness as the New Metric
- (27:40) Interdepartmental SLAs: Since companies rarely want to fire large swaths of employees (churn is bad), the better productivity metric is responsiveness.
- (28:45) The Friction Test: A tangible measure of AI success is: "Is the Legal department now answering product/sales queries faster?" If Legal becomes more responsive, other departments become more productive, reducing the "coordination tax" of large bureaucracies (a rough sketch of this responsiveness check follows below).
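As a rough illustration of the friction test, the sketch below compares how quickly a department answers internal requests before and after an AI rollout. The request log, the hour values, and the department are invented for illustration; the episode describes the idea, not this calculation.

```python
# Sketch: responsiveness as a productivity metric. Hours-to-answer for
# internal requests routed to Legal, before and after an AI rollout.
# All numbers are invented for illustration.
from statistics import median

hours_to_answer = {
    "before_rollout": [72, 48, 96, 24, 120, 60],
    "after_rollout":  [24, 12, 36, 8, 48, 18],
}

for period, samples in hours_to_answer.items():
    print(period, "median hours to answer:", median(samples))

speedup = median(hours_to_answer["before_rollout"]) / median(hours_to_answer["after_rollout"])
print(f"Legal answers roughly {speedup:.1f}x faster after the rollout")
```

If Legal's median drops like this, every team waiting on Legal gets faster too, which is the point of treating responsiveness rather than headcount as the metric.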
Insights from 350 Enterprise Leaders
- (30:49) The 70% Waste Statistic: About 70% of enterprise leaders surveyed believe they are currently wasting money on AI projects, but they continue spending because they lack the systems to identify which 70% is waste.
- (31:14) The "Amount Bought" Metric: A CEO of a PE-owned company admitted that for every other department, he has performance metrics. For AI, his only reportable metric to the board was "the amount of stuff we bought."
- (33:04) Tool Fatigue & Anxiety: A major, often ignored friction point is employee anxiety. Employees aren't just scared of losing jobs; they are exhausted by being told to learn a new system every week without proper training or safety guidelines.
Nexus, Safety, and "Making it Go Away"
- (35:33) The "Secret Hero" Problem: The employee who figures out how to automate their job often keeps it secret (to stay lazy/rich or out of fear). Companies need to identify these users, memorialize their workflows, and distribute them to the organization.
- (38:50) Model Wrappers for Compliance: Companies are using "wrappers" (e.g., a custom-trained Llama model) to intercept prompts. This prevents employees from asking illegal questions (like "Review this employee" in the EU) while allowing them to use the tool safely for other tasks (a hedged sketch of this gating pattern appears after this list).
- (50:20) The "Make it Go Away" Wish: If you ask the average 42-year-old Associate Brand Manager what they want from AI, their honest answer is: "I wish it would go away." They want to continue doing the job they are good at without having to relearn everything.
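The episode describes these wrappers only at a high level, so the sketch below shows one plausible shape for the pattern: a gate that screens each prompt against prohibited categories before forwarding it to the underlying model. The category list, the `classify_prompt` placeholder, and the `call_model` stub are assumptions for illustration, not the actual implementation discussed.

```python
# Sketch of a compliance "wrapper" that intercepts prompts before they reach
# the underlying model. classify_prompt() stands in for whatever screening
# model a company trains (the episode mentions custom-trained Llama models);
# everything here is a hypothetical placeholder.

PROHIBITED_CATEGORIES = {
    "employee_evaluation",      # e.g., "Review this employee" in EU jurisdictions
    "regulated_personal_data",
}

def classify_prompt(prompt: str) -> str:
    """Placeholder classifier; a real wrapper would call a screening model."""
    if "review this employee" in prompt.lower():
        return "employee_evaluation"
    return "allowed"

def call_model(prompt: str) -> str:
    """Placeholder for the actual LLM call."""
    return f"[model response to: {prompt!r}]"

def gated_completion(prompt: str) -> str:
    """Block prohibited prompts; pass everything else through to the model."""
    category = classify_prompt(prompt)
    if category in PROHIBITED_CATEGORIES:
        return (f"Blocked: this request falls under a prohibited category "
                f"({category}). Please route it to legal/compliance.")
    return call_model(prompt)

print(gated_completion("Review this employee and rate their performance"))
print(gated_completion("Summarize this vendor contract"))
```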
The Future of Work & Macro Economics
- (42:03) Historical Resilience: 98% of Americans were once farmers. Technology (tractors, fertilizer) destroyed those jobs, yet employment rose.
- (43:43) The Competitive Margin Argument: Russ argues against mass unemployment because of capitalism. If Company A fires 50% of its staff to boost margins, Company B will hire them, accept lower margins, and out-compete Company A. (Referencing Bezos: "Your margin is my opportunity").
- (47:02) The "Hyper-Educated" Shift: This technological shift is unique because it targets the hyper-educated. However, because this group is highly skilled, they are the most capable of re-skilling, unlike low-skill laborers in previous industrial shifts.
- (51:31) The Waste Management Counter-Point: Even before ChatGPT, the CEO of Waste Management noted he had unlimited MBA applicants but couldn't find anyone to drive trucks for $150,000/year. The labor shortage is in the physical world, while the glut is in the knowledge worker world.
Product Marketing & The "Tip Calculator"
- (52:19) The Product Marketing Failure: AI currently has a product marketing problem. Vendors say, "It does everything!" Customers reply, "I don't need 'everything,' I need to solve X."
- (53:02) The comScore Lesson: In the early days, comScore pitched "We know everything online." It failed. They only succeeded when they pitched specific answers: "We can tell you Visa's market share vs. Mastercard in Japan."
- (54:38) The Seinfeld "Tip Calculator" Analogy: Russ references the "Wizard" organizer from Seinfeld. It did everything, but Jerry's dad only cared about the "Tip Calculator." AI needs to stop selling the platform and start selling the "Tip Calculator"—specific, tangible utility.
Closing Thoughts
- (56:05) The Harvard Analogy: Russ shares advice he gave his son regarding college rankings: Rankings change constantly, but "Everyone knows Harvard is number one." Similarly, in business and technology, real quality and leadership are recognized regardless of fluctuating external metrics or temporary hype cycles.
Now, let's dive into the discussion a bit deeper...
The Shadow AI Problem
Companies discovered something uncomfortable when they started measuring: more than 80% find far more AI tools in use by employees than IT ever licensed. Shadow AI isn't necessarily bad; some of those tools are genuinely useful and should be brought into the fold. But some are dangerous, and most companies have no idea what's happening with their own data.
As Fradin told Rampell, "You normally don't allow software to just be used across your organization with access to your organization's data and have no idea what's happening. We're letting that happen in AI all the time."
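One way to picture how that discovery happens: diff the tools IT has licensed against the tools that actually show up in SSO logs, network traffic, or expense reports. The sketch below uses invented tool names and log rows; it is a simplification, not how any particular vendor does it.

```python
# Sketch: surfacing "shadow AI" by diffing licensed tools against tools
# observed in use. Tool names and log rows are invented examples.

licensed_tools = {"Copilot", "Harvey"}

observed_usage = [
    {"employee": "e101", "tool": "Copilot"},
    {"employee": "e102", "tool": "ChatGPT"},
    {"employee": "e103", "tool": "Midjourney"},
    {"employee": "e104", "tool": "Harvey"},
    {"employee": "e105", "tool": "ChatGPT"},
]

tools_in_use = {row["tool"] for row in observed_usage}
shadow_tools = tools_in_use - licensed_tools

print("Licensed:", sorted(licensed_tools))
print("Actually in use:", sorted(tools_in_use))
print("Shadow AI (in use but never licensed):", sorted(shadow_tools))
```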
The Principal-Agent Problem
The conversation dug into a fundamental tension: employees want to be "lazier and richer" (Rampell's framing), while companies need productivity gains they can actually capture. If a lawyer can do 8 hours of work in 4 hours but spends the other 4 hours golfing, the individual wins but the company doesn't benefit.
The solution isn't surveillance—it's creating systems where everyone wins. Identify the employees who've figured out how to do 8 hours of work in 5 seconds (Fradin's example: a 28-year-old investment banker at a European bank who mastered ChatGPT). Make them heroes. Memorialize their workflows. Push the knowledge throughout the organization.
But most companies are doing the opposite: that banker had to teach the entire investment bank via a global Zoom call—"an absurd way to hope people adopt world-changing technology," according to Fradin.
The Measurement Gap
Why this matters: The AI gold rush won't slow down, but the companies that survive will be the ones that can prove their tools actually work. Third-party measurement infrastructure—the unsexy plumbing that justified ad spend and built trillion-dollar markets—is what's missing from AI.
Expect a wave of measurement and governance companies to emerge over the next 12-18 months, mirroring the ad tech infrastructure boom. The AI vendors that welcome this scrutiny will win enterprise budgets. The ones that resist it will get crushed by competitors who can prove ROI.
As Rampell noted in the conversation, we're witnessing "software eating labor"—not eliminating jobs, but shifting how companies allocate between labor budgets and software budgets. If you have a $10 billion labor budget and a $1 billion software budget, AI allows you to shift to $8 billion in labor and $2 billion in software. The company becomes more profitable, but CFOs need proof that increased software spend is actually delivering results.
Without measurement infrastructure, that proof doesn't exist. And without proof, the current spending spree eventually hits a wall.
Watch the full conversation for deeper insights into measurement methodologies, Goodhart's Law pitfalls, interdepartmental productivity metrics, and why Fradin believes AI-driven mass unemployment is a myth.