Poolside CEO Jason Warner believes we're just three years away from AI systems capable of handling most information work—and he's building the infrastructure to make it happen.
In a wide-ranging conversation, Warner shared Poolside's unique approach to building AI coding assistants and his ambitious vision for the future of artificial intelligence. While the company recently made headlines with its massive $500M funding round at a $3B valuation, Warner's timeline for artificial general intelligence (or as he prefers to call it, “intelligence on compute”) is what really caught our attention.
“I think about 36 months,” Warner said when asked when AI will be able to handle typical information work. “Doesn't mean it will be, but it'll be able to be.”
For Warner, the path to this future runs through code. Poolside has built what he calls a “reinforcement learning via code execution feedback” (RLCEF) system—essentially an environment where AI can generate and test code, learning from what works and what doesn't.
This approach has allowed Poolside to create nearly a trillion “net new novel synthetic tokens” of training data, dramatically reducing their dependence on web-scraped content.
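Poolside hasn't published the internals of RLCEF, but the loop Warner describes can be sketched: a model proposes code for a task, the code runs against tests, and pass/fail results become both a reward signal and fresh training data. Here's a minimal illustrative sketch in Python (all function names and data shapes are hypothetical, and a real system would isolate execution in a proper sandbox rather than calling `exec` directly):

```python
def run_candidate(code: str, test_cases) -> float:
    """Execute candidate code in a scratch namespace and score it
    against (args, expected) test cases. Returns a reward in [0, 1]."""
    ns: dict = {}
    try:
        exec(code, ns)  # real systems would sandbox this execution
        fn = ns["solve"]
        passed = sum(1 for args, want in test_cases if fn(*args) == want)
        return passed / len(test_cases)
    except Exception:
        return 0.0  # crashing or malformed code earns zero reward

def collect_feedback(candidates, test_cases):
    """Turn (prompt, generated code) pairs into scored training
    examples: the 'code execution feedback' that drives training."""
    return [
        {"prompt": prompt, "code": code,
         "reward": run_candidate(code, test_cases)}
        for prompt, code in candidates
    ]

# Two hypothetical model outputs for the same prompt:
candidates = [
    ("add two numbers", "def solve(a, b):\n    return a + b"),
    ("add two numbers", "def solve(a, b):\n    return a - b"),  # buggy
]
tests = [((1, 2), 3), ((5, 5), 10)]
data = collect_feedback(candidates, tests)
# The correct candidate scores 1.0; the buggy one scores 0.0.
```

Scored examples like these, generated and filtered at scale, are one plausible route to the "net new novel synthetic tokens" Warner mentions.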
What makes Poolside different from competitors? They're focused on enterprise deployment with government, defense, and finance clients who need maximum security:
- On-premises installation in air-gapped environments.
- Private VPC deployment options.
- Custom model training on each client's own code.
- Careful filtering of license types (removing GPL and other “viral” open source licenses) to protect enterprise clients.
“Two specific things are different about us than almost anybody else on the planet,” Warner explained. “We can install on-prem...and we take our state-of-the-art models and marry it with their data and context and further train Poolside on their data.”
This approach contrasts with competitors like Anthropic and OpenAI, which require sending data “over the wire” to their clusters. For regulated industries with proprietary code, this difference is crucial.
Warner sees software development as the perfect first domain for pushing AI forward because:
- It's a massive market.
- Success creates a self-reinforcing loop for improving their models.
- Software has known, testable outputs that can train AI systems.
Warner says Poolside sells to “government defense, very large FSIS banks... legacy tech, enterprise tech folks” and focuses on “5,000 developer and above organizations.”
While he doesn't provide exact numbers, his wording suggests they're still in the early stages of commercial deployment, with “a handful of live customers and even more... POCs or POVs” (proofs of concept or proofs of value).
He emphasizes they're working with “some of the most interesting and hardened and most compliant and regulatory rigorous environments on the planet”—organizations that cannot send data outside their corporate boundaries.
Warner also mentions their partnership with Amazon: “We have Amazon, AWS, as a first party partner... And they're selling Poolside direct” through the AWS Marketplace. He notes they're currently focused on “enabling them and making sure they can sell effectively through this channel.”
Most importantly, software could be the key to unlocking a “holy grail” AI training dataset. Warner explained that while we have inputs and outputs for AI training, we're missing the “thought process” data—how humans get from inputs to outputs. Programming, with its logical steps and testable outcomes, provides that missing link.
“The holy grail in all this is to get a thought process dataset,” Warner said. “The larger thought process dataset you could have, the closer you can get to intelligence on compute faster. And what's the largest domain that we can have, that we can generate one of those that's testable? Software.”
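Warner doesn't spell out what a record in such a dataset would look like. One plausible shape, sketched here with hypothetical field names, pairs a problem with the intermediate reasoning steps and a final solution, and admits the record only when the solution verifiably passes a test; execution is what makes the reasoning trace trustworthy in a way free-text reasoning is not:

```python
def verify_record(record: dict, check) -> bool:
    """Keep a (problem, reasoning trace, solution) record only if
    its solution actually passes the supplied check function."""
    ns: dict = {}
    exec(record["solution"], ns)  # a real pipeline would sandbox this
    return check(ns)

# A hypothetical thought-process record:
record = {
    "problem": "reverse a string",
    "trace": [
        "strings are sequences, so slicing applies",
        "a step of -1 walks the sequence backwards",
    ],
    "solution": "def rev(s):\n    return s[::-1]",
}

# The trace enters the dataset only because the output checks out.
kept = verify_record(record, lambda ns: ns["rev"]("abc") == "cba")
```

This is the sense in which software is uniquely suited to the job: unlike essays or mathematical proofs in prose, the endpoint of a coding trace can be checked mechanically at scale.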
Why does this matter? Warner believes that achieving this level of AI represents a fundamental shift in human productivity—comparable to, but even more profound than, the invention of electricity.
“All throughout history, up until a certain point in time, GDPs [of] countries were limited by the population,” he explained. “But imagine they take this concept [intelligence on compute] and say, 'We're going to help out on this because what we're after is an ability to get intelligence on compute so that we as Canada can turn the dial a little bit and then increase the output of our entire country from a GDP perspective.'”
That dial—the ability to direct computational resources at previously intractable problems like drug discovery or cancer research—represents a new paradigm where the limiting factor isn't human intelligence but compute power and energy.
Warner anticipates that in the next decade, three things will be critically important:
- Access to energy.
- Massive-scale compute infrastructure.
- Intelligence running on that compute infrastructure.
The Poolside CEO is also pragmatic about competition. He believes only about 16 companies worldwide have a shot at achieving AGI, including OpenAI, Anthropic, and, of course, Poolside.
When asked why competitors aren't using his approach, he suggested they eventually will: “I always assume that my competitors, if I can conceive them to be competitors, will do the smartest possible thing in the future. And this is the smartest possible thing.”
He also noted that building a reinforcement learning environment for code is "massively hard" and requires different expertise than model-building. "This is a core distributed system, 'I need to build AWS' problem, which is why half of our folks inside our organization are applied researchers, and the other half are people who have built AWS, GCP, massive distributed system runtime environments."
Our take: Whether Warner's 36-month timeline proves accurate or not, his approach highlights a key insight that's often overlooked in AI discussions. While many focus on bigger models with more parameters, Poolside is betting on the quality and structure of the training data, specifically on creating that missing "thought process" dataset through code execution. Richard Sutton's "Bitter Lesson" essay argued that AI progress has consistently come from general methods that scale with compute, namely search and learning, rather than from hand-engineered knowledge. Generating and verifying synthetic training data by running code at massive scale is exactly that kind of compute-leveraging method.
In a world where everyone's chasing the same web-scraped data, Poolside's synthetic data generation could prove to be a crucial differentiator. The race to AGI isn't just about who has the most GPUs (though Warner admits he's behind the leaders there)—it's also about who can build the right systems and generate the right data to train those models effectively.