VMblog: Can you give us your elevator pitch? What kind of message will attendees hear from you this year? What will they take back to help make the case to their management teams and decision-makers?
Ari Weil: Akamai is the cloud platform built for what's next. For years, the industry has been fixated on centralized hyperscaler clouds, but AI, media, and real-time applications need something different. Our message is simple: The future of cloud isn't about bigger data centers; it's about being closer to users, devices, and data sources. That's exactly what we've built.
We recently unveiled Akamai Cloud Inference, a new solution that makes AI inference faster, more cost-efficient, and more scalable. We're talking about 3x better throughput, 60% lower latency, and 86% lower costs than traditional hyperscale infrastructure. That's not just a stat; it's the difference between AI being an expensive experiment and something that actually delivers results. If you're a developer, you get easier access to CPUs, GPUs, VPUs, and edge compute without the usual lock-in. If you're a decision-maker, you get to cut costs and improve performance without re-architecting your entire stack.
The takeaway? If your business depends on AI, media, or any low-latency application, you need a cloud that moves with you. That's Akamai.
VMblog: In an increasingly crowded cloud-native and Kubernetes market, what makes your solution stand out in 2025? What makes it unique or differentiating?
Weil: Most cloud providers built their businesses around centralization: giant, expensive data centers in a few locations. That's fine for training AI models, but not for real-time inference, interactive applications, or anything that needs low latency.
Akamai took a different approach. We started with the most distributed network on the planet, optimized for high throughput and low latency, and built our cloud on top of it. The result? Compute, storage, and AI inference that happen where it makes sense, whether that's a core data center, a regional location, or the edge of the network, with the ability to manage massive amounts of data and decisions at near real-time performance.
That's a game-changer for Kubernetes. Developers get all the flexibility of K8s, but with infrastructure that actually matches how modern applications work, and the performance that users demand. Akamai is also a game-changer for AI inference. We're making it not just possible but affordable, cutting costs by 86% while delivering 3x better throughput. And because we aren't running the traditional hyperscaler playbook, we don't make you pay absurd egress fees just to access your own data.
This isn't just about competing in the cloud market; it's about redefining what cloud infrastructure should be in the first place.
VMblog: Are you launching any new products or features at KubeCon?
Weil: Yes, and we're not just making incremental updates. We're tackling some of the biggest pain points in Kubernetes adoption head-on.
Akamai Cloud Inference is our biggest launch this year: a new service that completely changes the economics of running AI workloads. It delivers 3x the throughput, 60% lower latency, and up to 86% lower cost than traditional hyperscale infrastructure. AI is shifting from big, centralized training models to lightweight, real-world inference, and most cloud providers aren't built for that.
Also launching are Accelerated Compute Instances, powered by NETINT VPUs (Video Processing Units). This is a first in the cloud industry: VPUs offer up to 20x better throughput than CPUs for workloads like live streaming, real-time video processing, and AI-driven content creation.
And finally, we're pleased to announce that Akamai's cloud computing service and CDN will support kernel.org, the main distribution system for Linux kernel source code and the primary coordination vehicle for its global developer network. Akamai is delivering, at no cost, the infrastructure that developers and users rely on for various Linux distributions, supporting the Git environments developers use to access kernel sources quickly, regardless of where they're based.
VMblog: Where can attendees find you at the event? What interactive experiences, demos, or activities have you planned for your booth?
Weil: We're at Booth #N160, and we're making it worth your time. We'll have demo stations showcasing:
- App Platform for LKE (Linode Kubernetes Engine) - See how to deploy, scale, and manage apps on our Kubernetes-based platform without the typical operational overhead. (A minimal deployment sketch follows this list.)
- LKE Enterprise Benchmarking - We'll share performance comparisons that show how our Kubernetes offering stacks up against the competition.
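To give a flavor of what the App Platform demo covers, here's a minimal sketch of deploying a stateless app on an LKE cluster using only standard Kubernetes resources. The names and image are placeholders, and it assumes kubectl is already pointed at your LKE kubeconfig; on LKE, a LoadBalancer Service is fulfilled by a Linode NodeBalancer.

```yaml
# Minimal sketch: a stateless web app on LKE (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer   # provisioned as a Linode NodeBalancer on LKE
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f hello-web.yaml`, and the app is reachable at the NodeBalancer's external IP.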
On top of that, we're hosting daily booth theatre sessions, including deep dives into the App Platform, the basics of LKE, and our partnership with Fermyon on SpinKube (a sample SpinApp manifest is sketched below). Oh, and we've got a Hackathon: win cloud credits and a limited-edition London trolley Lego set while you build something cool with multi-cloud, AI, or SpinKube.
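For the SpinKube-curious, deploying a WebAssembly app with the spin-operator comes down to a single custom resource. This is a minimal sketch, assuming the SpinKube operator and its containerd-shim-spin executor are already installed on the cluster; the image reference is a placeholder.

```yaml
# Minimal SpinApp sketch (assumes the SpinKube spin-operator and the
# containerd-shim-spin executor are installed; the image is a placeholder).
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: ghcr.io/example/hello-spin:latest   # placeholder Wasm app image
  replicas: 2
  executor: containerd-shim-spin
```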
If you care about AI, Kubernetes, or cloud cost savings, you'll want to stop by.
VMblog: Is your company involved in or presenting any sessions during the event? Can you give us the details? What key insights will attendees gain?
Weil: Yes, we're bringing some serious expertise to the stage this year. Our team is presenting multiple sessions covering Kubernetes, AI, storage, and cloud-native runtime advancements:
- "OTel-y Oops: Learning From Our Observability Blunders" (Tuesday, 3:55 - 4:20 BST)
- A candid discussion about observability challenges, what went wrong, and how to fix it.
- Speakers: Joe Stephenson & Rodney Karemba (Akamai)
- "Stateful Superpowers: Explore High Performance and Scalable Stateful Workloads on K8s" (Wednesday, 11:15 - 11:45 BST)
- Learn how to run high-performance, stateful applications at scale.
- Speakers: Alex Chircop & Chris Milsted (Akamai)
- "Cloud Native Storage and Data: The CNCF Storage TAG Projects, Technology & Landscape" (Wednesday, 2:30 - 3:00 PM BST)
- A deep dive into storage strategies for modern Kubernetes workloads.
- Speakers: Alex Chircop (Akamai) & Raffaele Spazzoli (Red Hat)
- "Real-Time AI Inferencing - Building a Speech-to-Text App" (Demo Theatre, Wednesday, 5:15 PM - 5:35 PM BST)
- Watch an AI-powered speech-to-text app in action, built for low-latency performance.
- Speakers: Sander Rodenhuis (Akamai) & Thorsten Hans (Fermyon)
- "Discover CNCF TAG Runtime: Advancing Cloud-Native Innovation from AI to Edge" (Friday, 2:30 PM - 3:00 PM BST)
- Explore cutting-edge AI and edge computing trends in cloud-native runtimes.
- Speakers: Stephen Rust (Akamai) & panelists from Microsoft, Intel, Snowflake, and Broadcom
If you're dealing with Kubernetes complexity, AI workloads, or cloud cost challenges, these sessions will be worth your time.
VMblog: How does your technology integrate with the broader CNCF ecosystem? What role do you play in the modern cloud-native stack?
Weil: We've been deep in the CNCF ecosystem for years: our Linode Kubernetes Engine (LKE) is CNCF-certified, we contribute to Envoy, Trousseau, and other projects, and we're backing critical open-source infrastructure like kernel.org. In fact, we're announcing at KubeCon that Akamai is picking up the hosting of kernel.org, bringing long-term stability, security, and unwavering support to the preservation of the open-source Linux operating system, the world's most widely deployed open-source software.
Also, with Akamai Cloud Inference, we're making Kubernetes-based AI workloads cheaper and faster. Our enterprise Kubernetes offering, Linode Kubernetes Engine - Enterprise, is purpose-built for large-scale AI inference and integrates open-source projects like KServe, Kubeflow, and SpinKube. We're not just throwing GPUs at the problem; we're enabling businesses to run AI where it makes the most sense: closer to the data, closer to users, and without the cost bloat of centralized hyperscaler pricing.
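To make the KServe integration concrete, here's roughly what serving a model looks like on any cluster with the KServe controller installed, LKE included. This is a minimal sketch; the model name and storageUri are placeholders.

```yaml
# Minimal KServe sketch: serve a scikit-learn model over HTTP
# (assumes the KServe controller is installed; storageUri is a placeholder).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-demo
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: s3://example-bucket/models/sklearn/model   # placeholder
```

KServe then exposes a standard prediction endpoint for the model and manages scaling of the predictor pods.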
VMblog: With the rise of hybrid operations, how do you help enterprises balance cloud-native and on-premises deployments?
Weil: Hybrid isn't a nice-to-have anymore; it's a necessity. Most enterprises are dealing with a mix of multiple clouds and on-prem, not because they want to, but because they have to. Whether it's regulatory requirements, data gravity, or just making sure their bills don't skyrocket, businesses need flexibility.
Akamai Cloud Inference fits right into that reality. Whether you need to process AI workloads in a data center, at the edge, or across multiple cloud providers, we give you the flexibility to do it without hyperscaler lock-in. Our approach is simple: keep what works, move what doesn't, and always keep costs predictable.
VMblog: What are the remaining barriers to Kubernetes adoption in 2025? How does your solution help overcome these challenges?
Weil: Complexity and cost. Kubernetes is amazing, but let's be real: it's still a massive headache for too many teams. And while it was supposed to free companies from vendor lock-in, hyperscalers have found new ways to keep customers dependent on their proprietary services.
Akamai is tackling both problems head-on. First, we make Kubernetes easier to run at scale. Whether you need a managed service like LKE Enterprise or you want to bring your own tooling, we let you operate how you want, without forcing you into a walled garden. We simplify multi-cloud and hybrid deployments so teams can focus on applications, not troubleshooting networking or storage issues.
Second, we slash the cost of running K8s in production. Traditional cloud providers make a killing on egress fees and hidden costs. We don't play that game. Akamai Cloud is designed for predictable pricing, and our global distribution means you can run Kubernetes closer to users, cutting costs without sacrificing performance.
The bottom line: Kubernetes should be an enabler, not a headache. We're making that a reality.
VMblog: What's your perspective on the intersection of DevSecOps and cloud-native in 2025? How do you help customers navigate this?
Weil: Security can't be an afterthought anymore. Too many teams still treat it as something you bolt onto Kubernetes clusters instead of designing it in from the start.
We take a developer-first approach to security. That means:
- Zero-trust networking baked into Akamai Cloud. (A default-deny starting point is sketched after this list.)
- Native container security to protect workloads from supply chain attacks.
- Seamless integration with leading security tools, so teams don't have to reinvent the wheel.
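As a concrete example of what zero trust means at the cluster level, a common starting point is a default-deny NetworkPolicy: block all pod traffic in a namespace, then explicitly allow only what each application needs. This sketch uses only standard Kubernetes resources; the namespace is a placeholder, and enforcement requires a CNI that supports NetworkPolicy.

```yaml
# Default-deny sketch: with no rules listed, all ingress and egress for pods
# in this namespace is blocked until other policies explicitly allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app   # placeholder namespace
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```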
At KubeCon, we're hosting a Hackathon focused on DevSecOps best practices. Attendees will compete to build the most secure, high-performing K8s applications, judged on real-world criteria like performance optimization, security hardening, and energy efficiency. Winners walk away with major cloud credits and exclusive swag.
VMblog: With AI and machine learning becoming increasingly central to cloud-native applications, how does your solution address these emerging needs?
Weil: AI is shifting from training to inference, and that changes everything. Training massive models in hyperscale data centers is one thing. Running those models in real time, close to users, at a price that makes sense? That's where Akamai Cloud wins. Our Akamai Cloud Inference service is built for this moment. We've partnered with VAST Data for high-performance AI storage, integrated NVIDIA GPUs for fast model execution, and even introduced VPUs (Video Processing Units) to handle video AI workloads at a fraction of the cost of traditional cloud setups. If you're running AI-powered apps, whether it's real-time language translation, personalized search, or fraud detection, we make it possible to execute those workloads closer to users, faster, and without breaking the bank.
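For teams mapping this onto Kubernetes, scheduling an inference workload onto a GPU node uses the standard device-plugin resource request. A minimal sketch, assuming the NVIDIA device plugin is installed on the cluster; the pod name and image are placeholders.

```yaml
# Minimal sketch: request one GPU for an inference container
# (assumes the NVIDIA device plugin is installed; names are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: asr-inference
spec:
  containers:
    - name: model-server
      image: ghcr.io/example/speech-to-text:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1   # whole-GPU request via the device plugin
```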
VMblog: How is your company addressing the growing focus on sustainability in cloud operations?
Weil: The dirty secret of AI is how much power it consumes. The bigger the model, the bigger the energy footprint. That's why we're taking a different approach. Instead of centralizing all that compute in a handful of mega data centers, we're distributing AI inference workloads across Akamai's global network.
By running inference closer to where data is generated and consumed, we eliminate unnecessary data transfers and reduce energy waste. And because our cloud model is fundamentally more efficient (fewer hops, less duplication, smarter workload placement), we're helping companies cut both their costs and their carbon footprint at the same time.
VMblog: How do you see the cloud-native landscape evolving post-2025? What should organizations be preparing for?
Weil: The old centralized cloud model isn't going away, but it's losing ground to a more distributed future. AI, media, and real-time applications are pushing infrastructure closer to the user. That's why companies should be thinking beyond "Which cloud region do I pick?" and more about "How do I make my cloud work wherever I need it?"
Akamai is already building for that future:
- AI inference at the edge
- Distributed Kubernetes that actually works
- Lower cloud costs without sacrificing performance
The shift is happening now. Companies that plan for it will be in a much better position than those stuck in yesterday's cloud model.
VMblog: What's your top piece of advice for attendees to make the most of KubeCon 2025?
Weil: Don't just walk the floor collecting swag; go to the talks that challenge your assumptions. If your cloud costs are ballooning, ask vendors the hard questions about pricing. If AI is on your roadmap, look beyond the hype and focus on what it actually takes to run inference in production. And if Kubernetes is still giving you headaches, talk to the teams that are simplifying it, not complicating it. If you stop by our booth, we'll give you a straight answer on how to do all three.