March 18, 2025

Mirantis Unveils k0rdent: Open Source Solution Tackles Multi-Cloud AI Workload Challenges at KubeCon Europe 2025

In an exclusive interview with VMblog, Dominic Wilde, Senior VP of Marketing at Mirantis, explains how the new k0rdent platform is transforming how enterprises deploy and manage Kubernetes-based AI workloads across cloud, datacenter, and edge environments. As a Gold Sponsor at KubeCon + CloudNativeCon Europe 2025, Mirantis is showcasing this 100% open-source Distributed Container Management Environment (DCME) that addresses the growing complexity of managing infrastructure for AI/ML applications. The company's expanded presence aligns with its recent upgrade to Gold Membership in the Cloud Native Computing Foundation (CNCF), demonstrating a deepening commitment to the cloud-native ecosystem. Visitors to booth N331 can experience live demonstrations of k0rdent, explore the new k0rdent Application Catalog with 19+ validated integrations, and get a sneak peek at upcoming KubeVirt capabilities for VM workload management on Kubernetes.

VMblog:  Can you give us your elevator pitch?  What kind of message will an attendee hear from you this year?  What will they take back to help sell their management team and decision makers?

Dominic Wilde:  Modern enterprises face massive challenges managing IT infrastructure, especially with AI and ML workloads adding new layers of complexity. Relying on proprietary cloud platforms and APIs isn't enough-organizations need open, flexible, and scalable solutions that empower platform engineers to take control. That's where k0rdent comes in: an open-source, declarative, and composable Kubernetes-native Distributed Container Management Environment (DCME), released on February 26 and sponsored by Mirantis. k0rdent provides the infrastructure plumbing AI/ML applications require while simplifying multi-cloud, multi-cluster operations.

We'll be demonstrating and talking about k0rdent at KubeCon, and about how it can be used to accelerate the buildout of AI applications - particularly inference workloads - from cloud to datacenter to edge.

Double-underlining ‘composability,' we're launching an open source k0rdent Catalog that delivers 19+ validated integrations with critical cloud-native tools for networking, security, storage, policy enforcement, CI/CD, and monitoring. This ecosystem ensures platform teams can quickly implement best practices and optimize AI workloads without the hassle of manual configuration. Attendees will leave understanding how k0rdent reduces lock-in, automates operations, and accelerates AI adoption, making it an easy sell to decision-makers.

We're also planning on demonstrating some new k0rdent capabilities that are pre-release, including the ability to use k0rdent to define, deploy, operate, and lifecycle-manage developer platforms that feature KubeVirt - the popular open source Kubernetes VM-hosting platform. We think KubeVirt offers a very interesting and practical option for organizations that want to move select VM workloads off proprietary legacy private cloud platforms and perhaps off public clouds as well, and get them onto Kubernetes where they can be managed harmoniously along with containerized applications. The ability of k0rdent to work on any infrastructure, plus the small, efficient footprint of k0s Kubernetes, promises to enable VM-hosting use cases at the near-edge (e.g., in branch offices) as well as in bare metal datacenters.

VMblog:  As a sponsor of KubeCon + CloudNativeCon Europe 2025, what level of sponsorship have you chosen, and what motivated this investment?

Wilde:  As a Gold Sponsor of KubeCon + CloudNativeCon Europe 2025, Mirantis has elevated its support from the previous Silver Sponsorship at KubeCon + CloudNativeCon North America 2024. This decision aligns with Mirantis' recent upgrade to Gold Membership in the Cloud Native Computing Foundation (CNCF), reflecting our commitment to advancing cloud-native technologies and fostering innovation within the community.

VMblog:  Where can attendees find you at the event? What interactive experiences, demos, or activities have you planned for your booth?

Wilde:  Attendees can find Mirantis at booth N331, where we've prepared an exciting lineup of interactive experiences.

We'll be showcasing a live demo of k0rdent, our latest open-source project, talking about the open source k0rdent Catalog, and demonstrating other things as well, including AI inference applications and our upcoming KubeVirt-on-k0rdent solution for VM hosting. We'll also have fun, hands-on activities and games. Visitors can participate in these activities for a chance to win exclusive, custom-designed t-shirts and even a limited-edition open-source skateboard deck. Be sure to stop by and join the fun!

VMblog:  How has your company's presence at KubeCon evolved over the years? What keeps bringing you back?

Wilde:  Mirantis has been part of KubeCon from its earliest days, and our involvement has grown in tandem with the community's focus on AI, ML, and platform engineering. We return every year because KubeCon is where the most forward-thinking minds in cloud-native infrastructure gather to collaborate on solving real problems. As open source drives AI beyond proprietary clouds, our commitment to projects like k0rdent-an open, declarative, composable solution-continues to expand. Our Open Source Program Office, led by Randy Bias, fuels upstream contributions and ecosystem partnerships. Ultimately, KubeCon is where we learn, share, and shape the future of Kubernetes and AI-driven workloads alongside the community.

VMblog:  Can you double click on your company's technologies?  And talk about the types of problems you solve for a KubeCon + CloudNativeCon attendee.

Wilde:  Mirantis is increasingly about leveraging open source, Kubernetes-native declarative and templated configuration, and other Kubernetes-centric standards (e.g., ClusterAPI) and approaches to solve the cloud and infrastructure plumbing problems under the most challenging of modern applications - centering on AI inference, Machine Learning, and similar applications that need distributed architectures that work from cloud to datacenter to edge. We offer a full portfolio of solutions, all grounded in open source, for attendees wrestling with real-world Kubernetes challenges - ranging from quick deployment of robust clusters to orchestrating complex AI and ML pipelines. So, attendees can engage with Mirantis at several levels of complexity, get involved with our numerous open source projects, and see how everything we sponsor and build hangs together.

At the foundation lies k0s, a CNCF-certified Kubernetes distribution that installs as a single binary with minimal dependencies. It scales from the far edge on IoT devices to massive data centers with thousands of nodes, making it an ideal starting point for any cloud-native strategy.

MKE 4-our enterprise Kubernetes (open source except for its CLI and a few other components)-further extends these capabilities by adding security, policy management, and ready-to-use add-ons, creating a more consistent operational experience.

For those dealing with AI/ML and massive platform sprawl, k0rdent leverages k0s and our k0smotron project (hosted control planes and ClusterAPI providers) to create a declarative, composable, and open source Distributed Container Management Environment (DCME). It simplifies and automates multi-cloud, multi-cluster deployments while providing standardized templates and policies for AI workloads. The new k0rdent Application Catalog streamlines integrations with networking, storage, security, and observability tools, ensuring frictionless setup from data center to edge.

KubeCon attendees face challenges like sprawling infrastructures, inconsistent cluster operations, and the complexity of emerging AI-driven workloads. By providing a consistent Kubernetes foundation, fully-composable CNCF open source solutions and other components, template-driven automation, and a full-featured approach to multi-cloud and multi-cluster developer platform creation and management, we help teams focus on innovation rather than wrestling with platform complexity.

VMblog:  In an increasingly crowded cloud-native and Kubernetes market, what makes your solution stand out in 2025? What makes it unique or differentiating?

Wilde:  In a market saturated with proprietary tools and single-vendor ecosystems, we believe our 100% open-source approach stands out in 2025. k0rdent, along with k0s and k0smotron, gives organizations a complete solution for building, running, and managing cloud-to-edge environments-crucial for AI inference and other demanding workloads. By using Kubernetes-native standards like declarative templates and continuous reconciliation, we deliver a self-healing, highly automated framework that supports your specific compliance, security, and data sovereignty needs.

Unlike many point solutions, k0rdent provides everything needed to create self-service developer platforms-clusters, core services, AI workloads, monitoring, and cost management-all unified under a single source of truth. This ensures that platform architects, developers, and business stakeholders can trust the same open-source toolkit to meet stringent cost and performance targets without vendor lock-in. In our view, that's what truly sets us apart in the evolving cloud-native landscape.

VMblog:  With AI and machine learning becoming increasingly central to cloud-native applications, how does your solution address these emerging needs?

Wilde:  We built k0rdent to handle the specialized demands of AI and machine learning at scale, aiming our solution at platform engineers, empowering them to build the developer platforms and self-service solutions developers and DevOps need. k0rdent uses a Kubernetes-native approach-declarative templates, automated policy enforcement, and continuous reconciliation-to deliver flexible, composable infrastructure that can quickly accommodate GPU-based clusters or large-scale inference deployments. By coordinating everything from network configurations and security policies to observability and cost monitoring, k0rdent provides a single framework for efficiently allocating resources, enforcing compliance, and keeping AI pipelines running smoothly. This ensures teams can focus on model development and innovation, rather than wrestling with operational complexity.
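As a rough illustration of what "declarative" means for GPU workloads: in stock Kubernetes, a workload requests accelerators through the `resources` stanza of its manifest, and controllers reconcile that request against available hardware. The sketch below renders such a manifest programmatically. The `nvidia.com/gpu` extended-resource name is standard Kubernetes convention from NVIDIA's device plugin; the `inference_pod` helper and image name are hypothetical, not k0rdent's actual template API.

```python
# Sketch: a declarative GPU workload spec, expressed as the kind of
# Kubernetes Pod manifest a template system might render.
# "nvidia.com/gpu" is the extended-resource name used by NVIDIA's
# device plugin; k0rdent's own template format may differ.

def inference_pod(name: str, image: str, gpus: int) -> dict:
    """Render a minimal Pod manifest requesting `gpus` GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {
                    # Extended resources are requested via limits;
                    # the scheduler places the Pod on a node with
                    # enough free GPUs.
                    "limits": {"nvidia.com/gpu": gpus},
                },
            }],
        },
    }

pod = inference_pod("llm-inference", "example.com/inference:latest", gpus=2)
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

The point of the declarative style is that this spec is data, not a script: a reconciliation engine can compare it against what is actually running and converge the two, rather than replaying imperative provisioning steps.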

VMblog:  Is your company involved in or presenting any sessions during the event? Can you give us the details?  What key insights will attendees gain?

Wilde:  Mirantis will be presenting in two sessions at KubeCon + CloudNativeCon Europe 2025: a keynote presentation and a breakout session.

Keynote Presentation

  • Date/Time: April 3, 2025 (Thursday), 10:03 - 10:05 BST
  • Location: Level 0 | ICC Auditorium
  • Speaker: Randy Bias, VP of Open Source & Technology
  • Title: Redefining MLOps at Scale on Top of Kubernetes

Key Insights:

  • AI adoption has rapidly accelerated, but managing Kubernetes sprawl and complexity remains a major challenge.
  • This keynote explores how Platform Engineering concepts can help simplify and scale MLOps by organizing Kubernetes into larger abstractions, reducing toil, and creating structured "Golden Paths" for developers.
  • Attendees will see a live demonstration of how open-source tools like k0rdent, k0s, k0smotron, Project Sveltos, and Helm come together to streamline AI deployment, making Kubernetes-based AI platforms easier to use and scale.

Breakout Session

  • Date/Time: April 3, 2025 (Thursday), 17:20 - 17:25 BST
  • Location: Room 6, Level 0, ICC Auditorium
  • Speaker: Prashant Ramhit, Senior DevOps & QA Engineer
  • Title: Solving Real-World Edge Challenges With k0s, NATS, and Raspberry Pi Clusters

Key Insights:

  • This session highlights a fascinating real-world edge computing project that monitors sea algae proliferation and coral growth in real time.
  • Attendees will learn how k0s (a lightweight CNCF-certified Kubernetes distribution) and NATS (a scalable messaging system) power a distributed Raspberry Pi cluster for real-time data collection and processing under resource constraints.
  • The session covers best practices for dynamically bootstrapping edge clusters, managing distributed workloads, and using NATS for scalable communication-offering valuable insights for anyone working on edge computing projects.

VMblog:  Which technical tracks or sessions at KubeCon 2025 align most closely with your company's vision?

Wilde:  We're looking forward to these sessions that dig very deep into aspects of AI workloads on Kubernetes:

  1. Dashboards & Dragons: Crafting SLOs to Tame the AI Platform Chaos
     Highlights observability and SLO design for large-scale AI, helping teams ensure reliable operations.
    https://kccnceu2025.sched.com/event/1tx75/
  2. Securing AI Workloads: Building Zero-Trust Architecture for LLM Applications
     Discusses zero-trust patterns for protecting AI workloads and data while meeting compliance demands.
    https://kccnceu2025.sched.com/event/1txCb/

VMblog:  What's your elevator pitch for a CTO or CIO? How does your solution impact the bottom line?

Wilde:  Many organizations struggle under the weight of today's patchwork solutions, especially as AI and ML workloads multiply. Proprietary clouds lock you into specific APIs and pricing structures, pushing costs ever upward. Meanwhile, managing massive clusters-or worse, an explosion of smaller ones-drains resources in both headcount and budget. GPU provisioning often devolves into tedious, one-off scripts that break whenever configurations change. And crucial layers like observability, policy management, or cost analytics typically reside in separate tools operated by disconnected teams. The result is a fragile ecosystem with minimal visibility, low agility, and no central point of control.

Mirantis k0rdent fixes these pain points by providing a single, open source control plane for designing and running AI-ready developer platforms wherever you need them: datacenters, public clouds, or edge locations. By leveraging Kubernetes-native, declarative workflows and composable automation, k0rdent cuts out repetitive configuration overhead, drastically reducing operational toil. You define everything once-clusters, GPU requirements, observability, cost monitoring, and policy enforcement-and k0rdent continuously reconciles that desired state across all your infrastructures.

This approach doesn't just simplify work for platform engineers; it also slashes costs. Teams can swiftly de-provision unneeded GPU resources, curb cloud overuse, and avoid expensive vendor lock-in. Moreover, by integrating validated templates and beach-head services from our open source catalog, you reduce time-to-market for AI initiatives and minimize the risk of misconfiguration. Centralized policy and data location management also ensures you remain compliant, even when scaling inference to the edge. Ultimately, k0rdent enables true workload portability, freeing your organization to move faster, optimize expenses, and reclaim control over how-and where-you run AI.

VMblog:  Can you walk us through a typical customer journey? What specific pain points do you address?

Wilde:  A typical customer is a platform architect or platform engineering team lead at a global corporation: operations spanning oceans, many locations, and substantial existing investments in public and private clouds. They have a strong understanding of containers and Kubernetes, but also plenty of legacy applications that aren't currently on a Kubernetes substrate. They're working to launch modern applications - maybe AI or ML workloads - on Kubernetes, and the solutions in front of them might work, but not at scale, and those solutions aren't helping them in all the places they need help: multi-cloud, multi-cluster, observability, cost management, compliance, along with all the technical moving parts their workloads and developers need.

Here are some of the problems they face, and how k0rdent addresses them:

  • Fragmented Infrastructure - Customers juggle different clouds, private data centers, and edge devices, each with its own provisioning methods. k0rdent unifies these environments under a single declarative control plane, so teams can automate deployments anywhere, from on-prem to GPU-equipped public clouds.
  • Complex Workload Stacks - Integrating observability, cost monitoring, policy management, and backup services across multiple clusters often takes months. With k0rdent's validated templates and composable open source catalog, teams can rapidly assemble developer platforms that include all necessary components-without reinventing the wheel.
  • GPU Configuration Headaches - Setting up clusters for diverse accelerators (GPUs, TPUs, FPGAs) typically involves custom scripts and manual efforts. k0rdent provides a standardized, Kubernetes-native way to declare and provision these specialized resources across infrastructure, drastically reducing errors and downtime.
  • Cost and Resource Sprawl - Either customers run massive, centralized clusters, paying for underutilized GPUs, or they spin up a swarm of isolated clusters with no unified visibility. k0rdent's single point of control lets them scale up or down quickly, track resource usage in real time, and optimize cost or performance as needed.
  • Maintenance and Drift - Once everything is cobbled together, updates or policy changes risk breaking the entire setup. k0rdent's continuous reconciliation engine automatically enforces the desired configuration and security posture, eliminating drift and simplifying upgrades across all environments.
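The continuous-reconciliation idea in the last bullet can be sketched in a few lines: a controller repeatedly diffs desired state against observed state and emits whatever actions are needed to converge them. This is a conceptual sketch of the Kubernetes-style pattern, not k0rdent's actual engine; the `reconcile` function and state shapes are illustrative only.

```python
# Conceptual sketch of Kubernetes-style continuous reconciliation:
# compare desired state to observed state and emit converging actions.
# Illustrates the pattern only - not k0rdent's actual implementation.

def reconcile(desired: dict, observed: dict) -> list:
    """Return (action, name) pairs that converge observed toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name))
        elif observed[name] != spec:          # drift detected
            actions.append(("update", name))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))  # prune orphaned resources
    return actions

desired = {"gpu-cluster": {"nodes": 4}, "edge-cluster": {"nodes": 1}}
observed = {"gpu-cluster": {"nodes": 3}, "stale-cluster": {"nodes": 2}}
print(sorted(reconcile(desired, observed)))
```

Run continuously, a loop like this is what "eliminating drift" means in practice: manual changes and failures show up as a diff on the next pass and are corrected automatically.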

By adopting k0rdent, these customers move from ad hoc, inconsistent practices to a coherent, fully open source framework that lets platform engineers, developers, and decision-makers focus on innovation rather than endless integration and troubleshooting.

VMblog:  How does your technology integrate with the broader CNCF landscape? What role do you play in the modern cloud-native stack?

Wilde:  Mirantis takes a deeply integrated, upstream-first approach to the CNCF ecosystem-so much so that we recently upgraded our membership to Gold. k0rdent, in particular, builds on widely adopted CNCF projects like Kubernetes, ClusterAPI, Helm, and more, and we actively contribute code upstream to ensure compatibility and innovation. Then, through our k0rdent Catalog, we curate a range of CNCF tools for networking, security, observability, and AI/ML support, helping platform engineers stitch these components together into a single, vendor-neutral stack. This means enterprises can quickly adopt best-of-breed open source solutions for multi-cloud and edge deployments, without the usual integration headaches. By unifying all these CNCF pieces under a declarative, composable framework, k0rdent gives organizations a clear path to building, operating, and scaling modern cloud-native services-ranging from traditional applications to complex AI inference workloads.

VMblog:  Are you launching any new products or features at KubeCon? What can attendees expect to see first at your booth?

Wilde:  We're unveiling k0rdent at KubeCon, along with the brand-new k0rdent Catalog featuring 19 validated integrations for AI workloads and core cloud-native services. Built to help platform engineers unify multi-cloud, multi-cluster deployments, k0rdent streamlines everything from GPU provisioning to cost monitoring and policy enforcement-all via open source, declarative automation. Attendees visiting our booth can expect live demos showcasing how k0rdent manages these components across diverse infrastructures, without vendor lock-in. They'll also learn about the Application Catalog, where pre-validated templates let teams quickly assemble security, observability, and CI/CD services into a single, scalable workflow. Finally, we'll demonstrate how k0rdent tackles AI inference scenarios, showing real-time provisioning of GPU-enabled clusters for high-performance, cost-optimized AI, and give a sneak peek at the pre-release KubeVirt-on-k0rdent capabilities for VM hosting described earlier.

VMblog:  What are the remaining barriers to Kubernetes adoption in 2025? How does your solution help overcome these challenges?

Wilde:  Many teams still find Kubernetes daunting because of its complexity, the manual integration work required to build reliable developer platforms, and the ongoing need to manage GPU clusters and edge deployments. Add AI and multi-cloud requirements, and the result is often a fragmented, high-maintenance environment that drains staff resources and leads to inconsistent security and compliance.

k0rdent tackles these challenges by unifying cluster provisioning, policy enforcement, observability, and cost analytics under a single, declarative framework. Rather than wrestling with different scripts or proprietary APIs per cloud, teams can set up and maintain both traditional and AI workloads in a consistent way-across on-prem, public cloud, and edge. k0rdent's continuous reconciliation ensures that the desired state of each cluster stays intact, eliminating drift and simplifying upgrades. Plus, the k0rdent Application Catalog provides validated templates and integrations, so even newcomers can quickly assemble a complete Kubernetes stack. By alleviating the operational overhead and skill gaps, k0rdent helps organizations embrace Kubernetes at scale and focus more on innovation than infrastructure.

VMblog:  With the rise of hybrid operations, how do you help enterprises balance cloud-native and on-premises deployments?

Wilde:  We see hybrid operations as the best of both worlds-but only if you have a consistent, unifying framework for clusters running in different places. That's exactly what k0rdent delivers. By leveraging ClusterAPI, k0rdent can provision and lifecycle-manage Kubernetes clusters anywhere: AWS, Azure, private data centers, or at the far edge. Then, its hosted-control-plane architecture (via k0smotron) lets you mix and match environments freely-for instance, running control planes in public cloud pods while hosting worker nodes on bare-metal servers that keep data local and secure. This approach allows enterprises to optimize for performance, cost, and compliance without juggling multiple disconnected tools or being forced to standardize on a single provider.

VMblog:  How is your company addressing the growing focus on sustainability in cloud operations?

Wilde:  Mirantis embraces a sustainability-first approach across our solutions and internal practices, complying with recognized standards and certifications to ensure responsible operations. Specifically, k0rdent helps organizations optimize hardware usage-especially GPU-intensive clusters-by continuously reconciling resource allocations, decommissioning idle infrastructure, and preventing wasteful overprovisioning. On top of that, k0rdent integrates cost and performance metrics for intelligent scaling, so businesses can align environmental goals with day-to-day operations. By adhering to global guidelines for energy efficiency and applying open source governance principles, Mirantis supports customers in meeting both regulatory requirements and their own sustainability targets. We believe these measures ultimately reduce emissions, keep costs down, and drive more sustainable cloud-native practices across the board.

Mirantis is ISO 14001 (Environmental management systems) certified, and we've worked closely to help customers such as Inmarsat create greener satellite networks and to support Société Générale's goal of carbon neutrality by 2050.

VMblog:  What's your perspective on the intersection of DevSecOps and cloud-native in 2025? How do you help customers navigate this?

Wilde:  DevSecOps has matured into an essential practice for organizations adopting cloud-native architectures-especially those doing complex AI or multi-cloud deployments. Yet many companies still wrestle with disconnected teams and tooling: DevOps engineers push new features fast, security teams try to enforce consistent policies, and MLOps specialists need GPU-friendly infrastructure. k0rdent bridges these gaps by providing a single, declarative system for provisioning clusters, applying security and compliance policies, integrating monitoring, and continuously reconciling them all at scale. Through a combination of composable templates and open source components, platform engineers can embed DevSecOps best practices directly into their developer platforms, ensuring that security controls-from vulnerability scans to configuration checks-stay in sync across thousands of clusters and workloads. This not only reduces complexity and human error but also allows developers, security pros, and MLOps teams to collaborate under a common framework, accelerating releases without compromising on compliance or risk management.

VMblog:  What booth activities or giveaways have you planned to engage with attendees?

Wilde:  At booth N331, we've planned exciting activities and exclusive giveaways! Test your knowledge with a fun quiz on Kubernetes and CNCF open-source projects for a chance to win a limited-edition skateboard deck.

Swing by to grab an exclusive Mirantis t-shirt and other cool swag! Whether you're tackling multi-cluster chaos or just love great design, you won't want to miss this must-have merch. Visit booth N331 to play, win, and take home awesome prizes!

VMblog:  How do you see the cloud-native landscape evolving post-2025? What should organizations be preparing for?

Wilde:  We expect cloud-native platforms to become more dynamic, distributed, and AI-centric. Teams will increasingly adopt ephemeral clusters that spin up on-demand-whether for inference, data processing, or other workloads-and then shut down when no longer needed. At the same time, data sovereignty rules and compliance constraints will keep pushing enterprises to manage workloads across diverse clouds and on-premise sites. Organizations should prepare by investing in open, declarative solutions that tie these environments together-making it easier to run modern applications while still preserving freedom of choice, security, compliance, and cost controls.

Open source is essential to all of this, and we see solutions like k0rdent as taking on the role of solving the big, historic challenges that crop up when enterprises try to leverage open source at scale. k0rdent solves the ‘excessive choice' and ‘integration science project' problems by integrating all the upstream work of CNCF projects into one 100% open source solution, plus a library of composable open source (and select popular proprietary) components. It solves the ‘you can have any color so long as it's black' problem by enabling genuine, customer-driven, use-the-technologies-and-cloud-services-you-like flexibility, via Kubernetes-native standards like ClusterAPI and template-driven, declarative platform definition using composable open source components from the k0rdent Catalog. And all of this is validated and backed by the k0rdent project team and Mirantis, eliminating trust issues in the open source software supply chain.

VMblog:  What big changes or trends does your company see taking shape for 2025?

Wilde:  In 2025, AI goes beyond just large cloud-based models-organizations are exploring specialized open source small language models (SLMs) for deployments at the edge or in highly regulated environments. Since SLMs can run efficiently on smaller GPUs or even CPU-only infrastructure, they open up new possibilities for private, on-site inference workloads. This shift demands an approach that can handle everything from micro-clusters at remote sites to large, data-center-scale GPU farms-all while ensuring data sovereignty, lowering operational costs, and minimizing latency.

We see k0rdent's role as enabling these capabilities through a coherent, open source framework. You can spin up clusters dedicated to smaller, domain-specific models at branch offices or on edge devices, using Kubernetes-native declarative workflows. Meanwhile, centralized AI operations-like monitoring, policy enforcement, and cost optimization-remain consistent across all these clusters. By unifying how AI workloads are defined and managed (from small, resource-sparing SLMs to massive training jobs), k0rdent lets teams take advantage of emerging open source solutions, move faster with edge deployments, and scale seamlessly without becoming locked into a single cloud provider.

VMblog:  What's your top piece of advice for attendees to make the most of KubeCon 2025?

Wilde:  Show up ready to explore, connect, and learn. KubeCon can feel huge, but the real magic happens when you get hands-on with workshops, attend deep-dive sessions that stretch your skill set, and take advantage of open hallway conversations. Seek out projects or SIGs that align with your interests, and don't be afraid to approach maintainers or fellow attendees for guidance on real-world challenges. Contributing-even in a small way-can spark valuable relationships and help you level up your cloud-native journey.

Be sure to download and use the official KubeCon + CloudNativeCon schedule and app. It's your personal command center for the event: bookmark sessions, track last-minute updates, and discover new talks that match your interests. The app also helps you network with speakers and fellow attendees, set up meetups, and keep a pulse on everything happening at the conference.

David Marshall

David Marshall has been involved in the technology industry for over 19 years, and he's been working with virtualization software since 1999. He was able to become an industry expert in virtualization by becoming a pioneer in that field - one of the few people in the industry allowed to work with Alpha stage server virtualization software from industry leaders: VMware (ESX Server), Connectix and Microsoft (Virtual Server).

Through the years, he has invented, marketed, and helped launch a number of successful virtualization software companies and products. David holds a BS degree in Finance, an Information Technology Certification, and a number of vendor certifications from Microsoft, CompTIA, and others. He's also co-authored two published books, "VMware ESX Essentials in the Virtual Data Center" and "Advanced Server Virtualization: VMware and Microsoft Platforms in the Virtual Data Center," and acted as technical editor for two popular Virtualization "For Dummies" books. In his remaining spare time, David founded and operates one of the oldest independent virtualization news blogs, VMblog.com, and co-founded CloudCow.com, a publication dedicated to Cloud Computing. From 2009 through 2016, David was honored with the vExpert distinction by VMware for his virtualization evangelism.
