VMblog: If you were giving a KubeCon attendee a quick overview of the company, what would you say? How would you describe the company?
Kevin Cochrane: Vultr is the largest privately-held cloud computing platform, offering unmatched usability, performance, pricing, and global reach. With 1.5 million customers in 185 countries, we stand out as the leading alternative hyperscaler, catering to enterprise-level businesses in sectors including healthcare, finance, telecom, retail, media, entertainment, and manufacturing.
Today, we provide a range of services, including Cloud Compute, Cloud GPU, Bare Metal, Managed Kubernetes, Managed Databases, Cloud Storage, and Networking solutions, enabling customers to achieve global reach and high performance while simplifying deployment and scaling of cloud-native and AI-native applications worldwide, all at a reduced cost.
VMblog: How can attendees of the event find you? What do you have planned at your booth this year? What type of things will attendees be able to do at your booth?
Cochrane: We welcome attendees to stop by the Vultr booth (P26) to learn more about how to leverage our solutions to deploy and scale AI models globally. The team will be hosting demos and can answer any questions attendees may have about getting started with Vultr.
VMblog: Can you double click on your company's technologies? And talk about the types of problems you solve for a KubeCon + CloudNativeCon attendee.
Cochrane: As mentioned, Vultr offers a range of services, from cloud compute and cloud GPU to bare metal, managed Kubernetes, and more. Most recently, we announced a few updates to our Serverless Inference platform, aimed at helping enterprises and digital startups alike thrive in the age of agentic AI.
We expect agentic AI to be the next big frontier in AI, as AI agents are poised to completely transform business. But to unlock their full potential, organizations need flexible, scalable, high-performance computing resources at the edge, closer to the end user. Serverless Inference is the only alternative to hyperscalers, offering the freedom to scale custom models with a user's data sources without lock-in or compromising IP, security, privacy, or data sovereignty.
The expansion of our platform introduces powerful new capabilities to empower businesses to autoscale models and leverage Turnkey Retrieval-Augmented Generation (RAG) in real time, to deliver performant model inference at the edge - using Meta Llama 3 or proprietary models. Turnkey RAG also eliminates the need to send data to publicly trained models, reducing the risk of data misuse while leveraging the power of AI for custom, actionable insights. Meanwhile, with Vultr's OpenAI-compatible API, businesses can integrate AI into their operations at a significantly lower cost per token compared to OpenAI's offerings, making it an attractive option for organizations looking to implement agentic AI.
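Because the API follows the OpenAI wire format, existing OpenAI-style client code can in principle be repointed at a Vultr Serverless Inference endpoint just by changing the base URL and key. The sketch below is illustrative only: the endpoint URL and model identifier are assumptions, not documented values, so check Vultr's API documentation for the real ones.

```python
import json
from urllib import request

# Assumed Serverless Inference endpoint; verify against Vultr's docs.
BASE_URL = "https://api.vultrinference.com/v1"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style /chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def chat(api_key: str, model: str, user_message: str) -> str:
    """POST the payload to the (assumed) OpenAI-compatible endpoint."""
    payload = build_chat_request(model, user_message)
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example call (requires a real key; model ID is a placeholder):
# print(chat("YOUR_API_KEY", "llama-3-8b-instruct", "Summarize our release notes."))
```

Since the payload shape is the standard OpenAI one, swapping between OpenAI and a compatible provider is mostly a matter of changing `BASE_URL` and the model name.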
VMblog: While thinking about your company's solutions, can you give readers a few examples of how your offerings are unique? What are your differentiators? What sets you apart from the competition?
Cochrane: There are a few things that set us apart from the competition. The first is our global reach. Vultr is the only independent cloud vendor that competes with the hyperscalers across six continents. In fact, we have over 32 cloud data center locations worldwide, providing frictionless provisioning of public cloud, storage, and single-tenant bare metal.
Secondly, Vultr is the only composable/MACH Alliance-certified global cloud vendor, enabling enterprise and innovator teams to scale their digital AI infrastructure without traditional vendor lock-in. Last year, we launched the Vultr Cloud Alliance, which includes a marketplace of plug-and-play services from leading Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) providers, to enable customers to build agile cloud operations that can scale and evolve to meet their needs at every stage. The Cloud Alliance gives customers a simple, intuitive control panel that makes it easy to deploy infrastructure and add services from one central portal. Meanwhile, composable enterprise-grade cloud infrastructure and services, along with powerful API automation, allow developers to seamlessly assemble and scale modern cloud operations on demand - regardless of location.
Lastly, we are the only independent cloud vendor that enables teams to train their AI models anywhere, but scale everywhere. As it becomes increasingly complex to manage and deploy AI models, Vultr Cloud Inference leverages our global infrastructure network to accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions. This in turn enables AI innovations to have maximum impact by simplifying deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.
VMblog: Where does your company fit within the container, cloud, Kubernetes ecosystem?
Cochrane: Vultr is the leading alternative hyperscaler. As such, we are paving the way for AI-driven applications, collaborating closely with our customers to address key challenges and implement cutting-edge cloud infrastructure. Our solutions are designed to help organizations efficiently scale their Kubernetes deployments, positioning them for success in the ever-evolving AI landscape.
Kubernetes is complex, and we believe that our customers should not have to spend their time managing clusters. The Vultr Kubernetes Engine (VKE) is a fully managed product offering that makes Kubernetes easy to use. We manage the control plane and worker nodes, and provide integrations with other managed services such as Load Balancers, Block Storage, and DNS.
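As a rough illustration of what "fully managed" means in practice, provisioning a cluster reduces to one API call describing the desired node pools, with the control plane handled by the provider. The sketch below builds such a request against Vultr's HTTP API; the endpoint path, region code, version string, and plan ID are assumptions based on typical managed-Kubernetes APIs, so consult the official Vultr API reference before relying on them.

```python
import json
from urllib import request

# Assumed VKE provisioning endpoint; verify against the Vultr API reference.
API_URL = "https://api.vultr.com/v2/kubernetes/clusters"


def build_cluster_spec(label: str, region: str, version: str,
                       nodes: int, plan: str) -> dict:
    """Describe the desired cluster: one worker node pool, managed control plane."""
    return {
        "label": label,
        "region": region,        # assumed region code, e.g. "ewr"
        "version": version,      # assumed Kubernetes version string
        "node_pools": [
            {"node_quantity": nodes, "plan": plan, "label": f"{label}-workers"},
        ],
    }


def create_cluster(api_key: str, spec: dict) -> dict:
    """POST the spec; the provider brings up control plane and workers."""
    req = request.Request(
        API_URL,
        data=json.dumps(spec).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


spec = build_cluster_spec("demo", "ewr", "v1.29.1+1", nodes=3, plan="vc2-2c-4gb")
# create_cluster("YOUR_API_KEY", spec)  # requires a real key
```

The point of the sketch is the division of labor: the caller declares label, region, and node pools, and everything below that line, including etcd, the API server, and node lifecycle, is the provider's problem.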
VMblog: With regard to containers and Kubernetes, is there anything holding it back from a wider distribution? If so, what is it? And how do we overcome it?
Cochrane: Kubernetes is becoming easier to use, thanks to cloud providers like Vultr, which simplify the experience for developers by offering managed services that ensure 100% uptime for Kubernetes clusters globally. However, a significant issue that often goes unaddressed is the challenge of applying Kubernetes to new AI-native applications and managing the scalability of AI inference models within these clusters. There's a need to rethink the operational practices and guidelines for hosting containerized applications on Kubernetes.
This is where Vultr comes in, assisting customers in adapting to a new framework for managing containerized inference models on Kubernetes. Until recently, there has been limited progress in developing tools for an integrated pipeline of AI models - covering training, tuning, inference, and global scalability within a Kubernetes cluster. At Vultr, we are leading the way in this new era of AI-native applications, collaborating with our customers to tackle these challenges and establish a cloud infrastructure that enables organizations to scale their Kubernetes deployments for AI advancements.
VMblog: Are companies going all in for the cloud? Or do you see a return back to on-premises? Are there roadblocks in place keeping companies from going all cloud? And if so, what are they, and how do they address that challenge?
Cochrane: In a new industry report commissioned by Vultr and conducted by S&P Global Market Intelligence, The New Battleground: Unlocking the Power of AI Maturity with Multi-Model AI, research found that in 2025, the AI infrastructure stack will be hybrid cloud, with 35% of inference taking place on-prem and 38% in the cloud/multi-cloud. I think we can attribute this to companies embracing cloud solutions in recent years and recognizing the flexibility and scalability they offer. Rather than companies returning to on-premises solutions, I foresee us entering an era of composable cloud architectures, which will allow organizations to mix and match various cloud services into their ideal configuration while maintaining critical on-premises components as needed.
Data security and compliance are top concerns that have hindered a complete shift to the cloud, especially for industries handling sensitive information. Additionally, legacy systems and integration complexities create challenges for organizations, as many companies that already have substantial investments in on-premises infrastructure may find the shift to cloud to be daunting. To address these challenges, companies can adopt a phased approach to cloud migration by assessing their existing workloads, prioritizing applications and leveraging compatible cloud strategies to create an environment that supports innovation while addressing security and compliance hurdles.