VMblog: If you were giving a KubeCon attendee a quick overview of the company, what would you say? How would you describe the company?
Karthik Krishnaswamy: ngrok is a secure unified ingress platform that combines global server load balancing, reverse proxy, firewall, API gateway, and Kubernetes Ingress Controller to deliver applications and APIs across every stage of the development lifecycle from any cloud, datacenter, or home network. ngrok brings secure ingress to your test/dev environments, to external networks you don't control, and to apps and APIs in production. Over 5 million developers use ngrok for testing webhooks and previewing apps, and the world's top technology brands - including Microsoft, GitHub, Okta, Shopify, Zoom, and Twilio - advocate ngrok as a key integration partner. ngrok is venture backed by Lightspeed Venture Partners and Coatue. You can learn more at ngrok.com and follow the company on X (formerly Twitter) and LinkedIn.
VMblog: What kind of message will an attendee hear from you this year? What will they take back to help sell their management team and decision makers?
Krishnaswamy: The message is simple: ngrok simplifies application delivery with a unified ingress platform powered by a global network. We unlock the following use cases:
- Ingress for Dev/Test environments: ngrok brings secure connectivity to apps and APIs in localhost and dev/test environments with just one command or one function call. Developers need this to test integrations with webhook providers and external APIs, preview websites and applications with teammates, and test and validate mobile backends.
- Ingress for external networks - customer environments: You can now access customer environments without requiring any changes to their network configurations. ngrok equips you to deploy Bring Your Own Cloud (BYOC) architectures without friction, accelerating customer deployments.
- Ingress for external networks - devices and machines: You can standardize network access across your entire IoT fleet without requiring any support from your partners or changes to partners' network configurations. ngrok enables you to scale connectivity and device management for your IoT fleet.
- Ingress for production applications and APIs: ngrok unifies point tools such as global server load balancing (GSLB), web application firewall (WAF), load balancers, Kubernetes Ingress Controllers, and API gateways into a single SaaS offering, eliminating complexity in application and API delivery. Offload traffic routing, security, load balancing, and observability to ngrok's global network without sacrificing performance or security.
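To make the dev/test use case above concrete, here is a minimal local webhook receiver built with Python's standard library. The port and endpoint path are arbitrary choices for this sketch; in practice you would expose the receiver to the webhook provider through a tunnel (for example, `ngrok http 8000`) rather than opening firewall ports.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal local receiver for webhook callbacks during testing."""

    def do_POST(self):
        # Read and parse the provider's JSON payload from the request body.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Inspect the payload here (event type, signature headers, etc.).
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging during local testing

def serve(port=8000):
    """Listen on localhost. A tunnel such as `ngrok http 8000` would make
    this endpoint reachable by the webhook provider without any firewall
    or router changes."""
    HTTPServer(("127.0.0.1", port), WebhookHandler).serve_forever()
```

Once tunneled, the public URL the tunnel assigns is what you register as the callback URL in the provider's dashboard.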
VMblog: Your company is sponsoring this year's KubeCon + CloudNativeCon event. Can you talk about what that sponsorship looks like?
Krishnaswamy: ngrok is a Silver tier sponsor this year, and we're excited to have conference attendees stop by our booth during exhibit hours. We're also hosting a coffee bar on the exhibit floor!
VMblog: How can attendees of the event find you? What do you have planned at your booth this year? What type of things will attendees be able to do at your booth?
Krishnaswamy: Our booth number is O24! Attendees can stop by for a demo, collect some swag, and connect with our team - including our founder and CEO, Alan Shreve.
VMblog: Have you sponsored KubeCon + CloudNativeCon in the past? If so, what is it about this show that keeps you coming back as a sponsor?
Krishnaswamy: We sponsored KubeCon + CloudNativeCon Europe earlier this year in Amsterdam. The audience at that conference was fantastic, and we were thrilled by the level of discourse we could engage in. KubeCon events are valuable to us not just for community engagement but also for the partner relationships they have fostered and the product inspiration we've drawn from them.
VMblog: Do you have any speaking sessions during the event? If so, can you give us the details?
Krishnaswamy: We are doing a joint demo with Linkerd at our booth (O24) from 10:30-11 a.m. on Wednesday, Nov. 8.
VMblog: What are you personally most interested in seeing or learning at KubeCon + CloudNativeCon?
Krishnaswamy: The major themes that everyone (us included!) seems to look forward to hearing about are AI, observability, and security. However, it's always good to hear from developers directly about what they are doing with Kubernetes and other Cloud Native technologies - what problems they're solving, tools they're using, and headaches they are facing. Having those conversations around the expo hall and in the hallways is always a highlight of KubeCon.
VMblog: Can you double click on your company's technologies? And talk about the types of problems you solve for a KubeCon + CloudNativeCon attendee.
Krishnaswamy: Bringing ingress to your apps and APIs is a frustrating exercise of wrangling a slew of disparate low-level networking primitives. Developers must manage a number of technologies at different layers of the network stack, such as DNS, TLS certificates, network-level CIDR policies, IP and subnet routing, load balancing, VPNs, and NATs, just to name a few. In short, developers are being forced to work with the assembly language of networking, and dev teams often find themselves changing both their code and their network configurations just to get traffic to their apps.
This presents a challenge in the following scenarios:
- Ingress for test/dev environments: Developers waste a lot of time and effort tweaking firewall settings and routing configurations to test integration with webhook providers or preview websites and applications with their team members.
- Ingress for external networks - customer environments: Securing network access to a customer's environment - to process their on-premise data, for instance - is time-consuming and cumbersome, as you have to grapple with VPNs, VPC peering, PrivateLink, and firewall configurations, which require extensive security reviews and approvals. This delays the time to value for customers.
- Ingress for external networks - devices and machines: Connecting to IoT devices that are not part of the corporate network is daunting. Each site where these devices are deployed has its own ISP, networking, port forwarding rules, and firewall setup, so custom programming and configurations are required. This is impossible to achieve at scale, and misconfigurations can cause downtime.
- Ingress for production applications and APIs: Delivering an application or API in production requires stitching together many technologies, including WAFs, GSLBs, load balancers, reverse proxies, Kubernetes Ingress Controllers, and API gateways, as well as coordination with NetOps, SecOps, and security teams. This cumbersome process slows down innovation and competitiveness.
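Taking the first bullet as an example: a large part of webhook testing is verifying the provider's signature against the raw request body. A generic HMAC-SHA256 check - the scheme used, with provider-specific header names and encodings, by services such as GitHub and Stripe - can be sketched in Python as:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 webhook signature against the raw body.

    Providers sign the exact request bytes with a shared secret; consult
    your provider's docs for the header name and encoding it uses."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature_hex)
```

Being able to exercise this logic against real provider deliveries, on localhost, is exactly the loop that dev/test ingress shortens.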
ngrok solves these problems by radically simplifying ingress with just one command or one function call, powered by an always-on service. Think of ngrok as your app's front door. ngrok is a globally distributed network that secures, protects, and accelerates your applications and services, no matter where you run them.
ngrok's unified ingress platform consists of the core application and API delivery layer that combines Global Server Load Balancing (GSLB), reverse proxy, API gateway, and Kubernetes Ingress Controller. The security layer consists of a firewall that protects against DDoS attacks by preventing unauthorized requests from reaching the origin server. The observability layer enables developers and operations teams to gain visibility into traffic flows, and traffic events can be forwarded to your preferred observability tool, such as CloudWatch or Datadog. Our platform comes with robust control and governance capabilities to ensure compliance with corporate security policies.
VMblog: While thinking about your company's solutions, can you give readers a few examples of how your offerings are unique? What are your differentiators? What sets you apart from the competition?
Krishnaswamy: Our differentiators are:
- Ingress-as-a-service powered by a global network: With ngrok, enterprises can unlock the power of SaaS to deliver high-performance applications and APIs with zero networking configuration and zero hardware.
- Stack fit: Developers can easily integrate ngrok into their tech stack. They can script and explore with the built-in CLI, embed secure ingress directly into their apps with our SDKs, and access all functionality via APIs.
- Environment independence and portability: You can run your services anywhere. Deliver applications from any cloud, any application platform, an on-premise data center, a Raspberry Pi in your home, or even your laptop.
- Pay only for what you use: Reduce the overhead of setting up and maintaining ingress-related services with our pay-as-you-go subscription plan. Your monthly spend is based on your actual usage - the active endpoints sending or receiving data via the ngrok service in a billing cycle.
- Massive user base: Loved by more than 5 million developers and trusted by Cloud 100 companies, including Twilio, Microsoft (GitHub), and Schneider Electric.
VMblog: Where does your company fit within the container, cloud, Kubernetes ecosystem?
Krishnaswamy: Enterprises can deliver apps and APIs from any environment - localhost, test/dev, public cloud, private cloud, or a Raspberry Pi, as well as Kubernetes. Our Ingress Controller for Kubernetes provides ingress-as-a-service to your Kubernetes applications by offloading traffic management, performance, and security to ngrok's global Points of Presence (PoPs). We designed the ngrok Ingress Controller for Kubernetes to integrate seamlessly into the Kubernetes ecosystem, and we validated the design through iterations driven by community feedback and by dogfooding it in our own production environment.
Because it leverages native Kubernetes tooling, you can configure the route, host, and downstream service via the standard Kubernetes ingress object, and you can deploy it to your cluster with a Helm chart. Once deployed to your cluster, the ingress controller monitors ingress objects and connects to ngrok's cloud service. You can easily integrate our ingress-as-a-service into your existing tech stack without friction.
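As a sketch, a standard Ingress object routed through the controller might look like the following. The host, service name, and port are illustrative placeholders, and the `ingressClassName` value is an assumption based on the controller's naming conventions - check the controller's documentation for the exact fields it expects.

```yaml
# Illustrative only: host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: ngrok   # assumed class name for the ngrok controller
  rules:
    - host: example-app.ngrok.app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```

Because this is the stock networking.k8s.io/v1 Ingress resource, existing tooling that reads or generates Ingress objects continues to work unchanged.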
VMblog: With regard to containers and Kubernetes, is there anything holding it back from a wider distribution? If so, what is it? And how do we overcome it?
Krishnaswamy: Ingress into Kubernetes poses a significant challenge for many organizations. Maintaining custom ingress controller configurations for each Kubernetes environment results in vendor lock-in, negating one of Kubernetes' key benefits - environment independence.
We also recognize that while Kubernetes provides performance and scalability enhancements, operations teams often aren't equipped to manage the infrastructure required to operate Kubernetes. Managed Kubernetes solutions provide an alternative to operating your own Kubernetes infrastructure and reduce the barrier to entry for many organizations.
VMblog: Are companies going all in for the cloud? Or do you see a return back to on-premises? Are there roadblocks in place keeping companies from going all cloud?
Krishnaswamy: We definitely see our customers leveraging us to access data in their customers' on-premise environments. Our customers are adopting an emerging architecture called Bring Your Own Cloud (BYOC) that is deployed in their customers' environments.
For instance, data management and analytics providers such as Databricks, which enable organizations to harness large volumes of data through a centralized cloud platform, need to connect to data located in their customers' environments. As data volumes continue to surge, so do concerns about data privacy, sovereignty, and control, and processing all of that data in a centralized cloud can incur huge data transfer costs. BYOC addresses these challenges by splitting the architecture: the data plane component is deployed in the customer's environment to process and analyze their data using APIs, while the SaaS provider continues to operate the control plane - all the backend services and computational resources required for managing data sets - in its own network, connecting to the BYOC data plane running in the customer's network.
VMblog: The keynote stage will be covering a number of big topics, but what big changes or trends does your company see taking shape as we head into 2024?
Krishnaswamy: We see growing adoption of BYOC architectures (see my response to the question above).
We anticipate an increase in the adoption of ingress-as-a-service to address the complexity of providing ingress into applications running on a Kubernetes cluster. Ingress typically entails setting up low-level networking primitives like IPs, certificates, load balancers, and ports - and you need to maintain a separate configuration for every environment where you run your application. Ingress is tightly coupled to the environment where the app is deployed. For example, the same app deployed to your own data center, an EC2 instance, or a Kubernetes cluster requires wildly different networking configurations. Running your app in those three different environments means you need to manage ingress in three different ways and maintain three different configurations. This becomes a maintenance nightmare, and bespoke configurations can create vendor and platform lock-in as it becomes too expensive to expand or migrate your application.
VMblog: Do you have any advice for attendees of the show?
Krishnaswamy: Take the opportunity to participate in what's called the "hallway track." You're going to be among thousands of developers who are working on things very similar to what you're working on. Take a chance and start conversations in the hallway, at the coffee stations, or while eating lunch. You'll learn so much about what other people are doing, and you may connect with people who are experienced in areas you're interested in learning about.