VMblog: Provide a little backgrounder information on the company. What does your company look like in 2021?
Orest Lesyuk: With a bit more than 200 people, StarWind looks like a continuous innovation hub. While there are established product lines like the HyperConverged Appliance, StarWind VSAN, and VTL, there are always at least 3-4 new projects in the works that involve the technologies of tomorrow. Our main focus is still on the Enterprise ROBO, SMB, and Edge segments, with products aimed at relieving the biggest organizational pains of deploying fault-tolerant IT infrastructure across thousands of locations. Of the new projects, I would highlight NVMe over Fabrics (NVMe-oF), which is about to kick FC and iSCSI out of the datacenter for good. We're really excited to bring our customers NVMe storage connectivity at 100 GbE with microsecond latency. One of our customers who is about to move from FC to NVMe-oF estimated their FC infrastructure maintenance bill for the next 3 years at $3M, and they're cutting it by more than half with the switch to TCP and NVMe-oF.
One thing I can say for sure is that we're doing much better than I expected at the beginning of the pandemic. Adapting to work from home took us less than two months, and then we were back to cruising altitude.
VMblog: We are here to talk about Data Management. How does your company see it and define it?
Lesyuk: If you're after the definition, it can be found on Wikipedia. Our take on it is: "let's make sure that when a bad thing happens, you have a working copy of the data you need, available in the least amount of time possible, be it a single file or a backup copy of the entire infrastructure." We are, and will be, evangelizing the industry-standard 3-2-1 rule for quite some time yet, as we see that both small and large customers aren't paying sufficient attention to basic data management rules.
VMblog: What are some of the things companies should be looking for in a modern data protection solution?
Lesyuk: #1: the ability to easily test backups. #2: flexibility - the data protection solution has to support modern cloud storage and allow organizations to leverage cold cloud storage for long-term archival, without bloating the cloud bill by putting archives on expensive hot cloud storage before the data is truly archived. #3: ransomware protection - look for a solution that at least protects your data from ransomware; better still if it can also scan your archives for it and help ensure you can restore in a worst-case scenario.
VMblog: Why does data protection architecture tend to create silos?
Lesyuk: Silos are one of the ways to keep things safe and to avoid a single disaster affecting 100% of the stored material. This approach made its way into data storage as well, as a natural form of segregation for keeping data safe. Backup, DR, and business continuity solutions play in the "don't put all your eggs in one basket" business. It's either one basket with individually packed eggs, or multiple baskets, or a combination of both. One thing we can do to help is let you manage these silos from one place. Moreover, for optimum security, I'd recommend keeping one copy of the data accessible only to the business owner, in addition to the copies accessible to the backup admin.
VMblog: How does a business recover from ransomware? How does having a good data protection platform help?
Lesyuk: With a good data protection solution, recovery from ransomware isn't much different from disaster recovery, except that the hardware is still there and operational. One more important thing, though: know where the ransomware came from and make sure it doesn't reappear upon recovery. This involves scanning all the backups before restoration. Ideally, it's a recurring scan running against all backups that can identify ransomware even if it got in undetected and made it into the backups before it struck.
Recovery without a good data protection solution and without a written, exercised data protection plan? Read about it in my new book, "Long, Painful, and Almost Impossible."
VMblog: How does one deal with the nuances across multiple clouds?
Lesyuk: The best thing I'd recommend is to use a solution that delivers you from this headache.
One that allows you to manage your data in multiple clouds without having to dive deep into the specifics of each. An example from our own product line is StarWind VTL, which ingests data from your backup and then sends it to multiple cloud storage repositories, be it AWS S3, Glacier, Azure Blobs, Backblaze, or Wasabi. For the administrator, the management is completely transparent: you see where your data is, and your backup software knows where to get it. In addition, access can be set up so that the backup application can only write new data, while only the business owner can modify or delete anything. This way, businesses can protect themselves even from an insider attack.
VMblog: What questions or things should companies be thinking about as they plan their budget for business continuity?
Lesyuk: What is the cost of not having access to our IT infrastructure for 1 minute, 1 hour, 1 day, 1 week? The calculation will depend on your type of business, but understanding at least the lower watermark can bring unseen clarity to your BC decisions. Whatever formula you use, multiply the result by π/2 (~1.571), since there are harder-to-calculate factors like reputation risk that formulas typically don't include.
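The calculation described above can be sketched as a quick back-of-the-envelope script; the revenue and staffing figures here are made-up examples, not numbers from the interview:

```python
import math

# Hypothetical example figures -- substitute your own business numbers.
REVENUE_PER_HOUR = 10_000         # direct revenue lost per hour of downtime, USD
STAFF_IDLE_COST_PER_HOUR = 2_000  # payroll burned while systems are down, USD

def downtime_cost(hours: float) -> float:
    """Estimate the cost of an outage of the given length, padded by
    pi/2 (~1.571) for hard-to-quantify losses such as reputation damage."""
    direct = (REVENUE_PER_HOUR + STAFF_IDLE_COST_PER_HOUR) * hours
    return direct * (math.pi / 2)

# The four horizons from the question: 1 minute, 1 hour, 1 day, 1 week.
for h in (1 / 60, 1, 24, 168):
    print(f"{h:>8.3f} h -> ${downtime_cost(h):,.0f}")
```

Even the "lower watermark" here only captures direct losses; the π/2 factor is the interview's rule of thumb for everything the formula misses.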
One more question to get answered is the annual cost of your business continuity solution.
These would be the first two questions I'd ask to better plan the budget.
VMblog: What are the differences between a company's current backup plan and a "DR" plan?
Lesyuk: This is more a matter of terms and wording. Nowadays, there are no strict boundaries between the two. Commonly, a backup plan includes the policies, hardware, and software required to maintain a copy of the data for a required period of time. This data can be restored on backup hardware or on the existing production servers. Usually, backup is not about the shortest RTO but about retaining data from specific periods of time. For example, you can keep backups of data from previous years. DR, on the other hand, is about the shortest RTO and RPO. It usually involves a remote location to which the workloads are replicated periodically (asynchronously). The main goal of DR is to resume operation as fast as possible. That said, no one says DR cannot play the role of backup, or the other way around. All of this depends on the company's requirements.
VMblog: What capabilities should a company require when picking a disaster recovery solution?
Lesyuk: Comprehensive and flexible data distribution between online (disk), nearline (cloud), and offline (tape) media. You need a solution that stores the data where you want it, not where the vendor makes more money off you. In 2021, it's a shame if your DR solution's cloud storage support is limited to plain S3 or Blob (~$24/TB/month) and lacks the ability to push data immediately to Glacier or Archive Blob (~$1/TB/month). Also look for fast data ingest with minimal overhead on your infrastructure; you don't want to sacrifice website and database performance for each restore point you create during the day. The best approach is to have all-flash shared storage and ingest the backup from it directly. Innovative Wall Street firms store data on NVMe and use all-NVMe backup storage; we'll see this trend go mainstream in the future.
VMblog: What's the biggest challenge with cybersecurity today?
Lesyuk: Organizations need to understand that ransomware and other threats are here to stay and won't go extinct any time soon. The real challenge is not to protect business operations from every possible threat. Rather, it's to create an infrastructure so resilient that it can recover from an attack fast enough that the harm is minimal to none. A good analogy is our own immune system, which fights intruders and adapts to new threats. So, in a nutshell: the biggest challenge in cybersecurity is having a continuously evolving plan to stick to when something happens, not putting all your bets on isolation from every possible external threat.
VMblog: How does your company and technology help facilitate customer resiliency?
Lesyuk: One example is the ability to replicate and restore user workloads in any cloud. This way, users see little to no impact on their work even if the primary systems hosting their workloads go down. Another example is the multi-cloud data management capability that comes with our solution. If your workloads are already in one cloud, we strongly advise storing the backup data in a different cloud and having a recovery plan designed for a different cloud as well.
Cloud outages are common, so if your business relies on the cloud to stay alive, it's better to have a plan B.
VMblog: What are the biggest challenges being faced by your customers today?
Lesyuk: The first issue is the inability to leverage whichever cloud storage they like. Customers are stuck with solutions that support only a few cloud vendors, or even a single cloud storage type, and there is almost always no multi-cloud data protection scheme. We enable customers to place their data anywhere they like, be it Glacier Deep Archive, Backblaze, or Azure Archive Blob, or all three at the same time if need be. One of our customers is a hospital in California with a regulatory requirement to store petabytes of archive data for 5 to 7 years. With StarWind VTL, they are now spending 24x less on their archives - that's ~$1.38M in savings on cloud storage annually.
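The hospital's savings follow from the per-TB prices quoted elsewhere in the interview (~$24/TB/month for hot object storage vs ~$1/TB/month for archive tiers). The 5 PB archive size below is my assumption to make the arithmetic line up with the quoted ~$1.38M figure; the interview only says "petabytes":

```python
# Per-TB monthly prices quoted in the interview (approximate).
HOT_PER_TB_MONTH = 24.0      # plain S3 / Blob, USD
ARCHIVE_PER_TB_MONTH = 1.0   # Glacier Deep Archive / Archive Blob, USD
ARCHIVE_TB = 5_000           # assumed 5 PB archive (not stated in the interview)

annual_hot = ARCHIVE_TB * HOT_PER_TB_MONTH * 12
annual_archive = ARCHIVE_TB * ARCHIVE_PER_TB_MONTH * 12
savings = annual_hot - annual_archive

print(f"hot tier:     ${annual_hot:,.0f}/yr")
print(f"archive tier: ${annual_archive:,.0f}/yr")
print(f"savings:      ${savings:,.0f}/yr ({annual_hot / annual_archive:.0f}x cheaper)")
```

At 5 PB, the hot tier would run $1.44M/yr against $60K/yr on archive storage, which reproduces both the 24x ratio and the ~$1.38M annual savings.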
The second issue is the challenge of supporting legacy tape hardware and backup infrastructure. We have customers with significant investments in tape who are moving away from tape-centric DR plans, as those are prone to human error and carry high OpEx. StarWind VTL virtualizes the organization's tape infrastructure and automates DR tape shipping and recovery by replicating the data to any cloud or object storage out there.
VMblog: What's the most important thing happening in your field at the moment?
Lesyuk: The shift organizations are making in their cybersecurity strategies to protect the remote workforce. These changes are laying the foundation for the future work-from-anywhere reality, which is important because only the most adaptable businesses will thrive. Businesses whose cybersecurity relies on people returning to the office for a full 9-to-5 should be ready for a cold shower.
VMblog: Which emerging technology do you think holds the most promise once it matures?
Lesyuk: I am a big believer in NVMe flash and, in my opinion, it has already matured enough to help any business. Leveraging ultra-fast storage and backup will let organizations recover in seconds after a disaster, rather than spending hours or days as they do today.
VMblog: What will forward-thinking companies be doing this year?
Lesyuk: Hire talent worldwide and build a smart work-from-anywhere workforce that leverages both office and remote work.
VMblog: What big changes do you see taking shape in the industry?
Lesyuk: Security being integrated everywhere at the hardware level. We've seen this on end-user devices, now it's coming to datacenter hardware with NVIDIA and Cloudflare putting security solutions directly into their chips.
VMblog: How has the data protection industry changed over the year, and have priorities shifted in one way or another in terms of backup and recovery?
Lesyuk: The biggest change is the need to protect all of the employees' work while they work from home rather than from a 100% controlled, sterile office environment. The data protection industry has partially adapted to this requirement, but we're yet to see the giants acquire the companies doing the right things.
VMblog: What are the products offered by your company, and what is the value add you offer to your customers?
Lesyuk: Within the topic of this interview, the product aimed at helping organizations manage their data better is StarWind VTL. It enables our customers to build a fully automatic, flexible, and ransomware-resilient offsite replication and recovery process for any existing backup infrastructure. The value is the added flexibility in data management that organizations cannot get with their existing solutions.
VMblog: Talking about your product solutions, can you give a few examples of how your offerings are unique? What are your differentiators?
Lesyuk: One: only StarWind VTL enables customers to automatically convert all their existing LTO tape assets into virtual tapes, and to manage those tapes both locally and in any cloud storage.
Two: we enable customers to put data from any backup solution directly into the cloud storage tier they need, without having to land it on an expensive tier every time they write backups to the cloud. As mentioned before, the typical issue we solve is enabling customers to pay ~$1 instead of ~$25 per TB per month by writing directly to Glacier Deep Archive or Azure Archive Blob and bypassing unnecessary writes to S3 or Hot Blob.
Three: enabling data to be moved flexibly between clouds, or within a cloud, without having to re-download all the data before shipping it to a different cloud storage repository.
VMblog: Explain the most common use cases of your product with your current customers, and how do you retain them with your offerings?
Lesyuk: One: a customer is a hospital with a regulatory requirement to store archives for up to 7 years. StarWind enables them to keep their archives directly in Glacier Deep Archive rather than having to store them locally or in S3, which is at least 24x more expensive. The best part is that VTL integrated the archive capability into their existing backup infrastructure while keeping it 100% intact.
Two: a customer leverages VTL as a tape library replacement for a backup infrastructure that relies on LTO tapes. With VTL, their tape backups are now much faster and are also automatically replicated to Azure Blob for long-term archival.
VMblog: Finally, with such a crowded space in the data protection market, why should a company choose your solution?
Lesyuk: Because we complement any existing solution, adding extremely flexible data management and ransomware-resilient data packaging to the infrastructure you already have. We don't want organizations to rip and replace their existing backup solutions; we want them to keep their proven, tested backup process and add our value to it.