Contributed Post by Don Boxley, CEO and Co-Founder, DH2i
High availability (HA) is a critical concern and key to operational success for your end clients, regardless of the computing environment, workload, or OS. System or application failures, outages, and unplanned downtime are simply not acceptable at today’s pace of business, where even 10 minutes of inoperability can translate to substantial data loss. And if this happens for one of your clients, on your watch, you can imagine what that means for the longevity of your relationship.
What you and your end clients are likely beginning to realize, however, is that traditional options for high availability have their limits. The continuous operational efficiency necessary to capitalize on digital transformation should not monopolize your clients’ financial or personnel resources with endless testing and re-testing of availability.
What’s needed instead is a new approach that dynamically transfers workloads in IT environments based on optimizing the particular job at hand. Achieving this objective requires inherent flexibility, minimal downtime, and a cost-effective methodology. In essence, what’s required is Smart Availability, which builds upon some of the basic principles of high availability to provide the previously mentioned advantages—and more.
Smart Availability is the future of HA and a critical component in the blueprint for creating business value through digital transformation.
Traditional High Availability Limitations
By definition, HA is the continuous operation of system components and applications. Historically, this goal has been achieved in a mixture of ways, each attended by its own drawbacks. One of the more common involves failovers, in which workloads are transferred to the components of a secondary system during scheduled downtime or unplanned failures. Clustering techniques are often used with this method to make resources between systems—including servers, databases, processors, and others—available to one another. Clustering applies to both VMs and physical servers and can help provide resilience against host, OS, and guest failures. Failovers also depend on a degree of redundancy: HA is maintained by keeping backups of system components on standby. Redundant storage and networking options may be leveraged with VMs to cover all system components and/or data copies.
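The failover pattern described above can be reduced to a simple decision: run on the primary while it is healthy, fall over to the redundant secondary when it is not. The sketch below is purely illustrative; the node names and the boolean health probes are assumptions, not any vendor's API.

```python
# Minimal sketch of an active/passive failover decision, assuming a
# two-node cluster where health is reported as a simple boolean probe.
# Node labels are hypothetical, for illustration only.

def choose_active(primary_healthy: bool, secondary_healthy: bool) -> str:
    """Pick which node should run the workload right now."""
    if primary_healthy:
        return "primary"      # normal operation
    if secondary_healthy:
        return "secondary"    # failover: redundancy absorbs the outage
    return "none"             # both down -> unplanned downtime

# The cluster fails over only when the primary is unavailable.
assert choose_active(True, True) == "primary"
assert choose_active(False, True) == "secondary"
assert choose_active(False, False) == "none"
```

In practice this decision is wrapped in a monitoring loop with fencing and quorum logic, but the essential trade remains the same: redundancy buys continuity at the cost of a standby system.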
For the majority of your clients, the most serious problem with these approaches is likely the cost, especially since there are several instances in which high availability is unnecessary. These relate to the actual use and importance of servers, as well as to which virtualization techniques are in play. Low-priority servers that don’t affect end users—such as those used for testing—don’t require HA, nor do those with recovery time objectives (RTOs) significantly greater than their restore times. Certain HA solutions, such as some of the more comprehensive hypervisor-based platforms, are indiscriminate in this regard. Consequently, your clients likely end up paying for HA for components that don’t need it. Traditional HA approaches also involve constant testing that can drain human and financial resources; even worse, neglecting this duty can result in unplanned downtime. Arbitrarily implementing redundancy for system components broadens your clients’ data landscapes, resulting in more copies and more potential weaknesses for security and data governance.
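The cost argument above reduces to a rule of thumb: HA is worth paying for only when a server matters to end users and its RTO is tighter than a plain restore could meet. A hypothetical sketch of that triage, with the inputs and logic as illustrative assumptions rather than vendor guidance:

```python
def needs_ha(affects_end_users: bool,
             rto_minutes: float,
             restore_minutes: float) -> bool:
    """Return True when clustering/failover is worth the cost.

    A low-priority server (e.g. a test box), or one whose RTO
    comfortably exceeds an ordinary restore time, does not justify
    HA licensing and continual testing. Illustrative logic only.
    """
    if not affects_end_users:
        return False  # test/dev servers: restore from backup instead
    # Only HA can meet an RTO tighter than the restore time.
    return rto_minutes < restore_minutes

# A test server never needs HA; a customer-facing server with a
# 10-minute RTO and a 60-minute restore does; one with a 4-hour
# RTO can simply be restored.
assert needs_ha(False, 10, 60) is False
assert needs_ha(True, 10, 60) is True
assert needs_ha(True, 240, 60) is False
```

Indiscriminate hypervisor-level HA, by contrast, effectively answers `True` for every workload, which is exactly where the unnecessary cost comes from.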
Digital Transformation Implementation
Many of these virtualization measures for HA are losing relevance due to digital transformation. To transform the way your clients do business with digitization technologies, you must help them implement technology strategically. Traditional HA methodologies simply do not allow for the fine-grained flexibility needed to optimize business value from digitization. Digital transformation means accounting for the varied computing environments of Linux and Windows operating systems alongside containers. It means integrating an assortment of legacy systems with newer ones specifically designed to handle the influx of big data and modern transaction systems.
Most of all, it means aligning that infrastructure with business objectives in a way that adapts to the evolving domain of your clients’ needs. Such flexibility is critical to optimizing IT processes around the goals of your clients. The reality is, most conventional methods of HA simply add to the infrastructural complexity of digital transformation without addressing the primary need: adapting to changing business requirements. In the wake of digital transformation, you need to be able to help your clients streamline their various IT systems around domain objectives, rather than forcing those objectives to bend to the infrastructure, which simply decreases efficiency while increasing cost.
Smart Availability is ideal for digital transformation because it enables workloads to always run on the best execution venue (BEV). It couples this advantage with the continuous operations of HA but takes a radically different approach in doing so. Smart Availability takes the central idea of HA—dedicating resources between systems to prevent downtime—and extends it to moving workloads in order to maximize competitive advantage. It allows your clients to move workloads between operating systems, servers, and physical and virtual environments with minimal downtime. The core of this approach is the capacity of Smart Availability technologies to move workloads independently of one another; the inability to do so is a fundamental drawback of traditional physical or virtualized approaches to workload management. By decoupling an array of system components (application workloads, containers, services, and file shares) without requiring standardization on just one OS or database, these technologies transfer each workload to the environment that works best from both a business-objective and a budgetary standpoint.
It’s important to remember that this judgment call is based on how best to achieve a defined business objective. Furthermore, these technologies provide this flexibility for individual instances to ensure negligible downtime and a smooth transition from one environment to another. The use cases for this near-instantaneous portability are plentiful. Your clients can use these techniques for uninterrupted availability, integration with new or legacy systems, or the incorporation of additional data sources. Most of all, they can do so with the assurance that the intelligent routing of the underlying technologies is selecting the optimal setting in which to execute workloads. Once properly architected, the process takes no longer than a simple stop and start of an application or container.
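Conceptually, relocating a workload under this model is exactly that stop/start pair: stop the instance where it currently runs, then start it on the chosen venue. The toy model below stands in for real orchestration; the registry dict, host names, and workload name are all assumptions for illustration, not a real product interface.

```python
# Toy model of "stop here, start there" workload relocation.
# The dict stands in for real cluster state; names are hypothetical.

workloads = {"sql-instance-1": "linux-host-a"}

def move_workload(name: str, target_host: str) -> None:
    """Stop the workload on its current host, then start it on the target."""
    current = workloads[name]
    print(f"stopping {name} on {current}")
    workloads[name] = target_host   # the 'start' on the new venue
    print(f"starting {name} on {target_host}")

# Relocate the instance to whichever host is currently the better venue.
move_workload("sql-instance-1", "windows-host-b")
assert workloads["sql-instance-1"] == "windows-host-b"
```

The point of the sketch is the shape of the operation, not the mechanism: because the workload is decoupled from any one host or OS, the move is a brief stop and start rather than a migration project.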
Helping Your Clients to Make the Best Choice
Smart Availability is important for a number of reasons. It creates the advantages of high availability at a lower cost with a greater degree of efficiency and effectiveness. Moreover, it provides the agility required to capitalize on digital transformation, enabling your clients to move systems, applications, and workloads where they can create the greatest competitive advantage. And, Smart Availability provides the flexibility needed to adapt to today’s business climate, which is changing faster than ever.
Don Boxley is a DH2i co-founder and CEO. Prior to DH2i, Don held senior marketing roles at Hewlett-Packard where he was instrumental in sales and marketing strategies that resulted in significant revenue growth in the scale-out NAS business. Don has spent more than 20 years in management positions for leading technology companies, including Hewlett-Packard, CoCreate Software, Iomega, TapeWorks Data Storage Systems and Colorado Memory Systems. Don earned his MBA from the Johnson School of Management, Cornell University.