Virtualization Architects

As an architect working in a virtualized server environment, you know the benefits – and the new challenges – that virtualization brings to the data center. The tight coupling between hardware and the server and application software is gone, eliminating dependencies that made system design more complex up front and more rigid and difficult to upgrade as applications grew over time. New hardware is faster to deploy, and environments can be built out and updated more quickly than ever before. Sharing resources in a virtual environment helps virtualization architects be more efficient. But those same elements – shared resources and applications abstracted from the underlying hardware – create a new set of issues that virtualization architects have to consider carefully to keep risk manageable in the virtualized environment.

One key thing a virtualization architect has to get right is sizing every environment. In the past, it was necessary to estimate application peaks and size for the maximum anticipated demand plus a “safety” factor. That resulted in some wasted capacity, but it also helped to compartmentalize each application with dedicated resources and a buffer for excess demand. Even if an unexpected surge in demand harmed one application’s performance, it generally wouldn’t affect other systems. In a virtualized infrastructure, the very resource sharing that is one of virtualization’s key benefits changes that situation dramatically. Now, physical resources are more heavily utilized and unexpected demand can impact multiple applications. Additionally, resource contention may crop up even when overall utilization is well within safe limits.
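To make the contrast concrete, here is a back-of-the-envelope sketch with made-up numbers (an illustration only, not VMTurbo output): dedicated sizing pays for peak-plus-headroom per application, while naive consolidation can be overrun the moment two peaks coincide.

    # Illustrative sizing arithmetic in Python -- hypothetical numbers only.
    # Old model: each app gets its own peak demand plus a safety factor.
    peaks = {"web": 16, "db": 20, "batch": 14}   # peak demand, vCPU-equivalents
    safety = 0.25                                # 25% per-app headroom

    dedicated = sum(p * (1 + safety) for p in peaks.values())
    print(f"Dedicated sizing: {dedicated:.1f} units provisioned")          # 62.5

    # Consolidated onto shared hardware, average utilization looks safe...
    averages = {"web": 6, "db": 8, "batch": 5}
    shared_capacity = 40
    print(f"Average load: {sum(averages.values())} of {shared_capacity}")  # 19 of 40

    # ...but if the web and db peaks coincide, demand exceeds capacity even
    # though overall utilization was well within safe limits on average.
    coincident = peaks["web"] + peaks["db"] + averages["batch"]
    print(f"Coincident peak: {coincident} of {shared_capacity}")           # 41 of 40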

Driving Down Infrastructure Costs while Ensuring Application Performance

In the early stages of virtualization, over-provisioning of server and storage resources is common, but as infrastructures scale out, this approach to managing capacity becomes less economical. As your use of virtualization expands, improving capacity management becomes a big opportunity for significant infrastructure cost savings. Some of the activities are straightforward – reclaiming unused capacity and controlling sprawl provide immediate gains. But right-sizing VMs, packing servers more economically, and re-balancing workloads have to be done with a thorough understanding of application performance to ensure service-level commitments are upheld. These considerations, the multitude of configuration constraints in a virtual environment, and the rate of change in IT environments today require a more iterative approach to capacity management and planning. VMTurbo Operations Manager provides the toolset today’s IT architects and virtualization planners require to manage capacity at improved utilization levels while still ensuring the performance of business-critical applications.
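Packing servers more economically is, at bottom, a bin-packing problem. The sketch below uses the classic first-fit-decreasing heuristic with a fixed headroom reserve – a textbook illustration of the idea, not VMTurbo’s Economic Scheduling Engine, which also weighs performance metrics and configuration constraints:

    # First-fit-decreasing VM packing -- a standard heuristic shown for
    # illustration; NOT VMTurbo's patented analytics.
    def pack(vm_demands, host_capacity, headroom=0.20):
        """Place VM demands onto as few hosts as possible while
        reserving a fixed headroom fraction on each host."""
        usable = host_capacity * (1 - headroom)
        hosts = []  # remaining usable capacity per host
        for demand in sorted(vm_demands, reverse=True):
            for i, free in enumerate(hosts):
                if demand <= free:          # fits on an existing host
                    hosts[i] = free - demand
                    break
            else:
                hosts.append(usable - demand)  # provision a new host
        return len(hosts)

    vms = [7, 5, 4, 4, 3, 3, 2, 2, 1]      # hypothetical vCPU demands
    print(pack(vms, host_capacity=16))     # 3 hosts needed at 20% headroom

What this toy version deliberately omits is exactly the point of the paragraph above: real placement decisions must be informed by application performance, not just raw capacity.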

New Tools for Today’s Data Center Architects

Architecting the virtual infrastructure to meet future needs can be a complex process. To do it right, you need to answer key questions such as: How many VMs can I add to my clusters? Can I achieve better performance by re-balancing workloads within or across clusters? How will my workloads perform on server and storage hardware with different performance characteristics? Can I achieve better throughput by re-balancing workloads across available data stores?

Building on the patented analytics in our Economic Scheduling Engine, VMTurbo provides a rich set of offline “What If” modeling capabilities that simplify the process of managing capacity on an ongoing basis, helping our customers:

  • Plan architecture changes – e.g. consolidating clusters or migrating hypervisors
  • Re-balance VMs to improve performance and increase utilization
  • Right-size applications and VMs to ensure application performance and reclaim capacity
  • Compare costs of server upgrades for multiple hardware scenarios
  • Plan physical to virtual conversions
  • Project the impact of future workload growth (a simple sketch of this kind of projection follows this list)
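As a flavor of that last item, here is a minimal growth-projection sketch. It assumes a steady compound monthly growth rate and a fixed headroom reserve – both hypothetical simplifications of what a real plan has to model:

    # Back-of-the-envelope capacity runway -- hypothetical inputs, not a
    # substitute for demand-aware planning.
    def months_until_exhausted(current_demand, capacity, monthly_growth,
                               headroom=0.20):
        """Months until demand crosses usable capacity at a steady
        compound growth rate, keeping a fixed headroom reserve."""
        usable = capacity * (1 - headroom)
        months = 0
        demand = current_demand
        while demand < usable:
            demand *= 1 + monthly_growth
            months += 1
        return months

    # Cluster at 600 of 1,000 units, growing 5% per month:
    print(months_until_exhausted(600, 1000, 0.05))  # 6 months of runway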

With VMTurbo, virtualization architects get both the information they need to make good decisions and powerful tools for turning that information into concrete plans that support the business. As part of its analysis, VMTurbo determines which servers and data stores workloads should run on, and where to provision or decommission server and storage capacity to best meet the objectives of the specific plan. VMTurbo’s rightsizing capabilities also enable unused resources to be reclaimed, by identifying dormant VMs that can be decommissioned and by recommending specific actions to reduce the resources allocated to over-provisioned VMs.
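A simplified view of dormant-VM detection is sketched below; the utilization thresholds and the observation format are illustrative assumptions, not VMTurbo’s actual criteria:

    # Flagging dormant VMs from utilization history -- thresholds here are
    # illustrative assumptions, not VMTurbo defaults.
    CPU_THRESHOLD = 0.02   # below 2% average CPU
    IO_THRESHOLD = 50      # below 50 IOPS

    def dormant_vms(samples):
        """samples: {vm_name: list of (cpu_fraction, iops) observations}.
        A VM is a decommission candidate only if it stays below both
        thresholds across the entire observation window."""
        candidates = []
        for vm, history in samples.items():
            if all(cpu < CPU_THRESHOLD and iops < IO_THRESHOLD
                   for cpu, iops in history):
                candidates.append(vm)
        return candidates

    history = {
        "vm-legacy-01": [(0.01, 10), (0.00, 5), (0.01, 12)],
        "vm-web-02":    [(0.35, 900), (0.50, 1200), (0.28, 700)],
    }
    print(dormant_vms(history))  # ['vm-legacy-01']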

VMTurbo for Capacity Planning Done Right

Capacity planning is a simple concept that is pretty much the same everywhere, right? Not really. An activity as important as capacity planning must provide an accurate projection of the expected resource demand levels for each workload – it absolutely has to be ‘performance-based’. That same planning activity should determine how and where to allocate each workload. Most importantly, those recommendations should take into account a wide array of parameters that are often overlooked by ‘capacity-based’ algorithms.

These capacity-based solutions generally gloss over important scenarios in virtualized environments where performance degradation has occurred previously – for instance, bottlenecks due to CPU wait states or storage latency. Understanding the performance footprint, and its historical variations, is critical to getting capacity planning right – and it is the underpinning of an Intelligent Capacity Planner. To learn more about how capacity planning can help your organization and what you should look for in a capacity planner, check out the short video below.
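The difference between the two approaches can be shown in a few lines. In this sketch (with made-up utilization samples), a capacity-based plan that sizes to the mean looks comfortable, while a performance-based plan that sizes to a high percentile of observed demand catches the bursts that actually degrade performance:

    # Why averages mislead -- made-up CPU% samples for a bursty workload.
    import statistics

    samples = [20, 22, 25, 21, 24, 23, 80, 85, 22, 24, 90, 21]

    mean = statistics.mean(samples)
    p95 = statistics.quantiles(samples, n=20)[-1]  # ~95th percentile

    print(f"mean = {mean:.1f}%  -> capacity-based plan looks comfortable")
    print(f"p95  = {p95:.1f}%  -> performance-based plan sees the bursts")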
