Performance Engineering –

Higher performance for Physical to Virtual Migration

Abstract

Driven by the need for maintainable, reliable, scalable and cost-effective infrastructure, customers increasingly prefer virtualization. However, virtualization introduces performance issues and limitations in hosted applications because its architecture is more complex than that of physical servers. This paper describes key practices for avoiding performance issues in infrastructure during migration from a physical to a virtual environment.

Various tools and techniques are available in the market for physical-to-virtual (P2V) migration, yet they may introduce performance issues in virtual machines after migration because of several factors described later in this paper. Some simple yet crucial practices can save a lot of time and effort in performance tuning. The approach described in this paper considers these areas while selecting the virtual equivalent of physical hardware, and it helps minimize performance bottlenecks such as resource contention, delays due to time drift and longer downtime.

Introduction

Today, when cost and time are the key factors driving business and its operability, high-performing infrastructure plays a vital role. The pace of advancement towards cutting-edge technologies in this era of digital transformation is not only daunting but an inevitable reality for the IT world. Adopting virtual infrastructure is therefore a sure-fire way to run a focused and sustainable business, but it cannot be traded off against application performance, whether the applications are web, client-server, SOA, mobile, highly complex or transaction-intensive. An efficient test plan must include performance guidelines for migrating from a physical to a virtual environment.

Why is performance critical?

1.  As per a survey report, 60% of consumers become disloyal to a brand with a poorly performing website. [1]

2.  As per a case study, 51% of online shoppers in the US abandon their purchase due to slowness. [2]

3.  Users are more likely to abandon a website if a page takes more than 8 seconds to load. The world's most popular e-commerce websites receive the bulk of their traffic because of their high performance, whether accessed via mobile, desktop or other devices.

An application is expected to underperform slightly when migrated to a virtual environment, as there is always an associated overhead: a portion of the available resources is consumed by the underlying host or hypervisor. Common symptoms of underperforming systems include any of the following:

§  The application takes longer than usual to launch and close.

§  The application stops responding or hangs more often.

§  The application cannot handle as much load as its physical hardware equivalent.

§  The application uses virtual memory or swaps quite often.

§  The application performs worse than it did when it was first installed.

§  The application crashes unexpectedly.

§  Memory and CPU utilization on the system is consistently high (more than 75%); a minimal check is sketched after this list.

§  System operations are delayed due to improper configuration. For example, on VMware® hosts that are already subjected to CPU-intensive operations, taking frequent system snapshots degrades performance substantially.
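As a minimal illustration of the high-utilisation symptom, the Python sketch below samples /proc/stat and /proc/meminfo on a Linux guest; the 75% threshold and five-second sampling interval are illustrative assumptions, not recommendations.

    # check_utilization.py -- illustrative only; paths and thresholds assume a Linux guest.
    import time

    def cpu_times():
        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait
        return idle, sum(fields)

    def cpu_busy_percent(interval=5):
        idle1, total1 = cpu_times()
        time.sleep(interval)
        idle2, total2 = cpu_times()
        return 100.0 * (1 - (idle2 - idle1) / (total2 - total1))

    def mem_used_percent():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.split()[0])  # values are reported in kB
        return 100.0 * (1 - info["MemAvailable"] / info["MemTotal"])

    if __name__ == "__main__":
        cpu, mem = cpu_busy_percent(), mem_used_percent()
        print(f"CPU busy: {cpu:.1f}%  Memory used: {mem:.1f}%")
        if cpu > 75 or mem > 75:
            print("WARNING: utilisation above 75% - investigate before relying on this system.")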

What are the performance roadblocks in virtual migration?

In a virtual environment, the underlying host is a physical box that can run multiple virtual machines. System resources are shared between these virtual machines and are managed by the underlying host. Sharing physical resources in this way is an efficient means of utilizing all available resources, but it carries an associated performance overhead.

Time drift

This is a common symptom in virtual machines caused by hardware timer drift. Latency is introduced into the system when guest virtual machines are unable to read correctly from the hardware time source and the clock starts drifting at a constant rate. This can be avoided by synchronising with an external NTP source dedicated to the virtual environment.
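A quick way to quantify drift on a guest is to compare its local clock against an NTP server. The Python sketch below sends a minimal SNTP request using only the standard library; the server name pool.ntp.org is a placeholder and should be replaced with the NTP source dedicated to the virtual environment.

    # ntp_drift_check.py -- minimal SNTP query using the standard library; server is a placeholder.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def ntp_offset(server="pool.ntp.org", port=123, timeout=5):
        # 48-byte SNTP request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
        packet = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, port))
            data, _ = sock.recvfrom(48)
        local_time = time.time()
        # The transmit timestamp occupies bytes 40-47 of the reply.
        seconds, fraction = struct.unpack("!II", data[40:48])
        server_time = seconds - NTP_EPOCH_OFFSET + fraction / 2**32
        return local_time - server_time  # positive value: the guest clock is ahead

    if __name__ == "__main__":
        print(f"Clock offset versus NTP source: {ntp_offset():+.3f} s")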

Multiple Guests

Allocating resources to many virtual machines means more latency in the processing and functioning of the guest machines, while allocating too few resources to a guest machine puts overhead on the hypervisor.

Example: If thick provisioning is done for all the virtual machines on a single physical box and only a small amount of physical memory (512 MB) is allocated to each guest OS, then disk cache is frequently copied onto the hypervisor, creating overhead and delayed processing.
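To make the example concrete, the short Python sketch below estimates the memory pressure created by undersized guests; all figures (guest count, working-set size, host RAM) are illustrative assumptions rather than measured values.

    # memory_fit_check.py -- all figures are illustrative assumptions, not measured values.
    guest_allocation_mb = 512      # memory allocated to each guest, as in the example above
    working_set_mb = 1400          # memory the hosted application actually needs to stay off swap
    guest_count = 20               # guests consolidated onto the single physical box
    host_physical_mb = 16 * 1024   # physical RAM on the host

    shortfall_per_guest = max(0, working_set_mb - guest_allocation_mb)
    total_allocated = guest_allocation_mb * guest_count

    print(f"Per-guest shortfall: {shortfall_per_guest} MB that each guest will page to disk")
    print(f"Total allocated to guests: {total_allocated} MB of {host_physical_mb} MB on the host")
    if shortfall_per_guest > 0:
        print("Guests are undersized: constant paging will land as disk I/O on the hypervisor.")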

Unplanned capacity

Thoughtless allocation of resources when creating virtual machines not only poses the threat of under-performing systems in production but also leads to unforeseen waste of time and effort. Virtualization is all about the efficient sharing of physical resources.
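One way to avoid such guesswork is to derive the virtual machine's size from peaks measured on the physical server plus an explicit headroom factor. The Python sketch below shows the idea; every number in it is a placeholder to be replaced with data from your own baseline run.

    # size_vm.py -- placeholder measurements; replace with data from your own baseline run.
    import math

    physical_cores = 8        # cores on the physical server being replaced
    peak_cpu_percent = 55     # peak CPU utilisation observed on the physical server
    peak_ram_gb = 14          # peak memory footprint observed on the physical server
    headroom = 1.3            # 30% headroom for growth and hypervisor overhead

    vcpus = math.ceil(physical_cores * (peak_cpu_percent / 100) * headroom)
    vram_gb = math.ceil(peak_ram_gb * headroom)

    print(f"Suggested starting point: {vcpus} vCPUs, {vram_gb} GB RAM")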

Monitoring and Security

Virtualized infrastructures are highly dynamic, and new virtual machines can be created indiscriminately, so monitoring performance and security is a major challenge. Monitoring and security practices built for physical hardware do not fully carry over to securing and monitoring the proper functioning of virtual systems. Hence, customized security and monitoring solutions designed for the virtual environment are necessary.

How to keep performance problems at bay?

Comprehend pros and cons

Do not simply follow the trend when renewing your old systems; virtualization is not a solution to every problem. Careful study and planning to understand the advantages and disadvantages will always help tackle unforeseen issues.

Know your system

Unfortunately, there is no sure-fire solution that guarantees a completely problem-free system. Performance bottlenecks can, however, be avoided with an efficient design approach built on an understanding of server and storage operations. This not only provides a reality check but also keeps the host from being over-burdened by conflicting workloads where memory and CPU demands are at odds.

Undo

Most of the virtualization technologies that exist today offer an “undo” capability, and this is one reason why the green flag is given to introduce a virtual design into a physical environment. Virtualization changes can be rolled back to a state from a day or even a week earlier. Using this feature wisely includes making sure it does not expose systems to any vulnerability.

Wield reliability and scalability

This is a critical aspect: an infrastructure that gives zero downtime is ideal. “Time is money” is an apt phrase when a production failure occurs. A virtual design must provide a reliable solution with the ability to scale the architecture horizontally and vertically. This also helps in isolating rogue virtual machines that are prone to cyber-attacks.

What are the key fundamentals behind the scenes?

Business continuity

Data is the most precious asset of any business; hence it is of utmost importance to maintain the sanctity of data during migration. No matter what, business must continue.

Zero Interrupt

Downtime must be minimised for activities such as maintenance, recovery, troubleshooting and backup operations, and the new hardware and software used to virtualize the environment must support these activities. Time is money; downtime is directly proportional to the cost incurred.

Rollback and Upgrade

The business must be able to move back and forth from the point in time at which virtualization starts. One must be able to restore the customer’s entire business to the last stable state should anything take a wrong turn. Similarly, if the business demands an infrastructure upgrade, the virtual environment must not become one of the impediments.

Case Study

Virtual machine equivalent of physical server

Project: The customer wants to migrate an application from a physical Dell R620 server to a virtual environment; this server matches the minimum recommended hardware configuration for the application to run.

Problem: Identify the virtual equivalent of the physical server in the new environment without compromising the performance of the application.

Solution: A study was conducted to assess the server and storage operations of the customer application. Various factors were considered before specifying the virtual equivalent, such as time drift, type of migration (cold, storage and shared) and hyperthreading. The recommended server for hosting the application under test (AUT) is the standard configuration of a Dell® R620 server.

Approach

·  Decided to build a VM with resources equivalent to the Dell R620.

·  Built the enterprise setup with the same performance data set that QE used.

·  Identified a subset of important performance test cases in collaboration with the development team.

·  Ran the tests on both the physical server and the virtualized server (equivalent configuration).

·  Collected all necessary metrics and compared results (a sketch of the comparison follows this list).

·  OS: A customised Linux version was checked for compatibility on the VM.

·  NTP sync: In-built and external NTP servers were used to compare results; the in-built NTP sync was verified to be reliable.
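The comparison step can be kept simple. The Python sketch below shows the kind of post-processing that can be applied to exported results; the file names, the response_time_ms column and the 10% threshold are assumptions for illustration, and the actual analysis in this study was done with the HP LoadRunner and JRMC tooling.

    # compare_runs.py -- file names and column layout are assumptions for illustration.
    import csv
    import statistics

    def load_times(path, column="response_time_ms"):
        # Each CSV is assumed to hold one transaction per row with a response-time column.
        with open(path, newline="") as f:
            return [float(row[column]) for row in csv.DictReader(f)]

    def summarise(times):
        times = sorted(times)
        p90 = times[int(0.9 * (len(times) - 1))]
        return statistics.mean(times), p90

    phys_mean, phys_p90 = summarise(load_times("physical_run.csv"))
    virt_mean, virt_p90 = summarise(load_times("virtual_run.csv"))

    print(f"Physical: mean {phys_mean:.1f} ms, 90th percentile {phys_p90:.1f} ms")
    print(f"Virtual : mean {virt_mean:.1f} ms, 90th percentile {virt_p90:.1f} ms")
    if virt_p90 > 1.10 * phys_p90:
        print("Virtual 90th percentile is more than 10% worse than physical - investigate before sign-off.")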

Test environment: HP LoadRunner, VMware ESXi, JRMC.

Conclusion: After running performance tests on the pre-migration and post-migration setups, the graphs below were generated to compare resource utilization on the physical server and its virtual equivalent.


Conclusion & Future Research

The recommended way to migrate an enterprise to a virtual environment does not follow any rule of thumb. Rather, it is a well-defined strategy backed by correct planning and awareness of your subject. Regular checks at every milestone are an additional way to make sure your efforts do not go in vain.

The fundamentals of any physical-to-virtual migration approach must include rollback, data consistency, zero interrupt and business continuity. This is certainly the first step towards a smooth and low-risk transition.

The approach in this paper can be elegantly extended to further optimise the performance of advanced technologies such as next-generation AI and data analytics hosted in the virtualized world.

References & Appendix

[1] www.smallbusiness.co.uk

[2] www.radware.com

[3] www.vmware.com

[4] Andre B. Bondi, Foundations of Software and System Performance Engineering: Process, Performance Modeling, Requirements, Testing, Scalability, and Practice (LiveLessons).

Author Biography

Hemant Choubey works as a Technical Lead – Testing with Aricent Technologies and has over 6 years of experience in performance engineering. He has worked across various cutting-edge tools and technologies, including virtualization and cloud technologies. He has extensive experience as a trainer and has worked on providing end-to-end solutions to ensure high performance.

Hemant has worked with clients across geographies and deployed various APM tools for customer enterprises, with hands-on experience in APMs such as Dynatrace and AppDynamics. He actively follows the latest trends in technology and is working on his next area of interest: performance testing of CoAP- and MQTT-based applications, popularly known as IoT.
