Virtualization and Client-Server Technology
The Virtualization Trend
One of the biggest trends in information technology is virtualization. Virtualization broadly refers to the abstraction of computer resources, which often makes them appear and perform more powerfully than they otherwise would. It accomplishes this by concealing the physical characteristics of a resource (be it an operating system, storage device, server, or application) from the other systems interacting with it. In effect, it acts as a shape-shifter.
For example, it can take any one of the computer resources mentioned above and make it behave as if there were many of them. Likewise, it can make multiple applications or physical resources working together appear to operate as a single logical unit. It can make the many seem one and the one seem many.
Virtualization exploded in 2005, and its effects have been felt especially in the technologies that make up client-server environments. Network, storage, and server virtualization are the three major fields of virtualization. Network virtualization divides available bandwidth into channels, combining the resources located in the network. Each channel is independent of the others and can be assigned to a different server in real time, thereby reducing latency. This form of virtualization disguises the complexity of network operations by dividing them into manageable parts.
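The channel scheme described above can be sketched in a few lines of Python. This is an illustrative toy, not a real networking API; the class and method names (`VirtualLink`, `assign`) are invented for the example. It shows one physical link's bandwidth split into independent channels that can be reassigned to servers on the fly.

```python
# Toy sketch of network virtualization: one physical link's bandwidth is
# divided into independent channels, each assignable to a server in real
# time. Names here are hypothetical, not a real API.

class VirtualLink:
    def __init__(self, total_mbps, channels):
        self.channel_mbps = total_mbps // channels
        # channel number -> assigned server (None = unassigned)
        self.assignment = {i: None for i in range(channels)}

    def assign(self, channel, server):
        # Channels are independent: reassigning one never touches the others.
        self.assignment[channel] = server

link = VirtualLink(total_mbps=1000, channels=4)
link.assign(0, "web-server")
link.assign(1, "db-server")
print(link.channel_mbps)   # each channel gets 250 Mbps
```

Because each channel is tracked separately, reassigning channel 0 has no effect on channels 1 through 3, which is precisely the independence the text describes.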
Storage virtualization collects physical storage from different storage devices on the network into what appears to be a single storage device. All the storage in the different locations remains intact; it is simply managed from a central console, which reduces processing overhead. SANs (Storage Area Networks) rely heavily on storage virtualization. Server virtualization is the concealing of server resources: the number and identity of physical servers, processors, and operating systems are hidden. The idea is to relieve users from having to understand and manage the complexity of server resources while facilitating resource sharing and maximizing utilization.
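A minimal sketch can make the storage case concrete. Assume (hypothetically) a central console that presents one logical block address space and translates each logical block to the physical device that actually holds it; the devices themselves remain intact underneath. The names (`StorageConsole`, `locate`) are illustrative, not any vendor's API.

```python
# Sketch of storage virtualization: a central console maps one logical
# address space onto several physical devices, which remain intact.
# Class and method names are illustrative assumptions.

class StorageConsole:
    def __init__(self, devices):
        # devices: list of (device name, capacity in blocks)
        self.devices = devices

    def locate(self, logical_block):
        """Translate a logical block number to (device, physical block)."""
        for name, capacity in self.devices:
            if logical_block < capacity:
                return name, logical_block
            logical_block -= capacity
        raise ValueError("block beyond virtual capacity")

san = StorageConsole([("array-a", 100), ("array-b", 200)])
print(san.locate(50))    # lands on array-a
print(san.locate(150))   # spills over onto array-b
```

The caller sees a single 300-block device; only the console knows the data actually lives on two separate arrays.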
Ultimately, virtualization aims at cost reduction by centralizing administrative tasks and improving scalability, the ability of a product or application to handle increased volume or to adapt to a new context, say, a new operating system.
Benefits and Components of Virtualization
The benefits of virtualization are many. For example, fully configured applications and operating systems can move from one physical server to another immediately for maintenance and workload continuity. Resources are therefore used to their fullest potential, because virtual devices can run side by side on the same physical machine. The major benefits of virtualization can be summed up in three categories: partitioning, isolation, and encapsulation.
Partitioning traditionally refers to the division of memory or storage. It is usually used for running separate operating systems on the same machine so their operations do not conflict, or to free up disk space. In virtualization, partitioning allows a single physical machine to sustain multiple applications and operating systems. It also allows servers to be combined into a single virtual machine and scaled according to the architecture. Finally, partitioning allows computer resources to be distributed to virtual devices intelligently in response to user and system needs.
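The partitioning idea can be sketched as bookkeeping over a host's resources: each virtual machine receives a share of the physical CPUs and memory, and the host refuses allocations that would oversubscribe it. This is a conceptual sketch with invented names (`PhysicalHost`, `create_vm`), not a real hypervisor interface.

```python
# Sketch of partitioning: one physical machine's resources are divided
# among several virtual machines. Names are illustrative only.

class PhysicalHost:
    def __init__(self, cpus, ram_gb):
        self.cpus, self.ram_gb = cpus, ram_gb
        self.vms = {}   # vm name -> (cpus, ram_gb)

    def free(self):
        used_cpu = sum(c for c, _ in self.vms.values())
        used_ram = sum(r for _, r in self.vms.values())
        return self.cpus - used_cpu, self.ram_gb - used_ram

    def create_vm(self, name, cpus, ram_gb):
        free_cpu, free_ram = self.free()
        if cpus > free_cpu or ram_gb > free_ram:
            raise RuntimeError("insufficient physical resources")
        self.vms[name] = (cpus, ram_gb)

host = PhysicalHost(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=8)
host.create_vm("db", cpus=8, ram_gb=32)
print(host.free())   # (4, 24) still available for further guests
```

Real hypervisors do far more (time-slicing, overcommit, ballooning), but the accounting shape, dividing one physical pool among isolated guests, is the same.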
Isolation refers to the independence virtual machines afford. Virtual machines are completely isolated from their hosts and from other machines, which guarantees that if one machine crashes the rest are unaffected. Isolation also ensures data control: machines can communicate only through specifically configured network connections, so data cannot infiltrate or affect other machines or applications.
Encapsulation is akin to the abstraction and information hiding that characterize virtualization. It is defined as the combination of elements to create a new entity. For example, OOP (Object-Oriented Programming) languages like C++ and Java use encapsulation to create high-level (abstracted) objects. In virtualization, an entire virtual machine and its contents are encapsulated so they can be saved as a single file, which makes the machine remarkably easy to copy and move. Encapsulation shares virtualization's tendency to hide details so that objects are easier to deal with, presenting a simple interface in place of underlying complexity.
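The single-file property is what makes copying and migration trivial, and a toy sketch shows why. Here a made-up VM state (configuration, disk contents, a stand-in for the memory snapshot) is bundled into one file; "migrating" the machine is then nothing more than copying that file. The bundle format is invented for illustration, not a real VM disk format.

```python
# Sketch of encapsulation: the whole state of a virtual machine is
# bundled into one file, so moving the machine is just a file copy.
# The bundle layout here is a made-up example, not a real VM format.

import json
import os
import shutil
import tempfile

vm_state = {
    "config": {"cpus": 2, "ram_gb": 4},
    "disk":   {"/etc/app.conf": "mode=production"},
    "memory": "...snapshot bytes elided...",
}

src = os.path.join(tempfile.mkdtemp(), "vm.bundle")
with open(src, "w") as f:
    json.dump(vm_state, f)        # the entire machine is one file

dst = src + ".copy"
shutil.copy(src, dst)             # "migration" is just a file copy

with open(dst) as f:
    restored = json.load(f)       # the copy restores the full machine state
```

Production formats (e.g. VMware's VMDK plus configuration files) are more elaborate, but the principle is the same: because everything is in the bundle, the copy is a complete, runnable machine.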
Generally speaking, virtualization provides consolidation. As described above, encapsulation allows entire business environments to be saved in a few files, making them easy to copy and control. If an operating system is virtualized, including its applications and configurations, it can be moved anywhere in the organization without disrupting business continuity. Virtual machines (VMs) maximize availability by automatically migrating from weak or overwhelmed hosts to another virtualized platform.
Client-server architecture lends itself to the virtualization of applications and stands poised to enjoy all of its benefits. Application virtualization abstracts the application and relocates it to a central server, while the local experience of the end user remains the same. Application virtualization addresses the limitations of client-server technology. For example, in traditional client-server technology, client programs must be installed on each user system, so client applications must be duplicated for each user.
This can affect performance if bandwidth is limited or connections are unstable. If the client program is virtualized, it runs on a server in a data center instead of taking up space and power on the user system. The experience is the same for users, who can access all the features of the client program. Consolidating the program in a central location, however, yields obvious benefits in application delivery. For example, initial application deployment, updates, and patches can occur in a matter of minutes or hours, in direct contrast to the lengthy investment of manpower previously required to ensure enterprise-wide application accessibility. Consolidating applications in the data center and making the client virtual also improves security: application files are not dispersed throughout the organization but can be monitored and maintained easily from their central location. Performance and user response times improve because delivery is streamlined and requires less bandwidth, which also reduces costs. Client virtualization also increases mobility and flexibility.
For example, since the client runs from a central server, it can be accessed from systems other than the user's PC. Any device that can reach the data center can access the application and its services. Business continuity therefore improves, because people can work from anywhere, such as when traveling.
Server virtualization masks the physical realities of servers: the identity and number of servers are concealed, as are processors and operating systems. A software layer partitions a single physical server into several isolated environments, also known as emulations, partitions, containers, instances, or guests. Virtual servers are organized according to the guest-host model. Guests use virtual hardware on the server, so they can function without modification regardless of the actual server's operating system. There are three primary types of virtual servers: the virtual machine model, the paravirtual machine, and virtualization of the operating system (OS) layer. The virtual machine model was described above in the partitioning discussion, but this type of virtualization requires a few more elements.
For example, virtual machines use hypervisors to communicate with the hardware and execute any processing demands made by the virtual machine. Hypervisors are also called virtual machine monitors or VMMs (Virtual Machine Managers). They allow different operating systems to share hardware and manage executing code that demands higher privileges from the central processing unit. The paravirtual machine model (PVM) also uses the guest-host model, including a virtual machine monitor. In the PVM, however, the guest's operating system is actually modified through porting to cooperate with the VMM.
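The hypervisor's role of mediating privileged operations can be sketched with a toy "trap-and-emulate" loop: ordinary guest instructions run directly, while privileged ones trap to the VMM, which handles them on the guest's behalf. Everything here, the instruction names, the `Hypervisor` class, is invented for illustration; real hypervisors work at the hardware level, not in Python.

```python
# Toy trap-and-emulate sketch of a hypervisor: guest code runs directly
# except for privileged instructions, which trap to the VMM for safe
# emulation. All names here are hypothetical.

PRIVILEGED = {"HLT", "OUT", "SET_PAGE_TABLE"}   # example privileged ops

class Hypervisor:
    def __init__(self):
        self.log = []

    def run(self, guest, instructions):
        for instr in instructions:
            if instr in PRIVILEGED:
                # Trap: the VMM emulates the instruction so the guest
                # never touches real hardware state directly.
                self.log.append(f"{guest}: emulated {instr}")
            else:
                self.log.append(f"{guest}: executed {instr} directly")

vmm = Hypervisor()
vmm.run("guest-linux", ["ADD", "OUT", "MOV"])
for line in vmm.log:
    print(line)
```

This is also where the paravirtual model differs: a ported (paravirtualized) guest replaces its privileged instructions with explicit calls to the VMM, avoiding the trap overhead entirely.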
Portability allows a computer program to function in operating systems other than the one for which it was created, without extensive rework. Porting, the act of making a program portable, performs whatever work the program requires to exist in the different environment. Porting allows PVMs to use resources as needed and run multiple operating systems simultaneously.
Operating system (OS) virtualization varies the guest-host paradigm. In this model, the host runs a single OS kernel as the core and exports OS functions to its guests. (Kernels contain the basic functions of an operating system.) In this configuration, guests must use the same operating system as the host, though the distribution of services may vary according to guest demands. This distribution removes the need for system calls between layers, thereby reducing CPU usage. Each partition remains isolated, so a failure in one does not affect the entire system. Common code binaries and libraries exist on the same machine and can be shared, which allows OS virtual servers to host a multitude of guests (thousands) simultaneously.
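The sharing that makes OS-level virtualization so dense can be sketched as follows: one kernel holds a single copy of common libraries, every guest resolves those libraries to the same copy, and only per-guest files are private. The `Kernel`/`add_guest` names are illustrative, not a real container runtime's API.

```python
# Sketch of OS-level virtualization: one kernel hosts many isolated
# guests; common binaries are shared, per-guest files are private.
# Names are illustrative assumptions, not a real container API.

class Kernel:
    def __init__(self, shared_libs):
        self.shared_libs = shared_libs   # one copy, visible to every guest
        self.guests = {}

    def add_guest(self, name):
        # Each guest gets only a private file namespace; the kernel and
        # shared libraries are not duplicated per guest.
        self.guests[name] = {"private_files": {}}

    def read_lib(self, guest, lib):
        # All guests resolve shared libraries to the same kernel-held copy.
        return self.shared_libs[lib]

kernel = Kernel({"libc.so": "common C library image"})
kernel.add_guest("zone-1")
kernel.add_guest("zone-2")
same = kernel.read_lib("zone-1", "libc.so") is kernel.read_lib("zone-2", "libc.so")
print(same)   # both guests share one copy
```

Because each additional guest costs only its private state, not a full OS image, this model scales to far more guests per machine than full virtual machines do.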
Virtuozzo and Solaris Zones are products that support OS virtualization. XEN and UML offer PVM virtualization. VMware and Microsoft's Virtual Server both provide the technology, such as hypervisors, to support VMs.
For small to medium businesses, the top virtualization products include the following: Acronis' FullCircle, Parallels' Workstation for Windows and Linux, VMware's Infrastructure 3 (Starter Version), and XenSource's XenServer for Windows.
Intel uses virtual technology in an Embedded IT (EIT) platform that supports client provisioning, manageability, isolation, and recovery. IBM's Virtual Client Solution cuts costs through centralization while retaining users' control of their computing environments.