Many of today’s most advanced technologies, such as edge computing, cloud computing, and microservices, have their roots in the idea of the virtual machine: separating operating systems and software instances from the physical computer they run on.
A virtual machine (VM) is, in essence, a piece of software that executes programs or apps independently of a physical machine. Within a VM environment, one or more “guest” machines can be run on a single “host” computer.
Each VM runs its own operating system and functions independently of other VMs, even when they are housed on the same physical host. VMs are most commonly run on servers, but they can also be used on desktop systems and even on embedded platforms. Several VMs on a single physical host can share resources such as CPU cycles, memory, and network bandwidth.
The creation of VMs can be traced back to the dawn of computing in the 1960s when mainframe users used time-sharing to dissociate software from a physical host. In the early 1970s, a virtual machine was defined as an “efficient, isolated duplicate of a real computer machine”.
Virtual Machines (VMs) have become increasingly popular over the past two decades. This surge in popularity is largely due to businesses incorporating server virtualization. This technology allows companies to use the processing power of their physical servers more effectively, reducing the number of physical servers required and creating more available data center space. With this method, applications with varying operating system requirements can operate via a single physical host. This eliminates the need for individual server hardware for each application.
There are two main kinds of VMs: process VMs, which isolate a single process, and system VMs, which provide complete separation of an operating system and its applications from the physical computer. The Java Virtual Machine, the .NET Framework, and the Parrot virtual machine are examples of process VMs.
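The standard CPython interpreter is itself a process VM of this kind: Python source is compiled to bytecode, which the interpreter’s virtual machine then executes. As a minimal sketch, the standard-library dis module makes that bytecode visible:

```python
# CPython compiles this function to bytecode; the interpreter's process VM
# executes those instructions rather than native machine code.
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions the process VM will run for add()
dis.dis(add)
```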
System VMs rely on hypervisors as intermediaries that give software access to hardware resources. The hypervisor emulates the computer’s central processing unit (CPU), memory, hard disk, network, and other resources, establishing a resource pool that can be distributed to the VMs according to their specific requirements. The hypervisor can support multiple virtual hardware platforms that are isolated from one another, allowing VMs running Linux and Windows Server operating systems to share the same physical host.
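To make the resource-pool idea concrete, here is a minimal sketch; it assumes a Linux host running the KVM/QEMU hypervisor with libvirt and the libvirt-python bindings installed, and simply reports the host’s physical pool and the slice of it allocated to each guest VM:

```python
# Minimal sketch: query a libvirt-managed hypervisor for its physical resource
# pool and for the CPU/memory allocation of each VM it hosts.
# Assumes a KVM/QEMU host with libvirt and the libvirt-python package installed.
import libvirt

conn = libvirt.open("qemu:///system")          # connect to the local system hypervisor
try:
    model, mem_mb, cpus = conn.getInfo()[:3]   # host CPU model, total RAM (MiB), CPU count
    print(f"Host pool: {cpus} CPUs, {mem_mb} MiB RAM ({model})")

    for dom in conn.listAllDomains():          # every guest VM defined on this host
        state, max_kib, cur_kib, vcpus, cpu_time = dom.info()
        print(f"  {dom.name()}: {vcpus} vCPUs, "
              f"{cur_kib // 1024} MiB allocated (max {max_kib // 1024} MiB)")
finally:
    conn.close()
```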
Notable players in the hypervisor market include VMware (ESX/ESXi), Intel/Linux Foundation (Xen), Oracle (Oracle VM Server for SPARC and Oracle VM Server for x86), and Microsoft (Hyper-V).
Virtual machines can also run on desktop computers. For instance, a Mac user can run a virtual Windows instance on physical Mac hardware.
A hypervisor takes on the role of resource allocator and manager for the VMs. It schedules resources and adjusts their distribution according to hypervisor and VM configurations, and it can redistribute resources in response to fluctuating demand. Hypervisors fall into one of two types: Type 1 (bare-metal) hypervisors, which run directly on the physical hardware, and Type 2 (hosted) hypervisors, which run as applications on top of a host operating system.
The main benefit of such an arrangement is that the software is abstracted from the physical host computer, allowing users to run several OS instances on a single piece of hardware. This can save a company time, management costs, and physical space. VMs can also support legacy applications, reducing the need and cost of migrating an older application to a newer or different operating system.
Moreover, developers commonly use VMs to test applications in a secure, sandboxed environment. If developers need to check whether their applications are compatible with a new OS, they can test in a VM instead of buying the new hardware and OS prematurely. For instance, Microsoft has recently updated its evaluation Windows VMs, which let developers download a Windows 11 evaluation VM and try the OS without making any changes to their main computer.
VMs can also help isolate malware that infects a given VM instance. Because software running inside a VM cannot tamper with the host computer, the spread of malicious software is significantly reduced.
However, virtual machines are not without their disadvantages. Running multiple VMs on one physical host can lead to unstable performance, particularly if an application’s infrastructure requirements are not met, and this often makes VMs less efficient than a physical computer.
If the physical server fails, all of the apps running on it will be affected. As a result, most IT departments strike a balance between physical and virtual systems.
The successful application of VMs in server virtualization has led to their use in other areas, including storage, networking, and desktops. If a type of hardware is being used in the data center, chances are the possibility of virtualizing it is being explored; application delivery controllers are one example.
In network virtualization, companies are exploring network-as-a-service offerings and network functions virtualization (NFV). NFV uses general-purpose servers in place of specialized network appliances, enabling more flexible and scalable services. This stands in contrast to software-defined networking, in which the network control plane is separated from the forwarding plane to enable more automated provisioning and policy-based management of network resources. A third technology, virtual network functions (VNFs), consists of software-based services that can run in an NFV environment, including routing, firewalling, load balancing, WAN acceleration, and encryption.
As an example, Verizon uses NFV to power its Virtual Network Services, which let clients spin up new services and capabilities on demand. Those services include virtual applications, routing, software-defined WANs, WAN optimization, and even Session Border Controller as a Service (SBCaaS) for centrally managing and securely deploying IP-based real-time services such as VoIP and unified communications.
The proliferation of VMs has spurred further technological development, most notably containers, which take the concept of virtualization a step further and have become an appealing choice for web application developers. In a container, a single application is virtualized along with its dependencies. Unlike a VM, a container includes only the application, its binaries, and its libraries, resulting in much less overhead.
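As a rough illustration of that lighter-weight model, the following sketch assumes a local Docker Engine and the Docker SDK for Python (installed with pip install docker); the image name is only illustrative. The container carries the application and its libraries but shares the host’s kernel instead of booting a guest operating system:

```python
# Minimal sketch: run a single command inside a container and capture its output.
# Assumes a local Docker Engine and the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()                      # connect to the local container engine
output = client.containers.run(
    "python:3.12-slim",                         # illustrative image: the app plus its libraries
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                # delete the container when the command exits
)
print(output.decode())
```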
While some believe that the advancement of containers may lead to the end of the virtual machine, VMs have enough capabilities and advantages to continue driving their progress. VMs remain useful for running multiple applications concurrently, or for operating legacy applications on older operating systems.
Furthermore, some argue that containers are less secure than VM hypervisors because all containers on a host share a single OS, whereas VMs can isolate both the application and the OS.
The research manager of IDC’s Software-Defined Compute division, Gary Chen, shared his view that the VM software market, even though maturing and nearing saturation, will continue to grow in the next five years. He suggested that it continues to be a crucial technology as users start to explore cloud architectures and containers. This view was expressed in IDC’s Worldwide Virtual Machine Software Forecast, 2019-2022.
VMs are also considered integral to newer technologies such as 5G and edge computing. For instance, virtual desktop infrastructure (VDI) vendors such as Microsoft, VMware, and Citrix are looking for ways to extend their VDI systems to employees now working from home under post-COVID hybrid models.
As Mahadev Satyanarayanan, a professor of computer science at Carnegie Mellon University, explains, “With VDI, you are transmitting your keystrokes and mouse activities to essentially a remote desktop, hence, you need very low latency.” In 2009, Satyanarayanan wrote about how virtual machine-based cloudlets could be used to bring greater processing capability to mobile devices at the edge of the internet, work that led to what is known today as edge computing.
In the 5G wireless space, network slicing uses software-defined networking and NFV technologies to run network functions in VMs on virtualized servers, providing services that once ran only on proprietary hardware.
Like many other technologies in use today, these emerging innovations would not have been developed had it not been for the original VM concepts introduced decades ago.
Keith Shaw is a freelance digital journalist who has written about the IT world for more than 20 years.