Before virtualization, there was a one-to-one relationship between a physical server and its operating system. Applications (web servers, email servers, etc.) ran within that single operating system.

It is possible to run multiple server applications within one OS, but it’s risky: a serious error in a single application could have knock-on effects on every other application on the system. Alternatively, you could dedicate an entire physical server to each application, but this is expensive (in hardware, power, rack space, bandwidth, etc.) and inefficient, as server resources (CPU, RAM, storage, etc.) would likely be under-utilized.

Virtualization addresses these problems by breaking the one-to-one relationship between hardware and OS, meaning that multiple operating systems (or multiple instances of the same operating system) can run simultaneously on a single physical server. Each instance is called a Virtual Machine (VM).

Key Properties of Virtual Machines

Partitioning

  • Run multiple operating systems on one physical machine.
  • Divide system resources between virtual machines.

Isolation

  • Provide fault and security isolation at the hardware level.
  • Preserve performance with advanced resource controls.

Encapsulation

  • Save the entire state of a virtual machine to files.
  • Move and copy virtual machines as easily as moving and copying files.

Hardware Independence

  • Provision or migrate any virtual machine to any physical server.

(Source: VMware, https://www.vmware.com/solutions/virtualization.html)


Hypervisors

A hypervisor¹ manages the hardware resources (CPU, RAM, etc.) of the physical server and allocates them to each VM.

Type 1

The type of hypervisor that runs directly on the hardware is called a Type 1 hypervisor; examples include VMware ESXi and Microsoft Hyper-V. They are also known as ‘bare-metal’ or ‘native’ hypervisors because they run directly (natively) on the hardware (the ‘metal’). These are commonly used in data centers, largely due to their efficiency.

Type 2

Type 2 hypervisors run as programs within an operating system, like a regular computer application. Examples include VMware Workstation, Oracle VirtualBox, etc.

In this case, the OS running on the hardware is called the Host OS, and OSes running in VMs are called Guest OSes.

Type 2 hypervisors are also called hosted hypervisors.

Type 2 hypervisors are not as common in data centers, but they are commonly used on personal-use devices. Most often they are used for running applications that do not support the native OS of a given device (e.g. running a Windows-only app on a Mac or Linux PC).


Connecting VMs to a Network

VMs are connected to each other and to the external network via a virtual switch running in the hypervisor. This virtual switch has many of the same functionalities as a physical switch, including support for VLANs and trunking. VLANs are particularly useful for separating the traffic from different VMs.

Interfaces on the virtual switch are connected to the physical NIC or NICs of the server for external communication.
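For example, on a Linux host running Open vSwitch, a virtual switch can be wired up roughly as follows (a sketch only; the names `br0`, `eth0`, `vnet0`, and `vnet1` are placeholders for whatever bridge, physical NIC, and VM interfaces your environment uses):

```shell
# Create a virtual switch (bridge) on the host
ovs-vsctl add-br br0

# Connect the physical NIC for external communication;
# with no VLAN tag set, the port acts as a trunk carrying all VLANs
ovs-vsctl add-port br0 eth0

# Attach VM interfaces as access ports in separate VLANs
# to keep the VMs' traffic isolated from each other
ovs-vsctl add-port br0 vnet0 tag=10   # VM 1 in VLAN 10
ovs-vsctl add-port br0 vnet1 tag=20   # VM 2 in VLAN 20
```

Traffic between the two VMs would then have to pass through an external router (or other Layer 3 device), just as with VLANs on a physical switch.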


Containers²

A container is a software package that bundles an app with all of the dependencies (libraries, other programs, etc.) the app needs to run.

Info

Multiple apps can be run within a single container, but this is uncommon and somewhat contrary to the point of containers.

Containers run on a Container Engine, e.g. Docker.³ Container engines are run on a host OS; various versions of Linux are popular for this purpose.
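As an illustrative sketch of this workflow with Docker (the image name `myapp` and the Dockerfile contents here are hypothetical, not from any particular project), packaging and running an app looks like this:

```shell
# Assumes a directory containing the app and a Dockerfile such as:
#   FROM python:3.12-slim
#   COPY app.py /app.py
#   CMD ["python", "/app.py"]

docker build -t myapp:1.0 .              # bundle the app + its dependencies into an image
docker run -d --name myapp1 myapp:1.0    # start a container from that image
docker ps                                # list running containers
docker stop myapp1 && docker rm myapp1   # stop and remove the container
```

The image produced by `docker build` is the container package itself; any host running a container engine can start it with `docker run`, which is what makes containers so portable.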

Containers themselves are lightweight (they have a small size) and include only the dependencies required to run the specific app.

In small systems it is possible to manually oversee all containers; however, large-scale systems (e.g. microservices⁴) may require thousands of containers. A Container Orchestrator is a software platform for automating the deployment, management, scaling, etc. of containers, which is particularly helpful in systems that use large numbers of them. Kubernetes is the most popular example, although there are alternatives such as Docker Swarm.
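With Kubernetes, for example, you declare what you want to run and the orchestrator handles placement, scaling, and restarts. A hypothetical sketch (the deployment name and image are placeholders):

```shell
# Declare a deployment that runs a containerized app
kubectl create deployment myapp --image=registry.example.com/myapp:1.0

# Scale out: the orchestrator schedules the additional
# containers (pods) across the cluster's nodes automatically
kubectl scale deployment myapp --replicas=50

# Kubernetes also replaces failed pods to maintain the replica count
kubectl get pods
```

The point is that no one manually starts 50 containers; the orchestrator continuously reconciles the cluster toward the declared state.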

VMs vs Containers

  • Speed: VMs can take a long time to boot (several minutes in some cases) as each VM runs its own full-fat operating system. Containers can boot in milliseconds.
  • Space: VMs can take a lot of disk space (10s of gigabytes is common) while containers take up very little (depending on the application being run).
  • CPU/Memory: VMs typically use significantly more resources, since each VM must run an entire operating system.
  • Portability: While VMs are very portable, containers are even better due to their smaller size, faster boot times, and compatibility; Docker containers can be run on nearly any container service.

You might be thinking containers are looking a lot better than VMs…

  • Isolation: Since each VM runs its own OS, it’s less likely that an issue with one will affect another; only an issue with the hypervisor or the server hardware itself would affect all VMs. Containers, on the other hand, must all run within a single OS; if that OS crashes, all the containers are going down with it. VM-level isolation also provides some security benefits.

There is a major movement toward the use of containers, especially with the rise of microservices, automation, and DevOps⁵, but VMs are still widely used today.

Footnotes

  1. Obviously, Hypervisor is a really cool name. They’re also sometimes called a Virtual Machine Monitor (VMM).

  2. This section was added after Cisco updated exam topic 1.12. Previously, containers weren’t an explicitly stated topic, hence the page’s overall focus on VMs relative to containers.

  3. Docker is probably the most popular container engine (at time of writing).

  4. Microservice Architecture is an approach to software architecture that divides a larger solution into smaller parts (microservices).

  5. The combination of Software Development and IT Operations. While software development and IT are both under the technology umbrella (and industry outsiders may be hard pressed to tell the difference) IT/Networking specialists have historically operated with a certain ambivalence (or perhaps just ignorance) toward the applications that actually generate all the data we’re transporting.