Virtualization comes in many flavors, and containerization is one of them. Most people have experience running an operating system directly on a desktop or laptop computer, which is referred to as “running on bare metal”: the operating system runs directly on the computer hardware.
The most common type of virtualization is the virtual machine. A virtual machine runs on a “hypervisor”, a program that creates “virtual” hardware on which a guest operating system runs. In other words, virtual machine software simulates components like the CPU, memory, and motherboard peripherals such as USB ports entirely in software. Virtual machines also emulate an entire hard disk inside a file, a video card to display the screen, and a sound card for audio.
A virtual machine can run Windows, macOS, Linux, Android, ChromeOS, and others on your computer in a window. Your host operating system is loaded on the bare-metal hardware, and you add hypervisor software such as Oracle VirtualBox, Microsoft Hyper-V, VMware, Kernel-based Virtual Machine (KVM), Red Hat Enterprise Virtualization (RHEV), or Citrix XenServer to create one or more virtual machines that run inside the host operating system.
Of all virtualization approaches, virtual machines use the most resources because they have to virtualize everything, including all of the hardware. This approach takes memory and CPU resources away from the host operating system and can impact performance on all but the most powerful hosts. The cost is softened somewhat by the hardware virtualization features in modern Intel and AMD processors (Intel VT-x and AMD-V), which make running virtual machines on a host more efficient.
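On Linux you can check for those hardware virtualization features directly, since the kernel exposes the CPU flags; a minimal sketch:

```shell
# Count CPU flags advertising hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V). A result of 0 means virtual
# machines will run without hardware acceleration (or the feature
# is disabled in the BIOS/UEFI firmware).
grep -E -c '(vmx|svm)' /proc/cpuinfo
```

The count is simply the number of CPU threads reporting the flag, so any value above zero means the hypervisor can use hardware acceleration.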
The most efficient form of virtualization is the Docker container. Rather than virtualizing the computer hardware, Docker containers use the underlying hardware and even the underlying operating system and kernel. Docker containers are not virtual machines; they are virtualized applications. A Docker image becomes a container when it is run on the Docker Engine, which is now available for both Linux and Windows-based applications. The advantage of a Dockerized application is that each container is an isolated environment and will run the same way regardless of differences in the underlying infrastructure configuration.
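To make the image-versus-container distinction concrete, here is a minimal, hypothetical Dockerfile; the image it builds becomes a running container only when the Docker Engine executes it:

```dockerfile
# Package a tiny static file server with its runtime into one image.
FROM python:3.12-slim
WORKDIR /app
COPY . /app
EXPOSE 8000
# Build the image, then run it as a container:
#   docker build -t hello-web .
#   docker run --rm -p 8000:8000 hello-web
CMD ["python", "-m", "http.server", "8000"]
```

Everything the application needs ships inside the image, which is why it behaves the same on any host running the Docker Engine.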
Software vendors like Docker images because a complex application with many dependencies and configuration steps, which would otherwise require hours of support at a customer site, is packaged as one easy-to-deploy, transportable package that will run anywhere. Docker images are an industry standard and extremely lightweight, meaning you can run several Docker containers at once with very little resource overhead.
Docker containers can even communicate with each other over a Docker network that isolates the inner workings of an application, thereby reducing its attack surface and making it more secure. Docker containers are ideal because they separate application dependencies from infrastructure architecture. They are also an advantage because, rather than reserving resources from the host like a virtual machine, Docker containers are given capped limits on memory and CPU that they only consume under load. Multiple Docker containers share the underlying operating system kernel and take up much less disk space because they contain only the application elements rather than an entire operating system.
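Both ideas, private networks and capped resource limits, show up directly in a Compose file; a sketch with hypothetical service names:

```yaml
# docker-compose.yml (illustrative two-service application)
# "backend" is reachable only on the internal network, shrinking the
# attack surface; the limits are ceilings consumed only under load.
services:
  web:
    image: nginx:alpine
    ports: ["8080:80"]
    networks: [frontend, internal]
  backend:
    image: redis:alpine
    networks: [internal]     # not exposed to the outside world
    mem_limit: 256m          # hard cap on memory
    cpus: 0.5                # at most half a CPU core
networks:
  frontend:
  internal:
    internal: true           # no external connectivity on this network
```

Nothing outside the Compose project can reach `backend`; only `web` publishes a port to the host.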
The downside of a Docker container is that it must mount storage outside of the container to maintain persistent data. Also, if a new version of a containerized application becomes available, the container cannot be upgraded in place; it must instead be destroyed and recreated. So basically, by using Docker you gain an efficient, portable environment, but you sacrifice some flexibility.
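The usual pattern for both points is a named volume that outlives any one container, combined with a destroy-and-recreate “upgrade”; a sketch using stock nginx images:

```shell
# Persistent data lives in a named volume, not in the container.
docker volume create web-data
docker run -d --name web -v web-data:/usr/share/nginx/html nginx:1.24

# A new release arrives: the container is not upgraded in place.
docker pull nginx:1.25
docker stop web && docker rm web
docker run -d --name web -v web-data:/usr/share/nginx/html nginx:1.25
# The volume "web-data" survived the swap, so the data did too.
```

The container is disposable; only the volume is treated as durable state.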
Another type of container is the Linux Container, called LXC for short. LXC virtualizes software at the operating system level, unlike Docker, which virtualizes at the application level. The advantage of LXC is that you can run single applications in virtual environments as with Docker, but you can also virtualize an entire operating system inside an LXC container.
The main advantage is that LXC, unlike a virtual machine, does not need to virtualize the hardware. This makes it easy to control a virtual environment using tools from the underlying host operating system, requiring fewer resources while still preserving portability.
If this sounds like Docker, that is because LXC was the underlying technology for Docker until Docker replaced it with its own runtime, prior to creating a Windows version. I like LXC because I can create a portable container, install any software I want in it, and even upgrade the operating system inside the LXC container. The only shared component is the kernel of the underlying host. LXC containers are all Linux; there is no such thing as a Windows LXC container.
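A typical LXC session looks like this; the container name, distribution, and release below are just examples:

```shell
# Create a system container from the "download" template, start it,
# run commands inside it, then tear it down.
lxc-create -t download -n demo -- -d ubuntu -r jammy -a amd64
lxc-start -n demo
lxc-attach -n demo -- apt-get update   # a full OS inside: upgrade it freely
lxc-stop -n demo
lxc-destroy -n demo
```

Because the container holds a whole userland, you manage it much like a lightweight machine rather than a single packaged application.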
Finally, there is another type of container system called LXD, an extension of LXC. LXD exposes a REST API that connects to liblxc, the LXC software library. LXD is written in the Go language and runs as a system daemon that applications can access locally through a Unix socket or remotely via HTTPS over the network.
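You can talk to that REST API directly with curl; a sketch assuming the snap package's socket path, which varies by installation:

```shell
# Query the local LXD daemon over its Unix socket. The snap install
# puts the socket at the path below; other installs may differ.
curl --silent --unix-socket /var/snap/lxd/common/lxd/unix.socket \
     http://lxd/1.0
```

The response is JSON describing the server and its API version, the same endpoint the `lxc` client tool uses under the hood.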
The LXD daemon can take advantage of host-level security to make containers more integrated and therefore more secure. It handles networking and data storage through the LXD command-line interface, which in theory simplifies sharing resources with containers.
LXD offers container migration and snapshots, which LXC alone does not provide. Canonical, the maker of Ubuntu, launched LXD in late 2014 as an advancement of LXC, and the first production use of LXD was in 2016. LXD depends on LXC: when you use LXD, you are using the underlying LXC container in an abstracted form. You can use LXC without LXD, but LXC is a requirement for LXD.
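Snapshots and migration are one-liners with the LXD client (confusingly named `lxc`); the container name and remote below are examples:

```shell
# Launch a container, snapshot it, roll back, and copy it elsewhere.
lxc launch ubuntu:22.04 c1
lxc snapshot c1 before-upgrade
lxc exec c1 -- apt-get -y upgrade
lxc restore c1 before-upgrade     # roll back if the upgrade misbehaves
lxc copy c1 otherhost:c1          # migrate to a remote LXD server
```

The remote (`otherhost:`) must first be registered with `lxc remote add`, which is what makes LXD containers manageable over the network.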
LXD builds on top of LXC to provide a new and “better” user experience. It adds tooling, and its key feature is making containers controllable over the network.
From what I have read, LXC+LXD is not intended to be a replacement for Docker. LXD is designed for hosting virtual environments that can be upgraded in place from a distribution image. Docker focuses on minimal containers that perform simpler functions and are not upgraded or reconfigured, but are simply replaced with a new container.
If you intend to virtualize an entire operating system or want to run a persistent application with non-volatile data, LXD is the better solution. LXC and LXD both run only Linux operating system instances.
I have experience with Docker and LXC deployment on a QNAP NAS. QNAP is migrating toward Docker and LXD and is deprecating the LXC container interface in its Container Station. Which should you use? Clearly there are cases where virtual machines, Docker containers, or LXC/LXD containers each have the advantage. Because of resource use, I fall back to a virtual machine only when the host operating system's kernel lacks support for a feature that my application needs.