Linux Containers (LXC) Overview
If you're a developer, chances are you've heard of Docker, and perhaps you've used it for its ability to package, ship and run applications within "containers". With all the attention Docker has received in the tech industry, it would be hard not to have; companies from Amazon to PayPal to Google have been building services to support it.
Regardless of whether you have a use case for Docker, or containers in general, it's worth digging in and becoming familiar with containers. If you're a beginner, though, you might want to crawl before you walk, as the old saying goes. Docker has evolved over the years, and becoming familiar with its complexities can take time and patience.
If you're still wary about jumping into Docker, fear not: there are other options, such as LXC (Linux Containers). Although LXC doesn't offer quite the same user experience as Docker, it has its uses. In this post we give an introduction to LXC containers and how to use them.
What is LXC?
LXC (short for "Linux containers") is a solution for virtualizing software at the operating system level within the Linux kernel. Unlike traditional hypervisors (think VMware, KVM and Hyper-V), LXC lets you run single applications in virtual environments as well as virtualize an entire operating system inside an LXC container.
LXC’s main advantages include making it easy to control a virtual environment using userspace tools from the host OS, requiring less overhead than a traditional hypervisor and increasing the portability of individual apps by making it possible to distribute them inside containers.
If you're thinking that LXC sounds a lot like Docker or CoreOS containers, it's because LXC was at the origin of the container revolution and used to be the underlying technology that made Docker and CoreOS tick. More recently, however, Docker has gone in its own direction and no longer depends on LXC, but LXC principles, if not LXC code, remain central to the way containers are developing.
Understanding the Key Differences Between LXC and Docker
LXC is a container technology that gives you lightweight Linux containers, while Docker is a single-application virtualization engine built on containers. They may sound similar, but they are quite different. Unlike LXC containers, Docker containers do not behave like lightweight VMs and cannot be treated as such: Docker containers are restricted to a single application by design.
Differences between LXC and Docker:

Parameter | LXC | Docker
--- | --- | ---
Developed by | LXC was created by engineers at IBM and Virtuozzo, with contributions from Google and Eric Biederman. | Docker was created by Solomon Hykes in 2013.
Persistence | LXC containers behave like persistent systems; data written inside a container survives restarts. | Docker containers are ephemeral by design; persistent data is kept in separately managed volumes.
Usability | A multi-purpose solution for system virtualization. | A single-purpose solution for packaging and running applications.
Platform | LXC is supported only on Linux. | Docker runs natively on Linux and, via a lightweight VM, on Windows and macOS.
Virtualization | LXC provides full-system virtualization. | Docker provides application virtualization.
Ecosystem | A relatively small ecosystem of tooling and image servers. | A sizeable ecosystem of registries, orchestration tools and cloud services.
Popularity | Less popular among developers. | Very popular; Docker took containers mainstream.
Getting Started
In the following demo we will be using VirtualBox and Vagrant for our host machine. Our host image is Ubuntu 18.04.
vagrant@ubunruvm01:~$ lsb_release -dirc
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
LXC Installation: on Ubuntu 18.04, LXC and LXD are already installed. We can verify this by running the which command and looking for lxc and lxd:
vagrant@ubunruvm01:~# which lxc lxd
/usr/bin/lxc
/usr/bin/lxd
We can also see which packages are installed with dpkg -l:
vagrant@ubunruvm01:~# sudo dpkg -l | grep lxd
ii  lxd         3.0.3-0ubuntu1~18.04.1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  3.0.3-0ubuntu1~18.04.1  amd64  Container hypervisor based on LXC - client
Installing LXD is easy and straightforward. If it is not already present, it can be installed with the "apt-get" command.
vagrant@ubunruvm01:~# sudo apt-get install lxd
Although lxd comes pre-installed, the service is not started. Below we can see, using the “systemctl status” command, that the service is enabled but not running.
vagrant@ubunruvm01:~# sudo systemctl status lxd
● lxd.service - LXD - main daemon
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: inactive (dead) since Thu 2021-02-25 16:35:36 UTC; 6s ago
     Docs: man:lxd(1)
To start the lxd service, run the following “systemctl” command:
vagrant@ubunruvm01:~# sudo systemctl start lxd
We can see that the service is now running.
vagrant@ubunruvm01:~# sudo systemctl status lxd
● lxd.service - LXD - main daemon
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Thu 2021-02-25 17:36:46 UTC; 21s ago
     Docs: man:lxd(1)
LXD requires root privileges, so to run lxc commands without sudo or root privileges you need to add your user to the lxd group.
vagrant@ubunruvm01:~# sudo getent group lxd
lxd:x:108:ubuntu
To add your user to the lxd group run the following command:
vagrant@ubunruvm01:~$ sudo gpasswd -a vagrant lxd
Adding user vagrant to group lxd
We can now see that the user “vagrant” has been added to the lxd group.
vagrant@ubunruvm01:~# sudo getent group lxd
lxd:x:108:ubuntu,vagrant
For the changes to take effect without having to log out, run the following command:
vagrant@ubunruvm01:~# newgrp lxd
vagrant@ubunruvm01:~$ groups
lxd vagrant
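In a script, you may want to check group membership before attempting lxc commands. A minimal sketch (the helper name is ours; the lxd group name matches the output above):

```shell
# Return success if the given space-separated group list contains "lxd".
# In practice, feed it the output of `id -nG`.
in_lxd_group() {
  case " $1 " in
    *" lxd "*) return 0 ;;
    *) return 1 ;;
  esac
}

if in_lxd_group "$(id -nG)"; then
  echo "ready to run lxc without sudo"
else
  echo "add yourself with: sudo gpasswd -a $(id -un) lxd"
fi
```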
By default, LXD comes with no configured network or storage. You can get a basic configuration by initializing the service with “lxd init”. In the following example, accept all defaults except for the storage backend, where we will use a directory on the local system.
vagrant@ubunruvm01:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
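The questions above can also be answered non-interactively: LXD accepts a preseed document on stdin via `lxd init --preseed`. The YAML below is a sketch matching the answers we chose (dir storage backend, lxdbr0 bridge) and may need adjusting for your LXD version:

```shell
# Preseed mirroring the interactive answers above (assumed layout).
preseed='storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
'

# Apply it only if lxd is actually installed on this machine.
if command -v lxd >/dev/null 2>&1; then
  printf '%s' "$preseed" | sudo lxd init --preseed
else
  echo "lxd not installed; preseed not applied"
fi
```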
To see what version of LXC you are running, use the following command:
vagrant@ubunruvm01:~$ lxc version
To start your first container, try: lxc launch ubuntu:18.04
Client version: 3.0.3
Server version: 3.0.3
To see a list of available commands, run “lxc help”.
vagrant@ubunruvm01:~$ lxc help
Description:
  Command line client for LXD

  All of LXD's features can be driven through the various commands below.
  For help with any of those, simply call them with --help.

Usage:
  lxc [command]

Available Commands:
  alias       Manage command aliases
  cluster     Manage cluster members
  config      Manage container and server configuration options
  console     Attach to container consoles
  copy        Copy containers within or in between LXD instances
  delete      Delete containers and snapshots
  exec        Execute commands in containers
  file        Manage files in containers
  help        Help about any command
  image       Manage images
  info        Show container or server information
  launch      Create and start containers from images
  list        List containers
  move        Move containers within or in between LXD instances
  network     Manage and attach containers to networks
  operation   List, show and delete background operations
  profile     Manage profiles
  publish     Publish containers as images
  remote      Manage the list of remote servers
  rename      Rename containers and snapshots
  restart     Restart containers
  restore     Restore containers from snapshots
  snapshot    Create container snapshots
  start       Start containers
  stop        Stop containers
  storage     Manage storage pools and volumes
  version     Show local and remote versions
For more information about a command, you can run “lxc help <command>”:
vagrant@ubunruvm01:~$ lxc help storage
Description:
  Manage storage pools and volumes

Usage:
  lxc storage [command]

Available Commands:
  create      Create storage pools
  delete      Delete storage pools
  edit        Edit storage pool configurations as YAML
  get         Get values for storage pool configuration keys
  info        Show useful information about storage pools
  list        List available storage pools
  set         Set storage pool configuration keys
  show        Show storage pool configurations and resources
  unset       Unset storage pool configuration keys
  volume      Manage storage volumes

Global Flags:
      --debug         Show all debug messages
      --force-local   Force using the local unix socket
  -h, --help          Print help
  -v, --verbose       Show all information messages
      --version       Print version number
For example, to list the available storage pools, run the following command. It shows the directory-backed storage pool that was set up during initialization, which is where the container images will be stored.
vagrant@ubunruvm01:~$ lxc storage list
+---------+-------------+--------+------------------------------------+---------+
| NAME    | DESCRIPTION | DRIVER | SOURCE                             | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default |             | dir    | /var/lib/lxd/storage-pools/default | 1       |
+---------+-------------+--------+------------------------------------+---------+
LXC images are hosted on remote image servers for use by LXC and LXD. To see where remote images will be pulled from, run “lxc remote list”:
vagrant@ubunruvm01:~$ lxc remote list
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| NAME            | URL                                      | PROTOCOL      | AUTH TYPE | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams |           | YES    | NO     |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| local (default) | unix://                                  | lxd           | tls       | NO     | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams |           | YES    | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams |           | YES    | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
Let's search the remote repositories for a CentOS image. First, to see the list of local images, run “lxc image list”. Below you can see that there are currently no images downloaded.
vagrant@ubunruvm01:~$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
To search for a specific image you can run “lxc image list images:<name of image>”. You don't have to know the complete name; you can type part of it, like “cen” for CentOS.
vagrant@ubunruvm01:~$ lxc image list images:cen
+--------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| ALIAS                          | FINGERPRINT  | PUBLIC | DESCRIPTION                            | ARCH    | SIZE     | UPLOAD DATE                   |
+--------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| centos/7 (3 more)              | 28c8f402ea46 | yes    | Centos 7 amd64 (20210225_07:08)        | x86_64  | 83.47MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/7/armhf (1 more)        | acb5e3324d40 | yes    | Centos 7 armhf (20210225_07:08)        | armv7l  | 79.33MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/7/cloud (1 more)        | 67f8af33783e | yes    | Centos 7 amd64 (20210225_07:08)        | x86_64  | 90.03MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/7/cloud/armhf           | 3fe2549b0ac8 | yes    | Centos 7 armhf (20210225_07:08)        | armv7l  | 85.65MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/7/cloud/i386            | d03240b67ad2 | yes    | Centos 7 i386 (20210225_07:08)         | i686    | 90.90MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/7/i386 (1 more)         | 4ed3b99a7c72 | yes    | Centos 7 i386 (20210225_07:08)         | i686    | 84.11MB  | Feb 25, 2021 at 12:00am (UTC) |
| centos/8 (3 more)              | a59629c1729d | yes    | Centos 8 amd64 (20210225_07:08)        | x86_64  | 125.57MB | Feb 25, 2021 at 12:00am (UTC) |
| centos/8-Stream (3 more)       | 065036c1c697 | yes    | Centos 8-Stream amd64 (20210225_07:08) | x86_64  | 128.31MB | Feb 25, 2021 at 12:00am (UTC) |
| centos/8-Stream/arm64 (1 more) | 732d429f292e | yes    | Centos 8-Stream arm64 (20210225_07:08) | aarch64 | 124.61MB | Feb 25, 2021 at 12:00am (UTC) |
| centos/8-Stream/cloud (1 more) | 77e1ba7d7c37 | yes    | Centos 8-Stream amd64 (20210225_07:08) | x86_64  | 142.73MB | Feb 25, 2021 at 12:00am (UTC) |
+--------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
In this example, let's start an Ubuntu 20.10 image using “lxc launch”. This will download the image to the local host, where it becomes our base image. The next time we start an Ubuntu 20.10 container, it will use the base image instead of downloading it from the remote repository.
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10
Creating the container
Container name is: outgoing-monkey
Starting outgoing-monkey
Once the image has finished downloading, we can see that it has also started a container named “outgoing-monkey”. Running “lxc image list” shows the image locally; it is only 365MB. Now that the image is stored locally, containers based on it will start much faster.
vagrant@ubunruvm01:~$ lxc image list
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC | DESCRIPTION                             | ARCH   | SIZE     | UPLOAD DATE                  |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
|       | a173d56bc66e | no     | ubuntu 20.10 amd64 (release) (20210209) | x86_64 | 365.55MB | Feb 25, 2021 at 9:37pm (UTC) |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
We can view the running container with “lxc list”. Here we can see the container is running and has been given an IP address on the 10.91.196.0/24 network. Any additional containers will be given addresses on this network as well.
vagrant@ubunruvm01:~$ lxc list
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME            | STATE   | IPV4                 | IPV6                                          | TYPE       | SNAPSHOTS |
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
| outgoing-monkey | RUNNING | 10.91.196.201 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe0b:f9fd (eth0) | PERSISTENT | 0         |
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
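If a script needs the container's address, the table above can be parsed with standard tools (newer LXD clients also offer `lxc list --format csv` for machine-readable output). A sketch using a captured sample row:

```shell
# Pull the IPv4 address out of an "lxc list" table row.
# sample_row mimics the table above; in a real script, pipe `lxc list` in.
sample_row='| outgoing-monkey | RUNNING | 10.91.196.201 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe0b:f9fd (eth0) | PERSISTENT | 0 |'
ip=$(printf '%s\n' "$sample_row" |
  awk -F'|' '/RUNNING/ { split($4, a, " "); print a[1] }')
echo "$ip"   # prints 10.91.196.201
```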
We can view the interfaces and see the network we created during the initialization, which is “lxdbr0”.
vagrant@ubunruvm01:~$ ip a show dev lxdbr0
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:e0:1f:3b:62:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.91.196.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:41c3:c657:13ec::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::68d8:4fff:fe30:3a99/64 scope link
       valid_lft forever preferred_lft forever
When no name is specified for the container, a random name is assigned. Let's delete this container and launch one with a name of our choosing. Before deleting a container, you'll need to make sure it is stopped. To stop a running container, use “lxc stop <container name>”.
vagrant@ubunruvm01:~$ lxc stop outgoing-monkey
After stopping the container you can delete it with “lxc delete <container name>”. Then run “lxc list” to confirm the container is gone.
vagrant@ubunruvm01:~$ lxc delete outgoing-monkey
vagrant@ubunruvm01:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
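Stopping and deleting is a common enough pair that it can be wrapped in a small helper. A sketch (the function name is ours):

```shell
# Hypothetical helper: stop a container (ignoring the error if it is
# already stopped), then delete it.
cleanup_container() {
  name="$1"
  lxc stop "$name" 2>/dev/null || true
  lxc delete "$name"
}
```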
Now let's recreate a container from the same image, this time with the name “myubuntu”.
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10 myubuntu
Creating myubuntu
Starting myubuntu
We can see that another container was created but with the name myubuntu.
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME     | STATE   | IPV4                | IPV6                                          | TYPE       | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
Copying Containers
Another neat thing about LXC containers is that you can quickly and easily copy one container to another. If we want to copy our myubuntu container to myotherubuntu, we use the “lxc copy” command.
vagrant@ubunruvm01:~$ lxc copy myubuntu myotherubuntu
vagrant@ubunruvm01:~$ lxc list
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME          | STATE   | IPV4                | IPV6                                          | TYPE       | SNAPSHOTS |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myotherubuntu | STOPPED |                     |                                               | PERSISTENT | 0         |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu      | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see that we have another container named myotherubuntu but it's stopped. We can start the container with “lxc start”.
vagrant@ubunruvm01:~$ lxc start myotherubuntu
Containers can also be renamed with “lxc move”. As with deleting, the container must be stopped first. Below we stop myotherubuntu, rename it to myvm, and list the containers to see the result.
vagrant@ubunruvm01:~$ lxc stop myotherubuntu
vagrant@ubunruvm01:~$ lxc move myotherubuntu myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME     | STATE   | IPV4                | IPV6                                          | TYPE       | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm     | STOPPED |                     |                                               | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see that the container was renamed to myvm. Now let's start the myvm container with “lxc start”.
vagrant@ubunruvm01:~$ lxc start myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME     | STATE   | IPV4                 | IPV6                                          | TYPE       | SNAPSHOTS |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0)  | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
| myvm     | RUNNING | 10.91.196.142 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe09:1db5 (eth0) | PERSISTENT | 0         |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
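Since a container must be stopped before it can be renamed, the stop/move/start sequence above is a natural candidate for a helper. A sketch (the function name is ours):

```shell
# Hypothetical helper: rename a running container by stopping it,
# moving it to the new name, and starting it again.
rename_container() {
  old="$1"; new="$2"
  lxc stop "$old"
  lxc move "$old" "$new"
  lxc start "$new"
}
```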
How to Use the Containers
So now let's go over how to use the containers. The containers have a default ubuntu user account, but for this demonstration we'll log in as root. To log into a container as root we'll use the “lxc exec” command.
vagrant@ubunruvm01:~$ lxc exec myvm bash
We can see that we're now logged into the container, but the hostname is still the container's name from before we renamed it. Let's fix the hostname here.
root@myotherubuntu:~# hostnamectl
   Static hostname: myotherubuntu
         Icon name: computer-container
           Chassis: container
        Machine ID: 58e4b9329ef14803af7f8d9802d52944
           Boot ID: 0020920e423e40c6b050226c1506b153
    Virtualization: lxc
  Operating System: Ubuntu 20.10
            Kernel: Linux 4.15.0-135-generic
      Architecture: x86-64
We can change the hostname with the “hostnamectl” command.
root@myotherubuntu:~# hostnamectl set-hostname myvm
root@myotherubuntu:~# hostnamectl
   Static hostname: myvm
         Icon name: computer-container
           Chassis: container
        Machine ID: 58e4b9329ef14803af7f8d9802d52944
           Boot ID: 0020920e423e40c6b050226c1506b153
    Virtualization: lxc
  Operating System: Ubuntu 20.10
            Kernel: Linux 4.15.0-135-generic
      Architecture: x86-64
The containers use resources from the host machine. If we look at things such as the kernel, memory and CPUs and compare them to our host machine, we can see the container uses the host's resources.
root@myotherubuntu:~# lsb_release -dirc
Distributor ID: Ubuntu
Description:    Ubuntu 20.10
Release:        20.10
Codename:       groovy
root@myotherubuntu:~# uname -r
4.15.0-135-generic
root@myotherubuntu:~# nproc
2
root@myotherubuntu:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1992          63        1874           0          55        1929
Swap:             0           0           0
We can also ping between the LXC containers using their DNS names. Let's try pinging our myubuntu container from the myvm container.
root@myotherubuntu:~# ping myubuntu
PING myubuntu(myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a)) 56 data bytes
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=2 ttl=64 time=0.081 ms
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=3 ttl=64 time=0.050 ms
If we log into the myubuntu container, we can do the same thing from there. This time let's log in using the ubuntu user account, again with the “lxc exec” command.
vagrant@ubunruvm01:~$ lxc exec myubuntu su - ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@myubuntu:~$ hostname
myubuntu

Now that we are logged into the myubuntu container, let's ping our other vm.

ubuntu@myubuntu:~$ ping myvm
PING myvm(myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5)) 56 data bytes
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=3 ttl=64 time=0.079 ms
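To run the same command in every container without logging in one at a time, `lxc exec` can be driven from a loop. A sketch (assumes the client supports `lxc list -c n --format csv` for name-only output; the helper name is ours):

```shell
# Hypothetical helper: run a shell command inside every container.
for_each_container() {
  cmd="$1"
  lxc list -c n --format csv | while read -r name; do
    echo "== $name =="
    lxc exec "$name" -- sh -c "$cmd"
  done
}
```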
So far we've used commands such as image, launch, start, stop, delete, list, exec, copy and move. Now let's look at a few others. We can view information about each container using “lxc info”, which reports things like the status, interfaces, CPU and memory usage.
vagrant@ubunruvm01:~$ lxc info myubuntu | less
Name: myubuntu
Remote: unix://
Architecture: x86_64
Created: 2021/02/25 21:58 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 4750
Ips:
  eth0: inet  10.91.196.51  veth8W74TM
  eth0: inet6 fd42:41c3:c657:13ec:216:3eff:fe9c:476a  veth8W74TM
  eth0: inet6 fe80::216:3eff:fe9c:476a  veth8W74TM
  lo:   inet  127.0.0.1
  lo:   inet6 ::1
Resources:
  Processes: 46
  CPU usage:
    CPU usage (in seconds): 47
  Memory usage:
    Memory (current): 281.92MB
    Memory (peak): 610.49MB
  Network usage:
    lo:
      Bytes received: 7.36kB
      Bytes sent: 7.36kB
      Packets received: 74
      Packets sent: 74
We can also view the machine configuration with the “lxc config show” command.
vagrant@ubunruvm01:~$ lxc config show myubuntu | less
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.10 amd64 (release) (20210209)
  image.label: release
  image.os: ubuntu
  image.release: groovy
  image.serial: "20210209"
  image.version: "20.10"
  volatile.base_image: a173d56bc66e8b5c080ceb6f2d70d1bb413f61d7b2dac2644e21e1d69494a502
  volatile.eth0.hwaddr: 00:16:3e:9c:47:6a
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
Viewing LXC Profiles
Profiles are like configuration files for instances. They can store any configuration that an instance can (key/value or devices) and any number of profiles can be applied to an instance. Profiles aren't specific to containers or virtual machines; they may contain configuration and devices that are valid for either type. If you don't apply specific profiles to an instance, then the default profile is applied automatically. Since we didn’t apply a specific profile, the default profile was applied. We can list the profiles using the “lxc profile list” command.
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
| NAME    | USED BY |
+---------+---------+
| default | 2       |
+---------+---------+
To view the content of the default profile we can use “lxc profile show”. The command will show us things like the description, devices, storage pool and the network bridge.
vagrant@ubunruvm01:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/myubuntu
- /1.0/containers/myvm
If we want to create a new profile, we can copy the default profile to a custom name with “lxc profile copy”. In the example below, a new profile called custom has been created and no containers are using it yet.
vagrant@ubunruvm01:~$ lxc profile copy default custom
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
| NAME    | USED BY |
+---------+---------+
| custom  | 0       |
+---------+---------+
| default | 2       |
+---------+---------+
Now that we have two profiles, we can launch a new container that uses the custom profile via “lxc launch” with the --profile argument. We can also customize the profile, for example to limit CPUs or memory. Say we want to limit a container to less than the host's available memory; we can do this in one of two ways. The first is to dynamically set a memory limit on a running container with the “lxc config” command.
We can see in the example below that our myvm container currently sees 2GB of memory.
vagrant@ubunruvm01:~$ lxc exec myvm bash
root@myvm:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1992          95        1850           0          47        1897
Let's say we want to dynamically change that to 1GB. We can do that with the following command:
vagrant@ubunruvm01:~$ lxc config set myvm limits.memory 1GB
Now, if we check myvm, we can see it only has 1GB of memory:
root@myvm:~# free -m
              total        used        free      shared  buff/cache   available
Mem:            953          95         811           0          47         858
Swap:             0           0           0
But let's say we want to set that memory limit inside our custom profile instead. First let's delete myvm. In this example we delete it with the --force option because the container is running; you cannot delete a running container without it.
vagrant@ubunruvm01:~$ lxc delete --force myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME     | STATE   | IPV4                | IPV6                                          | TYPE       | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
Now we can edit our custom profile using “lxc profile edit” and set the memory limit to 1GB:
vagrant@ubunruvm01:~$ lxc profile edit custom
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: custom
used_by: []
To make the change, delete the curly brackets in “config: {}” and add “limits.memory: 1GB” underneath config as shown below, then save.
config:
  limits.memory: 1GB
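Hand-editing works, but single keys can also be set without opening an editor, using `lxc profile set` (and read back with `lxc profile get`). Wrapped in a sketch (the function name is ours):

```shell
# Set a memory limit on a profile, then echo it back to confirm.
set_profile_memory() {
  profile="$1"; limit="$2"
  lxc profile set "$profile" limits.memory "$limit"
  lxc profile get "$profile" limits.memory
}
```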
You can view the changes:
vagrant@ubunruvm01:~$ lxc profile show custom
config:
  limits.memory: 1GB
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: custom
used_by: []
Once the changes have been made, you can launch a new container using the custom profile and the new memory setting. Let's launch a container using the ubuntu:20.10 image we have stored locally and name it myvm2:
vagrant@ubunruvm01:~$ lxc image list
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC | DESCRIPTION                             | ARCH   | SIZE     | UPLOAD DATE                  |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
|       | a173d56bc66e | no     | ubuntu 20.10 amd64 (release) (20210209) | x86_64 | 365.55MB | Feb 25, 2021 at 9:37pm (UTC) |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10 myvm2 --profile custom
Creating myvm2
Starting myvm2
Now that the container has been created, we can view the list of profiles and see that one container is using the new custom profile we created.
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| custom  | 1       |
+---------+---------+
| default | 1       |
+---------+---------+
If we log into the new container we can also see that it is limited to the 1GB of memory we set in the new profile. The same can be done for other settings, such as CPU.
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm2    | RUNNING | 10.91.196.46 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe42:ea4e (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# free -m
              total        used        free      shared  buff/cache   available
Mem:            953          78         689           0         185         875
Swap:             0           0           0
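As noted, CPU can be capped the same way. For reference, a profile that sets both limits might look like the following; “limits.cpu” is LXD's key for restricting a container to a number of CPUs, and the values here are illustrative:

```yaml
# Illustrative profile combining memory and CPU limits.
config:
  limits.memory: 1GB
  limits.cpu: "1"
description: Custom profile with memory and CPU limits
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: custom
```

You can paste a body like this into “lxc profile edit custom”, or set individual keys with “lxc profile set custom limits.cpu 1”.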
Copying Files To and From Containers
We can also copy files from the host machine into containers using the “lxc file push” command. For example, let's create a file called myfile and copy it into the myvm2 container.
vagrant@ubunruvm01:~$ echo test file > myfile
vagrant@ubunruvm01:~$ ls myfile
myfile
Next, let's copy the file into the myvm2 container. For this we use the “lxc file push” command and specify the file to push, the container name and the destination path inside the container. In this example it'll be /root:
vagrant@ubunruvm01:~$ lxc file push myfile myvm2/root/
If we check myvm2, we can see that the file was copied into the /root directory:
root@myvm2:~# pwd
/root
root@myvm2:~# ls
myfile  snap
root@myvm2:~# cat myfile
test file
We can also copy a file from a container to the host machine using the “lxc file pull” command. So if we delete the file from the host machine, we can pull a copy back from the container:
vagrant@ubunruvm01:~$ rm myfile
vagrant@ubunruvm01:~$ lxc file pull myvm2/root/myfile .
vagrant@ubunruvm01:~$ ls
myfile
vagrant@ubunruvm01:~$ cat myfile
test file
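Pushing several files one at a time gets repetitive, so it can be handy to script it. Below is a minimal sketch; push_files and the LXC_BIN override are our own hypothetical helpers (setting LXC_BIN=echo dry-runs the loop so it can be shown on a machine without a running LXD daemon):

```shell
#!/bin/sh
# Hypothetical wrapper around "lxc file push" for several files at once.
# LXC_BIN lets us substitute "echo" to print the commands instead of
# executing them (a dry run).
LXC_BIN="${LXC_BIN:-lxc}"

push_files() {
  container="$1"; dest="$2"; shift 2
  for f in "$@"; do
    # Destination is <container><path>, e.g. myvm2/root/
    "$LXC_BIN" file push "$f" "$container$dest"
  done
}

# Dry run: print the commands that would be executed.
LXC_BIN=echo push_files myvm2 /root/ myfile otherfile
```

With a real LXD daemon you would drop the LXC_BIN=echo prefix.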
Snapshotting and Restoring Containers
Next we’ll create snapshots of a container using lxc commands. Snapshots save the existing state of a container before changes are made.
First we’ll create some directories in our myvm2 container:
root@myvm2:~# for i in $(seq 5); do mkdir $i; done
root@myvm2:~# ls
1  2  3  4  5  myfile  snap
Next, from our host machine we’ll create a snapshot of myvm2 using the “lxc snapshot” command, which copies the container's entire filesystem. We’ll name the snapshot “snap1”:
vagrant@ubunruvm01:~$ lxc snapshot myvm2 snap1
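Snapshot names are free-form, so if you snapshot regularly it helps to adopt a convention such as embedding the date. The snap_name helper below is a hypothetical sketch of such a scheme, not an LXD feature:

```shell
#!/bin/sh
# Hypothetical convention: derive a date-stamped snapshot name, so that
# "lxc snapshot myvm2 $(snap_name myvm2)" records when it was taken.
snap_name() {
  printf '%s-%s\n' "$1" "$(date +%Y%m%d)"
}

# Prints something like: lxc snapshot myvm2 myvm2-20210227
echo "lxc snapshot myvm2 $(snap_name myvm2)"
```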
Now let's log in, delete the directories we created on myvm2, and see if we can restore them from a snapshot:
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# ls
1  2  3  4  5  myfile  snap
root@myvm2:~# for i in $(seq 5); do rm -rf $i; done
root@myvm2:~# ls
myfile  snap
From the above example we can see that the directories have been deleted from our container. Now let's say we did this accidentally; we could restore them from the snapshot that we took earlier. But first let's see what snapshots are available. Below we can see that one snapshot is available for myvm2:
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm2    | RUNNING | 10.91.196.46 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe42:ea4e (eth0) | PERSISTENT | 1         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see the name of a container's snapshot by looking at the container's info using “lxc info”. At the very bottom we can see the snapshot name is snap1, along with the date it was taken:
vagrant@ubunruvm01:~$ lxc info myvm2
Name: myvm2
Remote: unix://
Architecture: x86_64
Created: 2021/02/26 21:04 UTC
Status: Running
Type: persistent
Profiles: custom
Pid: 23807
Ips:
  eth0: inet    10.91.196.46    vethUXVEK2
  eth0: inet6   fd42:41c3:c657:13ec:216:3eff:fe42:ea4e  vethUXVEK2
  eth0: inet6   fe80::216:3eff:fe42:ea4e        vethUXVEK2
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 43
  CPU usage:
    CPU usage (in seconds): 20
  Memory usage:
    Memory (current): 242.72MB
    Memory (peak): 424.61MB
  Network usage:
    eth0:
      Bytes received: 42.22MB
      Bytes sent: 198.11kB
      Packets received: 4118
      Packets sent: 2642
    lo:
      Bytes received: 4.55kB
      Bytes sent: 4.55kB
      Packets received: 48
      Packets sent: 48
Snapshots:
  snap1 (taken at 2021/02/27 02:30 UTC) (stateless)
Next we can restore the snapshot using the “lxc restore” command. From the output below we can see that the container was restored along with the directories we deleted.
vagrant@ubunruvm01:~$ lxc restore myvm2 snap1
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# ls
1  2  3  4  5  myfile  snap
Conclusion
Well, that’s it for “Getting Started with Linux Containers”. Hopefully, you’ve found this post useful. If you’re just starting to become familiar with container concepts, then LXC can be a great place to start, play around and get your feet wet.
LXC has many other cool features and is easy to learn. If you prefer to jump right into the fire, then Docker may be more up your alley. Either way, both offer a great opportunity to upskill.
Have any questions about LXC, Docker or other containers?
Contact Exxact Today
Understanding The Key Differences between LXC and Docker
LXC is a container technology that gives you lightweight Linux containers, while Docker is a single-application virtualization engine based on containers. They may sound similar, but they are completely different. Unlike LXC containers, Docker containers do not behave like lightweight VMs and cannot be treated as such: Docker containers are restricted to a single application by design.
Differences between LXC and Docker:
Parameter | LXC | Docker |
Developed by | LXC was created by engineers from IBM, Virtuozzo, Google and Eric Biederman. | Docker was created by Solomon Hykes and first released in 2013. |
Data Retrieval | LXC does not support data retrieval after it is processed. | Data retrieval is supported in Docker. |
Usability | It is a multi-purpose solution for virtualization. | It is a single-purpose solution. |
Platform | LXC is supported only on the Linux platform. | Docker is platform independent and also runs on Windows and macOS. |
Virtualization | LXC provides full system virtualization. | Docker provides application virtualization. |
Cloud support | There is no need for cloud storage, as Linux provides the required features. | Cloud storage is required for Docker's sizeable ecosystem. |
Popularity | Due to some constraints, LXC is not especially popular among developers. | Docker is popular because it took containers to the next level. |
Getting Started
In the following demo we will be using VirtualBox and Vagrant for our host machine, which runs Ubuntu 18.04.
vagrant@ubunruvm01:~$ lsb_release -dirc
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
LXC Installation: On Ubuntu 18.04, LXC and LXD are already installed. We can verify this by running the which command and looking for lxc and lxd:
vagrant@ubunruvm01:~# which lxc lxd
/usr/bin/lxc
/usr/bin/lxd
We can also see what packages are installed with dpkg -l:
vagrant@ubunruvm01:~# sudo dpkg -l | grep lxd
ii  lxd         3.0.3-0ubuntu1~18.04.1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  3.0.3-0ubuntu1~18.04.1  amd64  Container hypervisor based on LXC - client
If it isn't already present, installing LXD is straightforward: it can be installed with the “apt-get” command.
vagrant@ubunruvm01:~# sudo apt-get install lxd
Although lxd comes pre-installed, the service is not started. Using the “systemctl status” command, we can see below that the service is enabled but not running.
vagrant@ubunruvm01:~# sudo systemctl status lxd
● lxd.service - LXD - main daemon
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: inactive (dead) since Thu 2021-02-25 16:35:36 UTC; 6s ago
     Docs: man:lxd(1)
To start the lxd service, run the following “systemctl” command:
vagrant@ubunruvm01:~# sudo systemctl start lxd
We can see that the service is now running.
vagrant@ubunruvm01:~# sudo systemctl status lxd
● lxd.service - LXD - main daemon
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
   Active: active (running) since Thu 2021-02-25 17:36:46 UTC; 21s ago
     Docs: man:lxd(1)
LXD requires root privileges, so to run lxd commands without sudo or root privileges you need to add your user to the lxd group.
vagrant@ubunruvm01:~# sudo getent group lxd
lxd:x:108:ubuntu
To add your user to the lxd group run the following command:
vagrant@ubunruvm01:~$ sudo gpasswd -a vagrant lxd
Adding user vagrant to group lxd
We can now see that the user “vagrant” has been added to the lxd group.
vagrant@ubunruvm01:~# sudo getent group lxd
lxd:x:108:ubuntu,vagrant
For the changes to take effect without having to log out, run the following command:
vagrant@ubunruvm01:~# newgrp lxd
vagrant@ubunruvm01:~$ groups
lxd vagrant
By default, LXD comes with no network or storage configured. You can get a basic configuration by initializing the service with “lxd init”. In the following example, accept all defaults except for the storage backend, where we will use a directory on the local system.
vagrant@ubunruvm01:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
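Answering the prompts interactively works fine, but “lxd init” can also be driven non-interactively with a preseed file (“lxd init --preseed < preseed.yaml”). The YAML below is a sketch matching the answers above; the exact schema varies slightly between LXD versions, so treat it as illustrative rather than copied from lxd's own output:

```yaml
# Illustrative preseed for "lxd init --preseed": dir-backed storage pool,
# lxdbr0 bridge with auto-assigned addresses, and a default profile wiring
# containers to both.
config: {}
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
```

Answering “yes” to the final prompt above prints the preseed that corresponds to your interactive answers, which is the safest starting point for your own file.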
To see what version of LXC you are running, use the following command:
vagrant@ubunruvm01:~$ lxc version
To start your first container, try: lxc launch ubuntu:18.04

Client version: 3.0.3
Server version: 3.0.3
To see a list of available commands, run “lxc help”.
vagrant@ubunruvm01:~$ lxc help
Description:
  Command line client for LXD

  All of LXD's features can be driven through the various commands below.
  For help with any of those, simply call them with --help.

Usage:
  lxc [command]

Available Commands:
  alias       Manage command aliases
  cluster     Manage cluster members
  config      Manage container and server configuration options
  console     Attach to container consoles
  copy        Copy containers within or in between LXD instances
  delete      Delete containers and snapshots
  exec        Execute commands in containers
  file        Manage files in containers
  help        Help about any command
  image       Manage images
  info        Show container or server information
  launch      Create and start containers from images
  list        List containers
  move        Move containers within or in between LXD instances
  network     Manage and attach containers to networks
  operation   List, show and delete background operations
  profile     Manage profiles
  publish     Publish containers as images
  remote      Manage the list of remote servers
  rename      Rename containers and snapshots
  restart     Restart containers
  restore     Restore containers from snapshots
  snapshot    Create container snapshots
  start       Start containers
  stop        Stop containers
  storage     Manage storage pools and volumes
  version     Show local and remote versions
For more information about a command, you can run “lxc help <command>”:
vagrant@ubunruvm01:~$ lxc help storage
Description:
  Manage storage pools and volumes

Usage:
  lxc storage [command]

Available Commands:
  create      Create storage pools
  delete      Delete storage pools
  edit        Edit storage pool configurations as YAML
  get         Get values for storage pool configuration keys
  info        Show useful information about storage pools
  list        List available storage pools
  set         Set storage pool configuration keys
  show        Show storage pool configurations and resources
  unset       Unset storage pool configuration keys
  volume      Manage storage volumes

Global Flags:
      --debug         Show all debug messages
      --force-local   Force using the local unix socket
  -h, --help          Print help
  -v, --verbose       Show all information messages
      --version       Print version number
For example, to list available storage pools, run the following command. It shows the directory storage pool that was set up during initialization, which is where all the machine images will be stored.
vagrant@ubunruvm01:~$ lxc storage list
+---------+-------------+--------+------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |               SOURCE               | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default |             | dir    | /var/lib/lxd/storage-pools/default | 1       |
+---------+-------------+--------+------------------------------------+---------+
LXC images are hosted on remote image servers for use by LXC and LXD. To see where remote images will be pulled from, run “lxc remote list”. Below we can see where the images will be downloaded from.
vagrant@ubunruvm01:~$ lxc remote list
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    | AUTH TYPE | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams |           | YES    | NO     |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| local (default) | unix://                                  | lxd           | tls       | NO     | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams |           | YES    | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams |           | YES    | YES    |
+-----------------+------------------------------------------+---------------+-----------+--------+--------+
Let's search the remote repositories for a centos image. But first, to see the list of local images, run “lxc image list”. Below you can see that there are currently no images downloaded.
vagrant@ubunruvm01:~$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
To search for a specific image, you can run “lxc image list images:<name of image>”. You don’t have to know the complete image name; you can type part of it, like “cen” for centos.
vagrant@ubunruvm01:~$ lxc image list images:cen
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|              ALIAS               | FINGERPRINT  | PUBLIC |               DESCRIPTION                |  ARCH   |   SIZE   |          UPLOAD DATE          |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7 (3 more)                | 28c8f402ea46 | yes    | Centos 7 amd64 (20210225_07:08)          | x86_64  | 83.47MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7/armhf (1 more)          | acb5e3324d40 | yes    | Centos 7 armhf (20210225_07:08)          | armv7l  | 79.33MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7/cloud (1 more)          | 67f8af33783e | yes    | Centos 7 amd64 (20210225_07:08)          | x86_64  | 90.03MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7/cloud/armhf             | 3fe2549b0ac8 | yes    | Centos 7 armhf (20210225_07:08)          | armv7l  | 85.65MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7/cloud/i386              | d03240b67ad2 | yes    | Centos 7 i386 (20210225_07:08)           | i686    | 90.90MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/7/i386 (1 more)           | 4ed3b99a7c72 | yes    | Centos 7 i386 (20210225_07:08)           | i686    | 84.11MB  | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/8 (3 more)                | a59629c1729d | yes    | Centos 8 amd64 (20210225_07:08)          | x86_64  | 125.57MB | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/8-Stream (3 more)         | 065036c1c697 | yes    | Centos 8-Stream amd64 (20210225_07:08)   | x86_64  | 128.31MB | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/8-Stream/arm64 (1 more)   | 732d429f292e | yes    | Centos 8-Stream arm64 (20210225_07:08)   | aarch64 | 124.61MB | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| centos/8-Stream/cloud (1 more)   | 77e1ba7d7c37 | yes    | Centos 8-Stream amd64 (20210225_07:08)   | x86_64  | 142.73MB | Feb 25, 2021 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
In this example, let's start an Ubuntu 20.10 image. We can start it using “lxc launch”. This downloads the image to the local host, where it becomes our base image. The next time we start an Ubuntu 20.10 image it will use the local base image instead of downloading it from the remote repository.
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10
Creating the container
Container name is: outgoing-monkey
Starting outgoing-monkey
Once the image has finished downloading, we can see that it has also started a container with the name “outgoing-monkey”. We can run “lxc image list” to see the image locally; its size is only about 365MB. Now that the image is stored locally, the next time we start a container from it, it will start up much faster.
vagrant@ubunruvm01:~$ lxc image list
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |               DESCRIPTION               |  ARCH  |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
|       | a173d56bc66e | no     | ubuntu 20.10 amd64 (release) (20210209) | x86_64 | 365.55MB | Feb 25, 2021 at 9:37pm (UTC) |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
We can view the running container with “lxc list”. Here we can see the container is running and has been given an IP address on the 10.91.196.0/24 network. Any additional containers will be given IP addresses on this network as well.
vagrant@ubunruvm01:~$ lxc list
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
|      NAME       |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
| outgoing-monkey | RUNNING | 10.91.196.201 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe0b:f9fd (eth0) | PERSISTENT | 0         |
+-----------------+---------+----------------------+-----------------------------------------------+------------+-----------+
We can view the host's interfaces and see the bridge we created during initialization, “lxdbr0”.
vagrant@ubunruvm01:~$ ip a show dev lxdbr0
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:e0:1f:3b:62:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.91.196.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:41c3:c657:13ec::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::68d8:4fff:fe30:3a99/64 scope link
       valid_lft forever preferred_lft forever
When a name is not specified for the container, a random name is assigned. Let's delete this container and launch one with an assigned name. However, before deleting a container you’ll need to make sure it is stopped. To stop a running container, use “lxc stop <container name>”.
vagrant@ubunruvm01:~$ lxc stop outgoing-monkey
After stopping the container, delete it using “lxc delete <container name>”. Then run “lxc list” to see that the container has been deleted.
vagrant@ubunruvm01:~$ lxc delete outgoing-monkey
vagrant@ubunruvm01:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
Now let's create a container from the same image, but with the name “myubuntu”.
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10 myubuntu
Creating myubuntu
Starting myubuntu
We can see that another container was created, this time with the name myubuntu.
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
Copying Containers
Another neat thing with LXC containers is that you can quickly and easily copy one container to another. To copy our myubuntu container to myotherubuntu, we use the “lxc copy” command.
vagrant@ubunruvm01:~$ lxc copy myubuntu myotherubuntu
vagrant@ubunruvm01:~$ lxc list
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|     NAME      |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myotherubuntu | STOPPED |                     |                                               | PERSISTENT | 0         |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu      | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+---------------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see that we now have another container named myotherubuntu, but it's stopped. We can start the container with “lxc start”.
vagrant@ubunruvm01:~$ lxc start myotherubuntu
We can also rename a container using “lxc move”. The container must be stopped first, so let's stop myotherubuntu and rename it to myvm:
vagrant@ubunruvm01:~$ lxc stop myotherubuntu
vagrant@ubunruvm01:~$ lxc move myotherubuntu myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm     | STOPPED |                     |                                               | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see that the container was renamed to myvm. Now let's start the myvm container with “lxc start” and confirm it is assigned an IP address on the 10.91.196 network.
vagrant@ubunruvm01:~$ lxc start myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0)  | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
| myvm     | RUNNING | 10.91.196.142 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe09:1db5 (eth0) | PERSISTENT | 0         |
+----------+---------+----------------------+-----------------------------------------------+------------+-----------+
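The stop / move / start sequence is a common enough pattern that it's worth scripting. Below is a dry-run sketch: rename_container is our own hypothetical helper, and the RUN=echo prefix prints each lxc command instead of executing it, so the flow can be shown without a running LXD daemon:

```shell
#!/bin/sh
# Hypothetical helper bundling the rename workflow shown above.
# RUN=echo turns the helper into a dry run that prints the commands.
RUN="${RUN:-}"

rename_container() {
  old="$1"; new="$2"
  $RUN lxc stop "$old"       # a container must be stopped before "lxc move"
  $RUN lxc move "$old" "$new"
  $RUN lxc start "$new"
}

# Dry run using the names from this demo:
RUN=echo rename_container myotherubuntu myvm
```

Dropping the RUN=echo prefix would execute the commands for real.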
How to Use the Containers
So now let’s go over how to use the containers. Containers come with a default “ubuntu” user account, but for this demonstration we’ll log in as root. To log into a container as root, we’ll use the “lxc exec” command.
vagrant@ubunruvm01:~$ lxc exec myvm bash
We can see that we’re now logged into the container, but the hostname is still the container's old name from before we renamed it. So let's fix the hostname here.
root@myotherubuntu:~# hostnamectl
   Static hostname: myotherubuntu
         Icon name: computer-container
           Chassis: container
        Machine ID: 58e4b9329ef14803af7f8d9802d52944
           Boot ID: 0020920e423e40c6b050226c1506b153
    Virtualization: lxc
  Operating System: Ubuntu 20.10
            Kernel: Linux 4.15.0-135-generic
      Architecture: x86-64
We can change the hostname with the “hostnamectl” command.
root@myotherubuntu:~# hostnamectl set-hostname myvm
root@myotherubuntu:~# hostnamectl
   Static hostname: myvm
         Icon name: computer-container
           Chassis: container
        Machine ID: 58e4b9329ef14803af7f8d9802d52944
           Boot ID: 0020920e423e40c6b050226c1506b153
    Virtualization: lxc
  Operating System: Ubuntu 20.10
            Kernel: Linux 4.15.0-135-generic
      Architecture: x86-64
Containers use resources from the host machine. If we look at things such as the kernel, memory and CPUs, and compare them to our host machine, we can see that the container uses the host's resources.
root@myotherubuntu:~# lsb_release -dirc
Distributor ID: Ubuntu
Description:    Ubuntu 20.10
Release:        20.10
Codename:       groovy
root@myotherubuntu:~# uname -r
4.15.0-135-generic
root@myotherubuntu:~# nproc
2
root@myotherubuntu:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1992          63        1874           0          55        1929
Swap:             0           0           0
We can also ping between LXC containers using their DNS names. Let's try pinging our myubuntu container from the myvm container.
root@myotherubuntu:~# ping myubuntu
PING myubuntu(myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a)) 56 data bytes
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=2 ttl=64 time=0.081 ms
64 bytes from myubuntu.lxd (fd42:41c3:c657:13ec:216:3eff:fe9c:476a): icmp_seq=3 ttl=64 time=0.050 ms
If we log into the myubuntu container, we can do the same thing from there. This time let's log into myubuntu using the ubuntu user account, again with the “lxc exec” command.
vagrant@ubunruvm01:~$ lxc exec myubuntu su - ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@myubuntu:~$ hostname
myubuntu
Now that we are logged into the myubuntu container, let's ping our other container.
ubuntu@myubuntu:~$ ping myvm
PING myvm(myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5)) 56 data bytes
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from myvm.lxd (fd42:41c3:c657:13ec:216:3eff:fe09:1db5): icmp_seq=3 ttl=64 time=0.079 ms
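“lxc exec” also composes nicely with “lxc list”, which can print machine-readable output (in LXD 3.0, “lxc list -c n --format csv” prints one container name per line). That makes it easy to run the same command in every container. The sketch below substitutes a captured sample of that output so the loop logic can be shown standalone, without a running LXD daemon:

```shell
#!/bin/sh
# Sketch: run the same command in every container.
# "names" stands in for:  names=$(lxc list -c n --format csv)
names='myubuntu
myvm'

for name in $names; do
  # Print the command we would run; drop the "echo" to execute for real.
  echo "lxc exec $name -- hostname"
done
```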
So far we’ve used commands such as image, launch, start, stop, delete, list, exec, copy and move. Now let's look at a few others. We can view information about each container using “lxc info”, which gives us details like the hostname, status, interfaces, and CPU and memory usage.
vagrant@ubunruvm01:~$ lxc info myubuntu | less
Name: myubuntu
Remote: unix://
Architecture: x86_64
Created: 2021/02/25 21:58 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 4750
Ips:
  eth0: inet    10.91.196.51    veth8W74TM
  eth0: inet6   fd42:41c3:c657:13ec:216:3eff:fe9c:476a  veth8W74TM
  eth0: inet6   fe80::216:3eff:fe9c:476a        veth8W74TM
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 46
  CPU usage:
    CPU usage (in seconds): 47
  Memory usage:
    Memory (current): 281.92MB
    Memory (peak): 610.49MB
  Network usage:
    lo:
      Bytes received: 7.36kB
      Bytes sent: 7.36kB
      Packets received: 74
      Packets sent: 74
We can also view the machine configuration with the “lxc config show” command.
vagrant@ubunruvm01:~$ lxc config show myubuntu | less
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.10 amd64 (release) (20210209)
  image.label: release
  image.os: ubuntu
  image.release: groovy
  image.serial: "20210209"
  image.version: "20.10"
  volatile.base_image: a173d56bc66e8b5c080ceb6f2d70d1bb413f61d7b2dac2644e21e1d69494a502
  volatile.eth0.hwaddr: 00:16:3e:9c:47:6a
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
Viewing LXC Profiles
Profiles are like configuration files for instances. They can store any configuration that an instance can (key/value or devices), and any number of profiles can be applied to an instance. Profiles aren't specific to containers or virtual machines; they may contain configuration and devices that are valid for either type. If you don't apply specific profiles to an instance, the default profile is applied automatically. Since we didn’t apply a specific profile, the default profile was applied. We can list the profiles using the “lxc profile list” command.
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 2       |
+---------+---------+
To view the contents of the default profile we can use “lxc profile show”. The command shows us things like the description, devices, storage pool and network bridge.
vagrant@ubunruvm01:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/myubuntu
- /1.0/containers/myvm
If we want to create a new profile, we can copy the default profile to a custom name with “lxc profile copy”. In the example below, a new profile called custom was created and no containers are using it yet.
vagrant@ubunruvm01:~$ lxc profile copy default custom
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| custom  | 0       |
+---------+---------+
| default | 2       |
+---------+---------+
Now that we have two profiles, we can launch a new container and have it use the custom profile that we created. To launch a new container using the custom profile, you can use “lxc launch” with the --profile argument. We can also customize the profile with settings such as CPU or memory limits. Let’s say we want to limit a container to less than the host's available memory. We can do this in one of two ways. First, we can dynamically set a memory limit on a running container with the “lxc config set” command.
We can see in the example below that our myvm container has about 2GB of memory.
vagrant@ubunruvm01:~$ lxc exec myvm bash
root@myvm:~# free -m
              total        used        free      shared  buff/cache   available
Mem:           1992          95        1850           0          47        1897
Let's say we want to dynamically change that to 1GB. We can do that with the following command:
vagrant@ubunruvm01:~$ lxc config set myvm limits.memory 1GB
Now, if we check myvm we can see it only has 1GB of memory available:
root@myvm:~# free -m
              total        used        free      shared  buff/cache   available
Mem:            953          95         811           0          47         858
Swap:             0           0           0
But let's say we want to set that memory limit inside our custom profile. First let's delete the myvm container. In this example I delete it using the --force option because the container is still running; note that you cannot delete a running container without forcing it.
vagrant@ubunruvm01:~$ lxc delete --force myvm
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
Now we can edit our custom profile using “lxc profile edit” and add the memory limits to 1GB:
vagrant@ubunruvm01:~$ lxc profile edit custom
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: custom
used_by: []
To make the custom changes, replace “config: {}” with a config block containing “limits.memory: 1GB” as shown below, then save the change.
config:
  limits.memory: 1GB
You can view the changes:
vagrant@ubunruvm01:~$ lxc profile show custom
config:
  limits.memory: 1GB
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: custom
used_by: []
Once the changes have been made you can launch a new container using the custom profile and the new memory settings. Let's now launch a container using the ubuntu:20.10 image we have stored locally, and we’ll name the container myvm2:
vagrant@ubunruvm01:~$ lxc image list
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |               DESCRIPTION               |  ARCH  |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
|       | a173d56bc66e | no     | ubuntu 20.10 amd64 (release) (20210209) | x86_64 | 365.55MB | Feb 25, 2021 at 9:37pm (UTC) |
+-------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
vagrant@ubunruvm01:~$ lxc launch ubuntu:20.10 myvm2 --profile custom
Creating myvm2
Starting myvm2
Now that the container has been created, we can view the list of profiles and see that one container is using the new custom profile we created.
vagrant@ubunruvm01:~$ lxc profile list
+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| custom  | 1       |
+---------+---------+
| default | 1       |
+---------+---------+
If we log into the new vm we can also see that it is using the 1GB of memory we set on the new profile. The same thing can be done for other settings like cpu.
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm2    | RUNNING | 10.91.196.46 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe42:ea4e (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# free -m
              total        used        free      shared  buff/cache   available
Mem:            953          78         689           0         185         875
Swap:             0           0           0
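As a sketch of what a CPU cap could look like, here is a hypothetical config stanza for the custom profile limiting both memory and CPU. The limits.cpu key is a standard LXD configuration option; the value of "1" shown here is just an example:

```yaml
config:
  limits.cpu: "1"
  limits.memory: 1GB
```

As with memory, a CPU limit can also be applied dynamically to a running container, for example with “lxc config set myvm2 limits.cpu 1”.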
Copying Files To and From Containers
We can also copy files from the host machine into the containers using the “lxc file push” command. Let’s create a file called myfile and copy the file into the myvm2 container, for example.
vagrant@ubunruvm01:~$ echo test file > myfile
vagrant@ubunruvm01:~$ ls myfile
myfile
Next let's copy the file into the myvm2 container. For this we use the “lxc file push” command and specify the file to push along with the container name and the filesystem path we want to copy the file to. In this example it’ll be /root.
vagrant@ubunruvm01:~$ lxc file push myfile myvm2/root/
If we check myvm2, we can see that the file was copied into the /root directory
root@myvm2:~# pwd
/root
root@myvm2:~# ls
myfile snap
root@myvm2:~# cat myfile
test file
We can also copy a file from the container to the host machine using the “lxc file pull” command. So if we delete the file from the host machine, we can pull a copy back from the container:
vagrant@ubunruvm01:~$ rm myfile
vagrant@ubunruvm01:~$ lxc file pull myvm2/root/myfile .
vagrant@ubunruvm01:~$ ls
myfile
vagrant@ubunruvm01:~$ cat myfile
test file
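If we needed to copy a whole directory rather than a single file, “lxc file push” and “lxc file pull” also accept a recursive flag. A sketch, assuming a hypothetical directory called mydir on the host:

```shell
# Push an entire directory tree into the container (mydir is hypothetical)
lxc file push -r mydir myvm2/root/

# Pull it back out to the current directory on the host
lxc file pull -r myvm2/root/mydir .
```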
Snapshot and Restoring Containers
Next we’ll create snapshots of a container using lxc commands. Snapshots allow saving the existing state of a vm before making changes.
First we’ll create some directories in our myvm2 container:
root@myvm2:~# for i in $(seq 5); do mkdir $i; done
root@myvm2:~# ls
1 2 3 4 5 myfile snap
Next, from our host machine we’ll create a snapshot of myvm2 using the “lxc snapshot” command. This saves a copy of the container's entire filesystem. We’ll name the snapshot “snap1”:
vagrant@ubunruvm01:~$ lxc snapshot myvm2 snap1
Now let's log in, delete the directories we created on myvm2, and see if we can restore them from a snapshot:
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# ls
1 2 3 4 5 myfile snap
root@myvm2:~# for i in $(seq 5); do rm -rf $i; done
root@myvm2:~# ls
myfile snap
From the above example we can see that the directories have been deleted from our container. Now let's say we did this accidentally. We could restore them from the snapshot that we took earlier. But first let's see what snapshots are available. Below we can see that one snapshot is available for myvm2:
vagrant@ubunruvm01:~$ lxc list
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myubuntu | RUNNING | 10.91.196.51 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe9c:476a (eth0) | PERSISTENT | 0         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
| myvm2    | RUNNING | 10.91.196.46 (eth0) | fd42:41c3:c657:13ec:216:3eff:fe42:ea4e (eth0) | PERSISTENT | 1         |
+----------+---------+---------------------+-----------------------------------------------+------------+-----------+
We can see the name of a container's snapshot by looking at its info using “lxc info”. Looking at the very bottom we can see the snapshot name is snap1 along with the date it was taken:
vagrant@ubunruvm01:~$ lxc info myvm2
Name: myvm2
Remote: unix://
Architecture: x86_64
Created: 2021/02/26 21:04 UTC
Status: Running
Type: persistent
Profiles: custom
Pid: 23807
Ips:
  eth0: inet    10.91.196.46    vethUXVEK2
  eth0: inet6   fd42:41c3:c657:13ec:216:3eff:fe42:ea4e  vethUXVEK2
  eth0: inet6   fe80::216:3eff:fe42:ea4e        vethUXVEK2
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 43
  CPU usage:
    CPU usage (in seconds): 20
  Memory usage:
    Memory (current): 242.72MB
    Memory (peak): 424.61MB
  Network usage:
    eth0:
      Bytes received: 42.22MB
      Bytes sent: 198.11kB
      Packets received: 4118
      Packets sent: 2642
    lo:
      Bytes received: 4.55kB
      Bytes sent: 4.55kB
      Packets received: 48
      Packets sent: 48
Snapshots:
  snap1 (taken at 2021/02/27 02:30 UTC) (stateless)
Next we can restore the snapshot using the “lxc restore” command. From the output below we can see that the container was restored along with the directories we deleted.
vagrant@ubunruvm01:~$ lxc restore myvm2 snap1
vagrant@ubunruvm01:~$ lxc exec myvm2 bash
root@myvm2:~# ls
1 2 3 4 5 myfile snap
Conclusion
Well, that’s it for “Getting Started with Linux Containers”. Hopefully you’ve found this post useful. If you’re just starting to become familiar with container concepts, then LXC can be a great place to start, play around with and get your feet wet.
LXC has many other cool features and is easy to learn and grow with. If you prefer to jump right into the fire, then Docker may be more up your alley. Either way, both offer a great opportunity to up-skill.
Have any questions about LXC, Docker or other containers?
Contact Exxact Today