How to Create Reproducible Yocto Environments with Buildah and LVM

By Lisandro Pérez Meyer

Part of my job is to prepare Yocto setups for our customers' devices. My goal is twofold: to set up an environment a client can easily reproduce, and to ensure that environment also lets developers work on multiple projects at the same time. In this blog post I explore the workflow I use to achieve this goal.

The recommended way of creating a successful Yocto environment is by using a well-tested OS. That means either installing a specific OS or resorting to some form of virtualization. While the first option is valid, it can get complicated for a developer to handle when a parallel project requires a different OS version. Even when it does not, there is still the risk of mixing configurations, which hurts reproducibility.

Another solution involves making use of virtual machines. This approach allows developers to easily isolate projects, and we can provide the client with a VM image so they can easily “replicate” the environment. But this approach has its drawbacks too. Even though virtualizing a full machine is a very well-supported scenario today, it still leaves us dealing with virtualized hardware. In other words, it is not as fast as if we were using bare metal.

Going this route also impacts reproducibility, which would be tied to managing golden snapshots of the VM. Thankfully, containers were designed to solve this issue. Applications in a container run on the host's hardware but only see the resources assigned to the container itself.

This approach is faster than running a VM. The containers are created by defining them in a file/script so they become easily reproducible. Sharing this simple file is much faster than sharing a full VM. And the developer still has the option of sharing the image itself if necessary.

Building Containers

My first attempt at building a suitable container involved using Docker, following Boundary Devices' “Using Docker Containers for Reproducible Yocto Builds” as a guide. I started adding more software to the image but then learned about Buildah and Podman. After a little exploration, I discovered I preferred them: Podman does not require a running daemon to handle the containers, and Buildah provides more flexibility by allowing me to create Docker images using not only Dockerfiles but also bash scripts.

In this example I create an Ubuntu 18.04 (bionic) image, provision the packages by using my apt-cacher-ng proxy, and set up a user to work with. Here’s a look at the script code. (You can also find it here.)

#!/bin/bash

container=$(buildah from ubuntu:18.04)
echo "The container is $container"

I’m using Bash and telling Buildah to use Ubuntu 18.04 as a base for our image. Buildah will return the name of the current container. (You can do this from the command line if you prefer.)

mountpoint=$(buildah mount ${container})
echo "Mount point: $mountpoint"

Next, I mount the image so I can modify it directly at the filesystem level in order to set up the user. This requires running the script with buildah unshare.

# Check whether squid-deb-proxy-client's APT avahi discover is installed.
if [[ -z "${http_proxy}" ]]; then
  aad="/usr/share/squid-deb-proxy-client/apt-avahi-discover"
  if [ -f "$aad" ]; then
    export http_proxy=$($aad)
    echo "APT proxy/cacher detected at $http_proxy"
  fi
fi

The code above checks whether http_proxy is set; if not, it tries to run squid-deb-proxy-client’s script to find an APT proxy on the network. This is very useful if you need to iterate on the development process. If you just want to build the image once and don’t have other Debian-based machines around, it is not necessary.

# Be sure to have the latest metadata.
buildah run $container apt update

# Install the software we need for Yocto and some extra tools.
buildah run $container apt-get install --yes \
    bc bison bsdmainutils build-essential curl locales \
    flex g++-multilib gcc-multilib git gnupg gperf lib32ncurses5-dev \
    lib32z1-dev libncurses5-dev git-lfs \
    libsdl1.2-dev libxml2-utils lzop \
    openjdk-8-jdk wget git-core unzip \
    genisoimage sudo socat xterm gawk cpio texinfo \
    gettext vim diffstat chrpath rsync \
    python-mako libusb-1.0-0-dev exuberant-ctags \
    pngcrush schedtool xsltproc zip zlib1g-dev libswitch-perl \
    p7zip-full repo

# Optional tools that are useful in development environments

buildah run $container apt-get install --yes tree tmux

buildah run $container apt-get clean

This code installs the typical dependencies that Yocto requires, plus some other tools I normally use. (For example, I love tree.)

if [[ -z "${user}" ]]; then
  user=$USER
fi

if [[ -z "${uid}" ]]; then
  uid=$(id -u)
fi

if [[ -z "${gid}" ]]; then
  gid=$(id -g)
fi

echo "Using user $user with UID $uid and GID $gid"

# ===== create user/setup environment =====
mkdir -p ${mountpoint}/home/${user} && \
echo "${user}:x:${uid}:${gid}:${user},,,:/home/${user}:/bin/bash" >> ${mountpoint}/etc/passwd && \
echo "${user}:x:${uid}:" >> ${mountpoint}/etc/group && \
echo "${user} ALL=(ALL) NOPASSWD: ALL" > ${mountpoint}/etc/sudoers.d/${user} && \
chmod 0440 ${mountpoint}/etc/sudoers.d/${user} && \
chown ${uid}:${gid} -R ${mountpoint}/home/${user}

Buildah runs as your normal user, with that user's privileges, so buildah run cannot modify the image's system files. To work around this, I modify the mounted filesystem directly.

# Configure ccache.
buildah config --env USE_CCACHE=1 $container
buildah config --env CCACHE_DIR=/home/${user}/.ccache $container

# some QT-Apps/Gazebo do not show controls without this
buildah config --env QT_X11_NO_MITSHM=1 $container

# Set the locale
buildah run $container locale-gen en_US.UTF-8
buildah config --env LANG=en_US.UTF-8 $container

I normally want to use ccache to build. I also set some other useful stuff, like a locale.
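On the Yocto side, the build itself can opt into ccache through Yocto's ccache class. A minimal conf/local.conf fragment might look like this (a sketch only; the cache path assumes a container user named builduser):

```
# conf/local.conf — enable Yocto's ccache class (sketch; path is illustrative)
INHERIT += "ccache"
# Keep the cache under the container user's home, matching CCACHE_DIR above:
CCACHE_TOP_DIR = "/home/builduser/.ccache"
```

Keeping the cache in the home directory means it survives across sessions when the home directory is bind mounted, as shown later.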

# Clean up apt.
buildah run $container apt clean
buildah run $container apt autoremove
buildah run $container rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Since images are immutable, I don’t need to keep apt metadata or temporary files. This reduces the final image size, even if just by a few MB.

buildah config --env HOME=/home/${user} $container
buildah config --env USER=${user} $container
buildah config --user ${user} $container
buildah config --workingdir /home/${user} $container

Next, I set some defaults for the image. $HOME and $USER will be set up every time the image is used. And I set the default user and working directory.

# Finally save the running container to an image
buildah commit --format docker $container yocto_ubuntu_18.04:latest
buildah unmount $container

Finally, I commit the container to an image and unmount it. Note the resulting image is called yocto_ubuntu_18.04.

Now the image is ready. As mentioned above, we need to run this script using buildah unshare. I defined some variables that can be overridden when invoking the script. A full invocation would then be:

$ http_proxy="http://192.168.1.3:3142/" user=builduser uid=1000 gid=1000 buildah unshare ./create_ubuntu_18.04_bionic.sh 2>&1 | tee buildlog

I typically use my default user and let the script find the proxy:

$ buildah unshare ./create_ubuntu_18.04_bionic.sh 2>&1 | tee buildlog

Using the Image

Now it’s time to create a session for project_a:

podman run -it localhost/yocto_ubuntu_18.04

Done! If you’ve followed my example you can now work within a reproducible environment.

Handling Parallel Projects

Regarding parallel projects, you may face one of two scenarios. The first one involves each project having a different base OS. This is simply achieved by creating a different container image for each of them. The second happens when both projects use the same OS. Can you still share the image? Fortunately, the answer is yes, because container images are immutable: changes made during a session do not touch the image itself, so the base OS is the same each time you spawn a session.

So how can we use a container image in this way and make our lives easier? The solution is mount points. Create a directory for each project on your main filesystem and then mount it at a specific path within the container’s session. For example, you can have directories project_a and project_b and mount them as /home/myuser/ within the container. This means that if specific configuration is needed for each project (like ssh accounts, keys, git configurations, etc.) it can be stored within the project’s directory itself and be properly placed while running a session.
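The per-project directories described above can be prepared on the host like this (the paths and the git identity below are purely illustrative):

```shell
# One directory per project; each will become the container's $HOME.
mkdir -p ~/containers/project_a ~/containers/project_b

# Project-specific configuration lives inside the project directory,
# e.g. a dedicated git identity for project_a:
cat > ~/containers/project_a/.gitconfig <<'EOF'
[user]
    name = Build User
    email = builduser@example.com
EOF
```

Anything placed there (ssh keys, git configuration, and so on) ends up in the right place automatically whenever that project's directory is mounted as the session's home.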

Let’s run my previously generated image bind mounting our project in home:

podman run -v /path/to/project_a:/home/${user} -it localhost/yocto_ubuntu_18.04

When you do this, be sure to replace ${user} with the user configured in the image.

Managing Disk Space

Building a Yocto project normally takes space. A lot of space. It is not uncommon to require 150GiB of space for a full build, and that’s definitely not a maximum. If the disk is big enough it is easy to store all this information. But what if you run out of space?

Here’s my solution: use LVM logical volumes, one for each project. This allows me to reassign space as needed. If my disk runs out of space I can buy another one, connect it, and add it to the LVM volume group. This way the data can be spread across two or more disks and the container won’t even notice.
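As a sketch, and assuming a volume group named vg_builds already exists (all names, paths, and sizes here are illustrative), the per-project volumes could be managed like this:

```shell
# Create a dedicated logical volume for a project and mount it:
sudo lvcreate --name project_a --size 150G vg_builds
sudo mkfs.ext4 /dev/vg_builds/project_a
sudo mount /dev/vg_builds/project_a /path/to/project_a

# If the project later needs more room, grow the volume and its
# filesystem; add a new disk to the volume group first if needed:
# sudo vgextend vg_builds /dev/sdb1
sudo lvextend --resizefs --size +50G /dev/vg_builds/project_a
```

Since the project directory is what gets bind mounted into the container session, growing the underlying volume is completely transparent to the container.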

The Takeaway

A better organized and easily reproducible environment paves the way for reduced time to market and a higher quality final product. It also allows developers to more easily handle multiple projects simultaneously, which results in fewer issues. Using containers has helped me achieve these goals, and I really hope it does for you too!

If you liked my blog check out more of my work, including this blog on creating over-the-air update mechanisms for your remote devices.