The goal is to use an LXC container with Docker to automate the build process for boondocks-os.
- Host VM Kernel Upgrade
- Configure Host VM
- Install LXD from `snap` on the Host VM
- Init LXD on the Host VM using ZFS and a Network Bridge
- Create `boondocks-os` Container
- Setup `boondocks-os` Container
- Create `builder` Ubuntu User
- Snapshot `boondocks-os` Container
- Run Build
- Ubuntu 16.04.4 Host OS
- Ubuntu 16.04 Container OS
- LXD, latest version installed from `snap`
- ZFS storage pool as raw block device
- Default `lxc` managed network bridge (environment-specific)
- Yocto
Example configurations:

- Boondocks OS Container Configuration
- Host VM `/etc/sysctl.conf` Additions
- Host VM `/etc/security/limits.conf` Additions
- Boondocks OS ZFS Storage Pool Configuration
- This build process currently requires Ubuntu 16.04.x running as the container OS.
- Ensure you have access to the VM's console during network configuration changes (ESXi, VMware Workstation, etc.).
- `apt update` and `upgrade` the Host OS prior to installing and/or upgrading `lxd` and `snap`. ZFS tools should be installed on the host: `sudo apt install zfsutils-linux`
- Advanced LXD network configuration topics are beyond the scope of this build guide. Anything beyond a default network bridge will require additional research on the reader's part for the specific environment.
- Kernel upgrade instructions are beyond the scope of this build guide. Please Google it.
Before proceeding any further, determine if the Host VM requires a kernel version upgrade. This guide was built using kernel version: `Linux 4.15.0-22-generic x86_64`

To determine the current kernel version:

```shell
uname -msr
```

Once the kernel version is upgraded properly, proceed.
The Host VM requires changes to the default limits for inodes and files. Make the following changes:
Reboot the Host VM and then continue.
The following packages are required on the host:

- `snap`
- LXD
- `zfsutils-linux`

The version of LXD running on the host should be upgraded to the latest prior to `sudo lxd init`.

```shell
sudo snap install lxd
```

Reference: Installing the LXD snap

Reference: LXD is now available in the Ubuntu Snap Store
Minimum recommended Host VM resource allocations:

| Resource | Configuration |
|---|---|
| CPU Cores | 6 to 12; 6 recommended minimum. The build process is very CPU intensive. |
| Memory | 32GB. The build process is very memory intensive; 32GB minimum allocated during active builds is highly advised. |
```shell
sudo lxd init
```

- Configure a ZFS storage pool named `lxd-pool` using defaults; the raw block device for this example is `/dev/sdb`.
- Configure the default LXD network bridge named `lxd-bridge` using appropriate IPv4 and IPv6 settings for the environment.
- Verify external and/or internal host network connectivity.
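As an alternative to answering `sudo lxd init` interactively, recent LXD releases accept a preseed document on stdin via `lxd init --preseed`. The sketch below matches the pool and bridge names used in this guide, but the exact keys and supported options vary by LXD version, so verify against your installed release before using it:

```yaml
# Assumed preseed sketch for `lxd init --preseed`; verify against your LXD version.
storage_pools:
- name: lxd-pool
  driver: zfs
  config:
    source: /dev/sdb
networks:
- name: lxd-bridge
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxd-bridge
      type: nic
    root:
      path: /
      pool: lxd-pool
      type: disk
```

Apply it with `cat preseed.yaml | sudo lxd init --preseed`.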
```shell
lxc profile show default
```

You should see that LXD is:

- Using `lxd-bridge` for bridged networking
- With an `eth0` network interface
- On the ZFS storage pool called `lxd-pool`
```yaml
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxd-bridge
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
name: default
used_by: []
```
Now copy the default profile to a new profile called `boondocks-os`. This copied profile will be assigned to the `boondocks-os` container when it is created.

```shell
lxc profile copy default boondocks-os
```
You should see the created ZFS pool called `lxd-pool` on the block device `/dev/sdb`.

```shell
sudo zpool status
```

```
  pool: lxd-pool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lxd-pool    ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors
```
Below are the steps to create, modify, and start the container, and to show its resulting configuration.

Create the container, assigned the previously created `boondocks-os` profile:

```shell
lxc init ubuntu:16.04 boondocks-os --profile boondocks-os
```

The container used to build boondocks-os needs a few security settings changed.
Set the container to be privileged and load the required kernel modules:

```shell
lxc config set boondocks-os security.nesting true
lxc config set boondocks-os security.privileged true
lxc config set boondocks-os linux.kernel_modules ip_tables
```

If the environment requires a specific MAC address for the container, set it as follows:

```shell
lxc config set boondocks-os volatile.eth0.hwaddr 00:16:3e:55:bf:68
```
Docker requires additional `lxc` configuration changes to support the build. Edit the container configuration and add the following `raw.lxc` keys as a child section of `config`. Syntax is `yaml`.

To add new `raw.lxc` keys:

```shell
lxc config edit boondocks-os
```

```yaml
raw.lxc: |-
  lxc.apparmor.profile=unconfined
  lxc.cgroup.devices.allow=a
  lxc.cap.drop=
```
See below for a sample container configuration showing the `raw.lxc` keys added.

By default, Docker will start up using the `vfs` storage driver when running on a ZFS storage pool. This does not provide a compatible backing filesystem to support the build. Adding an `lxc` disk device to the container will allow Docker to use the much preferred `overlay2` storage driver.

Add a new `disk` device to the `boondocks-os` container, supplying a valid path value for `source`. The path value will be specific to the environment / Host OS.

```shell
mkdir -p /lxc/boondocks-os/docker/
lxc config device add boondocks-os docker disk source=/lxc/boondocks-os/docker/ path=/var/lib/docker
```
```shell
lxc start boondocks-os
```

This is a sample of the resulting container configuration.

```shell
lxc config show boondocks-os
```
```yaml
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20180522)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20180522"
  image.version: "16.04"
  linux.kernel_modules: ip_tables
  raw.lxc: |-
    lxc.apparmor.profile=unconfined
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074
  volatile.eth0.hwaddr: 00:16:3e:55:bf:68
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  docker:
    path: /var/lib/docker
    source: /lxc/boondocks-os/docker/
    type: disk
ephemeral: false
profiles:
- boondocks-os
stateful: false
description: Boondocks OS Build Container
```
```shell
lxc exec boondocks-os bash
```

You should now have a `bash` shell into the container:

```
root@boondocks-os:~#
```

Verify external and, if applicable, internal host network connectivity.

```shell
apt update && apt upgrade -y
```
There are numerous prerequisites required to build Boondocks OS using Yocto. (Some of these may already be installed on the host system.) This list of prerequisites is an aggregate of everything needed, rather than installing one-off at varying steps in the process.

```shell
apt install -y apt-transport-https build-essential ca-certificates chrpath cpio curl debianutils diffstat gawk git iputils-ping iptables jq make python python3 python3-pexpect python3-pip socat software-properties-common texinfo xz-utils zip
```
Follow the general install guidelines for Ubuntu.

```shell
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - && apt install -y nodejs
nodejs --version && npm --version
```
We'll be using `docker-ce`, not `docker.io`. The official `docker-ce` package now supports LXC extensions that are required to properly run Docker containers inside an LXC container.

Follow the general install guidelines for Ubuntu 16.04:

https://docs.docker.com/install/linux/docker-ce/ubuntu/

The actual steps used are also listed below.

```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y docker-ce
```
```shell
docker info
```

```
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-22-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.41GiB
Name: boondocks-os-builder
ID: 6NSR:4MD7:VS3D:O46V:5SRK:3SUL:DG2C:5R3O:VUUP:V6X7:LCPE:S2PJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
```
```shell
docker version
```

```
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:17:20 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:15:30 2018
  OS/Arch:      linux/amd64
  Experimental: false
```
Run the example `hello-world`:

```shell
docker run hello-world
```

Look at the image that was downloaded for `hello-world`:

```shell
docker image list
```

Clean up the `hello-world` example:

```shell
docker system prune --force && \
docker image rm hello-world && \
docker image list
```
This user runs manual builds from the shell and is used for custom CI platform integrations.

`builder` should be a member of the following groups: `sudo`, `docker`.

```shell
adduser builder && \
usermod -aG sudo builder && \
usermod -aG docker builder
```

```shell
su - builder
```
The `$PATH` needs to be updated by adding `/sbin` to gain access to `/sbin/iptables`.

```shell
nano .profile
```

Add `/sbin` to the beginning of the path as follows:

```shell
PATH="/sbin:$HOME/bin:$HOME/.local/bin:$PATH"
```
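If you prefer a non-interactive alternative to the manual `nano` edit, the same change can be scripted. This sketch operates on a sample file (`./profile.sample` stands in for `~/.profile`) and assumes the stock Ubuntu `PATH` line; the `grep` guard makes it idempotent:

```shell
# Stand-in for ~/.profile with the stock Ubuntu PATH line
f=./profile.sample
printf 'PATH="$HOME/bin:$HOME/.local/bin:$PATH"\n' > "$f"

# Prepend /sbin only if it is not already there
grep -q '^PATH="/sbin:' "$f" || sed -i 's|^PATH="|PATH="/sbin:|' "$f"

cat "$f"   # → PATH="/sbin:$HOME/bin:$HOME/.local/bin:$PATH"
```

To apply it for real, point `f` at `$HOME/.profile` instead of the sample file.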
At this point, log out of the `boondocks-os` container so a snapshot can be taken. This snapshot can be restored or used to create another container instance.

Snapshot the `boondocks-os` container.
Next, a manual source build will be done as the `builder` user inside the container to fix any remaining host dependencies, etc.
A manual test of the build process will be performed to ensure all system dependencies, libraries, configurations, etc. are in place. This will also ensure enough hardware resources have been allocated to the VM.
Did you resolve the `mkfs.ext4` failure inside the LXC container?