set up
arch
archwiki_setup redhat_guide linux_containers_guide
- install
lxc
pacman -S lxc dnsmasq
- or on debian
apt-get install lxc dnsmasq-base uidmap acl libpam-cgfs
echo "kernel.unprivileged_userns_clone=1" >> /etc/sysctl.conf
reboot
- add the following line to '/etc/pam.d/system-login'
- (debian '/etc/pam.d/login')
session optional pam_cgfs.so -c freezer,memory,name=systemd,unified
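with the packages installed and the pam_cgfs line in place, kernel support can be sanity-checked before going further (lxc-checkconfig ships with the lxc package; the sysctl key only exists on debian-patched kernels):

```shell
# verify kernel features needed by lxc (namespaces, cgroups, etc.)
lxc-checkconfig

# on debian, confirm unprivileged user namespaces are enabled (should print 1)
sysctl kernel.unprivileged_userns_clone
```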
- create '/etc/default/lxc-net' config
# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers.  Set to "false" if you'll use virbr0 or another existing
# bridge, or mavlan to your host's NIC.
USE_LXC_BRIDGE="true"

# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"

# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq.  For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain.  You can then add "server=/lxc/10.0.3.1" (or your actual $LXC_ADDR)
# to your system dnsmasq configuration file (normally /etc/dnsmasq.conf,
# or /etc/NetworkManager/dnsmasq.d/lxc.conf on systems that use NetworkManager).
# Once these changes are made, restart the lxc-net and network-manager services.
# 'container1.lxc' will then resolve on your host.
#LXC_DOMAIN="lxc"
- add the following lines to '/etc/lxc/default.conf'
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
- start
lxc-net
systemctl restart lxc-net
- check that
lxcbr0
bridge has been created
ip a s lxcbr0
- create '/etc/subuid'
pyratebeard:100000:65536
- create '/etc/subgid'
pyratebeard:100000:65536
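instead of editing the files by hand, usermod from the shadow/uidmap tools can append the same subordinate ranges (a sketch; assumes the pyratebeard user as above):

```shell
# allocate the subordinate uid/gid ranges via usermod
sudo usermod --add-subuids 100000-165535 pyratebeard
sudo usermod --add-subgids 100000-165535 pyratebeard

# confirm the entries landed
grep pyratebeard /etc/subuid /etc/subgid
```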
- create '/etc/lxc/lxc-usernet' for allowing user to create network devices
pyratebeard veth lxcbr0 10
- veth - virtual ethernet
- lxcbr0 - network bridge
- 10 - number of devices allowed
- create local dirs
mkdir -p ~/.{config,cache}/lxc
mkdir -p ~/.local/share
- create '~/.config/lxc/default.conf'
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
- make '~/.local/share' executable and set acls
chmod +x ~/.local/share
setfacl -m u:100000:x /home/pyratebeard
setfacl -m u:100000:x /home/pyratebeard/.local
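the mapped container root (uid 100000) needs execute/search permission on every path component down to the container's rootfs; a quick sanity check (not required, just a verification):

```shell
# confirm the acls and directory modes are in place for uid 100000
getfacl /home/pyratebeard
getfacl /home/pyratebeard/.local
ls -ld ~/.local/share
```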
create container
lxc-create -t download -n <name>
# or
lxc-create -n <name> -t download -- --dist alpine --release 3.13 --arch amd64
lxc-start -d -n <name>
lxc-attach -n <name>
or
vi ~/.local/share/lxc/powerzone/rootfs/etc/shadow
# remove `!` from root user
lxc-start -n powerzone
lxc-console -n powerzone
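once a container is created, lxc-ls gives a quick overview of state and addressing (the -f flag prints the fancy table with state and ip columns):

```shell
# list containers with state, autostart, and ip info
lxc-ls -f

# detail for a single container
lxc-info -n <name>
```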
- python module for script api
alpine linux config
/sbin/apk update
/sbin/apk upgrade
passwd
busybox adduser pyratebeard
busybox adduser pyratebeard wheel
apk add doas vim openssh zsh
echo "permit nopass pyratebeard" | tee -a /etc/doas.d/doas.conf
ln -s /bin/zsh /usr/bin/zsh
/sbin/rc-update add sshd
/sbin/rc-service sshd start
/sbin/rc-status
logout (ctrl-a q to exit console)
debian config
passwd
apt-get install openssh-server python3
vi /etc/ssh/sshd_config
PermitRootLogin yes
systemctl reload sshd
alpine services
add executable scripts to /etc/init.d/
#!/sbin/openrc-run
name="test"
command="/bin/echo"
command_args="hello"
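after dropping a script in /etc/init.d/ and marking it executable, it is managed like any other openrc service (a sketch using the example 'test' service above):

```shell
chmod +x /etc/init.d/test
rc-update add test default   # start at boot (default runlevel)
rc-service test start
rc-service test status
```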
known errors
- systemd containers fail to start
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.
- '/sys/fs/cgroup/systemd' dir doesn't exist
- to fix, create dir, mount cgroup, set permissions (from lxc-users group post)
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
sudo chown pyratebeard:users /sys/fs/cgroup/systemd
- keyserver not found on
lxc-create
- to fix add
DOWNLOAD_KEYSERVER="hkp://keyserver.ubuntu.com:80"
before the lxc-create cmd - https://github.com/lxc/lxc/issues/3874
- https://github.com/lxc/lxc/commit/f2a5d95d00a55bed27ef9920d67617cc75fecad8
- Setting up the GPG keyring
ERROR: Unable to fetch GPG key from keyserver
- to fix add
- wait_on_daemonized_start: 833 no such file or directory
-
lxc-start
in foreground gives segmentation fault
lxc-start -n test /bin/sh
-
- unable to start on debian 11 (error "895 received container state aborting" - in foreground mode "1365 numerical result out of range")
- use unpriv start
lxc-unpriv-start -n <name>
moving containers
lxc-stop -n $NAME
cd ~/.local/share/lxc/$NAME
sudo tar --numeric-owner -czvf ../$NAME.tgz ./*
chown pyratebeard: ../$NAME.tgz
rsync -avh $NAME.tgz user@hostname:.local/share/lxc/
ssh user@hostname
mkdir ~/.local/share/lxc/$NAME
cd ~/.local/share/lxc/$NAME
sudo tar --numeric-owner -xzvf ../$NAME.tgz .
- tried this between wht-rht-obj and fka
- container runs (after adding user gid to /etc/subgid)
- no ip address though. veth is created but ipv4 address not assigned
- check dir/file permissions
- .local/share/lxc/$NAME = 755 100000:100000
- .local/share/lxc/$NAME/rootfs/* = 100000:100000
- .local/share/lxc/$NAME/config = pyratebeard:users
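the tarball round-trip above can also be done as a single stream over ssh, skipping the temporary file (a sketch; assumes key-based ssh, the same paths on both hosts, and remote privileges to write the 100000-range owners):

```shell
NAME=powerzone
lxc-stop -n $NAME
cd ~/.local/share/lxc
# stream the whole container dir, preserving numeric ownership
sudo tar --numeric-owner -czf - ./$NAME \
  | ssh user@hostname 'cd ~/.local/share/lxc && sudo tar --numeric-owner -xzf -'
```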
example
setting up multiple websites behind haproxy
- install openzfs
- start lxd daemon
sudo apt install zfsutils-linux
sudo lxd init
- answer questions
- launch containers
lxc launch ubuntu:18.04 subdomain1
lxc launch ubuntu:18.04 subdomain2
lxc launch ubuntu:18.04 haproxy
lxc list
gollum haproxy log pastebin radicale site stagit znc ftp
debian test
- debian 10 (aws instance)
- 'admin' user
apt-get install lxc dnsmasq-base uidmap
- follow setup (see own wiki)
- building debian containers works well
- ansible playbook runs using proxyjump in ssh config
- attempting to run haproxy in container
- iptables rules for prerouting
sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d <public_ip>/24 --dport 80 -j DNAT --to-destination <haproxy_ip>:80
sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d <public_ip>/24 --dport 443 -j DNAT --to-destination <haproxy_ip>:443
sudo iptables -L -n -t nat
sudo apt-get install iptables-persistent
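iptables-persistent saves the live ruleset so the prerouting rules survive a reboot:

```shell
# write current rules to /etc/iptables/rules.v4 (and rules.v6)
sudo netfilter-persistent save
```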
- haproxy container
apt-get install haproxy
- add the following to the 'global' section
...
maxconn 2048
...
tune.ssl.default-dh-param 2048
- add the following to the 'defaults' section
...
option forwardfor
option http-server-close
...
- create frontend
frontend http_frontend
    bind *:80
    acl infratuxture hdr(host) -i penguin.renre.com
    #acl anotherlxc hdr(host) -i anotherdomain.renre.com
    use_backend penguin if infratuxture
    #use_backend anotherdomain if anotherlxc
- create backend
backend penguin
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server penguin 10.0.3.162:80 check

#backend anotherdomain
#    balance leastconn
#    http-request set-header X-Client-IP %[src]
#    server anotherdomain an.oth.er.ip:80 check
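haproxy can validate the config before a restart, worth doing after editing the frontend/backend blocks:

```shell
# -c checks the configuration file and exits
haproxy -c -f /etc/haproxy/haproxy.cfg

# then reload without dropping existing connections
sudo systemctl reload haproxy
```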
- infratuxture container
apt-get install git lighttpd
- pull git repo in html dir
cd /var/www/html
git clone https://git.renre.com/infrastructure/linux-patching.github.io.git .
bindmount
- to mount a dir in an lxc, add the following to the container conf (proxmox 'pct' syntax)
mp0: /path/on/host,mp=/mount/path/on/container
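on plain lxc (outside proxmox) the equivalent is an lxc.mount.entry line in the container config; note the target path is relative to the rootfs and has no leading slash (a sketch with placeholder paths):

```
lxc.mount.entry = /path/on/host mount/path/on/container none bind,create=dir 0 0
```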
uid/gid mapping
- in lxc conf
lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530
- explanation taken from itsembedded
The format of the lxc.idmap configuration lines is "[u|g] container_id host_id range", where the first field selects whether the mapping is for user id's or group id's.
Below is an explanation of what each mapping combination does:
(u/g) 0 100000 1000 - map 1000 ID’s starting from 0, to ID’s starting at 100000. This means that the ROOT UID/GID 0:0 on the guest will be mapped to 100000:100000 on the host, 1:1 will be mapped to 100001:100001, and so on.
(u/g) 1000 1000 1 - map the UID/GID pair 1000:1000 to 1000:1000 on the host. The number 1 is there to specify we’re only mapping a single ID, and not a range.
(u/g) 1001 101000 64535 - map 64535 ID’s starting at 1001, to ID’s starting at 101000. This means that UID/GID pair 1001:1001 on the guest will be mapped to 101000:101000, 1002:1002 to 101001:101001, all the way to finally 65535:65535 to 165534:165534.
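the same arithmetic applies to the three-range map in the config above (0-1004 shifted to 100000, 1005 passed through, 1006+ shifted to 101006); this hypothetical helper (not part of lxc) computes the host uid for a given container uid under that map:

```shell
# map a container uid to a host uid under:
#   0..1004     -> 100000..101004
#   1005        -> 1005
#   1006..65535 -> 101006..165535
ctr_uid=1010
if [ "$ctr_uid" -eq 1005 ]; then
  host_uid=$ctr_uid
elif [ "$ctr_uid" -lt 1005 ]; then
  host_uid=$((100000 + ctr_uid))
else
  host_uid=$((101006 + ctr_uid - 1006))
fi
echo "$host_uid"   # prints 101010 for ctr_uid=1010
```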
mounting zfs dataset in lxc container
- requires uid/gid mapping
- this example is for using the www-data user with nextcloud
- on host
zfs set acltype=posixacl pool/dataset
setfacl -m u:100033:rwx /path/to/dataset
- add mount point as above
- on container check acl
getfacl /path/to/mount