Installing Gentoo on a Virtual Private Server

I’ve been running Gentoo on my servers for ages. Recently I moved from a hosted physical server to a set of virtual ones running on OVH Cloud and Oracle Cloud.

Below is an example of how I set up two servers with Gentoo on Oracle Cloud, where Gentoo is not offered as an image. This is done in 3 main steps:

  1. Installing Gentoo on a local VM on our own environment
  2. Pushing this VM image onto the VPS
  3. Finishing the setup and enjoying Gentoo

This is mostly written as a reminder of how to do it, and it deliberately skips many details.

For installing Gentoo itself, I suggest reading the Gentoo Handbook.

Local Gentoo Install

On the home computer, I download the Gentoo ISO from the links on gentoo.org and verify it:

wget https://distfiles.gentoo.org/releases/amd64/autobuilds/20240218T170410Z/install-amd64-minimal-20240218T170410Z.iso{,.asc}
gpg --import /usr/share/openpgp-keys/gentoo-release.asc
gpg --verify install-amd64-minimal-*.iso.asc

Then, I spawn an instance on OCI with Rocky Linux and configure ssh by adding this to .ssh/config on the local machine:

Host vpsoci
    Hostname IP_ADDR_HERE
    User opc
    IdentityFile ~/.ssh/id_ed25519

Test it by saving the dmesg and lsmod output, to keep as a reference for later:

ssh vpsoci "dmesg" > dmesg
ssh vpsoci "lsmod" > lsmod

Get the VPS disk size:

ssh vpsoci "sudo sgdisk -p /dev/sda | head -n 1"

which outputs: Disk /dev/sda: 97677312 sectors, 46.6 GiB.

Next, I create a local disk image of that exact size, start a local VM on it, and connect to it:

qemu-img create -f raw gentoo.raw $((97677312 * 512))
qemu-system-x86_64 -drive file=gentoo.raw,format=raw,cache=writeback -cdrom install-amd64-minimal-*.iso -boot d -machine type=q35,accel=kvm -cpu host -smp 2 -m 1024 -vnc :0 -device virtio-net,netdev=vmnic -netdev user,id=vmnic,hostfwd=tcp::10022-:22
vncviewer :0

In the VM console, I set a root password and start sshd:

passwd
rc-service sshd start

Now I can ssh root@localhost -p 10022 and partition the VM's disk:

sgdisk -n1:0:+2M -t1:EF02 /dev/sda
sgdisk -n2:0:+512M -t2:EF00 /dev/sda
sgdisk -n3:0:0 -t3:8300 /dev/sda
mkfs.fat -F 32 -n efi-boot /dev/sda2

And I set up dm-crypt on the root partition, keeping the default settings. To pick a passphrase, I suggest following Randall Munroe’s advice at xkcd.

cryptsetup luksFormat /dev/sda3
cryptsetup luksDump /dev/sda3
cryptsetup luksOpen --allow-discards /dev/sda3 root
mkfs.ext4 -m 1 /dev/mapper/root
mount /dev/mapper/root /mnt/gentoo

Then, I install the base system on it:

cd /mnt/gentoo
wget https://distfiles.gentoo.org/releases/amd64/autobuilds/20240218T170410Z/stage3-amd64-openrc-20240218T170410Z.tar.xz{,.asc}
gpg --import /usr/share/openpgp-keys/gentoo-release.asc
gpg --verify stage3-*.tar.xz.asc
tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
rm stage3-*.tar.*

And edit the portage configuration:

vi /mnt/gentoo/etc/portage/make.conf
mkdir --parents /mnt/gentoo/etc/portage/repos.conf
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf

Prepare to chroot:

cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys && mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev && mount --make-rslave /mnt/gentoo/dev
mount --bind /run /mnt/gentoo/run && mount --make-slave /mnt/gentoo/run
chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) ${PS1}"

Once in the chroot, I update and start configuring the system:

emerge-webrsync
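# list the available profiles first and pick the default OpenRC one;
# the number below matched it here, but it varies between snapshots
eselect profile list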
eselect profile set 15
echo "Europe/Paris" > /etc/timezone
rm /etc/localtime
emerge --config sys-libs/timezone-data
cat <<EOF > /etc/locale.gen
en_DK.UTF-8 UTF-8
en_US.UTF-8 UTF-8
fr_FR.UTF-8 UTF-8
C.UTF8 UTF-8
EOF
locale-gen
eselect locale list
eselect locale set 4
env-update && source /etc/profile && export PS1="(chroot) ${PS1}"

Install the Gentoo GPG keys used to verify binary packages:

emerge -qaAv getuto && getuto

Add -v3 to the sync-uri in /etc/portage/binrepos.conf/gentoobinhost.conf to fetch binary packages built for the x86-64-v3 micro-architecture, which better fits the target system.
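
For reference, the edited file might look like this (a sketch; the exact section name and path depend on the release):

[binhost]
priority = 9999
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64-v3/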

Install base components:

emerge -qaAvg sys-apps/fakeroot app-admin/sysklogd sys-process/cronie net-misc/chrony app-editors/neovim
rc-update add sysklogd default
rc-update add cronie default
rc-update add sshd default
rc-update add chronyd default
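
To double check, rc-update can list the enabled services per runlevel:

rc-update show default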

The kernel

I like configuring my kernels and use genkernel to build them. Here it is set up to allow an ssh connection to the initramfs, to be able to decrypt the root volume remotely.

echo "sys-kernel/genkernel -firmware" >> /etc/portage/package.use/main
echo "sys-kernel/gentoo-sources experimental" >> /etc/portage/package.use/main
net-misc/dropbear static -pam
sys-libs/zlib static-libs
virtual/libcrypt static-libs
sys-libs/libxcrypt static-libs
sys-apps/kmod zstd static-libs
emerge -qaAvg sys-kernel/gentoo-sources sys-kernel/genkernel net-misc/dropbear

I edit /etc/dropbear/authorized_keys with the keys I want to use.
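
For example (hypothetical key; use your own public key):

cat <<EOF > /etc/dropbear/authorized_keys
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@laptop
EOF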

Set GRUB_PLATFORMS="efi-64 emu pc" in /etc/portage/make.conf, and set GRUB_CMDLINE_LINUX_DEFAULT="root_trim=yes crypt_root=/dev/sda3 ip=dhcp dosshd gk.sshd.port=2222" in /etc/default/grub. Using port 2222 for the initramfs sshd keeps ssh from complaining that the host key differs between the initramfs and the live system.
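
For example:

echo 'GRUB_PLATFORMS="efi-64 emu pc"' >> /etc/portage/make.conf
echo 'GRUB_CMDLINE_LINUX_DEFAULT="root_trim=yes crypt_root=/dev/sda3 ip=dhcp dosshd gk.sshd.port=2222"' >> /etc/default/grub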

Finish grub setup:

mount /dev/sda2 /boot
echo "sys-boot/grub device-mapper efiemu" >> /etc/portage/package.use/main
emerge -qaAvg sys-boot/grub
grub-install --target=x86_64-efi --efi-directory=/boot --removable
grub-mkconfig -o /boot/grub/grub.cfg

Generate the kernel configuration and then run:

genkernel all --kernel-config=/usr/src/kernel-config-6.1.67-v1
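
If there is no saved configuration yet, genkernel can also open the kernel configurator and save the result, for example:

genkernel --menuconfig --save-config all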

Configuring the system

Run blkid to get the UUID of /dev/mapper/root:
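
blkid /dev/mapper/root

Then write /etc/fstab like so: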

# <fs>                  <mountpoint>    <type>          <opts>          <dump> <pass>
LABEL=efi-boot          /boot           vfat            noauto,noatime  0 2
#/dev/mapper/root
UUID="1ec95162-17a6-462c-b0ba-00878f74fe29"  /  ext4    noatime         0 1

Set the hostname (please have more inspiration than I did):

echo vpsoci > /etc/hostname

For the network, just use dhcpcd:

emerge -qaAvg net-misc/dhcpcd
rc-update add dhcpcd default
rc-service dhcpcd start

Add a udev rule to rename enp0s3 to eth0:

rc-update add udev sysinit
mkdir -p /etc/udev/rules.d/
cat <<EOF > /etc/udev/rules.d/76-net-name-use-custom.rules
SUBSYSTEM=="net", ACTION=="add", ENV{ID_NET_NAME_PATH}=="enp0s3", NAME="eth0"
EOF
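
To check that the rule matches, udevadm can simulate the event (run on the installed system):

udevadm test /sys/class/net/enp0s3 2>&1 | grep NAME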

Copying the image to the VPS

Here’s the trick given by Adyxax when he installed NixOS on a VPS: the idea is to overwrite the disk image on the VPS with the one from the local VM.

This can be done by writing directly to the block device, bypassing the filesystems, which must first be remounted read-only so nothing writes over our data.

On the VPS, as root, after ensuring root can log in directly over ssh (this requires editing /root/.ssh/authorized_keys), it goes like so:

dnf install zstd
swapoff -a
mount -o remount,ro /boot
mount -o remount,ro /boot/efi
systemctl stop systemd-journald.service systemd-journald-dev-log.socket systemd-journald.socket
systemctl stop rsyslog.service crond.service chronyd.service gssproxy.service
systemctl stop systemd-udevd.service systemd-udevd-control.socket systemd-udevd-kernel.socket
systemctl stop rpcbind.service rpcbind.socket
systemctl stop tuned.service polkit.service atd.service libstoragemgmt.service
systemctl stop oracle-cloud-agent-updater.service oracle-cloud-agent.service pmcd.service pmie.service pmie_farm.service
systemctl stop firewalld.service irqbalance.service pmlogger.service pmlogger_farm.service
systemctl stop dtprobed.service
systemctl stop getty@tty1.service serial-getty@ttyS0.service

service auditd stop
mount -o remount,ro /var/oled
systemctl stop dbus-broker.service systemd-logind.service dbus.socket
systemctl stop iscsid.service iscsid.socket
mount -o remount,ro /
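
At this point it is worth checking that nothing substantial is still mounted read-write; something like:

awk '$4 ~ /^rw/ && $3 !~ /tmpfs|proc|sysfs|devtmpfs|cgroup|ramfs/' /proc/mounts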

And here is the long command to run from the local machine:

zstd -k -3 gentoo.raw --stdout | ssh -4 root@vpsoci "zstdcat | dd of=/dev/sda"; echo -e '\a'
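
To double check the transfer, one can compare a checksum of the local image with the same number of bytes read back from the device:

sha256sum gentoo.raw
ssh -4 root@vpsoci "head -c $((97677312 * 512)) /dev/sda | sha256sum"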

Finishing the install

Reboot the VPS, and hopefully enjoy Gentoo.
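
Since the root volume is encrypted, each boot now waits in the initramfs for the LUKS passphrase. With the dropbear setup above it can be entered remotely; the details of the unlock prompt depend on the genkernel version:

ssh -p 2222 root@IP_ADDR_HERE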

Note on rewriting the root volume

This may no longer be possible directly in the future: starting with kernel 6.8, the BLK_DEV_WRITE_MOUNTED option can be set to N, which blocks writes to mounted block devices:

Allow writing to mounted block devices

When a block device is mounted, writing to its buffer cache is very likely going to cause filesystem corruption. It is also rather easy to crash the kernel in this way since the filesystem has no practical way of detecting these writes to buffer cache and verifying its metadata integrity. However there are some setups that need this capability like running fsck on read-only mounted root device, modifying some features on mounted ext4 filesystem, and similar.

If you say N, the kernel will prevent processes from writing to block devices that are mounted by filesystems which provides some more protection from runaway privileged processes and generally makes it much harder to crash filesystem drivers. Note however that this does not prevent underlying device(s) from being modified by other means, e.g. by directly submitting SCSI commands or through access to lower layers of storage stack. If in doubt, say Y.

The configuration can be overridden with the bdev_allow_write_mounted boot option.
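
As the help text notes, a kernel built with this option set to N can still allow such writes when booted with the bdev_allow_write_mounted boot option (name taken from the help text above), for example by adding to the kernel command line:

bdev_allow_write_mounted=1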

See also this great article from LWN: Defending mounted filesystems from the root user