ZTS: Use QEMU for tests on Linux and FreeBSD
This commit adds functional tests for these systems:
- AlmaLinux 8, AlmaLinux 9, ArchLinux
- CentOS Stream 9, Fedora 39, Fedora 40
- Debian 11, Debian 12
- FreeBSD 13, FreeBSD 14, FreeBSD 15
- Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04
- enabled by default:
  - AlmaLinux 8, AlmaLinux 9
  - Debian 11, Debian 12
  - Fedora 39, Fedora 40
  - FreeBSD 13, FreeBSD 14
Workflow for each operating system:
- install qemu on the github runner
- download current cloud image of operating system
- start and init that image via cloud-init
- install dependencies and poweroff system
- start system and build openzfs and then poweroff again
- clone build system and start 2 instances of it
- run the functional tests, completing in around 3h
- when tests are done, prepare the log files
- show detailed results for each system
- in the end, generate the job summary
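The steps above can be sketched as a small driver script. This is an illustrative sketch only: the URL, VM sizing, and the `step` helper are placeholders, not the actual runner scripts.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-OS workflow; every URL and option here
# is a placeholder, not the real runner configuration.
set -eu

OS="debian12"

step() {
    # print each step instead of executing it, keeping the sketch runnable
    echo "+ $*"
}

step axel "https://example.com/cloud-images/$OS.qcow2"   # download cloud image
step cloud-localds seed.img user-data                    # build cloud-init seed
step qemu-system-x86_64 -m 8G "$OS.qcow2"                # boot, install deps, poweroff
step qemu-system-x86_64 -m 8G "$OS.qcow2"                # boot, build openzfs, poweroff
for i in 1 2; do
    # clone the build system into two test instances
    step qemu-img create -f qcow2 -b "$OS.qcow2" -F qcow2 "vm$i.qcow2"
done
step ssh "vm1" ./zfs-tests.sh                            # run functional tests (~3h)
```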
Real-world benefits from this PR:
1. The github runner scripts are in the zfs repo itself. That means
you can just open a PR against zfs, like "Add Fedora 41 tester", and
see the results directly in the PR. ZFS admins no longer need to
manually log in to the buildbot server to update the buildbot config
with new versions of Fedora/AlmaLinux.
2. Github runners allow you to run the entire test suite against your
private branch before submitting a formal PR to openzfs. Just open a
PR against your private zfs repo, and the exact same
Fedora/Alma/FreeBSD runners will fire up and run ZTS. This can be
useful if you want to iterate on a ZTS change before submitting a
formal PR.
3. buildbot is incredibly cumbersome. Our buildbot config files alone
are ~1500 lines (not including any build/setup scripts)!
It's a huge pain to set up.
4. We're running the super ancient buildbot 0.8.12. It's so ancient
it requires python2. We actually have to build python2 from source
for almalinux9 just to get it to run. Upgrading to a more modern
buildbot is a huge undertaking, and the UI on the newer versions is
worse.
5. Buildbot uses EC2 instances. EC2 is a pain because:
  * It costs money
  * They throttle IOPS and CPU usage, leading to mysterious,
    hard-to-diagnose failures and timeouts in ZTS.
  * EC2 is high maintenance. We have to set up security groups, SSH
    keys, networking, users, etc, in AWS and it's a pain. We also
    have to periodically go in and kill zombie EC2 instances that
    buildbot is unable to kill off.
6. Buildbot doesn't always handle failures well. One of the things we
saw in the past was that the FreeBSD builders would often die, and each
builder death would take up a "slot" in buildbot. So we would
periodically have to restart buildbot via a cron job to get the slots
back.
7. This PR divides up the ZTS test list into two parts, launches two
VMs, and on each VM runs half the test suite. The test results are
then merged and shown in the summary page. So we're basically
parallelizing ZTS on the same github runner. This leads to lower
overall ZTS runtimes (2.5-3 hours vs 4+ hours on buildbot), and one
unified set of results per runner, which is nice.
8. Since the tests are running on a VM, we have much more control over
what happens. We can capture the serial console output even if the
test completely brings down the VM. In the future, we could also
restart the test on the VM where it left off, so that if a single test
panics the VM, we can just restart it and run the remaining ZTS tests
(this functionality is not yet implemented, though; it's just an idea).
9. Using the runners, users can manually kill or restart a test run
via the GitHub UI. That really isn't possible with buildbot unless
you're an admin.
10. Anecdotally, the tests seem to be more stable and consistent under
the QEMU runners.
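The two-VM split from point 7 boils down to dealing the test list out round-robin. A minimal sketch (the file and test names below are made up; the real list comes from the ZTS runfiles):

```shell
# Made-up test names standing in for the ZTS test list.
printf '%s\n' zpool_create zpool_destroy zfs_send zfs_recv snapshot \
    > all_tests.txt

awk 'NR % 2 == 1' all_tests.txt > vm1.txt   # odd entries  -> VM 1
awk 'NR % 2 == 0' all_tests.txt > vm2.txt   # even entries -> VM 2

cat vm1.txt vm2.txt   # both halves together cover the full list
```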
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #16537
2024-06-17 17:52:58 +03:00
#!/usr/bin/env bash

######################################################################
# 1) setup qemu instance on action runner
######################################################################

set -eu

# install needed packages
export DEBIAN_FRONTEND="noninteractive"
sudo apt-get -y update
sudo apt-get install -y axel cloud-image-utils daemonize guestfs-tools \
  ksmtuned virt-manager linux-modules-extra-$(uname -r) zfsutils-linux

# generate ssh keys
rm -f ~/.ssh/id_ed25519
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -q -N ""

# we expect RAM shortage
cat << EOF | sudo tee /etc/ksmtuned.conf > /dev/null
# /etc/ksmtuned.conf - Configuration file for ksmtuned
# https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-ksm
KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for a 16GB server.
# Smaller servers sleep more, bigger sleep less.
KSM_SLEEP_MSEC=30

KSM_NPAGES_BOOST=0
KSM_NPAGES_DECAY=0
KSM_NPAGES_MIN=1000
KSM_NPAGES_MAX=25000

KSM_THRES_COEF=80
KSM_THRES_CONST=8192

LOGFILE=/var/log/ksmtuned.log
DEBUG=1
EOF

sudo systemctl restart ksm
sudo systemctl restart ksmtuned
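
# Illustrative note on KSM_SLEEP_MSEC=30: ksmtuned scales the scan sleep
# by total RAM (assumed formula, per the Red Hat docs linked above:
# sleep = KSM_SLEEP_MSEC * 16GiB / MemTotal), so smaller machines scan
# less aggressively. Worked out for a 16GiB runner:
demo_total_kb=16777216                                   # stand-in for MemTotal in KiB
demo_sleep=$((30 * 16 * 1024 * 1024 / demo_total_kb))    # = 30 ms on a 16GiB machine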

# not needed
sudo systemctl stop docker.socket
sudo systemctl stop multipathd.socket

# remove default swapfile and /mnt
sudo swapoff -a
sudo umount -l /mnt
DISK="/dev/disk/cloud/azure_resource-part1"
sudo sed -e "s|^$DISK.*||g" -i /etc/fstab
sudo wipefs -aq $DISK
sudo systemctl daemon-reload

sudo modprobe loop
sudo modprobe zfs

# partition the disk as needed
DISK="/dev/disk/cloud/azure_resource"
sudo sgdisk --zap-all $DISK
sudo sgdisk -p \
  -n 1:0:+16G -c 1:"swap" \
  -n 2:0:0 -c 2:"tests" \
  $DISK
sync
sleep 1

# swap with same size as RAM
sudo mkswap $DISK-part1
sudo swapon $DISK-part1

# 60GB data disk
SSD1="$DISK-part2"

# 10GB data disk on ext4
sudo fallocate -l 10G /test.ssd1
SSD2=$(sudo losetup -b 4096 -f /test.ssd1 --show)
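
# Illustrative aside: losetup -b 4096 gives the loop device 4096-byte
# logical sectors, matching the ashift=12 pool created below, since
# ashift is a power-of-two sector size:
demo_sector=$((2**12))    # 4096 bytes per sector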
# adjust zfs module parameters and create pool
exec 1>/dev/null
ARC_MIN=$((1024*1024*256))
ARC_MAX=$((1024*1024*512))
echo $ARC_MIN | sudo tee /sys/module/zfs/parameters/zfs_arc_min
echo $ARC_MAX | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 1 | sudo tee /sys/module/zfs/parameters/zvol_use_blk_mq
sudo zpool create -f -o ashift=12 zpool $SSD1 $SSD2 \
  -O relatime=off -O atime=off -O xattr=sa -O compression=lz4 \
  -O mountpoint=/mnt/tests
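
# For scale (illustrative arithmetic): the ARC limits above pin the host
# ARC between 256 MiB and 512 MiB, leaving most of the runner's RAM free
# for the two test VMs.
demo_arc_min=$((1024*1024*256))    # 268435456 bytes = 256 MiB
demo_arc_max=$((1024*1024*512))    # 536870912 bytes = 512 MiB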
# no need for an I/O scheduler
for i in /sys/block/s*/queue/scheduler; do
  echo "none" | sudo tee $i > /dev/null
done