Most notable fixes from a Proxmox VE perspective are:
* "virtio-net: correctly copy vnet header when flushing TX"
To prevent a stack overflow that could lead to leaking parts of the
QEMU process's memory.
* "hw/pflash: implement update buffer for block writes"
To prevent an edge case for half-completed writes. This potentially
affected EFI disks.
* Fixes to i386 emulation and ARM emulation.
No changes to the patches were necessary (all are just automatic
context changes).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This reverts commit 6b7c1815e1.
The attempted fix has been reported to cause high CPU usage after
backup [0]. It is not difficult to reproduce; the iothreads get
stuck in a loop. Downgrading to pve-qemu-kvm=8.1.2-4 helps, which was
also verified by Christian, thanks! The issue this was supposed to fix
is much rarer, so revert for now, while upstream is still working on a
proper fix.
[0]: https://forum.proxmox.com/threads/138140/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While the patch gives bdrv_graph_wrlock() as an example where the
issue can manifest, something similar can happen even when that is
disabled. I was able to reproduce the issue with
while true; do qm resize 115 scsi0 +4M; sleep 1; done
while running
fio --name=make-mirror-work --size=100M --direct=1 --rw=randwrite \
--bs=4k --ioengine=psync --numjobs=5 --runtime=1200 --time_based
in the VM.
Fix picked up from:
https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg01102.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When using iothread, after commits
1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
it can happen that polling gets stuck when draining. This would cause
IO in the guest to get completely stuck.
A workaround for users is stopping and resuming the vCPUs because that
would also stop and resume the dataplanes which would kick the host
notifiers.
This can happen with block jobs like backup and drive mirror as well
as with hotplug [2].
There are reports in the community forum that might be about this
issue [0][1], and there is also one in the enterprise support channel.
As a workaround in the code, just re-enable notifications and kick the
virtqueue after draining. Draining is already costly and rare, so no
need to worry about a performance penalty here. This was taken from
the following comment by a QEMU developer [3] (in my debugging, I had
already found that re-enabling notifications works around the issue,
but also kicking the queue is more complete).
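A rough sketch of what this workaround amounts to in the drained_end
path (the loop bound and variable names are assumptions for
illustration, not the exact code of the patch):

    /* after draining, make sure the guest can make progress again */
    for (uint16_t i = 0; i < num_queues; i++) {
        VirtQueue *vq = virtio_get_queue(vdev, i);
        /* re-enable host notifications for the queue... */
        virtio_queue_set_notification(vq, 1);
        /* ...and kick it once in case a request was missed while draining */
        virtio_queue_notify(vdev, i);
    }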
[0]: https://forum.proxmox.com/threads/137286/
[1]: https://forum.proxmox.com/threads/137536/
[2]: https://issues.redhat.com/browse/RHEL-3934
[3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This fixes the host->guest direction with noVNC as a client (and
likely others).
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
The issue prevented FreeBSD 14 VMs with a SATA disk from booting.
The commit it fixes e2a5d9b3d9c3 ("hw/ide/ahci: simplify and document
PxCI handling") is part of stable 8.1.2.
The patch was already applied to the block branch upstream:
https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg02711.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
As reported in the community forum [0] and reproduced locally, this
breaks VirtIO network adapters in (at least) the German ISO of Windows
Server 2022. The fix itself was for
> Issue is not fatal but as result acpi-index/"PCI Label ID" property
> is either not shown in device details page or shows incorrect value.
so revert and tolerate that as a stop-gap, rather than have the
devices not working at all.
[0]: https://forum.proxmox.com/threads/92094/post-605684
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The implementation of the helper is_path_tmpfs() is similar to the
existing qemu_fd_getfs() function in util/mmap-alloc.c, which
unfortunately only takes an existing fd.
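A minimal sketch of such a helper, assuming a statfs()-based check
analogous to qemu_fd_getfs() (only the helper name is from this patch,
the rest is illustrative):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/vfs.h>
    #include <linux/magic.h>

    /* return true if the file system backing 'path' is tmpfs */
    static bool is_path_tmpfs(const char *path)
    {
        struct statfs fs;
        int ret;

        /* retry on EINTR like similar helpers do */
        do {
            ret = statfs(path, &fs);
        } while (ret != 0 && errno == EINTR);

        return ret == 0 && fs.f_type == TMPFS_MAGIC;
    }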
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Taking a snapshot became prohibitively slow because of the
migration_transferred_bytes() call in migration_rate_exceeded() [0].
This also applied to the async snapshot taking in Proxmox VE, so
work around the issue until it is fixed upstream.
[0]: https://gitlab.com/qemu-project/qemu/-/issues/1821
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bigger notable changes:
* Commit 1a30b0f5d7 ("block: .bdrv_open is non-coroutine and
unlocked") broke the PVE backup patches, in particular setting up
the backup dump block driver, because bdrv_new_open_driver() cannot
be called from a coroutine. To fix it, bdrv_co_open() is used
instead, and while it's a much more involved function, the result
should be essentially the same. The only difference I noticed is
that the BDRV_O_ALLOW_RDWR flag is also set in the resulting bds
(block driver state), but that shouldn't hurt.
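  Roughly, the change amounts to the following sketch (the driver name
  and flags here are assumptions for illustration, not necessarily the
  exact ones used):

    QDict *options = qdict_new();
    qdict_put_str(options, "driver", "backup-dump-drive"); /* assumed name */
    /*
     * bdrv_new_open_driver() cannot be called from a coroutine, so use
     * bdrv_co_open() instead; it also sets BDRV_O_ALLOW_RDWR in the
     * resulting bds, which shouldn't hurt.
     */
    bs = bdrv_co_open(NULL, NULL, options, BDRV_O_RDWR | BDRV_O_NO_FLUSH,
                      &local_err);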
Smaller notable changes:
* aio_set_fd_handler() dropped its 'is_external' parameter stating
that all callers now pass false in 60f782b6b7 ("aio: remove
aio_disable_external() API"). The calls in the PVE patches also
passed false, so just drop the parameter too.
* global_state_store() does not have a return value anymore, so the
user in the PVE savevm-async patch was adapted. For context, see
c33f1829f8 ("migration: never fail in global_state_store()").
* Renames affecting the PVE savevm-async patch:
migrate_use_block() -> migrate_block() and ram_counters -> mig_stats
9d4b1e5f22 ("migration: Move migrate_use_block() to options.c")
aff3f6606d ("migration: Rename ram_counters to mig_stats")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If there is a pending DMA operation during ide_bus_reset(), the fact
that the IDEState is already reset before the operation is canceled
can be problematic. In particular, ide_dma_cb() might be called and
then use the reset IDEState, which contains the signature after the
reset. When used to construct the I/O operation, this leads to
ide_get_sector() returning 0 and nsector being 1. This is particularly
bad, because a write command will thus destroy the first sector which
often contains a partition table or similar.
Upstream discussion:
https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg04239.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since upstream QEMU 8.0, it's no longer possible to call
bdrv_img_create() from a coroutine, meaning a backup with the
directory format would crash the QEMU instance.
The feature is only exposed via the monitor and was intended to be
experimental. There were no user reports about the breakage, and it
was only noticed during the rebase for QEMU 8.1, because other parts
of the backup code needed adaptation and I decided to check the
BACKUP_FORMAT_DIR case too.
It should not stay in a broken state of course, but to avoid the
maintenance cost, just make it a removed feature for Proxmox VE 8
retroactively.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With the drive-backup QMP command, upstream QEMU uses a drained
section for the source drive when creating the backup job. Do the same
here to avoid subtle bugs.
There, the drained section extends until after the job is started, but
this cannot be done here for multi-disk backups (could at most start
the first job). The important thing is that the cbw
(copy-before-write) node is in place and the bcs (block-copy-state)
bitmap is initialized, which both happen during job creation (ensured
by the "block/backup: move bcs bitmap initialization to job creation"
PVE patch).
One such bug is the one reported in the community forum [0], where using a
drive with iothread can lead to an overlapping block-copy request and
consequently an assertion failure. The block-copy code relies on the
bcs bitmap to determine if a request for a certain range can be
created. Each time a request is created, it resets the bcs bitmap at
that range to indicate that it's being handled.
The duplicate request can happen as follows:
1. Thread A attaches the cbw node.
2. Thread B creates a request and resets the bitmap at that range.
3. Thread A clears the bitmap and merges it with the PBS bitmap.
   The merging can lead to the bitmap being set again at the range of
   the previous request, so the block-copy code thinks it's fine to
   create a request there.
4. Thread B creates another request at an overlapping range before the
   other request is finished.
The drained section ensures that nothing else can interfere with the
bcs bitmap between attaching the copy-before-write block node and
initialization of the bitmap.
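A minimal sketch of the change (the job-creation call is a
hypothetical stand-in for the existing PVE backup code):

    bdrv_drained_begin(source_bs);
    /*
     * Job creation attaches the copy-before-write (cbw) node and
     * initializes the bcs bitmap; while drained, no request can reset
     * the bitmap in between.
     */
    pvebackup_create_job(source_bs);
    bdrv_drained_end(source_bs);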
[0]: https://forum.proxmox.com/threads/133149/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Add a filter to the "vma extract" command. A comma-separated list of
disk images that should be extracted can be passed with the "-d" option.
Example to extract an IDE drive and a SCSI drive from vzdump.vma:
vma extract vzdump.vma -d "drive-ide0,drive-scsi0" extractdir
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reported in the community forum [0]. By default, there can be large
amounts of memory left assigned to the QEMU process after backup.
Likely because of fragmentation, it's necessary to explicitly call
malloc_trim() to tell glibc that it shouldn't keep all that memory
resident for the process.
QEMU itself already does a malloc_trim() in the RCU thread, but that
code path might not be reached (or not for a long time) under usual
operation. The value of 4 MiB for the argument was also copied from
there.
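The added call is essentially just the following (4 MiB being the pad
value copied from the RCU thread code):

    #include <malloc.h>

    /* ask glibc to release free heap memory back to the kernel, keeping
     * at most 4 MiB of slack at the top of the heap */
    malloc_trim(4 * 1024 * 1024);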
Example with the following configuration:
> agent: 1
> boot: order=scsi0
> cores: 4
> cpu: x86-64-v2-AES
> ide2: none,media=cdrom
> memory: 1024
> name: backup-mem
> net0: virtio=DA:58:18:26:59:9F,bridge=vmbr0,firewall=1
> numa: 0
> ostype: l26
> scsi0: rbd:base-107-disk-0/vm-106-disk-1,size=4302M
> scsihw: virtio-scsi-pci
> smbios1: uuid=b2d4511e-8d01-44f1-afd6-9581b30c24a6
> sockets: 2
> startup: order=2
> virtio0: lvmthin:vm-106-disk-1,iothread=1,size=1G
> virtio1: lvmthin:vm-106-disk-2,iothread=1,size=1G
> virtio2: lvmthin:vm-106-disk-3,iothread=1,size=1G
> vmgenid: 0a1d8751-5e02-449d-977e-c0160e900231
Before the change:
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 370948 kB
> root@pve8a1 ~ # vzdump 106 --storage pbs
> (...)
> INFO: Backup job finished successfully
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 2114964 kB
After the change:
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 398788 kB
> root@pve8a1 ~ # vzdump 106 --storage pbs
> (...)
> INFO: Backup job finished successfully
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 424356 kB
[0]: https://forum.proxmox.com/threads/131339/
Co-diagnosed-by: Friedrich Weber <f.weber@proxmox.com>
Co-diagnosed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Add format attributes to functions that take printf-like arguments. This
provides additional compile-time checking that the correct parameters
are passed to the functions.
This fixes compiler warnings generated by the -Wsuggest-attribute=format
flag.
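For illustration, on a hypothetical logging helper the attribute looks
like this (glib's G_GNUC_PRINTF macro provides the same attribute):

    /* argument 1 is the format string, the varargs start at argument 2 */
    void backup_add_log(const char *fmt, ...)
        __attribute__((format(printf, 1, 2)));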
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Each pause+resume operation (which is also done as part of taking a VM
snapshot) would increase the number of open file descriptors by the
number of vhost devices (e.g. network devices by default). This could
lead to crashes during backup and surely other issues once the system
limit (default 1024) was reached [0].
[0]: https://forum.proxmox.com/threads/131603/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Not difficult to run into: just have a drive with iothread, take a PBS
backup and then take a snapshot or hibernate. Resuming will fail with
> qemu: qemu_mutex_unlock_impl: Operation not permitted
because of not acquiring the correct AioContext first.
Migration is not affected, because it runs in coroutine context.
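A minimal sketch of the pattern the fix introduces (the load helper is
a hypothetical stand-in for the savevm-async loading code):

    AioContext *ctx = bdrv_get_aio_context(bs);

    /* the state loading issues block-layer calls that must run with the
     * drive's AioContext held when an iothread is in use */
    aio_context_acquire(ctx);
    load_state_from_blockdev(bs);
    aio_context_release(ctx);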
Reported in the community forum:
https://forum.proxmox.com/threads/129899/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The QAPI change for QEMU 8.0 dropped redundant has_foo parameters, but
in the blockdev_mirror_common() function (which is not part of the
QAPI itself but called from there) the argument pair was has_bitmap
and bitmap_name rather than has_bitmap and bitmap.
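In code, the adaptation is essentially the following sketch (the
bitmap lookup is just for illustration):

    /* before the QAPI change: if (has_bitmap) { ... } */
    if (bitmap_name) {
        /* presence is now signalled by a non-NULL pointer */
        bitmap = bdrv_find_dirty_bitmap(bs, bitmap_name);
    }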
Reported-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The previous version was picked from the mailing list and still had
an object_dynamic_cast call in a hot path, which is avoided with the
version that landed in git.
Also adds a few more exceptions for devices that need reentrancy.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
where there is no good reason to keep them separate. It's a pain
during rebase if there are multiple patches changing the same code
over and over again. This was especially bad for the backup-related
patches. If the history of patches really is needed, it can be
extracted via git. Additionally, compilation with partial application
of patches had been broken for a long time, because one of the master
key changes became part of an earlier patch during a past rebase.
If only the same files were changed by a subsequent patch and the
changes felt like they belonged together (obvious for later bug fixes,
but also done for features, e.g. adding master key support for PBS),
the patches were squashed together.
The PBS namespace support patch was split into the individual parts
it changes, i.e. PBS block driver, pbs-restore binary and QMP backup
infrastructure, and squashed into the respective patches.
No code change is intended, git diff in the submodule should not show
any difference between applying all patches before this commit and
applying all patches after this commit.
The query-proxmox-support QMP function has been left as part of the
"PVE-Backup: Proxmox backup patches for QEMU" patch, because it's
currently only used there. If it ever is used elsewhere too, it can
be split out from there.
The recent alloc-track and BQL-related savevm-async changes have been
left separate for now, because it's not 100% clear they are the best
approach yet. This depends on what upstream decides about the BQL
stuff and whether and what kind of issues with the changes pop up.
The qemu-img dd snapshot patch has been re-ordered to after the other
qemu-img dd patches.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Namely, pvebackup_co_prepare() needs to call bdrv_co_open() rather
than bdrv_open(), because it is a coroutine itself.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Changes to other patches are all just metadata/context changes except
for pvebackup_co_prepare() needing to call bdrv_co_unref() rather than
bdrv_unref(), because it is a coroutine itself. This is documented in
d6ee2e324e ("block-coroutine-wrapper: Introduce no_co_wrapper"). The
change is necessary, because one of the stable fixes converts
bdrv_unref and blk_unref into no_co_wrappers (in preparation for a
second patch to fix a hang with the block resize QMP command).
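In code, this boils down to the following (context assumed from the
patch):

    /* pvebackup_co_prepare() is a coroutine_fn and bdrv_unref() is now a
     * no_co_wrapper, so the coroutine variant has to be used here */
    bdrv_co_unref(bs);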
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Many changes were necessary this time around:
* QAPI was changed to avoid redundant has_* variables, see commit
44ea9d9be3 ("qapi: Start to elide redundant has_FOO in generated C")
for details. This affected many QMP commands added by Proxmox too.
* Pending querying for migration got split into two functions, one to
estimate, one for exact value, see commit c8df4a7aef ("migration:
Split save_live_pending() into state_pending_*") for details. Relevant
for savevm-async and PBS dirty bitmap.
* Some block (driver) functions got converted to coroutines, so the
Proxmox block drivers needed to be adapted.
* Alloc track auto-detaching during PBS live restore got broken by
AioContext-related changes resulting in a deadlock. The current, hacky
method was replaced by a simpler one. Stefan apparently ran into a
problem with that when he wrote the driver, but there were
improvements in the stream job code since then and I didn't manage to
reproduce the issue. It's a separate patch "alloc-track: fix deadlock
during drop" for now, you can find the details there.
* Async snapshot-related changes:
- The pending querying got adapted to the above-mentioned split and
a patch is added to optimize it/make it more similar to what
upstream code does.
- Added initialization of the compression counters (for
future-proofing).
- It's necessary to hold the BQL (big QEMU lock = iothread mutex)
during the setup phase, because block layer functions are used there
and not doing so leads to racy, hard-to-debug crashes or hangs. It's
necessary to change some upstream code too for this, a version of
the patch "migration: for snapshots, hold the BQL during setup
callbacks" is intended to be upstreamed.
- Need to take the bdrv graph read lock before flushing.
* hmp_info_balloon was moved to a different file.
* Needed to include new headers from time to time to still get the
correct functions.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When turning off the "KVM hardware virtualization" checkbox in Proxmox
VE, the TCG accelerator is used, so these fixes are relevant then.
The first patch is included to allow cherry-picking the others without
changes.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Required for the debian/edk2-vars-generator.py script in the
pve-edk2-firmware repository when building the edk2-stable202302
release. Without this patch, the QEMU process spawned by the script
would hang indefinitely.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patch 0008-memory-prevent-dma-reentracy-issues.patch introduced a
regression for the LSI SCSI controller leading to boot failures [0],
because, in its current form, it relies on reentrancy for a particular
ram_io region.
[0]: https://forum.proxmox.com/threads/123843
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patches were selected from the recent "Patch Round-up for stable
7.2.1" [0]. Those that should be relevant for our supported use-cases
(and the upcoming nvme use-case) were picked. Most of the patches
added now have not been submitted to qemu-stable before.
The follow-up for the virtio-rng-pci migration fix will break
migration between versions with the fix and without the fix when a
virtio-rng-pci(-non)-transitional device is used. Luckily, Proxmox VE
only uses the virtio-rng-pci device, and this was fixed by
0006-virtio-rng-pci-fix-migration-compat-for-vectors.patch which was
applied before any public version of Proxmox VE's QEMU 7.2 package was
released.
[0]: https://lists.nongnu.org/archive/html/qemu-stable/2023-03/msg00010.html
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2162569
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The patch was incomplete and (re-)introduced an issue with a potential
failing assertion upon cancellation of the DMA request.
There is a patch on qemu-devel now [0], and it's the same as this one
code-wise (except for comments). But the discussion is still ongoing.
While there shouldn't be a real issue with the patch, there might be
better approaches. The plan is to use this as a stop-gap for now and
pick up the proper solution once it's ready.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg03325.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In particular, with unlucky timing between the QEMU threads, the
deadlock can occur when the guest is issuing trim requests during the
start of a backup operation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ T: resolve trivial merge conflict in series file ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In qemu-server, we already allocate 2 * $mem_size + 500 MiB for driver
state (which was 32 MiB long ago according to git history). It seems
likely that the 30 MiB cutoff in the savevm-async implementation was
chosen based on that.
In bug #4476 [0], another issue caused the iteration to not make any
progress and the state file filled up all the way to the 30 MiB +
pending_size cutoff. Since the guest is not stopped immediately after
the check, it can still dirty some RAM, and the current cutoff is not
enough for a reproducer VM (tested while bug #4476 was still unfixed)
dirtying memory with
> stress-ng -B 2 --bigheap-growth 64.0M
After entering the final stage, savevm actually filled up the state
file completely, leading to an I/O error. It's probably the same
scenario as reported in the bug report, the error message was fixed in
commit a020815 ("savevm-async: fix function name in error message")
after the bug report.
If not for the bug, the cutoff would only be reached by a VM that's
dirtying RAM faster than can be written to the storage, so increase
the cutoff to 100 MiB to have a bigger chance to finish successfully,
while still trying to not increase downtime too much for
non-hibernation snapshots.
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=4476
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when pend_postcopy is large. By definition, pend_postcopy won't
decrease when iterating, so a value larger than the cutoff of 400000
would lead to essentially empty iterations, filling up the state file
until only 30 MiB + pending_size remain and the second half of the
check would trigger.
Avoid this by not considering pend_postcopy for the cutoff to enter
the final phase.
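A sketch of the adjusted condition (variable names are assumed to
roughly match the savevm-async patch, the constant is the cutoff
mentioned above):

    /*
     * pend_postcopy cannot shrink while iterating, so only the parts
     * that can still be reduced decide whether to keep iterating.
     */
    if (pend_precopy + pend_compatible <= 400000) {
        /* enter the final phase: stop the VM and write the rest */
    }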
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
between QEMU less than 7.2 and QEMU 7.2 without the fix (both
directions are affected).
As mentioned in the patch message, this fix itself will break
migration between QEMU 7.2 and QEMU 7.2 with the fix (in both
directions, if a virtio-rng device is attached), but this is fine,
because no pve-qemu-kvm package with QEMU 7.2 has been publicly
released yet.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Two for virtio-mem and one for vIOMMU. Neither feature is exposed in
PVE's qemu-server yet, but both are planned to be added.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Avoids a patch and is required to compile when not all patches are
applied. No functional change is intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>