Backported from commit bfa36802d1 ("virtio-blk: avoid using ioeventfd
state in irqfd conditional") because the rework/rename dataplane ->
ioeventfd didn't happen yet.
Reported in the community forum [0] and reproduced by running a backup
loop to PBS in suspend mode, with fio doing heavy IO in the guest, on
RBD storage (with krbd).
[0]: https://forum.proxmox.com/threads/141320
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In many configurations, e.g. multiple vNICs with multiple queues or
with many Ceph OSDs, the default soft limit of 1024 is not enough.
QEMU is supposed to work fine with file descriptors >= 1024 and does
not use select() on POSIX. Bump the soft limit to the allowed hard
limit to avoid issues with the aforementioned configurations.
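For illustration, the bump boils down to the standard POSIX rlimit
API; a minimal standalone sketch (the actual implementation may differ
in details):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max; /* raise soft limit to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("NOFILE soft limit is now %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }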
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This essentially repeats commit 6b7c181 ("add patch to work around
stuck guest IO with iothread and VirtIO block/SCSI") with an added
fix for the SCSI event virtqueue, which requires special handling.
This is to avoid the issue [3] that made the revert 2a49e66 ("Revert
"add patch to work around stuck guest IO with iothread and VirtIO
block/SCSI"") necessary the first time around.
When using iothread, after commits
1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
it can happen that polling gets stuck when draining. This would cause
IO in the guest to get completely stuck.
A workaround for users is stopping and resuming the vCPUs because that
would also stop and resume the dataplanes which would kick the host
notifiers.
This can happen with block jobs like backup and drive mirror as well
as with hotplug [2].
There are reports in the community forum that might be about this
issue [0][1], and there is also one in the enterprise support channel.
As a workaround in the code, just re-enable notifications and kick the
virtqueue after draining. Draining is already costly and rare, so
there is no need to worry about a performance penalty here.
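In QEMU terms, the workaround's shape is roughly the following (a
sketch assuming the virtio helpers from include/hw/virtio/virtio.h;
the actual patch wires this into the drained_end handlers):

    #include "qemu/osdep.h"
    #include "hw/virtio/virtio.h"

    /* After draining, re-arm guest notifications and kick the virtqueue
     * so requests that arrived while notifications were off still get
     * processed instead of remaining stuck until the next kick. */
    static void drained_end_rearm(VirtIODevice *vdev, VirtQueue *vq, int index)
    {
        virtio_queue_set_notification(vq, 1);
        virtio_queue_notify(vdev, index);
    }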
Take special care to attach the SCSI event virtqueue host notifier
with the _no_poll() variant like in virtio_scsi_dataplane_start().
This avoids the issue from the first attempted fix where the iothread
would suddenly loop with 100% CPU usage whenever some guest IO came in
[3]. This is necessary because of commit 38738f7dbb ("virtio-scsi:
don't waste CPU polling the event virtqueue"). See [4] for the
relevant discussion.
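For illustration, the special handling boils down to which attach
helper is used for the event virtqueue (a sketch; helper names as
introduced by QEMU commit 38738f7dbb):

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "hw/virtio/virtio.h"

    static void attach_notifiers(VirtQueue *cmd_vq, VirtQueue *event_vq,
                                 AioContext *ctx)
    {
        /* Request queues: normal attach, with polling. */
        virtio_queue_aio_attach_host_notifier(cmd_vq, ctx);
        /* Event queue: attach without polling, otherwise the iothread
         * spins at 100% CPU as described above. */
        virtio_queue_aio_attach_host_notifier_no_poll(event_vq, ctx);
    }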
[0]: https://forum.proxmox.com/threads/137286/
[1]: https://forum.proxmox.com/threads/137536/
[2]: https://issues.redhat.com/browse/RHEL-3934
[3]: https://forum.proxmox.com/threads/138140/
[4]: https://lore.kernel.org/qemu-devel/bfc7b20c-2144-46e9-acbc-e726276c5a31@proxmox.com/
Link: https://lore.kernel.org/qemu-devel/20240202153158.788922-1-hreitz@redhat.com/
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: Update to v2 and rebased patch series handling to v8.1.5 ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Most notable fixes from a Proxmox VE perspective are:
* "virtio-net: correctly copy vnet header when flushing TX"
To prevent a stack overflow that could lead to leaking parts of the
QEMU process's memory.
* "hw/pflash: implement update buffer for block writes"
To prevent an edge case for half-completed writes. This potentially
affected EFI disks.
* Fixes to i386 emulation and ARM emulation.
No changes to the patches were necessary (all are just automatic
context changes).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This reverts commit 6b7c1815e1.
The attempted fix has been reported to cause high CPU usage after
backup [0]. It is not difficult to reproduce: the iothreads get stuck
in a loop. Downgrading to pve-qemu-kvm=8.1.2-4 helps, which was also
verified by Christian, thanks! The issue this was supposed to fix is
much rarer, so revert for now, while upstream is still working on a
proper fix.
[0]: https://forum.proxmox.com/threads/138140/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While the patch gives bdrv_graph_wrlock() as an example where the
issue can manifest, something similar can happen even when that is
disabled. The issue could be reproduced with
while true; do qm resize 115 scsi0 +4M; sleep 1; done
while running
fio --name=make-mirror-work --size=100M --direct=1 --rw=randwrite \
--bs=4k --ioengine=psync --numjobs=5 --runtime=1200 --time_based
in the VM.
Fix picked up from:
https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg01102.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When using iothread, after commits
1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
it can happen that polling gets stuck when draining. This would cause
IO in the guest to get completely stuck.
A workaround for users is stopping and resuming the vCPUs because that
would also stop and resume the dataplanes which would kick the host
notifiers.
This can happen with block jobs like backup and drive mirror as well
as with hotplug [2].
There are reports in the community forum that might be about this
issue [0][1], and there is also one in the enterprise support channel.
As a workaround in the code, just re-enable notifications and kick the
virtqueue after draining. Draining is already costly and rare, so
there is no need to worry about a performance penalty here. This was
taken from the following comment of a QEMU developer [3] (in my
debugging, I had already found that re-enabling notifications works
around the issue, but also kicking the queue is more complete).
[0]: https://forum.proxmox.com/threads/137286/
[1]: https://forum.proxmox.com/threads/137536/
[2]: https://issues.redhat.com/browse/RHEL-3934
[3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This fixes the host->guest direction with noVNC as a client (and
likely others).
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
The issue prevented FreeBSD 14 VMs with a SATA disk from booting.
The commit it fixes, e2a5d9b3d9c3 ("hw/ide/ahci: simplify and document
PxCI handling"), is part of stable 8.1.2.
The patch was already applied to the block branch upstream:
https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg02711.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
As reported in the community forum [0] and reproduced locally, this
breaks VirtIO network adapters in (at least) the German ISO of Windows
Server 2022. The fix itself was for
> Issue is not fatal but as result acpi-index/"PCI Label ID" property
> is either not shown in device details page or shows incorrect value.
so revert and tolerate that as a stop-gap, rather than have the
devices not working at all.
[0]: https://forum.proxmox.com/threads/92094/post-605684
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The implementation of the helper is_path_tmpfs() is similar to the
existing qemu_fd_getfs() function in util/mmap-alloc.c, which
unfortunately only takes an existing fd.
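A sketch of what such a helper could look like, using statfs() on the
path instead of an fd (assumption; the actual implementation may
differ):

    #include <stdbool.h>
    #include <sys/vfs.h>
    #include <linux/magic.h> /* TMPFS_MAGIC */

    static bool is_path_tmpfs(const char *path)
    {
        struct statfs fs;

        if (statfs(path, &fs) != 0) {
            return false; /* treat errors as "not tmpfs" */
        }
        return fs.f_type == TMPFS_MAGIC;
    }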
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Seems to be required since commit 81e2b198a8 ("configure: create a
python venv unconditionally").
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upstream QEMU commit 4271f40383 ("virtio-net: correctly report maximum
tx_queue_size value") made setting an invalid tx_queue_size for a
non-vDPA/vhost-user net device a hard error. Now, qemu-server before
commit 089aed81 ("cfg2cmd: netdev: fix value for tx_queue_size") did
just that, so the newer QEMU version would break start-up for most VMs
(a default vNIC configuration would be affected).
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Taking a snapshot became prohibitively slow because of the
migration_transferred_bytes() call in migration_rate_exceeded() [0].
This also applied to the async snapshot taking in Proxmox VE, so
work around the issue until it is fixed upstream.
[0]: https://gitlab.com/qemu-project/qemu/-/issues/1821
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Bigger notable changes:
* Commit 1a30b0f5d7 ("block: .bdrv_open is non-coroutine and
unlocked") broke the PVE backup patches, in particular setting up
the backup dump block driver, because bdrv_new_open_driver() cannot
be called from a coroutine. To fix it, bdrv_co_open() is used
instead, and while it's a much more involved function, the result
should be essentially the same. The only difference I noticed is
that the BDRV_O_ALLOW_RDWR flag is also set in the resulting bds
(block driver state), but that shouldn't hurt.
Smaller notable changes:
* aio_set_fd_handler() dropped its 'is_external' parameter in
  60f782b6b7 ("aio: remove aio_disable_external() API"), which states
  that all callers now pass false. The calls in the PVE patches also
  passed false, so just drop the parameter there too (see the
  prototype sketch after this list).
* global_state_store() does not have a return value anymore, so the
user in the PVE savevm-async patch was adapted. For context, see
c33f1829f8 ("migration: never fail in global_state_store()").
* Renames affecting the PVE savevm-async patch:
migrate_use_block() -> migrate_block() and ram_counters -> mig_stats
9d4b1e5f22 ("migration: Move migrate_use_block() to options.c")
aff3f6606d ("migration: Rename ram_counters to mig_stats")
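For the aio_set_fd_handler() change, the prototypes differ roughly as
follows (a sketch from memory of QEMU's aio headers; details may
vary):

    /* Before 60f782b6b7: callers had to pass is_external. */
    void aio_set_fd_handler(AioContext *ctx, int fd, bool is_external,
                            IOHandler *io_read, IOHandler *io_write,
                            AioPollFn *io_poll, IOHandler *io_poll_ready,
                            void *opaque);

    /* After: the parameter is gone; callers simply drop the argument. */
    void aio_set_fd_handler(AioContext *ctx, int fd,
                            IOHandler *io_read, IOHandler *io_write,
                            AioPollFn *io_poll, IOHandler *io_poll_ready,
                            void *opaque);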
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See the following QEMU commits for reference:
0c5f3dcbb2 ("configure: add --enable-pypi and --disable-pypi")
ac4ccac740 ("configure: rename --enable-pypi to --enable-download, control subprojects too")
6f3ae23b29 ("configure: remove --with-git-submodules=")
The last one removed the option entirely; the closest thing to
--with-git-submodules=ignore is using --disable-download, which will
then just verify that the submodules are present.
Building now requires one of the following:
* Running 'meson subprojects download' in the qemu submodule first.
* Using --enable-download, but then the submodules would be downloaded
  for each build (if not already downloaded in the submodule first)
  and it's just a bit too surprising if downloads happen during the
  build.
The disable-download option will also disable automatic downloading of
missing Python modules from PyPI. Hopefully, it's enough to add them
as Debian build dependencies when required.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If there is a pending DMA operation during ide_bus_reset(), the fact
that the IDEState is already reset before the operation is canceled
can be problematic. In particular, ide_dma_cb() might be called and
then use the reset IDEState, which contains the drive signature after
the reset. When used to construct the IO operation, this leads to
ide_get_sector() returning 0 and nsector being 1. This is particularly
bad, because a write command will thus destroy the first sector, which
often contains a partition table or similar.
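A sketch of the safe ordering implied above (hypothetical helper
names, not the actual patch):

    #include "qemu/osdep.h"
    #include "hw/ide/internal.h"

    /* Hypothetical helpers, named for illustration only. */
    void cancel_pending_dma(IDEBus *bus);
    void reset_ide_states(IDEBus *bus);

    void ide_bus_reset_sketch(IDEBus *bus)
    {
        cancel_pending_dma(bus); /* cancel in-flight DMA first... */
        reset_ide_states(bus);   /* ...so ide_dma_cb() can never run
                                  * against the post-reset signature */
    }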
Upstream discussion:
https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg04239.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since upstream QEMU 8.0, it's no longer possible to call
bdrv_img_create() from a coroutine, meaning a backup with the
directory format would crash the QEMU instance.
The feature is only exposed via the monitor and was intended to be
experimental. There were no user reports about the breakage, and it
was only noticed during the rebase for QEMU 8.1, because other parts
of the backup code needed adaptation and I decided to check the
BACKUP_FORMAT_DIR case too.
It should not stay in a broken state of course, but rather than paying
the maintenance cost, just declare it a removed feature for Proxmox
VE 8 retroactively.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With the drive-backup QMP command, upstream QEMU uses a drained
section for the source drive when creating the backup job. Do the same
here to avoid subtle bugs.
There, the drained section extends until after the job is started, but
this cannot be done here for multi-disk backups (at most, the first
job could be started while still drained). The important thing is that
the cbw (copy-before-write) node is in place and the bcs
(block-copy-state) bitmap is initialized, which both happen during job
creation (ensured by the "block/backup: move bcs bitmap initialization
to job creation" PVE patch).
One such bug was reported in the community forum [0], where using a
drive with iothread can lead to an overlapping block-copy request and,
consequently, an assertion failure. The block-copy code relies on the
bcs bitmap to determine if a request for a certain range can be
created. Each time a request is created, it resets the bcs bitmap at
that range to indicate that it's being handled.
The duplicate request can happen as follows:
1. Thread A attaches the cbw node.
2. Thread B creates a request and resets the bitmap at that range.
3. Thread A clears the bitmap and merges it with the PBS bitmap. The
   merging can lead to the bitmap being set again at the range of the
   previous request, so the block-copy code thinks it's fine to create
   a request there.
4. Thread B creates another request at an overlapping range before the
   first request is finished.
The drained section ensures that nothing else can interfere with the
bcs bitmap between attaching the copy-before-write block node and
initialization of the bitmap.
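A minimal sketch of the idea, assuming QEMU's drain API (the actual
patch is more involved):

    #include "qemu/osdep.h"
    #include "block/block.h"

    static void create_backup_job_drained(BlockDriverState *bs)
    {
        bdrv_drained_begin(bs); /* quiesce the source drive */
        /* ...attach the cbw node and initialize the bcs bitmap... */
        bdrv_drained_end(bs);
    }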
[0]: https://forum.proxmox.com/threads/133149/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Add a filter to the "vma extract" command. A comma-separated list of
disk images that should be extracted can be passed with the "-d"
option. Example to extract an IDE drive and a SCSI drive from
vzdump.vma:
vma extract vzdump.vma -d "drive-ide0,drive-scsi0" extractdir
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reported in the community forum [0]. By default, there can be large
amounts of memory left assigned to the QEMU process after backup.
Likely because of fragmentation, it's necessary to explicitly call
malloc_trim() to tell glibc that it shouldn't keep all that memory
resident for the process.
QEMU itself already does a malloc_trim() in the RCU thread, but that
code path might not be reached (or not for a long time) under usual
operation. The value of 4 MiB for the argument was also copied from
there.
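The change itself is essentially a single glibc call after backup
teardown; a sketch:

    #include <malloc.h>

    static void trim_after_backup(void)
    {
        /* Return freed-but-resident memory to the OS, keeping 4 MiB of
         * padding (the value QEMU's RCU thread uses). */
        malloc_trim(4 * 1024 * 1024);
    }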
Example with the following configuration:
> agent: 1
> boot: order=scsi0
> cores: 4
> cpu: x86-64-v2-AES
> ide2: none,media=cdrom
> memory: 1024
> name: backup-mem
> net0: virtio=DA:58:18:26:59:9F,bridge=vmbr0,firewall=1
> numa: 0
> ostype: l26
> scsi0: rbd:base-107-disk-0/vm-106-disk-1,size=4302M
> scsihw: virtio-scsi-pci
> smbios1: uuid=b2d4511e-8d01-44f1-afd6-9581b30c24a6
> sockets: 2
> startup: order=2
> virtio0: lvmthin:vm-106-disk-1,iothread=1,size=1G
> virtio1: lvmthin:vm-106-disk-2,iothread=1,size=1G
> virtio2: lvmthin:vm-106-disk-3,iothread=1,size=1G
> vmgenid: 0a1d8751-5e02-449d-977e-c0160e900231
Before the change:
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 370948 kB
> root@pve8a1 ~ # vzdump 106 --storage pbs
> (...)
> INFO: Backup job finished successfully
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 2114964 kB
After the change:
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 398788 kB
> root@pve8a1 ~ # vzdump 106 --storage pbs
> (...)
> INFO: Backup job finished successfully
> root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> VmRSS: 424356 kB
[0]: https://forum.proxmox.com/threads/131339/
Co-diagnosed-by: Friedrich Weber <f.weber@proxmox.com>
Co-diagnosed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Add format attributes to functions that take printf-like arguments. This
provides additional compile-time checking that the correct parameters
are passed to the functions.
This fixes compiler warnings generated by the -Wsuggest-attribute=format
flag.
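For illustration, the pattern looks like this (a standalone sketch,
not code from the patch):

    #include <stdarg.h>
    #include <stdio.h>

    /* Argument 2 is a printf-style format string consumed by the
     * varargs starting at argument 3; the compiler can now check call
     * sites with -Wformat. */
    __attribute__((format(printf, 2, 3)))
    static void log_prefixed(const char *prefix, const char *fmt, ...)
    {
        va_list ap;

        fprintf(stderr, "%s: ", prefix);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }

    int main(void)
    {
        log_prefixed("vma", "extracted %d devices\n", 2);
        /* log_prefixed("vma", "%s", 5); would now warn at compile time */
        return 0;
    }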
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Each pause+resume operation (which is also done as part of taking a VM
snapshot) would increase the number of open file descriptors by the
number of vhost devices (e.g. network devices by default). This could
lead to crashes during backup and surely other issues once the system
limit (default 1024) was reached [0].
[0]: https://forum.proxmox.com/threads/131603/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Not difficult to run into: just have a drive with iothread, take a PBS
backup and then take a snapshot or hibernate. Resuming will fail with
> qemu: qemu_mutex_unlock_impl: Operation not permitted
because the correct AioContext is not acquired first.
Migration is not affected, because it runs in coroutine context.
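The shape of the fix is roughly the following (a sketch assuming QEMU
8.1's AioContext locking API):

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "block/block.h"

    /* Outside coroutine context, the drive's AioContext must be held
     * before operating on it when an iothread is in use. */
    static void do_work_on_drive(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        aio_context_acquire(ctx);
        /* ...perform the resume/load work on bs... */
        aio_context_release(ctx);
    }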
Reported in the community forum:
https://forum.proxmox.com/threads/129899/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The QAPI change for QEMU 8.0 dropped redundant has_foo parameters, but
in the blockdev_mirror_common() function (which is not part of the
QAPI itself but called from there) the argument pair was has_bitmap
and bitmap_name rather than has_bitmap and bitmap.
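For illustration, simplified (hypothetical) signatures showing the
mismatch:

    /* The generated QAPI code pairs each optional argument 'foo' with
     * a 'has_foo' flag, so the expected pair is has_bitmap/bitmap: */
    void expected(bool has_bitmap, const char *bitmap);
    /* ...while the helper used has_bitmap together with bitmap_name: */
    void actual(bool has_bitmap, const char *bitmap_name);
    /* After the QEMU 8.0 change, the flag is dropped and NULL means
     * "absent": */
    void fixed(const char *bitmap);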
Reported-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>