pve-qemu-qoup/debian/patches/series


extra/0001-monitor-qmp-fix-race-with-clients-disconnecting-earl.patch
extra/0002-scsi-megasas-Internal-cdbs-have-16-byte-length.patch
extra/0003-ide-avoid-potential-deadlock-when-draining-during-tr.patch
extra/0004-migration-block-dirty-bitmap-fix-loading-bitmap-when.patch
extra/0005-Revert-Revert-graph-lock-Disable-locking-for-now.patch
extra/0006-migration-states-workaround-snapshot-performance-reg.patch
extra/0007-Revert-x86-acpi-workaround-Windows-not-handling-name.patch
extra/0008-target-i386-the-sgx_epc_get_section-stub-is-reachabl.patch
extra/0009-ui-clipboard-mark-type-as-not-available-when-there-i.patch
work around stuck guest IO with iothread and VirtIO block/SCSI

This essentially repeats commit 6b7c181 ("add patch to work around stuck guest IO with iothread and VirtIO block/SCSI") with an added fix for the SCSI event virtqueue, which requires special handling. This is to avoid the issue [3] that made the revert 2a49e66 ("Revert "add patch to work around stuck guest IO with iothread and VirtIO block/SCSI"") necessary the first time around.

When using iothread, after commits
1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
it can happen that polling gets stuck when draining. This would cause IO in the guest to get completely stuck.

A workaround for users is stopping and resuming the vCPUs, because that would also stop and resume the dataplanes, which would kick the host notifiers.

This can happen with block jobs like backup and drive mirror as well as with hotplug [2]. There are reports in the community forum that might be about this issue [0][1], and there is also one in the enterprise support channel.

As a workaround in the code, just re-enable notifications and kick the virtqueue after draining. Draining is already costly and rare, so no need to worry about a performance penalty here.

Take special care to attach the SCSI event virtqueue host notifier with the _no_poll() variant, like in virtio_scsi_dataplane_start(). This avoids the issue from the first attempted fix, where the iothread would suddenly loop with 100% CPU usage whenever some guest IO came in [3]. This is necessary because of commit 38738f7dbb ("virtio-scsi: don't waste CPU polling the event virtqueue"). See [4] for the relevant discussion.

[0]: https://forum.proxmox.com/threads/137286/
[1]: https://forum.proxmox.com/threads/137536/
[2]: https://issues.redhat.com/browse/RHEL-3934
[3]: https://forum.proxmox.com/threads/138140/
[4]: https://lore.kernel.org/qemu-devel/bfc7b20c-2144-46e9-acbc-e726276c5a31@proxmox.com/

Link: https://lore.kernel.org/qemu-devel/20240202153158.788922-1-hreitz@redhat.com/
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: Update to v2 and rebased patch series handling to v8.1.5 ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-02-02 21:35:31 +03:00
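The two patches below (extra/0010, extra/0011) implement this. For orientation, here is a condensed C sketch of the combined idea. The attach/kick helpers (virtio_queue_aio_attach_host_notifier(), virtio_queue_aio_attach_host_notifier_no_poll(), virtio_queue_get_host_notifier(), event_notifier_set()) are real QEMU functions, but the wrapper function and its parameters are illustrative only, not the verbatim patches:

    #include "qemu/osdep.h"
    #include "hw/virtio/virtio.h"

    /*
     * Illustrative sketch: re-attach the host notifiers when a drained
     * section ends, then kick each virtqueue so that requests which
     * arrived while notifications were disabled are still processed.
     */
    static void drained_end_reattach_sketch(VirtIODevice *vdev,
                                            VirtQueue *event_vq,
                                            AioContext *ctx, int num_queues)
    {
        for (int i = 0; i < num_queues; i++) {
            VirtQueue *vq = virtio_get_queue(vdev, i);

            if (vq == event_vq) {
                /*
                 * The SCSI event virtqueue must not be polled (see commit
                 * 38738f7dbb), otherwise the iothread loops at 100% CPU [3].
                 */
                virtio_queue_aio_attach_host_notifier_no_poll(vq, ctx);
            } else {
                virtio_queue_aio_attach_host_notifier(vq, ctx);
            }

            /*
             * Notifications were off while draining; kicking the host
             * notifier makes the device process anything queued meanwhile.
             */
            event_notifier_set(virtio_queue_get_host_notifier(vq));
        }
    }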
extra/0010-virtio-scsi-Attach-event-vq-notifier-with-no_poll.patch
extra/0011-virtio-Re-enable-notifications-after-drain.patch
extra/0012-qemu_init-increase-NOFILE-soft-limit-on-POSIX.patch
extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
bitmap-mirror/0002-drive-mirror-add-support-for-conditional-and-always-.patch
bitmap-mirror/0003-mirror-add-check-for-bitmap-mode-without-bitmap.patch
bitmap-mirror/0004-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
bitmap-mirror/0005-iotests-add-test-for-bitmap-mirror.patch
bitmap-mirror/0006-mirror-move-some-checks-to-qmp.patch
pve/0001-PVE-Config-block-file-change-locking-default-to-off.patch
pve/0002-PVE-Config-Adjust-network-script-path-to-etc-kvm.patch
pve/0003-PVE-Config-set-the-CPU-model-to-kvm64-32-instead-of-.patch
pve/0004-PVE-Config-ui-spice-default-to-pve-certificates.patch
pve/0005-PVE-Config-glusterfs-no-default-logfile-if-daemonize.patch
pve/0006-PVE-Config-rbd-block-rbd-disable-rbd_cache_writethro.patch
pve/0007-PVE-Up-glusterfs-allow-partial-reads.patch
pve/0008-PVE-Up-qemu-img-return-success-on-info-without-snaps.patch
pve/0009-PVE-Up-qemu-img-dd-add-osize-and-read-from-to-stdin-.patch
pve/0010-PVE-Up-qemu-img-dd-add-isize-parameter.patch
pve/0011-PVE-Up-qemu-img-dd-add-n-skip_create.patch
pve/0012-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
pve/0013-PVE-virtio-balloon-improve-query-balloon.patch
pve/0014-PVE-qapi-modify-query-machines.patch
pve/0015-PVE-qapi-modify-spice-query.patch
pve/0016-PVE-add-IOChannel-implementation-for-savevm-async.patch
pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
pve/0019-PVE-block-add-the-zeroinit-block-driver-filter.patch
pve/0020-PVE-Add-dummy-id-command-line-parameter.patch
pve/0021-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
pve/0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
pve/0023-PVE-monitor-disable-oob-capability.patch
pve/0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
pve/0025-PVE-Allow-version-code-in-machine-type.patch
pve/0026-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
pve/0027-PVE-Backup-add-vma-backup-format-code.patch
pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
pve/0029-PVE-Add-sequential-job-transaction-support.patch
pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
pve/0031-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
pve/0032-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
pve/0033-PVE-redirect-stderr-to-journal-when-daemonized.patch
pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
pve/0035-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
pve/0036-PVE-fall-back-to-open-iscsi-initiatorname.patch
pve/0037-PVE-block-stream-increase-chunk-size.patch
pve/0038-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
pve/0039-block-add-alloc-track-driver.patch
pve/0040-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
pve/0041-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
pve/0042-Revert-block-rbd-implement-bdrv_co_block_status.patch
pve/0043-alloc-track-fix-deadlock-during-drop.patch
pve/0044-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
pve/0045-savevm-async-don-t-hold-BQL-during-setup.patch
implement support for backup fleecing

Excerpt from Fiona's v3 cover letter [0]:

When a backup for a VM is started, QEMU will install a "copy-before-write" filter in its block layer. This filter ensures that upon new guest writes, old data still needed for the backup is sent to the backup target first. The guest write blocks until this operation is finished, so guest IO to not-yet-backed-up sectors will be limited by the speed of the backup target.

With backup fleecing, such old data is cached in a fleecing image rather than sent directly to the backup target. This can help guest IO performance and even prevent hangs in certain scenarios, at the cost of requiring more storage space.

With this series it will be possible to enable backup fleecing via e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm`, with fleecing images created on the storage `local-lvm`. The fleecing storage should be a fast local storage which supports thin provisioning and discard. If the storage supports qcow2, that is used as the fleecing image format. If the underlying file system does not support discard, then with qcow2 and preallocation=off, at least already allocated parts of the image can be re-used later.

Fleecing images are created by qemu-server via pve-storage and attached to QEMU before the backup starts, and cleaned up after the backup finished or failed. The naming schema for fleecing images is 'vm-ID-fleece-N(.FORMAT)'. The allocated images are recorded in the guest configuration, so that clean-up can be re-attempted even after a hard failure.

While not too bad, it's a non-trivial amount of code and I'm not 100% sure about the cost-benefit, so sending those as RFC.

The fleecing image needs to be the exact same size as the source, but luckily, an explicit size can be specified when attaching a raw image to QEMU, so there are no size issues when using storages that have coarser allocation/round-up. For qcow2, it seems that the virtual size can be nearly arbitrary (i.e. modulo 512-byte granularity) during allocation.

[0]: https://lists.proxmox.com/pipermail/pve-devel/2024-April/062815.html

Originally-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-04-11 18:38:26 +03:00
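As a toy illustration of the mechanism described above (self-contained C, not QEMU code; QEMU's real implementation lives in block/copy-before-write.c and the pve/0046-0052 patches below): a guest write first saves the old cluster into the fleecing image, and the backup reads its point-in-time data from there.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define CLUSTERS   4
    #define CLUSTER_SZ 8

    static char source[CLUSTERS][CLUSTER_SZ];  /* live disk contents     */
    static char fleece[CLUSTERS][CLUSTER_SZ];  /* fleecing image (cache) */
    static bool in_fleece[CLUSTERS];           /* clusters saved so far  */

    /* Guest write: copy the old data into the (fast, local) fleecing
     * image first, so the guest never waits on the slow backup target. */
    static void guest_write(int cluster, const char data[CLUSTER_SZ])
    {
        if (!in_fleece[cluster]) {
            memcpy(fleece[cluster], source[cluster], CLUSTER_SZ);
            in_fleece[cluster] = true;
        }
        memcpy(source[cluster], data, CLUSTER_SZ);
    }

    /* Backup read: point-in-time data comes from the fleecing image if
     * the cluster was overwritten since the backup started, otherwise
     * straight from the source. */
    static const char *backup_read(int cluster)
    {
        return in_fleece[cluster] ? fleece[cluster] : source[cluster];
    }

    int main(void)
    {
        for (int i = 0; i < CLUSTERS; i++) {
            snprintf(source[i], CLUSTER_SZ, "old-%d", i);
        }

        char new_data[CLUSTER_SZ] = "new-1";
        guest_write(1, new_data); /* guest overwrites cluster 1 mid-backup */

        for (int i = 0; i < CLUSTERS; i++) {
            printf("cluster %d: live=%s backup=%s\n",
                   i, source[i], backup_read(i));
        }
        return 0;
    }

Running this prints live=new-1 but backup=old-1 for cluster 1, which is exactly the consistency property the copy-before-write filter provides; the fleecing image only ever holds clusters the guest has touched, which is why thin provisioning and discard matter for the fleecing storage.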
pve/0046-block-copy-before-write-fix-permission.patch
pve/0047-block-copy-before-write-support-unligned-snapshot-di.patch
pve/0048-block-copy-before-write-create-block_copy-bitmap-in-.patch
pve/0049-qapi-blockdev-backup-add-discard-source-parameter.patch
pve/0050-copy-before-write-allow-specifying-minimum-cluster-s.patch
pve/0051-backup-add-minimum-cluster-size-to-performance-optio.patch
pve/0052-PVE-backup-add-fleecing-option.patch