pve-qemu-qoup/debian/patches/pve/0006-PVE-Config-rbd-block-rbd-disable-rbd_cache_writethro.patch
Fiona Ebner, commit bf251437e9: update submodule and patches to QEMU 8.0.0
Many changes were necessary this time around:

* QAPI was changed to avoid redundant has_* variables, see commit
44ea9d9be3 ("qapi: Start to elide redundant has_FOO in generated C")
for details. This affected many QMP commands added by Proxmox too.
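
For illustration, a minimal sketch of the generated C before and after the
change (the ExampleArgs* names are made up, not actual Proxmox QMP structs):

    #include <stdbool.h>
    #include <stddef.h>

    /* QEMU 7.2: optional pointer-typed members carried a separate flag */
    typedef struct ExampleArgsOld {
        bool has_name;
        char *name;
    } ExampleArgsOld;

    /* QEMU 8.0, after commit 44ea9d9be3: the has_* flag is elided for
     * pointer-typed optionals, presence is now simply name != NULL */
    typedef struct ExampleArgsNew {
        char *name;
    } ExampleArgsNew;

    static bool example_has_name(const ExampleArgsNew *args)
    {
        return args->name != NULL; /* was: args->has_name */
    }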

* Pending querying for migration got split into two functions, one to
estimate, one for the exact value, see commit c8df4a7aef ("migration:
Split save_live_pending() into state_pending_*") for details. Relevant
for savevm-async and the PBS dirty bitmap.
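
As a rough sketch of the new callback shape (the field names are from the
commit above; the parameter list is written from memory of QEMU 8.0's
include/migration/register.h, so treat it as an assumption and double-check):

    #include <stdint.h>

    /* .state_pending_estimate: cheap guess, may be based on stale data,
     * called frequently to decide whether to keep iterating */
    static void example_state_pending_estimate(void *opaque,
                                               uint64_t *must_precopy,
                                               uint64_t *can_postcopy)
    {
        *must_precopy += 4096; /* remaining bytes from the last sync */
    }

    /* .state_pending_exact: may sync dirty tracking first, then report
     * precisely; used when deciding whether convergence is reached */
    static void example_state_pending_exact(void *opaque,
                                            uint64_t *must_precopy,
                                            uint64_t *can_postcopy)
    {
        *must_precopy += 4096;
    }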

* Some block (driver) functions got converted to coroutines, so the
Proxmox block drivers needed to be adapted.
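
As an example of the pattern (hypothetical driver inside a QEMU 8.0 tree;
graph-lock annotations omitted, and the set of converted callbacks differs
per driver):

    #include "qemu/osdep.h"
    #include "block/block_int.h"

    /* QEMU 7.2 had: static int64_t example_getlength(BlockDriverState *bs)
     * registered as .bdrv_getlength; in 8.0 the callback is a coroutine
     * and the BlockDriver field was renamed accordingly. */
    static int64_t coroutine_fn example_co_getlength(BlockDriverState *bs)
    {
        return bdrv_co_getlength(bs->file->bs);
    }

    static BlockDriver bdrv_example = {
        .format_name        = "example",
        .bdrv_co_getlength  = example_co_getlength, /* was .bdrv_getlength */
    };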

* Alloc track auto-detaching during PBS live restore got broken by
AioContext-related changes resulting in a deadlock. The current, hacky
method was replaced by a simpler one. Stefan apparently ran into a
problem with that when he wrote the driver, but there were
improvements in the stream job code since then and I didn't manage to
reproduce the issue. It's a separate patch "alloc-track: fix deadlock
during drop" for now, you can find the details there.

* Async snapshot-related changes:
  - The pending querying got adapted to the above-mentioned split and
  a patch is added to optimize it/make it more similar to what
  upstream code does.
  - Added initialization of the compression counters (for
    future-proofing).
  - It's necessary to hold the BQL (big QEMU lock = iothread mutex)
  during the setup phase, because block layer functions are used there
  and not doing so leads to racy, hard-to-debug crashes or hangs. Some
  upstream code needs to be changed for this too; a version of the
  patch "migration: for snapshots, hold the BQL during setup
  callbacks" is intended to be upstreamed. See the sketch after this
  list.
  - Need to take the bdrv graph read lock before flushing.
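
Roughly, for the last two points (paraphrased, not the literal savevm-async
code; the QEMU calls exist in 8.0, but the exact call sites here are only an
assumption):

    /* fragment, relies on QEMU-internal headers (main-loop, graph-lock,
     * block-backend, savevm) */
    static void coroutine_fn example_snapshot_co(QEMUFile *f, BlockBackend *blk)
    {
        int ret;

        /* setup phase: the callbacks use block layer functions, so hold
         * the BQL around them */
        qemu_mutex_lock_iothread();
        qemu_savevm_state_header(f);
        qemu_savevm_state_setup(f);
        qemu_mutex_unlock_iothread();

        /* ... iterate and complete the snapshot state ... */

        /* flushing from coroutine context now needs the graph read lock */
        bdrv_graph_co_rdlock();
        ret = blk_co_flush(blk);
        bdrv_graph_co_rdunlock();
        if (ret < 0) {
            /* report the error to the caller */
        }
    }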

* hmp_info_balloon was moved to a different file.

* Needed to include a new header from time to time to still get the
correct functions.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-05-22 15:09:14 +02:00


From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
Date: Mon, 6 Apr 2020 12:16:36 +0200
Subject: [PATCH] PVE: [Config] rbd: block: rbd: disable
 rbd_cache_writethrough_until_flush

Either the cache mode asks for a cache or not. There's no
point in having a "temporary" cache mode. This option AFAIK
was introduced as a hack for ancient virtio drivers. If
anything, we should have a separate option for it. Better
yet, VMs affected by the related issue should simply
explicitly choose writethrough.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
 block/rbd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/rbd.c b/block/rbd.c
index 978671411e..a4749f3b1b 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -963,6 +963,8 @@ static int qemu_rbd_connect(rados_t *cluster, rados_ioctx_t *io_ctx,
         rados_conf_set(*cluster, "rbd_cache", "false");
     }
 
+    rados_conf_set(*cluster, "rbd_cache_writethrough_until_flush", "false");
+
     r = rados_connect(*cluster);
     if (r < 0) {
         error_setg_errno(errp, -r, "error connecting");