pve-qemu-qoup/debian/patches/pve/0006-PVE-Config-rbd-block-rbd-disable-rbd_cache_writethro.patch
Fabian Ebner 4567474e95 update submodule and patches to 6.2.0
Notable changes:
* bdrv_co_p{discard,readv,writev,write_zeroes} function signatures
  changed to use int64_t for offsets/bytes, and some still had plain
  int rather than BdrvRequestFlags for the flags (see the sketch
  after this list).
* job_cancel_sync now has a force parameter. The commit messages of
  73895f3838cd7fdaf185cf1dbc47be58844a966f
  4cfb3f05627ad82af473e7f7ae113c3884cd04e3
  suggest that using force=true makes more sense.
* Added 3 patches coming in via the qemu-stable tag; the most
  important one works around a librbd issue.
* Added another 3 patches from qemu-devel to fix an issue leading to
  a crash when live migrating with an iothread.
* cluster_size calculation helper changed (see patch pve/0026).
* QAPI's if conditionals now use 'CONFIG_FOO' rather than
  'defined(CONFIG_FOO)'.
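
For illustration of the signature change mentioned above, a new-style
coroutine read callback roughly looks like the following. This is a
hedged sketch assuming the usual QEMU block-layer types; the driver
name qemu_foo_co_preadv is a placeholder and the code is not taken
from these patches:

/* Sketch only: shape of a 6.2-style driver read callback.  Offsets and
 * byte counts are int64_t and the flags use BdrvRequestFlags rather
 * than plain int.  Placeholder driver name, not from the patch series. */
static int coroutine_fn qemu_foo_co_preadv(BlockDriverState *bs,
                                           int64_t offset, int64_t bytes,
                                           QEMUIOVector *qiov,
                                           BdrvRequestFlags flags)
{
    /* read 'bytes' bytes starting at 'offset' into 'qiov' */
    return 0;
}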

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2022-02-15 14:03:07 +01:00

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
Date: Mon, 6 Apr 2020 12:16:36 +0200
Subject: [PATCH] PVE: [Config] rbd: block: rbd: disable
 rbd_cache_writethrough_until_flush

Either the cache mode asks for a cache or it doesn't; there is no
point in having a "temporary" cache mode. AFAIK, this option was
introduced as a hack for ancient virtio drivers. If anything, we
should have a separate option for it. Better yet, VMs affected by
the related issue should simply choose writethrough explicitly.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
 block/rbd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/rbd.c b/block/rbd.c
index 8f183eba2a..458f6bd7eb 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -820,6 +820,8 @@ static int qemu_rbd_connect(rados_t *cluster, rados_ioctx_t *io_ctx,
         rados_conf_set(*cluster, "rbd_cache", "false");
     }
 
+    rados_conf_set(*cluster, "rbd_cache_writethrough_until_flush", "false");
+
     r = rados_connect(*cluster);
     if (r < 0) {
         error_setg_errno(errp, -r, "error connecting");
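
For context, librbd's rbd_cache_writethrough_until_flush defaults to
true, which makes the RBD cache behave as writethrough until the guest
issues its first flush and only then switch to writeback; the added
rados_conf_set() call turns that heuristic off, so the cache mode
configured for the drive applies from the start. Guests that really
want the safer behaviour can, as the commit message suggests, pick
writethrough explicitly. An illustrative drive specification (the image
name is a placeholder, not taken from this patch):

 -drive file=rbd:pool/vm-disk,format=raw,cache=writethrough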