From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fabian Ebner <f.ebner@proxmox.com>
Date: Wed, 22 Jun 2022 10:45:13 +0200
Subject: [PATCH] block: alloc-track: avoid premature break

While the bdrv_co_preadv() calls are expected to return 0 on success,
qemu_iovec_memset() will return the number of bytes set (which will be
local_bytes, because the slice with that size was just initialized).

Don't break out of the loop after the branch with qemu_iovec_memset(),
because there might still be work to do. Additionally, ret is an int,
which on 64-bit platforms is too small to hold the size_t returned by
qemu_iovec_memset().
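
For illustration, here is a minimal standalone sketch of the premature
break (hypothetical stub names, not QEMU code; the stub only mimics
qemu_iovec_memset()'s convention of returning the byte count):

    #include <stdio.h>
    #include <stddef.h>

    /* Stand-in for qemu_iovec_memset(): returns the number of bytes set. */
    static size_t iovec_memset_stub(size_t bytes)
    {
        return bytes;
    }

    int main(void)
    {
        int ret = 0; /* an int, as in track_co_preadv() */

        for (int chunk = 0; chunk < 3; chunk++) {
            /* Implicit size_t -> int narrowing; ret becomes 512, not 0. */
            ret = iovec_memset_stub(512);
            if (ret != 0) {
                break; /* premature break: chunks 2 and 3 are skipped */
            }
        }
        printf("loop stopped with ret=%d\n", ret); /* prints ret=512 */
        return 0;
    }
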
The branch seems to be difficult to reach in practice, because the
whole point of alloc-track is to be used with a backing device.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
block/alloc-track.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/alloc-track.c b/block/alloc-track.c
index 113bbd7058..b75d7c6460 100644
--- a/block/alloc-track.c
+++ b/block/alloc-track.c
@@ -175,7 +175,8 @@ static int coroutine_fn track_co_preadv(BlockDriverState *bs,
ret = bdrv_co_preadv(bs->backing, local_offset, local_bytes,
&local_qiov, flags);
} else {
- ret = qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
+ qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
+ ret = 0;
}
 
if (ret != 0) {