From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fabian Ebner <f.ebner@proxmox.com>
Date: Wed, 25 May 2022 13:59:38 +0200
Subject: [PATCH] PVE-Backup: ensure jobs in di_list are referenced

Ensures that qmp_backup_cancel doesn't pick a job that's already been
freed. With unlucky timings it seems possible that:
1. job_exit -> job_completed -> job_finalize_single starts
2. pvebackup_co_complete_stream gets spawned in completion callback
3. job finalize_single finishes -> job's refcount hits zero -> job is
   freed
4. qmp_backup_cancel comes in and locks backup_state.backup_mutex
   before pvebackup_co_complete_stream can remove the job from the
   di_list
5. qmp_backup_cancel will pick a job that's already been freed
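
In outline, the fix keeps an extra reference on each job for as long as it
is tracked in di_list and only drops it when the entry is removed again;
roughly (mirroring the hunks below):

    /* create_backup_jobs_bh(): keep the job alive while it sits in di_list */
    di->job = job;
    if (job) {
        WITH_JOB_LOCK_GUARD() {
            job_ref_locked(&job->job);
        }
    }

    /* pvebackup_co_complete_stream() and the error-cleanup path: drop the
     * reference (and clear the pointer) when the entry goes away */
    if (di->job) {
        WITH_JOB_LOCK_GUARD() {
            job_unref_locked(&di->job->job);
            di->job = NULL;
        }
    }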

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[FE: adapt for new job lock mechanism replacing AioContext locks]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 pve-backup.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 79d14d6a0b..67e2b99d74 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -318,6 +318,13 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
         }
     }
 
+    if (di->job) {
+        WITH_JOB_LOCK_GUARD() {
+            job_unref_locked(&di->job->job);
+            di->job = NULL;
+        }
+    }
+
     // remove self from job list
     backup_state.di_list = g_list_remove(backup_state.di_list, di);
 
@@ -493,6 +500,11 @@ static void create_backup_jobs_bh(void *opaque) {
         aio_context_release(aio_context);
 
         di->job = job;
+        if (job) {
+            WITH_JOB_LOCK_GUARD() {
+                job_ref_locked(&job->job);
+            }
+        }
 
         if (!job || local_err) {
             error_setg(errp, "backup_job_create failed: %s",
@@ -520,11 +532,15 @@ static void create_backup_jobs_bh(void *opaque) {
                 di->target = NULL;
             }
 
-            if (!canceled && di->job) {
+            if (di->job) {
                 WITH_JOB_LOCK_GUARD() {
-                    job_cancel_sync_locked(&di->job->job, true);
+                    if (!canceled) {
+                        job_cancel_sync_locked(&di->job->job, true);
+                        canceled = true;
+                    }
+                    job_unref_locked(&di->job->job);
+                    di->job = NULL;
                 }
-                canceled = true;
             }
         }
     }