From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Fabian Ebner <f.ebner@proxmox.com>
Date: Mon, 7 Feb 2022 14:21:01 +0100
Subject: [PATCH] qemu-img dd: add -l option for loading a snapshot

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---
 docs/tools/qemu-img.rst |  6 +++---
 qemu-img-cmds.hx        |  4 ++--
 qemu-img.c              | 33 +++++++++++++++++++++++++++++++--
 3 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/docs/tools/qemu-img.rst b/docs/tools/qemu-img.rst
index d83e8fb3c0..61c6b21859 100644
--- a/docs/tools/qemu-img.rst
+++ b/docs/tools/qemu-img.rst
@@ -496,10 +496,10 @@ Command description:
   it doesn't need to be specified separately in this case.
 
-.. option:: dd [--image-opts] [-U] [-f FMT] [-O OUTPUT_FMT] [-n] [bs=BLOCK_SIZE] [count=BLOCKS] [skip=BLOCKS] if=INPUT of=OUTPUT
+.. option:: dd [--image-opts] [-U] [-f FMT] [-O OUTPUT_FMT] [-n] [-l SNAPSHOT_PARAM] [bs=BLOCK_SIZE] [count=BLOCKS] [skip=BLOCKS] if=INPUT of=OUTPUT
 
-  dd copies from *INPUT* file to *OUTPUT* file converting it from
-  *FMT* format to *OUTPUT_FMT* format.
+  dd copies from *INPUT* file or snapshot *SNAPSHOT_PARAM* to *OUTPUT* file
+  converting it from *FMT* format to *OUTPUT_FMT* format.
 
   The data is by default read and written using blocks of 512 bytes but can be
   modified by specifying *BLOCK_SIZE*. If count=\ *BLOCKS* is specified
diff --git a/qemu-img-cmds.hx b/qemu-img-cmds.hx
index 0b29a67a06..758f397232 100644
--- a/qemu-img-cmds.hx
+++ b/qemu-img-cmds.hx
@@ -60,9 +60,9 @@ SRST
 ERST
 
 DEF("dd", img_dd,
-    "dd [--image-opts] [-U] [-f fmt] [-O output_fmt] [-n] [bs=block_size] [count=blocks] [skip=blocks] [osize=output_size] if=input of=output")
+    "dd [--image-opts] [-U] [-f fmt] [-O output_fmt] [-n] [-l snapshot_param] [bs=block_size] [count=blocks] [skip=blocks] [osize=output_size] if=input of=output")
 SRST
-.. option:: dd [--image-opts] [-U] [-f FMT] [-O OUTPUT_FMT] [-n] [bs=BLOCK_SIZE] [count=BLOCKS] [skip=BLOCKS] [osize=OUTPUT_SIZE] if=INPUT of=OUTPUT
+.. option:: dd [--image-opts] [-U] [-f FMT] [-O OUTPUT_FMT] [-n] [-l SNAPSHOT_PARAM] [bs=BLOCK_SIZE] [count=BLOCKS] [skip=BLOCKS] [osize=OUTPUT_SIZE] if=INPUT of=OUTPUT
 ERST
 
 DEF("info", img_info,
diff --git a/qemu-img.c b/qemu-img.c
index 6fc8384f64..a6c88e0860 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -5110,6 +5110,7 @@ static int img_dd(int argc, char **argv)
     BlockDriver *drv = NULL, *proto_drv = NULL;
     BlockBackend *blk1 = NULL, *blk2 = NULL;
     QemuOpts *opts = NULL;
+    QemuOpts *sn_opts = NULL;
     QemuOptsList *create_opts = NULL;
     Error *local_err = NULL;
     bool image_opts = false;
@@ -5119,6 +5120,7 @@ static int img_dd(int argc, char **argv)
     int64_t size = 0, readsize = 0;
     int64_t out_pos, in_pos;
     bool force_share = false, skip_create = false;
+    const char *snapshot_name = NULL;
     struct DdInfo dd = {
         .flags = 0,
         .count = 0,
@@ -5156,7 +5158,7 @@ static int img_dd(int argc, char **argv)
         { 0, 0, 0, 0 }
     };
 
-    while ((c = getopt_long(argc, argv, ":hf:O:Un", long_options, NULL))) {
+    while ((c = getopt_long(argc, argv, ":hf:O:l:Un", long_options, NULL))) {
         if (c == EOF) {
             break;
         }
@@ -5179,6 +5181,19 @@ static int img_dd(int argc, char **argv)
         case 'n':
             skip_create = true;
             break;
+        case 'l':
+            if (strstart(optarg, SNAPSHOT_OPT_BASE, NULL)) {
+                sn_opts = qemu_opts_parse_noisily(&internal_snapshot_opts,
+                                                  optarg, false);
+                if (!sn_opts) {
+                    error_report("Failed in parsing snapshot param '%s'",
+                                 optarg);
+                    goto out;
+                }
+            } else {
+                snapshot_name = optarg;
+            }
+            break;
         case 'U':
             force_share = true;
             break;
@@ -5238,11 +5253,24 @@ static int img_dd(int argc, char **argv)
     if (dd.flags & C_IF) {
         blk1 = img_open(image_opts, in.filename, fmt, 0, false, false,
                         force_share);
-
         if (!blk1) {
             ret = -1;
             goto out;
         }
+        if (sn_opts) {
+            bdrv_snapshot_load_tmp(blk_bs(blk1),
+                                   qemu_opt_get(sn_opts, SNAPSHOT_OPT_ID),
+                                   qemu_opt_get(sn_opts, SNAPSHOT_OPT_NAME),
+                                   &local_err);
+        } else if (snapshot_name != NULL) {
+            bdrv_snapshot_load_tmp_by_id_or_name(blk_bs(blk1), snapshot_name,
+                                                 &local_err);
+        }
+        if (local_err) {
+            error_reportf_err(local_err, "Failed to load snapshot: ");
+            ret = -1;
+            goto out;
+        }
     }
 
     if (dd.flags & C_OSIZE) {
@@ -5397,6 +5425,7 @@ static int img_dd(int argc, char **argv)
 out:
     g_free(arg);
     qemu_opts_del(opts);
+    qemu_opts_del(sn_opts);
    qemu_opts_free(create_opts);
    blk_unref(blk1);
    blk_unref(blk2);