From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
Date: Mon, 6 Apr 2020 12:16:43 +0200
Subject: [PATCH] PVE: virtio-balloon: improve query-balloon

Actually provide memory information via the query-balloon
command.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
[FE: add BalloonInfo to member name exceptions list
     rebase for 8.0 - moved to hw/core/machine-hmp-cmds.c]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 hw/core/machine-hmp-cmds.c | 30 +++++++++++++++++++++++++++++-
 hw/virtio/virtio-balloon.c | 33 +++++++++++++++++++++++++++++++--
 qapi/machine.json          | 22 +++++++++++++++++++++-
 qapi/pragma.json           |  1 +
 4 files changed, 82 insertions(+), 4 deletions(-)

diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index c3e55ef9e9..0e32e6201f 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -169,7 +169,35 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict)
         return;
     }
 
-    monitor_printf(mon, "balloon: actual=%" PRId64 "\n", info->actual >> 20);
+    monitor_printf(mon, "balloon: actual=%" PRId64, info->actual >> 20);
+    monitor_printf(mon, " max_mem=%" PRId64, info->max_mem >> 20);
+    if (info->has_total_mem) {
+        monitor_printf(mon, " total_mem=%" PRId64, info->total_mem >> 20);
+    }
+    if (info->has_free_mem) {
+        monitor_printf(mon, " free_mem=%" PRId64, info->free_mem >> 20);
+    }
+
+    if (info->has_mem_swapped_in) {
+        monitor_printf(mon, " mem_swapped_in=%" PRId64, info->mem_swapped_in);
+    }
+    if (info->has_mem_swapped_out) {
+        monitor_printf(mon, " mem_swapped_out=%" PRId64, info->mem_swapped_out);
+    }
+    if (info->has_major_page_faults) {
+        monitor_printf(mon, " major_page_faults=%" PRId64,
+                       info->major_page_faults);
+    }
+    if (info->has_minor_page_faults) {
+        monitor_printf(mon, " minor_page_faults=%" PRId64,
+                       info->minor_page_faults);
+    }
+    if (info->has_last_update) {
+        monitor_printf(mon, " last_update=%" PRId64,
+                       info->last_update);
+    }
+
+    monitor_printf(mon, "\n");
 
     qapi_free_BalloonInfo(info);
 }
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 746f07c4d2..a41854b902 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -804,8 +804,37 @@ static uint64_t virtio_balloon_get_features(VirtIODevice *vdev, uint64_t f,
 static void virtio_balloon_stat(void *opaque, BalloonInfo *info)
 {
     VirtIOBalloon *dev = opaque;
-    info->actual = get_current_ram_size() - ((uint64_t) dev->actual <<
-                                             VIRTIO_BALLOON_PFN_SHIFT);
+    ram_addr_t ram_size = get_current_ram_size();
+    info->actual = ram_size - ((uint64_t) dev->actual <<
+                               VIRTIO_BALLOON_PFN_SHIFT);
+
+    info->max_mem = ram_size;
+
+    if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
+          dev->stats_last_update)) {
+        return;
+    }
+
+    info->last_update = dev->stats_last_update;
+    info->has_last_update = true;
+
+    info->mem_swapped_in = dev->stats[VIRTIO_BALLOON_S_SWAP_IN];
+    info->has_mem_swapped_in = info->mem_swapped_in >= 0 ? true : false;
+
+    info->mem_swapped_out = dev->stats[VIRTIO_BALLOON_S_SWAP_OUT];
+    info->has_mem_swapped_out = info->mem_swapped_out >= 0 ? true : false;
+
+    info->major_page_faults = dev->stats[VIRTIO_BALLOON_S_MAJFLT];
+    info->has_major_page_faults = info->major_page_faults >= 0 ? true : false;
+
+    info->minor_page_faults = dev->stats[VIRTIO_BALLOON_S_MINFLT];
+    info->has_minor_page_faults = info->minor_page_faults >= 0 ? true : false;
+
+    info->free_mem = dev->stats[VIRTIO_BALLOON_S_MEMFREE];
+    info->has_free_mem = info->free_mem >= 0 ? true : false;
+
+    info->total_mem = dev->stats[VIRTIO_BALLOON_S_MEMTOT];
+    info->has_total_mem = info->total_mem >= 0 ? true : false;
 }
 
 static void virtio_balloon_to_target(void *opaque, ram_addr_t target)
diff --git a/qapi/machine.json b/qapi/machine.json
index 604b686e59..15f5f86683 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -1056,9 +1056,29 @@
 # @actual: the logical size of the VM in bytes
 #     Formula used: logical_vm_size = vm_ram_size - balloon_size
 #
+# @last_update: time when stats got updated from guest
+#
+# @mem_swapped_in: number of pages swapped in within the guest
+#
+# @mem_swapped_out: number of pages swapped out within the guest
+#
+# @major_page_faults: number of major page faults within the guest
+#
+# @minor_page_faults: number of minor page faults within the guest
+#
+# @free_mem: amount of memory (in bytes) free in the guest
+#
+# @total_mem: amount of memory (in bytes) visible to the guest
+#
+# @max_mem: amount of memory (in bytes) assigned to the guest
+#
 # Since: 0.14
 ##
-{ 'struct': 'BalloonInfo', 'data': {'actual': 'int' } }
+{ 'struct': 'BalloonInfo',
+  'data': {'actual': 'int', '*last_update': 'int', '*mem_swapped_in': 'int',
+           '*mem_swapped_out': 'int', '*major_page_faults': 'int',
+           '*minor_page_faults': 'int', '*free_mem': 'int',
+           '*total_mem': 'int', 'max_mem': 'int' } }
 
 ##
 # @query-balloon:
diff --git a/qapi/pragma.json b/qapi/pragma.json
index 7f810b0e97..325e684411 100644
--- a/qapi/pragma.json
+++ b/qapi/pragma.json
@@ -35,6 +35,7 @@
     'member-name-exceptions': [     # visible in:
         'ACPISlotType',             # query-acpi-ospm-status
        'AcpiTableOptions',         # -acpitable
+        'BalloonInfo',              # query-balloon
         'BlkdebugEvent',            # blockdev-add, -blockdev
         'BlkdebugSetStateOptions',  # blockdev-add, -blockdev
         'BlockDeviceInfo',          # query-block