zfsonlinux/zfs-patches/0012-Zpool-iostat-remove-latency-queue-scaling.patch
04a710dd91 update/rebase to zfs-0.7.12 with patches from ZOL (Stoiko Ivanov, 2018-11-14 18:27:04 +01:00)
Reorder patches, so that the upstream changeset comes last.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Gregor Kopka <mailfrom-github@kopka.net>
Date: Wed, 26 Sep 2018 01:29:16 +0200
Subject: [PATCH] Zpool iostat: remove latency/queue scaling

Bandwidth and iops are averages per second, while the *_wait values are
averages per request for latencies or, for queue depths, an instantaneous
measurement taken at the end of an interval (according to man zpool).

When calculating the first two it makes sense to compute
x/interval_duration (x being the increase in total bytes or in the number
of requests over the interval, interval_duration in seconds) to scale from
amount/interval to amount/second.

Applying the same math to the latter (*_wait latencies/queues) is wrong,
however, as those values have no interval_duration component: they are
either time/requests (yielding average_time/request) or already an
absolute number.

Because of this bug, the only correct continuous *_wait figures for both
latencies and queue depths from 'zpool iostat -l/q' were those taken with
duration=1, where the wrong math cancels itself out (x/1 is a no-op).

This removes temporal scaling from the latency and queue depth figures.
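
As a minimal, stand-alone sketch of the arithmetic above (the numbers and
the program are made up for illustration, not code from zpool_main.c):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	double interval = 5.0;			/* seconds covered by the sample */
	uint64_t bytes_delta = 52428800;	/* bytes completed in the interval */
	uint64_t ops_delta = 500;		/* requests completed in the interval */
	double avg_wait_us = 10000.0;		/* already time per request */
	uint64_t queue_depth = 12;		/* instantaneous sample at interval end */

	/*
	 * Throughput-style counters accumulate over the interval, so
	 * dividing by the interval length yields a per-second rate.
	 */
	printf("bandwidth: %.0f B/s\n", bytes_delta / interval);
	printf("iops: %.0f /s\n", ops_delta / interval);

	/*
	 * Latencies and queue depths carry no interval component, so
	 * they are reported as-is; dividing them by the interval (the
	 * old behaviour) would understate them 5x in this example.
	 */
	printf("avg wait: %.0f us\n", avg_wait_us);
	printf("queue depth: %llu\n", (unsigned long long)queue_depth);
	return (0);
}

With interval=5 this prints 100 iops and an average wait of 10000 us; the
old scaling would instead have reported 2000 us, which matches reality
only when the interval is one second.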
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Gregor Kopka <gregor@kopka.net>
Closes #7945
Closes #7694
---
cmd/zpool/zpool_main.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/cmd/zpool/zpool_main.c b/cmd/zpool/zpool_main.c
index a4fd0321..591e2e5c 100644
--- a/cmd/zpool/zpool_main.c
+++ b/cmd/zpool/zpool_main.c
@@ -3493,7 +3493,7 @@ single_histo_average(uint64_t *histo, unsigned int buckets)
static void
print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
- nvlist_t *newnv, double scale)
+ nvlist_t *newnv)
{
int i;
uint64_t val;
@@ -3523,7 +3523,7 @@ print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
format = ZFS_NICENUM_1024;
for (i = 0; i < ARRAY_SIZE(names); i++) {
- val = nva[i].data[0] * scale;
+ val = nva[i].data[0];
print_one_stat(val, format, column_width, cb->cb_scripted);
}
@@ -3532,7 +3532,7 @@ print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
static void
print_iostat_latency(iostat_cbdata_t *cb, nvlist_t *oldnv,
- nvlist_t *newnv, double scale)
+ nvlist_t *newnv)
{
int i;
uint64_t val;
@@ -3562,7 +3562,7 @@ print_iostat_latency(iostat_cbdata_t *cb, nvlist_t *oldnv,
/* Print our avg latencies on the line */
for (i = 0; i < ARRAY_SIZE(names); i++) {
/* Compute average latency for a latency histo */
- val = single_histo_average(nva[i].data, nva[i].count) * scale;
+ val = single_histo_average(nva[i].data, nva[i].count);
print_one_stat(val, format, column_width, cb->cb_scripted);
}
free_calc_stats(nva, ARRAY_SIZE(names));
@@ -3701,9 +3701,9 @@ print_vdev_stats(zpool_handle_t *zhp, const char *name, nvlist_t *oldnv,
print_iostat_default(calcvs, cb, scale);
}
if (cb->cb_flags & IOS_LATENCY_M)
- print_iostat_latency(cb, oldnv, newnv, scale);
+ print_iostat_latency(cb, oldnv, newnv);
if (cb->cb_flags & IOS_QUEUES_M)
- print_iostat_queues(cb, oldnv, newnv, scale);
+ print_iostat_queues(cb, oldnv, newnv);
if (cb->cb_flags & IOS_ANYHISTO_M) {
printf("\n");
print_iostat_histos(cb, oldnv, newnv, scale, name);