Commit Graph

11 Commits

Author SHA1 Message Date
Paul Dagnelie
58bf6afd3f
Fix cpu hotplug atomic sleep issue
We move the spinlock unlock before the thread creation. This should be
safe because the thread creation code doesn't actually manipulate any
taskq data structures; that's done by the thread once it's created.

We also remove the assertion that the maxthreads is the current threads
plus one; that assertion could fail if multiple hotplug events come in
quick succession, and the first new taskq thread hasn't had a chance to
start processing yet.
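
A minimal sketch of the reordering, with made-up names rather than the actual spl-taskq code; the point is that thread creation can sleep and therefore must happen outside the spinlock:

```c
/* Illustrative sketch only -- hypothetical types, not the spl-taskq code. */
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/err.h>

struct demo_taskq {
	spinlock_t	tq_lock;
	int		tq_nspawning;	/* threads requested but not yet running */
};

static int
demo_worker(void *arg)
{
	/* A real worker would pull entries off the taskq here. */
	return (0);
}

static int
demo_spawn_thread(struct demo_taskq *tq)
{
	struct task_struct *t;

	spin_lock(&tq->tq_lock);
	tq->tq_nspawning++;		/* bookkeeping stays under the lock */
	spin_unlock(&tq->tq_lock);	/* unlock *before* creating the thread ... */

	/*
	 * ... because kthread_run() may sleep, and sleeping while holding
	 * a spinlock is the "atomic sleep" problem the commit fixes.
	 */
	t = kthread_run(demo_worker, tq, "demo_taskq");
	return (IS_ERR(t) ? PTR_ERR(t) : 0);
}
```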

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>

Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #12714
2021-11-02 10:23:48 -06:00
Brian Behlendorf
2cdd75bed7 cppcheck: remove redundant ASSERTs
The ASSERT that the passed pointer isn't NULL appears after the
pointer has already been dereferenced.  Remove the redundant check.
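
The pattern being removed looks roughly like this (hedged sketch with made-up names):

```c
/* Hedged illustration of the cppcheck finding; names are illustrative. */
#include <sys/debug.h>	/* ASSERT() in the SPL */

typedef struct demo { int d_count; } demo_t;

static int
demo_redundant(demo_t *dp)
{
	int n = dp->d_count;	/* a NULL pointer would already have faulted here */
	ASSERT(dp != NULL);	/* redundant -- this is the kind of check removed */
	return (n);
}
```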

Reviewed-by: Ryan Moeller <ryan@ixsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #11508
2021-01-26 16:12:10 -08:00
Matthew Macy
a4134da2b2
spl-taskq: Make sure thread tsd hash entry is cleared
Like any other thread created by thread_create() we need to call
thread_exit() to properly clean it up.  In particular, this ensures the
tsd hash for the thread is cleared.
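
A hedged sketch of the shape of the fix (the worker body is hypothetical; the point is the terminating thread_exit() call):

```c
#include <sys/thread.h>	/* thread_create() / thread_exit() in the SPL */

static void
demo_taskq_thread(void *arg)
{
	/* ... service taskq entries until told to stop ... */

	/*
	 * Returning directly would leave the thread's tsd hash entry in
	 * place; thread_exit() tears it down before the thread dies.
	 */
	thread_exit();
}
```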

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matt Macy <mmacy@FreeBSD.org>
Closes #11512
2021-01-25 11:18:28 -08:00
Paul Dagnelie
60a4c7d2a2
Implement memory and CPU hotplug
ZFS currently doesn't react to hotplugging cpu or memory into the 
system in any way. This patch changes that by adding logic to the ARC 
that allows the system to take advantage of new memory that is added 
for caching purposes. It also adds logic to the taskq infrastructure 
to support dynamically expanding the number of threads allocated to a 
taskq.
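
A hedged sketch of how a CPU-hotplug hook of this kind can be wired up on Linux; the callback name and the taskq-resizing helper are assumptions for illustration, not the actual OpenZFS code:

```c
#include <linux/cpuhotplug.h>

/* Hypothetical helper: raise the taskq's thread cap and spawn the
 * extra worker for the CPU that just appeared. */
static void
demo_taskq_expand(void)
{
}

/* Runs as each CPU comes online. */
static int
demo_taskq_cpu_online(unsigned int cpu)
{
	demo_taskq_expand();
	return (0);
}

/* Register the callback; CPUHP_AP_ONLINE_DYN allocates a dynamic slot. */
static int
demo_hotplug_register(void)
{
	int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
	    "demo/taskq:online", demo_taskq_cpu_online, NULL);
	return (ret < 0 ? ret : 0);
}
```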

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: Matthew Ahrens <matthew.ahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #11212
2020-12-10 14:09:23 -08:00
Brian Behlendorf
d0249a4bd0
Replace ZFS on Linux references with OpenZFS
This change updates the documentation to refer to the project
as OpenZFS instead of ZFS on Linux.  Web links have been updated
to refer to https://github.com/openzfs/zfs.  The extraneous
zfsonlinux.org web links in the ZED and SPL sources have been
dropped.

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #11007
2020-10-08 20:10:13 -07:00
Matthew Ahrens
b3212d2fa6
Improve performance of zio_taskq_member
__zio_execute() calls zio_taskq_member() to determine if we are running
in a zio interrupt taskq, in which case we may need to switch to
processing this zio in a zio issue taskq.  The call to
zio_taskq_member() can become a performance bottleneck when we are
processing a high rate of zio's.

zio_taskq_member() calls taskq_member() on each of the zio interrupt
taskqs, of which there are 21.  This is slow because each call to
taskq_member() does tsd_get(taskq_tsd), which on Linux is relatively
slow.

This commit improves the performance of zio_taskq_member() by having it
cache the value of tsd_get(taskq_tsd), reducing the number of those
calls to 1/21st of what it was before.

In a test case running `zfs send -c >/dev/null` of a filesystem with
small blocks (average 2.5KB/block), zio_taskq_member() was using 6.7% of
one CPU, and with this change it is reduced to 1.3%.  Overall time to
perform the `zfs send` was reduced by about 10% (throughput rose from
~150,000 to ~165,000 blocks/sec).
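
A hedged sketch of the caching idea, with illustrative names: do the expensive tsd_get(taskq_tsd) lookup once, then compare the cached pointer against each candidate taskq instead of calling taskq_member() per taskq.

```c
#include <sys/types.h>
#include <sys/taskq.h>
#include <sys/tsd.h>

/* taskq_tsd is the SPL's per-thread taskq TSD key (per the commit text);
 * the helper and loop below are illustrative, not the real zio code. */
static taskq_t *
demo_taskq_of_curthread(void)
{
	return ((taskq_t *)tsd_get(taskq_tsd));	/* one tsd_get() ... */
}

static boolean_t
demo_zio_taskq_member(taskq_t **tqs, int ntqs)
{
	taskq_t *self = demo_taskq_of_curthread();
	int i;

	for (i = 0; i < ntqs; i++) {
		if (tqs[i] == self)		/* ... then cheap pointer compares */
			return (B_TRUE);
	}
	return (B_FALSE);
}
```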

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Closes #10070
2020-03-03 10:29:38 -08:00
Ubuntu
7cf1fe6331 cppcheck: (error) Uninitialized variable
As of cppcheck 1.82 warnings are issued when using the list_for_each_*
functions with an uninitialized variable.  Functionally, this is fine
but to resolve the warning initialize these variables.
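
A hedged sketch of the kind of change involved (struct and names are made up):

```c
#include <linux/list.h>

struct demo_node {
	struct list_head dn_link;
	int dn_value;
};

/* cppcheck 1.82 flags the cursor as "uninitialized" even though
 * list_for_each_entry() assigns it before any use, so the fix is
 * simply to give it an initial value. */
static int
demo_sum(struct list_head *head)
{
	struct demo_node *dn = NULL;	/* was: struct demo_node *dn; */
	int sum = 0;

	list_for_each_entry(dn, head, dn_link)
		sum += dn->dn_value;
	return (sum);
}
```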

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #9732
2019-12-18 17:24:29 -08:00
Prakash Surya
ae38e00968 Add tracepoints for taskq entry lifetime events
This adds some new DTRACE_PROBE* endpoints so that we can observe taskq
latencies on a system. Additionally, a new "taskqlatency.bt" script is
added to do this observation via "bpftrace". Lastly, a "zfs-trace.sh"
script is added to wrap "bpftrace" with the proper options required to
run and use "taskqlatency.bt".

For example, with these changes in place, a user can run the following:

    $ cd ./contrib/bpftrace
    $ sudo ./zfs-trace.sh taskqlatency.bt
    Attaching 6 probes...
    ^C

Here's some example output, showing latency information for time spent
executing the taskq entry's function:

    @exec_lat_us[dp_sync_taskq, userquota_updates_task]:
    [2, 4)                 5 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [4, 8)                 0 |                                                    |
    [8, 16)                1 |@@@@@@@@@@                                          |
    [16, 32)               2 |@@@@@@@@@@@@@@@@@@@@                                |

    @exec_lat_us[z_wr_int_h, zio_execute]:
    [8, 16)               16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [16, 32)               2 |@@@@@@                                              |

    @exec_lat_us[z_wr_iss_h, zio_execute]:
    [16, 32)               4 |@@@@@@@@@@@@@@@@                                    |
    [32, 64)              13 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [64, 128)              1 |@@@@                                                |

    @exec_lat_us[z_ioctl_int, zio_execute]:
    [2, 4)                 1 |@@@@                                                |
    [4, 8)                11 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [8, 16)                8 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@               |

    @exec_lat_us[dp_sync_taskq, sync_dnodes_task]:
    [2, 4)                 1 |@@@@@@                                              |
    [4, 8)                 7 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
    [8, 16)                8 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [16, 32)               2 |@@@@@@@@@@@@@                                       |
    [32, 64)               4 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
    [64, 128)              1 |@@@@@@                                              |
    [128, 256)             0 |                                                    |
    [256, 512)             1 |@@@@@@                                              |

Here's some example output, showing latency information for time spent
waiting on the taskq, prior to starting execution of entry's function:

    @queue_lat_us[dp_sync_taskq]:
    [2, 4)                 1 |@@@@                                                |
    [4, 8)                 7 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                      |
    [8, 16)                2 |@@@@@@@@                                            |
    [16, 32)               3 |@@@@@@@@@@@@@                                       |
    [32, 64)              12 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [64, 128)              6 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
    [128, 256)             0 |                                                    |
    [256, 512)             1 |@@@@                                                |

    @queue_lat_us[z_wr_iss]:
    [4, 8)                 4 |@@@@                                                |
    [8, 16)               13 |@@@@@@@@@@@@@@@                                     |
    [16, 32)               6 |@@@@@@@                                             |
    [32, 64)               2 |@@                                                  |
    [64, 128)             12 |@@@@@@@@@@@@@@                                      |
    [128, 256)            15 |@@@@@@@@@@@@@@@@@@                                  |
    [256, 512)            33 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@             |
    [512, 1K)             27 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                    |
    [1K, 2K)               7 |@@@@@@@@                                            |
    [2K, 4K)              14 |@@@@@@@@@@@@@@@@                                    |
    [4K, 8K)              14 |@@@@@@@@@@@@@@@@                                    |
    [8K, 16K)             23 |@@@@@@@@@@@@@@@@@@@@@@@@@@@                         |
    [16K, 32K)            43 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

    @queue_lat_us[z_wr_int]:
    [2, 4)                10 |@@@@@                                               |
    [4, 8)                71 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@           |
    [8, 16)               88 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
    [16, 32)              50 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                       |
    [32, 64)              65 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@              |
    [64, 128)             43 |@@@@@@@@@@@@@@@@@@@@@@@@@                           |
    [128, 256)            19 |@@@@@@@@@@@                                         |
    [256, 512)             3 |@                                                   |
    [512, 1K)              1 |                                                    |
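
On the kernel side, the probes fire around the interesting points in an entry's lifetime. A hedged sketch with made-up probe names (the real probe names, arguments, and placement are in the SPL taskq code):

```c
#include <sys/taskq.h>

/* DTRACE_PROBE2() is provided by the SPL's tracing support; the probe
 * names here are hypothetical, for illustration only. */
static void
demo_taskq_run_ent(taskq_t *tq, taskq_ent_t *t)
{
	DTRACE_PROBE2(demo__exec__start, taskq_t *, tq, taskq_ent_t *, t);
	t->tqent_func(t->tqent_arg);	/* execute the queued function */
	DTRACE_PROBE2(demo__exec__end, taskq_t *, tq, taskq_ent_t *, t);
}
```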

Reviewed-by: Brad Lewis <brad.lewis@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Prakash Surya <prakash.surya@delphix.com>
Closes #9525
2019-11-01 13:14:54 -07:00
Brian Behlendorf
10fa254539
Linux 4.14, 4.19, 5.0+ compat: SIMD save/restore
Contrary to initial testing we cannot rely on these kernels to
invalidate the per-cpu FPU state and restore the FPU registers.
Nor can we guarantee that the kernel won't modify the FPU state
which we saved in the task struct.

Therefore, the kfpu_begin() and kfpu_end() functions have been
updated to save and restore the FPU state using our own dedicated
per-cpu FPU state variables.

This has the additional advantage of allowing us to use the FPU
again in user threads.  So we remove the code which was added to
use task queues to ensure some functions ran in kernel threads.
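
A hedged outline of the approach; the structure, sizes, and names below are illustrative, not the actual SIMD compatibility code:

```c
#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

/* Illustrative per-cpu save area: a kernel-module-owned buffer for the
 * FPU/SIMD registers, used instead of trusting the kernel's own
 * per-task FPU state on these kernel versions. */
struct demo_fpu_state {
	u8 regs[512] __attribute__((aligned(64)));
};
static DEFINE_PER_CPU(struct demo_fpu_state, demo_fpu_save_area);

static inline void
demo_kfpu_begin(void)
{
	preempt_disable();	/* stay on this CPU while we own its FPU */
	/* save the current FPU registers into this_cpu_ptr(&demo_fpu_save_area) */
}

static inline void
demo_kfpu_end(void)
{
	/* restore the registers from this_cpu_ptr(&demo_fpu_save_area) */
	preempt_enable();
}
```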

Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #9346
Closes #9403
2019-10-24 10:17:33 -07:00
Matthew Macy
e4f5fa1229 Fix strdup conflict on other platforms
In the FreeBSD kernel the strdup signature is:

```
char	*strdup(const char *__restrict, struct malloc_type *);
```

It's unfortunate that the developers have chosen to change
the signature of libc functions - but it's what I have to
deal with.
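
One common way to paper over such a mismatch (a hedged sketch, not necessarily the fix this commit adopted) is to give the shared code its own name and resolve the platform difference in one place:

```c
/* Hedged sketch only; demo_strdup() is a hypothetical wrapper. */
#ifdef __FreeBSD__
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/libkern.h>
#define	demo_strdup(s)	strdup((s), M_TEMP)	/* kernel strdup takes a malloc_type */
#else
#define	demo_strdup(s)	strdup(s)		/* Linux SPL / libc-style signature */
#endif
```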

Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matt Macy <mmacy@FreeBSD.org>
Closes #9433
2019-10-10 09:47:06 -07:00
Matthew Macy
bced7e3aaa OpenZFS restructuring - move platform specific sources
Move platform specific Linux source under module/os/linux/
and update the build system accordingly.  Additional code
restructuring will follow to make the common code fully
portable.
    
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Reviewed-by: Igor Kozhukhov <igor@dilos.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matthew Macy <mmacy@FreeBSD.org>
Closes #9206
2019-09-06 11:26:26 -07:00