mirror_zfs/cmd/zed/agents
commit d441e85dd7 by Brian Behlendorf
Add support for autoexpand property
While the autoexpand property may seem like a small feature, it
depends on a significant amount of system infrastructure.  Enough
of that infrastructure is now in place that, with a few modifications
for Linux, it can be supported.

Auto-expand works as follows: when a block device is modified
(resized, closed after being opened read/write, etc.) a change
uevent is generated for udev.  The ZED, which is monitoring udev
events, passes the change event along to zfs_deliver_dle() if the
disk or partition contains a zfs_member as identified by blkid.
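
As a rough illustration of that flow, here is a minimal libudev loop
of the kind the ZED runs.  The deliver_change() function is a
hypothetical stand-in for the real zfs_deliver_dle(), which receives
an event nvlist rather than a device node.

```c
/*
 * Minimal sketch: watch block devices for "change" uevents and
 * forward those which blkid has tagged as zfs_member.
 */
#include <libudev.h>
#include <poll.h>
#include <string.h>

extern void deliver_change(const char *devnode);    /* hypothetical */

void
watch_block_changes(void)
{
    struct udev *udev = udev_new();
    struct udev_monitor *mon =
        udev_monitor_new_from_netlink(udev, "udev");
    struct pollfd pfd;

    udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
    udev_monitor_enable_receiving(mon);
    pfd.fd = udev_monitor_get_fd(mon);
    pfd.events = POLLIN;

    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)
            continue;

        struct udev_device *dev = udev_monitor_receive_device(mon);
        if (dev == NULL)
            continue;

        const char *action = udev_device_get_action(dev);
        const char *fstype =
            udev_device_get_property_value(dev, "ID_FS_TYPE");

        if (action != NULL && strcmp(action, "change") == 0 &&
            fstype != NULL && strcmp(fstype, "zfs_member") == 0)
            deliver_change(udev_device_get_devnode(dev));

        udev_device_unref(dev);
    }
}
```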

From here the device is matched against all imported pool vdevs
using the vdev_guid, which was read from the label by blkid.  If
a match is found, the ZED reopens the pool vdev.  This re-opening
is important because it allows the vdev to be briefly closed so
the disk partition table can be re-read.  Otherwise, it wouldn't
be possible to report the maximum possible expansion size.
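
The guid match itself is a recursive walk of each imported pool's
configuration nvlist.  A simplified sketch (the real matching in
zfs_mod.c also has to consider spare and cache devices):

```c
/*
 * Recursively search a pool's vdev tree (from zpool_get_config())
 * for the vdev whose guid matches the one blkid read from the label.
 */
#include <libzfs.h>

static nvlist_t *
find_vdev_by_guid(nvlist_t *nv, uint64_t search_guid)
{
    uint64_t guid;
    nvlist_t **child;
    uint_t c, children;

    if (nvlist_lookup_uint64(nv, ZPOOL_CONFIG_GUID, &guid) == 0 &&
        guid == search_guid)
        return (nv);

    if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
        &child, &children) == 0) {
        for (c = 0; c < children; c++) {
            nvlist_t *match =
                find_vdev_by_guid(child[c], search_guid);
            if (match != NULL)
                return (match);
        }
    }

    return (NULL);
}
```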

Finally, if the autoexpand property is set to on, a vdev expansion
will be attempted.  After performing some sanity checks on the disk
to verify that it is safe to expand, the primary partition (-part1)
will be expanded and the partition table updated.  The partition
is then re-opened (again) to detect the updated size, which allows
the new capacity to be used.
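
In-kernel, the re-read is now done with blkdev_reread_part() (noted
in the change list below); the user-space equivalent of the same
nudge is the BLKRRPART ioctl, sketched here with error handling
elided:

```c
/*
 * Ask the kernel to re-read a disk's partition table, then query
 * the updated capacity.  BLKRRPART fails with EBUSY while any
 * partition is held open, which is why the vdev is briefly closed.
 */
#include <sys/ioctl.h>
#include <linux/fs.h>       /* BLKRRPART, BLKGETSIZE64 */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

static uint64_t
reread_and_size(const char *whole_disk)    /* e.g. "/dev/sdb" */
{
    uint64_t bytes = 0;
    int fd = open(whole_disk, O_RDWR);

    if (fd < 0)
        return (0);

    (void) ioctl(fd, BLKRRPART);            /* re-read partition table */
    (void) ioctl(fd, BLKGETSIZE64, &bytes); /* new capacity in bytes */
    (void) close(fd);

    return (bytes);
}
```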

In order to make all of the above possible, the following changes
were required:

* Updated the zpool_expand_001_pos and zpool_expand_003_pos tests.
  These tests now create a pool which is layered on loopback,
  scsi_debug, and file vdevs.  This allows for testing of a non-
  partitioned block device (loopback), a partitioned block device
  (scsi_debug), and a file which does not receive udev change
  events.  This provides better test coverage, and by removing
  the layering on ZFS volumes the issues surrounding layering
  one pool on another are avoided.

* zpool_find_vdev_by_physpath() updated to accept a vdev guid.
  This allows for matching by guid rather than path, which is a
  more reliable way for the ZED to reference a vdev.

* Fixed zfs_zevent_wait() signal handling, which could result
  in the ZED spinning when a signal was not handled (sketched
  below, after this list).

* Removed vdev_disk_rrpart() functionality, which can be abandoned
  in favor of the kernel-provided blkdev_reread_part() function.

* Added a rwlock which is held as a writer while a disk is being
  reopened (also sketched below, after this list).  This is
  important to prevent errors from occurring for any configuration
  related IOs which bypass the SCL_ZIO lock.  The
  zpool_reopen_007_pos.ksh test case was added to verify IO
  errors are never observed when reopening.  This is not expected
  to impact IO performance.

The following additional fixes aren't critical but were discovered
and resolved in the course of developing this functionality:

* Added PHYS_PATH="/dev/zvol/dataset" to the vdev configuration for
  ZFS volumes.  This serves as a unique physical path; while the
  volumes are no longer used in the test cases for other reasons,
  this improvement was still included.
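
For the zfs_zevent_wait() fix above, a minimal sketch of the shape of
the change follows.  The state names and the zevent_pending()
predicate are illustrative rather than the actual fm.c code, but the
key behavior is real: cv_wait_sig() returns 0 when interrupted by a
signal, and that must be surfaced as EINTR instead of silently
retried.

```c
/*
 * Kernel-side sketch (SPL primitives, names illustrative): report
 * EINTR when cv_wait_sig() is interrupted rather than re-entering
 * the wait, which is what allowed the ZED to spin.
 */
static kmutex_t zevent_lock;
static kcondvar_t zevent_cv;
static int zevent_waiters;

extern boolean_t zevent_pending(void);    /* hypothetical predicate */

static int
zevent_wait_sketch(void)
{
    int error = 0;

    mutex_enter(&zevent_lock);
    zevent_waiters++;

    while (!zevent_pending()) {
        if (cv_wait_sig(&zevent_cv, &zevent_lock) == 0) {
            /* Interrupted by a signal: report it, don't spin. */
            error = SET_ERROR(EINTR);
            break;
        }
    }

    zevent_waiters--;
    mutex_exit(&zevent_lock);

    return (error);
}
```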
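
The reopen rwlock can be sketched in the same hedged way; the type
and helper names here are hypothetical, but the pattern is the one
the commit describes: IOs take the lock as readers, while the brief
close/open cycle takes it as a writer.

```c
/*
 * Hypothetical sketch of the reopen serialization: a writer lock is
 * held across the close/open cycle so that IOs which bypass the
 * SCL_ZIO lock never see a closed device handle.
 */
extern void *open_device(void);          /* hypothetical helpers */
extern void close_device(void *);
extern int submit_io(void *, void *);

typedef struct disk_sketch {
    krwlock_t   ds_lock;     /* serializes handle use vs. reopen */
    void        *ds_handle;  /* open block device handle */
} disk_sketch_t;

static void
disk_reopen(disk_sketch_t *ds)
{
    rw_enter(&ds->ds_lock, RW_WRITER);   /* exclude all IO */
    close_device(ds->ds_handle);
    ds->ds_handle = open_device();       /* partition table re-read */
    rw_exit(&ds->ds_lock);
}

static int
disk_io(disk_sketch_t *ds, void *io)
{
    int error;

    rw_enter(&ds->ds_lock, RW_READER);   /* concurrent with other IO */
    error = submit_io(ds->ds_handle, io);
    rw_exit(&ds->ds_lock);

    return (error);
}
```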

Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Signed-off-by: Sara Hartse <sara.hartse@delphix.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #120
Closes #2437
Closes #5771
Closes #7366
Closes #7582
Closes #7629
Committed: 2018-07-23 15:40:15 -07:00
File              Last commit                                    Date
fmd_api.c         Add illumos FMD ZFS logic to ZED -- phase 2    2016-11-07 15:01:38 -08:00
fmd_api.h         Add illumos FMD ZFS logic to ZED -- phase 2    2016-11-07 15:01:38 -08:00
fmd_serd.c        Fix coverity defects: 154021                   2016-11-08 14:34:52 -08:00
fmd_serd.h        Add illumos FMD ZFS logic to ZED -- phase 2    2016-11-07 15:01:38 -08:00
README.md         Add illumos FMD ZFS logic to ZED -- phase 2    2016-11-07 15:01:38 -08:00
zfs_agents.c      Various ZED fixes                              2017-12-08 16:58:41 -08:00
zfs_agents.h      Various ZED fixes                              2017-12-08 16:58:41 -08:00
zfs_diagnosis.c   Update build system and packaging              2018-05-29 16:00:33 -07:00
zfs_mod.c         Add support for autoexpand property            2018-07-23 15:40:15 -07:00
zfs_retire.c      Various ZED fixes                              2017-12-08 16:58:41 -08:00

Fault Management Logic for ZED

The integration of Fault Management Daemon (FMD) logic from illumos is being deployed in three phases. This logic is encapsulated in several software modules inside ZED.

ZED+FM Phase 1

All of the phase 1 work is in the current master branch. Phase 1 work includes:

  • Add new paths to the persistent VDEV label for device matching.
  • Add a disk monitor for generating disk-add and disk-change events.
  • Add support for automated VDEV auto-online, auto-replace and auto-expand.
  • Expand the statechange event to include all VDEV state transitions.

ZED+FM Phase 2 (WIP)

The phase 2 work primarily entails the Diagnosis Engine and the Retire Agent modules. It also includes infrastructure to support a crude FMD environment to host these modules. For additional information see the FMD Components in ZED and Implementation Notes sections below.

ZED+FM Phase 3

Future work will add additional functionality and will likely include:

  • Add FMD module garbage collection (periodically call fmd_module_gc()).
  • Add real module property retrieval (currently hard-coded in accessors).
  • Additional diagnosis telemetry (like latency outliers and SMART data).
  • Export FMD module statistics.
  • Zedlet parallel execution and resiliency (add watchdog).

ZFS Fault Management Overview

The primary purpose of ZFS fault management is automated diagnosis and isolation of VDEV faults. A fault is something we can associate with an impact (e.g. loss of data redundancy) and a corrective action (e.g. offline or replace a disk). A typical ZFS fault management stack consists of error detectors (e.g. zfs_ereport_post()), a disk monitor, a diagnosis engine and response agents.

After detecting a software error, the ZFS kernel module sends error events to the ZED user daemon, which in turn routes the events to its internal FMA modules based on their event subscriptions. Likewise, if a disk is added or changed in the system, the disk monitor sends disk events which are consumed by a response agent.
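
A hedged sketch of that subscription-based routing follows; the real table is hard-coded in zfs_agent_dispatch(), as noted under Implementation Notes, and the receiver names here only mirror the module source files.

```c
/*
 * Illustrative subscription-based routing: each module registers the
 * event class prefixes it cares about, and every event is fanned out
 * to all matching subscribers.
 */
#include <string.h>

typedef void (*agent_event_f)(const char *class, const void *event);

/* Hypothetical receivers named after the module sources. */
extern void zfs_diagnosis_recv(const char *, const void *);
extern void zfs_retire_recv(const char *, const void *);

static const struct subscription {
    const char      *sub_prefix;
    agent_event_f   sub_deliver;
} subs[] = {
    { "ereport.fs.zfs.",    zfs_diagnosis_recv },
    { "resource.fs.zfs.",   zfs_retire_recv },
};

static void
dispatch(const char *class, const void *event)
{
    for (size_t i = 0; i < sizeof (subs) / sizeof (subs[0]); i++) {
        if (strncmp(class, subs[i].sub_prefix,
            strlen(subs[i].sub_prefix)) == 0)
            subs[i].sub_deliver(class, event);
    }
}
```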

FMD Components in ZED

There are three FMD modules (aka agents) that are now built into ZED.

  1. A Diagnosis Engine module (agents/zfs_diagnosis.c)
  2. A Retire Agent module (agents/zfs_retire.c)
  3. A Disk Add Agent module (agents/zfs_mod.c)

To begin with, the Diagnosis Engine consumes per-vdev I/O and checksum ereports and feeds them into a Soft Error Rate Discrimination (SERD) algorithm, which generates a corresponding fault diagnosis when the tracked VDEV encounters N events within a given time window T. The initial N and T values for the SERD algorithm are estimates inherited from illumos (10 errors in 10 minutes).
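
A minimal sliding-window sketch of the SERD idea (this is not the fmd_serd.c implementation): keep the timestamps of the last N events in a ring, and fire when the oldest of those N falls within T of the newest.

```c
/*
 * SERD-style sliding window sketch: fire when N events occur within
 * a time window T.  Defaults mirror the inherited illumos estimates
 * (10 errors in 10 minutes).
 */
#include <stdbool.h>
#include <stdint.h>

#define SERD_N  10                              /* events to trip */
#define SERD_T  (600ULL * 1000000000ULL)        /* window in nanoseconds */

typedef struct serd_sketch {
    uint64_t    se_times[SERD_N];   /* ring of event timestamps */
    int         se_next;            /* slot for the next event */
    int         se_count;           /* events recorded so far */
} serd_sketch_t;

/* Record one event; returns true when the engine fires. */
static bool
serd_record(serd_sketch_t *se, uint64_t now_ns)
{
    se->se_times[se->se_next] = now_ns;
    se->se_next = (se->se_next + 1) % SERD_N;
    if (se->se_count < SERD_N)
        se->se_count++;

    /* Once full, se_next points at the oldest of the last N events. */
    return (se->se_count == SERD_N &&
        now_ns - se->se_times[se->se_next] <= SERD_T);
}
```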

In turn, the Retire Agent responds to diagnosed faults by isolating the faulty VDEV. It will notify the ZFS kernel module of the new VDEV state (degraded or faulted). The Retire Agent is also responsible for managing hot spares across all pools. When it encounters a device fault or a device removal, it will replace the device with an appropriate spare if one is available.

Finally, the Disk Add Agent responds to events from a libudev disk monitor (EC_DEV_ADD or EC_DEV_STATUS) and will online, replace or expand the associated VDEV. On the illumos platform this agent is known as zfs_mod, the Sysevent Loadable Module (SLM). The added disk is matched to a specific VDEV using its device id, physical path or VDEV GUID.
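
One natural precedence for that matching, following the order listed above, can be sketched as below; the finder helpers are hypothetical stand-ins for the libzfs lookups the agent performs.

```c
/*
 * Hypothetical sketch of disk-to-VDEV matching: prefer the device
 * id, then the physical path, then fall back to the vdev GUID read
 * from the label.
 */
#include <libzfs.h>

extern nvlist_t *find_vdev_by_devid(zpool_handle_t *, const char *);
extern nvlist_t *find_vdev_by_physpath(zpool_handle_t *, const char *);
extern nvlist_t *find_vdev_by_guid_top(zpool_handle_t *, uint64_t);

static nvlist_t *
match_vdev(zpool_handle_t *zhp, const char *devid,
    const char *physpath, uint64_t guid)
{
    nvlist_t *vdev;

    if (devid != NULL &&
        (vdev = find_vdev_by_devid(zhp, devid)) != NULL)
        return (vdev);

    if (physpath != NULL &&
        (vdev = find_vdev_by_physpath(zhp, physpath)) != NULL)
        return (vdev);

    return (find_vdev_by_guid_top(zhp, guid));  /* may return NULL */
}
```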

Note that the auto-replace feature (aka hot plug) is opt-in and you must set the pool's autoreplace property to enable it. The new disk will be matched to the corresponding leaf VDEV by physical location and labeled with a GPT partition before replacing the original VDEV in the pool.

Implementation Notes

  • The FMD module API required for logic modules is emulated and implemented in the fmd_api.c and fmd_serd.c source files. This support includes module registration, memory allocation, module property accessors, basic case management, one-shot timers and SERD engines. For detailed information on the FMD module API, see the document -- "Fault Management Daemon Programmer's Reference Manual".

  • The event subscriptions for the modules (located in a module specific configuration file on illumos) are currently hard-coded into the ZED zfs_agent_dispatch() function.

  • The FMD modules are called one at a time from a single thread that consumes events queued to the modules. These events are sourced from the normal ZED events and also include events posted from the diagnosis engine and the libudev disk event monitor.

  • The FMD code modules have minimal changes and were intentionally left as similar as possible to their upstream source files.

  • The sysevent namespace in ZED differs from illumos. For example:

    • illumos uses "resource.sysevent.EC_zfs.ESC_ZFS_vdev_remove"
    • Linux uses "sysevent.fs.zfs.vdev_remove"
  • The FMD Modules port was produced by Intel Federal, LLC under award number B609815 between the U.S. Department of Energy (DOE) and Intel Federal, LLC.