.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 28, 2023
.Dt ZPOOL-ATTACH 8
.Os
.
.Sh NAME
.Nm zpool-attach
.Nd attach new device to existing ZFS vdev
.Sh SYNOPSIS
.Nm zpool
.Cm attach
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.
.Sh DESCRIPTION
Attaches
.Ar new_device
to the existing
.Ar device .
The behavior differs depending on whether the existing
.Ar device
is a RAID-Z device or a mirror/plain device.
.Pp
If the existing device is a mirror or plain device
.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
the new device will be mirrored with the existing device, a resilver will be
initiated, and the new device will contribute to additional redundancy once the
resilver completes.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately and any running scrub is cancelled.
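.Pp
For example, assuming a pool named
.Ar tank
in which a plain disk
.Li sda
is to be mirrored with a new disk
.Li sdb
.Pq all names are purely illustrative ,
one might run:
.Bd -literal -compact -offset Ds
# zpool attach tank sda sdb
.Ed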
.Pp
If the existing device is a RAID-Z device
.Pq e.g. specified as Qq Ar raidz2-0 ,
the new device will become part of that RAID-Z group.
A "raidz expansion" will be initiated, and once the expansion completes,
the new device will contribute additional space to the RAID-Z group.
The expansion entails reading all allocated space from existing disks in the
RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
the newly added
.Ar device ) .
Its progress can be monitored with
.Nm zpool Cm status .
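.Pp
For example, a new disk
.Li sdf
.Pq an illustrative name
could be attached to the RAID-Z group
.Ar raidz2-0
of the pool
.Ar tank ,
and the resulting expansion then monitored, with:
.Bd -literal -compact -offset Ds
# zpool attach tank raidz2-0 sdf
# zpool status tank
.Ed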
.Pp
Data redundancy is maintained during and after the expansion.
If a disk fails while the expansion is in progress, the expansion pauses until
the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
and waiting for reconstruction to complete).
Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion).
A RAID-Z vdev can be expanded multiple times.
.Pp
After the expansion completes, old blocks retain their old data-to-parity
ratio
.Pq e.g. 5-wide RAID-Z2 has 3 data and 2 parity
but are distributed among the larger set of disks.
New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide
RAID-Z2 which has been expanded once to 6-wide has 4 data and 2 parity).
However, the vdev's assumed parity ratio does not change, so slightly less
space than is expected may be reported for newly-written blocks, according to
.Nm zfs Cm list ,
.Nm df ,
.Nm ls Fl s ,
and similar tools.
.Pp
A pool-wide scrub is initiated at the end of the expansion in order to verify
the checksums of all blocks which have been copied during the expansion.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 7
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
See the example following this list.
.It Fl s
When attaching to a mirror or plain device, the
.Ar new_device
is reconstructed sequentially to restore redundancy as quickly as possible.
Checksums are not verified during sequential reconstruction so a scrub is
started when the resilver completes.
.It Fl w
Waits until
.Ar new_device
has finished resilvering or expanding before returning.
.El
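.Pp
For example, reusing the illustrative
.Ar tank
pool and disk names from above, a new mirror member can be attached with an
explicit 4 KiB sector size
.Pq Sy ashift Ns = Ns Sy 12 ,
or attached using sequential reconstruction while waiting for the resilver to
complete:
.Bd -literal -compact -offset Ds
# zpool attach -o ashift=12 tank sda sdb
# zpool attach -sw tank sda sdc
.Ed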
.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8