.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\"
.Dd March 16, 2022
.Dt ZPOOL-CREATE 8
.Os
.
.Sh NAME
.Nm zpool-create
.Nd create ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Oo Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value Oc
.Op Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool
.Ar vdev Ns …
.
.Sh DESCRIPTION
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as the underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\  ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy draid ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy draid ,
and
.Sy spare .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section of
.Xr zpoolconcepts 7 .
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem.
However, this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost Ns = Ns Sy enabled .
The administrator must ensure that simultaneous invocations of any combination
of
.Nm zpool Cm replace ,
.Nm zpool Cm create ,
.Nm zpool Cm add ,
or
.Nm zpool Cm labelclear
do not refer to the same device.
Using the same device in two pools will result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
.Fl f .
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool,
or to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently-sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
will not be able to be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool.
The
.Fl d
option and the
.Fl o Ar compatibility
property
.Pq e.g. Fl o Sy compatibility Ns = Ns Ar 2020
can be used to restrict the features that are enabled, so that the
pool can be imported on other releases of ZFS.
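.Pp
For example, the following would create a mirrored pool restricted to the
.Ar 2020
feature set named above
.Pq the set name and device names are illustrative :
.Dl # Nm zpool Cm create Fl o Sy compatibility Ns = Ns Ar 2020 Ar tank Sy mirror Pa sda sdb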
.Bl -tag -width "-t tname"
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with
.Fl o .
See
.Xr zpool-features 7
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Sy altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfsprops 7 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
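.Pp
For example, the following previews a configuration without creating the pool:
.Dl # Nm zpool Cm create Fl n Ar tank Sy raidz Pa sda sdb sdc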
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See
.Xr zpoolprops 7
for a list of valid properties that can be set.
.It Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies compatibility feature sets.
See
.Xr zpool-features 7
for more information about compatibility feature sets.
.It Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See the
.Xr zpool-features 7
section for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See
.Xr zfsprops 7
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root .
.It Fl t Ar tname
Sets the in-core pool name to
.Ar tname
while the on-disk name will be the name specified as
.Ar pool .
This will set the default of the
.Sy cachefile
property to
.Sy none .
This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
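.Pp
For example, the following creates a pool whose on-disk name is
.Ar tank
but which is imported under the temporary in-core name
.Ar tanktmp
.Pq both names here are illustrative :
.Dl # Nm zpool Cm create Fl t Ar tanktmp Ar tank Sy mirror Pa sda sdb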
.El
.
.Sh EXAMPLES
.\" These are, respectively, examples 1, 2, 3, 4, 11, 12 from zpool.8
.\" Make sure to update them bidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.
.Ss Example 6 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-export 8 ,
.Xr zpool-import 8