mirror of https://git.proxmox.com/git/mirror_zfs.git, synced 2024-11-17 01:51:00 +03:00
Import pyzfs source code from ClusterHQ
libzfs_core is intended to be a stable interface for programmatic
administration of ZFS. This wrapper provides one-to-one wrappers for
libzfs_core API functions, but the signatures and types are more natural
to Python. nvlists are wrapped as dictionaries or lists depending on
their usage. Some parameters have default values depending on typical
use for increased convenience. Enumerations and bit flags become strings
and lists of strings in Python. Errors are reported as exceptions rather
than integer errno-style error codes. The wrapper takes care to provide
a one-to-many mapping of the error codes to the exceptions by
interpreting the context in which the error code is produced.

Unit tests and automated tests for the libzfs_core API are provided with
this package. Please note that the API tests perform many ZFS
dataset-level operations, and ZFS tries hard to ensure that any
modifications reach stable storage. That means the operations are done
synchronously and that, for example, disk caches are flushed. Thus, the
tests can be very slow on real hardware. It is recommended to place the
default temporary directory, or a temporary directory specified by, for
instance, the TMP environment variable, on a memory-backed filesystem.

Original-patch-by: Andriy Gapon <avg@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Ported-by: loli10K <ezomori.nozomu@gmail.com>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #7230
This commit is contained in:
parent 3cbe89b12a
commit 6abf922574
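As a quick illustration of the points above (errors as exceptions, nvlists as plain
Python containers), here is a minimal, hedged usage sketch; the dataset name is
hypothetical and a pool named "tank" is assumed to already exist:

    # Hedged usage sketch; names are hypothetical, pool "tank" assumed to exist.
    import libzfs_core as lzc
    from libzfs_core import exceptions as lzc_exc

    try:
        # Properties would travel as a plain dict that the wrapper converts
        # to an nvlist; here the defaults are used.
        lzc.lzc_create(b"tank/myfs")
    except lzc_exc.FilesystemExists:
        # EEXIST from the C call is translated into a specific exception
        # instead of an integer error code.
        pass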
contrib/pyzfs/LICENSE: 201 lines (new normal file)
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2015 ClusterHQ

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
contrib/pyzfs/README: 28 lines (new normal file)
@@ -0,0 +1,28 @@
This package provides a wrapper for the libzfs_core C library.

libzfs_core is intended to be a stable interface for programmatic
administration of ZFS.
This wrapper provides one-to-one wrappers for libzfs_core API functions,
but the signatures and types are more natural to Python.
nvlists are wrapped as dictionaries or lists depending on their usage.
Some parameters have default values depending on typical use for
increased convenience.
Enumerations and bit flags become strings and lists of strings in Python.
Errors are reported as exceptions rather than integer errno-style
error codes. The wrapper takes care to provide a one-to-many mapping
of the error codes to the exceptions by interpreting the context
in which the error code is produced.

Unit tests and automated tests for the libzfs_core API are provided
with this package.
Please note that the API tests perform many ZFS dataset-level
operations and ZFS tries hard to ensure that any modifications
reach stable storage. That means that the operations are done
synchronously and that, for example, disk caches are flushed.
Thus, the tests can be very slow on real hardware.
It is recommended to place the default temporary directory, or
a temporary directory specified by, for instance, the TMP environment
variable, on a memory-backed filesystem.

Package documentation: http://pyzfs.readthedocs.org
Package development: https://github.com/ClusterHQ/pyzfs
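To make the "one-to-many mapping" of error codes concrete, here is a hedged sketch of
a bulk operation; the names are hypothetical, a pool "tank" is assumed to exist, and the
compound exception is assumed to expose its parts via an "errors" attribute, as the
_handle_err_list helper in _error_translation.py suggests:

    # Hedged sketch; hypothetical dataset names, pool "tank" assumed to exist.
    import libzfs_core as lzc
    from libzfs_core import exceptions as lzc_exc

    try:
        # One call can create several snapshots; the list maps directly onto
        # the nvlist expected by the C library.
        lzc.lzc_snapshot([b"tank/fs1@backup", b"tank/fs2@backup"])
    except lzc_exc.SnapshotFailure as e:
        # Bulk operations raise a compound exception whose constituents name
        # the individual failures.
        for suberror in e.errors:
            print(type(suberror).__name__, suberror)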
contrib/pyzfs/docs/source/conf.py: 304 lines (new normal file)
@@ -0,0 +1,304 @@
# -*- coding: utf-8 -*-
#
# pyzfs documentation build configuration file, created by
# sphinx-quickstart on Mon Apr  6 23:48:40 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
import shlex

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'pyzfs'
copyright = u'2015, ClusterHQ'
author = u'ClusterHQ'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.2.3'
# The full version, including alpha/beta/rc tags.
release = '0.2.3'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
html_theme = 'classic'

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents.  If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar.  Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it.  The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
#   'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
#   'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'

# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}

# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'

# Output file base name for HTML help builder.
htmlhelp_basename = 'pyzfsdoc'

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',

# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',

# Additional stuff for the LaTeX preamble.
#'preamble': '',

# Latex figure (float) alignment
#'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
  (master_doc, 'pyzfs.tex', u'pyzfs Documentation',
   u'ClusterHQ', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'pyzfs', u'pyzfs Documentation',
     [author], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
  (master_doc, 'pyzfs', u'pyzfs Documentation',
   author, 'pyzfs', 'One line description of project.',
   'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

# Sort documentation in the same order as the source files.
autodoc_member_order = 'bysource'


#######################
# Neutralize effects of function wrapping on documented signatures.
# The affected signatures could be explicitly placed into the
# documentation (either in .rst files or as a first line of a
# docstring).
import functools

def no_op_wraps(func):
    def wrapper(decorator):
        return func
    return wrapper

functools.wraps = no_op_wraps
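The last few lines of conf.py monkey-patch functools.wraps so that, while autodoc imports
the package, decorators effectively disappear and the original function signatures stay
visible. A small, self-contained sketch of that effect (illustrative only, not part of the
repository; the function names below are made up):

    import functools

    def no_op_wraps(func):
        # Same idea as in conf.py: pretend to be functools.wraps, but simply
        # hand back the original function, discarding the wrapper.
        def wrapper(decorator):
            return func
        return wrapper

    functools.wraps = no_op_wraps

    def logged(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

    @logged
    def lzc_example(name, props=None):
        """Docstring that autodoc would render."""

    # Because the patched wraps() returned the original function, the decorator
    # is neutralized: lzc_example is the undecorated function, so its
    # (name, props=None) signature remains introspectable by Sphinx autodoc.
    assert lzc_example.__defaults__ == (None,)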
contrib/pyzfs/docs/source/index.rst: 44 lines (new normal file)
@@ -0,0 +1,44 @@
.. pyzfs documentation master file, created by
   sphinx-quickstart on Mon Apr  6 23:48:40 2015.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to pyzfs's documentation!
=================================

Contents:

.. toctree::
   :maxdepth: 2



Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

Documentation for the libzfs_core
*********************************

.. automodule:: libzfs_core
    :members:
    :exclude-members: lzc_snap, lzc_recv, lzc_destroy_one,
                      lzc_inherit, lzc_set_props, lzc_list

Documentation for the libzfs_core exceptions
********************************************

.. automodule:: libzfs_core.exceptions
    :members:
    :undoc-members:

Documentation for the miscellaneous types that correspond to specific width C types
***********************************************************************************

.. automodule:: libzfs_core.ctypes
    :members:
    :undoc-members:
contrib/pyzfs/libzfs_core/__init__.py: 100 lines (new normal file)
@@ -0,0 +1,100 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

'''
Python wrappers for **libzfs_core** library.

*libzfs_core* is intended to be a stable, committed interface for programmatic
administration of ZFS.
This wrapper provides one-to-one wrappers for libzfs_core API functions,
but the signatures and types are more natural to Python.
nvlists are wrapped as dictionaries or lists depending on their usage.
Some parameters have default values depending on typical use for
increased convenience.
Output parameters are not used and return values are directly returned.
Enumerations and bit flags become strings and lists of strings in Python.
Errors are reported as exceptions rather than integer errno-style
error codes. The wrapper takes care to provide one-to-many mapping
of the error codes to the exceptions by interpreting a context
in which the error code is produced.

To submit an issue or contribute to development of this package
please visit its `GitHub repository <https://github.com/ClusterHQ/pyzfs>`_.

.. data:: MAXNAMELEN

    Maximum length of any ZFS name.
'''

from ._constants import (
    MAXNAMELEN,
)

from ._libzfs_core import (
    lzc_create,
    lzc_clone,
    lzc_rollback,
    lzc_rollback_to,
    lzc_snapshot,
    lzc_snap,
    lzc_destroy_snaps,
    lzc_bookmark,
    lzc_get_bookmarks,
    lzc_destroy_bookmarks,
    lzc_snaprange_space,
    lzc_hold,
    lzc_release,
    lzc_get_holds,
    lzc_send,
    lzc_send_space,
    lzc_receive,
    lzc_receive_with_header,
    lzc_recv,
    lzc_exists,
    is_supported,
    lzc_promote,
    lzc_rename,
    lzc_destroy,
    lzc_inherit_prop,
    lzc_set_prop,
    lzc_get_props,
    lzc_list_children,
    lzc_list_snaps,
    receive_header,
)

__all__ = [
    'ctypes',
    'exceptions',
    'MAXNAMELEN',
    'lzc_create',
    'lzc_clone',
    'lzc_rollback',
    'lzc_rollback_to',
    'lzc_snapshot',
    'lzc_snap',
    'lzc_destroy_snaps',
    'lzc_bookmark',
    'lzc_get_bookmarks',
    'lzc_destroy_bookmarks',
    'lzc_snaprange_space',
    'lzc_hold',
    'lzc_release',
    'lzc_get_holds',
    'lzc_send',
    'lzc_send_space',
    'lzc_receive',
    'lzc_receive_with_header',
    'lzc_recv',
    'lzc_exists',
    'is_supported',
    'lzc_promote',
    'lzc_rename',
    'lzc_destroy',
    'lzc_inherit_prop',
    'lzc_set_prop',
    'lzc_get_props',
    'lzc_list_children',
    'lzc_list_snaps',
    'receive_header',
]

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
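A hedged sketch of how a few of the exported helpers are meant to be combined; the dataset
names are hypothetical, a pool "tank" is assumed to exist, and is_supported() is assumed to
take one of the exported lzc_* functions and report whether the running kernel provides it,
as described in the package documentation:

    # Hedged sketch; hypothetical names, pool "tank" assumed to exist.
    import libzfs_core as lzc

    # lzc_exists reports whether a dataset is present without raising.
    if not lzc.lzc_exists(b"tank/home"):
        lzc.lzc_create(b"tank/home")
    lzc.lzc_snapshot([b"tank/home@snap"])

    # Check that the bookmark feature is available before using it.
    if lzc.is_supported(lzc.lzc_bookmark):
        # The dict maps each new bookmark to the snapshot it refers to.
        lzc.lzc_bookmark({b"tank/home#mark": b"tank/home@snap"})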
contrib/pyzfs/libzfs_core/_constants.py: 10 lines (new normal file)
@@ -0,0 +1,10 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Important `libzfs_core` constants.
"""

#: Maximum length of any ZFS name.
MAXNAMELEN = 255

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
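For illustration, a tiny hedged sketch of how a caller might pre-validate a name against
this constant before invoking the library; the helper below is hypothetical and not part
of the package:

    # Hypothetical pre-check mirroring the length validation done in
    # _error_translation.py.
    from libzfs_core import MAXNAMELEN

    def name_fits(name):
        # ZFS rejects names longer than MAXNAMELEN (255) characters.
        return len(name) <= MAXNAMELEN

    assert name_fits(b"tank/home@snap-2015")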
contrib/pyzfs/libzfs_core/_error_translation.py: 629 lines (new normal file)
@@ -0,0 +1,629 @@
|
|||||||
|
# Copyright 2015 ClusterHQ. See LICENSE file for details.
|
||||||
|
|
||||||
|
"""
|
||||||
|
Helper routines for converting ``errno`` style error codes from C functions
|
||||||
|
to Python exceptions defined by `libzfs_core` API.
|
||||||
|
|
||||||
|
The conversion heavily depends on the context of the error: the attempted
|
||||||
|
operation and the input parameters. For this reason, there is a conversion
|
||||||
|
routine for each `libzfs_core` interface function. The conversion routines
|
||||||
|
have the return code as a parameter as well as all the parameters of the
|
||||||
|
corresponding interface functions.
|
||||||
|
|
||||||
|
The parameters and exceptions are documented in the `libzfs_core` interfaces.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import errno
|
||||||
|
import re
|
||||||
|
import string
|
||||||
|
from . import exceptions as lzc_exc
|
||||||
|
from ._constants import MAXNAMELEN
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_create_translate_error(ret, name, ds_type, props):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
raise lzc_exc.PropertyInvalid(name)
|
||||||
|
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.FilesystemExists(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.ParentNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to create filesystem")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_clone_translate_error(ret, name, origin, props):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
_validate_snap_name(origin)
|
||||||
|
if _pool_name(name) != _pool_name(origin):
|
||||||
|
raise lzc_exc.PoolsDiffer(name) # see https://www.illumos.org/issues/5824
|
||||||
|
else:
|
||||||
|
raise lzc_exc.PropertyInvalid(name)
|
||||||
|
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.FilesystemExists(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
if not _is_valid_snap_name(origin):
|
||||||
|
raise lzc_exc.SnapshotNameInvalid(origin)
|
||||||
|
raise lzc_exc.DatasetNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to create clone")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_rollback_translate_error(ret, name):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
raise lzc_exc.SnapshotNotFound(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
if not _is_valid_fs_name(name):
|
||||||
|
raise lzc_exc.NameInvalid(name)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.FilesystemNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to rollback")
|
||||||
|
|
||||||
|
def lzc_rollback_to_translate_error(ret, name, snap):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.SnapshotNotLatest(snap)
|
||||||
|
raise _generic_exception(ret, name, "Failed to rollback")
|
||||||
|
|
||||||
|
def lzc_snapshot_translate_errors(ret, errlist, snaps, props):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EXDEV:
|
||||||
|
pool_names = map(_pool_name, snaps)
|
||||||
|
same_pool = all(x == pool_names[0] for x in pool_names)
|
||||||
|
if same_pool:
|
||||||
|
return lzc_exc.DuplicateSnapshots(name)
|
||||||
|
else:
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
elif ret == errno.EINVAL:
|
||||||
|
if any(not _is_valid_snap_name(s) for s in snaps):
|
||||||
|
return lzc_exc.NameInvalid(name)
|
||||||
|
elif any(len(s) > MAXNAMELEN for s in snaps):
|
||||||
|
return lzc_exc.NameTooLong(name)
|
||||||
|
else:
|
||||||
|
return lzc_exc.PropertyInvalid(name)
|
||||||
|
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
return lzc_exc.SnapshotExists(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
return lzc_exc.FilesystemNotFound(name)
|
||||||
|
return _generic_exception(ret, name, "Failed to create snapshot")
|
||||||
|
|
||||||
|
_handle_err_list(ret, errlist, snaps, lzc_exc.SnapshotFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_destroy_snaps_translate_errors(ret, errlist, snaps, defer):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
return lzc_exc.SnapshotIsCloned(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
return lzc_exc.PoolNotFound(name)
|
||||||
|
if ret == errno.EBUSY:
|
||||||
|
return lzc_exc.SnapshotIsHeld(name)
|
||||||
|
return _generic_exception(ret, name, "Failed to destroy snapshot")
|
||||||
|
|
||||||
|
_handle_err_list(ret, errlist, snaps, lzc_exc.SnapshotDestructionFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_bookmark_translate_errors(ret, errlist, bookmarks):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
if name:
|
||||||
|
snap = bookmarks[name]
|
||||||
|
pool_names = map(_pool_name, bookmarks.keys())
|
||||||
|
if not _is_valid_bmark_name(name):
|
||||||
|
return lzc_exc.BookmarkNameInvalid(name)
|
||||||
|
elif not _is_valid_snap_name(snap):
|
||||||
|
return lzc_exc.SnapshotNameInvalid(snap)
|
||||||
|
elif _fs_name(name) != _fs_name(snap):
|
||||||
|
return lzc_exc.BookmarkMismatch(name)
|
||||||
|
elif any(x != _pool_name(name) for x in pool_names):
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
else:
|
||||||
|
invalid_names = [b for b in bookmarks.keys() if not _is_valid_bmark_name(b)]
|
||||||
|
if invalid_names:
|
||||||
|
return lzc_exc.BookmarkNameInvalid(invalid_names[0])
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
return lzc_exc.BookmarkExists(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
return lzc_exc.SnapshotNotFound(name)
|
||||||
|
if ret == errno.ENOTSUP:
|
||||||
|
return lzc_exc.BookmarkNotSupported(name)
|
||||||
|
return _generic_exception(ret, name, "Failed to create bookmark")
|
||||||
|
|
||||||
|
_handle_err_list(ret, errlist, bookmarks.keys(), lzc_exc.BookmarkFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_get_bookmarks_translate_error(ret, fsname, props):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.FilesystemNotFound(fsname)
|
||||||
|
raise _generic_exception(ret, fsname, "Failed to list bookmarks")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_destroy_bookmarks_translate_errors(ret, errlist, bookmarks):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
return lzc_exc.NameInvalid(name)
|
||||||
|
return _generic_exception(ret, name, "Failed to destroy bookmark")
|
||||||
|
|
||||||
|
_handle_err_list(ret, errlist, bookmarks, lzc_exc.BookmarkDestructionFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_snaprange_space_translate_error(ret, firstsnap, lastsnap):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EXDEV and firstsnap is not None:
|
||||||
|
if _pool_name(firstsnap) != _pool_name(lastsnap):
|
||||||
|
raise lzc_exc.PoolsDiffer(lastsnap)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.SnapshotMismatch(lastsnap)
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
if not _is_valid_snap_name(firstsnap):
|
||||||
|
raise lzc_exc.NameInvalid(firstsnap)
|
||||||
|
elif not _is_valid_snap_name(lastsnap):
|
||||||
|
raise lzc_exc.NameInvalid(lastsnap)
|
||||||
|
elif len(firstsnap) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(firstsnap)
|
||||||
|
elif len(lastsnap) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(lastsnap)
|
||||||
|
elif _pool_name(firstsnap) != _pool_name(lastsnap):
|
||||||
|
raise lzc_exc.PoolsDiffer(lastsnap)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.SnapshotMismatch(lastsnap)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.SnapshotNotFound(lastsnap)
|
||||||
|
raise _generic_exception(ret, lastsnap, "Failed to calculate space used by range of snapshots")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_hold_translate_errors(ret, errlist, holds, fd):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EXDEV:
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
elif ret == errno.EINVAL:
|
||||||
|
if name:
|
||||||
|
pool_names = map(_pool_name, holds.keys())
|
||||||
|
if not _is_valid_snap_name(name):
|
||||||
|
return lzc_exc.NameInvalid(name)
|
||||||
|
elif len(name) > MAXNAMELEN:
|
||||||
|
return lzc_exc.NameTooLong(name)
|
||||||
|
elif any(x != _pool_name(name) for x in pool_names):
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
else:
|
||||||
|
invalid_names = [b for b in holds.keys() if not _is_valid_snap_name(b)]
|
||||||
|
if invalid_names:
|
||||||
|
return lzc_exc.NameInvalid(invalid_names[0])
|
||||||
|
fs_name = None
|
||||||
|
hold_name = None
|
||||||
|
pool_name = None
|
||||||
|
if name is not None:
|
||||||
|
fs_name = _fs_name(name)
|
||||||
|
pool_name = _pool_name(name)
|
||||||
|
hold_name = holds[name]
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
return lzc_exc.FilesystemNotFound(fs_name)
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
return lzc_exc.HoldExists(name)
|
||||||
|
if ret == errno.E2BIG:
|
||||||
|
return lzc_exc.NameTooLong(hold_name)
|
||||||
|
if ret == errno.ENOTSUP:
|
||||||
|
return lzc_exc.FeatureNotSupported(pool_name)
|
||||||
|
return _generic_exception(ret, name, "Failed to hold snapshot")
|
||||||
|
|
||||||
|
if ret == errno.EBADF:
|
||||||
|
raise lzc_exc.BadHoldCleanupFD()
|
||||||
|
_handle_err_list(ret, errlist, holds.keys(), lzc_exc.HoldFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_release_translate_errors(ret, errlist, holds):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
for _, hold_list in holds.iteritems():
|
||||||
|
if not isinstance(hold_list, list):
|
||||||
|
raise lzc_exc.TypeError('holds must be in a list')
|
||||||
|
|
||||||
|
def _map(ret, name):
|
||||||
|
if ret == errno.EXDEV:
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
elif ret == errno.EINVAL:
|
||||||
|
if name:
|
||||||
|
pool_names = map(_pool_name, holds.keys())
|
||||||
|
if not _is_valid_snap_name(name):
|
||||||
|
return lzc_exc.NameInvalid(name)
|
||||||
|
elif len(name) > MAXNAMELEN:
|
||||||
|
return lzc_exc.NameTooLong(name)
|
||||||
|
elif any(x != _pool_name(name) for x in pool_names):
|
||||||
|
return lzc_exc.PoolsDiffer(name)
|
||||||
|
else:
|
||||||
|
invalid_names = [b for b in holds.keys() if not _is_valid_snap_name(b)]
|
||||||
|
if invalid_names:
|
||||||
|
return lzc_exc.NameInvalid(invalid_names[0])
|
||||||
|
elif ret == errno.ENOENT:
|
||||||
|
return lzc_exc.HoldNotFound(name)
|
||||||
|
elif ret == errno.E2BIG:
|
||||||
|
tag_list = holds[name]
|
||||||
|
too_long_tags = [t for t in tag_list if len(t) > MAXNAMELEN]
|
||||||
|
return lzc_exc.NameTooLong(too_long_tags[0])
|
||||||
|
elif ret == errno.ENOTSUP:
|
||||||
|
pool_name = None
|
||||||
|
if name is not None:
|
||||||
|
pool_name = _pool_name(name)
|
||||||
|
return lzc_exc.FeatureNotSupported(pool_name)
|
||||||
|
else:
|
||||||
|
return _generic_exception(ret, name, "Failed to release snapshot hold")
|
||||||
|
|
||||||
|
_handle_err_list(ret, errlist, holds.keys(), lzc_exc.HoldReleaseFailure, _map)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_get_holds_translate_error(ret, snapname):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_snap_name(snapname)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.SnapshotNotFound(snapname)
|
||||||
|
if ret == errno.ENOTSUP:
|
||||||
|
raise lzc_exc.FeatureNotSupported(_pool_name(snapname))
|
||||||
|
raise _generic_exception(ret, snapname, "Failed to get holds on snapshot")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_send_translate_error(ret, snapname, fromsnap, fd, flags):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EXDEV and fromsnap is not None:
|
||||||
|
if _pool_name(fromsnap) != _pool_name(snapname):
|
||||||
|
raise lzc_exc.PoolsDiffer(snapname)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.SnapshotMismatch(snapname)
|
||||||
|
elif ret == errno.EINVAL:
|
||||||
|
if (fromsnap is not None and not _is_valid_snap_name(fromsnap) and
|
||||||
|
not _is_valid_bmark_name(fromsnap)):
|
||||||
|
raise lzc_exc.NameInvalid(fromsnap)
|
||||||
|
elif not _is_valid_snap_name(snapname) and not _is_valid_fs_name(snapname):
|
||||||
|
raise lzc_exc.NameInvalid(snapname)
|
||||||
|
elif fromsnap is not None and len(fromsnap) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(fromsnap)
|
||||||
|
elif len(snapname) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(snapname)
|
||||||
|
elif fromsnap is not None and _pool_name(fromsnap) != _pool_name(snapname):
|
||||||
|
raise lzc_exc.PoolsDiffer(snapname)
|
||||||
|
elif ret == errno.ENOENT:
|
||||||
|
if (fromsnap is not None and not _is_valid_snap_name(fromsnap) and
|
||||||
|
not _is_valid_bmark_name(fromsnap)):
|
||||||
|
raise lzc_exc.NameInvalid(fromsnap)
|
||||||
|
raise lzc_exc.SnapshotNotFound(snapname)
|
||||||
|
elif ret == errno.ENAMETOOLONG:
|
||||||
|
if fromsnap is not None and len(fromsnap) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(fromsnap)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.NameTooLong(snapname)
|
||||||
|
raise lzc_exc.StreamIOError(ret)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_send_space_translate_error(ret, snapname, fromsnap):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EXDEV and fromsnap is not None:
|
||||||
|
if _pool_name(fromsnap) != _pool_name(snapname):
|
||||||
|
raise lzc_exc.PoolsDiffer(snapname)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.SnapshotMismatch(snapname)
|
||||||
|
elif ret == errno.EINVAL:
|
||||||
|
if fromsnap is not None and not _is_valid_snap_name(fromsnap):
|
||||||
|
raise lzc_exc.NameInvalid(fromsnap)
|
||||||
|
elif not _is_valid_snap_name(snapname):
|
||||||
|
raise lzc_exc.NameInvalid(snapname)
|
||||||
|
elif fromsnap is not None and len(fromsnap) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(fromsnap)
|
||||||
|
elif len(snapname) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(snapname)
|
||||||
|
elif fromsnap is not None and _pool_name(fromsnap) != _pool_name(snapname):
|
||||||
|
raise lzc_exc.PoolsDiffer(snapname)
|
||||||
|
elif ret == errno.ENOENT and fromsnap is not None:
|
||||||
|
if not _is_valid_snap_name(fromsnap):
|
||||||
|
raise lzc_exc.NameInvalid(fromsnap)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.SnapshotNotFound(snapname)
|
||||||
|
raise _generic_exception(ret, snapname, "Failed to estimate backup stream size")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_receive_translate_error(ret, snapname, fd, force, origin, props):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
if not _is_valid_snap_name(snapname) and not _is_valid_fs_name(snapname):
|
||||||
|
raise lzc_exc.NameInvalid(snapname)
|
||||||
|
elif len(snapname) > MAXNAMELEN:
|
||||||
|
raise lzc_exc.NameTooLong(snapname)
|
||||||
|
elif origin is not None and not _is_valid_snap_name(origin):
|
||||||
|
raise lzc_exc.NameInvalid(origin)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.BadStream()
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
if not _is_valid_snap_name(snapname):
|
||||||
|
raise lzc_exc.NameInvalid(snapname)
|
||||||
|
else:
|
||||||
|
raise lzc_exc.DatasetNotFound(snapname)
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.DatasetExists(snapname)
|
||||||
|
if ret == errno.ENOTSUP:
|
||||||
|
raise lzc_exc.StreamFeatureNotSupported()
|
||||||
|
if ret == errno.ENODEV:
|
||||||
|
raise lzc_exc.StreamMismatch(_fs_name(snapname))
|
||||||
|
if ret == errno.ETXTBSY:
|
||||||
|
raise lzc_exc.DestinationModified(_fs_name(snapname))
|
||||||
|
if ret == errno.EBUSY:
|
||||||
|
raise lzc_exc.DatasetBusy(_fs_name(snapname))
|
||||||
|
if ret == errno.ENOSPC:
|
||||||
|
raise lzc_exc.NoSpace(_fs_name(snapname))
|
||||||
|
if ret == errno.EDQUOT:
|
||||||
|
raise lzc_exc.QuotaExceeded(_fs_name(snapname))
|
||||||
|
if ret == errno.ENAMETOOLONG:
|
||||||
|
raise lzc_exc.NameTooLong(snapname)
|
||||||
|
if ret == errno.EROFS:
|
||||||
|
raise lzc_exc.ReadOnlyPool(_pool_name(snapname))
|
||||||
|
if ret == errno.EAGAIN:
|
||||||
|
raise lzc_exc.SuspendedPool(_pool_name(snapname))
|
||||||
|
|
||||||
|
raise lzc_exc.StreamIOError(ret)
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_promote_translate_error(ret, name):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
raise lzc_exc.NotClone(name)
|
||||||
|
if ret == errno.ENOTSOCK:
|
||||||
|
raise lzc_exc.NotClone(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.FilesystemNotFound(name)
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.SnapshotExists(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to promote dataset")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_rename_translate_error(ret, source, target):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(source)
|
||||||
|
_validate_fs_name(target)
|
||||||
|
if _pool_name(source) != _pool_name(target):
|
||||||
|
raise lzc_exc.PoolsDiffer(source)
|
||||||
|
if ret == errno.EEXIST:
|
||||||
|
raise lzc_exc.FilesystemExists(target)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.FilesystemNotFound(source)
|
||||||
|
raise _generic_exception(ret, source, "Failed to rename dataset")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_destroy_translate_error(ret, name):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.FilesystemNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to destroy dataset")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_inherit_prop_translate_error(ret, name, prop):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_name(name)
|
||||||
|
raise lzc_exc.PropertyInvalid(prop)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.DatasetNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to inherit a property")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_set_prop_translate_error(ret, name, prop, val):
|
||||||
|
if ret == 0:
|
||||||
|
return
|
||||||
|
if ret == errno.EINVAL:
|
||||||
|
_validate_fs_or_snap_name(name)
|
||||||
|
raise lzc_exc.PropertyInvalid(prop)
|
||||||
|
if ret == errno.ENOENT:
|
||||||
|
raise lzc_exc.DatasetNotFound(name)
|
||||||
|
raise _generic_exception(ret, name, "Failed to set a property")
|
||||||
|
|
||||||
|
|
||||||
|
def lzc_get_props_translate_error(ret, name):
    if ret == 0:
        return
    if ret == errno.EINVAL:
        _validate_fs_or_snap_name(name)
    if ret == errno.ENOENT:
        raise lzc_exc.DatasetNotFound(name)
    raise _generic_exception(ret, name, "Failed to get properties")


def lzc_list_children_translate_error(ret, name):
    if ret == 0:
        return
    if ret == errno.EINVAL:
        _validate_fs_name(name)
    raise _generic_exception(ret, name, "Error while iterating children")


def lzc_list_snaps_translate_error(ret, name):
    if ret == 0:
        return
    if ret == errno.EINVAL:
        _validate_fs_name(name)
    raise _generic_exception(ret, name, "Error while iterating snapshots")


def lzc_list_translate_error(ret, name, opts):
    if ret == 0:
        return
    if ret == errno.ENOENT:
        raise lzc_exc.DatasetNotFound(name)
    if ret == errno.EINVAL:
        _validate_fs_or_snap_name(name)
    raise _generic_exception(ret, name, "Error obtaining a list")


def _handle_err_list(ret, errlist, names, exception, mapper):
    '''
    Convert one or more errors from an operation into the requested exception.

    :param int ret: the overall return code.
    :param errlist: the dictionary that maps entity names to their specific error codes.
    :type errlist: dict of bytes:int
    :param names: the list of all names of the entities on which the operation was attempted.
    :param type exception: the type of the exception to raise if an error occurred.
        The exception should be a subclass of `MultipleOperationsFailure`.
    :param function mapper: the function that maps an error code and a name to a Python exception.

    Unless ``ret`` is zero this function will raise the ``exception``.
    If the ``errlist`` is not empty, then the compound exception will contain a list of exceptions
    corresponding to each individual error code in the ``errlist``.
    Otherwise, the ``exception`` will contain a list with a single exception corresponding to the
    ``ret`` value. If the ``names`` list contains only one element, that is, the operation was
    attempted on a single entity, then the name of that entity is passed to the ``mapper``.
    If the operation was attempted on multiple entities, but the ``errlist`` is empty, then we
    can not know which entity caused the error and, thus, ``None`` is used as a name to signify
    that fact.

    .. note::
        Note that the ``errlist`` can contain a special element with a key of "N_MORE_ERRORS".
        That element means that there were too many errors to place on the ``errlist``.
        Those errors are suppressed and only their count is provided as a value of the special
        ``N_MORE_ERRORS`` element.
    '''
    if ret == 0:
        return

    if len(errlist) == 0:
        suppressed_count = 0
        if len(names) == 1:
            name = names[0]
        else:
            name = None
        errors = [mapper(ret, name)]
    else:
        errors = []
        suppressed_count = errlist.pop('N_MORE_ERRORS', 0)
        for name, err in errlist.iteritems():
            errors.append(mapper(err, name))

    raise exception(errors, suppressed_count)


def _pool_name(name):
    '''
    Extract a pool name from the given dataset or bookmark name.

    '/' separates dataset name components.
    '@' separates a snapshot name from the rest of the dataset name.
    '#' separates a bookmark name from the rest of the dataset name.
    '''
    return re.split('[/@#]', name, 1)[0]


def _fs_name(name):
    '''
    Extract a dataset name from the given snapshot or bookmark name.

    '@' separates a snapshot name from the rest of the dataset name.
    '#' separates a bookmark name from the rest of the dataset name.
    '''
    return re.split('[@#]', name, 1)[0]


def _is_valid_name_component(component):
    allowed = string.ascii_letters + string.digits + '-_.: '
    return component and all(x in allowed for x in component)


def _is_valid_fs_name(name):
    return name and all(_is_valid_name_component(c) for c in name.split('/'))


def _is_valid_snap_name(name):
    parts = name.split('@')
    return (len(parts) == 2 and _is_valid_fs_name(parts[0]) and
            _is_valid_name_component(parts[1]))


def _is_valid_bmark_name(name):
    parts = name.split('#')
    return (len(parts) == 2 and _is_valid_fs_name(parts[0]) and
            _is_valid_name_component(parts[1]))


def _validate_fs_name(name):
    if not _is_valid_fs_name(name):
        raise lzc_exc.FilesystemNameInvalid(name)
    elif len(name) > MAXNAMELEN:
        raise lzc_exc.NameTooLong(name)


def _validate_snap_name(name):
    if not _is_valid_snap_name(name):
        raise lzc_exc.SnapshotNameInvalid(name)
    elif len(name) > MAXNAMELEN:
        raise lzc_exc.NameTooLong(name)


def _validate_bmark_name(name):
    if not _is_valid_bmark_name(name):
        raise lzc_exc.BookmarkNameInvalid(name)
    elif len(name) > MAXNAMELEN:
        raise lzc_exc.NameTooLong(name)


def _validate_fs_or_snap_name(name):
    if not _is_valid_fs_name(name) and not _is_valid_snap_name(name):
        raise lzc_exc.NameInvalid(name)
    elif len(name) > MAXNAMELEN:
        raise lzc_exc.NameTooLong(name)


def _generic_exception(err, name, message):
    if err in _error_to_exception:
        return _error_to_exception[err](name)
    else:
        return lzc_exc.ZFSGenericError(err, message, name)


_error_to_exception = {e.errno: e for e in [
    lzc_exc.ZIOError,
    lzc_exc.NoSpace,
    lzc_exc.QuotaExceeded,
    lzc_exc.DatasetBusy,
    lzc_exc.NameTooLong,
    lzc_exc.ReadOnlyPool,
    lzc_exc.SuspendedPool,
    lzc_exc.PoolsDiffer,
    lzc_exc.PropertyNotSupported,
]}


# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
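For orientation, here is a minimal sketch of how _handle_err_list turns a per-name error dictionary into a compound exception. It is not part of the patch, assumes it runs inside the same module as the helper, and uses a stand-in lambda instead of one of the real *_translate_error mappers.

import errno

errlist = {b'pool/fs@snap1': errno.EEXIST, 'N_MORE_ERRORS': 2}
names = [b'pool/fs@snap1', b'pool/fs@snap2']
mapper = lambda err, name: lzc_exc.SnapshotExists(name)  # stand-in mapper, illustration only

try:
    _handle_err_list(errno.EEXIST, errlist, names, lzc_exc.SnapshotFailure, mapper)
except lzc_exc.SnapshotFailure as e:
    # one exception per named entry that remained in errlist,
    # plus the count reported under the special N_MORE_ERRORS key
    print(len(e.errors), e.suppressed_count)  # -> 1 2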
1270
contrib/pyzfs/libzfs_core/_libzfs_core.py
Normal file
File diff suppressed because it is too large
259
contrib/pyzfs/libzfs_core/_nvlist.py
Normal file
@ -0,0 +1,259 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
nvlist_in and nvlist_out provide support for converting between
a dictionary on the Python side and an nvlist_t on the C side
with the automatic memory management for C memory allocations.

nvlist_in takes a dictionary and produces a CData object corresponding
to a C nvlist_t pointer suitable for passing as an input parameter.
The nvlist_t is populated based on the dictionary.

nvlist_out takes a dictionary and produces a CData object corresponding
to a C nvlist_t pointer to pointer suitable for passing as an output parameter.
Upon exit from a with-block the dictionary is populated based on the nvlist_t.

The dictionary must follow a certain format to be convertible
to the nvlist_t. The dictionary produced from the nvlist_t
will follow the same format.

Format:
- keys are always byte strings
- a value can be None in which case it represents boolean truth by its mere presence
- a value can be a bool
- a value can be a byte string
- a value can be an integer
- a value can be a CFFI CData object representing one of the following C types:
  int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, int64_t, uint64_t, boolean_t, uchar_t
- a value can be a dictionary that recursively adheres to this format
- a value can be a list of bools, byte strings, integers or CData objects of types specified above
- a value can be a list of dictionaries that adhere to this format
- all elements of a list value must be of the same type
"""

import numbers
from collections import namedtuple
from contextlib import contextmanager
from .bindings import libnvpair
from .ctypes import _type_to_suffix

_ffi = libnvpair.ffi
_lib = libnvpair.lib


def nvlist_in(props):
    """
    This function converts a python dictionary to a C nvlist_t
    and provides automatic memory management for the latter.

    :param dict props: the dictionary to be converted.
    :return: an FFI CData object representing the nvlist_t pointer.
    :rtype: CData
    """
    nvlistp = _ffi.new("nvlist_t **")
    res = _lib.nvlist_alloc(nvlistp, 1, 0)  # UNIQUE_NAME == 1
    if res != 0:
        raise MemoryError('nvlist_alloc failed')
    nvlist = _ffi.gc(nvlistp[0], _lib.nvlist_free)
    _dict_to_nvlist(props, nvlist)
    return nvlist


@contextmanager
def nvlist_out(props):
    """
    A context manager that allocates a pointer to a C nvlist_t and yields
    a CData object representing a pointer to the pointer via 'as' target.
    The caller can pass that pointer to a pointer to a C function that
    creates a new nvlist_t object.
    The context manager takes care of memory management for the nvlist_t
    and also populates the 'props' dictionary with data from the nvlist_t
    upon leaving the 'with' block.

    :param dict props: the dictionary to be populated with data from the nvlist.
    :return: an FFI CData object representing the pointer to nvlist_t pointer.
    :rtype: CData
    """
    nvlistp = _ffi.new("nvlist_t **")
    nvlistp[0] = _ffi.NULL  # to be sure
    try:
        yield nvlistp
        # clear old entries, if any
        props.clear()
        _nvlist_to_dict(nvlistp[0], props)
    finally:
        if nvlistp[0] != _ffi.NULL:
            _lib.nvlist_free(nvlistp[0])
            nvlistp[0] = _ffi.NULL


_TypeInfo = namedtuple('_TypeInfo', ['suffix', 'ctype', 'is_array', 'convert'])


def _type_info(typeid):
    return {
        _lib.DATA_TYPE_BOOLEAN: _TypeInfo(None, None, None, None),
        _lib.DATA_TYPE_BOOLEAN_VALUE: _TypeInfo("boolean_value", "boolean_t *", False, bool),
        _lib.DATA_TYPE_BYTE: _TypeInfo("byte", "uchar_t *", False, int),
        _lib.DATA_TYPE_INT8: _TypeInfo("int8", "int8_t *", False, int),
        _lib.DATA_TYPE_UINT8: _TypeInfo("uint8", "uint8_t *", False, int),
        _lib.DATA_TYPE_INT16: _TypeInfo("int16", "int16_t *", False, int),
        _lib.DATA_TYPE_UINT16: _TypeInfo("uint16", "uint16_t *", False, int),
        _lib.DATA_TYPE_INT32: _TypeInfo("int32", "int32_t *", False, int),
        _lib.DATA_TYPE_UINT32: _TypeInfo("uint32", "uint32_t *", False, int),
        _lib.DATA_TYPE_INT64: _TypeInfo("int64", "int64_t *", False, int),
        _lib.DATA_TYPE_UINT64: _TypeInfo("uint64", "uint64_t *", False, int),
        _lib.DATA_TYPE_STRING: _TypeInfo("string", "char **", False, _ffi.string),
        _lib.DATA_TYPE_NVLIST: _TypeInfo("nvlist", "nvlist_t **", False, lambda x: _nvlist_to_dict(x, {})),
        _lib.DATA_TYPE_BOOLEAN_ARRAY: _TypeInfo("boolean_array", "boolean_t **", True, bool),
        # XXX use bytearray ?
        _lib.DATA_TYPE_BYTE_ARRAY: _TypeInfo("byte_array", "uchar_t **", True, int),
        _lib.DATA_TYPE_INT8_ARRAY: _TypeInfo("int8_array", "int8_t **", True, int),
        _lib.DATA_TYPE_UINT8_ARRAY: _TypeInfo("uint8_array", "uint8_t **", True, int),
        _lib.DATA_TYPE_INT16_ARRAY: _TypeInfo("int16_array", "int16_t **", True, int),
        _lib.DATA_TYPE_UINT16_ARRAY: _TypeInfo("uint16_array", "uint16_t **", True, int),
        _lib.DATA_TYPE_INT32_ARRAY: _TypeInfo("int32_array", "int32_t **", True, int),
        _lib.DATA_TYPE_UINT32_ARRAY: _TypeInfo("uint32_array", "uint32_t **", True, int),
        _lib.DATA_TYPE_INT64_ARRAY: _TypeInfo("int64_array", "int64_t **", True, int),
        _lib.DATA_TYPE_UINT64_ARRAY: _TypeInfo("uint64_array", "uint64_t **", True, int),
        _lib.DATA_TYPE_STRING_ARRAY: _TypeInfo("string_array", "char ***", True, _ffi.string),
        _lib.DATA_TYPE_NVLIST_ARRAY: _TypeInfo("nvlist_array", "nvlist_t ***", True, lambda x: _nvlist_to_dict(x, {})),
    }[typeid]

# only integer properties need to be here
_prop_name_to_type_str = {
    "rewind-request": "uint32",
    "type": "uint32",
    "N_MORE_ERRORS": "int32",
    "pool_context": "int32",
}


def _nvlist_add_array(nvlist, key, array):
    def _is_integer(x):
        return isinstance(x, numbers.Integral) and not isinstance(x, bool)

    ret = 0
    specimen = array[0]
    is_integer = _is_integer(specimen)
    specimen_ctype = None
    if isinstance(specimen, _ffi.CData):
        specimen_ctype = _ffi.typeof(specimen)

    for element in array[1:]:
        if is_integer and _is_integer(element):
            pass
        elif type(element) is not type(specimen):
            raise TypeError('Array has elements of different types: ' +
                            type(specimen).__name__ +
                            ' and ' +
                            type(element).__name__)
        elif specimen_ctype is not None:
            ctype = _ffi.typeof(element)
            if ctype is not specimen_ctype:
                raise TypeError('Array has elements of different C types: ' +
                                _ffi.typeof(specimen).cname +
                                ' and ' +
                                _ffi.typeof(element).cname)

    if isinstance(specimen, dict):
        # NB: can't use automatic memory management via nvlist_in() here,
        # we have a loop, but 'with' would require recursion
        c_array = []
        for dictionary in array:
            nvlistp = _ffi.new('nvlist_t **')
            res = _lib.nvlist_alloc(nvlistp, 1, 0)  # UNIQUE_NAME == 1
            if res != 0:
                raise MemoryError('nvlist_alloc failed')
            nested_nvlist = _ffi.gc(nvlistp[0], _lib.nvlist_free)
            _dict_to_nvlist(dictionary, nested_nvlist)
            c_array.append(nested_nvlist)
        ret = _lib.nvlist_add_nvlist_array(nvlist, key, c_array, len(c_array))
    elif isinstance(specimen, bytes):
        c_array = []
        for string in array:
            c_array.append(_ffi.new('char[]', string))
        ret = _lib.nvlist_add_string_array(nvlist, key, c_array, len(c_array))
    elif isinstance(specimen, bool):
        ret = _lib.nvlist_add_boolean_array(nvlist, key, array, len(array))
    elif isinstance(specimen, numbers.Integral):
        suffix = _prop_name_to_type_str.get(key, "uint64")
        cfunc = getattr(_lib, "nvlist_add_%s_array" % (suffix,))
        ret = cfunc(nvlist, key, array, len(array))
    elif isinstance(specimen, _ffi.CData) and _ffi.typeof(specimen) in _type_to_suffix:
        suffix = _type_to_suffix[_ffi.typeof(specimen)][True]
        cfunc = getattr(_lib, "nvlist_add_%s_array" % (suffix,))
        ret = cfunc(nvlist, key, array, len(array))
    else:
        raise TypeError('Unsupported value type ' + type(specimen).__name__)
    if ret != 0:
        raise MemoryError('nvlist_add failed, err = %d' % ret)


def _nvlist_to_dict(nvlist, props):
    pair = _lib.nvlist_next_nvpair(nvlist, _ffi.NULL)
    while pair != _ffi.NULL:
        name = _ffi.string(_lib.nvpair_name(pair))
        typeid = int(_lib.nvpair_type(pair))
        typeinfo = _type_info(typeid)
        # XXX nvpair_type_is_array() is broken for DATA_TYPE_INT8_ARRAY at the moment
        # see https://www.illumos.org/issues/5778
        # is_array = bool(_lib.nvpair_type_is_array(pair))
        is_array = typeinfo.is_array
        cfunc = getattr(_lib, "nvpair_value_%s" % (typeinfo.suffix,), None)
        val = None
        ret = 0
        if is_array:
            valptr = _ffi.new(typeinfo.ctype)
            lenptr = _ffi.new("uint_t *")
            ret = cfunc(pair, valptr, lenptr)
            if ret != 0:
                raise RuntimeError('nvpair_value failed')
            length = int(lenptr[0])
            val = []
            for i in range(length):
                val.append(typeinfo.convert(valptr[0][i]))
        else:
            if typeid == _lib.DATA_TYPE_BOOLEAN:
                val = None  # XXX or should it be True ?
            else:
                valptr = _ffi.new(typeinfo.ctype)
                ret = cfunc(pair, valptr)
                if ret != 0:
                    raise RuntimeError('nvpair_value failed')
                val = typeinfo.convert(valptr[0])
        props[name] = val
        pair = _lib.nvlist_next_nvpair(nvlist, pair)
    return props


def _dict_to_nvlist(props, nvlist):
    for k, v in props.items():
        if not isinstance(k, bytes):
            raise TypeError('Unsupported key type ' + type(k).__name__)
        ret = 0
        if isinstance(v, dict):
            ret = _lib.nvlist_add_nvlist(nvlist, k, nvlist_in(v))
        elif isinstance(v, list):
            _nvlist_add_array(nvlist, k, v)
        elif isinstance(v, bytes):
            ret = _lib.nvlist_add_string(nvlist, k, v)
        elif isinstance(v, bool):
            ret = _lib.nvlist_add_boolean_value(nvlist, k, v)
        elif v is None:
            ret = _lib.nvlist_add_boolean(nvlist, k)
        elif isinstance(v, numbers.Integral):
            suffix = _prop_name_to_type_str.get(k, "uint64")
            cfunc = getattr(_lib, "nvlist_add_%s" % (suffix,))
            ret = cfunc(nvlist, k, v)
        elif isinstance(v, _ffi.CData) and _ffi.typeof(v) in _type_to_suffix:
            suffix = _type_to_suffix[_ffi.typeof(v)][False]
            cfunc = getattr(_lib, "nvlist_add_%s" % (suffix,))
            ret = cfunc(nvlist, k, v)
        else:
            raise TypeError('Unsupported value type ' + type(v).__name__)
        if ret != 0:
            raise MemoryError('nvlist_add failed')


# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
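As a quick illustration (a sketch adapted from the round-trip helper in the unit tests further below, not part of the patch), a dictionary can be pushed through a C nvlist_t and read back with these helpers plus libnvpair's nvlist_dup:

from libzfs_core._nvlist import nvlist_in, nvlist_out, _lib

props = {"key": "value", "flag": None, "count": 3}
res = {}
nv_in = nvlist_in(props)            # dict -> nvlist_t (input parameter)
with nvlist_out(res) as nv_out:     # nvlist_t ** (output parameter)
    _lib.nvlist_dup(nv_in, nv_out, 0)
# res now holds the same keys and values as props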
45
contrib/pyzfs/libzfs_core/bindings/__init__.py
Normal file
@ -0,0 +1,45 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
The package that contains a module per each C library that
`libzfs_core` uses. The modules expose CFFI objects required
to make calls to functions in the libraries.
"""

import threading
import importlib

from cffi import FFI


def _setup_cffi():
    class LazyLibrary(object):

        def __init__(self, ffi, libname):
            self._ffi = ffi
            self._libname = libname
            self._lib = None
            self._lock = threading.Lock()

        def __getattr__(self, name):
            if self._lib is None:
                with self._lock:
                    if self._lib is None:
                        self._lib = self._ffi.dlopen(self._libname)

            return getattr(self._lib, name)

    MODULES = ["libnvpair", "libzfs_core"]
    ffi = FFI()

    for module_name in MODULES:
        module = importlib.import_module("." + module_name, __package__)
        ffi.cdef(module.CDEF)
        lib = LazyLibrary(ffi, module.LIBRARY)
        setattr(module, "ffi", ffi)
        setattr(module, "lib", lib)


_setup_cffi()

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
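A small usage sketch (not from the patch): because each library is wrapped in LazyLibrary, importing the bindings package is cheap and the shared object is only dlopen'ed on first attribute access on the lib object.

from libzfs_core.bindings import libnvpair

nvlistp = libnvpair.ffi.new("nvlist_t **")       # no dlopen yet, pure CFFI
err = libnvpair.lib.nvlist_alloc(nvlistp, 1, 0)  # first lib access loads libnvpair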
117
contrib/pyzfs/libzfs_core/bindings/libnvpair.py
Normal file
@ -0,0 +1,117 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Python bindings for ``libnvpair``.
"""

CDEF = """
    typedef ... nvlist_t;
    typedef ... nvpair_t;


    typedef enum {
        DATA_TYPE_UNKNOWN = 0,
        DATA_TYPE_BOOLEAN,
        DATA_TYPE_BYTE,
        DATA_TYPE_INT16,
        DATA_TYPE_UINT16,
        DATA_TYPE_INT32,
        DATA_TYPE_UINT32,
        DATA_TYPE_INT64,
        DATA_TYPE_UINT64,
        DATA_TYPE_STRING,
        DATA_TYPE_BYTE_ARRAY,
        DATA_TYPE_INT16_ARRAY,
        DATA_TYPE_UINT16_ARRAY,
        DATA_TYPE_INT32_ARRAY,
        DATA_TYPE_UINT32_ARRAY,
        DATA_TYPE_INT64_ARRAY,
        DATA_TYPE_UINT64_ARRAY,
        DATA_TYPE_STRING_ARRAY,
        DATA_TYPE_HRTIME,
        DATA_TYPE_NVLIST,
        DATA_TYPE_NVLIST_ARRAY,
        DATA_TYPE_BOOLEAN_VALUE,
        DATA_TYPE_INT8,
        DATA_TYPE_UINT8,
        DATA_TYPE_BOOLEAN_ARRAY,
        DATA_TYPE_INT8_ARRAY,
        DATA_TYPE_UINT8_ARRAY
    } data_type_t;
    typedef enum { B_FALSE, B_TRUE } boolean_t;

    typedef unsigned char uchar_t;
    typedef unsigned int uint_t;

    int nvlist_alloc(nvlist_t **, uint_t, int);
    void nvlist_free(nvlist_t *);

    int nvlist_unpack(char *, size_t, nvlist_t **, int);

    void dump_nvlist(nvlist_t *, int);
    int nvlist_dup(nvlist_t *, nvlist_t **, int);

    int nvlist_add_boolean(nvlist_t *, const char *);
    int nvlist_add_boolean_value(nvlist_t *, const char *, boolean_t);
    int nvlist_add_byte(nvlist_t *, const char *, uchar_t);
    int nvlist_add_int8(nvlist_t *, const char *, int8_t);
    int nvlist_add_uint8(nvlist_t *, const char *, uint8_t);
    int nvlist_add_int16(nvlist_t *, const char *, int16_t);
    int nvlist_add_uint16(nvlist_t *, const char *, uint16_t);
    int nvlist_add_int32(nvlist_t *, const char *, int32_t);
    int nvlist_add_uint32(nvlist_t *, const char *, uint32_t);
    int nvlist_add_int64(nvlist_t *, const char *, int64_t);
    int nvlist_add_uint64(nvlist_t *, const char *, uint64_t);
    int nvlist_add_string(nvlist_t *, const char *, const char *);
    int nvlist_add_nvlist(nvlist_t *, const char *, nvlist_t *);
    int nvlist_add_boolean_array(nvlist_t *, const char *, boolean_t *, uint_t);
    int nvlist_add_byte_array(nvlist_t *, const char *, uchar_t *, uint_t);
    int nvlist_add_int8_array(nvlist_t *, const char *, int8_t *, uint_t);
    int nvlist_add_uint8_array(nvlist_t *, const char *, uint8_t *, uint_t);
    int nvlist_add_int16_array(nvlist_t *, const char *, int16_t *, uint_t);
    int nvlist_add_uint16_array(nvlist_t *, const char *, uint16_t *, uint_t);
    int nvlist_add_int32_array(nvlist_t *, const char *, int32_t *, uint_t);
    int nvlist_add_uint32_array(nvlist_t *, const char *, uint32_t *, uint_t);
    int nvlist_add_int64_array(nvlist_t *, const char *, int64_t *, uint_t);
    int nvlist_add_uint64_array(nvlist_t *, const char *, uint64_t *, uint_t);
    int nvlist_add_string_array(nvlist_t *, const char *, char *const *, uint_t);
    int nvlist_add_nvlist_array(nvlist_t *, const char *, nvlist_t **, uint_t);

    nvpair_t *nvlist_next_nvpair(nvlist_t *, nvpair_t *);
    nvpair_t *nvlist_prev_nvpair(nvlist_t *, nvpair_t *);
    char *nvpair_name(nvpair_t *);
    data_type_t nvpair_type(nvpair_t *);
    int nvpair_type_is_array(nvpair_t *);
    int nvpair_value_boolean_value(nvpair_t *, boolean_t *);
    int nvpair_value_byte(nvpair_t *, uchar_t *);
    int nvpair_value_int8(nvpair_t *, int8_t *);
    int nvpair_value_uint8(nvpair_t *, uint8_t *);
    int nvpair_value_int16(nvpair_t *, int16_t *);
    int nvpair_value_uint16(nvpair_t *, uint16_t *);
    int nvpair_value_int32(nvpair_t *, int32_t *);
    int nvpair_value_uint32(nvpair_t *, uint32_t *);
    int nvpair_value_int64(nvpair_t *, int64_t *);
    int nvpair_value_uint64(nvpair_t *, uint64_t *);
    int nvpair_value_string(nvpair_t *, char **);
    int nvpair_value_nvlist(nvpair_t *, nvlist_t **);
    int nvpair_value_boolean_array(nvpair_t *, boolean_t **, uint_t *);
    int nvpair_value_byte_array(nvpair_t *, uchar_t **, uint_t *);
    int nvpair_value_int8_array(nvpair_t *, int8_t **, uint_t *);
    int nvpair_value_uint8_array(nvpair_t *, uint8_t **, uint_t *);
    int nvpair_value_int16_array(nvpair_t *, int16_t **, uint_t *);
    int nvpair_value_uint16_array(nvpair_t *, uint16_t **, uint_t *);
    int nvpair_value_int32_array(nvpair_t *, int32_t **, uint_t *);
    int nvpair_value_uint32_array(nvpair_t *, uint32_t **, uint_t *);
    int nvpair_value_int64_array(nvpair_t *, int64_t **, uint_t *);
    int nvpair_value_uint64_array(nvpair_t *, uint64_t **, uint_t *);
    int nvpair_value_string_array(nvpair_t *, char ***, uint_t *);
    int nvpair_value_nvlist_array(nvpair_t *, nvlist_t ***, uint_t *);
"""

SOURCE = """
#include <libzfs/sys/nvpair.h>
"""

LIBRARY = "nvpair"

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
99
contrib/pyzfs/libzfs_core/bindings/libzfs_core.py
Normal file
@ -0,0 +1,99 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Python bindings for ``libzfs_core``.
"""

CDEF = """
    enum lzc_send_flags {
        LZC_SEND_FLAG_EMBED_DATA = 1,
        LZC_SEND_FLAG_LARGE_BLOCK = 2
    };

    typedef enum {
        DMU_OST_NONE,
        DMU_OST_META,
        DMU_OST_ZFS,
        DMU_OST_ZVOL,
        DMU_OST_OTHER,
        DMU_OST_ANY,
        DMU_OST_NUMTYPES
    } dmu_objset_type_t;

    #define MAXNAMELEN 256

    struct drr_begin {
        uint64_t drr_magic;
        uint64_t drr_versioninfo; /* was drr_version */
        uint64_t drr_creation_time;
        dmu_objset_type_t drr_type;
        uint32_t drr_flags;
        uint64_t drr_toguid;
        uint64_t drr_fromguid;
        char drr_toname[MAXNAMELEN];
    };

    typedef struct zio_cksum {
        uint64_t zc_word[4];
    } zio_cksum_t;

    typedef struct dmu_replay_record {
        enum {
            DRR_BEGIN, DRR_OBJECT, DRR_FREEOBJECTS,
            DRR_WRITE, DRR_FREE, DRR_END, DRR_WRITE_BYREF,
            DRR_SPILL, DRR_WRITE_EMBEDDED, DRR_NUMTYPES
        } drr_type;
        uint32_t drr_payloadlen;
        union {
            struct drr_begin drr_begin;
            /* ... */
            struct drr_checksum {
                uint64_t drr_pad[34];
                zio_cksum_t drr_checksum;
            } drr_checksum;
        } drr_u;
    } dmu_replay_record_t;

    int libzfs_core_init(void);
    void libzfs_core_fini(void);

    int lzc_snapshot(nvlist_t *, nvlist_t *, nvlist_t **);
    int lzc_create(const char *, dmu_objset_type_t, nvlist_t *);
    int lzc_clone(const char *, const char *, nvlist_t *);
    int lzc_destroy_snaps(nvlist_t *, boolean_t, nvlist_t **);
    int lzc_bookmark(nvlist_t *, nvlist_t **);
    int lzc_get_bookmarks(const char *, nvlist_t *, nvlist_t **);
    int lzc_destroy_bookmarks(nvlist_t *, nvlist_t **);

    int lzc_snaprange_space(const char *, const char *, uint64_t *);

    int lzc_hold(nvlist_t *, int, nvlist_t **);
    int lzc_release(nvlist_t *, nvlist_t **);
    int lzc_get_holds(const char *, nvlist_t **);

    int lzc_send(const char *, const char *, int, enum lzc_send_flags);
    int lzc_send_space(const char *, const char *, enum lzc_send_flags, uint64_t *);
    int lzc_receive(const char *, nvlist_t *, const char *, boolean_t, int);
    int lzc_receive_with_header(const char *, nvlist_t *, const char *, boolean_t,
        boolean_t, int, const struct dmu_replay_record *);

    boolean_t lzc_exists(const char *);

    int lzc_rollback(const char *, char *, int);
    int lzc_rollback_to(const char *, const char *);

    int lzc_promote(const char *, nvlist_t *, nvlist_t **);
    int lzc_rename(const char *, const char *, nvlist_t *, char **);
    int lzc_destroy_one(const char *fsname, nvlist_t *);
    int lzc_inherit(const char *fsname, const char *name, nvlist_t *);
    int lzc_set_props(const char *, nvlist_t *, nvlist_t *, nvlist_t *);
    int lzc_list (const char *, nvlist_t *);
"""

SOURCE = """
#include <libzfs/libzfs_core.h>
"""

LIBRARY = "zfs_core"

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
56
contrib/pyzfs/libzfs_core/ctypes.py
Normal file
@ -0,0 +1,56 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Utility functions for casting to a specific C type.
"""

from .bindings.libnvpair import ffi as _ffi


def _ffi_cast(type_name):
    type_info = _ffi.typeof(type_name)

    def _func(value):
        # this is for overflow / underflow checking only
        if type_info.kind == 'enum':
            try:
                type_info.elements[value]
            except KeyError as e:
                raise OverflowError('Invalid enum <%s> value %s' %
                                    (type_info.cname, e.message))
        else:
            _ffi.new(type_name + '*', value)
        return _ffi.cast(type_name, value)
    _func.__name__ = type_name
    return _func


uint8_t = _ffi_cast('uint8_t')
int8_t = _ffi_cast('int8_t')
uint16_t = _ffi_cast('uint16_t')
int16_t = _ffi_cast('int16_t')
uint32_t = _ffi_cast('uint32_t')
int32_t = _ffi_cast('int32_t')
uint64_t = _ffi_cast('uint64_t')
int64_t = _ffi_cast('int64_t')
boolean_t = _ffi_cast('boolean_t')
uchar_t = _ffi_cast('uchar_t')


# First element of the value tuple is a suffix for a single value function
# while the second element is for an array function
_type_to_suffix = {
    _ffi.typeof('uint8_t'): ('uint8', 'uint8'),
    _ffi.typeof('int8_t'): ('int8', 'int8'),
    _ffi.typeof('uint16_t'): ('uint16', 'uint16'),
    _ffi.typeof('int16_t'): ('int16', 'int16'),
    _ffi.typeof('uint32_t'): ('uint32', 'uint32'),
    _ffi.typeof('int32_t'): ('int32', 'int32'),
    _ffi.typeof('uint64_t'): ('uint64', 'uint64'),
    _ffi.typeof('int64_t'): ('int64', 'int64'),
    _ffi.typeof('boolean_t'): ('boolean_value', 'boolean'),
    _ffi.typeof('uchar_t'): ('byte', 'byte'),
}


# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
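For context, a sketch drawn from the unit tests further below (not part of the patch): the casting helpers let a caller force a specific nvpair type instead of the default uint64, and they reject out-of-range values at construction time.

from libzfs_core.ctypes import uint32_t, boolean_t

props = {"key": uint32_t(1), "flag": boolean_t(1)}  # stored as uint32 / boolean_value nvpairs
uint32_t(2 ** 32)                                   # raises OverflowError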
443
contrib/pyzfs/libzfs_core/exceptions.py
Normal file
@ -0,0 +1,443 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Exceptions that can be raised by libzfs_core operations.
"""

import errno


class ZFSError(Exception):
    errno = None
    message = None
    name = None

    def __str__(self):
        if self.name is not None:
            return "[Errno %d] %s: '%s'" % (self.errno, self.message, self.name)
        else:
            return "[Errno %d] %s" % (self.errno, self.message)

    def __repr__(self):
        return "%s(%r, %r)" % (self.__class__.__name__, self.errno, self.message)


class ZFSGenericError(ZFSError):

    def __init__(self, errno, name, message):
        self.errno = errno
        self.message = message
        self.name = name


class ZFSInitializationFailed(ZFSError):
    message = "Failed to initialize libzfs_core"

    def __init__(self, errno):
        self.errno = errno


class MultipleOperationsFailure(ZFSError):

    def __init__(self, errors, suppressed_count):
        # Use first of the individual error codes
        # as an overall error code. This is more consistent.
        self.errno = errors[0].errno
        self.errors = errors
        #: this many errors were encountered but not placed on the `errors` list
        self.suppressed_count = suppressed_count

    def __str__(self):
        return "%s, %d errors included, %d suppressed" % (ZFSError.__str__(self),
            len(self.errors), self.suppressed_count)

    def __repr__(self):
        return "%s(%r, %r, errors=%r, suppressed=%r)" % (self.__class__.__name__,
            self.errno, self.message, self.errors, self.suppressed_count)


class DatasetNotFound(ZFSError):

    """
    This exception is raised when an operation failure can be caused by a missing
    snapshot or a missing filesystem and it is impossible to distinguish between
    the causes.
    """
    errno = errno.ENOENT
    message = "Dataset not found"

    def __init__(self, name):
        self.name = name


class DatasetExists(ZFSError):

    """
    This exception is raised when an operation failure can be caused by an existing
    snapshot or filesystem and it is impossible to distinguish between
    the causes.
    """
    errno = errno.EEXIST
    message = "Dataset already exists"

    def __init__(self, name):
        self.name = name


class NotClone(ZFSError):
    errno = errno.EINVAL
    message = "Filesystem is not a clone, can not promote"

    def __init__(self, name):
        self.name = name


class FilesystemExists(DatasetExists):
    message = "Filesystem already exists"

    def __init__(self, name):
        self.name = name


class FilesystemNotFound(DatasetNotFound):
    message = "Filesystem not found"

    def __init__(self, name):
        self.name = name


class ParentNotFound(ZFSError):
    errno = errno.ENOENT
    message = "Parent not found"

    def __init__(self, name):
        self.name = name


class WrongParent(ZFSError):
    errno = errno.EINVAL
    message = "Parent dataset is not a filesystem"

    def __init__(self, name):
        self.name = name


class SnapshotExists(DatasetExists):
    message = "Snapshot already exists"

    def __init__(self, name):
        self.name = name


class SnapshotNotFound(DatasetNotFound):
    message = "Snapshot not found"

    def __init__(self, name):
        self.name = name

class SnapshotNotLatest(ZFSError):
    errno = errno.EEXIST
    message = "Snapshot is not the latest"

    def __init__(self, name):
        self.name = name

class SnapshotIsCloned(ZFSError):
    errno = errno.EEXIST
    message = "Snapshot is cloned"

    def __init__(self, name):
        self.name = name


class SnapshotIsHeld(ZFSError):
    errno = errno.EBUSY
    message = "Snapshot is held"

    def __init__(self, name):
        self.name = name


class DuplicateSnapshots(ZFSError):
    errno = errno.EXDEV
    message = "Requested multiple snapshots of the same filesystem"

    def __init__(self, name):
        self.name = name


class SnapshotFailure(MultipleOperationsFailure):
    message = "Creation of snapshot(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(SnapshotFailure, self).__init__(errors, suppressed_count)


class SnapshotDestructionFailure(MultipleOperationsFailure):
    message = "Destruction of snapshot(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(SnapshotDestructionFailure, self).__init__(errors, suppressed_count)


class BookmarkExists(ZFSError):
    errno = errno.EEXIST
    message = "Bookmark already exists"

    def __init__(self, name):
        self.name = name


class BookmarkNotFound(ZFSError):
    errno = errno.ENOENT
    message = "Bookmark not found"

    def __init__(self, name):
        self.name = name


class BookmarkMismatch(ZFSError):
    errno = errno.EINVAL
    message = "Bookmark is not in snapshot's filesystem"

    def __init__(self, name):
        self.name = name


class BookmarkNotSupported(ZFSError):
    errno = errno.ENOTSUP
    message = "Bookmark feature is not supported"

    def __init__(self, name):
        self.name = name


class BookmarkFailure(MultipleOperationsFailure):
    message = "Creation of bookmark(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(BookmarkFailure, self).__init__(errors, suppressed_count)


class BookmarkDestructionFailure(MultipleOperationsFailure):
    message = "Destruction of bookmark(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(BookmarkDestructionFailure, self).__init__(errors, suppressed_count)


class BadHoldCleanupFD(ZFSError):
    errno = errno.EBADF
    message = "Bad file descriptor as cleanup file descriptor"


class HoldExists(ZFSError):
    errno = errno.EEXIST
    message = "Hold with a given tag already exists on snapshot"

    def __init__(self, name):
        self.name = name


class HoldNotFound(ZFSError):
    errno = errno.ENOENT
    message = "Hold with a given tag does not exist on snapshot"

    def __init__(self, name):
        self.name = name


class HoldFailure(MultipleOperationsFailure):
    message = "Placement of hold(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(HoldFailure, self).__init__(errors, suppressed_count)


class HoldReleaseFailure(MultipleOperationsFailure):
    message = "Release of hold(s) failed for one or more reasons"

    def __init__(self, errors, suppressed_count):
        super(HoldReleaseFailure, self).__init__(errors, suppressed_count)


class SnapshotMismatch(ZFSError):
    errno = errno.ENODEV
    message = "Snapshot is not descendant of source snapshot"

    def __init__(self, name):
        self.name = name


class StreamMismatch(ZFSError):
    errno = errno.ENODEV
    message = "Stream is not applicable to destination dataset"

    def __init__(self, name):
        self.name = name


class DestinationModified(ZFSError):
    errno = errno.ETXTBSY
    message = "Destination dataset has modifications that can not be undone"

    def __init__(self, name):
        self.name = name


class BadStream(ZFSError):
    errno = errno.EINVAL
    message = "Bad backup stream"


class StreamFeatureNotSupported(ZFSError):
    errno = errno.ENOTSUP
    message = "Stream contains unsupported feature"


class UnknownStreamFeature(ZFSError):
    errno = errno.ENOTSUP
    message = "Unknown feature requested for stream"


class StreamIOError(ZFSError):
    message = "I/O error while writing or reading stream"

    def __init__(self, errno):
        self.errno = errno


class ZIOError(ZFSError):
    errno = errno.EIO
    message = "I/O error"

    def __init__(self, name):
        self.name = name


class NoSpace(ZFSError):
    errno = errno.ENOSPC
    message = "No space left"

    def __init__(self, name):
        self.name = name


class QuotaExceeded(ZFSError):
    errno = errno.EDQUOT
    message = "Quota exceeded"

    def __init__(self, name):
        self.name = name


class DatasetBusy(ZFSError):
    errno = errno.EBUSY
    message = "Dataset is busy"

    def __init__(self, name):
        self.name = name


class NameTooLong(ZFSError):
    errno = errno.ENAMETOOLONG
    message = "Dataset name is too long"

    def __init__(self, name):
        self.name = name


class NameInvalid(ZFSError):
    errno = errno.EINVAL
    message = "Invalid name"

    def __init__(self, name):
        self.name = name


class SnapshotNameInvalid(NameInvalid):
    message = "Invalid name for snapshot"

    def __init__(self, name):
        self.name = name


class FilesystemNameInvalid(NameInvalid):
    message = "Invalid name for filesystem or volume"

    def __init__(self, name):
        self.name = name


class BookmarkNameInvalid(NameInvalid):
    message = "Invalid name for bookmark"

    def __init__(self, name):
        self.name = name


class ReadOnlyPool(ZFSError):
    errno = errno.EROFS
    message = "Pool is read-only"

    def __init__(self, name):
        self.name = name


class SuspendedPool(ZFSError):
    errno = errno.EAGAIN
    message = "Pool is suspended"

    def __init__(self, name):
        self.name = name


class PoolNotFound(ZFSError):
    errno = errno.EXDEV
    message = "No such pool"

    def __init__(self, name):
        self.name = name


class PoolsDiffer(ZFSError):
    errno = errno.EXDEV
    message = "Source and target belong to different pools"

    def __init__(self, name):
        self.name = name


class FeatureNotSupported(ZFSError):
    errno = errno.ENOTSUP
    message = "Feature is not supported in this version"

    def __init__(self, name):
        self.name = name


class PropertyNotSupported(ZFSError):
    errno = errno.ENOTSUP
    message = "Property is not supported in this version"

    def __init__(self, name):
        self.name = name


class PropertyInvalid(ZFSError):
    errno = errno.EINVAL
    message = "Invalid property or property value"

    def __init__(self, name):
        self.name = name


class DatasetTypeInvalid(ZFSError):
    errno = errno.EINVAL
    message = "Specified dataset type is unknown"

    def __init__(self, name):
        self.name = name


# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
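To show how these classes are meant to be consumed (a sketch, not part of the patch; it assumes the top-level package re-exports the lzc_* wrappers from the _libzfs_core module whose diff is elided above), a caller typically catches a compound *Failure and inspects its parts:

import libzfs_core as lzc
from libzfs_core import exceptions as lzc_exc

try:
    lzc.lzc_snapshot([b'pool/fs@snap'])
except lzc_exc.SnapshotFailure as e:
    for err in e.errors:                 # one ZFSError per failed name
        print(err.name, err.message)
    print('suppressed:', e.suppressed_count)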
0
contrib/pyzfs/libzfs_core/test/__init__.py
Normal file
3708
contrib/pyzfs/libzfs_core/test/test_libzfs_core.py
Normal file
File diff suppressed because it is too large
612
contrib/pyzfs/libzfs_core/test/test_nvlist.py
Normal file
@ -0,0 +1,612 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

"""
Tests for _nvlist module.
The tests convert from a `dict` to C ``nvlist_t`` and back to a `dict`
and verify that no information is lost and value types are correct.
The tests also check that various error conditions like unsupported
value types or out of bounds values are detected.
"""

import unittest

from .._nvlist import nvlist_in, nvlist_out, _lib
from ..ctypes import (
    uint8_t, int8_t, uint16_t, int16_t, uint32_t, int32_t,
    uint64_t, int64_t, boolean_t, uchar_t
)


class TestNVList(unittest.TestCase):

    def _dict_to_nvlist_to_dict(self, props):
        res = {}
        nv_in = nvlist_in(props)
        with nvlist_out(res) as nv_out:
            _lib.nvlist_dup(nv_in, nv_out, 0)
        return res

    def _assertIntDictsEqual(self, dict1, dict2):
        self.assertEqual(len(dict1), len(dict2), "resulting dictionary is of different size")
        for key in dict1.keys():
            self.assertEqual(int(dict1[key]), int(dict2[key]))

    def _assertIntArrayDictsEqual(self, dict1, dict2):
        self.assertEqual(len(dict1), len(dict2), "resulting dictionary is of different size")
        for key in dict1.keys():
            val1 = dict1[key]
            val2 = dict2[key]
            self.assertEqual(len(val1), len(val2), "array values of different sizes")
            for x, y in zip(val1, val2):
                self.assertEqual(int(x), int(y))

    def test_empty(self):
        res = self._dict_to_nvlist_to_dict({})
        self.assertEqual(len(res), 0, "expected empty dict")

    def test_invalid_key_type(self):
        with self.assertRaises(TypeError):
            self._dict_to_nvlist_to_dict({1: None})

    def test_invalid_val_type__tuple(self):
        with self.assertRaises(TypeError):
            self._dict_to_nvlist_to_dict({"key": (1, 2)})

    def test_invalid_val_type__set(self):
        with self.assertRaises(TypeError):
            self._dict_to_nvlist_to_dict({"key": set([1, 2])})
|
||||||
|
|
||||||
|
def test_invalid_array_val_type(self):
|
||||||
|
with self.assertRaises(TypeError):
|
||||||
|
self._dict_to_nvlist_to_dict({"key": [(1, 2), (3, 4)]})
|
||||||
|
|
||||||
|
def test_invalid_array_of_arrays_val_type(self):
|
||||||
|
with self.assertRaises(TypeError):
|
||||||
|
self._dict_to_nvlist_to_dict({"key": [[1, 2], [3, 4]]})
|
||||||
|
|
||||||
|
def test_string_value(self):
|
||||||
|
props = {"key": "value"}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self.assertEqual(props, res)
|
||||||
|
|
||||||
|
def test_implicit_boolean_value(self):
|
||||||
|
props = {"key": None}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self.assertEqual(props, res)
|
||||||
|
|
||||||
|
def test_boolean_values(self):
|
||||||
|
props = {"key1": True, "key2": False}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self.assertEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_boolean_true_value(self):
|
||||||
|
props = {"key": boolean_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_boolean_false_value(self):
|
||||||
|
props = {"key": boolean_t(0)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_boolean_invalid_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": boolean_t(2)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_boolean_another_invalid_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": boolean_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_uint64_value(self):
|
||||||
|
props = {"key": 1}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self.assertEqual(props, res)
|
||||||
|
|
||||||
|
def test_uint64_max_value(self):
|
||||||
|
props = {"key": 2 ** 64 - 1}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self.assertEqual(props, res)
|
||||||
|
|
||||||
|
def test_uint64_too_large_value(self):
|
||||||
|
props = {"key": 2 ** 64}
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_uint64_negative_value(self):
|
||||||
|
props = {"key": -1}
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint64_value(self):
|
||||||
|
props = {"key": uint64_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint64_max_value(self):
|
||||||
|
props = {"key": uint64_t(2 ** 64 - 1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint64_too_large_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint64_t(2 ** 64)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint64_negative_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint64_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint32_value(self):
|
||||||
|
props = {"key": uint32_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint32_max_value(self):
|
||||||
|
props = {"key": uint32_t(2 ** 32 - 1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint32_too_large_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint32_t(2 ** 32)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint32_negative_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint32_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint16_value(self):
|
||||||
|
props = {"key": uint16_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint16_max_value(self):
|
||||||
|
props = {"key": uint16_t(2 ** 16 - 1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint16_too_large_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint16_t(2 ** 16)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint16_negative_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint16_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint8_value(self):
|
||||||
|
props = {"key": uint8_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint8_max_value(self):
|
||||||
|
props = {"key": uint8_t(2 ** 8 - 1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_uint8_too_large_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint8_t(2 ** 8)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_uint8_negative_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uint8_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_byte_value(self):
|
||||||
|
props = {"key": uchar_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_byte_max_value(self):
|
||||||
|
props = {"key": uchar_t(2 ** 8 - 1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
self._assertIntDictsEqual(props, res)
|
||||||
|
|
||||||
|
def test_explicit_byte_too_large_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uchar_t(2 ** 8)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_byte_negative_value(self):
|
||||||
|
with self.assertRaises(OverflowError):
|
||||||
|
props = {"key": uchar_t(-1)}
|
||||||
|
self._dict_to_nvlist_to_dict(props)
|
||||||
|
|
||||||
|
def test_explicit_int64_value(self):
|
||||||
|
props = {"key": int64_t(1)}
|
||||||
|
res = self._dict_to_nvlist_to_dict(props)
|
||||||
|
    self._assertIntDictsEqual(props, res)

def test_explicit_int64_max_value(self):
    props = {"key": int64_t(2 ** 63 - 1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int64_min_value(self):
    props = {"key": int64_t(-(2 ** 63))}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int64_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int64_t(2 ** 63)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int64_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int64_t(-(2 ** 63) - 1)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int32_value(self):
    props = {"key": int32_t(1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int32_max_value(self):
    props = {"key": int32_t(2 ** 31 - 1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int32_min_value(self):
    props = {"key": int32_t(-(2 ** 31))}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int32_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int32_t(2 ** 31)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int32_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int32_t(-(2 ** 31) - 1)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int16_value(self):
    props = {"key": int16_t(1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int16_max_value(self):
    props = {"key": int16_t(2 ** 15 - 1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int16_min_value(self):
    props = {"key": int16_t(-(2 ** 15))}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int16_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int16_t(2 ** 15)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int16_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int16_t(-(2 ** 15) - 1)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int8_value(self):
    props = {"key": int8_t(1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int8_max_value(self):
    props = {"key": int8_t(2 ** 7 - 1)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int8_min_value(self):
    props = {"key": int8_t(-(2 ** 7))}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_explicit_int8_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int8_t(2 ** 7)}
        self._dict_to_nvlist_to_dict(props)

def test_explicit_int8_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": int8_t(-(2 ** 7) - 1)}
        self._dict_to_nvlist_to_dict(props)
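
# Containers: nested nvlists, homogeneous typed arrays, and arrays of
# nvlists must survive the round trip unchanged, while arrays that mix
# element types must be rejected with TypeError.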
def test_nested_dict(self):
    props = {"key": {}}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)

def test_nested_nested_dict(self):
    props = {"key": {"key": {}}}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)

def test_mismatching_values_array(self):
    props = {"key": [1, "string"]}
    with self.assertRaises(TypeError):
        self._dict_to_nvlist_to_dict(props)

def test_mismatching_values_array2(self):
    props = {"key": [True, 10]}
    with self.assertRaises(TypeError):
        self._dict_to_nvlist_to_dict(props)

def test_mismatching_values_array3(self):
    props = {"key": [1, False]}
    with self.assertRaises(TypeError):
        self._dict_to_nvlist_to_dict(props)

def test_string_array(self):
    props = {"key": ["value", "value2"]}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)

def test_boolean_array(self):
    props = {"key": [True, False]}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)

def test_explicit_boolean_array(self):
    props = {"key": [boolean_t(False), boolean_t(True)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_uint64_array(self):
    props = {"key": [0, 1, 2 ** 64 - 1]}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)

def test_uint64_array_too_large_value(self):
    props = {"key": [0, 2 ** 64]}
    with self.assertRaises(OverflowError):
        self._dict_to_nvlist_to_dict(props)

def test_uint64_array_negative_value(self):
    props = {"key": [0, -1]}
    with self.assertRaises(OverflowError):
        self._dict_to_nvlist_to_dict(props)

def test_mixed_explict_int_array(self):
    with self.assertRaises(TypeError):
        props = {"key": [uint64_t(0), uint32_t(0)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint64_array(self):
    props = {"key": [uint64_t(0), uint64_t(1), uint64_t(2 ** 64 - 1)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_uint64_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint64_t(0), uint64_t(2 ** 64)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint64_array_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint64_t(0), uint64_t(-1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint32_array(self):
    props = {"key": [uint32_t(0), uint32_t(1), uint32_t(2 ** 32 - 1)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_uint32_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint32_t(0), uint32_t(2 ** 32)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint32_array_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint32_t(0), uint32_t(-1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint16_array(self):
    props = {"key": [uint16_t(0), uint16_t(1), uint16_t(2 ** 16 - 1)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_uint16_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint16_t(0), uint16_t(2 ** 16)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint16_array_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint16_t(0), uint16_t(-1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint8_array(self):
    props = {"key": [uint8_t(0), uint8_t(1), uint8_t(2 ** 8 - 1)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_uint8_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint8_t(0), uint8_t(2 ** 8)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_uint8_array_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uint8_t(0), uint8_t(-1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_byte_array(self):
    props = {"key": [uchar_t(0), uchar_t(1), uchar_t(2 ** 8 - 1)]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_byte_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uchar_t(0), uchar_t(2 ** 8)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_byte_array_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [uchar_t(0), uchar_t(-1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int64_array(self):
    props = {"key": [int64_t(0), int64_t(1), int64_t(2 ** 63 - 1), int64_t(-(2 ** 63))]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_int64_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int64_t(0), int64_t(2 ** 63)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int64_array_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int64_t(0), int64_t(-(2 ** 63) - 1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int32_array(self):
    props = {"key": [int32_t(0), int32_t(1), int32_t(2 ** 31 - 1), int32_t(-(2 ** 31))]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_int32_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int32_t(0), int32_t(2 ** 31)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int32_array_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int32_t(0), int32_t(-(2 ** 31) - 1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int16_array(self):
    props = {"key": [int16_t(0), int16_t(1), int16_t(2 ** 15 - 1), int16_t(-(2 ** 15))]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_int16_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int16_t(0), int16_t(2 ** 15)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int16_array_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int16_t(0), int16_t(-(2 ** 15) - 1)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int8_array(self):
    props = {"key": [int8_t(0), int8_t(1), int8_t(2 ** 7 - 1), int8_t(-(2 ** 7))]}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntArrayDictsEqual(props, res)

def test_explict_int8_array_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int8_t(0), int8_t(2 ** 7)]}
        self._dict_to_nvlist_to_dict(props)

def test_explict_int8_array_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"key": [int8_t(0), int8_t(-(2 ** 7) - 1)]}
        self._dict_to_nvlist_to_dict(props)

def test_dict_array(self):
    props = {"key": [{"key": 1}, {"key": None}, {"key": {}}]}
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)
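
# Implicit typing: a few well-known keys ("rewind-request", "pool_context")
# are packed as uint32/int32 rather than the default uint64, so their range
# checks differ from the generic integer case.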
def test_implicit_uint32_value(self):
    props = {"rewind-request": 1}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_implicit_uint32_max_value(self):
    props = {"rewind-request": 2 ** 32 - 1}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_implicit_uint32_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"rewind-request": 2 ** 32}
        self._dict_to_nvlist_to_dict(props)

def test_implicit_uint32_negative_value(self):
    with self.assertRaises(OverflowError):
        props = {"rewind-request": -1}
        self._dict_to_nvlist_to_dict(props)

def test_implicit_int32_value(self):
    props = {"pool_context": 1}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_implicit_int32_max_value(self):
    props = {"pool_context": 2 ** 31 - 1}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_implicit_int32_min_value(self):
    props = {"pool_context": -(2 ** 31)}
    res = self._dict_to_nvlist_to_dict(props)
    self._assertIntDictsEqual(props, res)

def test_implicit_int32_too_large_value(self):
    with self.assertRaises(OverflowError):
        props = {"pool_context": 2 ** 31}
        self._dict_to_nvlist_to_dict(props)

def test_implicit_int32_too_small_value(self):
    with self.assertRaises(OverflowError):
        props = {"pool_context": -(2 ** 31) - 1}
        self._dict_to_nvlist_to_dict(props)

def test_complex_dict(self):
    props = {
        "key1": "str",
        "key2": 10,
        "key3": {
            "skey1": True,
            "skey2": None,
            "skey3": [
                True,
                False,
                True
            ]
        },
        "key4": [
            "ab",
            "bc"
        ],
        "key5": [
            2 ** 64 - 1,
            1,
            2,
            3
        ],
        "key6": [
            {
                "skey71": "a",
                "skey72": "b",
            },
            {
                "skey71": "c",
                "skey72": "d",
            },
            {
                "skey71": "e",
                "skey72": "f",
            }
        ],
        "type": 2 ** 32 - 1,
        "pool_context": -(2 ** 31)
    }
    res = self._dict_to_nvlist_to_dict(props)
    self.assertEqual(props, res)


# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
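
For orientation, the convention the tests above exercise is: plain Python ints are packed as uint64 by default, a few well-known keys are packed as 32-bit values, and the explicit wrapper types force a particular width and raise OverflowError when a value does not fit. A minimal sketch of building such a property dict, assuming the wrappers can be imported from a ctypes helper module (that import path is an assumption, not something shown in this diff):

# Sketch only; the import path below is assumed for illustration.
from libzfs_core.ctypes import int8_t, uint32_t

props = {
    "some-count": 42,               # plain int: packed as uint64 by default
    "pool_context": -1,             # well-known key: packed as int32
    "rewind-request": 2 ** 32 - 1,  # well-known key: packed as uint32
    "small": int8_t(100),           # explicit int8
    "mask": uint32_t(0xffffffff),   # explicit uint32
}
# Such a dict is what the round-trip helper in the tests above converts to
# an nvlist and back; out-of-range values (e.g. int8_t(1000)) are rejected
# with OverflowError, as the tests check.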
40
contrib/pyzfs/setup.py
Normal file
@ -0,0 +1,40 @@
# Copyright 2015 ClusterHQ. See LICENSE file for details.

from setuptools import setup, find_packages

setup(
    name="pyzfs",
    version="0.2.3",
    description="Wrapper for libzfs_core",
    author="ClusterHQ",
    author_email="support@clusterhq.com",
    url="http://pyzfs.readthedocs.org",
    license="Apache License, Version 2.0",
    classifiers=[
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: Apache Software License",
        "Programming Language :: Python :: 2 :: Only",
        "Programming Language :: Python :: 2.7",
        "Topic :: System :: Filesystems",
        "Topic :: Software Development :: Libraries",
    ],
    keywords=[
        "ZFS",
        "OpenZFS",
        "libzfs_core",
    ],

    packages=find_packages(),
    include_package_data=True,
    install_requires=[
        "cffi",
    ],
    setup_requires=[
        "cffi",
    ],
    zip_safe=False,
    test_suite="libzfs_core.test",
)

# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4
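
With test_suite set to "libzfs_core.test", the packaged tests can also be driven with the stock unittest runner. A minimal sketch, assuming it is run from the contrib/pyzfs source directory with the cffi dependency installed:

# Sketch only: discover and run the bundled tests with the standard
# unittest loader.  top_level_dir="." keeps the test modules importable
# as part of the libzfs_core package so their relative imports work.
import unittest

suite = unittest.defaultTestLoader.discover(
    "libzfs_core/test", top_level_dir=".")
unittest.TextTestRunner(verbosity=2).run(suite)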