It turns out that older versions of the glibc headers do not
properly define MS_DIRSYNC despite it being explicitly mentioned
in the man pages. They instead call it S_WRITE, so for systems
where MS_DIRSYNC is not correctly defined, map it to S_WRITE.
At the time of this commit both Ubuntu Lucid and Debian Squeeze
use the out of date glibc headers.
As for MS_REC, this flag is also not available in the older headers.
Since there is no obvious mapping in this case we simply disable
the recursive mount option which used it.
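A minimal sketch of the fallback, assuming the usual guard-style
compatibility defines (defining MS_REC to zero is one possible way
to neutralize the recursive option):

    #include <sys/mount.h>

    /* Older glibc headers name this flag S_WRITE, not MS_DIRSYNC. */
    #if !defined(MS_DIRSYNC)
    #define MS_DIRSYNC      S_WRITE
    #endif

    /* MS_REC has no equivalent in the old headers; defining it to
     * zero makes the recursive mount option a harmless no-op. */
    #if !defined(MS_REC)
    #define MS_REC          0
    #endif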
The inclusion of the dlsym(), dlopen(), and dlclose() symbols requires
us to link against the dl library. Be careful to add the flag to
both the libzfs library and the commands which depend on the library.
This file has gotten stale and needed to be updated. There are
individuals who deserve to be recognized for their contributions
to the project. I've done my best to assemble names from the
commit logs of those who have submitted patches. This list may
not be comprehensive; if you feel I've overlooked your contribution
please let me know and we can get your name added.
Initial testing has shown that the right IO scheduler to use under Linux
is noop. This strikes the ideal balance: the zfs elevator does all
request ordering and prioritization, while the Linux elevator performs
the maximum front/back merging allowed by the physical device. This
yields the largest possible requests for the device with the lowest
total overhead.
While 'noop' should be right for most systems, you can choose a different
IO scheduler with the 'zfs_vdev_scheduler' module option. You may set this
value to any of the standard Linux schedulers: noop, cfq, deadline,
anticipatory. In addition, if you set it to 'none' zfs will not attempt
to change the IO scheduler for the block device.
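A sketch of how such a module option is typically declared; the code
which actually applies the chosen elevator to each vdev's block device
is omitted:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static char *zfs_vdev_scheduler = "noop";
    module_param(zfs_vdev_scheduler, charp, 0644);
    MODULE_PARM_DESC(zfs_vdev_scheduler,
        "I/O scheduler to set on vdev block devices ('none' to leave it alone)");

For example, loading the module with 'modprobe zfs
zfs_vdev_scheduler=deadline' would select the deadline elevator instead.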
The following warning was observed under normal operation. It's
not fatal but it's something to be addressed long term. Flag the
offending allocation with KM_NODEBUG to suppress the warning and
mark the call site for future review.
SPL: Showing stack for process 21761
Pid: 21761, comm: iozone Tainted: P ----------------
2.6.32-71.14.1.el6.x86_64 #1
Call Trace:
[<ffffffffa05465a7>] spl_debug_dumpstack+0x27/0x40 [spl]
[<ffffffffa054a84d>] kmem_alloc_debug+0x11d/0x130 [spl]
[<ffffffffa05de166>] dmu_buf_hold_array_by_dnode+0xa6/0x4e0 [zfs]
[<ffffffffa05de825>] dmu_buf_hold_array+0x65/0x90 [zfs]
[<ffffffffa05de891>] dmu_read_uio+0x41/0xd0 [zfs]
[<ffffffffa0654827>] zfs_read+0x147/0x470 [zfs]
[<ffffffffa06644a2>] zpl_read_common+0x52/0x70 [zfs]
[<ffffffffa0664503>] zpl_read+0x43/0x70 [zfs]
[<ffffffff8116d905>] vfs_read+0xb5/0x1a0
[<ffffffff8116da41>] sys_read+0x51/0x90
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
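The change itself is small; roughly, the offending allocation in
dmu_buf_hold_array_by_dnode() gains the flag (sketch, with the
surrounding dmu code elided):

    /* Large but expected allocation: pass KM_NODEBUG so the SPL does
     * not warn about its size, and to mark the call site for review. */
    dbp = kmem_zalloc(sizeof (dmu_buf_t *) * nblks, KM_SLEEP | KM_NODEBUG);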
When performing a 'zfs rollback' it's critical to invalidate
the previous dcache and inode cache. If we don't, there will be
stale cache entries which, when accessed, will result in EIOs.
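A minimal sketch of the invalidation, assuming a hypothetical helper
called from the rollback path with the filesystem's superblock:

    static void
    zfs_sb_invalidate_caches(struct super_block *sb)
    {
        /* Drop cached dentries, then any now-unreferenced inodes, so
         * post-rollback lookups go to disk instead of stale caches. */
        shrink_dcache_sb(sb);
        invalidate_inodes(sb);
    }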
These two warnings in libefi serve no real purpose. When running
without DEBUG they are already suppressed, and even when DEBUG is
enabled all they indicate is that the device doesn't already have an
EFI label. For a Linux machine this is probably the common case.
With the recent SPL change (d599e4fa) that forces cv_destroy()
to block until all waiters have been woken, it is now unsafe
to call cv_destroy() under the zp->z_range_lock because that
mutex is used as the condition variable mutex. If there are waiters
cv_destroy() will block until they wake up and acquire the mutex.
However, they will never acquire the mutex because cv_destroy()
will not return, preventing its caller from dropping the lock. Deadlock.
To avoid this cv_destroy() is now run asynchronously in a taskq.
This solves two problems:
1) It is no longer run under the zp->z_range_lock so no deadlock.
2) Since cv_destroy() may now block, running it asynchronously
prevents it from slowing down zfs_range_unlock() and throttling
the system.
This was not as much of an issue under OpenSolaris because their
cv_destroy() implementation does not do anything. They do however
risk a bad paging request if cv_destroy() returns, the memory holding
the condition variable is freed, and then the waiters wake up and
try to reference it. It's a very small, unlikely race, but it is
possible.
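A sketch of the asynchronous destruction; the rl_t field names are
illustrative and error handling is elided:

    static void
    zfs_range_free(void *arg)
    {
        rl_t *rl = arg;

        /* May block until all waiters have woken; that is safe here
         * because this taskq context holds no range lock mutex. */
        cv_destroy(&rl->r_wr_cv);
        cv_destroy(&rl->r_rd_cv);
        kmem_free(rl, sizeof (rl_t));
    }

    /* Called from zfs_range_unlock() after zp->z_range_lock is dropped. */
    (void) taskq_dispatch(system_taskq, zfs_range_free, rl, TQ_SLEEP);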
It's worth taking a moment to describe how mmap is implemented
for zfs because it differs considerably from other Linux filesystems.
However, this issue is handled the same way under OpenSolaris.
The issue is that by design zfs bypasses the Linux page cache and
leaves all caching up to the ARC. This has been shown to work
well for the common read(2)/write(2) case. However, mmap(2)
is a problem because it relies on being tightly integrated with the
page cache. To handle this we cache mmap'ed files twice, once in
the ARC and a second time in the page cache. The code is careful
to keep both copies synchronized.
When a file with an mmap'ed region is written to using write(2)
both the data in the ARC and existing pages in the page cache
are updated. For a read(2) data will be read first from the page
cache, then the ARC if needed. Neither a write(2) nor a read(2) will
ever result in new pages being added to the page cache.
New pages are added to the page cache only via .readpage() which
is called when the vfs needs to read a page off disk to back the
virtual memory region. These pages may be modified without
notifying the ARC and will be written out periodically via
.writepage(). This will occur due to either a sync or the usual
page aging behavior. Note that because a read(2) of an mmap'ed file
always checks the page cache first, correct data will be returned
even when the ARC is out of date.
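To make the flow concrete, here is an illustrative sketch of a
.readpage() hook; zpl_fillpage() is a hypothetical helper standing in
for the common ARC-backed read path:

    static int
    zpl_readpage(struct file *filp, struct page *pp)
    {
        int error;

        /* Fill the page from the ARC; this is the only path by which
         * new pages enter the page cache for a zfs file.  Assume
         * zpl_fillpage() returns a positive errno on failure. */
        error = zpl_fillpage(filp, pp);

        if (error) {
            SetPageError(pp);
        } else {
            ClearPageError(pp);
            SetPageUptodate(pp);
            flush_dcache_page(pp);
        }

        unlock_page(pp);
        return (-error);
    }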
While this implementation ensures correct behavior it does have
some drawbacks. The most obvious is that it increases the required
memory footprint when accessing mmap'ed files. It also adds the
complexity of keeping both caches synchronized.
Longer term it may be possible to cleanly resolve this wart by
mapping page cache pages directly on to the ARC buffers. The
Linux address space operations are flexible enough to allow
selection of which pages back a particular index. The trick
would be working out the details of which subsystem is in
charge, the ARC, the page cache, or both. It may also prove
helpful to move the ARC buffers to scatter-gather lists
rather than a vmalloc'ed region.
Additionally, zfs_write/read_common() were used in the readpage
and writepage hooks because doing so was fairly easy. However, it
would be better to update zfs_fillpage and zfs_putapage to be
Linux friendly and use them instead.
The Linux specific xattr operations have all been located in the
file zpl_xattr.c. These functions primarily rely on the reworked
zfs_* functions to do their job. They are also responsible for
converting the possible Solaris style error codes to negative
Linux errors.
The Linux specific super block operations have all been located in the
file zpl_super.c. These functions primarily rely on the reworked
zfs_* functions to do their job. They are also responsible for
converting the possible Solaris style error codes to negative
Linux errors.
The Linux specific inode operations have all been located in the
file zpl_inode.c. These functions primarily rely on the reworked
zfs_* functions to do their job. They are also responsible for
converting the possible Solaris style error codes to negative
Linux errors.
The Linux specific file operations have all been located in the
file zpl_file.c. These functions primarily rely on the reworked
zfs_* functions to do their job. They are also responsible for
converting the possible Solaris style error codes to negative
Linux errors.
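All four zpl_* files share the same error handling convention;
schematically, with zfs_some_op() standing in for any reworked
zfs_* function:

    static int
    zpl_example_op(struct inode *ip)
    {
        int error;

        /* The reworked zfs_* functions return positive, Solaris style
         * errno values; the Linux VFS expects zero or a negative errno. */
        error = zfs_some_op(ip);

        return (-error);
    }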
This first zpl_* commit also includes a common zpl.h header with
minimal entries to register the Linux specific hooks. It also
adds all the new zpl_* files to the Makefile.in. This is not a
standalone commit; it requires the following zpl_* commits.
For the moment exactly how to handle xvattr is not clear. This
change largely consists of commenting out the offending
bits until something reasonable can be done.
A new flag is required for the zfs_rlock code to determine if
it is operating on a zvol or a zpl dataset. This used to be
keyed off the zp->z_vnode, which was a hack to begin with, but
with the removal of vnodes we need a dedicated flag.
I apologize in advance that so many things ended up in this commit.
While it could be separated into a whole series of commits, teasing
it all apart now would take considerable time and I'm not sure
there's much merit in it. As such I'll just summarize the intent
of the changes which are all (or partly) in this commit. Broadly
the intent is to remove as much Solaris specific code as possible
and replace it with native Linux equivalents. More specifically:
1) Replace all instances of zfsvfs_t with zfs_sb_t. While the
type is largely the same, calling it private super block data
rather than a zfsvfs is more consistent with how Linux names
this. While not critical, it makes the code easier to read when
you're thinking in Linux friendly VFS terms.
2) Replace vnode_t with struct inode. The Linux VFS doesn't have
the notion of a vnode and there's absolutely no good reason to
create one. There are in fact several good reasons to remove it.
It just adds overhead on Linux if we were to manage one, it
complicates the code, and it would likely lead to bugs since
there's a good chance it would get out of date. The code has been
updated to remove all need for this type.
3) Replace all vtype_t's with umode types. Along with this, shift
all uses of types to mode bits. The Solaris code would pass a
vtype which is redundant with the Linux mode. Just update all the
code to use the Linux mode macros and remove this redundancy.
4) Remove use of the vn_* helpers and replace where needed with
inode helpers. The big example here is creating iput_async to
replace vn_rele_async. Other vn helpers will be addressed as
needed, but they should not be emulated. They are a Solaris VFS'ism
and should simply be replaced with Linux equivalents.
5) Update the znode alloc/free code. Under Linux it's common to
embed the inode specific data with the inode itself. This removes
the need for an extra memory allocation. In zfs this information
is called a znode and it now embeds the inode within it (see the
sketch after this list). The allocators have been updated accordingly.
6) Minimal integration with the vfs flags for setting up the
super block and handling mount options has been added. This
code will need to be refined, but functionally it's all there.
This will be the first and last of these too-large-to-review commits.
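A sketch of the embedding described in item 5; field and macro names
are illustrative, though ITOZ/ZTOI style translation macros are the
natural way to move between the two views:

    typedef struct znode {
        /* ... zfs specific fields ... */
        struct inode    z_inode;    /* embedded Linux VFS inode */
    } znode_t;

    /* Translate between views without any extra memory allocation. */
    #define ITOZ(ip)    container_of((ip), znode_t, z_inode)
    #define ZTOI(zp)    (&((zp)->z_inode))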
For the moment we do not use dmu_write_pages() to write pages
directly in to a dmu object. It may be required at some point
in the future, but for now it is simplest and cleanest to drop it.
It can easily be re-added if/when needed.
For portability reasons it's handy to be able to create a root
znode and basic filesystem components without requiring the full
cooperation of the VFS. We are doing this to simplify the
filesystem creation code.
This code is used for snapshots and heavily leverages Solaris
functionality we do not want to reimplement. These files have
been removed, including references to them, and will be replaced
by a zfs_snap.c/zpl_snap.c implementation which handles snapshots.
Minor update to ensure zfs_sync() is disabled if a kernel oops/panic
is triggered. As the comment says, 'data integrity is job one'. This
change could have been done by defining panicstr to oops_in_progress
in the SPL, but I felt it was clearer to use the native Linux API
here.
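A minimal sketch of the check using the native Linux flag; the
function signature is illustrative:

    int
    zfs_sync(struct super_block *sb, int wait, cred_t *cr)
    {
        /* Data integrity is job one: if an oops or panic is in
         * progress do nothing, so we cannot make matters worse. */
        if (unlikely(oops_in_progress))
            return (0);

        /* ... normal ZIL commit / sync path elided ... */
        return (0);
    }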
This flag does not need to be supported under Linux. As the comment
says, it was only there to support fsflush() for old filesystems like
UFS. This is not needed under Linux.
Mount option parsing is still very Linux specific and will be
handled above this zfs filesystem layer. Honoring those mount
options once set is of course the responsibility of the lower
layers.
This variable was used to ensure that the ZFS module is never
removed while the filesystem is mounted. Once again the generic
Linux VFS handles this case for us so it can be removed.
The functions zfs_mount_label_policy(), zfs_mountroot(), zfs_mount()
will not be needed because most of what they do is already handled
by the generic Linux VFS layer. They all call zfs_domount() which
creates the actual dataset; the caller of this library function,
which will be in the zpl layer, is responsible for what's left.
Under Linux we don't need to reserve a major or minor number for
the filesystem. We can rely on the VFS to handle collisions without
this being handled by the lower ZFS layers.
Additionally, there is no need to keep a zfsfstype around. We are
not limited on Linux by the OpenSolaris infrastructure which needed
this. The upper zpl layer can specify the filesystem type.
The ZFS code is being restructured to act as a library and a stand
alone module. This allows us to leverage most of the existing code
with minimal modification. It also means we need to drop the Solaris
vfs/vnode functions; they will be replaced by Linux equivalents and
updated to be Linux friendly.
For the moment we have left ZFS unchanged and it updates many values
as part of the znode. However, some of these values should be set
in the inode. For the moment this is handled by adding a function
called zfs_inode_update() which updates the inode based on the znode.
This is considered a workaround until we can systematically go
through the ZFS code and have it directly update the inode, at
which point zfs_inode_update() can be dropped entirely. Keeping
two copies of the same data isn't only inefficient, it's a breeding
ground for bugs.
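The workaround looks roughly like the following sketch; the exact
set of fields copied is illustrative:

    void
    zfs_inode_update(znode_t *zp)
    {
        struct inode *ip = ZTOI(zp);

        /* Mirror znode state into the embedded inode so the generic
         * Linux VFS always sees current values. */
        ip->i_uid   = zp->z_uid;
        ip->i_gid   = zp->z_gid;
        ip->i_nlink = zp->z_links;
        i_size_write(ip, zp->z_size);
    }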
Under Linux the convention for filesystem specific data structures is
to embed them along with the generic vfs data structure. This differs
significantly from Solaris.
Since we want to integrate as cleanly with the Linux VFS as possible,
this change modifies zfs_znode_alloc() to allocate a znode with an
embedded inode for use with the generic VFS. This is done by calling
iget_locked() which will allocate a new inode if needed by calling
sb->alloc_inode(). This function allocates enough memory for a
znode_t but returns a pointer to the inode structure for Linux's VFS.
This function is also responsible for setting the callback
znode->z_set_ops_inodes() which is used to register the correct
handlers for the inode.
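A sketch of the allocation path; cache setup and the callback
registration are elided, and names are illustrative:

    static struct inode *
    zpl_inode_alloc(struct super_block *sb)
    {
        znode_t *zp;

        /* One allocation covers both the znode and its embedded
         * inode; the VFS only ever sees the inode pointer. */
        zp = kmem_cache_alloc(znode_cache, KM_SLEEP);
        if (zp == NULL)
            return (NULL);

        return (ZTOI(zp));
    }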
Basic compilation of the bulk of zfs_znode.c has been enabled. After
much consideration it was decided to convert the existing vnode based
interfaces to more friendly Linux interfaces. The following commits
will systematically update the required interfaces. There
are of course pros and cons to this decision.
Pros:
* This simplifies integration with Linux in the long term. There is
no longer any need to manage vnodes which are a foreign concept to
the Linux VFS.
* Improved long term maintainability.
* Minor performance improvements by removing vnode overhead.
Cons:
* Added work in the short term to modify multiple ZFS interfaces.
* Harder to pull in changes if we ever see any new code from Solaris.
* Mixed Solaris and Linux interfaces in some ZFS code.
Lay the initial ground work for a include/linux/ compatibility
directory. This was less critical in the past because the bulk
of the ZFS code consumes the Solaris API via the SPL. This API
was stable and the bulk of the Linux API differences were handled
in the SPL.
However, with the addition of a full Posix layer written directly
against the Linux APIs we are going to need more compatibility
code. It makes sense that all this code should be cleanly located
in one place. Subsequent patches should move the existing zvol
and vdev_disk compatibility code in to this directory.
This code originates in OpenSolaris and was modified by KQ Infotech
to be compatible with Linux. While supporting uios in the short
term is useful to get something working this is not an abstraction
we want to keep. This code is expected to be short lived and
removed as soon as all the remaining uio based APIs are updated.
The zfs acl code makes use of the two OpenSolaris helper functions
acl_trivial_access_masks() and ace_trivial_common(). Since they are
only called from zfs_acl.c I've brought them over from OpenSolaris
and added them as static functions to this file. This way I don't
need to reimplement this functionality from scratch in the SPL.
Long term once I take a more careful look at the acl implementation
it may be the case that these functions really aren't needed. If
that turns out to be the case they can then be removed.
Remove unneeded bootfs functions. This support shouldn't be required
for the Linux port, and even if it is it would need to be reworked
to integrate cleanly with Linux.
Certain NFS/SMB share functionality is not yet in place. These
functions used to be wrapped with the generic HAVE_ZPL to prevent
them from being compiled. I still don't want them compiled but
I'm working toward eliminating the use of HAVE_ZPL. So I'm just
renaming the wrapper here to HAVE_SHARE. They still won't be
compiled until all the share issues are worked through. Share
support is the last missing piece from zfs_ioctl.c.
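The rename itself is mechanical; schematically, with a placeholder
body:

    #ifdef HAVE_SHARE                       /* was: HAVE_ZPL */
    static int
    zfs_ioc_share(zfs_cmd_t *zc)
    {
        /* NFS/SMB share handling, still compiled out for now. */
        return (ENOSYS);
    }
    #endif /* HAVE_SHARE */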
The zfs_check_global_label() function is part of the HAVE_MLSLABEL
support which was previously commented out by a HAVE_ZPL check.
Since we're still deciding what to do about mls labels wrap it
with the preexisting macro to keep it compiled out.
Unlike Solaris the Linux implementation embeds the inode in the
znode, and has no use for a vnode. So while it's true that fragmentation
of the znode cache may occur, it should not be worse than any of the
other Linux FS inode caches. Until this is proven to be a problem it's
just added complexity we don't need.
These functions were dropped originally because I felt they would
need to be rewritten anyway to avoid using uios. However, this
patch re-adds them with the idea that they can just be reworked and
the uio bits dropped.