mirror of
https://git.proxmox.com/git/mirror_zfs.git
synced 2025-10-25 09:25:00 +03:00
Before Direct I/O was implemented, I implemented a lighter version I called Uncached I/O. It uses the normal DMU/ARC data path with some optimizations, but evicts data from the caches as soon as possible and reasonable. Originally I wired it only to the primarycache property, but this change completes the integration all the way up to the VFS.

While Direct I/O has the lowest possible memory bandwidth usage, it also has a significant number of limitations: it requires I/Os to be page aligned, does not allow speculative prefetch, etc. Uncached I/O does not have those limitations, but instead requires an additional memory copy, though still one fewer than regular cached I/O. As such, it should fill the gap between the two. With this in mind, I've disabled the annoying EINVAL errors on misaligned requests, adding a tunable for those who want to test their applications.

To pass the information between the layers I had to change a number of APIs. As a side effect, upper layers can now control not only caching but also speculative prefetch. I haven't wired that to the VFS yet, since it requires looking at some OS specifics. But while there, I've implemented speculative prefetch of indirect blocks for Direct I/O, controllable via the same mechanisms.

Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Fixes #17027
Reviewed-by: Rob Norris <robn@despairlabs.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
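The read-then-evict pattern described above has a rough userspace analogue in `posix_fadvise(2)`: an application reads through the page cache normally, then hints the kernel that the cached pages will not be needed again so they can be dropped early. This is only an illustration of the concept, not the ZFS implementation; the file, sizes, and flow below are made up for the sketch (Linux, Python stdlib only):

```python
import os
import tempfile

# Create a scratch file with some payload to read back.
data = b"x" * (1 << 20)  # 1 MiB, purely illustrative
fd, path = tempfile.mkstemp()
try:
    os.write(fd, data)
    os.fsync(fd)

    # Regular cached read: the data lands in the page cache.
    os.lseek(fd, 0, os.SEEK_SET)
    readback = os.read(fd, len(data))

    # "Uncached" pattern: after consuming the data, advise the
    # kernel that the cached pages won't be needed again, so they
    # can be evicted early -- conceptually similar to evicting
    # ARC buffers as soon as possible and reasonable.
    os.posix_fadvise(fd, 0, len(data), os.POSIX_FADV_DONTNEED)
finally:
    os.close(fd)
    os.unlink(path)
```

Note that, unlike Direct I/O, nothing here imposes page-alignment requirements on the request; the cost is the extra copy through the cache.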
| File |
|---|
| abd_os.c |
| arc_os.c |
| crypto_os.c |
| dmu_os.c |
| event_os.c |
| hkdf.c |
| kmod_core.c |
| spa_os.c |
| sysctl_os.c |
| vdev_geom.c |
| vdev_label_os.c |
| zfs_acl.c |
| zfs_ctldir.c |
| zfs_debug.c |
| zfs_dir.c |
| zfs_file_os.c |
| zfs_ioctl_compat.c |
| zfs_ioctl_os.c |
| zfs_racct.c |
| zfs_vfsops.c |
| zfs_vnops_os.c |
| zfs_znode_os.c |
| zio_crypt.c |
| zvol_os.c |