Perform whole-page truncation for hole-punching under a range lock

In an attempt to perform the page truncation more optimally, the
hole-punching support added in commit 223df0161f
performed the operation in two steps: first, sub-page "stubs"
were zeroed under the range lock in zfs_free_range() using the new
zfs_zero_partial_page() function, and then the whole pages were truncated
within zfs_freesp().  This left a window of opportunity during which
the full pages could be touched.

This patch closes the window by moving the whole-page truncation into
zfs_free_range() under the range lock.

Signed-off-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2733
commit cb08f06307 (parent dcca723ace)
Author: Tim Chase, 2014-09-25 23:40:41 -05:00; committed by Brian Behlendorf


@@ -1440,6 +1440,13 @@ zfs_free_range(znode_t *zp, uint64_t off, uint64_t len)
 	/* offset of last_page */
 	last_page_offset = last_page << PAGE_CACHE_SHIFT;
 
+	/* truncate whole pages */
+	if (last_page_offset > first_page_offset) {
+		truncate_inode_pages_range(ZTOI(zp)->i_mapping,
+		    first_page_offset, last_page_offset - 1);
+	}
+
+	/* truncate sub-page ranges */
 	if (first_page > last_page) {
 		/* entire punched area within a single page */
 		zfs_zero_partial_page(zp, off, len);
@@ -1607,31 +1614,10 @@ out:
 	/*
 	 * Truncate the page cache - for file truncate operations, use
 	 * the purpose-built API for truncations.  For punching operations,
-	 * truncate only whole pages within the region; partial pages are
-	 * zeroed under a range lock in zfs_free_range().
+	 * the truncation is handled under a range lock in zfs_free_range.
 	 */
 	if (len == 0)
 		truncate_setsize(ZTOI(zp), off);
-	else if (zp->z_is_mapped) {
-		loff_t first_page, last_page;
-		loff_t first_page_offset, last_page_offset;
-
-		/* first possible full page in hole */
-		first_page = (off + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-		/* last page of hole */
-		last_page = (off + len) >> PAGE_CACHE_SHIFT;
-		/* offset of first_page */
-		first_page_offset = first_page << PAGE_CACHE_SHIFT;
-		/* offset of last_page */
-		last_page_offset = last_page << PAGE_CACHE_SHIFT;
-
-		/* truncate whole pages */
-		if (last_page_offset > first_page_offset) {
-			truncate_inode_pages_range(ZTOI(zp)->i_mapping,
-			    first_page_offset, last_page_offset - 1);
-		}
-	}
 
 	return (error);
 }