Fix zfs_putpage() lock inversion (again)

This is a follow-up to commit 74328ee, which correctly resolved a lock
inversion between zfs_putpage() and zfs_free_range().  Unfortunately,
in the process it accidentally introduced another inversion, this time
between zfs_putpage() and zfs_read().  The page must be unlocked before
taking the range lock.  This patch corrects that issue.
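
For illustration only (this sketch is not part of the patch, and none
of the names below are ZFS code): the bug reduces to the classic AB/BA
lock-ordering cycle.  A minimal standalone pthreads sketch, with
page_lock standing in for the Linux page lock and range_lock for the
ZFS range lock:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pre-patch zfs_putpage(): enters with the page locked, then takes
 * the range lock.  Run against reader() below and both threads can
 * block forever, each holding the lock the other one needs. */
static void *putpage_buggy(void *unused)
{
	pthread_mutex_lock(&page_lock);   /* page locked by the VM */
	sleep(1);                         /* widen the race window */
	pthread_mutex_lock(&range_lock);  /* deadlocks vs. reader() */
	pthread_mutex_unlock(&range_lock);
	pthread_mutex_unlock(&page_lock);
	return (NULL);
}

/* Post-patch zfs_putpage(): drop the page lock first (rule 1), then
 * take the range lock and only then set writeback (rule 2). */
static void *putpage_fixed(void *unused)
{
	pthread_mutex_lock(&page_lock);
	pthread_mutex_unlock(&page_lock); /* unlock_page(pp) */
	pthread_mutex_lock(&range_lock);  /* zfs_range_lock(...) */
	/* set_page_writeback(pp) would happen here, under the range lock */
	pthread_mutex_unlock(&range_lock);
	return (NULL);
}

/* zfs_read() analogue: takes the range lock, then find_lock_page()
 * blocks on the page lock -- the opposite order from putpage_buggy(). */
static void *reader(void *unused)
{
	pthread_mutex_lock(&range_lock);
	sleep(1);
	pthread_mutex_lock(&page_lock);   /* find_lock_page() analogue */
	pthread_mutex_unlock(&page_lock);
	pthread_mutex_unlock(&range_lock);
	return (NULL);
}

int main(int argc, char **argv)
{
	/* Pass any argument to use the buggy ordering and see the hang. */
	void *(*putpage)(void *) = (argc > 1) ? putpage_buggy : putpage_fixed;
	pthread_t a, b;

	pthread_create(&a, NULL, putpage, NULL);
	pthread_create(&b, NULL, reader, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("no deadlock\n");
	return (0);
}

Build with "cc demo.c -lpthread"; with the fixed ordering the program
exits, with the buggy ordering it hangs in the AB/BA cycle.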

In addition, because the locking rules here are subtle, a block
comment has been added to explain clearly why the ordering is critical.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ned Bass <bass6@llnl.gov>
Issue #2976
Author: Brian Behlendorf <behlendorf1@llnl.gov>
Date:   2015-01-06 16:54:57 -08:00
Parent: 33b6dbbc51
Commit: d958324f97

@@ -3899,15 +3899,28 @@ zfs_putpage(struct inode *ip, struct page *pp, struct writeback_control *wbc)
 	}
 #endif
 
-	rl = zfs_range_lock(zp, pgoff, pglen, RL_WRITER);
-
-	set_page_writeback(pp);
+	/*
+	 * The ordering here is critical and must adhere to the following
+	 * rules in order to avoid deadlocking in either zfs_read() or
+	 * zfs_free_range() due to a lock inversion.
+	 *
+	 * 1) The page must be unlocked prior to acquiring the range lock.
+	 *    This is critical because zfs_read() calls find_lock_page()
+	 *    which may block on the page lock while holding the range lock.
+	 *
+	 * 2) Before setting or clearing write back on a page the range lock
+	 *    must be held in order to prevent a lock inversion with the
+	 *    zfs_free_range() function.
+	 */
 	unlock_page(pp);
+	rl = zfs_range_lock(zp, pgoff, pglen, RL_WRITER);
+	set_page_writeback(pp);
 
 	tx = dmu_tx_create(zsb->z_os);
 	dmu_tx_hold_write(tx, zp->z_id, pgoff, pglen);
 	dmu_tx_hold_sa(tx, zp->z_sa_hdl, B_FALSE);
 	zfs_sa_upgrade_txholds(tx, zp);
+
 	err = dmu_tx_assign(tx, TXG_NOWAIT);
 	if (err != 0) {
 		if (err == ERESTART)