[pve-devel] [PATCH zfsonlinux 2/3] update/rebase to zfs-0.7.12 with patches from ZOL
Stoiko Ivanov
s.ivanov at proxmox.com
Wed Nov 14 20:16:18 CET 2018
Reorder the patches so that the upstream changesets come last.
Signed-off-by: Stoiko Ivanov <s.ivanov at proxmox.com>
---
...004-Add-Breaks-Replaces-to-zfs-initramfs.patch} | 15 +-
...ll-init-scripts-to-support-non-systemd-s.patch} | 21 +-
...lock-between-zfs-umount-snapentry_expire.patch} | 0
... 0008-Fix-race-in-dnode_check_slots_free.patch} | 9 +-
...askq-and-context-switch-cost-of-zio-pipe.patch} | 118 +-
...port-activity-test-in-more-zdb-code-paths.patch | 221 ++
.../0011-Fix-statfs-2-for-32-bit-user-space.patch | 180 ++
...Zpool-iostat-remove-latency-queue-scaling.patch | 86 +
...-4.19-rc3-compat-Remove-refcount_t-compat.patch | 878 +++++++
...4-Prefix-all-refcount-functions-with-zfs_.patch | 2527 ++++++++++++++++++++
zfs-patches/0015-Fix-arc_release-refcount.patch | 29 +
.../0016-Allow-use-of-pool-GUID-as-root-pool.patch | 59 +
.../0017-ZTS-Update-O_TMPFILE-support-check.patch | 67 +
...-flake8-invalid-escape-sequence-x-warning.patch | 35 +
...ldRequires-gcc-make-elfutils-libelf-devel.patch | 51 +
zfs-patches/0020-Tag-zfs-0.7.12.patch | 55 +
zfs-patches/series | 21 +-
17 files changed, 4279 insertions(+), 93 deletions(-)
rename zfs-patches/{0008-Add-Breaks-Replaces-to-zfs-initramfs.patch => 0004-Add-Breaks-Replaces-to-zfs-initramfs.patch} (81%)
rename zfs-patches/{0009-Revert-Install-init-scripts-to-support-non-systemd-s.patch => 0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch} (83%)
rename zfs-patches/{0004-Fix-deadlock-between-zfs-umount-snapentry_expire.patch => 0006-Fix-deadlock-between-zfs-umount-snapentry_expire.patch} (100%)
rename zfs-patches/{0005-Fix-race-in-dnode_check_slots_free.patch => 0008-Fix-race-in-dnode_check_slots_free.patch} (97%)
rename zfs-patches/{0006-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch => 0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch} (86%)
create mode 100644 zfs-patches/0010-Skip-import-activity-test-in-more-zdb-code-paths.patch
create mode 100644 zfs-patches/0011-Fix-statfs-2-for-32-bit-user-space.patch
create mode 100644 zfs-patches/0012-Zpool-iostat-remove-latency-queue-scaling.patch
create mode 100644 zfs-patches/0013-Linux-4.19-rc3-compat-Remove-refcount_t-compat.patch
create mode 100644 zfs-patches/0014-Prefix-all-refcount-functions-with-zfs_.patch
create mode 100644 zfs-patches/0015-Fix-arc_release-refcount.patch
create mode 100644 zfs-patches/0016-Allow-use-of-pool-GUID-as-root-pool.patch
create mode 100644 zfs-patches/0017-ZTS-Update-O_TMPFILE-support-check.patch
create mode 100644 zfs-patches/0018-Fix-flake8-invalid-escape-sequence-x-warning.patch
create mode 100644 zfs-patches/0019-Add-BuildRequires-gcc-make-elfutils-libelf-devel.patch
create mode 100644 zfs-patches/0020-Tag-zfs-0.7.12.patch
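
With the renames above, the Proxmox packaging patches now precede the
cherry-picked upstream fixes; the tail of the updated zfs-patches/series
presumably reads (entries untouched by this diff elided):

    0004-Add-Breaks-Replaces-to-zfs-initramfs.patch
    0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch
    0006-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
    ...
    0008-Fix-race-in-dnode_check_slots_free.patch
    0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
    0010-Skip-import-activity-test-in-more-zdb-code-paths.patch
    ...
    0020-Tag-zfs-0.7.12.patch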
diff --git a/zfs-patches/0008-Add-Breaks-Replaces-to-zfs-initramfs.patch b/zfs-patches/0004-Add-Breaks-Replaces-to-zfs-initramfs.patch
similarity index 81%
rename from zfs-patches/0008-Add-Breaks-Replaces-to-zfs-initramfs.patch
rename to zfs-patches/0004-Add-Breaks-Replaces-to-zfs-initramfs.patch
index e1e95ef..b6180b4 100644
--- a/zfs-patches/0008-Add-Breaks-Replaces-to-zfs-initramfs.patch
+++ b/zfs-patches/0004-Add-Breaks-Replaces-to-zfs-initramfs.patch
@@ -1,4 +1,4 @@
-From 5ac80068e911d3b0935903f713c5f492d518da91 Mon Sep 17 00:00:00 2001
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Stoiko Ivanov <s.ivanov at proxmox.com>
Date: Mon, 29 Oct 2018 15:49:20 +0100
Subject: [PATCH] Add Breaks/Replaces to zfs-initramfs
@@ -13,10 +13,10 @@ Signed-off-by: Stoiko Ivanov <s.ivanov at proxmox.com>
2 files changed, 4 insertions(+)
diff --git a/debian/control b/debian/control
-index 4d22ff50..a414e449 100644
+index f33008df..d3d1034e 100644
--- a/debian/control
+++ b/debian/control
-@@ -117,6 +117,8 @@ Depends: busybox-initramfs | busybox-static | busybox,
+@@ -116,6 +116,8 @@ Depends: busybox-initramfs | busybox-static | busybox,
zfs-modules | zfs-dkms,
zfsutils-linux (>= ${binary:Version}),
${misc:Depends}
@@ -26,11 +26,11 @@ index 4d22ff50..a414e449 100644
The Z file system is a pooled filesystem designed for maximum data
integrity, supporting data snapshots, multiple copies, and data
diff --git a/debian/control.in b/debian/control.in
-index 96154c5c..b9c34331 100644
+index 0a9ceef6..09ef18cc 100644
--- a/debian/control.in
+++ b/debian/control.in
-@@ -117,6 +117,8 @@ Depends: busybox-initramfs | busybox-static | busybox,
- zfs-modules | zfs-dkms,
+@@ -100,6 +100,8 @@ Depends: busybox-initramfs | busybox-static | busybox,
+ initramfs-tools,
zfsutils-linux (>= ${binary:Version}),
${misc:Depends}
+Breaks: zfsutils-linux (<= 0.7.11-pve1~bpo1)
@@ -38,6 +38,3 @@ index 96154c5c..b9c34331 100644
Description: OpenZFS root filesystem capabilities for Linux - initramfs
The Z file system is a pooled filesystem designed for maximum data
integrity, supporting data snapshots, multiple copies, and data
---
-2.11.0
-
diff --git a/zfs-patches/0009-Revert-Install-init-scripts-to-support-non-systemd-s.patch b/zfs-patches/0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch
similarity index 83%
rename from zfs-patches/0009-Revert-Install-init-scripts-to-support-non-systemd-s.patch
rename to zfs-patches/0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch
index e305b7d..1a8561a 100644
--- a/zfs-patches/0009-Revert-Install-init-scripts-to-support-non-systemd-s.patch
+++ b/zfs-patches/0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch
@@ -1,4 +1,4 @@
-From c37a6401469cae16519db06d623d49c6f606f312 Mon Sep 17 00:00:00 2001
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Thomas Lamprecht <t.lamprecht at proxmox.com>
Date: Wed, 31 Oct 2018 15:37:39 +0100
Subject: [PATCH] Revert "Install init scripts to support non-systemd setups.
@@ -11,11 +11,11 @@ systems)
Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
---
- debian/rules | 8 --------
- debian/zfs-zed.zfs-zed.init | 1 -
- debian/zfsutils-linux.zfs-import.init | 1 -
- debian/zfsutils-linux.zfs-mount.init | 1 -
- debian/zfsutils-linux.zfs-share.init | 1 -
+ debian/rules | 8 --------
+ debian/zfs-zed.zfs-zed.init | 1 -
+ debian/zfsutils-linux.zfs-import.init | 1 -
+ debian/zfsutils-linux.zfs-mount.init | 1 -
+ debian/zfsutils-linux.zfs-share.init | 1 -
5 files changed, 12 deletions(-)
delete mode 120000 debian/zfs-zed.zfs-zed.init
delete mode 120000 debian/zfsutils-linux.zfs-import.init
@@ -23,10 +23,10 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
delete mode 120000 debian/zfsutils-linux.zfs-share.init
diff --git a/debian/rules b/debian/rules
-index 5fba58ff..81c301e4 100644
+index 3ba4b99a..d6cf5a56 100755
--- a/debian/rules
+++ b/debian/rules
-@@ -161,14 +153,6 @@ override_dh_install:
+@@ -117,14 +117,6 @@ override_dh_install:
find . -name lib*.la -delete
dh_install --fail-missing
@@ -40,7 +40,7 @@ index 5fba58ff..81c301e4 100644
-
# ------------
- override_dh_prep-deb-files:
+ debian-copyright:
diff --git a/debian/zfs-zed.zfs-zed.init b/debian/zfs-zed.zfs-zed.init
deleted file mode 120000
index 3f41f681..00000000
@@ -73,6 +73,3 @@ index 3f069f9b..00000000
@@ -1 +0,0 @@
-../etc/init.d/zfs-share
\ No newline at end of file
---
-2.19.1
-
diff --git a/zfs-patches/0004-Fix-deadlock-between-zfs-umount-snapentry_expire.patch b/zfs-patches/0006-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
similarity index 100%
rename from zfs-patches/0004-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
rename to zfs-patches/0006-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
diff --git a/zfs-patches/0005-Fix-race-in-dnode_check_slots_free.patch b/zfs-patches/0008-Fix-race-in-dnode_check_slots_free.patch
similarity index 97%
rename from zfs-patches/0005-Fix-race-in-dnode_check_slots_free.patch
rename to zfs-patches/0008-Fix-race-in-dnode_check_slots_free.patch
index 9cebd00..1cbabe6 100644
--- a/zfs-patches/0005-Fix-race-in-dnode_check_slots_free.patch
+++ b/zfs-patches/0008-Fix-race-in-dnode_check_slots_free.patch
@@ -17,13 +17,8 @@ treated as a list_node_t when it is technically a multilist_node_t.
Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
Signed-off-by: Tom Caputi <tcaputi at datto.com>
-Requires-spl: spl-0.7-release
-Issue #7147
-Issue #7388
-Issue #7997
-
-(cherry-picked from behlendorf/issue-7997 4764f6f3be90be073d2700653dff286371e52583)
-Signed-off-by: Stoiko Ivanov <s.ivanov at proxmox.com>
+Closes #7147
+Closes #7388
---
include/sys/dmu_impl.h | 1 +
include/sys/dnode.h | 4 ++++
diff --git a/zfs-patches/0006-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch b/zfs-patches/0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
similarity index 86%
rename from zfs-patches/0006-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
rename to zfs-patches/0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
index 92dda45..b17b062 100644
--- a/zfs-patches/0006-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
+++ b/zfs-patches/0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
@@ -20,15 +20,11 @@ Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
Reviewed by: George Wilson <george.wilson at delphix.com>
Signed-off-by: Matthew Ahrens <mahrens at delphix.com>
External-issue: DLPX-59292
-Requires-spl: spl-0.7-release
Closes #7736
-
-(cherry-picked from behlendorf/issue-7736 496657ab3bcfeb638b1786e1759980ccfcacb08e)
-Signed-off-by: Stoiko Ivanov <s.ivanov at proxmox.com>
---
include/sys/zio.h | 4 +-
- module/zfs/zio.c | 250 +++++++++++++++++++++++++++++-------------------------
- 2 files changed, 137 insertions(+), 117 deletions(-)
+ module/zfs/zio.c | 252 +++++++++++++++++++++++++++++-------------------------
+ 2 files changed, 139 insertions(+), 117 deletions(-)
diff --git a/include/sys/zio.h b/include/sys/zio.h
index 4b0eecc2..3618912c 100644
@@ -53,7 +49,7 @@ index 4b0eecc2..3618912c 100644
/*
* The io_reexecute flags are distinct from io_flags because the child must
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
-index 9a465e1b..b08b4747 100644
+index 9a465e1b..dd0dfcdb 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -75,9 +75,6 @@ uint64_t zio_buf_cache_frees[SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT];
@@ -226,16 +222,18 @@ index 9a465e1b..b08b4747 100644
zio_free_bp_init(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -1472,7 +1490,7 @@ zio_free_bp_init(zio_t *zio)
+@@ -1472,7 +1490,9 @@ zio_free_bp_init(zio_t *zio)
zio->io_pipeline = ZIO_DDT_FREE_PIPELINE;
}
- return (ZIO_PIPELINE_CONTINUE);
++ ASSERT3P(zio->io_bp, ==, &zio->io_bp_copy);
++
+ return (zio);
}
/*
-@@ -1541,12 +1559,12 @@ zio_taskq_member(zio_t *zio, zio_taskq_type_t q)
+@@ -1541,12 +1561,12 @@ zio_taskq_member(zio_t *zio, zio_taskq_type_t q)
return (B_FALSE);
}
@@ -250,7 +248,7 @@ index 9a465e1b..b08b4747 100644
}
void
-@@ -1687,14 +1705,13 @@ __attribute__((always_inline))
+@@ -1687,14 +1707,13 @@ __attribute__((always_inline))
static inline void
__zio_execute(zio_t *zio)
{
@@ -267,7 +265,7 @@ index 9a465e1b..b08b4747 100644
ASSERT(!MUTEX_HELD(&zio->io_lock));
ASSERT(ISP2(stage));
-@@ -1736,12 +1753,16 @@ __zio_execute(zio_t *zio)
+@@ -1736,12 +1755,16 @@ __zio_execute(zio_t *zio)
zio->io_stage = stage;
zio->io_pipeline_trace |= zio->io_stage;
@@ -288,7 +286,7 @@ index 9a465e1b..b08b4747 100644
}
}
-@@ -2215,7 +2236,7 @@ zio_gang_tree_issue(zio_t *pio, zio_gang_node_t *gn, blkptr_t *bp, abd_t *data,
+@@ -2215,7 +2238,7 @@ zio_gang_tree_issue(zio_t *pio, zio_gang_node_t *gn, blkptr_t *bp, abd_t *data,
zio_nowait(zio);
}
@@ -297,7 +295,7 @@ index 9a465e1b..b08b4747 100644
zio_gang_assemble(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -2227,16 +2248,16 @@ zio_gang_assemble(zio_t *zio)
+@@ -2227,16 +2250,16 @@ zio_gang_assemble(zio_t *zio)
zio_gang_tree_assemble(zio, bp, &zio->io_gang_tree);
@@ -317,7 +315,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT(BP_IS_GANG(bp) && zio->io_gang_leader == zio);
-@@ -2250,7 +2271,7 @@ zio_gang_issue(zio_t *zio)
+@@ -2250,7 +2273,7 @@ zio_gang_issue(zio_t *zio)
zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
@@ -326,7 +324,7 @@ index 9a465e1b..b08b4747 100644
}
static void
-@@ -2290,7 +2311,7 @@ zio_write_gang_done(zio_t *zio)
+@@ -2290,7 +2313,7 @@ zio_write_gang_done(zio_t *zio)
abd_put(zio->io_abd);
}
@@ -335,7 +333,7 @@ index 9a465e1b..b08b4747 100644
zio_write_gang_block(zio_t *pio)
{
spa_t *spa = pio->io_spa;
-@@ -2349,7 +2370,7 @@ zio_write_gang_block(zio_t *pio)
+@@ -2349,7 +2372,7 @@ zio_write_gang_block(zio_t *pio)
}
pio->io_error = error;
@@ -344,7 +342,7 @@ index 9a465e1b..b08b4747 100644
}
if (pio == gio) {
-@@ -2423,7 +2444,7 @@ zio_write_gang_block(zio_t *pio)
+@@ -2423,7 +2446,7 @@ zio_write_gang_block(zio_t *pio)
zio_nowait(zio);
@@ -353,7 +351,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -2444,7 +2465,7 @@ zio_write_gang_block(zio_t *pio)
+@@ -2444,7 +2467,7 @@ zio_write_gang_block(zio_t *pio)
* used for nopwrite, assuming that the salt and the checksums
* themselves remain secret.
*/
@@ -362,7 +360,7 @@ index 9a465e1b..b08b4747 100644
zio_nop_write(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -2471,7 +2492,7 @@ zio_nop_write(zio_t *zio)
+@@ -2471,7 +2494,7 @@ zio_nop_write(zio_t *zio)
BP_GET_COMPRESS(bp) != BP_GET_COMPRESS(bp_orig) ||
BP_GET_DEDUP(bp) != BP_GET_DEDUP(bp_orig) ||
zp->zp_copies != BP_GET_NDVAS(bp_orig))
@@ -371,7 +369,7 @@ index 9a465e1b..b08b4747 100644
/*
* If the checksums match then reset the pipeline so that we
-@@ -2491,7 +2512,7 @@ zio_nop_write(zio_t *zio)
+@@ -2491,7 +2514,7 @@ zio_nop_write(zio_t *zio)
zio->io_flags |= ZIO_FLAG_NOPWRITE;
}
@@ -380,7 +378,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -2519,7 +2540,7 @@ zio_ddt_child_read_done(zio_t *zio)
+@@ -2519,7 +2542,7 @@ zio_ddt_child_read_done(zio_t *zio)
mutex_exit(&pio->io_lock);
}
@@ -389,7 +387,7 @@ index 9a465e1b..b08b4747 100644
zio_ddt_read_start(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -2540,7 +2561,7 @@ zio_ddt_read_start(zio_t *zio)
+@@ -2540,7 +2563,7 @@ zio_ddt_read_start(zio_t *zio)
zio->io_vsd = dde;
if (ddp_self == NULL)
@@ -398,7 +396,7 @@ index 9a465e1b..b08b4747 100644
for (p = 0; p < DDT_PHYS_TYPES; p++, ddp++) {
if (ddp->ddp_phys_birth == 0 || ddp == ddp_self)
-@@ -2553,23 +2574,23 @@ zio_ddt_read_start(zio_t *zio)
+@@ -2553,23 +2576,23 @@ zio_ddt_read_start(zio_t *zio)
zio->io_priority, ZIO_DDT_CHILD_FLAGS(zio) |
ZIO_FLAG_DONT_PROPAGATE, &zio->io_bookmark));
}
@@ -426,7 +424,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT(BP_GET_DEDUP(bp));
-@@ -2581,12 +2602,12 @@ zio_ddt_read_done(zio_t *zio)
+@@ -2581,12 +2604,12 @@ zio_ddt_read_done(zio_t *zio)
ddt_entry_t *dde = zio->io_vsd;
if (ddt == NULL) {
ASSERT(spa_load_state(zio->io_spa) != SPA_LOAD_NONE);
@@ -441,7 +439,7 @@ index 9a465e1b..b08b4747 100644
}
if (dde->dde_repair_abd != NULL) {
abd_copy(zio->io_abd, dde->dde_repair_abd,
-@@ -2599,7 +2620,7 @@ zio_ddt_read_done(zio_t *zio)
+@@ -2599,7 +2622,7 @@ zio_ddt_read_done(zio_t *zio)
ASSERT(zio->io_vsd == NULL);
@@ -450,7 +448,7 @@ index 9a465e1b..b08b4747 100644
}
static boolean_t
-@@ -2780,7 +2801,7 @@ zio_ddt_ditto_write_done(zio_t *zio)
+@@ -2780,7 +2803,7 @@ zio_ddt_ditto_write_done(zio_t *zio)
ddt_exit(ddt);
}
@@ -459,7 +457,7 @@ index 9a465e1b..b08b4747 100644
zio_ddt_write(zio_t *zio)
{
spa_t *spa = zio->io_spa;
-@@ -2822,7 +2843,7 @@ zio_ddt_write(zio_t *zio)
+@@ -2822,7 +2845,7 @@ zio_ddt_write(zio_t *zio)
}
zio->io_pipeline = ZIO_WRITE_PIPELINE;
ddt_exit(ddt);
@@ -468,7 +466,7 @@ index 9a465e1b..b08b4747 100644
}
ditto_copies = ddt_ditto_copies_needed(ddt, dde, ddp);
-@@ -2848,7 +2869,7 @@ zio_ddt_write(zio_t *zio)
+@@ -2848,7 +2871,7 @@ zio_ddt_write(zio_t *zio)
zio->io_bp_override = NULL;
BP_ZERO(bp);
ddt_exit(ddt);
@@ -477,7 +475,7 @@ index 9a465e1b..b08b4747 100644
}
dio = zio_write(zio, spa, txg, bp, zio->io_orig_abd,
-@@ -2890,12 +2911,12 @@ zio_ddt_write(zio_t *zio)
+@@ -2890,12 +2913,12 @@ zio_ddt_write(zio_t *zio)
if (dio)
zio_nowait(dio);
@@ -492,7 +490,7 @@ index 9a465e1b..b08b4747 100644
zio_ddt_free(zio_t *zio)
{
spa_t *spa = zio->io_spa;
-@@ -2916,7 +2937,7 @@ zio_ddt_free(zio_t *zio)
+@@ -2916,7 +2939,7 @@ zio_ddt_free(zio_t *zio)
}
ddt_exit(ddt);
@@ -501,7 +499,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -2953,7 +2974,7 @@ zio_io_to_allocate(spa_t *spa)
+@@ -2953,7 +2976,7 @@ zio_io_to_allocate(spa_t *spa)
return (zio);
}
@@ -510,7 +508,7 @@ index 9a465e1b..b08b4747 100644
zio_dva_throttle(zio_t *zio)
{
spa_t *spa = zio->io_spa;
-@@ -2963,7 +2984,7 @@ zio_dva_throttle(zio_t *zio)
+@@ -2963,7 +2986,7 @@ zio_dva_throttle(zio_t *zio)
!spa_normal_class(zio->io_spa)->mc_alloc_throttle_enabled ||
zio->io_child_type == ZIO_CHILD_GANG ||
zio->io_flags & ZIO_FLAG_NODATA) {
@@ -519,7 +517,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT(zio->io_child_type > ZIO_CHILD_GANG);
-@@ -2979,22 +3000,7 @@ zio_dva_throttle(zio_t *zio)
+@@ -2979,22 +3002,7 @@ zio_dva_throttle(zio_t *zio)
nio = zio_io_to_allocate(zio->io_spa);
mutex_exit(&spa->spa_alloc_lock);
@@ -543,7 +541,7 @@ index 9a465e1b..b08b4747 100644
}
void
-@@ -3013,7 +3019,7 @@ zio_allocate_dispatch(spa_t *spa)
+@@ -3013,7 +3021,7 @@ zio_allocate_dispatch(spa_t *spa)
zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE, B_TRUE);
}
@@ -552,7 +550,7 @@ index 9a465e1b..b08b4747 100644
zio_dva_allocate(zio_t *zio)
{
spa_t *spa = zio->io_spa;
-@@ -3054,18 +3060,18 @@ zio_dva_allocate(zio_t *zio)
+@@ -3054,18 +3062,18 @@ zio_dva_allocate(zio_t *zio)
zio->io_error = error;
}
@@ -575,7 +573,7 @@ index 9a465e1b..b08b4747 100644
zio_dva_claim(zio_t *zio)
{
int error;
-@@ -3074,7 +3080,7 @@ zio_dva_claim(zio_t *zio)
+@@ -3074,7 +3082,7 @@ zio_dva_claim(zio_t *zio)
if (error)
zio->io_error = error;
@@ -584,7 +582,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3172,7 +3178,7 @@ zio_free_zil(spa_t *spa, uint64_t txg, blkptr_t *bp)
+@@ -3172,7 +3180,7 @@ zio_free_zil(spa_t *spa, uint64_t txg, blkptr_t *bp)
* force the underlying vdev layers to call either zio_execute() or
* zio_interrupt() to ensure that the pipeline continues with the correct I/O.
*/
@@ -593,7 +591,7 @@ index 9a465e1b..b08b4747 100644
zio_vdev_io_start(zio_t *zio)
{
vdev_t *vd = zio->io_vd;
-@@ -3192,7 +3198,7 @@ zio_vdev_io_start(zio_t *zio)
+@@ -3192,7 +3200,7 @@ zio_vdev_io_start(zio_t *zio)
* The mirror_ops handle multiple DVAs in a single BP.
*/
vdev_mirror_ops.vdev_op_io_start(zio);
@@ -602,7 +600,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT3P(zio->io_logical, !=, zio);
-@@ -3269,31 +3275,31 @@ zio_vdev_io_start(zio_t *zio)
+@@ -3269,31 +3277,31 @@ zio_vdev_io_start(zio_t *zio)
!vdev_dtl_contains(vd, DTL_PARTIAL, zio->io_txg, 1)) {
ASSERT(zio->io_type == ZIO_TYPE_WRITE);
zio_vdev_io_bypass(zio);
@@ -640,7 +638,7 @@ index 9a465e1b..b08b4747 100644
zio_vdev_io_done(zio_t *zio)
{
vdev_t *vd = zio->io_vd;
-@@ -3301,7 +3307,7 @@ zio_vdev_io_done(zio_t *zio)
+@@ -3301,7 +3309,7 @@ zio_vdev_io_done(zio_t *zio)
boolean_t unexpected_error = B_FALSE;
if (zio_wait_for_children(zio, ZIO_CHILD_VDEV_BIT, ZIO_WAIT_DONE)) {
@@ -649,7 +647,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT(zio->io_type == ZIO_TYPE_READ || zio->io_type == ZIO_TYPE_WRITE);
-@@ -3337,7 +3343,7 @@ zio_vdev_io_done(zio_t *zio)
+@@ -3337,7 +3345,7 @@ zio_vdev_io_done(zio_t *zio)
if (unexpected_error)
VERIFY(vdev_probe(vd, zio) == NULL);
@@ -658,7 +656,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3366,13 +3372,13 @@ zio_vsd_default_cksum_report(zio_t *zio, zio_cksum_report_t *zcr, void *ignored)
+@@ -3366,13 +3374,13 @@ zio_vsd_default_cksum_report(zio_t *zio, zio_cksum_report_t *zcr, void *ignored)
zcr->zcr_free = zio_abd_free;
}
@@ -674,7 +672,7 @@ index 9a465e1b..b08b4747 100644
}
if (vd == NULL && !(zio->io_flags & ZIO_FLAG_CONFIG_WRITER))
-@@ -3402,7 +3408,7 @@ zio_vdev_io_assess(zio_t *zio)
+@@ -3402,7 +3410,7 @@ zio_vdev_io_assess(zio_t *zio)
zio->io_stage = ZIO_STAGE_VDEV_IO_START >> 1;
zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE,
zio_requeue_io_start_cut_in_line);
@@ -683,7 +681,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3442,7 +3448,7 @@ zio_vdev_io_assess(zio_t *zio)
+@@ -3442,7 +3450,7 @@ zio_vdev_io_assess(zio_t *zio)
zio->io_physdone(zio->io_logical);
}
@@ -692,7 +690,7 @@ index 9a465e1b..b08b4747 100644
}
void
-@@ -3477,7 +3483,7 @@ zio_vdev_io_bypass(zio_t *zio)
+@@ -3477,7 +3485,7 @@ zio_vdev_io_bypass(zio_t *zio)
* Generate and verify checksums
* ==========================================================================
*/
@@ -701,7 +699,7 @@ index 9a465e1b..b08b4747 100644
zio_checksum_generate(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -3491,7 +3497,7 @@ zio_checksum_generate(zio_t *zio)
+@@ -3491,7 +3499,7 @@ zio_checksum_generate(zio_t *zio)
checksum = zio->io_prop.zp_checksum;
if (checksum == ZIO_CHECKSUM_OFF)
@@ -710,7 +708,7 @@ index 9a465e1b..b08b4747 100644
ASSERT(checksum == ZIO_CHECKSUM_LABEL);
} else {
-@@ -3505,10 +3511,10 @@ zio_checksum_generate(zio_t *zio)
+@@ -3505,10 +3513,10 @@ zio_checksum_generate(zio_t *zio)
zio_checksum_compute(zio, checksum, zio->io_abd, zio->io_size);
@@ -723,7 +721,7 @@ index 9a465e1b..b08b4747 100644
zio_checksum_verify(zio_t *zio)
{
zio_bad_cksum_t info;
-@@ -3523,7 +3529,7 @@ zio_checksum_verify(zio_t *zio)
+@@ -3523,7 +3531,7 @@ zio_checksum_verify(zio_t *zio)
* We're either verifying a label checksum, or nothing at all.
*/
if (zio->io_prop.zp_checksum == ZIO_CHECKSUM_OFF)
@@ -732,7 +730,7 @@ index 9a465e1b..b08b4747 100644
ASSERT(zio->io_prop.zp_checksum == ZIO_CHECKSUM_LABEL);
}
-@@ -3538,7 +3544,7 @@ zio_checksum_verify(zio_t *zio)
+@@ -3538,7 +3546,7 @@ zio_checksum_verify(zio_t *zio)
}
}
@@ -741,7 +739,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3581,7 +3587,7 @@ zio_worst_error(int e1, int e2)
+@@ -3581,7 +3589,7 @@ zio_worst_error(int e1, int e2)
* I/O completion
* ==========================================================================
*/
@@ -750,7 +748,7 @@ index 9a465e1b..b08b4747 100644
zio_ready(zio_t *zio)
{
blkptr_t *bp = zio->io_bp;
-@@ -3590,7 +3596,7 @@ zio_ready(zio_t *zio)
+@@ -3590,7 +3598,7 @@ zio_ready(zio_t *zio)
if (zio_wait_for_children(zio, ZIO_CHILD_GANG_BIT | ZIO_CHILD_DDT_BIT,
ZIO_WAIT_READY)) {
@@ -759,7 +757,7 @@ index 9a465e1b..b08b4747 100644
}
if (zio->io_ready) {
-@@ -3636,7 +3642,7 @@ zio_ready(zio_t *zio)
+@@ -3636,7 +3644,7 @@ zio_ready(zio_t *zio)
*/
for (; pio != NULL; pio = pio_next) {
pio_next = zio_walk_parents(zio, &zl);
@@ -768,7 +766,7 @@ index 9a465e1b..b08b4747 100644
}
if (zio->io_flags & ZIO_FLAG_NODATA) {
-@@ -3652,7 +3658,7 @@ zio_ready(zio_t *zio)
+@@ -3652,7 +3660,7 @@ zio_ready(zio_t *zio)
zio->io_spa->spa_syncing_txg == zio->io_txg)
zio_handle_ignored_writes(zio);
@@ -777,7 +775,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3716,7 +3722,7 @@ zio_dva_throttle_done(zio_t *zio)
+@@ -3716,7 +3724,7 @@ zio_dva_throttle_done(zio_t *zio)
zio_allocate_dispatch(zio->io_spa);
}
@@ -786,7 +784,7 @@ index 9a465e1b..b08b4747 100644
zio_done(zio_t *zio)
{
/*
-@@ -3733,7 +3739,7 @@ zio_done(zio_t *zio)
+@@ -3733,7 +3741,7 @@ zio_done(zio_t *zio)
* wait for them and then repeat this pipeline stage.
*/
if (zio_wait_for_children(zio, ZIO_CHILD_ALL_BITS, ZIO_WAIT_DONE)) {
@@ -795,7 +793,7 @@ index 9a465e1b..b08b4747 100644
}
/*
-@@ -3957,7 +3963,12 @@ zio_done(zio_t *zio)
+@@ -3957,7 +3965,12 @@ zio_done(zio_t *zio)
if ((pio->io_flags & ZIO_FLAG_GODFATHER) &&
(zio->io_reexecute & ZIO_REEXECUTE_SUSPEND)) {
zio_remove_child(pio, zio, remove_zl);
@@ -809,7 +807,7 @@ index 9a465e1b..b08b4747 100644
}
}
-@@ -3969,7 +3980,11 @@ zio_done(zio_t *zio)
+@@ -3969,7 +3982,11 @@ zio_done(zio_t *zio)
*/
ASSERT(!(zio->io_flags & ZIO_FLAG_GODFATHER));
zio->io_flags |= ZIO_FLAG_DONT_PROPAGATE;
@@ -822,7 +820,7 @@ index 9a465e1b..b08b4747 100644
} else if (zio->io_reexecute & ZIO_REEXECUTE_SUSPEND) {
/*
* We'd fail again if we reexecuted now, so suspend
-@@ -3987,7 +4002,7 @@ zio_done(zio_t *zio)
+@@ -3987,7 +4004,7 @@ zio_done(zio_t *zio)
(task_func_t *)zio_reexecute, zio, 0,
&zio->io_tqent);
}
@@ -831,7 +829,7 @@ index 9a465e1b..b08b4747 100644
}
ASSERT(zio->io_child_count == 0);
-@@ -4023,12 +4038,17 @@ zio_done(zio_t *zio)
+@@ -4023,12 +4040,17 @@ zio_done(zio_t *zio)
zio->io_state[ZIO_WAIT_DONE] = 1;
mutex_exit(&zio->io_lock);
@@ -850,7 +848,7 @@ index 9a465e1b..b08b4747 100644
}
if (zio->io_waiter != NULL) {
-@@ -4040,7 +4060,7 @@ zio_done(zio_t *zio)
+@@ -4040,7 +4062,7 @@ zio_done(zio_t *zio)
zio_destroy(zio);
}
diff --git a/zfs-patches/0010-Skip-import-activity-test-in-more-zdb-code-paths.patch b/zfs-patches/0010-Skip-import-activity-test-in-more-zdb-code-paths.patch
new file mode 100644
index 0000000..b23f828
--- /dev/null
+++ b/zfs-patches/0010-Skip-import-activity-test-in-more-zdb-code-paths.patch
@@ -0,0 +1,221 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Olaf Faaland <faaland1 at llnl.gov>
+Date: Mon, 20 Aug 2018 10:05:23 -0700
+Subject: [PATCH] Skip import activity test in more zdb code paths
+
+Since zdb opens the pools read-only, it cannot damage the pool in the
+event the pool is already imported either on the same host or on
+another one.
+
+If the pool vdev structure is changing while zdb is importing the
+pool, it may cause zdb to crash. However this is unlikely, and in any
+case it's a user space process and can simply be run again.
+
+For this reason, zdb should disable the multihost activity test on
+import that is normally run.
+
+This commit fixes a few zdb code paths where that had been overlooked.
+It also adds tests to ensure that several common use cases handle this
+properly in the future.
+
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Reviewed-by: Gu Zheng <guzheng2331314 at 163.com>
+Signed-off-by: Olaf Faaland <faaland1 at llnl.gov>
+Closes #7797
+Closes #7801
+---
+ cmd/zdb/zdb.c | 39 +++++++-----
+ tests/runfiles/linux.run | 3 +-
+ tests/zfs-tests/tests/functional/mmp/Makefile.am | 1 +
+ .../zfs-tests/tests/functional/mmp/mmp_on_zdb.ksh | 74 ++++++++++++++++++++++
+ 4 files changed, 101 insertions(+), 16 deletions(-)
+ create mode 100755 tests/zfs-tests/tests/functional/mmp/mmp_on_zdb.ksh
+
+diff --git a/cmd/zdb/zdb.c b/cmd/zdb/zdb.c
+index 17a0ae25..bb9fd3f1 100644
+--- a/cmd/zdb/zdb.c
++++ b/cmd/zdb/zdb.c
+@@ -24,7 +24,7 @@
+ * Copyright (c) 2011, 2016 by Delphix. All rights reserved.
+ * Copyright (c) 2014 Integros [integros.com]
+ * Copyright 2016 Nexenta Systems, Inc.
+- * Copyright (c) 2017 Lawrence Livermore National Security, LLC.
++ * Copyright (c) 2017, 2018 Lawrence Livermore National Security, LLC.
+ * Copyright (c) 2015, 2017, Intel Corporation.
+ */
+
+@@ -3660,6 +3660,22 @@ dump_simulated_ddt(spa_t *spa)
+ }
+
+ static void
++zdb_set_skip_mmp(char *target)
++{
++ spa_t *spa;
++
++ /*
++ * Disable the activity check to allow examination of
++ * active pools.
++ */
++ mutex_enter(&spa_namespace_lock);
++ if ((spa = spa_lookup(target)) != NULL) {
++ spa->spa_import_flags |= ZFS_IMPORT_SKIP_MMP;
++ }
++ mutex_exit(&spa_namespace_lock);
++}
++
++static void
+ dump_zpool(spa_t *spa)
+ {
+ dsl_pool_t *dp = spa_get_dsl(spa);
+@@ -4412,14 +4428,15 @@ main(int argc, char **argv)
+ target, strerror(ENOMEM));
+ }
+
+- /*
+- * Disable the activity check to allow examination of
+- * active pools.
+- */
+ if (dump_opt['C'] > 1) {
+ (void) printf("\nConfiguration for import:\n");
+ dump_nvlist(cfg, 8);
+ }
++
++ /*
++ * Disable the activity check to allow examination of
++ * active pools.
++ */
+ error = spa_import(target_pool, cfg, NULL,
+ flags | ZFS_IMPORT_SKIP_MMP);
+ }
+@@ -4430,16 +4447,7 @@ main(int argc, char **argv)
+
+ if (error == 0) {
+ if (target_is_spa || dump_opt['R']) {
+- /*
+- * Disable the activity check to allow examination of
+- * active pools.
+- */
+- mutex_enter(&spa_namespace_lock);
+- if ((spa = spa_lookup(target)) != NULL) {
+- spa->spa_import_flags |= ZFS_IMPORT_SKIP_MMP;
+- }
+- mutex_exit(&spa_namespace_lock);
+-
++ zdb_set_skip_mmp(target);
+ error = spa_open_rewind(target, &spa, FTAG, policy,
+ NULL);
+ if (error) {
+@@ -4462,6 +4470,7 @@ main(int argc, char **argv)
+ }
+ }
+ } else {
++ zdb_set_skip_mmp(target);
+ error = open_objset(target, DMU_OST_ANY, FTAG, &os);
+ }
+ }
+diff --git a/tests/runfiles/linux.run b/tests/runfiles/linux.run
+index d8fe6f3a..ddf01aaf 100644
+--- a/tests/runfiles/linux.run
++++ b/tests/runfiles/linux.run
+@@ -499,7 +499,8 @@ tags = ['functional', 'mmap']
+ [tests/functional/mmp]
+ tests = ['mmp_on_thread', 'mmp_on_uberblocks', 'mmp_on_off', 'mmp_interval',
+ 'mmp_active_import', 'mmp_inactive_import', 'mmp_exported_import',
+- 'mmp_write_uberblocks', 'mmp_reset_interval', 'multihost_history']
++ 'mmp_write_uberblocks', 'mmp_reset_interval', 'multihost_history',
++ 'mmp_on_zdb']
+ tags = ['functional', 'mmp']
+
+ [tests/functional/mount]
+diff --git a/tests/zfs-tests/tests/functional/mmp/Makefile.am b/tests/zfs-tests/tests/functional/mmp/Makefile.am
+index ecf16f80..f2d0ad0e 100644
+--- a/tests/zfs-tests/tests/functional/mmp/Makefile.am
++++ b/tests/zfs-tests/tests/functional/mmp/Makefile.am
+@@ -10,6 +10,7 @@ dist_pkgdata_SCRIPTS = \
+ mmp_exported_import.ksh \
+ mmp_write_uberblocks.ksh \
+ mmp_reset_interval.ksh \
++ mmp_on_zdb.ksh \
+ setup.ksh \
+ cleanup.ksh
+
+diff --git a/tests/zfs-tests/tests/functional/mmp/mmp_on_zdb.ksh b/tests/zfs-tests/tests/functional/mmp/mmp_on_zdb.ksh
+new file mode 100755
+index 00000000..b646475a
+--- /dev/null
++++ b/tests/zfs-tests/tests/functional/mmp/mmp_on_zdb.ksh
+@@ -0,0 +1,74 @@
++#!/bin/ksh
++
++#
++# This file and its contents are supplied under the terms of the
++# Common Development and Distribution License ("CDDL"), version 1.0.
++# You may only use this file in accordance with the terms of version
++# 1.0 of the CDDL.
++#
++# A full copy of the text of the CDDL should have accompanied this
++# source. A copy of the CDDL is also available via the Internet at
++# http://www.illumos.org/license/CDDL.
++#
++
++#
++# Copyright (c) 2018 Lawrence Livermore National Security, LLC.
++# Copyright (c) 2018 by Nutanix. All rights reserved.
++#
++
++. $STF_SUITE/include/libtest.shlib
++. $STF_SUITE/tests/functional/mmp/mmp.cfg
++. $STF_SUITE/tests/functional/mmp/mmp.kshlib
++
++#
++# Description:
++# zdb will work while multihost is enabled.
++#
++# Strategy:
++# 1. Create a pool
++# 2. Enable multihost
++# 3. Run zdb -d with pool and dataset arguments.
++# 4. Create a checkpoint
++# 5. Run zdb -kd with pool and dataset arguments.
++# 6. Discard the checkpoint
++# 7. Export the pool
++# 8. Run zdb -ed with pool and dataset arguments.
++#
++
++function cleanup
++{
++ datasetexists $TESTPOOL && destroy_pool $TESTPOOL
++ for DISK in $DISKS; do
++ zpool labelclear -f $DEV_RDSKDIR/$DISK
++ done
++ log_must mmp_clear_hostid
++}
++
++log_assert "Verify zdb -d works while multihost is enabled"
++log_onexit cleanup
++
++verify_runnable "global"
++verify_disk_count "$DISKS" 2
++
++default_mirror_setup_noexit $DISKS
++log_must mmp_set_hostid $HOSTID1
++log_must zpool set multihost=on $TESTPOOL
++log_must zfs snap $TESTPOOL/$TESTFS at snap
++
++log_must zdb -d $TESTPOOL
++log_must zdb -d $TESTPOOL/
++log_must zdb -d $TESTPOOL/$TESTFS
++log_must zdb -d $TESTPOOL/$TESTFS at snap
++
++log_must zpool export $TESTPOOL
++
++log_must zdb -ed $TESTPOOL
++log_must zdb -ed $TESTPOOL/
++log_must zdb -ed $TESTPOOL/$TESTFS
++log_must zdb -ed $TESTPOOL/$TESTFS at snap
++
++log_must zpool import $TESTPOOL
++
++cleanup
++
++log_pass "zdb -d works while multihost is enabled"
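
For readability, here is the helper that this patch factors out of
main(), stripped of the diff markers; both the spa_open_rewind() path
and the open_objset() path above now call it before opening the target:

    static void
    zdb_set_skip_mmp(char *target)
    {
            spa_t *spa;

            /*
             * Disable the activity check to allow examination of
             * active pools.
             */
            mutex_enter(&spa_namespace_lock);
            if ((spa = spa_lookup(target)) != NULL) {
                    spa->spa_import_flags |= ZFS_IMPORT_SKIP_MMP;
            }
            mutex_exit(&spa_namespace_lock);
    }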
diff --git a/zfs-patches/0011-Fix-statfs-2-for-32-bit-user-space.patch b/zfs-patches/0011-Fix-statfs-2-for-32-bit-user-space.patch
new file mode 100644
index 0000000..eac6f59
--- /dev/null
+++ b/zfs-patches/0011-Fix-statfs-2-for-32-bit-user-space.patch
@@ -0,0 +1,180 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Brian Behlendorf <behlendorf1 at llnl.gov>
+Date: Mon, 24 Sep 2018 17:11:25 -0700
+Subject: [PATCH] Fix statfs(2) for 32-bit user space
+
+When handling a 32-bit statfs() system call the returned fields,
+although 64-bit in the kernel, must be limited to 32-bits or an
+EOVERFLOW error will be returned.
+
+This is less of an issue for block counts since the default
+reported block size in 128KiB. But since it is possible to
+set a smaller block size, these values will be scaled as
+needed to fit in a 32-bit unsigned long.
+
+Unlike most other filesystems the total possible file counts
+are more likely to overflow because they are calculated based
+on the available free space in the pool. In order to prevent
+this the reported value must be capped at 2^32-1. This is
+only for statfs(2) reporting; there are no changes to the
+internal ZFS limits.
+
+Reviewed-by: Andreas Dilger <andreas.dilger at whamcloud.com>
+Reviewed-by: Richard Yao <ryao at gentoo.org>
+Signed-off-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Issue #7927
+Closes #7122
+Closes #7937
+---
+ config/kernel-in-compat-syscall.m4 | 20 ++++++++++++++++++++
+ config/kernel.m4 | 1 +
+ include/linux/vfs_compat.h | 18 ++++++++++++++++++
+ module/zfs/zfs_vfsops.c | 8 +++-----
+ module/zfs/zpl_super.c | 22 ++++++++++++++++++++++
+ 5 files changed, 64 insertions(+), 5 deletions(-)
+ create mode 100644 config/kernel-in-compat-syscall.m4
+
+diff --git a/config/kernel-in-compat-syscall.m4 b/config/kernel-in-compat-syscall.m4
+new file mode 100644
+index 00000000..9fca9da2
+--- /dev/null
++++ b/config/kernel-in-compat-syscall.m4
+@@ -0,0 +1,20 @@
++dnl #
++dnl # 4.5 API change
++dnl # Added in_compat_syscall() which can be overridden on a per-
++dnl # architecture basis. Prior to this is_compat_task() was the
++dnl # provided interface.
++dnl #
++AC_DEFUN([ZFS_AC_KERNEL_IN_COMPAT_SYSCALL], [
++ AC_MSG_CHECKING([whether in_compat_syscall() is available])
++ ZFS_LINUX_TRY_COMPILE([
++ #include <linux/compat.h>
++ ],[
++ in_compat_syscall();
++ ],[
++ AC_MSG_RESULT(yes)
++ AC_DEFINE(HAVE_IN_COMPAT_SYSCALL, 1,
++ [in_compat_syscall() is available])
++ ],[
++ AC_MSG_RESULT(no)
++ ])
++])
+diff --git a/config/kernel.m4 b/config/kernel.m4
+index c7ca260c..3777f45c 100644
+--- a/config/kernel.m4
++++ b/config/kernel.m4
+@@ -129,6 +129,7 @@ AC_DEFUN([ZFS_AC_CONFIG_KERNEL], [
+ ZFS_AC_KERNEL_GLOBAL_PAGE_STATE
+ ZFS_AC_KERNEL_ACL_HAS_REFCOUNT
+ ZFS_AC_KERNEL_USERNS_CAPABILITIES
++ ZFS_AC_KERNEL_IN_COMPAT_SYSCALL
+
+ AS_IF([test "$LINUX_OBJ" != "$LINUX"], [
+ KERNELMAKE_PARAMS="$KERNELMAKE_PARAMS O=$LINUX_OBJ"
+diff --git a/include/linux/vfs_compat.h b/include/linux/vfs_compat.h
+index c8203bd5..90b3cca7 100644
+--- a/include/linux/vfs_compat.h
++++ b/include/linux/vfs_compat.h
+@@ -30,6 +30,7 @@
+ #include <sys/taskq.h>
+ #include <sys/cred.h>
+ #include <linux/backing-dev.h>
++#include <linux/compat.h>
+
+ /*
+ * 2.6.28 API change,
+@@ -626,4 +627,21 @@ inode_set_iversion(struct inode *ip, u64 val)
+ }
+ #endif
+
++/*
++ * Returns true when called in the context of a 32-bit system call.
++ */
++static inline int
++zpl_is_32bit_api(void)
++{
++#ifdef CONFIG_COMPAT
++#ifdef HAVE_IN_COMPAT_SYSCALL
++ return (in_compat_syscall());
++#else
++ return (is_compat_task());
++#endif
++#else
++ return (BITS_PER_LONG == 32);
++#endif
++}
++
+ #endif /* _ZFS_VFS_H */
+diff --git a/module/zfs/zfs_vfsops.c b/module/zfs/zfs_vfsops.c
+index 76113393..bcdfa26b 100644
+--- a/module/zfs/zfs_vfsops.c
++++ b/module/zfs/zfs_vfsops.c
+@@ -1245,15 +1245,13 @@ zfs_statvfs(struct dentry *dentry, struct kstatfs *statp)
+ {
+ zfsvfs_t *zfsvfs = dentry->d_sb->s_fs_info;
+ uint64_t refdbytes, availbytes, usedobjs, availobjs;
+- uint64_t fsid;
+- uint32_t bshift;
+
+ ZFS_ENTER(zfsvfs);
+
+ dmu_objset_space(zfsvfs->z_os,
+ &refdbytes, &availbytes, &usedobjs, &availobjs);
+
+- fsid = dmu_objset_fsid_guid(zfsvfs->z_os);
++ uint64_t fsid = dmu_objset_fsid_guid(zfsvfs->z_os);
+ /*
+ * The underlying storage pool actually uses multiple block
+ * size. Under Solaris frsize (fragment size) is reported as
+@@ -1265,7 +1263,7 @@ zfs_statvfs(struct dentry *dentry, struct kstatfs *statp)
+ */
+ statp->f_frsize = zfsvfs->z_max_blksz;
+ statp->f_bsize = zfsvfs->z_max_blksz;
+- bshift = fls(statp->f_bsize) - 1;
++ uint32_t bshift = fls(statp->f_bsize) - 1;
+
+ /*
+ * The following report "total" blocks of various kinds in
+@@ -1282,7 +1280,7 @@ zfs_statvfs(struct dentry *dentry, struct kstatfs *statp)
+ * static metadata. ZFS doesn't preallocate files, so the best
+ * we can do is report the max that could possibly fit in f_files,
+ * and that minus the number actually used in f_ffree.
+- * For f_ffree, report the smaller of the number of object available
++ * For f_ffree, report the smaller of the number of objects available
+ * and the number of blocks (each object will take at least a block).
+ */
+ statp->f_ffree = MIN(availobjs, availbytes >> DNODE_SHIFT);
+diff --git a/module/zfs/zpl_super.c b/module/zfs/zpl_super.c
+index 5c426b0a..216c7940 100644
+--- a/module/zfs/zpl_super.c
++++ b/module/zfs/zpl_super.c
+@@ -181,6 +181,28 @@ zpl_statfs(struct dentry *dentry, struct kstatfs *statp)
+ spl_fstrans_unmark(cookie);
+ ASSERT3S(error, <=, 0);
+
++ /*
++ * If required by a 32-bit system call, dynamically scale the
++ * block size up to 16MiB and decrease the block counts. This
++ * allows for a maximum size of 64EiB to be reported. The file
++ * counts must be artificially capped at 2^32-1.
++ */
++ if (unlikely(zpl_is_32bit_api())) {
++ while (statp->f_blocks > UINT32_MAX &&
++ statp->f_bsize < SPA_MAXBLOCKSIZE) {
++ statp->f_frsize <<= 1;
++ statp->f_bsize <<= 1;
++
++ statp->f_blocks >>= 1;
++ statp->f_bfree >>= 1;
++ statp->f_bavail >>= 1;
++ }
++
++ uint64_t usedobjs = statp->f_files - statp->f_ffree;
++ statp->f_ffree = MIN(statp->f_ffree, UINT32_MAX - usedobjs);
++ statp->f_files = statp->f_ffree + usedobjs;
++ }
++
+ return (error);
+ }
+
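
A minimal user-space illustration of the scaling scheme from
zpl_statfs() above, with hypothetical pool numbers: the reported block
size doubles while the block counts halve until the counts fit in 32
bits.

    #include <stdint.h>
    #include <stdio.h>

    #define SPA_MAXBLOCKSIZE (16ULL << 20)  /* 16 MiB cap, as in the patch */

    int
    main(void)
    {
            /* Hypothetical pool: 1 PiB in 128 KiB blocks overflows 32 bits. */
            uint64_t bsize = 128 * 1024;
            uint64_t blocks = (1ULL << 50) / bsize; /* 2^33 > UINT32_MAX */

            while (blocks > UINT32_MAX && bsize < SPA_MAXBLOCKSIZE) {
                    bsize <<= 1;    /* report bigger blocks ...            */
                    blocks >>= 1;   /* ... so the count fits in 32 bits    */
            }

            /* Prints: bsize=524288 blocks=2147483648 */
            printf("bsize=%llu blocks=%llu\n",
                (unsigned long long)bsize, (unsigned long long)blocks);
            return (0);
    }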
diff --git a/zfs-patches/0012-Zpool-iostat-remove-latency-queue-scaling.patch b/zfs-patches/0012-Zpool-iostat-remove-latency-queue-scaling.patch
new file mode 100644
index 0000000..88ea5c4
--- /dev/null
+++ b/zfs-patches/0012-Zpool-iostat-remove-latency-queue-scaling.patch
@@ -0,0 +1,86 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Gregor Kopka <mailfrom-github at kopka.net>
+Date: Wed, 26 Sep 2018 01:29:16 +0200
+Subject: [PATCH] Zpool iostat: remove latency/queue scaling
+
+Bandwidth and iops are average per second while *_wait are averages
+per request for latency or, for queue depths, an instantaneous
+measurement at the end of an interval (according to man zpool).
+
+When calculating the first two it makes sense to do
+x/interval_duration (x being the increase in total bytes or number of
+requests over the duration of the interval, interval_duration in
+seconds) to 'scale' from amount/interval_duration to amount/second.
+
+But applying the same math for the latter (*_wait latencies/queue) is
+wrong as there is no interval_duration component in the values (these
+are time/requests to get to average_time/request or already an
+absolute number).
+
+This bug leads to the only correct continuous *_wait figures for both
+latencies and queue depths from 'zpool iostat -l/q' being with
+duration=1 as then the wrong math cancels itself (x/1 is a nop).
+
+This removes temporal scaling from latency and queue depth figures.
+
+Reviewed-by: Tony Hutter <hutter2 at llnl.gov>
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Signed-off-by: Gregor Kopka <gregor at kopka.net>
+Closes #7945
+Closes #7694
+---
+ cmd/zpool/zpool_main.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/cmd/zpool/zpool_main.c b/cmd/zpool/zpool_main.c
+index a4fd0321..591e2e5c 100644
+--- a/cmd/zpool/zpool_main.c
++++ b/cmd/zpool/zpool_main.c
+@@ -3493,7 +3493,7 @@ single_histo_average(uint64_t *histo, unsigned int buckets)
+
+ static void
+ print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
+- nvlist_t *newnv, double scale)
++ nvlist_t *newnv)
+ {
+ int i;
+ uint64_t val;
+@@ -3523,7 +3523,7 @@ print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
+ format = ZFS_NICENUM_1024;
+
+ for (i = 0; i < ARRAY_SIZE(names); i++) {
+- val = nva[i].data[0] * scale;
++ val = nva[i].data[0];
+ print_one_stat(val, format, column_width, cb->cb_scripted);
+ }
+
+@@ -3532,7 +3532,7 @@ print_iostat_queues(iostat_cbdata_t *cb, nvlist_t *oldnv,
+
+ static void
+ print_iostat_latency(iostat_cbdata_t *cb, nvlist_t *oldnv,
+- nvlist_t *newnv, double scale)
++ nvlist_t *newnv)
+ {
+ int i;
+ uint64_t val;
+@@ -3562,7 +3562,7 @@ print_iostat_latency(iostat_cbdata_t *cb, nvlist_t *oldnv,
+ /* Print our avg latencies on the line */
+ for (i = 0; i < ARRAY_SIZE(names); i++) {
+ /* Compute average latency for a latency histo */
+- val = single_histo_average(nva[i].data, nva[i].count) * scale;
++ val = single_histo_average(nva[i].data, nva[i].count);
+ print_one_stat(val, format, column_width, cb->cb_scripted);
+ }
+ free_calc_stats(nva, ARRAY_SIZE(names));
+@@ -3701,9 +3701,9 @@ print_vdev_stats(zpool_handle_t *zhp, const char *name, nvlist_t *oldnv,
+ print_iostat_default(calcvs, cb, scale);
+ }
+ if (cb->cb_flags & IOS_LATENCY_M)
+- print_iostat_latency(cb, oldnv, newnv, scale);
++ print_iostat_latency(cb, oldnv, newnv);
+ if (cb->cb_flags & IOS_QUEUES_M)
+- print_iostat_queues(cb, oldnv, newnv, scale);
++ print_iostat_queues(cb, oldnv, newnv);
+ if (cb->cb_flags & IOS_ANYHISTO_M) {
+ printf("\n");
+ print_iostat_histos(cb, oldnv, newnv, scale, name);
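
A small worked example of the bug, assuming scale is the reciprocal of
the interval length (which the surrounding bandwidth/iops math implies):
with zpool iostat -l 5, a true 10 ms average wait was reported as 2 ms.

    #include <stdio.h>

    int
    main(void)
    {
            double avg_wait_ms = 10.0;  /* per-request average (histogram) */
            double interval_s = 5.0;    /* zpool iostat -l 5 */
            double scale = 1.0 / interval_s;

            /*
             * The old code scaled the per-request average as if it were
             * a per-interval total; only interval_s == 1 was unaffected.
             * Prints: reported 2.0 ms, actual 10.0 ms
             */
            printf("reported %.1f ms, actual %.1f ms\n",
                avg_wait_ms * scale, avg_wait_ms);
            return (0);
    }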
diff --git a/zfs-patches/0013-Linux-4.19-rc3-compat-Remove-refcount_t-compat.patch b/zfs-patches/0013-Linux-4.19-rc3-compat-Remove-refcount_t-compat.patch
new file mode 100644
index 0000000..bc142a0
--- /dev/null
+++ b/zfs-patches/0013-Linux-4.19-rc3-compat-Remove-refcount_t-compat.patch
@@ -0,0 +1,878 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Tim Schumacher <timschumi at gmx.de>
+Date: Wed, 26 Sep 2018 19:29:26 +0200
+Subject: [PATCH] Linux 4.19-rc3+ compat: Remove refcount_t compat
+
+torvalds/linux at 59b57717f ("blkcg: delay blkg destruction until
+after writeback has finished") added a refcount_t to the blkcg
+structure. Due to the refcount_t compatibility code, zfs_refcount_t
+was used by mistake.
+
+Resolve this by removing the compatibility code and replacing the
+occurrences of refcount_t with zfs_refcount_t.
+
+Reviewed-by: Franz Pletz <fpletz at fnordicwalking.de>
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Signed-off-by: Tim Schumacher <timschumi at gmx.de>
+Closes #7885
+Closes #7932
+---
+ cmd/ztest/ztest.c | 6 +++---
+ include/linux/vfs_compat.h | 5 -----
+ include/sys/abd.h | 2 +-
+ include/sys/arc.h | 2 +-
+ include/sys/arc_impl.h | 8 +++----
+ include/sys/dbuf.h | 2 +-
+ include/sys/dmu_tx.h | 4 ++--
+ include/sys/dnode.h | 4 ++--
+ include/sys/dsl_dataset.h | 2 +-
+ include/sys/metaslab_impl.h | 5 ++---
+ include/sys/refcount.h | 52 ++++++++++++++++++++-------------------------
+ include/sys/rrwlock.h | 4 ++--
+ include/sys/sa_impl.h | 2 +-
+ include/sys/spa_impl.h | 6 +++---
+ include/sys/zap.h | 2 +-
+ include/sys/zfs_znode.h | 2 +-
+ module/zfs/arc.c | 12 +++++------
+ module/zfs/dbuf.c | 10 ++++-----
+ module/zfs/dmu.c | 2 +-
+ module/zfs/dmu_tx.c | 6 +++---
+ module/zfs/dnode.c | 6 +++---
+ module/zfs/dsl_dataset.c | 2 +-
+ module/zfs/metaslab.c | 4 ++--
+ module/zfs/refcount.c | 30 +++++++++++++-------------
+ module/zfs/rrwlock.c | 4 ++--
+ module/zfs/sa.c | 2 +-
+ module/zfs/spa_misc.c | 8 +++----
+ module/zfs/zfs_ctldir.c | 10 ++++-----
+ module/zfs/zfs_znode.c | 2 +-
+ 29 files changed, 97 insertions(+), 109 deletions(-)
+
+diff --git a/cmd/ztest/ztest.c b/cmd/ztest/ztest.c
+index a410eeef..24967a76 100644
+--- a/cmd/ztest/ztest.c
++++ b/cmd/ztest/ztest.c
+@@ -1189,7 +1189,7 @@ ztest_spa_prop_set_uint64(zpool_prop_t prop, uint64_t value)
+ */
+ typedef struct {
+ list_node_t z_lnode;
+- refcount_t z_refcnt;
++ zfs_refcount_t z_refcnt;
+ uint64_t z_object;
+ zfs_rlock_t z_range_lock;
+ } ztest_znode_t;
+@@ -1248,13 +1248,13 @@ ztest_znode_get(ztest_ds_t *zd, uint64_t object)
+ for (zp = list_head(&zll->z_list); (zp);
+ zp = list_next(&zll->z_list, zp)) {
+ if (zp->z_object == object) {
+- refcount_add(&zp->z_refcnt, RL_TAG);
++ zfs_refcount_add(&zp->z_refcnt, RL_TAG);
+ break;
+ }
+ }
+ if (zp == NULL) {
+ zp = ztest_znode_init(object);
+- refcount_add(&zp->z_refcnt, RL_TAG);
++ zfs_refcount_add(&zp->z_refcnt, RL_TAG);
+ list_insert_head(&zll->z_list, zp);
+ }
+ mutex_exit(&zll->z_lock);
+diff --git a/include/linux/vfs_compat.h b/include/linux/vfs_compat.h
+index 90b3cca7..c01f5850 100644
+--- a/include/linux/vfs_compat.h
++++ b/include/linux/vfs_compat.h
+@@ -297,9 +297,6 @@ lseek_execute(
+ * This is several orders of magnitude larger than expected grace period.
+ * At 60 seconds the kernel will also begin issuing RCU stall warnings.
+ */
+-#ifdef refcount_t
+-#undef refcount_t
+-#endif
+
+ #include <linux/posix_acl.h>
+
+@@ -430,8 +427,6 @@ typedef mode_t zpl_equivmode_t;
+ #define zpl_posix_acl_valid(ip, acl) posix_acl_valid(acl)
+ #endif
+
+-#define refcount_t zfs_refcount_t
+-
+ #endif /* CONFIG_FS_POSIX_ACL */
+
+ /*
+diff --git a/include/sys/abd.h b/include/sys/abd.h
+index cd710501..4898606a 100644
+--- a/include/sys/abd.h
++++ b/include/sys/abd.h
+@@ -52,7 +52,7 @@ typedef struct abd {
+ abd_flags_t abd_flags;
+ uint_t abd_size; /* excludes scattered abd_offset */
+ struct abd *abd_parent;
+- refcount_t abd_children;
++ zfs_refcount_t abd_children;
+ union {
+ struct abd_scatter {
+ uint_t abd_offset;
+diff --git a/include/sys/arc.h b/include/sys/arc.h
+index 1ea4937b..943ebfb5 100644
+--- a/include/sys/arc.h
++++ b/include/sys/arc.h
+@@ -76,7 +76,7 @@ struct arc_prune {
+ void *p_private;
+ uint64_t p_adjust;
+ list_node_t p_node;
+- refcount_t p_refcnt;
++ zfs_refcount_t p_refcnt;
+ };
+
+ typedef enum arc_strategy {
+diff --git a/include/sys/arc_impl.h b/include/sys/arc_impl.h
+index c6363f2a..ed2b0abe 100644
+--- a/include/sys/arc_impl.h
++++ b/include/sys/arc_impl.h
+@@ -74,12 +74,12 @@ typedef struct arc_state {
+ /*
+ * total amount of evictable data in this state
+ */
+- refcount_t arcs_esize[ARC_BUFC_NUMTYPES];
++ zfs_refcount_t arcs_esize[ARC_BUFC_NUMTYPES];
+ /*
+ * total amount of data in this state; this includes: evictable,
+ * non-evictable, ARC_BUFC_DATA, and ARC_BUFC_METADATA.
+ */
+- refcount_t arcs_size;
++ zfs_refcount_t arcs_size;
+ /*
+ * supports the "dbufs" kstat
+ */
+@@ -163,7 +163,7 @@ typedef struct l1arc_buf_hdr {
+ uint32_t b_l2_hits;
+
+ /* self protecting */
+- refcount_t b_refcnt;
++ zfs_refcount_t b_refcnt;
+
+ arc_callback_t *b_acb;
+ abd_t *b_pabd;
+@@ -180,7 +180,7 @@ typedef struct l2arc_dev {
+ kmutex_t l2ad_mtx; /* lock for buffer list */
+ list_t l2ad_buflist; /* buffer list */
+ list_node_t l2ad_node; /* device list node */
+- refcount_t l2ad_alloc; /* allocated bytes */
++ zfs_refcount_t l2ad_alloc; /* allocated bytes */
+ } l2arc_dev_t;
+
+ typedef struct l2arc_buf_hdr {
+diff --git a/include/sys/dbuf.h b/include/sys/dbuf.h
+index f3f2007d..127acad3 100644
+--- a/include/sys/dbuf.h
++++ b/include/sys/dbuf.h
+@@ -212,7 +212,7 @@ typedef struct dmu_buf_impl {
+ * If nonzero, the buffer can't be destroyed.
+ * Protected by db_mtx.
+ */
+- refcount_t db_holds;
++ zfs_refcount_t db_holds;
+
+ /* buffer holding our data */
+ arc_buf_t *db_buf;
+diff --git a/include/sys/dmu_tx.h b/include/sys/dmu_tx.h
+index 74b7e111..96bbcb05 100644
+--- a/include/sys/dmu_tx.h
++++ b/include/sys/dmu_tx.h
+@@ -97,8 +97,8 @@ typedef struct dmu_tx_hold {
+ dmu_tx_t *txh_tx;
+ list_node_t txh_node;
+ struct dnode *txh_dnode;
+- refcount_t txh_space_towrite;
+- refcount_t txh_memory_tohold;
++ zfs_refcount_t txh_space_towrite;
++ zfs_refcount_t txh_memory_tohold;
+ enum dmu_tx_hold_type txh_type;
+ uint64_t txh_arg1;
+ uint64_t txh_arg2;
+diff --git a/include/sys/dnode.h b/include/sys/dnode.h
+index 2dd087b3..1e77e0a3 100644
+--- a/include/sys/dnode.h
++++ b/include/sys/dnode.h
+@@ -266,8 +266,8 @@ struct dnode {
+ uint8_t *dn_dirtyctx_firstset; /* dbg: contents meaningless */
+
+ /* protected by own devices */
+- refcount_t dn_tx_holds;
+- refcount_t dn_holds;
++ zfs_refcount_t dn_tx_holds;
++ zfs_refcount_t dn_holds;
+
+ kmutex_t dn_dbufs_mtx;
+ /*
+diff --git a/include/sys/dsl_dataset.h b/include/sys/dsl_dataset.h
+index 1281674b..d96f526d 100644
+--- a/include/sys/dsl_dataset.h
++++ b/include/sys/dsl_dataset.h
+@@ -186,7 +186,7 @@ typedef struct dsl_dataset {
+ * Owning counts as a long hold. See the comments above
+ * dsl_pool_hold() for details.
+ */
+- refcount_t ds_longholds;
++ zfs_refcount_t ds_longholds;
+
+ /* no locking; only for making guesses */
+ uint64_t ds_trysnap_txg;
+diff --git a/include/sys/metaslab_impl.h b/include/sys/metaslab_impl.h
+index f8a713a4..60151937 100644
+--- a/include/sys/metaslab_impl.h
++++ b/include/sys/metaslab_impl.h
+@@ -179,8 +179,7 @@ struct metaslab_class {
+ * number of allocations allowed.
+ */
+ uint64_t mc_alloc_max_slots;
+- refcount_t mc_alloc_slots;
+-
++ zfs_refcount_t mc_alloc_slots;
+ uint64_t mc_alloc_groups; /* # of allocatable groups */
+
+ uint64_t mc_alloc; /* total allocated space */
+@@ -230,7 +229,7 @@ struct metaslab_group {
+ * are unable to handle their share of allocations.
+ */
+ uint64_t mg_max_alloc_queue_depth;
+- refcount_t mg_alloc_queue_depth;
++ zfs_refcount_t mg_alloc_queue_depth;
+
+ /*
+ * A metalab group that can no longer allocate the minimum block
+diff --git a/include/sys/refcount.h b/include/sys/refcount.h
+index a96220b2..5c5198d8 100644
+--- a/include/sys/refcount.h
++++ b/include/sys/refcount.h
+@@ -41,17 +41,6 @@ extern "C" {
+ */
+ #define FTAG ((char *)__func__)
+
+-/*
+- * Starting with 4.11, torvalds/linux at f405df5, the linux kernel defines a
+- * refcount_t type of its own. The macro below effectively changes references
+- * in the ZFS code from refcount_t to zfs_refcount_t at compile time, so that
+- * existing code need not be altered, reducing conflicts when landing openZFS
+- * patches.
+- */
+-
+-#define refcount_t zfs_refcount_t
+-#define refcount_add zfs_refcount_add
+-
+ #ifdef ZFS_DEBUG
+ typedef struct reference {
+ list_node_t ref_link;
+@@ -69,23 +58,28 @@ typedef struct refcount {
+ uint64_t rc_removed_count;
+ } zfs_refcount_t;
+
+-/* Note: refcount_t must be initialized with refcount_create[_untracked]() */
+-
+-void refcount_create(refcount_t *rc);
+-void refcount_create_untracked(refcount_t *rc);
+-void refcount_create_tracked(refcount_t *rc);
+-void refcount_destroy(refcount_t *rc);
+-void refcount_destroy_many(refcount_t *rc, uint64_t number);
+-int refcount_is_zero(refcount_t *rc);
+-int64_t refcount_count(refcount_t *rc);
+-int64_t zfs_refcount_add(refcount_t *rc, void *holder_tag);
+-int64_t refcount_remove(refcount_t *rc, void *holder_tag);
+-int64_t refcount_add_many(refcount_t *rc, uint64_t number, void *holder_tag);
+-int64_t refcount_remove_many(refcount_t *rc, uint64_t number, void *holder_tag);
+-void refcount_transfer(refcount_t *dst, refcount_t *src);
+-void refcount_transfer_ownership(refcount_t *, void *, void *);
+-boolean_t refcount_held(refcount_t *, void *);
+-boolean_t refcount_not_held(refcount_t *, void *);
++/*
++ * Note: zfs_refcount_t must be initialized with
++ * refcount_create[_untracked]()
++ */
++
++void refcount_create(zfs_refcount_t *rc);
++void refcount_create_untracked(zfs_refcount_t *rc);
++void refcount_create_tracked(zfs_refcount_t *rc);
++void refcount_destroy(zfs_refcount_t *rc);
++void refcount_destroy_many(zfs_refcount_t *rc, uint64_t number);
++int refcount_is_zero(zfs_refcount_t *rc);
++int64_t refcount_count(zfs_refcount_t *rc);
++int64_t zfs_refcount_add(zfs_refcount_t *rc, void *holder_tag);
++int64_t refcount_remove(zfs_refcount_t *rc, void *holder_tag);
++int64_t refcount_add_many(zfs_refcount_t *rc, uint64_t number,
++ void *holder_tag);
++int64_t refcount_remove_many(zfs_refcount_t *rc, uint64_t number,
++ void *holder_tag);
++void refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src);
++void refcount_transfer_ownership(zfs_refcount_t *, void *, void *);
++boolean_t refcount_held(zfs_refcount_t *, void *);
++boolean_t refcount_not_held(zfs_refcount_t *, void *);
+
+ void refcount_init(void);
+ void refcount_fini(void);
+@@ -94,7 +88,7 @@ void refcount_fini(void);
+
+ typedef struct refcount {
+ uint64_t rc_count;
+-} refcount_t;
++} zfs_refcount_t;
+
+ #define refcount_create(rc) ((rc)->rc_count = 0)
+ #define refcount_create_untracked(rc) ((rc)->rc_count = 0)
+diff --git a/include/sys/rrwlock.h b/include/sys/rrwlock.h
+index 7a328fd6..e1c1756c 100644
+--- a/include/sys/rrwlock.h
++++ b/include/sys/rrwlock.h
+@@ -57,8 +57,8 @@ typedef struct rrwlock {
+ kmutex_t rr_lock;
+ kcondvar_t rr_cv;
+ kthread_t *rr_writer;
+- refcount_t rr_anon_rcount;
+- refcount_t rr_linked_rcount;
++ zfs_refcount_t rr_anon_rcount;
++ zfs_refcount_t rr_linked_rcount;
+ boolean_t rr_writer_wanted;
+ boolean_t rr_track_all;
+ } rrwlock_t;
+diff --git a/include/sys/sa_impl.h b/include/sys/sa_impl.h
+index b68b7610..7eddd875 100644
+--- a/include/sys/sa_impl.h
++++ b/include/sys/sa_impl.h
+@@ -110,7 +110,7 @@ typedef struct sa_idx_tab {
+ list_node_t sa_next;
+ sa_lot_t *sa_layout;
+ uint16_t *sa_variable_lengths;
+- refcount_t sa_refcount;
++ zfs_refcount_t sa_refcount;
+ uint32_t *sa_idx_tab; /* array of offsets */
+ } sa_idx_tab_t;
+
+diff --git a/include/sys/spa_impl.h b/include/sys/spa_impl.h
+index fa7490ac..62ac8f67 100644
+--- a/include/sys/spa_impl.h
++++ b/include/sys/spa_impl.h
+@@ -78,7 +78,7 @@ typedef struct spa_config_lock {
+ kthread_t *scl_writer;
+ int scl_write_wanted;
+ kcondvar_t scl_cv;
+- refcount_t scl_count;
++ zfs_refcount_t scl_count;
+ } spa_config_lock_t;
+
+ typedef struct spa_config_dirent {
+@@ -281,12 +281,12 @@ struct spa {
+
+ /*
+ * spa_refcount & spa_config_lock must be the last elements
+- * because refcount_t changes size based on compilation options.
++ * because zfs_refcount_t changes size based on compilation options.
+ * In order for the MDB module to function correctly, the other
+ * fields must remain in the same location.
+ */
+ spa_config_lock_t spa_config_lock[SCL_LOCKS]; /* config changes */
+- refcount_t spa_refcount; /* number of opens */
++ zfs_refcount_t spa_refcount; /* number of opens */
+
+ taskq_t *spa_upgrade_taskq; /* taskq for upgrade jobs */
+ };
+diff --git a/include/sys/zap.h b/include/sys/zap.h
+index 43b7fbd2..7acc3bec 100644
+--- a/include/sys/zap.h
++++ b/include/sys/zap.h
+@@ -226,7 +226,7 @@ int zap_lookup_norm_by_dnode(dnode_t *dn, const char *name,
+ boolean_t *ncp);
+
+ int zap_count_write_by_dnode(dnode_t *dn, const char *name,
+- int add, refcount_t *towrite, refcount_t *tooverwrite);
++ int add, zfs_refcount_t *towrite, zfs_refcount_t *tooverwrite);
+
+ /*
+ * Create an attribute with the given name and value.
+diff --git a/include/sys/zfs_znode.h b/include/sys/zfs_znode.h
+index 26d1eb37..33bc20d1 100644
+--- a/include/sys/zfs_znode.h
++++ b/include/sys/zfs_znode.h
+@@ -209,7 +209,7 @@ typedef struct znode_hold {
+ uint64_t zh_obj; /* object id */
+ kmutex_t zh_lock; /* lock serializing object access */
+ avl_node_t zh_node; /* avl tree linkage */
+- refcount_t zh_refcount; /* active consumer reference count */
++ zfs_refcount_t zh_refcount; /* active consumer reference count */
+ } znode_hold_t;
+
+ /*
+diff --git a/module/zfs/arc.c b/module/zfs/arc.c
+index bcf74dd6..7518d5c8 100644
+--- a/module/zfs/arc.c
++++ b/module/zfs/arc.c
+@@ -1966,7 +1966,7 @@ add_reference(arc_buf_hdr_t *hdr, void *tag)
+
+ state = hdr->b_l1hdr.b_state;
+
+- if ((refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
++ if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
+ (state != arc_anon)) {
+ /* We don't use the L2-only state list. */
+ if (state != arc_l2c_only) {
+@@ -2505,7 +2505,7 @@ arc_return_buf(arc_buf_t *buf, void *tag)
+
+ ASSERT3P(buf->b_data, !=, NULL);
+ ASSERT(HDR_HAS_L1HDR(hdr));
+- (void) refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
++ (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
+ (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
+
+ arc_loaned_bytes_update(-arc_buf_size(buf));
+@@ -2519,7 +2519,7 @@ arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
+
+ ASSERT3P(buf->b_data, !=, NULL);
+ ASSERT(HDR_HAS_L1HDR(hdr));
+- (void) refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
++ (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
+ (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
+
+ arc_loaned_bytes_update(arc_buf_size(buf));
+@@ -3533,7 +3533,7 @@ arc_prune_async(int64_t adjust)
+ if (refcount_count(&ap->p_refcnt) >= 2)
+ continue;
+
+- refcount_add(&ap->p_refcnt, ap->p_pfunc);
++ zfs_refcount_add(&ap->p_refcnt, ap->p_pfunc);
+ ap->p_adjust = adjust;
+ if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
+ ap, TQ_SLEEP) == TASKQID_INVALID) {
+@@ -5549,7 +5549,7 @@ arc_add_prune_callback(arc_prune_func_t *func, void *private)
+ refcount_create(&p->p_refcnt);
+
+ mutex_enter(&arc_prune_mtx);
+- refcount_add(&p->p_refcnt, &arc_prune_list);
++ zfs_refcount_add(&p->p_refcnt, &arc_prune_list);
+ list_insert_head(&arc_prune_list, p);
+ mutex_exit(&arc_prune_mtx);
+
+@@ -5815,7 +5815,7 @@ arc_release(arc_buf_t *buf, void *tag)
+ nhdr->b_l1hdr.b_mfu_hits = 0;
+ nhdr->b_l1hdr.b_mfu_ghost_hits = 0;
+ nhdr->b_l1hdr.b_l2_hits = 0;
+- (void) refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
++ (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
+ buf->b_hdr = nhdr;
+
+ mutex_exit(&buf->b_evict_lock);
+diff --git a/module/zfs/dbuf.c b/module/zfs/dbuf.c
+index 6edb39d6..5101c848 100644
+--- a/module/zfs/dbuf.c
++++ b/module/zfs/dbuf.c
+@@ -104,7 +104,7 @@ static boolean_t dbuf_evict_thread_exit;
+ * become eligible for arc eviction.
+ */
+ static multilist_t *dbuf_cache;
+-static refcount_t dbuf_cache_size;
++static zfs_refcount_t dbuf_cache_size;
+ unsigned long dbuf_cache_max_bytes = 100 * 1024 * 1024;
+
+ /* Cap the size of the dbuf cache to log2 fraction of arc size. */
+@@ -2384,7 +2384,7 @@ dbuf_create(dnode_t *dn, uint8_t level, uint64_t blkid,
+
+ ASSERT(dn->dn_object == DMU_META_DNODE_OBJECT ||
+ refcount_count(&dn->dn_holds) > 0);
+- (void) refcount_add(&dn->dn_holds, db);
++ (void) zfs_refcount_add(&dn->dn_holds, db);
+ atomic_inc_32(&dn->dn_dbufs_count);
+
+ dprintf_dbuf(db, "db=%p\n", db);
+@@ -2749,7 +2749,7 @@ __dbuf_hold_impl(struct dbuf_hold_impl_data *dh)
+ (void) refcount_remove_many(&dbuf_cache_size,
+ dh->dh_db->db.db_size, dh->dh_db);
+ }
+- (void) refcount_add(&dh->dh_db->db_holds, dh->dh_tag);
++ (void) zfs_refcount_add(&dh->dh_db->db_holds, dh->dh_tag);
+ DBUF_VERIFY(dh->dh_db);
+ mutex_exit(&dh->dh_db->db_mtx);
+
+@@ -2873,7 +2873,7 @@ dbuf_rm_spill(dnode_t *dn, dmu_tx_t *tx)
+ void
+ dbuf_add_ref(dmu_buf_impl_t *db, void *tag)
+ {
+- int64_t holds = refcount_add(&db->db_holds, tag);
++ int64_t holds = zfs_refcount_add(&db->db_holds, tag);
+ VERIFY3S(holds, >, 1);
+ }
+
+@@ -2893,7 +2893,7 @@ dbuf_try_add_ref(dmu_buf_t *db_fake, objset_t *os, uint64_t obj, uint64_t blkid,
+
+ if (found_db != NULL) {
+ if (db == found_db && dbuf_refcount(db) > db->db_dirtycnt) {
+- (void) refcount_add(&db->db_holds, tag);
++ (void) zfs_refcount_add(&db->db_holds, tag);
+ result = B_TRUE;
+ }
+ mutex_exit(&found_db->db_mtx);
+diff --git a/module/zfs/dmu.c b/module/zfs/dmu.c
+index a09ac4f9..a76cdd9f 100644
+--- a/module/zfs/dmu.c
++++ b/module/zfs/dmu.c
+@@ -342,7 +342,7 @@ dmu_bonus_hold(objset_t *os, uint64_t object, void *tag, dmu_buf_t **dbp)
+ db = dn->dn_bonus;
+
+ /* as long as the bonus buf is held, the dnode will be held */
+- if (refcount_add(&db->db_holds, tag) == 1) {
++ if (zfs_refcount_add(&db->db_holds, tag) == 1) {
+ VERIFY(dnode_add_ref(dn, db));
+ atomic_inc_32(&dn->dn_dbufs_count);
+ }
+diff --git a/module/zfs/dmu_tx.c b/module/zfs/dmu_tx.c
+index 6ebff267..b1508ffa 100644
+--- a/module/zfs/dmu_tx.c
++++ b/module/zfs/dmu_tx.c
+@@ -114,7 +114,7 @@ dmu_tx_hold_dnode_impl(dmu_tx_t *tx, dnode_t *dn, enum dmu_tx_hold_type type,
+ dmu_tx_hold_t *txh;
+
+ if (dn != NULL) {
+- (void) refcount_add(&dn->dn_holds, tx);
++ (void) zfs_refcount_add(&dn->dn_holds, tx);
+ if (tx->tx_txg != 0) {
+ mutex_enter(&dn->dn_mtx);
+ /*
+@@ -124,7 +124,7 @@ dmu_tx_hold_dnode_impl(dmu_tx_t *tx, dnode_t *dn, enum dmu_tx_hold_type type,
+ */
+ ASSERT(dn->dn_assigned_txg == 0);
+ dn->dn_assigned_txg = tx->tx_txg;
+- (void) refcount_add(&dn->dn_tx_holds, tx);
++ (void) zfs_refcount_add(&dn->dn_tx_holds, tx);
+ mutex_exit(&dn->dn_mtx);
+ }
+ }
+@@ -916,7 +916,7 @@ dmu_tx_try_assign(dmu_tx_t *tx, uint64_t txg_how)
+ if (dn->dn_assigned_txg == 0)
+ dn->dn_assigned_txg = tx->tx_txg;
+ ASSERT3U(dn->dn_assigned_txg, ==, tx->tx_txg);
+- (void) refcount_add(&dn->dn_tx_holds, tx);
++ (void) zfs_refcount_add(&dn->dn_tx_holds, tx);
+ mutex_exit(&dn->dn_mtx);
+ }
+ towrite += refcount_count(&txh->txh_space_towrite);
+diff --git a/module/zfs/dnode.c b/module/zfs/dnode.c
+index 4a169c49..77d38c36 100644
+--- a/module/zfs/dnode.c
++++ b/module/zfs/dnode.c
+@@ -1267,7 +1267,7 @@ dnode_hold_impl(objset_t *os, uint64_t object, int flag, int slots,
+ if ((flag & DNODE_MUST_BE_FREE) && type != DMU_OT_NONE)
+ return (SET_ERROR(EEXIST));
+ DNODE_VERIFY(dn);
+- (void) refcount_add(&dn->dn_holds, tag);
++ (void) zfs_refcount_add(&dn->dn_holds, tag);
+ *dnp = dn;
+ return (0);
+ }
+@@ -1484,7 +1484,7 @@ dnode_hold_impl(objset_t *os, uint64_t object, int flag, int slots,
+ return (type == DMU_OT_NONE ? ENOENT : EEXIST);
+ }
+
+- if (refcount_add(&dn->dn_holds, tag) == 1)
++ if (zfs_refcount_add(&dn->dn_holds, tag) == 1)
+ dbuf_add_ref(db, dnh);
+
+ mutex_exit(&dn->dn_mtx);
+@@ -1524,7 +1524,7 @@ dnode_add_ref(dnode_t *dn, void *tag)
+ mutex_exit(&dn->dn_mtx);
+ return (FALSE);
+ }
+- VERIFY(1 < refcount_add(&dn->dn_holds, tag));
++ VERIFY(1 < zfs_refcount_add(&dn->dn_holds, tag));
+ mutex_exit(&dn->dn_mtx);
+ return (TRUE);
+ }
+diff --git a/module/zfs/dsl_dataset.c b/module/zfs/dsl_dataset.c
+index bd03b486..b7562bcd 100644
+--- a/module/zfs/dsl_dataset.c
++++ b/module/zfs/dsl_dataset.c
+@@ -645,7 +645,7 @@ void
+ dsl_dataset_long_hold(dsl_dataset_t *ds, void *tag)
+ {
+ ASSERT(dsl_pool_config_held(ds->ds_dir->dd_pool));
+- (void) refcount_add(&ds->ds_longholds, tag);
++ (void) zfs_refcount_add(&ds->ds_longholds, tag);
+ }
+
+ void
+diff --git a/module/zfs/metaslab.c b/module/zfs/metaslab.c
+index ee24850d..40658d51 100644
+--- a/module/zfs/metaslab.c
++++ b/module/zfs/metaslab.c
+@@ -2663,7 +2663,7 @@ metaslab_group_alloc_increment(spa_t *spa, uint64_t vdev, void *tag, int flags)
+ if (!mg->mg_class->mc_alloc_throttle_enabled)
+ return;
+
+- (void) refcount_add(&mg->mg_alloc_queue_depth, tag);
++ (void) zfs_refcount_add(&mg->mg_alloc_queue_depth, tag);
+ }
+
+ void
+@@ -3360,7 +3360,7 @@ metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, zio_t *zio,
+ * them individually when an I/O completes.
+ */
+ for (d = 0; d < slots; d++) {
+- reserved_slots = refcount_add(&mc->mc_alloc_slots, zio);
++ reserved_slots = zfs_refcount_add(&mc->mc_alloc_slots, zio);
+ }
+ zio->io_flags |= ZIO_FLAG_IO_ALLOCATING;
+ slot_reserved = B_TRUE;
+diff --git a/module/zfs/refcount.c b/module/zfs/refcount.c
+index a151acea..13f9bb6b 100644
+--- a/module/zfs/refcount.c
++++ b/module/zfs/refcount.c
+@@ -55,7 +55,7 @@ refcount_fini(void)
+ }
+
+ void
+-refcount_create(refcount_t *rc)
++refcount_create(zfs_refcount_t *rc)
+ {
+ mutex_init(&rc->rc_mtx, NULL, MUTEX_DEFAULT, NULL);
+ list_create(&rc->rc_list, sizeof (reference_t),
+@@ -68,21 +68,21 @@ refcount_create(refcount_t *rc)
+ }
+
+ void
+-refcount_create_tracked(refcount_t *rc)
++refcount_create_tracked(zfs_refcount_t *rc)
+ {
+ refcount_create(rc);
+ rc->rc_tracked = B_TRUE;
+ }
+
+ void
+-refcount_create_untracked(refcount_t *rc)
++refcount_create_untracked(zfs_refcount_t *rc)
+ {
+ refcount_create(rc);
+ rc->rc_tracked = B_FALSE;
+ }
+
+ void
+-refcount_destroy_many(refcount_t *rc, uint64_t number)
++refcount_destroy_many(zfs_refcount_t *rc, uint64_t number)
+ {
+ reference_t *ref;
+
+@@ -103,25 +103,25 @@ refcount_destroy_many(refcount_t *rc, uint64_t number)
+ }
+
+ void
+-refcount_destroy(refcount_t *rc)
++refcount_destroy(zfs_refcount_t *rc)
+ {
+ refcount_destroy_many(rc, 0);
+ }
+
+ int
+-refcount_is_zero(refcount_t *rc)
++refcount_is_zero(zfs_refcount_t *rc)
+ {
+ return (rc->rc_count == 0);
+ }
+
+ int64_t
+-refcount_count(refcount_t *rc)
++refcount_count(zfs_refcount_t *rc)
+ {
+ return (rc->rc_count);
+ }
+
+ int64_t
+-refcount_add_many(refcount_t *rc, uint64_t number, void *holder)
++refcount_add_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ {
+ reference_t *ref = NULL;
+ int64_t count;
+@@ -143,13 +143,13 @@ refcount_add_many(refcount_t *rc, uint64_t number, void *holder)
+ }
+
+ int64_t
+-zfs_refcount_add(refcount_t *rc, void *holder)
++zfs_refcount_add(zfs_refcount_t *rc, void *holder)
+ {
+ return (refcount_add_many(rc, 1, holder));
+ }
+
+ int64_t
+-refcount_remove_many(refcount_t *rc, uint64_t number, void *holder)
++refcount_remove_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ {
+ reference_t *ref;
+ int64_t count;
+@@ -197,13 +197,13 @@ refcount_remove_many(refcount_t *rc, uint64_t number, void *holder)
+ }
+
+ int64_t
+-refcount_remove(refcount_t *rc, void *holder)
++refcount_remove(zfs_refcount_t *rc, void *holder)
+ {
+ return (refcount_remove_many(rc, 1, holder));
+ }
+
+ void
+-refcount_transfer(refcount_t *dst, refcount_t *src)
++refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src)
+ {
+ int64_t count, removed_count;
+ list_t list, removed;
+@@ -234,7 +234,7 @@ refcount_transfer(refcount_t *dst, refcount_t *src)
+ }
+
+ void
+-refcount_transfer_ownership(refcount_t *rc, void *current_holder,
++refcount_transfer_ownership(zfs_refcount_t *rc, void *current_holder,
+ void *new_holder)
+ {
+ reference_t *ref;
+@@ -264,7 +264,7 @@ refcount_transfer_ownership(refcount_t *rc, void *current_holder,
+ * might be held.
+ */
+ boolean_t
+-refcount_held(refcount_t *rc, void *holder)
++refcount_held(zfs_refcount_t *rc, void *holder)
+ {
+ reference_t *ref;
+
+@@ -292,7 +292,7 @@ refcount_held(refcount_t *rc, void *holder)
+ * since the reference might not be held.
+ */
+ boolean_t
+-refcount_not_held(refcount_t *rc, void *holder)
++refcount_not_held(zfs_refcount_t *rc, void *holder)
+ {
+ reference_t *ref;
+
+diff --git a/module/zfs/rrwlock.c b/module/zfs/rrwlock.c
+index 704f7606..effff330 100644
+--- a/module/zfs/rrwlock.c
++++ b/module/zfs/rrwlock.c
+@@ -183,9 +183,9 @@ rrw_enter_read_impl(rrwlock_t *rrl, boolean_t prio, void *tag)
+ if (rrl->rr_writer_wanted || rrl->rr_track_all) {
+ /* may or may not be a re-entrant enter */
+ rrn_add(rrl, tag);
+- (void) refcount_add(&rrl->rr_linked_rcount, tag);
++ (void) zfs_refcount_add(&rrl->rr_linked_rcount, tag);
+ } else {
+- (void) refcount_add(&rrl->rr_anon_rcount, tag);
++ (void) zfs_refcount_add(&rrl->rr_anon_rcount, tag);
+ }
+ ASSERT(rrl->rr_writer == NULL);
+ mutex_exit(&rrl->rr_lock);
+diff --git a/module/zfs/sa.c b/module/zfs/sa.c
+index 1fb1a8b5..df4f6fd8 100644
+--- a/module/zfs/sa.c
++++ b/module/zfs/sa.c
+@@ -1337,7 +1337,7 @@ sa_idx_tab_hold(objset_t *os, sa_idx_tab_t *idx_tab)
+ ASSERTV(sa_os_t *sa = os->os_sa);
+
+ ASSERT(MUTEX_HELD(&sa->sa_lock));
+- (void) refcount_add(&idx_tab->sa_refcount, NULL);
++ (void) zfs_refcount_add(&idx_tab->sa_refcount, NULL);
+ }
+
+ void
+diff --git a/module/zfs/spa_misc.c b/module/zfs/spa_misc.c
+index cc1c641d..f6c9b40b 100644
+--- a/module/zfs/spa_misc.c
++++ b/module/zfs/spa_misc.c
+@@ -80,7 +80,7 @@
+ * definition they must have an existing reference, and will never need
+ * to lookup a spa_t by name.
+ *
+- * spa_refcount (per-spa refcount_t protected by mutex)
++ * spa_refcount (per-spa zfs_refcount_t protected by mutex)
+ *
+ * This reference count keep track of any active users of the spa_t. The
+ * spa_t cannot be destroyed or freed while this is non-zero. Internally,
+@@ -414,7 +414,7 @@ spa_config_tryenter(spa_t *spa, int locks, void *tag, krw_t rw)
+ }
+ scl->scl_writer = curthread;
+ }
+- (void) refcount_add(&scl->scl_count, tag);
++ (void) zfs_refcount_add(&scl->scl_count, tag);
+ mutex_exit(&scl->scl_lock);
+ }
+ return (1);
+@@ -448,7 +448,7 @@ spa_config_enter(spa_t *spa, int locks, void *tag, krw_t rw)
+ }
+ scl->scl_writer = curthread;
+ }
+- (void) refcount_add(&scl->scl_count, tag);
++ (void) zfs_refcount_add(&scl->scl_count, tag);
+ mutex_exit(&scl->scl_lock);
+ }
+ ASSERT(wlocks_held <= locks);
+@@ -768,7 +768,7 @@ spa_open_ref(spa_t *spa, void *tag)
+ {
+ ASSERT(refcount_count(&spa->spa_refcount) >= spa->spa_minref ||
+ MUTEX_HELD(&spa_namespace_lock));
+- (void) refcount_add(&spa->spa_refcount, tag);
++ (void) zfs_refcount_add(&spa->spa_refcount, tag);
+ }
+
+ /*
+diff --git a/module/zfs/zfs_ctldir.c b/module/zfs/zfs_ctldir.c
+index 0ab5b4f0..de3c5a41 100644
+--- a/module/zfs/zfs_ctldir.c
++++ b/module/zfs/zfs_ctldir.c
+@@ -120,7 +120,7 @@ typedef struct {
+ taskqid_t se_taskqid; /* scheduled unmount taskqid */
+ avl_node_t se_node_name; /* zfs_snapshots_by_name link */
+ avl_node_t se_node_objsetid; /* zfs_snapshots_by_objsetid link */
+- refcount_t se_refcount; /* reference count */
++ zfs_refcount_t se_refcount; /* reference count */
+ } zfs_snapentry_t;
+
+ static void zfsctl_snapshot_unmount_delay_impl(zfs_snapentry_t *se, int delay);
+@@ -169,7 +169,7 @@ zfsctl_snapshot_free(zfs_snapentry_t *se)
+ static void
+ zfsctl_snapshot_hold(zfs_snapentry_t *se)
+ {
+- refcount_add(&se->se_refcount, NULL);
++ zfs_refcount_add(&se->se_refcount, NULL);
+ }
+
+ /*
+@@ -192,7 +192,7 @@ static void
+ zfsctl_snapshot_add(zfs_snapentry_t *se)
+ {
+ ASSERT(RW_WRITE_HELD(&zfs_snapshot_lock));
+- refcount_add(&se->se_refcount, NULL);
++ zfs_refcount_add(&se->se_refcount, NULL);
+ avl_add(&zfs_snapshots_by_name, se);
+ avl_add(&zfs_snapshots_by_objsetid, se);
+ }
+@@ -269,7 +269,7 @@ zfsctl_snapshot_find_by_name(char *snapname)
+ search.se_name = snapname;
+ se = avl_find(&zfs_snapshots_by_name, &search, NULL);
+ if (se)
+- refcount_add(&se->se_refcount, NULL);
++ zfs_refcount_add(&se->se_refcount, NULL);
+
+ return (se);
+ }
+@@ -290,7 +290,7 @@ zfsctl_snapshot_find_by_objsetid(spa_t *spa, uint64_t objsetid)
+ search.se_objsetid = objsetid;
+ se = avl_find(&zfs_snapshots_by_objsetid, &search, NULL);
+ if (se)
+- refcount_add(&se->se_refcount, NULL);
++ zfs_refcount_add(&se->se_refcount, NULL);
+
+ return (se);
+ }
+diff --git a/module/zfs/zfs_znode.c b/module/zfs/zfs_znode.c
+index e222c791..0ca10f82 100644
+--- a/module/zfs/zfs_znode.c
++++ b/module/zfs/zfs_znode.c
+@@ -272,7 +272,7 @@ zfs_znode_hold_enter(zfsvfs_t *zfsvfs, uint64_t obj)
+ ASSERT3U(zh->zh_obj, ==, obj);
+ found = B_TRUE;
+ }
+- refcount_add(&zh->zh_refcount, NULL);
++ zfs_refcount_add(&zh->zh_refcount, NULL);
+ mutex_exit(&zfsvfs->z_hold_locks[i]);
+
+ if (found == B_TRUE)
diff --git a/zfs-patches/0014-Prefix-all-refcount-functions-with-zfs_.patch b/zfs-patches/0014-Prefix-all-refcount-functions-with-zfs_.patch
new file mode 100644
index 0000000..55efcb8
--- /dev/null
+++ b/zfs-patches/0014-Prefix-all-refcount-functions-with-zfs_.patch
@@ -0,0 +1,2527 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Tim Schumacher <timschumi at gmx.de>
+Date: Mon, 1 Oct 2018 19:42:05 +0200
+Subject: [PATCH] Prefix all refcount functions with zfs_
+
+Recent changes in the Linux kernel made it necessary to prefix
+the refcount_add() function with zfs_ due to a name collision.
+
+To bring the other functions in line with that and to avoid future
+collisions, prefix the other refcount functions as well.
+
+Reviewed by: Matthew Ahrens <mahrens at delphix.com>
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Signed-off-by: Tim Schumacher <timschumi at gmx.de>
+Closes #7963
+---
+ cmd/ztest/ztest.c | 10 +-
+ include/sys/refcount.h | 70 ++++++-----
+ include/sys/trace_dbuf.h | 2 +-
+ module/zfs/abd.c | 22 ++--
+ module/zfs/arc.c | 301 ++++++++++++++++++++++++-----------------------
+ module/zfs/dbuf.c | 66 +++++------
+ module/zfs/dbuf_stats.c | 4 +-
+ module/zfs/dmu_tx.c | 36 +++---
+ module/zfs/dnode.c | 40 +++----
+ module/zfs/dnode_sync.c | 6 +-
+ module/zfs/dsl_dataset.c | 12 +-
+ module/zfs/dsl_destroy.c | 6 +-
+ module/zfs/metaslab.c | 23 ++--
+ module/zfs/refcount.c | 42 +++----
+ module/zfs/rrwlock.c | 35 +++---
+ module/zfs/sa.c | 8 +-
+ module/zfs/spa.c | 8 +-
+ module/zfs/spa_misc.c | 35 +++---
+ module/zfs/zfs_ctldir.c | 6 +-
+ module/zfs/zfs_znode.c | 10 +-
+ module/zfs/zio.c | 4 +-
+ 21 files changed, 381 insertions(+), 365 deletions(-)
+
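The collision behind this rename is easy to reproduce outside the tree. Below is a minimal standalone sketch, using simplified stand-ins for the kernel's refcount_add() and for the ZFS type — neither matches the real <linux/refcount.h> or ZFS headers, they only illustrate why two unrelated refcount_add() signatures cannot coexist in one kernel build, and why the zfs_ prefix resolves it:

    /* collision-sketch.c -- illustrative only; the prototypes and struct
     * layouts here are assumptions, not the real kernel or ZFS definitions. */
    #include <stdio.h>

    /* Stand-in for the kernel API: refcount_add(int, refcount_t *). */
    typedef struct { int refs; } refcount_t;
    static void refcount_add(int i, refcount_t *r) { r->refs += i; }

    /*
     * ZFS's old API used the same name with an incompatible signature;
     * uncommenting this redeclaration makes the file fail to compile:
     *
     * long refcount_add(refcount_t *rc, void *holder);
     *
     * Renaming with the zfs_ prefix (and giving the type its own name,
     * as the previous patch in this series already did) lets both
     * live in one translation unit:
     */
    typedef struct { long rc_count; } zfs_refcount_t;
    static long zfs_refcount_add(zfs_refcount_t *rc, void *holder)
    {
            (void) holder;  /* the untracked build ignores the holder tag */
            return (++rc->rc_count);
    }

    int main(void)
    {
            refcount_t kr = { 0 };
            zfs_refcount_t zr = { 0 };

            refcount_add(1, &kr);
            printf("kernel-style: %d, zfs-style: %ld\n",
                kr.refs, zfs_refcount_add(&zr, NULL));
            return (0);
    }

The upstream fix applies the same prefix to the whole API family rather than just the colliding symbol, which is why the diffstat above touches every refcount consumer in the module.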
+diff --git a/cmd/ztest/ztest.c b/cmd/ztest/ztest.c
+index 24967a76..5868d60a 100644
+--- a/cmd/ztest/ztest.c
++++ b/cmd/ztest/ztest.c
+@@ -1205,7 +1205,7 @@ ztest_znode_init(uint64_t object)
+ ztest_znode_t *zp = umem_alloc(sizeof (*zp), UMEM_NOFAIL);
+
+ list_link_init(&zp->z_lnode);
+- refcount_create(&zp->z_refcnt);
++ zfs_refcount_create(&zp->z_refcnt);
+ zp->z_object = object;
+ zfs_rlock_init(&zp->z_range_lock);
+
+@@ -1215,10 +1215,10 @@ ztest_znode_init(uint64_t object)
+ static void
+ ztest_znode_fini(ztest_znode_t *zp)
+ {
+- ASSERT(refcount_is_zero(&zp->z_refcnt));
++ ASSERT(zfs_refcount_is_zero(&zp->z_refcnt));
+ zfs_rlock_destroy(&zp->z_range_lock);
+ zp->z_object = 0;
+- refcount_destroy(&zp->z_refcnt);
++ zfs_refcount_destroy(&zp->z_refcnt);
+ list_link_init(&zp->z_lnode);
+ umem_free(zp, sizeof (*zp));
+ }
+@@ -1268,8 +1268,8 @@ ztest_znode_put(ztest_ds_t *zd, ztest_znode_t *zp)
+ ASSERT3U(zp->z_object, !=, 0);
+ zll = &zd->zd_range_lock[zp->z_object & (ZTEST_OBJECT_LOCKS - 1)];
+ mutex_enter(&zll->z_lock);
+- refcount_remove(&zp->z_refcnt, RL_TAG);
+- if (refcount_is_zero(&zp->z_refcnt)) {
++ zfs_refcount_remove(&zp->z_refcnt, RL_TAG);
++ if (zfs_refcount_is_zero(&zp->z_refcnt)) {
+ list_remove(&zll->z_list, zp);
+ ztest_znode_fini(zp);
+ }
+diff --git a/include/sys/refcount.h b/include/sys/refcount.h
+index 5c5198d8..7eeb1366 100644
+--- a/include/sys/refcount.h
++++ b/include/sys/refcount.h
+@@ -63,26 +63,24 @@ typedef struct refcount {
+ * refcount_create[_untracked]()
+ */
+
+-void refcount_create(zfs_refcount_t *rc);
+-void refcount_create_untracked(zfs_refcount_t *rc);
+-void refcount_create_tracked(zfs_refcount_t *rc);
+-void refcount_destroy(zfs_refcount_t *rc);
+-void refcount_destroy_many(zfs_refcount_t *rc, uint64_t number);
+-int refcount_is_zero(zfs_refcount_t *rc);
+-int64_t refcount_count(zfs_refcount_t *rc);
+-int64_t zfs_refcount_add(zfs_refcount_t *rc, void *holder_tag);
+-int64_t refcount_remove(zfs_refcount_t *rc, void *holder_tag);
+-int64_t refcount_add_many(zfs_refcount_t *rc, uint64_t number,
+- void *holder_tag);
+-int64_t refcount_remove_many(zfs_refcount_t *rc, uint64_t number,
+- void *holder_tag);
+-void refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src);
+-void refcount_transfer_ownership(zfs_refcount_t *, void *, void *);
+-boolean_t refcount_held(zfs_refcount_t *, void *);
+-boolean_t refcount_not_held(zfs_refcount_t *, void *);
+-
+-void refcount_init(void);
+-void refcount_fini(void);
++void zfs_refcount_create(zfs_refcount_t *);
++void zfs_refcount_create_untracked(zfs_refcount_t *);
++void zfs_refcount_create_tracked(zfs_refcount_t *);
++void zfs_refcount_destroy(zfs_refcount_t *);
++void zfs_refcount_destroy_many(zfs_refcount_t *, uint64_t);
++int zfs_refcount_is_zero(zfs_refcount_t *);
++int64_t zfs_refcount_count(zfs_refcount_t *);
++int64_t zfs_refcount_add(zfs_refcount_t *, void *);
++int64_t zfs_refcount_remove(zfs_refcount_t *, void *);
++int64_t zfs_refcount_add_many(zfs_refcount_t *, uint64_t, void *);
++int64_t zfs_refcount_remove_many(zfs_refcount_t *, uint64_t, void *);
++void zfs_refcount_transfer(zfs_refcount_t *, zfs_refcount_t *);
++void zfs_refcount_transfer_ownership(zfs_refcount_t *, void *, void *);
++boolean_t zfs_refcount_held(zfs_refcount_t *, void *);
++boolean_t zfs_refcount_not_held(zfs_refcount_t *, void *);
++
++void zfs_refcount_init(void);
++void zfs_refcount_fini(void);
+
+ #else /* ZFS_DEBUG */
+
+@@ -90,30 +88,30 @@ typedef struct refcount {
+ uint64_t rc_count;
+ } zfs_refcount_t;
+
+-#define refcount_create(rc) ((rc)->rc_count = 0)
+-#define refcount_create_untracked(rc) ((rc)->rc_count = 0)
+-#define refcount_create_tracked(rc) ((rc)->rc_count = 0)
+-#define refcount_destroy(rc) ((rc)->rc_count = 0)
+-#define refcount_destroy_many(rc, number) ((rc)->rc_count = 0)
+-#define refcount_is_zero(rc) ((rc)->rc_count == 0)
+-#define refcount_count(rc) ((rc)->rc_count)
++#define zfs_refcount_create(rc) ((rc)->rc_count = 0)
++#define zfs_refcount_create_untracked(rc) ((rc)->rc_count = 0)
++#define zfs_refcount_create_tracked(rc) ((rc)->rc_count = 0)
++#define zfs_refcount_destroy(rc) ((rc)->rc_count = 0)
++#define zfs_refcount_destroy_many(rc, number) ((rc)->rc_count = 0)
++#define zfs_refcount_is_zero(rc) ((rc)->rc_count == 0)
++#define zfs_refcount_count(rc) ((rc)->rc_count)
+ #define zfs_refcount_add(rc, holder) atomic_inc_64_nv(&(rc)->rc_count)
+-#define refcount_remove(rc, holder) atomic_dec_64_nv(&(rc)->rc_count)
+-#define refcount_add_many(rc, number, holder) \
++#define zfs_refcount_remove(rc, holder) atomic_dec_64_nv(&(rc)->rc_count)
++#define zfs_refcount_add_many(rc, number, holder) \
+ atomic_add_64_nv(&(rc)->rc_count, number)
+-#define refcount_remove_many(rc, number, holder) \
++#define zfs_refcount_remove_many(rc, number, holder) \
+ atomic_add_64_nv(&(rc)->rc_count, -number)
+-#define refcount_transfer(dst, src) { \
++#define zfs_refcount_transfer(dst, src) { \
+ uint64_t __tmp = (src)->rc_count; \
+ atomic_add_64(&(src)->rc_count, -__tmp); \
+ atomic_add_64(&(dst)->rc_count, __tmp); \
+ }
+-#define refcount_transfer_ownership(rc, current_holder, new_holder) (void)0
+-#define refcount_held(rc, holder) ((rc)->rc_count > 0)
+-#define refcount_not_held(rc, holder) (B_TRUE)
++#define zfs_refcount_transfer_ownership(rc, current_holder, new_holder) (void)0
++#define zfs_refcount_held(rc, holder) ((rc)->rc_count > 0)
++#define zfs_refcount_not_held(rc, holder) (B_TRUE)
+
+-#define refcount_init()
+-#define refcount_fini()
++#define zfs_refcount_init()
++#define zfs_refcount_fini()
+
+ #endif /* ZFS_DEBUG */
+
+diff --git a/include/sys/trace_dbuf.h b/include/sys/trace_dbuf.h
+index c3e70c37..e97b6113 100644
+--- a/include/sys/trace_dbuf.h
++++ b/include/sys/trace_dbuf.h
+@@ -71,7 +71,7 @@
+ __entry->db_offset = db->db.db_offset; \
+ __entry->db_size = db->db.db_size; \
+ __entry->db_state = db->db_state; \
+- __entry->db_holds = refcount_count(&db->db_holds); \
++ __entry->db_holds = zfs_refcount_count(&db->db_holds); \
+ snprintf(__get_str(msg), TRACE_DBUF_MSG_MAX, \
+ DBUF_TP_PRINTK_FMT, DBUF_TP_PRINTK_ARGS); \
+ } else { \
+diff --git a/module/zfs/abd.c b/module/zfs/abd.c
+index 138b041c..5a6a8158 100644
+--- a/module/zfs/abd.c
++++ b/module/zfs/abd.c
+@@ -597,7 +597,7 @@ abd_alloc(size_t size, boolean_t is_metadata)
+ }
+ abd->abd_size = size;
+ abd->abd_parent = NULL;
+- refcount_create(&abd->abd_children);
++ zfs_refcount_create(&abd->abd_children);
+
+ abd->abd_u.abd_scatter.abd_offset = 0;
+
+@@ -614,7 +614,7 @@ abd_free_scatter(abd_t *abd)
+ {
+ abd_free_pages(abd);
+
+- refcount_destroy(&abd->abd_children);
++ zfs_refcount_destroy(&abd->abd_children);
+ ABDSTAT_BUMPDOWN(abdstat_scatter_cnt);
+ ABDSTAT_INCR(abdstat_scatter_data_size, -(int)abd->abd_size);
+ ABDSTAT_INCR(abdstat_scatter_chunk_waste,
+@@ -641,7 +641,7 @@ abd_alloc_linear(size_t size, boolean_t is_metadata)
+ }
+ abd->abd_size = size;
+ abd->abd_parent = NULL;
+- refcount_create(&abd->abd_children);
++ zfs_refcount_create(&abd->abd_children);
+
+ if (is_metadata) {
+ abd->abd_u.abd_linear.abd_buf = zio_buf_alloc(size);
+@@ -664,7 +664,7 @@ abd_free_linear(abd_t *abd)
+ zio_data_buf_free(abd->abd_u.abd_linear.abd_buf, abd->abd_size);
+ }
+
+- refcount_destroy(&abd->abd_children);
++ zfs_refcount_destroy(&abd->abd_children);
+ ABDSTAT_BUMPDOWN(abdstat_linear_cnt);
+ ABDSTAT_INCR(abdstat_linear_data_size, -(int)abd->abd_size);
+
+@@ -775,8 +775,8 @@ abd_get_offset_impl(abd_t *sabd, size_t off, size_t size)
+
+ abd->abd_size = size;
+ abd->abd_parent = sabd;
+- refcount_create(&abd->abd_children);
+- (void) refcount_add_many(&sabd->abd_children, abd->abd_size, abd);
++ zfs_refcount_create(&abd->abd_children);
++ (void) zfs_refcount_add_many(&sabd->abd_children, abd->abd_size, abd);
+
+ return (abd);
+ }
+@@ -818,7 +818,7 @@ abd_get_from_buf(void *buf, size_t size)
+ abd->abd_flags = ABD_FLAG_LINEAR;
+ abd->abd_size = size;
+ abd->abd_parent = NULL;
+- refcount_create(&abd->abd_children);
++ zfs_refcount_create(&abd->abd_children);
+
+ abd->abd_u.abd_linear.abd_buf = buf;
+
+@@ -836,11 +836,11 @@ abd_put(abd_t *abd)
+ ASSERT(!(abd->abd_flags & ABD_FLAG_OWNER));
+
+ if (abd->abd_parent != NULL) {
+- (void) refcount_remove_many(&abd->abd_parent->abd_children,
++ (void) zfs_refcount_remove_many(&abd->abd_parent->abd_children,
+ abd->abd_size, abd);
+ }
+
+- refcount_destroy(&abd->abd_children);
++ zfs_refcount_destroy(&abd->abd_children);
+ abd_free_struct(abd);
+ }
+
+@@ -872,7 +872,7 @@ abd_borrow_buf(abd_t *abd, size_t n)
+ } else {
+ buf = zio_buf_alloc(n);
+ }
+- (void) refcount_add_many(&abd->abd_children, n, buf);
++ (void) zfs_refcount_add_many(&abd->abd_children, n, buf);
+
+ return (buf);
+ }
+@@ -904,7 +904,7 @@ abd_return_buf(abd_t *abd, void *buf, size_t n)
+ ASSERT0(abd_cmp_buf(abd, buf, n));
+ zio_buf_free(buf, n);
+ }
+- (void) refcount_remove_many(&abd->abd_children, n, buf);
++ (void) zfs_refcount_remove_many(&abd->abd_children, n, buf);
+ }
+
+ void
+diff --git a/module/zfs/arc.c b/module/zfs/arc.c
+index 7518d5c8..32ac0837 100644
+--- a/module/zfs/arc.c
++++ b/module/zfs/arc.c
+@@ -1181,7 +1181,7 @@ hdr_full_cons(void *vbuf, void *unused, int kmflag)
+
+ bzero(hdr, HDR_FULL_SIZE);
+ cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL);
+- refcount_create(&hdr->b_l1hdr.b_refcnt);
++ zfs_refcount_create(&hdr->b_l1hdr.b_refcnt);
+ mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
+ list_link_init(&hdr->b_l1hdr.b_arc_node);
+ list_link_init(&hdr->b_l2hdr.b_l2node);
+@@ -1228,7 +1228,7 @@ hdr_full_dest(void *vbuf, void *unused)
+
+ ASSERT(HDR_EMPTY(hdr));
+ cv_destroy(&hdr->b_l1hdr.b_cv);
+- refcount_destroy(&hdr->b_l1hdr.b_refcnt);
++ zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt);
+ mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
+ ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
+ arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
+@@ -1893,20 +1893,20 @@ arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state)
+ ASSERT0(hdr->b_l1hdr.b_bufcnt);
+ ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
+ ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
+- (void) refcount_add_many(&state->arcs_esize[type],
++ (void) zfs_refcount_add_many(&state->arcs_esize[type],
+ HDR_GET_LSIZE(hdr), hdr);
+ return;
+ }
+
+ ASSERT(!GHOST_STATE(state));
+ if (hdr->b_l1hdr.b_pabd != NULL) {
+- (void) refcount_add_many(&state->arcs_esize[type],
++ (void) zfs_refcount_add_many(&state->arcs_esize[type],
+ arc_hdr_size(hdr), hdr);
+ }
+ for (buf = hdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
+ if (arc_buf_is_shared(buf))
+ continue;
+- (void) refcount_add_many(&state->arcs_esize[type],
++ (void) zfs_refcount_add_many(&state->arcs_esize[type],
+ arc_buf_size(buf), buf);
+ }
+ }
+@@ -1928,20 +1928,20 @@ arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state)
+ ASSERT0(hdr->b_l1hdr.b_bufcnt);
+ ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
+ ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(&state->arcs_esize[type],
+ HDR_GET_LSIZE(hdr), hdr);
+ return;
+ }
+
+ ASSERT(!GHOST_STATE(state));
+ if (hdr->b_l1hdr.b_pabd != NULL) {
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(&state->arcs_esize[type],
+ arc_hdr_size(hdr), hdr);
+ }
+ for (buf = hdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
+ if (arc_buf_is_shared(buf))
+ continue;
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(&state->arcs_esize[type],
+ arc_buf_size(buf), buf);
+ }
+ }
+@@ -1960,7 +1960,7 @@ add_reference(arc_buf_hdr_t *hdr, void *tag)
+ ASSERT(HDR_HAS_L1HDR(hdr));
+ if (!MUTEX_HELD(HDR_LOCK(hdr))) {
+ ASSERT(hdr->b_l1hdr.b_state == arc_anon);
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
+ }
+
+@@ -1998,7 +1998,7 @@ remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
+ * arc_l2c_only counts as a ghost state so we don't need to explicitly
+ * check to prevent usage of the arc_l2c_only list.
+ */
+- if (((cnt = refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
++ if (((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
+ (state != arc_anon)) {
+ multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr);
+ ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
+@@ -2043,7 +2043,7 @@ arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
+ abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
+ abi->abi_mfu_hits = l1hdr->b_mfu_hits;
+ abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
+- abi->abi_holds = refcount_count(&l1hdr->b_refcnt);
++ abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt);
+ }
+
+ if (l2hdr) {
+@@ -2079,7 +2079,7 @@ arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
+ */
+ if (HDR_HAS_L1HDR(hdr)) {
+ old_state = hdr->b_l1hdr.b_state;
+- refcnt = refcount_count(&hdr->b_l1hdr.b_refcnt);
++ refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt);
+ bufcnt = hdr->b_l1hdr.b_bufcnt;
+ update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL);
+ } else {
+@@ -2148,7 +2148,7 @@ arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
+ * the reference. As a result, we use the arc
+ * header pointer for the reference.
+ */
+- (void) refcount_add_many(&new_state->arcs_size,
++ (void) zfs_refcount_add_many(&new_state->arcs_size,
+ HDR_GET_LSIZE(hdr), hdr);
+ ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
+ } else {
+@@ -2175,13 +2175,15 @@ arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
+ if (arc_buf_is_shared(buf))
+ continue;
+
+- (void) refcount_add_many(&new_state->arcs_size,
++ (void) zfs_refcount_add_many(
++ &new_state->arcs_size,
+ arc_buf_size(buf), buf);
+ }
+ ASSERT3U(bufcnt, ==, buffers);
+
+ if (hdr->b_l1hdr.b_pabd != NULL) {
+- (void) refcount_add_many(&new_state->arcs_size,
++ (void) zfs_refcount_add_many(
++ &new_state->arcs_size,
+ arc_hdr_size(hdr), hdr);
+ } else {
+ ASSERT(GHOST_STATE(old_state));
+@@ -2203,7 +2205,7 @@ arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
+ * header on the ghost state.
+ */
+
+- (void) refcount_remove_many(&old_state->arcs_size,
++ (void) zfs_refcount_remove_many(&old_state->arcs_size,
+ HDR_GET_LSIZE(hdr), hdr);
+ } else {
+ arc_buf_t *buf;
+@@ -2229,13 +2231,13 @@ arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
+ if (arc_buf_is_shared(buf))
+ continue;
+
+- (void) refcount_remove_many(
++ (void) zfs_refcount_remove_many(
+ &old_state->arcs_size, arc_buf_size(buf),
+ buf);
+ }
+ ASSERT3U(bufcnt, ==, buffers);
+ ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
+- (void) refcount_remove_many(
++ (void) zfs_refcount_remove_many(
+ &old_state->arcs_size, arc_hdr_size(hdr), hdr);
+ }
+ }
+@@ -2506,7 +2508,7 @@ arc_return_buf(arc_buf_t *buf, void *tag)
+ ASSERT3P(buf->b_data, !=, NULL);
+ ASSERT(HDR_HAS_L1HDR(hdr));
+ (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
+- (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
++ (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
+
+ arc_loaned_bytes_update(-arc_buf_size(buf));
+ }
+@@ -2520,7 +2522,7 @@ arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
+ ASSERT3P(buf->b_data, !=, NULL);
+ ASSERT(HDR_HAS_L1HDR(hdr));
+ (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
+- (void) refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
++ (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
+
+ arc_loaned_bytes_update(arc_buf_size(buf));
+ }
+@@ -2547,13 +2549,13 @@ arc_hdr_free_on_write(arc_buf_hdr_t *hdr)
+
+ /* protected by hash lock, if in the hash table */
+ if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ ASSERT(state != arc_anon && state != arc_l2c_only);
+
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(&state->arcs_esize[type],
+ size, hdr);
+ }
+- (void) refcount_remove_many(&state->arcs_size, size, hdr);
++ (void) zfs_refcount_remove_many(&state->arcs_size, size, hdr);
+ if (type == ARC_BUFC_METADATA) {
+ arc_space_return(size, ARC_SPACE_META);
+ } else {
+@@ -2581,7 +2583,8 @@ arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
+ * refcount ownership to the hdr since it always owns
+ * the refcount whenever an arc_buf_t is shared.
+ */
+- refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, buf, hdr);
++ zfs_refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, buf,
++ hdr);
+ hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf));
+ abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd,
+ HDR_ISTYPE_METADATA(hdr));
+@@ -2609,7 +2612,8 @@ arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
+ * We are no longer sharing this buffer so we need
+ * to transfer its ownership to the rightful owner.
+ */
+- refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, hdr, buf);
++ zfs_refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, hdr,
++ buf);
+ arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
+ abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd);
+ abd_put(hdr->b_l1hdr.b_pabd);
+@@ -2833,7 +2837,7 @@ arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize,
+ * it references and compressed arc enablement.
+ */
+ arc_hdr_alloc_pabd(hdr);
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+
+ return (hdr);
+ }
+@@ -2927,8 +2931,10 @@ arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
+ * the wrong pointer address when calling arc_hdr_destroy() later.
+ */
+
+- (void) refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
+- (void) refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(nhdr), nhdr);
++ (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr),
++ hdr);
++ (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(nhdr),
++ nhdr);
+
+ buf_discard_identity(hdr);
+ kmem_cache_free(old, hdr);
+@@ -3008,7 +3014,7 @@ arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
+
+ vdev_space_update(dev->l2ad_vdev, -psize, 0, 0);
+
+- (void) refcount_remove_many(&dev->l2ad_alloc, psize, hdr);
++ (void) zfs_refcount_remove_many(&dev->l2ad_alloc, psize, hdr);
+ arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
+ }
+
+@@ -3018,7 +3024,7 @@ arc_hdr_destroy(arc_buf_hdr_t *hdr)
+ if (HDR_HAS_L1HDR(hdr)) {
+ ASSERT(hdr->b_l1hdr.b_buf == NULL ||
+ hdr->b_l1hdr.b_bufcnt > 0);
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
+ }
+ ASSERT(!HDR_IO_IN_PROGRESS(hdr));
+@@ -3171,7 +3177,7 @@ arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
+ return (bytes_evicted);
+ }
+
+- ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
++ ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt));
+ while (hdr->b_l1hdr.b_buf) {
+ arc_buf_t *buf = hdr->b_l1hdr.b_buf;
+ if (!mutex_tryenter(&buf->b_evict_lock)) {
+@@ -3484,7 +3490,7 @@ arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
+ {
+ uint64_t evicted = 0;
+
+- while (refcount_count(&state->arcs_esize[type]) != 0) {
++ while (zfs_refcount_count(&state->arcs_esize[type]) != 0) {
+ evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type);
+
+ if (!retry)
+@@ -3507,7 +3513,7 @@ arc_prune_task(void *ptr)
+ if (func != NULL)
+ func(ap->p_adjust, ap->p_private);
+
+- refcount_remove(&ap->p_refcnt, func);
++ zfs_refcount_remove(&ap->p_refcnt, func);
+ }
+
+ /*
+@@ -3530,14 +3536,14 @@ arc_prune_async(int64_t adjust)
+ for (ap = list_head(&arc_prune_list); ap != NULL;
+ ap = list_next(&arc_prune_list, ap)) {
+
+- if (refcount_count(&ap->p_refcnt) >= 2)
++ if (zfs_refcount_count(&ap->p_refcnt) >= 2)
+ continue;
+
+ zfs_refcount_add(&ap->p_refcnt, ap->p_pfunc);
+ ap->p_adjust = adjust;
+ if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
+ ap, TQ_SLEEP) == TASKQID_INVALID) {
+- refcount_remove(&ap->p_refcnt, ap->p_pfunc);
++ zfs_refcount_remove(&ap->p_refcnt, ap->p_pfunc);
+ continue;
+ }
+ ARCSTAT_BUMP(arcstat_prune);
+@@ -3559,8 +3565,9 @@ arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes,
+ {
+ int64_t delta;
+
+- if (bytes > 0 && refcount_count(&state->arcs_esize[type]) > 0) {
+- delta = MIN(refcount_count(&state->arcs_esize[type]), bytes);
++ if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) {
++ delta = MIN(zfs_refcount_count(&state->arcs_esize[type]),
++ bytes);
+ return (arc_evict_state(state, spa, delta, type));
+ }
+
+@@ -3603,8 +3610,9 @@ restart:
+ */
+ adjustmnt = arc_meta_used - arc_meta_limit;
+
+- if (adjustmnt > 0 && refcount_count(&arc_mru->arcs_esize[type]) > 0) {
+- delta = MIN(refcount_count(&arc_mru->arcs_esize[type]),
++ if (adjustmnt > 0 &&
++ zfs_refcount_count(&arc_mru->arcs_esize[type]) > 0) {
++ delta = MIN(zfs_refcount_count(&arc_mru->arcs_esize[type]),
+ adjustmnt);
+ total_evicted += arc_adjust_impl(arc_mru, 0, delta, type);
+ adjustmnt -= delta;
+@@ -3620,8 +3628,9 @@ restart:
+ * simply decrement the amount of data evicted from the MRU.
+ */
+
+- if (adjustmnt > 0 && refcount_count(&arc_mfu->arcs_esize[type]) > 0) {
+- delta = MIN(refcount_count(&arc_mfu->arcs_esize[type]),
++ if (adjustmnt > 0 &&
++ zfs_refcount_count(&arc_mfu->arcs_esize[type]) > 0) {
++ delta = MIN(zfs_refcount_count(&arc_mfu->arcs_esize[type]),
+ adjustmnt);
+ total_evicted += arc_adjust_impl(arc_mfu, 0, delta, type);
+ }
+@@ -3629,17 +3638,17 @@ restart:
+ adjustmnt = arc_meta_used - arc_meta_limit;
+
+ if (adjustmnt > 0 &&
+- refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) {
++ zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) {
+ delta = MIN(adjustmnt,
+- refcount_count(&arc_mru_ghost->arcs_esize[type]));
++ zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]));
+ total_evicted += arc_adjust_impl(arc_mru_ghost, 0, delta, type);
+ adjustmnt -= delta;
+ }
+
+ if (adjustmnt > 0 &&
+- refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) {
++ zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) {
+ delta = MIN(adjustmnt,
+- refcount_count(&arc_mfu_ghost->arcs_esize[type]));
++ zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]));
+ total_evicted += arc_adjust_impl(arc_mfu_ghost, 0, delta, type);
+ }
+
+@@ -3688,8 +3697,8 @@ arc_adjust_meta_only(void)
+ * evict some from the MRU here, and some from the MFU below.
+ */
+ target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
+- (int64_t)(refcount_count(&arc_anon->arcs_size) +
+- refcount_count(&arc_mru->arcs_size) - arc_p));
++ (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) +
++ zfs_refcount_count(&arc_mru->arcs_size) - arc_p));
+
+ total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
+
+@@ -3699,7 +3708,8 @@ arc_adjust_meta_only(void)
+ * space allotted to the MFU (which is defined as arc_c - arc_p).
+ */
+ target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
+- (int64_t)(refcount_count(&arc_mfu->arcs_size) - (arc_c - arc_p)));
++ (int64_t)(zfs_refcount_count(&arc_mfu->arcs_size) - (arc_c -
++ arc_p)));
+
+ total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
+
+@@ -3817,8 +3827,8 @@ arc_adjust(void)
+ * arc_p here, and then evict more from the MFU below.
+ */
+ target = MIN((int64_t)(arc_size - arc_c),
+- (int64_t)(refcount_count(&arc_anon->arcs_size) +
+- refcount_count(&arc_mru->arcs_size) + arc_meta_used - arc_p));
++ (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) +
++ zfs_refcount_count(&arc_mru->arcs_size) + arc_meta_used - arc_p));
+
+ /*
+ * If we're below arc_meta_min, always prefer to evict data.
+@@ -3902,8 +3912,8 @@ arc_adjust(void)
+ * cache. The following logic enforces these limits on the ghost
+ * caches, and evicts from them as needed.
+ */
+- target = refcount_count(&arc_mru->arcs_size) +
+- refcount_count(&arc_mru_ghost->arcs_size) - arc_c;
++ target = zfs_refcount_count(&arc_mru->arcs_size) +
++ zfs_refcount_count(&arc_mru_ghost->arcs_size) - arc_c;
+
+ bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA);
+ total_evicted += bytes;
+@@ -3921,8 +3931,8 @@ arc_adjust(void)
+ * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
+ * mru ghost + mfu ghost <= arc_c
+ */
+- target = refcount_count(&arc_mru_ghost->arcs_size) +
+- refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;
++ target = zfs_refcount_count(&arc_mru_ghost->arcs_size) +
++ zfs_refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;
+
+ bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA);
+ total_evicted += bytes;
+@@ -4422,10 +4432,10 @@ static uint64_t
+ arc_evictable_memory(void)
+ {
+ uint64_t arc_clean =
+- refcount_count(&arc_mru->arcs_esize[ARC_BUFC_DATA]) +
+- refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) +
+- refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_DATA]) +
+- refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_DATA]) +
++ zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) +
++ zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_DATA]) +
++ zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
+ uint64_t arc_dirty = MAX((int64_t)arc_size - (int64_t)arc_clean, 0);
+
+ /*
+@@ -4532,8 +4542,8 @@ arc_adapt(int bytes, arc_state_t *state)
+ {
+ int mult;
+ uint64_t arc_p_min = (arc_c >> arc_p_min_shift);
+- int64_t mrug_size = refcount_count(&arc_mru_ghost->arcs_size);
+- int64_t mfug_size = refcount_count(&arc_mfu_ghost->arcs_size);
++ int64_t mrug_size = zfs_refcount_count(&arc_mru_ghost->arcs_size);
++ int64_t mfug_size = zfs_refcount_count(&arc_mfu_ghost->arcs_size);
+
+ if (state == arc_l2c_only)
+ return;
+@@ -4698,7 +4708,7 @@ arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
+ */
+ if (!GHOST_STATE(state)) {
+
+- (void) refcount_add_many(&state->arcs_size, size, tag);
++ (void) zfs_refcount_add_many(&state->arcs_size, size, tag);
+
+ /*
+ * If this is reached via arc_read, the link is
+@@ -4710,8 +4720,8 @@ arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
+ * trying to [add|remove]_reference it.
+ */
+ if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+- (void) refcount_add_many(&state->arcs_esize[type],
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ (void) zfs_refcount_add_many(&state->arcs_esize[type],
+ size, tag);
+ }
+
+@@ -4720,8 +4730,8 @@ arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
+ * data, and we have outgrown arc_p, update arc_p
+ */
+ if (arc_size < arc_c && hdr->b_l1hdr.b_state == arc_anon &&
+- (refcount_count(&arc_anon->arcs_size) +
+- refcount_count(&arc_mru->arcs_size) > arc_p))
++ (zfs_refcount_count(&arc_anon->arcs_size) +
++ zfs_refcount_count(&arc_mru->arcs_size) > arc_p))
+ arc_p = MIN(arc_c, arc_p + size);
+ }
+ }
+@@ -4758,13 +4768,13 @@ arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
+
+ /* protected by hash lock, if in the hash table */
+ if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ ASSERT(state != arc_anon && state != arc_l2c_only);
+
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(&state->arcs_esize[type],
+ size, tag);
+ }
+- (void) refcount_remove_many(&state->arcs_size, size, tag);
++ (void) zfs_refcount_remove_many(&state->arcs_size, size, tag);
+
+ VERIFY3U(hdr->b_type, ==, type);
+ if (type == ARC_BUFC_METADATA) {
+@@ -4811,7 +4821,7 @@ arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
+ * another prefetch (to make it less likely to be evicted).
+ */
+ if (HDR_PREFETCH(hdr)) {
+- if (refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
++ if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
+ /* link protected by hash lock */
+ ASSERT(multilist_link_active(
+ &hdr->b_l1hdr.b_arc_node));
+@@ -4852,7 +4862,7 @@ arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
+
+ if (HDR_PREFETCH(hdr)) {
+ new_state = arc_mru;
+- if (refcount_count(&hdr->b_l1hdr.b_refcnt) > 0)
++ if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) > 0)
+ arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
+ DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
+ } else {
+@@ -4876,7 +4886,7 @@ arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
+ * the head of the list now.
+ */
+ if ((HDR_PREFETCH(hdr)) != 0) {
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ /* link protected by hash_lock */
+ ASSERT(multilist_link_active(&hdr->b_l1hdr.b_arc_node));
+ }
+@@ -4896,7 +4906,7 @@ arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
+ * This is a prefetch access...
+ * move this block back to the MRU state.
+ */
+- ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
++ ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt));
+ new_state = arc_mru;
+ }
+
+@@ -5098,7 +5108,7 @@ arc_read_done(zio_t *zio)
+ ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
+ }
+
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
+ callback_list != NULL);
+
+ if (no_zio_error) {
+@@ -5109,7 +5119,7 @@ arc_read_done(zio_t *zio)
+ arc_change_state(arc_anon, hdr, hash_lock);
+ if (HDR_IN_HASH_TABLE(hdr))
+ buf_hash_remove(hdr);
+- freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
++ freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
+ }
+
+ /*
+@@ -5129,7 +5139,7 @@ arc_read_done(zio_t *zio)
+ * in the cache).
+ */
+ ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
+- freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
++ freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
+ }
+
+ /* execute each callback and free its structure */
+@@ -5282,7 +5292,7 @@ top:
+ VERIFY0(arc_buf_alloc_impl(hdr, private,
+ compressed_read, B_TRUE, &buf));
+ } else if (*arc_flags & ARC_FLAG_PREFETCH &&
+- refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
++ zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
+ arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH);
+ }
+ DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
+@@ -5348,7 +5358,7 @@ top:
+ ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
+ ASSERT(GHOST_STATE(hdr->b_l1hdr.b_state));
+ ASSERT(!HDR_IO_IN_PROGRESS(hdr));
+- ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
+ ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
+
+@@ -5546,7 +5556,7 @@ arc_add_prune_callback(arc_prune_func_t *func, void *private)
+ p->p_pfunc = func;
+ p->p_private = private;
+ list_link_init(&p->p_node);
+- refcount_create(&p->p_refcnt);
++ zfs_refcount_create(&p->p_refcnt);
+
+ mutex_enter(&arc_prune_mtx);
+ zfs_refcount_add(&p->p_refcnt, &arc_prune_list);
+@@ -5562,15 +5572,15 @@ arc_remove_prune_callback(arc_prune_t *p)
+ boolean_t wait = B_FALSE;
+ mutex_enter(&arc_prune_mtx);
+ list_remove(&arc_prune_list, p);
+- if (refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
++ if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
+ wait = B_TRUE;
+ mutex_exit(&arc_prune_mtx);
+
+ /* wait for arc_prune_task to finish */
+ if (wait)
+ taskq_wait_outstanding(arc_prune_taskq, 0);
+- ASSERT0(refcount_count(&p->p_refcnt));
+- refcount_destroy(&p->p_refcnt);
++ ASSERT0(zfs_refcount_count(&p->p_refcnt));
++ zfs_refcount_destroy(&p->p_refcnt);
+ kmem_free(p, sizeof (*p));
+ }
+
+@@ -5613,7 +5623,7 @@ arc_freed(spa_t *spa, const blkptr_t *bp)
+ * this hdr, then we don't destroy the hdr.
+ */
+ if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) &&
+- refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) {
++ zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) {
+ arc_change_state(arc_anon, hdr, hash_lock);
+ arc_hdr_destroy(hdr);
+ mutex_exit(hash_lock);
+@@ -5659,7 +5669,7 @@ arc_release(arc_buf_t *buf, void *tag)
+ ASSERT(HDR_EMPTY(hdr));
+
+ ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
+- ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
++ ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
+ ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node));
+
+ hdr->b_l1hdr.b_arc_access = 0;
+@@ -5687,7 +5697,7 @@ arc_release(arc_buf_t *buf, void *tag)
+ ASSERT3P(state, !=, arc_anon);
+
+ /* this buffer is not on any list */
+- ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);
++ ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);
+
+ if (HDR_HAS_L2HDR(hdr)) {
+ mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);
+@@ -5778,12 +5788,13 @@ arc_release(arc_buf_t *buf, void *tag)
+ ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
+ ASSERT3P(state, !=, arc_l2c_only);
+
+- (void) refcount_remove_many(&state->arcs_size,
++ (void) zfs_refcount_remove_many(&state->arcs_size,
+ arc_buf_size(buf), buf);
+
+- if (refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
++ if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
+ ASSERT3P(state, !=, arc_l2c_only);
+- (void) refcount_remove_many(&state->arcs_esize[type],
++ (void) zfs_refcount_remove_many(
++ &state->arcs_esize[type],
+ arc_buf_size(buf), buf);
+ }
+
+@@ -5804,7 +5815,7 @@ arc_release(arc_buf_t *buf, void *tag)
+ nhdr = arc_hdr_alloc(spa, psize, lsize, compress, type);
+ ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL);
+ ASSERT0(nhdr->b_l1hdr.b_bufcnt);
+- ASSERT0(refcount_count(&nhdr->b_l1hdr.b_refcnt));
++ ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt));
+ VERIFY3U(nhdr->b_type, ==, type);
+ ASSERT(!HDR_SHARED_DATA(nhdr));
+
+@@ -5819,11 +5830,11 @@ arc_release(arc_buf_t *buf, void *tag)
+ buf->b_hdr = nhdr;
+
+ mutex_exit(&buf->b_evict_lock);
+- (void) refcount_add_many(&arc_anon->arcs_size,
++ (void) zfs_refcount_add_many(&arc_anon->arcs_size,
+ HDR_GET_LSIZE(nhdr), buf);
+ } else {
+ mutex_exit(&buf->b_evict_lock);
+- ASSERT(refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
++ ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
+ /* protected by hash lock, or hdr is on arc_anon */
+ ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
+ ASSERT(!HDR_IO_IN_PROGRESS(hdr));
+@@ -5860,7 +5871,7 @@ arc_referenced(arc_buf_t *buf)
+ int referenced;
+
+ mutex_enter(&buf->b_evict_lock);
+- referenced = (refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
++ referenced = (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
+ mutex_exit(&buf->b_evict_lock);
+ return (referenced);
+ }
+@@ -5877,7 +5888,7 @@ arc_write_ready(zio_t *zio)
+ fstrans_cookie_t cookie = spl_fstrans_mark();
+
+ ASSERT(HDR_HAS_L1HDR(hdr));
+- ASSERT(!refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
++ ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
+ ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
+
+ /*
+@@ -6029,7 +6040,7 @@ arc_write_done(zio_t *zio)
+ if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
+ panic("bad overwrite, hdr=%p exists=%p",
+ (void *)hdr, (void *)exists);
+- ASSERT(refcount_is_zero(
++ ASSERT(zfs_refcount_is_zero(
+ &exists->b_l1hdr.b_refcnt));
+ arc_change_state(arc_anon, exists, hash_lock);
+ mutex_exit(hash_lock);
+@@ -6059,7 +6070,7 @@ arc_write_done(zio_t *zio)
+ arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
+ }
+
+- ASSERT(!refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
++ ASSERT(!zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
+ callback->awcb_done(zio, buf, callback->awcb_private);
+
+ abd_put(zio->io_abd);
+@@ -6222,7 +6233,7 @@ arc_tempreserve_space(uint64_t reserve, uint64_t txg)
+ /* assert that it has not wrapped around */
+ ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
+
+- anon_size = MAX((int64_t)(refcount_count(&arc_anon->arcs_size) -
++ anon_size = MAX((int64_t)(zfs_refcount_count(&arc_anon->arcs_size) -
+ arc_loaned_bytes), 0);
+
+ /*
+@@ -6245,9 +6256,10 @@ arc_tempreserve_space(uint64_t reserve, uint64_t txg)
+ if (reserve + arc_tempreserve + anon_size > arc_c / 2 &&
+ anon_size > arc_c / 4) {
+ uint64_t meta_esize =
+- refcount_count(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_count(
++ &arc_anon->arcs_esize[ARC_BUFC_METADATA]);
+ uint64_t data_esize =
+- refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
+ dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
+ "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
+ arc_tempreserve >> 10, meta_esize >> 10,
+@@ -6263,11 +6275,11 @@ static void
+ arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
+ kstat_named_t *evict_data, kstat_named_t *evict_metadata)
+ {
+- size->value.ui64 = refcount_count(&state->arcs_size);
++ size->value.ui64 = zfs_refcount_count(&state->arcs_size);
+ evict_data->value.ui64 =
+- refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
+ evict_metadata->value.ui64 =
+- refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
+ }
+
+ static int
+@@ -6484,25 +6496,25 @@ arc_state_init(void)
+ offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
+ arc_state_multilist_index_func);
+
+- refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
+- refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
+- refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
+- refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
+- refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
+- refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
+-
+- refcount_create(&arc_anon->arcs_size);
+- refcount_create(&arc_mru->arcs_size);
+- refcount_create(&arc_mru_ghost->arcs_size);
+- refcount_create(&arc_mfu->arcs_size);
+- refcount_create(&arc_mfu_ghost->arcs_size);
+- refcount_create(&arc_l2c_only->arcs_size);
++ zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
++
++ zfs_refcount_create(&arc_anon->arcs_size);
++ zfs_refcount_create(&arc_mru->arcs_size);
++ zfs_refcount_create(&arc_mru_ghost->arcs_size);
++ zfs_refcount_create(&arc_mfu->arcs_size);
++ zfs_refcount_create(&arc_mfu_ghost->arcs_size);
++ zfs_refcount_create(&arc_l2c_only->arcs_size);
+
+ arc_anon->arcs_state = ARC_STATE_ANON;
+ arc_mru->arcs_state = ARC_STATE_MRU;
+@@ -6515,25 +6527,25 @@ arc_state_init(void)
+ static void
+ arc_state_fini(void)
+ {
+- refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
+- refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
+- refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
+- refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
+- refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
+- refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
+- refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
+-
+- refcount_destroy(&arc_anon->arcs_size);
+- refcount_destroy(&arc_mru->arcs_size);
+- refcount_destroy(&arc_mru_ghost->arcs_size);
+- refcount_destroy(&arc_mfu->arcs_size);
+- refcount_destroy(&arc_mfu_ghost->arcs_size);
+- refcount_destroy(&arc_l2c_only->arcs_size);
++ zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
++ zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
++ zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
++
++ zfs_refcount_destroy(&arc_anon->arcs_size);
++ zfs_refcount_destroy(&arc_mru->arcs_size);
++ zfs_refcount_destroy(&arc_mru_ghost->arcs_size);
++ zfs_refcount_destroy(&arc_mfu->arcs_size);
++ zfs_refcount_destroy(&arc_mfu_ghost->arcs_size);
++ zfs_refcount_destroy(&arc_l2c_only->arcs_size);
+
+ multilist_destroy(arc_mru->arcs_list[ARC_BUFC_METADATA]);
+ multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]);
+@@ -6704,8 +6716,8 @@ arc_fini(void)
+ mutex_enter(&arc_prune_mtx);
+ while ((p = list_head(&arc_prune_list)) != NULL) {
+ list_remove(&arc_prune_list, p);
+- refcount_remove(&p->p_refcnt, &arc_prune_list);
+- refcount_destroy(&p->p_refcnt);
++ zfs_refcount_remove(&p->p_refcnt, &arc_prune_list);
++ zfs_refcount_destroy(&p->p_refcnt);
+ kmem_free(p, sizeof (*p));
+ }
+ mutex_exit(&arc_prune_mtx);
+@@ -7108,7 +7120,7 @@ top:
+ ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr));
+
+ bytes_dropped += arc_hdr_size(hdr);
+- (void) refcount_remove_many(&dev->l2ad_alloc,
++ (void) zfs_refcount_remove_many(&dev->l2ad_alloc,
+ arc_hdr_size(hdr), hdr);
+ }
+
+@@ -7527,7 +7539,8 @@ l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz)
+ list_insert_head(&dev->l2ad_buflist, hdr);
+ mutex_exit(&dev->l2ad_mtx);
+
+- (void) refcount_add_many(&dev->l2ad_alloc, psize, hdr);
++ (void) zfs_refcount_add_many(&dev->l2ad_alloc, psize,
++ hdr);
+
+ /*
+ * Normally the L2ARC can use the hdr's data, but if
+@@ -7762,7 +7775,7 @@ l2arc_add_vdev(spa_t *spa, vdev_t *vd)
+ offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));
+
+ vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
+- refcount_create(&adddev->l2ad_alloc);
++ zfs_refcount_create(&adddev->l2ad_alloc);
+
+ /*
+ * Add device to global list
+@@ -7808,7 +7821,7 @@ l2arc_remove_vdev(vdev_t *vd)
+ l2arc_evict(remdev, 0, B_TRUE);
+ list_destroy(&remdev->l2ad_buflist);
+ mutex_destroy(&remdev->l2ad_mtx);
+- refcount_destroy(&remdev->l2ad_alloc);
++ zfs_refcount_destroy(&remdev->l2ad_alloc);
+ kmem_free(remdev, sizeof (l2arc_dev_t));
+ }
+
+diff --git a/module/zfs/dbuf.c b/module/zfs/dbuf.c
+index 5101c848..62b77bb0 100644
+--- a/module/zfs/dbuf.c
++++ b/module/zfs/dbuf.c
+@@ -165,7 +165,7 @@ dbuf_cons(void *vdb, void *unused, int kmflag)
+ mutex_init(&db->db_mtx, NULL, MUTEX_DEFAULT, NULL);
+ cv_init(&db->db_changed, NULL, CV_DEFAULT, NULL);
+ multilist_link_init(&db->db_cache_link);
+- refcount_create(&db->db_holds);
++ zfs_refcount_create(&db->db_holds);
+ multilist_link_init(&db->db_cache_link);
+
+ return (0);
+@@ -179,7 +179,7 @@ dbuf_dest(void *vdb, void *unused)
+ mutex_destroy(&db->db_mtx);
+ cv_destroy(&db->db_changed);
+ ASSERT(!multilist_link_active(&db->db_cache_link));
+- refcount_destroy(&db->db_holds);
++ zfs_refcount_destroy(&db->db_holds);
+ }
+
+ /*
+@@ -317,7 +317,7 @@ dbuf_hash_remove(dmu_buf_impl_t *db)
+ * We mustn't hold db_mtx to maintain lock ordering:
+ * DBUF_HASH_MUTEX > db_mtx.
+ */
+- ASSERT(refcount_is_zero(&db->db_holds));
++ ASSERT(zfs_refcount_is_zero(&db->db_holds));
+ ASSERT(db->db_state == DB_EVICTING);
+ ASSERT(!MUTEX_HELD(&db->db_mtx));
+
+@@ -354,7 +354,7 @@ dbuf_verify_user(dmu_buf_impl_t *db, dbvu_verify_type_t verify_type)
+ ASSERT(db->db.db_data != NULL);
+ ASSERT3U(db->db_state, ==, DB_CACHED);
+
+- holds = refcount_count(&db->db_holds);
++ holds = zfs_refcount_count(&db->db_holds);
+ if (verify_type == DBVU_EVICTING) {
+ /*
+ * Immediate eviction occurs when holds == dirtycnt.
+@@ -478,7 +478,7 @@ dbuf_cache_above_hiwater(void)
+ uint64_t dbuf_cache_hiwater_bytes =
+ (dbuf_cache_target * dbuf_cache_hiwater_pct) / 100;
+
+- return (refcount_count(&dbuf_cache_size) >
++ return (zfs_refcount_count(&dbuf_cache_size) >
+ dbuf_cache_target + dbuf_cache_hiwater_bytes);
+ }
+
+@@ -490,7 +490,7 @@ dbuf_cache_above_lowater(void)
+ uint64_t dbuf_cache_lowater_bytes =
+ (dbuf_cache_target * dbuf_cache_lowater_pct) / 100;
+
+- return (refcount_count(&dbuf_cache_size) >
++ return (zfs_refcount_count(&dbuf_cache_size) >
+ dbuf_cache_target - dbuf_cache_lowater_bytes);
+ }
+
+@@ -524,7 +524,7 @@ dbuf_evict_one(void)
+ if (db != NULL) {
+ multilist_sublist_remove(mls, db);
+ multilist_sublist_unlock(mls);
+- (void) refcount_remove_many(&dbuf_cache_size,
++ (void) zfs_refcount_remove_many(&dbuf_cache_size,
+ db->db.db_size, db);
+ dbuf_destroy(db);
+ } else {
+@@ -611,7 +611,7 @@ dbuf_evict_notify(void)
+ * because it's OK to occasionally make the wrong decision here,
+ * and grabbing the lock results in massive lock contention.
+ */
+- if (refcount_count(&dbuf_cache_size) > dbuf_cache_target_bytes()) {
++ if (zfs_refcount_count(&dbuf_cache_size) > dbuf_cache_target_bytes()) {
+ if (dbuf_cache_above_hiwater())
+ dbuf_evict_one();
+ cv_signal(&dbuf_evict_cv);
+@@ -679,7 +679,7 @@ retry:
+ dbuf_cache = multilist_create(sizeof (dmu_buf_impl_t),
+ offsetof(dmu_buf_impl_t, db_cache_link),
+ dbuf_cache_multilist_index_func);
+- refcount_create(&dbuf_cache_size);
++ zfs_refcount_create(&dbuf_cache_size);
+
+ tsd_create(&zfs_dbuf_evict_key, NULL);
+ dbuf_evict_thread_exit = B_FALSE;
+@@ -723,7 +723,7 @@ dbuf_fini(void)
+ mutex_destroy(&dbuf_evict_lock);
+ cv_destroy(&dbuf_evict_cv);
+
+- refcount_destroy(&dbuf_cache_size);
++ zfs_refcount_destroy(&dbuf_cache_size);
+ multilist_destroy(dbuf_cache);
+ }
+
+@@ -910,7 +910,7 @@ dbuf_loan_arcbuf(dmu_buf_impl_t *db)
+
+ ASSERT(db->db_blkid != DMU_BONUS_BLKID);
+ mutex_enter(&db->db_mtx);
+- if (arc_released(db->db_buf) || refcount_count(&db->db_holds) > 1) {
++ if (arc_released(db->db_buf) || zfs_refcount_count(&db->db_holds) > 1) {
+ int blksz = db->db.db_size;
+ spa_t *spa = db->db_objset->os_spa;
+
+@@ -983,7 +983,7 @@ dbuf_read_done(zio_t *zio, arc_buf_t *buf, void *vdb)
+ /*
+ * All reads are synchronous, so we must have a hold on the dbuf
+ */
+- ASSERT(refcount_count(&db->db_holds) > 0);
++ ASSERT(zfs_refcount_count(&db->db_holds) > 0);
+ ASSERT(db->db_buf == NULL);
+ ASSERT(db->db.db_data == NULL);
+ if (db->db_level == 0 && db->db_freed_in_flight) {
+@@ -1017,7 +1017,7 @@ dbuf_read_impl(dmu_buf_impl_t *db, zio_t *zio, uint32_t flags)
+
+ DB_DNODE_ENTER(db);
+ dn = DB_DNODE(db);
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+ /* We need the struct_rwlock to prevent db_blkptr from changing. */
+ ASSERT(RW_LOCK_HELD(&dn->dn_struct_rwlock));
+ ASSERT(MUTEX_HELD(&db->db_mtx));
+@@ -1150,7 +1150,7 @@ dbuf_fix_old_data(dmu_buf_impl_t *db, uint64_t txg)
+ dr->dt.dl.dr_data = kmem_alloc(bonuslen, KM_SLEEP);
+ arc_space_consume(bonuslen, ARC_SPACE_BONUS);
+ bcopy(db->db.db_data, dr->dt.dl.dr_data, bonuslen);
+- } else if (refcount_count(&db->db_holds) > db->db_dirtycnt) {
++ } else if (zfs_refcount_count(&db->db_holds) > db->db_dirtycnt) {
+ int size = arc_buf_size(db->db_buf);
+ arc_buf_contents_t type = DBUF_GET_BUFC_TYPE(db);
+ spa_t *spa = db->db_objset->os_spa;
+@@ -1182,7 +1182,7 @@ dbuf_read(dmu_buf_impl_t *db, zio_t *zio, uint32_t flags)
+ * We don't have to hold the mutex to check db_state because it
+ * can't be freed while we have a hold on the buffer.
+ */
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+
+ if (db->db_state == DB_NOFILL)
+ return (SET_ERROR(EIO));
+@@ -1277,7 +1277,7 @@ dbuf_read(dmu_buf_impl_t *db, zio_t *zio, uint32_t flags)
+ static void
+ dbuf_noread(dmu_buf_impl_t *db)
+ {
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+ ASSERT(db->db_blkid != DMU_BONUS_BLKID);
+ mutex_enter(&db->db_mtx);
+ while (db->db_state == DB_READ || db->db_state == DB_FILL)
+@@ -1397,7 +1397,7 @@ dbuf_free_range(dnode_t *dn, uint64_t start_blkid, uint64_t end_blkid,
+ mutex_exit(&db->db_mtx);
+ continue;
+ }
+- if (refcount_count(&db->db_holds) == 0) {
++ if (zfs_refcount_count(&db->db_holds) == 0) {
+ ASSERT(db->db_buf);
+ dbuf_destroy(db);
+ continue;
+@@ -1544,7 +1544,7 @@ dbuf_dirty(dmu_buf_impl_t *db, dmu_tx_t *tx)
+ int txgoff = tx->tx_txg & TXG_MASK;
+
+ ASSERT(tx->tx_txg != 0);
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+ DMU_TX_DIRTY_BUF(tx, db);
+
+ DB_DNODE_ENTER(db);
+@@ -1912,7 +1912,7 @@ dbuf_undirty(dmu_buf_impl_t *db, dmu_tx_t *tx)
+ ASSERT(db->db_dirtycnt > 0);
+ db->db_dirtycnt -= 1;
+
+- if (refcount_remove(&db->db_holds, (void *)(uintptr_t)txg) == 0) {
++ if (zfs_refcount_remove(&db->db_holds, (void *)(uintptr_t)txg) == 0) {
+ ASSERT(db->db_state == DB_NOFILL || arc_released(db->db_buf));
+ dbuf_destroy(db);
+ return (B_TRUE);
+@@ -1929,7 +1929,7 @@ dmu_buf_will_dirty(dmu_buf_t *db_fake, dmu_tx_t *tx)
+ dbuf_dirty_record_t *dr;
+
+ ASSERT(tx->tx_txg != 0);
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+
+ /*
+ * Quick check for dirtyness. For already dirty blocks, this
+@@ -1981,7 +1981,7 @@ dmu_buf_will_fill(dmu_buf_t *db_fake, dmu_tx_t *tx)
+ ASSERT(db->db_blkid != DMU_BONUS_BLKID);
+ ASSERT(tx->tx_txg != 0);
+ ASSERT(db->db_level == 0);
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+
+ ASSERT(db->db.db_object != DMU_META_DNODE_OBJECT ||
+ dmu_tx_private_ok(tx));
+@@ -2056,7 +2056,7 @@ dmu_buf_write_embedded(dmu_buf_t *dbuf, void *data,
+ void
+ dbuf_assign_arcbuf(dmu_buf_impl_t *db, arc_buf_t *buf, dmu_tx_t *tx)
+ {
+- ASSERT(!refcount_is_zero(&db->db_holds));
++ ASSERT(!zfs_refcount_is_zero(&db->db_holds));
+ ASSERT(db->db_blkid != DMU_BONUS_BLKID);
+ ASSERT(db->db_level == 0);
+ ASSERT3U(dbuf_is_metadata(db), ==, arc_is_metadata(buf));
+@@ -2075,7 +2075,7 @@ dbuf_assign_arcbuf(dmu_buf_impl_t *db, arc_buf_t *buf, dmu_tx_t *tx)
+ ASSERT(db->db_state == DB_CACHED || db->db_state == DB_UNCACHED);
+
+ if (db->db_state == DB_CACHED &&
+- refcount_count(&db->db_holds) - 1 > db->db_dirtycnt) {
++ zfs_refcount_count(&db->db_holds) - 1 > db->db_dirtycnt) {
+ mutex_exit(&db->db_mtx);
+ (void) dbuf_dirty(db, tx);
+ bcopy(buf->b_data, db->db.db_data, db->db.db_size);
+@@ -2120,7 +2120,7 @@ dbuf_destroy(dmu_buf_impl_t *db)
+ dmu_buf_impl_t *dndb;
+
+ ASSERT(MUTEX_HELD(&db->db_mtx));
+- ASSERT(refcount_is_zero(&db->db_holds));
++ ASSERT(zfs_refcount_is_zero(&db->db_holds));
+
+ if (db->db_buf != NULL) {
+ arc_buf_destroy(db->db_buf, db);
+@@ -2140,7 +2140,7 @@ dbuf_destroy(dmu_buf_impl_t *db)
+
+ if (multilist_link_active(&db->db_cache_link)) {
+ multilist_remove(dbuf_cache, db);
+- (void) refcount_remove_many(&dbuf_cache_size,
++ (void) zfs_refcount_remove_many(&dbuf_cache_size,
+ db->db.db_size, db);
+ }
+
+@@ -2186,7 +2186,7 @@ dbuf_destroy(dmu_buf_impl_t *db)
+ DB_DNODE_EXIT(db);
+ }
+
+- ASSERT(refcount_is_zero(&db->db_holds));
++ ASSERT(zfs_refcount_is_zero(&db->db_holds));
+
+ db->db_parent = NULL;
+
+@@ -2383,7 +2383,7 @@ dbuf_create(dnode_t *dn, uint8_t level, uint64_t blkid,
+ dbuf_add_ref(parent, db);
+
+ ASSERT(dn->dn_object == DMU_META_DNODE_OBJECT ||
+- refcount_count(&dn->dn_holds) > 0);
++ zfs_refcount_count(&dn->dn_holds) > 0);
+ (void) zfs_refcount_add(&dn->dn_holds, db);
+ atomic_inc_32(&dn->dn_dbufs_count);
+
+@@ -2744,9 +2744,9 @@ __dbuf_hold_impl(struct dbuf_hold_impl_data *dh)
+ }
+
+ if (multilist_link_active(&dh->dh_db->db_cache_link)) {
+- ASSERT(refcount_is_zero(&dh->dh_db->db_holds));
++ ASSERT(zfs_refcount_is_zero(&dh->dh_db->db_holds));
+ multilist_remove(dbuf_cache, dh->dh_db);
+- (void) refcount_remove_many(&dbuf_cache_size,
++ (void) zfs_refcount_remove_many(&dbuf_cache_size,
+ dh->dh_db->db.db_size, dh->dh_db);
+ }
+ (void) zfs_refcount_add(&dh->dh_db->db_holds, dh->dh_tag);
+@@ -2938,7 +2938,7 @@ dbuf_rele_and_unlock(dmu_buf_impl_t *db, void *tag)
+ * dnode so we can guarantee in dnode_move() that a referenced bonus
+ * buffer has a corresponding dnode hold.
+ */
+- holds = refcount_remove(&db->db_holds, tag);
++ holds = zfs_refcount_remove(&db->db_holds, tag);
+ ASSERT(holds >= 0);
+
+ /*
+@@ -3017,7 +3017,7 @@ dbuf_rele_and_unlock(dmu_buf_impl_t *db, void *tag)
+ dbuf_destroy(db);
+ } else if (!multilist_link_active(&db->db_cache_link)) {
+ multilist_insert(dbuf_cache, db);
+- (void) refcount_add_many(&dbuf_cache_size,
++ (void) zfs_refcount_add_many(&dbuf_cache_size,
+ db->db.db_size, db);
+ mutex_exit(&db->db_mtx);
+
+@@ -3037,7 +3037,7 @@ dbuf_rele_and_unlock(dmu_buf_impl_t *db, void *tag)
+ uint64_t
+ dbuf_refcount(dmu_buf_impl_t *db)
+ {
+- return (refcount_count(&db->db_holds));
++ return (zfs_refcount_count(&db->db_holds));
+ }
+
+ void *
+@@ -3340,7 +3340,7 @@ dbuf_sync_leaf(dbuf_dirty_record_t *dr, dmu_tx_t *tx)
+
+ if (db->db_state != DB_NOFILL &&
+ dn->dn_object != DMU_META_DNODE_OBJECT &&
+- refcount_count(&db->db_holds) > 1 &&
++ zfs_refcount_count(&db->db_holds) > 1 &&
+ dr->dt.dl.dr_override_state != DR_OVERRIDDEN &&
+ *datap == db->db_buf) {
+ /*
+diff --git a/module/zfs/dbuf_stats.c b/module/zfs/dbuf_stats.c
+index 1712c9c1..7afc9ddc 100644
+--- a/module/zfs/dbuf_stats.c
++++ b/module/zfs/dbuf_stats.c
+@@ -89,7 +89,7 @@ __dbuf_stats_hash_table_data(char *buf, size_t size, dmu_buf_impl_t *db)
+ (u_longlong_t)db->db.db_size,
+ !!dbuf_is_metadata(db),
+ db->db_state,
+- (ulong_t)refcount_count(&db->db_holds),
++ (ulong_t)zfs_refcount_count(&db->db_holds),
+ /* arc_buf_info_t */
+ abi.abi_state_type,
+ abi.abi_state_contents,
+@@ -113,7 +113,7 @@ __dbuf_stats_hash_table_data(char *buf, size_t size, dmu_buf_impl_t *db)
+ (ulong_t)doi.doi_metadata_block_size,
+ (u_longlong_t)doi.doi_bonus_size,
+ (ulong_t)doi.doi_indirection,
+- (ulong_t)refcount_count(&dn->dn_holds),
++ (ulong_t)zfs_refcount_count(&dn->dn_holds),
+ (u_longlong_t)doi.doi_fill_count,
+ (u_longlong_t)doi.doi_max_offset);
+
+diff --git a/module/zfs/dmu_tx.c b/module/zfs/dmu_tx.c
+index b1508ffa..135743e9 100644
+--- a/module/zfs/dmu_tx.c
++++ b/module/zfs/dmu_tx.c
+@@ -132,8 +132,8 @@ dmu_tx_hold_dnode_impl(dmu_tx_t *tx, dnode_t *dn, enum dmu_tx_hold_type type,
+ txh = kmem_zalloc(sizeof (dmu_tx_hold_t), KM_SLEEP);
+ txh->txh_tx = tx;
+ txh->txh_dnode = dn;
+- refcount_create(&txh->txh_space_towrite);
+- refcount_create(&txh->txh_memory_tohold);
++ zfs_refcount_create(&txh->txh_space_towrite);
++ zfs_refcount_create(&txh->txh_memory_tohold);
+ txh->txh_type = type;
+ txh->txh_arg1 = arg1;
+ txh->txh_arg2 = arg2;
+@@ -228,9 +228,9 @@ dmu_tx_count_write(dmu_tx_hold_t *txh, uint64_t off, uint64_t len)
+ if (len == 0)
+ return;
+
+- (void) refcount_add_many(&txh->txh_space_towrite, len, FTAG);
++ (void) zfs_refcount_add_many(&txh->txh_space_towrite, len, FTAG);
+
+- if (refcount_count(&txh->txh_space_towrite) > 2 * DMU_MAX_ACCESS)
++ if (zfs_refcount_count(&txh->txh_space_towrite) > 2 * DMU_MAX_ACCESS)
+ err = SET_ERROR(EFBIG);
+
+ if (dn == NULL)
+@@ -295,7 +295,8 @@ dmu_tx_count_write(dmu_tx_hold_t *txh, uint64_t off, uint64_t len)
+ static void
+ dmu_tx_count_dnode(dmu_tx_hold_t *txh)
+ {
+- (void) refcount_add_many(&txh->txh_space_towrite, DNODE_MIN_SIZE, FTAG);
++ (void) zfs_refcount_add_many(&txh->txh_space_towrite, DNODE_MIN_SIZE,
++ FTAG);
+ }
+
+ void
+@@ -418,7 +419,7 @@ dmu_tx_hold_free_impl(dmu_tx_hold_t *txh, uint64_t off, uint64_t len)
+ return;
+ }
+
+- (void) refcount_add_many(&txh->txh_memory_tohold,
++ (void) zfs_refcount_add_many(&txh->txh_memory_tohold,
+ 1 << dn->dn_indblkshift, FTAG);
+
+ err = dmu_tx_check_ioerr(zio, dn, 1, i);
+@@ -477,7 +478,7 @@ dmu_tx_hold_zap_impl(dmu_tx_hold_t *txh, const char *name)
+ * - 2 blocks for possibly split leaves,
+ * - 2 grown ptrtbl blocks
+ */
+- (void) refcount_add_many(&txh->txh_space_towrite,
++ (void) zfs_refcount_add_many(&txh->txh_space_towrite,
+ MZAP_MAX_BLKSZ, FTAG);
+
+ if (dn == NULL)
+@@ -568,7 +569,8 @@ dmu_tx_hold_space(dmu_tx_t *tx, uint64_t space)
+ txh = dmu_tx_hold_object_impl(tx, tx->tx_objset,
+ DMU_NEW_OBJECT, THT_SPACE, space, 0);
+ if (txh)
+- (void) refcount_add_many(&txh->txh_space_towrite, space, FTAG);
++ (void) zfs_refcount_add_many(&txh->txh_space_towrite, space,
++ FTAG);
+ }
+
+ #ifdef ZFS_DEBUG
+@@ -919,8 +921,8 @@ dmu_tx_try_assign(dmu_tx_t *tx, uint64_t txg_how)
+ (void) zfs_refcount_add(&dn->dn_tx_holds, tx);
+ mutex_exit(&dn->dn_mtx);
+ }
+- towrite += refcount_count(&txh->txh_space_towrite);
+- tohold += refcount_count(&txh->txh_memory_tohold);
++ towrite += zfs_refcount_count(&txh->txh_space_towrite);
++ tohold += zfs_refcount_count(&txh->txh_memory_tohold);
+ }
+
+ /* needed allocation: worst-case estimate of write space */
+@@ -962,7 +964,7 @@ dmu_tx_unassign(dmu_tx_t *tx)
+ mutex_enter(&dn->dn_mtx);
+ ASSERT3U(dn->dn_assigned_txg, ==, tx->tx_txg);
+
+- if (refcount_remove(&dn->dn_tx_holds, tx) == 0) {
++ if (zfs_refcount_remove(&dn->dn_tx_holds, tx) == 0) {
+ dn->dn_assigned_txg = 0;
+ cv_broadcast(&dn->dn_notxholds);
+ }
+@@ -1100,10 +1102,10 @@ dmu_tx_destroy(dmu_tx_t *tx)
+ dnode_t *dn = txh->txh_dnode;
+
+ list_remove(&tx->tx_holds, txh);
+- refcount_destroy_many(&txh->txh_space_towrite,
+- refcount_count(&txh->txh_space_towrite));
+- refcount_destroy_many(&txh->txh_memory_tohold,
+- refcount_count(&txh->txh_memory_tohold));
++ zfs_refcount_destroy_many(&txh->txh_space_towrite,
++ zfs_refcount_count(&txh->txh_space_towrite));
++ zfs_refcount_destroy_many(&txh->txh_memory_tohold,
++ zfs_refcount_count(&txh->txh_memory_tohold));
+ kmem_free(txh, sizeof (dmu_tx_hold_t));
+ if (dn != NULL)
+ dnode_rele(dn, tx);
+@@ -1135,7 +1137,7 @@ dmu_tx_commit(dmu_tx_t *tx)
+ mutex_enter(&dn->dn_mtx);
+ ASSERT3U(dn->dn_assigned_txg, ==, tx->tx_txg);
+
+- if (refcount_remove(&dn->dn_tx_holds, tx) == 0) {
++ if (zfs_refcount_remove(&dn->dn_tx_holds, tx) == 0) {
+ dn->dn_assigned_txg = 0;
+ cv_broadcast(&dn->dn_notxholds);
+ }
+@@ -1250,7 +1252,7 @@ dmu_tx_hold_spill(dmu_tx_t *tx, uint64_t object)
+ txh = dmu_tx_hold_object_impl(tx, tx->tx_objset, object,
+ THT_SPILL, 0, 0);
+ if (txh != NULL)
+- (void) refcount_add_many(&txh->txh_space_towrite,
++ (void) zfs_refcount_add_many(&txh->txh_space_towrite,
+ SPA_OLD_MAXBLOCKSIZE, FTAG);
+ }
+
+diff --git a/module/zfs/dnode.c b/module/zfs/dnode.c
+index 77d38c36..989a8ec7 100644
+--- a/module/zfs/dnode.c
++++ b/module/zfs/dnode.c
+@@ -124,8 +124,8 @@ dnode_cons(void *arg, void *unused, int kmflag)
+ * Every dbuf has a reference, and dropping a tracked reference is
+ * O(number of references), so don't track dn_holds.
+ */
+- refcount_create_untracked(&dn->dn_holds);
+- refcount_create(&dn->dn_tx_holds);
++ zfs_refcount_create_untracked(&dn->dn_holds);
++ zfs_refcount_create(&dn->dn_tx_holds);
+ list_link_init(&dn->dn_link);
+
+ bzero(&dn->dn_next_nblkptr[0], sizeof (dn->dn_next_nblkptr));
+@@ -180,8 +180,8 @@ dnode_dest(void *arg, void *unused)
+ mutex_destroy(&dn->dn_mtx);
+ mutex_destroy(&dn->dn_dbufs_mtx);
+ cv_destroy(&dn->dn_notxholds);
+- refcount_destroy(&dn->dn_holds);
+- refcount_destroy(&dn->dn_tx_holds);
++ zfs_refcount_destroy(&dn->dn_holds);
++ zfs_refcount_destroy(&dn->dn_tx_holds);
+ ASSERT(!list_link_active(&dn->dn_link));
+
+ for (i = 0; i < TXG_SIZE; i++) {
+@@ -377,7 +377,7 @@ dnode_buf_byteswap(void *vbuf, size_t size)
+ void
+ dnode_setbonuslen(dnode_t *dn, int newsize, dmu_tx_t *tx)
+ {
+- ASSERT3U(refcount_count(&dn->dn_holds), >=, 1);
++ ASSERT3U(zfs_refcount_count(&dn->dn_holds), >=, 1);
+
+ dnode_setdirty(dn, tx);
+ rw_enter(&dn->dn_struct_rwlock, RW_WRITER);
+@@ -394,7 +394,7 @@ dnode_setbonuslen(dnode_t *dn, int newsize, dmu_tx_t *tx)
+ void
+ dnode_setbonus_type(dnode_t *dn, dmu_object_type_t newtype, dmu_tx_t *tx)
+ {
+- ASSERT3U(refcount_count(&dn->dn_holds), >=, 1);
++ ASSERT3U(zfs_refcount_count(&dn->dn_holds), >=, 1);
+ dnode_setdirty(dn, tx);
+ rw_enter(&dn->dn_struct_rwlock, RW_WRITER);
+ dn->dn_bonustype = newtype;
+@@ -405,7 +405,7 @@ dnode_setbonus_type(dnode_t *dn, dmu_object_type_t newtype, dmu_tx_t *tx)
+ void
+ dnode_rm_spill(dnode_t *dn, dmu_tx_t *tx)
+ {
+- ASSERT3U(refcount_count(&dn->dn_holds), >=, 1);
++ ASSERT3U(zfs_refcount_count(&dn->dn_holds), >=, 1);
+ ASSERT(RW_WRITE_HELD(&dn->dn_struct_rwlock));
+ dnode_setdirty(dn, tx);
+ dn->dn_rm_spillblk[tx->tx_txg&TXG_MASK] = DN_KILL_SPILLBLK;
+@@ -596,8 +596,8 @@ dnode_allocate(dnode_t *dn, dmu_object_type_t ot, int blocksize, int ibs,
+ ASSERT0(dn->dn_allocated_txg);
+ ASSERT0(dn->dn_assigned_txg);
+ ASSERT0(dn->dn_dirty_txg);
+- ASSERT(refcount_is_zero(&dn->dn_tx_holds));
+- ASSERT3U(refcount_count(&dn->dn_holds), <=, 1);
++ ASSERT(zfs_refcount_is_zero(&dn->dn_tx_holds));
++ ASSERT3U(zfs_refcount_count(&dn->dn_holds), <=, 1);
+ ASSERT(avl_is_empty(&dn->dn_dbufs));
+
+ for (i = 0; i < TXG_SIZE; i++) {
+@@ -786,8 +786,8 @@ dnode_move_impl(dnode_t *odn, dnode_t *ndn)
+ ndn->dn_dirty_txg = odn->dn_dirty_txg;
+ ndn->dn_dirtyctx = odn->dn_dirtyctx;
+ ndn->dn_dirtyctx_firstset = odn->dn_dirtyctx_firstset;
+- ASSERT(refcount_count(&odn->dn_tx_holds) == 0);
+- refcount_transfer(&ndn->dn_holds, &odn->dn_holds);
++ ASSERT(zfs_refcount_count(&odn->dn_tx_holds) == 0);
++ zfs_refcount_transfer(&ndn->dn_holds, &odn->dn_holds);
+ ASSERT(avl_is_empty(&ndn->dn_dbufs));
+ avl_swap(&ndn->dn_dbufs, &odn->dn_dbufs);
+ ndn->dn_dbufs_count = odn->dn_dbufs_count;
+@@ -975,7 +975,7 @@ dnode_move(void *buf, void *newbuf, size_t size, void *arg)
+ * hold before the dbuf is removed, the hold is discounted, and the
+ * removal is blocked until the move completes.
+ */
+- refcount = refcount_count(&odn->dn_holds);
++ refcount = zfs_refcount_count(&odn->dn_holds);
+ ASSERT(refcount >= 0);
+ dbufs = odn->dn_dbufs_count;
+
+@@ -1003,7 +1003,7 @@ dnode_move(void *buf, void *newbuf, size_t size, void *arg)
+
+ list_link_replace(&odn->dn_link, &ndn->dn_link);
+ /* If the dnode was safe to move, the refcount cannot have changed. */
+- ASSERT(refcount == refcount_count(&ndn->dn_holds));
++ ASSERT(refcount == zfs_refcount_count(&ndn->dn_holds));
+ ASSERT(dbufs == ndn->dn_dbufs_count);
+ zrl_exit(&ndn->dn_handle->dnh_zrlock); /* handle has moved */
+ mutex_exit(&os->os_lock);
+@@ -1152,7 +1152,7 @@ dnode_special_close(dnode_handle_t *dnh)
+ * has a hold on this dnode while we are trying to evict this
+ * dnode.
+ */
+- while (refcount_count(&dn->dn_holds) > 0)
++ while (zfs_refcount_count(&dn->dn_holds) > 0)
+ delay(1);
+ ASSERT(dn->dn_dbuf == NULL ||
+ dmu_buf_get_user(&dn->dn_dbuf->db) == NULL);
+@@ -1207,8 +1207,8 @@ dnode_buf_evict_async(void *dbu)
+ * it wouldn't be eligible for eviction and this function
+ * would not have been called.
+ */
+- ASSERT(refcount_is_zero(&dn->dn_holds));
+- ASSERT(refcount_is_zero(&dn->dn_tx_holds));
++ ASSERT(zfs_refcount_is_zero(&dn->dn_holds));
++ ASSERT(zfs_refcount_is_zero(&dn->dn_tx_holds));
+
+ dnode_destroy(dn); /* implicit zrl_remove() for first slot */
+ zrl_destroy(&dnh->dnh_zrlock);
+@@ -1460,7 +1460,7 @@ dnode_hold_impl(objset_t *os, uint64_t object, int flag, int slots,
+ }
+
+ mutex_enter(&dn->dn_mtx);
+- if (!refcount_is_zero(&dn->dn_holds)) {
++ if (!zfs_refcount_is_zero(&dn->dn_holds)) {
+ DNODE_STAT_BUMP(dnode_hold_free_refcount);
+ mutex_exit(&dn->dn_mtx);
+ dnode_slots_rele(dnc, idx, slots);
+@@ -1520,7 +1520,7 @@ boolean_t
+ dnode_add_ref(dnode_t *dn, void *tag)
+ {
+ mutex_enter(&dn->dn_mtx);
+- if (refcount_is_zero(&dn->dn_holds)) {
++ if (zfs_refcount_is_zero(&dn->dn_holds)) {
+ mutex_exit(&dn->dn_mtx);
+ return (FALSE);
+ }
+@@ -1544,7 +1544,7 @@ dnode_rele_and_unlock(dnode_t *dn, void *tag)
+ dmu_buf_impl_t *db = dn->dn_dbuf;
+ dnode_handle_t *dnh = dn->dn_handle;
+
+- refs = refcount_remove(&dn->dn_holds, tag);
++ refs = zfs_refcount_remove(&dn->dn_holds, tag);
+ mutex_exit(&dn->dn_mtx);
+
+ /*
+@@ -1608,7 +1608,7 @@ dnode_setdirty(dnode_t *dn, dmu_tx_t *tx)
+ return;
+ }
+
+- ASSERT(!refcount_is_zero(&dn->dn_holds) ||
++ ASSERT(!zfs_refcount_is_zero(&dn->dn_holds) ||
+ !avl_is_empty(&dn->dn_dbufs));
+ ASSERT(dn->dn_datablksz != 0);
+ ASSERT0(dn->dn_next_bonuslen[txg&TXG_MASK]);
+diff --git a/module/zfs/dnode_sync.c b/module/zfs/dnode_sync.c
+index 8d65e385..2febb520 100644
+--- a/module/zfs/dnode_sync.c
++++ b/module/zfs/dnode_sync.c
+@@ -422,7 +422,7 @@ dnode_evict_dbufs(dnode_t *dn)
+
+ mutex_enter(&db->db_mtx);
+ if (db->db_state != DB_EVICTING &&
+- refcount_is_zero(&db->db_holds)) {
++ zfs_refcount_is_zero(&db->db_holds)) {
+ db_marker->db_level = db->db_level;
+ db_marker->db_blkid = db->db_blkid;
+ db_marker->db_state = DB_SEARCH;
+@@ -451,7 +451,7 @@ dnode_evict_bonus(dnode_t *dn)
+ {
+ rw_enter(&dn->dn_struct_rwlock, RW_WRITER);
+ if (dn->dn_bonus != NULL) {
+- if (refcount_is_zero(&dn->dn_bonus->db_holds)) {
++ if (zfs_refcount_is_zero(&dn->dn_bonus->db_holds)) {
+ mutex_enter(&dn->dn_bonus->db_mtx);
+ dbuf_destroy(dn->dn_bonus);
+ dn->dn_bonus = NULL;
+@@ -517,7 +517,7 @@ dnode_sync_free(dnode_t *dn, dmu_tx_t *tx)
+ * zfs_obj_to_path() also depends on this being
+ * commented out.
+ *
+- * ASSERT3U(refcount_count(&dn->dn_holds), ==, 1);
++ * ASSERT3U(zfs_refcount_count(&dn->dn_holds), ==, 1);
+ */
+
+ /* Undirty next bits */
+diff --git a/module/zfs/dsl_dataset.c b/module/zfs/dsl_dataset.c
+index b7562bcd..2e79c489 100644
+--- a/module/zfs/dsl_dataset.c
++++ b/module/zfs/dsl_dataset.c
+@@ -287,7 +287,7 @@ dsl_dataset_evict_async(void *dbu)
+ mutex_destroy(&ds->ds_lock);
+ mutex_destroy(&ds->ds_opening_lock);
+ mutex_destroy(&ds->ds_sendstream_lock);
+- refcount_destroy(&ds->ds_longholds);
++ zfs_refcount_destroy(&ds->ds_longholds);
+ rrw_destroy(&ds->ds_bp_rwlock);
+
+ kmem_free(ds, sizeof (dsl_dataset_t));
+@@ -422,7 +422,7 @@ dsl_dataset_hold_obj(dsl_pool_t *dp, uint64_t dsobj, void *tag,
+ mutex_init(&ds->ds_opening_lock, NULL, MUTEX_DEFAULT, NULL);
+ mutex_init(&ds->ds_sendstream_lock, NULL, MUTEX_DEFAULT, NULL);
+ rrw_init(&ds->ds_bp_rwlock, B_FALSE);
+- refcount_create(&ds->ds_longholds);
++ zfs_refcount_create(&ds->ds_longholds);
+
+ bplist_create(&ds->ds_pending_deadlist);
+ dsl_deadlist_open(&ds->ds_deadlist,
+@@ -458,7 +458,7 @@ dsl_dataset_hold_obj(dsl_pool_t *dp, uint64_t dsobj, void *tag,
+ mutex_destroy(&ds->ds_lock);
+ mutex_destroy(&ds->ds_opening_lock);
+ mutex_destroy(&ds->ds_sendstream_lock);
+- refcount_destroy(&ds->ds_longholds);
++ zfs_refcount_destroy(&ds->ds_longholds);
+ bplist_destroy(&ds->ds_pending_deadlist);
+ dsl_deadlist_close(&ds->ds_deadlist);
+ kmem_free(ds, sizeof (dsl_dataset_t));
+@@ -520,7 +520,7 @@ dsl_dataset_hold_obj(dsl_pool_t *dp, uint64_t dsobj, void *tag,
+ mutex_destroy(&ds->ds_lock);
+ mutex_destroy(&ds->ds_opening_lock);
+ mutex_destroy(&ds->ds_sendstream_lock);
+- refcount_destroy(&ds->ds_longholds);
++ zfs_refcount_destroy(&ds->ds_longholds);
+ kmem_free(ds, sizeof (dsl_dataset_t));
+ if (err != 0) {
+ dmu_buf_rele(dbuf, tag);
+@@ -651,14 +651,14 @@ dsl_dataset_long_hold(dsl_dataset_t *ds, void *tag)
+ void
+ dsl_dataset_long_rele(dsl_dataset_t *ds, void *tag)
+ {
+- (void) refcount_remove(&ds->ds_longholds, tag);
++ (void) zfs_refcount_remove(&ds->ds_longholds, tag);
+ }
+
+ /* Return B_TRUE if there are any long holds on this dataset. */
+ boolean_t
+ dsl_dataset_long_held(dsl_dataset_t *ds)
+ {
+- return (!refcount_is_zero(&ds->ds_longholds));
++ return (!zfs_refcount_is_zero(&ds->ds_longholds));
+ }
+
+ void
+diff --git a/module/zfs/dsl_destroy.c b/module/zfs/dsl_destroy.c
+index d980f7d1..946eb1d3 100644
+--- a/module/zfs/dsl_destroy.c
++++ b/module/zfs/dsl_destroy.c
+@@ -258,7 +258,7 @@ dsl_destroy_snapshot_sync_impl(dsl_dataset_t *ds, boolean_t defer, dmu_tx_t *tx)
+ rrw_enter(&ds->ds_bp_rwlock, RW_READER, FTAG);
+ ASSERT3U(dsl_dataset_phys(ds)->ds_bp.blk_birth, <=, tx->tx_txg);
+ rrw_exit(&ds->ds_bp_rwlock, FTAG);
+- ASSERT(refcount_is_zero(&ds->ds_longholds));
++ ASSERT(zfs_refcount_is_zero(&ds->ds_longholds));
+
+ if (defer &&
+ (ds->ds_userrefs > 0 ||
+@@ -619,7 +619,7 @@ dsl_destroy_head_check_impl(dsl_dataset_t *ds, int expected_holds)
+ if (ds->ds_is_snapshot)
+ return (SET_ERROR(EINVAL));
+
+- if (refcount_count(&ds->ds_longholds) != expected_holds)
++ if (zfs_refcount_count(&ds->ds_longholds) != expected_holds)
+ return (SET_ERROR(EBUSY));
+
+ mos = ds->ds_dir->dd_pool->dp_meta_objset;
+@@ -647,7 +647,7 @@ dsl_destroy_head_check_impl(dsl_dataset_t *ds, int expected_holds)
+ dsl_dataset_phys(ds->ds_prev)->ds_num_children == 2 &&
+ ds->ds_prev->ds_userrefs == 0) {
+ /* We need to remove the origin snapshot as well. */
+- if (!refcount_is_zero(&ds->ds_prev->ds_longholds))
++ if (!zfs_refcount_is_zero(&ds->ds_prev->ds_longholds))
+ return (SET_ERROR(EBUSY));
+ }
+ return (0);
+diff --git a/module/zfs/metaslab.c b/module/zfs/metaslab.c
+index 40658d51..2a5581c3 100644
+--- a/module/zfs/metaslab.c
++++ b/module/zfs/metaslab.c
+@@ -223,7 +223,7 @@ metaslab_class_create(spa_t *spa, metaslab_ops_t *ops)
+ mc->mc_rotor = NULL;
+ mc->mc_ops = ops;
+ mutex_init(&mc->mc_lock, NULL, MUTEX_DEFAULT, NULL);
+- refcount_create_tracked(&mc->mc_alloc_slots);
++ zfs_refcount_create_tracked(&mc->mc_alloc_slots);
+
+ return (mc);
+ }
+@@ -237,7 +237,7 @@ metaslab_class_destroy(metaslab_class_t *mc)
+ ASSERT(mc->mc_space == 0);
+ ASSERT(mc->mc_dspace == 0);
+
+- refcount_destroy(&mc->mc_alloc_slots);
++ zfs_refcount_destroy(&mc->mc_alloc_slots);
+ mutex_destroy(&mc->mc_lock);
+ kmem_free(mc, sizeof (metaslab_class_t));
+ }
+@@ -585,7 +585,7 @@ metaslab_group_create(metaslab_class_t *mc, vdev_t *vd)
+ mg->mg_activation_count = 0;
+ mg->mg_initialized = B_FALSE;
+ mg->mg_no_free_space = B_TRUE;
+- refcount_create_tracked(&mg->mg_alloc_queue_depth);
++ zfs_refcount_create_tracked(&mg->mg_alloc_queue_depth);
+
+ mg->mg_taskq = taskq_create("metaslab_group_taskq", metaslab_load_pct,
+ maxclsyspri, 10, INT_MAX, TASKQ_THREADS_CPU_PCT | TASKQ_DYNAMIC);
+@@ -608,7 +608,7 @@ metaslab_group_destroy(metaslab_group_t *mg)
+ taskq_destroy(mg->mg_taskq);
+ avl_destroy(&mg->mg_metaslab_tree);
+ mutex_destroy(&mg->mg_lock);
+- refcount_destroy(&mg->mg_alloc_queue_depth);
++ zfs_refcount_destroy(&mg->mg_alloc_queue_depth);
+ kmem_free(mg, sizeof (metaslab_group_t));
+ }
+
+@@ -907,7 +907,7 @@ metaslab_group_allocatable(metaslab_group_t *mg, metaslab_group_t *rotor,
+ if (mg->mg_no_free_space)
+ return (B_FALSE);
+
+- qdepth = refcount_count(&mg->mg_alloc_queue_depth);
++ qdepth = zfs_refcount_count(&mg->mg_alloc_queue_depth);
+
+ /*
+ * If this metaslab group is below its qmax or it's
+@@ -928,7 +928,7 @@ metaslab_group_allocatable(metaslab_group_t *mg, metaslab_group_t *rotor,
+ for (mgp = mg->mg_next; mgp != rotor; mgp = mgp->mg_next) {
+ qmax = mgp->mg_max_alloc_queue_depth;
+
+- qdepth = refcount_count(&mgp->mg_alloc_queue_depth);
++ qdepth = zfs_refcount_count(&mgp->mg_alloc_queue_depth);
+
+ /*
+ * If there is another metaslab group that
+@@ -2679,7 +2679,7 @@ metaslab_group_alloc_decrement(spa_t *spa, uint64_t vdev, void *tag, int flags)
+ if (!mg->mg_class->mc_alloc_throttle_enabled)
+ return;
+
+- (void) refcount_remove(&mg->mg_alloc_queue_depth, tag);
++ (void) zfs_refcount_remove(&mg->mg_alloc_queue_depth, tag);
+ }
+
+ void
+@@ -2693,7 +2693,7 @@ metaslab_group_alloc_verify(spa_t *spa, const blkptr_t *bp, void *tag)
+ for (d = 0; d < ndvas; d++) {
+ uint64_t vdev = DVA_GET_VDEV(&dva[d]);
+ metaslab_group_t *mg = vdev_lookup_top(spa, vdev)->vdev_mg;
+- VERIFY(refcount_not_held(&mg->mg_alloc_queue_depth, tag));
++ VERIFY(zfs_refcount_not_held(&mg->mg_alloc_queue_depth, tag));
+ }
+ #endif
+ }
+@@ -3348,7 +3348,7 @@ metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, zio_t *zio,
+ ASSERT(mc->mc_alloc_throttle_enabled);
+ mutex_enter(&mc->mc_lock);
+
+- reserved_slots = refcount_count(&mc->mc_alloc_slots);
++ reserved_slots = zfs_refcount_count(&mc->mc_alloc_slots);
+ if (reserved_slots < mc->mc_alloc_max_slots)
+ available_slots = mc->mc_alloc_max_slots - reserved_slots;
+
+@@ -3360,7 +3360,8 @@ metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, zio_t *zio,
+ * them individually when an I/O completes.
+ */
+ for (d = 0; d < slots; d++) {
+- reserved_slots = zfs_refcount_add(&mc->mc_alloc_slots, zio);
++ reserved_slots = zfs_refcount_add(&mc->mc_alloc_slots,
++ zio);
+ }
+ zio->io_flags |= ZIO_FLAG_IO_ALLOCATING;
+ slot_reserved = B_TRUE;
+@@ -3378,7 +3379,7 @@ metaslab_class_throttle_unreserve(metaslab_class_t *mc, int slots, zio_t *zio)
+ ASSERT(mc->mc_alloc_throttle_enabled);
+ mutex_enter(&mc->mc_lock);
+ for (d = 0; d < slots; d++) {
+- (void) refcount_remove(&mc->mc_alloc_slots, zio);
++ (void) zfs_refcount_remove(&mc->mc_alloc_slots, zio);
+ }
+ mutex_exit(&mc->mc_lock);
+ }
+diff --git a/module/zfs/refcount.c b/module/zfs/refcount.c
+index 13f9bb6b..0a93aafb 100644
+--- a/module/zfs/refcount.c
++++ b/module/zfs/refcount.c
+@@ -38,7 +38,7 @@ static kmem_cache_t *reference_cache;
+ static kmem_cache_t *reference_history_cache;
+
+ void
+-refcount_init(void)
++zfs_refcount_init(void)
+ {
+ reference_cache = kmem_cache_create("reference_cache",
+ sizeof (reference_t), 0, NULL, NULL, NULL, NULL, NULL, 0);
+@@ -48,14 +48,14 @@ refcount_init(void)
+ }
+
+ void
+-refcount_fini(void)
++zfs_refcount_fini(void)
+ {
+ kmem_cache_destroy(reference_cache);
+ kmem_cache_destroy(reference_history_cache);
+ }
+
+ void
+-refcount_create(zfs_refcount_t *rc)
++zfs_refcount_create(zfs_refcount_t *rc)
+ {
+ mutex_init(&rc->rc_mtx, NULL, MUTEX_DEFAULT, NULL);
+ list_create(&rc->rc_list, sizeof (reference_t),
+@@ -68,21 +68,21 @@ refcount_create(zfs_refcount_t *rc)
+ }
+
+ void
+-refcount_create_tracked(zfs_refcount_t *rc)
++zfs_refcount_create_tracked(zfs_refcount_t *rc)
+ {
+- refcount_create(rc);
++ zfs_refcount_create(rc);
+ rc->rc_tracked = B_TRUE;
+ }
+
+ void
+-refcount_create_untracked(zfs_refcount_t *rc)
++zfs_refcount_create_untracked(zfs_refcount_t *rc)
+ {
+- refcount_create(rc);
++ zfs_refcount_create(rc);
+ rc->rc_tracked = B_FALSE;
+ }
+
+ void
+-refcount_destroy_many(zfs_refcount_t *rc, uint64_t number)
++zfs_refcount_destroy_many(zfs_refcount_t *rc, uint64_t number)
+ {
+ reference_t *ref;
+
+@@ -103,25 +103,25 @@ refcount_destroy_many(zfs_refcount_t *rc, uint64_t number)
+ }
+
+ void
+-refcount_destroy(zfs_refcount_t *rc)
++zfs_refcount_destroy(zfs_refcount_t *rc)
+ {
+- refcount_destroy_many(rc, 0);
++ zfs_refcount_destroy_many(rc, 0);
+ }
+
+ int
+-refcount_is_zero(zfs_refcount_t *rc)
++zfs_refcount_is_zero(zfs_refcount_t *rc)
+ {
+ return (rc->rc_count == 0);
+ }
+
+ int64_t
+-refcount_count(zfs_refcount_t *rc)
++zfs_refcount_count(zfs_refcount_t *rc)
+ {
+ return (rc->rc_count);
+ }
+
+ int64_t
+-refcount_add_many(zfs_refcount_t *rc, uint64_t number, void *holder)
++zfs_refcount_add_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ {
+ reference_t *ref = NULL;
+ int64_t count;
+@@ -145,11 +145,11 @@ refcount_add_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ int64_t
+ zfs_refcount_add(zfs_refcount_t *rc, void *holder)
+ {
+- return (refcount_add_many(rc, 1, holder));
++ return (zfs_refcount_add_many(rc, 1, holder));
+ }
+
+ int64_t
+-refcount_remove_many(zfs_refcount_t *rc, uint64_t number, void *holder)
++zfs_refcount_remove_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ {
+ reference_t *ref;
+ int64_t count;
+@@ -197,13 +197,13 @@ refcount_remove_many(zfs_refcount_t *rc, uint64_t number, void *holder)
+ }
+
+ int64_t
+-refcount_remove(zfs_refcount_t *rc, void *holder)
++zfs_refcount_remove(zfs_refcount_t *rc, void *holder)
+ {
+- return (refcount_remove_many(rc, 1, holder));
++ return (zfs_refcount_remove_many(rc, 1, holder));
+ }
+
+ void
+-refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src)
++zfs_refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src)
+ {
+ int64_t count, removed_count;
+ list_t list, removed;
+@@ -234,7 +234,7 @@ refcount_transfer(zfs_refcount_t *dst, zfs_refcount_t *src)
+ }
+
+ void
+-refcount_transfer_ownership(zfs_refcount_t *rc, void *current_holder,
++zfs_refcount_transfer_ownership(zfs_refcount_t *rc, void *current_holder,
+ void *new_holder)
+ {
+ reference_t *ref;
+@@ -264,7 +264,7 @@ refcount_transfer_ownership(zfs_refcount_t *rc, void *current_holder,
+ * might be held.
+ */
+ boolean_t
+-refcount_held(zfs_refcount_t *rc, void *holder)
++zfs_refcount_held(zfs_refcount_t *rc, void *holder)
+ {
+ reference_t *ref;
+
+@@ -292,7 +292,7 @@ refcount_held(zfs_refcount_t *rc, void *holder)
+ * since the reference might not be held.
+ */
+ boolean_t
+-refcount_not_held(zfs_refcount_t *rc, void *holder)
++zfs_refcount_not_held(zfs_refcount_t *rc, void *holder)
+ {
+ reference_t *ref;
+
+diff --git a/module/zfs/rrwlock.c b/module/zfs/rrwlock.c
+index effff330..582b40a5 100644
+--- a/module/zfs/rrwlock.c
++++ b/module/zfs/rrwlock.c
+@@ -85,7 +85,7 @@ rrn_find(rrwlock_t *rrl)
+ {
+ rrw_node_t *rn;
+
+- if (refcount_count(&rrl->rr_linked_rcount) == 0)
++ if (zfs_refcount_count(&rrl->rr_linked_rcount) == 0)
+ return (NULL);
+
+ for (rn = tsd_get(rrw_tsd_key); rn != NULL; rn = rn->rn_next) {
+@@ -120,7 +120,7 @@ rrn_find_and_remove(rrwlock_t *rrl, void *tag)
+ rrw_node_t *rn;
+ rrw_node_t *prev = NULL;
+
+- if (refcount_count(&rrl->rr_linked_rcount) == 0)
++ if (zfs_refcount_count(&rrl->rr_linked_rcount) == 0)
+ return (B_FALSE);
+
+ for (rn = tsd_get(rrw_tsd_key); rn != NULL; rn = rn->rn_next) {
+@@ -143,8 +143,8 @@ rrw_init(rrwlock_t *rrl, boolean_t track_all)
+ mutex_init(&rrl->rr_lock, NULL, MUTEX_DEFAULT, NULL);
+ cv_init(&rrl->rr_cv, NULL, CV_DEFAULT, NULL);
+ rrl->rr_writer = NULL;
+- refcount_create(&rrl->rr_anon_rcount);
+- refcount_create(&rrl->rr_linked_rcount);
++ zfs_refcount_create(&rrl->rr_anon_rcount);
++ zfs_refcount_create(&rrl->rr_linked_rcount);
+ rrl->rr_writer_wanted = B_FALSE;
+ rrl->rr_track_all = track_all;
+ }
+@@ -155,8 +155,8 @@ rrw_destroy(rrwlock_t *rrl)
+ mutex_destroy(&rrl->rr_lock);
+ cv_destroy(&rrl->rr_cv);
+ ASSERT(rrl->rr_writer == NULL);
+- refcount_destroy(&rrl->rr_anon_rcount);
+- refcount_destroy(&rrl->rr_linked_rcount);
++ zfs_refcount_destroy(&rrl->rr_anon_rcount);
++ zfs_refcount_destroy(&rrl->rr_linked_rcount);
+ }
+
+ static void
+@@ -173,10 +173,10 @@ rrw_enter_read_impl(rrwlock_t *rrl, boolean_t prio, void *tag)
+ DTRACE_PROBE(zfs__rrwfastpath__rdmiss);
+ #endif
+ ASSERT(rrl->rr_writer != curthread);
+- ASSERT(refcount_count(&rrl->rr_anon_rcount) >= 0);
++ ASSERT(zfs_refcount_count(&rrl->rr_anon_rcount) >= 0);
+
+ while (rrl->rr_writer != NULL || (rrl->rr_writer_wanted &&
+- refcount_is_zero(&rrl->rr_anon_rcount) && !prio &&
++ zfs_refcount_is_zero(&rrl->rr_anon_rcount) && !prio &&
+ rrn_find(rrl) == NULL))
+ cv_wait(&rrl->rr_cv, &rrl->rr_lock);
+
+@@ -216,8 +216,8 @@ rrw_enter_write(rrwlock_t *rrl)
+ mutex_enter(&rrl->rr_lock);
+ ASSERT(rrl->rr_writer != curthread);
+
+- while (refcount_count(&rrl->rr_anon_rcount) > 0 ||
+- refcount_count(&rrl->rr_linked_rcount) > 0 ||
++ while (zfs_refcount_count(&rrl->rr_anon_rcount) > 0 ||
++ zfs_refcount_count(&rrl->rr_linked_rcount) > 0 ||
+ rrl->rr_writer != NULL) {
+ rrl->rr_writer_wanted = B_TRUE;
+ cv_wait(&rrl->rr_cv, &rrl->rr_lock);
+@@ -250,24 +250,25 @@ rrw_exit(rrwlock_t *rrl, void *tag)
+ }
+ DTRACE_PROBE(zfs__rrwfastpath__exitmiss);
+ #endif
+- ASSERT(!refcount_is_zero(&rrl->rr_anon_rcount) ||
+- !refcount_is_zero(&rrl->rr_linked_rcount) ||
++ ASSERT(!zfs_refcount_is_zero(&rrl->rr_anon_rcount) ||
++ !zfs_refcount_is_zero(&rrl->rr_linked_rcount) ||
+ rrl->rr_writer != NULL);
+
+ if (rrl->rr_writer == NULL) {
+ int64_t count;
+ if (rrn_find_and_remove(rrl, tag)) {
+- count = refcount_remove(&rrl->rr_linked_rcount, tag);
++ count = zfs_refcount_remove(
++ &rrl->rr_linked_rcount, tag);
+ } else {
+ ASSERT(!rrl->rr_track_all);
+- count = refcount_remove(&rrl->rr_anon_rcount, tag);
++ count = zfs_refcount_remove(&rrl->rr_anon_rcount, tag);
+ }
+ if (count == 0)
+ cv_broadcast(&rrl->rr_cv);
+ } else {
+ ASSERT(rrl->rr_writer == curthread);
+- ASSERT(refcount_is_zero(&rrl->rr_anon_rcount) &&
+- refcount_is_zero(&rrl->rr_linked_rcount));
++ ASSERT(zfs_refcount_is_zero(&rrl->rr_anon_rcount) &&
++ zfs_refcount_is_zero(&rrl->rr_linked_rcount));
+ rrl->rr_writer = NULL;
+ cv_broadcast(&rrl->rr_cv);
+ }
+@@ -288,7 +289,7 @@ rrw_held(rrwlock_t *rrl, krw_t rw)
+ if (rw == RW_WRITER) {
+ held = (rrl->rr_writer == curthread);
+ } else {
+- held = (!refcount_is_zero(&rrl->rr_anon_rcount) ||
++ held = (!zfs_refcount_is_zero(&rrl->rr_anon_rcount) ||
+ rrn_find(rrl) != NULL);
+ }
+ mutex_exit(&rrl->rr_lock);
+diff --git a/module/zfs/sa.c b/module/zfs/sa.c
+index df4f6fd8..08f6165d 100644
+--- a/module/zfs/sa.c
++++ b/module/zfs/sa.c
+@@ -1132,7 +1132,7 @@ sa_tear_down(objset_t *os)
+ avl_destroy_nodes(&sa->sa_layout_hash_tree, &cookie))) {
+ sa_idx_tab_t *tab;
+ while ((tab = list_head(&layout->lot_idx_tab))) {
+- ASSERT(refcount_count(&tab->sa_refcount));
++ ASSERT(zfs_refcount_count(&tab->sa_refcount));
+ sa_idx_tab_rele(os, tab);
+ }
+ }
+@@ -1317,13 +1317,13 @@ sa_idx_tab_rele(objset_t *os, void *arg)
+ return;
+
+ mutex_enter(&sa->sa_lock);
+- if (refcount_remove(&idx_tab->sa_refcount, NULL) == 0) {
++ if (zfs_refcount_remove(&idx_tab->sa_refcount, NULL) == 0) {
+ list_remove(&idx_tab->sa_layout->lot_idx_tab, idx_tab);
+ if (idx_tab->sa_variable_lengths)
+ kmem_free(idx_tab->sa_variable_lengths,
+ sizeof (uint16_t) *
+ idx_tab->sa_layout->lot_var_sizes);
+- refcount_destroy(&idx_tab->sa_refcount);
++ zfs_refcount_destroy(&idx_tab->sa_refcount);
+ kmem_free(idx_tab->sa_idx_tab,
+ sizeof (uint32_t) * sa->sa_num_attrs);
+ kmem_free(idx_tab, sizeof (sa_idx_tab_t));
+@@ -1560,7 +1560,7 @@ sa_find_idx_tab(objset_t *os, dmu_object_type_t bonustype, sa_hdr_phys_t *hdr)
+ idx_tab->sa_idx_tab =
+ kmem_zalloc(sizeof (uint32_t) * sa->sa_num_attrs, KM_SLEEP);
+ idx_tab->sa_layout = tb;
+- refcount_create(&idx_tab->sa_refcount);
++ zfs_refcount_create(&idx_tab->sa_refcount);
+ if (tb->lot_var_sizes)
+ idx_tab->sa_variable_lengths = kmem_alloc(sizeof (uint16_t) *
+ tb->lot_var_sizes, KM_SLEEP);
+diff --git a/module/zfs/spa.c b/module/zfs/spa.c
+index 02dda927..5002b3cb 100644
+--- a/module/zfs/spa.c
++++ b/module/zfs/spa.c
+@@ -2302,7 +2302,7 @@ spa_load(spa_t *spa, spa_load_state_t state, spa_import_type_t type,
+ * and are making their way through the eviction process.
+ */
+ spa_evicting_os_wait(spa);
+- spa->spa_minref = refcount_count(&spa->spa_refcount);
++ spa->spa_minref = zfs_refcount_count(&spa->spa_refcount);
+ if (error) {
+ if (error != EEXIST) {
+ spa->spa_loaded_ts.tv_sec = 0;
+@@ -4260,7 +4260,7 @@ spa_create(const char *pool, nvlist_t *nvroot, nvlist_t *props,
+ * and are making their way through the eviction process.
+ */
+ spa_evicting_os_wait(spa);
+- spa->spa_minref = refcount_count(&spa->spa_refcount);
++ spa->spa_minref = zfs_refcount_count(&spa->spa_refcount);
+ spa->spa_load_state = SPA_LOAD_NONE;
+
+ mutex_exit(&spa_namespace_lock);
+@@ -6852,12 +6852,12 @@ spa_sync(spa_t *spa, uint64_t txg)
+ * allocations look at mg_max_alloc_queue_depth, and async
+ * allocations all happen from spa_sync().
+ */
+- ASSERT0(refcount_count(&mg->mg_alloc_queue_depth));
++ ASSERT0(zfs_refcount_count(&mg->mg_alloc_queue_depth));
+ mg->mg_max_alloc_queue_depth = max_queue_depth;
+ queue_depth_total += mg->mg_max_alloc_queue_depth;
+ }
+ mc = spa_normal_class(spa);
+- ASSERT0(refcount_count(&mc->mc_alloc_slots));
++ ASSERT0(zfs_refcount_count(&mc->mc_alloc_slots));
+ mc->mc_alloc_max_slots = queue_depth_total;
+ mc->mc_alloc_throttle_enabled = zio_dva_throttle_enabled;
+
+diff --git a/module/zfs/spa_misc.c b/module/zfs/spa_misc.c
+index f6c9b40b..6514813e 100644
+--- a/module/zfs/spa_misc.c
++++ b/module/zfs/spa_misc.c
+@@ -366,7 +366,7 @@ spa_config_lock_init(spa_t *spa)
+ spa_config_lock_t *scl = &spa->spa_config_lock[i];
+ mutex_init(&scl->scl_lock, NULL, MUTEX_DEFAULT, NULL);
+ cv_init(&scl->scl_cv, NULL, CV_DEFAULT, NULL);
+- refcount_create_untracked(&scl->scl_count);
++ zfs_refcount_create_untracked(&scl->scl_count);
+ scl->scl_writer = NULL;
+ scl->scl_write_wanted = 0;
+ }
+@@ -381,7 +381,7 @@ spa_config_lock_destroy(spa_t *spa)
+ spa_config_lock_t *scl = &spa->spa_config_lock[i];
+ mutex_destroy(&scl->scl_lock);
+ cv_destroy(&scl->scl_cv);
+- refcount_destroy(&scl->scl_count);
++ zfs_refcount_destroy(&scl->scl_count);
+ ASSERT(scl->scl_writer == NULL);
+ ASSERT(scl->scl_write_wanted == 0);
+ }
+@@ -406,7 +406,7 @@ spa_config_tryenter(spa_t *spa, int locks, void *tag, krw_t rw)
+ }
+ } else {
+ ASSERT(scl->scl_writer != curthread);
+- if (!refcount_is_zero(&scl->scl_count)) {
++ if (!zfs_refcount_is_zero(&scl->scl_count)) {
+ mutex_exit(&scl->scl_lock);
+ spa_config_exit(spa, locks & ((1 << i) - 1),
+ tag);
+@@ -441,7 +441,7 @@ spa_config_enter(spa_t *spa, int locks, void *tag, krw_t rw)
+ }
+ } else {
+ ASSERT(scl->scl_writer != curthread);
+- while (!refcount_is_zero(&scl->scl_count)) {
++ while (!zfs_refcount_is_zero(&scl->scl_count)) {
+ scl->scl_write_wanted++;
+ cv_wait(&scl->scl_cv, &scl->scl_lock);
+ scl->scl_write_wanted--;
+@@ -464,8 +464,8 @@ spa_config_exit(spa_t *spa, int locks, void *tag)
+ if (!(locks & (1 << i)))
+ continue;
+ mutex_enter(&scl->scl_lock);
+- ASSERT(!refcount_is_zero(&scl->scl_count));
+- if (refcount_remove(&scl->scl_count, tag) == 0) {
++ ASSERT(!zfs_refcount_is_zero(&scl->scl_count));
++ if (zfs_refcount_remove(&scl->scl_count, tag) == 0) {
+ ASSERT(scl->scl_writer == NULL ||
+ scl->scl_writer == curthread);
+ scl->scl_writer = NULL; /* OK in either case */
+@@ -484,7 +484,8 @@ spa_config_held(spa_t *spa, int locks, krw_t rw)
+ spa_config_lock_t *scl = &spa->spa_config_lock[i];
+ if (!(locks & (1 << i)))
+ continue;
+- if ((rw == RW_READER && !refcount_is_zero(&scl->scl_count)) ||
++ if ((rw == RW_READER &&
++ !zfs_refcount_is_zero(&scl->scl_count)) ||
+ (rw == RW_WRITER && scl->scl_writer == curthread))
+ locks_held |= 1 << i;
+ }
+@@ -602,7 +603,7 @@ spa_add(const char *name, nvlist_t *config, const char *altroot)
+
+ spa->spa_deadman_synctime = MSEC2NSEC(zfs_deadman_synctime_ms);
+
+- refcount_create(&spa->spa_refcount);
++ zfs_refcount_create(&spa->spa_refcount);
+ spa_config_lock_init(spa);
+ spa_stats_init(spa);
+
+@@ -680,7 +681,7 @@ spa_remove(spa_t *spa)
+
+ ASSERT(MUTEX_HELD(&spa_namespace_lock));
+ ASSERT(spa->spa_state == POOL_STATE_UNINITIALIZED);
+- ASSERT3U(refcount_count(&spa->spa_refcount), ==, 0);
++ ASSERT3U(zfs_refcount_count(&spa->spa_refcount), ==, 0);
+
+ nvlist_free(spa->spa_config_splitting);
+
+@@ -705,7 +706,7 @@ spa_remove(spa_t *spa)
+ nvlist_free(spa->spa_feat_stats);
+ spa_config_set(spa, NULL);
+
+- refcount_destroy(&spa->spa_refcount);
++ zfs_refcount_destroy(&spa->spa_refcount);
+
+ spa_stats_destroy(spa);
+ spa_config_lock_destroy(spa);
+@@ -766,7 +767,7 @@ spa_next(spa_t *prev)
+ void
+ spa_open_ref(spa_t *spa, void *tag)
+ {
+- ASSERT(refcount_count(&spa->spa_refcount) >= spa->spa_minref ||
++ ASSERT(zfs_refcount_count(&spa->spa_refcount) >= spa->spa_minref ||
+ MUTEX_HELD(&spa_namespace_lock));
+ (void) zfs_refcount_add(&spa->spa_refcount, tag);
+ }
+@@ -778,9 +779,9 @@ spa_open_ref(spa_t *spa, void *tag)
+ void
+ spa_close(spa_t *spa, void *tag)
+ {
+- ASSERT(refcount_count(&spa->spa_refcount) > spa->spa_minref ||
++ ASSERT(zfs_refcount_count(&spa->spa_refcount) > spa->spa_minref ||
+ MUTEX_HELD(&spa_namespace_lock));
+- (void) refcount_remove(&spa->spa_refcount, tag);
++ (void) zfs_refcount_remove(&spa->spa_refcount, tag);
+ }
+
+ /*
+@@ -794,7 +795,7 @@ spa_close(spa_t *spa, void *tag)
+ void
+ spa_async_close(spa_t *spa, void *tag)
+ {
+- (void) refcount_remove(&spa->spa_refcount, tag);
++ (void) zfs_refcount_remove(&spa->spa_refcount, tag);
+ }
+
+ /*
+@@ -807,7 +808,7 @@ spa_refcount_zero(spa_t *spa)
+ {
+ ASSERT(MUTEX_HELD(&spa_namespace_lock));
+
+- return (refcount_count(&spa->spa_refcount) == spa->spa_minref);
++ return (zfs_refcount_count(&spa->spa_refcount) == spa->spa_minref);
+ }
+
+ /*
+@@ -1878,7 +1879,7 @@ spa_init(int mode)
+ #endif
+
+ fm_init();
+- refcount_init();
++ zfs_refcount_init();
+ unique_init();
+ range_tree_init();
+ metaslab_alloc_trace_init();
+@@ -1914,7 +1915,7 @@ spa_fini(void)
+ metaslab_alloc_trace_fini();
+ range_tree_fini();
+ unique_fini();
+- refcount_fini();
++ zfs_refcount_fini();
+ fm_fini();
+ qat_fini();
+
+diff --git a/module/zfs/zfs_ctldir.c b/module/zfs/zfs_ctldir.c
+index de3c5a41..2964b65a 100644
+--- a/module/zfs/zfs_ctldir.c
++++ b/module/zfs/zfs_ctldir.c
+@@ -144,7 +144,7 @@ zfsctl_snapshot_alloc(char *full_name, char *full_path, spa_t *spa,
+ se->se_root_dentry = root_dentry;
+ se->se_taskqid = TASKQID_INVALID;
+
+- refcount_create(&se->se_refcount);
++ zfs_refcount_create(&se->se_refcount);
+
+ return (se);
+ }
+@@ -156,7 +156,7 @@ zfsctl_snapshot_alloc(char *full_name, char *full_path, spa_t *spa,
+ static void
+ zfsctl_snapshot_free(zfs_snapentry_t *se)
+ {
+- refcount_destroy(&se->se_refcount);
++ zfs_refcount_destroy(&se->se_refcount);
+ strfree(se->se_name);
+ strfree(se->se_path);
+
+@@ -179,7 +179,7 @@ zfsctl_snapshot_hold(zfs_snapentry_t *se)
+ static void
+ zfsctl_snapshot_rele(zfs_snapentry_t *se)
+ {
+- if (refcount_remove(&se->se_refcount, NULL) == 0)
++ if (zfs_refcount_remove(&se->se_refcount, NULL) == 0)
+ zfsctl_snapshot_free(se);
+ }
+
+diff --git a/module/zfs/zfs_znode.c b/module/zfs/zfs_znode.c
+index 0ca10f82..7b893dc7 100644
+--- a/module/zfs/zfs_znode.c
++++ b/module/zfs/zfs_znode.c
+@@ -149,7 +149,7 @@ zfs_znode_hold_cache_constructor(void *buf, void *arg, int kmflags)
+ znode_hold_t *zh = buf;
+
+ mutex_init(&zh->zh_lock, NULL, MUTEX_DEFAULT, NULL);
+- refcount_create(&zh->zh_refcount);
++ zfs_refcount_create(&zh->zh_refcount);
+ zh->zh_obj = ZFS_NO_OBJECT;
+
+ return (0);
+@@ -161,7 +161,7 @@ zfs_znode_hold_cache_destructor(void *buf, void *arg)
+ znode_hold_t *zh = buf;
+
+ mutex_destroy(&zh->zh_lock);
+- refcount_destroy(&zh->zh_refcount);
++ zfs_refcount_destroy(&zh->zh_refcount);
+ }
+
+ void
+@@ -279,7 +279,7 @@ zfs_znode_hold_enter(zfsvfs_t *zfsvfs, uint64_t obj)
+ kmem_cache_free(znode_hold_cache, zh_new);
+
+ ASSERT(MUTEX_NOT_HELD(&zh->zh_lock));
+- ASSERT3S(refcount_count(&zh->zh_refcount), >, 0);
++ ASSERT3S(zfs_refcount_count(&zh->zh_refcount), >, 0);
+ mutex_enter(&zh->zh_lock);
+
+ return (zh);
+@@ -292,11 +292,11 @@ zfs_znode_hold_exit(zfsvfs_t *zfsvfs, znode_hold_t *zh)
+ boolean_t remove = B_FALSE;
+
+ ASSERT(zfs_znode_held(zfsvfs, zh->zh_obj));
+- ASSERT3S(refcount_count(&zh->zh_refcount), >, 0);
++ ASSERT3S(zfs_refcount_count(&zh->zh_refcount), >, 0);
+ mutex_exit(&zh->zh_lock);
+
+ mutex_enter(&zfsvfs->z_hold_locks[i]);
+- if (refcount_remove(&zh->zh_refcount, NULL) == 0) {
++ if (zfs_refcount_remove(&zh->zh_refcount, NULL) == 0) {
+ avl_remove(&zfsvfs->z_hold_trees[i], zh);
+ remove = B_TRUE;
+ }
+diff --git a/module/zfs/zio.c b/module/zfs/zio.c
+index dd0dfcdb..3f8fca38 100644
+--- a/module/zfs/zio.c
++++ b/module/zfs/zio.c
+@@ -2338,7 +2338,7 @@ zio_write_gang_block(zio_t *pio)
+ ASSERT(!(pio->io_flags & ZIO_FLAG_NODATA));
+
+ flags |= METASLAB_ASYNC_ALLOC;
+- VERIFY(refcount_held(&mc->mc_alloc_slots, pio));
++ VERIFY(zfs_refcount_held(&mc->mc_alloc_slots, pio));
+
+ /*
+ * The logical zio has already placed a reservation for
+@@ -3766,7 +3766,7 @@ zio_done(zio_t *zio)
+ ASSERT(zio->io_priority == ZIO_PRIORITY_ASYNC_WRITE);
+ ASSERT(zio->io_bp != NULL);
+ metaslab_group_alloc_verify(zio->io_spa, zio->io_bp, zio);
+- VERIFY(refcount_not_held(
++ VERIFY(zfs_refcount_not_held(
+ &(spa_normal_class(zio->io_spa)->mc_alloc_slots), zio));
+ }
+
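The hunks above are mechanical renames: every refcount_* symbol gains a zfs_ prefix so the in-tree API no longer collides with the Linux kernel's own refcount_t/refcount_add() namespace. For orientation, a minimal usage sketch of the renamed API as exercised above; illustrative only, the holder tag and the wrapper function are made up, while the zfs_refcount_* calls and ASSERT3S are the ones visible in the patch:

	/*
	 * Illustrative sketch, not part of the patch: a zfs_refcount_t
	 * ties each hold to a holder tag, so a mismatched release or a
	 * leaked hold can be caught when reference tracking is enabled.
	 */
	static zfs_refcount_t rc;

	static void
	refcount_example(void *tag)	/* hypothetical caller-supplied tag */
	{
		zfs_refcount_create(&rc);
		(void) zfs_refcount_add(&rc, tag);	/* returns the new count */
		ASSERT3S(zfs_refcount_count(&rc), ==, 1);
		(void) zfs_refcount_remove(&rc, tag);	/* tag must match the add */
		zfs_refcount_destroy(&rc);	/* expects no holds left */
	}
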
diff --git a/zfs-patches/0015-Fix-arc_release-refcount.patch b/zfs-patches/0015-Fix-arc_release-refcount.patch
new file mode 100644
index 0000000..9cee846
--- /dev/null
+++ b/zfs-patches/0015-Fix-arc_release-refcount.patch
@@ -0,0 +1,29 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Brian Behlendorf <behlendorf1 at llnl.gov>
+Date: Mon, 8 Oct 2018 14:59:34 -0700
+Subject: [PATCH] Fix arc_release() refcount
+
+Update arc_release to use arc_buf_size(). This hunk was accidentally
+dropped when porting compressed send/recv, 2aa34383b.
+
+Reviewed-by: Matthew Ahrens <mahrens at delphix.com>
+Signed-off-by: Tom Caputi <tcaputi at datto.com>
+Signed-off-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Closes #8000
+---
+ module/zfs/arc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/module/zfs/arc.c b/module/zfs/arc.c
+index 32ac0837..a7fb2429 100644
+--- a/module/zfs/arc.c
++++ b/module/zfs/arc.c
+@@ -5831,7 +5831,7 @@ arc_release(arc_buf_t *buf, void *tag)
+
+ mutex_exit(&buf->b_evict_lock);
+ (void) zfs_refcount_add_many(&arc_anon->arcs_size,
+- HDR_GET_LSIZE(nhdr), buf);
++ arc_buf_size(buf), buf);
+ } else {
+ mutex_exit(&buf->b_evict_lock);
+ ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
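The one-line fix matters for compressed buffers: HDR_GET_LSIZE() is the header's logical size, while a compressed anonymous buffer only pins its physical size, so charging lsize to arc_anon->arcs_size skews the anon-state accounting that compressed send/recv relies on. Roughly, and paraphrased from context rather than quoted from the tree, arc_buf_size() is assumed to behave like:

	/*
	 * Hedged sketch of arc_buf_size() in this vintage of the code
	 * (assumed, not copied from the source): report the physical
	 * size for compressed buffers and the logical size otherwise.
	 */
	static int32_t
	arc_buf_size_sketch(arc_buf_t *buf)
	{
		return (ARC_BUF_COMPRESSED(buf) ?
		    HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr));
	}
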
diff --git a/zfs-patches/0016-Allow-use-of-pool-GUID-as-root-pool.patch b/zfs-patches/0016-Allow-use-of-pool-GUID-as-root-pool.patch
new file mode 100644
index 0000000..435a6dc
--- /dev/null
+++ b/zfs-patches/0016-Allow-use-of-pool-GUID-as-root-pool.patch
@@ -0,0 +1,59 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: George Melikov <mail at gmelikov.ru>
+Date: Wed, 24 Oct 2018 06:06:40 +0300
+Subject: [PATCH] Allow use of pool GUID as root pool
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+It's helpful if there are pools with same names,
+but you need to use only one of them.
+
+Main case is twin servers, meanwhile some software
+requires the same name of pools (e.g. Proxmox).
+
+Reviewed-by: Kash Pande <kash at tripleback.net>
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Signed-off-by: George Melikov <mail at gmelikov.ru>
+Signed-off-by: Igor ‘guardian’ Lidin of Moscow, Russia
+Closes #8052
+---
+ contrib/initramfs/scripts/zfs | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+diff --git a/contrib/initramfs/scripts/zfs b/contrib/initramfs/scripts/zfs
+index 86329e76..dacd71d2 100644
+--- a/contrib/initramfs/scripts/zfs
++++ b/contrib/initramfs/scripts/zfs
+@@ -193,7 +193,7 @@ import_pool()
+
+ # Verify that the pool isn't already imported
+ # Make as sure as we can to not require '-f' to import.
+- "${ZPOOL}" status "$pool" > /dev/null 2>&1 && return 0
++ "${ZPOOL}" get name,guid -o value -H 2>/dev/null | grep -Fxq "$pool" && return 0
+
+ # For backwards compatibility, make sure that ZPOOL_IMPORT_PATH is set
+ # to something we can use later with the real import(s). We want to
+@@ -772,6 +772,7 @@ mountroot()
+ # root=zfs:<pool>/<dataset> (uses this for rpool - first part, without 'zfs:')
+ #
+ # Option <dataset> could also be <snapshot>
++ # Option <pool> could also be <guid>
+
+ # ------------
+ # Support force option
+@@ -889,6 +890,14 @@ mountroot()
+ /bin/sh
+ fi
+
++ # In case the pool was specified as guid, resolve guid to name
++ pool="$("${ZPOOL}" get name,guid -o name,value -H | \
++ awk -v pool="${ZFS_RPOOL}" '$2 == pool { print $1 }')"
++ if [ -n "$pool" ]; then
++ ZFS_BOOTFS="${pool}/${ZFS_BOOTFS#*/}"
++ ZFS_RPOOL="${pool}"
++ fi
++
+ # Set elevator=noop on the root pool's vdevs' disks. ZFS already
+ # does this for wholedisk vdevs (for all pools), so this is only
+ # important for partitions.
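With this change the <pool> half of the root specification may be the pool's numeric GUID rather than its name: the added hunk rewrites ZFS_RPOOL and ZFS_BOOTFS once the GUID resolves, and the import check above it is widened to match either column of 'zpool get name,guid' for the same reason. A hypothetical boot line (the GUID below is made up):

	root=zfs:1344392800672939349/ROOT/pve-1
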
diff --git a/zfs-patches/0017-ZTS-Update-O_TMPFILE-support-check.patch b/zfs-patches/0017-ZTS-Update-O_TMPFILE-support-check.patch
new file mode 100644
index 0000000..439529f
--- /dev/null
+++ b/zfs-patches/0017-ZTS-Update-O_TMPFILE-support-check.patch
@@ -0,0 +1,67 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Brian Behlendorf <behlendorf1 at llnl.gov>
+Date: Mon, 14 May 2018 20:36:30 -0700
+Subject: [PATCH] ZTS: Update O_TMPFILE support check
+
+In CentOS 7.5 the kernel provided a compatibility wrapper to support
+O_TMPFILE. This results in the test setup script correctly detecting
+kernel support. But the ZFS module was built without O_TMPFILE
+support due to the non-standard CentOS kernel interface.
+
+Handle this case by updating the setup check to fail either when
+the kernel or the ZFS module fail to provide support. The reason
+will be clearly logged in the test results.
+
+Reviewed-by: Chunwei Chen <tuxoko at gmail.com>
+Signed-off-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Closes #7528
+---
+ tests/zfs-tests/tests/functional/tmpfile/setup.ksh | 11 +++++++----
+ tests/zfs-tests/tests/functional/tmpfile/tmpfile_test.c | 11 ++++++-----
+ 2 files changed, 13 insertions(+), 9 deletions(-)
+
+diff --git a/tests/zfs-tests/tests/functional/tmpfile/setup.ksh b/tests/zfs-tests/tests/functional/tmpfile/setup.ksh
+index 243a5b77..bc00a2a2 100755
+--- a/tests/zfs-tests/tests/functional/tmpfile/setup.ksh
++++ b/tests/zfs-tests/tests/functional/tmpfile/setup.ksh
+@@ -31,9 +31,12 @@
+
+ . $STF_SUITE/include/libtest.shlib
+
+-if ! $STF_SUITE/tests/functional/tmpfile/tmpfile_test /tmp; then
+- log_unsupported "The kernel doesn't support O_TMPFILE."
++DISK=${DISKS%% *}
++default_setup_noexit $DISK
++
++if ! $STF_SUITE/tests/functional/tmpfile/tmpfile_test $TESTDIR; then
++ default_cleanup_noexit
++ log_unsupported "The kernel/filesystem doesn't support O_TMPFILE"
+ fi
+
+-DISK=${DISKS%% *}
+-default_setup $DISK
++log_pass
+diff --git a/tests/zfs-tests/tests/functional/tmpfile/tmpfile_test.c b/tests/zfs-tests/tests/functional/tmpfile/tmpfile_test.c
+index 5fb67b47..91527ac5 100644
+--- a/tests/zfs-tests/tests/functional/tmpfile/tmpfile_test.c
++++ b/tests/zfs-tests/tests/functional/tmpfile/tmpfile_test.c
+@@ -36,13 +36,14 @@ main(int argc, char *argv[])
+
+ fd = open(argv[1], O_TMPFILE | O_WRONLY, 0666);
+ if (fd < 0) {
+- /*
+- * Only fail on EISDIR. If we get EOPNOTSUPP, that means
+- * kernel support O_TMPFILE, but the path at argv[1] doesn't.
+- */
+ if (errno == EISDIR) {
+- fprintf(stderr, "kernel doesn't support O_TMPFILE\n");
++ fprintf(stderr,
++ "The kernel doesn't support O_TMPFILE\n");
+ return (1);
++ } else if (errno == EOPNOTSUPP) {
++ fprintf(stderr,
++ "The filesystem doesn't support O_TMPFILE\n");
++ return (2);
+ }
+ perror("open");
+ } else {
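
The distinct return codes matter because setup.ksh now reports the
failure cause; a minimal standalone probe along the same lines might
look like this (the echo messages are illustrative, the paths are the
ones the test suite already defines):

    # Exit 0 = O_TMPFILE works, 1 = the kernel lacks it, 2 = the
    # filesystem mounted at $TESTDIR lacks it (see tmpfile_test.c above).
    if "$STF_SUITE/tests/functional/tmpfile/tmpfile_test" "$TESTDIR"; then
        echo "O_TMPFILE supported on $TESTDIR"
    else
        echo "O_TMPFILE unsupported (tmpfile_test exited $?)"
    fi
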
diff --git a/zfs-patches/0018-Fix-flake8-invalid-escape-sequence-x-warning.patch b/zfs-patches/0018-Fix-flake8-invalid-escape-sequence-x-warning.patch
new file mode 100644
index 0000000..57fd42b
--- /dev/null
+++ b/zfs-patches/0018-Fix-flake8-invalid-escape-sequence-x-warning.patch
@@ -0,0 +1,35 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Brian Behlendorf <behlendorf1 at llnl.gov>
+Date: Wed, 24 Oct 2018 23:26:08 -0700
+Subject: [PATCH] Fix flake8 "invalid escape sequence 'x'" warning
+
+From https://lintlyci.github.io/Flake8Rules/rules/W605.html
+
+As of Python 3.6, a backslash-character pair that is not a valid
+escape sequence now generates a DeprecationWarning. Although this
+will eventually become a SyntaxError, that will not be for several
+Python releases.
+
+Note 'float_pobj' was simply removed from arcstat.py since it
+was entirely unused.
+
+Reviewed-by: John Kennedy <john.kennedy at delphix.com>
+Reviewed-by: Richard Elling <Richard.Elling at RichardElling.com>
+Signed-off-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Closes #8056
+---
+ cmd/arcstat/arcstat.py | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/cmd/arcstat/arcstat.py b/cmd/arcstat/arcstat.py
+index b52a8c29..d7d3e9b7 100755
+--- a/cmd/arcstat/arcstat.py
++++ b/cmd/arcstat/arcstat.py
+@@ -112,7 +112,6 @@ cur = {}
+ d = {}
+ out = None
+ kstat = None
+-float_pobj = re.compile("^[0-9]+(\.[0-9]+)?$")
+
+
+ def detailed_usage():
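
The warning is easy to reproduce, and a raw string is the usual W605
fix when such a pattern is actually needed (the sketch assumes a
python3 >= 3.6 on the PATH; on 3.12 and later the warning class is
SyntaxWarning instead of DeprecationWarning):

    # Force the invalid-escape DeprecationWarning into an error:
    python3 -W error::DeprecationWarning -c 'pat = "^[0-9]+(\.[0-9]+)?$"'
    # The raw-string spelling compiles without complaint:
    python3 -c 'pat = r"^[0-9]+(\.[0-9]+)?$"; print(pat)'
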
diff --git a/zfs-patches/0019-Add-BuildRequires-gcc-make-elfutils-libelf-devel.patch b/zfs-patches/0019-Add-BuildRequires-gcc-make-elfutils-libelf-devel.patch
new file mode 100644
index 0000000..b56b4c3
--- /dev/null
+++ b/zfs-patches/0019-Add-BuildRequires-gcc-make-elfutils-libelf-devel.patch
@@ -0,0 +1,51 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Tony Hutter <hutter2 at llnl.gov>
+Date: Wed, 7 Nov 2018 15:48:24 -0800
+Subject: [PATCH] Add BuildRequires gcc, make, elfutils-libelf-devel
+
+This adds a BuildRequires for gcc, make, and elfutils-libelf-devel
+into our spec files. gcc has been a packaging requirement for
+a while now:
+
+https://fedoraproject.org/wiki/Packaging:C_and_C%2B%2B
+
+These additional BuildRequires allow us to mock build in
+Fedora 29.
+
+Reviewed-by: Neal Gompa <ngompa at datto.com>
+Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
+Signed-off-by: Tony Hutter <hutter2 at llnl.gov>
+Closes #8095
+Closes #8102
+---
+ rpm/generic/zfs-kmod.spec.in | 4 ++++
+ rpm/generic/zfs.spec.in | 1 +
+ 2 files changed, 5 insertions(+)
+
+diff --git a/rpm/generic/zfs-kmod.spec.in b/rpm/generic/zfs-kmod.spec.in
+index d4746f5b..ecf14ece 100644
+--- a/rpm/generic/zfs-kmod.spec.in
++++ b/rpm/generic/zfs-kmod.spec.in
+@@ -52,6 +52,10 @@ URL: http://zfsonlinux.org/
+ Source0: %{module}-%{version}.tar.gz
+ Source10: kmodtool
+ BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id} -u -n)
++%if 0%{?rhel}%{?fedora}
++BuildRequires: gcc, make
++BuildRequires: elfutils-libelf-devel
++%endif
+
+ # The developments headers will conflict with the dkms packages.
+ Conflicts: %{module}-dkms
+diff --git a/rpm/generic/zfs.spec.in b/rpm/generic/zfs.spec.in
+index fa6f1571..c1b8f2c8 100644
+--- a/rpm/generic/zfs.spec.in
++++ b/rpm/generic/zfs.spec.in
+@@ -91,6 +91,7 @@ Provides: %{name}-kmod-common = %{version}
+ Conflicts: zfs-fuse
+
+ %if 0%{?rhel}%{?fedora}%{?suse_version}
++BuildRequires: gcc, make
+ BuildRequires: zlib-devel
+ BuildRequires: libuuid-devel
+ BuildRequires: libblkid-devel
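
With those lines in place, the Fedora 29 mock build the commit message
refers to would be driven roughly as follows (the chroot config name
and SRPM filename are assumptions, not part of this patch):

    # Rebuild the source RPM in a clean Fedora 29 buildroot; without
    # the new BuildRequires the chroot lacks gcc, make and the libelf
    # headers needed by the kernel module build.
    mock -r fedora-29-x86_64 --rebuild zfs-0.7.12-1.src.rpm
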
diff --git a/zfs-patches/0020-Tag-zfs-0.7.12.patch b/zfs-patches/0020-Tag-zfs-0.7.12.patch
new file mode 100644
index 0000000..ef3d9fc
--- /dev/null
+++ b/zfs-patches/0020-Tag-zfs-0.7.12.patch
@@ -0,0 +1,55 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Tony Hutter <hutter2 at llnl.gov>
+Date: Thu, 8 Nov 2018 14:38:37 -0800
+Subject: [PATCH] Tag zfs-0.7.12
+
+META file and changelog updated.
+
+Signed-off-by: Tony Hutter <hutter2 at llnl.gov>
+---
+ META | 2 +-
+ rpm/generic/zfs-kmod.spec.in | 3 +++
+ rpm/generic/zfs.spec.in | 3 +++
+ 3 files changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/META b/META
+index 4b0cdb9c..8631f885 100644
+--- a/META
++++ b/META
+@@ -1,7 +1,7 @@
+ Meta: 1
+ Name: zfs
+ Branch: 1.0
+-Version: 0.7.11
++Version: 0.7.12
+ Release: 1
+ Release-Tags: relext
+ License: CDDL
+diff --git a/rpm/generic/zfs-kmod.spec.in b/rpm/generic/zfs-kmod.spec.in
+index ecf14ece..3b97e91d 100644
+--- a/rpm/generic/zfs-kmod.spec.in
++++ b/rpm/generic/zfs-kmod.spec.in
+@@ -195,6 +195,9 @@ chmod u+x ${RPM_BUILD_ROOT}%{kmodinstdir_prefix}/*/extra/*/*/*
+ rm -rf $RPM_BUILD_ROOT
+
+ %changelog
++* Thu Nov 08 2018 Tony Hutter <hutter2 at llnl.gov> - 0.7.12-1
++- Released 0.7.12-1, detailed release notes are available at:
++- https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.12
+ * Thu Sep 13 2018 Tony Hutter <hutter2 at llnl.gov> - 0.7.11-1
+ - Released 0.7.11-1, detailed release notes are available at:
+ - https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.11
+diff --git a/rpm/generic/zfs.spec.in b/rpm/generic/zfs.spec.in
+index c1b8f2c8..f28793a8 100644
+--- a/rpm/generic/zfs.spec.in
++++ b/rpm/generic/zfs.spec.in
+@@ -372,6 +372,9 @@ systemctl --system daemon-reload >/dev/null || true
+ %endif
+
+ %changelog
++* Thu Nov 08 2018 Tony Hutter <hutter2 at llnl.gov> - 0.7.12-1
++- Released 0.7.12-1, detailed release notes are available at:
++- https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.12
+ * Thu Sep 13 2018 Tony Hutter <hutter2 at llnl.gov> - 0.7.11-1
+ - Released 0.7.11-1, detailed release notes are available at:
+ - https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.11
diff --git a/zfs-patches/series b/zfs-patches/series
index 756a299..cbeff2d 100644
--- a/zfs-patches/series
+++ b/zfs-patches/series
@@ -1,9 +1,20 @@
0001-remove-DKMS-modules-and-dracut-build.patch
0002-import-with-d-dev-disk-by-id-in-scan-service.patch
0003-always-load-ZFS-module-on-boot.patch
-0004-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
-0005-Fix-race-in-dnode_check_slots_free.patch
-0006-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
+0004-Add-Breaks-Replaces-to-zfs-initramfs.patch
+0005-Revert-Install-init-scripts-to-support-non-systemd-s.patch
+0006-Fix-deadlock-between-zfs-umount-snapentry_expire.patch
0007-deadlock-between-mm_sem-and-tx-assign-in-zfs_write-a.patch
-0008-Add-Breaks-Replaces-to-zfs-initramfs.patch
-0009-Revert-Install-init-scripts-to-support-non-systemd-s.patch
+0008-Fix-race-in-dnode_check_slots_free.patch
+0009-Reduce-taskq-and-context-switch-cost-of-zio-pipe.patch
+0010-Skip-import-activity-test-in-more-zdb-code-paths.patch
+0011-Fix-statfs-2-for-32-bit-user-space.patch
+0012-Zpool-iostat-remove-latency-queue-scaling.patch
+0013-Linux-4.19-rc3-compat-Remove-refcount_t-compat.patch
+0014-Prefix-all-refcount-functions-with-zfs_.patch
+0015-Fix-arc_release-refcount.patch
+0016-Allow-use-of-pool-GUID-as-root-pool.patch
+0017-ZTS-Update-O_TMPFILE-support-check.patch
+0018-Fix-flake8-invalid-escape-sequence-x-warning.patch
+0019-Add-BuildRequires-gcc-make-elfutils-libelf-devel.patch
+0020-Tag-zfs-0.7.12.patch
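
The reordered series can be smoke-tested against an unpacked
zfs-0.7.12 tree with quilt (illustrative only; the package build
applies the series itself):

    # Apply all patches in series order on top of the upstream source.
    cd zfs-0.7.12
    QUILT_PATCHES=../zfs-patches quilt push -a
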
--
2.11.0