Console View
Brian Behlendorf
Move zap_attribute_t to the heap in dsl_deadlist_merge

When compiling normally, the compiler
raises a warning for the dsl_deadlist_merge function that
the stack size is too large. In debug builds this can
generate an error.

Move large structures to heap.
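The fix pattern can be sketched in plain C. This is a minimal illustration, not the actual OpenZFS change: big_attr_t and process_entry() are hypothetical stand-ins for zap_attribute_t and dsl_deadlist_merge(), and userland calloc()/free() stand in for kmem_alloc()/kmem_free().

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for zap_attribute_t; large enough that a
 * stack allocation would trip -Wframe-larger-than in a kernel build. */
typedef struct big_attr {
	char ba_name[256];
	char ba_buf[8192];
} big_attr_t;

/* Before: "big_attr_t za;" lived on the stack.  After: heap-allocate
 * the large structure and free it before returning. */
static int
process_entry(const char *name)
{
	big_attr_t *za = calloc(1, sizeof (big_attr_t));
	if (za == NULL)
		return (-1);
	strncpy(za->ba_name, name, sizeof (za->ba_name) - 1);
	int ok = (za->ba_name[0] != '\0');
	free(za);
	return (ok ? 0 : -1);
}
```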

Reviewed-by: Richard Yao <richard.yao@alumni.stonybrook.edu>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Mariusz Zaborski <mariusz.zaborski@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #14524

Pull-request: #14921 part 1/1
Brian Behlendorf
Revert "initramfs: use `mount.zfs` instead of `mount`"

This broke mounting of snapshots on / for users.

See https://github.com/openzfs/zfs/issues/9461#issuecomment-1376162949 for more context.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rich Ercolani <rincebrain@gmail.com>
Closes #14908

Pull-request: #14920 part 1/1
Brian Behlendorf
Fix NULL pointer dereference when doing concurrent 'send' operations

A NULL pointer dereference will occur when doing a 'zfs send -S' on a dataset that
is still being received.  The problem is that the new 'send' will
rightfully fail to own the datasets (i.e. dsl_dataset_own_force() will
fail), but then dmu_send() will still do the dsl_dataset_disown().
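The shape of the fix can be modeled with a toy owner field. dataset_t, dataset_own(), dataset_disown(), and do_send() below are illustrative stand-ins for the dsl_dataset_own_force()/dsl_dataset_disown() pair, not OpenZFS code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a send must not disown a dataset it never owned. */
typedef struct dataset {
	void *owner;
} dataset_t;

static bool
dataset_own(dataset_t *ds, void *tag)
{
	if (ds->owner != NULL)
		return (false);	/* already owned, e.g. by a receive */
	ds->owner = tag;
	return (true);
}

static void
dataset_disown(dataset_t *ds, void *tag)
{
	assert(ds->owner == tag);	/* the buggy path tripped here */
	ds->owner = NULL;
}

/* Fixed path: disown only on the success path. */
static int
do_send(dataset_t *ds, void *tag)
{
	if (!dataset_own(ds, tag))
		return (-1);	/* busy: do NOT call disown here */
	/* ... generate the send stream ... */
	dataset_disown(ds, tag);
	return (0);
}
```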

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Luís Henriques <henrix@camandro.org>
Closes #14903
Closes #14890

Pull-request: #14919 part 1/1
Brian Behlendorf
ZTS: threadsappend_001_pos

Correct exception path used in zts-report.py.in.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>

Pull-request: #14915 part 1/1
Alexander Motin
ZIL: Allow to replay blocks of any size.

There seems to be no reason for ZIL blocks to be limited to 128KB
other than that the replay code is written that way.  This change does
not increase the limit yet; it just removes the artificial limitation.

Avoiding the extra memcpy() may save us a second during replay.

Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.

Pull-request: #14910 part 1/1
Alexander Motin
ZIL: Improve next log block size prediction.

Detect single-threaded workloads by checking that the previous block is
fully written and flushed.  This makes the size prediction logic
much more precise and lets us skip commit delays, since we can give up
on write aggregation in that case.

Since single-threaded workloads are no longer delayed, increase
zfs_commit_timeout_pct from 5 to 10%.  Parallel workloads should
care less about it, and it should provide more aggregation.

Remove the zil_min_commit_timeout tunable, since very fast ZILs should
detect most workloads as single-threaded.  And when they do not, not
delaying writes wastes the extra block space allocated for aggregation.

Track history in the context of bursts, not individual log blocks.  This
avoids blowing away all the history with a single large burst of
many blocks, and at the same time allows optimizations covering multiple
blocks in a burst and even the predicted following burst.  For each
burst, account its optimal block size and minimal first block size.
Use the statistics from the last 8 bursts to predict the first block
size of the next burst.

Remove the predefined set of block sizes.  Allocate any size we see fit,
in multiples of 4KB, as now required by the ZIL.  With compression enabled
by default, ZFS already writes fairly random block sizes, so this
should not surprise the space allocator any more.

Allow zio_alloc_zil() to allocate bigger blocks if the predicted size
does not align well with the pool's minimum allocation size.  The ZIL
can make good use of whatever block size it is given.

Reduce max_waste_space from 12 to 6% and max_copied_data from 63KB
to 8KB.  This allows prediction to be more precise on large bursts,
improves space efficiency, and reduces extra memory copying.
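The burst-history idea can be sketched as a small ring buffer. This is a hedged illustration only: burst_hist_t and the names below are hypothetical, and the exact statistic the real predictor computes is not given in this text (maximum rounded up to 4KB is an assumption).

```c
#include <assert.h>
#include <stdint.h>

#define ZIL_BURST_HIST	8
#define ZIL_ALIGN	4096

/* Illustrative: remember first-block sizes of the last 8 bursts. */
typedef struct burst_hist {
	uint64_t bh_sizes[ZIL_BURST_HIST];
	int bh_next;
} burst_hist_t;

static void
burst_record(burst_hist_t *bh, uint64_t first_block_size)
{
	bh->bh_sizes[bh->bh_next] = first_block_size;
	bh->bh_next = (bh->bh_next + 1) % ZIL_BURST_HIST;
}

/* Predict the next burst's first block size: here, the largest seen
 * recently, rounded up to a 4KB multiple as the ZIL now requires. */
static uint64_t
burst_predict(const burst_hist_t *bh)
{
	uint64_t m = ZIL_ALIGN;
	for (int i = 0; i < ZIL_BURST_HIST; i++) {
		if (bh->bh_sizes[i] > m)
			m = bh->bh_sizes[i];
	}
	return ((m + ZIL_ALIGN - 1) / ZIL_ALIGN * ZIL_ALIGN);
}
```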

Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.

Pull-request: #14909 part 1/1
Rich Ercolani
Revert "initramfs: use `mount.zfs` instead of `mount`"

This broke mounting of snapshots on / for users.

See https://github.com/openzfs/zfs/issues/9461#issuecomment-1376162949 for more context.

Signed-off-by: Rich Ercolani <rincebrain@gmail.com>

Pull-request: #14908 part 1/1
Brian Behlendorf
ZTS: Add zpool_resilver_concurrent exception

The zpool_resilver_concurrent test case requires the ZED, which is not used
on FreeBSD.  Add this test to the known list of skipped tests for FreeBSD.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #14904

Pull-request: #14907 part 1/1
Brian Behlendorf
Fix test-runner on FreeBSD

CLOCK_MONOTONIC_RAW is only a thing on Linux and macOS. I'm not
actually sure why the previous hardcoding of a constant didn't
error out, but when we removed it, it sure does now.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: Rich Ercolani <rincebrain@gmail.com>
Signed-off-by: Rich Ercolani <rincebrain@gmail.com>
Closes #12995

Pull-request: #14906 part 7/7
Brian Behlendorf
FreeBSD: add missing vop_fplookup assignments

It became illegal to not have them as of
5f6df177758b9dff88e4b6069aeb2359e8b0c493 ("vfs: validate that vop
vectors provide all or none fplookup vops") upstream.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Closes #14788

Pull-request: #14906 part 6/6
Brian Behlendorf
Fix checkstyle warning

Resolve a missed checkstyle warning.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Mateusz Guzik <mjguzik@gmail.com>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #14799

Pull-request: #14906 part 4/4
Brian Behlendorf
FreeBSD: add missing vn state transition for .zfs

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Closes #14774

Pull-request: #14906 part 3/4
Brian Behlendorf
FreeBSD: fix up EINVAL from getdirentries on .zfs

Without the change:
/.zfs
/.zfs/snapshot
find: /.zfs: Invalid argument

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Closes #14774

Pull-request: #14906 part 2/4
Brian Behlendorf
FreeBSD: make zfs_vfs_held() definition consistent with declaration

Noticed while attempting to change FreeBSD's boolean_t into an actual
bool: in include/sys/zfs_ioctl_impl.h, zfs_vfs_held() is declared to
return a boolean_t, but in module/os/freebsd/zfs/zfs_ioctl_os.c it is
defined to return an int. Make the definition match the declaration.
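A minimal model of the fixed state, with illustrative names (vfs_held() here is not the real zfs_vfs_held()); the point is simply that the prototype and the definition use the same return type:

```c
#include <assert.h>
#include <stdbool.h>

/* FreeBSD's boolean_t is historically int-compatible, which is why a
 * declaration/definition mismatch could compile silently; modeled
 * here with a plain typedef. */
typedef bool boolean_t;

/* Declaration (as in the header) and definition (as in the source
 * file) now agree on boolean_t. */
boolean_t vfs_held(int refcount);

boolean_t
vfs_held(int refcount)
{
	return (refcount > 0);
}
```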

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Dimitry Andric <dimitry@andric.com>
Closes #14776

Pull-request: #14906 part 1/4
Luís Henriques
Fix NULL pointer dereference when doing concurrent 'send' operations

A NULL pointer dereference will occur when doing a 'zfs send -S' on a dataset that
is still being received.  The problem is that the new 'send' will
rightfully fail to own the datasets (i.e. dsl_dataset_own_force() will
fail), but then dmu_send() will still do the dsl_dataset_disown().

Signed-off-by: Luís Henriques <henrix@camandro.org>

Pull-request: #14903 part 1/1
Tony Hutter
TEST ONLY Force blk-mq to be unsettable for sanity

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Requires-builders: fedora38

Pull-request: #14879 part 3/3
Tony Hutter
Revert "Optionally skip zil_close during zvol_create_minor_impl"

This reverts commit e197bb24f1857c823b44c2175b2318c472d79731.

Pull-request: #14879 part 2/3
Tony Hutter
zvol: Fix zvol_misc crashes when using blk-mq

We have recently been seeing a lot of zvol_misc test failures when
blk-mq was enabled on Fedora 38 and CentOS 9 (#14872).  The failures look
to be caused by kernel memory corruption.

This fix removes a slightly dubious optimization in
zfs_uiomove_bvec_rq() that saved the iterator contents of a
rq_for_each_segment().  This optimization allowed restoring the "saved
state" from a previous rq_for_each_segment() call on the same uio so
that you wouldn't need to iterate though each bvec on every
zfs_uiomove_bvec_rq() call.  However, if the kernel is manipulating
the requests/bios/bvecs under the covers between zfs_uiomove_bvec_rq()
calls, then it could result in corruption from using the "saved state".
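The hazard can be shown with a toy model (seglist_t and seg_at_offset() are illustrative, not the real bvec code): the fixed approach re-walks the segment list from the start on every call, instead of trusting a cached iterator that the kernel may have invalidated between calls.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a request's list of segments. */
typedef struct seglist {
	int segs[8];	/* segment lengths */
	size_t nsegs;
} seglist_t;

/* Fixed approach: always walk from segment 0 to find the segment
 * covering 'off'.  Slower than resuming from saved state, but safe
 * even if the list was rebuilt underneath us. */
static int
seg_at_offset(const seglist_t *sl, size_t off)
{
	size_t base = 0;
	for (size_t i = 0; i < sl->nsegs; i++) {
		if (off < base + (size_t)sl->segs[i])
			return ((int)i);
		base += (size_t)sl->segs[i];
	}
	return (-1);	/* offset past the end */
}
```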

Fixes: #14872
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Requires-builders: fedora38

Pull-request: #14879 part 1/3
Tony Hutter
TEST ONLY Force blk-mq to be unsettable for sanity

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Requires-builders: fedora38

Pull-request: #14879 part 3/3
Tony Hutter
Revert "Optionally skip zil_close during zvol_create_minor_impl"

This reverts commit e197bb24f1857c823b44c2175b2318c472d79731.

Pull-request: #14879 part 2/3
Brian Behlendorf
ZTS: zvol_misc_trim disable blk mq

Disable the zvol_misc_fua.ksh and zvol_misc_trim.ksh test cases on impacted
kernels.  This issue is being actively worked in #14872 and as part of that
fix this commit will be reverted.

    VERIFY(zh->zh_claim_txg == 0) failed
    PANIC at zil.c:904:zil_create()

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #14872
Closes #1487

Pull-request: #14870 part 1/1
Rich Ercolani
Pack our DDT ZAPs a bit denser.

The DDT is really inefficient on 4k and up vdevs, because it always
allocates 4k blocks; while compression could save us somewhat
at ashift 9, that stops being true at larger ashifts.

Signed-off-by: Rich Ercolani <rincebrain@gmail.com>

Pull-request: #14654 part 1/1
Rob Norris
zdb: add -B option to generate backup stream

This is more-or-less like `zfs send`, but specifying the snapshot by its
objset id for situations where it can't be referenced any other way.

Sponsored-By: Klara, Inc.
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>

Pull-request: #14642 part 1/1
Jorgen Lundman
Fix aarch64 assembly for macOS/M1

Give up trying to use asm_linkage.h to unify assembly work
between the platforms and just pepper the file with #ifdefs instead.

Signed-off-by: Jorgen Lundman <lundman@lundman.net>

Pull-request: #12110 part 4/4
Jorgen Lundman
Fix blake3 on macOS/arm64

    BLAKE3_CTX *ctx = blake3_per_cpu_ctx[CPU_SEQID_UNSTABLE];

We have macOS arm64 call kmem_alloc() instead, as the cpu_number()
changes quite frequently and would reuse an already active
ctx.

If in the future we want to avoid kmem_alloc(), we can use
blake3_per_cpu_ctx[CPU_SEQID_UNSTABLE] but check whether it is
busy, and move to the next free slot.  This is easily implemented
with CAS.

Signed-off-by: Jorgen Lundman <lundman@lundman.net>

Pull-request: #12110 part 3/4
Jorgen Lundman
Upstream: SHA2 reworking and API for iterating over multiple implementations

The changes in the shared files needed to enable macOS support for this PR

Signed-off-by: Jorgen Lundman <lundman@lundman.net>

Pull-request: #12110 part 2/4
Jorgen Lundman
Upstream: Add macOS support

Add source files to enable macOS support.
Change autoconf/Makefiles to compile.
Prepare zfs-tests for macOS; further changes come later.

Signed-off-by: Jorgen Lundman <lundman@lundman.net>

Pull-request: #12110 part 1/4
Allan Jude
zfs: support force exporting pools

This is primarily of use when a pool has lost its disks and the user
doesn't care about any pending (or otherwise) transactions.

Implement various control methods to make this feasible:
- txg_wait can now take a NOSUSPEND flag, in which case the caller will
  be alerted if their txg can't be committed.  This is primarily of
  interest for callers that would normally pass TXG_WAIT, but don't want
  to wait if the pool becomes suspended, which allows unwinding in some
  cases, specifically when one is attempting a non-forced export.
  Without this, the non-forced export would preclude a forced export
  by virtue of holding the namespace lock indefinitely.
- txg_wait also returns failure for TXG_WAIT users if a pool is actually
  being force exported.  Adjust most callers to tolerate this.
- spa_config_enter_flags now takes a NOSUSPEND flag to the same effect.
- A DMU objset initiator may be set on an objset being forcibly
  exported / unmounted.
- An SPA export initiator may be set on a pool being forcibly exported.
- DMU send/recv now use an interruption mechanism which relies on the
  SPA export initiator being able to enumerate datasets and closing any
  send/recv streams, causing their EINTR paths to be invoked.
- ZIO now has a cancel entry point, which tells all suspended zios to
  fail, and which suppresses the failures for non-CANFAIL users.
- metaslab, etc. cleanup, which consists of simply throwing away any
  changes that were not able to be synced out.
- Linux specific: introduce a new tunable,
  zfs_forced_export_unmount_enabled, which allows the filesystem to
  remain in a modified 'unmounted' state upon exiting zpl_umount_begin,
  to achieve parity with FreeBSD and illumos,
  which have VFS-level support for yanking filesystems out from under
  users.  However, this only helps when the user is actively performing
  I/O, while not sitting on the filesystem.  In particular, this allows
  test #3 below to pass on Linux.
- Add basic logic to zpool to indicate a force-exporting pool, instead
  of crashing due to lack of config, etc.
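The txg_wait behavior in the first two bullets can be sketched as follows; pool_state_t, txg_wait_synced_flags(), and WAIT_FLAG_NOSUSPEND are illustrative names standing in for the actual OpenZFS API described above:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define WAIT_FLAG_NOSUSPEND	0x1

/* Toy pool state: suspended, being force-exported, last synced txg. */
typedef struct pool_state {
	bool suspended;
	bool force_exporting;
	uint64_t synced_txg;
} pool_state_t;

static int
txg_wait_synced_flags(pool_state_t *ps, uint64_t txg, int flags)
{
	/* Plain TXG_WAIT callers now also fail during a force export. */
	if (ps->force_exporting)
		return (EINTR);
	/* NOSUSPEND callers are alerted instead of blocking forever. */
	if (ps->suspended && (flags & WAIT_FLAG_NOSUSPEND))
		return (EAGAIN);
	/* ... block until txg is synced; modeled as immediate success. */
	if (txg > ps->synced_txg)
		ps->synced_txg = txg;
	return (0);
}
```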

Add tests which cover the basic use cases:
- Force export while a send is in progress
- Force export while a recv is in progress
- Force export while POSIX I/O is in progress

This change modifies the libzfs ABI:
- New ZPOOL_STATUS_FORCE_EXPORTING zpool_status_t enum value.
- New field libzfs_force_export for libzfs_handle.

Co-Authored-by: Will Andrews <will@firepipe.net>
Co-Authored-by: Allan Jude <allan@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Catalogics, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #3461
Signed-off-by: Will Andrews <will@firepipe.net>
Signed-off-by: Allan Jude <allan@klarasystems.com>
Signed-off-by: Mariusz Zaborski <mariusz.zaborski@klarasystems.com>

Pull-request: #11082 part 1/1
Brian Atkinson
Adding Direct IO Support

Adding O_DIRECT support to ZFS to bypass the ARC for writes/reads.

O_DIRECT support in ZFS will always ensure there is coherency between
buffered and O_DIRECT IO requests. This ensures that all IO requests,
whether buffered or direct, will see the same file contents at all
times. Just as in other filesystems, O_DIRECT does not imply O_SYNC. While
data is written directly to VDEV disks, metadata will not be synced
until the associated TXG is synced.
For both O_DIRECT read and write requests, the offset and request sizes,
at a minimum, must be PAGE_SIZE aligned. In the event they are not,
EINVAL is returned unless the direct property is set to always (see
below).
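The alignment rule can be sketched as a simple check. dio_check_aligned() is an illustrative helper, not the actual implementation; direct=always is modeled as falling back to buffered IO instead of failing:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Return 0 if the request may proceed, EINVAL if it must be rejected.
 * With direct=always (always != 0), a misaligned request falls back
 * to the buffered path rather than failing. */
static int
dio_check_aligned(uint64_t offset, uint64_t size, int always)
{
	if ((offset & (PAGE_SIZE - 1)) != 0 ||
	    (size & (PAGE_SIZE - 1)) != 0)
		return (always ? 0 : EINVAL);
	return (0);
}
```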

For O_DIRECT writes:
The request also must be block aligned (recordsize) or the write
request will take the normal (buffered) write path. In the event that
the request is block aligned and there is a cached copy of the buffer
in the ARC, it will be discarded from the ARC, forcing all further
reads to retrieve the data from disk.

For O_DIRECT reads:
The only alignment restriction is PAGE_SIZE alignment. In the event
that the requested data is buffered (in the ARC), it will just be
copied from the ARC into the user buffer.

For both O_DIRECT writes and reads, the O_DIRECT flag will be ignored
in the event that the file contents are mmap'ed. In this case, all
requests that are at least PAGE_SIZE aligned will just fall back to the
buffered paths. If the request however is not PAGE_SIZE aligned, EINVAL
will be returned as always, regardless of whether the file's contents
are mmap'ed.

Since O_DIRECT writes go through the normal ZIO pipeline, the
following operations are supported just as with normal buffered writes:
Checksum
Compression
Dedup
Encryption
Erasure Coding

There is one caveat for the data integrity of O_DIRECT writes that is
distinct for each of the OS's supported by ZFS.
FreeBSD - FreeBSD is able to place user pages under write protection, so
          any data in the user buffers written directly down to the
          VDEV disks is guaranteed not to change. There is no concern
          with data integrity and O_DIRECT writes.
Linux - Linux is not able to place anonymous user pages under write
        protection. Because of this, if the user decides to manipulate
        the page contents while the write operation is occurring, data
        integrity can not be guaranteed. However, there is a module
        parameter `zfs_vdev_direct_write_verify_pct` that controls the
        percentage of O_DIRECT writes that can occur to a top-level
        VDEV before a checksum verify is run before the contents of the
        user buffers are committed to disk. In the event of a checksum
        verification failure the write will be redirected through the
        ARC. The default value for `zfs_vdev_direct_write_verify_pct`
        is 2 percent of Direct I/O writes to a top-level VDEV. The
        number of O_DIRECT write checksum verification errors can be
        observed by doing `zpool status -d`, which will list all
        verification errors that have occurred on a top-level VDEV.
        Along with `zpool status`, a ZED event will be issued as
        `dio_verify` when a checksum verification error occurs.

A new dataset property `direct` has been added with the following 3
allowable values:
disabled - Accepts the O_DIRECT flag, but silently ignores it and treats
           the request as a buffered IO request.
standard - Follows the alignment restrictions outlined above for
           write/read IO requests when the O_DIRECT flag is used.
always   - Treats every write/read IO request as though it passed
           O_DIRECT and will do O_DIRECT if the alignment restrictions
           are met, otherwise will redirect through the ARC. This
           property will not allow a request to fail.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
Co-authored-by: Mark Maybee <mark.maybee@delphix.com>
Co-authored-by: Matt Macy <mmacy@FreeBSD.org>
Co-authored-by: Brian Behlendorf <behlendorf@llnl.gov>

Pull-request: #10018 part 1/1