
Console View


Pavel Snajdr
Fix arc__wait__for__eviction tracepoint

3442c2a02d added a new `arc_wait_for_eviction` tracepoint, which fails to
compile when tracepoints are enabled.

The tracepoint definition begins with `DEFINE_ARC_WAIT_FOR_EVICTION_EVENT`
and spans multiple lines, so this fixes the backslash
and parentheses accordingly.

Signed-off-by: Pavel Snajdr <snajpa@snajpa.net>

Pull-request: #10669 part 1/1
Ryan Moeller
Add missed thread_exit() to vdev_rebuild_thread

Signed-off-by: Ryan Moeller <ryan@iXsystems.com>

Pull-request: #10668 part 2/2
Matt Macy
Add missed thread_exit() to vdev_autotrim_thread

Signed-off-by: Matt Macy <mmacy@FreeBSD.org>

Pull-request: #10668 part 1/1
Roland Fehrenbacher
Prevent double insert into sublist for userquota_updates_task

This protects the multilists with a lock against concurrent access from
multiple threads, which could otherwise double-insert into a sublist.

Signed-off-by: Roland Fehrenbacher <rf@q-leap.de>

Pull-request: #10665 part 1/1
  • Amazon 2 x86_64 (BUILD): cloning spl -  stdio
  • Debian 8 arm (BUILD): cloning spl -  stdio
  • Debian 8 ppc64 (BUILD): cloning spl -  stdio
  • Debian 8 ppc (BUILD): cloning spl -  stdio
  • Ubuntu 16.04 aarch64 (BUILD): cloning spl -  stdio
  • Ubuntu 16.04 i386 (BUILD): cloning spl -  stdio
  • Kernel.org Built-in x86_64 (BUILD): cloning spl -  stdio
Jonathon
Verify zfs module loaded before starting services

This is a minor change to the systemd service templates that verifies
the zfs kernel module is loaded by the kernel prior to attempting to
import any zpool.

The services check for the presence of /sys/module/zfs, which indicates
the zfs module is loaded. This uses the systemd built-in check
ConditionPathIsDirectory.
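The semantics of that condition can be exercised from the shell (a sketch only; the services themselves use the ConditionPathIsDirectory directive rather than a script):

```shell
# zfs_module_loaded mirrors ConditionPathIsDirectory=/sys/module/zfs:
# a directory under /sys/module exists only while the module is loaded.
zfs_module_loaded() {
    [ -d "${1:-/sys/module/zfs}" ]
}

if zfs_module_loaded; then
    echo "zfs module loaded: pool import can proceed"
else
    echo "zfs module not loaded: services are skipped, not failed"
fi
```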

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Matthew Thode <prometheanfire@gentoo.org>
Signed-off-by: Jonathon Fernyhough <jonathon.fernyhough@york.ac.uk>
Closes #10663
Jonathon Fernyhough
Verify zfs module loaded before starting services

This is a minor change to the systemd service templates that verifies
the zfs kernel module is loaded by the kernel prior to attempting to
import any zpool.

The services check for the presence of /sys/module/zfs, which indicates
the zfs module is loaded. This uses the systemd built-in check
ConditionPathIsDirectory.

Signed-off-by: Jonathon Fernyhough <jonathon.fernyhough@york.ac.uk>

Pull-request: #10663 part 1/1
George Amanakis
Fix logging in l2arc_rebuild()

In case the L2ARC rebuild was canceled, do not log to spa history
log as the pool may be in the process of being removed and a panic
may occur:

BUG: kernel NULL pointer dereference, address: 0000000000000018
RIP: 0010:spa_history_log_internal+0xb1/0x120 [zfs]
Call Trace:
l2arc_rebuild+0x464/0x7c0 [zfs]
l2arc_dev_rebuild_start+0x2d/0x130 [zfs]
? l2arc_rebuild+0x7c0/0x7c0 [zfs]
thread_generic_wrapper+0x78/0xb0 [spl]
kthread+0xfb/0x130
? IS_ERR+0x10/0x10 [spl]
? kthread_park+0x90/0x90
ret_from_fork+0x35/0x40

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #10659
Richard Laager
zvol_wait: Ignore locked zvols
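The change skips zvols whose encryption keys are not loaded. The filtering can be sketched on canned `zfs list` output (the `keystatus` property reports `unavailable` for locked datasets; the pipeline below runs on sample text, not a live pool):

```shell
# Locked zvols report keystatus=unavailable; zvol_wait should not
# block waiting on their device links. Sample text stands in for:
#   zfs list -H -o name,keystatus -t volume
printf 'tank/vol1\tavailable\ntank/vol2\tunavailable\n' |
    awk -F '\t' '$2 == "unavailable" { print $1 " is locked; skipping" }'
```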

Thanks: James Dingwall <james-launchpad@dingwall.me.uk>
Signed-off-by: Richard Laager <rlaager@wiktel.com>

Pull-request: #10662 part 1/1
Ryan Moeller
FreeBSD: Fix `zfs jail` and add a test

zfs_jail was not using zfs_ioctl so failed to map the IOC number
correctly.  Use zfs_ioctl to perform the jail ioctl and add a test
case for FreeBSD.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Moeller <ryan@iXsystems.com>
Closes #10658
Matthew Macy
Fix page fault in zfsctl_snapdir_getattr

Must acquire the z_teardown_lock before accessing the zfsvfs_t object.
I can't reproduce this panic on demand, but this looks like the
correct solution.

Reviewed-by: Ryan Moeller <ryan@ixsystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Authored-by: asomers <asomers@FreeBSD.org>
Signed-off-by: Matt Macy <mmacy@FreeBSD.org>
Closes #10656
Allan Jude
Change the error handling for invalid property values

ZFS recv should return a useful error message when an invalid index
property value is provided in the send stream properties nvlist.

With a compression= property outside of the understood range:

Before:
```
receiving full stream of zof/zstd_send@send2 into testpool/recv@send2
internal error: Invalid argument
Aborted (core dumped)
```
Note: the recv completes successfully; the abort() is likely just to
make it easier to track the unexpected error code.

After:
```
receiving full stream of zof/zstd_send@send2 into testpool/recv@send2
cannot receive compression property on testpool/recv: invalid property value
received 28.9M stream in 1 seconds (28.9M/sec)
```

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Allan Jude <allan@klarasystems.com>
Closes #10631
GitHub
Add dracut_install test

Add `test` to dracut so we can test things (e.g. for #10661)

Pull-request: #10661 part 2/2
Jonathon Fernyhough
Verify zfs module loaded before starting services

This is a minor change to the systemd service templates that verifies
the zfs kernel module is loaded by the kernel prior to attempting to
import any zpool.

This checks for the presence of /sys/module/zfs, which indicates the
zfs module is loaded.

Signed-off-by: Jonathon Fernyhough <jonathon.fernyhough@york.ac.uk>

Pull-request: #10661 part 1/1
Matthew Thode
Revert "Verify zfs module loaded before starting services"

This reverts commit ae12b023082fd91e89507a2a1fc014e64c6767f0.

Pull-request: #10660 part 1/1
Matthew Macy
Changes to make openzfs build within FreeBSD buildworld

A collection of header changes to enable FreeBSD to build
with vendored OpenZFS.

Reviewed-by: Ryan Moeller <ryan@ixsystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matt Macy <mmacy@FreeBSD.org>
Closes #10635
Ryan Moeller
Convert Linux-isms to FreeBSD-isms in platform zfs_debug.c

Change some comments copied from the Linux code to describe
the appropriate methods on FreeBSD.

Convert some tunables to ZFS_MODULE_PARAM so they get created
on FreeBSD.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Moeller <ryan@iXsystems.com>
Closes #10647
Ryan Moeller
ZTS: FreeBSD does have a l2arc.trim_ahead tunable

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Moeller <ryan@iXsystems.com>
Closes #10633
Matthew Ahrens
Revise ARC shrinker algorithm

The ARC shrinker callback `arc_shrinker_count/_scan()` is invoked by the
kernel's shrinker mechanism when the system is running low on free
pages.  This happens via 2 code paths:

1. "direct reclaim": The system is attempting to allocate a page, but we
are low on memory.  The ARC shrinker callback is invoked from the
page-allocation code path.

2. "indirect reclaim": kswapd notices that there aren't many free pages,
so it invokes the ARC shrinker callback.

In both cases, the kernel's shrinker code requests that the ARC shrinker
callback release some of its cache, and then it measures how many pages
were released.  However, its measurement of released pages does not
include pages that are freed via `__free_pages()`, which is how the ARC
releases memory (via `abd_free_chunks()`).  Rather, the kernel shrinker
code is looking for pages to be placed on the lists of reclaimable pages
(which are separate from actually-free pages).

Because the kernel shrinker code doesn't detect that the ARC has
released pages, it may call the ARC shrinker callback many times,
resulting in the ARC "collapsing" down to `arc_c_min`.  This has several
negative impacts:

1. ZFS doesn't use RAM to cache data effectively.

2. In the direct reclaim case, a single page allocation may wait a long
time (e.g. more than a minute) while we evict the entire ARC.

3. Even with the improvements made in 67c0f0dedc5 ("ARC shrinking blocks
reads/writes"), occasionally `arc_size` may stay above `arc_c` for the
entire time of the ARC collapse, thus blocking ZFS read/write operations
in `arc_get_data_impl()`.

To address these issues, this commit limits the ways that the ARC
shrinker callback can be used by the kernel shrinker code, and mitigates
the impact of arc_is_overflowing() on ZFS read/write operations.

With this commit:

1. We limit the amount of data that can be reclaimed from the ARC via
the "direct reclaim" shrinker.  This limits the amount of time it takes
to allocate a single page.

2. We do not allow the ARC to shrink via kswapd (indirect reclaim).
Instead we rely on `arc_evict_zthr` to monitor free memory and reduce
the ARC target size to keep sufficient free memory in the system.  Note
that we can't simply rely on limiting the amount that we reclaim at once
(as for the direct reclaim case), because kswapd's "boosted" logic can
invoke the callback an unlimited number of times (see
`balance_pgdat()`).

3. When `arc_is_overflowing()` and we want to allocate memory,
`arc_get_data_impl()` will wait only for a multiple of the requested
amount of data to be evicted, rather than waiting for the ARC to no
longer be overflowing.  This allows ZFS reads/writes to make progress
even while the ARC is overflowing, while also ensuring that the eviction
thread makes progress towards reducing the total amount of memory used
by the ARC.

4. The amount of memory that the ARC always tries to keep free for the
rest of the system, `arc_sys_free`, is increased.

5. Now that the shrinker callback is able to provide feedback to the
kernel's shrinker code about our progress, we can safely enable
the kswapd hook. This will allow the ARC to receive notifications
when memory pressure is first detected by the kernel. We also
re-enable the appropriate kstats to track these callbacks.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: George Wilson <george.wilson@delphix.com>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Closes #10600
Ryan Moeller
ZTS: zvol_misc_volmode is flaky on FreeBSD

Mark this as a known issue.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Moeller <ryan@iXsystems.com>
Closes #10655
Ryan Moeller
ZTS: Use POSIX-compatible space character class

FreeBSD recently integrated a change which causes \s in a regex to
throw an error instead of silently being misinterpreted as an s.

Change the regex in zpool_colors.ksh to use [[:space:]].
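The difference is easy to demonstrate (illustrative commands, not the actual test script):

```shell
# \s is a PCRE/GNU extension; the POSIX character class [[:space:]]
# matches the same whitespace in any POSIX-conformant grep/sed/awk,
# so it works on both Linux and FreeBSD.
printf 'mirror-0  ONLINE\n' | grep -E 'mirror-0[[:space:]]+ONLINE'
```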

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Moeller <freqlabs@FreeBSD.org>
Closes #10651
George Amanakis
Fix logging in l2arc_rebuild()

In case the L2ARC rebuild was canceled, do not log to spa history
log as the pool may be in the process of being removed and a panic
may occur:

BUG: kernel NULL pointer dereference, address: 0000000000000018
RIP: 0010:spa_history_log_internal+0xb1/0x120 [zfs]
Call Trace:
l2arc_rebuild+0x464/0x7c0 [zfs]
l2arc_dev_rebuild_start+0x2d/0x130 [zfs]
? l2arc_rebuild+0x7c0/0x7c0 [zfs]
thread_generic_wrapper+0x78/0xb0 [spl]
kthread+0xfb/0x130
? IS_ERR+0x10/0x10 [spl]
? kthread_park+0x90/0x90
ret_from_fork+0x35/0x40

Signed-off-by: George Amanakis <gamanakis@gmail.com>

Pull-request: #10659 part 1/1
Ryan Moeller
FreeBSD: Fix `zfs jail` and add a test

The ioctl was not using zfs_ioctl.

Signed-off-by: Ryan Moeller <ryan@iXsystems.com>

Pull-request: #10658 part 1/1
  • Debian 8 arm (BUILD): cloning zfs -  stdio
  • Debian 8 ppc (BUILD): cloning zfs -  stdio
  • Kernel.org Built-in x86_64 (BUILD): cloning zfs -  stdio
Ryan Moeller
ZTS: Remove bashisms from zfs-tests.sh
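A conversion of the kind this change makes, sketched as a hypothetical example (not the actual zfs-tests.sh diff):

```shell
# bash-only pattern test:  [[ $1 == [Yy]* ]]
# POSIX equivalent using case, which works under any /bin/sh:
is_yes() {
    case $1 in
        [Yy]*) return 0 ;;
        *)     return 1 ;;
    esac
}
is_yes Yes && echo "portable match"
```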

Signed-off-by: Ryan Moeller <ryan@iXsystems.com>

Pull-request: #10640 part 1/1
Richard Laager
Fix another dependency loop

zfs-load-key-DATASET.service was gaining an
After=systemd-journald.socket due to its stdout/stderr going to the
journal (which is the default).  systemd-journald.socket has an After
(via RequiresMountsFor=/run/systemd/journal) on -.mount.  If the root
filesystem is encrypted, -.mount gets an After on
zfs-load-key-DATASET.service.

By setting stdout and stderr to null on the key load services, we avoid
this loop.
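In unit-file terms the fix corresponds to the directives below; the snippet writes and shows such a fragment at a temporary path purely for illustration (the real change patches the generated zfs-load-key-* units directly):

```shell
# Illustrative only: StandardOutput/StandardError=null detach a unit
# from the journal, which removes the implicit
# After=systemd-journald.socket ordering and breaks the loop.
fragment="${TMPDIR:-/tmp}/zfs-load-key-example.conf"
cat > "$fragment" <<'EOF'
[Service]
StandardOutput=null
StandardError=null
EOF
cat "$fragment"
```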

Signed-off-by: Richard Laager <rlaager@wiktel.com>
Closes: #10356

Pull-request: #10388 part 2/2
Richard Laager
Fix a dependency loop

When generating units with zfs-mount-generator, if the pool is already
imported, zfs-import.target is not needed.  This avoids a dependency
loop on root-on-ZFS systems:
  systemd-random-seed.service After (via RequiresMountsFor)
  var-lib.mount After
  zfs-import.target After
  zfs-import-{cache,scan}.service After
  cryptsetup.service After
  systemd-random-seed.service

Signed-off-by: Richard Laager <rlaager@wiktel.com>

Pull-request: #10388 part 1/2
Michael Niewöhner
well, I didn't say 'finally', did I?

Signed-off-by: Michael Niewöhner <foss@mniewoehner.de>

Pull-request: #10278 part 26/26
Michael Niewöhner
finally fix it....

Signed-off-by: Michael Niewöhner <foss@mniewoehner.de>

Pull-request: #10278 part 25/25
Brian Behlendorf
[WIP] Distributed Parity (dRAID) Feature

WARNING: This is still work in progress.  The user interface
and on-disk format have changed from previous versions of this
PR.  It is not compatible with previous versions.  The on-disk
format is not finalized and may continue to change in future
versions.

This patch adds a new top-level vdev type called dRAID, which
stands for Distributed parity RAID.  This pool configuration
allows all dRAID vdevs to participate when rebuilding to a hot
spare device.  This can substantially reduce the total time
required to restore full parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`.  No additional information is required to create
the pool and reasonable default values will be chosen based on
the number of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>

Unlike raidz, additional optional dRAID configuration values can
be provided as part of the draid type as colon-separated values.
This allows administrators to fully specify a layout for either
performance or capacity reasons.  The supported options include:

  - draid[:<groups>g] - Redundancy groups
  - draid[:<spares>s] - Distributed hot spares (default 1)
  - draid[:<data>d]   - Data devices per group
  - draid[:<iter>i]   - Iterations to perform when generating
                        a valid dRAID mapping (default 3)
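The colon-separated specifiers compose as in the commands below (shown as comments since they need real devices); `parse_draid` is our own toy helper for illustration, not OpenZFS code:

```shell
# Example invocations (not executed here):
#   zpool create tank draid2 sda sdb sdc sdd sde sdf
#   zpool create tank draid2:2g:1s sda sdb sdc sdd sde sdf
#
# parse_draid: toy parser showing how the specifiers decompose.
parse_draid() {
    spec=$1
    parity=$(printf '%s' "$spec" | sed -n 's/^draid\([0-9]\).*/\1/p')
    groups=$(printf '%s' "$spec" | sed -n 's/.*:\([0-9][0-9]*\)g.*/\1/p')
    spares=$(printf '%s' "$spec" | sed -n 's/.*:\([0-9][0-9]*\)s.*/\1/p')
    echo "parity=$parity groups=$groups spares=$spares"
}
parse_draid draid2:2g:1s
```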

As part of adding test coverage for the new dRAID vdev type
the following options were added to the ztest command.  These
options are leveraged by the zloop.sh test script to test a
wide range of dRAID configurations.

  -K draid|raidz|random -- kind of RAID to test
  -D <value> -- dRAID data drives per redundancy group
  -G <value> -- dRAID redundancy group count
  -S <value> -- dRAID distributed spare drives
  -R <value> -- RAID parity (raidz or dRAID)
  -L        -- (Existing -G (dump log option) was renamed -L)

The zpool_create, zpool_import, redundancy, replacement and
fault test groups have all been updated to provide test coverage
for the dRAID feature.

TODO:
- [x] - Rebased on master, will be frequently rebased from now on.
- [x] - Add dRAID config validation functionality.
- [x] - Enforced reasonable defaults to prevent harmful configs.
- [x] - Move common dRAID functions to zcommon.
- [x] - Replaced `draidcfg` command with `zpool create ...`.
- [x] - Add functionality to load/save known dRAID layouts.
- [x] - Convert custom dRAID debugging to normal ZFS debugging.
- [x] - Cleaned up 'zpool status' output.
- [x] - Permutations for 255 device pool reduce to fit in label.
- [x] - Rebuild works with virtual hot spare and physical device.
- [x] - Allow adding new top-level dRAID vdevs (no removal).
- [x] - Logical spares are now mandatory for dRAID.
- [x] - Update `ztest` to add dRAID pools to its pool layouts.
- [x] - User commands updated to detect dRAID kmod support.
- [x] - Resolve checksum errors for non-uniform groups in pools.
- [x] - Investigate reducing the permutations size in the label.
- [x] - Review and update the sequential rebuild code.
- [x] - Debug (or remove) dRAID mirror code (currently disabled).
- [x] - Add `zpool replace` option to request rebuild or resilver.
- [x] - Add new and extend existing ZTS test cases.
- [x] - Investigate stale labels on disk preventing pool import.
- [x] - Verify checksum errors are reported correctly (zinject).
- [x] - Support 'zpool detach/attach' for rebuild during sparing.
- [x] - Add support for ZED to kick in logical spares.
- [x] - Developer documentation for vdev_rebuild.c
- [x] - Verify gang block handling works correctly.
- [x] - Verify rebuild/resilver `zpool status` reporting.
- [x] - Update packaging as needed.
- [x] - Add new and extend existing ZTS test cases.
- [x] - Documentation updates (man pages, comments, wiki, etc).
- [x] - Verify corruption repair works correctly.
- [x] - Implement vdev_xlate() to support initialize and trim.
- [x] - Developer documentation for vdev_draid.c
- [ ] - Performance optimization / analysis.

Future work:
- [ ] - Add a utility to generate known balanced layouts.
- [ ] - Generate known balanced dRAID layouts for common configurations.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Srikanth N S <nsrikanth@cray.com>
Co-authored-by: Stuart Maybee <smaybee@cray.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>

External-issue: ZFS-12 ZFS-35 ZFS-36 ZFS-17 ZFS-56 ZFS-95 ZFS-96
External-issue: ZFS-100 ZFS-103 ZFS-106 ZFS-110 ZFS-111 ZFS-117
External-issue: ZFS-137 ZFS-139 ZFS-202
Issue #9558

Pull-request: #10102 part 1/1
Matthew Ahrens
Merge remote-tracking branch 'origin/upstreams/master' into raidz

Pull-request: #8853 part 12/12