
Console View


Gregor Kopka
Zpool iostat: remove latency/queue scaling (#7694)

Bandwidth and iops are averages per second, while the *_wait figures
are averages per request (for latencies) or, for queue depths, an
instantaneous measurement taken at the end of an interval (according
to man zpool).

When calculating the first two, it makes sense to compute
x/interval_duration (x being the increase in total bytes or number of
requests over the duration of the interval, with interval_duration in
seconds) to 'scale' from amount per interval to amount per second.

But applying the same math to the latter (the *_wait latencies and
queue depths) is wrong, as there is no interval_duration component in
those values (they are either time/requests, which already gives
average_time/request, or an absolute number).

Because of this bug, the only correct continuous *_wait figures for
both latencies and queue depths from 'zpool iostat -l' are those with
duration=1, since the wrong math then cancels itself out (x/1 is a
no-op).

This removes temporal scaling from latency and queue depth figures.
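
For illustration only, a minimal sketch of the distinction (hypothetical
values and names, not the actual zpool(8) source): counters that
accumulate over the interval are divided by its length, while *_wait and
queue figures are reported as-is.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        double interval_sec = 5.0;        /* duration of the sample interval */
        uint64_t bytes_delta = 52428800;  /* bytes moved during the interval */
        uint64_t ops_delta = 1200;        /* requests completed in the interval */
        double wait_avg_us = 850.0;       /* already time-per-request */
        uint64_t queue_depth = 12;        /* instantaneous sample at interval end */

        /* Throughput: amount per interval -> amount per second. */
        printf("bandwidth: %.0f B/s\n", (double)bytes_delta / interval_sec);
        printf("iops:      %.0f ops/s\n", (double)ops_delta / interval_sec);

        /*
         * Latency and queue depth carry no interval_duration component, so
         * dividing them by interval_sec (the removed behaviour) is only
         * harmless when interval_sec == 1.
         */
        printf("*_wait:    %.0f us\n", wait_avg_us);
        printf("queue:     %llu\n", (unsigned long long)queue_depth);
        return 0;
    }
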
Closes: #7694

Signed-off-by: Gregor Kopka <gregor@kopka.net>

Pull-request: #7945 part 1/1
  • Debian 8 arm (BUILD): cloning zfs -  stdio
  • Debian 8 ppc64 (BUILD): cloning zfs -  stdio
  • Debian 8 ppc (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 aarch64 (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 i386 (BUILD): cloning zfs -  stdio
Prakash Surya
WIP: Concurrent modifications to "/etc/dfs/sharetab" do not work

Pull-request: #7944 part 1/1
Brian Behlendorf
Fix zfs_write() / mmap update_time() lock inversion

When a page is faulted in by filemap_page_mkwrite() this function
may be called by update_time() with the file's mmap_sem held.
Therefore it's necessary to use TXG_NOWAIT since we cannot release
the mmap_sem, and even if we could, it would be undesirable to
delay the page fault.  TXG_NOTHROTTLE will be set as needed to
bypass the write throttle.  In the unlikely case the transaction
cannot be assigned, set z_atime_dirty=1 so that at least the times
will be updated when the file is closed.
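
A condensed sketch of that pattern (simplified, not the actual patch;
the surrounding function and the unconditional OR of TXG_NOTHROTTLE are
assumptions for brevity, and the fragment will not compile on its own):

    static void
    zfs_sketch_update_time(znode_t *zp)
    {
        dmu_tx_t *tx = dmu_tx_create(ZTOZSB(zp)->z_os);

        dmu_tx_hold_sa(tx, zp->z_sa_hdl, B_FALSE);

        /*
         * The caller holds mmap_sem, so we must not sleep waiting for a
         * new txg; TXG_NOWAIT (with TXG_NOTHROTTLE as needed) avoids both
         * blocking and the write throttle.
         */
        if (dmu_tx_assign(tx, TXG_NOWAIT | TXG_NOTHROTTLE) != 0) {
            /* Could not assign: defer, times are flushed on close. */
            zp->z_atime_dirty = 1;
            dmu_tx_abort(tx);
            return;
        }

        /* ... update the on-disk timestamps under this tx ... */
        dmu_tx_commit(tx);
    }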

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>

Pull-request: #7942 part 1/1
  • Amazon 2 x86_64 (BUILD): cloning zfs -  stdio
  • Debian 8 arm (BUILD): cloning zfs -  stdio
  • Debian 8 ppc64 (BUILD): cloning zfs -  stdio
  • Debian 8 ppc (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 aarch64 (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 i386 (BUILD): cloning zfs -  stdio
  • CentOS 6 x86_64 (BUILD): cloning zfs -  stdio
  • CentOS 7 x86_64 (BUILD): cloning zfs -  stdio
  • Debian 9 x86_64 (BUILD): cloning zfs -  stdio
  • Kernel.org Built-in x86_64 (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 x86_64 (BUILD): cloning zfs -  stdio
  • Ubuntu 18.04 x86_64 (BUILD): cloning zfs -  stdio
  • Ubuntu 17.10 x86_64 (STYLE): cloning zfs -  stdio
  • Amazon 2 x86_64 Release (TEST): zfstests failed - stdio console
  • CentOS 6 x86_64 (TEST): zimport failed - stdio
  • CentOS 7 x86_64 (TEST): zfstests failed - stdio console
  • Ubuntu 18.04 x86_64 (TEST): zfstests failed - stdio console tests
Prakash Surya
Verify 'zfs destroy' will unshare the dataset

This change adds a new test case to the zfs-test suite to verify that
when 'zfs destroy' is used on a shared dataset, the dataset will be
unshared after the destroy operation completes.

Signed-off-by: Prakash Surya <prakash.surya@delphix.com>

Requires-builders: all

Pull-request: #7941 part 2/2
Prakash Surya
Fix "zfs destroy" when "sharenfs=on" is used

When using "zfs destroy" on a dataset that is using "sharenfs=on" and
has been automatically exported (by libzfs), the dataset will not be
automatically unexported as it should be. This workflow appears to have
been broken by this commit: 3fd3e56cfd543d7d7a1bf502bfc0db6e24139668

In that change, the "zfs_unmount" function was modified to use the
"mnt.mnt_special" field when determining the mount point that is being
unmounted, rather than "mnt.mnt_mountp".

As a result, when "mntpt" is passed into "zfs_unshare_proto", its value
is now the dataset name rather than the mountpoint. Thus, when this
value is used with the "is_shared" function (via "zfs_unshare_proto") it
will not find a match (since that function assumes it'll be passed the
mountpoint) and incorrectly reports that the dataset is not shared.

This can be easily reproduced with the following commands:

    $ sudo zpool create tank xvdb
    $ sudo zfs create -o sharenfs=on tank/fish
    $ sudo zfs destroy tank/fish

    $ sudo zfs list -r tank
    NAME  USED  AVAIL  REFER  MOUNTPOINT
    tank  97.5K  7.27G    24K  /tank

    $ sudo exportfs
    /tank/fish      <world>
    $ sudo cat /etc/dfs/sharetab
    /tank/fish      -      nfs    rw,crossmnt

At this point, the "tank/fish" filesystem doesn't exist, but it's still
listed as exported when looking at "exportfs" and "/etc/dfs/sharetab".

Also note, this change brings us back in-sync with the illumos code, as
it pertains to this one line; on illumos, "mnt.mnt_mountp" is used.
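
A hedged sketch of the logic described above (not the verbatim diff;
the surrounding checks in zfs_unmount() and the local variables are
elided or assumed):

    /*
     * In zfs_unmount(): the cached path later handed to
     * zfs_unshare_proto() must be the mount point, not the dataset
     * name that mnt_special holds on Linux.
     */
    if (mountpoint != NULL) {
        mntpt = zfs_strdup(hdl, mountpoint);
    } else if (libzfs_mnttab_find(hdl, zhp->zfs_name, &entry) == 0) {
        /* previously (broken): mntpt = zfs_strdup(hdl, entry.mnt_special); */
        mntpt = zfs_strdup(hdl, entry.mnt_mountp);
    }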

Co-authored-by: George Wilson <george.wilson@delphix.com>
Signed-off-by: Prakash Surya <prakash.surya@delphix.com>

Requires-builders: all

Pull-request: #7941 part 1/2
Prakash Surya
fixup: link #6143 to "rollback_003_pos" failure

Pull-request: #7941 part 3/4
Prakash Surya
Fix "zfs destroy" when "sharenfs=on" is used

When using "zfs destroy" on a dataset that is using "sharenfs=on" and
has been automatically exported (by libzfs), the dataset will not be
automatically unexported as it should be. This workflow appears to have
been broken by this commit: 3fd3e56cfd543d7d7a1bf502bfc0db6e24139668

In that change, the "zfs_unmount" function was modified to use the
"mnt.mnt_special" field when determining the mount point that is being
unmounted, rather than "mnt.mnt_mountp".

As a result, when "mntpt" is passed into "zfs_unshare_proto", it's value
is now the dataset name rather than the mountpoint. Thus, when this
value is used with the "is_shared" function (via "zfs_unshare_proto") it
will not find a match (since that function assumes it'll be passed the
mountpoint) and incorrectly reports that the dataset is not shared.

This can be easily reproduced with the following commands:

    $ sudo zpool create tank xvdb
    $ sudo zfs create -o sharenfs=on tank/fish
    $ sudo zfs destroy tank/fish

    $ sudo zfs list -r tank
    NAME  USED  AVAIL  REFER  MOUNTPOINT
    tank  97.5K  7.27G    24K  /tank

    $ sudo exportfs
    /tank/fish      <world>
    $ sudo cat /etc/dfs/sharetab
    /tank/fish      -      nfs    rw,crossmnt

At this point, the "tank/fish" filesystem doesn't exist, but it's still
listed as exported when looking at "exportfs" and "/etc/dfs/sharetab".

Also note, this change brings us back in-sync with the illumos code, as
it pertains to this one line; on illumos, "mnt.mnt_mountp" is used.

Co-authored-by: George Wilson <george.wilson@delphix.com>
Signed-off-by: Prakash Surya <prakash.surya@delphix.com>

Requires-builders: all

Pull-request: #7941 part 2/4
Prakash Surya
Verify 'zfs destroy' will unshare the dataset

Pull-request: #7941 part 1/4
Prakash Surya
fixup: link #6143 to "rollback_003_pos" failure

Pull-request: #7941 part 3/3
Prakash Surya
Fix "zfs destroy" when "sharenfs=on" is used

Pull-request: #7941 part 2/3
Prakash Surya
Verify 'zfs destroy' will unshare the dataset

Pull-request: #7941 part 1/3
Prakash Surya
Fix "zfs destroy" when "sharenfs=on" is used

Pull-request: #7941 part 2/2
Prakash Surya
Verify 'zfs destroy' will unshare the dataset

Pull-request: #7941 part 1/2
Gregor Kopka
Fix flake8 style warnings (issue #7925)

Ran zts-report.py and test-runner.py from ./tests/test-runner/bin/
through 2to3 (https://docs.python.org/2/library/2to3.html).
Checked the result, fixed the 'maxint' -> 'maxsize' rename that 2to3
missed, and addressed the complaints Buildbot had.

Signed-off-by: Gregor Kopka <gregor@kopka.net>
Closes #7925

Pull-request: #7929 part 1/1
  • Amazon 2 x86_64 (BUILD): cloning zfs -  stdio
  • Debian 8 arm (BUILD): cloning zfs -  stdio
  • Debian 8 ppc64 (BUILD): cloning zfs -  stdio
  • Debian 8 ppc (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 aarch64 (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 i386 (BUILD): cloning zfs -  stdio
  • CentOS 6 x86_64 (BUILD): cloning zfs -  stdio
  • CentOS 7 x86_64 (BUILD): cloning zfs -  stdio
  • Debian 9 x86_64 (BUILD): cloning zfs -  stdio
  • Fedora 28 x86_64 (BUILD): cloning zfs -  stdio
  • Kernel.org Built-in x86_64 (BUILD): cloning zfs -  stdio
  • Ubuntu 16.04 x86_64 (BUILD): cloning zfs -  stdio
  • Ubuntu 18.04 x86_64 (BUILD): cloning zfs -  stdio
loli10K
Verify ZED detects physically removed L2ARC device

This commit adds a new test case to the ZFS Test Suite to verify ZED
can detect when a cache device is physically removed from a running
system.

Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Requires-builders: test

Pull-request: #7926 part 1/1
loli10K
Verify ZED detects physically removed L2ARC device

Pull-request: #7926 part 1/1
Paul Dagnelie
zfs_tests: get rid of Illumos dd

Illumos dd was ported to Linux because it has a stride option that
GNU dd lacks. This option is required by some redacted send tests.
As a replacement, a new simple tool was created to provide the
needed functionality.
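
As a rough illustration of the functionality involved, here is a
minimal, hypothetical stride-copy sketch (file names, block size, and
stride are made up; this is not the tool added by this change):

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Copy `count` blocks of `bs` bytes, advancing `stride` blocks
     * between copies in both the input and output files.
     */
    int main(void)
    {
        const long bs = 4096, stride = 8, count = 16;
        char *buf = malloc(bs);
        FILE *in = fopen("infile", "rb");
        FILE *out = fopen("outfile", "wb");

        if (buf == NULL || in == NULL || out == NULL) {
            perror("setup");
            return 1;
        }
        for (long i = 0; i < count; i++) {
            long off = i * stride * bs;
            if (fseek(in, off, SEEK_SET) != 0 ||
                fseek(out, off, SEEK_SET) != 0)
                break;
            size_t n = fread(buf, 1, bs, in);
            if (n == 0)
                break;              /* end of input */
            fwrite(buf, 1, n, out);
        }
        free(buf);
        fclose(in);
        fclose(out);
        return 0;
    }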

Pull-request: #7846 part 7/7
Paul Dagnelie
Make the removal thread do more work

Pull-request: #7846 part 6/7
Paul Dagnelie
Fix various tests

Pull-request: #7846 part 5/7
Paul Dagnelie
Add params to man file, fix test #!

Pull-request: #7846 part 4/7
Paul Dagnelie
bug fixes

Pull-request: #7846 part 3/7
Paul Dagnelie
DLPX-58193 dsl_dataset_hold_obj can leak bookmarks

Reviewed at: http://reviews.delphix.com/r/39617/

Pull-request: #7846 part 2/7
Paul Dagnelie
Implement Redacted Send/Receive

Pull-request: #7846 part 1/7
Paul Dagnelie
Make the removal thread do more work

Pull-request: #7846 part 6/6
Paul Dagnelie
Fix various tests

Pull-request: #7846 part 5/6
Paul Dagnelie
Add params to man file, fix test #!

Pull-request: #7846 part 4/6
Paul Dagnelie
bug fixes

Pull-request: #7846 part 3/6
Paul Dagnelie
DLPX-58193 dsl_dataset_hold_obj can leak bookmarks

Reviewed at: http://reviews.delphix.com/r/39617/

Pull-request: #7846 part 2/6
Paul Dagnelie
Implement Redacted Send/Receive

Pull-request: #7846 part 1/6