
Console View


Tags: Architectures Platforms default
Brian Behlendorf
ZTS: import_rewind_device_replaced reliably fails

The import_rewind_device_replaced.ksh test was never entirely reliable
because it depends on MOS data not being overwritten.  The MOS data is
not protected by the snapshot so occasional failures were always
expected.  However, this test is now failing reliably on all platforms
indicating something has changed in the code since the test was marked
"maybe".  Convert the test to a "known" failure until the root cause
is identified and resolved.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>

Pull-request: #12821 part 1/1
наб
contrib/bash_completion.d: fix error spew from __zfs_match_snapshot()

Given:
  /sbin/zfs list filling/a-zvol<TAB> -o space,refratio
The rest of the cmdline gets swallowed by:
  /sbin/zfs list filling/a-zvolcannot open 'filling/a-zvol': operation not applicable to datasets of this type

With -x (fragment):
  + COMPREPLY=($(compgen -W "$(__zfs_match_snapshot)" -- "$cur"))
  +++ __zfs_match_snapshot
  +++ local base_dataset=filling/dziadtop-nowe-duchy
  +++ [[ filling/dziadtop-nowe-duchy != filling/dziadtop-nowe-duchy ]]
  +++ [[ filling/dziadtop-nowe-duchy != '' ]]
  +++ __zfs_list_datasets filling/dziadtop-nowe-duchy
  +++ /sbin/zfs list -H -o name -s name -t filesystem -r filling/dziadtop-nowe-duchy
  +++ tail -n +2
  cannot open 'filling/dziadtop-nowe-duchy': operation not applicable to datasets of this type
  +++ echo filling/dziadtop-nowe-duchy
  +++ echo filling/dziadtop-nowe-duchy@
  ++ compgen -W 'filling/dziadtop-nowe-duchy

This properly completes with:
  $ /sbin/zfs list filling/a-zvol<TAB> -o space,refratio
  filling/a-zvol  filling/a-zvol@
  $ /sbin/zfs list filling/a-zvol<cursor> -o space,refratio

Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>

Pull-request: #12820 part 1/1
Coleman Kane
Linux 5.16: The blk-cgroup.h header is where struct blkcg_gq is defined

The definition of struct blkcg_gq was moved into blk-cgroup.h, which is
a header that's been in Linux since 2015. This is used by
vdev_blkg_tryget() in module/os/linux/zfs/vdev_disk.c. Since the kernel
for CentOS 7 and similar-generation releases doesn't have this header,
its inclusion is guarded by a configure test.

Signed-off-by: Coleman Kane <ckane@colemankane.org>

Pull-request: #12819 part 4/4
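The guarded inclusion described above might look roughly like the following; the HAVE_LINUX_BLK_CGROUP_HEADER macro name is an illustrative assumption, not necessarily the symbol the configure test defines.

```c
/*
 * Sketch (assumption): include blk-cgroup.h only when the configure
 * probe detected it, so CentOS 7-era kernels that lack the header
 * still build. HAVE_LINUX_BLK_CGROUP_HEADER is an illustrative name.
 */
#ifdef HAVE_LINUX_BLK_CGROUP_HEADER
#include <linux/blk-cgroup.h>
#endif

/*
 * vdev_blkg_tryget() in vdev_disk.c can then reference struct blkcg_gq
 * on kernels that ship the header, while older kernels skip it.
 */
```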
Coleman Kane
Linux 5.16: bio_set_dev is no longer a helper macro

This change adds a configure check to determine whether bio_set_dev is
a helper macro. If it is not, overriding its internal call to
bio_associate_blkg() with a macro definition of our own version is no
longer possible, as the compiler won't use the override when compiling
the new inline function replacement implemented in the header. This
change also creates a new vdev_bio_set_dev() function that performs
the same work, and additionally performs the work implemented in
vdev_bio_associate_blkg(), as it is the only caller of that function
in our code. Our custom vdev_bio_associate_blkg() is now only compiled
if bio_set_dev() is a macro in the Linux headers.

Signed-off-by: Coleman Kane <ckane@colemankane.org>

Pull-request: #12819 part 3/4
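A sketch of the replacement wrapper described above, assuming a configure-defined HAVE_BIO_SET_DEV_MACRO guard; the guard name and function body are illustrative, not the exact code from the PR.

```c
/*
 * Sketch (assumption): when bio_set_dev() is an inline function in the
 * kernel headers, it can no longer be shadowed with a #define, so a
 * wrapper does both jobs itself. HAVE_BIO_SET_DEV_MACRO and the body
 * below are illustrative stand-ins.
 */
#ifndef HAVE_BIO_SET_DEV_MACRO
static inline void
vdev_bio_set_dev(struct bio *bio, struct block_device *bdev)
{
	bio->bi_bdev = bdev;
	/*
	 * ...followed by the blkg re-association that the overridden
	 * bio_associate_blkg() / vdev_bio_associate_blkg() used to
	 * perform (body elided here).
	 */
}
#endif
```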
Coleman Kane
Linux 5.16: type member of iov_iter renamed iter_type

The iov_iter->type member was renamed iov_iter->iter_type. However,
while looking into this, it turned out that an iov_iter_type(*iov)
accessor function was introduced in 2018. So if that is present, use
it; otherwise fall back to the existing behavior of directly accessing
type from iov_iter.

Signed-off-by: Coleman Kane <ckane@colemankane.org>

Pull-request: #12819 part 2/4
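The accessor-first fallback chain could be expressed with a compat macro along these lines; the HAVE_* names and the zfs_uio_iter_type() macro are illustrative assumptions.

```c
/*
 * Sketch (assumption): prefer the 2018 iov_iter_type() accessor when
 * configure finds it, otherwise fall back to the renamed or original
 * member. The HAVE_* macro names here are illustrative.
 */
#if defined(HAVE_IOV_ITER_TYPE)
#define	zfs_uio_iter_type(i)	iov_iter_type(i)
#elif defined(HAVE_IOV_ITER_ITER_TYPE)
#define	zfs_uio_iter_type(i)	((i)->iter_type)
#else
#define	zfs_uio_iter_type(i)	((i)->type)
#endif
```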
Coleman Kane
Linux 5.16: block_device_operations->submit_bio now returns void

The return type for the submit_bio member of struct
block_device_operations was changed to no longer return a value.

Signed-off-by: Coleman Kane <ckane@colemankane.org>

Pull-request: #12819 part 1/4
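The signature change is typically handled with a configure-selected definition like the sketch below; HAVE_BDEV_SUBMIT_BIO_RETURNS_VOID and zvol_request_impl() are illustrative names, not the exact code from the PR.

```c
/*
 * Sketch (assumption): pick the submit_bio signature at configure
 * time. On 5.16+ the callback returns void; older kernels expect a
 * blk_qc_t cookie.
 */
#ifdef HAVE_BDEV_SUBMIT_BIO_RETURNS_VOID
static void
zvol_submit_bio(struct bio *bio)
{
	zvol_request_impl(bio);
}
#else
static blk_qc_t
zvol_submit_bio(struct bio *bio)
{
	zvol_request_impl(bio);
	return (BLK_QC_T_NONE);
}
#endif
```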
Manoj Joseph
fix required arguments

Pull-request: #12818 part 2/2
Manoj Joseph
long opts for zdb

Signed-off-by: Manoj Joseph <manoj.joseph@delphix.com>

Pull-request: #12818 part 1/1
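The commit message doesn't show the option table; a minimal getopt_long() sketch of the usual pattern follows. The option names, parse_zdb_opts() helper, and table contents are hypothetical, not the set adopted by the PR.

```c
#include <getopt.h>
#include <stddef.h>

/*
 * Hypothetical sketch of aliasing short zdb flags with long options
 * via getopt_long(). Names are illustrative only.
 */
static const struct option zdb_long_opts[] = {
	{"datasets",	no_argument,	NULL,	'd'},
	{"verbose",	no_argument,	NULL,	'v'},
	{NULL,		0,		NULL,	0}
};

static void
parse_zdb_opts(int argc, char **argv, int *datasets, int *verbose)
{
	int c;

	optind = 1;	/* reset getopt state so the parser is reusable */
	while ((c = getopt_long(argc, argv, "dv",
	    zdb_long_opts, NULL)) != -1) {
		switch (c) {
		case 'd':
			(*datasets)++;	/* both -d and --datasets land here */
			break;
		case 'v':
			(*verbose)++;	/* both -v and --verbose land here */
			break;
		default:
			break;
		}
	}
}
```

With this shape, each long option reuses the short option's switch case, so behavior stays identical for both spellings.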
George Amanakis
Improved zpool status output, list all affected datasets

Currently, determining which datasets are affected by corruption is
a manual process.

The primary difficulty in reporting the list of affected snapshots is
that since the error was initially found, the snapshot where the error
originally occurred in, may have been deleted. To solve this issue, we
add the ID of the head dataset of the original snapshot which the error
was detected in, to the stored error report. Then any time a filesystem
is deleted, the errors associated with it are deleted as well. Any time
a clone promote occurs, we modify reports associated with the original
head to refer to the new head. The stored error reports are identified
by this head ID and the birth time of the block in which the error
occurred; some information about the error itself is also stored.

Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then traverse
through the snapshots of the clone family, terminating a branch if the
block was replaced in a given snapshot. Then we report this information
back to libzfs, and to the zpool status command, where it is displayed
as follows:

pool: test
state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec  3 08:27:57 2021
config:

        NAME        STATE    READ WRITE CKSUM
        test        ONLINE      0    0     0
          sdb       ONLINE      0    0 1.58K

errors: Permanent errors have been detected in the following files:

        test@1:/test.0.0
        /test/test.0.0
        /test/1clone/test.0.0

A new feature flag is introduced to mark the presence of this change, as
well as promotion and backwards compatibility logic. This is an updated
version of #9175. The rebase required fixing the tests, updating the
ABI of libzfs, and updating the man pages.

Signed-off-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>

Pull-request: #12812 part 1/1
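The identifying pieces of a stored error report, as described above, can be summarized in a small struct; the type and field names below are illustrative assumptions, not the on-disk format introduced by the PR.

```c
#include <stdint.h>

/*
 * Sketch (assumption): what identifies a stored error report per the
 * message above. Names are illustrative, not the real layout.
 */
typedef struct stored_error_report {
	uint64_t	head_dataset_id;	/* head dataset the error was detected under */
	uint64_t	birth_txg;		/* birth time of the block with the error */
	int		error;			/* information about the error itself */
} stored_error_report_t;
```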
shaan1337
Update z_sync_writes_cnt in FreeBSD implementation for consistency

Signed-off-by: Shaan Nobee <sniper111@gmail.com>

Pull-request: #12790 part 3/3
shaan1337
Add two new counters, z_sync_writes_cnt and z_async_writes_cnt, to the znode to keep track of active sync and non-sync page writebacks

Do a commit when:
i)  the current writeback is not intended to be synced and there are active sync writebacks
ii) the current writeback is intended to be synced and there are active non-sync page writebacks

This prevents sync page writebacks from accidentally blocking on non-sync page writebacks, which can take several seconds to complete.

Signed-off-by: Shaan Nobee <sniper111@gmail.com>

Pull-request: #12790 part 2/3
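Rules (i) and (ii) above reduce to a small predicate; the function and parameter names below are illustrative stand-ins for logic driven by the znode's z_sync_writes_cnt and z_async_writes_cnt counters.

```c
#include <stdbool.h>

/*
 * Sketch (assumption): should this writeback force a commit? The
 * counters correspond to z_sync_writes_cnt / z_async_writes_cnt in
 * the message above; names here are illustrative.
 */
static bool
writeback_needs_commit(bool for_sync, int sync_writes, int async_writes)
{
	if (!for_sync && sync_writes > 0)
		return (true);	/* (i): non-sync writeback, sync writebacks active */
	if (for_sync && async_writes > 0)
		return (true);	/* (ii): sync writeback, non-sync writebacks active */
	return (false);
}
```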
shaan1337
Add failing test

Signed-off-by: Shaan Nobee <sniper111@gmail.com>

Pull-request: #12790 part 1/3