> Without the change there would be no way, other than recreating a user/group
> that occupies the same UID/GID, to revoke a permission granted to an unknown user.
This issue was reported yesterday on the ZFS on Linux mailing list: it
would be nice to get it fixed, but maybe we should add a new
Only filesystems and volumes are valid 'zfs remap' parameters: when passed a
snapshot name, zfs_remap_indirects() does not handle the EINVAL returned by
libzfs_core, which trips an assertion and consequently crashes the process.
```
loli@openindiana:~$ uname -a
SunOS openindiana 5.11 master
```
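A minimal sketch of the kind of check that avoids the crash. Note that `fake_lzc_remap()` and `remap_with_check()` below are illustrative stand-ins, not the real libzfs_core/libzfs functions:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative stand-in for lzc_remap(): libzfs_core only accepts
 * filesystems and volumes, so it returns EINVAL for snapshot names
 * (which contain '@') and bookmark names (which contain '#').
 */
int
fake_lzc_remap(const char *name)
{
	if (strchr(name, '@') != NULL || strchr(name, '#') != NULL)
		return (EINVAL);
	return (0);
}

/*
 * Sketch of the fix: report EINVAL as a user-visible error instead of
 * feeding it to an assertion that crashes the process.
 */
int
remap_with_check(const char *name)
{
	int err = fake_lzc_remap(name);

	if (err == EINVAL)
		(void) fprintf(stderr, "cannot remap '%s': only "
		    "filesystems and volumes are supported\n", name);
	return (err);
}
```

The point is simply that the error code must be inspected and surfaced, not asserted away.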
Running `::zfs_params` on a semi-recent Illumos build (8a051e3a96) produces the
following warning:
```
mdb: variable zfetch_block_cap not found: unknown symbol name
```
Full output:
```
loli@openindiana:~$ uname -a
SunOS openindiana 5.11 master-0-g8a051e3a96 i86pc i386 i86pc
loli@openindiana
```
condition, sorry I wasn't clear. Full output when the test
fails:
```
loli@openindiana:~$ export DISKS='c4t0d0 c4t1d0 c4t2d0'
loli@openindiana:~$ ppriv -s EIP=basic -e /opt/zfs-tests/bin/zfstest -c
/opt/zfs-tests/runfiles/illumos-8940.run
Test: /opt/zfs-tests/tests/functional/removal/remove_mirror
```
If we are in the middle of an incremental 'zfs receive', the child .../%recv
will exist. If we run 'zfs promote' on .../%recv, it will "work", but then zfs
gets confused about the status of the new dataset.
Attempting this promote should be an error. Similarly, renaming .../%recv
datasets
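A hedged sketch of the kind of guard this implies. The helper name below is made up (the real checks live in the promote/rename code paths); it relies only on the fact that ZFS reserves '%' for internal dataset names:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative helper (not the actual illumos function): '%' is
 * reserved for ZFS-internal datasets such as the .../%recv clone that
 * exists during an in-progress 'zfs receive', so user-driven
 * operations like 'zfs promote' and 'zfs rename' should refuse any
 * name containing it.
 */
bool
dataset_name_is_internal(const char *name)
{
	return (strchr(name, '%') != NULL);
}
```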
When replacing a faulted device which was previously handled by a spare,
multiple levels of nested interior VDEVs will be present in the pool
configuration: `get_replication()` needs to handle this situation gracefully to
let `zpool` add new devices to the pool:
```
root@openindiana:~#
```
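The idea behind the fix can be sketched on a toy vdev tree (the struct and function below are illustrative models, not the real nvlist-based config handling): interior "replacing" and "spare" vdevs may nest, and the walk has to descend through any number of them to reach the underlying device.

```c
#include <assert.h>
#include <string.h>

/* Toy vdev node standing in for the real nvlist-based pool config. */
typedef struct toy_vdev {
	const char *type;	/* "disk", "replacing", "spare", ... */
	struct toy_vdev *children[4];
	int nchildren;
} toy_vdev_t;

/*
 * Descend through nested interior vdevs created by sparing and
 * replacement; their first child carries the original device.
 */
const char *
toy_effective_type(const toy_vdev_t *vd)
{
	while (strcmp(vd->type, "replacing") == 0 ||
	    strcmp(vd->type, "spare") == 0)
		vd = vd->children[0];
	return (vd->type);
}

/* Example config: spare -> replacing -> disk, two nested interior levels. */
toy_vdev_t toy_disk0 = { "disk", { NULL }, 0 };
toy_vdev_t toy_disk1 = { "disk", { NULL }, 0 };
toy_vdev_t toy_repl = { "replacing", { &toy_disk0, &toy_disk1 }, 2 };
toy_vdev_t toy_spare = { "spare", { &toy_repl, &toy_disk1 }, 2 };
```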
`zfs send -t ` for an incremental send should be able to resume
successfully when sending to the same pool: a subtle issue in
`zfs_iter_children()` currently prevents this.
Because resuming from a token requires a "guid" -> "dataset" mapping
(`guid_to_name()`), we have to walk the whole
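The guid-to-name step can be modeled like this (toy table and names, purely illustrative): the resume token carries only a guid, so every dataset, including children, has to be visited until the guid matches.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy dataset table standing in for a pool-wide dataset walk. */
typedef struct {
	uint64_t guid;
	const char *name;
} toy_dataset_t;

toy_dataset_t toy_datasets[] = {
	{ 0x1111, "tank" },
	{ 0x2222, "tank/fs" },
	{ 0x3333, "tank/fs@snap1" },
};

/*
 * A resume token only records the guid of the dataset, so mapping it
 * back to a name (as guid_to_name() does in libzfs) has to visit
 * every dataset until the guid matches.
 */
const char *
toy_guid_to_name(uint64_t guid)
{
	size_t n = sizeof (toy_datasets) / sizeof (toy_datasets[0]);

	for (size_t i = 0; i < n; i++) {
		if (toy_datasets[i].guid == guid)
			return (toy_datasets[i].name);
	}
	return (NULL);
}
```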
If we're creating a pool with version >= SPA_VERSION_DSL_SCRUB (v11), we need to
account for the additional space needed by the origin dataset, which will also be
snapshotted: "poolname"+"/"+"$ORIGIN"+"@"+"$ORIGIN".
Enforce this limit in `pool_namecheck()`.
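A minimal sketch of the length check this implies. The constant and function name below are illustrative; the real check in `pool_namecheck()` uses the actual dataset-name limit:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define	TOY_MAX_NAME_LEN	256	/* illustrative dataset name limit */

/*
 * Pools at or above SPA_VERSION_DSL_SCRUB implicitly snapshot the
 * origin dataset as "<pool>/$ORIGIN@$ORIGIN", so the pool name must
 * leave room for that suffix as well.  Toy version of the check.
 */
int
toy_pool_namecheck(size_t pool_name_len)
{
	size_t suffix_len = strlen("/$ORIGIN@$ORIGIN");

	if (pool_name_len + suffix_len >= TOY_MAX_NAME_LEN)
		return (ENAMETOOLONG);
	return (0);
}
```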
Ported from
I'm sorry, I forgot to report here that "_panic: allocating allocated segment_"
is the same bug happening on non-debug builds. I only have a debug build of
Illumos, but the troubleshooting was originally done to fix
https://github.com/zfsonlinux/zfs/issues/6315, which is a non-debug ZFSonLinux
Illumos 4080 inadvertently allows `zpool clear` on readonly pools: fix this by
reintroducing a check (`POOL_CHECK_READONLY`) in the `zfs_ioc_clear`
registration code, because I don't think we should be allowed to clear readonly
pools. Probably.
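A toy model of the registration-flag gate (the flag names mimic the real POOL_CHECK_* convention, but the function and constants below are illustrative):

```c
#include <assert.h>
#include <errno.h>

#define	TOY_POOL_FLAG_READONLY	(1 << 0)	/* pool imported read-only */

#define	TOY_CHECK_NONE		0
#define	TOY_CHECK_READONLY	(1 << 0)	/* mimics POOL_CHECK_READONLY */

/*
 * Sketch of the ioctl gate: when an ioctl is registered with the
 * readonly check (as zfs_ioc_clear should be), it is rejected on
 * pools imported read-only.
 */
int
toy_pool_status_check(int pool_flags, int check_flags)
{
	if ((check_flags & TOY_CHECK_READONLY) &&
	    (pool_flags & TOY_POOL_FLAG_READONLY))
		return (EROFS);
	return (0);
}
```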
Completely related to this, when we try to `zpool
When iterating over the input nvlist in `dsl_props_set_sync_impl()`, we
don't preserve the nvpair name before looking up `ZPROP_VALUE`, so when we
later go to process it, `nvpair_name()` is always "value" instead of the actual
property name.
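A toy model of the ordering bug (the structs below are illustrative; the real code uses nvlists): each input pair's value is itself a list holding a "value" entry, so the outer name must be saved before descending.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Toy stand-in for an nvpair: a name, plus either a string value or a
 * nested list (modeled here as a single inner pair).
 */
typedef struct toy_pair {
	const char *name;
	const char *strval;
	struct toy_pair *inner;
} toy_pair_t;

toy_pair_t toy_inner = { "value", "on", NULL };
toy_pair_t toy_outer = { "compression", NULL, &toy_inner };

/*
 * Buggy order: descend to the "value" entry first, then ask for the
 * name -- the answer is always "value".
 */
const char *
toy_name_after_descend(const toy_pair_t *pair)
{
	return (pair->inner->name);
}

/* Fixed order: preserve the nvpair name before looking up the value. */
const char *
toy_name_preserved(const toy_pair_t *pair)
{
	return (pair->name);
}
```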
These are properties set on a filesystem when
When this change was tested on Linux, zfs_create_013_pos did in fact mark those
as UNSUPPORTED
(http://build.zfsonlinux.org/builders/Amazon%202015.09%20x86_64%20Release%20%28TEST%29/builds/2293/steps/shell_9/logs/log)
```
Test:
```
Originally discovered and reported on ZFS on Linux; copy/pasting the relevant
contents here:
If we manage to export the pool on which we are creating a dataset (filesystem
or zvol) between the ```libzfs`zfs_create()``` entrypoint and the
```libzfs`zpool_open()``` call (for which we never check the return