[Kernel-packages] [Bug 1687791] Re: Install of kdump-tools fails

2018-06-30 Thread Matt LaPlante
Confirmed the issue on Bionic. Also confirmed that the suggested patch in
bug 1661629, comment #3, fixed the install for me.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to makedumpfile in Ubuntu.
https://bugs.launchpad.net/bugs/1687791

Title:
  Install of kdump-tools fails

Status in makedumpfile package in Ubuntu:
  Confirmed

Bug description:
  When installing Ubuntu xenial via netimage, the installation fails with an
  error while configuring kdump-tools. /var/log/syslog says:

  ...

  May  2 22:15:17 in-target: Setting up grub2 (2.02~beta2-36ubuntu3.9) ...^M
  May  2 22:15:17 in-target: Setting up lxc-common (2.0.7-0ubuntu1~16.04.2) ...^M
  May  2 22:15:17 in-target: Running in chroot, ignoring request.^M
  May  2 22:15:17 in-target: invoke-rc.d: policy-rc.d denied execution of reload.^M
  May  2 22:15:17 in-target: Setting up grub-gfxpayload-lists (0.7) ...^M
  May  2 22:15:17 in-target: Processing triggers for libc-bin (2.23-0ubuntu7) ...^M
  May  2 22:15:17 in-target: Processing triggers for systemd (229-4ubuntu17) ...^M
  May  2 22:15:17 in-target: Processing triggers for ureadahead (0.100.0-19) ...^M
  May  2 22:15:17 in-target: Processing triggers for dbus (1.10.6-1ubuntu3.3) ...^M
  May  2 22:15:17 in-target: Errors were encountered while processing:^M
  May  2 22:15:17 in-target:  kdump-tools^M
  May  2 22:15:17 in-target:  linux-crashdump^M
  May  2 22:15:18 in-target: E: Sub-process /usr/bin/dpkg returned an error code (1)
  May  2 22:15:18 in-target: Setting up kdump-tools (1:1.5.9-5ubuntu0.4) ...
  May  2 22:15:18 in-target: dpkg: error processing package kdump-tools (--configure):
  May  2 22:15:18 in-target:  subprocess installed post-installation script returned error exit status 1
  May  2 22:15:18 in-target: dpkg: dependency problems prevent configuration of linux-crashdump:
  May  2 22:15:18 in-target:  linux-crashdump depends on kdump-tools; however:
  May  2 22:15:18 in-target:   Package kdump-tools is not configured yet.
  May  2 22:15:18 in-target: 
  May  2 22:15:18 in-target: dpkg: error processing package linux-crashdump (--configure):
  May  2 22:15:18 in-target:  dependency problems - leaving unconfigured
  May  2 22:15:18 in-target: Errors were encountered while processing:
  May  2 22:15:18 in-target:  kdump-tools
  May  2 22:15:18 in-target:  linux-crashdump
  May  2 22:15:18 main-menu[616]: WARNING **: Configuring 'pkgsel' failed with error code 100
  May  2 22:15:18 main-menu[616]: WARNING **: Menu item 'pkgsel' failed.

  
  After this the 'Select and install software' dialog pops up and says that
  'Installation step failed'. If you press '', you get bombed back into the
  'Ubuntu installer main menu'. Choosing the 'Select and install software'
  item from the menu then just puts you in a loop.

  Installer image is 'Linux foo 4.4.0-62-generic #83-Ubuntu SMP Wed Jan
  18 14:10:15 UTC 2017 x86_64 GNU/Linux'.

  The only workaround found so far is:

  cp -p /target/var/lib/dpkg/info/kdump-tools.postinst \
    /target/var/lib/dpkg/info/kdump-tools.postinst.suck
  printf '#!/bin/sh\nexit 0\n' \
    >/target/var/lib/dpkg/info/kdump-tools.postinst
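
  A rough follow-up sketch (my assumption, not something stated in the report):
  with the postinst stubbed out, the interrupted configuration can be finished
  from the installer shell, and the real maintainer script restored afterwards:

  # let dpkg finish configuring the half-installed packages inside the target
  chroot /target dpkg --configure -a
  # once the install completes, put the original postinst back so it can be
  # re-run later (e.g. via dpkg-reconfigure kdump-tools) after the bug is fixed
  cp -p /target/var/lib/dpkg/info/kdump-tools.postinst.suck \
    /target/var/lib/dpkg/info/kdump-tools.postinst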

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/makedumpfile/+bug/1687791/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-03 Thread Matt LaPlante
Yes, please close... thanks again.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-03 Thread Matt LaPlante
Great, thank you. zdb confirms what we already sort of suspected: the
file is ~12T with only 5.5G of real content. I was aware that ZFS
reports on-disk size differently, but the fact that this isn't an
effect of compression threw me off when reading the output.

 Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
      7    5    16K   128K  5.51G     512  11.1T    0.07  ZFS plain file

At any rate, it's still not clear to me how we arrived at this giant
(but sparse) file, since the VM and image tools (qemu-img) only consider
it to be 10G logical. I'm guessing the implication is that the bug lies
with qemu rather than ZFS. My initial concern was storage-layer
corruption, but I guess it's equally plausible that the emulation layer
somehow ran wild and just started massively growing the file out of
bounds on disk.
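
For anyone hitting the same symptom, a minimal sketch (my own, not from the
thread) of how to see the apparent-vs-allocated gap without zdb:

# apparent (sparse) size vs. blocks actually allocated on disk
ls -ls /data/store/vms/plexee/plexee-root
du -h /data/store/vms/plexee/plexee-root                  # allocated, ~5.6G here
du -h --apparent-size /data/store/vms/plexee/plexee-root  # apparent, ~12T here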


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-02 Thread Matt LaPlante
1. Yes, compression is on for the parent and inherited. The ratio only
shows 1.37x, which seems rather low if a 12T file was all zeros except
for >10G?

root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee | grep comp
rpool/DATA/fusion/store/vms/plexee  compressratio     1.37x  -
rpool/DATA/fusion/store/vms/plexee  compression       lz4    inherited from rpool
rpool/DATA/fusion/store/vms/plexee  refcompressratio  1.37x

2. 
root@fusion:~# du -h /data/store/vms/plexee/plexee-root
5.6G    /data/store/vms/plexee/plexee-root

This is what I would expect the VM to be using based on how the file was
provisioned and its internal disk state.
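
As a further sanity check (a sketch on my part, not run above), ZFS's
logicalused property counts only data that was actually written, so the holes
in a sparse file shouldn't inflate it or the compression ratio:

# compare written (pre-compression) bytes with on-disk usage and the ratio
zfs get used,logicalused,compressratio,refcompressratio \
  rpool/DATA/fusion/store/vms/plexee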


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-04-24 Thread Matt LaPlante
A scrub of the pool detected no issues, no errors.
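
For reference, the scrub and its result were checked along these lines (the
exact invocation is my assumption; the pool name rpool comes from the dataset
paths in this thread):

zpool scrub rpool
# after the scrub finishes, look for read/write/checksum errors
zpool status -v rpool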


[Kernel-packages] [Bug 1766308] [NEW] inexplicably large file reported by zfs filesystem

2018-04-23 Thread Matt LaPlante
Public bug reported:

I have a zfs filesystem containing a single qemu kvm disk image. The vm
has been working normally. The image file is only allocated 10G; however,
today I became aware that the file, when examined from the ZFS host
(hypervisor), is reported with an inexplicable, massive size of around
12T. 12T is larger than the pool itself. Snapshots or other filesystem
features should not be involved. I'm suspicious that the file(system?)
has been corrupted.


root@fusion:~# ls -l /data/store/vms/plexee/
total 6164615
-rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
^^ !!

root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
image: /data/store/vms/plexee//plexee-root
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 5.9G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
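
Side note (my suggestion, not run in this report): "corrupt: false" above only
reflects the qcow2 header flag; a fuller consistency pass over the image would
be:

qemu-img check /data/store/vms/plexee/plexee-root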


root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
NAME USED  AVAIL  REFER  MOUNTPOINT
rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  /data/store/vms/plexee


root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
NAME                                PROPERTY              VALUE                   SOURCE
rpool/DATA/fusion/store/vms/plexee  type                  filesystem              -
rpool/DATA/fusion/store/vms/plexee  creation              Mon Mar 26  9:50 2018   -
rpool/DATA/fusion/store/vms/plexee  used                  5.88G                   -
rpool/DATA/fusion/store/vms/plexee  available             484G                    -
rpool/DATA/fusion/store/vms/plexee  referenced            5.88G                   -
rpool/DATA/fusion/store/vms/plexee  compressratio         1.37x                   -
rpool/DATA/fusion/store/vms/plexee  mounted               yes                     -
rpool/DATA/fusion/store/vms/plexee  quota                 none                    default
rpool/DATA/fusion/store/vms/plexee  reservation           none                    default
rpool/DATA/fusion/store/vms/plexee  recordsize            128K                    default
rpool/DATA/fusion/store/vms/plexee  mountpoint            /data/store/vms/plexee  inherited from rpool/DATA/fusion
rpool/DATA/fusion/store/vms/plexee  sharenfs              off                     default
rpool/DATA/fusion/store/vms/plexee  checksum              on                      default
rpool/DATA/fusion/store/vms/plexee  compression           lz4                     inherited from rpool
rpool/DATA/fusion/store/vms/plexee  atime                 off                     inherited from rpool
rpool/DATA/fusion/store/vms/plexee  devices               off                     inherited from rpool
rpool/DATA/fusion/store/vms/plexee  exec                  on                      default
rpool/DATA/fusion/store/vms/plexee  setuid                on                      default
rpool/DATA/fusion/store/vms/plexee  readonly              off                     default
rpool/DATA/fusion/store/vms/plexee  zoned                 off                     default
rpool/DATA/fusion/store/vms/plexee  snapdir               hidden                  default
rpool/DATA/fusion/store/vms/plexee  aclinherit            restricted              default
rpool/DATA/fusion/store/vms/plexee  canmount              on                      default
rpool/DATA/fusion/store/vms/plexee  xattr                 on                      default
rpool/DATA/fusion/store/vms/plexee  copies                1                       default
rpool/DATA/fusion/store/vms/plexee  version               5                       -
rpool/DATA/fusion/store/vms/plexee  utf8only              off                     -
rpool/DATA/fusion/store/vms/plexee  normalization         none                    -
rpool/DATA/fusion/store/vms/plexee  casesensitivity       sensitive               -
rpool/DATA/fusion/store/vms/plexee  vscan                 off                     default
rpool/DATA/fusion/store/vms/plexee  nbmand                off                     default
rpool/DATA/fusion/store/vms/plexee  sharesmb              off                     default
rpool/DATA/fusion/store/vms/plexee  refquota              none                    default
rpool/DATA/fusion/store/vms/plexee  refreservation        none                    default
rpool/DATA/fusion/store/vms/plexee  primarycache          all                     default
rpool/DATA/fusion/store/vms/plexee  secondarycache        all                     default
rpool/DATA/fusion/store/vms/plexee  usedbysnapshots       0                       -
rpool/DATA/fusion/store/vms/plexee  usedbydataset         5.88G                   -
rpool/DATA/fusion/store/vms/plexee  usedbychildren        0                       -
rpool/DATA/fusion/store/vms/plexee  usedbyrefreservation  0                       -
rpool/DATA/fusion/store/vms/plexee  logbias               latency