Can you supply the commands required to reproduce this issue? I'm not
familiar with the test environment you are using.

Thanks

** Changed in: charm-lxd
       Status: Incomplete => In Progress

** Changed in: charm-lxd
       Status: In Progress => New

** Changed in: zfs-linux (Ubuntu)
       Status: New => In Progress

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu)
     Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: charm-lxd
     Assignee: Colin Ian King (colin-king) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1801349

Title:
  zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27

Status in OpenStack LXD Charm:
  New
Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  Test: tests/gate-basic-cosmic-rocky

  As part of its setup, the lxd charm creates a storage pool on the
  block device named in its config.  The test config is:

          lxd_config = {
              'block-devices': '/dev/vdb',
              'ephemeral-unmount': '/mnt',
              'storage-type': 'zfs',
              'overwrite': True
          }
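
  For reference, in case it helps with reproducing this outside the
  charm test run: my understanding (an approximation, not the charm's
  actual code) is that this config boils down to roughly the following
  steps on the unit:

  # umount /mnt                   # approximation of the charm's cleanup step
  # zpool create -f lxd /dev/vdb  # approximation of the charm's pool creation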

  The config drive is normally mounted on /mnt, and the lxd charm
  unmounts it as part of startup.  The /etc/fstab on the unit is:

  # cat /etc/fstab 
  LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
  LABEL=UEFI      /boot/efi       vfat    defaults        0 0
  /dev/vdb        /mnt    auto    defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig       0       2
  /dev/vdc        none    swap    sw,comment=cloudconfig  0       0
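
  A quick way to confirm that the unmount actually took effect, and that
  nothing still holds the device before the pool is created (a suggested
  check, not something the test currently runs):

  # findmnt /mnt          # prints nothing once /mnt is unmounted
  # lsblk -f /dev/vdb     # shows any filesystem signature/mountpoint on the disk
  # fuser -vm /dev/vdb    # lists any processes still using the device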

  
  However, even after unmounting /mnt from /dev/vdb, the zpool create
  command still fails:

  # zpool create -f lxd /dev/vdb
  /dev/vdb is in use and contains a unknown filesystem.
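
  To see what zpool is objecting to, listing the leftover signatures on
  the device might help (a diagnostic suggestion, not something already
  captured in this report):

  # blkid /dev/vdb        # reports any filesystem type/label still present
  # wipefs /dev/vdb       # with no options this only lists signatures; it does not erase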

  
  If /etc/fstab is edited so that /dev/vdb is *never* mounted and the
  unit is then rebooted, the zpool create command succeeds:

  # zpool list
  NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
  lxd   14.9G   106K  14.9G         -     0%     0%  1.00x  ONLINE  -

  # zpool status lxd
    pool: lxd
   state: ONLINE
    scan: none requested
  config:

          NAME        STATE     READ WRITE CKSUM
          lxd         ONLINE       0     0     0
            vdb       ONLINE       0     0     0

  errors: No known data errors

  Something odd is going on with cosmic (18.10) and the combination of
  lxd, zfs, and the kernel.

  lxd version: 3.6
  zfsutils-linux/cosmic,now 0.7.9-3ubuntu6
  Linux: 4.18.0-10-generic

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1801349/+subscriptions
