On Wed, 26 Feb 2020 at 21:16, Michael van Elst <mlel...@serpens.de> wrote:
>
> ci4...@gmail.com (Chavdar Ivanov) writes:
>
> ># vnconfig -c vnd0 /dev/zvol/rdsk/pail/nbsd1
> >vnconfig: /dev/rvnd0: VNDIOCSET: Operation not supported
>
> If zvols aren't broken then this should work:
>
> ccdconfig -c ccd0 0 /dev/zvol/rdsk/pail/nbsd1
>
> and then access /dev/ccd0[a-p]

# ccdconfig -c ccd0 0 /dev/zvol/rdsk/pail/nbsd1
ccdconfig: ioctl (CCDIOCSET): /dev/ccd0d: Block device required
# ccdconfig -c ccd0 0 /dev/zvol/dsk/pail/nbsd1
ccdconfig: ioctl (CCDIOCSET): /dev/ccd0d: Inappropriate ioctl for device

Apparently not, in this case. I think the comments in the thread
suggest that already.

FWIW I tried the sequence below, which could be used to accomplish
what the original question asked for -

Under NetBSD-current:
 - created a zvol
 - iSCSI exported it (now using net/istgt)
 - attached it from a W10 machine, initialized, copied a few files
 - created a snapshot of the zvol, compressed with xz (slow...)
 - deleted some of the files from the W10 mounted iSCSI volume
 - zfs send the snapshot of the volume to a FreeNAS server
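The NetBSD side of the steps above amounts to roughly the following
(a sketch only: the 'pail/nbsd1' names come from earlier in this
thread, the zvol size, snapshot name and file paths are illustrative,
and the istgt export configuration itself is omitted):

```shell
# create a zvol to export over iSCSI (size is arbitrary here)
zfs create -V 10G pail/nbsd1

# ... export pail/nbsd1 via net/istgt, attach it from the W10
# initiator, initialize the disk and copy a few files ...

# snapshot the zvol, then serialize and compress the stream with xz
zfs snapshot pail/nbsd1@before-delete
zfs send pail/nbsd1@before-delete | xz > /tmp/nbsd1.zfs.xz
```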

On the FreeNAS server:
 - xzcat | zfs receive the stream sent from the NetBSD box (obviously
'zfs send | ssh ... zfs receive' will work as well)
 - created an extent, a target, and an associated target on the
FreeNAS server (it did not work properly when I added the extent as a
second LUN to an existing target: the disk was seen as read-only, and
an attempt to online it failed, stating the disk was invalid)
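The receive step on the FreeNAS side corresponds to something like
this (again a sketch; 'tank/nbsd1' is a placeholder for whatever
dataset name the receiving pool uses):

```shell
# receive the xz-compressed stream copied over from the NetBSD box
xzcat /tmp/nbsd1.zfs.xz | zfs receive tank/nbsd1

# or, skipping the intermediate file, pipe directly over ssh
# (run this on the sending NetBSD machine)
zfs send pail/nbsd1@before-delete | ssh freenas zfs receive tank/nbsd1
```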

On the W10 iSCSI initiator:
 - as I was using the same W10 initiator, I had to offline the volume
mounted from the NetBSD iSCSI server - otherwise Windows complains
about the disks having the same GUIDs
 - attached to the newly created target from the W10 box, onlined the
disk, and found the contents from before the deletion, as expected.
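Offlining the original disk on the W10 side can be done from Disk
Management, or from an elevated prompt with diskpart; a sketch (the
disk number 2 here is hypothetical - check the 'list disk' output
first):

```shell
diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> offline disk
```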

Basically a quick test to give me some additional confidence in the
ZFS implementation. Looking good IMHO.


And another hit, just a moment ago.

On OmniOS I can:
...
root@omni2:/export/home/xci
# zfs create -V 4G data/testv

       20-02-26 - 21:58:59

root@omni2:/export/home/xci
# zpool create asd /dev/zvol/dsk/data/testv

       20-02-26 - 21:59:04

root@omni2:/export/home/xci
# zpool list

       20-02-26 - 21:59:36
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
asd    3.75G   648K  3.75G        -         -     0%     0%  1.00x  ONLINE  -
data   15.9G  85.1M  15.8G        -         -     0%     0%  1.00x  ONLINE  -
rpool  31.8G  3.28G  28.5G        -         -     5%    10%  1.00x  ONLINE  -
....

So I can create a pool backed by a zvol; I thought this was not
allowed, but it seems to work there.

Under NetBSD-current, I got a panic doing the same:
...
(gdb) target kvm netbsd.6.core
0xffffffff80224225 in cpu_reboot ()
(gdb) bt
#0  0xffffffff80224225 in cpu_reboot ()
#1  0xffffffff809fdbff in kern_reboot ()
#2  0xffffffff80a3fff9 in vpanic ()
#3  0xffffffff80a400bd in panic ()
#4  0xffffffff80a36cc4 in lockdebug_abort ()
#5  0xffffffff809f2367 in mutex_vector_enter ()
#6  0xffffffff81efda2c in zfsdev_close ()
#7  0xffffffff80abd94c in spec_close ()
#8  0xffffffff80ab0808 in VOP_CLOSE ()
#9  0xffffffff80aa8119 in vn_close ()
#10 0xffffffff81ee55f6 in vdev_disk_close ()
#11 0xffff92870dd15688 in ?? ()
#12 0x0000000000000004 in ?? ()
#13 0xffff9981951269f0 in ?? ()
#14 0xffffffff81ee2f6d in vdev_set_state ()
Backtrace stopped: frame did not save the PC
....


Chavdar
