Thanks to everyone for their help! Yes, dtrace did help: I found that in my
layered driver, the prop_op entry point had an error in setting the [Ss]ize
dynamic property, and apparently that's what ZFS looks for, not just Nblocks!
What took me so long in getting to this error was that the drive
I explored this a bit and found that the ldi_ioctl in my layered driver does
fail, but it fails with an "inappropriate ioctl for device" error, which the
underlying ramdisk driver's ioctl returns. So it doesn't seem like that's an
issue at all (since I know the storage pool creation is successful
With what Edward suggested, I got rid of the ldi_get_size() error by defining
the prop_op entry point appropriately.
However, the zpool create still fails, with zio_wait() returning 22 (EINVAL).
bash-3.00# dtrace -n 'fbt::ldi_get_size:entry{self->t=1;}
fbt::ldi_get_size:entry/self->t/{}
fbt::ldi_get_s
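The one-liner above is cut off in the digest. For reference, a complete script along these lines (a sketch, assuming the intent is to flag the thread on entry and print the value ldi_get_size() returns) might look like:

```
# Sketch: on a return probe, arg1 is the function's return value.
dtrace -n '
    fbt::ldi_get_size:entry              { self->t = 1; }
    fbt::ldi_get_size:return /self->t/   { trace(arg1); self->t = 0; }
' -c 'zpool create adsl-pool /dev/layerzfsminor1'
```

The self->t thread-local predicate restricts the return probe to calls that entered ldi_get_size() on the traced thread, which matters once the -c child spawns other activity.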
UFS does not access the [Nn]blocks and [Ss]ize properties,
but the LDI and specfs do. So if you were to do a stat(2) on your
device before it was opened, you would get an incorrect size reported.
Most native block devices do something like ddi_prop_op_nblocks().
(Search for callers of this function
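To make the suggestion above concrete, a prop_op(9E) entry point for a layered block driver might be sketched as follows. This is an illustration under assumptions, not the poster's actual code: the helper get_lyr_size() is hypothetical and stands in for however the layered driver learns the underlying device's size in bytes.

```c
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical helper: returns the underlying device's size in bytes. */
extern uint64_t get_lyr_size(dev_t dev);

static int
zfs_lyr_prop_op(dev_t dev, dev_info_t *dip, ddi_prop_op_t prop_op,
    int mod_flags, char *name, caddr_t valuep, int *lengthp)
{
	uint64_t size = get_lyr_size(dev);

	/*
	 * ddi_prop_op_size(9F) answers the dynamic [Ss]ize and
	 * [Nn]blocks queries that ldi_get_size() (and hence ZFS)
	 * issues, and falls through to the default ddi_prop_op()
	 * behavior for any other property.
	 */
	return (ddi_prop_op_size(dev, dip, prop_op, mod_flags,
	    name, valuep, lengthp, size));
}
```

Drivers whose size is naturally expressed in DEV_BSIZE blocks can use ddi_prop_op_nblocks(9F) instead, as Edward notes most native block drivers do.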
Thanks Edward.
Currently my layered driver does not implement the prop_op(9E) entry point - I
didn't realize this was necessary since my layered driver worked fine without
it when used over UFS.
My layered driver sits above a ramdisk driver.
I realized the same problem that you've mentioned whe
Thanks Eric and Manoj.
Here's what ldi_get_size() returns:
bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool create
adsl-pool /dev/layerzfsminor1' dtrace: description 'fbt::ldi_get_size:return'
matched 1 probe
cannot create 'adsl-pool': invalid argument for this pool operation
I ran zpool with truss, and here is the system call trace. (again, zfs_lyr is
the layered driver I am trying to use to talk to the ramdisk driver).
When I compared it to a successful zpool creation, the culprit is the last
failing ioctl, i.e. ioctl(3, ZFS_IOC_CREATE_POOL, )
I tried looking at th
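When comparing the failing and successful runs, it can help to narrow truss to just the ioctl traffic rather than wading through the full trace. A sketch (assuming the same pool and device names used earlier in the thread):

```
# -t ioctl traces only ioctl(2); -v ioctl expands the structures
# passed to it, which makes the ZFS_IOC_* arguments visible.
truss -t ioctl -v ioctl zpool create adsl-pool /dev/layerzfsminor1
```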
To give more information, here is the kernel log dump when I run:
zpool create mypool
(my layered driver is called zfs_lyr.)
May 14 02:19:27 unknown zfs_lyr: [ID 902459 kern.notice] NOTICE: Inside
zfs_lyr_open***SYNC
May 14 02:19:27 unknown zfs_lyr: [ID 215874 kern.notice]
May 14 02:19:27