Re: ZFS on current vs wedges - best practice?

2021-07-19 Thread Jeff Rizzo



On 7/19/21 11:50 AM, Frank Kardel wrote:

Hi Jeff !

Yes, you can use wedge names.

You can configure them like "zpool create tank raidz2 wedges/wedgename-a 
wedges/wedgename-b wedges/wedgename-c wedges/wedgename-d wedges/wedgename-e"


OK, so I have to specify the names at create time?  If so, I guess I can 
try removing/replacing them.



For wedges to work you need to start devpubd (/etc/rc.conf: 
devpubd=YES) before ZFS. Currently devpubd starts too late with its 
default dependencies.


To start devpubd earlier you can use the following dependencies in 
/etc/rc.d/devpubd:


# PROVIDE: devpubd
# REQUIRE: root
# BEFORE:  DISKS



Ah, OK.  I had forgotten about devpubd.  Thanks!



To recover (not tested) you may try:

start devpubd (/etc/rc.d/devpubd start)

zpool export tank # you may try without this first

zpool import -d /dev/wedges # list found pools

zpool import -d /dev/wedges -a # imports all found pools



Recovery was as easy as "zpool export tank; zpool import tank" - it 
looks at all dk* devices when importing.  I will try the devpubd stuff 
to avoid this in the future.
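
If the plain re-import leaves the pool recorded against the dk* paths, 
re-importing from the symlink directory (as Frank suggests above - 
untested on my end) should switch it over to the stable names:

zpool export tank
zpool import -d /dev/wedges tank  # record the wedges/... symlinks instead of dkN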




Use these hints at your own risk. I learned them when we briefly 
(9.99.85-9.99.86) broke ZFS vdev access via symlinks. Be sure to use a 
recent (>2021-07-18) -current kernel if you are running -current.


Frank



I will update my kernel ASAP.  Thanks a bunch.


+j



Re: ZFS on current vs wedges - best practice?

2021-07-19 Thread Frank Kardel

Hi Jeff !

Yes, you can use wedge names.

You can configure them like "zpool create tank raidz2 wedges/wedgename-a 
wedges/wedgename-b wedges/wedgename-c wedges/wedgename-d wedges/wedgename-e"
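
To see which wedge names exist on a disk (and thus which symlinks devpubd 
will maintain), something like this should do - the disk name is only an 
example:

dkctl wd0 listwedges  # list the wedges and their names on one disk
ls -l /dev/wedges     # symlinks created by devpubd, one per wedge name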


For wedges to work you need to start devpubd (/etc/rc.conf: devpubd=YES) 
before ZFS. Currently devpubd starts too late with its default dependencies.


To start devpubd earlier you can use the following dependencies in 
/etc/rc.d/devpubd:


# PROVIDE: devpubd
# REQUIRE: root
# BEFORE:  DISKS
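
After changing the header you can check the resulting order with 
rcorder(8); devpubd should now sort before zfs:

rcorder /etc/rc.d/* | grep -E 'devpubd|zfs'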

To recover (not tested) you may try:

start devpubd (/etc/rc.d/devpubd start)

zpool export tank # you may try without this first

zpool import -d /dev/wedges # list found pools

zpool import -d /dev/wedges -a # imports all found pools
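
If the import succeeds, the vdevs should then be recorded under their 
wedges/... paths instead of the dkN nodes:

zpool status tank  # vdev names should now show wedges/...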

Use these hints at your own risk. I learned them when we briefly 
(9.99.85-9.99.86) broke ZFS vdev access via symlinks. Be sure to use a 
recent (>2021-07-18) -current kernel if you are running -current.


Frank

On 07/19/21 19:52, Jeff Rizzo wrote:
I had forgotten about this little detail, and am not sure about the 
best way to deal with it.



I have four disks partitioned with GPT (so I can create a raidframe 
raid1 on part of each disk and use the rest for ZFS), and I made the 
mistake (?) of using the dkN wedge device nodes to create the zpool.  So, 
after a reboot (but not the first one! it only happened after n reboots), 
the wedges reordered themselves, and now my zpool looks like this:



NAME                      STATE     READ WRITE CKSUM
tank                      UNAVAIL      0     0     0
  raidz2-0                UNAVAIL      0     0     0
    3140223856450238961   UNAVAIL      0     0     0  was /dev/dk4
    1770477436286968258   FAULTED      0     0     0  was /dev/dk5
    11594062134542531370  UNAVAIL      0     0     0  was /dev/dk6
    dk7                   ONLINE       0     0     0


I _think_ I can figure out how to recover my data without recreating 
the entire pool.  (I hope - suggestions there welcome as well!  Once I 
recover this time, I'm going to have to replace the vdevs one at a 
time anyway, because I just realized the wedges are misaligned to the 
underlying disk block size.  Sigh.)
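
For the replacement itself I assume it will be the usual one-at-a-time 
dance: recreate a properly aligned wedge, then something like the 
following (the new wedge name is just an example), waiting for each 
resilver to finish before touching the next disk:

zpool replace tank 3140223856450238961 wedges/tank-a  # old vdev by guid, new wedge
zpool status tank                                     # watch the resilver before the next one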



However, I'm not sure of the best way (is there a way?) to keep this 
from happening again.  Can I use wedge names?  (Will those persist 
across boots?)  Other than this minor detail, I've been quite happy 
with ZFS in 9 and -current.