[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
Is there anyone who could help me get past this bug so I can rescue my ZFS pool data? As I understand it, the pool will otherwise be lost. I accidentally added a disk to the pool instead of attaching it as a mirror, which was the intention,... and it cannot be removed even though there is no data on it!
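A likely cause of the permission error in the bug title is that "sudo echo 1 >> /sys/..." applies sudo only to echo, while the unprivileged shell performs the redirection; if the write still fails as root, the parameter may need to be set at module load time instead. A minimal sketch of both approaches (the sysfs path is the one from the bug title; the /etc/modprobe.d/zfs.conf file name is an assumption about the local setup):

  # write the tunable through a root-owned process instead of a shell redirect
  echo 1 | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds

  # or make the setting persistent across zfs module loads
  echo 'options zfs zfs_max_missing_tvds=1' | sudo tee -a /etc/modprobe.d/zfs.conf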

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Richard Laager
Why is the second disk missing? If you accidentally added it and ended up with a striped pool, as long as both disks are connected, you can import the pool normally. Then use the new device_removal feature to remove the new disk from the pool. If you've done something crazy like pulled the disk an
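A minimal sketch of the flow described here, assuming the pool still imports normally and that the accidentally added disk shows up as /dev/sdb (a placeholder name; use the identifier shown by zpool status):

  sudo zpool import rpool
  zpool status rpool                 # confirm both top-level vdevs are ONLINE
  # device removal migrates the data off the removed vdev before detaching it
  sudo zpool remove rpool /dev/sdb   # placeholder device name
  zpool status rpool                 # shows the evacuation/removal progress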

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
The new device_removal feature,... where is it? It might work.

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
Do you mean this feature which is coming,... when? https://github.com/openzfs/openzfs/pull/251
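For reference, the pull request above is an older OpenZFS upstream change; top-level vdev removal is understood to have shipped with ZFS on Linux 0.8, which is roughly what Ubuntu 20.04 carries. A quick sketch for checking what is installed locally (the exact version strings will differ per system):

  zfs version                    # userland and kernel-module versions on 0.8 and later
  modinfo zfs | grep -i version  # version reported by the loaded zfs module
  dpkg -l zfsutils-linux         # packaged version on Ubuntu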

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
root@jonipekka-desktop:~# zpool import
   pool: rpool
     id: 5077426391014001687
  state: UNAVAIL
 status: One or more devices are faulted.
 action: The pool cannot be imported due to damaged devices or data.
 config:

        rpool    UNAVAIL  insufficient replicas
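The zfs_max_missing_tvds tunable from the bug title exists for exactly this situation: it allows a read-only import of a pool with a missing top-level vdev so that whatever is still readable can be copied off. A sketch, with /mnt/rescue and the copy target as placeholders, and with the caveat that blocks stored only on the missing vdev cannot be recovered this way:

  echo 1 | sudo tee /sys/module/zfs/parameters/zfs_max_missing_tvds
  sudo zpool import -o readonly=on -f -d /dev/disk/by-id -R /mnt/rescue rpool
  # if the import succeeds, copy the readable data to separate storage
  rsync -a /mnt/rescue/ /path/to/other/storage/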

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Richard Laager
device_removal only works if you can import the pool normally. That is what you should have used after you accidentally added the second disk as another top-level vdev. Whatever you have done in the interim, though, has resulted in the second device showing as FAULTED. Unless you can fix that, devi
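Before concluding the second device is gone for good, it may be worth checking whether it is visible and still carries ZFS labels. A diagnostic sketch, with /dev/sdb1 standing in for whatever partition was actually added to the pool:

  lsblk -o NAME,SIZE,TYPE,FSTYPE   # is the disk/partition present at all?
  sudo blkid /dev/sdb1             # placeholder; should report zfs_member if labeled
  sudo zdb -l /dev/sdb1            # dump any surviving ZFS labels on the device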

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2021-10-30 Thread Rich
(A bit delayed, but just for anyone finding this...) No, you cannot remove a FAULTED normal data device - device_removal involves migrating all the data off the old one, which you cannot do if it's not there. (Logs and caches are different.) You'll need to recreate the pool.
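If recreating really is the only way forward, the rough shape is: wipe the old labels, build the mirror that was intended in the first place, and restore from backup. The sketch below uses placeholder disk names, labelclear is destructive, and a root pool like rpool additionally needs the Ubuntu-specific dataset layout and boot setup that this deliberately leaves out:

  sudo zpool labelclear -f /dev/disk/by-id/disk1-part1   # destructive, placeholder
  sudo zpool labelclear -f /dev/disk/by-id/disk2-part1   # destructive, placeholder
  sudo zpool create -o ashift=12 newpool mirror \
      /dev/disk/by-id/disk1-part1 /dev/disk/by-id/disk2-part1
  rsync -a /path/to/backup/ /newpool/                    # restore what was saved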

Re: [Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Joni-Pekka Kurronen
I tried the remove command before taking it out; I was under the belief that the system would then correct the problem. So basically the second disk has no data and is not corrupted,... I need that option because import readonly -f pool -d does not fix the problem, so that I can then copy the disk. I have only the essentials in backup,... s

Re: [Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-04 Thread Joni-Pekka Kurronen
Hi, does the new ZFS allow just removing the FAULTED device, so I have the old clean disk alone and can scrub that,... then REPARTITION the FAULTED device (I had an incorrect size; there is a boot area as well), and then attach the FAULTED DEVICE AS A NEW MIRROR DISK as was intended??? zfs remove old-rpool  -d  fault
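For the record, turning a single-disk vdev into a mirror is done with zpool attach rather than zfs remove, and it only works on a pool that can be imported. A sketch with placeholder device names, assuming the repartitioned disk is at least as large as the existing one:

  # attach the new partition as a mirror of the existing device; resilver starts automatically
  sudo zpool attach rpool /dev/disk/by-id/existing-disk-part1 \
                          /dev/disk/by-id/new-disk-part1
  zpool status rpool   # watch the resilver complete before trusting the redundancy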