On Sun, 29 Oct 2017, at 06:02 PM, Qu Wenruo wrote:
> If so, mount it, do minimal write like creating an empty file, to update
> both superblock copies, and then try fix-device-size.
Tried that, and it didn't work. Made a recording:
https://youtu.be/SFd3QscNT6w
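For anyone following along, Qu's suggested sequence can be sketched as shell. This is only a sketch: /dev/sdc1 and /mnt/raid1 are the device and mountpoint used elsewhere in this thread, not universal paths.

```shell
# Sketch of the suggested recovery sequence; /dev/sdc1 and /mnt/raid1
# are taken from this thread -- substitute your own device and mountpoint.
mount /dev/sdc1 /mnt/raid1   # mount so both superblock copies get updated
touch /mnt/raid1/sb-update   # minimal write, forces a superblock commit
sync                         # make sure the write reaches the disk
umount /mnt/raid1            # fix-device-size runs against an unmounted fs
btrfs rescue fix-device-size /dev/sdc1
```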
On Sat, 28 Oct 2017, at 03:58 PM, Qu Wenruo wrote:
> Don't get confused with the name, to use "fix-dev-size" you need to run
> "btrfs rescue fix-dev-size"
[hendry@nuc btrfs-progs]$ sudo ./btrfs rescue fix-device-size /dev/sdc1
warning, device 2 is missing
ERROR: devid 2 is missing or not
On Fri, 13 Oct 2017, at 09:42 AM, Kai Hendry wrote:
> It probably is... since when I remove my new 4TB USB disk from the
> front, I am at least able to mount my two 2x2TB in degraded mode and see
> my data!
Just a follow-up: of late I have not been able to mount my data, even in
degraded mode.
Thank you Austin & Chris for your replies!
On Fri, 13 Oct 2017, at 01:19 AM, Austin S. Hemmelgarn wrote:
> Same here on a pair of 3 year old NUC's. Based on the traces and the
> other information, I'd be willing to bet this is probably the root cause
> of the issues.
It probably is... since
A guy on #btrfs suggests:
15:09 <hendry> super_total_bytes 8001581707264 mismatch with
fs_devices total_rw_bytes 8001581710848
that one is because of unaligned partitions; 4.12 - 4.13 kernels are
affected (at least some versions)
However I rebooted into 4.9.54-1-lts and I have the same issue.
On Tue, 10 Oct 2017, at 10:06 AM, Satoru Takeuchi wrote:
> Probably `btrfs device remove missing /mnt/raid1` works.
That command worked. It took a really long time, but it worked. However,
when I unmounted /mnt/raid1 and tried mounting it again, it failed! :(
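For the record, the device-replacement flow discussed here can be sketched as shell. A sketch only: /dev/sdb1 and /dev/sdd1 are placeholder device names, and /mnt/raid1 is the mountpoint from this thread.

```shell
# Replacing a missing device in a btrfs raid1 -- device names are
# placeholders, adjust to your layout.
mount -o degraded /dev/sdb1 /mnt/raid1   # mount without the failed device
btrfs device add /dev/sdd1 /mnt/raid1    # add the replacement disk first
btrfs device remove missing /mnt/raid1   # then drop the absent devid
```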
Hi there,
My /mnt/raid1 became full (somewhat expectedly), so I bought two new
4TB USB hard drives (one WD, one Seagate) to upgrade to.
After adding sde and sdd I started to see errors in dmesg [2].
[2] https://s.natalian.org/2017-10-07/raid1-newdisks.txt
On Tue, 7 Jun 2016, at 07:10 PM, Austin S. Hemmelgarn wrote:
> Yes, although you would then need to be certain to run a balance with
> -dconvert=raid1 -mconvert=raid1 to clean up anything that got allocated
> before the new disk was added.
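Austin's point above, sketched as shell (assuming the array is mounted at /mnt/raid1 as in this thread, with /dev/sdX standing in for the newly added disk):

```shell
# Chunks allocated before the new disk was added may not be mirrored
# onto it; a convert balance rewrites everything as raid1.
btrfs device add /dev/sdX /mnt/raid1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/raid1
```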
I don't quite understand when I should run this.
Sorry I unsubscribed from linux-btrfs@vger.kernel.org since the traffic
was a bit too high for me.
On Tue, 7 Jun 2016, at 11:42 AM, Chris Murphy wrote:
> Your command turned this from a 3 drive volume into a 2 drive volume,
> it removed the drive you asked to be removed.
I actually had 2 drives
On Mon, 6 Jun 2016, at 10:16 PM, Austin S. Hemmelgarn wrote:
> Based on the fact that you appear to want to carry a disk to copy data
> more quickly than over the internet, then what you've already done plus
> this is the correct way to do it.
The trouble is the way I ended up doing it:
1)
Hi there,
I planned to remove one of my disks, so that I can take it from
Singapore to the UK and then re-establish another remote RAID1 store.
delete is an alias of remove, so I added a new disk (devid 3) and
proceeded to run:
btrfs device delete /dev/sdc1 /mnt/raid1 (devid 1)
nuc:~$ uname