On 2015-07-08 15:06, Donald Pearson wrote:
I wouldn't use dd.
I would use recover to get the data if at all possible; then you can
experiment with trying to fix the degraded condition live. If you have
any chance of getting data from the pool, you reduce that chance every
time you make a change.
Basically, I wouldn't trust the drive that's already showing signs of
failure to survive a dd. It isn't completely full, so the recovery is
less load on the drive. That's just the way I see it. But I see your
point about trying to get drive images now to hedge against failures.
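(A minimal sketch of that recovery path, assuming "recover" here refers
to the btrfs restore tool; /mnt/rescue is a hypothetical target
directory on a healthy disk with enough free space:)

  # Dry run first: only lists what would be recovered, writes nothing
  btrfs restore -D -v /dev/sdb /mnt/rescue/
  # Real extraction; -i continues past errors on damaged files
  btrfs restore -v -i /dev/sdb /mnt/rescue/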
Hello Donald,
thanks for your reply. I appreciate your help.

> I would use recover to get the data if at all possible; then you can
> experiment with trying to fix the degraded condition live. If you have
> any chance of getting data from the pool, you reduce that chance every
> time you make a change.

>>> So, this drive is also failing?!
>>>
>>> Regards,
>>> Hendrik
>>> On 07.07.2015 00:59, Donald Pearson wrote:
>>>> Anything in dmesg?
Greetings,
Hendrik
>>> Hello,
>>>
>>> It seems that mounting works, but the system locks up completely soon
>>> after I start backing up.
>>>
>>> Greetings,
>>> Hendrik
Greetings,
Hendrik

-- Original message --
From: Donald Pearson
Date: Mon, July 6, 2015, 23:49
To: Hendrik Friedel
Cc: Omar Sandoval; Hugo Mills; Btrfs BTRFS
Subject: Re: size 2.73TiB used 240.97GiB after balance
If you can mount it RO, the first thing to do is back up any data that
you care about.
According to the bug that Omar posted, you should not try a device
replace, and you should not try a scrub with a missing device.
You may be able to just do a device delete missing, then separately do
a device add of the new drive.
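(A sketch of that sequence; /dev/sde stands in for the replacement drive
and /mnt for the mount point, both hypothetical names:)

  mount -o degraded /dev/sdb /mnt
  btrfs device delete missing /mnt   # drop the failed device from the pool
  btrfs device add /dev/sde /mnt     # then separately add the replacement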
Hello,
oh dear, I fear I am in trouble:
recovery-mounted, I tried to save some data, but the system hung.
So I re-booted, and sdc is now physically disconnected.

Label: none  uuid: b4a6cce6-dc9c-4a13-80a4-ed6bc5b40bb8
        Total devices 3 FS bytes used 4.67TiB
        devid    1 size 2.73TiB u
On 07/06/2015 01:01 PM, Donald Pearson wrote:
> Based on my experience, Hugo's advice is critical: get the bad drive
> out of the pool when in raid56, and do not try to replace or delete it
> while it's still attached and recognized.
>
> If you add a new device, mount degraded and rebalance. If you don't,
> mount degraded, then device delete missing.
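(A sketch of those two paths; the device names are hypothetical:)

  # With a replacement drive available:
  mount -o degraded /dev/sdb /mnt
  btrfs device add /dev/sde /mnt
  btrfs balance start /mnt           # restripe across the new device
  # Without one:
  mount -o degraded /dev/sdb /mnt
  btrfs device delete missing /mnt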
On Mon, Jul 06, 2015 at 09:44:53PM +0200, Hendrik Friedel wrote:
> Hello,
>
> ok, sdc seems to have failed (sorry, I checked only the sdd and sdb
> SMART values, as sdc is brand new; maybe that was a bad assumption on
> my side).
>
> I have mounted the device with
>   mount -o recovery,ro
>
> So, what should I do now:
>   btrfs device delete /dev/sdc /mnt
> or
>   mount -o degraded /dev/sdb /mnt
>   btrfs
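(Spelled out, that read-only recovery mount would look like this; the
device and mount point are hypothetical:)

  mount -o recovery,ro /dev/sdb /mnt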
Hello,
I started with a raid1:
  devid    1 size 2.73TiB used 2.67TiB path /dev/sdd
  devid    2 size 2.73TiB used 2.67TiB path /dev/sdb
Then I added a third device, /dev/sdc1, and ran a balance:
  btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/__Complete_Disk/
Now the file-system
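(One way to inspect the outcome of such a conversion, using standard
btrfs commands against the mount point above:)

  btrfs filesystem df /mnt/__Complete_Disk/   # per-profile data/metadata usage
  btrfs filesystem show                       # per-device size and usage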