> I think it is best that you just repeat the fixing again on the real
> disks, and just make sure you have the latest kernel and tools when
> fixing the few damaged files.
> With btrfs inspect-internal inode-resolve 257
> you can see what file(s) are damaged.
I inspected the damaged files.
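For anyone hitting the same csum errors, the lookup suggested above is roughly the following, assuming the filesystem is mounted at /mnt as in the df output further down; the inode number comes from the kernel's csum error messages:

  sudo btrfs inspect-internal inode-resolve 257 /mnt   # prints the path(s) of the file owning inode 257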
On Sun, Mar 27, 2016 at 4:59 PM, John Marrett wrote:
>> If you do want to use a newer one, I'd build against kernel.org, just
>> because the developers only use that base. And use 4.4.6 or 4.5.
>
> At this point I could remove the overlays and recover the filesystem
> permanently; however, I'm also deeply indebted to the btrfs community.
>> I was looking under btrfs device, sorry about that; I do have the
>> command. I tried replace and it seemed more promising than the last
>> attempt, but it wrote enough data to the new drive to overflow and
>> break my overlay. I'm trying it without the overlay on the destination
>> device; I'll report back.
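For reference, a replace onto the new disk is along these lines; the devid of the missing drive and the device names here are examples, not the real values from my setup:

  sudo btrfs replace start 2 /dev/sdX /mnt   # 2 = devid of the missing drive, /dev/sdX = new disk
  sudo btrfs replace status /mnt             # watch progress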
On Sat, Mar 26, 2016 at 3:01 PM, John Marrett wrote:
> Well, offhand it seems like the missing 2.73TB has nothing on it at
> all, and doesn't need to be counted as missing. The other missing is
> counted, and should have all of its data replicated elsewhere. But
> then you're running into csum errors. So something still isn't right;
> we just don't understand what yet.
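(The csum errors mentioned here show up in the kernel log together with the inode number, which is what feeds the inode-resolve lookup near the top of the page. The exact wording varies by kernel version, but something like the following pulls them out:)

  dmesg | grep -i 'csum failed'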
On Sat, Mar 26, 2016 at 6:15 AM, John Marrett wrote:
> Chris,
>
>> Post 'btrfs fi usage' for the filesystem. That may give some insight
>> into what's expected to be on all the missing drives.
>
> Here's the information. I believe that the "missing" we see in most
> entries is the failed and absent drive; only the unallocated shows two
> missing entries, one of them being the 2.73 TB entry.
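(For anyone following along, the command being asked for is just the one below, run against the mount point; it breaks allocation down per device, including the missing ones:)

  sudo btrfs filesystem usage /mnt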
[let me try keeping the list cc'd]
On Fri, Mar 25, 2016 at 7:21 PM, John Marrett wrote:
> Chris,
>
>> Quite honestly I don't understand how a Btrfs raid1 volume with two
>> missing devices even permits you to mount it degraded,rw in the first
>> place.
>
> I think you missed my previous post. It's simple: I patched the kernel
> to bypass the check for missing devices on rw mounts. I did this
> because, with two devices missing, the filesystem would otherwise
> refuse a read-write mount entirely.
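For context, with that check bypassed the mount itself is just an ordinary degraded mount, something along these lines, with the device name as a placeholder:

  sudo mount -o degraded,rw /dev/sdX /mnt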
On Fri, Mar 25, 2016 at 4:31 PM, John Marrett wrote:
> Continuing with my recovery efforts, I've built overlay mounts of each
> of the block devices supporting my btrfs filesystem as well as the new
> disk I'm trying to introduce. I have patched the kernel to disable the
> check for multiple missing devices. I then exported the overlayed
> devices using iSCSI.
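For anyone wanting to reproduce the overlay setup: the general recipe is a device-mapper snapshot backed by a sparse file, so every write lands in the copy-on-write file and the real disk is never touched. Roughly, with device names and the COW file size as examples:

  truncate -s 50G /tmp/sdb-cow             # sparse file that absorbs all writes
  sudo losetup /dev/loop0 /tmp/sdb-cow     # assumes /dev/loop0 is free
  sudo dmsetup create overlay-sdb --table \
      "0 $(sudo blockdev --getsz /dev/sdb) snapshot /dev/sdb /dev/loop0 P 8"

The resulting /dev/mapper/overlay-sdb is what gets exported and experimented on; tearing it down with dmsetup remove leaves /dev/sdb exactly as it was.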
Henk,
> I assume you did btrfs device add?
> Or did you do this with btrfs replace?
Just realised I missed this question, sorry. I performed an add
followed by a (failed) delete.
-JohnF
After further discussion in #btrfs:
I left out the raid level; it's raid1:
ubuntu@ubuntu:~$ sudo btrfs filesystem df /mnt
Data, RAID1: total=6.04TiB, used=5.46TiB
System, RAID1: total=32.00MiB, used=880.00KiB
Metadata, RAID1: total=14.00GiB, used=11.59GiB
GlobalReserve, single: total=512.00MiB, u
On Tue, Mar 22, 2016 at 9:19 PM, John Marrett wrote:
> I recently had a drive failure in a file server running btrfs. The
> failed drive was completely non-functional. I added a new drive to the
> filesystem successfully; when I attempted to remove the failed drive I
> encountered an error. I discovered that I actually experienced a dual
> drive failure.

I assume you did btrfs device add?
Or did you do this with btrfs replace?
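(For readers unfamiliar with the two approaches being asked about, they look roughly like this; device names and mount point are placeholders:)

  sudo btrfs device add /dev/sdX /mnt      # add the new disk to the mounted filesystem
  sudo btrfs device delete missing /mnt    # then drop the dead device, relocating its data

btrfs replace does both steps in a single pass; an example invocation appears further up the page.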