Peter,
> Bad news. That means the disk is probably damaged and
> further issues may happen.
This system has a long history: I had a dual drive failure in the
past, which I managed to recover from with ddrescue. I subsequently
copied the contents of the drives to new disks and expanded
I have a filesystem with uncorrectable errors in metadata. In the past,
when I've experienced corruption due to drive failures, it affected the
data and not the metadata; I was able to delete the files and restore
their content from backup. Unfortunately I can't do that this time, as
I have no way to dir
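For reference, the errors show up when I scrub; a minimal sketch, with
/mnt standing in for the real mount point:

  # run a foreground scrub, then check the per-device summary,
  # which reports the uncorrectable error count
  btrfs scrub start -B /mnt
  btrfs scrub status /mnt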
Liubo correctly identified direct IO as a solution for my test
performance issues; with it in use I achieved 908 read and 305 write
IOPS, not quite as fast as ZFS but more than adequate for my needs. I
then applied Peter's recommendation of switching to raid10 and tripled
performance again, up to 3000 re
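For anyone who wants to repeat this, a rough sketch of the conversion
and the direct-IO test; the mount point and fio parameters are
placeholders, not my exact invocation:

  # convert data and metadata from raid1 to raid10
  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
  # random read/write test that bypasses the page cache
  fio --name=randrw --filename=/mnt/fio.test --size=4G --bs=4k \
      --rw=randrw --rwmixread=75 --ioengine=libaio --iodepth=32 \
      --direct=1 --runtime=60 --time_based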
In preparation for a system and storage upgrade I performed some btrfs
performance tests. I created a ten-disk raid1 using 7.2k RPM 3 TB SAS
drives and used aio to test IOPS. I was surprised to measure 215
read and 72 write IOPS on the clean new filesystem. Sequential writes
ran as expected at r
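Roughly what the setup looked like; a sketch with placeholder device
names rather than my exact commands:

  # ten-disk raid1 for both data and metadata
  mkfs.btrfs -m raid1 -d raid1 /dev/sd[b-k]
  mount /dev/sdb /mnt
  # buffered random I/O through libaio (no --direct)
  fio --name=randrw --filename=/mnt/fio.test --size=4G --bs=4k \
      --rw=randrw --rwmixread=75 --ioengine=libaio --iodepth=32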
> I think it is best that you just repeat the fix on the real
> disks and just make sure you have an up-to-date kernel+tools when
> fixing the few damaged files.
> With btrfs inspect-internal inode-resolve 257
> you can see what file(s) are damaged.
I inspected the damaged files,
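Resolving the inode numbers from the error messages back to paths
looks like this; /mnt and the logical address are placeholders:

  # map inode 257 (from the example above) to its path(s)
  btrfs inspect-internal inode-resolve 257 /mnt
  # a logical address from dmesg or scrub output can be resolved too
  btrfs inspect-internal logical-resolve <logical> /mnt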
>> If you do want to use a newer one, I'd build against kernel.org, just
>> because the developers only use that base. And use 4.4.6 or 4.5.
>
> At this point I could remove the overlays and recover the filesystem
> permanently; however, I'm also deeply indebted to the btrfs community
> and want to
>> I was looking under btrfs device, sorry about that. I do have the
>> command. I tried replace and it seemed more promising than the last
>> attempt; it wrote enough data to the new drive to overflow and break
>> my overlay. I'm trying it without the overlay on the destination
>> device, I'll rep
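The replace attempt itself is the standard sequence; a sketch, with
the devid and new device name as placeholders for my actual values:

  # find the devid of the missing drive
  btrfs filesystem show /mnt
  # rebuild its contents directly onto the new disk
  btrfs replace start <devid> /dev/sdnew /mnt
  btrfs replace status /mnt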
> Well off hand it seems like the missing 2.73TB has nothing on it at
> all, and doesn't need to be counted as missing. The other missing is
> counted, and should have all of its data replicated elsewhere. But
> then you're running into csum errors. So something still isn't right,
> we just don't u
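To quantify the csum errors Chris mentions, these are the counters
I've been watching; /mnt is a placeholder:

  # per-device error counters, including corruption (csum) errors
  btrfs device stats /mnt
  # individual checksum failures logged by the kernel
  dmesg | grep -i 'csum failed'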
Chris,
> Post 'btrfs fi usage' for the filesystem. That may give some insight
> what's expected to be on all the missing drives.
Here's the information. I believe the "missing" we see in most
entries is the failed and absent drive; only the unallocated section
shows two missing entries, and the 2.73 TB is
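For completeness, the commands behind that output; -T only formats it
as a table, and /mnt stands in for the real mount point:

  btrfs filesystem usage -T /mnt
  # the device list, which also flags missing devids
  btrfs filesystem show /mnt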
Chris,
> Quite honestly I don't understand how Btrfs raid1 volume with two
> missing devices even permits you to mount it degraded,rw in the first
> place.
I think you missed my previous post. It's simple: I patched the kernel
to bypass the check for missing devices with rw mounts. I did this
bec
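With that check bypassed, the mount itself is just a normal degraded
mount; device name and mount point are placeholders:

  # writable mount despite the missing devices; a stock kernel would
  # refuse rw here, which is the check the patch removes
  mount -o degraded,rw /dev/sdb /mnt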
Continuing with my recovery efforts, I've built overlay mounts of each
of the block devices backing my btrfs filesystem, as well as of the new
disk I'm trying to introduce. I have patched the kernel to disable the
check for multiple missing devices. I then exported the overlaid
devices using iSCSI t
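The overlays are the usual device-mapper snapshot trick, so writes
land in a sparse file instead of on the real disks; a sketch for one
device, with sizes and names as placeholders:

  # sparse copy-on-write file backing the overlay
  truncate -s 20G /overlay/sdb-cow
  losetup /dev/loop1 /overlay/sdb-cow
  # reads come from /dev/sdb, writes go to the loop device
  dmsetup create sdb-overlay --table \
      "0 $(blockdev --getsz /dev/sdb) snapshot /dev/sdb /dev/loop1 P 8"
  # /dev/mapper/sdb-overlay is what then gets exported over iSCSI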
Henk,
> I assume you did btrfs device add?
> Or did you do this with btrfs replace?
Just realised I missed this question, sorry. I performed an add
followed by a (failed) delete.
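In other words, roughly this sequence; the device name and mount point
are placeholders:

  btrfs device add /dev/sdnew /mnt
  # this is the step that failed
  btrfs device delete missing /mnt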
-JohnF
>
>> filesystem successfully; when I attempted to remove the failed drive I
>> encountered an error
ata and rebuild the filesystem.
-JohnF
On Tue, Mar 22, 2016 at 5:18 PM, Henk Slager wrote:
> On Tue, Mar 22, 2016 at 9:19 PM, John Marrett wrote:
>> I recently had a drive failure in a file server running btrfs. The
>> failed drive was completely non-functional. I added a new dr
I recently had a drive failure in a file server running btrfs. The
failed drive was completely non-functional. I added a new drive to the
filesystem successfully; when I attempted to remove the failed drive I
encountered an error. I discovered that I had actually experienced a
dual drive failure, the s