Re: Btrfs Check - "type mismatch with chunk"

2015-12-24 Thread covici
Duncan <1i5t5.dun...@cox.net> wrote: > Zach Fuller posted on Thu, 24 Dec 2015 13:15:22 -0600 as excerpted: > > > I am currently running btrfs on a 2TB GPT drive. The drive is working > > fine, still mounts correctly, and I have experienced no data corruption. > > Whenever I run "btrfs check" on t

Re: ssd not detected on ssd drive

2015-12-24 Thread covici
Duncan <1i5t5.dun...@cox.net> wrote: > covici posted on Thu, 24 Dec 2015 05:56:22 -0500 as excerpted: > > > Hi. I was making a few file systems on my ssd drives (using lvm on top) > > and noticed that the ssd was not detected. The only thing that happened > > is that the metadata is duplicated.

[btrfs:integration-4.5 34/41] fs/btrfs/extent-tree.c:566:39: error: 'extent_root' undeclared

2015-12-24 Thread kbuild test robot
tree: https://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git integration-4.5 head: 140e639f1a3ff052c3921818e2120fdfa4427681 commit: f7d3d2f99eeaa9f5c111965b1516972f4fc5e449 [34/41] Merge branch 'freespace-tree' into for-linus-4.5 config: i386-randconfig-x006-1106 (attached
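For anyone wanting to reproduce the robot's result locally, a rough sketch, assuming the attached randconfig has been saved to a file (the tree, branch, and commit are the ones quoted above; the config path is hypothetical):

    # fetch the integration branch and check out the offending merge commit
    git fetch https://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git integration-4.5
    git checkout f7d3d2f99eeaa9f5c111965b1516972f4fc5e449
    # apply the robot's i386 randconfig and build just the failing object
    cp /path/to/attached-config .config
    make ARCH=i386 olddefconfig
    make ARCH=i386 fs/btrfs/extent-tree.o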

Re: Raid 5/6 Stability

2015-12-24 Thread Duncan
jwalmer posted on Thu, 24 Dec 2015 08:56:15 -0500 as excerpted: > Thanks for the speedy replies! Earlier Duncan said, "there's still no > user-side multi-device filesystem health monitoring application." I'm > mostly worried about device errors/failures, not my filesystem health. EUNFORESEEN_AMBI

Re: Loss of connection to Half of the drives

2015-12-24 Thread Duncan
Chris Murphy posted on Thu, 24 Dec 2015 13:57:35 -0700 as excerpted: >> All this makes me ask why? Why implement Raid10 in this non-standard >> fashion and create this mess of compromise? > > Because it was a straightforward extension of how the file system > already behaves. To implement drive

Re: Btrfs Check - "type mismatch with chunk"

2015-12-24 Thread Duncan
Zach Fuller posted on Thu, 24 Dec 2015 13:15:22 -0600 as excerpted: > I am currently running btrfs on a 2TB GPT drive. The drive is working > fine, still mounts correctly, and I have experienced no data corruption. > Whenever I run "btrfs check" on the drive, it returns 100,000+ messages > stating

Re: ssd not detected on ssd drive

2015-12-24 Thread Duncan
covici posted on Thu, 24 Dec 2015 05:56:22 -0500 as excerpted: > Hi. I was making a few file systems on my ssd drives (using lvm on top) > and noticed that the ssd was not detected. The only thing that happened > is that the metadata is duplicated. Is this a problem, or a waste of > space? If
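If the rotational flag seen through the LVM stack cannot be trusted, both effects can also be forced by hand. A hedged sketch with hypothetical device and mount paths:

    # create with single (non-duplicated) metadata regardless of detection
    mkfs.btrfs -m single /dev/vg0/lv1
    # force the ssd allocation behaviour at mount time
    mount -o ssd /dev/vg0/lv1 /mnt/point
    # or convert existing DUP metadata in place (-f is needed to reduce redundancy)
    btrfs balance start -mconvert=single -f /mnt/point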

Re: Btrfs Check - "type mismatch with chunk"

2015-12-24 Thread Christoph Anton Mitterer
Hey. There was a patch for that issue. Simply try that, and if the problem goes away with it, there's probably no further reason to worry. Cheers, Chris.
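A sketch of trying a patched/newer btrfs-progs without touching the packaged one, assuming the fix is in the upstream git tree (the thread does not name the exact commit; the device name is hypothetical):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
    cd btrfs-progs
    ./autogen.sh && ./configure && make
    # re-run the read-only check with the freshly built binary
    sudo ./btrfs check /dev/sdX1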

Re: Loss of connection to Half of the drives

2015-12-24 Thread Chris Murphy
On Thu, Dec 24, 2015 at 9:19 AM, Donald Pearson wrote: > Got it. I'm not the biggest fan of mixing mdraid with btrfs raid in > order to work around deficiencies. Hopefully in the future btrfs will > allow me to select my mirror groups. As far as I know, mdadm -l raid10 works this same way, you
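A minimal sketch of the mdraid-under-btrfs layout being discussed, with hypothetical device names: each mdadm RAID1 pair is an explicitly chosen mirror group, and btrfs only stripes across the pairs.

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    # btrfs sees two devices: raid0 data across the md pairs, raid1 metadata
    mkfs.btrfs -d raid0 -m raid1 /dev/md0 /dev/md1

The trade-off of pushing mirroring below btrfs is that btrfs no longer holds a second data copy of its own, so checksum errors cannot be self-healed from a btrfs mirror.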

btrfs-progs: Provide a library covering 'btrfs' functionality

2015-12-24 Thread Caio Lima
Hi all. I am searching for a project to start contributing to btrfs and I would like to know if there is anyone working on "a library covering 'btrfs' functionality". I am confused because I found "libbtrfs.so.0.1" and "libbtrfs.a" when compiling btrfs-progs, and I would like to know if these libs ar
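A quick, non-authoritative way to see what those build products actually export (run from the btrfs-progs build directory); it does not answer whether they are intended as a stable public API, only what is in them:

    nm -D --defined-only libbtrfs.so.0.1 | head    # dynamic symbols of the shared library
    ar t libbtrfs.a | head                         # object files bundled into the static archive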

Btrfs Check - "type mismatch with chunk"

2015-12-24 Thread Zach Fuller
I am currently running btrfs on a 2TB GPT drive. The drive is working fine, still mounts correctly, and I have experienced no data corruption. Whenever I run "btrfs check" on the drive, it returns 100,000+ messages stating "bad extent [###, ###), type mismatch with chunk". Whenever I try to run "bt
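A minimal sketch of capturing the output for a report, assuming the filesystem is on a hypothetical /dev/sdX1 and is unmounted; without --repair, btrfs check only reads:

    umount /dev/sdX1 2>/dev/null
    btrfs --version                                   # the progs version matters for check output
    btrfs check /dev/sdX1 2>&1 | tee check.log
    grep -c "type mismatch with chunk" check.log      # count the repeated messages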

Re: btrfs und lvm-cache?

2015-12-24 Thread Piotr Pawłow
On 24.12.2015 at 16:29, Neuer User wrote: On 24.12.2015 at 15:56, Piotr Pawłow wrote: Hello, - both hdd and ssd in one LVM VG - one LV on each hdd, containing a btrfs filesystem - both btrfs LV configured as RAID1 - the single SSD used as a LVM cache device for both HDD LVs to speed up rand

Re: Loss of connection to Half of the drives

2015-12-24 Thread Donald Pearson
On Wed, Dec 23, 2015 at 7:21 PM, Duncan <1i5t5.dun...@cox.net> wrote: > Donald Pearson posted on Wed, 23 Dec 2015 09:53:41 -0600 as excerpted: > >> Additionally real Raid10 will run circles around what BTRFS is doing in >> terms of performance. In the 20 drive array you're striping across 10 >> dr

Re: btrfs und lvm-cache?

2015-12-24 Thread Neuer User
On 24.12.2015 at 15:56, Piotr Pawłow wrote: > Hello, >> - both hdd and ssd in one LVM VG >> - one LV on each hdd, containing a btrfs filesystem >> - both btrfs LV configured as RAID1 >> - the single SSD used as a LVM cache device for both HDD LVs to speed up >> random access, where possible > > I

Re: btrfs und lvm-cache?

2015-12-24 Thread Neuer User
On 24.12.2015 at 03:04, Duncan wrote: I had a look at bcache, but focused on lvmcache mainly because of the flexibility it offers. It can be easily added and removed. For LVM it is just another LV, so all the LVM magic applies. But thanks, I should take another look at bcache. > I'll let other

Re: btrfs und lvm-cache?

2015-12-24 Thread Neuer User
On 23.12.2015 at 21:56, Chris Murphy wrote: > Btrfs always writes to the 'cache LV' and then it's up to lvmcache to > determine how and when things are written to the 'cache pool LV' vs > the 'origin LV' and I have no idea if there's a case with writeback > mode where things write to the SSD and o
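For reference, a hedged sketch (hypothetical VG/LV names) of inspecting and pinning the cache mode; writethrough avoids the window in which data exists only on the SSD:

    lvs -a -o name,segtype,devices vg0
    # the dm-cache status line lists the active mode among its feature flags
    dmsetup status vg0-hdd1
    # choose the mode explicitly when attaching the cache pool
    lvconvert --type cache --cachepool vg0/cpool1 --cachemode writethrough vg0/hdd1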

Re: btrfs und lvm-cache?

2015-12-24 Thread Piotr Pawłow
Hello, - both hdd and ssd in one LVM VG - one LV on each hdd, containing a btrfs filesystem - both btrfs LV configured as RAID1 - the single SSD used as a LVM cache device for both HDD LVs to speed up random access, where possible I have a setup like this for my /home. It works but it's a crapp
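A rough sketch of the layout described above, assuming hypothetical devices (/dev/sdb and /dev/sdc as the HDDs, /dev/sdd as the shared SSD) and arbitrary cache sizes; lvmcache needs one cache pool per origin LV, so the SSD is split in two:

    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd
    lvcreate -n hdd1 -l 100%PVS vg0 /dev/sdb
    lvcreate -n hdd2 -l 100%PVS vg0 /dev/sdc
    lvcreate --type cache-pool -n cpool1 -L 40G vg0 /dev/sdd
    lvcreate --type cache-pool -n cpool2 -L 40G vg0 /dev/sdd
    lvconvert --type cache --cachepool vg0/cpool1 vg0/hdd1
    lvconvert --type cache --cachepool vg0/cpool2 vg0/hdd2
    # btrfs raid1 across the two (now cached) HDD-backed LVs
    mkfs.btrfs -d raid1 -m raid1 /dev/vg0/hdd1 /dev/vg0/hdd2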

Re: Raid 5/6 Stability

2015-12-24 Thread jwalmer
Thanks for the speedy replies! Earlier Duncan said, "there's still no user-side multi-device filesystem health monitoring application." I'm mostly worried about device errors/failures, not my filesystem health. Since my implementation of btrfs will be on a storage array, I'm not going to be doin
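In the absence of a dedicated monitoring daemon, the per-device error counters can already be polled from cron; a minimal sketch assuming the array is mounted at a hypothetical /mnt/array:

    btrfs device stats /mnt/array        # cumulative read/write/flush/corruption/generation errors per device
    btrfs scrub start -Bd /mnt/array     # -B: run in foreground, -d: print per-device statistics
    dmesg | grep -i btrfs                # kernel-side I/O and checksum errors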

[PATCH 1/4 v2] fstests: fix btrfs test failures after commit 27d077ec0bda

2015-12-24 Thread fdmanana
From: Filipe Manana Commit 27d077ec0bda (common: use mount/umount helpers everywhere) made a few btrfs tests fail for 2 different reasons: 1) Some tests (btrfs/029 and btrfs/031) use $SCRATCH_MNT as a mount point for some subvolume created in $TEST_DEV, therefore calling _scratch_unmount do
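For context, the convention that commit enforces pairs the scratch helpers with $SCRATCH_DEV/$SCRATCH_MNT; a stripped-down sketch of the expected shape of a test (helper names are the standard fstests ones):

    _scratch_mkfs >/dev/null 2>&1
    _scratch_mount
    # ... exercise the filesystem under $SCRATCH_MNT ...
    _scratch_unmount

Mounting something else on $SCRATCH_MNT, such as a subvolume of $TEST_DEV, falls outside that pairing, which is what broke btrfs/029 and btrfs/031.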

[PATCH 3/4 v2] fstests: cleanup test btrfs/031

2015-12-24 Thread fdmanana
From: Filipe Manana The test was using $SCRATCH_MNT as a mountpoint for $SCRATCH_DEV, which is counter-intuitive and not expected by the fstests framework - this made the test fail after commit 27d077ec0bda (common: use mount/umount helpers everywhere). So rewrite the test to use the scratch devi

[PATCH 2/4 v2] fstests: cleanup test btrfs/029

2015-12-24 Thread fdmanana
From: Filipe Manana The test was using $SCRATCH_MNT as a mountpoint for $SCRATCH_DEV, which is counter-intuitive and not expected by the fstests framework - this made the test fail after commit 27d077ec0bda (common: use mount/umount helpers everywhere). So rewrite the test to use the scratch devi

[PATCH 4/4 v2] fstests: fix cleanup of test btrfs/003

2015-12-24 Thread fdmanana
From: Filipe Manana If the test fails after removing a device and before adding it back, it attempts to add back the device in its _cleanup() function. However this is broken because the device identifier is stored in a variable local to the function _test_replace() and not in a global variable.
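A hypothetical sketch of the pattern the patch describes: keep the identifier of the removed device in a global so _cleanup() can re-add it even if the test dies inside _test_replace(). Names and commands here are illustrative, not the actual test:

    replaced_dev=""                     # global on purpose

    _cleanup()
    {
        # re-attach the device if a failed run left it removed
        [ -n "$replaced_dev" ] && $BTRFS_UTIL_PROG device add -f "$replaced_dev" "$SCRATCH_MNT"
        cd /
        rm -f $tmp.*
    }

    _test_replace()
    {
        replaced_dev=$1                 # not "local replaced_dev=$1"
        $BTRFS_UTIL_PROG device delete "$replaced_dev" "$SCRATCH_MNT"
        # ... rest of the test ...
    }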

Re: [PATCH 1/2] fstests: fix btrfs test failures after commit 27d077ec0bda

2015-12-24 Thread Filipe Manana
On Wed, Dec 23, 2015 at 11:49 PM, Dave Chinner wrote: > On Tue, Dec 22, 2015 at 02:22:40AM +, fdman...@kernel.org wrote: >> From: Filipe Manana >> >> Commit 27d077ec0bda (common: use mount/umount helpers everywhere) made >> a few btrfs tests fail for 2 different reasons: >> >> 1) Some tests (b

Re: BTRFS: could not find root 8

2015-12-24 Thread Swâmi Petaramesh
On Thursday, 24 December 2015, 10:29:02 CET, Hugo Mills wrote: > >systemd is now probing for qgroups on startup. The message is > simply indicating that qgroups are not enabled on the FS. It's harmless. Thanks Hugo. Then it’s harmless but worrying at first sight ;-) Kind regards. -- Swâmi Petar

ssd not detected on ssd drive

2015-12-24 Thread covici
Hi. I was making a few file systems on my ssd drives (using lvm on top) and noticed that the ssd was not detected. The only thing that happened is that the metadata is duplicated. Is this a problem, or a waste of space? If I wanted to remake the file systems -- which I don't want to do unless n
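A quick way to see what mkfs looked at, with hypothetical device and mount names; the detection keys off the block queue's rotational flag, which device-mapper derives from the devices underneath:

    cat /sys/block/sdb/queue/rotational      # raw disk: 0 = non-rotational (SSD), 1 = rotational
    cat /sys/block/dm-3/queue/rotational     # the LVM LV that mkfs.btrfs actually saw
    btrfs filesystem df /mnt/point           # metadata profile: DUP vs single
    grep btrfs /proc/mounts                  # "ssd" shows up in the options when detected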

Re: Raid 5/6 Stability

2015-12-24 Thread Gerald Hopf
Duncan wrote: So 4.4 is what I'd consider the magical raid56-stability release, and I'd actually expect the wiki to be updated shortly thereafter, tho 4.4 is close enough now, and there have been no major raid56 bugs reported in the 4.3 and 4.4 cycles, that arguably the wiki's raid56 status co

Re: BTRFS: could not find root 8

2015-12-24 Thread Hugo Mills
On Thu, Dec 24, 2015 at 10:04:48AM +0100, Swâmi Petaramesh wrote: > Hi there, > > I’m running Arch Linux with kernel 4.2.5-1-ARCH > > Since a few days, I have noticed a series of messages : > > « BTRFS: could not find root 8 » > > During boot. systemd is now probing for qgroups on startup.
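Tree 8 is the quota tree, so the message just reflects the qgroup probe finding nothing; a quick sketch to confirm (mount point hypothetical):

    btrfs qgroup show /mnt          # errors out if quotas were never enabled
    # enabling quotas creates root 8 and silences the probe, at some performance cost:
    btrfs quota enable /mnt
    btrfs qgroup show /mnt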

BTRFS: could not find root 8

2015-12-24 Thread Swâmi Petaramesh
Hi there, I’m running Arch Linux with kernel 4.2.5-1-ARCH For the past few days, I have noticed a series of messages: « BTRFS: could not find root 8 » During boot. Besides that, the system behaves normally, and I have no clue… Here is an extract of the relevant parts of my dmesg: [1.515764]