RE: Problem with btrfs snapshots

2016-11-06 Thread Дмитрий Нечаев
Part of script: btrfs filesystem sync; cp -pR --reflink; btrfs filesystem sync. So if another copy or two of the script starts at the moment the first snapshot is being made, then we receive ENOSPC. -----Original Message----- From: Austin S. Hemmelgarn
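The ENOSPC under concurrent runs suggests the sync/reflink sequence needs serializing so only one copy of the script runs it at a time. A minimal sketch using flock(1); the lock path is a hypothetical choice, and the btrfs commands from the original script are shown as comments so the skeleton itself runs anywhere:

```shell
#!/bin/sh
# Serialize the snapshot sequence across concurrent copies of the script.
# LOCKFILE is a hypothetical path; adjust to taste.
LOCKFILE=/tmp/btrfs-snapshot.lock

(
    # Block until no other copy of the script holds the lock (held on fd 9).
    flock 9

    # The critical section from the original script would go here:
    #   btrfs filesystem sync /mnt/data
    #   cp -pR --reflink /mnt/data/live /mnt/data/snap.$$
    #   btrfs filesystem sync /mnt/data
    echo "snapshot sequence ran under lock (pid $$)"
) 9>"$LOCKFILE"
```

Because the lock is tied to a file descriptor, it is released automatically when the subshell exits, even if the snapshot step fails partway.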

Re: btrfs support for filesystems >8TB on 32bit architectures

2016-11-06 Thread Qu Wenruo
At 11/07/2016 01:36 PM, Marc MERLIN wrote: (sorry for the bad subject line from the mdadm list on the previous mail) On Mon, Nov 07, 2016 at 12:18:10PM +0800, Qu Wenruo wrote: I'm totally wrong here. DirectIO needs the 'buf' parameter of read()/pread() to be 512 bytes aligned. While we are

Re: btrfs support for filesystems >8TB on 32bit architectures

2016-11-06 Thread Marc MERLIN
(sorry for the bad subject line from the mdadm list on the previous mail) On Mon, Nov 07, 2016 at 12:18:10PM +0800, Qu Wenruo wrote: > I'm totally wrong here. > > DirectIO needs the 'buf' parameter of read()/pread() to be 512 bytes > aligned. > > While we are using a lot of stack memory() and
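The constraint being discussed is that with O_DIRECT, both the buffer address and the transfer size passed to read()/pread() must be multiples of the sector size. The address part needs an aligned allocation (e.g. posix_memalign in C), which shell cannot show, but the size side of the rule is just rounding up; a small sketch of that arithmetic, with a hypothetical requested length:

```shell
# read()/pread() on an O_DIRECT fd need the transfer size aligned to
# the logical sector size. Round a requested byte count up to the
# next multiple of 512.
SECTOR=512
want=1000   # hypothetical requested byte count

aligned=$(( (want + SECTOR - 1) / SECTOR * SECTOR ))
echo "request $want bytes -> read $aligned bytes"   # prints 1024
```

Tools like dd handle this internally by allocating aligned buffers, which is why `iflag=direct` works there with any block size that is a multiple of 512.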

Re: clearing blocks wrongfully marked as bad if --update=no-bbl can't be used?

2016-11-06 Thread Qu Wenruo
At 11/07/2016 09:39 AM, Qu Wenruo wrote: At 11/07/2016 09:20 AM, Marc MERLIN wrote: On Mon, Nov 07, 2016 at 09:11:54AM +0800, Qu Wenruo wrote: Well, turns out you were right. My array is 14TB and dd was only able to copy 8.8TB out of it. I wonder if it's a bug with bcache and source

Re: clearing blocks wrongfully marked as bad if --update=no-bbl can't be used?

2016-11-06 Thread Qu Wenruo
At 11/07/2016 09:20 AM, Marc MERLIN wrote: On Mon, Nov 07, 2016 at 09:11:54AM +0800, Qu Wenruo wrote: Well, turns out you were right. My array is 14TB and dd was only able to copy 8.8TB out of it. I wonder if it's a bug with bcache and source devices that are too big? At least we know it's

Re: clearing blocks wrongfully marked as bad if --update=no-bbl can't be used?

2016-11-06 Thread Marc MERLIN
On Mon, Nov 07, 2016 at 09:11:54AM +0800, Qu Wenruo wrote: > > Well, turns out you were right. My array is 14TB and dd was only able to > > copy 8.8TB out of it. > > > > I wonder if it's a bug with bcache and source devices that are too big? > > At least we know it's not a problem of
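A copy that silently stops short, as described above (8.8TB of 14TB), can be caught by comparing byte counts after the fact. A sketch with hypothetical image files standing in for the devices; on real block devices `blockdev --getsize64` would give the size instead of stat:

```shell
# Sanity-check that a dd copy transferred the whole source by
# comparing byte counts afterwards. SRC/DST are demo stand-ins.
SRC=src.img
DST=dst.img
dd if=/dev/zero of="$SRC" bs=1024 count=16 2>/dev/null   # 16 KiB demo source

dd if="$SRC" of="$DST" bs=4096 2>/dev/null

src_bytes=$(stat -c %s "$SRC")
dst_bytes=$(stat -c %s "$DST")
if [ "$src_bytes" -eq "$dst_bytes" ]; then
    echo "copy complete: $dst_bytes bytes"
else
    echo "short copy: $dst_bytes of $src_bytes bytes"
fi
```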

Re: btrfs check --repair: ERROR: cannot read chunk root

2016-11-06 Thread Qu Wenruo
At 11/04/2016 04:01 PM, Marc MERLIN wrote: On Mon, Oct 31, 2016 at 09:21:40PM -0700, Marc MERLIN wrote: On Tue, Nov 01, 2016 at 12:13:38PM +0800, Qu Wenruo wrote: Would you try to locate the range where we start to fail to read? I still think the root problem is we failed to read the
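One way to "locate the range where reads start to fail", as asked above, is to walk the device in fixed-size chunks and report the first offset where a read errors out. A sketch against a hypothetical test image (a real run would point IMG at the affected device and likely use much larger chunks):

```shell
# Probe an image/device in 1 MiB chunks and report the first offset
# where a read fails. IMG is a hypothetical demo image.
IMG=scan-demo.img
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null

CHUNK=$((1024 * 1024))          # 1 MiB per probe
size=$(stat -c %s "$IMG")
off=0
while [ "$off" -lt "$size" ]; do
    if ! dd if="$IMG" of=/dev/null bs="$CHUNK" skip=$(( off / CHUNK )) \
            count=1 2>/dev/null; then
        echo "first read failure at byte offset $off"
        break
    fi
    off=$(( off + CHUNK ))
done
[ "$off" -ge "$size" ] && echo "scanned $size bytes with no read errors"
```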

Re: btrfs btree_ctree_super fault

2016-11-06 Thread Dave Jones
On Mon, Oct 31, 2016 at 01:44:55PM -0600, Chris Mason wrote: > On Mon, Oct 31, 2016 at 12:35:16PM -0700, Linus Torvalds wrote: > >On Mon, Oct 31, 2016 at 11:55 AM, Dave Jones > >wrote: > >> > >> BUG: Bad page state in process kworker/u8:12 pfn:4e0e39 > >>

Announcing btrfs-dedupe

2016-11-06 Thread James Pharaoh
Hi all, I'm pleased to announce my btrfs deduplication utility, written in Rust. It operates on whole files, is fast, and I believe it complements the existing utilities (duperemove, bedup). Please visit the homepage for more information: http://btrfs-dedupe.com James