If a sequential writer is writing in the middle of a page, it just redirties
the last written page by continuing from it.
In the above case, this can end up seeking back to that first redirtied
page after writing all the pages at the end of the file, because btrfs updates
mapping->writeback_in
Now that we bail out immediately if ->writepage() returns an error,
we don't need an extra variable to retain the error code.
Signed-off-by: Liu Bo
---
fs/btrfs/extent_io.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1
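The seek-back the changelog describes can be sketched with a toy model. This is a hedged simplification, not btrfs code: the page numbers, the cursor value, and the two-pass sweep are illustrative assumptions standing in for how a writeback cursor resumes from `writeback_index`, sweeps to end of file, and only then wraps around to the earlier redirtied page.

```shell
#!/bin/sh
# Toy model (assumption: simplified from the changelog, not btrfs code).
# A sequential writer redirties page 3 mid-write, then dirties pages 7..9.
# Writeback resumes from the saved cursor (here 7), sweeps to the end of
# file, and only then wraps around to pick up page 3 -- a seek back.
dirty="3 7 8 9"
writeback_index=7
order=""
# first pass: from the cursor to end of file
for p in $dirty; do
    [ "$p" -ge "$writeback_index" ] && order="$order $p"
done
# second pass: wrap around for pages before the cursor
for p in $dirty; do
    [ "$p" -lt "$writeback_index" ] && order="$order $p"
done
echo "write order:$order"
```

Page 3 is written last even though it was dirtied first, which on rotational storage means a seek back across the whole range just written.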
This is to test if COW enabled btrfs can end up with single 4k extents
when doing subpagesize buffered writes.
Signed-off-by: Liu Bo
---
v2: - Fix 027.out to make sure we don't get single 4k extents.
- Add the original mail list discussion as the reference.
tests/btrfs/027 | 97
This is to test if COW enabled btrfs can end up with single 4k extents
when doing subpagesize buffered writes.
Signed-off-by: Liu Bo
---
tests/btrfs/027 | 94 +
tests/btrfs/027.out | 2 ++
tests/btrfs/group | 1 +
3 files changed, 97 in
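The write pattern this test targets can be sketched as follows. This is a hedged illustration, not the actual tests/btrfs/027: the file name, chunk size, and counts are assumptions chosen to show "subpagesize buffered writes" (writes smaller than the 4K page size, issued sequentially).

```shell
#!/bin/sh
# Hedged sketch of the write pattern: 64 sequential 2K buffered writes,
# so every other write lands in the middle of a 4K page.
f=$(mktemp)
for i in $(seq 0 63); do
    dd if=/dev/zero of="$f" bs=2k count=1 seek="$i" conv=notrunc 2>/dev/null
done
size=$(wc -c < "$f")
echo "wrote $size bytes in 2K chunks"
# On a COW btrfs scratch file, one would then inspect the extent layout,
# e.g. with `filefrag -v "$f"`, to check for pathological single-4K extents.
rm -f "$f"
```

On a COW filesystem, each sub-page write can force a copy of the containing block, which is how a file written this way can end up as a long run of single 4K extents instead of a few large ones.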
On Mon, Mar 7, 2016 at 3:55 PM, Tobias Hunger wrote:
> Hi,
>
> I have been running systemd-nspawn containers on top of a btrfs
> filesystem for a while now.
>
> This works great: Snapshots are a huge help to manage containers!
>
> But today I ran btrfs subvol list . *inside* a container. To my
> s
Hi,
I have been running systemd-nspawn containers on top of a btrfs
filesystem for a while now.
This works great: Snapshots are a huge help to manage containers!
But today I ran btrfs subvol list . *inside* a container. To my
surprise I got a list of *all* subvolumes on that drive. That is
basic
On Mon, Mar 7, 2016 at 1:43 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Chris Murphy posted on Mon, 07 Mar 2016 12:44:20 -0700 as excerpted:
>
>> On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber
>> wrote:
>>> And this is really something to be proud of? I mean, this is a file
>>> system that is part of t
Chris Murphy posted on Mon, 07 Mar 2016 12:44:20 -0700 as excerpted:
> On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber
> wrote:
>> And this is really something to be proud of? I mean, this is a file
>> system that is part of the vanilla linux kernel, not marked as
>> experimental or something, and you
Marc Haber posted on Mon, 07 Mar 2016 09:30:43 +0100 as excerpted:
> I have dug around in my auth.logs, and thanks to my not working in a root
> shell but using sudo for every single command I can say that the
> filesystem was created on September 1, 2015, so it is not _this_ old,
> and snapshot.de
On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber wrote:
> And this is really something to be proud of? I mean, this is a file
> system that is part of the vanilla linux kernel, not marked as
> experimental or something, and you're still concerned about file
> systems that were made a year ago? This is a
On Mon, Mar 07, 2016 at 01:56:54PM -0500, Austin S. Hemmelgarn wrote:
> Yeah, in general, if you want to get good upstream support for BTRFS (such
> as from the mailing lists), you still want to steer clear of 'Enterprise'
> branded distros (RHEL (and by extension CentOS) is particularly bad about
On Mon, Mar 7, 2016 at 11:56 AM, Austin S. Hemmelgarn
wrote:
> People don't often think about it, but given the degree of code and
> version divergence due to patches, RHEL, SLES, and OEL kernels are strictly
> speaking, forks of Linux (most distro kernels are, but usually not to the
> extreme de
On 2016-03-07 13:39, Chris Murphy wrote:
> On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber wrote:
>> [1] Does RHEL 6 have btrfs in the first place?
> They do, but you need a decoder ring to figure out what's been
> backported to have some vague idea of what equivalent kernel.org
> kernel it is.
Yeah, in gen
On Mon, Mar 07, 2016 at 07:15:24PM +0100, Garmine 42 wrote:
> According to the manpage duplicate -s is valid and the high CPU usage is
> intended. Although a warning could be valid in case of -ss.
Or use a different letter. Anyway, that was my stupidity and no
developer time should be wasted for t
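The "duplicate -s" behavior discussed above follows from getopt-style parsing, where a repeated flag typically bumps a counter. The snippet below mirrors (but is not) btrfs-image's own option handling; the `parse` function and variable names are illustrative assumptions. It shows how `-s -t 8 -s`, the flags from the command in this thread, ends up at the same level as `-ss`.

```shell
#!/bin/sh
# Hedged illustration: with getopts, each occurrence of -s raises a
# counter, so `-s -t 8 -s` parses to the same level as `-ss` (which the
# manpage discussion above says selects a more CPU-intensive mode).
sanitize=0
threads=1
parse() {
    while getopts "st:" opt; do
        case $opt in
            s) sanitize=$((sanitize + 1)) ;;  # each -s raises the level
            t) threads=$OPTARG ;;             # worker thread count
        esac
    done
}
parse -s -t 8 -s   # same flags as the btrfs-image command in this thread
echo "sanitize level: $sanitize, threads: $threads"
```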
On Mon, Mar 07, 2016 at 11:09:57AM -0700, Chris Murphy wrote:
> On Mon, Mar 7, 2016 at 10:38 AM, Marc Haber
> wrote:
> > On Mon, Mar 07, 2016 at 06:27:17PM +0100, Marc Haber wrote:
> >> how long is btrfs-image taking to run on a 400 GiB filesystem?
> >>
> >> I have /bin/btrfs-image -s -t 8 -s /de
On Mon, Mar 7, 2016 at 1:42 AM, Marc Haber wrote:
> On Sun, Mar 06, 2016 at 01:27:10PM -0700, Chris Murphy wrote:
>> On the one hand, the practical advice is to just blow it away and use
>> everything current, go back to the same workload including thousands
>> of snapshots, and see if this bala
On Mon, Mar 7, 2016 at 10:38 AM, Marc Haber wrote:
> On Mon, Mar 07, 2016 at 06:27:17PM +0100, Marc Haber wrote:
>> how long is btrfs-image taking to run on a 400 GiB filesystem?
>>
>> I have /bin/btrfs-image -s -t 8 -s /dev/mapper/mydevice - | pixz -9 >
>> file.on.other.fs running for four hours
On Mon, Mar 07, 2016 at 06:27:17PM +0100, Marc Haber wrote:
> how long is btrfs-image taking to run on a 400 GiB filesystem?
>
> I have /bin/btrfs-image -s -t 8 -s /dev/mapper/mydevice - | pixz -9 >
> file.on.other.fs running for four hours now
Strike my question please, I didn't see that I had t
Hi,
how long is btrfs-image taking to run on a 400 GiB filesystem?
I have /bin/btrfs-image -s -t 8 -s /dev/mapper/mydevice - | pixz -9 >
file.on.other.fs running for four hours now, and it's constantly
taking a single core, but is neither reading from the disk nor writing
to its output.
Is that
On Tue, Feb 23, 2016 at 01:59:11PM -0800, Marc MERLIN wrote:
> I have a freshly created md5 array, with drives that I specifically
> scanned one by one block by block, and for good measure, I also scanned
> the entire software raid with a check command which took 3 days to run.
>
> Everything pass
On Sat, Mar 05, 2016 at 09:09:09PM +0100, Marc Haber wrote:
> On Sat, Mar 05, 2016 at 12:34:09PM -0700, Chris Murphy wrote:
> > So understanding the usage is important to figuring out what's
> > happening. I'd file a bug and include as much information on how the
> > fs got into this state as po
On Sun, Mar 06, 2016 at 01:37:31PM -0700, Chris Murphy wrote:
> On Sun, Mar 6, 2016 at 1:27 PM, Chris Murphy wrote:
> > So if it were me, I'd gather all possible data, including complete,
> > not trimmed, logs.
>
> Also include in the bug, the balance script being used. It might be a
> contributi
On Sun, Mar 06, 2016 at 01:27:10PM -0700, Chris Murphy wrote:
> Marc said it was created maybe 2 years ago and doesn't remember what
> version of the tools were used. Between it being two years ago and
> also being Debian, for all we know it could've been 0.19. *shrug*
You are mixing up Debian uns
On Sun, Mar 06, 2016 at 06:43:46AM +, Duncan wrote:
> Marc Haber posted on Sat, 05 Mar 2016 21:09:09 +0100 as excerpted:
> > On Sat, Mar 05, 2016 at 12:34:09PM -0700, Chris Murphy wrote:
> >> Something is happening with the usage of this file system that's out of
> >> the ordinary. This is the
On Sat, Mar 05, 2016 at 10:25:08AM -0800, Christoph Hellwig wrote:
> I'm not sure xfstests is the right fit, as it does not test a file
> system, but rather block devices.
I asked Dave if he'd take a fallocate-for-bdevs test, and he didn't
object. After all, we're testing a semi-standard FS API,