On 04/30/2015 06:48 AM, Mike Galbraith wrote:
> On Thu, 2015-04-30 at 05:58 -0700, Daniel Phillips wrote:
>> On Thursday, April 30, 2015 5:07:21 AM PDT, Mike Galbraith wrote:
>>> On Thu, 2015-04-30 at 04:14 -0700, Daniel Phillips wrote:
>>>
>>>> Lovely sounding argument, but it is wrong because Tux3 still ...
On Thursday, April 30, 2015 5:07:21 AM PDT, Mike Galbraith wrote:
On Thu, 2015-04-30 at 04:14 -0700, Daniel Phillips wrote:
Lovely sounding argument, but it is wrong because Tux3 still beats XFS
even with seek time factored out of the equation.
Hm. Do you have big-storage comparison numbers
On Wednesday, April 29, 2015 5:20:08 PM PDT, Dave Chinner wrote:
It's easy to be fast on empty filesystems. XFS does not aim to be
fast in such situations - it aims to have consistent performance
across the life of the filesystem.
In this case, ext4, btrfs and tux3 have optimal allocation
On Wednesday, April 29, 2015 8:50:57 PM PDT, Mike Galbraith wrote:
On Wed, 2015-04-29 at 13:40 -0700, Daniel Phillips wrote:
That order of magnitude latency difference is striking. It sounds
good, but what does it mean? I see a smaller difference here, maybe
because of running under KVM
On Wednesday, April 29, 2015 6:46:16 PM PDT, Dave Chinner wrote:
I measured fsync performance using a 7200 RPM disk as a virtual
drive under KVM, configured with cache=none so that asynchronous
writes are cached and synchronous writes translate into direct
writes to the block device.
Yup, a
Hi Ted,
On 04/30/2015 07:57 AM, Theodore Ts'o wrote:
This is one of the reasons why I find head-to-head competitions
between file systems to be not very helpful for anything other than
benchmarketing. It's almost certain that the benchmark won't be
fair in some way, and it doesn't really
On 04/30/2015 07:28 AM, Howard Chu wrote:
Daniel Phillips wrote:
On 04/30/2015 06:48 AM, Mike Galbraith wrote:
On Thu, 2015-04-30 at 05:58 -0700, Daniel Phillips wrote:
On Thursday, April 30, 2015 5:07:21 AM PDT, Mike Galbraith wrote:
On Thu, 2015-04-30 at 04:14 -0700, Daniel Phillips wrote
On 04/30/2015 07:33 AM, Mike Galbraith wrote:
Well ok, let's forget bad blood, straw men... and answering my question
too I suppose. Not having any sexy IO gizmos in my little desktop box,
I don't care deeply which stomps the other flat on beastly boxen.
I'm with you, especially the forget
On Wednesday, April 29, 2015 12:05:26 PM PDT, Mike Galbraith wrote:
Here's something that _might_ interest xfs folks.
cd git (source repository of git itself)
make clean
echo 3 > /proc/sys/vm/drop_caches
time make -j8 test
ext4  2m20.721s
xfs   6m41.887s <-- ick
btrfs 1m32.038s
tux3
On Wednesday, April 29, 2015 9:42:43 AM PDT, Mike Galbraith wrote:
[dbench bakeoff]
With dbench v4.00, tux3 seems to be king of the max_latency hill, but
btrfs took throughput on my box. With v3.04, tux3 took 1st place at
splashing about in pagecache, but last place at dbench -S.
Hohum,
On Tuesday, April 28, 2015 11:33:33 PM PDT, Mike Galbraith wrote:
On Tue, 2015-04-28 at 23:01 -0700, Daniel Phillips wrote:
On Tuesday, April 28, 2015 7:21:11 PM PDT, Mike Galbraith wrote:
Where does tux3 live? What I found looked abandoned.
Current work is here:
https://github.com
On Tuesday, April 28, 2015 11:20:08 PM PDT, Richard Weinberger wrote:
On Wed, Apr 29, 2015 at 8:01 AM, Daniel Phillips wrote:
On Tuesday, April 28, 2015 7:21:11 PM PDT, Mike Galbraith wrote: ...
Where can I find the fsync code?
IOW how to reproduce your results? :)
Hi Richard,
If you can
On Tuesday, April 28, 2015 7:21:11 PM PDT, Mike Galbraith wrote:
Where does tux3 live? What I found looked abandoned.
Current work is here:
https://github.com/OGAWAHirofumi/linux-tux3
Note, the new fsync code isn't pushed to that tree yet, however Hirofumi's
optimized syncfs is already in
Greetings,
This post is dedicated to Ted, who raised doubts a while back about
whether Tux3 can ever have a fast fsync:
https://lkml.org/lkml/2013/5/11/128
"Re: Tux3 Report: Faster than tmpfs, what?"
Ted suggested that Tux3's inherently asynchronous design would be a
limitation when it
On Tuesday, June 24, 2014 4:52:15 AM PDT, James Bottomley wrote:
On Tue, 2014-06-24 at 04:27 -0700, Daniel Phillips wrote:
I emphatically disagree that it is premature for asking Tux3 to be
merged. You might think so, but I do not. While I do not begrudge
you your opinion, Linux did not get
On Tuesday, June 24, 2014 3:59:40 AM PDT, Theodore Ts'o wrote:
On Tue, Jun 24, 2014 at 02:10:52AM -0700, Daniel Phillips wrote:
That makes sense, because the patches to transform our workarounds
into shiny new kernel hooks are still in progress, as I said. I would
appreciate the courtesy
On Saturday, June 21, 2014 6:06:00 PM PDT, Dave Chinner wrote:
BTW, it's worth noting that reviewers are *allowed* to change their
mind at any time during a discussion or during review cycles.
Indeed, this occurs quite commonly. It's no different to multiple
reviewers disagreeing on what the
On Monday, June 23, 2014 9:41:30 PM PDT, James Bottomley wrote:
[rhetoric snipped]
... I'm still arguing the facts: proving
that page forking can be integrated into writeback without adding to the
maintenance burden is a big issue for tux3.
Sorry, I must have missed those facts, I only saw
On Sunday, June 22, 2014 11:34:50 AM PDT, Theodore Ts'o wrote:
On Sat, Jun 21, 2014 at 08:32:03PM -0700, Daniel Phillips wrote:
That's a bit disingenuous: the concern has always been how page forking
interacted with writeback. It's not new, it was one of the major
things
brought up at LSF
On Saturday, June 21, 2014 12:29:01 PM PDT, James Bottomley wrote:
On Thu, 2014-06-19 at 14:58 -0700, Daniel Phillips wrote:
On Thursday, June 19, 2014 2:26:48 AM PDT, Lukáš Czerner wrote:
...
the concern has always been how page forking interacted with
writeback.
More accurately
On Saturday, June 21, 2014 12:29:01 PM PDT, James Bottomley wrote:
On Thu, 2014-06-19 at 14:58 -0700, Daniel Phillips wrote:
We already removed 450 lines of core kernel workarounds from Tux3 with
an
approach that was literally cut and pasted from one of Dave's
emails. Then
Dave changed his
On Thursday, June 19, 2014 9:24:10 AM PDT, Josef Bacik wrote:
On 05/16/2014 05:50 PM, Daniel Phillips wrote:
We would like to offer Tux3 for review for mainline merge. We
have prepared a new repository suitable for pulling:
https://git.kernel.org
On Thursday, June 19, 2014 2:26:48 AM PDT, Lukáš Czerner wrote:
On Thu, 19 Jun 2014, Pavel Machek wrote:
Date: Thu, 19 Jun 2014 10:21:29 +0200
From: Pavel Machek
To: James Bottomley
Cc: Daniel Phillips , Dave Chinner
,
linux-kernel@vger.kernel.org, linux-fsde...@vger.kernel.org
On Friday, June 13, 2014 1:20:39 PM PDT, Pavel Machek wrote:
Hi!
On Fri 2014-06-13 10:49:39, Daniel Phillips wrote:
Hi Pavel, On Friday, June 13, 2014 3:32:16 AM PDT, Pavel Machek wrote:
...
Actually, would it make sense to have staging/fs/?
That makes sense to me, if a suitably expert
Hi Pavel, On Friday, June 13, 2014 3:32:16 AM PDT, Pavel Machek wrote:
Hmm, it seems that merging filesystems is getting harder over
time. Soon, it will be impossible to merge new filesystem.
My thought exactly, but it carries more weight coming from you.
It is getting more unpleasant to
On Tuesday, June 3, 2014 8:21:55 AM PDT, Jan Kara wrote:
On Tue 03-06-14 07:14:44, Christoph Hellwig wrote:
On Tue, Jun 03, 2014 at 04:05:31PM +0200, Jan Kara wrote:
...
So I agree per-bdi / per-sb matters only in simple setups but machines
with single rotating disk with several partitions
Hi Dave,
Here is a non-incremental patch. This implements your suggestion
from yesterday, except that the wb list lock is dropped before
calling ->writeback().
Regards,
Daniel
From d030d328757b160b39b252e82811a94843513cfc Mon Sep 17 00:00:00 2001
From: Daniel Phillips
Date: Tue, 3 Ju
Hi Dave,
On 06/02/2014 08:33 PM, Dave Chinner wrote:
> On Mon, Jun 02, 2014 at 01:02:29PM -0700, Daniel Phillips wrote:
>>
>> Redirty_tail nearly works, but "if (!list_empty(&wb->b_dirty))" is
>> not correct because the inode needs to end up on the dirty list
On 06/01/2014 08:30 PM, Dave Chinner wrote:
> I get very worried whenever I see locks inside inode->i_lock. In
> general, i_lock is supposed to be the innermost lock that is taken,
> and there are very few exceptions to that - the inode LRU list is
> one of the few.
I generally trust Hirofumi to
On 06/01/2014 08:15 PM, Dave Chinner wrote:
> On Sun, Jun 01, 2014 at 02:41:02PM -0700, I wrote:
>> +
>> +/*
>> + * Add inode to writeback dirty list with current time.
>> + */
>> +void inode_writeback_touch(struct inode *inode)
>> +{
>> +struct backing_dev_info *bdi = inode->i_sb->s_bdi;
>>
...appears young to fs-writeback and does not trigger
repeated ->writeback flushes.
Signed-off-by: Daniel Phillips
---
fs/tux3/Makefile | 2 +-
fs/tux3/commit.c | 1 -
fs/tux3/commit_flusher.c | 180 ++
fs/tux3/commit_flusher.h | 16 --
f
h that is avoided.
Regards,
Daniel
---
From: Daniel Phillips
Subject: [PATCH] Add a super operation for writeback
Add a "writeback" super operation to be called in the
form:
progress = sb->s_op->writeback(sb, wbc, pages);
The filesystem is expected to flush some inodes
Hi Dongsu,
On Thursday, May 22, 2014 2:52:27 AM PDT, Dongsu Park wrote:
First of all, thank you for trying to merge it to mainline.
Maybe I cannot say the code is clean enough, but basically
the filesystem seems to work at least.
Thank you for confirming that. We test Tux3 extensively so we
Hi Dave,
This is to address your concern about theoretical interaction between
direct IO and Tux3 page fork.
On Monday, May 19, 2014 10:41:40 PM PDT, I wrote:
Except that Direct IO impacts on the design of the page forking code
(because of how things like get_user_pages() need to be aware of
On Friday, May 16, 2014 10:29:43 PM PDT, I wrote:
Hirofumi is the one who deserves congratulations,
recognition for providing more than half the code including most
of the hard parts, and thanks for bringing Tux3 back to life.
An epilogue... one gentleman took that suggestion seriously and
On Monday, May 19, 2014 8:18:02 PM PDT, Dave Chinner wrote:
On Mon, May 19, 2014 at 05:55:30PM -0700, Daniel Phillips wrote:
On 05/18/2014 04:55 PM, Dave Chinner wrote:
...
I'm not commenting on the c99 comment style, I'm passing comment on
the fact that a filesystem that has commented out
On 05/18/2014 04:55 PM, Dave Chinner wrote:
On Fri, May 16, 2014 at 05:50:59PM -0700, Daniel Phillips wrote:
We would like to offer Tux3 for review for mainline merge. We have
prepared a new repository suitable for pulling:
https://git.kernel.org/cgit/linux/kernel/git/daniel/linux-tux3.git
On Friday, May 16, 2014 10:09:50 PM PDT, Martin Steigerwald wrote:
Hi Daniel!
Am Freitag, 16. Mai 2014, 17:50:59 schrieb Daniel Phillips:
We would like to offer Tux3 for review for mainline merge. We have
prepared a new repository suitable for pulling:
At long last!
Congrats for arriving
We would like to offer Tux3 for review for mainline merge. We have
prepared a new repository suitable for pulling:
https://git.kernel.org/cgit/linux/kernel/git/daniel/linux-tux3.git/
Tux3 kernel module files are here:
Today is May Day, an ancient festival day. A time for dancing, singing,
feasting and refactoring repository topology. We are preparing Tux3
now for review with a view to mainline merge.
Our repository is currently on Github:
https://github.com/OGAWAHirofumi/tux3
This repository needs some
Yesterday I wrote:
> When we checked read performance on the untarred tree, we immediately saw
> mixed results. Re-tarring the kernel tree is faster than Ext4, but
> directory listing is slower by a multiple. So we need to analyze and fix
> ls without breaking the good tar and untar behavior.
Hi,
It is about time to post Tux3 patches for review. Almost. Running the
classic kernel untar test on an older 60 MB/sec hard disk turned up a
performance gap versus Ext4 by a factor of six. Hmm, that is in the
range where it could be a basic design bug, we need to do something.
Tux3
At Korea Linux Forum last fall, Linus asked, "Haven't I been hearing
about Tux3 for ten years?" I said, "No, that was Tux2, completely
different. You only heard about Tux3 for six years."
Well, that is about long enough to keep hearing about an out of tree
filesystem. Last time we talked about
Hi Willy,
I understand completely. I don't blame you. Filter the thread. Done.
I am not tired of the subject, quite the contrary. Please do not speak
for me in that regard. After many years of wandering in the toxic
wasteland, finally some actual progress.
Regards,
Daniel
--
To unsubscribe
On 07/25/2013 02:34 PM, Willy Tarreau wrote:
> Guys, could we please stop this endless boring thread ?
Willy, I believe we are on the same side of the civility debate, but I
somehow got the feeling that you just characterized my comment re "open
and honest" as "endless and boring".
I agree
On 07/24/2013 12:51 AM, Felipe Contreras wrote:
> Your mistaken fallacy seems to be that you think one can *always* be
> both A (open and honest), and B (polite)...
You are are right, I do think that you can *always* be both open and
honest, and polite. I do not believe that I am mistaken. And I
On 07/22/2013 09:02 PM, Luck, Tony wrote:
> Some thoughts on the format of the discussion at KS:
>
> ...
>
5) Volunteers are under-represented at Kernel Summit
Volunteers are the "dark matter" of Linux Kernel contribution. They are
not the "usual suspects" who nearly all have full time jobs
On 07/22/2013 09:02 PM, Luck, Tony wrote:
Some thoughts on the format of the discussion at KS:
...
5) Volunteers are under-represented at Kernel Summit
Volunteers are the dark matter of Linux Kernel contribution. They are
not the usual suspects who nearly all have full time jobs now,
On 07/20/2013 12:36 PM, Felipe Contreras wrote:
> I think you need more than "hope" to change one of the fundamental
> rules of LKML; be open and honest, even if that means expressing your
> opinion in a way that others might consider offensive and colorful.
Logical fallacy type: bifurcation. You
On 07/18/2013 03:54 PM, Sarah Sharp wrote:
> Let's shift this discussion away from the terms "abuse" and
> "professionalism" to "respect" and "civility".
Brilliant, and +1 for a session at KS. In the mean time, why don't we
all try to demonstrate the real meaning of respect and civility, by
Hi Ted,
On Thu, Jun 27, 2013 at 11:36 AM, Theodore Ts'o wrote:
> If the goal is to optimize file system freeze operations, sure. But
> that's also not a common operation, so if it burns a little extra CPU
> time, it wouldn't cause me to think that it was a high priority issue.
> Decreasing the
On Wed, Jun 26, 2013 at 10:18 PM, OGAWA Hirofumi wrote:
> ...vfs can't know the data is whether after sync point or before
> sync point, and have to wait or not. FS is using the behavior like
> data=journal has tracking of those already, and can reuse it.
Clearly showing why the current
Hi Dave,
On Wed, Jun 26, 2013 at 9:47 PM, Dave Chinner wrote:
> You have your own wait code, that doesn't make what the VFS does
> unnecesary. Quite frankly, I don't trust individual filesystems to
> get it right - there's a long history of filesystem specific data
> sync problems (including in
101 - 200 of 1437 matches