On Tue, Apr 08, 2014 at 04:40:32PM -0400, Sasha Levin wrote:
On 03/30/2014 08:40 PM, Dave Chinner wrote:
On Mon, Mar 31, 2014 at 12:57:17AM +0100, Al Viro wrote:
On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
filldir on a directory inode vs page fault on a regular file. Known
issue, definitely a false positive. We have to change locking
algorithms to avoid such deficiencies of lockdep (a case of lockdep
considered harmful, perhaps?) so it's not something I'm about to
rush...
Cheers,
Dave.
On Mon, Mar 31, 2014 at 12:57:17AM +0100, Al Viro wrote:
On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
filldir on a directory inode vs page fault on a regular file. Known
issue, definitely a false positive. We have to change locking
algorithms to avoid such deficiencies
On Sun, Mar 30, 2014 at 08:20:30PM -0400, Dave Jones wrote:
On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
On Sat, Mar 29, 2014 at 06:31:09PM -0400, Dave Jones wrote:
Not sure if I've reported this already (it looks familiar, though I've
not managed to find it in my
On Thu, Jun 27, 2013 at 04:22:04PM +0900, OGAWA Hirofumi wrote:
Dave Chinner da...@fromorbit.com writes:
Otherwise, the VFS can't know whether the data is from after or before the
sync point, and so whether it has to wait or not. An FS using behavior like
data=journal has tracking of those already
On Thu, Jun 27, 2013 at 05:55:43PM +1000, Dave Chinner wrote:
On Wed, Jun 26, 2013 at 08:22:55PM -0400, Dave Jones wrote:
On Wed, Jun 26, 2013 at 09:18:53PM +0200, Oleg Nesterov wrote:
On 06/25, Dave Jones wrote:
Took a lot longer to trigger this time. (13 hours of runtime
On Thu, Jun 27, 2013 at 08:06:12PM +1000, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 05:55:43PM +1000, Dave Chinner wrote:
On Wed, Jun 26, 2013 at 08:22:55PM -0400, Dave Jones wrote:
On Wed, Jun 26, 2013 at 09:18:53PM +0200, Oleg Nesterov wrote:
On 06/25, Dave Jones wrote
On Thu, Jun 27, 2013 at 11:21:51AM -0400, Dave Jones wrote:
On Thu, Jun 27, 2013 at 10:52:18PM +1000, Dave Chinner wrote:
Yup, that's about three orders of magnitude faster on this
workload
Lightly smoke tested patch below - it passed the first round of
XFS data
On Thu, Jun 27, 2013 at 10:30:55AM -0400, Dave Jones wrote:
On Thu, Jun 27, 2013 at 05:55:43PM +1000, Dave Chinner wrote:
Is this just a soft lockup warning? Or is the system hung?
I've only seen it completely lock up the box 2-3 times out of dozens
of times I've seen this, and tbh
On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
On Thu, Jun 27, 2013 at 3:18 PM, Dave Chinner da...@fromorbit.com wrote:
Right, that will be what is happening - the entire system will go
unresponsive when a sync call happens, so it's entirely possible
to see the soft
On Fri, Jun 28, 2013 at 11:13:01AM +1000, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 11:21:51AM -0400, Dave Jones wrote:
On Thu, Jun 27, 2013 at 10:52:18PM +1000, Dave Chinner wrote:
Yup, that's about three orders of magnitude faster on this
workload
Lightly
on XFS and
ext4 in 3.10-rc6:
https://lkml.org/lkml/2013/6/27/772
Cheers,
Dave.
On Thu, Jun 27, 2013 at 07:59:50PM -1000, Linus Torvalds wrote:
On Thu, Jun 27, 2013 at 5:54 PM, Dave Chinner da...@fromorbit.com wrote:
On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
So what made it all start happening now? I don't recall us having had
these kinds
On Thu, Jun 27, 2013 at 04:54:11PM +0200, Michal Hocko wrote:
On Thu 27-06-13 09:24:26, Dave Chinner wrote:
On Wed, Jun 26, 2013 at 10:15:09AM +0200, Michal Hocko wrote:
On Tue 25-06-13 12:27:54, Dave Chinner wrote:
On Tue, Jun 18, 2013 at 03:50:25PM +0200, Michal Hocko wrote
On Fri, Jun 28, 2013 at 12:28:19PM +0200, Jan Kara wrote:
On Fri 28-06-13 13:58:25, Dave Chinner wrote:
writeback: store inodes under writeback on a separate list
From: Dave Chinner dchin...@redhat.com
When there are lots of cached inodes, a sync(2) operation walks all
of them
walk interface it has to hide the fact it is actually
using per-node lists and locks...
Cheers,
Dave.
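Not from the patch itself - a minimal editorial sketch of the approach
described above, with illustrative names: when an inode first has pages
queued for writeback it is moved onto a dedicated list, so sync(2) only
has to walk inodes that can actually have I/O outstanding.

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* Illustrative only: a per-superblock list of inodes under writeback. */
    struct wb_inode_list {
            spinlock_t              lock;
            struct list_head        head;
    };

    /* Called when an inode first has pages queued for writeback. */
    static void wb_list_add(struct wb_inode_list *wbl, struct list_head *i_wb_list)
    {
            spin_lock(&wbl->lock);
            if (list_empty(i_wb_list))
                    list_add_tail(i_wb_list, &wbl->head);
            spin_unlock(&wbl->lock);
    }

    /* Called when writeback on the inode completes. */
    static void wb_list_del(struct wb_inode_list *wbl, struct list_head *i_wb_list)
    {
            spin_lock(&wbl->lock);
            list_del_init(i_wb_list);
            spin_unlock(&wbl->lock);
    }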
On Sun, Jun 30, 2013 at 12:05:31PM +1000, Dave Chinner wrote:
On Sat, Jun 29, 2013 at 03:23:48PM -0700, Linus Torvalds wrote:
On Sat, Jun 29, 2013 at 1:13 PM, Dave Jones da...@redhat.com wrote:
So with that patch, those two boxes have now been fuzzing away for
over 24hrs without seeing
On Sun, Jun 30, 2013 at 08:33:49PM +0200, Michal Hocko wrote:
On Sat 29-06-13 12:55:09, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 04:54:11PM +0200, Michal Hocko wrote:
On Thu 27-06-13 09:24:26, Dave Chinner wrote:
On Wed, Jun 26, 2013 at 10:15:09AM +0200, Michal Hocko wrote
On Mon, Jul 01, 2013 at 09:50:05AM +0200, Michal Hocko wrote:
On Mon 01-07-13 11:25:58, Dave Chinner wrote:
On Sun, Jun 30, 2013 at 08:33:49PM +0200, Michal Hocko wrote:
On Sat 29-06-13 12:55:09, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 04:54:11PM +0200, Michal Hocko wrote
On Mon, Jul 01, 2013 at 01:57:34PM -0400, Dave Jones wrote:
On Fri, Jun 28, 2013 at 01:54:37PM +1000, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
On Thu, Jun 27, 2013 at 3:18 PM, Dave Chinner da...@fromorbit.com
wrote:
Right
On Mon, Jul 01, 2013 at 02:00:37PM +0200, Jan Kara wrote:
On Sat 29-06-13 13:39:24, Dave Chinner wrote:
On Fri, Jun 28, 2013 at 12:28:19PM +0200, Jan Kara wrote:
On Fri 28-06-13 13:58:25, Dave Chinner wrote:
writeback: store inodes under writeback on a separate list
From: Dave
On Tue, Jul 02, 2013 at 02:01:46AM -0400, Dave Jones wrote:
On Tue, Jul 02, 2013 at 12:07:41PM +1000, Dave Chinner wrote:
On Mon, Jul 01, 2013 at 01:57:34PM -0400, Dave Jones wrote:
On Fri, Jun 28, 2013 at 01:54:37PM +1000, Dave Chinner wrote:
On Thu, Jun 27, 2013 at 04:54:53PM -1000
On Tue, Jul 02, 2013 at 11:22:00AM +0200, Michal Hocko wrote:
On Mon 01-07-13 18:10:56, Dave Chinner wrote:
On Mon, Jul 01, 2013 at 09:50:05AM +0200, Michal Hocko wrote:
On Mon 01-07-13 11:25:58, Dave Chinner wrote:
That is the recycle stat, which indicates we've found an inode being
On Tue, Jul 02, 2013 at 10:19:37AM +0200, Jan Kara wrote:
On Tue 02-07-13 16:29:54, Dave Chinner wrote:
We could, but we just end up in the same place with sync as we are
now - with a long list of clean inodes with a few inodes hidden in
it that are under IO. i.e. we still have
On Mon, Jun 23, 2014 at 04:27:14PM -0400, Dave Jones wrote:
On Thu, Jun 19, 2014 at 12:03:40PM +1000, Dave Chinner wrote:
On Fri, Jun 13, 2014 at 10:19:25AM -0400, Dave Jones wrote:
On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:
970 if (WARN_ON_ONCE
-wq which means progress
is made and the inode gets unlocked. Then the kworker for the work
on the xfs-data queue will get the lock, complete its work and
everything has resolved itself.
Cheers,
Dave.
On Mon, Jun 23, 2014 at 11:25:21PM -0400, Tejun Heo wrote:
Hello,
On Tue, Jun 24, 2014 at 01:02:40PM +1000, Dave Chinner wrote:
As I understand it, what then happens is that the workqueue code
grabs another kworker thread and runs the next work item in its
queue. IOWs, work items can
On Wed, Jun 25, 2014 at 10:18:36AM -0400, Tejun Heo wrote:
Hello, Dave.
On Wed, Jun 25, 2014 at 03:56:41PM +1000, Dave Chinner wrote:
Hmmm - that's different from my understanding of what the original
behaviour WQ_MEM_RECLAIM gave us. i.e. that WQ_MEM_RECLAIM
workqueues had a rescuer
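For context, a hedged illustration of the kind of workqueue being
discussed: a WQ_MEM_RECLAIM workqueue gets a dedicated rescuer thread, so
at least one work item can make forward progress when no new kworkers can
be created under memory pressure. The name and flags below are modelled
on the xfs-data queue but are not copied from the XFS source.

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    static struct workqueue_struct *data_wq;

    static int init_data_workqueue(const char *fsname)
    {
            /* WQ_MEM_RECLAIM guarantees a rescuer thread for this queue. */
            data_wq = alloc_workqueue("xfs-data/%s",
                                      WQ_MEM_RECLAIM | WQ_FREEZABLE,
                                      0, fsname);
            return data_wq ? 0 : -ENOMEM;
    }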
in between
benchmark runs. There is no need to add kernel functionality for
something that can be so easily achieved by other means, especially
in benchmark environments where *everything* is tightly controlled.
Cheers,
Dave.
On Fri, Jun 13, 2014 at 10:19:25AM -0400, Dave Jones wrote:
On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:
970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
971                          PF_MEMALLOC))
What were you running at the time? The XFS
by the change know how to tune CFQ is a bad idea.
When CFQ misbehaves, most people just switch to deadline or no-op
because they don't understand how CFQ works, nor what all the
knobs do or which ones to tweak to solve their problem
Cheers,
Dave.
the iovec iterator stuff
that got merged in from the vfs tree. ISTR a 32-bit-only bug in that
stuff going past, to do with not being able to partition a 32GB block
dev on a 32 bit system due to a 32 bit size_t overflow somewhere
Cheers,
Dave.
to me that vfat-on-mmc needs fixing...
Cheers,
Dave.
On Fri, Jun 20, 2014 at 12:30:25PM +0100, Mel Gorman wrote:
On Fri, Jun 20, 2014 at 07:42:14AM +1000, Dave Chinner wrote:
On Thu, Jun 19, 2014 at 02:38:44PM -0400, Jeff Moyer wrote:
Mel Gorman mgor...@suse.de writes:
The existing CFQ default target_latency results in very poor
a long time
ago if reviewers weren't allowed to change their minds
Cheers,
Dave.
struct some_other_struct {
        struct vq       vq[MAX_NUM_VQ];
};
This keeps locality to objects within a queue, but separates each
queue onto its own cacheline
Cheers,
Dave.
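One way to express that separation, as an editorial sketch (the vq
contents and the MAX_NUM_VQ value here are placeholders, not the real
virtio definitions): align each queue structure to a cacheline boundary
so array elements never share a line.

    #include <linux/cache.h>
    #include <linux/spinlock.h>

    #define MAX_NUM_VQ      8       /* placeholder value */

    struct vq {
            spinlock_t      lock;
            unsigned int    head;
            unsigned int    tail;
    } ____cacheline_aligned_in_smp;

    struct some_other_struct {
            struct vq       vq[MAX_NUM_VQ];
    };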
this scripting for you.
Cheers,
Dave.
the kernel to use a
virtual-size of 4096 for the sector as an additional
performance 'hint', so nothing will even try to use
smaller I/Os than that.
Just format the filesystem with 4k sector size.
Cheers,
Dave.
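An illustrative command, not from the thread, assuming an XFS target and
a placeholder device path: this formats the filesystem with 4096-byte
sectors so it never issues sub-4k I/O to the device.

    # /dev/sdX is a placeholder for the actual device
    mkfs.xfs -s size=4096 /dev/sdX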
Reviewed-by: Dave Chinner dchin...@redhat.com
Reviewed-by: Gao feng gaof...@cn.fujitsu.com
Signed-off-by: Dwight Engen dwight.en...@oracle.com
Signed-off-by: Ben Myers b...@sgi.com
[ kamal: 3.8-stable prereq for
23adbe1 fs,userns: Change inode_capable to capable_wrt_inode_uidgid ]
Signed-off
On Thu, Jun 26, 2014 at 09:13:19AM +0300, Artem Bityutskiy wrote:
On Thu, 2014-06-26 at 11:06 +1000, Dave Chinner wrote:
Your particular use case can be handled by directing your benchmark
at a filesystem mount point and unmounting the filesystem in between
benchmark runs. There is no need
, Dave Chinner wrote:
Your particular use case can be handled by directing your benchmark
at a filesystem mount point and unmounting the filesystem in between
benchmark runs. There is no need to add kernel functionality for
something that can be so easily achieved by other means, especially
it once complete. That guarantees that the file is not modified in
any way while userspace is doing the defrag...
Cheers,
Dave.
to be merged? If not, can you please revert the
optimistic spinning patch that caused the regression?
Cheers,
Dave.
journal, and so
on.
Fundamentally, ext3 performance is simply not a relevant performance
metric anymore - it's a legacy filesystem in maintenance mode and
has been for a few years now...
Cheers,
Dave.
Dave Chinner (3):
xfs: prevent deadlock trying to cover an active log
xfs: prevent stack overflows from page cache allocation
xfs: xfs_remove deadlocks due to inverted AGF vs AGI lock ordering
None of the XFS patches you're backporting were marked for stable.
What criteria did you choose
a significant performance regression due to:
4fc828e locking/rwsem: Support optimistic spinning
which changed the rwsem behaviour in 3.16-rc1.
Cheers,
Dave.
On Wed, Jul 02, 2014 at 10:09:34AM +0200, Jiri Slaby wrote:
On 07/02/2014 01:53 AM, Dave Chinner wrote:
On Mon, Jun 30, 2014 at 01:51:22PM +0200, Jiri Slaby wrote:
This is the start of the stable review cycle for the 3.12.24 release.
There are 181 patches in this series, all will be posted
On Wed, Jul 02, 2014 at 08:31:08PM -0700, Davidlohr Bueso wrote:
On Thu, 2014-07-03 at 12:32 +1000, Dave Chinner wrote:
Hi folks,
I've got a workload that hammers the mmap_sem via multi-threaded
memory allocation and page faults: it's called xfs_repair.
Another reason for concurrent
On Thu, Jul 03, 2014 at 02:59:33PM +1000, Dave Chinner wrote:
On Wed, Jul 02, 2014 at 08:31:08PM -0700, Davidlohr Bueso wrote:
On Thu, 2014-07-03 at 12:32 +1000, Dave Chinner wrote:
Hi folks,
I've got a workload that hammers the mmap_sem via multi-threaded
memory allocation and page
On Thu, Jul 03, 2014 at 09:38:52AM +0200, Peter Zijlstra wrote:
On Thu, Jul 03, 2014 at 03:39:11PM +1000, Dave Chinner wrote:
There's another regression with the optimistic spinning in rwsems
as well: it increases the size of the struct rw_semaphore by 16
bytes. That has increased the size
[re-added lkml]
On Thu, Jul 03, 2014 at 11:50:20AM -0700, Jason Low wrote:
On Wed, Jul 2, 2014 at 7:32 PM, Dave Chinner da...@fromorbit.com wrote:
This is what the kernel profile looks like on the strided run:
-  83.06%  [kernel]  [k] osq_lock
   - osq_lock
      - 100.00
need to support forever more.
Cheers,
Dave.
already lost more
performance than careful packing of the dentry slab cache gains you.
There's no point in carefully tuning DNAME_INLINE_LEN for debug
options - it's just code that will break and annoy people as debug
implementations change.
Cheers,
Dave.
On Thu, Jul 03, 2014 at 06:54:50PM -0700, Jason Low wrote:
On Thu, 2014-07-03 at 18:46 -0700, Jason Low wrote:
On Fri, 2014-07-04 at 11:01 +1000, Dave Chinner wrote:
FWIW, the rwsems in the struct xfs_inode are often heavily
read/write contended, so there are lots of IO related
On Fri, Jul 04, 2014 at 12:06:19AM -0700, Jason Low wrote:
On Fri, 2014-07-04 at 16:13 +1000, Dave Chinner wrote:
On Thu, Jul 03, 2014 at 06:54:50PM -0700, Jason Low wrote:
On Thu, 2014-07-03 at 18:46 -0700, Jason Low wrote:
On Fri, 2014-07-04 at 11:01 +1000, Dave Chinner wrote
On Thu, Dec 19, 2013 at 11:24:11AM -0500, Tejun Heo wrote:
Yo, Dave.
On Thu, Dec 19, 2013 at 03:08:21PM +1100, Dave Chinner wrote:
If knowing that the underlying device has gone away somehow helps
the filesystem, maybe we can expose that interface and avoid flushing
after hotunplug
On Thu, Dec 19, 2013 at 09:26:12AM -0600, Christoph Lameter wrote:
On Thu, 19 Dec 2013, Dave Chinner wrote:
On Wed, Dec 18, 2013 at 07:24:46PM +, Christoph Lameter wrote:
The counter increment in inode_lru_isolate is happening after
spinlocks have been dropped with preemption
dangerous.
Do you claim that it is now safe to mount (rw) and access filesystem
between suspend and resume?
No, I didn't claim that. "Less dangerous" is still dangerous, just
less so than it was before.
Cheers,
Dave.
performance regressions and a quota inode handling
regression.
Dave Chinner (3):
Revert xfs: block allocation work needs to be kswapd aware
xfs: refine the allocation stack switch
xfs: null unused quota inodes when
On Mon, Jul 28, 2014 at 03:21:20PM -0600, Andreas Dilger wrote:
On Jul 25, 2014, at 6:38 PM, Dave Chinner da...@fromorbit.com wrote:
On Fri, Jul 25, 2014 at 10:52:57AM -0700, Zach Brown wrote:
On Fri, Jul 25, 2014 at 01:37:19PM -0400, Abhijith Das wrote:
Hi all,
The topic
On Mon, Jul 28, 2014 at 08:22:22AM -0400, Abhijith Das wrote:
- Original Message -
From: Dave Chinner da...@fromorbit.com
To: Zach Brown z...@redhat.com
Cc: Abhijith Das a...@redhat.com, linux-kernel@vger.kernel.org,
linux-fsdevel linux-fsde...@vger.kernel.org,
cluster
() can be
applied.
Cheers,
Dave.
the
bulkstat call and the open-by-handle as the generation number in the
handle will no longer match that of the inode.
Cheers,
Dave.
On Thu, Jul 31, 2014 at 01:19:45PM +0200, Andreas Dilger wrote:
On Jul 31, 2014, at 6:49, Dave Chinner da...@fromorbit.com wrote:
On Mon, Jul 28, 2014 at 03:19:31PM -0600, Andreas Dilger wrote:
On Jul 28, 2014, at 6:52 AM, Abhijith Das a...@redhat.com wrote:
On July 26, 2014 12:27:19 AM
rounding error of fiemap length parameter
Christoph Hellwig (2):
xfs: remove xfs_bulkstat_single
xfs: require 64-bit sector_t
Dave Chinner (22):
xfs: create libxfs infrastructure
libxfs: move header files
libxfs: move source files
xfs: global error sign conversion
On Tue, Aug 12, 2014 at 11:56:12PM +1000, Stephen Rothwell wrote:
Hi Dave,
On Tue, 12 Aug 2014 22:53:13 +1000 Dave Chinner da...@fromorbit.com wrote:
FYI, this will be the last pull request I will send you from a tree
on oss.sgi.com. I'm moving everything XFS related over to kernel.org
in this patchset does address some of the problems with
spinning when there are readers. CC'ing Dave Chinner, who did the
testing with the xfs_repair workload.
This patch set enables proper reader spinning and so the problem
that we see with xfs_repair workload should go away. I should have
On Wed, Aug 13, 2014 at 12:41:06PM -0400, Waiman Long wrote:
On 08/13/2014 01:51 AM, Dave Chinner wrote:
On Mon, Aug 04, 2014 at 11:44:19AM -0400, Waiman Long wrote:
On 08/04/2014 12:10 AM, Jason Low wrote:
On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
The rwsem_can_spin_on_owner
On Thu, Jul 24, 2014 at 03:41:31PM -0700, yuanh wrote:
Hi all,
Two file descriptors are pointing to the same file. When fsync is called on
one fd, will the data written by the other fd also be flushed? We are using
Linux XFS.
Yes.
-Dave.
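A hedged userspace illustration of that answer, not from the thread: data
dirtied through one descriptor is flushed by fsync() on another descriptor
for the same file, because both refer to the same inode and page cache.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            int fd1 = open("testfile", O_CREAT | O_WRONLY, 0644);
            int fd2 = open("testfile", O_WRONLY);
            if (fd1 < 0 || fd2 < 0) {
                    perror("open");
                    return 1;
            }

            const char *msg = "written via fd2\n";
            if (write(fd2, msg, strlen(msg)) < 0)   /* dirty data via fd2 */
                    perror("write");

            if (fsync(fd1) < 0)                     /* flushed via fd1 */
                    perror("fsync");

            close(fd2);
            close(fd1);
            return 0;
    }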
be done in userspace, and even hidden within
the readdir() or ftw()/nftw() implementations themselves so it's OS,
kernel and filesystem independent...
Cheers,
Dave.
On Fri, Aug 01, 2014 at 07:54:56AM +0200, Andreas Dilger wrote:
On Aug 1, 2014, at 1:53, Dave Chinner da...@fromorbit.com wrote:
On Thu, Jul 31, 2014 at 01:19:45PM +0200, Andreas Dilger wrote:
None of these issues are relevant in the API that I'm thinking about.
The syscall just passes
On Fri, Aug 15, 2014 at 01:58:09PM -0400, Waiman Long wrote:
On 08/14/2014 11:34 PM, Dave Chinner wrote:
create sparse vm image file of 500TB on ssd with XFS on it
xfs_io -f -c "truncate 500t" -c "extsize 1m" /path/to/vm/image/file
Thanks for the testing recipe. I am afraid that I can't
(&inode->i_mutex));
WARN_ON(to > inode->i_size);
if (from >= to || bsize == PAGE_CACHE_SIZE)
Jan, Have you sent this patch upstream yet? I'm seeing it fire in
my testing in 3.18-rc1 kernels, so I was wondering what your plans
are for this...
Cheers,
Dave.
can't serialise page faults against the IO path and data manipulation
functions (e.g. hole punch). We shouldn't be repeating that disaster
if we can avoid it
Cheers,
Dave.
Brian Foster (2):
xfs: don't log inode unless extent shift makes extent modifications
xfs: trim eofblocks before collapse range
Chris Mason (1):
xfs: don't zero partial page cache pages during O_DIRECT writes
Dave Chinner (4):
xfs: don't dirty
memory filesystem to production
quality that can take full advantage of the technology...
Cheers,
Dave.
On Sun, Sep 14, 2014 at 03:25:45PM +0300, Boaz Harrosh wrote:
On 09/11/2014 07:38 AM, Dave Chinner wrote:
And so ext4 is buggy, because what ext4 does
... is not a retry - it falls back to a fundamentally different
code path. i.e:
sys_write()
new_sync_write
?
Cheers,
Dave.
that the kernel defines,
not the glibc posix wrapper
Cheers,
Dave.
On Mon, Oct 27, 2014 at 10:58:12AM +0100, Jan Kara wrote:
On Mon 27-10-14 12:04:22, Dave Chinner wrote:
On Thu, Oct 16, 2014 at 01:01:27PM +0200, Jan Kara wrote:
From de3426d6495f4b44b14c09b7c7202e9a86d864b9 Mon Sep 17 00:00:00 2001
From: Jan Kara j...@suse.cz
Date: Thu, 16 Oct 2014 12
with IOPRIO_ADV_WILLNEED to store frequently
accessed metadata in flash. Conversely, journal writes need to
be issued with IOPRIO_ADV_DONTNEED so they don't unnecessarily
consume flash space as they are never-read IOs...
Cheers,
Dave.
On Wed, Oct 29, 2014 at 03:10:51PM -0600, Jens Axboe wrote:
On 10/29/2014 02:14 PM, Dave Chinner wrote:
On Wed, Oct 29, 2014 at 11:23:38AM -0700, Jason B. Akers wrote:
The following series enables the use of Solid State hybrid drives
ATA standard 3.2 defines the hybrid information feature
On Wed, Oct 29, 2014 at 03:24:11PM -0700, Dan Williams wrote:
On Wed, Oct 29, 2014 at 3:09 PM, Dave Chinner da...@fromorbit.com wrote:
On Wed, Oct 29, 2014 at 03:10:51PM -0600, Jens Axboe wrote:
As for the fs accessing this, the io nice fields are readily exposed
through the ->bi_rw setting
On Wed, Sep 10, 2014 at 11:23:37AM -0400, Matthew Wilcox wrote:
On Wed, Sep 03, 2014 at 05:47:24PM +1000, Dave Chinner wrote:
+       error = get_block(inode, block, &bh, 0);
+       if (!error && (bh.b_size < PAGE_SIZE))
+               error = -EIO;
+       if (error)
+               goto unlock_page;
page
On Wed, Sep 10, 2014 at 07:49:40PM +0300, Boaz Harrosh wrote:
On 09/03/2014 02:13 PM, Dave Chinner wrote:
When direct IO fails ext4 falls back to buffered IO, right? And
dax_do_io() can return partial writes, yes?
There are no buffered writes with DAX. I.e. buffered writes
...@huawei.com
For the second time: use memalloc_noio_save/memalloc_noio_restore.
And please put a great big comment in the code explaining why you
need to do this special thing with memory reclaim flags.
Cheers,
Dave.
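For reference, the pattern being asked for looks roughly like this; the
function name and surrounding code are illustrative, not the driver under
review.

    #include <linux/sched.h>

    static void resume_prepare_buffers(void)
    {
            unsigned int noio_flags;

            /*
             * Memory reclaim entered from allocations inside this region
             * must not recurse into the I/O path, so mark the task
             * PF_MEMALLOC_NOIO for the duration.
             */
            noio_flags = memalloc_noio_save();
            /* ... allocate buffers, build requests, etc. ... */
            memalloc_noio_restore(noio_flags);
    }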
On Tue, Sep 02, 2014 at 05:03:27PM +0800, Xue jiufei wrote:
Hi, Dave
On 2014/9/2 7:51, Dave Chinner wrote:
On Fri, Aug 29, 2014 at 05:57:22PM +0800, Xue jiufei wrote:
The patch tries to solve one deadlock problem caused by cluster
fs, like ocfs2. And the problem may happen at least
On Wed, Sep 03, 2014 at 09:38:31AM +0800, Junxiao Bi wrote:
Hi Jiufei,
On 09/02/2014 05:03 PM, Xue jiufei wrote:
Hi, Dave
On 2014/9/2 7:51, Dave Chinner wrote:
On Fri, Aug 29, 2014 at 05:57:22PM +0800, Xue jiufei wrote:
The patch tries to solve one deadlock problem caused by cluster
On Wed, Sep 03, 2014 at 12:21:24PM +0800, Junxiao Bi wrote:
On 09/03/2014 11:10 AM, Dave Chinner wrote:
On Wed, Sep 03, 2014 at 09:38:31AM +0800, Junxiao Bi wrote:
Hi Jiufei,
On 09/02/2014 05:03 PM, Xue jiufei wrote:
Hi, Dave
On 2014/9/2 7:51, Dave Chinner wrote:
On Fri, Aug 29
. page fault into
preallocated space).
Cheers,
Dave.
dax: add IO completion callback for page faults
From: Dave Chinner dchin...@redhat.com
When a page fault drops into a hole, it needs to allocate an extent.
Filesystems may allocate unwritten extents so
. This is wrong, IMO,
dax_truncate_page() should remain as a function and it should
correctly calculate how much of the page should be trimmed, not
leave landmines that other code has to clean up...
(Yup, I'm tracking down a truncate bug in XFS from fsx...)
Cheers,
Dave.
capabilities...
Cheers,
Dave.
From: Dave Chinner dchin...@redhat.com
Add initial DAX support to XFS. This is EXPERIMENTAL, and it *will*
eat your data. You have been warned, and will be repeatedly warned
if you try to use it:
# mount -o dax /dev/ram0 /mnt/test
[ 2539.332402] XFS (ram0): DAX enabled. Warning: EXPERIMENTAL
of fixing this anomaly is going to be completely
unnoticeable...
Cheers,
Dave.
. The current code only masks the page
reclaim gfp_mask, not those that are passed to the shrinkers.
Cheers,
Dave.
PF_MEMALLOC_NOIO, then we can
introduce PF_MEMALLOC_NOFS and have the mm subsystem mask both flags
appropriately when setting the gfp_mask in the shrink_control
settings. But fundamentally, our reclaim hierarchy defines that NOIO
implies NOFS, and so we need to fix PF_MEMALLOC_NOIO anyway.
Cheers,
Dave.
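A hypothetical sketch of the masking being described, not actual mainline
code: derive the gfp_mask handed to the shrinkers from the caller's
process flags, so NOIO strips both __GFP_IO and __GFP_FS while the
proposed NOFS flag would strip only __GFP_FS.

    #include <linux/sched.h>
    #include <linux/gfp.h>

    /* PF_MEMALLOC_NOFS is the flag being proposed here, not an existing one. */
    static gfp_t shrinker_gfp_mask(gfp_t gfp_mask)
    {
            if (current->flags & PF_MEMALLOC_NOIO)
                    gfp_mask &= ~(__GFP_IO | __GFP_FS);     /* NOIO implies NOFS */
            else if (current->flags & PF_MEMALLOC_NOFS)
                    gfp_mask &= ~__GFP_FS;
            return gfp_mask;
    }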
correctly without resorting to games
like this.
Also, this patch doesn't have a description or a valid SOB on it.
Please read Documentation/SubmittingPatches so you get the format of
the patches correct for V2. ;)
Cheers,
Dave.
-cpu concurrency depth. If the kworker thread pool is depleted
then you have bigger problems than emergency sync not
deadlocking
Cheers,
Dave.
On Mon, Nov 10, 2014 at 09:44:05AM -0500, Johannes Weiner wrote:
On Mon, Nov 10, 2014 at 05:46:40PM +1100, Dave Chinner wrote:
On Thu, Nov 06, 2014 at 06:50:28PM -0500, Johannes Weiner wrote:
The slab shrinkers currently rely on the reclaim code providing an
ad-hoc concept of NUMA nodes