much more comprehensive fsdax test
coverage. That exercises a lot of the weird mmap corner cases that
cause problems, so it would be good to actually test that nothing new
got broken in FSDAX by this patchset.
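As a rough illustration of the kind of fsdax mmap corner case involved,
a minimal standalone check that a MAP_SYNC mapping still works on an
fsdax file might look like the sketch below; the mount point and file
name are assumptions for illustration, not taken from this thread:

	#include <stdio.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#ifndef MAP_SHARED_VALIDATE
	#define MAP_SHARED_VALIDATE	0x03
	#endif
	#ifndef MAP_SYNC
	#define MAP_SYNC		0x080000
	#endif

	int main(void)
	{
		/* assumed fsdax mount point, for illustration only */
		int fd = open("/mnt/pmem/maptest", O_CREAT | O_RDWR, 0644);
		void *p;

		if (fd < 0 || ftruncate(fd, 4096) < 0) {
			perror("setup");
			return 1;
		}

		/*
		 * MAP_SYNC is only allowed on DAX-capable mappings, so
		 * EOPNOTSUPP here means the file is not backed by fsdax.
		 */
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap(MAP_SYNC)");
			return 1;
		}

		memcpy(p, "dax", 4);	/* store through the mapping */
		munmap(p, 4096);
		close(fd);
		return 0;
	}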
-Dave.
--
Dave Chinner
da...@fromorbit.com
> struct latency_record latency_record[MAXLR];
> int latencytop_enabled;
>
> #ifdef CONFIG_SYSCTL
> -static int sysctl_latencytop(struct ctl_table *table, int write, void
> *buffer,
> - size_t *lenp, loff_t *ppos)
> +static int sysctl_latencytop(const struct ctl_table *table, int write,
> + void *buffer,
> + size_t *lenp, loff_t *ppos)
> {
> int err;
>
And this.
I could go on, but there are so many examples of this in the patch
that I think it needs to be tossed away and regenerated in a
way that doesn't trash the existing function parameter formatting.
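To illustrate what preserving the existing formatting means here, a
minimal version of the same const change that keeps the original
two-line parameter layout would look something like this (a sketch,
not the regenerated patch):

	-static int sysctl_latencytop(struct ctl_table *table, int write, void *buffer,
	-		size_t *lenp, loff_t *ppos)
	+static int sysctl_latencytop(const struct ctl_table *table, int write,
	+		void *buffer, size_t *lenp, loff_t *ppos)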
-Dave.
--
Dave Chinner
da...@fromorbit.com
confirm
this hypothesis yet.
I suspect the fix may well be to use xfs_trans_get_buf() in the
xfs_inode_item_precommit() path if XFS_ISTALE is already set on the
inode we are trying to log. We don't need a populated cluster buffer
to read data out of or write data into in this path - all we need to
do is attach the inode to the buffer so that when the buffer
invalidation is committed to the journal it will also correctly
finish the stale inode log item.
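A rough sketch of that idea, with the function names and signatures
below taken from current mainline XFS as assumptions rather than from
a tested patch (the non-stale case would keep doing whatever the
precommit path does today):

	if (xfs_iflags_test(ip, XFS_ISTALE)) {
		/*
		 * The inode is already stale: we don't need the cluster
		 * buffer contents, just a buffer to attach the inode to
		 * so the stale inode log item is finished when the
		 * buffer invalidation commits.
		 */
		error = xfs_trans_get_buf(tp, mp->m_ddev_targp,
				ip->i_imap.im_blkno, ip->i_imap.im_len,
				0, &bp);
		if (error)
			return error;
	}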
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
to have been processed
before the module is removed. We have an rcu_barrier() in
xfs_destroy_caches() to avoid this ..
Wait. What is xfs_buf_terminate()? I don't recall that function
Yeah, there's the bug.
exit_xfs_fs(void)
{
xfs_buf_terminate();
	xfs_mru_cache_uninit();
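For reference, the general pattern being discussed is that pending RCU
callbacks which free objects back into a module's caches must be
flushed before those caches are destroyed and the module text is
unloaded. In generic form (not the XFS code itself):

	static struct kmem_cache *example_cachep;

	static void __exit example_exit(void)
	{
		/*
		 * Objects handed to call_rcu()/kfree_rcu() may still
		 * have callbacks queued. Wait for all of them to run
		 * before destroying the cache they free into and
		 * unloading the code they would call back into.
		 */
		rcu_barrier();
		kmem_cache_destroy(example_cachep);
	}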
ork
> properly for these modified functions.
>
> Miscellanea:
>
> o Remove extra trailing ; and blank line from xfs_agf_verify
>
> Signed-off-by: Joe Perches
> ---
XFS bits look fine.
Acked-by: Dave Chinner
--
Dave Chinner
da...@fromorbit.com
On Mon, Sep 18, 2017 at 05:00:58PM -0500, Eric Sandeen wrote:
> On 9/18/17 4:31 PM, Dave Chinner wrote:
> > On Mon, Sep 18, 2017 at 09:28:55AM -0600, Jens Axboe wrote:
> >> On 09/18/2017 09:27 AM, Christoph Hellwig wrote:
> >>> On Mon, Sep 18, 2017 at 08:26:
problem triage.
Yes, the first invalidation should also have a comment like the post
IO invalidation - the comment probably got dropped and not noticed
when the changeover from internal XFS code to generic iomap code was
made...
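As an illustration of the kind of comment being asked for on the first
invalidation (the surrounding code is paraphrased from the generic
iomap direct IO path, not quoted from it):

	/*
	 * Toss any pages cached over the range before starting the
	 * direct IO. If we leave them, a direct write bypasses them
	 * and later buffered reads can be served stale data from clean
	 * pages that no longer match what is on disk.
	 */
	ret = invalidate_inode_pages2_range(mapping,
			start >> PAGE_SHIFT, end >> PAGE_SHIFT);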
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
being triggered.
It needs to be on by default, but I'm sure we can wrap it with
something like an xfs_alert_tag() type of construct so the tag can
be set in /proc/sys/fs/xfs/panic_mask to suppress it if testers so
desire.
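Roughly the construct being suggested; XFS_PTAG_EXAMPLE is a made-up
tag name for illustration, and a real patch would define its own tag
and mask bit (mp and ip here are whatever mount and inode are in
scope at the warning site):

	xfs_alert_tag(mp, XFS_PTAG_EXAMPLE,
		"example warning for inode 0x%llx",
		(unsigned long long)ip->i_ino);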
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
infrastructure,
and nobody has been able to reproduce it exactly
outside of the reaim benchmark. We've reproduced other, similar
issues, and the fixes for those are queued for the 4.9 window.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Mar 23, 2015 at 12:24:00PM +, Mel Gorman wrote:
> These are three follow-on patches based on the xfsrepair workload Dave
> Chinner reported was problematic in 4.0-rc1 due to changes in page table
> management -- https://lkml.org/lkml/2015/3/1/226.
>
> Much of the prob
On Thu, Mar 19, 2015 at 06:29:47PM -0700, Linus Torvalds wrote:
> On Thu, Mar 19, 2015 at 5:23 PM, Dave Chinner wrote:
> >
> > Bit more variance there than the pte checking, but runtime
> > difference is in the noise - 5m4s vs 4m54s - and profiles are
> > identical
Performance counter stats for 'system wide' (6 runs):
        266,750      migrate:mm_migrate_pages         ( +- 7.43% )
   10.002032292 seconds time elapsed                   ( +- 0.00% )
Bit more variance there than the pte checking, but runtime
difference is in the noise - 5m4s vs 4m54s - and profiles are
identical to the pte checking version.
C
On Thu, Mar 19, 2015 at 04:05:46PM -0700, Linus Torvalds wrote:
> On Thu, Mar 19, 2015 at 3:41 PM, Dave Chinner wrote:
> >
> > My recollection wasn't faulty - I pulled it from an earlier email.
> > That said, the original measurement might have been faulty. I ran
> &g
> that one-liner
> pte_dirty/write change going on?
Possibly. The xfs_repair binary has definitely been rebuilt (testing
unrelated bug fixes that only affect phase 6/7 behaviour), but
otherwise the system libraries are unchanged.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
Hash buckets with  18 entries     88 ( 0%)
Hash buckets with  19 entries     24 ( 0%)
Hash buckets with  20 entries     11 ( 0%)
Hash buckets with  21 entries     10 ( 0%)
Hash buckets with  22 entries      1 ( 0%)
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Mar 17, 2015 at 02:30:57PM -0700, Linus Torvalds wrote:
> On Tue, Mar 17, 2015 at 1:51 PM, Dave Chinner wrote:
> >
> > On the -o ag_stride=-1 -o bhash=101073 config, the 60s perf stat I
> > was using during steady state shows:
> >
> > 471,752 migrate:mm_migrate_pages
On Tue, Mar 17, 2015 at 09:53:57AM -0700, Linus Torvalds wrote:
> On Tue, Mar 17, 2015 at 12:06 AM, Dave Chinner wrote:
> >
> > To close the loop here, now I'm back home and can run tests:
> >
> > config                  3.19
, especially
for the large memory footprint cases. I haven't had a chance to look
at any stats or profiles yet, so I don't know yet whether this is
still page fault related or some other problem
Cheers,
Dave
--
Dave Chinner
da...@fromorbit.com
On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote:
> On Mon, Mar 9, 2015 at 4:29 AM, Dave Chinner wrote:
> >
> >> Also, is there some sane way for me to actually see this behavior on a
> >> regular machine with just a single socket? Dave is apparently run
ch/1 -d /mnt/scratch/2 -d \
/mnt/scratch/3 -d /mnt/scratch/4 -d /mnt/scratch/5 -d \
/mnt/scratch/6 -d /mnt/scratch/7
That should only take a few minutes to run - if you throw 8p at it
then it should run at >100k files/s being created.
Then unmount and run
On Thu, Mar 05, 2015 at 11:54:52PM +, Mel Gorman wrote:
> Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
>
>Across the board the 4.0-rc1 numbers are much slower, and the
>degradation is far worse when using the large memory footprint
>