Re: [PATCH 00/13] fs/dax: Fix FS DAX page reference counts

2024-06-30 Thread Dave Chinner
much more comprehensive fsdax test coverage. That exercises a lot of the weird mmap corner cases that cause problems, so it would be good to actually test that nothing new got broken in FSDAX by this patchset. -Dave. -- Dave Chinner da...@fromorbit.com

Re: [PATCH 11/11] sysctl: treewide: constify the ctl_table argument of handlers

2024-03-15 Thread Dave Chinner
ord latency_record[MAXLR];
>  int latencytop_enabled;
>
>  #ifdef CONFIG_SYSCTL
> -static int sysctl_latencytop(struct ctl_table *table, int write, void *buffer,
> -	size_t *lenp, loff_t *ppos)
> +static int sysctl_latencytop(const struct ctl_table *table, int write,
> +	void *buffer,
> +	size_t *lenp, loff_t *ppos)
> {
>  	int err;
>
And this. I could go on, but there are so many examples of this in the patch that I think it needs to be tossed away and regenerated in a way that doesn't trash the existing function parameter formatting. -Dave. -- Dave Chinner da...@fromorbit.com
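For contrast, a minimal sketch of a layout-preserving conversion of that handler (illustrative only; the exact wrapping the maintainers would prefer is an assumption - the point is that only the const qualifier changes, not the parameter layout):

	/*
	 * Illustrative sketch: the constified prototype with the original
	 * parameter wrapping kept intact, so the diff adds one qualifier
	 * rather than reflowing every parameter line.
	 */
	static int sysctl_latencytop(const struct ctl_table *table, int write, void *buffer,
			size_t *lenp, loff_t *ppos);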

Re: [powerpc] kernel BUG fs/xfs/xfs_message.c:102! [4k block]

2023-10-12 Thread Dave Chinner
confirm this hypothesis yet. I suspect the fix may well be to use xfs_trans_buf_get() in the xfs_inode_item_precommit() path if XFS_ISTALE is already set on the inode we are trying to log. We don't need a populated cluster buffer to read data out of or write data into in this path - all we need to do is attach the inode to the buffer so that when the buffer invalidation is committed to the journal it will also correctly finish the stale inode log item. Cheers, Dave. -- Dave Chinner da...@fromorbit.com
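To make the suggested direction concrete, a rough, untested sketch follows. It assumes the precommit path maps the inode to its cluster buffer via xfs_imap_to_bp(), uses xfs_trans_get_buf() (the in-tree spelling of the helper referred to above), and invents nothing beyond the call site shape; the exact placement and error handling are assumptions, not a tested patch:

	/*
	 * Sketch only: a stale inode does not need the on-disk cluster
	 * contents, only a buffer to attach the inode log item to, so skip
	 * the read and just grab the buffer.
	 */
	if (xfs_iflags_test(ip, XFS_ISTALE))
		error = xfs_trans_get_buf(tp, mp->m_ddev_targp,
				ip->i_imap.im_blkno, ip->i_imap.im_len,
				0, &bp);
	else
		error = xfs_imap_to_bp(mp, tp, &ip->i_imap, &bp);
	if (error)
		return error;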

Re: BUG xfs_buf while running tests/xfs/435 (next-20220715)

2022-07-18 Thread Dave Chinner
to have been processed before the module is removed. We have an rcu_barrier() in xfs_destroy_caches() to avoid this... Wait. What is xfs_buf_terminate()? I don't recall that function... Yeah, there's the bug:

exit_xfs_fs(void)
{
	xfs_buf_terminate();
	xfs_mru_cac
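As background, a minimal sketch of the general pattern being described (a generic illustration, not the actual XFS teardown code; example_exit and example_cache are hypothetical names): when objects from a kmem cache are freed through call_rcu() callbacks, the module exit path has to wait for those callbacks before the cache and the module text go away.

	/* Generic ordering requirement, assuming objects are freed via call_rcu(). */
	static void __exit example_exit(void)
	{
		/*
		 * Flush all pending call_rcu() callbacks so none of them can
		 * run after the cache is destroyed or the module is unloaded.
		 */
		rcu_barrier();
		kmem_cache_destroy(example_cache);
	}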

Re: [trivial PATCH] treewide: Align function definition open/close braces

2017-12-18 Thread Dave Chinner
work > properly for these modified functions. > > Miscellanea: > > o Remove extra trailing ; and blank line from xfs_agf_verify > > Signed-off-by: Joe Perches > --- XFS bits look fine. Acked-by: Dave Chinner -- Dave Chinner da...@fromorbit.com

Re: [linux-next][XFS][trinity] WARNING: CPU: 32 PID: 31369 at fs/iomap.c:993

2017-09-18 Thread Dave Chinner
On Mon, Sep 18, 2017 at 05:00:58PM -0500, Eric Sandeen wrote: > On 9/18/17 4:31 PM, Dave Chinner wrote: > > On Mon, Sep 18, 2017 at 09:28:55AM -0600, Jens Axboe wrote: > >> On 09/18/2017 09:27 AM, Christoph Hellwig wrote: > >>> On Mon, Sep 18, 2017 at 08:26:

Re: [linux-next][XFS][trinity] WARNING: CPU: 32 PID: 31369 at fs/iomap.c:993

2017-09-18 Thread Dave Chinner
problem triage. Yes, the first invalidation should also have a comment like the post IO invalidation - the comment probably got dropped and not noticed when the changeover from internal XFS code to generic iomap code was made... Cheers, Dave. -- Dave Chinner da...@fromorbit.com

Re: [linux-next][XFS][trinity] WARNING: CPU: 32 PID: 31369 at fs/iomap.c:993

2017-09-18 Thread Dave Chinner
being triggered. It needs to be on by default, but I'm sure we can wrap it with something like an xfs_alert_tag() type of construct so the tag can be set in /proc/fs/xfs/panic_mask to suppress it if testers so desire. Cheers, Dave. -- Dave Chinner da...@fromorbit.com
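For illustration, a hedged sketch of the kind of construct being suggested (the tag name and message text below are placeholders chosen for the example, not what would actually be merged):

	/*
	 * Sketch only: emit the warning through xfs_alert_tag() so testers
	 * can suppress it via the XFS panic_mask sysctl.
	 * XFS_PTAG_EXAMPLE is a hypothetical tag name used for illustration.
	 */
	xfs_alert_tag(mp, XFS_PTAG_EXAMPLE,
		"page cache invalidation failed on direct I/O write");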

Re: Linux 4.8: Reported regressions as of Sunday, 2016-09-18

2016-09-18 Thread Dave Chinner
infrastructure, and nobody has been able to reproduce it exactly outside of the reaim benchmark. We've reproduced other, similar issues, and the fixes for those are queued for the 4.9 window. Cheers, Dave. -- Dave Chinner da...@fromorbit.com

Re: [PATCH 0/3] Reduce system overhead of automatic NUMA balancing

2015-03-24 Thread Dave Chinner
On Mon, Mar 23, 2015 at 12:24:00PM +, Mel Gorman wrote: > These are three follow-on patches based on the xfsrepair workload Dave > Chinner reported was problematic in 4.0-rc1 due to changes in page table > management -- https://lkml.org/lkml/2015/3/1/226. > > Much of the prob

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-19 Thread Dave Chinner
On Thu, Mar 19, 2015 at 06:29:47PM -0700, Linus Torvalds wrote: > On Thu, Mar 19, 2015 at 5:23 PM, Dave Chinner wrote: > > > > Bit more variance there than the pte checking, but runtime > > difference is in the noise - 5m4s vs 4m54s - and profiles are > > identical

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-19 Thread Dave Chinner
for 'system wide' (6 runs):

         266,750   migrate:mm_migrate_pages        ( +- 7.43% )

    10.002032292 seconds time elapsed              ( +- 0.00% )

Bit more variance there than the pte checking, but runtime difference is in the noise - 5m4s vs 4m54s - and profiles are identical to the pte checking version. C

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-19 Thread Dave Chinner
On Thu, Mar 19, 2015 at 04:05:46PM -0700, Linus Torvalds wrote: > On Thu, Mar 19, 2015 at 3:41 PM, Dave Chinner wrote: > > > > My recollection wasn't faulty - I pulled it from an earlier email. > > That said, the original measurement might have been faulty. I ran

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-19 Thread Dave Chinner
at one-liner > pte_dirty/write change going on? Possibly. The xfs_repair binary has definitely been rebuilt (testing unrelated bug fixes that only affect phase 6/7 behaviour), but otherwise the system libraries are unchanged. Cheers, Dave. -- Dave Chinner da...@fromorbit.com
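For readers without the patch in front of them, a hedged paraphrase of the kind of one-liner being referred to (not the exact upstream change): during NUMA hinting fault handling, a PTE that was already dirty can be left writable so the following write does not take another fault.

	/*
	 * Paraphrased illustration only, using the PTE helpers of that era:
	 * keep write permission across the hinting fault if the page was
	 * already dirty.
	 */
	if (pte_dirty(pte))
		pte = pte_mkwrite(pte);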

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-18 Thread Dave Chinner
Hash buckets with  18 entries       88 ( 0%)
Hash buckets with  19 entries       24 ( 0%)
Hash buckets with  20 entries       11 ( 0%)
Hash buckets with  21 entries       10 ( 0%)
Hash buckets with  22 entries        1 ( 0%)
Cheers, Dave. -- Dave Chinner da...@fromorbit.com

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-17 Thread Dave Chinner
On Tue, Mar 17, 2015 at 02:30:57PM -0700, Linus Torvalds wrote: > On Tue, Mar 17, 2015 at 1:51 PM, Dave Chinner wrote: > > > > On the -o ag_stride=-1 -o bhash=101073 config, the 60s perf stat I > > was using during steady state shows: > > > > 471,752 mi

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-17 Thread Dave Chinner
On Tue, Mar 17, 2015 at 09:53:57AM -0700, Linus Torvalds wrote: > On Tue, Mar 17, 2015 at 12:06 AM, Dave Chinner wrote: > > > > To close the loop here, now I'm back home and can run tests: > > > > config  3.19

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-17 Thread Dave Chinner
, especially for the large memory footprint cases. I haven't had a chance to look at any stats or profiles yet, so I don't know yet whether this is still page fault related or some other problem. Cheers, Dave -- Dave Chinner da...@fromorbit.com

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-09 Thread Dave Chinner
On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote: > On Mon, Mar 9, 2015 at 4:29 AM, Dave Chinner wrote: > > > >> Also, is there some sane way for me to actually see this behavior on a > >> regular machine with just a single socket? Dave is apparently run

Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

2015-03-09 Thread Dave Chinner
ch/1 -d /mnt/scratch/2 -d \ /mnt/scratch/3 -d /mnt/scratch/4 -d /mnt/scratch/5 -d \ /mnt/scratch/6 -d /mnt/scratch/7 That should only take a few minutes to run - if you throw 8p at it then it should run at >100k files/s being created. Then unmount and run

Re: [PATCH 2/2] mm: numa: Do not clear PTEs or PMDs for NUMA hinting faults

2015-03-05 Thread Dave Chinner
On Thu, Mar 05, 2015 at 11:54:52PM +, Mel Gorman wrote: > Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226 > > Across the board the 4.0-rc1 numbers are much slower, and the > degradation is far worse when using the large memory footprint >