Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-24 Thread Rick Lindsley
Thanks, Tejun, appreciate the feedback. On 6/23/20 4:13 PM, Tejun Heo wrote: The problem is fitting that into an interface which wholly doesn't fit that particular requirement. It's not that difficult to imagine different ways to represent however many memory slots, right? Perhaps we have

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-23 Thread Rick Lindsley
On 6/23/20 4:45 AM, Greg Kroah-Hartman wrote: Sure, but "help, I'm abusing your code interface, so fix your code interface and not my caller code" really isn't the best mantra :) Well, those are your words, not mine. What we're saying is, "we've identified an interface that doesn't scale in

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-23 Thread Rick Lindsley
On 6/22/20 11:02 PM, Greg Kroah-Hartman wrote: First off, this is not my platform, and not my problem, so it's funny you ask me :) Well, not your platform perhaps but MAINTAINERS does list you first and Tejun second as maintainers for kernfs. So in that sense, any patches would need to

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-22 Thread Rick Lindsley
On Mon, Jun 22, 2020 at 01:48:45PM -0400, Tejun Heo wrote: It should be obvious that representing each consecutive memory range with a separate directory entry is far from an optimal way of representing something like this. It's outright silly. On 6/22/20 11:03 AM, Greg Kroah-Hartman wrote:

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-22 Thread Rick Lindsley
On 6/22/20 10:53 AM, Tejun Heo wrote: I don't know. The above highlights the absurdity of the approach itself to me. You seem to be aware of it too in writing: 250,000 "devices". Just because it is absurd doesn't mean it wasn't built that way :) I agree, and I'm trying to influence the next

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-19 Thread Rick Lindsley
On 6/19/20 3:23 PM, Tejun Heo wrote: Spending 5 minutes during boot creating sysfs objects doesn't seem like a particularly good solution and I don't know whether anyone else would experience similar issues. Again, not necessarily against improving the scalability of kernfs code but the use

Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency improvement

2020-06-19 Thread Rick Lindsley
On 6/19/20 8:38 AM, Tejun Heo wrote: I don't have strong objections to the series but the rationales don't seem particularly strong. It's solving a suspected problem but only half way. It isn't clear whether this can be the long term solution for the problem machine and whether it will benefit

Re: [kernfs] ea7c5fc39a: stress-ng.stream.ops_per_sec 11827.2% improvement

2020-06-10 Thread Rick Lindsley
On 6/10/20 7:06 PM, kernel test robot wrote: On Sun, Jun 07, 2020 at 09:13:08AM +0800, Ian Kent wrote: It seems the result of stress-ng is inaccurate if the test time is too short, we'll increase the test time to avoid unreasonable results, sorry for the inconvenience. Thank you for your response!

Re: [PATCH 0/4] kernfs: proposed locking and concurrency improvement

2020-05-27 Thread Rick Lindsley
On 5/24/20 11:16 PM, Greg Kroah-Hartman wrote: Independant of your kernfs changes, why do we really need to represent all of this memory with that many different "memory objects"? What is that providing to userspace? I remember Ben Herrenschmidt did a lot of work on some of the kernfs and

Re: [PATCH v4 10/10] sched/fair: Provide idle search schedstats

2018-12-24 Thread Rick Lindsley
On 12/06/2018 01:28 PM, Steve Sistare wrote: Add schedstats to measure the effectiveness of searching for idle CPUs and stealing tasks. This is a temporary patch intended for use during development only. SCHEDSTAT_VERSION is bumped to 16, and the following fields are added to the per-CPU

Re: 2.6.23.1: oops in diskstats_show()

2008-01-29 Thread Rick Lindsley
my kernel 2.6.23.1 oopsed today in diskstats_show(), leaving the block_subsys_lock mutex locked. I have an Athlon 64 X2 (dual-core), architecture x86_64. I have not tried with 2.6.24 yet, but it looks like there was no relevant change in 2.6.24. Hmm. Yes, this should not happen.

Re: [patch] sched: schedstat needs a diet

2007-10-31 Thread Rick Lindsley
On 10/18/07, Mathieu Desnoyers <[EMAIL PROTECTED]> wrote: > Good question indeed. How large is this memory footprint exactly ? If it > is as small as you say, I suspect that the real issue could be that > these variable are accessed by the scheduler critical paths and >

Bad hotplug/scheduler interaction?

2007-09-13 Thread Rick Lindsley
I'm concerned that we don't have adequate protection for the scheduler during cpu hotplug events, but I'm willing to believe I simply don't understand the mechanism well enough. We had a crash in (comparatively ancient) 2.6.16.* but I think the relevant code is basically unchanged since then.

Re: [PATCH] Documentation update sched-stat.txt

2007-07-20 Thread Rick Lindsley
, it is probably worthwhile to review schedstats for usefulness. Is it still useful and, perhaps more to the point, is it still measuring the right stuff? Should counters be added or deleted? (That discussion should be separate from this patch.) Rick Acked-by: Rick Lindsley <[EMAIL PROTEC

Re: [PATCH]: Fix bogus softlockup warning with sysrq-t

2007-03-23 Thread Rick Lindsley
We've seen these here and had arrived at a similar patch. Extensive prints on the console can take longer than the watchdog likes. Acked-by: Rick Lindsley <[EMAIL PROTECTED]> Rick

Re: 2.6.11: iostat values broken, or IDE siimage driver ?

2005-03-02 Thread Rick Lindsley
Mike -- where did you get your iostat from? There's a couple of different flavors out there and it may not make a difference but just in case ... Rick

Re: [PATCH 12/13] schedstats additions for sched-balance-fork

2005-02-25 Thread Rick Lindsley
There is little help we get from userspace, and I'm not sure we want to add scheduler overhead for this single benchmark - when something like a _tiny_ bit of NUMAlib use within the OpenMP library would probably solve things equally well! There has been a general problem with

Re: [PATCH 3/13] rework schedstats

2005-02-25 Thread Rick Lindsley
I have an updated userspace parser for this thing, if you are still keeping it on your website. Sure, be happy to include it, thanks! Send it along. Is it for version 11 or version 12? Move balancing fields into struct sched_domain, so we can get more useful results on systems

Locking document available for general review

2001-06-20 Thread Rick Lindsley
So long as what locks are used for, and when to use them, remains a black art, tuning for large system scalability will be limited to people with the time to puzzle out if a lock is truly being used correctly or they are, in fact, staring at a bug. In an effort to assist both scalability and

locking question

2001-02-05 Thread Rick Lindsley
As part of better understanding some of the issues in SMP, I've been working at documenting all the global kernel locks in use, including what's left of the BKL, and have run into a use of the BKL that seems pretty consistent and also pretty obscure. The code base I'm inspecting is 2.4.0-test8.

Comprehensive list of locks available?

2000-11-06 Thread Rick Lindsley
Now that we've taken to heart the "one lock does not fit all" and we made the kernel increasingly fine-grained with regards to locking, there are many more locks appearing in the code. While Linux does not currently support hierarchical locks, it is still true that the order in which you acquire