Thanks, Tejun, appreciate the feedback.
On 6/23/20 4:13 PM, Tejun Heo wrote:
The problem is fitting that into an interface which wholly doesn't fit that
particular requirement. It's not that difficult to imagine different ways to
represent however many memory slots, right?
Perhaps we have
On 6/23/20 4:45 AM, Greg Kroah-Hartman wrote:
Sure, but "help, I'm abusing your code interface, so fix your code
interface and not my caller code" really isn't the best mantra :)
Well, those are your words, not mine. What we're saying is, "we've
identified an interface that doesn't scale in
On 6/22/20 11:02 PM, Greg Kroah-Hartman wrote:
First off, this is not my platform, and not my problem, so it's funny
you ask me :)
Well, not your platform perhaps, but MAINTAINERS does list you first and
Tejun second as maintainers for kernfs. So in that sense, any patches would
need to
On Mon, Jun 22, 2020 at 01:48:45PM -0400, Tejun Heo wrote:
It should be obvious that representing each consecutive memory range with a
separate directory entry is far from an optimal way of representing
something like this. It's outright silly.
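For scale, here is a quick back-of-the-envelope sketch in C, assuming the
common 256MB memory block size (the real value is architecture and firmware
dependent and is exported in /sys/devices/system/memory/block_size_bytes):

#include <stdio.h>

/* How many memoryN sysfs directories a machine ends up with, assuming
 * one directory per memory block and a 256MB block size. */
int main(void)
{
	unsigned long long ram_bytes   = 64ULL << 40;	/* 64 TB, for illustration */
	unsigned long long block_bytes = 256ULL << 20;	/* 256 MB per block */

	printf("%llu memory block directories\n", ram_bytes / block_bytes);
	return 0;
}

That prints 262144, which is where figures like the "250,000 devices"
mentioned below come from.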
On 6/22/20 11:03 AM, Greg Kroah-Hartman wrote:
On 6/22/20 10:53 AM, Tejun Heo wrote:
I don't know. The above highlights the absurdity of the approach itself to
me. You seem to be aware of it too in writing: 250,000 "devices".
Just because it is absurd doesn't mean it wasn't built that way :)
I agree, and I'm trying to influence the next
On 6/19/20 3:23 PM, Tejun Heo wrote:
Spending 5 minutes during boot creating sysfs objects doesn't seem like a
particularly good solution and I don't know whether anyone else would
experience similar issues. Again, not necessarily against improving the
scalability of kernfs code but the use
On 6/19/20 8:38 AM, Tejun Heo wrote:
I don't have strong objections to the series but the rationales don't seem
particularly strong. It's solving a suspected problem but only half way. It
isn't clear whether this can be the long term solution for the problem
machine and whether it will benefit
On 6/10/20 7:06 PM, kernel test robot wrote:
On Sun, Jun 07, 2020 at 09:13:08AM +0800, Ian Kent wrote:
It seems the result of stress-ng is inaccurate if the test time is too
short; we'll increase the test time to avoid unreasonable results,
sorry for the inconvenience.
Thank you for your response!
On 5/24/20 11:16 PM, Greg Kroah-Hartman wrote:
Independent of your kernfs changes, why do we really need to represent
all of this memory with that many different "memory objects"? What is
that providing to userspace?
I remember Ben Herrenschmidt did a lot of work on some of the kernfs and
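For reference, what userspace actually sees is one memoryN directory per
block under /sys/devices/system/memory. A minimal sketch (plain C, using
only the documented memory-hotplug sysfs paths) that counts them and
reports the block size:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Count /sys/devices/system/memory/memoryN directories and print the
 * block size; one such directory exists per memory block. */
int main(void)
{
	const char *base = "/sys/devices/system/memory";
	struct dirent *de;
	unsigned long count = 0;
	unsigned long long block = 0;
	DIR *dir = opendir(base);
	FILE *f;

	if (!dir) {
		perror(base);
		return 1;
	}
	while ((de = readdir(dir)) != NULL)
		if (!strncmp(de->d_name, "memory", 6) &&
		    de->d_name[6] >= '0' && de->d_name[6] <= '9')
			count++;
	closedir(dir);

	f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	if (f) {
		fscanf(f, "%llx", &block);	/* value is printed in hex */
		fclose(f);
	}
	printf("%lu memory blocks of %llu bytes each\n", count, block);
	return 0;
}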
On 12/06/2018 01:28 PM, Steve Sistare wrote:
Add schedstats to measure the effectiveness of searching for idle CPUs
and stealing tasks. This is a temporary patch intended for use during
development only. SCHEDSTAT_VERSION is bumped to 16, and the following
fields are added to the per-CPU
my kernel 2.6.23.1 oopsed today in diskstats_show(), leaving the
block_subsys_lock mutex locked. I have an Athlon 64 X2 (dual-core),
architecture x86_64. I have not tried with 2.6.24 yet, but it looks
like there was no relevant change in 2.6.24.
Hmm. Yes, this should not happen.
On 10/18/07, Mathieu Desnoyers <[EMAIL PROTECTED]> wrote:
> Good question indeed. How large is this memory footprint exactly? If it
> is as small as you say, I suspect that the real issue could be that
> these variables are accessed by the scheduler critical paths and
> therefore
I'm concerned that we don't have adequate protection for the scheduler
during cpu hotplug events, but I'm willing to believe I simply don't
understand the mechanism well enough. We had a crash in (comparatively
ancient) 2.6.16.* but I think the relevant code is basically unchanged
since then.
, it is probably worthwhile to review
schedstats for usefulness. Is it still useful and, perhaps more to the
point, is it still measuring the right stuff? Should counters be added
or deleted? (That discussion should be separate from this patch.)
Rick
Acked-by: Rick Lindsley <[EMAIL PROTECTED]>
We've seen these here and had arrived at a similar patch. Extensive prints
on the console can take longer than the watchdog likes.
Acked-by: Rick Lindsley <[EMAIL PROTECTED]>
Rick
Mike -- where did you get your iostat from? There's a couple of different
flavors out there and it may not make a difference but just in case ...
Rick
There is little help we get from userspace, and i'm not sure we want to
add scheduler overhead for this single benchmark - when something like a
_tiny_ bit of NUMAlib use within the OpenMP library would probably solve
things equally well!
There has been a general problem with
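For what it's worth, the "tiny bit of NUMAlib use" mentioned above might
look something like this minimal sketch (assuming libnuma is what's meant;
the node number and buffer size are purely illustrative):

#include <numa.h>
#include <stdio.h>

/* Pin the calling thread to one NUMA node and allocate its working
 * buffer from that node, so the worker's data stays local. Node 0 and
 * the 64MB size are illustrative only. Build with -lnuma. */
int main(void)
{
	size_t len = 64UL << 20;
	void *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support on this system\n");
		return 1;
	}
	numa_run_on_node(0);			/* keep this thread on node 0 */
	buf = numa_alloc_onnode(len, 0);	/* back the buffer with node 0 memory */
	if (!buf)
		return 1;
	/* ... per-thread benchmark work would go here ... */
	numa_free(buf, len);
	return 0;
}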
I have an updated userspace parser for this thing, if you are still
keeping it on your website.
Sure, I'd be happy to include it, thanks! Send it along. Is it for version
11 or version 12?
Move balancing fields into struct sched_domain, so we can get more
useful results on systems
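Since the /proc/schedstat layout changes with its version number, any such
parser presumably wants to check the leading "version N" line before
trusting the fields; a minimal sketch (the accepted versions here are
illustrative, not a claim about any particular parser):

#include <stdio.h>

/* /proc/schedstat begins with "version N"; the number and meaning of
 * the fields that follow depend on N, so refuse versions we don't
 * understand rather than misparse them. */
int main(void)
{
	FILE *f = fopen("/proc/schedstat", "r");
	unsigned int version = 0;

	if (!f) {
		perror("/proc/schedstat");
		return 1;
	}
	if (fscanf(f, "version %u", &version) != 1 ||
	    version < 11 || version > 12) {
		fprintf(stderr, "unsupported schedstat version %u\n", version);
		fclose(f);
		return 1;
	}
	/* ... parsing for the known field layout would go here ... */
	fclose(f);
	return 0;
}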
So long as what locks are used for, and when to use them, remains a
black art, tuning for large system scalability will be limited to
people with the time to puzzle out whether a lock is truly being used
correctly or whether they are, in fact, staring at a bug.
In an effort to assist both scalability and
As part of better understanding some of the issues in SMP,
I've been working at documenting all the global kernel locks in use,
including what's left of the BKL, and have run into a use of the BKL
that seems pretty consistent and also pretty obscure.
The code base I'm inspecting is 2.4.0-test8.
Now that we've taken to heart the maxim that "one lock does not fit all"
and have made the kernel increasingly fine-grained with regard to locking,
there are many more locks appearing in the code. While Linux does not
currently support hierarchical locks, it is still true that the order
in which you acquire
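To illustrate where that sentence is heading: as long as every path takes
a given pair of locks in the same global order, the classic ABBA deadlock
can't happen. A userspace sketch with pthreads (not kernel code), where
both workers agree to take lock_a before lock_b:

#include <pthread.h>
#include <stdio.h>

/* Two locks with one agreed order: always lock_a, then lock_b. If one
 * path took a->b and another b->a, each could end up holding one lock
 * while waiting forever for the other. Build with -lpthread. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
	pthread_mutex_lock(&lock_a);	/* same order in every path */
	pthread_mutex_lock(&lock_b);
	printf("worker %ld did its work\n", (long)arg);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, (void *)1L);
	pthread_create(&t2, NULL, worker, (void *)2L);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}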