probably doesn't illuminate much but...
On 10/27/2016 03:19 PM, Sean Young wrote:
On Wed, Oct 26, 2016 at 01:16:16PM -0500, Nathan Zimmer wrote:
On 10/25/2016 03:41 PM, Sean Young wrote:
On Mon, Oct 24, 2016 at 04:49:25PM -0500, Nathan Zimmer wrote:
[1.565062] serial8250: ttyS1 at I/O 0x2f8
On 10/27/2016 03:19 PM, Sean Young wrote:
On Wed, Oct 26, 2016 at 01:16:16PM -0500, Nathan Zimmer wrote:
On 10/25/2016 03:41 PM, Sean Young wrote:
On Mon, Oct 24, 2016 at 04:49:25PM -0500, Nathan Zimmer wrote:
[1.565062] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
The isa
On 10/29/2016 04:16 PM, Sean Young wrote:
On Fri, Oct 28, 2016 at 02:42:25PM -0500, Nathan Zimmer wrote:
On Thu, Oct 27, 2016 at 09:19:16PM +0100, Sean Young wrote:
On Wed, Oct 26, 2016 at 01:16:16PM -0500, Nathan Zimmer wrote:
On 10/25/2016 03:41 PM, Sean Young wrote:
On Mon, Oct 24, 2016
The OF reconfiguration notification chains should be exported for use
by modules.
Signed-off-by: Nathan Fontenot
---
Index: linux-next/drivers/of/base.c
===
--- linux-next.orig/drivers/of/base.c 2012-11-28 09:18:02.0 -0600
+
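Exporting the chain for modules typically looks like the sketch below: a blocking notifier head kept private, with exported register/unregister wrappers. The wrapper names here are assumptions, not necessarily what the patch uses.

#include <linux/notifier.h>
#include <linux/export.h>

/* Sketch only: the chain head stays private, modules go through the
 * exported wrappers. */
static BLOCKING_NOTIFIER_HEAD(of_reconfig_chain);

int of_reconfig_notifier_register(struct notifier_block *nb)
{
        return blocking_notifier_chain_register(&of_reconfig_chain, nb);
}
EXPORT_SYMBOL_GPL(of_reconfig_notifier_register);

int of_reconfig_notifier_unregister(struct notifier_block *nb)
{
        return blocking_notifier_chain_unregister(&of_reconfig_chain, nb);
}
EXPORT_SYMBOL_GPL(of_reconfig_notifier_unregister);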
On Wed, 2012-11-28 at 17:09 +, David Woodhouse wrote:
> On Wed, 2012-11-28 at 12:04 -0500, David Miller wrote:
> > Do you want me to pull that tree into net-next or is there a plan to
> > repost the entire series of work for a final submission?
>
> I think it needs a little more testing/consen
I am noticing the cpufreq_driver_lock is quite hot.
On an idle 512-core system perf shows me most of the system time is spent on this
lock. This is quite significant, as top shows 5% of time spent in system time.
My solution was to first convert the lock to a rwlock and then to the rcu.
Nathan Zimmer (2
This completely eliminates the contention I am seeing in __cpufreq_cpu_get.
It also nicely stages the lock to be replaced by the rcu.
CC: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 44 ++--
1 file c
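For reference, the shape of that first conversion is roughly the sketch below; lookup_policy() and install_policy() are illustrative names, not the functions the patch actually touches (__cpufreq_cpu_get and friends).

#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/cpufreq.h>

static DEFINE_RWLOCK(cpufreq_driver_lock);      /* was DEFINE_SPINLOCK */
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);

/* Hot read path: readers no longer exclude each other, so the
 * per-cpu policy lookups stop waiting on one another. */
static struct cpufreq_policy *lookup_policy(unsigned int cpu)
{
        struct cpufreq_policy *policy;
        unsigned long flags;

        read_lock_irqsave(&cpufreq_driver_lock, flags);
        policy = per_cpu(cpufreq_cpu_data, cpu);
        read_unlock_irqrestore(&cpufreq_driver_lock, flags);
        return policy;
}

/* Rare update path (driver register/unregister) still excludes everyone. */
static void install_policy(unsigned int cpu, struct cpufreq_policy *policy)
{
        unsigned long flags;

        write_lock_irqsave(&cpufreq_driver_lock, flags);
        per_cpu(cpufreq_cpu_data, cpu) = policy;
        write_unlock_irqrestore(&cpufreq_driver_lock, flags);
}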
In general rwlocks are discouraged, so we are moving it to use RCU instead.
CC: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 177 +-
1 file changed, 98 insertions(+), 79 deletions(-)
diff --git a/drive
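And the RCU read side the second patch aims for looks roughly like this; driver_has_target() is a made-up accessor for illustration.

#include <linux/rcupdate.h>
#include <linux/cpufreq.h>

static struct cpufreq_driver __rcu *cpufreq_driver;

/* Read side: no shared cache line is written at all. */
static bool driver_has_target(void)
{
        struct cpufreq_driver *drv;
        bool ret;

        rcu_read_lock();
        drv = rcu_dereference(cpufreq_driver);
        ret = drv && drv->target;
        rcu_read_unlock();
        return ret;
}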
Ok, I'll rebase and retest from linux-next then.
From: Rafael J. Wysocki [r...@sisk.pl]
Sent: Tuesday, February 05, 2013 4:13 AM
To: Viresh Kumar
Cc: Nathan Zimmer; linux-kernel@vger.kernel.org; linux...@vger.kernel.org;
cpuf...@vger.kernel.org;
This eliminates the contention I am seeing in __cpufreq_cpu_get.
It also nicely stages the lock to be replaced by the rcu.
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 42 +-
1 file c
-next
Nathan Zimmer (2):
cpufreq: Convert the cpufreq_driver_lock to a rwlock
cpufreq: Convert the cpufreq_driver_lock to use the rcu
drivers/cpufreq/cpufreq.c | 150 +-
1 file changed, 83 insertions(+), 67 deletions(-)
--
1.8.0.1
--
To
In general rwlocks are discouraged, so we are moving it to use RCU instead.
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 173 +-
1 file changed, 96 insertions(+), 77 deletions(-)
Could you send me the config used?
From: Fengguang Wu [fengguang...@intel.com]
Sent: Thursday, February 07, 2013 8:05 PM
To: fengguang...@intel.com; Viresh Kumar
Cc: Nathan Zimmer; cpuf...@vger.kernel.org; linux...@vger.kernel.org;
linux-kernel
Never mind, it was on the initial mailing. Not sure what I was looking at.
From: linux-pm-ow...@vger.kernel.org [linux-pm-ow...@vger.kernel.org] on behalf
of Nathan Zimmer [nzim...@sgi.com]
Sent: Monday, February 11, 2013 8:09 AM
To: Fengguang Wu; Viresh
: Thursday, February 07, 2013 5:29 PM
To: Nathan Zimmer
Cc: viresh.ku...@linaro.org; linux-kernel@vger.kernel.org;
linux...@vger.kernel.org; cpuf...@vger.kernel.org
Subject: Re: [PATCH v2 linux-next 2/2] cpufreq: Convert the cpufreq_driver_lock
to use the rcu
On Tuesday, February 05, 2013 08:04:50
Argh, you're right. I completely misread that section.
It'll take me a few days to respin and retest properly.
Thanks,
Nate
From: Rafael J. Wysocki [r...@sisk.pl]
Sent: Monday, February 11, 2013 1:36 PM
To: Nathan Zimmer
Cc: viresh.ku...@linaro.org;
On Tue, Apr 02, 2013 at 10:35:46AM +0530, Viresh Kumar wrote:
> On 2 April 2013 01:41, Nathan Zimmer wrote:
> > diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>
> > +static struct cpufreq_driver __rcu *cpufreq_driver;
> > +static DEFINE_SPINLOCK(cpufre
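Given that __rcu declaration, the update side typically pairs the spinlock with RCU publication. A sketch, with unregister_driver() standing in for the real cpufreq_unregister_driver():

/* Writers serialize on the spinlock; rcu_dereference_protected() tells
 * sparse that the lock makes a plain access of the __rcu pointer safe. */
static int unregister_driver(struct cpufreq_driver *driver)
{
        unsigned long flags;

        spin_lock_irqsave(&cpufreq_driver_lock, flags);
        if (rcu_dereference_protected(cpufreq_driver,
                        lockdep_is_held(&cpufreq_driver_lock)) != driver) {
                spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
                return -EINVAL;
        }
        rcu_assign_pointer(cpufreq_driver, NULL);
        spin_unlock_irqrestore(&cpufreq_driver_lock, flags);

        synchronize_rcu();      /* drain readers still using *driver */
        return 0;
}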
On Tue, Apr 02, 2013 at 02:48:07PM +0200, Rafael J. Wysocki wrote:
> On Tuesday, April 02, 2013 10:34:21 AM Viresh Kumar wrote:
> > On 2 April 2013 06:26, Nathan Zimmer wrote:
> > > On Mon, Apr 01, 2013 at 10:41:27PM +0200, Rafael J. Wysocki wrote:
> > >> On Mond
On Tue, Apr 02, 2013 at 08:29:12PM +0530, Viresh Kumar wrote:
> On 2 April 2013 20:25, Nathan Zimmer wrote:
> > The lock is unneeded if we expect register and unregister driver to not be
> > called from multiple threads at once. I didn't make that assumption.
>
> Hmm..
with the
RCU, so I am leaving it with the rwlock for now since under certain configs
__cpufreq_cpu_get is a hot spot with 256+ cores.
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 278 ++-
2013 20:33, Nathan Zimmer wrote:
We eventually would like to remove the rwlock cpufreq_driver_lock or convert
it back to a spinlock and protect the read sections with RCU. The first step in
Why do we want to convert it back to a spinlock?
Documentation/spinlocks.txt:84
I am not sure why but there
ready accepted half
v8: Correct have_governor_per_policy
Reviewed location of rcu_read_(un)lock in several spots
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 277 ++
1 file changed,
On 04/04/2013 03:02 AM, Al Viro wrote:
On Wed, Apr 03, 2013 at 11:56:34PM -0700, Andrew Morton wrote:
On Thu, 4 Apr 2013 17:26:48 +1100 Stephen Rothwell
wrote:
Hi Andrew,
Today's linux-next merge of the akpm tree got a conflict in
fs/proc/generic.c between several commits from the vfs tree
0.9710
256 5.2513 2.6519
512 8.0529 6.2976
Cc: "Eric W. Biederman"
Cc: Andrew Morton
Cc: Alexander Viro
Cc: David Woodhouse
Cc:
Acked-by: Alexey Dobriyan
Signed-off-by: Nathan Zimmer
---
fs/proc/inode.c | 5 ++---
1 file changed, 2 insertions(+), 3
On 04/04/2013 11:11 AM, Al Viro wrote:
On Thu, Apr 04, 2013 at 10:53:39AM -0500, Nathan Zimmer wrote:
This moves a kfree outside a spinlock to help scaling on larger (512 core)
systems. This should be some relief until we can move the section to use
the rcu.
Umm... That'll get wreck
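The idea being debated, as a sketch; unlink_entry() is a hypothetical stand-in for the actual proc list manipulation.

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/proc_fs.h>

static DEFINE_SPINLOCK(proc_subdir_lock);

void unlink_entry(struct proc_dir_entry *de);   /* hypothetical */

static void free_entry(struct proc_dir_entry *de)
{
        spin_lock(&proc_subdir_lock);
        unlink_entry(de);       /* detach while holding the lock */
        spin_unlock(&proc_subdir_lock);

        kfree(de);              /* the expensive part, now outside */
}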
On 04/04/2013 03:44 PM, Al Viro wrote:
On Thu, Apr 04, 2013 at 12:12:05PM -0500, Nathan Zimmer wrote:
Ok I am cloning the tree now.
It does look like the patches would conflict.
I'll run some tests and take a deeper look.
FWIW, I've just pushed there a tentative patch that s
On 04/05/2013 12:36 PM, Al Viro wrote:
On Fri, Apr 05, 2013 at 12:05:26PM -0500, Nathan Zimmer wrote:
On 04/04/2013 03:44 PM, Al Viro wrote:
On Thu, Apr 04, 2013 at 12:12:05PM -0500, Nathan Zimmer wrote:
Ok I am cloning the tree now.
It does look like the patches would conflict.
I'l
I am noticing the cpufreq_driver_lock is quite hot.
Currently on an idle 512-core system perf shows me most of the system time is
spent on this lock.
-  84.18%  [kernel]  [k] _raw_spin_lock_irqsave
   - _raw_spin_lock_irqsave
      - 99.97% __cpufreq_cpu_get
           cpufreq_cpu_get
On 02/22/2013 09:39 PM, Viresh Kumar wrote:
Hi Nathan,
Sorry for pointing this out so late, but I still feel we are missing something
really important.
On 22 February 2013 21:54, Nathan Zimmer wrote:
- read_lock_irqsave(&cpufreq_driver_lock, flags);
+ rcu_read_
gh as reported by Dave Jones.
Convert /proc/timer_list to a proper seq_file with its own iterator. This
is a little more complex given that we have to make two passes with two
separate headers.
Signed-off-by: Nathan Zimmer
Reported-by: Dave Jones
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen
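A minimal sketch of such an iterator, with made-up tl_* names and a single header record standing in for the two real ones; each show() emits one small record, so no single kmalloc buffer ever holds the whole file.

#include <linux/seq_file.h>
#include <linux/cpumask.h>

/* *pos is the record number: 0 = header, 1..nr_cpu_ids = per-CPU records. */
static void *tl_start(struct seq_file *m, loff_t *pos)
{
        return *pos <= nr_cpu_ids ? (void *)(unsigned long)(*pos + 1) : NULL;
}

static void *tl_next(struct seq_file *m, void *v, loff_t *pos)
{
        ++*pos;
        return tl_start(m, pos);
}

static void tl_stop(struct seq_file *m, void *v)
{
}

static int tl_show(struct seq_file *m, void *v)
{
        unsigned long rec = (unsigned long)v - 1;

        if (rec == 0)
                seq_puts(m, "Timer List Version: ...\n");  /* header pass */
        else
                seq_printf(m, "cpu: %lu\n", rec - 1);  /* that CPU's timers */
        return 0;
}

static const struct seq_operations timer_list_sops = {
        .start = tl_start, .next = tl_next,
        .stop  = tl_stop,  .show = tl_show,
};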
Split timer_list_show_tickdevices() out of the header and just pull the rest up
to timer_list_show(). Also tweak the location of the whitespace. This is all
to prep for the fix.
Signed-off-by: Nathan Zimmer
Reported-by: Dave Jones
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
---
kernel
should be identical to the previous version.
v2: Added comments on the iteration and other fixups pointed to by Andrew.
v3: Corrected the case where max_cpus != nr_cpu_ids by exiting early.
Nathan Zimmer (2):
timer_list: split timer_list_show_tickdevices()
timer_list: convert timer list to be a
I thought I grabbed the version without it.
I'll fix it.
From: Stephen Boyd [sb...@codeaurora.org]
Sent: Wednesday, February 27, 2013 1:37 PM
To: Nathan Zimmer
Cc: johns...@us.ibm.com; t...@linutronix.de; a...@linux-foundation.org;
linux-k
On Wed, Feb 27, 2013 at 11:37:26AM -0800, Stephen Boyd wrote:
> On 02/26/13 15:33, Nathan Zimmer wrote:
> > @@ -246,12 +244,8 @@ static void timer_list_show_tickdevices(struct
> > seq_file *m)
> > #endif
> > SEQ_printf(m, "\n");
> &
the RCU documentation instead of skimming it. Also I based on
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
pm+acpi-3.9-rc1
I assumed that was what you would prefer Rafael.
Nathan Zimmer (2):
cpufreq: Convert the cpufreq_driver_lock to a rwlock
cpufreq: Convert the
grab them under the rcu_read_lock but call them after
rcu_read_unlock();
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 312 +-
1 file changed, 224 insertions(+), 88 deletions(-)
di
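Concretely, that pattern looks something like the sketch below; setup_policy() is an illustrative name, and it assumes something else (e.g. the module reference) keeps the callback code alive once the read section ends.

/* Copy what we need while rcu_read_lock() pins the driver struct, then
 * call after unlocking -- the callback may sleep, which is forbidden
 * inside an RCU read-side critical section. */
static int setup_policy(struct cpufreq_policy *policy)
{
        int (*init)(struct cpufreq_policy *policy);
        struct cpufreq_driver *drv;

        rcu_read_lock();
        drv = rcu_dereference(cpufreq_driver);
        if (!drv || !drv->init) {
                rcu_read_unlock();
                return -EINVAL;
        }
        init = drv->init;       /* grabbed under the lock ...          */
        rcu_read_unlock();

        return init(policy);    /* ... called after rcu_read_unlock() */
}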
This eliminates the contention I am seeing in __cpufreq_cpu_get.
It also nicely stages the lock to be replaced by the rcu.
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 52 +++
1 fi
On 02/20/2013 11:50 PM, Viresh Kumar wrote:
On 21 February 2013 05:26, Nathan Zimmer wrote:
In general rwlocks are discouraged, so we are moving it to use RCU instead.
This does require a bit of care since the cpufreq_driver_lock protects both
the cpufreq_driver and the cpufreq_cpu_data
On Thu, Feb 21, 2013 at 01:25:29AM -0800, Stephen Boyd wrote:
> On 2/19/2013 5:21 PM, a...@linux-foundation.org wrote:
> > * timer_list-split-timer_list_show_tickdevices.patch
> > * timer_list-convert-timer-list-to-be-a-proper-seq_file.patch
> > * timer_list-convert-timer-list-to-be-a-proper-seq_fi
On 02/21/2013 12:27 PM, Stephen Boyd wrote:
On 2/21/2013 10:18 AM, Nathan Zimmer wrote:
On Thu, Feb 21, 2013 at 01:25:29AM -0800, Stephen Boyd wrote:
On 2/19/2013 5:21 PM, a...@linux-foundation.org wrote:
* timer_list-split-timer_list_show_tickdevices.patch
* timer_list-convert-timer-list-to
c: David Woodhouse
Cc: Alexey Dobriyan
Cc: "Paul E. McKenney"
Signed-off-by: Nathan Zimmer
---
fs/proc/generic.c | 62 +--
fs/proc/inode.c | 161 ++--
include/linux/proc_fs.h | 6 +-
3 files changed, 123
the RCU documentation instead of skimming it. Also I based on
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
pm+acpi-3.9-rc1
I assumed that was what you would prefer Rafael.
v4: Removed an unnecessary synchronize_rcu().
Nathan Zimmer (2):
cpufreq: Convert the
This eliminates the contention I am seeing in __cpufreq_cpu_get.
It also nicely stages the lock to be replaced by the rcu.
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 52 +++
1 fi
grab them under the rcu_read_lock but call them after
rcu_read_unlock();
Cc: Viresh Kumar
Cc: "Rafael J. Wysocki"
Signed-off-by: Nathan Zimmer
---
drivers/cpufreq/cpufreq.c | 305 +-
1 file changed, 217 insertions(+), 88 deletions(-)
di
A cleanup patch to remove sparse warnings caused by my other patch
"procfs: Improve Scaling in proc" since now proc_fops is protected by the rcu.
Signed-off-by: Nathan Zimmer
Cc: Greg Kroah-Hartman
Cc: Bill Pemberton
Cc: de...@driverdev.osuosl.org
Cc: linux-kernel@vger.
On 03/11/2013 06:23 PM, Rafael J. Wysocki wrote:
On Friday, February 22, 2013 10:24:33 AM Nathan Zimmer wrote:
I am noticing the cpufreq_driver_lock is quite hot.
On an idle 512-core system perf shows me most of the system time is spent on this
lock. This is quite significant as top shows 5% of
Revert commit id 2c6413aee215a43b1f95e218067abcde50ccbc5e
On larger systems (256 cores+) with significant I/O attached, this single message
represents over 20% of the messages at boot.
Cc: Bjorn Helgaas
Cc: Jesse Barnes
Signed-off-by: Nathan Zimmer
---
drivers/pci/probe.c |4 ++--
1 files
the device tree in /proc when adding/removing a node.
o Adding a notification chain for adding/removing nodes and
properties of the device tree.
o Re-naming the base OF code prom_* routines to of_* to better go
with the naming used for OF code.
-Nathan
--
To unsubscribe from this list: send
When adding or removing a device tree node we should also update
the device tree in /proc/device-tree. This action is already done in the
generic OF code for adding/removing properties of a node. This patch adds
this functionality for nodes.
Signed-off-by: Nathan Fontenot
---
arch/powerpc
This patch moves the definition of the of_drconf_cell struct to asm/prom.h
to make it available for all powerpc/pseries code.
Signed-off-by: Nathan Fontenot
---
arch/powerpc/include/asm/prom.h | 16
arch/powerpc/mm/numa.c | 12
2 files changed, 16
property.
Signed-off-by: Nathan Fontenot
---
arch/powerpc/include/asm/pSeries_reconfig.h | 32 --
arch/powerpc/kernel/prom.c |6 -
arch/powerpc/platforms/pseries/dlpar.c | 14 ++--
arch/powerpc/platforms/pseries/hotplug-cpu.c|8 +-
arch
Rename the prom_*_property routines of the generic OF code to of_*_property.
This brings them in line with the naming used by the rest of the OF code.
Signed-off-by: Nathan Fontenot
---
arch/powerpc/kernel/machine_kexec.c | 12 ++--
arch/powerpc/kernel/machine_kexec_64.c
Remove the pSeries_reconfig.h header file. At this point there is only one
definition in the file, pSeries_coalesce_init(), which can be
moved to rtas.h.
Signed-off-by: Nathan Fontenot
---
arch/powerpc/include/asm/pSeries_reconfig.h | 15 ---
arch/powerpc/include/asm/rtas.h
On 10/03/2012 05:54 PM, Bjorn Helgaas wrote:
On Tue, Oct 2, 2012 at 8:23 AM, Nathan Zimmer wrote:
Revert commit id 2c6413aee215a43b1f95e218067abcde50ccbc5e
On larger systems (256 cores+) with significant I/O attached, this single message
represents over 20% of the messages at boot.
Is this
On 10/04/2012 11:37 AM, Joe Perches wrote:
On Thu, 2012-10-04 at 11:02 -0500, Nathan Zimmer wrote:
At many of our customer sites the log level is set to KERN_DEBUG. It
helps avoid reboots due to operator impatience. Machines this large
take significantly longer than typical to boot and seeing
On 10/05/2012 09:14 AM, Joe Perches wrote:
On Fri, 2012-10-05 at 08:55 -0500, Nathan Zimmer wrote:
On 10/04/2012 11:37 AM, Joe Perches wrote:
On Thu, 2012-10-04 at 11:02 -0500, Nathan Zimmer wrote:
At many of our customer sites the log level is set to KERN_DEBUG. It
helps avoid reboots due to
On 10/05/2012 10:16 AM, Bjorn Helgaas wrote:
On Fri, Oct 5, 2012 at 8:54 AM, Nathan Zimmer wrote:
On 10/05/2012 09:14 AM, Joe Perches wrote:
On Fri, 2012-10-05 at 08:55 -0500, Nathan Zimmer wrote:
On 10/04/2012 11:37 AM, Joe Perches wrote:
On Thu, 2012-10-04 at 11:02 -0500, Nathan Zimmer
ing loop.
Cc: Eric Dumazet
Cc: Alexander Viro
Cc: David Woodhouse
Cc: Alexey Dobriyan
Cc: "Paul E. McKenney"
Signed-off-by: Nathan Zimmer
---
fs/proc/generic.c | 56 +-
fs/proc/inode.c | 252 +--
fs/proc/internal.
led_trigger_cpu appears to have no function: it
resides in a per-cpu data structure which never changes after the
trigger is registered. So just remove it.
Reported-by: Miles Lane
Signed-off-by: Nathan Lynch
---
drivers/leds/ledtrig-cpu.c | 21 -
1 file changed, 21 deletions
from forming in parallel through multiple EPOLL_CTL_ADD
> >> operations. However, for the simple case of an epoll file descriptor
> >> attached directly to a wakeup source (with no nesting), we do not need
> >> to hold the 'epmutex'.
> >>
> >>
On 09/18/2013 02:09 PM, Jason Baron wrote:
On 09/13/2013 11:54 AM, Nathan Zimmer wrote:
We noticed a scaling issue in the SPECjbb benchmark. Running perf,
we found that it was spending lots of time in SYS_epoll_ctl.
In particular it is holding the epmutex.
This patch helps by moving out
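Very loosely, the idea has the following shape; this is a sketch, not the actual fs/eventpoll.c control flow, which also has lock-ordering constraints to respect.

/* Only nesting -- adding one epoll fd inside another -- can create
 * wakeup loops, so only that case pays for the global epmutex; a plain
 * fd is covered by the per-instance ep->mtx. */
static int do_add(struct eventpoll *ep, struct epoll_event *epds,
                  struct file *tfile, int fd)
{
        int error;
        int full_check = is_file_epoll(tfile);  /* nesting possible? */

        if (full_check)
                mutex_lock(&epmutex);   /* global: guards loop checks */
        mutex_lock(&ep->mtx);
        error = ep_insert(ep, epds, tfile, fd);
        mutex_unlock(&ep->mtx);
        if (full_check)
                mutex_unlock(&epmutex);
        return error;
}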
Robin and I had been
working on.
Signed-off-by: Robin Holt
Signed-off-by: Nathan Zimmer
To: "H. Peter Anvin"
To: Ingo Molnar
Cc: Linux Kernel
Cc: Linux MM
Cc: Rob Landley
Cc: Mike Travis
Cc: Daniel J Blueman
Cc: Andrew Morton
Cc: Greg KH
Cc: Yinghai Lu
Cc: Mel Gorman
---
mm/n
l.org/lkml/2011/2/25/297.
Any thoughts?
Cc: Al Viro
Cc: Jason Baron
Reported-by: Jerry Lohr
Signed-off-by: Nathan Zimmer
---
fs/eventpoll.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 9ad17b15..752e5ff 1
On Mon, Sep 23, 2013 at 11:17:39AM -0400, Jason Baron wrote:
> On 09/19/2013 12:37 PM, Nathan Zimmer wrote:
> > On 09/18/2013 02:09 PM, Jason Baron wrote:
> >> On 09/13/2013 11:54 AM, Nathan Zimmer wrote:
> >>> We noticed some scaling issue in the SPECjbb benchmark
On Mon, Sep 23, 2013 at 11:47:41AM -0500, Nathan Zimmer wrote:
> On Mon, Sep 23, 2013 at 11:17:39AM -0400, Jason Baron wrote:
> > On 09/19/2013 12:37 PM, Nathan Zimmer wrote:
> > > On 09/18/2013 02:09 PM, Jason Baron wrote:
> > >> On 09/13/2013 11:54 AM, Nathan Zi
vid Woodhouse
Cc: Alexey Dobriyan
Signed-off-by: Nathan Zimmer
---
fs/proc/generic.c | 64 ++---
fs/proc/inode.c | 251 +--
fs/proc/internal.h |2 +
include/linux/proc_fs.h |7 +-
4 files changed, 191 insertions(+),
On 10/18/2012 02:46 AM, Eric Dumazet wrote:
On Wed, 2012-10-17 at 15:25 -0500, Nathan Zimmer wrote:
I am currently tracking a hot lock reported by a customer on a large (512 core)
system. I am running 3.7.0-rc1, but the issue looks like it has been
this way for a very long time.
The
Hi Bryan,
On Thu, 2012-10-18 at 11:18 -0700, Bryan Wu wrote:
> @@ -117,14 +117,14 @@ static int __init ledtrig_cpu_init(void)
> for_each_possible_cpu(cpu) {
> struct led_trigger_cpu *trig = &per_cpu(cpu_trig, cpu);
>
> - mutex_init(&trig->lock);
> + spi
Split timer_list_show_tickdevices() out of the header and just pull the rest up
to timer_list_show(). Also tweak the location of the whitespace. This is all
to prep for the fix.
CC: John Stultz
CC: Thomas Gleixner
CC: linux-kernel@vger.kernel.org
Reported-by: Dave Jones
Signed-off-by: Nathan Zimmer
it a spin.
Nathan Zimmer (4):
procfs: /proc/sched_stat fails on very very large machines.
procfs: /proc/sched_debug fails on very very large machines.
/proc/timer_list split timer_list_show_tickdevices
Convert timer list to be a proper seq_file.
kernel/sched/debug.c |
seq_operations.
The output should be identical to previous version and thus not need the
version number.
CC: Ingo Molnar
CC: Peter Zijlstra
Cc: Alexander Viro
CC: linux-kernel@vger.kernel.org
Reported-by: Dave Jones
Signed-off-by: Nathan Zimmer
---
kernel/sched/stats.c | 73
seq_operations and treat each cpu as an individual record.
The output should be identical to previous version.
CC: Ingo Molnar
CC: Peter Zijlstra
Cc: Alexander Viro
CC: linux-kernel@vger.kernel.org
Reported-by: Dave Jones
Signed-off-by: Nathan Zimmer
---
kernel/sched/debug.c | 84
Convert /proc/timer_list to a proper seq_file with its own iterator. This is a
little more complex given that we have to make two passes with two separate
headers.
CC: John Stultz
CC: Thomas Gleixner
CC: linux-kernel@vger.kernel.org
Reported-by: Dave Jones
Signed-off-by: Nathan Zimmer
When running with 4096 cores, attempting to read /proc/sched_stat and
/proc/sched_debug will fail with an ENOMEM condition.
On sufficiently large systems the total amount of data is more than 4 MB, so
it won't fit into a single buffer.
Nathan Zimmer (2):
procfs: /proc/sched_stat fails on
seq_operations.
The output should be identical to previous version and thus not need the
version number.
Signed-off-by: Nathan Zimmer
CC: Ingo Molnar
CC: Peter Zijlstra
CC: linux-kernel@vger.kernel.org
---
kernel/sched/stats.c | 139 +-
1 files changed, 81
seq_operations and treat each cpu as an individual record.
The output should be identical to previous version except the trailing '\n'
was dropped.
Does that require incrementing the version?
Or should I find a way to reinclude it?
Signed-off-by: Nathan Zimmer
CC: Ingo Molnar
CC: Peter Zi
On Tue, Nov 06, 2012 at 04:31:28PM -0500, Dave Jones wrote:
> On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
> > On systems with 4096 cores attempting to read /proc/sched_debug fails.
> > We are trying to push all the data into a single kmalloc buffer.
> >
On 11/06/2012 05:49 PM, Dave Jones wrote:
On Tue, Nov 06, 2012 at 05:24:15PM -0600, Nathan Zimmer wrote:
> On Tue, Nov 06, 2012 at 04:31:28PM -0500, Dave Jones wrote:
> > On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
> > > On systems with 4096 cores
seq_operations and treat each cpu as an individual record.
The output should be identical to previous version.
Signed-off-by: Nathan Zimmer
CC: Ingo Molnar
CC: Peter Zijlstra
CC: linux-kernel@vger.kernel.org
CC: Al Viro
---
kernel/sched/debug.c | 73
However, timer_list has two separate per-online-cpu loops which will require
a bit more thought.
Nathan Zimmer (2):
procfs: /proc/sched_stat fails on very very large machines.
procfs: /proc/sched_debug fails on very very large machines.
kernel/sched/debug.c | 73 +---
kernel/
seq_operations.
The output should be identical to previous version and thus not need the
version number.
Signed-off-by: Nathan Zimmer
CC: Ingo Molnar
CC: Peter Zijlstra
CC: linux-kernel@vger.kernel.org
CC: Al Viro
---
kernel/sched/stats.c | 154 ++---
1 files
I am getting an early boot problem. It only happens on the larger of the
machines; I haven't seen it crop up on machines with less than 512 GB of RAM.
It shows up in the latest Linus kernel too.
I am (wildly) guessing that what is happening is that the new_memmap that is
being passed to bios is someho
On 01/02/2013 12:04 PM, H. Peter Anvin wrote:
On 01/02/2013 09:21 AM, Nathan Zimmer wrote:
I am getting an early boot problem. It only happens on the larger of the
machines; I haven't seen it crop up on machines with less than 512 GB of RAM.
It shows up in the latest Linus kernel too.
I am (w
On Thu, Jan 03, 2013 at 03:50:55PM +, Matt Fleming wrote:
> On Wed, 2013-01-02 at 15:09 -0600, Robin Holt wrote:
> > On Wed, Jan 02, 2013 at 01:13:43PM -0600, Nathan Zimmer wrote:
> > > On 01/02/2013 12:04 PM, H. Peter Anvin wrote:
> > > >On 01/02/2013 09:21 AM,
solution is to not use the single_open mechanism but to provide
our own seq_operations and treat each cpu as an individual record.
Also sysrq_sched_debug_show must be updated to not use the new
sched_debug_show.
The output should be identical to previous version.
Signed-off-by: Nathan Zimmer
is to not use the single_open mechanism but to provide
our own seq_operations.
The output should be identical to previous version and thus not need the
version number.
Signed-off-by: Nathan Zimmer
CC: Ingo Molnar
CC: Peter Zijlstra
CC: linux-kernel@vger.kernel.org
Reported-by: Dave Jones
set the
dev.groups field.
Signed-off-by: Nathan Fontenot
Patch updated to correct formatting errors.
Please cc me on responses/comments.
---
drivers/base/memory.c | 143 +-
1 file changed, 62 insertions(+), 81 deletions(-)
Index: linux/drivers
set the
dev.groups field.
Signed-off-by: Nathan Fontenot
Please cc me on responses/comments
The memory we set aside in the previous patch needs to be reinserted.
We start this process via late_initcall so we will have multiple cpus to do
the work.
Signed-off-by: Mike Travis
Signed-off-by: Nathan Zimmer
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Greg Kro
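As a sketch of the mechanism; insert_deferred_memory() and the thread naming are hypothetical.

#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/nodemask.h>

int insert_deferred_memory(void *data);        /* hypothetical worker */

/* By late_initcall time the secondary CPUs are online, so each node can
 * add its deferred memory back in parallel. */
static int __init deferred_meminit(void)
{
        int nid;

        for_each_online_node(nid)
                kthread_run(insert_deferred_memory,
                            (void *)(long)nid, "meminit/%d", nid);
        return 0;
}
late_initcall(deferred_meminit);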
This rfc patch set delays initializing large sections of memory until we have
started cpus. This has the effect of reducing startup times on large memory
systems. On 16TB it can take over an hour to boot and most of that time
is spent initializing memory.
We avoid that bottleneck by delaying ini
the 15 to 30 minute range.
Signed-off-by: Mike Travis
Signed-off-by: Nathan Zimmer
Cc: Rob Landley
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Yinghai Lu
Cc: Andrew Morton
---
Documentation/kernel-parameters.txt | 15
arch/x86/Kconfig
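Since the diffstat touches Documentation/kernel-parameters.txt, the cutover point is presumably a boot parameter; the plumbing would look roughly like this, with the parameter name prememinit invented for illustration.

#include <linux/init.h>
#include <linux/kernel.h>

static unsigned long premem_size __initdata;

/* e.g. prememinit=64G: initialize this much up front, defer the rest. */
static int __init parse_prememinit(char *arg)
{
        premem_size = memparse(arg, &arg);
        return 0;
}
early_param("prememinit", parse_prememinit);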
On 06/21/2013 12:03 PM, H. Peter Anvin wrote:
On 06/21/2013 09:51 AM, Greg KH wrote:
On Fri, Jun 21, 2013 at 11:25:32AM -0500, Nathan Zimmer wrote:
This rfc patch set delays initializing large sections of memory until we have
started cpus. This has the effect of reducing startup times on
On 06/21/2013 02:10 PM, Yinghai Lu wrote:
On Fri, Jun 21, 2013 at 11:50 AM, Greg KH wrote:
On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin wrote:
On 06/21/2013 09:51 AM, Greg KH wrote:
I suspect the cutoff for this should be a lot
On 06/21/2013 12:28 PM, H. Peter Anvin wrote:
On 06/21/2013 10:18 AM, Nathan Zimmer wrote:
Since you made it a compile time option, it would be good to know how
much code it adds, but otherwise I agree with Greg here... this really
shouldn't need to be an option. It *especially* shouldn
On Fri, Jun 21, 2013 at 01:08:06PM -0700, H. Peter Anvin wrote:
> Is this init code? 32K of unconditional runtime addition isn't completely
> trivial.
Some of it is init code but not all.
I am guessing 24k of that is actually runtime.
>
> Nathan Zimmer wrote:
>
> >
On Fri, Jun 21, 2013 at 01:28:11PM -0700, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 12:19 PM, Nathan Zimmer wrote:
> > On 06/21/2013 02:10 PM, Yinghai Lu wrote:
> >> in this way we can keep all numa etc on the place when online ram, cpu,
> >> pci...
> >>
&g
On Sun, Jun 23, 2013 at 11:28:40AM +0200, Ingo Molnar wrote:
>
> That's 4.5 GB/sec initialization speed - that feels a bit slow and the
> boot time effect should be felt on smaller 'a couple of gigabytes' desktop
> boxes as well. Do we know exactly where the 2 hours of boot time on a 32
> TB sy
On Wed, Jun 26, 2013 at 02:14:30PM +0200, Ingo Molnar wrote:
>
> * Nathan Zimmer wrote:
>
> > perf seems to struggle with 512 cpus, but I did get some data.
>
> Btw., mind outlining in what way it struggles?
>
> Thanks,
>
> Ingo
System interactivit
On Wed, Jun 26, 2013 at 03:37:15PM +0200, Ingo Molnar wrote:
>
> * Andrew Morton wrote:
>
> > On Wed, 26 Jun 2013 11:22:48 +0200 Ingo Molnar wrote:
> >
> > > except that on 32 TB
> > > systems we don't spend ~2 hours initializing 8,589,934,592 page heads.
> >
> > That's about a million a sec
On 06/26/2013 10:12 AM, Dave Hansen wrote:
On 06/26/2013 07:49 AM, Nathan Zimmer wrote:
My guess is that the NMIs are overwhelming the system, but I have not found a
good way to profile perf gone wild, so it is only a guess.
I've got an 80-core system and the symptoms sound similar to perf issue