On Thu, Jul 19, 2007 at 11:11:42AM +0200, Ingo Molnar wrote:
* Andrew Morton [EMAIL PROTECTED] wrote:
+softlockup_thresh:
+
+This value can be used to lower the softlockup tolerance
+threshold. The default threshold is 10s. If a cpu is locked up
+for 10s, the kernel complains. Valid
On Thu, Jul 19, 2007 at 11:51:14AM -0700, Jeremy Fitzhardinge wrote:
Ingo Molnar wrote:
just in case someone sees false positives and wants to turn it off.
Why not make 0=off?
A patch to disable softlockup during boot already went in.
On Wed, Jul 18, 2007 at 11:09:07PM -0700, Andrew Morton wrote:
On Wed, 18 Jul 2007 22:41:21 -0700 Ravikiran G Thirumalai [EMAIL PROTECTED]
wrote:
On Wed, Jul 18, 2007 at 04:08:58PM -0700, Andrew Morton wrote:
On Mon, 16 Jul 2007 15:26:50 -0700
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote
Kernel warns of softlockups if the softlockup thread is not able to run
on a CPU for 10s. It is useful to lower the softlockup warning
threshold in testing environments to catch potential lockups early.
Following patch adds a kernel parameter 'softlockup_lim' to control
the softlockup threshold.
On Thu, Jul 12, 2007 at 07:13:17PM -0700, Andrew Morton wrote:
On Thu, 12 Jul 2007 17:06:16 -0700 Ravikiran G Thirumalai [EMAIL PROTECTED]
wrote:
Too many remote cpu references due to /proc/stat.
On x86_64, with newer kernel versions, kstat_irqs is a bit of a problem.
On every call
Too many remote cpu references due to /proc/stat.
On x86_64, with newer kernel versions, kstat_irqs is a bit of a problem.
On every call to kstat_irqs, the process brings in per-cpu data from all
online cpus. Doing this for NR_IRQS, which is now 256 + 32 * NR_CPUS
results in (256+32*63) * 63
On Wed, Jun 20, 2007 at 09:36:30AM -0400, Len Brown wrote:
On Wednesday 20 June 2007 04:49, Andreas Herrmann wrote:
On Tue, Jun 19, 2007 at 11:38:02PM -0400, Len Brown wrote:
On Tuesday 19 June 2007 18:50, Andreas Herrmann wrote:
I fear, however, that this patch defeats the purpose of
On Mon, Jun 18, 2007 at 01:20:55AM -0700, Andrew Morton wrote:
On Mon, 18 Jun 2007 10:12:04 +0200 Ingo Molnar [EMAIL PROTECTED] wrote:
Subject: [patch] x86: fix spin-loop starvation bug
From: Ingo Molnar [EMAIL PROTECTED]
Miklos
While running a dbench stress test on an NFS-mounted file system, I noticed
the subject error message on the client machine. The client machine is a 48
core box with NUMA characteristics and 1024 dbench processes running
continuously in a loop, while another memory hog application runs in
On Thu, May 24, 2007 at 11:03:56AM +0200, Martin Schwidefsky wrote:
On Wed, 2007-05-23 at 11:57 -0700, Ravikiran G Thirumalai wrote:
Current git with the patches applied and the default configuration for
s390 decreases the section size of .data.percpu from 0x3e50 to 0x3e00.
0.5% decrease
On Wed, May 23, 2007 at 11:26:53AM -0700, Yu, Fenghua wrote:
elements are cacheline aligned. And as such, this differentiates the
local
only data and remotely accessed data cleanly.
OK, but could we please have a concise description of the impact
of these changes on kernel memory
On Wed, May 23, 2007 at 12:09:56PM -0700, Yu, Fenghua wrote:
Has there been any measurable benefit yet due to tail padding?
We don't have data that tail padding actually helps. It all
depends on what data the linker lays out in the cachelines.
As of now we just want to create the
On Sat, Apr 28, 2007 at 01:59:46AM -0400, Len Brown wrote:
On Thursday 26 April 2007 09:26, you wrote:
...
CONFIG_ACPI depends on CONFIG_PM, yet this build fails because you have
CONFIG_ACPI=y and CONFIG_PM=n
Unfortunately kconfig doesn't trace dependencies when select is used,
making
Provide a failsafe mechanism to avoid the kernel spinning forever at read_hpet_tsc
during early kernel bootup.
This failsafe mechanism was introduced in 21-rc,
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2f7a2a79c3ebb44f8b1b7d9b4fd3a650eb69e544
But looks like the
We noticed a drop in network performance due to the irq_desc being cacheline
aligned rather than internode aligned. We see 50% of expected performance
when two e1000 nics local to two different nodes have consecutive irq
descriptors allocated, due to false sharing.
Note that this patch does away
On Mon, Apr 09, 2007 at 01:40:57PM -0700, Andrew Morton wrote:
On Mon, 9 Apr 2007 11:08:53 -0700
Siddha, Suresh B [EMAIL PROTECTED] wrote:
Align the per cpu runqueue to the cacheline boundary. This will minimize the
number of cachelines touched during remote wakeup.
Signed-off-by:
On Mon, Apr 09, 2007 at 03:47:52PM -0600, Eric W. Biederman wrote:
Andrew Morton [EMAIL PROTECTED] writes:
This will consume nearly 4k per irq won't it? What is the upper bound
here, across all configs and all hardware?
Is VSMP the only arch which has
On Mon, Apr 09, 2007 at 03:17:05PM -0700, Siddha, Suresh B wrote:
On Mon, Apr 09, 2007 at 02:53:09PM -0700, Ravikiran G Thirumalai wrote:
On Mon, Apr 09, 2007 at 01:40:57PM -0700, Andrew Morton wrote:
On Mon, 9 Apr 2007 11:08:53 -0700
Siddha, Suresh B [EMAIL PROTECTED] wrote:
Kiran
On Fri, Mar 23, 2007 at 10:40:17AM +0100, Eric Dumazet wrote:
On Fri, 23 Mar 2007 09:59:11 +0100
Nick Piggin [EMAIL PROTECTED] wrote:
Implement queued spinlocks for i386. This shouldn't increase the size of
the spinlock structure, while still able to handle 2^16 CPUs.
Not
On Mon, Mar 12, 2007 at 01:56:13PM -0700, David Miller wrote:
From: Pekka J Enberg [EMAIL PROTECTED]
Date: Mon, 12 Mar 2007 14:15:16 +0200 (EET)
On 3/9/07, David Miller [EMAIL PROTECTED] wrote:
The whole cache-multipath subsystem has to have its guts revamped for
proper error
On Sat, Jan 13, 2007 at 01:20:23PM -0800, Andrew Morton wrote:
Seeing the code helps.
But there was a subtle problem with hold time instrumentation here.
The code assumed that a critical section exiting through
spin_unlock_irq had entered the critical section with spin_lock_irq, but that
might not be
On Sat, Jan 13, 2007 at 12:00:17AM -0800, Andrew Morton wrote:
On Fri, 12 Jan 2007 23:36:43 -0800 Ravikiran G Thirumalai [EMAIL
PROTECTED] wrote:
void __lockfunc _spin_lock_irq(spinlock_t *lock)
{
local_irq_disable();
rdtsc(t1
Hi,
We noticed high interrupt hold off times while running some memory intensive
tests on a Sun x4600 8 socket 16 core x86_64 box. We noticed softlockups,
lost ticks and even wall time drifting (which is probably a bug in the
x86_64 timer subsystem).
The test was simple, we have 16 processes,
On Fri, Jan 12, 2007 at 11:46:22AM -0800, Christoph Lameter wrote:
On Fri, 12 Jan 2007, Ravikiran G Thirumalai wrote:
The test was simple, we have 16 processes, each allocating 3.5G of memory
and touching each and every page and returning. Each of the processes is
bound to a node
On Fri, Jan 12, 2007 at 01:45:43PM -0800, Christoph Lameter wrote:
On Fri, 12 Jan 2007, Ravikiran G Thirumalai wrote:
Moreover most atomic operations are to remote memory, which is also
increasing the problem by making the atomic ops take longer. Typically
mature NUMA system have implemented
On Sat, Jan 13, 2007 at 03:39:45PM +1100, Nick Piggin wrote:
Ravikiran G Thirumalai wrote:
Hi,
We noticed high interrupt hold off times while running some memory
intensive
tests on a Sun x4600 8 socket 16 core x86_64 box. We noticed softlockups,
[...]
We did not use any lock debugging
On Fri, Jan 12, 2007 at 05:11:16PM -0800, Andrew Morton wrote:
On Fri, 12 Jan 2007 17:00:39 -0800
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
But is
lru_lock an issue is another question.
I doubt it, although there might be changes we can make in there to
work around
On Sun, Jan 07, 2007 at 12:05:03PM -0800, Andrew Morton wrote:
On Sun, 07 Jan 2007 05:24:45 -0800
Daniel Walker [EMAIL PROTECTED] wrote:
Now it fails with CONFIG_PARAVIRT off .
scripts/kconfig/conf -s arch/i386/Kconfig
CHK include/linux/version.h
CHK
On Sun, Jan 07, 2007 at 01:06:58PM -0800, Ravikiran G Thirumalai wrote:
Question is, now we have 2 versions of spin_locks_irq implementation
with CONFIG_PARAVIRT -- one with regular cli sti and other with virtualized
CLI/STI -- sounds odd!
Sunday morning hangovers !! spin_lock_irq
Implement interrupt enabling while spinning for lock for spin_lock_irq
Signed-off by: Pravin B. Shelar <[EMAIL PROTECTED]>
Signed-off by: Ravikiran Thirumalai <[EMAIL PROTECTED]>
Signed-off by: Shai Fultheim <[EMAIL PROTECTED]>
Index: linux-2.6.20-rc1/include/asm-x86_64/spinlock.h
There seems to be no good reason for spin_lock_irq to disable interrupts
while spinning. Zwane Mwaikambo had an implementation a couple of years ago,
and the only objection seemed to be concerns about buggy code using
spin_lock_irq whilst interrupts are disabled:
http://lkml.org/lkml/2004/5/26/87
On Wed, Jan 03, 2007 at 12:16:35AM -0800, Andrew Morton wrote:
On Tue, 2 Jan 2007 23:59:23 -0800
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
The following patches do just that. The first patch is preparatory in nature
and the second one changes the x86_64 implementation
Hi Andrew,
dev_to_node() does not work as expected on x86 and x86_64 as pointed out
earlier here:
http://lkml.org/lkml/2006/11/7/10
Following patch fixes it, please apply. (Note: The fix depends on support
for PCI domains for x86/x86_64)
Thanks,
Kiran
dev_to_node does not work as expected on
Enable system hashtable memory to be distributed among nodes on x86_64 NUMA
Forcing the kernel to use node interleaved vmalloc instead of bootmem
for the system hashtable memory (alloc_large_system_hash) reduces the
memory imbalance on node 0 by around 40MB on a 8 node x86_64 NUMA box:
Before
2.6.19 stopped booting (or booted, depending on build/config) on our x86_64
systems due to a bug introduced in 2.6.19. check_nmi_watchdog schedules an
IPI on all cpus to busy wait on a flag, but fails to set the busywait
flag if NMI functionality is disabled. This causes the secondary cpus
to spin
On Tue, Sep 06, 2005 at 04:40:28PM -0700, Ravikiran G Thirumalai wrote:
Patch to convert ide core code to use hwgroup lock instead of a global
ide_lock.
Index: linux-2.6.13/drivers/ide/ide-io.c
===
--- linux-2.6.13.orig
On Wed, Sep 07, 2005 at 06:06:23PM +0100, Alan Cox wrote:
On Maw, 2005-09-06 at 16:44 -0700, Ravikiran G Thirumalai wrote:
Patch to convert piix driver to use per-driver/hwgroup lock and kill
ide_lock. In the case of piix, hwgroup-lock should be sufficient.
PIIX requires that both
On Wed, Sep 07, 2005 at 06:09:10PM +0100, Alan Cox wrote:
On Maw, 2005-09-06 at 16:33 -0700, Ravikiran G Thirumalai wrote:
2. Change the core ide code to use hwgroup-lock instead of ide_lock.
Deprecate ide_lock (patch 2)
hwgroups and IDE locking requirements are frequently completely
On Wed, Sep 07, 2005 at 11:19:24AM +0200, Jens Axboe wrote:
On Tue, Sep 06 2005, Ravikiran G Thirumalai wrote:
The following patchset breaks down the global ide_lock to per-hwgroup lock.
We have taken the following approach.
Curious, what is the point of this?
On smp machines
Patch to make ide-host controllers use hwgroup lock where serialization with
hwgroup->lock is necessary
Signed-off-by: Vaibhav V. Nivargi <[EMAIL PROTECTED]>
Signed-off-by: Alok N. Kataria <[EMAIL PROTECTED]>
Signed-off-by: Ravikiran Thirumalai <[EMAIL PROTECTED]>
Index:
Patch to convert piix driver to use per-driver/hwgroup lock and kill
ide_lock. In the case of piix, hwgroup->lock should be sufficient.
Signed-off-by: Ravikiran Thirumalai <[EMAIL PROTECTED]>
Index: linux-2.6.13/drivers/ide/pci/piix.c
Patch to convert ide core code to use hwgroup lock instead of a global
ide_lock.
Signed-off-by: Vaibhav V. Nivargi <[EMAIL PROTECTED]>
Signed-off-by: Alok N. Kataria <[EMAIL PROTECTED]>
Signed-off-by: Ravikiran Thirumalai <[EMAIL PROTECTED]>
Signed-off-by: Shai Fultheim <[EMAIL PROTECTED]>
Following patch moves the hwif tuning code from probe_hwif to ideprobe_init
after ideprobe_init calls hwif_init so that all hwifs have associated
hwgroups. With this patch, we should always have hwgroups for hwifs during
calls to the drive tune routines.
Signed-off-by: Alok N Kataria <[EMAIL
The following patchset breaks down the global ide_lock to per-hwgroup lock.
We have taken the following approach.
1. Move the hwif tuning code from probe_hwif to ideprobe_init, after
hwif_init so that hwgroups are present for all the hwifs when the tune
routines for the hwifs are invoked (patch
Following patch moves a few static 'read mostly' variables to the
.data.read_mostly section. Typically these are vector - irq tables,
boot_cpu_data, node_maps etc., which are initialized once and read from
often and rarely written to. Please include.
Thanks,
Kiran
Patch to mark variables
Machines with ide-interfaces which do not have pci devices are crashing on boot
at pcibus_to_node in the ide drivers. We noticed this on a x445 running
2.6.13-rc4. A similar issue was discussed earlier, but the crash was due
to hwif being NULL.
http://marc.theaimsgroup.com/?t=11207535203=1=2
On Thu, Jul 28, 2005 at 10:20:26AM -0700, Dave Hansen wrote:
On Wed, 2005-07-27 at 18:31 -0700, Ravikiran G Thirumalai wrote:
On Wed, Jul 27, 2005 at 06:17:24PM -0700, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
Yes, it does cause a crash.
I don't know
On Wed, Jul 27, 2005 at 06:24:45PM -0700, Andrew Morton wrote:
> Ravikiran G Thirumalai <[EMAIL PROTECTED]> wrote:
> >
> > While booting with SMT disabled in bios, when using acpi srat to setup
> > cpu_to_node[], sparse apic_ids create problems. Here's a fix for tha
On Wed, Jul 27, 2005 at 06:17:24PM -0700, Andrew Morton wrote:
> Ravikiran G Thirumalai <[EMAIL PROTECTED]> wrote:
> >
> > While reserving KVA for lmem_maps of node, we have to make sure that
> > node_remap_start_pfn[] is aligned to a proper pmd boundary.
> > (node
While booting with SMT disabled in the BIOS, when using ACPI SRAT to set up
cpu_to_node[], sparse apic_ids create problems. Here's a fix for that.
Signed-off-by: Ravikiran Thirumalai <[EMAIL PROTECTED]>
Signed-off-by: Shai Fultheim <[EMAIL PROTECTED]>
Index: linux-2.6.13-rc3/arch/x86_64/mm/srat.c