>What's "shared percpu data" ? It sounds to me like a contradiction in
>terms. Isn't percpu data supposed to only be accessed by the CPU which
>owns it to prevent cache line bouncing? In which case, what's the
point
>of sharing that data with other CPUs? Surely "shared percpu data" is
>just
On Wed, May 23, 2007 at 02:13:24PM -0700, Yu, Fenghua wrote:
> Yes, in theory, sharing shared percpu data with local percpu data in one
> cache line can cause cache line contention between remote and local
> access.
What's "shared percpu data" ? It sounds to me like a contradiction in
terms.
On Wed, May 23, 2007 at 02:13:24PM -0700, Yu, Fenghua wrote:
Yes, in theory, sharing shared percpu data with local percpu data in one
cache line can cause cache line contention between remote and local
access.
What's shared percpu data ? It sounds to me like a contradiction in
terms. Isn't
What's shared percpu data ? It sounds to me like a contradiction in
terms. Isn't percpu data supposed to only be accessed by the CPU which
owns it to prevent cache line bouncing? In which case, what's the
point
of sharing that data with other CPUs? Surely shared percpu data is
just the same as
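To make the proposal concrete: a minimal sketch of the interface being
discussed, in which local-only percpu data keeps the existing
DEFINE_PER_CPU while remotely accessed percpu data is segregated into its
own cacheline-aligned subsection. The macro and section names follow the
thread's wording and may not match the patches exactly, and the
SMP_CACHE_BYTES stand-in is illustrative (it is really a per-arch
constant):

    #define SMP_CACHE_BYTES 64      /* illustrative stand-in */
    #define ____cacheline_aligned_in_smp \
            __attribute__((__aligned__(SMP_CACHE_BYTES)))

    /* Local-only percpu data: packed together in .data.percpu, as today. */
    #define DEFINE_PER_CPU(type, name) \
            __attribute__((__section__(".data.percpu"))) \
            __typeof__(type) per_cpu__##name

    /* Shared percpu data: head-aligned and kept in its own subsection,
     * so the linker can keep it away from the packed local-only data
     * and pad its tail out to a cacheline boundary. */
    #define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
            __attribute__((__section__(".data.percpu.shared_aligned"))) \
            __typeof__(type) per_cpu__##name ____cacheline_aligned_in_smp

    /* e.g. a percpu runqueue stub that remote cpus touch when balancing */
    struct rq_stub { long nr_running; };
    DEFINE_PER_CPU_SHARED_ALIGNED(struct rq_stub, runqueues);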
On Wed, May 23, 2007 at 11:26:53AM -0700, Yu, Fenghua wrote:

The elements are cacheline aligned. And as such, this differentiates the
local-only data and the remotely accessed data cleanly.

> OK, but could we please have a concise description of the impact
> of these changes on kernel memory footprint? Increase or decrease?
> And by approximately how much?

On Wed, 2007-05-23 at 11:57 -0700, Ravikiran G Thirumalai wrote:

Depending on how the linker places percpu data, the patches can either
increase or decrease the size of .data.percpu.

On Thu, May 24, 2007 at 11:03:56AM +0200, Martin Schwidefsky wrote:

Current git with the patches applied and the default configuration for
s390 decreases the section size of .data.percpu from 0x3e50 to 0x3e00,
a 0.5% decrease.
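Martin's s390 number is one data point; as the thread says, the direction
depends on where the linker puts things. A toy model (plain userspace C,
invented object sizes, not the real linker) of why segregating the shared
objects can shrink the section even though each one gains head and tail
padding:

    #include <stdio.h>

    #define LINE 64

    static size_t align_up(size_t off, size_t a)
    {
            return (off + a - 1) & ~(a - 1);
    }

    int main(void)
    {
            /* four percpu objects in link order: local, shared, local, shared */
            size_t size[]   = { 8, 48, 8, 16 };
            int    shared[] = { 0, 1, 0, 1 };
            size_t off = 0;

            /* interleaved: each shared object is head- and tail-aligned in
             * place, so padding appears around every one of them */
            for (int i = 0; i < 4; i++) {
                    if (shared[i])
                            off = align_up(off, LINE);
                    off += size[i];
                    if (shared[i])
                            off = align_up(off, LINE);
            }
            printf("interleaved: %zu bytes\n", off);    /* 256 */

            /* grouped: local objects pack tightly, then one aligned run
             * holds both shared objects */
            size_t grp = 8 + 8;
            grp = align_up(grp, LINE) + 48 + 16;
            grp = align_up(grp, LINE);
            printf("grouped:     %zu bytes\n", grp);    /* 128 */
            return 0;
    }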
On Wed, May 23, 2007 at 12:09:56PM -0700, Yu, Fenghua wrote:

> Has there been any measurable benefit yet due to tail padding?

We don't have data that tail padding actually helps. It all
depends on what data the linker lays out in the cachelines.
As of now we just want to create the infrastructure (so that
more and more people who need it can use it).

On Wed, 23 May 2007 12:20:05 -0700 Ravikiran G Thirumalai <[EMAIL PROTECTED]> wrote:

So what we have now is space wastage on some architectures, space
savings on some, but with no measurable performance benefit from the
infrastructure itself. Why not push the infrastructure when we really
need it, as against pushing it now when we are not sure it benefits?
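For readers wondering what tail padding buys: with head alignment alone,
whatever the linker places immediately after a small shared object can
land in that object's last cacheline. A runnable toy (userspace, names
invented); whether the two variables end up in one line depends on how
the toolchain orders them, which is exactly the point being argued above:

    #include <stdio.h>
    #include <stdint.h>

    #define LINE 64

    /* head-aligned "shared" object with no tail padding ... */
    long shared_counter __attribute__((aligned(LINE)));
    /* ... and a local-only variable the linker is free to place right
     * behind it, inside the same cacheline */
    long local_hot;

    int main(void)
    {
            uintptr_t s = (uintptr_t)&shared_counter;
            uintptr_t l = (uintptr_t)&local_hot;

            printf("same cacheline: %s\n",
                   (s / LINE) == (l / LINE) ? "yes" : "no");
            return 0;
    }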
On Wed, May 23, 2007 at 02:13:24PM -0700, Yu, Fenghua wrote:

> Yes, in theory, sharing shared percpu data with local percpu data in one
> cache line can cause cache line contention between remote and local
> access.

What's "shared percpu data"? It sounds to me like a contradiction in
terms. Isn't percpu data supposed to be accessed only by the CPU which
owns it, to prevent cache line bouncing? In which case, what's the point
of sharing that data with other CPUs? Surely "shared percpu data" is
just the same as shared data?
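One concrete reading of "shared percpu data", consistent with the patch
description above: data that each cpu writes only in its own slot (so
the fast path never contends) but that other cpus occasionally read. A
userspace toy of that pattern (NR_CPUS and all names are invented for
illustration):

    #include <stdio.h>

    #define NR_CPUS 4
    #define LINE    64

    /* one slot per cpu, each padded out to its own cacheline so remote
     * readers do not bounce lines holding unrelated local-only data */
    struct pcpu_counter {
            long count;
    } __attribute__((aligned(LINE)));

    static struct pcpu_counter events[NR_CPUS];

    /* fast path: a cpu touches only its own line */
    static void local_inc(int cpu)
    {
            events[cpu].count++;
    }

    /* slow path: a reader pulls every cpu's line, i.e. remote access */
    static long read_total(void)
    {
            long sum = 0;
            for (int cpu = 0; cpu < NR_CPUS; cpu++)
                    sum += events[cpu].count;
            return sum;
    }

    int main(void)
    {
            local_inc(0);
            local_inc(0);
            local_inc(2);
            printf("total: %ld\n", read_total());
            return 0;
    }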