Re: cpu_exclusive sched domains fix broke ppc64

2005-08-24 Thread Paul Mackerras
Paul Jackson writes:

> ... however ... question for Paul M. ...  what version of gcc did this fail 
> on?

The gcc-4.0.2 in Debian/ppc sid, which is biarch.

> I finally got my crosstools setup working for me again, and building
> a powerpc64 using gcc-3.4.0 on my Intel PC box does _not_ fail.  That

Did you have CONFIG_NUMA=y ?

Paul.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: cpu_exclusive sched domains fix broke ppc64

2005-08-24 Thread Paul Jackson
A day or two ago, Paul M. wrote:
> Compiling -rc7 for ppc64 using pSeries_defconfig I get this compile
> error:

Not that the following really matters ... I've already sent in a fix,
based on your analysis, followed by Nick's suggestion that we don't do
it this way anyway.

... however ... question for Paul M. ...  what version of gcc did this fail on?

I finally got my crosstools setup working for me again, and building
a powerpc64 using gcc-3.4.0 on my Intel PC box does _not_ fail.  That
build goes through fine.  This is with CONFIG_CPUSETS=y, but without my
fix of early this Wednesday to put the cpumask in question into a local
variable.

Either I've managed to confuse myself (most likely) or else this gcc
3.4 is newer than you were using, and this newer gcc has gotten smart
enough to unravel this particular case and recognize that there actually
is already a memory object (the array of cpumasks, one per node, specifying
which cpus are on that node) lying around that can be used here.

Strange.

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.925.600.0401


Re: cpu_exclusive sched domains fix broke ppc64

2005-08-24 Thread Paul Jackson
Paul Mackerras wrote:
> I'm not sure what the best way to fix this is

Thank you for reporting this.  Likely the best way to fix this for now,
since we are late in a release (Linus will probably want to whack me
upside the head for breaking his build ;) is to leave the
node_to_cpumask and for_each_cpu_mask exactly as they are, and have the
code that my cpu_exclusive sched domain patch added make a local copy
of the cpumask.

I just sent off a patch to do this - quite untested so far.

I am trying now to fire up crosstools to verify the build.
But if you can get it to build anytime soon, let me know.  My
crosstools are rusty -- it might take me a bit to resuscitate them.

I also am not sure what is the best way to fix this detail with
node_to_cpumask and for_each_cpu_mask in the long term.  The choices I
see are:

 1) Leave it be - which makes it easy to trip the build bug I hit,
due to the different styles of node_to_cpumask, inline or
macro, on different archs.

 2) Make node_to_cpumask a macro on all archs, though that
makes it even easier than it is now to write code that
appears to modify a local variable, but actually modifies
some global array of the per-node cpumasks, which could
lead to some juicy runtime bugs.

 3) Make node_to_cpumask an inline on all archs, though that might
force a local stack copy of a cpumask in places that might
be performance critical on archs with big cpumasks.

 4) Perhaps some more subtle combination of macros/inlines
can be all things to all archs.

I'm not going to unravel the above tonight.

> it seems unfortunate that for_each_cpu_mask
> requires the mask to be an lvalue, but that isn't documented anywhere
> that I can see.

Are you saying that it's unfortunate that for_each_cpu_mask requires
an lvalue, or that it's unfortunate that this isn't documented?

Or both ;).

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.925.600.0401

