[OMPI devel] 1.4.5rc5 has been released

2012-02-07 Thread Jeff Squyres
In the usual place:

http://www.open-mpi.org/software/ompi/v1.4/

This fixes all known issues.  Changes since the last rc:

- Add a note about the Intel compilers to the README
- Per Paul Hargrove's notes, replace $(RM) with rm in Makefile.am
- Add a note that ROMIO is not supported on OpenBSD
- Fix preprocessor macros to use __linux__ instead of linux

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread George Bosilca
This doesn't sound like a very good idea, despite significant support from a 
lot of institutions. There are no standardization efforts in the targeted 
community, and championing broader support in the Java world has never been one 
of our main targets.

OMPI does not include the Boost bindings, despite the fact that they were 
developed at IU. OMPI does not include Python or R bindings despite their 
large user communities. Why should we suddenly provide unstandardized Java 
bindings?

I think we should not tackle such an inclusion before there is at least the 
beginning of a standardization effort in the targeted community. They have to 
step up and address their needs (if they are real), instead of relying on us to 
take a decision. Until then, the fast-growing targeted community should 
maintain the bindings as a standalone project on their own.

  george.

On Feb 1, 2012, at 15:20, Ralph Castain wrote:

> FROM: LANL, HLRS, Cisco, Oracle, and IBM
> 
> WHAT: Adds Java bindings
> 
> WHY: The Hadoop community would like to use MPI in their efforts, and most of 
> their code is in Java
> 
> WHERE: ompi/mpi/java plus one new config file in ompi/config
> 
> TIMEOUT: Feb 10, 2012
> 
> 
> Hadoop is a Java-based environment for processing extremely large data sets. 
> Modeled on the Google enterprise system, it has evolved into its own 
> open-source community. Currently, they use their own IPC for messaging, but 
> acknowledge that it is nowhere near as efficient or well-developed as found 
> in MPI.
> 
> While third-party Java bindings are available, the Hadoop business world is 
> leery of depending on something that "bolts on" - they would be more willing 
> to adopt the technology if it were included in a "standard" distribution. 
> Hence, they have requested that Open MPI provide that capability, and in 
> exchange will help champion broader adoption of Java support within the MPI 
> community.
> 
> We have based the OMPI bindings on the mpiJava code originally developed at 
> IU, and currently maintained by HLRS. Adding the bindings to OMPI is 
> completely transparent to all other OMPI users and has zero performance 
> impact on the rest of the code/bindings. We have set up the configure logic 
> so that the Java bindings will build if/when they can or are explicitly 
> requested, just as with other language support.
> 
> As the Hadoop community represents a rapidly-growing new set of customers and 
> needs, we feel that adding these bindings is appropriate. The bindings will 
> be maintained by those organizations that have an interest in this use-case.
> 
> 
> ___
> devel mailing list
> de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/devel




Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Rayson Ho
Currently, Hadoop tasks (in a job) are independent of each other. If Hadoop
is going to use MPI for inter-task communication, then make sure they
understand that the MPI standard currently does not address fault
tolerance.

Note that it is not uncommon to run MapReduce jobs on Amazon EC2's
spot instances, which can be taken back by Amazon at any time if the
spot price rises above the user's bid price. If Hadoop is going
to use MPI without a fault-tolerant MPI implementation, then the
whole job needs to be rerun.

http://www.youtube.com/watch?v=66rfnFA0jpM

Rayson

=
Open Grid Scheduler / Grid Engine
http://gridscheduler.sourceforge.net/

Scalable Grid Engine Support Program
http://www.scalablelogic.com/








Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain
The community is aware of the issue. However, the corporations 
interested/involved in this area are not running on EC2, nor are they concerned 
about having allocations taken away. The question of failed nodes is something 
we plan to address over time, but it is not considered an immediate 
show-stopper.





Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain
Nobody is asking us to make any decision or take a position regarding 
standardization. The Hadoop community fully intends to bring the question of 
Java binding standards to the Forum over the next year, but we all know that is 
a long, arduous journey. In the interim, they have not only asked that we 
provide the bindings in our release, but are also providing the support to 
maintain them.

If members of the Python or R communities were to step forward, offer to do the 
work and maintain it, and could show it had zero impact on the rest of the code 
base, I for one would welcome their bindings. I can't see the harm - the 
bindings can always be removed if/when their communities cease to support them.






Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Rayson Ho
Ralph,

I am not totally against the idea. As long as Hadoop does not take
away its current task-communication mechanism before MPI finally has a
standard way to handle node failure (there are just too many papers on
FT MPI - I remember reading about checkpointing MPI jobs more than 10
years ago!), then I am not concerned at all!

Rayson

=
Open Grid Scheduler / Grid Engine
http://gridscheduler.sourceforge.net/

Scalable Grid Engine Support Program
http://www.scalablelogic.com/






Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain
:-)

I agree, and I don't sense anyone pushing in the direction of distorting the 
current MPI behaviors. There are some good business reasons to want to use MPI 
in analytics, and there are thoughts on how to work around the failure issues; 
Hadoop clusters also have some mechanisms available to them, not typically used 
on HPC clusters, that may help. It will obviously be a bit of a 
work-in-progress for awhile, but the corporate investment rate is high, so 
hands will be available to address these issues.

Thanks
Ralph


Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Paul H. Hargrove
As an HPC software developer and user of OMPI, I'd like to add my $0.02 
here even though I am not an OMPI developer.


Nothing in George's response seems to me to preclude the interested 
institutions (listed as FROM in the RFC) from forking a branch to pursue 
this work until there can be standardization of Java bindings.  If the 
Java bindings really are as "orthogonal" to the rest of the code as the 
RFC authors claim, then merging the branch back to the trunk once they 
have a stable/standard interface should not be onerous.


I know from experience in other projects that work that SHOULD "have 
zero impact" on those not using it seldom does.  There is always 
something that pops up, such as small autotools mistakes that break 
nightly tarballs and goofs like that.  If nothing else, the existence of 
the Java bindings would seem to impose an additional testing burden on 
developers making changes to internal interfaces and data structures.  
For that reason I agree w/ George that the risk/reward ratio is not yet 
favorable enough to justify adding Java bindings to OMPI's trunk.


So I'd propose that the work be done on a branch and the RFC can be 
reissued when there is both

a) a standard to which the bindings can claim to conform
b) an implementation which has been shown to be stable

-Paul


Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain
We already have a stable, standard interface for non-C language bindings, Paul 
- the C++ bindings, for example, are built on top of the C bindings.

The binding codes are all orthogonal to the base code. All they do is massage 
data access and then loop back to the C bindings. This is the normal way we 
handle all non-C bindings, so nothing different there.

The work has been done on a branch, and an RFC issued. The bindings conform to 
the MPI standard, and the implementation uses an existing external, third-party 
binding that has been tested.

So I'm not sure what you are asking for that hasn't already been done…


Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Paul H. Hargrove
Forgive me if I misunderstand, but I am under the impression that the 
MPI Forum has not begun any standardization of MPI bindings for Java. 
Have I missed something?


-Paul

On 2/7/2012 12:39 PM, Ralph Castain wrote:

We already have a stable, standard interface for non-C language bindings, Paul 
- the C++ bindings, for example, are built on top of them.

The binding codes are all orthogonal to the base code. All they do is massage 
data access and then loop back to the C bindings. This is the normal way we 
handle all non-C bindings, so nothing different there.

The work has been done on a branch, and an RFC issued. The bindings conform to 
the MPI standard, and the implementation uses an existing external, third-party 
binding that has been tested.

So I'm not sure what you are asking that hasn't already been done…

On Feb 7, 2012, at 1:33 PM, Paul H. Hargrove wrote:


As an HPC software developer and user of OMPI, I'd like to add my $0.02 here 
even though I am not an OMPI developer.

Nothing in George's response seems to me to preclude the interested institutions (listed 
as FROM in the RFC) from forking a branch to pursue this work until there can be 
standardization of Java bindings.  If the JAVA bindings really are as 
"orthogonal" to the rest of the code as the RFC authors claim, then merging a 
branch back to the trunk when they have a stable/standard interface should not be onerous.

I know from experience in other projects that work that SHOULD "have zero 
impact" on those not using it seldom does.  There is always something that pops up, 
such as small autotools mistakes that break nightly tarballs and goofs like that.  If 
nothing else, the existence of the JAVA bindings would seem to impose an additional 
testing burden on developers making changes to internal interfaces and data structures.  
For that reason I agree w/ George that the risk/reward ratio is not yet favorable 
enough to support adding Java bindings in OMPI's trunk.

So I'd propose that the work be done on a branch and the RFC can be reissued 
when there is both
a) a standard to which the bindings can claim to conform
b) an implementation which has been shown to be stable

-Paul

On 2/7/2012 12:18 PM, Ralph Castain wrote:

Nobody is asking us to make any decision or take a position re standardization. 
The Hadoop community fully intends to bring the question of Java binding 
standards to the Forum over the next year, but we all know that is a long, 
arduous journey. In the interim, they not only asked that we provide the 
bindings in our release, but also are providing the support to maintain them.

If members of the Python or R communities were to step forward, offer to do the 
work and maintain it, and could show it had zero impact on the rest of the code 
base, I for one would welcome their bindings. Can't see the harm - they can always 
be removed if/when they cease to support them on their own.


On Feb 7, 2012, at 12:33 PM, George Bosilca wrote:


This doesn't sound like a very good idea, despite significant support from a 
lot of institutions. There are no standardization efforts in the targeted 
community, and championing broader support in the Java world was not one of 
our main targets.

OMPI does not include the Boost bindings, despite the fact that it was 
developed at IU. OMPI does not include Python nor R bindings despite their 
large user community. Why suddenly should we provide unstandardized Java 
bindings?

I think we should not tackle such inclusion before there is at least a 
beginning of a standardization effort in the targeted community. They have to 
step up and address their needs (if they are real), instead of relying on us to 
take a decision. Until then, the fast growing targeted community should 
maintain the binding as a standalone project on their own.

  george.

On Feb 1, 2012, at 15:20 , Ralph Castain wrote:


FROM: LANL, HLRS, Cisco, Oracle, and IBM

WHAT: Adds Java bindings

WHY: The Hadoop community would like to use MPI in their efforts, and most of 
their code is in Java

WHERE: ompi/mpi/java plus one new config file in ompi/config

TIMEOUT: Feb 10, 2012


Hadoop is a Java-based environment for processing extremely large data sets. 
Modeled on the Google enterprise system, it has evolved into its own 
open-source community. Currently, they use their own IPC for messaging, but 
acknowledge that it is nowhere near as efficient or well-developed as found in 
MPI.

While 3rd party Java bindings are available, the Hadoop business world is leery of depending on 
something that "bolts on" - they would be more willing to adopt the technology if it were 
included in a "standard" distribution. Hence, they have requested that Open MPI provide 
that capability, and in exchange will help champion broader adoption of Java support within the MPI 
community.

We have based the OMPI bindings on the mpiJava code originally developed at IU, 
and currently maintained by HLRS. Adding the bindings to OMPI is complet

Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Jeff Squyres
On Feb 7, 2012, at 2:33 PM, George Bosilca wrote:

> This doesn't sound like a very good idea, despite a significant support from 
> a lot of institutions. There is no standardization efforts in the targeted 
> community, and championing a broader support in the Java world was not one of 
> our main target.

This is a bit of a chicken-and-egg issue.  

You can't standardize something until you know what the good thing is to 
standardize.  We currently have no Java bindings -- there were several in the 
late 90s, but all of them have bit-rotted in one way or another.  Java -- as a 
performant technology -- has come a long way since then.

Hence, this is an attempt by those in the MPI community to go get some 
real-world experience on what to standardize.

> OMPI does not include the Boost bindings, despite the fact that it was 
> developed at IU.

Apples and oranges.

Boost.mpi = class library
Boost.mpi != bindings
Boost.mpi has its own, separate community.
Boost.mpi hasn't been developed in quite a while.

This effort = bindings (i.e., 1:1 mapping to the C MPI bindings)
This effort != class library
This effort has a group that is trying to join the MPI community

> OMPI does not include Python nor R bindings despite their large user 
> community.

They've also:

- never asked to be part of Open MPI
- support more than just Open MPI

If all goes well, the Java bindings will someday support more than Open MPI.  
But for today, the enterprise players are choosing to put efforts and resources 
only into the enterprise-class MPI implementation: Open MPI.

> Why suddenly should we provide unstandardized Java bindings?

Because multiple members of the Open MPI developer community would like to go 
explore this space.

The way these bindings interact is basically additional stuff in configure (to 
find the java compiler and the like) and a new directory under ompi/mpi/java.

It's the moral equivalent of a new component.
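To make the "new component" analogy concrete, the kind of configure-time check Jeff describes could be sketched as follows. This is a hypothetical autoconf fragment under the usual conventions, not OMPI's actual code; the macro and variable names (OMPI_WANT_JAVA_BINDINGS, want_java_bindings) are illustrative assumptions.

```m4
# Hypothetical sketch: detect a Java toolchain and enable the bindings
# only when one is found.  The real checks would live in ompi/config.
AC_CHECK_PROG([JAVAC], [javac], [javac], [none])
AC_CHECK_PROG([JAR], [jar], [jar], [none])
AS_IF([test "$JAVAC" != "none" && test "$JAR" != "none"],
      [want_java_bindings=yes],
      [want_java_bindings=no
       AC_MSG_WARN([Java compiler not found; skipping Java bindings])])
AM_CONDITIONAL([OMPI_WANT_JAVA_BINDINGS],
               [test "$want_java_bindings" = "yes"])
```

If no Java toolchain is found, the conditional is false and the tree builds exactly as before, which is what makes the bindings behave like an optional component.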

> I think we should not tackle such inclusion before there is at least a 
> beginning of a standardization effort in the targeted community.

Chicken-and-egg issue.  You know as well as I do that the MPI Forum won't talk 
about Java bindings until a (sizeable) community can be identified who can 
demonstrate real-world use cases.

> They have to step up and address their needs (if they are real), instead of 
> relying on us to take a decision.

Er... have you noticed Ralph's new employer?  He's at Greenplum now.  They're a 
hadoop company.

They basically hired him to enable MPI in a Hadoop world.

Sooo... I'd say that they *are* stepping up.  :-)

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain

On Feb 7, 2012, at 1:43 PM, Paul H. Hargrove wrote:

> Forgive me if I misunderstand, but I am under the impression that the MPI 
> Forum has not begun any standardization of MPI bindings for JAVA. Have I 
> missed something?

No, they haven't - but that doesn't mean that the bindings cannot conform to 
the standard. Remember, the standard doesn't dictate that you can't have Java 
bindings - it just doesn't currently require that you do. Big difference in 
those two statements.


Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Jeff Squyres
On Feb 7, 2012, at 3:33 PM, Paul H. Hargrove wrote:

> So I'd propose that the work be done on a branch and the RFC can be reissued 
> when there is both
> a) a standard to which the bindings can claim to conform

I don't really agree with this statement; see my prior email.

> b) an implementation which has been shown to be stable

That's fair enough.

The implementation has zero performance impact on the rest of the code base 
(e.g., the latency of C's MPI_Send).  But the stability of the code itself does need 
to be proven, and it definitely benefits from having others test it.

This is not an unusual pattern for the OMPI SVN trunk.  People develop stuff on 
branches all the time and bring them in to the trunk.  And sometimes it makes 
the trunk a little unstable for a while, despite the best of intentions and the 
best attempts at pre-testing before committing to the trunk (I know; I've been 
the cause of trunk instability before, too).

Case in point: some new MPI-3 functions were recently brought to the trunk.  
They had several mistakes in them that were not evident until others tried to 
compile / use them.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Paul H. Hargrove

Ralph,

I think you and I may be confusing each other with the meaning of 
"standard":


You asked me

So I'm not sure what you are asking that hasn't already been done…


My reply to that question is that when I wrote

a) a standard to which the bindings can claim to conform


I meant "a) JAVA bindings standardized by the MPI Forum."
In other words, I feel that new language bindings should be kept out of the 
trunk until there is a standard from the MPI Forum.
I don't think that is a "chicken-and-egg" problem, because the branch 
would be available to the Hadoop community to show the Forum the 
existence of the necessary community. In fact, download stats for that 
branch could be shown as evidence of that interest.


So, now I hope it is at least clear where we disagree.
I am, as I said at the start, NOT an OMPI developer and so I won't argue 
this point any further.


-Paul


Re: [OMPI devel] RFC: Java MPI bindings

2012-02-07 Thread Ralph Castain
No problems, Paul. I appreciate your input.

If everything in the trunk were required to be in the standard, then much of the 
trunk would have to be removed (e.g., all the fault tolerance code). As Jeff 
indicated, the trunk is an area in which we bring new functionality for broader 
exposure. I very much doubt there will be much instability for anyone not using 
the Java bindings (though I do expect some degree of debugging in that arena), 
but my ears are open if/when someone finds something.

You are welcome to try the branch, if it would help resolve concerns:

https://bitbucket.org/rhc/ompi-jv2


On Feb 7, 2012, at 2:06 PM, Paul H. Hargrove wrote:

> Ralph,
> 
> I think you and I may be confusing each other with the meaning of "standard":
> 
> You asked me
>> So I'm not sure what you are asking that hasn't already been done…
> 
> My reply to that question is that when I wrote
>> a) a standard to which the bindings can claim to conform
> 
> I meant "a) JAVA bindings standardized by the MPI Forum."
> In other words, I feel that new language bindings should be kept out of the trunk 
> until there is a standard from the MPI Forum.
> I don't think that is a "chicken-and-egg" problem, because the branch would 
> be available to the Hadoop community to show the Forum the existence of the 
> necessary community. In fact, download stats for that branch could be shown 
> as evidence of that interest.
> 
> So, now I hope it is at least clear where we disagree.
> I am, as I said at the start, NOT an OMPI developer and so I won't argue this 
> point any further.
> 
> -Paul

Re: [OMPI devel] 1.4.5rc5 has been released

2012-02-07 Thread Paul H. Hargrove



On 2/7/2012 8:59 AM, Jeff Squyres wrote:

This fixes all known issues.


Well, not quite...

I've SUCCESSFULLY retested 44 out of the 55 cpu/os/compiler/abi 
combinations currently on my list.
I expect 9 more by the end of the day (the older/slower hosts), but two 
of my test hosts are down.


So far I see only two problems that remain:

+ I can't build w/ the PGI compilers on MacOS Lion.
This was previously reported in 
http://www.open-mpi.org/community/lists/devel/2012/01/10258.php


+ Building w/ Solaris Studio 12.2 or 12.3 on Linux x86-64, with "-m32" 
required setting LD_LIBRARY_PATH.

This could be either a bug in Oracle's compiler or a libtool problem.
My report was: 
http://www.open-mpi.org/community/lists/devel/2012/01/10272.php


-Paul

--
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
HPC Research Department   Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900



Re: [OMPI devel] 1.4.5rc5 has been released

2012-02-07 Thread Paul H. Hargrove


On 2/7/2012 1:25 PM, Paul H. Hargrove wrote:


So far I see only two problems that remain:

+ I can't build w/ the PGI compilers on MacOS Lion.
This was previously reported in 
http://www.open-mpi.org/community/lists/devel/2012/01/10258.php


+ Building w/ Solaris Studio 12.2 or 12.3 on Linux x86-64, with "-m32" 
required setting LD_LIBRARY_PATH.
This could be either a bug in Oracle's compiler or a libtool 
problem.
My report was: 
http://www.open-mpi.org/community/lists/devel/2012/01/10272.php




With more of my results back, I can add the following to the 
known-but-not-resolved list:


+ "make check" fails atomics tests using GCCFSS-4.0.4 compilers on 
Solaris10/SPARC
Originally reported in: 
http://www.open-mpi.org/community/lists/devel/2012/01/10234.php
This is a matter of the Sun/Oracle fork of GCC (known as GCC For SPARC 
Systems, or GCCFSS) being buggy with respect to GNU inline asm.
The original failures were with gccfss-4.0.4, but I am now retesting with 
gccfss-4.3.3.

I'll report on those results later.

BTW:
Due to a scripting error, all these tests are actually against the 
openmpi-1.4.5rc5r25855 tarball rather than the genuine 1.4.5rc5.
I doubt there is any meaningful difference, but figured full disclosure 
is best :-).


-Paul

--
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
HPC Research Department   Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900



Re: [OMPI devel] 1.4.5rc5 has been released

2012-02-07 Thread Paul H. Hargrove


On 2/7/2012 2:37 PM, Paul H. Hargrove wrote:


+ "make check" fails atomics tests using GCCFSS-4.0.4 compilers on 
Solaris10/SPARC
Originally reported in: 
http://www.open-mpi.org/community/lists/devel/2012/01/10234.php
This is a matter of the Sun/Oracle fork of GCC (known as GCC For SPARC 
Systems, or GCCFSS) being buggy with respect to GNU inline asm.
The original failures were with gccfss-4.0.4, but I am now retesting with 
gccfss-4.3.3.
I'll report on those results later. 


Use of gccfss-4.3.3 is not an improvement.
Instead of failing the atomic_cmpset test, the compiler HANGS when 
compiling atomic_cmpset.c.
I allowed the compiler just over 4 hours accumulated CPU time before 
being convinced it was hung.


So, I'd like to request documenting "gccfss" as unusable in README.
This is important because this broken compiler is installed as 
/usr/bin/gcc on some Solaris systems.


-Paul

--
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
HPC Research Department   Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900



Re: [OMPI devel] 1.4.5rc5 has been released

2012-02-07 Thread Paul H. Hargrove


On 2/7/2012 1:25 PM, Paul H. Hargrove wrote:


I've SUCCESSFULLY retested 44 out of the 55 cpu/os/compiler/abi 
combinations currently on my list.
I expect 9 more by the end of the day (the older/slower hosts), but 
two of my test hosts are down.


My testing is complete for this rc:
+ 54 of my 55 configs have been tested, one host is down

Of those 54:
+ 47 require nothing "extra" (just --prefix, CC & friends, and CFLAGS & 
friends for non-default ABIs)
+ 2 configs using Solaris Studio compilers on Linux x86-64 w/ -m32 
require setting LD_LIBRARY_PATH

+ 2 OpenBSD configs require --disable-io-romio (as documented)
+ 2 configs using gccfss fail "make check" as described previously
+ 1 config using PGI compilers on MacOS Lion failed "make all"

-Paul

--
Paul H. Hargrove  phhargr...@lbl.gov
Future Technologies Group
HPC Research Department   Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory Fax: +1-510-486-6900