[OMPI users] CfP 2013 Workshop on Middleware for HPC and Big Data Systems (MHPC'13)

2013-04-25 Thread MHPC 2013
we apologize if you receive multiple copies of this message
===

CALL FOR PAPERS

2013 Workshop on

Middleware for HPC and Big Data Systems

MHPC '13

as part of Euro-Par 2013, Aachen, Germany

===

Date: August 27, 2013

Workshop URL: http://m-hpc.org

Springer LNCS

SUBMISSION DEADLINE:

May 31, 2013 - LNCS Full paper submission (rolling abstract submission)
June 28, 2013 - Lightning Talk abstracts


SCOPE

Extremely large, diverse, and complex data sets are generated from
scientific applications, the Internet, social media and other sources.
Data may be physically distributed and shared by an ever larger community.
Collecting, aggregating, storing and analyzing large data volumes
presents major challenges. Processing such amounts of data efficiently
has become a bottleneck for scientific discovery and technological
advancement. In addition, making the data accessible, understandable and
interoperable poses unsolved problems. Novel middleware architectures,
algorithms, and application development frameworks are required.

In this workshop we are particularly interested in original work at the
intersection of HPC and Big Data with regard to middleware handling
and optimizations. The scope covers existing and proposed middleware for
HPC and big data, including analytics libraries and frameworks.

The goal of this workshop is to bring together software architects,
middleware and framework developers, data-intensive application developers
as well as users from the scientific and engineering community to exchange
their experience in processing large datasets and to report their scientific
achievements and innovative ideas. The workshop also offers a dedicated forum
for these researchers to access the state of the art, to discuss problems
and requirements, to identify gaps in current and planned designs, and to
collaborate in strategies for scalable data-intensive computing.

The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion.
Presentations may be accompanied by interactive demonstrations.


TOPICS

Topics of interest include, but are not limited to:

- Middleware including: Hadoop, Apache Drill, YARN, Spark/Shark, Hive, Pig,
Sqoop, HBase, HDFS, S4, CIEL, Oozie, Impala, Storm and Hyracks
- Data-intensive middleware architecture
- Libraries/Frameworks including: Apache Mahout, Giraph, UIMA and GraphLab
- Next-generation databases including Apache Cassandra, MongoDB and
CouchDB/Couchbase
- Schedulers including Cascading
- Middleware for optimized data locality/in-place data processing
- Data handling middleware for deployment in virtualized HPC environments
- Parallelization and distributed processing architectures at the
middleware level
- Integration with cloud middleware and application servers
- Runtime environments and system level support for data-intensive computing
- Skeletons and patterns
- Checkpointing
- Programming models and languages
- Big Data ETL
- Stream processing middleware
- In-memory databases for HPC
- Scalability and interoperability
- Large-scale data storage and distributed file systems
- Content-centric addressing and networking
- Execution engines, languages and environments including CIEL/Skywriting
- Performance analysis, evaluation of data-intensive middleware
- In-depth analysis and performance optimizations in existing data-handling
middleware, focusing on indexing/fast storing or retrieval between compute
and storage nodes
- Highly scalable middleware optimized for minimum communication
- Use cases and experience for popular Big Data middleware
- Middleware security, privacy and trust architectures

DATES

Papers:
Rolling abstract submission
May 31, 2013 - Full paper submission
July 8, 2013 - Acceptance notification
October 3, 2013 - Camera-ready version due

Lightning Talks:
June 28, 2013 - Deadline for lightning talk abstracts
July 15, 2013 - Lightning talk notification

August 27, 2013 - Workshop Date


TPC

CHAIRS

Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Jie Tao (co-chair), Karlsruhe Institute of Technology, Germany
Lizhe Wang (co-chair), Chinese Academy of Sciences, China
Gianluigi Zanetti (co-chair), CRS4, Italy

PROGRAM COMMITTEE

Amitanand Aiyer, Facebook, USA
Costas Bekas, IBM, Switzerland
Jakob Blomer, CERN, Switzerland
William Gardner, University of Guelph, Canada
José Gracia, HPC Center of the University of Stuttgart, Germany
Zhenghua Guo, Indiana University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Sverre Jarp, CERN, Switzerland
Christopher Jung, Karlsruhe Institute of Technology, Germany
Andreas Knüpfer, Technische Universität Dresden, Germany
Nectarios Koziris, National Technical University of Athens, Greece
Yan Ma, Chinese Academy of Sciences, China
Martin Schulz, Lawrence Livermore National Laboratory, USA

Re: [OMPI users] ierr vs ierror in F90 mpi module

2013-04-25 Thread W Spector

Hi Jeff,

I just downloaded 1.7.1.  The new files in the use-mpi-f08 directory look great!

However the use-mpi-tkr and use-mpi-ignore-tkr directories don't fare so 
well.  Literally all the interfaces are still 'ierr'.


While I realize that both the F90 mpi module and interface checking
were optional prior to MPI 3.0, the final argument has been called
'ierror' since MPI-1!  This really should be fixed.
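
For reference, here is a minimal sketch of the failure mode (the file name
and compile line are only illustrative; any routine whose standard dummy
argument is 'ierror' shows the same thing):

# Hypothetical reproducer: a tiny program that uses the standard keyword
# name 'ierror', compiled against the mpi module.
cat > kwtest.f90 <<'EOF'
program kwtest
  use mpi
  implicit none
  integer :: rank, ie
  call MPI_Init(ierror=ie)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror=ie)
  print *, 'rank =', rank
  call MPI_Finalize(ierror=ie)
end program kwtest
EOF

# With the module's dummies named 'ierr', the compiler rejects the
# keyword-style calls above; renaming the keywords to 'ierr' makes it
# build, which is exactly the portability problem.
mpif90 -o kwtest kwtest.f90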


Walter

On 04/24/2013 06:08 PM, Jeff Squyres (jsquyres) wrote:

Can you try v1.7.1?

We did a major Fortran revamp in the 1.7.x series to bring it up to speed with 
MPI-3 Fortran stuff (at least mostly).  I mention MPI-3 because the name-based 
parameter passing stuff wasn't guaranteed until MPI-3.  I think 1.7.x should 
have gotten all the name-based parameter passing stuff correct (please let me 
know if you find any bugs!).

Just to be clear: it is unlikely that we'll be updating the Fortran support in 
the 1.6.x series.


On Apr 24, 2013, at 8:52 PM, W Spector 
  wrote:


Hi,

The MPI Standard specifies to use 'ierror' for the final argument in most 
Fortran MPI calls.  However the Openmpi f90 module defines it as being 'ierr'.  
This messes up those who want to use keyword=value syntax in their calls.

I just checked the latest 1.6.4 release and it is still broken.

Is this something that can be fixed?

Walter
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-25 Thread Dave Love
Ralph Castain  writes:

> On Apr 24, 2013, at 8:58 AM, Dave Love  wrote:
>
>> "Elken, Tom"  writes:
>> 
 I have seen it recommended to use psm instead of openib for QLogic cards.
>>> [Tom] 
>>> Yes.  PSM will perform better and be more stable when running OpenMPI
>>> than using verbs.
>> 
>> But unfortunately you won't be able to checkpoint.
>
> True - yet remember that OMPI no longer supports checkpoint/restart
> after the 1.6 series. Pending a new supporter coming along

As far as I can tell, lack of PSM checkpointing isn't specific to OMPI,
and I know people have resorted to verbs to get it.

Dropped CR is definitely a reason not to use OMPI past 1.6.  [By the way,
the release notes are confusing, saying that DMTCP is supported, but CR
is dropped.]  I'd have hoped a vendor who needs to support CR would
contribute, but I suppose changes just become proprietary if they move
the base past 1.6 :-(.

For general information, what makes the CR support difficult to maintain
-- is it just a question of effort?



Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-25 Thread Ralph Castain

On Apr 25, 2013, at 9:11 AM, Dave Love  wrote:

> Ralph Castain  writes:
> 
>> On Apr 24, 2013, at 8:58 AM, Dave Love  wrote:
>> 
>>> "Elken, Tom"  writes:
>>> 
> I have seen it recommended to use psm instead of openib for QLogic cards.
 [Tom] 
 Yes.  PSM will perform better and be more stable when running OpenMPI
 than using verbs.
>>> 
>>> But unfortunately you won't be able to checkpoint.
>> 
>> True - yet remember that OMPI no longer supports checkpoint/restart
>> after the 1.6 series. Pending a new supporter coming along
> 
> As far as I can tell, lack of PSM checkpointing isn't specific to OMPI,
> and I know people have resorted to verbs to get it.
> 
> Dropped CR is definitely a reason not to use OMPI past 1.6.  [By the way,
> the release notes are confusing, saying that DMTCP is supported, but CR
> is dropped.]  I'd have hoped a vendor who needs to support CR would
> contribute, but I suppose changes just become proprietary if they move
> the base past 1.6 :-(.

Not necessarily

> 
> For general information, what makes the CR support difficult to maintain
> -- is it just a question of effort?

Largely a lack of interest. Very few (i.e., a handful) of people around the 
world use it, and it is hard to justify putting in the effort for that small a 
user group. The person who did the work did so as part of his PhD thesis - he 
maintained it for a couple of years while doing a post-doc, but now has joined 
the "real world" and no longer has time. None of the other developers are 
employed by someone who cares.


> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-04-25 Thread Dave Love
"Elken, Tom"  writes:

>> > Intel acquired the InfiniBand assets of QLogic
>> > about a year ago.  These SDR HCAs are no longer supported, but should
>> > still work.
> [Tom] 
> I guess the more important part of what I wrote is that " These SDR HCAs are 
> no longer supported" :)

Sure, though from our point of view, they never were.  Good riddance to
that cluster vendor, who should have gone out of business earlier.

> [Tom] 
> Some testing from an Intel group who had these QLE7140 HCAs revealed to me 
> that they do _not_ work with our recent software stack such as IFS 7.1.1 
> (which includes OFED 1.5.4.1).

I suspect I had done the experiment too.

> They were able to get them to work with the QLogic OFED+ 6.0.2 stack.
> That corresponds to OFED 1.5.2 -- that was the first OFED to include
> PSM.

In case it helps for anyone else trying this with old kit:  I had been
using a v6.something, but I'd have to check the something.  Using the
set of "updates" modules built with that and the latest kernel also
provokes the crashes, binary compatibility or not.

> I am providing this info as a courtesy, but not making any guarantees
> that it will work.

Understood, and thanks.

> [Tom] 
> The older QLogic and OFED stacks mentioned above were not ported to nor 
> tested with RHEL 5.9, which did not exist at the time.  Sorry.

Sure, and presumably the Red Hat module shouldn't match the hardware if
it won't work.  (The kernel supports the even older QHT cards OK -- pity
anyone running an old cluster with Mellanox added to three incompatible
lots of Infinipath and ethernet islands.)




[OMPI users] Problem with Openmpi-1.4.0 and qlogic-ofed-1.5.4.1

2013-04-25 Thread Padma Pavani
Hi Team,

I am facing a problem while running the HPL benchmark.

I am using Intel MPI 4.0.1 with QLogic-OFED-1.5.4.1 to run the benchmark, and
also tried with openmpi-1.4.0, but I get the same error.


Error File :

[compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is
attempting to be sent to a process whose contact information is unknown in
file rml_oob_send.c at line 105
[compute-0-1.local:06936] [[14544,1],25] could not get route to
[[INVALID],INVALID]
[compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is
attempting to be sent to a process whose contact information is unknown in
file base/plm_base_proxy.c at line 86
[compute-0-1.local:06975] [[14544,1],27] ORTE_ERROR_LOG: A message is
attempting to be sent to a process whose contact information is unknown in
file rml_oob_send.c at line 105
[compute-0-1.local:06975] [[14544,1],27] could not get route to
[[INVALID],INVALID]
[compute-0-1.local:06975] [[14544,1],27] ORTE_ERROR_LOG: A message is
attempting to be sent to a process whose contact information is unknown in
file base/plm_base_proxy.c at line 86
[compute-0-1.local:06990] [[14544,1],19] ORTE_ERROR_LOG: A message is
attempting to be sent to a process whose contact information is unknown in
file rml_oob_send.c at line 105
[compute-0-1.local:06990] [[14544,1],19] could not get route to
[[INVALID],INVALID]


[OMPI users] multithreaded jobs

2013-04-25 Thread Vladimir Yamshchikov
Hi all,

The FAQ has excellent entries on how to schedule non-MPI jobs on an SGE
cluster, yet only simple jobs are shown as examples. But what about jobs that
can be run in multithreaded mode, say by specifying the option
-t number_of_threads? In other words, consider a command in an example qsub
script:
..
#$ -pe openmpi 16
..

mpirun -np $NSLOTS my_program -t 16 > out_file

Will that launch a program to run in 16 threads (as desired), or will this
launch 16 instances of the program with each instance trying to run in 16
threads (certainly not desired)?


Re: [OMPI users] Problem with Openmpi-1.4.0 and qlogic-ofed-1.5.4.1

2013-04-25 Thread Jeff Squyres (jsquyres)
I'm guessing you're the alter ego of 
http://www.open-mpi.org/community/lists/devel/2013/04/12309.php?  :-)

My first suggestion to you is to upgrade your version of Open MPI -- 1.4.0 is 
ancient.  Can you upgrade to 1.6.4?


On Apr 25, 2013, at 2:08 PM, Padma Pavani  wrote:

> Hi Team,
> 
> I am facing a problem while running the HPL benchmark.
> 
> I am using Intel MPI 4.0.1 with QLogic-OFED-1.5.4.1 to run the benchmark, and
> also tried with openmpi-1.4.0, but I get the same error.
> 
>  
> Error File : 
> 
> [compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is 
> attempting to be sent to a process whose contact information is unknown in 
> file rml_oob_send.c at line 105
> [compute-0-1.local:06936] [[14544,1],25] could not get route to 
> [[INVALID],INVALID]
> [compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is 
> attempting to be sent to a process whose contact information is unknown in 
> file base/plm_base_proxy.c at line 86
> [compute-0-1.local:06975] [[14544,1],27] ORTE_ERROR_LOG: A message is 
> attempting to be sent to a process whose contact information is unknown in 
> file rml_oob_send.c at line 105
> [compute-0-1.local:06975] [[14544,1],27] could not get route to 
> [[INVALID],INVALID]
> [compute-0-1.local:06975] [[14544,1],27] ORTE_ERROR_LOG: A message is 
> attempting to be sent to a process whose contact information is unknown in 
> file base/plm_base_proxy.c at line 86
> [compute-0-1.local:06990] [[14544,1],19] ORTE_ERROR_LOG: A message is 
> attempting to be sent to a process whose contact information is unknown in 
> file rml_oob_send.c at line 105
> [compute-0-1.local:06990] [[14544,1],19] could not get route to 
> [[INVALID],INVALID]
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] multithreaded jobs

2013-04-25 Thread Ralph Castain
Depends on what NSLOTS is and what your program's "-t" option does :-)

Assuming your "-t" tells your program the number of threads to start, then the 
command you show will execute NSLOTS number of processes, each of which will 
spin off the number of indicated threads.


On Apr 25, 2013, at 11:39 AM, Vladimir Yamshchikov  wrote:

> Hi all,
> 
> The FAQ has excellent entries on how to schedule non-MPI jobs on an SGE
> cluster, yet only simple jobs are shown as examples. But what about jobs that
> can be run in multithreaded mode, say by specifying the option
> -t number_of_threads? In other words, consider a command in an example qsub
> script:
> ..
> #$ -pe openmpi 16
> ..
> 
> mpirun -np $NSLOTS my_program -t 16 > out_file
> 
> Will that launch a program to run in 16 threads (as desired), or will this
> launch 16 instances of the program with each instance trying to run in 16
> threads (certainly not desired)?
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] assert in opal_datatype_is_contiguous_memory_layout

2013-04-25 Thread Jeff Squyres (jsquyres)
To follow up for the web archives...

We fixed this bug off-list.  It will be included in 1.6.5 and (likely) 1.7.2.


On Apr 5, 2013, at 3:18 PM, Eric Chamberland  
wrote:

> Hi again,
> 
> I have attached a very small example which raises the assertion.
> 
> The problem arises from a process which does not have any element to
> write in the file (and thus in the MPI_File_set_view)...
> 
> You can see this "bug" with openmpi 1.6.3, 1.6.4 and 1.7.0 configured with:
> 
> ./configure --enable-mem-debug --enable-mem-profile --enable-memchecker
> --with-mpi-param-check --enable-debug
> 
> Just compile the given example (idx_null.cc) as-is with
> 
> mpicxx -o idx_null idx_null.cc
> 
> and run with 3 processes:
> 
> mpirun -n 3 idx_null
> 
> You can modify the example by commenting out "#define WITH_ZERO_ELEMNT_BUG" to
> see that everything goes well when all processes have something to write.
> 
> There is no "bug" if you use openmpi 1.6.3 (and higher) without the debugging 
> options.
> 
> Also, all is working well with mpich-3.0.3 configured with:
> 
> ./configure --enable-g=yes
> 
> 
> So, is this a wrong "assert" in openmpi?
> 
> Is there a real problem to use this code in a "release" mode?
> 
> Thanks,
> 
> Eric
> 
> On 04/05/2013 12:57 PM, Eric Chamberland wrote:
>> Hi all,
>> 
>> I have a well working (large) code which is using openmpi 1.6.3 (see
>> config.log here:
>> http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_nodebug)
>> 
>> (I have used it for reading with MPI I/O with success over 1500 procs
>> with very large files)
>> 
>> However, when I use openmpi compiled with "debug" options:
>> 
>> ./configure --enable-mem-debug --enable-mem-profile --enable-memchecker
>> --with-mpi-param-check --enable-debug --prefix=/opt/openmpi-1.6.3_debug
>> (see the other config.log here:
>> http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_debug) the code
>> is aborting with an assertion on a very small example on 2 processors.
>> (the same very small example is working well without the debug mode)
>> 
>> Here is the assertion causing an abort:
>> 
>> ===
>> 
>> openmpi-1.6.3/opal/datatype/opal_datatype.h:
>> 
>> static inline int32_t
>> opal_datatype_is_contiguous_memory_layout( const opal_datatype_t* datatype,
>>                                            int32_t count )
>> {
>>     if( !(datatype->flags & OPAL_DATATYPE_FLAG_CONTIGUOUS) ) return 0;
>>     if( (count == 1) || (datatype->flags & OPAL_DATATYPE_FLAG_NO_GAPS) ) return 1;
>>
>>     /* This is the assertion: */
>>     assert( (OPAL_PTRDIFF_TYPE)datatype->size != (datatype->ub - datatype->lb) );
>>
>>     return 0;
>> }
>> 
>> ===
>> 
>> Can anyone tell me what this means?
>> 
>> It happens while writing a file with MPI I/O, when I am calling
>> "MPI_File_set_view" for the fourth time... with different MPI_Datatypes
>> created with "MPI_Type_indexed".
>> 
>> I am trying to reproduce the bug with a very small example to send
>> here, but if anyone has a hint to give me...
>> (I would like: this assert is not good! just ignore it ;-) )
>> 
>> Thanks,
>> 
>> Eric
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] multithreaded jobs

2013-04-25 Thread Vladimir Yamshchikov
$NSLOTS is what is requested by "-pe openmpi" in the script; my
understanding is that by default it is threads. $NSLOTS processes, each
spinning off -t threads, is not what is wanted, as each process could spin
off more threads than there are physical or logical cores per node, thus
degrading performance or even crashing the node. Even when -t is chosen with
the hardware in mind (a single processor, or 2, 4, 8, or 12 cores per node),
it is still not clear how these cores are utilized in multithreaded runs.
My question is then: how to correctly formulate resource scheduling for
programs designed to run in multithreaded mode? For those involved in
bioinformatics, examples are bwa with the -t option or blast+ with the
number_of_threads option specified.


On Thu, Apr 25, 2013 at 2:09 PM, Ralph Castain  wrote:

> Depends on what NSLOTS is and what your program's "-t" option does :-)
>
> Assuming your "-t" tells your program the number of threads to start, then
> the command you show will execute NSLOTS number of processes, each of which
> will spin off the number of indicated threads.
>
>
> On Apr 25, 2013, at 11:39 AM, Vladimir Yamshchikov 
> wrote:
>
> > Hi all,
> >
> > The FAQ has excellent entries on how to schedule non-MPI jobs on an SGE
> > cluster, yet only simple jobs are shown as examples. But what about jobs
> > that can be run in multithreaded mode, say by specifying the option -t
> > number_of_threads? In other words, consider a command in an example qsub
> > script:
> > ..
> > #$ -pe openmpi 16
> > ..
> >
> > mpirun -np $NSLOTS my_program -t 16 > out_file
> >
> > Will that launch a program to run in 16 threads (as desired), or will
> > this launch 16 instances of the program with each instance trying to run
> > in 16 threads (certainly not desired)?
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


Re: [OMPI users] multithreaded jobs

2013-04-25 Thread Ralph Castain

On Apr 25, 2013, at 5:33 PM, Vladimir Yamshchikov  wrote:

> $NSLOTS is what is requested by "-pe openmpi" in the script; my
> understanding is that by default it is threads.

No - it is the number of processing elements (typically cores) that are 
assigned to your job.

> $NSLOTS processes, each spinning off -t threads, is not what is wanted, as
> each process could spin off more threads than there are physical or logical
> cores per node, thus degrading performance or even crashing the node. Even
> when -t is chosen with the hardware in mind (a single processor, or 2, 4, 8,
> or 12 cores per node), it is still not clear how these cores are utilized in
> multithreaded runs.
> My question is then: how to correctly formulate resource scheduling for
> programs designed to run in multithreaded mode? For those involved in
> bioinformatics, examples are bwa with the -t option or blast+ with the
> number_of_threads option specified.

What you want to do is:

1. request a number of slots = the number of application processes * the number 
of threads each process will run

2. execute mpirun with the --cpus-per-proc N option, where N = the number of 
threads each process will run.

This will ensure you have one core for each thread. Note, however, that we 
don't actually bind a thread to the core - so having more threads than there 
are cores on a socket can cause a thread to bounce across sockets and 
(therefore) potentially across NUMA regions.
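
For example, a rough sketch (untested; hypothetical PE and program names,
4 processes x 4 threads = 16 slots):

#!/bin/bash
# Slots requested = 4 MPI processes * 4 threads per process = 16.
#$ -pe openmpi 16

# Launch 4 processes and reserve 4 cores for each, so every thread has a
# core available (binding specifics vary across Open MPI releases).
mpirun -np 4 --cpus-per-proc 4 my_program -t 4 > out_file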


> 
> 
> On Thu, Apr 25, 2013 at 2:09 PM, Ralph Castain  wrote:
> Depends on what NSLOTS is and what your program's "-t" option does :-)
> 
> Assuming your "-t" tells your program the number of threads to start, then 
> the command you show will execute NSLOTS number of processes, each of which 
> will spin off the number of indicated threads.
> 
> 
> On Apr 25, 2013, at 11:39 AM, Vladimir Yamshchikov  wrote:
> 
> > Hi all,
> >
> > The FAQ has excellent entries on how to schedule on a SGE cluster non-MPI 
> > jobs, yet only simple jobs are exemplified. But wnat about jobs that can be 
> > run in multithreaded mode, say specifying option -t number_of_threads? In 
> > other words, consider a command an esample qsub script:
> > ..
> > #$ -pe openmpi 16
> > ..
> >
> > mpirun -np $NSLOTS my_program -t 16 > out_file
> >
> > Will that launch a program to run in 16 threads (as desired) or will this 
> > launch 16 instances of a program wiith each instance trying to run in 16 
> > threads (certainly not desired)?
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] ierr vs ierror in F90 mpi module

2013-04-25 Thread W Spector

Jeff,

I tried building 1.7.1 on my Ubuntu system.  The default gfortran is
v4.6.3, so configure won't enable the mpi_f08 module build.  I also
tried a three-week-old snapshot of the gfortran 4.9 trunk.  This has
Tobias's new TYPE(*) support in it, but not his latest !GCC$ attributes
NO_ARG_CHECK stuff.  However, configure still won't enable the mpi_f08
module.


Is there a trick to getting a recent gfortran to compile the mpi_f08 module?
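
(For the record, this is roughly how I point configure at the snapshot
compiler -- the install prefix below is only illustrative:)

# Hypothetical prefix where the gfortran 4.9 snapshot is installed.
GF49=/opt/gcc-4.9-snapshot/bin/gfortran

# Tell Open MPI's configure which Fortran compiler to use, then check the
# configure summary and config.log for the mpi_f08 decision.
./configure FC=$GF49 --prefix=$HOME/openmpi-1.7.1-gcc49 2>&1 | tee configure.out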

I went into the openmpi-1.7.1/ompi/mpi/fortran/use-mpi-tkr/scripts 
directory and modified the files to use ierror instead of ierr.  (One 
well-crafted line of shell script.)  Did the same with a couple of .h.in 
files in the use-mpi-tkr and use-mpi-ignore-tkr directories, and 
use-mpi-tkr/attr_fn-f90-interfaces.h.in.  (One editor command each.)
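
(The one-liner was roughly the following, from memory -- GNU sed assumed,
with a whole-word match so identifiers that merely contain 'ierr' are left
alone:)

cd openmpi-1.7.1/ompi/mpi/fortran/use-mpi-tkr/scripts
# Rename the dummy argument 'ierr' to 'ierror' in the generator scripts.
sed -i 's/\bierr\b/ierror/g' *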


With the above, the mpi module is in much better shape.  However, there
are still some scattered argument names besides ierror that don't match
the standard.  A few examples from the code I am working with:


  MPI_Type_create_struct: The 2nd argument should be 
"array_of_blocklengths", instead of "array_of_block_lengths"


  MPI_Type_commit: "datatype" instead of "type"

  MPI_Type_free: Again, "datatype" instead of "type"

There are more...

Walter

On 04/25/2013 06:50 AM, W Spector wrote:

Hi Jeff,

I just downloaded 1.7.1.  The new files in the use-mpi-f08 directory look great!

However the use-mpi-tkr and use-mpi-ignore-tkr directories don't fare so
well.  Literally all the interfaces are still 'ierr'.

While I realize that both the F90 mpi module and interface checking
were optional prior to MPI 3.0, the final argument has been called
'ierror' since MPI-1!  This really should be fixed.

Walter

On 04/24/2013 06:08 PM, Jeff Squyres (jsquyres) wrote:

Can you try v1.7.1?

We did a major Fortran revamp in the 1.7.x series to bring it up to
speed with MPI-3 Fortran stuff (at least mostly).  I mention MPI-3
because the name-based parameter passing stuff wasn't guaranteed until
MPI-3.  I think 1.7.x should have gotten all the name-based parameter
passing stuff correct (please let me know if you find any bugs!).

Just to be clear: it is unlikely that we'll be updating the Fortran
support in the 1.6.x series.


On Apr 24, 2013, at 8:52 PM, W Spector 
  wrote:


Hi,

The MPI Standard specifies to use 'ierror' for the final argument in
most Fortran MPI calls.  However the Openmpi f90 module defines it as
being 'ierr'.  This messes up those who want to use keyword=value
syntax in their calls.

I just checked the latest 1.6.4 release and it is still broken.

Is this something that can be fixed?

Walter
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users