[OMPI users] Open MPI tutorials deprecated!

2011-03-10 Thread Abdul Rahman Riza
Hi All,

I am a newbie to Open MPI. I am using an Ubuntu 10.04 machine and installed
Open MPI from the repos. The README mentions the following:

If you are looking for a comprehensive MPI tutorial, these samples are
not enough.  An excellent MPI tutorial is available here:

http://webct.ncsa.uiuc.edu:8900/public/MPI/

But I can't access http://webct.ncsa.uiuc.edu:8900/public/MPI/ . Has it
moved somewhere else?

Riza


[OMPI users] QLogic Infiniband : Run switch from ib0 to eth0

2011-03-10 Thread Thierry LAMOUREUX
Hello,

We have recently enhanced our network with InfiniBand modules on a six-node
cluster.

We have installed all the OFED drivers for our hardware.

We have set the network IPs as follows:
- eth : 192.168.1.0 / 255.255.255.0
- ib : 192.168.70.0 / 255.255.255.0

After the first tests all seemed good: the IB interfaces ping each other, and ssh
and other kinds of exchanges over IB work well.

Then we started to run our jobs through Open MPI (built with the --with-openib
option) and our first results were very bad.

After investigation, our system has the following behaviour:
- the job starts over the IB network (a few packets are sent)
- the job switches to the eth network (all subsequent packets are sent to these interfaces)

We never specified the IP Address of our eth interfaces.

We tried to launch our jobs with the following options:
- mpirun -hostfile hostfile.list -mca blt openib,self
/common_gfs2/script-test.sh
- mpirun -hostfile hostfile.list -mca blt openib,sm,self
/common_gfs2/script-test.sh
- mpirun -hostfile hostfile.list -mca blt openib,self -mca
btl_tcp_if_exclude lo,eth0,eth1,eth2 /common_gfs2/script-test.sh

The final behaviour remains the same: the job is initiated over IB and runs over
eth.
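One thing worth double-checking: the MCA parameter in the commands above is spelled "blt", but the parameter Open MPI recognizes is "btl"; a misspelled parameter name is silently ignored, which would leave the TCP BTL eligible. A hedged sketch of invocations to try (parameter names as documented for Open MPI of this era; the hostfile and script paths are taken from the commands above):

```
# Restrict point-to-point transports to InfiniBand, shared memory, and self
# (note the spelling: btl, not blt):
mpirun -hostfile hostfile.list --mca btl openib,sm,self /common_gfs2/script-test.sh

# Or, if TCP must stay available, pin it to the IPoIB interface only:
mpirun -hostfile hostfile.list --mca btl_tcp_if_include ib0 /common_gfs2/script-test.sh
```

With a correctly spelled parameter, an unavailable openib BTL produces an error at startup instead of a silent fallback, which should make the real problem visible.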

We ran the performance test files (osu_bw and osu_latency) and got
reasonable results (see attached files).

We have tried plenty of different things but we are stuck: we don't get any
error message...

Thanks in advance for your help.

Thierry.
# OSU MPI Latency Test (Version 2.0)
# Size  Latency (us) 
0   9.39
1   8.98
2   6.92
4   6.94
8   6.94
16  6.99
32  7.09
64  7.30
128 7.56
256 7.70
512 8.27
1024    9.38
2048    12.14
4096    14.51
8192    19.79
16384   43.00
32768   64.82
65536   104.82
131072  164.28
262144  293.86
524288  536.71
1048576 1049.46
2097152 2213.57
4194304 3686.72
# OSU MPI Bandwidth Test (Version 2.0)
# Size  Bandwidth (MB/s) 
1   0.180975
2   0.365537
4   0.730864
8   1.461231
16  2.920952
32  5.793988
64  11.254934
128 27.403607
256 55.811413
512 109.614427
1024    210.083847
2048    329.558204
4096    506.783138
8192    749.913297
16384   570.730147
32768   794.796561
65536   968.103658
131072  990.723946
262144  1009.216695
524288  1032.053241
1048576 1063.046034
2097152 1209.998818
4194304 1346.575306
  HSN Codes: TCP = GigE, GM/MX = Myrinet, IBV/VAPI/UDAPL/PSM = InfiniBand

   Summary of Results
   ------------------
   Maximum Performance
  GigE :   57 usec,  102 MB/s     HSN-PSM :    2 usec, 1134 MB/s

   Average Performance
  GigE :   57 usec,  101 MB/s     HSN-PSM :    2 usec, 1124 MB/s

   Minimum Performance
  GigE :   57 usec,  100 MB/s     HSN-PSM :    2 usec, 1115 MB/s


Re: [OMPI users] multi-threaded programming

2011-03-10 Thread Jeff Squyres
On Mar 8, 2011, at 12:34 PM, Eugene Loh wrote:

> Let's say you have multi-threaded MPI processes, you request 
> MPI_THREAD_MULTIPLE and get MPI_THREAD_MULTIPLE, and you use the self,sm,tcp 
> BTLs (which have some degree of threading support).  Is it okay to have an 
> [MPI_Isend|MPI_Irecv] on one thread be completed by an MPI_Wait on another 
> thread?  I'm assuming some sort of synchronization and memory barrier/flush 
> in between to protect against funny race conditions.
> 
> If it makes things any easier on you, we can do this multiple-choice style:
> 
> 1)  Forbidden by the MPI standard.
> 2)  Not forbidden by the MPI standard, but will not work with OMPI (not even 
> with the BTLs that claim to be multi-threaded).
> 3)  Works well with OMPI (provided you use a BTL that's multi-threaded).

I believe the current answer is #2, but it would be great if the answer could 
change to be a variant of #3:

3) Works well with OMPI (provided you use a BTL that's safe to use with 
MPI_THREAD_MULTIPLE)

(i.e., the BTL doesn't have to be multi-threaded, itself)
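For readers following along, the distinction above is about the threading level an application requests at startup versus what the library actually grants. A minimal C++ sketch of that check (it needs an MPI installation and mpicxx/mpirun, so it is not runnable standalone; error handling omitted):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    int provided = MPI_THREAD_SINGLE;
    // Request full multi-threading; the library may grant a lower level.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        std::printf("MPI_THREAD_MULTIPLE not granted (got level %d)\n", provided);
    }
    // Only when MPI_THREAD_MULTIPLE is granted (and the transport in use
    // supports it) may a request started by MPI_Isend/MPI_Irecv on one
    // thread be completed by MPI_Wait on another thread, with appropriate
    // synchronization between the two threads.
    MPI_Finalize();
    return 0;
}
```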

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




[OMPI users] v1.5.2 release is missing the new "affinity" MPI extension

2011-03-10 Thread Jeff Squyres
We realized late yesterday that the v1.5.2 release is missing the new 
"affinity" MPI extension (that provides the OMPI_Affinity_str() function for 
MPI applications), even though it was specifically mentioned in the 1.5.2 NEWS 
and README files.

Oops!

We'll be releasing 1.5.3 shortly; it will include this new extension.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Number of processes and spawn

2011-03-10 Thread Jeff Squyres
This usually means you didn't install the GNU auto tools properly.

Check the HACKING file in the top-level directory for specific instructions on 
how to install the Autotools.



Re: [OMPI users] Number of processes and spawn

2011-03-10 Thread Federico Golfrè Andreasi
Hi Ralph,

I did a checkout of the 22794 revision with svn.
I've downloaded and installed (with the default configuration) in my /home
folder:
- m4 version 1.4.16
- autoconf version 2.68
- automake version 1.11
- libtool version 2.2.6b
I've modified my CSHRC to export the following:
setenv PATH
/home/fandreasi/m4-1.4.16/bin:/home/fandreasi/autoconf-2.68/bin:/home/fandreasi/automake-1.11/bin:/home/fandreasi/libtool-2.2.6b/bin:$PATH
setenv LD_LIBRARY_PATH /home/fandreasi/libtool-2.2.6b/lib

When I run autogen it returns the error I've attached.
Can you help me with this?

Thank you,
Federico.







Il giorno 05 marzo 2011 19:05, Ralph Castain  ha scritto:

> Hi Federico
>
> I tested the trunk today and it works fine for me - I let it spin for 1000
> cycles without issue. My test program is essentially identical to what you
> describe - you can see it in the orte/test/mpi directory. The "master" is
> loop_spawn.c, and the "slave" is loop_child.c. I only tested it on a single
> machine, though - will have to test multi-machine later. You might see if
> that makes a difference.
>
> The error you report in your attachment is a classic symptom of mismatched
> versions. Remember, we don't forward your ld_lib_path, so it has to be
> correct on your remote machine.
>
> As for r22794 - we don't keep anything that old on our web site. If you
> want to build it, the best way to get the code is to do a subversion
> checkout of the developer's trunk at that revision level:
>
> svn co -r 22794 http://svn.open-mpi.org/svn/ompi/trunk
>
> Remember to run autogen before configure.
>
>
> On Mar 4, 2011, at 4:43 AM, Federico Golfrè Andreasi wrote:
>
>
> Hi Ralph,
>
> I'm getting stuck with the spawning stuff.
>
> I've downloaded the snapshot from the trunk of 1st of March (
> openmpi-1.7a1r24472.tar.bz2),
> and I'm testing using a small program that does the following:
>  - the master program starts and each rank prints its hostname
>  - the master program spawns a slave program with the same size
>  - each rank of the slave (spawned) program prints its hostname
>  - end
> It is not always able to complete the run; there are two different behaviours:
>  1. not all the slaves print their hostname and the program ends suddenly
>  2. both programs end correctly but the orted daemon is still alive and I need
> to press ctrl-c to exit
>
>
> I've tried to recompile my test program with a previous snapshot
> (openmpi-1.7a1r22794.tar.bz2),
> of which I have only a compiled version of Open MPI (on another machine).
> It gives me an error before starting (I've attached it).
> Going through the FAQ I found some tips, and I verified that the program is
> compiled with the correct Open MPI version
> and that the LD_LIBRARY_PATH is consistent.
> So I would like to re-compile openmpi-1.7a1r22794.tar.bz2, but where
> can I find it?
>
>
> Thank you,
> Federico
>
>
>
>
>
>
>
>
>
>
> Il giorno 23 febbraio 2011 03:43, Ralph Castain  ha
> scritto:
>
>> Apparently not. I will investigate when I return from vacation next week.
>>
>>
>> Sent from my iPad
>>
>> On Feb 22, 2011, at 12:42 AM, Federico Golfrè Andreasi <
>> federico.gol...@gmail.com> wrote:
>>
>> Hi Ralph,
>>
>> I've tested spawning with the Open MPI 1.5 release but that fix is not
>> there.
>> Are you sure you've added it?
>>
>> Thank you,
>> Federico
>>
>>
>>
>> 2010/10/19 Ralph Castain < r...@open-mpi.org>
>>
>>> The fix should be there - just didn't get mentioned.
>>>
>>> Let me know if it isn't and I'll ensure it is in the next one...but I'd
>>> be very surprised if it isn't already in there.
>>>
>>>
>>> On Oct 19, 2010, at 3:03 AM, Federico Golfrè Andreasi wrote:
>>>
>>> Hi Ralph!
>>>
>>> I saw that the new release 1.5 is out.
>>> I didn't find this fix in the "list of changes"; is it present but not
>>> mentioned since it is a minor fix?
>>>
>>> Thank you,
>>> Federico
>>>
>>>
>>>
>>> 2010/4/1 Ralph Castain < r...@open-mpi.org>
>>>
 Hi there!

 It will be in the 1.5.0 release, but not 1.4.2 (couldn't backport the
 fix). I understand that will come out sometime soon, but no firm date has
 been set.


 On Apr 1, 2010, at 4:05 AM, Federico Golfrè Andreasi wrote:

 Hi Ralph,


  I've downloaded and tested the openmpi-1.7a1r22817 snapshot,
 and it works fine for (multiple) spawning more than 128 processes.

 That fix will be included in the next release of Open MPI, right?
 Do you know when it will be released? Or where can I find that info?

 Thank you,
  Federico



 2010/3/1 Ralph Castain < r...@open-mpi.org>

> 
> http://www.open-mpi.org/nightly/trunk/
>
> I'm not sure this patch will solve your problem, but it is worth a try.
>
>
>
>

Re: [OMPI users] Two Instances of Same Process Rather Than Two SeparateProcesses

2011-03-10 Thread Jeff Squyres (jsquyres)
LAM/MPI is dead; all the developers (including me) moved to Open MPI years ago 
(literally). 

Most of LAM's good ideas have been absorbed into Open MPI. 

Switching from LAM to Open MPI is theoretically pretty easy - both use the 
same-named wrapper compilers (mpicc, mpif77, etc). You should be able to simply 
change your path to point to the Open MPI wrappers instead of the LAM wrappers 
and then build as you normally would. 

Sent from my phone. No type good. 

On Mar 9, 2011, at 11:52 PM, "Clark Britan"  wrote:

> Jeff,
> 
> Thanks for the reply and you are correct about the error. Here is a
> summary of what happened, with an additional question at the end. 
> 
> I originally installed lam-mpi to run FDS, as suggested in the FDS
> manual. Everything works smoothly with lam-mpi, but on the lam-mpi
> website it suggests trying the newer open mpi, so I downloaded it. Then
> when I tried to run FDS in parallel using open mpi, I got the error that
> I mentioned in the previous email. I then deleted lam-mpi and tried
> running again using open mpi and I got an error saying that FDS was
> looking for the lam-mpi help file and that it couldn't find it. So, that
> leads me to believe that the pre-compiled version of FDS was compiled
> against lam-mpi (not sure if "compiled against lam-mpi" is the right
> wording) and therefore will not work with open mpi. I spent some time
> trying to compile the FDS source code with the open mpi compilers, but I
> realised this is quite difficult. 
> 
> Is open mpi significantly better than lam-mpi? I.e. should I continue my
> efforts in trying to run FDS with open mpi? And if so, would compiling
> the FDS source code using the open mpi compilers solve the problem?
> 
> Thanks for the help.
> 
> Regards,
> 
> Clark
> 
> On Tue, 2011-03-08 at 10:53 -0500, Gus Correa wrote:
>> Jeff Squyres (jsquyres) wrote:
>>> This usually indicates a mismatch of MPI installations - eg, you compiled 
>>> against one MPI installation but then accidentally used the mpirun from a 
>>> different MPI installation. 
>>> 
>>> Sent from my phone. No type good. 
>>> 
>>> On Mar 8, 2011, at 4:36 AM, "Clark Britan"  wrote:
>>> 
 I just installed OpenMPI on my Linux Ubuntu 10.04 LTS 64 bit computer. I
 downloaded the most recent version of OpenMPI and ran the configure and 
 make
 commands. 
 
 I then tried to run a CFD software called FDS using 2 of the 12 available
 processors (single node) as a test. I split my computational domain into 
 two
 meshes, as explained in the FDS manual and would like to run each mesh on a
 separate core. 
 
 When I run the command mpirun -np 2 fds5_mpi_linux_64 room_fire.fds I get
 the following error:
 
 Process 0 of 0 is running on comp1
 Process 0 of 0 is running on comp1
 Mesh 1 is assigned to Process 0
 Error: MPI_PROCESS greater than total number of processes
 
 Why are two instances of the same process run instead of two separate
 processes? What I expect to see after running the above command is:
 
 Process 0 of 1 is running on comp1
 Process 1 of 1 is running on comp1
 Mesh 1 is assigned to Process 0
 Mesh 2 is assigned to Process 1
 ...
 
 Any idea what is going on? Thanks for the help.
 
 Kind Regards,
 
 Clark
>> 
>> Hi Clark
>> 
>> Any chances that MPI_PROCESS was not properly set in your FDS parameter 
>> file?
>> I am not familiar with the FDS software, but it looks like MPI_PROCESS is
>> part of the FDS setup, and the error message seems to complain
>> of a mismatch w.r.t. the number of processes (-np 2).
>> Maybe it takes a default value.
>> 
>> Also, if you just want to check your OpenMPI functionality, download
>> the OpenMPI source code, compile (with mpicc) and run (with mpirun)
>> the hello_c.c, connectivity_c.c, and ring_c.c programs in the 'examples'
>> directory.  This will at least tell you if the problem is in OpenMPI or
>> in FDS.
>> 
>> My two cents,
>> Gus Correa
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 



[OMPI users] Understanding the buffering of small messages with tcp network

2011-03-10 Thread George Markomanolis

Dear all,

I would like to ask about a topic on which there are already many
questions, but with which I am not very familiar. I want to understand the
behaviour of an application that sends many messages of less than 64 KB
(eager mode) over a TCP network; I am trying to understand it in order
to simulate the application.
For example, there can be one MPI_Send of 1200 bytes after
some computation, then two messages of the same size, more computation,
etc. However, according to the measurements and the profiling, the cost of
the communication is less than the latency of the network. I can
understand that the cost of the MPI_Send is the copy into the buffer,
but actually delivering the message to the destination should cost at least
the latency. So are the messages buffered at the sender and sent
as one packet to the receiver? My TCP window is 4 MB and I use the same
value for snd_buff and rcv_buff. If they are buffered at the sender, what
is the criterion/algorithm? I mean, if I have one message, then
computation, and then another message, is it possible for these two messages
to be buffered on the sender side, or does this happen only at the
receiver? If there is any document/paper that I can read about this, I
would appreciate a link.
A simple example: if I have a loop in which rank 0 sends two messages
to rank 1, the duration of the first message is larger than the
second's; and if I increase the loop to 10 or 20 messages, then all
the later messages cost much less than the first one, and also less than
what SkaMPI measures. So I am sure that it must be a buffering issue (or
something else that I can't think of).


Best regards,
Georges


Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Tim Prince

On 3/9/2011 11:05 PM, Jack Bryan wrote:

thanks

I am using the GNU mpic++ compiler.

Does it automatically support accessing a file by many parallel
processes?



It should follow the gcc manual, e.g.
http://www.gnu.org/s/libc/manual/html_node/Opening-Streams.html
I think you want *opentype to evaluate to 'r' (readonly).
--
Tim Prince


Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Jack Bryan

thanks
I am using the GNU mpic++ compiler.
Does it automatically support accessing a file by many parallel processes?

thanks
> Date: Wed, 9 Mar 2011 22:54:18 -0800
> From: n...@aol.com
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Open MPI access the same file in parallel ?
> 
> On 3/9/2011 8:57 PM, David Zhang wrote:
> > Under my programming environment, FORTRAN, it is possible to parallel
> > read (using native read function instead of MPI's parallel read
> > function).  Although you'll run into problem when you try to parallel
> > write to the same file.
> >
> 
> If your Fortran compiler/library is reasonably up to date, you will 
> need to specify action='read', as opening it once with the default 
> readwrite action will lock out other processes.
> -- 
> Tim Prince

Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Belaid MOA

Hi Jack,
  cplex.importModel(model, problemFile) basically reads the problem from 
"problemFile" and adds its content to "model", so I do not see
any problem with calling it in your code for each process. The best way is just to 
try it out and let us know how it goes.

With best regards,
-Belaid.

  



Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Jack Bryan

Hi, thanks for your code. 
I have tested it with a simple example file. It works well without any conflict 
when accessing the same file in parallel.
Now, I am using CPLEX (an optimization model solver) to load a model data file, 
which can be 200 MB. 
CPLEX.importModel(modelName, dataFileName) ;
I do not know how the CPLEX code handles reading the model data file.
Any suggestions or ideas are welcome.

thanks
Jack 


Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Belaid MOA




Hi,
  You can do that with C++ also. Just for the fun of it, I produced a little 
program for that; each process reads the whole
file and prints the content to stdout. I hope this helps:

#include <mpi.h>
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main (int argc, char* argv[]) {
  int rank, size;
  string line;
  MPI_Init (&argc, &argv);
  MPI_Comm_size (MPI_COMM_WORLD, &size);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  ifstream txtFile("example.txt");
  if (txtFile.is_open()) {
    while ( getline (txtFile, line) ) {
      cout << line << endl;
    }
    txtFile.close();
  } else {
    cout << "Unable to open file";
  }
  MPI_Finalize(); /* end MPI */
  return 0;
}

With best regards,
-Belaid.




Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread David Zhang
It's not about MPI but rather your system.  Can your system read the same file
multiple times?  Can you open the same file multiple times?  The simplest
way to answer your question is to write a simple MPI program to test this.




-- 
David Zhang
University of California, San Diego


Re: [OMPI users] Open MPI access the same file in parallel ?

2011-03-10 Thread Jack Bryan

Thanks. 
I only need to read the file, and all processes read it only once. 
But the file is about 200 MB, and my code is C++. 
Does Open MPI support this?
Thanks.

From: solarbik...@gmail.com
Date: Wed, 9 Mar 2011 20:57:03 -0800
To: us...@open-mpi.org
Subject: Re: [OMPI users] Open MPI access the same file in parallel ?

Under my programming environment, FORTRAN, it is possible to parallel read 
(using the native read function instead of MPI's parallel read function), 
although you'll run into problems when you try to parallel write to the same 
file.



On Wed, Mar 9, 2011 at 8:45 PM, Jack Bryan  wrote:







Hi, 
I have a file located in a system folder that can be accessed by 
all parallel processes. 
Does Open MPI allow multiple processes to access the same file at the same time? 

For example, all processes open the file and load data from it at the same 
time. 
Any help is really appreciated. 
Thanks,
Jack


Mar 9 2011
  



-- 
David Zhang
University of California, San Diego



