Ralph
Defining these parameters in my environment also did not resolve the
problem. Whenever I restart my program, the temporary files are getting
stored in the default /tmp directory instead of the directory I had
defined.
Thanks
Ananda
=
Subject: Re: [OMPI users] opal_cr_tmp_dir
Define them in your environment prior to executing any of those commands.
On May 12, 2010, at 4:43 PM, wrote:
> Ralph
>
> When you say manually, do you mean setting these parameters in the command
> line while calling mpirun, ompi-restart, and ompi-checkpoint? Or is there
> another way to set these parameters?
Ralph
When you say manually, do you mean setting these parameters in the
command line while calling mpirun, ompi-restart, and ompi-checkpoint? Or
is there another way to set these parameters?
Thanks
Ananda
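As an aside for anyone following the thread: MCA parameters such as these can generally be set in three ways, and the environment-variable spelling is just the parameter name prefixed with OMPI_MCA_. A minimal bash sketch (the paths are the examples used in this thread; `my_app` is a placeholder):

```shell
# 1) On the command line (shown as a comment; requires an MPI install):
#      mpirun -np 4 -mca opal_cr_tmp_dir /home/ananda/OPAL ./my_app
# 2) In ~/.openmpi/mca-params.conf:
#      opal_cr_tmp_dir = /home/ananda/OPAL
# 3) As environment variables: prefix the parameter name with OMPI_MCA_
export OMPI_MCA_opal_cr_tmp_dir=/home/ananda/OPAL
export OMPI_MCA_orte_tmpdir_base=/home/ananda/ORTE
```

The environment-variable form is what "define them in your environment" refers to: any mpirun, ompi-checkpoint, or ompi-restart launched from that shell inherits the settings.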
==
Subject: Re: [OMPI users] opal_cr_tmp_dir
From: Ralph Castain (rhc_at
You shouldn't have to, but there may be a bug in the system. Try manually
setting both envars and see if it fixes the problem.
On May 12, 2010, at 3:59 PM, wrote:
> Ralph
>
> I have these parameters set in ~/.openmpi/mca-params.conf file
>
> $ cat ~/.openmpi/mca-params.conf
>
> orte_tmpdir_base = /home/ananda/ORTE
Ralph
I have these parameters set in ~/.openmpi/mca-params.conf file
$ cat ~/.openmpi/mca-params.conf
orte_tmpdir_base = /home/ananda/ORTE
opal_cr_tmp_dir = /home/ananda/OPAL
$
Should I be setting OMPI_MCA_opal_cr_tmp_dir?
FYI, I am using openmpi 1.3.4 with blcr 0.8.2
Thanks
Ananda
=
ompi-restart just does a fork/exec of the mpirun, so it should get the param if
it is in your environ. How are you setting it? Have you tried adding
OMPI_MCA_opal_cr_tmp_dir= to your environment?
On May 12, 2010, at 12:45 PM, wrote:
> Thanks Ralph.
>
> Another question. Even though I am setting opal_cr_tmp_dir to a
> directory other than /tmp while calling ompi-restart command, this
> setting is not getting passed to the mpirun command that gets generated
> by ompi-restart.
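Since ompi-restart just fork/execs mpirun, a variable exported in the calling shell is inherited by the restarted job. A sketch (the path is the example from this thread; the snapshot handle is a placeholder):

```shell
# Export before restarting; the fork/exec'd mpirun inherits it.
export OMPI_MCA_opal_cr_tmp_dir=/home/ananda/OPAL
#   ompi-restart <snapshot-handle>   # e.g. the handle printed by ompi-checkpoint
# Quick sanity check that child processes really inherit the variable:
sh -c 'echo "$OMPI_MCA_opal_cr_tmp_dir"'
```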
On May 12, 2010, at 3:01 PM, Fernando Lemos wrote:
> Please correct me if I'm wrong, but I believe OpenMPI forwards the
> stdin of mpirun to rank 0, and the other way around with
> stdout/stderr: output from the ranks is sent back so that mpirun can
> display it. Otherwise it wouldn't be possi
On Wed, May 12, 2010 at 2:51 PM, Jeff Squyres wrote:
> On May 12, 2010, at 1:48 PM, Hanjun Kim wrote:
>
>> I am working on parallelizing my sequential program using OpenMPI.
>> Although I got performance speedup using many threads, there was
>> slowdown on a small number of threads like 4 threads.
Thanks Ralph.
Another question. Even though I am setting opal_cr_tmp_dir to a
directory other than /tmp while calling ompi-restart command, this
setting is not getting passed to the mpirun command that gets generated
by ompi-restart. How do I overcome this constraint?
Thanks
Ananda
==
It's a different MCA param: orte_tmpdir_base
On May 12, 2010, at 12:33 PM, wrote:
> I am setting the MCA parameter “opal_cr_tmp_dir” to a directory other than
> /tmp while calling “mpirun”, “ompi-restart”, and “ompi-checkpoint” commands
> so that I don’t fill up /tmp filesystem. But I see that
> openmpi-sessions* directory is still getting created under /tmp.
I am setting the MCA parameter "opal_cr_tmp_dir" to a directory other
than /tmp while calling "mpirun", "ompi-restart", and "ompi-checkpoint"
commands so that I don't fill up /tmp filesystem. But I see that
openmpi-sessions* directory is still getting created under /tmp. How do
I overcome this problem?
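Per the reply above, the session directory and the checkpoint directory are governed by two different parameters, so both need to be set. A sketch with the example paths from this thread (the mpirun line is illustrative only):

```shell
# openmpi-sessions* directories follow orte_tmpdir_base;
# checkpoint/restart files follow opal_cr_tmp_dir.
export OMPI_MCA_orte_tmpdir_base=/home/ananda/ORTE
export OMPI_MCA_opal_cr_tmp_dir=/home/ananda/OPAL
# Equivalent command-line form:
#   mpirun -np 4 -mca orte_tmpdir_base /home/ananda/ORTE \
#                -mca opal_cr_tmp_dir  /home/ananda/OPAL ./my_app
```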
On May 12, 2010, at 1:48 PM, Hanjun Kim wrote:
> I am working on parallelizing my sequential program using OpenMPI.
> Although I got performance speedup using many threads, there was
> slowdown on a small number of threads like 4 threads.
> I found that it is because getc worked much slower than sequential
> version.
Hi,
I am working on parallelizing my sequential program using OpenMPI.
Although I got performance speedup using many threads, there was
slowdown on a small number of threads like 4 threads.
I found that it is because getc worked much slower than sequential
version. Does OpenMPI override or wrap getc?
Absolutely. I'll get a package of stuff put together.
Damien
On 12/05/2010 2:24 AM, Shiqing Fan wrote:
Hi Damien,
I know there will be more problems, and your feedback is always
helpful. :-)
Could you please provide me a Visual Studio solution file for MUMPS? I
would like to test it a little.
Hello,
My question is about virtual memory allocated by an Open MPI program. I am
not familiar with memory management and I would be grateful if you could
explain what I am observing when I launch my Open MPI program on several
machines.
My program is started on a server machine which communicates
Hi Damien,
I know there will be more problems, and your feedback is always
helpful. :-)
Could you please provide me a Visual Studio solution file for MUMPS? I
would like to test it a little.
Thanks,
Shiqing
On 2010-5-12 6:11 AM, Damien wrote:
Hi all,
Me again (poor Shiqing, I know...)
Just to be sure:
Is there a copy of the shared library on the other host (hpcnode1) ?
jody
On Mon, May 10, 2010 at 5:20 PM, Prentice Bisbal wrote:
> Are you running the jobs through a queuing system like PBS, Torque, or SGE?
>
> Prentice
>
> Miguel Ángel Vázquez wrote:
>> Hello Prentice,
Hi all,
Me again (poor Shiqing, I know...). I've been trying to get the MUMPS
solver running on Windows with Open-MPI. I can only use the 1.5 branch
because that has Fortran support on Windows and 1.4.2 doesn't. There's
a couple of things going wrong:
First, calls to MPI_Initialized from