Yes, it seems to be fixed.
Thanks.
On Mon, Mar 31, 2008 at 9:17 PM, Ralph H Castain wrote:
> I am unable to replicate the segfault. However, I was able to get the
> job to hang. I fixed that behavior with r18044.
>
> Perhaps you can test this again and let me know what you see. A gdb
> stack t
Unfortunately, we have no way to "alias" an MCA param. :-\
This topic has come up a few times over the past few years, but no
one's actually extended the MCA params infrastructure to support
aliasing. I'm guessing that it wouldn't be too hard to do...
On Apr 1, 2008, at 5:22 AM, Lenny Ve
Hi,
is there an elegant way to register an MPI parameter that will actually be
a pointer or alias to a hidden OPAL parameter?
I still want to leave opal_paffinity_alone flag untouched but instead expose
mpi_paffinity_alone for the user.
thanks
Lenny.
On Mon, Mar 31, 2008 at 2:55 PM, Jeff Squyres wrote:
Ummm...actually, there already is an MCA param that does precisely that:
OMPI_MCA_tmpdir_base
Been there for years...sets the tmpdir for both orteds and procs.
The tmpdir argument for mpirun is there if you want to ONLY set the tmpdir
base for mpirun. It provides a protective mechanism for cases
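For reference, setting the param looks like any other MCA param, either via the environment or on the mpirun command line (the paths and executable name here are made up for illustration):

```shell
# Set the session-directory base for both the orteds and the MPI procs
export OMPI_MCA_tmpdir_base=/scratch/tmp
mpirun -np 2 ./a.out

# Equivalent form passed directly on the command line
mpirun --mca tmpdir_base /scratch/tmp -np 2 ./a.out
```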
On Apr 1, 2008, at 1:47 PM, Ralph H Castain wrote:
Ummm...actually, there already is an MCA param that does precisely
that:
OMPI_MCA_tmpdir_base
Perhaps we can modify this so that it reports in ompi_info?
- galen
Been there for years...sets the tmpdir for both orteds and procs.
The tmpdir argument for mpirun is there if you want to ONLY set the tmpdir
base for mpirun.
Sure - I'll rename it "orte_tmpdir_base" so it shows up.
On 4/1/08 12:05 PM, "Shipman, Galen M." wrote:
>
> On Apr 1, 2008, at 1:47 PM, Ralph H Castain wrote:
>
>> Ummm...actually, there already is an MCA param that does precisely
>> that:
>>
>> OMPI_MCA_tmpdir_base
>
> Perhaps we can modify this so that it reports in ompi_info?
On Apr 1, 2008, at 2:12 PM, Ralph H Castain wrote:
Sure - I'll rename it "orte_tmpdir_base" so it shows up.
Perfect, do we also need to carry on support for "OMPI_MCA_tmpdir_base"?
- Galen
On 4/1/08 12:05 PM, "Shipman, Galen M." wrote:
On Apr 1, 2008, at 1:47 PM, Ralph H Castain wrote
Per this morning's telecon, I have added the latest scaling test results to
the wiki:
https://svn.open-mpi.org/trac/ompi/wiki/ORTEScalabilityTesting
As you will see upon review, the trunk is scaling about an order of
magnitude better than 1.2.x, both in terms of sheer speed and in the
strength of
I'll bet that no one was using it; if ompi_info didn't report it,
there was no way for users to know about it.
On Apr 1, 2008, at 2:19 PM, Shipman, Galen M. wrote:
On Apr 1, 2008, at 2:12 PM, Ralph H Castain wrote:
Sure - I'll rename it "orte_tmpdir_base" so it shows up.
Perfect, do we also need to carry on support for "OMPI_MCA_tmpdir_base"?
Thanks for the reply.
I was not able to achieve the required task with the given pointers.
I ran the application with the following command,
mpirun -np 2 --mca btl_tcp_frag 9 --mca btl_tcp_max_send_size 8192 -host
node-00,node-01 /home/atif/blah/aa_l
I still see the messages of size 65226 bytes. I
The parameters I was talking about only split the message at the MPI
level, pushing the data in 8k fragments into the network. Once the
data is pushed into the kernel (via the socket), we don't have any
control over how or when it is physically sent to the remote node.
The only way I see t