Hi:
I'm new to this group. I'm trying to run a parallel quantum code called
"Seqquest".
I'm trying to figure out why the code fails with the following error:
This job has allocated 2 cpus
Signal:11 info.si_errno:0(Success) si_code:1(SEGV_
Tony,
I don't know what iac is. I use ias for my ASM code:
ia64b <82> cd /opt/intel
ia64b <83> find . -name 'iac'
ia64b <84> find . -name 'ias'
./fc/10.1.012/bin/ias
./cc/10.1.012/bin/ias
Anyway, if you want another data point to see whether my compilers work, I
will gladly try to compile if you sen
Strange. We have successfully built 1.3 using Intel 11.0 and earlier
versions on RHEL5 and Fedora 9 (only 11.0, of course).
Can you send your configure? Perhaps there is something different there.
On Jan 26, 2009, at 1:44 PM, Scot Breitenfeld wrote:
Hi, I'm trying to compile from source open
>> In the original process 'A' code, prior to sending out a command,
>> 'A' will issue an MPI_Wait to make sure that the command request
>> instance is free.
>>
> I'm not quite sure I understand that statement. Can't you just
> compare the request to MPI_REQUEST_NULL? From your description, it
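(For illustration only - not code from this thread - a minimal C sketch of the pattern being
discussed: after MPI_Wait or a successful MPI_Test completes a nonblocking send, the request
handle is reset to MPI_REQUEST_NULL, so it can be compared against MPI_REQUEST_NULL before
being reused. The rank numbers, tag, and "command" payload below are made up.)

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request req = MPI_REQUEST_NULL;
    int cmd = 42;                           /* hypothetical "command" payload */

    if (rank == 0) {
        /* Make sure the previous command has completed before reusing req.
           A completed MPI_Wait/MPI_Test resets req to MPI_REQUEST_NULL. */
        if (req != MPI_REQUEST_NULL)
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Issend(&cmd, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete before finalizing */
    } else if (rank == 1) {
        MPI_Recv(&cmd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}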
Jeff,
I could successfully compile OpenMPI versions 1.2.X on Itanium Linux with
the same compilers. I was never able to compile the 1.3 beta versions on IA64
Linux.
Joe,
I am using whatever assembler that ./configure provides. I believe it is
icc. Should I set AS (I think) to iac?
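(A hedged aside, not from the thread: Open MPI's build is automake-based, so if you do want to
force a particular assembler for the .s files, the usual knob is the CCAS variable at configure
time, e.g.:)

./configure CC=icc CXX=icpc F77=ifort FC=ifort CCAS=ias ...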
Hi, I'm trying to compile from source open-mpi-1.3r20295 on a SUSE Linux
64-bit system (I also tried a 32-bit Linux system, same problem). I'm
using Intel compilers version 11.0 (and 10.1) for Fortran, C/C++
(ifort, icc, icpc). The configure script completes with no errors, but
when I do make it f
That's cool then - I have written a shell script
which automatically does the xhost stuff for all
nodes in my hostfile :)
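(Purely illustrative - the actual script is not shown here - such a script might be little more
than a loop over a one-host-per-line hostfile, e.g. assuming a file called my_hostfile:)

for h in $(cat my_hostfile); do xhost +"$h"; done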
On Mon, Jan 26, 2009 at 9:25 PM, Ralph Castain wrote:
>
> On Jan 26, 2009, at 1:20 PM, jody wrote:
>
>> Hi Brian
>>
>>>
>>> I would rather not have mpirun doing an xhost comman
I am chasing a segfault when I use Open MPI (1.3) with Rmpi (0.5.6), the MPI
add-on package for R that is written and maintained by Prof Hao Yu (CC'ed).
I should preface this by saying that the code runs just fine on a 32-bit
Debian system at home.
However, on amd64 running Ubuntu 8.10, I am seeing segfaults upon
On Jan 26, 2009, at 1:20 PM, jody wrote:
Hi Brian
I would rather not have mpirun doing an xhost command - I think that is
beyond our comfort zone. Frankly, if someone wants to do this, it is up to
them to have things properly setup on their machine - as a rule, we don't
mess with your
Typo there: "xceren" stands for "screen" - sorry :)
On Mon, Jan 26, 2009 at 9:20 PM, jody wrote:
> Hi Brian
>
>>
>> I would rather not have mpirun doing an xhost command - I think that is
>> beyond our comfort zone. Frankly, if someone wants to do this, it is up to
>> them to have things properly
Hi Brian
>
> I would rather not have mpirun doing an xhost command - I think that is
> beyond our comfort zone. Frankly, if someone wants to do this, it is up to
> them to have things properly setup on their machine - as a rule, we don't
> mess with your machine's configuration. Makes sys admins u
Jeff,
I could compile OpenMPI versions 1.2.X on Itanium Linux with the same
compilers.
Thanks,
Tony
Hi Jody
I would rather not have mpirun doing an xhost command - I think that
is beyond our comfort zone. Frankly, if someone wants to do this, it
is up to them to have things properly setup on their machine - as a
rule, we don't mess with your machine's configuration. Makes sys
admins ups
This scenario was not mentioned, but I'll bet it falls into the same
general category. If an HCA has different run-time characteristics,
regardless of whether they are caused by the OEM or the reseller,
that's probably "heterogeneous enough" for this discussion.
On Jan 26, 2009, at 2:41 P
Tony,
I have a couple questions ...
1. It looks like you are creating atomic-asm.o with icc and not
"ias". Is that correct?
libtool: compile: icc -DHAVE_CONFIG_H -I. -I../../opal/include
-I../../orte/include -I../../ompi/include
-I../../opal/mca/paffinity/linux/plpa/src/libplpa -
Jeff,
Did the IWG say anything about there being a chip set issue? For example, what
if a vendor, say Sun, wraps Mellanox chips on its own HCAs - would a
Mellanox HCA and a Sun HCA work together?
-DON
On 01/26/09 14:19, Jeff Squyres wrote:
The Interop Working Group (IWG) of the OpenFabrics Allianc
MPI_THREAD_MULTIPLE support in the 1.2 series is unfortunately pretty
broken/non-existent.
The v1.3 series has MPI point-to-point support for several networks
with MPI_THREAD_MULTIPLE; check the README file.
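(As a generic reminder, not code from this thread: whatever level you ask for, MPI reports the
level it actually granted through the 'provided' argument of MPI_Init_thread, so it is worth
checking that explicitly.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("Asked for MPI_THREAD_MULTIPLE, but only got level %d\n", provided);
    MPI_Finalize();
    return 0;
}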
On Jan 26, 2009, at 9:21 AM, Ali Copey wrote:
Hi,
I'm trying to get multiple th
That's fairly strange; were you able to build Open MPI v1.2.x?
I ask because the IA64 assembly hasn't changed between the two at all.
On Jan 23, 2009, at 8:33 PM, Iannetti, Anthony C. (GRC-RTB0) wrote:
Dear OpenMPI Users:
I cannot compile OpenMPI 1.3 on my Itanium 2 system. Attached is
The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
to bring a question to the Open MPI user and developer communities: is
anyone interested in having a single MPI job span HCAs or RNICs from
multiple vendors? (pardon the cross-posting, but I did want to ask
each group sep
Yes. The --tag-output option will prepend [job,rank] to each line of stdout
(or stderr, whichever is appropriate). I don't insert a
colon, though I suppose that could easily be done for grep purposes.
I just finished implementing the --output-filename option that will
split the output from each rank
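(A hedged usage sketch - the program name, rank count, and exact tag format are illustrative
only:)

mpirun -np 4 --tag-output ./a.out > all.out    # each line prefixed with its [job,rank] tag
grep ',0]' all.out > rank0.out                 # adjust the pattern to the actual tag format
mpirun -np 4 --output-filename out ./a.out     # per-rank files such as out.0, out.1, ...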
Hello Ralph:
Please forgive if this has already been covered...
Have you considered prefixing each line of output from each process
with something like "process_number" and a colon?
That is what IBM's poe does. Separating the output is then easy:
cat file | grep 0: > file.0
cat file | grep
Actually, I found out that the help message I pasted lies a little:
the "number of buffers" parameter for both PP and SRQ types is
mandatory, not optional.
On Jan 23, 2009, at 2:59 PM, Jeff Squyres wrote:
Here's a copy-n-paste of our help file describing the format of each:
Per-peer receiv
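(For context, an illustrative example of the parameter being described - the values are made up
and the format should be checked against the help text: queue descriptions go to the
btl_openib_receive_queues MCA parameter, colon-separated, with 'P' for per-peer and 'S' for
shared receive queues, each followed by a buffer size and the now-mandatory number of buffers:)

mpirun --mca btl_openib_receive_queues P,128,256:S,65536,256 -np 2 ./a.out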
On Jan 23, 2009, at 2:36 PM, Hartzman, Leslie D (MS) wrote:
I’m trying to modify some code that is involved in point-to-point
communications. Process A has a one way mode of communication with
Process B. ‘A’ checks to see if its rank is zero and if so will send
a “command” to ‘B’ (MPI_Issen
FWIW: we build OMPI 1.3 under pathscale with -O3 without problem.
However, we do not build the VT code, so it may only be a problem
there. If you don't need VT, you might just configure to exclude that
from the build.
Ralph
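(If it helps, I believe the configure option for skipping VT is --enable-contrib-no-build -
hedged, so check ./configure --help - e.g.:)

./configure --enable-contrib-no-build=vt ...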
On Jan 26, 2009, at 10:16 AM, Jeff Squyres wrote:
Yowza! Bumme
Yowza! Bummer. Please let us know what Pathscale says.
On Jan 23, 2009, at 8:53 PM, Alain Miniussi wrote:
FYI:
I get the following problem when compiling openmpi-1.3 at -O2 and
beyond:
[alainm@rossini vtfilter]$pwd
/misc/nice1/alainm/openmpi-1.3/ompi/contrib/vt/vt/tools/vtfilter
[alain
On Sun, 2009-01-25 at 05:20 -0700, Ralph Castain wrote:
> 2. redirect output of specified processes to files using the provided
> filename appended with ".rank". You can do this for all ranks, or a
> specified subset of them.
A filename extension including both the comm size and the rank is
h
Great; thanks!
On Jan 26, 2009, at 4:11 AM, Andrea Iob wrote:
Could you confirm that changing the last 3 files to use OMPI_PATH_MAX
instead of PATH_MAX (without adding the #include) also fixes the problem?
Yes, with OMPI_PATH_MAX the problem is also fixed.
Andrea
Hi Jody,
I think it is not a problem of MPI_Sends that don't match corresponding
MPI_Recvs, because all processes reach MPI_Finalize(). If they didn't, at
least one process would be blocked before reaching MPI_Finalize.
Bernard
jody wrote:
Hi Bernard
The structure looks as far as I can s
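(To illustrate that reasoning with a made-up two-rank example, not Bernard's code: a receive
that nothing ever matches blocks forever, so the rank posting it would never reach
MPI_Finalize; conversely, if every rank does reach MPI_Finalize, none of them is stuck in such
a receive.)

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 1) {
        /* No rank ever sends with tag 99, so this blocks forever
           and rank 1 never reaches MPI_Finalize. */
        MPI_Recv(&buf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}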
Hi,
I'm trying to get multiple threads running, and have Open MPI 1.2.8 compiled with
threading enabled:
xxx@xxx:/usr/lib$ ompi_info | grep Thread
Thread support: posix (mpi: yes, progress: no)
however, when I attempt to get MPI_THREAD_MULTIPLE, ...FUNNELED or
...SERIALIZED I am return
On 25.01.2009, at 06:16, Sangamesh B wrote:
Thanks Reuti for the reply.
On Sun, Jan 25, 2009 at 2:22 AM, Reuti wrote:
On 24.01.2009, at 17:12, Jeremy Stout wrote:
The RLIMIT error is very common when using OpenMPI + OFED + Sun Grid
Engine. You can find more information and several remedie
> Could you confirm that changing the last 3 files to
> use OMPI_PATH_MAX
> instead of PATH_MAX (without adding the #include)
> also fixes the
> problem?
>
Yes, with OMPI_PATH_MAX the problem is also fixed.
Andrea
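(Background, phrased as an assumption rather than a statement about Open MPI's internals:
PATH_MAX is only available when the right system header is included, and is not guaranteed to
be defined at all, which is why a project-local constant with a fallback is the safer choice.
A generic version of that pattern, with a made-up name, looks like:)

#include <limits.h>

#ifdef PATH_MAX
#define MY_PATH_MAX PATH_MAX   /* use the system value when the header provides it */
#else
#define MY_PATH_MAX 4096       /* conservative fallback when PATH_MAX is undefined */
#endif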
Hello George,
Thanks for your messages. Yes, I disconnect my different worlds before
calling MPI_Finalize().
Bernard
George Bosilca wrote:
I was somehow confused when I wrote my last email and I mixed up the
MPI versions (thanks to Dick Treumann for gently pointing me to the
truth). Befor
Hi
I have written some shell scripts which ease the output
to an xterm for each processor, for normal execution (run_sh.sh),
gdb (run_gdb.sh), and valgrind (run_vg.sh).
In order for the xterms to be shown on your machine,
you have to set the DISPLAY variable on every host
(if this is not done by ssh
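(The underlying idea, as a hedged one-liner rather than jody's actual scripts: once DISPLAY
reaches every host, something like the following opens one xterm running gdb per process; the
program name is made up.)

mpirun -np 4 -x DISPLAY xterm -e gdb ./my_prog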