Do you have them online in some repo where I could take a look?
Could you use a newer version of mpi4py, maybe even a git
checkout of the master branch?
On 9 October 2015 at 12:05, simona bellavista wrote:
>> I cannot figure out how spawn would work with a stri
[...] calls.
However, I have to insist: if you are using mpi4py as a tool to spawn
a bunch of different processes that work in isolation and then collect
the results at the end, then mpi4py is likely not the right tool for the
task, at least if you do not have previous experience with MPI
programming.
Instead of using mpi4py for such simple, trivial
parallelism, I recommend you take a look at Python's
multiprocessing module.
If for any reason you want to go the MPI way, you should use MPI
dynamic process management, e.g. MPI.COMM_SELF.Spawn(...); see the
sketch below.
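A minimal sketch of the parent side of that approach; the worker script
name ("worker.py"), the number of workers, and the Gather used to collect
results are my own illustration, not something prescribed by this thread:

# parent.py -- hypothetical sequential driver that spawns a parallel child job
import sys
from mpi4py import MPI
import numpy as np

nworkers = 4
comm = MPI.COMM_SELF.Spawn(sys.executable, args=["worker.py"], maxprocs=nworkers)

# Gather one double per worker; root=MPI.ROOT marks this side of the
# intercommunicator as the receiving root.
results = np.empty(nworkers, dtype="d")
comm.Gather(None, results, root=MPI.ROOT)
comm.Disconnect()
print(results)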
> [...] and I confess to not remembering what mpi4py does, offhand.
>
mpi4py just calls
dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL | RTLD_NOLOAD);
before calling MPI_Init(), see the code below:
https://bitbucket.org/mpi4py/mpi4py/src/master/src/lib-mpi/compat/openmpi.h?fileviewer=fil
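For what it's worth, a rough Python/ctypes analogue of that preload (a
sketch only; mpi4py performs the real dlopen() in C, and RTLD_NOLOAD is
not portably exposed by ctypes, so it is omitted here):

import ctypes
# Load libmpi with global symbol visibility so that components dlopen()ed
# later by Open MPI can resolve the MPI symbols they need.
ctypes.CDLL("libmpi.so", mode=ctypes.RTLD_GLOBAL)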
On 24 May 2012 12:40, George Bosilca wrote:
> On May 24, 2012, at 11:22, Jeff Squyres wrote:
>> On May 24, 2012, at 11:10 AM, Lisandro Dalcin wrote:
>>>> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER
>>>> all had the issue.
The length of this array is the local group size
(http://www.mpi-forum.org/docs/mpi22-report/node113.htm#Node113)
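An illustrative mpi4py sketch (mine, not from this thread) of that
requirement: the recvcounts array passed to Reduce_scatter has exactly
one entry per rank of the local group:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

recvcounts = [2] * size                        # length == local group size
sendbuf = np.ones(sum(recvcounts), dtype="i")
recvbuf = np.empty(recvcounts[rank], dtype="i")
comm.Reduce_scatter(sendbuf, recvbuf, recvcounts, op=MPI.SUM)
# Each rank now holds its own 2-element block of the element-wise sum.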
[trantor:13786] 2 more processes have sent help message
help-mpi-errors.txt / mpi_errors_are_fatal
[trantor:13786] Set MCA parameter "orte_base_help_aggregate" to 0 to
see all help / error messages
Then your main sequential code "chats" to the child
parallel app using MPI calls.

> If this scenario is possible, when should I call MPI_Finalize()?
When you know you will not use MPI any more. Perhaps you could
register a finalizer using atexit()...
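In Python, the same idea looks roughly like the sketch below (note that
mpi4py normally finalizes MPI automatically at interpreter exit, so this
is only an illustration of the atexit() pattern):

import atexit
from mpi4py import MPI

def finalize_mpi():
    # Finalize only if MPI was initialized and nobody finalized it already.
    if MPI.Is_initialized() and not MPI.Is_finalized():
        MPI.Finalize()

# Run the finalizer at interpreter exit, i.e. once we know
# no further MPI calls will be made.
atexit.register(finalize_mpi)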
m C++.
>
> I've had to resort to something like
>
> #ifdef __cplusplus
> #undef __cplusplus
> #include
> #define __cplusplus
> #else
> #include
> #endif
>
> in c-code.h, which seems to work but isn't exactly smooth. Is there
> another way around this?
On 17 August 2010 04:16, Manoj Vaghela wrote:
> Hi,
>
> I am compiling a C++ program with MPI-C function calls with mpic++.
>
> Is there any effect of this on the efficiency/speed of the parallel program?
>
No.
> [...] `mpicc` instead of `gcc` when setting $LD is supported, or set it in
> e.g. $LDFLAGS for the latter.
>
> Why is there no `mpild` to do this automatically then?

Why not use LD=mpicc?
MPI, so
I'm really confident about my Python bindings.
I do not know anything about implementing webservices, but you should
take a look at MPI-2 dynamic process management. This way, your
webservice can MPI_Comm_spawn() a brand-new set of parallel processes
doing the heavy work, and it will act as a kind of proxy application
between its clients and the spawned parallel job.
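The spawned side of such a proxy could look roughly like this (my own
sketch; the script name "worker.py" and the trivial Gather back to the
parent are assumptions for illustration):

# worker.py -- hypothetical parallel child processes doing the heavy work
from mpi4py import MPI
import numpy as np

parent = MPI.Comm.Get_parent()   # intercommunicator back to the spawning process
rank = parent.Get_rank()

result = np.array([float(rank) ** 2])   # placeholder for the real computation
parent.Gather(result, None, root=0)     # send the result back to the parent
parent.Disconnect()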
What Python version are you using?
I would use the 'ctypes' module (available in recent Python's stdlib)
to open the MPI shared library and then call MPI_Init() from Python
code... Of course, I'm assuming your Fortran code can manage the case of
MPI already being initialized (by checking with MPI_Initialized()).
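A rough sketch of that ctypes idea (the library name "libmpi.so" and
passing NULL arguments to MPI_Init are assumptions about the target
system, not something from this thread):

import ctypes

# Open the MPI shared library with global symbol visibility and
# initialize MPI from Python before calling into the Fortran code.
libmpi = ctypes.CDLL("libmpi.so", mode=ctypes.RTLD_GLOBAL)
libmpi.MPI_Init(None, None)

# The Fortran side can then detect this with MPI_Initialized().
flag = ctypes.c_int(0)
libmpi.MPI_Initialized(ctypes.byref(flag))
print("MPI initialized:", bool(flag.value))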
On 12/13/07, Jeff Squyres wrote:
> On Dec 12, 2007, at 7:47 PM, Lisandro Dalcin wrote:
> Specifically: it would probably require some significant hackery in
> the OMPI build process to put in a #define that indicates whether OMPI
> is being built statically or not. But the AM/LT pro
On 12/12/07, Jeff Squyres wrote:
> On Dec 12, 2007, at 6:32 PM, Lisandro Dalcin wrote:
> > Do I have the libtool API calls available when linking against
> > libmpi.so ?
>
> You should, yes.
OK, but now I realize that I cannot simply call libtool dlopen()
unconditionally [...]
On 12/12/07, Jeff Squyres wrote:
> Yes, this is problematic; dlopen is fun on all the various OS's...
>
> FWIW: we use the Libtool DL library for this kind of portability; OMPI
> itself doesn't have all the logic for the different OS loaders.
Do I have the libtool API calls available when linking against libmpi.so?
On 12/10/07, Jeff Squyres wrote:
> Brian / Lisandro --
> I don't think that I heard back from you on this issue. Would you
> have major heartburn if I remove all linking of our components against
> libmpi (etc.)?
>
> (for a nicely-formatted refresher of the issues, check out
> https://svn.open-m
On 8/16/07, George Bosilca wrote:
> Well, finally someone discovered it :) I have known about this problem
> for quite a while now; it popped up during our own valgrind tests of the
> collective module in Open MPI. However, it never created any problems
> in the applications, at least not as far as I know.
On 10/23/06, Tony Ladd wrote:
A couple of comments regarding issues raised by this thread.
1) In my opinion NetPIPE is not such a great network benchmarking tool for
HPC applications. It measures timings based on the completion of the send
call on the transmitter, not the completion of the receive.
On 10/23/06, Jayanta Roy wrote:
Some time ago I posted doubts about fully exploiting dual gigabit support.
I get ~140 MB/s full-duplex transfer rate in each of the following runs.

Can you please tell me how you are measuring transfer rates? I mean, can
you show us a snippet of the code you are using?
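For reference, one common way to measure such rates is a simple ping-pong
timing; the sketch below is my own illustration (not the poster's code)
and reports a sustained MB/s figure between ranks 0 and 1:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 8 * 1024 * 1024          # message size in bytes
buf = np.zeros(nbytes, dtype="b")
reps = 50

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    # Each repetition moves the buffer twice (round trip).
    print("%.1f MB/s" % (2 * reps * nbytes / (t1 - t0) / 1e6))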
On 10/11/06, Jeff Squyres wrote:
Open MPI v1.1.1 requires that you set your LD_LIBRARY_PATH to include the
directory where its libraries were installed (typically, $prefix/lib). Or,
you can use mpirun's --prefix functionality to avoid this
BTW, why doesn't mpicc/mpicxx simply pass a -rpath/-
I've just caught a problem with packing/unpacking using 'external32'
in Linux. The problem seems to be word ordering; I believe you forgot
to make the little-endian <-> big-endian conversion somewhere. Below is
an interactive session with ipython (sorry, no time to write it in C)
showing the problem.
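The ipython session itself is not preserved in this excerpt; the mpi4py
sketch below is my reconstruction of the kind of check that exposes the
issue (it assumes a 4-byte C int and a little-endian machine):

from mpi4py import MPI
import numpy as np

value = np.array([1], dtype=np.intc)                    # native byte order
size = MPI.INT.Pack_external_size("external32", 1)
buf = bytearray(size)
MPI.INT.Pack_external("external32", value, buf, 0)
# external32 is big-endian by definition, so on little-endian Linux the
# packed bytes should read 00000001; getting 01000000 instead means the
# byte-order conversion was skipped, which is the bug reported above.
print(bytes(buf).hex())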