Durga,
The CUDA libraries use the C++ standard libraries. That's where the std::ios_base
errors come from. You need the C++ linker to bring those in.
Damien
On March 20, 2016 9:15:47 AM "dpchoudh ." wrote:
Hello all
I downloaded some code samples from here:
https://github.com/parallel-forall/code-samples/
Heheheheh.
Chuck Norris has zero latency and infinite bandwidth.
Chuck Norris is a hardware implementation only. Software is for sissies.
Chuck Norris's version of MPI_IRecv just gives you the answer.
Chuck Norris has a 128-bit memory space.
Chuck Norris's Law says Chuck Norris gets twice as amaz
You didn't mention complete Fortran support on Windows, thanks to
Shiqing. :-)
Damien
On 10/10/2010 5:50 PM, Jeff Squyres wrote:
The Open MPI Team, representing a consortium of research, academic, and
industry partners, is pleased to announce the release of Open MPI version 1.5.
This rele
Jeff, Shiqing, anyone...
I notice there's no Fortran support in the Windows binary versions of
1.5.1 on the website. Is that a deliberate decision?
Damien
Shiqing Fan wrote:
Hi Damien,
Unfortunately, we don't have a valid license for the Intel Fortran
compiler at the moment on the machine that we built this installer on.
Regards,
Shiqing
On 12/29/2010 6:47 AM, Damien Hocking wrote:
Jeff, Shiqing, anyone...
I notice there's no Fortran support in the
Tom,
Changing the path to icc is done in that configure file:
#!/bin/bash
CC=icc CXX=icpc F77=ifort FC=ifort ./configure
--prefix=/usr/local/OpenMPI-intel --enable-static --enable-shared
becomes
#!/bin/bash
CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc CXX=icpc F77=ifort
FC=ifort ./c
Manoj,
Those binaries were built for use with Visual Studio 2008, not MinGW. I
don't know if OpenMPI has been built with MinGW before, maybe someone on
the list knows.
Damien
On 27/02/2011 4:42 AM, Manoj Vaghela wrote:
Hi All,
I have downloaded the latest version OpenMPI binaries for Wind
Is there a timeline for the Windows version yet?
Damien
This isn't an Open MPI problem, it's a problem with the symbols from the
PORD reordering library in MUMPS. The linker can't see the PORD
symbols, which means you either didn't build the library, or you haven't
linked it in. I don't see it in your link command there.
Damien
Tim Reis wrote:
That's what's supposed to happen; it's how MPI works. Process 0 is the
head or boss process, and the others are slaves that execute partially
different code even though they're in the same executable. MPI is
multi-process, not multi-threaded.
Damien
Henry Adolfo Lambis Miranda wrote:
Hi ever
It might also be interrupt flooding; you should check your CPU loads
while your tests are running. GigE has an optional 9000-byte packet
size (jumbo frames) to cut down on the number of interrupts the CPU receives.
Typically it gets an interrupt for each packet that comes in, and if
you're at a standard 1500
Gib,
If you have OMPI_IMPORTS set that usually removes those symbol errors.
Are you absolutely sure you have everything set to 32-bit in Visual Studio?
Damien
On 01/10/2012 7:55 PM, Gib Bogle wrote:
I am building the Sundials examples, with MS Visual Studio 2005
version 8 (i.e. 32-bit) on W
can't see that there is anything in the mpicc link
(with --showme:link) that is not in VS. The command line in VS has a
lot more stuff in it, to be sure.
Gib
On 2/10/2012 3:55 p.m., Damien Hocking wrote:
Gib,
If you have OMPI_IMPORTS set that usually removes those symbol
errors. Are
/D "CMAKE_INTDIR=\"Release\"" /D "_MBCS" /FD /MD
/Fo"cvAdvDiff_non_p.dir\Release\\"
/Fd"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb"
/W3 /c /TC /errorReport:prompt
Gib
On 2/10/2012 5:06 p.m., Damien Hocking wro
I've never seen that, but someone else might have.
Damien
On 30/10/2012 1:43 AM, Mathieu Gontier wrote:
Hi Damien,
The only message I have is:
[vs2010:09300] [[56007,0],0]-[[56007,1],0] mca_oob_tcp_msg_recv: readv
failed: Unknown error (108)
[vs2010:09300] 2 more processes have sent help mess
I can probably fix the 1.6.3 build. I think it's just bumping CMake
support and tweaks so that VS2012 works. But yeah, it looks a bit grim
going forward.
Damien
On 07/12/2012 8:28 AM, Jeff Squyres wrote:
Sorry for my late reply; I've been in the MPI Forum and Open MPI engineering
meetings
I know 1.6.3 is broken for Win builds with VS2012 and Intel. I'm not a
MinGW expert by any means, I've hardly ever used it. I'll try and look
at this on the weekend. If you can post on Friday to jog my memory, that
would help. :-)
Damien
On 12/12/2012 3:31 AM, Ilias Miroslav wrote:
Ad: ht
Well this is interesting. The linker can't find that because
MPI::Datatype::Free isn't implemented on the Windows build (in
datatype_inln.h). It's declared in datatype.h though. It's not there
in the Linux version either, so I don't know where the Linux build is
getting that symbol from, tha
ild
everything's in there, a dumpbin shows all the MPI::Datatype symbols.
Those symbols are missing all the way back into 1.5 shared-lib builds as
well.
Damien
On 21/02/2013 12:19 PM, Jeff Squyres (jsquyres) wrote:
On Feb 21, 2013, at 10:59 AM, Damien Hocking wrote:
Well this is inter
Roberto,
Ipopt doesn't use MPI. It can use the MUMPS parallel linear solver in
sequential mode, but nothing is set up in IPOPT to use the parallel MPI
version. For sequential mode, MUMPS dummies out the MPI headers. The
dummy headers are part of the MUMPS distribution in the libseq
directo
Hi all,
I notice in the last couple of weeks there was a patch with
ALL_DEPENDENCIES to fix CMake 2.8 builds on Windows. With CMake
2.8 I'm getting exactly the same build errors in r22504 as in the 1.4.1
release. Has that patch made it into the snapshots yet, or is there a
regression?
oon. And please note that the patch will be in
1.4.2, but not in 1.4.1 release, which means you can update your CMake
to 2.8 for the upcoming Open MPI 1.4.2 release.
Thanks,
Shiqing
Damien Hocking wrote:
Hi all,
I notice in the last couple of weeks there was a patch with
ALL_DEPENDENCIES t
Can anyone tell me how to enable Fortran bindings on a Windows build?
Damien
Hi all,
There might be some minor bugs in the 64-bit CMake Visual Studio Install
project on Windows (say that 3 times fast...). When I build a 64-bit
release version, the install is still set up for installing pdbs, even
though it's a release build. This is for VS2008 on Windows 7, CMake
2.
make sure that the CMAKE_BUILD_TYPE variable
in the CMake-GUI is set to "release"? Setting "release" in Visual
Studio will not change the CMake install scripts.
Thanks,
Shiqing
Damien Hocking wrote:
Hi all,
There might be some minor bugs in the 64-bit CMake Visual Stud
I just ran everything again. I'm absolutely positive that when I used
CMake's GUI last night that setting Release still gave me pdbs. But
when I put SET (CMAKE_BUILD_TYPE Release) into the top-level
CMakeLists.txt, I have a Release build and the install is fine. It's
highly likely that I did
I started again from the beginning to sort out exactly what was going
on. Here's what I found.
If I use the CMake GUI, and set CMAKE_BUILD_TYPE to Release,
re-configure and then generate, and then do the following build command:
"devenv OpenMPI.sln /build"
I get the following:
1>-- Bui
Hi all,
Does OpenMPI support dynamic process management without launching
through mpirun or mpiexec? I need to use some MPI code in a
shared-memory environment where I don't know the resources in advance.
Damien
specified app. Or you can do "add-hostfile" - either or both are supported.
On Feb 24, 2010, at 5:39 PM, Damien Hocking wrote:
Hi all,
Does OpenMPI support dynamic process management without launching through
mpirun or mpiexec? I need to use some MPI code in a shared-memory
Hi all,
I'm playing around with MPI_Comm_spawn, trying to do something simple
with a master-slave example. I get a LOCAL DAEMON SPAWN IS CURRENTLY
UNSUPPORTED error when it tries to spawn the slave. This is on Windows,
OpenMPI version 1.4.1, r22421.
Here's the master code:
int main(int ar
a/ras/base/static-components.h. There is an option to do so in
the trunk version, but not for 1.4.1. Sorry for the inconvenience.
For the "singleton" run with master.exe, it's still not working under
Windows.
Best Regards,
Shiqing
Damien Hocking wrote:
Hi all,
I'
A few people have looked at EC2 for this lately. This one's a good read.
http://insidehpc.com/2009/08/03/comparing-hpc-cluster-amazons-ec2-nas-benchmarks-linpack/
There was another paper published too, if I can find it again I'll post
the link.
Damien
On 19/03/2010 9:17 PM, Joshua Bernstein
Thanks Shiqing. I'll try that. I'm not sure which bindings MUMPS uses,
I'll post back if I need F90.
My apologies for not asking a clearer question, when I said Fortran 90
support on Windows, I meant Open MPI, not compilers.
Damien
On 07/05/2010 3:09 AM, Shiqing Fan wrote:
Hi Damien,
Cu
Absolutely. I'll get a package of stuff put together.
Damien
On 12/05/2010 2:24 AM, Shiqing Fan wrote:
Hi Damien,
I know there will be more problems, and your feedback is always
helpful. :-)
Could you please provide me a Visual Studio solution file for MUMPS? I
would like to test it a l
You don't need to check anything else in the red window, OpenMPI doesn't
know it's in a virtual machine. If you're running Windows in a virtual
cluster, are you running as 32-bit or 64-bit?
Damien
On 12/07/2010 5:05 PM, Alexandru Blidaru wrote:
Wow thanks a lot guys. I'll try it tomorrow morn
010 5:47 PM, Alexandru Blidaru wrote:
I am running 32 bit Windows. The actual cluster is 64 bit and the OS
is CentOS
On Mon, Jul 12, 2010 at 7:15 PM, Damien Hocking <dam...@khubla.com> wrote:
You don't need to check anything else in the red window, OpenMPI
doesn't
It does. The big difference is that MUMPS is a 3-minute compile, and
PETSc, erm, isn't. It's... longer...
D
On 19/07/2010 12:56 PM, Daniel Janzon wrote:
Thanks a lot! PETSc seems to be really solid and integrates with MUMPS
suggested by Damien.
All the best,
Daniel Janzon
On 7/18/10, Gustavo
Outstanding. I'll have two.
Damien
George Bosilca wrote:
The Open MPI Team, representing a consortium of bailed-out banks, car
manufacturers, and insurance companies, is pleased to announce the
release of the "unbreakable" / bug-free version Open MPI 2009,
(expected to be available by mid-2011
I've seen this behaviour with MUMPS on shared-memory machines as well
using MPI. I use the iterative refinement capability to sharpen the
last few digits of the solution (2 or 3 iterations is usually enough).
If you're not using that, give it a try, it will probably reduce the
noise you're g