Please accept our apologies if you receive multiple copies.
Begin forwarded message:
EuroPVM/MPI 2008 CALL FOR PAPERS
15th European PVM/MPI Users' Group Meeting
Dublin, Ireland, September 7 - 10, 2008
web: http://pvmmpi08.ucd.ie
organized by UCD School of Computer Science and Informatics
BACKGROUND
Hi Allan,
This suggests that your chipset is not able to handle the full PCI-E
speed on more than 3 ports. This usually depends on the way the PCI-E
links are wired through the ports and on the capacity of the chipset
itself. As an example, we were never able to reach full-speed
performance with
On 18 December 2007 at 16:08, Randy Heiland wrote:
| The pkg in question is here: http://www.stats.uwo.ca/faculty/yu/Rmpi/
|
| The question is: has anyone on this list got OpenMPI working for
| this pkg? Any suggestions?
Yes -- I happen to maintain GNU R and a number of R packages (e.g. r-cran-*
Dr. Yu sent me a version of this intended for OpenMPI back in September.
I was just today getting around to trying it, although I noticed that it
doesn't work with R v2.6, so my plans just changed a little.
If Dr. Yu gives permission, I'll send you what he sent to me, or
perhaps he'll post it
The pkg in question is here: http://www.stats.uwo.ca/faculty/yu/Rmpi/
The question is: has anyone on this list got OpenMPI working for
this pkg? Any suggestions?
thanks, Randy
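For anyone hitting the same question later: Rmpi is normally built from
source against a specific MPI installation by passing configure arguments
through R CMD INSTALL. A rough sketch, assuming a package version that
understands the --with-Rmpi-type=OPENMPI option; the paths and version
number are placeholders:

   R CMD INSTALL Rmpi_x.y.z.tar.gz \
     --configure-args="--with-Rmpi-include=/opt/openmpi/include \
                       --with-Rmpi-libpath=/opt/openmpi/lib \
                       --with-Rmpi-type=OPENMPI"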
Begin forwarded message:
Subject: R npRmpi
I've been looking into the npRmpi problem.
I can get a segfault execut
On Tue, 2007-12-18 at 11:59 -0700, Ralph H Castain wrote:
> Hate to be a party-pooper, but the answer is "no" in OpenMPI 1.2. We don't
> allow the use of a hostfile in a Torque environment in that version.
>
> We have changed this for v1.3, but you'll have to wait for that release.
Can one not b
On Dec 18, 2007, at 11:12 AM, Marco Sbrighi wrote:
Presumably these statements are in a config file that is being
read by Open MPI, such as $HOME/.openmpi/mca-params.conf?
I've tried many combinations: only in $HOME/.openmpi/mca-params.conf,
only on the command line, and both; but none seems
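For reference, an MCA parameter can be set either in the per-user file or
on the mpirun command line; a minimal sketch of the two forms (the
parameter btl_tcp_if_include and the interface names are only
illustrations of the syntax):

   # $HOME/.openmpi/mca-params.conf
   btl_tcp_if_include = eth0,eth1

   # equivalent command-line form (hypothetical job)
   mpirun --mca btl_tcp_if_include eth0,eth1 -np 4 ./a.out

Command-line settings take precedence over values read from the file,
which is worth checking when mixing the two.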
Hate to be a party-pooper, but the answer is "no" in OpenMPI 1.2. We don't
allow the use of a hostfile in a Torque environment in that version.
We have changed this for v1.3, but you'll have to wait for that release.
Sorry
Ralph
On 12/18/07 11:12 AM, "pat.o'bry...@exxonmobil.com" wrote:
> Ti
Tim,
Will OpenMPI 1.2.1 allow the use of a "hostfile"?
Thanks,
Pat
J.W. (Pat) O'Bryant,Jr.
Business Line Infrastructure
Technical Systems, HPC
Office: 713-431-7022
Tim Prins
Open MPI v1.2 had some problems with the TM configuration code, which were fixed
in v1.2.1. So any version v1.2.1 or later should work fine (and, as you
indicate, 1.2.4 works fine).
Tim
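A sketch of a Torque-aware build for one of the fixed versions (install
paths are hypothetical):

   ./configure --prefix=/opt/openmpi-1.2.4 --with-tm=/opt/torque
   make all install

With TM support compiled in, mpirun launched inside a Torque job picks up
the allocated nodes from the batch system without needing a hostfile.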
On Tuesday 18 December 2007 12:48:40 pm pat.o'bry...@exxonmobil.com wrote:
> Jeff,
> Here is the result of
Jeff,
Here is the result of "pbs-config". By the way, I have successfully
built OpenMPI 1.2.4 on this same system. The "config.log" for OpenMPI 1.2.4
shows the correct Torque path. That is not surprising since the "configure"
script for OpenMPI 1.2.4 uses "pbs-config" while the configure s
Well that's fun. Is this the library location where Torque put them
by default? What does "pbs-config --libs" return?
Also -- I second Reuti's question: what is the nature of your
requirement such that you need to be able to run outside of the nodes
that have been allocated to a job? Are
On 12/18/07, SaiGiridhar Ramasamy wrote:
>
> Could you elaborate further?
http://www-fp.mcs.anl.gov/CCST/research/reports_pre1998/comp_bio/stalk/pgapack.html
HTH,
Amit
--
Amit Kumar Saha
Writer, Programmer, Researcher
http://amitsaha.in.googlepages.com
http://amitksaha.blogspot.com
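To give a feel for the library, a minimal sketch along the lines of the
'maxbit' example in the PGAPack user guide; treat it as an untested
illustration of the shape of a PGAPack program rather than working code:

   #include <pgapack.h>

   /* Fitness function: count the 1-bits in the candidate bit string. */
   double evaluate(PGAContext *ctx, int p, int pop)
   {
       int i, nbits = 0, len = PGAGetStringLength(ctx);
       for (i = 0; i < len; i++)
           if (PGAGetBinaryAllele(ctx, p, pop, i))
               nbits++;
       return (double) nbits;
   }

   int main(int argc, char **argv)
   {
       /* PGACreate initializes MPI if it is not already running. */
       PGAContext *ctx = PGACreate(&argc, argv, PGA_DATATYPE_BINARY,
                                   100, PGA_MAXIMIZE);
       PGASetUp(ctx);
       PGARun(ctx, evaluate);
       PGADestroy(ctx);
       return 0;
   }

The same binary can then be launched under mpirun, and PGAPack farms the
string evaluations out to the MPI processes.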
Could you elaborate further?
On 12/18/07, SaiGiridhar Ramasamy wrote:
>
>
> Great. I have some hands-on experience with MPI (target tracking) that
> involved GA. We just had an intro to parallel GA too. I would prefer any kind
> of application which can be finished in 2 or 3 months.
How about trying out 'PGAPack' then?
>
Great. I have some hands-on experience with MPI (target tracking) that
involved GA. We just had an intro to parallel GA too. I would prefer any kind
of application which can be finished in 2 or 3 months.
On 12/18/07, SaiGiridhar Ramasamy wrote:
> Hi,
> It is a project for my final year, not just a test.
Okay, so are you experienced with parallel programming? What kind of
HPC applications are you looking for?
Three months back, I had no exposure to parallel programming.
Subsequently, I worked on a project
Hi,
It is a project for my final year, not just a test.
On 12/18/07, SaiGiridhar Ramasamy wrote:
> Hi all,
> I have an operational cluster and am soon about to form another one; can
> anyone suggest an HPC application?
An HPC application to do a test run on your cluster?
--Amit
--
Amit Kumar Saha
Writer, Programmer, Researcher
http://amitsaha.in.goo
Hi all,
I have an operational cluster and am soon about to form another one; can
anyone suggest an HPC application?
On 18.12.2007, at 17:09, pat.o'bry...@exxonmobil.com wrote:
We have Torque as an MPI job scheduler. Additionally, I have some
users that want to modify the contents of "-hostfile" when they execute
Why do they want to modify the hostfile? They should stay with the
granted machines a
On Mon, 2007-12-17 at 20:58 -0500, Brian Dobbins wrote:
> Hi Marco and Jeff,
>
> My own knowledge of OpenMPI's internals is limited, but I thought
> I'd add my less-than-two-cents...
>
> > I've found only one way to have TCP connections bound only to
> > the et
On Mon, 2007-12-17 at 17:19 -0500, Jeff Squyres wrote:
> On Dec 17, 2007, at 8:35 AM, Marco Sbrighi wrote:
>
> > I'm using Open MPI 1.2.2 over OFED 1.2 on an 256 nodes, dual Opteron,
> > dual core, Linux cluster. Of course, with Infiniband 4x interconnect.
> >
> > Each cluster node is equipped wit
We have Torque as an MPI job scheduler. Additionally, I have some
users that want to modify the contents of "-hostfile" when they execute
"mpirun". To allow the modification of the hostfile, I downloaded OpenMPI
1.2 and attempted to do a "configure" with the options shown below:
./configur
On 12/18/07 7:35 AM, "Elena Zhebel" wrote:
> Thanks a lot! Now it works!
> The solution is to use mpirun -n 1 -hostfile my.hosts *.exe and pass an
> MPI_Info key to the Spawn function!
>
> One more question: is it necessary to start my "master" program with
> mpirun -n 1 -hostfile my_hostfile -h
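A minimal sketch of the MPI_Info/MPI_Comm_spawn combination being
described; the worker binary and host name are placeholders, and "host"
is the info key reserved by the MPI standard for placement:

   #include <mpi.h>

   int main(int argc, char **argv)
   {
       MPI_Comm intercomm;
       MPI_Info info;

       MPI_Init(&argc, &argv);

       /* Ask the runtime to place the spawned workers on a given host. */
       MPI_Info_create(&info);
       MPI_Info_set(info, "host", "node01");

       MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info,
                      0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

       MPI_Info_free(&info);
       MPI_Finalize();
       return 0;
   }

The master itself would be started as described above, e.g.
mpirun -n 1 -hostfile my.hosts ./master, so that the host named in the
info key is part of the job's allocation.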
Just to add: my whole cluster is Intel EM64T / x86_64, and with
Open MPI v1.2.4 I was getting, for two PCI Express Intel gigabit cards and a
PCI Express Syskonnect gigabit Ethernet card (888, 892 and 892 Mbps measured
using NPtcp), a sum total bandwidth of 1950 Mbps on two different, identically
configured systems connected
Hi,
I found the problem. It's a bug in Open MPI v1.2.4, I think, as the
tests below confirm (AND a big THANKS to George!). I compiled Open MPI
v1.3a1r16973 and tried the same tests with the same mca-params.conf file,
and for three PCI Express gigabit Ethernet cards I got a total bandwidth
of 2583 Mbp
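For anyone repeating the measurement: NetPIPE's TCP test runs pairwise,
roughly as follows (host name is a placeholder), presumably with one run
per interface to obtain the per-NIC figures quoted in the thread:

   # on the receiving node
   ./NPtcp
   # on the transmitting node
   ./NPtcp -h node01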