@Gus
the applications in the links you have sent are really high level and, I believe,
really expensive too, as I will have to have a physical apparatus for various
measurements along with the cluster. Am I right?
> Computational Fluid Dynamics, finite element method, etc:
> http://www.foamcfd.org/
> http://www.cimec.org.ar/twiki/bin/view/Cimec/PETScFEM
>
> Computational Chemistry, molecular dynamics, etc:
> http://www.tddft.org/programs/octopus/wiki/index.php/Main_Page
> http://classic.chem.msu.su/gran/gamess/
> http://ambermd.org/
>
>> See this:
>> http://www.rocksclusters.org/wordpress/
>>
>>
>> My two cents.
>>
>> Gus Correa
>> -
>> Gustavo Correa
>> Lamont-Doherty Earth Observatory - Columbia University
>> Palisades, NY, 10964-8000 - USA
>> --
> ...an understanding of MPI itself:
>
> http://ci-tutor.ncsa.uiuc.edu/login.php
>
> Register (it's free), login, and look for the Introduction to MPI tutorial.
> It's quite good.
>
>
>
>
> On Apr 23, 2009, at 6:59 AM, Ankush Kaul wrote:
>
I found some programs on this link:
http://lam-mpi.lzu.edu.cn/tutorials/nd/part1/
Will these programs run on my OpenMPI cluster?
Actually, I want to run some image processing programs on my cluster. As I
cannot write the entire code of the program, can anyone tell me where I can get
image processing (IP) programs?
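If the tutorial programs at that link are plain MPI C sources (most LAM/MPI-era tutorials are), they should build with Open MPI's wrapper compiler and run under mpirun. A minimal sketch, assuming a downloaded example called tutorial_example.c and a hostfile named hosts that lists both nodes:

    # build with Open MPI's wrapper compiler (file names here are only examples)
    mpicc tutorial_example.c -o tutorial_example
    # run 4 copies spread over the nodes listed in the hosts file
    mpirun -np 4 --hostfile hosts ./tutorial_example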
@Gus, Eugene
I read all your mails and even followed the same procedure; it was BLAS that
was giving the problem.
Thanks
I am again stuck on a problem. I connected a new node to my cluster and
installed CentOS 5.2 on it. After that I used yum to install
openmpi, openmpi-libs and openmpi-devel successfully.
[...]
... Error 2
make[1]: Leaving directory `/hpl'
make: *** [build] Error 2

The ccomp folder is created but the xhpl file is not created.
Is it some problem with the config file?
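A rough checklist for this kind of HPL build failure, assuming HPL was unpacked in /hpl and the architecture name used was ccomp (so the config file is Make.ccomp); the variable names below are the standard ones from HPL's Make templates, while the paths they point to are site-specific:

    cd /hpl
    # the MPI and BLAS locations in Make.ccomp are the usual suspects
    grep -E '^(MPdir|MPinc|MPlib|LAdir|LAinc|LAlib)' Make.ccomp
    mpicc --showme:compile           # where Open MPI's mpi.h actually lives
    mpicc --showme:link              # where libmpi actually lives
    make arch=ccomp clean_arch_all   # start clean (a stock HPL target; skip if your copy differs)
    make arch=ccomp                  # on success the binary appears as bin/ccomp/xhpl

If the link step still fails, the real cause is usually named a few lines above the final "Error 2".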
On Wed, Apr 22, 2009 at 11:40 AM, Ankush Kaul wrote:
I feel the above problem occurred due to installing the mpich package; now even
normal MPI programs are not running.
What should we do? We even tried yum remove mpich but it says there are no packages
to remove.
Please help!!!
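One way to hunt down the conflicting MPICH install is to ask rpm for the exact package name first, since yum removes packages only by their real name; a sketch (names and output will differ per system):

    rpm -qa | grep -i mpich        # list any installed packages with mpich in the name
    # yum remove <exact-name-printed-above>
    which mpicc mpirun             # confirm these point at the Open MPI wrappers again
    ompi_info | grep "Open MPI:"   # prints the Open MPI version if the right MPI is first in PATH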
On Wed, Apr 22, 2009 at 11:34 AM, Ankush Kaul wrote:
We are facing another problem. [...]
...failed; status=255
What's the problem here?
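Without seeing the full output it is hard to tell, but when jobs are launched over ssh an exit status of 255 usually means the ssh step itself failed rather than the MPI program. A quick check worth trying (the node address is taken from earlier in the thread and may not match the failing node):

    ssh 192.168.45.65 hostname    # should print the compute node's hostname without asking for a password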
On Tue, Apr 21, 2009 at 11:45 PM, Gus Correa wrote:
> Hi Ankush
>
> Ankush Kaul wrote:
>
@Eugene
they are OK, but we wanted something better which would more clearly show the
difference between using a single PC and the cluster.
@Prakash
I had problems running the programs as they were compiling using mpcc and not
mpicc.
@Gus
we are trying to figure out the HPL config; it's quite complicated. Also the [...]
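For the HPL.dat side of the configuration, a commonly used rule of thumb (a hedged starting point, not something from this thread) is to set P x Q to the number of MPI processes and pick N so the matrix stays well within total memory:

    # e.g. 4 processes -> P=2, Q=2; NB around 100-256 is a common starting block size
    # an N x N double-precision matrix needs N*N*8 bytes, so for roughly 80%
    # of an assumed 4 GB of total RAM:
    echo "sqrt(0.8 * 4 * 1024^3 / 8)" | bc    # prints a ballpark N of about 20000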
Let me describe what I want to do.
I had taken Linux clustering as my final year engineering project as I am
really interested in networking.
To tell the truth, our college does not have any professor with knowledge of
clustering.
The aim of our project was just to make a cluster, which we did. No[...]
Thanks a lot, I am implementing the passwordless cluster.
I am also trying different benchmarking software and got fed up with all the problems
in all the software I try. I will list a few:
1) VampirTrace
I extracted the tar in /vt, then followed these steps:
$ ./configure --prefix=/vti
[...lots of output...]
Also, how can I find out where my MPI libraries and include directories are?
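With Open MPI the wrapper compiler can report this directly; these options exist in the stock mpicc wrapper, and the paths they print are what build systems such as VampirTrace's configure usually ask for:

    mpicc --showme:compile       # the -I flags, i.e. where mpi.h is
    mpicc --showme:link          # the -L/-l flags, i.e. where libmpi lives
    ompi_info | grep -i prefix   # the installation prefix, which contains lib/ and include/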
On Sat, Apr 18, 2009 at 2:29 PM, Ankush Kaul wrote:
> Let me explain in detail,
>
> when we had only 2 nodes, 1 master (192.168.67.18) + 1 compute node
> (192.168.45.65)
> my openmpi-default-hostfile [...]
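For reference, an Open MPI hostfile for a two-node setup like the one described above might look like this (the IPs are from the thread; the slot counts are assumptions about how many cores each node has):

    # openmpi-default-hostfile (or any file passed to mpirun with --hostfile)
    192.168.67.18 slots=2    # master
    192.168.45.65 slots=2    # compute node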
> ...h-Keys-HOWTO/SSH-with-Keys-HOWTO-4.html#ss4.3
>
> I hope this helps.
>
> Gus Correa
> -
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -
>
>
...at line 1198
What is the problem here?
--
mpirun was unable to cleanly terminate the daemons for this job. Returned
value Timeout instead of ORTE_SUCCESS
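When mpirun cannot cleanly tear down its remote daemons, it sometimes helps to watch what the daemons are doing; Open MPI's mpirun has a --debug-daemons option for that (the program and hostfile names below are placeholders):

    mpirun --debug-daemons -np 2 --hostfile hosts ./a.out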
On Tue, Apr 14, 2009 at 7:15 PM, Eugene Loh wrote:
> Ankush Kaul wrote:
Finally, after mentioning the hostfiles, the cluster is working fine. We
downloaded a few benchmarking softwares, but I would like to know if there is
any GUI-based benchmarking software, so that it is easier to demonstrate the
working of our cluster when we present it.
Regards
Ankush
Can you please suggest a simple benchmarking software? Are there any GUI
benchmarking softwares available?
On Tue, Apr 7, 2009 at 2:29 PM, Ankush Kaul wrote:
Thank you sir, thanks a lot.
The information you provided helped us a lot. I am currently going through the
OpenMPI FAQ and will contact you in case of any doubts.
Regards,
Ankush Kaul
---
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -
>
> Ankush Kaul wrote:
>
>> Thank you sir,
>> one more thing I am confused about: suppose I [...]
...folder on the master node, is this correct?
I am a newbie and really need some help, thanks in advance.
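If the folder in question is the NFS-shared directory from the ps3cluster guide, sharing it looks roughly like the sketch below (the /mirror name, the IPs and the export options are assumptions, not details from the actual setup):

    # on the master (192.168.67.18): add this line to /etc/exports
    #   /mirror 192.168.45.65(rw,sync,no_subtree_check)
    exportfs -ra                           # re-read /etc/exports
    # on the compute node:
    mkdir -p /mirror
    mount 192.168.67.18:/mirror /mirror    # mount the shared folder at the same path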
On Mon, Apr 6, 2009 at 8:43 PM, John Hearns wrote:
> 2009/4/6 Ankush Kaul :
> > Also, how do I come to know that the program is using resources of both the nodes?
>
>
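One simple way to see that both nodes really take part is to let mpirun launch a trivial command on every slot and look at which hostnames come back (the hostfile name is an example):

    mpirun -np 4 --hostfile hosts hostname
    # if both machines are used, the output lists both nodes' hostnames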
> ...not be prompted for a password/passphrase
>
> This should help.
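The usual key-based setup behind that advice is just ssh-keygen plus copying the public key to each compute node; a sketch, run as the same user that mpirun uses (the node address is an example):

    ssh-keygen -t rsa                              # accept the defaults; leave the passphrase empty
    ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.45.65 # append the public key to the node's authorized_keys
    ssh 192.168.45.65 hostname                     # should now work without a password prompt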
>
>
>
> On Apr 4, 2009, at 5:51 AM, Ankush Kaul wrote:
>
I followed the steps given here to set up the OpenMPI cluster:
http://www.ps3cluster.umassd.edu/step3mpi.html
My cluster consists of two nodes, master (running Fedora 10) and a compute
(CentOS 5.2) node, connected directly through a cross cable.
I installed OpenMPI on both; there were many folders and [...]
I followed the steps given here to set up the OpenMPI cluster:
http://www.ps3cluster.umassd.edu/step3mpi.html
My cluster consists of two nodes, master (192.168.67.18) and
slave (192.168.45.65), connected directly through a cross cable.
After setting up the cluster and configuring the master node, I mounted [...]