Hi all,

I currently have a demo cluster up and running, thanks to all.

But .. with some problems, which I know will be over as soon as I post
them. (I did build a cluster previously with OSCAR 2.1 + RHL 7.2.)

Brief:
We have a 1+2 cluster, all identical machines (P4, 256 MB RAM),
connected via a 10 Mbps hub, built on OSCAR 4.0 + RHL 9.0 (with no
updates). This is a demo cluster, built to learn how to build an HPC
Beowulf cluster. We will have it on display at my college's annual
technical fair.

I am stuck on the following problems:

1. eth0 on the client nodes is not active when the clients boot, but
   comes up when I issue "insmod e100". I also need to mount /home
   manually. Where should I enter the configuration so that eth0 comes
   up on subsequent boots?
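For reference, here is what I understand the RHL 9 boot-time configuration to look like; this is a sketch based on the standard Red Hat config files, and the head node name "oscarserver" in the fstab line is only a placeholder, not my actual hostname:

```shell
# /etc/modules.conf -- bind the e100 driver to eth0 so the kernel
# loads it automatically whenever the interface is brought up
alias eth0 e100

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- bring eth0 up at boot
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp

# /etc/fstab -- mount /home from the head node at boot
# ("oscarserver" is a placeholder for the real head node name)
oscarserver:/home  /home  nfs  defaults  0 0
```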

2. Various tests, mainly related to LAM/MPI, PBS, MPI (via PBS), and
   PVM (via PBS), are failing.

3. When I try "lamboot my_cluster" as a user on the master node, it
   returns an error about "xauth X11 forwarding .." and tells me to
   test $SHELL with "ssh -x", which correctly returns /bin/bash.
   I read about the -ssi options and ran lamboot with the rsh agent
   set to "ssh -x", but it doesn't seem to use it, so when I run
   mpirun on my hello-world MPI program it responds from the master
   node only.
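For the record, this is roughly what I am attempting, using the LAMRSH environment variable that the LAM docs describe for setting the remote agent (assuming a bash shell, and "hello" standing in for my compiled MPI binary):

```shell
# Tell LAM to use ssh without X11 forwarding as its remote agent
export LAMRSH="ssh -x"

# Boot the LAM runtime across the hosts listed in my_cluster
lamboot -v my_cluster

# Run the hello-world MPI program on 3 processes
mpirun -np 3 ./hello
```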

4. Reports from Ganglia are not shown for the nodes, so I need to run
   "service gmond start" on each client (I can still execute C3
   commands with no problems, like "cexec service gmond start", etc.).
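A sketch of what I run by hand now, plus what I believe should make the daemon persist across reboots via chkconfig (assuming the client-side Ganglia daemon is gmond):

```shell
# Start the Ganglia monitoring daemon on all client nodes via C3
cexec service gmond start

# Register it so it starts automatically on subsequent boots
cexec chkconfig gmond on
```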

Please help me set up those failing tests correctly so that I can run
MPI programs.

Thanks.
Amit Vyas

_______________________________________________
Oscar-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oscar-users
