Hi Richard
I have just been through this myself. In my case I installed an older
version of RHEL3 (update 1? - how do you tell?) from CDs that came with a
computer a couple of years ago. I then installed OSCAR 4.1 (and had to
solve a whole bunch of missing .rpm problems by downloading them fro
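On the side question of how to tell which RHEL3 update is installed: the release file usually says. A quick check (the exact wording of the output varies by update level):

```shell
# The update level is normally shown in parentheses after "Taroon",
# e.g. something like: Red Hat Enterprise Linux WS release 3 (Taroon Update 1)
cat /etc/redhat-release
```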
Many thanks Bernard and Fernando
Two things:
1) The reason I thought I needed a second oscar image is that half my
nodes now have IDE drives and the other half SATA (=SCSI?). Doesn't this
mean they will need different partition tables and /etc/fstab files at a
minimum? I realise that I don't need
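For what it's worth, the fstab difference between the two groups of nodes mostly comes down to device naming: IDE drives appear as /dev/hda*, while SATA drives go through the SCSI layer and appear as /dev/sda*. A sketch of the same layout on both (illustrative partition numbers only; your image's actual layout will differ):

```
# /etc/fstab on an IDE node:
/dev/hda1   /       ext3    defaults    1 1
/dev/hda2   swap    swap    defaults    0 0

# Equivalent /etc/fstab on a SATA node:
/dev/sda1   /       ext3    defaults    1 1
/dev/sda2   swap    swap    defaults    0 0
```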
Hi All
Here's the problem: I had a happily running OSCAR (v4.1 on RHEL3) cluster
based on Dell Precision 470 machines with IDE drives. I went and bought
some new (additional) nodes but IDE wasn't an option any longer, so I got
SATA drives instead. The new boxes also seem to have come with an upda
Hi All
I know that this has been discussed before, but I searched the archives of
this forum, and the answer is still not clear to me. Could someone please
explain to me (in simple step-by-step instructions) how I should update
system files on my computational nodes.
For example:
I am runnin
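For pushing file updates out to compute nodes, OSCAR installs the C3 tools, which are one common way to do this from the head node. A sketch (assumes the default C3 cluster configuration; adjust paths to your setup):

```shell
# Copy a file from the head node to the same path on every compute node:
cpush /etc/hosts /etc/hosts

# Run a command on all nodes to verify the copy landed:
cexec md5sum /etc/hosts
```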
ly necessary to do this even after you have changed the pfilter
settings (or turned it off)? Just wondering.
Thanks!
Bernard
From: Giles Lesser
[
mailto:[EMAIL PROTECTED]]
Sent: Monday, June 27, 2005 16:49
To: Bernard Li; oscar-users@lists.sourceforge.net
Subject: Re: [Oscar-users] R
Hi Bernard
Thanks for your help with this. The problem is now solved, but I thought
I would post what I found for others' benefit.
There were a number of problems that had to be solved:
1) As you suggested, the ntp.conf file on the headnode had to be modified
to include the line
restrict 192.168.1
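A restrict line of that form names the cluster's private subnet. A hypothetical full example (the 192.168.1.0/24 subnet and mask here are assumptions; substitute your own network):

```
# /etc/ntp.conf on the head node: let clients on the private cluster
# network query time, but not modify the server's configuration.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```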
ilter stop.
Pfilter should allow ntpd
traffic to go through, though.
You may try to leave it for a
little while and then come back and start up ntpd again on the compute
node, to see if it comes up then. I can do a test to see if I can
re-produce this issue.
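A sketch of the restart-and-check step on the compute node (RHEL3-era init scripts assumed):

```shell
# Restart ntpd, then give it a few minutes to pick a server:
service ntpd restart

# List the peers ntpd knows about; an asterisk in the first column
# marks the server it has synchronised to:
ntpq -p
```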
Cheers,
Bernard
From: Giles Lesse
Hi all. Many thanks for your help with my previous posts.
I am now running MPICH_p4 on dual-processor (OSCAR) cluster nodes and would
like to take advantage of message passing by shared memory when possible. I
have re-compiled MPICH and specified the --with-comm=shared flag and
re-installed MPI
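A sketch of the rebuild and a run that keeps both processes of a pair on the same dual-CPU node, where shared memory can be used (the install prefix and hostnames are assumptions):

```shell
# Rebuild MPICH 1.2.x with the ch_p4 device and shared-memory communication:
./configure --with-device=ch_p4 --with-comm=shared \
            --prefix=/opt/mpich-shared
make
make install

# A machines file listing each dual-CPU node with ":2" lets mpirun place
# two processes per node:
#   node1:2
#   node2:2
mpirun -np 4 -machinefile machines ./my_mpi_program
```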
hould reflect whatever path you set up, which
would integrate with switcher nicely.
Jeremy
At 09:19 PM 2/2/2004, Giles Lesser wrote:
I know I'm covering old ground, but
after lots of searching I still can't figure it out.
I am trying to use IFC v7.1 to compile a fortran90 program to
I know I'm covering old ground, but after lots of searching I still can't
figure it out.
I am trying to use IFC v7.1 to compile a fortran90 program to run under
MPICH on an OSCAR cluster and I get the (in)famous
pi3.o(.text+0x1b): In function `main':
: undefined reference to `mpi_init_'
pi3.
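The usual cause of "undefined reference to `mpi_init_'" is a Fortran name-mangling mismatch: the MPICH libraries were built with a different Fortran compiler (typically g77) than the one compiling the program. One common fix is to rebuild MPICH with ifc as its Fortran compiler and then use the matching wrapper. A sketch only; the exact way to point MPICH's configure at ifc varies between MPICH 1.2.x versions (check ./configure --help), and the prefix is an assumption:

```shell
# Tell the build to use ifc for Fortran (mechanism varies by version):
export FC=ifc
export F90=ifc
./configure --with-device=ch_p4 --prefix=/opt/mpich-ifc
make && make install

# Compile with the wrapper that matches the rebuilt libraries:
/opt/mpich-ifc/bin/mpif90 -o pi3 pi3.f90
```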