Oh, and if you turn off pfilter, make sure you turn it off on the compute nodes as well.
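A sketch of what that looks like, assuming pfilter runs as a standard init service and the C3 tools (cexec) that OSCAR installs are available; adjust the service name and commands for your distribution:

```shell
# On the master node:
service pfilter stop        # stop the packet filter now
chkconfig pfilter off       # keep it from starting at boot

# Push the same change to all compute nodes with the C3 tools:
cexec service pfilter stop
cexec chkconfig pfilter off
```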

On 10/13/06, Michael Edwards <[EMAIL PROTECTED]> wrote:
> Oscar sets up exporting for /home on the masternode automatically.  It
> sounds like you have not exported the /home1 and /home2 drives.  Make
> sure that they are listed in the /etc/exports file on their
> originating computer.  Look at the /etc/exports file on the master
> node for an example.  You will also have to fiddle with pfilter to let
> the traffic through.  I would personally just turn off pfilter (assuming
> you have other security in place or are not connected directly to the
> internet), but that is mainly because I have never bothered to learn
> how it works in detail.
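To make the exports concrete, here is a minimal sketch; the hostname "oscar_server" and the export options are examples, not something OSCAR generates for you:

```shell
# On n1, add to /etc/exports (one line per exported path):
#   /home1  oscar_server(rw,no_root_squash,sync)
# On n2, add to /etc/exports:
#   /home2  oscar_server(rw,no_root_squash,sync)

# After editing, re-export and confirm on each node:
exportfs -ra     # reread /etc/exports
exportfs -v      # list what is actually being exported
```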
>
> Just a suggestion, I would not name the other directories /home1 and
> /home2, as they will probably not be used for user data in the same way as
> /home is.  I would name them something like /scratch1 and /scratch2 or
> /nodedrive1 and /nodedrive2 or something.
>
> This is a somewhat unusual setup, from my limited experience, which is
> why I did not understand what you were trying to do initially.  Having
> three active NFS servers could generate a lot of traffic on your
> network if nothing else.  Then if you get up to 16+ nodes it would be
> a mess...
>
> There are some clustered file systems floating around out there if you
> want to use all the disks in a fairly transparent way and minimize
> network traffic.  It really depends a lot on what you are trying to
> do, and how many nodes you plan to have in the long run.
>
> On 10/13/06, Ivan Ivanov <[EMAIL PROTECTED]> wrote:
> > Hi Michael,
> >
> > It might be my fault not explaining too well. Here is a different
> > description of the problem:
> >
> > =======================================================
> > The cluster has 3 nodes (n0 - master; n1 and n2 - slaves), and each has
> > its own hard drive: /home, /home1 and /home2.  We were able to mount
> > /home on n1 and n2, so the cluster runs OK (we use MPI).
> >
> > The problem is that we cannot access /home1 and /home2 from n0 when
> > trying to use those hard drives for storage.  Here is the situation:
> >    when in n0: read /home YES; read /home1 NO; read /home2 NO
> >    when in n1: read /home YES; read /home1 YES; read /home2 NO
> >    when in n2: read /home YES; read /home1 NO; read /home2 YES
> >
> > The 'mount: RPC: Program not registered' error message pops up when we
> > try to mount /home1 or /home2 on n0.
> > =======================================================
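For what it's worth, 'RPC: Program not registered' usually means the NFS server daemons (mountd/nfsd) are not registered with the portmapper on the machine you are mounting from. A quick way to check from n0, assuming the standard RPC utilities are installed:

```shell
# Replace n1 with whichever node refuses the mount:
rpcinfo -p n1        # should list portmapper, mountd and nfs entries
showmount -e n1      # should list /home1 as an exported filesystem

# If mountd/nfs are missing from the rpcinfo output, the NFS server
# is not running on that node (or pfilter is blocking the RPC ports).
```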
> >
> > Hope this helps,
> >
> > Ivan
> >
> >
> >
> > >>> "Michael Edwards" <[EMAIL PROTECTED]> 10/10/06 4:31 PM >>>
> > The mapping of home drives is from the master node to the compute
> > nodes, so it sounds like your set up is working as I would expect it
> > to.
> >
> > NFS propagates changes made on the nodes back to the server
> > automatically, so you don't need to mount the home directories of the
> > compute nodes back onto the server.  The data stays in sync on its
> > own; keeping the clocks synchronized just ensures file timestamps are
> > consistent across the cluster.
> >
> > Perhaps I am not understanding your problem completely though.
> >
> > On 10/10/06, Ivan Ivanov <[EMAIL PROTECTED]> wrote:
> > > We are running Oscar on a 3-node (16-processor) Dell cluster with
> > > RedHat (kernel 2.4.18-4smp).
> > > We have the following problem:
> > >
> > > When attempting to map the home directories of the 2nd and 3rd nodes
> > > to
> > > the first one (the master) the mount fails, and we get the message
> > > 'mount: RPC: Program not registered'
> > > On the other hand, mounting the home directory of the 1st node on the
> > > second and third nodes works fine.
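Since OSCAR only configures the master as an NFS server, the likely cause is that no NFS server is running on the compute nodes at all. A sketch of starting one, assuming Red Hat-style init scripts (service names may differ on your install):

```shell
# On each of n1 and n2:
service portmap start    # the RPC portmapper must be up first
service nfs start        # starts nfsd and mountd and registers them
chkconfig nfs on         # have NFS serving come up at boot

# Then, from the master (n0), the mount should succeed:
mount n1:/home1 /mnt/home1
```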
> > >
> > > Any suggestions are welcome.
> > >
> > > Thanks,
> > >
> > > Ivan Ivanov
> > >
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Oscar-users mailing list
> > > [email protected]
> > > https://lists.sourceforge.net/lists/listinfo/oscar-users
> > >
> >
>
