Hi:

Well, I've set GLOBUS_TCP_PORT_RANGE to the open range in the file
/etc/xinetd.d/gsigatekeeper on both the client side (my machine) and the
server (our cluster). However, there is a firewall that opens and closes the
ports in the "domain" where my machine and our cluster both live. Is it
enough to open the port range and the gatekeeper port (of my machine and our
cluster) in that firewall, or do I also have to open them on my machine and
on our cluster themselves?
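To make the question concrete, this is roughly what I have set up (the
50000,51000 range is just a placeholder, not my real values):

```shell
# Client side: pin Globus callback connections to a known TCP range
# (placeholder range -- substitute whatever range the firewall admin opens)
export GLOBUS_TCP_PORT_RANGE=50000,51000
echo "$GLOBUS_TCP_PORT_RANGE"

# Server side: the same setting lives in /etc/xinetd.d/gsigatekeeper, e.g.
#   env = GLOBUS_TCP_PORT_RANGE=50000,51000
# The firewall would then need inbound TCP 50000-51000 open on both hosts,
# plus the gatekeeper port itself (2119 by default).
```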

Thanks and Regards...


2008/3/10, Charles Bacon <[EMAIL PROTECTED]>:
>
> That means the gatekeeper cannot contact your client.  If you're
> behind a firewall, you'll need to open a port range to be contacted
> on, and set GLOBUS_TCP_PORT_RANGE on the client side to the open
> range.  If your machine has a private address, but can be reached via
> NAT or the like, you need to set GLOBUS_HOSTNAME on the client side
> to the correct hostname for the server to use to contact the client.
>
>
>
> Charles
>
>
> On Mar 10, 2008, at 2:34 PM, Nyarfee wrote:
>
> > Hi:
> >
> > I'm verifying the Globus installation with the steps from
> > http://www.gridway.org/documentation/stable/installconfguide/x108.htm.
> > The commands work well on the machine where I installed GT4.0.5
> > (localhost), except for step 3, the submission test against
> > name_of_our_cluster:
> >
> > 1. Authorization test:
> >
> >    $ globusrun -a -r localhost
> >    $ globusrun -a -r name_of_my_machine
> >    $ globusrun -a -r name_of_our_cluster
> >
> > All three return: GRAM Authentication test successful
> >
> > 2. File transfer test:
> >
> >    $ globus-url-copy file:///etc/hosts gsiftp://localhost/tmp/hosts1
> >    $ globus-url-copy gsiftp://localhost/tmp/hosts1 file:///tmp/hosts2
> >
> >    $ globus-url-copy file:///etc/hosts gsiftp://name_of_my_machine/tmp/hosts1
> >    $ globus-url-copy gsiftp://name_of_my_machine/tmp/hosts1 file:///tmp/hosts2
> >
> >    $ globus-url-copy file:///etc/hosts gsiftp://name_of_our_cluster/tmp/hosts1
> >    $ globus-url-copy gsiftp://name_of_our_cluster/tmp/hosts1 file:///tmp/hosts2
> >
> > All transfers are OK.
> >
> > 3. Submission test:
> >    $ globus-job-run localhost /bin/uname -a
> >    $ globus-job-run name_of_my_machine /bin/uname -a
> >
> > the outputs are OK, but when I type
> > -------------------------------------------------------------------------
> >    $ globus-job-run name_of_our_cluster /bin/uname -a
> >
> > this message appears: GRAM Job submission failed because the job
> > manager failed to open stderr (error code 74)
> > -------------------------------------------------------------------------
> > The cluster has GT4 installed with pre-WS services configured. I already
> > have the Certificate Authorities installed, and I created a valid proxy
> > before typing the commands. GT4.0.5 on my machine is installed and
> > configured only to work with the meta-scheduler GridWay 4.2.3 (I
> > don't need GT4.0.5 on my machine as a worker node).
> >
> > What could be the reason for this message, and how can I fix it?
> >
> > Regards...
>
>
