Not really, no. The idea is that you run some kind of local scheduler
(like, say, Condor) and just use a single Globus node to address those
instances. If you need access to the client binaries, you can mount
them on NFS. But there's not much reason for a single resource pool
to need more than one GRAM, RFT, or Index service to talk to it.
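As a rough sketch of the NFS approach Charles mentions (the hostname "headnode" and the path /usr/local/globus are illustrative assumptions, not taken from this thread), sharing the client binaries might look like:

```shell
# Hypothetical sketch: export the Globus install from the head node and
# mount it read-only on each worker, so all nodes share one copy of the
# client binaries. Names and paths are assumptions.

# On the head node, in /etc/exports:
#   /usr/local/globus  *(ro,sync,no_subtree_check)

# On each worker node:
sudo mount -t nfs headnode:/usr/local/globus /usr/local/globus

# Point the environment at the shared install:
export GLOBUS_LOCATION=/usr/local/globus
. "$GLOBUS_LOCATION/etc/globus-user-env.sh"
```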
Charles
On Oct 1, 2008, at 2:02 AM, Yoichi Takayama wrote:
Hi
I was also wondering what the best way is to add another Globus
node to a Condor pool.

Condor supports dynamic node addition: only minimal effort is needed
to add a node to, or remove one from, the pool. Because the global
configuration file refers to each local host through parametrized
macros (rather than hard-coded values), I can add or remove nodes
relatively easily by installing a single copy of Condor on NFS and
pointing all nodes at it for the shared binaries and global
configuration. (Alternatively, I can just copy /usr/local/condor to
each new node.) In addition, each node has a local work directory
that holds its local configuration (which daemons to run on that
node, the node's IP, the local condor user's UID and group ID), job
executables, spools, and logs.

Does Globus have a similar arrangement, so that adding or removing
nodes would be easy?
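For readers unfamiliar with the Condor setup described above, a hedged sketch of such a shared configuration (all names here are illustrative, not from this thread) could look like:

{code}
## Hypothetical condor_config fragment: host-specific values come from
## macros, so the same file can be shared over NFS by every node.
RELEASE_DIR = /nfs/condor
LOCAL_DIR   = /var/condor/$(HOSTNAME)
CONDOR_HOST = central-manager.example.org

## Per-node choices (e.g. which daemons to run) live in each node's
## local config file:
LOCAL_CONFIG_FILE = $(LOCAL_DIR)/condor_config.local
{code}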
Thanks,
Yoichi
--------------------------------------------------------------------------
Yoichi Takayama, PhD
Senior Research Fellow
RAMP Project
MELCOE (Macquarie E-Learning Centre of Excellence)
MACQUARIE UNIVERSITY
Phone: +61 (0)2 9850 9073
Fax: +61 (0)2 9850 6527
www.mq.edu.au
www.melcoe.mq.edu.au/projects/RAMP/
--------------------------------------------------------------------------
MACQUARIE UNIVERSITY: CRICOS Provider No 00002J
This message is intended for the addressee named and may contain
confidential information. If you are not the intended recipient,
please delete it and notify the sender. Views expressed in this
message are those of the individual sender, and are not necessarily
the views of Macquarie E-Learning Centre Of Excellence (MELCOE) or
Macquarie University.
On 01/10/2008, at 4:12 PM, Charles Bacon wrote:
Sorry for not answering earlier; copying the installation from one
machine to another works best if you copy it *before* you run make
install.
After you run make install, the current machine's name is copied
into a large number of files. You can probably fix it by running
$GLOBUS_LOCATION/sbin/gpt-postinstall -force on the new machine.
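A sketch of that repair workflow (the hostname "newhost" and the path /usr/local/globus are assumptions; `gpt-postinstall -force` is the command Charles names above):

```shell
# Hypothetical sketch: replicate an existing Globus install to a new
# host, then re-run the GPT setup scripts there so host-specific files
# are regenerated. "newhost" and the install path are assumed names.
export GLOBUS_LOCATION=/usr/local/globus

# Copy the whole installation tree to the new machine:
rsync -a "$GLOBUS_LOCATION/" newhost:"$GLOBUS_LOCATION/"

# On the new machine, regenerate host-specific configuration:
ssh newhost "GLOBUS_LOCATION=$GLOBUS_LOCATION \
    $GLOBUS_LOCATION/sbin/gpt-postinstall -force"
```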
Charles
On Sep 30, 2008, at 10:05 PM, Yoichi Takayama wrote:
Hi
I just copied the installed Globus binaries to the third machine and
tested whether it could simply be configured.

Everything works up to setting up GRAM.

When testing GRAM, the job below fails. Perhaps, by just copying the
binaries, GLOBUS_USER_HOME does not get set up properly?

Although I can install Globus from source on this machine, it would
be helpful if there were a way to make a binary distribution package
from a build once it is done. I think make creates a BUILD directory,
but "make install" seems to clean it afterwards. Is this true? Maybe
I can stop after "make", before "make install", and copy the entire
directory to another machine?

So it does not seem possible to just copy the installed binaries to
another machine.
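The stop-after-make idea might be sketched like this (the build path ~/gt4-build is an illustrative assumption, not from this thread):

```shell
# Hypothetical sketch: archive the build tree after `make` but before
# `make install`, so host-specific values are not yet baked into the
# installed files. ~/gt4-build is an assumed location.
cd ~/gt4-build
make                                   # build only; no `make install` yet
tar czf /tmp/gt4-build.tar.gz -C ~ gt4-build

# On the target machine:
#   tar xzf gt4-build.tar.gz && cd gt4-build
#   export GLOBUS_LOCATION=/usr/local/globus
#   make install
```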
Thanks,
Yoichi
{code}
$ vim a.rsl
-- insert --
<job>
  <executable>my_echo</executable>
  <directory>${GLOBUS_USER_HOME}</directory>
  <argument>Hello</argument>
  <argument>World!</argument>
  <stdout>${GLOBUS_USER_HOME}/stdout</stdout>
  <stderr>${GLOBUS_USER_HOME}/stderr</stderr>
  <fileStageIn>
    <transfer>
      <sourceUrl>gsiftp://grid4.ramscommunity.org:2811/bin/echo</sourceUrl>
      <destinationUrl>file:///${GLOBUS_USER_HOME}/my_echo</destinationUrl>
    </transfer>
  </fileStageIn>
  <fileCleanUp>
    <deletion>
      <file>file:///${GLOBUS_USER_HOME}/my_echo</file>
    </deletion>
  </fileCleanUp>
</job>
{code}
{code}
$ globusrun-ws -submit -S -f a.rsl
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:fa585234-8f62-11dd-9f76-000c292b999a
Termination time: 10/01/3008 02:45 GMT
Current job state: StageIn
Current job state: Failed
Destroying job...Done.
Cleaning up any delegated credentials...Done.
globusrun-ws: Job failed: Invalid executable path "my_echo".
{code}