On Mon, Apr 7, 2008 at 3:09 PM, Charles Bacon <[EMAIL PROTECTED]> wrote:

> On Apr 7, 2008, at 1:56 PM, Adam Bazinet wrote:
>
> > Just out of curiosity, how do you specify a file delete in an RFT
> > transfer file?  I suppose I could always submit a dummy GRAM job and use the
> > cleanup tags to get the same effect, but I'd like to be as efficient as
> > possible I guess.
> >
>
> Use rft-delete instead of rft.
> http://www.globus.org/toolkit/docs/4.0/data/rft/rn01re02.html
>

Thanks, missed that somehow.
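For the archive, my reading of that page is that the delete client just takes a plain request file listing GridFTP URLs to remove, one per line (the host name and paths below are made up for illustration):

```
# RFT delete request -- URLs of files/directories to remove, one per line
# (host and paths here are placeholders, not real resources)
gsiftp://gridnode.example.org:2811/scratch/job123/output.tmp
gsiftp://gridnode.example.org:2811/scratch/job123/
```

and then something along the lines of `rft-delete -h <container host> -f my-delete.xfr` submits it, if I'm reading the client options right.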

> >  Your other idea about the sysadmin type script is a good one, I think.
> >  In fact, since GLOBUS_SCRATCH_DIR can be set to a different value for each
> > different installed scheduler, might it make sense to modify the appropriate
> > scheduler provider to report this scratch dir information?  That would keep
> > things organized nicely in the central MDS store, I would think.
> >
>
> That seems like a good idea, though I don't know if that would wind up
> messing with their schemas.  The GLUE schema they use should already have an
> area to report disk space available on the various file systems.  I'm not
> sure if/how it is populated by default, but I think you should be able to do
> that without having any schema trouble.


I'm a little worried about the schema.  I found this page:
http://viewcvs.globus.org/viewcvs.cgi/ws-mds/usefulrp/schema/schema/mds/usefulrp/batchproviders.xsd?view=markup&content-type=text%2Fvnd.viewcvs-markup&revision=1.6

Is that the latest version of the schema?  It seems to match up with the
output I'm used to seeing from various scheduler providers.  In any case,
there isn't really a field for disk space, unless I just missed it.  So, is
it possible to extend this schema somehow?  We already customize all of our
scheduler providers anyway.  I suppose I could go back to your original idea
of publishing something separate to the Index service, but it would be nice
if this information were integrated the way I had in mind.
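Just to sketch what I mean by integrating it: assuming that schema can be extended at all, I was imagining an optional element roughly like this (the element and type names are my own invention, not anything from batchproviders.xsd):

```xml
<!-- hypothetical addition; element names are illustrative only -->
<xsd:element name="scratchDirectory" minOccurs="0" maxOccurs="1">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="path" type="xsd:string"/>
      <xsd:element name="availableMB" type="xsd:long" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

so each scheduler provider could report its own GLOBUS_SCRATCH_DIR (and how much room is left there) alongside the queue data it already publishes.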

Thanks,
Adam


> Charles
