Jan,

I agree with your assessment: the need to adjust memory use per process
is a general one in cluster job submission, it is implemented in some
form by every underlying job management system, and these extensions
ought not to be PBS-specific.

I also looked at your "messy solution".  (The code looks very professional,
really.)  It won't do for my purposes, though, because I need to present a
minimal, easily understood solution.

Let me explain my situation:

None of the compute resources is under my control.  I can point out
problems to admins, that is all.

I have been assigned two tasks.

Our users and I are familiar with conventional cluster job submission.
One task is to bring them into the grid fold by showing them the
advantages of globusrun-ws.  If it proves to be a truly cross-platform
solution, giving them the ability to switch (almost) effortlessly
between grid clusters, the effort will be a success.

My other task is to write a report on practical MPI job submission over
the grid.

We have come a long way, but still have to deal with a couple of practical
details.  At this point, it looks like both will end up as work-arounds
for an incomplete implementation of the job submission interface in
Globus.

If these issues can be dealt with in a future release of Globus, grid
job submission will look very attractive to real researchers.
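To make the discussion concrete, here is a minimal sketch of the kind of
portable job description I have in mind.  The element names are from the
WS-GRAM job description schema; the paths, the process count, and the
memory figure are placeholders, and whether the memory limit is honoured
is exactly the adapter-dependent detail under discussion:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal WS-GRAM job description sketch; paths and values are placeholders. -->
<job>
    <executable>/home/user/bin/my_mpi_app</executable>
    <directory>/home/user/run</directory>
    <stdout>run.out</stdout>
    <stderr>run.err</stderr>
    <count>8</count>                 <!-- number of processes -->
    <jobType>mpi</jobType>           <!-- ask the LRM adapter to start it via mpirun -->
    <maxMemory>512</maxMemory>       <!-- per-process memory limit in megabytes -->
</job>
```

Submitted with, say, "globusrun-ws -submit -Ft SGE -f job.xml", the same
file should in principle also work against a PBS factory, which is the
cross-platform promise we are trying to keep.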

Thanks again!

On 6.05.08, Jan Ploski wrote:
> Steve White <[EMAIL PROTECTED]> schrieb am 05/06/2008 04:43:18 PM:
> 
> > Jan,
> > 
> > It looked good, but unfortunately this is a PBS-only extension, and most
> > of our resources (specifically, the one the user in question wants 
> most),
> > are using SGE.
> > 
> > You know, my users say, "Why should I do job submission via Globus?
> > Why not just gsissh in and do a job submission to the cluster's job 
> > manager?"
> > 
> > My answer has been "Ah, because the Globus way abstracts away all the
> > details of the cluster's job manager---you just write one job 
> description
> > for Globus, and voilà, it runs on all Grid clusters".
> 
> That's the main idea, yes.
> 
> > If in fact the user has to write different job descriptions for each job
> > manager on each cluster...
> 
> That's the sad reality.
> 
> > ... what is the correct answer to this question?
> 
> When I recommended it, I didn't pay attention to the fact that this was 
> PBS-only. In fact I see nothing in the elements' meaning which would make 
> it impossible to implement in other LRMS adapters (I don't know SGE, but I 
> know Condor). Yet, strangely, the extension is called "PBS Node Selection 
> Parameters" in the documentation. Perhaps someone from Globus can comment 
> whether this extension is going to be generalized?
> 
> If you are dealing with a small number of resources under your control, it 
> might be reasonable to just hack the SGE adapter (Perl) to support this 
> extension and possibly even contribute it back to Globus... That way you 
> could keep your promise to the users.
> 
> Regards,
> Jan Ploski
> 
> --
> Dipl.-Inform. (FH) Jan Ploski
> OFFIS
> Betriebliches Informationsmanagement
> Escherweg 2  - 26121 Oldenburg - Germany
> Fon: +49 441 9722 - 184 Fax: +49 441 9722 - 202
> E-Mail: [EMAIL PROTECTED] - URL: http://www.offis.de
> 

-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  - 
Steve White                                             +49(331)7499-202
e-Science / AstroGrid-D                                   Zi. 35  Bg. 20
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  - 
Astrophysikalisches Institut Potsdam (AIP)
An der Sternwarte 16, D-14482 Potsdam

Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz

Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  - 
