[EMAIL PROTECTED] wrote on 11/12/2007 08:06:11 AM:
> Hi all,
> As we know, GRAM does well at long-running job management. However,
> there are many short jobs that need to be handled in some fields.
> For example, in scientific computing for bioinformatics, a biologist
> submits a batch of requests to the Grid computing environment. The
> number of requests may be very large, perhaps hundreds of thousands,
> but most of the requests need less than one second of CPU time each.
> If we submit these short jobs with GRAM, which can only handle one
> job at a time, the network overhead, credential overhead, and OS
> scheduling overhead will be very large as well.
> I know there is a multijob model in GRAM, but can we put hundreds or
> thousands of jobs into one job description file? Is that feasible and
> efficient? Are there any other methods for handling large numbers of
> short jobs? Any advice will be highly appreciated!
If you have huge numbers of small jobs, package them (using Unix scripting, not Globus) into fewer "big" jobs to avoid the overheads you mention. Each such job should carry the "right" amount of workload for the target site. In my experience, (WS) GRAM handles large numbers of jobs poorly (slowly, with an increased risk of errors).

Regards,
Jan Ploski

--
Dipl.-Inform. (FH) Jan Ploski
OFFIS
Betriebliches Informationsmanagement
Escherweg 2 - 26121 Oldenburg - Germany
Fon: +49 441 9722 - 184
Fax: +49 441 9722 - 202
E-Mail: [EMAIL PROTECTED] - URL: http://www.offis.de
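[Editor's note: the batching approach described above can be sketched with plain Unix tools. This is a minimal illustration, not part of the original post; `jobs.txt` (one job command per line) and `BATCH_SIZE` are hypothetical names chosen for the example.]

```shell
#!/bin/sh
# Sketch: pack many one-line job commands into a few batch scripts,
# so the Grid middleware submits a handful of jobs instead of
# hundreds of thousands of tiny ones.
# Assumption: job commands live one per line in jobs.txt.

BATCH_SIZE=500

# Placeholder input for the sketch: 1200 trivial job commands.
seq 1 1200 | sed 's/^/echo job /' > jobs.txt

# split(1) cuts the command list into chunks of BATCH_SIZE lines.
split -l "$BATCH_SIZE" jobs.txt batch_

# Turn each chunk into a self-contained executable batch script.
for chunk in batch_*; do
    { echo '#!/bin/sh'; cat "$chunk"; } > "$chunk.sh"
    chmod +x "$chunk.sh"
    rm "$chunk"
done

ls batch_*.sh | wc -l   # 1200 jobs / 500 per batch -> 3 scripts
```

Each resulting `batch_*.sh` would then be submitted as a single GRAM job (e.g. with `globusrun-ws`), amortizing the per-job network, credential, and scheduling overhead across all the commands it contains.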
