Hey Nate,

On 2011-08-16 20:21, Nate Coraor wrote:
> Roman Valls wrote:
>> Thanks indeed for the ~/.slurm_drmaa.conf hint, Mariusz; very useful!
>>
>> I wonder how this approach will work out when one runs Galaxy as advised
>> in a production setting:
>>
>> http://usegalaxy.org/production
>>
>> Together with the Gordon patch to run DRMAA jobs as different users:
>>
>> http://wiki.g2.bx.psu.edu/Events/GCC2011?action=AttachFile&do=get&target=RunningGalaxyDRMAAJobsAsDifferentUsers.pdf
>>
>> My concern is that the SUID code will probably end up clearing all of
>> the environment variables and/or ignoring the user's ~/.slurm_drmaa.conf,
>> as happened to me before with "screen":
>>
>> http://superuser.com/questions/235760/ld-library-path-unset-by-screen
>>
>> I'll dive into that Gordon patch/code eventually, but for now your
>> insight/fixes have been very helpful from a Galaxy developer-instance
>> perspective. Thanks again!
> 
> Hi Roman,
> 
> From the Galaxy side of things, you could add a new section to
> universe_wsgi.ini as we do with tool_runners, and specify, perhaps,
> user_runners.  This is parsed in lib/galaxy/config.py and implemented in
> lib/galaxy/tools/__init__.py.  The correct place to do the per-user code
> might be in lib/galaxy/jobs/__init__.py itself, but I'm not sure how
> you'd resolve overlapping tool and user runners.
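
For reference, a minimal sketch of what such a section could look like (the section name and the per-user option syntax here are only my assumptions, mirroring the existing tool_runners convention, not anything Galaxy actually parses):

```ini
; Hypothetical per-user runner section in universe_wsgi.ini
[galaxy:user_runners]
roman = drmaa://-A a2010002 -p core/
default = drmaa:///
```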

I got it up & running in Vienna, thanks to Jeremy. I didn't want to put it
in universe_wsgi.ini, since users don't have access to that file and thus
couldn't tweak the parameters when their needs change (different time
limits, projects, etc.).

The overlapping of tool and user parameters still needs to be solved,
indeed :-S
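
One purely hypothetical way to resolve that overlap, sketched in Python (the names and the precedence order below are my assumptions, not anything Galaxy implements): the most specific setting wins, i.e. a per-user entry for a given tool beats the generic tool runner, which beats the user's default runner, which beats the global default.

```python
# Hypothetical sketch: resolving overlapping tool_runners and user_runners.
# Names and precedence are assumptions, not Galaxy's actual behaviour.

def resolve_runner(tool_id, user, tool_runners, user_runners, default):
    """Most specific wins: per-user+tool, then per-tool, then per-user."""
    user_conf = user_runners.get(user, {})
    if tool_id in user_conf:
        return user_conf[tool_id]
    if tool_id in tool_runners:
        return tool_runners[tool_id]
    return user_conf.get("default", default)

tool_runners = {"bwa": "drmaa://-p node/"}
user_runners = {"roman": {"default": "drmaa://-A a2010002 -p core/",
                          "bwa": "drmaa://-A a2010002 -p node/"}}

# Roman's per-tool entry overrides both the generic tool runner
# and his own default.
print(resolve_runner("bwa", "roman", tool_runners, user_runners, "local:///"))
```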

> I'd also been working on more complex configuration of job parameters
> that would probably make what you want to do easier, but that work is
> stalled.

Let me know where this partial work is! My changeset on this issue is:

https://bitbucket.org/brainstorm/galaxy-central/changeset/ec53f3a4d37a


PS: By the way, the per-user UNIX-style ~/.slurm_drmaa.conf works, but it
still lacks some native flags (job title, time limit, etc.)... for now (I'm
in contact with the slurm-drmaa developer).

PS2: (TODO as well) I think the drm_template should be removed
altogether from the drmaa.py runner; it defeats the whole purpose of the
DRMAA abstraction.
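
To make the point concrete, here is a minimal sketch. The JobTemplate class below is only a stand-in for a real DRMAA job template (which would come from a drmaa session on an actual cluster); the attribute names follow the DRMAA v1 convention. The idea is that everything scheduler-specific travels in nativeSpecification, so the runner needs no shell drm_template at all:

```python
# Sketch only: JobTemplate stands in for a real DRMAA job template,
# normally created via a drmaa session; attributes follow DRMAA v1 naming.
class JobTemplate:
    def __init__(self):
        self.remoteCommand = None
        self.args = []
        self.nativeSpecification = ""

def build_job_template(jt, command, args, native_spec):
    # Everything scheduler-specific (account, partition, time limit, ...)
    # goes into nativeSpecification, so the runner itself stays DRM-agnostic.
    jt.remoteCommand = command
    jt.args = list(args)
    jt.nativeSpecification = native_spec
    return jt

jt = build_job_template(JobTemplate(), "/bin/sh",
                        ["galaxy_job.sh"], "-A a2010002 -p core")
print(jt.nativeSpecification)
```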

> --nate
> 
>>
>> /Roman
>>
>> On 2011-07-19 19:53, Mariusz Mamoński wrote:
>>>> The general config file allows us to set a fixed project:
>>>>
>>>> default_cluster_job_runner = drmaa://-A a2010002 -p core
>>>>
>>>> And even set per-tool job settings. But we would like each user to have
>>>> the ability to change those settings.
>>>
>>>
>>>
>>>>
>>>>
>>>> What is the least intrusive way to set per-user native (site-specific)
>>>> job manager settings ?
>>>
>>>
>>> You may try using a user's local DRMAA configuration file:
>>>
>>> ~/.slurm_drmaa.conf
>>>
>>> "If multiple configuration sources are present then all
>>> configurations are merged with values from user-defined files taking
>>> precedence (in following order: $SLURM_DRMAA_CONF,
>>> ~/.slurm_drmaa.conf, /etc/slurm_drmaa.conf)"
>>>
>>> There you can put any user-specific settings, e.g.:
>>>
>>> job_categories: {
>>>   default: "-A a2010002 -p core",
>>> }
>>>
>>>
>>> Cheers,
>> ___________________________________________________________
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>>
>>   http://lists.bx.psu.edu/
>>