On Thu, Sep 26, 2013 at 9:10 PM, Greg Von Kuster <g...@bx.psu.edu> wrote:
> James, it seems I was answering at the same time you were, so to restate my
> question to John: I'm wondering how this will work for repositories in the
> tool shed that do not contain any tools, just tool dependency definitions or
> complex repository dependency definitions.

I am certain our European colleagues will have use cases to contribute
once they have had their coffee, but I would just as soon not get the
tool shed involved. If you want to optimize a tool for Galaxy, the best
practice would be to replace the package that the corresponding tool
sources, not to manually compile some library and have the tool shed
compile additional tools against it. Does that answer your question?

-John

>
> Thanks,
>
> Greg Von Kuster
>
> On Sep 26, 2013, at 9:49 PM, James Taylor <ja...@taylorlab.org> wrote:
>
>> Make the precedence a config option. Otherwise I agree.
>>
>> In addition, I still like the idea I suggested earlier of dependency 
>> provider plugins. Then you could (for example) have one that uses 'modules' 
>> and skips env.sh entirely.
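>>
>> A rough sketch of what such a provider plugin interface could look like
>> (the class and method names here are purely illustrative; nothing like
>> this exists in Galaxy yet):
>>
>>     class DependencyResolver(object):
>>         """Plugin interface: given a requirement, return a shell
>>         fragment that makes the package available, or None if this
>>         provider cannot satisfy it."""
>>
>>         def resolve(self, name, version, req_type="package"):
>>             raise NotImplementedError()
>>
>>     class ModulesDependencyResolver(DependencyResolver):
>>         """Hypothetical provider backed by Environment Modules; it
>>         skips env.sh files entirely."""
>>
>>         def resolve(self, name, version, req_type="package"):
>>             if req_type != "package":
>>                 return None
>>             return "module load %s/%s" % (name, version)
>>
>> Configured providers would then be consulted in a site-defined order,
>> which is where the precedence config option comes in.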
>>
>> On Sep 26, 2013, at 9:15 PM, John Chilton <chil...@msi.umn.edu> wrote:
>>
>>> I was not even thinking we needed to modify the tool shed to implement
>>> this. I was hoping (?) you could just modify:
>>>
>>> lib/galaxy/tools/deps/__init__.py
>>>
>>> to implement this. If some tool contains the tag
>>>
>>> <requirement type="package" version="1.7.1">numpy</requirement>
>>>
>>> then if there is a manually installed tool dependency at
>>> `tool_dependency_dir/numpy/1.7.1/env.sh`, it would take precedence
>>> over the tool shed installed version (which would be something like
>>> `tool_dependency_dir/numpy/1.7.1/owner/name/changeset/env.sh`, right?).
>>> Let me know if this is way off base.
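>>>
>>> A minimal sketch of the lookup order I mean (the function and the walk
>>> over tool shed style paths are illustrative assumptions, not the actual
>>> contents of lib/galaxy/tools/deps/__init__.py):
>>>
>>>     import os
>>>
>>>     def find_env_file(base_path, name, version):
>>>         """Prefer a manually installed env.sh over any tool shed
>>>         installed copy of the same package/version."""
>>>         manual = os.path.join(base_path, name, version, "env.sh")
>>>         if os.path.exists(manual):
>>>             return manual
>>>         # Fall back to a tool shed installed copy, e.g.
>>>         # <base>/<name>/<version>/<owner>/<repo>/<changeset>/env.sh
>>>         version_dir = os.path.join(base_path, name, version)
>>>         for root, _dirs, files in os.walk(version_dir):
>>>             if "env.sh" in files:
>>>                 return os.path.join(root, "env.sh")
>>>         return None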
>>>
>>> There is a lot you could do to make this more complicated, of course -
>>> an interface for mapping exact tool shed dependencies to manually
>>> installed ones, the ability to auto-compile tool shed dependencies
>>> against manually installed libraries, etc. - but I am not sure those
>>> complexities would really buy you anything.
>>>
>>> Thoughts?
>>>
>>> -John
>>>
>>>
>>>
>>> On Thu, Sep 26, 2013 at 5:47 PM, Greg Von Kuster <g...@bx.psu.edu> wrote:
>>>> Hi John,
>>>>
>>>> On Sep 26, 2013, at 5:27 PM, John Chilton <chil...@msi.umn.edu> wrote:
>>>>
>>>>> My recommendation would be to make the tool dependency install work on
>>>>> as many platforms as you can and not try to optimize in such a way that
>>>>> it is not going to work - i.e. favor reproducibility over performance.
>>>>> If a system administrator or institution wants to sacrifice
>>>>> reproducibility and optimize specific packages, they should be able to
>>>>> do so manually. It's not just ATLAS and CPU throttling, right? It's
>>>>> vendor versions of MPI, GPGPU variants of code, variants of OpenMP,
>>>>> etc. Even if the tool shed provided some mechanism for determining
>>>>> whether a particular package optimization is going to work, perhaps it's
>>>>> better not to enable it by default, because these optimizations
>>>>> frequently produce slightly different results than the unoptimized
>>>>> versions.
>>>>>
>>>>> The problem with this recommendation is that Galaxy currently provides
>>>>> no mechanism for doing so. Luckily this is easy to solve, and the
>>>>> solution solves other problems as well. If the tool dependency
>>>>> resolution code grabbed the manually configured dependency instead of
>>>>> the tool shed variant when one is available, instead of favoring the
>>>>> opposite, then it would be really easy to add an optimized version of
>>>>> numpy or an MPI version of software X.
>>>>
>>>> How would you like this to happen?  Would it work to provide an admin the 
>>>> ability to create a ToolDependency object and point it to a "manually 
>>>> configured dependency" in whatever location on disk the admin chooses via 
>>>> a new UI feature?  Or do you have a different idea?
>>>>
>>>> Thanks,
>>>>
>>>> Greg Von Kuster
>>>>
>>>>
>>>>>
>>>>> What's great is that this solves other problems as well. For instance,
>>>>> our genomics Galaxy web server runs Debian but the worker nodes run
>>>>> CentOS. This means many tool shed installed dependencies do not work.
>>>>> JJ, being the patient guy he is, goes in and manually updates the tool
>>>>> shed installed env.sh files to load modules. Even if you think not
>>>>> running the same OS version on your server and worker nodes is a bit
>>>>> crazy, there is the much more reasonable (common) case of just wanting
>>>>> to submit to multiple different clusters. When I was talking with the
>>>>> guys at NCGAS they were unsure how to do this; this one change would
>>>>> make it a lot more tenable.
>>>>>
>>>>> -John
>>>>>
>>>>> On Thu, Sep 26, 2013 at 1:29 PM, Björn Grüning
>>>>> <bjoern.gruen...@pharmazie.uni-freiburg.de> wrote:
>>>>>> Hi,
>>>>>>
>>>>>>> Hi Bjoern,
>>>>>>>
>>>>>>> Is there anything else we (the Galaxy community) can do to help
>>>>>>> sort out the ATLAS installation problems?
>>>>>>
>>>>>> Thanks for asking. I do indeed have a few things I would like
>>>>>> comments on.
>>>>>>
>>>>>>> Another choice might be to use OpenBLAS instead of ATLAS, e.g.
>>>>>>> http://stackoverflow.com/questions/11443302/compiling-numpy-with-openblas-integration
>>>>>>
>>>>>> I have no experience with it. Does it also require CPU throttling to
>>>>>> be turned off? I would assume so; otherwise, how does it optimize
>>>>>> itself?
>>>>>>
>>>>>>> However, I think we build NumPy without using ATLAS or any
>>>>>>> BLAS library. That seems like the most pragmatic solution
>>>>>>> in the short term - which I think is what Dan tried here:
>>>>>>> http://testtoolshed.g2.bx.psu.edu/view/blankenberg/package_numpy_1_7
>>>>>>
>>>>>> I can remove them if that is the consensus.
>>>>>>
>>>>>> A few points:
>>>>>> - Fixing the ATLAS issue can speed up numpy, scipy, and R considerably
>>>>>> (by 400% in some cases).
>>>>>> - As far as I understand, the performance gain comes from ATLAS
>>>>>> optimizing itself for the specific hardware; for ATLAS there is no way
>>>>>> around disabling CPU throttling (how about OpenBLAS?).
>>>>>> - It seems to be complicated to deactivate CPU throttling on OS X.
>>>>>> - Binary installation does not make sense in that case, because ATLAS
>>>>>> is self-optimizing.
>>>>>> - Distribution-shipped ATLAS packages are not really faster.
>>>>>>
>>>>>> Current state:
>>>>>> - The ATLAS package tries two different commands to deactivate CPU
>>>>>> throttling (see the sketch after this list for what that typically
>>>>>> involves). AFAIK that only works on some Ubuntu versions, where no
>>>>>> root privileges are necessary.
>>>>>> - If ATLAS fails for some reason, the numpy/R/scipy installation should
>>>>>> not be affected (that was at least the aim).
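>>>>>>
>>>>>> For reference, on most Linux systems deactivating CPU throttling means
>>>>>> switching the cpufreq governor to "performance". A minimal sketch,
>>>>>> assuming the sysfs cpufreq interface (writing there normally requires
>>>>>> root, which is exactly why it fails for an unprivileged tool shed
>>>>>> install):
>>>>>>
>>>>>>     import glob
>>>>>>
>>>>>>     for path in glob.glob(
>>>>>>             "/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
>>>>>>         with open(path, "w") as fh:
>>>>>>             fh.write("performance")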
>>>>>>
>>>>>> Questions:
>>>>>> - Is it worth the hassle for some speed improvement? A plain
>>>>>> `pip install numpy` would be so much easier.
>>>>>>
>>>>>> - If we want to support ATLAS, any better idea of how to implement it?
>>>>>> Any Tool Shed feature that could help? -> interactive installation?
>>>>>>     - Can we flag a tool dependency as optional, so it is allowed to fail?
>>>>>>
>>>>>> - Can anyone help with testing and fixing it?
>>>>>>
>>>>>>
>>>>>> Any opinions/comments?
>>>>>> Bjoern
>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Peter
>>>>>>
>>>>>
>>>
>>
>

___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
