[galaxy-dev] keeping the original file name from the FTP pop-up
Hi, I am using third-party software to upload huge files (> 100 GB) into Galaxy which emulates the FTP upload procedure. The files come up in the FTP upload pop-up as long id-strings (see attached), which are then used as the file name in the history panel. Displaying the original file name in the pop-up was easy (see attached - Real Name column), but propagating it to the history panel has been more challenging :) I finally found a way to keep the original file name in the history panel, but the method does not look very elegant. At the same time, I discovered that the file name can be explicitly set by using the param NAME in ../data_source/upload.xml. I need to push it to the tool inputs from the FTP pop-up but I can't find a way to do it. Has anybody had any experience with this? Best regards Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo ___ Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: https://lists.galaxyproject.org/ To search Galaxy mailing lists use the unified search at: http://galaxyproject.org/search/
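For reference, the upload tool's NAME parameter can in principle also be set when driving the upload through Galaxy's tool API rather than the pop-up. A minimal sketch of building such a request payload - the history id, FTP file id-string, and display name below are placeholders, and the exact `files_0|...` parameter names should be double-checked against your Galaxy version:

```python
# Sketch: build a payload for Galaxy's upload tool ("upload1") that selects
# an FTP file and sets NAME explicitly, so the history item keeps the
# original file name instead of the id-string. All values are placeholders.

def build_upload_payload(history_id, ftp_file, display_name):
    """Payload intended for POST <galaxy_url>/api/tools with an API key."""
    return {
        "tool_id": "upload1",
        "history_id": history_id,
        "inputs": {
            "files_0|ftp_files": ftp_file,   # id-string as shown in the FTP pop-up
            "files_0|NAME": display_name,    # name to display in the history panel
            "files_0|type": "upload_dataset",
        },
    }

payload = build_upload_payload("f2db41e1fa331b3e", "1a2b3c4d5e.dat", "sample_R1.fastq.gz")
# This dict would then be POSTed, e.g. with
# requests.post(galaxy_url + "/api/tools", json=payload, params={"key": api_key})
```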
Re: [galaxy-dev] Question about Galaxy integration with external access control
You might consider a secure login for users affiliated with other institutions than your lab as well. Then you can implement a redirection to a set of IdPs with delegated permissions to authenticate users against your LDAP but also against many other LDAPs. Feel free to come up with questions about this solution. Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: galaxy-dev on behalf of Dannon Baker Sent: 29 September 2016 17:40 To: Simon Chang Cc: galaxy-dev@lists.galaxyproject.org Subject: Re: [galaxy-dev] Question about Galaxy integration with external access control Hi Simon, On Thu, Sep 29, 2016 at 11:22 AM, Simon Chang wrote: 1) Assuming Galaxy can read LDAP directory service information, to what extent is access control enforced? Is it on a file system level? The 'galaxy' user, or whichever user is running Galaxy, owning the files is the normal way to handle this, with other system users not being able to access Galaxy-owned files directly. 2) If a researcher logs into Galaxy with his LDAP credentials, runs some analyses and obtains the results, how exactly are these results protected from other researchers who may be prohibited from accessing these results due to institutional policies? Accordingly, if a researcher wants to share the data product with another LDAP user, how is that done exactly apart from simply downloading and emailing it? Check out https://wiki.galaxyproject.org/Learn/Share for more information about Galaxy's sharing abilities, and certainly feel free to ask more questions. In short, there are systems built into Galaxy that allow users to share (or secure) Galaxy objects within the framework. -Dannon
[galaxy-dev] trying to add a new controller to webapps/galaxy/controllers
Hi, I am trying to add a new controller (an extension of the User class, living in the same directory as user.py) to Galaxy. I am trying to call it from within user/login.mako by setting: form_action = h.url_for( controller='NEWCONTROLLER', action='login', use_panels=use_panels ) but Galaxy refuses to see it. In the paster.log I see that the controller is enabled. It is completely identical to user.py, just a couple of modified functions bearing the same names. Is there something more to set up to make it accessible from the mako file? My Galaxy version is 16.04. Thank you Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo
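If memory serves, Galaxy registers UI controllers under the *module* name (not the class name) when it scans the controllers directory for `BaseUIController` subclasses at startup, so `h.url_for( controller='NEWCONTROLLER', ... )` would only resolve if the new class lives in a module named accordingly - worth double-checking against your Galaxy version. A much-simplified, self-contained model of that discovery step (all names here are hypothetical, not Galaxy's actual code):

```python
# Simplified model of Galaxy's controller discovery: each module in
# webapps/galaxy/controllers is registered under its *module* name, and
# url_for(controller=...) resolves against that registry.
import types

class BaseUIController:  # stand-in for Galaxy's real base class
    pass

def register_controllers(modules):
    """Map module name -> controller class for every BaseUIController subclass."""
    registry = {}
    for mod in modules:
        for obj in vars(mod).values():
            if (isinstance(obj, type)
                    and issubclass(obj, BaseUIController)
                    and obj is not BaseUIController):
                registry[mod.__name__] = obj
    return registry

# A hypothetical new controller living in a module named "newcontroller"
newcontroller = types.ModuleType("newcontroller")

class NewUser(BaseUIController):
    def login(self):
        return "custom login"

newcontroller.NewUser = NewUser

registry = register_controllers([newcontroller])
# 'newcontroller' (the module name) is the key that url_for can resolve;
# 'NewUser' (the class name) is not in the registry at all.
```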
[galaxy-dev] jupyter + apache proxy + SSL
Hi, I am trying to set up a Galaxy server with the IE jupyter behind an Apache proxy with SSL enabled. Does anybody have a similar setup? I am using

== galaxy.ini
dynamic_proxy_manage=False ('True' gives the same error)
dynamic_proxy=node
dynamic_proxy_session_map=database/session_map.sqlite
#dynamic_proxy_bind_port=8800
#dynamic_proxy_bind_ip=0.0.0.0
dynamic_proxy_external_proxy=True
dynamic_proxy_prefix=gie_proxy

== httpd.conf
ProxyPass /gie_proxy/jupyter/ipython/api/kernels ws://localhost:8800/gie_proxy/jupyter/ipython/api/kernels
<VirtualHost _default_:80>
RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

It works fine with HTTP but not HTTPS and, naturally, I get: jquery.js:10261 Mixed Content: The page at 'https://mydomain.xx/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://mydomain.xx/gie_proxy/jupyter/ipython/login?next=%2Fgie_proxy%2Fjupyter%2Fipython%2Ftree'. This request has been blocked; the content must be served over HTTPS. Is there a way to modify the container config so that it runs over HTTPS, or is there a Galaxy configuration routine I can use? Thank you Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo
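For what it's worth, one common way around the mixed-content block is to terminate SSL in Apache and proxy both the websocket and the remaining GIE traffic from inside the :443 virtual host, forwarding the client-facing scheme to the backend. A hedged sketch only - certificate paths, host names, and the proxy port are placeholders, and it assumes mod_ssl, mod_proxy, mod_proxy_wstunnel, and mod_headers are loaded:

```apache
# Sketch of an HTTPS vhost fronting the GIE node proxy (all paths/hosts are placeholders)
<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/mydomain.xx.crt
    SSLCertificateKeyFile /etc/pki/tls/private/mydomain.xx.key

    # Websocket traffic for Jupyter kernels, proxied inside the SSL vhost
    ProxyPass /gie_proxy/jupyter/ipython/api/kernels ws://localhost:8800/gie_proxy/jupyter/ipython/api/kernels

    # Remaining GIE traffic to the node proxy over plain HTTP on localhost
    ProxyPass /gie_proxy http://localhost:8800/gie_proxy
    ProxyPassReverse /gie_proxy http://localhost:8800/gie_proxy

    # Tell the backend that the client-facing scheme is https
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>
```

With this shape, the browser only ever sees https:// and wss:// URLs, so the mixed-content check is never triggered.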
Re: [galaxy-dev] Galaxy sending jobs to multiple clusters
Thank you, Nate, Then, to be on the safe side, the recipe will be (including multicluster support): ON SUBMIT HOST: 1) Pull Nate's version of slurm-drmaa and compile 2) Recompile the Slurm client using your patch (on the submit hosts) and use it instead of the old one containing the unmodified libslurmdb.so No changes to Slurm on the controllers are needed Thank you very much again !! PS :) I'll have to merge your slurm-drmaa and mine, I have also ripped it off a bit :):) Best regards Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: Nate Coraor <n...@bx.psu.edu> Sent: 03 February 2016 22:19 To: Nikolay Aleksandrov Vazov Cc: Ganote, Carrie L; John Chilton; dannon.ba...@gmail.com; galaxy-dev@lists.galaxyproject.org Subject: Re: Galaxy sending jobs to multiple clusters On Wed, Feb 3, 2016 at 10:09 AM, Nikolay Aleksandrov Vazov <n.a.va...@usit.uio.no> wrote: Hi, Nate, Yes, we are using slurmdbd here. So by controllers, if I get it right, you mean the controller machines of each cluster which shall connect to (share) the same slurmdbd. Correct. And a last one: In your github version you say about using Slurm >= 14.11. We are running Slurm 14.03 and we shall have to recompile it. Do you mean by recompilation that we have to recompile both the server Slurm part and the client Slurm on the submit hosts? This is a bit tricky. If you don't need to specify multiple clusters when submitting (e.g. with `--clusters=cluster1,cluster2`) then it works like this: You will need to recompile Slurm with the patch shown on Github. You don't actually have to install this recompiled version if you don't want to, you only need to make sure that slurm-drmaa uses this recompiled version's libslurmdb.so at runtime. There are a variety of ways to do this, I list a couple in the instructions.
Or if you don't mind using this modified version in place of your existing version, you can just install the modified version so that the "default" libslurmdb.so is compatible. In that case, no runtime linker tricks should be necessary. You do not have to compile slurm-drmaa against the modified version. If you *do* need multicluster submission support, you have to compile slurm-drmaa against a copy of the (unmodified) Slurm source code for access to the private headers contained within. Once done, it works the same as above - you still need to compile a modified libslurmdb.so. The version of libslurmdb.so used by slurm-drmaa at runtime is the key. In both cases, this only needs to be done on the submission host. No (controller) server modifications are necessary. Best regards Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: Nate Coraor <n...@bx.psu.edu> Sent: 03 February 2016 15:56 To: Nikolay Aleksandrov Vazov Cc: Ganote, Carrie L; John Chilton; dannon.ba...@gmail.com; galaxy-dev@lists.galaxyproject.org Subject: Re: Galaxy sending jobs to multiple clusters On Tue, Feb 2, 2016 at 9:18 AM, Nikolay Aleksandrov Vazov <n.a.va...@usit.uio.no> wrote: Many thanks to all of you!! Definitely Nate's approach is a better choice. We are running Slurm 14.03, but Nate's manual is exhaustive enough to recompile even the existing version. (I don't know how we can do this on a running cluster though :) I will most probably go for this solution. There is a sentence in Nate's answer I don't really understand: "... using `--clusters` means you have to have your controllers integrated using slurmdbd, ..." what do you mean by this, Nate? You have to run slurmdbd (it's optional) and your slurm controllers must connect to a single slurmdbd instance. This is Slurm's accounting server.
Here's the documentation: http://slurm.schedmd.com/accounting.html The setup is relatively simple, you just need to have a MySQL (or derivative) server for it to store records in. --nate Carrie, I don't actually get how you implemented the hack: did you reduplicate the class DRMAAJobRunner under a different name in drmaa.py? And where do you define every next cluster (controller machines)? Can you give me some more details? Thank you Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: Nate Coraor <n...@bx.psu.edu> Sent: 01 February 2016 17:28 To: Ganote, Carrie L Cc: John Chilton; Nikolay Aleksandrov Vazov; dannon.ba...@gmail.com; galaxy-dev@lists.galaxyproject.org
Re: [galaxy-dev] Galaxy sending jobs to multiple clusters
Hi, Nate, Yes, we are using slurmdbd here. So by controllers, if I get it right, you mean the controller machines of each cluster which shall connect to (share) the same slurmdbd. And a last one: In your github version you say about using Slurm >= 14.11. We are running Slurm 14.03 and we shall have to recompile it. Do you mean by recompilation that we have to recompile both the server Slurm part and the client Slurm on the submit hosts? Best regards Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: Nate Coraor <n...@bx.psu.edu> Sent: 03 February 2016 15:56 To: Nikolay Aleksandrov Vazov Cc: Ganote, Carrie L; John Chilton; dannon.ba...@gmail.com; galaxy-dev@lists.galaxyproject.org Subject: Re: Galaxy sending jobs to multiple clusters On Tue, Feb 2, 2016 at 9:18 AM, Nikolay Aleksandrov Vazov <n.a.va...@usit.uio.no> wrote: Many thanks to all of you!! Definitely Nate's approach is a better choice. We are running Slurm 14.03, but Nate's manual is exhaustive enough to recompile even the existing version. (I don't know how we can do this on a running cluster though :) I will most probably go for this solution. There is a sentence in Nate's answer I don't really understand: "... using `--clusters` means you have to have your controllers integrated using slurmdbd, ..." what do you mean by this, Nate? You have to run slurmdbd (it's optional) and your slurm controllers must connect to a single slurmdbd instance. This is Slurm's accounting server. Here's the documentation: http://slurm.schedmd.com/accounting.html The setup is relatively simple, you just need to have a MySQL (or derivative) server for it to store records in. --nate Carrie, I don't actually get how you implemented the hack: did you reduplicate the class DRMAAJobRunner under a different name in drmaa.py? And where do you define every next cluster (controller machines)? Can you give me some more details?
Thank you Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo From: Nate Coraor <n...@bx.psu.edu> Sent: 01 February 2016 17:28 To: Ganote, Carrie L Cc: John Chilton; Nikolay Aleksandrov Vazov; dannon.ba...@gmail.com; galaxy-dev@lists.galaxyproject.org Subject: Re: Galaxy sending jobs to multiple clusters Hi Nikolay, It's worth noting that using `--clusters` means you have to have your controllers integrated using slurmdbd, and they must share munge keys. You can set up separate destinations as in Carrie's example without having to "integrate" your controllers at the slurm level. The downside of this approach is that you can't have slurm automatically "balance" across clusters, although Slurm's algorithm for doing this with `--clusters` is fairly primitive. If you don't use `--clusters` you can attempt to do the balancing with a dynamic job destination. If you're not using slurmdbd, you may still need to share the same munge key across clusters to allow the slurm client lib on the Galaxy server to talk to both clusters. There could be ways around this if it's a problem, though. --nate On Mon, Feb 1, 2016 at 11:10 AM, Ganote, Carrie L <cgan...@iu.edu> wrote: Hi Nikolay, The slurm branch that John mentioned sounds great! That might be your best bet. I didn't get drmaa to run with multiple clusters with flags, but I did 'assign' different job handlers to different destinations in the drmaa.py runner in Galaxy - but that is a bit of a hacky way to do it.
-Carrie From: John Chilton <jmchil...@gmail.com> Date: Monday, February 1, 2016 at 11:02 AM To: Nikolay Aleksandrov Vazov <n.a.va...@usit.uio.no> Cc: "dannon.ba...@gmail.com" <dannon.ba...@gmail.com>, "galaxy-dev@lists.galaxyproject.org" <galaxy-dev@lists.galaxyproject.org>, Carrie Ganote <cgan...@iu.edu>, Nate Coraor <n...@bx.psu.edu> Subject: Re: Galaxy sending jobs to multiple clusters Nate has a branch of slurm-drmaa that allows specifying a --clusters argument in the native specification; this can be used to target multiple hosts. More information can be found here: https://github.com/natefoo/slurm-drmaa Here is how Nate uses it to configure usegalaxy.org: https://github.com/galaxyproject/usegalaxy-playbook/blob/master/templates/gal
[galaxy-dev] Galaxy sending jobs to multiple clusters
Hi, John, Dan, Carrie and all others, I am considering the task of setting up a Galaxy instance which shall send jobs to more than one cluster at a time. In my case I am using drmaa-python and I was wondering if it was possible to configure multiple drmaa runners each "pointing" at a different (slurm) control host, e.g. local, drmaa1, drmaa2 Thanks a lot for your advice Nikolay === Nikolay Vazov, PhD Department for Research Computing, University of Oslo
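Combined with the `--clusters` approach discussed later in this thread, one plausible shape for this is a single DRMAA runner with per-destination native specifications. A hedged sketch of a job_conf.xml only - it assumes DRMAA_LIBRARY_PATH in Galaxy's environment points at the patched slurm-drmaa libdrmaa.so, and all ids, cluster names, and flags are placeholders to adapt:

```xml
<?xml version="1.0"?>
<!-- Sketch: one slurm-drmaa runner, two destinations targeting different
     Slurm clusters via the nativeSpecification. Placeholder values throughout. -->
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <destinations default="cluster1">
        <destination id="cluster1" runner="drmaa">
            <param id="nativeSpecification">--clusters=cluster1 --partition=normal</param>
        </destination>
        <destination id="cluster2" runner="drmaa">
            <param id="nativeSpecification">--clusters=cluster2 --partition=normal</param>
        </destination>
    </destinations>
</job_conf>
```

Tools can then be mapped to either destination (or to a dynamic destination that balances between them), without the clusters being integrated beyond the shared slurmdbd/munge requirements Nate describes.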
[galaxy-dev] Please post a Galaxy vacancy at the University of Oslo
Opening @ University of Oslo: https://wiki.galaxyproject.org/News/UiODeveloperBioinformatician The University Center for Information Technology (USIT) (http://www.usit.uio.no/english/) at the University of Oslo has an opening for a developer / bioinformatician (http://uio.easycruit.com/vacancy/1347839/69316?iso=no) (announcement: Norwegian http://uio.easycruit.com/vacancy/1347839/69316?iso=no, English https://translate.google.com/translate?sl=no&tl=en&u=http%3A%2F%2Fuio.easycruit.com%2Fvacancy%2F1347839%2F69316). The position includes the development and management of services aimed at life science, with a particular focus on bioinformatics and our research portals that offer supercomputing via a simpler user interface. The position will maintain and implement solutions on a local Galaxy server. The position is permanent, and group communication is in English. N.B. The application deadline is March 12, 2015. The link to the application is here: http://uio.easycruit.com/vacancy/1347839/69316?iso=no Sorry, the text is only in Norwegian, please use Google Translate :) Don't hesitate to get in touch with the contact persons in the announcement or Nikolay Vazov (n.a.va...@usit.uio.no) from the DevTeam at the University of Oslo Nikolay Vazov (http://www.usit.uio.no/english/about/organisation/bps/rc/rss/staff/nikolaiv/index.html) === Nikolay Vazov, PhD Department for Research Computing, University of Oslo