Re: [galaxy-dev] ProFTPD with SQLite

2015-11-19 Thread Mic
Hi Bjoern,
Thank you. I changed to PostgreSQL and ProFTPD and it works.

Cheers,
Mic

On Fri, Nov 13, 2015 at 12:57 AM, Björn Grüning 
wrote:

> Hi Mic,
>
> you can always dive into a container and get access to everything.
> Please read about `docker exec`, also described in our readme file.
>
> cheers,
> Bjoern
>
> Am 12.11.2015 um 15:24 schrieb Mic:
> > Hi Bjoern,
> > Thank you for your email. I have just started to build a new Docker
> > container based on bgruening/galaxy-stable:15.07. It seems that 15.10
> > has not yet been included.
> >
> > The problem I have with the Docker container is that if a tool from the
> > toolshed fails to install, I cannot get access to INSTALLATION.log.
> > If I press the repair repository button, NGINX complains with an error
> > 400. Therefore, I have also started trying to convert your Dockerfiles
> > to a VM image.
> >
> > How do you handle broken tools, and how do you repair them in a Docker
> > container? I really like your Dockerfiles.
> >
> > Thank you in advance.
> >
> > Mic
> >
> >
> > On Thu, Nov 12, 2015 at 9:42 PM, Björn Grüning <
> bjoern.gruen...@gmail.com>
> > wrote:
> >
> >> Hi Mic,
> >>
> >> Am 12.11.2015 um 01:22 schrieb Mic:
> >>> Hello,
> >>> I installed all the tools from the toolshed which I needed for Galaxy
> >>> (15.10) and noticed that I cannot upload files bigger than 2 GB to
> >>> Galaxy without an FTP server.
> >>>
> >>> I installed on Ubuntu 14.04 ProFTPD in the following way:
> >>>
> >>> sudo aptitude install proftpd proftpd-mod-sqlite sqlite3
> >>>
> >>> Unfortunately, I do not know how to connect the ProFTPD with Galaxy's
> >>> SQLite.
> >>
> >> ProFTPD is an FTP server and SQLite is a database; you cannot connect
> >> the one to the other.
> >>
> >> Are you still using the Galaxy Docker? If so, we have already installed
> >> an FTP server for you and configured it so Galaxy can talk to the FTP
> >> server.
> >>
> >> Please see the documentation about the Docker FTP server:
> >> https://github.com/bgruening/docker-galaxy-stable
> >>
> >> To transfer data from your client to Galaxy you only need an FTP client.
> >> Have a look at FileZilla as an FTP client.
> >>
> >> Cheers,
> >> Bjoern
> >>
> >>> Thank you in advance.
> >>>
> >>> Mic
> >>>
> >>>
> >>>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
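As a footnote to the FTP discussion above: the upload itself can be scripted with Python's standard ftplib instead of a desktop client like FileZilla. A minimal sketch only — the host name, credentials, and file name below are placeholders for your own Galaxy server and login, and the 2 GB figure mirrors the web-upload limit mentioned in the thread:

```python
import ftplib
import os

TWO_GB = 2 * 1024 ** 3  # the web-upload limit mentioned in the thread

def needs_ftp(path, limit=TWO_GB):
    """True when a file exceeds the web-upload size limit."""
    return os.path.getsize(path) >= limit

def ftp_upload(path, host, user, password):
    """Stream a file to the Galaxy FTP server in binary mode; it then
    appears under Upload File -> FTP files for that user."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(path, "rb") as handle:
            ftp.storbinary("STOR " + os.path.basename(path), handle)

if __name__ == "__main__":
    # Placeholder host, credentials and file name -- substitute your own.
    big = "reads.fastq.gz"
    if os.path.exists(big) and needs_ftp(big):
        ftp_upload(big, "galaxy.example.org", "user@example.org", "secret")
```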

Re: [galaxy-dev] rgFastQC.py error - cannot find executable /fastqc

2015-11-19 Thread Mic
Hi all,
Thank you for all the responses. I set up Galaxy, PostgreSQL and ProFTPD from
scratch and installed all the tools. Now everything is working.

Thank you in advance.

Best wishes,

Mic

On Tue, Nov 17, 2015 at 2:23 PM, Mic  wrote:

> Hi Eric,
> I compared your two files to mine and noticed that FASTQC_ROOT_DIR
> was missing, but after adding it, I still get the same error.
>
> cat
> ~/galaxy/tool_dependency/FastQC/0.11.2/devteam/fastqc/2d094334f61e/env.sh
> if [ -f
> /home/lorencm/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
> ] ;
> then .
> /home/lorencm/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
> ; fi
>
> cat
> ~/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
>
> PATH=/home/lorencm/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0:$PATH;
> export PATH
> FASTQC_JAR_PATH=/home/lorencm/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0;
> export FASTQC_JAR_PATH
> FASTQC_ROOT_DIR=/home/lorencm/galaxy/tool_dependency/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0;
> export FASTQC_ROOT_DIR
>
> I would have removed FastQC already and installed it again, but after I
> migrated to PostgreSQL, I am not able to change my API key back to the
> previous one. Do you have any idea how I can change it in PostgreSQL
> manually?
>
> What am I missing?
>
> Thank you in advance.
>
> Mic
>
> On Tue, Nov 17, 2015 at 1:16 PM, Eric Rasche  wrote:
>
>> ##rgFastQC.py error - cannot find executable /fastqc
>>
>> That specific error is caused by the environment variable
>> $FASTQC_JAR_PATH/$FASTQC_ROOT_DIR being empty. I suggest checking to see
>> that all the correct environment files are loadable. I had this issue
>> recently when I moved my Galaxy installation and FastQC failed exactly
>> like that.
>>
>> # cat tool_dependencies/FastQC/0.11.2/devteam/fastqc/0b201de108b9/env.sh
>> if [ -f
>> /galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
>> ] ;
>>   then .
>> /galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
>> ;
>> fi
>>
>> In that file, the env.sh file mentioned should be accessible. When I
>> moved my Galaxy, the env.sh file originally had a "/home/galaxy" prefix
>> and was inaccessible, causing a failure to load. You may have that, or
>> some other issue that's causing the file to be inaccessible and the
>> environment variables contained within not to be loaded:
>>
>> # cat
>> /galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0/env.sh
>> PATH=/galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0:$PATH;
>> export PATH
>> FASTQC_JAR_PATH=/galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0;
>> export FASTQC_JAR_PATH
>> FASTQC_ROOT_DIR=/galaxy/tool_dependencies/FastQC/0.11.2/devteam/package_fastqc_0_11_2/4b65f6e39cb0;
>> export FASTQC_ROOT_DIR
>>
>>
>> On 11/16/2015 09:10 PM, Mic wrote:
>> > Hi Bjoern,
>> > Yes, I do:
>> >
>> > > java -version
>> > java version "1.7.0_85"
>> > OpenJDK Runtime Environment (IcedTea 2.6.1)
>> (7u85-2.6.1-5ubuntu0.14.04.1)
>> > OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)
>> >
>> > Can it be related to my API key having changed while migrating to
>> > PostgreSQL? I tried to put the old one back in galaxy.ini, but
>> > Galaxy does not pick it up from master_api_key.
>> >
>> > Thank you in advance.
>> >
>> > Mic
>> >
>> >
>> >
>> > On Tue, Nov 17, 2015 at 12:28 PM, Björn Grüning
>> > mailto:bjoern.gruen...@gmail.com>> wrote:
>> >
>> > Hi Mic,
>> >
>> > do you have Java installed?
>> >
>> > cheers,
>> > Bjoern
>> >
>> > Am 17.11.2015 um 02:05 schrieb Mic:
>> > > Hello,
>> > > I installed FastQC
>> > > (~/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/fastqc/2d094334f61e/FastQC).
>> > > However, I got the following error while running it:
>> > >
>> > >
>> > > Fatal error: Exit code 1 ()
>> > > Traceback (most recent call last):
>> > >   File "/home/lorencm/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/fastqc/2d094334f61e/fastqc/rgFastQC.py",
>> > > line 157, in <module>
>> > >     assert os.path.isfile(opts.executable), '##rgFastQC.py error - cannot find executable %s' % opts.executable
>> > > AssertionError: ##rgFastQC.py error - cannot find executable /fastqc
>> > >
>> > > The FastQC package seems to be correctly installed:
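Eric's diagnosis above — a chained env.sh whose referenced file no longer exists after a move — can be checked mechanically. A small sketch; the regex assumes env.sh files shaped like the ones quoted in this thread, and `missing_env_files` is an illustrative helper, not part of Galaxy:

```python
import os
import re

def missing_env_files(env_sh_path):
    """Return every env.sh path referenced from the given env.sh that
    does not exist on disk -- the situation described above, where a
    moved Galaxy leaves a stale absolute path behind and the FastQC
    environment variables silently fail to load."""
    with open(env_sh_path) as handle:
        text = handle.read()
    referenced = re.findall(r"(/[^\s;\]]+/env\.sh)", text)
    return sorted({path for path in referenced if not os.path.isfile(path)})
```

Running it over the top-level env.sh of a failing tool quickly shows which link in the chain is broken.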

Re: [galaxy-dev] API key change manually

2015-11-19 Thread Mic
Hi all,
Thank you for all the responses. I set up Galaxy, PostgreSQL and ProFTPD from
scratch and now everything is working.

Thank you in advance.

Best wishes,

Mic

On Tue, Nov 17, 2015 at 11:21 PM, Hans-Rudolf Hotz  wrote:

> Hi Mic
>
> You said you had migrated your "Galaxy installation to Postgresql"?
>
> So why did you lose your API key? I assume by 'losing' you mean: it is no
> longer valid, don't you?
>
> Have you forgotten to migrate the "api_keys" table? All the keys are
> stored there.
>
>
> Hans-Rudolf
>
>
> On 11/17/2015 02:10 PM, Gildas Le Corguillé wrote:
>
>> Hello,
>>
>> I don't know your constraints but you can easily generate a new one:
>>   - User
>>   - API Keys
>>   - Generate a new key now
>>
>> Gildas
>>
>>
>> Le 17/11/2015 13:11, Mic a écrit :
>>
>>> Hello,
>>> After I migrated my Galaxy installation to PostgreSQL, my API key got
>>> lost. I still know it and would like to add it to the database, but I
>>> am not quite sure how to do it.
>>>
>>> How is it possible to add an API key to the database?
>>>
>>> Thank you in advance.
>>>
>>> Mic
>> --
>> -
>> Gildas Le Corguillé - Bioinformatician/Bioanalyste
>> Plateforme ABiMS (Analyses and Bioinformatics for Marine Science)
>>
>> Station Biologique de Roscoff - UPMC/CNRS - FR2424
>> Place Georges Teissier 29680 Roscoff FRANCE
>> tel: +33 2 98 29 23 81
>> http://abims.sb-roscoff.fr
>> --
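Restoring a known key by hand comes down to inserting a row into the api_keys table Hans-Rudolf mentions. A sketch only: the column names (create_time, user_id, key) are assumptions based on the stock Galaxy schema, and the "newest row wins" behavior is also an assumption — verify both against your own database and Galaxy version. sqlite3 stands in for PostgreSQL here; with psycopg2 the placeholders would be `%s` instead of `?`:

```python
import sqlite3
from datetime import datetime

def restore_api_key(conn, user_id, old_key):
    """Insert a known API key for a user into Galaxy's api_keys table.
    Column names are assumed from the stock Galaxy schema; Galaxy is
    assumed to treat the newest row per user as the active key."""
    conn.execute(
        "INSERT INTO api_keys (create_time, user_id, key) VALUES (?, ?, ?)",
        (datetime.utcnow().isoformat(), user_id, old_key),
    )
    conn.commit()
```

Gildas's advice still stands: generating a fresh key from the UI is the safer option when nothing depends on the old value.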

Re: [galaxy-dev] Error installing tools from toolshed

2015-11-19 Thread Nicola Soranzo
Hi Scott,
which Galaxy release are you using?
This is a bug which should be fixed in an updated release_15.07 git branch. 

Cheers, 
Nicola 

Scott Szakonyi wrote:


Re: [galaxy-dev] Planemo 0.20.0 and xunit

2015-11-19 Thread Tiago Antao
On Thu, 19 Nov 2015 20:35:42 +
John Chilton  wrote:

> The latest development release of Galaxy which planemo targets by
> default requires virtualenv to be available. Can you verify that it is
> not available on your machine, install it, and try again? I will try
> to have planemo give a clearer error in this case if it is indeed the
> problem.


I created a new environment from scratch and installed virtualenv.
That did it.
Thanks

Re: [galaxy-dev] Tool code for symlinking a data collection from input to output?

2015-11-19 Thread Aaron Petkau
Thanks for including me, Damion.

For symlinking, you're right John, and I never thought about any of the issues
with deleting datasets in Galaxy afterwards. The ability to define a connection
for passing/failing a subsequent tool you mentioned sounds like exactly what we
were trying to accomplish. Is there any link to documentation on how to do
this?

Passing a dummy input "passing_qc_text_file" could work too, but we were
designing our QC tool to handle input from many types of tools (along with a
"rules" file for how to evaluate the quality), so we would end up modifying a
lot of existing tools just to add a "passing_qc_text_file" input. I'm not sure
how much time we'd want to spend updating a bunch of tools if something new is
on the way.

Aaron

-----"Dooley, Damion"  wrote: -----
To: John Chilton
From: "Dooley, Damion"
Date: 11/17/2015 01:31PM
Cc: "galaxy-...@lists.bx.psu.edu" , "aaron.pet...@phac-aspc.gc.ca"
Subject: Re: [galaxy-dev] Tool code for symlinking a data collection from input to output?

Ah, I can see how symlinking could lead to file management issues. Well, we
were trying to avoid the situation where use of our qc tool would require
customizing any subsequent tools in a workflow, and as well, reduce the disk
overhead of hundred-megabyte files being passed along in a workflow.

So wow on the second paragraph - enabling dependencies outside of tool file
I/O. I agree with Eric, this will be great. Now in our current canned
workflows we actually don't need this to be edited via the interface - so are
there details on how to edit a workflow file directly to get this dependency
of tool B on tool A in place?

Thanks,

Damion

On 2015-11-17, 11:18 AM, "John Chilton"  wrote:

>Slowly trying to catch up on e-mail after a lot of travel in November
>and I answered a variant of this to Damion directly, the most relevant
>snippet was:
>
>"I would not symbolic link the files though. I would just take the
>original collection and pipe it into the next tool and add a dummy
>input to the next tool ("passing_qc_text_file") that would cause the
>workflow to fail if the qc fails. This is a bit hacky, but symbolic
>linking will break Galaxy's deletion, purging, etc. You can delete the
>original dataset collection and the result would affect the files on
>disk for the output collection without Galaxy having any way to know.
>
>The workflow subsystem has the ability to define a connection like
>this (just wait for one tool to pass before calling the next without
>an input/output relationship) but it hasn't been exposed in the
>workflow editor yet."
>
>-John

Re: [galaxy-dev] Cloudman vs StarCluster

2015-11-19 Thread Enis Afgan
Hi Ryan,
Sorry you're having trouble porting it over. Let us know if you have
specific questions that we may be able to help with.

In the meantime, here are some of the benefits of CloudMan: technically,
CloudMan provides a management interface tailored for Galaxy so you can
control the Galaxy process (and the dependent processes such as the
database, ftp server and the Reports app) from within it. It comes with a
graphical web interface, which exposes many features in one place and
allows multiple people to access it without sharing cloud access creds.
Functionally, CloudMan allows instances to be cloned and shared with other
users so that each instance includes complete configuration and data yet
each child instance is independent of the parent. Once the components are
prebuilt, other users don't need to install anything locally and can
instead use their web browser to create an arbitrary number of instances
that each come preconfigured with Galaxy (i.e., Galaxy, toolset, indices)
that is ready for use. Exposed via the Cloud Launch app, it allows creation
of multiple flavors that can correspond to different toolsets or software
versions.

Hope this helps and let us know if you have more questions.

On Wed, Nov 11, 2015 at 10:19 AM, Ryan G 
wrote:

> Hi all - Our organization requires us to use a specific AMI (with a
> specific flavor of Linux) that isn't supported by CloudMan. It's taking us
> a bit of time to get this combination working.
>
> It's taken more than 3 weeks and we still don't have Galaxy / CloudMan
> running. I could probably do the same thing with StarCluster and be up and
> running within a day.
>
> I'm very comfortable with using StarCluster and am wondering if there is
> any benefit to using Cloudman over StarCluster for node management with
> Galaxy.
>
> Thoughts?
>

Re: [galaxy-dev] Planemo 0.20.0 and xunit

2015-11-19 Thread John Chilton
The latest development release of Galaxy which planemo targets by
default requires virtualenv to be available. Can you verify that it is
not available on your machine, install it, and try again? I will try
to have planemo give a clearer error in this case if it is indeed the
problem.

-John

On Tue, Nov 17, 2015 at 8:21 PM, Tiago Antao  wrote:
> On Tue, 17 Nov 2015 14:48:03 +
> John Chilton  wrote:
>
>> Thanks for the bug report. Somehow Galaxy isn't installing the
>> development wheels into the transient Galaxy's virtualenv. I've wiped
>> out my planemo caches and I can't reproduce this locally.
>
> From planemo 0.19 to 0.20 I have also found that problem, but I
> installed many tools in my conda environment to compensate for it (I
> should have reported that bug as well, sorry).
>
>
>
>> Can you send me the green log messages at the beginning of the test
>> command as well as the few lines after them (maybe just paste the
>> whole thing into a gist or something).
>
> I have put everything on the log on this gist:
> https://gist.github.com/tiagoantao/488172904527ce854110
> Note that my environment is problematic (as in having too much stuff) as
> I had to pip and conda install quite a few packages due to the
> virtualenv not being in use.
>
> Thanks,
> Tiago
>
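John's suggestion — verify whether virtualenv is actually available before re-running planemo — can be scripted. A small check, assuming only that an installed virtualenv is either importable or on `$PATH`:

```python
import importlib.util
import shutil

def virtualenv_available():
    """Report whether virtualenv is importable in the current Python
    or findable as an executable on $PATH -- a quick way to test the
    suspicion above before re-running planemo."""
    return (importlib.util.find_spec("virtualenv") is not None
            or shutil.which("virtualenv") is not None)
```

If this returns False, `pip install virtualenv` in the environment planemo runs from is the fix Tiago applied.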

[galaxy-dev] Error installing tools from toolshed

2015-11-19 Thread Scott Szakonyi
Hello all,

I'm having issues installing Trimmomatic and Picard tools from the
toolshed. The errors are of the following form (Trimmomatic is used for the
example).

Trimmomatic 0.32 package shows an installation error.

Error details:

  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/install_manager.py", line 142, in install_and_build_package_via_fabric
    tool_dependency = self.install_and_build_package( tool_shed_repository, tool_dependency, actions_dict )
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/install_manager.py", line 136, in install_and_build_package
    dir = tmp_dir
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/vectorbase/web/Galaxy/galaxy-dist/eggs/Fabric-1.7.0-py2.6.egg/fabric/context_managers.py", line 142, in _setenv
    yield
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/install_manager.py", line 100, in install_and_build_package
    initial_download=True )
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/tool_dependencies/recipe/recipe_manager.py", line 32, in execute_step
    initial_download=initial_download )
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 685, in execute_step
    dir = self.url_download( work_dir, downloaded_filename, url, extract=True, checksums=checksums )
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 223, in url_download
    extraction_path = archive.extract( install_dir )
  File "/vectorbase/web/Galaxy/galaxy-dist/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py", line 89, in extract
    os.chmod( absolute_filepath, unix_permissions )

[Errno 2] No such file or directory:
'/data/galaxy/tmp/tmp-toolshed-mtdeqJvts/Trimmomatic-0.32/Trimmomatic-0.32/'

I've tried several versions of the tool with no success. Any thoughts? I've
searched the project website but haven't come up with anything definitive.

Thanks for any help you can offer,

-- 
Scott B. Szakonyi
Research Programmer
*Center for Research Computing*
107 Information Technology Center
Notre Dame, IN 46556
http://crc.nd.edu
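The failing step is an `os.chmod` on a doubled path (`Trimmomatic-0.32/Trimmomatic-0.32`), i.e. the extraction-path resolution goes wrong when the archive carries a single top-level directory — the bug Nicola says is fixed in the updated release_15.07 branch. For illustration only, a sketch of the safer resolution; `extract_and_resolve` is a hypothetical helper, not a Galaxy function:

```python
import os
import tarfile

def extract_and_resolve(archive_path, install_dir):
    """Extract a tar archive into install_dir and return the directory
    that actually holds the files, collapsing a single top-level folder
    if the archive has one. Sketches the path resolution that the
    reported bug got wrong: the installer ended up looking for
    Trimmomatic-0.32/Trimmomatic-0.32, doubling the archive root."""
    with tarfile.open(archive_path) as tar:
        tar.extractall(install_dir)
        roots = {member.name.split("/", 1)[0] for member in tar.getmembers()}
    if len(roots) == 1:
        candidate = os.path.join(install_dir, roots.pop())
        if os.path.isdir(candidate):
            return candidate
    return install_dir
```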

Re: [galaxy-dev] workflow API: step_order vs step_id in bioblend

2015-11-19 Thread Jorrit Boekel
Thanks John, I did indeed find the step order ids in the result from
export_workflow_json. That helps a lot, and now I won't need to use
soon-to-be-deprecated stuff.

cheers,
— 
Jorrit Boekel
Proteomics systems developer
BILS / Lehtiö lab
Scilifelab Stockholm, Sweden



> On 19 Nov 2015, at 15:37, John Chilton  wrote:
> 
> The workflow API is the only place where we expose unencoded IDs and
> we really shouldn't be doing it. I would instead focus on adapting to
> using order indices - they really should be more stable and usable. Order
> index has lots of advantages
> - You can build a request for a given workflow and apply it to
> multiple Galaxy instances
> - You can build a request by simply looking at the Galaxy workflow
> JSON you upload to create the workflow
> - If you don't add or remove inputs you can modify a workflow and the
> request will likely still be valid
> 
> If get_workflow doesn't have the information you need - it is in
> workflows.export_workflow_json. Just filter the steps on input types.
> We should probably deprecate get_workflows because it exposes
> information it shouldn't and expose the same data but using the order
> index instead of the unencoded ID. Aysam Guerler pointed out to me
> yesterday that parameter overrides also still use unencoded IDs, we
> should change that also.
> 
> Today, I'll try to add parameters to the get and run APIs to have
> these use order_index instead of unencoded step ids. Someday, that
> really should become the default behavior but we have to think
> carefully about how to deprecate the existing functionality.
> 
> That said - I'd definitely merge a PR that exposed inputs_by for the
> workflow run endpoints in bioblend. It is useful - less for restoring
> the legacy behavior than allowing specifying inputs by label
> (functionality that is in bioblend objects on the client side but that
> Galaxy supports natively) and needs to be documented better.
> 
> -John
> 
> On Thu, Nov 19, 2015 at 9:50 AM, Jorrit Boekel
>  wrote:
>> Hi all,
>> 
>> Here’s a mail for heads up and googleable error message in case someone 
>> finds a similar error and scratches her/his head.
>> 
>> So (some time) after the very nice API class we had at GCC2015 I am now 
>> trying my hand at running workflows using Bioblend. I had some frustration 
>> trying to invoke a simple WF, but now found out that the inputs parameter to 
>> invoke_workflow is not as I thought it would be. I’m on the latest 
>> galaxy-dist and using Bioblend 0.7 in Python3.4.
>> 
>> So I thought I’d call e.g.:
>> 
>> gi.workflows.invoke_workflow(workflow_id, inputs={'9678': {'src': 'hda',
>> 'id': 'abcdef12345'}}, history_id='abc1234')
>>
>> Where the keys in inputs dict represent the ids from:
>> gi.workflows.get_workflow(workflow_id)['inputs']
>> 
>> But apparently the new standard is to use the step_order (so 0 for the first 
>> step) instead of the step_id, as shown in the code at 
>> lib/galaxy/workflow/run_request.py
>> So this gave me the HTTP 400 error "Workflow cannot be run because an 
>> expected input step '84' has no input dataset."
>> 
>> I have reverted to using legacy code with run_workflow and dataset_map, 
>> which circumvents the problem:
>> gi.workflows.run_workflow(workflow_id, dataset_map={'9678': {'src': 'hda',
>> 'id': 'abcdef12345'}}, history_id='abc1234')
>> 
>> Is there any way to specify inputs_by in the payload or am I on the wrong 
>> bioblend version? Otherwise I can file a request on the Bioblend github.
>> 
>> cheers,
>> —
>> Jorrit Boekel
>> Proteomics systems developer
>> BILS / Lehtiö lab
>> Scilifelab Stockholm, Sweden
>> 
>> 
>> 

Re: [galaxy-dev] workflow API: step_order vs step_id in bioblend

2015-11-19 Thread John Chilton
The workflow API is the only place where we expose unencoded IDs and
we really shouldn't be doing it. I would instead focus on adapting to
using order indices - they really should be more stable and usable. Order
index has lots of advantages
 - You can build a request for a given workflow and apply it to
multiple Galaxy instances
 - You can build a request by simply looking at the Galaxy workflow
JSON you upload to create the workflow
 - If you don't add or remove inputs you can modify a workflow and the
request will likely still be valid

If get_workflow doesn't have the information you need - it is in
workflows.export_workflow_json. Just filter the steps on input types.
We should probably deprecate get_workflows because it exposes
information it shouldn't and expose the same data but using the order
index instead of the unencoded ID. Aysam Guerler pointed out to me
yesterday that parameter overrides also still use unencoded IDs, we
should change that also.

Today, I'll try to add parameters to the get and run APIs to have
these use order_index instead of unencoded step ids. Someday, that
really should become the default behavior but we have to think
carefully about how to deprecate the existing functionality.

That said - I'd definitely merge a PR that exposed inputs_by for the
workflow run endpoints in bioblend. It is useful - less for restoring
the legacy behavior than allowing specifying inputs by label
(functionality that is in bioblend objects on the client side but that
Galaxy supports natively) and needs to be documented better.

-John

On Thu, Nov 19, 2015 at 9:50 AM, Jorrit Boekel
 wrote:
> Hi all,
>
> Here’s a mail for heads up and googleable error message in case someone finds 
> a similar error and scratches her/his head.
>
> So (some time) after the very nice API class we had at GCC2015 I am now 
> trying my hand at running workflows using Bioblend. I had some frustration 
> trying to invoke a simple WF, but now found out that the inputs parameter to 
> invoke_workflow is not as I thought it would be. I’m on the latest 
> galaxy-dist and using Bioblend 0.7 in Python3.4.
>
> So I thought I’d call e.g.:
>
> gi.workflows.invoke_workflow(workflow_id, inputs={'9678': {'src': 'hda',
> 'id': 'abcdef12345'}}, history_id='abc1234')
>
> Where the keys in inputs dict represent the ids from:
> gi.workflows.get_workflow(workflow_id)['inputs']
>
> But apparently the new standard is to use the step_order (so 0 for the first 
> step) instead of the step_id, as shown in the code at 
> lib/galaxy/workflow/run_request.py
> So this gave me the HTTP 400 error "Workflow cannot be run because an 
> expected input step '84' has no input dataset."
>
> I have reverted to using legacy code with run_workflow and dataset_map, which 
> circumvents the problem:
> gi.workflows.run_workflow(workflow_id, dataset_map={'9678': {'src': 'hda',
> 'id': 'abcdef12345'}}, history_id='abc1234')
>
> Is there any way to specify inputs_by in the payload or am I on the wrong 
> bioblend version? Otherwise I can file a request on the Bioblend github.
>
> cheers,
> —
> Jorrit Boekel
> Proteomics systems developer
> BILS / Lehtiö lab
> Scilifelab Stockholm, Sweden
>
>
>
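John's advice — key workflow inputs by order index rather than by unencoded step id — amounts to a small remapping. A sketch; the dict shapes below follow the examples in this thread and should be treated as assumptions to check against your Galaxy and bioblend versions:

```python
def inputs_by_order_index(inputs_by_step_id, exported_steps):
    """Translate a workflow-inputs mapping keyed by unencoded step id
    into one keyed by order index. exported_steps is assumed to look
    like the 'steps' dict from export_workflow_json: keys are order
    indices, values carry the unencoded step 'id'."""
    id_to_order = {str(step["id"]): str(order)
                   for order, step in exported_steps.items()}
    return {id_to_order[step_id]: dataset
            for step_id, dataset in inputs_by_step_id.items()}
```

The result is what invoke_workflow expects as its inputs argument under the order-index convention.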

[galaxy-dev] workflow API: step_order vs step_id in bioblend

2015-11-19 Thread Jorrit Boekel
Hi all,

Here's a mail as a heads-up, with a googleable error message, in case someone
finds a similar error and scratches her/his head.

So (some time) after the very nice API class we had at GCC2015, I am now trying
my hand at running workflows using Bioblend. I had some frustration trying to
invoke a simple WF, but have now found out that the inputs parameter to
invoke_workflow is not what I thought it would be. I'm on the latest galaxy-dist
and using Bioblend 0.7 with Python 3.4.

So I thought I’d call e.g.:

gi.workflows.invoke_workflow(workflow_id, inputs={'9678': {'src': 'hda',
'id': 'abcdef12345'}}, history_id='abc1234')

Where the keys in inputs dict represent the ids from:
gi.workflows.get_workflow(workflow_id)['inputs']

But apparently the new standard is to use the step_order (so 0 for the first 
step) instead of the step_id, as shown in the code at 
lib/galaxy/workflow/run_request.py
So this gave me the HTTP 400 error "Workflow cannot be run because an expected 
input step '84' has no input dataset."

I have reverted to using legacy code with run_workflow and dataset_map, which 
circumvents the problem:
gi.workflows.run_workflow(workflow_id, dataset_map={'9678': {'src': 'hda',
'id': 'abcdef12345'}}, history_id='abc1234')

Is there any way to specify inputs_by in the payload or am I on the wrong 
bioblend version? Otherwise I can file a request on the Bioblend github.

cheers,
— 
Jorrit Boekel
Proteomics systems developer
BILS / Lehtiö lab
Scilifelab Stockholm, Sweden


