Hi Ajinkya,

Yes, that is the case. How would you plan to solve it?
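To make the discussion concrete, here is a rough, untested sketch of the two pieces involved: persisting the process outputs when the local data staging task runs, and then reading them back by going jobId -> processId -> process outputs. The registry and model calls used here (ExperimentCatalog#get/#update, ExperimentCatalogModelType.JOB/PROCESS/PROCESS_OUTPUT, JobModel#getProcessId, ProcessModel#getProcessOutputs) and the import paths are written from memory as assumptions, so please verify them against current master before building on this.

// Rough, untested sketch -- not the actual GFac code. Assumptions to verify:
// the ExperimentCatalog CPI exposes get(ExperimentCatalogModelType, Object) and
// update(ExperimentCatalogModelType, Object, Object), the JOB case accepts a
// jobId and returns a JobModel, the PROCESS_OUTPUT case returns the list of
// outputs for a processId, and the thrift models expose the getters used below.
import java.util.List;

import org.apache.airavata.model.application.io.OutputDataObjectType;
import org.apache.airavata.model.job.JobModel;
import org.apache.airavata.model.process.ProcessModel;
import org.apache.airavata.registry.cpi.ExperimentCatalog;
import org.apache.airavata.registry.cpi.ExperimentCatalogModelType;
import org.apache.airavata.registry.cpi.RegistryException;

public class ProcessOutputSketch {

    private final ExperimentCatalog experimentCatalog;

    public ProcessOutputSketch(ExperimentCatalog experimentCatalog) {
        this.experimentCatalog = experimentCatalog;
    }

    /** The missing piece for the local DataStageTask path: persist a non-URI
     *  output value (e.g. an INTEGER result) against the process. */
    public void saveProcessOutput(String processId, String outputName, String value)
            throws RegistryException {
        ProcessModel process =
                (ProcessModel) experimentCatalog.get(ExperimentCatalogModelType.PROCESS, processId);
        for (OutputDataObjectType output : process.getProcessOutputs()) {
            if (output.getName().equals(outputName)) {
                output.setValue(value);
            }
        }
        experimentCatalog.update(ExperimentCatalogModelType.PROCESS, process, processId);
    }

    /** Read back: jobId -> processId (Job model) -> process outputs, i.e. the
     *  PROCESS_OUTPUT case in ExperimentCatalogImpl#get(..,..) mentioned below. */
    @SuppressWarnings("unchecked")
    public List<OutputDataObjectType> getOutputsForJob(String jobId) throws RegistryException {
        JobModel job = (JobModel) experimentCatalog.get(ExperimentCatalogModelType.JOB, jobId);
        String processId = job.getProcessId();
        return (List<OutputDataObjectType>) experimentCatalog.get(
                ExperimentCatalogModelType.PROCESS_OUTPUT, processId);
    }
}

If that shape looks right, the actual fix would go inside org.apache.airavata.gfac.impl.task.DataStageTask for non-URI outputs, mirroring what SCPDataStageTask#outputDataStaging already does for the URI case.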
Regards,
Shameera.

On Fri, Jan 13, 2017 at 6:37 AM, Ajinkya Dhamnaskar <[email protected]> wrote:

> Amila,
>
> Thanks for explaining; that clears up how things are mapped. I could see
> output against the JOB, but I could not figure out where exactly we log
> output for a process.
>
> Shameera,
>
> Yeah, that's true. So basically, if an application does not have an output
> staging task, it does not log output for the respective process. Which
> means that if the output data type is not URI, we are not logging output
> against the process (please correct me if I am wrong).
>
> This is probably an opportunity to improve.
>
> Thanks in anticipation.
>
> On Fri, Jan 13, 2017 at 8:58 AM, Shameera Rathnayaka <[email protected]> wrote:
>
>> Hi Ajinkya,
>>
>> If you check org.apache.airavata.gfac.impl.task.SCPDataStageTask#outputDataStaging,
>> you will see that we save process outputs to the database (through the
>> registry). You are probably testing with local job submission, which uses
>> org.apache.airavata.gfac.impl.task.DataStageTask as the data staging task
>> implementation; there we don't save process outputs. The first thing is to
>> fix this and save the process outputs to the database.
>>
>> If you know the jobId, you can retrieve the processId from the Job model.
>> Using the processId you can get all process outputs; see the PROCESS_OUTPUT
>> case in the
>> org.apache.airavata.registry.core.experiment.catalog.impl.ExperimentCatalogImpl#get(..,..)
>> method.
>>
>> Hope this helps you move forward.
>>
>> Best,
>> Shameera.
>>
>> On Thu, Jan 12, 2017 at 3:48 PM Amila Jayasekara <[email protected]> wrote:
>>
>>> Hi Ajinkya,
>>>
>>> I am not familiar with the context of your question, but let me try to
>>> answer.
>>>
>>> If you are referring to an application deployed on a supercomputer, the
>>> application should have a job id. On the supercomputer, each application
>>> runs as a separate batch job, and each job is distinguished by its job id
>>> (similar to a process id on a PC). Usually, the job scheduler returns this
>>> job id and Airavata should be aware of it. You should then be able to use
>>> the job id to identify the output, provided the job script specifies
>>> instructions to generate output.
>>>
>>> I did not understand what you referred to as "process model" and "job
>>> model". I assume these are database tables.
>>>
>>> Thanks
>>> -Amila
>>>
>>> On Wed, Jan 11, 2017 at 1:17 PM, Ajinkya Dhamnaskar <[email protected]> wrote:
>>>
>>> Hello Dev,
>>>
>>> I am trying to fetch an application output (type: INTEGER) after
>>> experiment completion. As per my understanding, each application runs as
>>> a process, and that process should have the final output.
>>>
>>> So, ideally, we should be able to get the final output from the process
>>> id itself (correct me if I am wrong). In my case, I am not seeing the
>>> final output in the database. Basically, we are not updating the process
>>> model after job completion, though we do update the job model.
>>>
>>> Am I missing anything here?
>>>
>>> Any help is appreciated.
>>>
>>> --
>>> Thanks and regards,
>>>
>>> Ajinkya Dhamnaskar
>>> Student ID: 0003469679
>>> Masters (CS)
>>> +1 (812) 369-5416
>>>
>>> --
>> Shameera Rathnayaka
>>
>
> --
> Thanks and regards,
>
> Ajinkya Dhamnaskar
> Student ID: 0003469679
> Masters (CS)
> +1 (812) 369-5416
>

--
Best Regards,
Shameera Rathnayaka.
email: shameera AT apache.org, shameerainfo AT gmail.com
Blogs: https://shameerarathnayaka.wordpress.com, http://shameerarathnayaka.blogspot.com/
