Heshan,

You can look into JAXB [1], where you can use something like the command below to 
convert a schema to POJOs. I have not tried this with hierarchical XSDs or XSDs 
that use extension, which would be a good thing to explore:

xjc schema/GfacMessage.xsd -p org.xxx.message -d src/main/java/

Thanks
Raminder

[1] : 
http://download.oracle.com/docs/cd/E17802_01/webservices/webservices/docs/1.6/jaxb/xjc.html

On Aug 29, 2011, at 2:12 PM, Heshan Suriyaarachchi wrote:

> Hi Devs,
> 
> Please see my inline comments.
> 
> On Sun, Aug 28, 2011 at 9:30 AM, Suresh Marru <[email protected]> wrote:
> 
>> Hi All,
>> 
>> Looking through the JIRA tickets, I see we have addressed some of the issues
>> mentioned below and are working on a few others. Once we capture the current
>> status, as Alek mentioned, let's clearly lay out a roadmap for this release and
>> also for the graduation. Please add to or modify the list.
>> 
>> On Jul 31, 2011, at 11:37 PM, Suresh Marru wrote:
>> 
>>> Great to see all the progress in the last few weeks. As I see it, here
>> is a recap and the next steps before a release:
>>> 
>>> * Standardize WS Messenger clients and integrate the axis2 ported clients
>> to XBaya and GFac.
>> --Seems mostly done, but I think we still need to integrate and test
>> this with the rest of Airavata. Some TODOs as I see them:
>>       • Verify and test Tracking Library
>>       • Provide notification publish/subscribe with the tracking schema.
>>       • Change XBaya to use the new tracking schema
>>       • Add notifications to GFac using the new tracking schema
>>       • Create a notification interface in GFac with Tracking as one of the
>> implementations (if GFac is used outside a workflow context)
>>> * Discuss, standardize and update the GFac application and service
>> deployment description schema.
>>       * The POJO schema is good for current development, but as per the
>> discussion on the mailing list, we still need to have XML schemas. Maybe we
>> can write utility classes to convert from XML schema to POJOs?
>> 
> 
> I would like to work on converting the GFac XML Schema to POJOs. In fact, I
> have already started working on this.
> 
> Currently we have an XML Schema defined in GFac-Schema. The idea here is to
> create POJOs from that XML Schema. The POJOs we are currently using are
> hand-written, i.e. they are not generated dynamically from the XML Schema.
> 
> Following is the way in which I am going to move ahead with the
> development.
> 
> I will write a parser to parse the Schema and extract the arguments needed
> to create the POJOs. Then, using JET [1], I will generate the POJOs.
> 
> [1] - http://www.eclipse.org/modeling/m2t/?project=jet#jet
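
As a rough illustration of the parse-then-generate idea above, here is a minimal
sketch that uses only the JDK's DOM API, with plain string templating standing in
for JET. The sample schema fragment, class name and field names are invented for
illustration; they are not the actual GFac schema:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class SchemaToPojoSketch {

    // Made-up schema fragment standing in for a real GFac XSD.
    public static final String SAMPLE_XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:complexType name='GfacMessage'>"
      + "    <xs:sequence>"
      + "      <xs:element name='serviceName' type='xs:string'/>"
      + "      <xs:element name='timeout'     type='xs:int'/>"
      + "    </xs:sequence>"
      + "  </xs:complexType>"
      + "</xs:schema>";

    /** Parse the schema and emit POJO source for its first complexType. */
    public static String generatePojo(String xsd) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // needed for getElementsByTagNameNS
        Document doc = factory.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xsd)));

        String xsNs = "http://www.w3.org/2001/XMLSchema";
        Element type = (Element) doc
                .getElementsByTagNameNS(xsNs, "complexType").item(0);

        // Template the class skeleton; JET would do this from a .jet template.
        StringBuilder src = new StringBuilder(
                "public class " + type.getAttribute("name") + " {\n");
        NodeList elements = type.getElementsByTagNameNS(xsNs, "element");
        for (int i = 0; i < elements.getLength(); i++) {
            Element e = (Element) elements.item(i);
            // Map a couple of built-in XSD types; a real generator needs more.
            String javaType =
                e.getAttribute("type").equals("xs:int") ? "int" : "String";
            src.append("    private ").append(javaType)
               .append(' ').append(e.getAttribute("name")).append(";\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(generatePojo(SAMPLE_XSD));
    }
}
```

A real generator would of course need a fuller XSD-to-Java type mapping,
getters/setters, and handling for nested complex types and extension.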
> 
>> * Integrate Axis2 based service interface to GFac-Core
>>       * I see the basic Axis2 interface is done and tested with SOAP UI,
>> but I think we still need to test it with XBaya.
>> 
>>> * Upgrade XBaya WSIF clients from XSUL to Axis2 based WSIF clients.
>>       * I think the XSUL WSIF clients are working well; maybe we can defer
>> the Axis2 clients to a future release?
>>> * Upgrade GSI Security libraries
>>       * I think we should focus on integrating all the moving components of
>> Airavata and defer any individual component upgrades.
>>> * Provide simple to use samples to try out Airavata.
>>       * We need to provide a lot of samples and associated documentation
>>> * Package, document and release airavata incubating release v 0.1
>>       * The builds have evolved, but we still need a one-click
>> integrated build and deploy
>>       * The weakest point, I think, is documentation; once we reach a feature
>> freeze, should we pause development and run a documentation sprint?
>> 
>> Thanks,
>> Suresh
>> 
>>> 
>>> Please correct/update to the list.
>>> 
>>> Cheers,
>>> Suresh
>>> 
>>> On May 13, 2011, at 8:37 AM, Suresh Marru wrote:
>>> 
>>>> Hi All,
>>>> 
>>>> All of us clearly know what the Airavata software is about in varying
>> detail, but at the same time I realize not everyone on the list has a full
>> understanding of the architecture as a whole and its sub-components.
>> Along with inheriting the code donation, I suggest we focus on bringing
>> everyone up to speed by means of high-level and low-level architecture
>> diagrams. I will start a detailed email thread about this task. In short,
>> the software currently assumes an understanding of e-Science in general and
>> some details of Grid Computing. Our first focus should be to bring the
>> software to a level any Java developer can understand and contribute to.
>> Next, the focus can be on making it easy for novice users.
>>>> 
>>>> I thought a good place to start might be to list out the high-level
>> goals and then focus on the first goal with detailed JIRA tasks. I am
>> assuming you will steer us with an orthogonal roadmap to graduation. I hope I
>> am not implying we need to meet the following goals to graduate, because
>> some of them are very open-ended. Also, please note that Airavata may already
>> have some of these features; I am mainly categorizing so we will have a
>> focused effort in testing, rewriting or new implementations.
>>>> 
>>>> Airavata high level feature list:
>>>> 
>>>> Phase 1: Construct, execute and monitor workflows from pre-deployed web
>> services. The workflow enactment engine will be the inherent Airavata
>> Workflow Interpreter. Register command-line applications as web services,
>> then construct and execute workflows with these application services. The
>> applications may run locally, on Grid-enabled resources or by ssh'ing to a
>> remote resource. The client for testing this phase's workflows can be the
>> Airavata Workflow Client (XBaya) running as a desktop application.
>>>> 
>>>> Phase 2: Execute all of phase 1 workflows on Apache ODE engine by
>> generating and deploying BPEL. Develop and deploy gadget interfaces to
>> Apache Rave container to support application registration, workflow
>> submission and monitoring components. Support applications running on
>> virtual machine images to be deployed to Amazon EC2, EUCALYPTUS and similar
>> infrastructure-as-a-service cloud deployments.
>>>> 
>>>> Phase 3: Expand the compute resources to Elastic MapReduce and
>> Hadoop-based executions. Focus on data and metadata catalog integration
>> with systems like Apache OODT.
>>>> 
>>>> I will stop here to allow us to discuss the same. Once we narrow down
>> the high-level phase 1 goals, I will start a detailed discussion on where
>> the code is now and the steps to get to goal 1.
>>>> 
>>>> Comments, Barbs?
>>>> 
>>>> Suresh
>>> 
>> 
>> 
> 
> 
> -- 
> Regards,
> Heshan Suriyaarachchi
> 
> http://heshans.blogspot.com/
