> Great!
>
> Giorgio, if I understand correctly, the above scheme will help you
> trigger the XStream databinding for objects that implement the
> XStreamable interface you've defined.

Yes. I also use it for serializing Jobs, but I'm going to change this:
generating that much XML is computationally expensive.

> You also said that you were using Java serialization and tunneling the
> resulting bytes as base64. Could you expand a little on this and help me
> understand how you do it?
Simple: look at the way I patched CallableReferenceImpl. I serialize,
base64-encode, and then add a transformer for it.
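The serialize-then-base64 step can be sketched in plain Java like this. JobCodec and its method names are illustrative, not the actual transformer code (the real one plugs into Tuscany's databinding framework):

```java
import java.io.*;
import java.util.Base64;

// Illustrative sketch: tunnel a Serializable job as base64 text.
public class JobCodec {

    // Serialize any Serializable object and encode the bytes as base64.
    public static String toBase64(Serializable job) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(job);
        }
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    // Receiving side: decode the base64 text and deserialize the object.
    public static Object fromBase64(String text)
            throws IOException, ClassNotFoundException {
        byte[] bytes = Base64.getDecoder().decode(text);
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

A custom transformer would then move this string through the binding instead of a full XML databinding pass.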
>
> Are you doing the serialization in your SCA component's implementation
> logic and then passing the bytes to a service interface like:
>
> JobManager {
>
>    run(byte[] serializedJob);
> }

No, it's something like the following:
> and then letting the Axis2 binding send the byte[] as base64 (using the
> JAXB mapping)?
No, a custom transformer, which is now useless :) I'm planning to use
java.io.Serializable and send a batch of jobs at a time.

Jean-Sebastien, this is what I'm doing. Here's my workpool README, a
work in progress, so it might change:

README.

This README explains how to use my workpool application.
You can configure the workers by subclassing the WorkerServiceImpl class;
this class should be given COMPOSITE scope, e.g.:

import org.apache.tuscany.sca.core.context.CallableReferenceImpl;
import org.apache.tuscany.sca.databinding.job.Job;
import org.apache.tuscany.sca.databinding.job.JobDataMap;
import org.osoa.sca.annotations.Scope;
/*
 * Example worker class showing how to use the workpool service.
 */
@Scope("COMPOSITE")
public class MyWorker extends WorkerServiceImpl<Object, Integer> {

        @Override
        public ResultJob computeTask(Job<Object,Integer> job) {
                
                ResultJob result = new ResultJob();
                JobDataMap map = new JobDataMap();
                map.addJobData("result", job.compute(new Integer(5)));
                result.setJobDataMap(map);
                return result;
        }
        
}

This worker class receives a job stream and returns each result wrapped
in a hash map, quite similar to how the Quartz Scheduler handles results.
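To show what the worker consumes, here is a minimal job sketch. The Job stand-in below only mirrors the compute(E) usage seen in computeTask above; the real interface is org.apache.tuscany.sca.databinding.job.Job, whose exact signature may differ:

```java
// Minimal stand-in for the Job interface, mirroring how computeTask
// calls job.compute(...) above; the real Tuscany interface may differ.
interface Job<T, E> {
    T compute(E input);
}

// Example job: squares the integer the worker passes in.
public class SquareJob implements Job<Object, Integer> {
    public Object compute(Integer n) {
        return n * n;
    }
}
```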
To customize your workpool application, you should also modify
Workpool.composite. For example, for my nodeB:

<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           targetNamespace="http://sample"
           xmlns:sample="http://sample"
           name="Workpool">

    <component name="WorkerManagerNodeBComponent">
        <implementation.java class="workpool.WorkerManagerImpl"/>
        <property name="nodeName">nodeB</property>
        <property name="compositeName">Workpool.composite</property>
        <property name="workerClass">workpool.MyWorker</property>
        <service name="WorkerManagerInitService">
            <interface.java
                interface="org.apache.tuscany.sca.node.NodeManagerInitService"/>
            <binding.sca/>
        </service>
        <service name="WorkerManager">
            <binding.sca uri="http://localhost:13000/WorkerManagerNodeBComponent"/>
        </service>
    </component>

</composite>

This runs on the slave nodes. Each slave node in the workpool is managed
by a WorkerManager, which is in charge of dynamically adding and removing
workers to adapt the whole system to the load. At boot time a node has no
worker component instances until the workpool master starts.
The workpool master node is made up of two components:
- WorkpoolManager, which controls and adapts the number of workers;
- WorkpoolService, which simply submits jobs to a worker on demand. I
say "on demand" because when a worker is started by its node manager,
the manager sends it a NullJob; after that, the worker pulls further
jobs from the WorkpoolService's queue.
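The on-demand scheme can be sketched as the loop below, with a BlockingQueue standing in for the WorkpoolService's queue. The NULL_JOB/END markers and method names are illustrative, not the actual workpool API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of the on-demand pull loop: the manager sends an
// initial NullJob to wake the worker, which then keeps pulling real jobs
// from the service's queue until an end marker arrives.
public class OnDemandWorker {
    static final String NULL_JOB = "NULL_JOB"; // wake-up trigger from the manager
    static final String END = "END";           // end-of-stream marker

    // Drain the queue after the wake-up job, returning one result per job.
    public static List<String> run(BlockingQueue<String> queue)
            throws InterruptedException {
        List<String> results = new ArrayList<>();
        String job = queue.take();
        if (!NULL_JOB.equals(job)) return results; // expect the trigger first
        while (!(job = queue.take()).equals(END)) {
            results.add("done:" + job);            // stand-in for computeTask
        }
        return results;
    }
}
```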

A peculiarity of this system is that the WorkpoolManager internally
holds a rule engine for its business decisions. It's simply a Drools
instance, an open-source engine widely used in SOA environments.
This way you can post your own rule set to the WorkpoolManager (via
web services) to adapt the system to your particular computing task.
The features that can currently be checked and controlled are
encapsulated in a JavaBean, called WorkpoolBean.

public class WorkpoolBean
{
    private double loadAverage = 0;
    private int nodeNumbers = 0;
    private int workers = 0;
    private double averageServiceTime = 0;
  // skipped setter/getter methods

}

This WorkpoolBean is registered inside the rule engine, and when one of
its properties changes, a rule is fired. That's all for now.
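As an illustration, a rule over the WorkpoolBean could look like the following Drools (DRL) fragment. The rule name, threshold, and consequence are hypothetical, not part of the actual rule set:

```
rule "AddWorkerOnHighLoad"
when
    // fires when the registered bean reports a high load average
    $b : WorkpoolBean( loadAverage > 0.8 )
then
    // hypothetical consequence: ask the WorkpoolManager for one more worker
    System.out.println("High load, adding a worker: " + $b);
end
```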
Cheers,
Giorgio.
