On 5/12/06, Lance Waterman <[EMAIL PROTECTED]> wrote:


Based on other posts I have been under the impression that the JACOB
engine/virtual machine was not creating new JVM threads (due to
complexities around transaction enlistment, context locking, etc.). So
in practice these parallel flows are actually serialized by the PXE
virtual machine? In other words, a single input message/event will use a
single JVM thread of execution within the BPEL "virtual machine". Are
these assumptions correct?


Lance,

The problem that JACOB solves is exactly what you describe: how do you
parallelize activities across process threads but keep actions serialized in
the same Java thread.

Since you're doing long-running processes, you can have a flow that is
sending messages in one of its threads, and waiting to receive in another.
If you did those in two separate Java threads, you wouldn't be able to scale
to more than a hundred process instances.

The engine needs to operate with a relatively small number of Java threads,
and multiplex as many process threads as it can into that limited pool.
That works for millions of process instances, but it also applies to threads
in the same process. If you have two Java threads doing atomic assignments
at the same time, doing them in parallel has synchronization overhead
(locking, context switching) and no performance gain to offset it. If you
have two invokes at the same time (each doing transactions) you end up in a
diamond situation that affects data integrity.
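To make that concrete, here's a minimal sketch of the multiplexing idea (my
own illustration, not JACOB's actual API): each process thread is broken
into small run-to-completion actions, and a single Java thread drains a
queue of those actions. Actions from different process threads interleave,
but each action runs serially, so no locking is needed between them.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only (not JACOB's real classes): many "process
// threads" are represented as small run-to-completion actions,
// multiplexed onto one Java thread.
class MiniScheduler {
    private final Queue<Runnable> ready = new ArrayDeque<>();

    // Enqueue an action; an action may schedule further actions
    // (its continuation) before returning.
    void schedule(Runnable action) {
        ready.add(action);
    }

    // Drain the ready queue on the calling (single) Java thread.
    // Actions never block; they run to completion and return.
    void run() {
        Runnable action;
        while ((action = ready.poll()) != null) {
            action.run();
        }
    }
}
```

Scheduling two "flow branches" on this shows the interleaving: branch A's
second action runs after branch B's first, so the activities are
parallelized even though every action executes on one Java thread.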

So you want to serialize work within the Java thread, but you don't want to
serialize work within the process. You don't want to wait for one flow
branch to complete before starting the other. If you chunk those activities
into smaller units of work (e.g. send/receive), you can serialize the
actions while parallelizing the activities.

The only case where parallelizing helps is when you have one thread doing an
assign while the other is waiting to receive a message. That's solved by
decoupling the send/receive threads, and executing invoke as two separate
actions (send/receive).
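A sketch of what splitting invoke into two actions looks like (names are
hypothetical, and it handles just one outstanding invoke for brevity): the
send action registers the continuation and returns immediately; when the
reply arrives, the receive becomes a fresh action on the queue. No Java
thread ever blocks between the send and the receive.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical sketch (not the engine's real API): invoke executed as
// two decoupled actions, a send and a later receive.
class SplitInvoke {
    private final Queue<Runnable> actions = new ArrayDeque<>();
    private Consumer<String> pendingReply; // simplified: one outstanding invoke

    // The "send" half: register the continuation and return at once.
    void send(String request, Consumer<String> onReply) {
        pendingReply = onReply;
    }

    // Called by the I/O layer when the reply arrives: the "receive"
    // half is scheduled as its own action.
    void deliver(String reply) {
        Consumer<String> continuation = pendingReply;
        actions.add(() -> continuation.accept(reply));
    }

    // Drain pending actions on a single Java thread.
    void runActions() {
        Runnable a;
        while ((a = actions.poll()) != null) a.run();
    }
}
```

Between send() returning and deliver() being called, the Java thread is free
to execute actions from other process threads.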

The combination of serializing actions while parallelizing activities, and
decoupling the send/receive threads, gives you the best performance profile,
and one you can easily tune for different CPU and I/O loads.
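The tuning knob boils down to two independently sized pools (sizes and names
below are illustrative, not PXE's actual configuration): a small pool near
the core count for CPU-bound activity execution, and a larger pool for
blocking send/receive I/O that you grow on I/O-heavy deployments.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the two-pool tuning profile described above.
class EnginePools {
    // CPU-bound work: sized to the available cores.
    final ExecutorService activityPool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // I/O-bound work: threads mostly block on send/receive, so the
    // pool can be much larger without adding CPU load.
    final ExecutorService sendReceivePool = Executors.newFixedThreadPool(32);

    void shutdown() {
        activityPool.shutdown();
        sendReceivePool.shutdown();
        try {
            activityPool.awaitTermination(5, TimeUnit.SECONDS);
            sendReceivePool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

An I/O-bound deployment would raise the send/receive pool size; a
compute-heavy one would leave it small and let the activity pool dominate.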

Assaf



Separately from that, there are threads dedicated to executing activities
> and threads dedicated to sending/receiving messages. This architecture
> allows some threads to keep executing activities, while other threads are
> waiting to send and receive messages. It helps with tuning, since the
> activity-executing threads are load on the server (CPU, database), while
> the threads sending/receiving messages are I/O bound. Processes that are
> very I/O bound will require a lot of send/receive threads, and only a few
> execution threads.
>
> That has nothing to do with BPEL, it's just a better architecture for
> messaging, especially for supporting low-latency operations. You'll find
> the same behavior in Axis2, .NET and many other modern messaging
> frameworks. Right now this is handled by PXE code, but if we switch to
> Axis2 we would still prefer to use decoupled sender/receiver threads;
> we'll just delegate their lifecycle to Axis.


Yes, and to that end I believe the goal of the API that Maciej is working on
is to abstract the core BPEL "virtual machine" away from the messaging
architecture.

> http://ws.apache.org/sandesha/architecture.html
> http://www.onjava.com/pub/a/onjava/2005/07/27/axis2.html?page=2
>
> And of course there's thread management, time scheduling, service
> lifecycle,
> etc which we delegate to the app server layer.
>
> Assaf
>
>
>
> --
> CTO, Intalio
> http://www.intalio.com
>




--
CTO, Intalio
http://www.intalio.com
