Due to the way the specs are written, the Connector system is really
married to the transaction manager, so if you take one you have to
take the other.
take the other. I like the G connector system and don't want to learn
another one, so I see no reason to switch.
-dain
On Mar 18, 2008, at 7:56 AM, Mohammad Nour El-Din wrote:
One more question: if ObjectWeb's JOTM is using HOWL and it is a Tx
manager, why is Geronimo using its own Tx manager?
On Tue, Mar 18, 2008 at 11:45 AM, Mohammad Nour El-Din <[EMAIL PROTECTED]> wrote:
So interesting, so they developed everything in Java Tx management?
Sounds strange, huh!
On Mon, Mar 17, 2008 at 8:28 PM, Dain Sundstrom <[EMAIL PROTECTED]> wrote:
There isn't much active development on it because the project is
basically done. The project has a narrow focus: write a transaction
logging system. It was completed a few years ago and all known bugs
have been fixed.
So although there is no active development, this code is used in
Geronimo TX and some ObjectWeb projects.
-dain
On Mar 17, 2008, at 1:28 AM, Mohammad Nour El-Din wrote:
I looked at the HOWL project at ObjectWeb and it seems that it is an
old project with no further development being made, so why do we use it?
On Sun, Mar 16, 2008 at 8:07 PM, Dain Sundstrom <[EMAIL PROTECTED]> wrote:
After thinking about this more, I don't think that we should turn on
recovery at this point in the 3.0 release cycle. I think it is good to
turn it on in trunk (3.1) so we can get lots of testing in before
releasing it.
One other thing: the tx logs should be in a directory under the data
directory. I'm not sure if that is happening now, but the property
should be something like data/txlog.
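For illustration, something along these lines would keep the logs under
the data directory (a rough sketch only; the "openejb.data.dir" property
name is a guess, not an existing setting):

    // Sketch: resolve the tx log directory as data/txlog.
    // File is java.io.File; "openejb.data.dir" is hypothetical and
    // defaults to a local "data" directory.
    File dataDir = new File(System.getProperty("openejb.data.dir", "data"));
    File txLogDir = new File(dataDir, "txlog");
    txLogDir.mkdirs(); // make sure data/txlog exists before the log opens its files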
-dain
On Mar 16, 2008, at 10:03 AM, Dain Sundstrom wrote:
On Mar 16, 2008, at 9:33 AM, David Jencks wrote:
On Mar 15, 2008, at 2:37 PM, David Blevins wrote:
On Mar 15, 2008, at 12:06 AM, David Jencks wrote:
While not ideal, I think using a working although slower transport is
a reasonable compromise, compared to a faster but broken transport,
until we can get a fixed activemq out.
We definitely need the vm transport for the embedded testing scenarios
and we don't have tx recovery yet, so this is something we probably
don't want to enable by default. We can guard the wrapping with the
"duct tape" flag like so:
if (System.getProperty("duct tape") != null) {
    xaResource = new WrapperNamedXAResource(xaResource, container.getContainerID().toString());
}

EndpointHandler endpointHandler = new EndpointHandler(container, deploymentInfo, instanceFactory, xaResource);
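(Presumably that would be switched on from the command line with
something like java "-Dduct tape=on", quoting it because of the space
in the name; any non-null value satisfies the check above, and the
exact flag name is whatever we settle on.)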
If you have time to make the change and roll back the service-jar.xml
settings, that'd be great; otherwise I'll get to it before we release.
You should probably check my work :-) but after some work I think the
current status is:
- recovery works if the howl log is configured in the tm configuration
- there's a TxRecovery flag for the MDB container and the DBCP pools
  that turns on the NamedXAResource wrapping
- recovery and wrapping are turned on for standalone and tomcat, and
  these use the amq tcp transport
- recovery and wrapping are turned off for embedded, and it uses the
  vm transport
The tests break if you turn on recovery and wrapping in embedded
because the howl log locks its log files and does not unlock them.
Without a "stop" lifecycle call I don't know how the howl log can
determine that it's time for a clean shutdown.
How about a finalizer (assuming we have a point where the log is GCed)?
We could subclass the howl log service in OpenEJB and add the finalizer.
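For illustration, a rough sketch of what that subclass might look like
(the class and method names here are stand-ins, not the actual
HOWL/OpenEJB API):

    // Purely illustrative: HowlLogService is a stand-in for whatever class
    // wraps the HOWL log in OpenEJB, assumed to expose a close()-style
    // shutdown that releases the log's file locks.
    class HowlLogService {
        void close() {
            // real implementation would close the underlying HOWL log here
        }
    }

    class FinalizingHowlLogService extends HowlLogService {
        // Last-chance clean shutdown: if nothing ever calls close(), the
        // finalizer runs when the GC collects the service and releases the
        // file locks so embedded tests can reopen the log.
        @Override
        protected void finalize() throws Throwable {
            try {
                close();
            } finally {
                super.finalize();
            }
        }
    }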
-dain
--
Thanks
- Mohammad Nour