[ https://issues.apache.org/activemq/browse/CAMEL-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Willem Jiang reassigned CAMEL-333:
--
Assignee: Willem Jiang
CamelDestination in camel-cxf should support decoupled message
--
Key: CAMEL-333
URL: https://issues.apache.org/activemq/browse/CAMEL-333
Project: Apache Camel
Issue Type:
endpoints implementing Service do not get stopped
-
Key: CAMEL-334
URL: https://issues.apache.org/activemq/browse/CAMEL-334
Project: Apache Camel
Issue Type: Bug
Reporter: James
On 13/02/2008, Tully, Gary [EMAIL PROTECTED] wrote:
Consider an HTTP request/response route:
from("jetty:http://localhost:8088/someService").unmarshal().string().process(
    new Processor() {
        public void process(Exchange e) {
            String input = (String) e.getIn().getBody();
[ https://issues.apache.org/activemq/browse/AMQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rainer Klute closed AMQ-1490.
-
Resolution: Fixed
Fix Version/s: 5.1.0 (was: 5.0.0)
Seems to work with
I am very interested in this set of changes. I am currently
load/performance testing ActiveMQ, and am very surprised at the results.
Anything that can be done to speed up this area is a good thing. I have found
a dramatic drop in performance when adding even a single consumer,
especially to a
Consider an HTTP request/response route:
from("jetty:http://localhost:8088/someService").unmarshal().string().process(
    new Processor() {
        public void process(Exchange e) {
            String input = (String) e.getIn().getBody();
I agree. We should cut release candidates for at least NMS and
NMS.ActiveMQ, run them through a vote and make those guys officially
1.0.0.
Then work (towards 1.1) can continue on in the trunk.. and any small
bug fixes can be made to a 1.0 branch.
Regards,
Hiram
On Feb 13, 2008 2:45 PM, Jim Gomes
Hello All,
Before serious work begins on the failover transport support in the NMS
client, I think it would be good to branch or tag the source repository.
I'm not sure of the exact release protocols the Apache group uses in this
type of situation. I think it would be good to mark a stable point
Hi Rob,
That sounds like another good optimisation. I guess we probably still
need both changes, since if a large number of messages are being
injected into the system, we will still end up with a large number of
messages on the pending lists for all subscribers.
Cheers,
David
Rob Davies
I just noticed while looking at Queue.removeSubscription(), that it goes
through pagedInMessages, and effectively redelivers all messages that
were locked by the removed subscription.
My question is why is this necessary? From my understanding in the code
(which may be flawed), these
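One plausible answer to the question above: messages that were dispatched to (locked by) a subscription but never acknowledged would be stranded if the subscription simply vanished, so they must be returned to the pending list for other consumers. A minimal sketch of that bookkeeping, with hypothetical names throughout (this is an illustration of the idea, not the real ActiveMQ Queue code):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Why removing a subscription redelivers its locked messages: anything
// dispatched but unacknowledged must go back on the pending list, or no
// other consumer would ever see it. Hypothetical names throughout.
public class SubscriptionBookkeeping {
    final Deque<String> pending = new ArrayDeque<>();
    final Map<String, List<String>> lockedBySub = new HashMap<>();

    // Hand the next pending message to a subscription; it is now "locked".
    void dispatch(String subId) {
        String msg = pending.poll();
        if (msg != null) {
            lockedBySub.computeIfAbsent(subId, k -> new ArrayList<>()).add(msg);
        }
    }

    // An acknowledged message is done and releases its lock.
    void acknowledge(String subId, String msg) {
        List<String> locked = lockedBySub.get(subId);
        if (locked != null) {
            locked.remove(msg);
        }
    }

    // On removal, still-locked (unacknowledged) messages return to pending.
    void removeSubscription(String subId) {
        List<String> locked = lockedBySub.remove(subId);
        if (locked != null) {
            pending.addAll(locked);
        }
    }

    public static void main(String[] args) {
        SubscriptionBookkeeping q = new SubscriptionBookkeeping();
        q.pending.add("m1");
        q.dispatch("s1");           // m1 is now locked by s1, off the pending list
        q.removeSubscription("s1"); // s1 leaves without acking; m1 goes back
        System.out.println(q.pending.peek()); // prints m1
    }
}
```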
2008/2/13, Tully, Gary [EMAIL PROTECTED]:
Currently, the unmarshaller copies the input message to the output
message and augments it. The copy brings with it all of the HTTP
headers.
The result is that the Content-Length from the request ends up in the reply.
This breaks the HTTP response[1].
I
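The leak described above can be avoided by filtering request-specific headers out when the input message is copied to the output. A minimal self-contained sketch of the filtering idea, using plain Java maps to stand in for Camel messages (the header names are real HTTP headers, but the class and filter list are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class HeaderFilterExample {
    // Headers that describe the incoming request body and must not be
    // copied onto the outgoing reply (hypothetical filter list).
    private static final Set<String> STALE =
            Set.of("Content-Length", "Transfer-Encoding");

    // Copy the request headers for the reply, dropping stale entries.
    static Map<String, String> copyForReply(Map<String, String> requestHeaders) {
        Map<String, String> out = new HashMap<>(requestHeaders);
        out.keySet().removeAll(STALE);
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> req = new HashMap<>();
        req.put("Content-Length", "123");
        req.put("Accept", "text/plain");
        Map<String, String> reply = copyForReply(req);
        // Ordinary headers survive; the stale Content-Length does not.
        System.out.println(reply.containsKey("Content-Length")); // prints false
        System.out.println(reply.containsKey("Accept"));         // prints true
    }
}
```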
Rob and I did some performance enhancements with queues so that a
Queue.send() call was decoupled from the dispatch processing. In the
past, depending on the state of the consumers, a Queue.send() call could
take a significant amount of time. We changed it so that a single
thread was
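The decoupling idea described above can be sketched with a plain producer/consumer hand-off: send() only enqueues, and a single dispatcher thread does the potentially slow delivery to consumers. This is an illustration under those assumptions, not ActiveMQ's actual Queue implementation:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class DecoupledQueue {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final List<Consumer<String>> consumers = new CopyOnWriteArrayList<>();

    public DecoupledQueue() {
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    String msg = pending.take();           // blocks until a message arrives
                    for (Consumer<String> c : consumers) { // slow consumers no longer stall send()
                        c.accept(msg);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();        // exit the dispatch loop cleanly
            }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Returns immediately; dispatch happens on the dispatcher thread.
    public void send(String msg) {
        pending.add(msg);
    }

    public void addConsumer(Consumer<String> c) {
        consumers.add(c);
    }

    public static void main(String[] args) throws InterruptedException {
        DecoupledQueue q = new DecoupledQueue();
        BlockingQueue<String> received = new LinkedBlockingQueue<>();
        q.addConsumer(received::add);
        q.send("hello"); // fast, regardless of consumer state
        System.out.println(received.poll(2, TimeUnit.SECONDS)); // prints hello
    }
}
```

The key design point is that the cost of dispatch (consumer selection, delivery) is paid on a dedicated thread, so the producer-facing send() path stays cheap and roughly constant-time.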