Thanks Quinn, 

    <groupId>org.apache.camel</groupId>
    <artifactId>camel-parent</artifactId>
    <version>2.22.1</version>

    <groupId>org.apache.camel</groupId>
    <artifactId>camel-mllp</artifactId>
    <version>2.21.1</version>
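
One thing I notice re-reading this: the camel-mllp version above doesn't 
match camel-parent.  If maxConcurrentConsumers only exists in the 2.22.x line 
(an assumption I haven't confirmed), aligning the versions would look like:

    <groupId>org.apache.camel</groupId>
    <artifactId>camel-mllp</artifactId>
    <version>2.22.1</version>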

On Friday, January 4, 2019, 03:39:56 PM EST, Quinn Stevenson 
<qu...@pronoia-solutions.com> wrote:
 Hmmm… what version of Camel is this?

Quinn Stevenson
qu...@pronoia-solutions.com
(801) 244-7758



> On Jan 4, 2019, at 10:06 AM, John F. Berry <bohnje...@yahoo.com.INVALID> 
> wrote:
> 
> 
> 
> Thanks Quinn, 
> 
> I've attempted to increase the consumers with the URI option 
> maxConcurrentConsumers=15 
> [from("mllp://0.0.0.0:9020?maxConcurrentConsumers=15")] to see if there's a 
> change.
> I get an error on execution:
> Unknown parameters=[{maxConcurrentConsumers=15}]
> I've also tried the more common Camel option name "concurrentConsumers", 
> with the same result.
> 
> On Wednesday, January 2, 2019, 11:17:15 AM EST, Quinn Stevenson 
> <qu...@pronoia-solutions.com> wrote: 
> 
> Sorry for the late reply on this - I was on vacation :-)
> 
> When I wrote camel-mllp, I had to deal with a somewhat similar situation.  I 
> had multiple systems sending to the same host:port (same type of data, 
> different source systems).  So camel-mllp will allow multiple/concurrent 
> connections and should handle them just fine.
> 
> I’d have to see the exact ERROR/WARN messages to be sure, but I’m guessing 
> the sending system is sending messages frequently enough that camel-mllp 
> can’t detect closed connections fast enough, so you’re hitting the 
> maximum number of consumers (which defaults to 5).  You may need to increase 
> that, as well as lower some of the other timeouts, so that sockets get 
> cleaned up a little quicker.
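> 
> Something like this sketch, for example - note these option names 
> (maxConcurrentConsumers, idleTimeout, readTimeout) are assumptions to 
> verify against the camel-mllp docs for your release:
> 
>     from("mllp://0.0.0.0:9020"
>             + "?maxConcurrentConsumers=15"   // raise the connection limit (default 5)
>             + "&idleTimeout=30000"           // reset idle client sockets after 30s
>             + "&readTimeout=5000")           // give up sooner on stalled reads
>         .to("direct:handleHl7");             // placeholder for your processing route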
> 
> For large messages with camel-mllp, you’ll probably want to adjust the buffer 
> sizes to handle them a little better.
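> 
> Again just a sketch - receiveBufferSize and sendBufferSize control the 
> socket buffers, and 1 MB here is an arbitrary starting point for 
> multi-megabyte payloads, not a tested recommendation:
> 
>     from("mllp://0.0.0.0:9020"
>             + "?receiveBufferSize=1048576"   // 1 MB receive buffer for large messages
>             + "&sendBufferSize=1048576")     // matching send buffer
>         .to("direct:handleHl7");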
> 
> HTH
> 
> 
> 
>> On Dec 28, 2018, at 9:41 AM, John F. Berry <bohnje...@yahoo.com.INVALID> 
>> wrote:
>> 
>> It seems over the years an understanding existed in client/server HL7 
>> interfaces that one client sends one message at a time to one port.  Any 
>> more and you're asking for trouble.  We recently moved an interface I have 
>> referenced here before off of our interface engine to a direct 
>> point-to-point with Camel.  What we discovered is that the sending system 
>> creates new work instances based on outbound events, and that some of these 
>> can occur simultaneously.  mllp and/or the general Camel from/to context 
>> does not like this.. I get warnings about messages being rejected because 
>> there is only one consumer.. which is normally how I would agree it should 
>> work.  The interface engine, however, seems able to accommodate this.. all 
>> except for overly huge messages.  This interface carries a base64 encoded 
>> pdf in one HL7 field that I decode and place on a file system.  Get a 30 or 
>> 60 page pdf base64 encoded and you get quite the payload.. thus the need to 
>> skip the interface engine, even though it did serialize my traffic for me.
>> 
>> I attempted to not process, but to dump the message into a queue, thinking 
>> that the extra "conversation" and subsequent WARN message from Camel about 
>> max consumers came from me delaying the ACK while I process.. but later I 
>> put it back, since it occurred to me that I should not begin receiving 
>> another message until I have ACK/NAK'ed the current message... or we've 
>> timed out.  Neither was the case.
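>> 
>> (For reference, the queue attempt was roughly the pattern below - 
>> "seda:hl7In" and "direct:decodePdf" are placeholders, not my exact 
>> endpoints:)
>> 
>>     from("mllp://0.0.0.0:9020")
>>         .to("seda:hl7In?size=1000&blockWhenFull=true");  // hand off, then ACK
>> 
>>     from("seda:hl7In")
>>         .to("direct:decodePdf");  // decode the base64 pdf downstream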
>> 
>> So my long winded history is to ask.. can I/should I attempt to allow 
>> multiple consumers, and how might I accomplish this in Camel?  This 
>> particular situation, unlike normal HL7, will not have a problem if the 
>> arrival order of messages is not completely linear.  Each event is 
>> completely unrelated to the next, etc.
>> 
>> I attempted to simply replace mllp with netty and mina, figuring that 
>> perhaps those might have better handling for this situation, but neither 
>> likes the big messages; both give me excessive frame errors.
>> 
>> I have, in an additional issue, exceeded my JVM heap size with some big 
>> messages.. I'm attempting to handle that on the JVM side, but figured I'd 
>> mention it to convey the massive size of some of these messages.  So if 
>> there's an idea of running a "from.. to.. javaq" and processing later.. I 
>> need to accommodate that in the queue I build as well.
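>> 
>> (On the JVM side that just means a larger max heap - the size here is 
>> arbitrary and the jar name is a placeholder:)
>> 
>>     java -Xmx2g -jar my-camel-app.jar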
>> 
>> Thanks all!