Hi Norman,
There are not that many .m64 files, so even if they show that a mail
processing didn't reach the end and may have caused a leak, I don't think
that alone would crash my process so quickly (between 10 and 24 hours).
I quickly analysed a few dumps last week: there was always one class that
took 30-40% of the memory. Sometimes an ActiveMQ class, sometimes not
(I don't remember the details).
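In case it helps to reproduce: the quick check I do before opening a full
dump is a class histogram with the JDK's jmap tool (Sun JDK assumed; the
pid is a placeholder):

  jmap -histo:live <pid> | head -20

That lists the classes holding the most instances and bytes, which gives a
rough version of the same picture as the full dump analysis.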
Also notable: the stack trace (the place where the OOM surfaces) varies.
Today's stack was:
org.apache.camel.RuntimeCamelException: org.apache.camel.CamelExchangeException: Error processing Exchange. Exchange[JmsMessage: ActiveMQObjectMessage {commandId = 2102, responseRequired = true, messageId = ID:srv001-51032-1270011735798-2:0:24:1:214, originalDestination = null, originalTransactionId = null, producerId = ID:srv001-51032-1270011735798-2:0:24:1, destination = queue://processor.root, transactionId = null, expiration = 0, timestamp = 1270014325685, arrival = 0, brokerInTime = 1270014326893, brokerOutTime = 1270014333023, correlationId = null, replyTo = null, persistent = true, type = null, priority = 4, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = org.apache.activemq.util.byteseque...@69b568d0, marshalledProperties = null, dataStructure = null, redeliveryCounter = 0, size = 10718, properties = null, readOnlyProperties = true, readOnlyBody = true, droppable = false}]. Caused by: [java.lang.OutOfMemoryError - Java heap space]
  at org.apache.camel.util.ObjectHelper.wrapRuntimeCamelException(ObjectHelper.java:1055)
  at org.apache.camel.spring.spi.TransactionErrorHandler$1.doInTransactionWithoutResult(TransactionErrorHandler.java:154)
...
I also think the tmp files are not the main cause.
I now have James running in Eclipse and have already taken a little trip
through the code. Quite impressive since the last time I looked at it.
I will try to stress James with a small SMTP/POP3 client and see what
happens (possibly with the Eclipse profiler). I used Postage before, but a
simple class built on commons-net may be easier, along the lines of the
sketch below.
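A minimal sketch of what I mean, assuming commons-net on the classpath;
host, port, loop count and addresses are placeholders, not my real setup:

  import java.io.Writer;
  import org.apache.commons.net.smtp.SMTPClient;
  import org.apache.commons.net.smtp.SimpleSMTPHeader;

  public class SmtpStress {
      public static void main(String[] args) throws Exception {
          for (int i = 0; i < 10000; i++) {
              SMTPClient client = new SMTPClient();
              client.connect("localhost", 25);            // James SMTP port
              client.login();                             // HELO
              client.setSender("sender@example.com");     // placeholder sender
              client.addRecipient("unknown@example.com"); // unknown user, known domain
              Writer w = client.sendMessageData();        // starts DATA (null check omitted)
              w.write(new SimpleSMTPHeader("sender@example.com",
                      "unknown@example.com", "stress " + i).toString());
              w.write("test body " + i + "\r\n");
              w.close();
              client.completePendingCommand();            // terminates DATA
              client.logout();
              client.disconnect();
          }
      }
  }

Hammering the local-address-error path this way should show whether the
.m64 files pile up and whether memory climbs.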
That will be for this weekend on my side.
Bye,
Eric
On 03/31/2010 07:11 PM, Norman Maurer wrote:
Hi Eric,
thx for keeping us in the loop... I'm still not sure why the .m64 files
sometimes remain in the tmp folder, but I suspect the files are not the
cause of the OOM.
I haven't had an OOM since yesterday morning (when I deployed the current
trunk version), but I just found two new .m64 files..
So I'm still searching for the real cause. If nothing helps I will need
to use a profiler to find the leak.
Bye,
Norman
2010/3/31 Eric Charles<eric.char...@u-mangate.com>:
Hi Norman,
I had defined the Null mailet
<mailet match="HostIsLocal" class="Null">
<processor> local-address-error</processor>
<notice>550 - Requested action not taken: no such user here</notice>
</mailet>
but now I use the standard config:
<mailet match="HostIsLocal" class="ToProcessor">
<processor> local-address-error</processor>
<notice>550 - Requested action not taken: no such user here</notice>
</mailet>
I still have the OOM.
I will now deploy a fresh svn checkout with your latest commits and enable
the ValidRcptHandler, roughly as sketched below.
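(For the archives: by "enable" I mean adding it to the SMTP handler chain
in smtpserver.xml. The exact class/package name should be taken from
trunk's smtpserver.xml; the one below is from memory, so treat it as an
assumption:)

  <handlerchain>
    ...
    <handler class="org.apache.james.smtpserver.core.filter.fastfail.ValidRcptHandler"/>
    ...
  </handlerchain>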
I'll keep you posted on the result,
Tks,
Eric
On 03/30/2010 06:49 AM, Norman Maurer wrote:
Hi Eric,
you said all the files are related to address-errors; could you show me
your address-error processor config?
Does the problem still exist when you enable the ValidRcptHandler in the
smtpserver.xml file?
Thx
Norman
2010/3/30, Eric Charles<eric.char...@u-mangate.com>:
Oops, no, the files are still there (only the unkn...@known.com ones).
Eric
On 03/29/2010 10:16 PM, Eric Charles wrote:
Hi Norman,
I just deployed your new commit with the new Camel DisposeProcess.
Good news: I no longer see the .m64 files in tmp (well, I see one file,
but a second later it is gone, so the dispose works as it should).
I'll keep you posted if any future OOM shows up.
Tks,
Eric
On 03/29/2010 09:18 PM, Eric Charles wrote:
Hi Norman,
The .m64 files are all for an "unknown user" at a "well-known domain" (so
for <unkn...@known.com>).
They are of various sizes (with and without attachments).
They are well formed (I can open a downloaded file with Thunderbird).
However, when I send a mail to unkn...@known.com, I don't see it in /tmp.
Tks,
Eric
On 03/29/2010 08:43 PM, Norman Maurer wrote:
Hi Eric,
sure.. we have to find the OOM cause. What would be interesting: could
you check somehow whether the .m64 files are related to successfully
delivered mail?
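One way to check, if the .m64 files are plain mail files: grep the
Message-ID out of each one and look the IDs up in the mail log to see
whether delivery completed, e.g.

  grep -m1 -i "^Message-ID:" /tmp/*.m64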
Thx,
Norman
2010/3/29 Eric Charles<eric.char...@u-mangate.com>:
Norman,
Done. Now we have to wait...
I saw some *.m64 files being created and then removed, but there are
others that remain in /tmp.
I sometimes run jmap (Java memory map) to produce a heap dump and analyse
it. At the beginning everything seems OK, and often, when I come back,
the OOM has already occurred.
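For reference, the jmap invocation I mean (the pid is a placeholder; the
resulting file can be opened in a heap analyser such as Eclipse MAT):

  jmap -dump:live,format=b,file=/tmp/james-heap.hprof <pid>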
I changed -Xmx512m to -Xmx256m to get a quicker exception (if any; let's
hope not).
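On top of the smaller heap, the JVM can also write the dump automatically
at the moment of the OOM, which avoids racing it with jmap. These HotSpot
flags exist on the Sun JDK; the path is just an example:

  -Xmx256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp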
The *.m64 files are a clue and we should ensure that they are removed in
all circumstances. This may be the easy part, as we have some visible
clues (the files). After that, we should also make sure there are no
other causes of the leaks.
Tks,
Eric
On 03/29/2010 08:00 PM, Norman Maurer wrote:
Hi Eric,
I found the cause of the temporary files not being deleted. Hopefully
this is also the cause of the OOM. Could you try to svn up and run again?
Thx,
Norman
2010/3/29 Norman Maurer<norman.mau...@googlemail.com>:
Hi Eric,
thx for the report. I see exactly the same problem here today.. (the
OOM). I didn't notice the files in the tmp folder, but I think that's a
good pointer. I will try to debug the problem later or tomorrow. But I
suspect you are right about the tmp files and JMS producers..
Did you run a kill -3 <pid> to see what threads are active etc.?
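(In case it's useful: kill -3 makes the JVM print the thread dump to its
stdout, so it lands wherever the James console output is redirected. The
JDK's jstack writes it to your terminal instead; pid/path are
placeholders:

  kill -3 <pid>
  jstack <pid> > /tmp/james-threads.txt )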
Thx,
Norman
Ps: Patches are welcome :)
2010/3/29 Eric Charles<eric.char...@u-mangate.com>:
Hi,
Over the last two weeks, I deployed various trunk snapshots. After James
had been running for less than a day, I always ran into an OutOfMemory
exception.
I analysed the logs (for example STACK TRACE 1 in annex) and always found
Camel complaining when trying to deliver to a JMS queue.
I tried to adapt the ActiveMQ configuration based on
http://activemq.apache.org/javalangoutofmemory.html, but nothing helped.
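For reference, the kind of systemUsage limits that page points at in
activemq.xml; the values here are only illustrative, not the exact ones I
tried:

  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="64 mb"/>
      </memoryUsage>
      <tempUsage>
        <tempUsage limit="1 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>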
I also analysed various heap dumps: sometimes 65% was used by
org.apache.mina.transport.socket.nio.NioSocketSession, sometimes by an
ActiveMQ class, ...