> It is really difficult to distinguish between leaks and bad memory usage
> using hprof.  I usually keep 2 snapshots: 1 after a few hours, 1 after a
> few days (after a GC), and search for long-running objects allocated
> after the first snapshot and still in memory.

Yes, that's what I wanted to do ... actually, I wanted to give it a
couple of days under measurement, but the JVM kept locking up.  We'll see
how it goes now.
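The two-snapshot approach described above amounts to a diff over hprof
allocation sites: sites whose live bytes grow between an early snapshot and a
post-GC late snapshot are leak candidates.  A minimal sketch of that diff
logic, assuming site tables have already been parsed into maps (the class
name, trace labels, and byte counts here are all hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-snapshot technique: allocation sites whose live bytes
// grow between the early snapshot and the late (post-GC) snapshot are leak
// candidates; stable sites are probably just working set.
public class SnapshotDiff {

    // Returns sites whose live byte counts grew between the two snapshots.
    static Map<String, Long> leakCandidates(Map<String, Long> early,
                                            Map<String, Long> late) {
        Map<String, Long> growth = new HashMap<>();
        for (Map.Entry<String, Long> e : late.entrySet()) {
            long before = early.getOrDefault(e.getKey(), 0L);
            long delta = e.getValue() - before;
            if (delta > 0) {
                growth.put(e.getKey(), delta);
            }
        }
        return growth;
    }

    public static void main(String[] args) {
        Map<String, Long> early = new HashMap<>();
        early.put("TRACE 34780 [B", 1_600_000L);
        early.put("TRACE 5954 [C", 1_600_000L);

        Map<String, Long> late = new HashMap<>();
        late.put("TRACE 34780 [B", 4_821_600L);  // grew: leak candidate
        late.put("TRACE 5954 [C", 1_600_000L);   // stable: working set

        System.out.println(leakCandidates(early, late));
    }
}
```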

Oh, you had asked if there was anything different over the weekend.  One
thing that I have noticed is a pattern of a lot more spambot traffic,
particularly on Sundays, as if the scum figure that Sunday is the day with
least administrative observation of systems.  The traffic is generally
broadly based, rather than a few systems trying to open lots of connections.
Apparently, I need to significantly expand my list of IP subnets (mostly
dial-up) to block at the firewall.

>>   1  9.69%  9.69%  4821600     294   139416400   8501  34780 [B
>> TRACE 34780:
>>   org.gjt.mm.mysql.Buffer.<init>(<Unknown>:Unknown line)
>>   org.gjt.mm.mysql.PreparedStatement.executeQuery(<Unknown>:Unknown line)
>>   org.apache.james.mailrepository.JDBCSpoolRepository.loadPendingMessages(:276)
>> Ok, this is bothersome.  There does not appear to be any reason for
>> there to be 294 outstanding buffers taking up 4.8MB of heap.

> IIRC connector/j caches in memory for each connection the whole expanded
> configuration as chars. This uses a lot of memory.

Yeah, no kidding.  But why 294?  That would be almost, but not quite, 15
buffers for each possible connection, since I have configured a maximum of
20.  Well, we'll see what Connector/J v3 does.

>>   3  3.28% 21.42%  1631296   15245    31949344 298592   5954 [C
>>   5  3.09% 27.66%  1539240    9525    30157792 186620   5960 [B
>> TRACE 5954:
>> ...
>>  java.io.File.listFiles(File.java:998)
>>  org.apache.avalon.excalibur.monitor.DirectoryResource.testModifiedAfter(DirectoryResource.java:84)
>>  org.apache.avalon.excalibur.monitor.impl.AbstractMonitor.scanAllResources(AbstractMonitor.java:132)
>>  org.apache.avalon.excalibur.monitor.impl.ActiveMonitor.run(ActiveMonitor.java:102)
>> TRACE 5960:
>> ...
>>  java.io.File.lastModified(File.java:773)
>>  org.apache.avalon.excalibur.monitor.DirectoryResource.testModifiedAfter(DirectoryResource.java:93)
>>  org.apache.avalon.excalibur.monitor.impl.AbstractMonitor.scanAllResources(AbstractMonitor.java:132)
>>  org.apache.avalon.excalibur.monitor.impl.ActiveMonitor.run(ActiveMonitor.java:102)
>> I'm going to increase the stack depth for the next test to see where
>> these are coming from.  3MB for what?  And it isn't in our code as far
>> as I can see, so it must be some Avalon artifact.

> Excalibur-monitor should not be used by james code but only by phoenix.
> Did you change anything on the phoenix side?

Not a thing.  Stock configuration.

> If phoenix needs 3MB to keep references/hashes or anything else for
> deployed classes, I would not be so surprised.

That doesn't add up.  Hundreds of thousands of allocations, and 25K still
alive.  And I can tell you from checking that those are not allocated early
in the lifecycle.

>>   8  2.52% 35.46%  1253616     882    36346336  25572  24996 [B
>> TRACE 24996:
>>  org.gjt.mm.mysql.PreparedStatement.<init>(<Unknown>:Unknown line)
>>  org.gjt.mm.mysql.jdbc2.PreparedStatement.<init>(<Unknown>:Unknown line)
>>  org.gjt.mm.mysql.jdbc2.Connection.prepareStatement(<Unknown>:Unknown line)
>>  org.gjt.mm.mysql.jdbc2.Connection.prepareStatement(<Unknown>:Unknown line)
>>  org.apache.james.util.mordred.PoolConnEntry.prepareStatement(PoolConnEntry.java:257)
>>  org.apache.james.mailrepository.JDBCSpoolRepository.loadPendingMessages(JDBCSpoolRepository.java:272)
>>  org.apache.james.mailrepository.JDBCSpoolRepository.getNextPendingMessage(JDBCSpoolRepository.java:248)
>>  org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:192)
>>  org.apache.james.mailrepository.JDBCSpoolRepository.accept(JDBCSpoolRepository.java:122)
>> Disturbing on two accounts.  First, why are there 882 outstanding
>> PreparedStatement objects left in memory, and secondly, why didn't I
>> switch to DBCP from Mordred?  :-)

> Is there anyone supporting mordred here?  If not, we should remove any
> mordred reference from config.xml, and I would like to remove the
> mordred folder from our trunk.

Notice the smiley face.  I'd simply forgotten that I hadn't switched that
over already.  But I don't believe that it has anything to do with mordred,
which is a useful backup in case someone reports some weird show-stopper
with DBCP.  See the issue over on server-user@ today.  Not coincidentally, I
just had the exact same thing happen on my server, so I've switched back to
mordred, while leaving Connector/J v3 in place.  Mordred will try 100 times
over a 5-second period to get a connection.  By default, DBCP quits after 3
attempts.  And I remember having increased Mordred from 10 to 100 because of
this problem.
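The difference in behavior described above (up to 100 attempts spread over
roughly 5 seconds, versus giving up after 3) boils down to a bounded retry
loop.  A hedged sketch of that idea, not Mordred's actual code; the
Supplier-based shape, names, and timings are mine:

```java
import java.util.function.Supplier;

// Sketch of a bounded connection-retry policy in the spirit of the one
// described above: 100 attempts with a 50ms pause covers about 5 seconds,
// while a 3-attempt budget gives up almost immediately under load.
// The Supplier stands in for the pool; structure and names are hypothetical.
public class RetryPolicy {

    // Retries the supplier until it yields a non-null value or the attempt
    // budget is exhausted, sleeping between attempts; null means we gave up.
    static <T> T acquire(Supplier<T> source, int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            T value = source.get();
            if (value != null) {
                return value;
            }
            if (attempt < maxAttempts) {
                Thread.sleep(sleepMillis);
            }
        }
        return null; // pool stayed exhausted for the whole retry window
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Fails four times, then succeeds: a 100-attempt budget rides out
        // the transient exhaustion; a 3-attempt budget would have given up.
        Supplier<String> flaky = () -> ++calls[0] < 5 ? null : "connection";
        System.out.println(acquire(flaky, 100, 50) + " after " + calls[0] + " tries");
    }
}
```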

So leave it in for now, but be my guest to deprecate the package, with a
javadoc comment pointing users at DBCP.  We've already made DBCP the default,
and I suspect that we'll have to do some work to properly tune its use, since
I don't believe that we've stress-tested our DBCP support as thoroughly as
mordred over the years, as evidenced by the recent report.

>> So there are two incidents of MySQL apparently leaking, but not on every
>> opportunity.  And we do have explicit close calls for that allocation
>> spot.

> This is not enough to say they are leaks; they could simply be memory
> correctly allocated for the current usage.

I don't believe so, no.  There is no reason to have 882 outstanding copies
of that PreparedStatement, which is almost 45 times the maximum number of
connections for the pool.
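One thing worth double-checking about those explicit close calls is whether
they are reached on every path: any early return or exception between
prepareStatement() and close() strands the statement.  A minimal sketch of
the failure mode with a stand-in resource, so it runs without a database;
the FakeStatement class and counter are mine, not driver code, and
try-with-resources is a Java 7+ construct, well after this thread:

```java
// Demonstrates why "we have explicit close calls" can still leak: an
// exception between creation and close() skips the close.  FakeStatement
// stands in for java.sql.PreparedStatement; openStatements counts leaks.
public class CloseOnAllPaths {

    static int openStatements = 0;

    static class FakeStatement implements AutoCloseable {
        FakeStatement() { openStatements++; }
        void execute(boolean fail) {
            if (fail) throw new RuntimeException("query failed");
        }
        @Override public void close() { openStatements--; }
    }

    // Leaky shape: close() is skipped when execute() throws.
    static void leaky(boolean fail) {
        FakeStatement stmt = new FakeStatement();
        stmt.execute(fail);   // throws -> close() below never runs
        stmt.close();
    }

    // Safe shape: try-with-resources closes on every exit path.
    static void safe(boolean fail) {
        try (FakeStatement stmt = new FakeStatement()) {
            stmt.execute(fail);
        }
    }

    public static void main(String[] args) {
        try { leaky(true); } catch (RuntimeException ignored) { }
        System.out.println("after leaky: " + openStatements + " open");  // 1
        openStatements = 0;
        try { safe(true); } catch (RuntimeException ignored) { }
        System.out.println("after safe:  " + openStatements + " open");  // 0
    }
}
```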

        --- Noel



---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
