Great! Thanks for testing this!
On 25 Jan 2010, at 14:41, Joe Fernandez wrote:


Rob,

The trunk (902807) passed my test; no hurling of OOMs. Memory utilization under JConsole looked much better.

My producer didn't get kicked off when the store filled, but I think that's because in this case it was issuing async sends.
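
(For the curious, a minimal sketch - not from my actual test code - of the client-side toggle; the broker URL and queue name are placeholders. With useAsyncSend enabled, send() returns before the broker enforces its usage limits, so the ResourceAllocationException never makes it back to the producer:)

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncSendSketch {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        factory.setUseAsyncSend(true); // fire-and-forget: broker-side failures don't propagate to send()
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE")); // placeholder queue
        producer.send(session.createTextMessage("payload")); // returns immediately, even if the store is full
        connection.close();
    }
}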

Joe


rajdavies wrote:

Hi Joe,

any chance you can build from trunk and try it ?

cheers,

Rob

On 22 Jan 2010, at 20:07, Joe Fernandez wrote:


I ran my 5.3 test with the following:

<systemUsage>
    <systemUsage sendFailIfNoSpace="true">
        <memoryUsage>
            <memoryUsage limit="100mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="500 mb" name="foo"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

...and had my producer fill up the store; it tried to pump 400k messages, each 2k in size. Producer flow control was also enabled. The broker did not hurl any OOM exceptions, just "javax.jms.ResourceAllocationException: Usage Manager Store is Full. Stopping producer..." exceptions, as expected. The producer got kicked off its connection.
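
(For anyone reproducing this, a rough sketch of the producer side on a synchronous send - the URL and queue name are made up; the payload size mirrors my test:)

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ResourceAllocationException;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SyncSendSketch {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE")); // placeholder queue
        try {
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(new byte[2048]); // 2k payload, as in the test above
            producer.send(msg); // persistent sends are synchronous by default
        } catch (ResourceAllocationException e) {
            // With sendFailIfNoSpace="true", the broker throws this back to
            // the synchronous sender once the store limit is hit.
            System.err.println("Rejected: " + e.getMessage());
        } finally {
            connection.close();
        }
    }
}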

The only time I got OOMs was when I used this:

<systemUsage>
   <systemUsage sendFailIfNoSpace="true">
   </systemUsage>
</systemUsage>

Joe



Daniel Kluesing wrote:

So I'm not sure if that's what Rob is talking about being fixed in 5.4 (and I'll try the snapshot as soon as it's ready), but if I don't have sendFailIfNoSpace then my understanding is the producer's send calls block/wait/timeout - as opposed to fail - so it's more difficult to get into an HA configuration. It's a minor point; not having an OOM is much more important, but I definitely want the send calls to fail for the producer if the broker ever does anything funny.
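
(A sketch of the one client-side knob I know of for this - sendTimeout on the ActiveMQConnectionFactory; I haven't verified how it interacts with flow control on persistent sends, so treat this as an assumption:)

import org.apache.activemq.ActiveMQConnectionFactory;

public class SendTimeoutSketch {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        // Assumption: with a sendTimeout set, a send that the broker blocks
        // (e.g. under producer flow control) fails after 5 seconds instead
        // of blocking indefinitely.
        factory.setSendTimeout(5000);
    }
}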

Thanks for the feedback on the config, very helpful.

-----Original Message-----
From: Joe Fernandez [mailto:joe.fernan...@ttmsolutions.com]
Sent: Thursday, January 21, 2010 1:01 PM
To: users@activemq.apache.org
Subject: RE: OOM with high KahaDB index time


Just for grins, I threw your cfg file into our 5.3 testbed and sure enough, we got the OOMs; I pumped 200k messages, each 2k in size. FWIW, taking this out of the cfg file made things run a lot better.

<systemUsage>
   <systemUsage sendFailIfNoSpace="true">
   </systemUsage>
</systemUsage>

With the above taken out of the cfg file, I was able to pump 400k messages into the broker with no OOMs, and memory utilization looked much better. I also gave a fully-defined systemUsage a try, and that also appeared to do the trick.

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="100 mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="1 gb" name="foo"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

So it may be worth giving it a whirl if you can't scoot over to the trunk and ride Rob's patch.

Joe


Daniel Kluesing wrote:

I tried the suggestion of going with the default cursor, but I still get OOM errors. I've included my full config file below; I think I'm running fairly vanilla/default.

After about 350k persistent messages, the logs start to look like:

INFO | Slow KahaDB access: Journal append took: 10 ms, Index update took 3118 ms
INFO | Slow KahaDB access: Journal append took: 0 ms, Index update took 5118 ms
INFO | Slow KahaDB access: Journal append took: 0 ms, Index update took 2736 ms
INFO | Slow KahaDB access: Journal append took: 0 ms, Index update took 2945 ms
INFO | Slow KahaDB access: Journal append took: 33 ms, Index update took 2654 ms
INFO | Slow KahaDB access: Journal append took: 82 ms, Index update took 3174 ms
INFO | Slow KahaDB access: Journal append took: 1 ms, Index update took 5891 ms
INFO | Slow KahaDB access: Journal append took: 0 ms, Index update took 2906 ms
INFO | Slow KahaDB access: Journal append took: 60 ms, Index update took 7619 ms
Exception in thread "InactivityMonitor WriteCheck" java.lang.OutOfMemoryError: Java heap space
    at java.util.jar.Attributes.read(Attributes.java:377)
    at java.util.jar.Manifest.read(Manifest.java:182)
    at java.util.jar.Manifest.<init>(Manifest.java:52)
    at java.util.jar.JarFile.getManifestFromReference(JarFile.java:165)
    at java.util.jar.JarFile.getManifest(JarFile.java:146)
    at sun.misc.URLClassPath$JarLoader$2.getManifest(URLClassPath.java:693)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:221)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
    at org.apache.activemq.transport.InactivityMonitor.writeCheck(InactivityMonitor.java:132)
    at org.apache.activemq.transport.InactivityMonitor$2.run(InactivityMonitor.java:106)
    at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)

Config file:

<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:amq="http://activemq.apache.org/schema/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://activemq.apache.org/schema/core
        http://activemq.apache.org/schema/core/activemq-core.xsd">

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.base}/conf/credentials.properties</value>
        </property>
    </bean>

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="sub01chi" dataDirectory="${activemq.base}/data">

        <managementContext>
            <managementContext createConnector="true"/>
        </managementContext>

        <persistenceAdapter>
            <kahaDB directory="${activemq.base}/data/kahadb"/>
        </persistenceAdapter>

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry queue="P>" producerFlowControl="true" memoryLimit="10mb"/>
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <systemUsage>
            <systemUsage sendFailIfNoSpace="true">
            </systemUsage>
        </systemUsage>

        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
        </transportConnectors>
    </broker>

    <import resource="jetty.xml"/>
</beans>

-----Original Message-----
From: Rob Davies [mailto:rajdav...@gmail.com]
Sent: Monday, January 18, 2010 10:42 PM
To: users@activemq.apache.org
Subject: Re: OOM with high KahaDB index time


On 18 Jan 2010, at 22:14, Daniel Kluesing wrote:

Hi,

I'm running the 5.3 release as a standalone broker. In one case, a producer is running without a consumer, producing small, persistent messages, with the FileCursor pendingQueuePolicy (per https://issues.apache.org/activemq/browse/AMQ-2512) and the flow control memoryLimit set to 100mb for the queue in question (through a policy entry).
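
(For reference, the policy entry I mean looks roughly like this - the destination pattern mirrors my config below, and I believe fileQueueCursor is the element that selects the FileCursor:)

<policyEntry queue="P>" producerFlowControl="true" memoryLimit="100mb">
    <pendingQueuePolicy>
        <fileQueueCursor/>
    </pendingQueuePolicy>
</policyEntry>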

As the queue grows above 300k messages, KahaDB indexing starts climbing above 1 second. At around 350k messages, the indexing is taking over 8 seconds. At this point, I start getting Java out-of-heap-space errors in essentially random parts of the code. After a while, the producers time out with a channel-inactive-for-too-long error, and the entire broker basically wedges itself. At this point, consumers are generally unable to bind to the broker, quitting with timeout errors. When they can connect, consuming a single message triggers an index rebuild, which takes 2-8 seconds. With verbose garbage collection turned on, the JVM is collecting like mad but reclaiming no space.

If I restart the broker, it comes back up, I can consume the old messages, and it can handle another 350k messages until it wedges.

I can reproduce under both default gc and incremental gc.

Two questions:

- It seems like someone is holding onto a handle to the messages after they have been persisted to disk. Is this a known issue? Should I open a JIRA for it? (Or is there another explanation?)

- Is there any documentation about the internals of KahaDB - the kinds of indices, etc.? I'd like to get a better understanding of the index performance and, in general, how KahaDB compares to something like BerkeleyDB.

Thanks





There is some confusion over the naming of our persistence options that doesn't help. There is Kaha - which uses multiple log files and a hash-based index, and is currently used by the FileCursor - whilst KahaDB is a newer implementation, which is more robust and typically uses a BTreeIndex. (There is currently a new implementation of the FileCursor, btw, but that's a different matter.) You can't currently configure the HashIndex via the FileCursor, but it looks like this is the problem you are encountering: it looks like you need to increase the max hash buckets.


So I would recommend the following:

1. Use the default pendingQueuePolicy (which only uses a FileCursor for non-persistent messages, and uses the underlying database for persistent messages).
2. Try KahaDB, which - with the BTreeIndex - will not hit the problems you are seeing with the FileCursor.

Or increase the maximum number of hash buckets for the FileCursor index by setting a Java system property, maximumCapacity, to 65536 (the default is 16384).
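
For example, assuming your startup script passes ACTIVEMQ_OPTS straight through to the broker's JVM (adjust for however you launch it):

export ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -DmaximumCapacity=65536"
bin/activemq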

cheers,

Rob

http://twitter.com/rajdavies
I work here: http://fusesource.com
My Blog: http://rajdavies.blogspot.com/
I'm writing this: http://www.manning.com/snyder/



Rob Davies
http://twitter.com/rajdavies
I work here: http://fusesource.com
My Blog: http://rajdavies.blogspot.com/
I'm writing this: http://www.manning.com/snyder/



Rob Davies
http://twitter.com/rajdavies
I work here: http://fusesource.com
My Blog: http://rajdavies.blogspot.com/
I'm writing this: http://www.manning.com/snyder/
