A similar fix: removing the deterministic task runner and reverting to
the pooled/dedicated task runners with a wake-up count does the trick. The
task queue size in this case is always < 2.
Can you do an svn up to validate?

Thanks for the heads-up on your fix. I think it is best not to limit the
pending wakeups to the max page size, as that may result in a hung destination.
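
For reference, a rough sketch of the wake-up count idea (the class and
field names below are illustrative, not the exact trunk code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical stand-in for the destination, showing only the
    // coalescing wakeup logic: every request is counted, but at most one
    // task is ever queued, so the executor queue stays below 2.
    class CoalescingWakeup {
        private final AtomicLong pendingWakeups = new AtomicLong();
        private final ExecutorService executor =
                Executors.newSingleThreadExecutor();

        void asyncWakeup() {
            // Only the 0 -> 1 transition schedules a task; later requests
            // just bump the counter and are serviced by the running drain.
            if (pendingWakeups.incrementAndGet() == 1) {
                executor.execute(new Runnable() {
                    public void run() {
                        drain();
                    }
                });
            }
        }

        private void drain() {
            long seen;
            do {
                seen = pendingWakeups.get();
                iterate(); // one round of dispatch
                // Subtract what was serviced; if more wakeups arrived in
                // the meantime, loop again so no request is ever ignored.
            } while (pendingWakeups.addAndGet(-seen) > 0);
        }

        void iterate() {
            // dispatch pending messages to consumers
        }
    }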

2009/11/13 afei <afei1...@126.com>

>
> The test scenario: after accumulating a large number of messages (about 5
> million) in the queue, begin consuming them.
>
> I made a modification; see the attachment (
> http://old.nabble.com/file/p26328726/Queue.java Queue.java ). It looks
> OK.
> The test xml configuration is attached.
>
>
> Gary Tully wrote:
> >
> > https://issues.apache.org/activemq/browse/AMQ-2483 is tracking this.
> > Could you attach your test case and/or configuration to the jira issue?
> > thx. It should be possible to reuse an existing task for some of the
> > iterations, but it is important that no wakeup request is ignored, so
> > that the destination keeps responding.
> >
> > 2009/11/12 afei <afei1...@126.com>
> >
> >>
> >> When consuming a large number of messages, the method asyncWakeup() is
> >> invoked constantly, so the executor accumulates a great number of
> >> runnables that call back into Queue.iterate(). But Queue.iterate() is
> >> much slower than the rate at which runnables are added to the executor,
> >> and this results in OOM.
> >> The picture of the memory dump:
> >>
> >> Can we avoid invoking asyncWakeup() so frequently?
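> >>
> >> The pattern that causes the pile-up looks roughly like this (a
> >> hypothetical illustration, not the actual trunk code):
> >>
> >>     import java.util.concurrent.ExecutorService;
> >>     import java.util.concurrent.Executors;
> >>
> >>     // Every wakeup submits a new runnable unconditionally, so when
> >>     // wakeups arrive faster than iterate() completes, the executor's
> >>     // unbounded work queue grows until the heap is exhausted.
> >>     class UnboundedWakeup {
> >>         private final ExecutorService executor =
> >>                 Executors.newSingleThreadExecutor();
> >>
> >>         void asyncWakeup() {
> >>             executor.execute(new Runnable() {
> >>                 public void run() {
> >>                     iterate(); // slow: pages in and dispatches messages
> >>                 }
> >>             });
> >>         }
> >>
> >>         void iterate() {
> >>             // dispatch pending messages to consumers
> >>         }
> >>     }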
> >>
> >>
> >> Gary Tully wrote:
> >> >
> >> > iterate() actually does a dispatch. When consumers change etc., we
> >> > again try to dispatch, to ensure prefetch buffers are maintained.
> >> >
> >> > Do an svn up to r834579, as I committed a fix to trunk for this today,
> >> > resolving https://issues.apache.org/activemq/browse/AMQ-2481
> >> >
> >> > 2009/11/10 afei <afei1...@126.com>:
> >> >>
> >> >> In org.apache.activemq.broker.region, why are there so many
> >> >> invocations of asyncWakeup(), and what is the method iterate() doing?
> >> >>
> >> >>
> >> >> afei wrote:
> >> >>>
> >> >>> In addition, here is another OOM problem.
> >> >>>
> >> >>>
> >> >>>
> >> >>> Gary Tully wrote:
> >> >>>>
> >> >>>> fyi: you can disable periodic message expiry processing using a
> >> >>>> destination policy entry that sets expireMessagesPeriod = 0
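> >> >>>>
> >> >>>> For example, something along these lines in activemq.xml (the
> >> >>>> queue=">" wildcard here is just an illustration; scope it as needed):
> >> >>>>
> >> >>>>     <destinationPolicy>
> >> >>>>         <policyMap>
> >> >>>>             <policyEntries>
> >> >>>>                 <!-- 0 disables the periodic expiry browse -->
> >> >>>>                 <policyEntry queue=">" expireMessagesPeriod="0"/>
> >> >>>>             </policyEntries>
> >> >>>>         </policyMap>
> >> >>>>     </destinationPolicy>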
> >> >>>>
> >> >>>> 2009/11/9 afei <afei1...@126.com>:
> >> >>>>>
> >> >>>>>
> >> >>>>> When there are a large number of messages in the queue and no
> >> >>>>> consumer, or the consumer is very slow, the OOM problem occurs,
> >> >>>>> because in org.apache.activemq.broker.region.Queue, line 588 is:
> >> >>>>>  doBrowse(true, browsedMessages, this.getMaxExpirePageSize());
> >> >>>>> Changing it to:
> >> >>>>>  doBrowse(false, browsedMessages, this.getMaxExpirePageSize());
> >> >>>>> fixes it.
> >> >>>>>
> >> >>>>>
> >> >>>>> Dejan Bosanac wrote:
> >> >>>>>>
> >> >>>>>> Hi Mitch,
> >> >>>>>>
> >> >>>>>> yeah, I said in the thread I was referring to that it is working
> >> >>>>>> with the "regular" stomp connector. I started investigating the
> >> >>>>>> AMQ-2440 patch the other day; I should have something soon.
> >> >>>>>>
> >> >>>>>> Cheers
> >> >>>>>> --
> >> >>>>>> Dejan Bosanac - http://twitter.com/dejanb
> >> >>>>>>
> >> >>>>>> Open Source Integration - http://fusesource.com/
> >> >>>>>> ActiveMQ in Action - http://www.manning.com/snyder/
> >> >>>>>> Blog - http://www.nighttale.net
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Wed, Oct 28, 2009 at 6:18 PM, Mitch Granger
> >> >>>>>> <mitch.gran...@sophos.com> wrote:
> >> >>>>>>
> >> >>>>>>> So we turned off stomp+nio and went back to plain old stomp, and
> >> >>>>>>> so far it's working fine.  New(IO) isn't always better, I guess :-)
> >> >>>>>>>
> >> >>>>>>> Seems like maybe it's this issue ->
> >> >>>>>>> https://issues.apache.org/activemq/browse/AMQ-2440
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> afei wrote:
> >> >>>>>>>
> >> >>>>>>>> I have the same problem.
> >> >>>>>>>>
> >> >>>>>>>> http://www.nabble.com/file/p26093204/aaaaaa.jpg aaaaaa.jpg
> >> >>>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> themitchy wrote:
> >> >>>>>>>>
> >> >>>>>>>>> This is what we've done to tune so far (see the config sketch
> >> >>>>>>>>> below):
> >> >>>>>>>>>
> >> >>>>>>>>>  - UseDedicatedTaskRunner=false
> >> >>>>>>>>>  - flow control is off
> >> >>>>>>>>>  - stomp transport uses transport.closeAsync=false
> >> >>>>>>>>>
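> >> >>>>>>>>> Concretely, those settings look roughly like this (the URI and
> >> >>>>>>>>> the queue=">" wildcard are illustrative; adjust to your setup):
> >> >>>>>>>>>
> >> >>>>>>>>>     # JVM flag in the broker start script:
> >> >>>>>>>>>     -Dorg.apache.activemq.UseDedicatedTaskRunner=false
> >> >>>>>>>>>
> >> >>>>>>>>>     <!-- activemq.xml: synchronous close on the stomp connector -->
> >> >>>>>>>>>     <transportConnector name="stomp"
> >> >>>>>>>>>         uri="stomp://0.0.0.0:61613?transport.closeAsync=false"/>
> >> >>>>>>>>>
> >> >>>>>>>>>     <!-- activemq.xml: producer flow control off per destination -->
> >> >>>>>>>>>     <policyEntry queue=">" producerFlowControl="false"/>
> >> >>>>>>>>>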
> >> >>>>>>>>> I agree that it is because of the high number of open/close
> >> >>>>>>>>> connections from Stomp.  When we monitor through JConsole we can
> >> >>>>>>>>> see more threads starting up for each new connection.  The
> >> >>>>>>>>> problem is that these threads don't get let go.  Even though the
> >> >>>>>>>>> stomp clients are disconnecting, the number of threads that get
> >> >>>>>>>>> released is less than the number created.  So the thread count
> >> >>>>>>>>> goes up and up until it fails.  All of the above settings/tuning
> >> >>>>>>>>> only delay when it will hit the wall.
> >> >>>>>>>>>
> >> >>>>>>>>> Dejan Bosanac wrote:
> >> >>>>>>>>>
> >> >>>>>>>>>> Hi Mitch,
> >> >>>>>>>>>>
> >> >>>>>>>>>> I think the root cause of this problem is that you probably
> >> >>>>>>>>>> have Stomp clients that open/close connections at a high rate.
> >> >>>>>>>>>> I simulated this problem on OSX with a StompLoadTest (
> >> >>>>>>>>>> http://svn.apache.org/viewvc/activemq/trunk/activemq-core/src/test/java/org/apache/activemq/transport/stomp/StompLoadTest.java?view=log
> >> >>>>>>>>>> ), while trying to reproduce the "too many open files" problem.
> >> >>>>>>>>>> You can find some of my findings (and a workaround) in this
> >> >>>>>>>>>> thread:
> >> >>>>>>>>>>
> >> >>>>>>>>>> http://www.nabble.com/%22too-many-open-files%22-error-with-5.3-and-Stomp-tt25888831.html#a26010080
> >> >>>>>>>>>>
> >> >>>>>>>>>> (BTW, it is producing the "too many open files" problem on
> >> >>>>>>>>>> linux.) Basically, the problem with stomp is that every send is
> >> >>>>>>>>>> done in a separate connection and is thus considered to be a
> >> >>>>>>>>>> new producer for every message. So when producer flow control
> >> >>>>>>>>>> is hit, the producers pile up and probably do not release their
> >> >>>>>>>>>> connections. Thus you can observe a large number of tcp
> >> >>>>>>>>>> connections on the system in state TIME_WAIT (and TIME_CLOSE),
> >> >>>>>>>>>> which causes the system limit to be hit at some point. In the
> >> >>>>>>>>>> above thread you can find a workaround that worked for me for
> >> >>>>>>>>>> that test. I started investigating this more and hopefully I'll
> >> >>>>>>>>>> have some more findings in the near future.
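> >> >>>>>>>>>>
> >> >>>>>>>>>> To make the anti-pattern concrete: a hypothetical client like
> >> >>>>>>>>>> the following (not the actual test code) opens a fresh
> >> >>>>>>>>>> connection per send, which is exactly what piles up producers
> >> >>>>>>>>>> and sockets:
> >> >>>>>>>>>>
> >> >>>>>>>>>>     import java.io.OutputStream;
> >> >>>>>>>>>>     import java.net.Socket;
> >> >>>>>>>>>>
> >> >>>>>>>>>>     public class NaiveStompSender {
> >> >>>>>>>>>>         public static void main(String[] args) throws Exception {
> >> >>>>>>>>>>             for (int i = 0; i < 10000; i++) {
> >> >>>>>>>>>>                 // One connection per message: the broker sees a
> >> >>>>>>>>>>                 // brand-new producer each time, and each closed
> >> >>>>>>>>>>                 // socket lingers in TIME_WAIT on the OS.
> >> >>>>>>>>>>                 Socket s = new Socket("localhost", 61613);
> >> >>>>>>>>>>                 OutputStream out = s.getOutputStream();
> >> >>>>>>>>>>                 out.write("CONNECT\n\n\0".getBytes());
> >> >>>>>>>>>>                 out.write("SEND\ndestination:/queue/test\n\nhi\0"
> >> >>>>>>>>>>                         .getBytes());
> >> >>>>>>>>>>                 out.write("DISCONNECT\n\n\0".getBytes());
> >> >>>>>>>>>>                 out.flush();
> >> >>>>>>>>>>                 s.close();
> >> >>>>>>>>>>             }
> >> >>>>>>>>>>         }
> >> >>>>>>>>>>     }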
> >> >>>>>>>>>>
> >> >>>>>>>>>> Cheers
> >> >>>>>>>>>> --
> >> >>>>>>>>>> Dejan Bosanac - http://twitter.com/dejanb
> >> >>>>>>>>>>
> >> >>>>>>>>>> Open Source Integration - http://fusesource.com/
> >> >>>>>>>>>> ActiveMQ in Action - http://www.manning.com/snyder/
> >> >>>>>>>>>> Blog - http://www.nighttale.net
> >> >>>>>>>>>>
> >> >>>>>>>>>>
> >> >>>>>>>>>> On Tue, Oct 27, 2009 at 1:07 AM, Mitch Granger
> >> >>>>>>>>>> <mitch.gran...@sophos.com> wrote:
> >> >>>>>>>>>>
> >> >>>>>>>>>>> Update: We've [nearly] proven that this only happens with AMQ
> >> >>>>>>>>>>> running on openVZ.  What exactly is causing it, we're still
> >> >>>>>>>>>>> not sure.  After the memoryUsage limit is met, the number of
> >> >>>>>>>>>>> threads skyrockets until we get OutOfMemoryError.
> >> >>>>>>>>>>>
> >> >>>>>>>>>>> It works just fine on regular hardware; we're going to try
> >> >>>>>>>>>>> VMWare tomorrow.
> >> >>>>>>>>>>>
> >> >>>>>>>>>>> One thing really worth mentioning is that by using the
> >> >>>>>>>>>>> fileCursor we actually started seeing it use the Temp Store.
> >> >>>>>>>>>>> When reading about systemUsage it is NOT intuitive that the
> >> >>>>>>>>>>> Temp Store does not come into play with the default cursor.
> >> >>>>>>>>>>> Anyone keeping a significant volume of messages on their
> >> >>>>>>>>>>> queues should be well served by changing the cursor.
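> >> >>>>>>>>>>>
> >> >>>>>>>>>>> A policy entry along these lines switches a queue over to the
> >> >>>>>>>>>>> file-based cursor (the queue=">" wildcard is illustrative):
> >> >>>>>>>>>>>
> >> >>>>>>>>>>>     <policyEntry queue=">">
> >> >>>>>>>>>>>         <pendingQueuePolicy>
> >> >>>>>>>>>>>             <fileQueueCursor/>
> >> >>>>>>>>>>>         </pendingQueuePolicy>
> >> >>>>>>>>>>>     </policyEntry>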
> >> >>>>>>>>>>>
> >> >>>>>>>>>>>
> >> >>>>>>>>>>> Mitch Granger wrote:
> >> >>>>>>>>>>>
> >> >>>>>>>>>>>> Config is attached.  We have also tried the
> >> >>>>>>>>>>>> activemq-scalability.xml with the only change being the
> >> >>>>>>>>>>>> addition of a stomp connector.
> >> >>>>>>>>>>>>
> >> >>>>>>>>>>>> Once we hit the memoryUsage limit we can [sometimes] connect
> >> >>>>>>>>>>>> new consumers, but nothing comes back after we send the
> >> >>>>>>>>>>>> SUBSCRIBE frame.
> >> >>>>>>>>>>>>
> >> >>>>>>>>>>>> I expect sending to fail when we hit this limit, but if we
> >> >>>>>>>>>>>> can't subscribe there's no chance of recovering from this
> >> >>>>>>>>>>>> state.
> >> >>>>>>>>>>>>
> >> >>>>>>>>>>>> Rob Davies wrote:
> >> >>>>>>>>>>>>
> >> >>>>>>>>>>>>  On 26 Oct 2009, at 17:38, themitchy wrote:
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>> We're using only persistent messages and heap size is set
> >> >>>>>>>>>>>>>> to 2GB, yet we hit the memoryUsage limit quite quickly
> >> >>>>>>>>>>>>>> (system usage config below).  This is followed by
> >> >>>>>>>>>>>>>> "java.lang.OutOfMemoryError: unable to create new native
> >> >>>>>>>>>>>>>> thread" as the process quickly reaches the 2GB of heap we
> >> >>>>>>>>>>>>>> gave it.  How are we getting to that point with the
> >> >>>>>>>>>>>>>> memoryUsage limit set far below it?
> >> >>>>>>>>>>>>>>
> >> >>>>>>>>>>>>>> Is there no way to get AMQ to gracefully limit its memory
> >> >>>>>>>>>>>>>> usage?
> >> >>>>>>>>>>>>>>
> >> >>>>>>>>>>>>>>      <systemUsage>
> >> >>>>>>>>>>>>>>          <systemUsage>
> >> >>>>>>>>>>>>>>              <memoryUsage>
> >> >>>>>>>>>>>>>>                  <memoryUsage limit="256 mb"/>
> >> >>>>>>>>>>>>>>              </memoryUsage>
> >> >>>>>>>>>>>>>>              <storeUsage>
> >> >>>>>>>>>>>>>>                  <storeUsage limit="60 gb" name="foo"/>
> >> >>>>>>>>>>>>>>              </storeUsage>
> >> >>>>>>>>>>>>>>              <tempUsage>
> >> >>>>>>>>>>>>>>                  <tempUsage limit="60 gb"/>
> >> >>>>>>>>>>>>>>              </tempUsage>
> >> >>>>>>>>>>>>>>          </systemUsage>
> >> >>>>>>>>>>>>>>      </systemUsage>
> >> >>>>>>>>>>>>>>
> >> >>>>>>>>>>>>>>
> >> >>>>>>>>>>>>>>
> >> >>>>>>>>>>>>> Can you send the rest of your config?
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>> Rob Davies
> >> >>>>>>>>>>>>> http://twitter.com/rajdavies
> >> >>>>>>>>>>>>> I work here: http://fusesource.com
> >> >>>>>>>>>>>>> My Blog: http://rajdavies.blogspot.com/
> >> >>>>>>>>>>>>> I'm writing this: http://www.manning.com/snyder/
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>>>>>
> >> >>>>>>>>>
> >> >>>>>>>>
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> -----
> >> >>>>>> Dejan Bosanac
> >> >>>>>>
> >> >>>>>> Open Source Integration - http://fusesource.com/
> >> >>>>>> ActiveMQ in Action - http://www.manning.com/snyder/
> >> >>>>>> Blog - http://www.nighttale.net
> >> >>>>>>
> >> >>>>> http://old.nabble.com/file/p26264779/dump.jpg
> >> >>>>>
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> --
> >> >>>> http://blog.garytully.com
> >> >>>>
> >> >>>> Open Source Integration
> >> >>>> http://fusesource.com
> >> >>>>
> >> >>>>
> >> >>>  http://old.nabble.com/file/p26278228/oom.jpg
> >> >>>
> >> >>
> >> >>
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > http://blog.garytully.com
> >> >
> >> > Open Source Integration
> >> > http://fusesource.com
> >> >
> >> >
> >> http://old.nabble.com/file/p26312669/oom.jpg
> >>
> >>
> >
> >
> > --
> > http://blog.garytully.com
> >
> > Open Source Integration
> > http://fusesource.com
> >
> >
> http://old.nabble.com/file/p26328726/activemq.xml activemq.xml
>
>


-- 
http://blog.garytully.com

Open Source Integration
http://fusesource.com
