This memory issue report might be related

https://groups.google.com/forum/#!topic/elasticsearch/EH76o1CIeQQ

Jörg


On Wed, Jul 2, 2014 at 5:34 PM, JoeZ99 <jzar...@gmail.com> wrote:

> Igor.
> Yes, that's right. My "index only" machines are machines that are booted
> just for the indexing-snapshotting task. Once there are no more tasks in
> the queue, those machines are terminated. They only handle a few indices
> each time (their only purpose is to "snapshot").
>
> I will do as you say. I guess I'd better wait for the timeframe in which
> most of the restores occur, because that's when the memory consumption
> grows the most, so expect those postings in 5 or 6 hours.
>
>
> On Wednesday, July 2, 2014 10:29:53 AM UTC-4, Igor Motov wrote:
>>
>> So, your "search-only" machines are running out of memory, while your
>> "index-only" machines are doing fine. Did I understand you correctly? Could
>> you send me nodes stats (curl "localhost:9200/_nodes/stats?pretty") from
>> the machine that runs out of memory? Please run the stats a few times at
>> one-hour intervals; I would like to see how memory consumption increases
>> over time. Please also run nodes info once (curl "localhost:9200/_nodes")
>> and post the results here (or send them to me by email). Thanks!
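>>
>> Something like this would do for collecting them, as a rough sketch
>> (assuming the node listens on the default localhost:9200; the output
>> file names are just placeholders):
>>
>>   # node info only needs to be captured once
>>   curl -s "localhost:9200/_nodes?pretty" > nodes-info.json
>>
>>   # then capture node stats once an hour, each sample in its own timestamped file
>>   while true; do
>>     curl -s "localhost:9200/_nodes/stats?pretty" > "nodes-stats-$(date +%Y%m%d-%H%M).json"
>>     sleep 3600
>>   done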
>>
>> On Wednesday, July 2, 2014 10:15:46 AM UTC-4, JoeZ99 wrote:
>>>
>>> Hey, Igor, thanks for answering! and sorry for the delay. Didn't catch
>>> the update.
>>>
>>> I explain:
>>>
>>>    - We have one cluster of one machine which is only meant for serving
>>> search requests. The goal is not to index anything to it. It contains 1.7k
>>> indices, give or take.
>>>    - Every day, those 1.7k indices are reindexed and snapshotted in
>>> pairs to an S3 repository (producing 850 snapshots).
>>>    - Every day, the one "read only" cluster of the first point restores
>>> those 850 snapshots from that same S3 repository to "update" its 1.7k
>>> indices, roughly as sketched below.
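>>>
>>> In case it helps to picture it, each daily cycle looks roughly like the
>>> following against the snapshot API (a simplified sketch; the repository
>>> name "s3_backup", the snapshot name and the index names are made-up
>>> placeholders):
>>>
>>>   # on an "index only" machine: snapshot a freshly reindexed pair of indices
>>>   curl -XPUT "localhost:9200/_snapshot/s3_backup/snap_shopA?wait_for_completion=true" \
>>>        -d '{"indices": "shopA_products,shopA_suggestions"}'
>>>
>>>   # on the "read only" cluster: close the existing pair, then restore it
>>>   # from the same repository (restoring over open indices would fail)
>>>   curl -XPOST "localhost:9200/shopA_products,shopA_suggestions/_close"
>>>   curl -XPOST "localhost:9200/_snapshot/s3_backup/snap_shopA/_restore?wait_for_completion=true" \
>>>        -d '{"indices": "shopA_products,shopA_suggestions"}'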
>>>
>>> It works like a real charm. Load has dropped dramatically, and we can
>>> set up a "farm" of temporary machines to do the indexing duties.
>>>
>>> But memory consumption never stops growing.
>>>
>>> We don't get any "out of memory" error or anything. In fact, there is
>>> nothing in the logs that shows any error, but after a few days or a week,
>>> the host has its memory almost exhausted and elasticsearch is not
>>> responding. The memory consumption is of course way beyond the HEAP_SIZE.
>>> We have to restart it, and when we do we get the following error:
>>>
>>> java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
>>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
>>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
>>>         at org.elasticsearch.common.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
>>>         at org.elasticsearch.common.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
>>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
>>>         at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
>>>         at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
>>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
>>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
>>>         at org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
>>>         at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:158)
>>>         at org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:106)
>>>         at org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:98)
>>>         at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.innerFinishHim(TransportSearchQueryAndFetchAction.java:94)
>>>         at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.moveToSecondPhase(TransportSearchQueryAndFetchAction.java:77)
>>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.innerMoveToSecondPhase(TransportSearchTypeAction.java:425)
>>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:243)
>>>         at org.elasticsearch.action.search. ...
>>
