Igor.
Yes, that's right. My "index only" machines are just machines that are 
booted for the indexing-snapshotting task. Once there are no more tasks 
in the queue, those machines are terminated. They only handle a few 
indices each time (their only purpose is to "snapshot").

I will do as you tell me. I guess I'd better wait for the timeframe in 
which most of the restores occur, because that's when memory consumption 
grows the most, so expect those postings in 5 or 6 hours.
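
In the meantime, this is roughly how I plan to capture them (a minimal 
sketch, assuming the default 9200 port; the output file names are just 
illustrative):

    # take a node stats sample every hour, six samples in total
    for i in 1 2 3 4 5 6; do
      curl "localhost:9200/_nodes/stats?pretty" > "nodes_stats_$(date +%Y%m%d_%H%M).json"
      sleep 3600
    done
    # nodes info only needs to be captured once
    curl "localhost:9200/_nodes?pretty" > nodes_info.json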

On Wednesday, July 2, 2014 10:29:53 AM UTC-4, Igor Motov wrote:
>
> So, your "search-only" machines are running out of memory, while your 
> "index-only" machines are doing fine. Did I understand you correctly? Could 
> you send me node stats (curl "localhost:9200/_nodes/stats?pretty") from 
> the machine that runs out of memory? Please run the stats a few times at 
> one-hour intervals; I would like to see how memory consumption increases 
> over time. Please also run nodes info once (curl "localhost:9200/_nodes") 
> and post the results here (or send them to me by email). Thanks!
>
> On Wednesday, July 2, 2014 10:15:46 AM UTC-4, JoeZ99 wrote:
>>
>> Hey Igor, thanks for answering, and sorry for the delay. I didn't catch 
>> the update.
>>
>> Let me explain:
>>
>>    - We have a one-machine cluster that is only meant for serving search 
>> requests; the goal is to never index anything to it. It contains 1.7k 
>> indices, give or take. 
>>    - Every day, those 1.7k indices are reindexed and snapshotted in pairs 
>> to an S3 repository, producing 850 snapshots (see the sketch after this 
>> list). 
>>    - Every day, the "read-only" cluster from the first point restores 
>> those 850 snapshots from that same S3 repository to "update" its 1.7k 
>> indices. 
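>>
>> For reference, each snapshot-restore pair works along these lines (a 
>> rough sketch; the repository and snapshot names are placeholders, not 
>> our real ones):
>>
>>     # on an "index-only" machine: snapshot two indices into the S3 repo
>>     curl -XPUT "localhost:9200/_snapshot/s3_repo/snap_0001?wait_for_completion=true" -d '{
>>       "indices": "index_a,index_b"
>>     }'
>>     # on the "search-only" cluster: close the indices, then restore over them
>>     curl -XPOST "localhost:9200/index_a,index_b/_close"
>>     curl -XPOST "localhost:9200/_snapshot/s3_repo/snap_0001/_restore"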
>>
>> It works like a real charm. Load has dropped dramatically, and we can set 
>> up a "farm" of temporary machines to do the indexing duties. 
>>
>> But memory consumption never stops growing.
>>
>> We don't get any "out of memory" error or anything. In fact, there is 
>> nothing in the logs that shows any error, but after a few days to a week, 
>> the host's memory is almost exhausted and Elasticsearch stops responding. 
>> The memory consumption is, of course, way beyond the HEAP_SIZE. We have 
>> to restart it, and when we do we get the following error:
>>
>> java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
>>         at org.elasticsearch.common.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
>>         at org.elasticsearch.common.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
>>         at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
>>         at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
>>         at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
>>         at org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
>>         at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:158)
>>         at org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:106)
>>         at org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:98)
>>         at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.innerFinishHim(TransportSearchQueryAndFetchAction.java:94)
>>         at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.moveToSecondPhase(TransportSearchQueryAndFetchAction.java:77)
>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.innerMoveToSecondPhase(TransportSearchTypeAction.java:425)
>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:243)
>>         at org.elasticsearch.action.search.
>> ...
>
>
