If I start elasticsearch from the bin folder (not using the service
wrapper), I get these exceptions after about 2 minutes:

Exception in thread "elasticsearch[Adam X][generic][T#5]" java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:62)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:366)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:301)
        at org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481)
        at org.apache.lucene.codecs.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:175)
        at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:131)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
        at org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:115)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:95)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
        at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
        at org.apache.lucene.search.XSearcherManager.<init>(XSearcherManager.java:94)
        at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
        at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
        at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
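
For reference: the trace shows the heap filling up while Lucene loads the
per-segment term dictionaries (FSTs) during shard recovery. When starting from
the bin folder, the service wrapper settings do not apply; if ES_HEAP_SIZE is
not exported first, the stock bin/elasticsearch.in.sh falls back to a much
smaller default heap (around 1 GB in this release), which would explain dying
in minutes instead of hours. A minimal sketch, assuming that stock startup
script:

export ES_HEAP_SIZE=8g   # same 8 GB the wrapper config uses
./bin/elasticsearch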


- - - - - - - - - -
Sincerely:
Hicham Mallah
Software Developer
mallah.hic...@gmail.com
00961 700 49 600



On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah <mallah.hic...@gmail.com> wrote:

> Hello again,
>
> setting bootstrap.mlockall to true seems to have slowed the memory growth,
> so instead of elasticsearch being killed after ~2 hours it is killed after
> ~3 hours. What I find strange is why the process releases memory back to
> the OS once but never does it again, and why it is not abiding by the
> DIRECT_SIZE setting either.
>
> Thanks for the help
>
>
>
>
>
> On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah <mallah.hic...@gmail.com> wrote:
>
>> Jörg, the issue is that after the JVM gives memory back to the OS, usage
>> starts going up again and never comes back down until the process is
>> killed; memory usage is currently up to 66% and still climbing. The heap
>> size is currently set to 8 GB, which is 1/4 of the memory I have. I tried
>> it at 16 GB, then 12, now 8, but I am still facing the issue, and lowering
>> it further would make the website unacceptably slow. I'll try mlockall now
>> and see what happens; looking at Bigdesk, only 18.6 MB of swap is in use.
>>
>> I'll let you know what happens with mlockall on.
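>>
>> (To check whether mlockall actually takes effect, a quick sketch, assuming
>> the 1.0 nodes API exposes process info:
>>
>> curl -XGET "http://localhost:9200/_nodes/process?pretty"
>>
>> and look for "mlockall" : true in each node's process section.)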
>>
>>
>>
>>
>> On Thu, Mar 13, 2014 at 4:38 PM, joergpra...@gmail.com <joergpra...@gmail.com> wrote:
>>
>>> From the gist, it all looks fine. There is no reason for the OOM killer
>>> to kick in: your system is largely idle and there is plenty of room for
>>> everything.
>>>
>>> Just to quote you:
>>>
>>> "What's happening is that elasticsearch starts using memory till 50%
>>> then it goes back down to about 30% gradually then starts to go up again
>>> gradually and never goes back down."
>>>
>>> What you see is the ES JVM process giving memory back to the OS, which is
>>> no reason to worry about process killing. It is just undesirable
>>> behaviour, and it is all a matter of configuring the heap size correctly.
>>>
>>> You should check whether your ES starts from the service wrapper or from
>>> the bin folder, and adjust the heap size parameters accordingly. I
>>> recommend using only the ES_HEAP_SIZE parameter. Set it to at most 50% of
>>> RAM (as you did), but do not set different values in other places, and do
>>> not use the MIN or MAX variants; ES_HEAP_SIZE does the right thing for you.
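>>>
>>> A minimal sketch of keeping the heap in exactly one place, assuming the
>>> standard service wrapper config file (elasticsearch.conf):
>>>
>>> set.default.ES_HEAP_SIZE=8192   # MB; the wrapper derives both min and max heap from this
>>>
>>> The ES_MIN_MEM / ES_MAX_MEM lines in your elasticsearch.yml do nothing
>>> there; they are environment variables read by the bin scripts, not yml
>>> settings, so they should be removed.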
>>>
>>> With bootstrap.mlockall, you can lock the ES JVM process into main
>>> memory; this helps a lot with performance and fast GC, as it prevents
>>> swapping. You can test whether this setting invokes the OOM killer too, as
>>> it increases the pressure on main memory (but, as said, there is plenty of
>>> room on your machine).
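>>>
>>> A sketch of the pieces involved, assuming ES 1.0 with the service wrapper:
>>>
>>> # elasticsearch.yml
>>> bootstrap.mlockall: true
>>>
>>> # wrapper config: the memlock limit must cover the whole heap, otherwise
>>> # the lock silently fails (10240 KB, as in your config, is far below 8 GB)
>>> set.default.MAX_LOCKED_MEMORY=unlimited
>>>
>>> If the limit is too low, the node logs a startup warning that it is unable
>>> to lock the JVM memory.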
>>>
>>> Jörg
>>>
>>>
>>> On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <mallah.hic...@gmail.com> wrote:
>>>
>>>> Hello Zachary,
>>>>
>>>> Thanks for your reply and the pointer to the settings.
>>>>
>>>> Here is the output of the commands you requested:
>>>>
>>>>
>>>> curl -XGET "http://localhost:9200/_nodes/stats"
>>>> curl -XGET "http://localhost:9200/_nodes"
>>>>
>>>> https://gist.github.com/codebird/9529114
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong <zacharyjt...@gmail.com> wrote:
>>>>
>>>>> Can you gist up the output of these two commands?
>>>>>
>>>>> curl -XGET "http://localhost:9200/_nodes/stats"
>>>>>
>>>>> curl -XGET "http://localhost:9200/_nodes"
>>>>>
>>>>> Those are my first-stop APIs for determining where memory is being
>>>>> allocated.
>>>>>
>>>>>
>>>>> By the way, these settings don't do anything anymore (they were
>>>>> deprecated and removed):
>>>>>
>>>>> index.cache.field.type: soft
>>>>> index.term_index_interval: 256
>>>>> index.term_index_divisor: 5
>>>>>
>>>>> index.cache.field.max_size: 10000
>>>>>
>>>>>
>>>>>
>>>>> `max_size` was replaced with `indices.fielddata.cache.size`, which
>>>>> accepts a value like "10gb" or "30%".
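>>>>>
>>>>> For example, in elasticsearch.yml (a sketch; absolute sizes and heap
>>>>> percentages are both accepted):
>>>>>
>>>>> indices.fielddata.cache.size: 30%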
>>>>>
>>>>> And this setting is just bad in general (it causes a lot of GC
>>>>> thrashing):
>>>>>
>>>>> index.cache.field.expire: 10m
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:
>>>>>
>>>>>> Now the process has gone back down to 25% usage; from now on it will
>>>>>> climb and won't stop climbing.
>>>>>>
>>>>>> Sorry for spamming
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <mallah...@gmail.com> wrote:
>>>>>>
>>>>>>> Here's the top output after ~1 hour of running:
>>>>>>>
>>>>>>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>>>>> 780 root      20   0  317g  14g 7.1g S 492.9 46.4 157:50.89 java
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <mallah...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hello Jörg
>>>>>>>>
>>>>>>>> Thanks for the reply. Our swap size is 2 GB. I don't know at what %
>>>>>>>> the process is being killed, as the first time it happened I wasn't
>>>>>>>> around, and I never let it happen again since the website is online.
>>>>>>>> After 2 hours of running, memory usage is definitely up to 60%. When
>>>>>>>> I am around testing config changes, I restart each time it reaches
>>>>>>>> 70% (every 2h to 2h30); when I am not around, I set a cron job to
>>>>>>>> restart elasticsearch every 2 hours (a sketch follows below). The
>>>>>>>> server also runs Apache and MySQL.
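>>>>>>>>
>>>>>>>> A sketch of that cron entry (hypothetical; assumes an init script
>>>>>>>> named "elasticsearch"):
>>>>>>>>
>>>>>>>> 0 */2 * * * /sbin/service elasticsearch restart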
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Mar 13, 2014 at 2:22 PM, joerg...@gmail.com <joerg...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> You wrote that the OOM killer killed the ES process. With 32 GB
>>>>>>>>> (plus the swap size), the process must be very big, much more than
>>>>>>>>> you configured. Can you give more info about the live size of the
>>>>>>>>> process after ~2 hours? Are there more application processes on the
>>>>>>>>> box?
>>>>>>>>>
>>>>>>>>> Jörg
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Mar 13, 2014 at 12:46 PM, Hicham Mallah <mallah...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> I have been using Elasticsearch on an Ubuntu server for a year
>>>>>>>>>> now, and everything was going great. I had an index of 150,000,000
>>>>>>>>>> domain-name entries and ran small queries on it, just filtering by
>>>>>>>>>> one term: no sorting, no wildcards, nothing. Now we have moved
>>>>>>>>>> servers: I have a CentOS 6 server with 32 GB of RAM running
>>>>>>>>>> Elasticsearch, but now we have 2 indices of about 150 million
>>>>>>>>>> entries each, 32 shards each, still running the same queries on
>>>>>>>>>> them; nothing changed in the queries. But since we went online
>>>>>>>>>> with the new server, I have to restart Elasticsearch every 2
>>>>>>>>>> hours, before the OOM killer kills it.
>>>>>>>>>>
>>>>>>>>>> What's happening is that Elasticsearch uses memory until it
>>>>>>>>>> reaches 50%, then goes back down to about 30% gradually, then
>>>>>>>>>> starts going up again gradually and never goes back down.
>>>>>>>>>>
>>>>>>>>>> I have tried all the solutions I found on the net; I am a
>>>>>>>>>> developer, not a server admin.
>>>>>>>>>>
>>>>>>>>>> *I have these settings in my service wrapper configuration:*
>>>>>>>>>>
>>>>>>>>>> set.default.ES_HOME=/home/elasticsearch
>>>>>>>>>> set.default.ES_HEAP_SIZE=8192
>>>>>>>>>> set.default.MAX_OPEN_FILES=65535
>>>>>>>>>> set.default.MAX_LOCKED_MEMORY=10240
>>>>>>>>>> set.default.CONF_DIR=/home/elasticsearch/conf
>>>>>>>>>> set.default.WORK_DIR=/home/elasticsearch/tmp
>>>>>>>>>> set.default.DIRECT_SIZE=4g
>>>>>>>>>>
>>>>>>>>>> # Java Additional Parameters
>>>>>>>>>> wrapper.java.additional.1=-Delasticsearch-service
>>>>>>>>>> wrapper.java.additional.2=-Des.path.home=%ES_HOME%
>>>>>>>>>> wrapper.java.additional.3=-Xss256k
>>>>>>>>>> wrapper.java.additional.4=-XX:+UseParNewGC
>>>>>>>>>> wrapper.java.additional.5=-XX:+UseConcMarkSweepGC
>>>>>>>>>> wrapper.java.additional.6=-XX:CMSInitiatingOccupancyFraction=75
>>>>>>>>>> wrapper.java.additional.7=-XX:+UseCMSInitiatingOccupancyOnly
>>>>>>>>>> wrapper.java.additional.8=-XX:+HeapDumpOnOutOfMemoryError
>>>>>>>>>> wrapper.java.additional.9=-Djava.awt.headless=true
>>>>>>>>>> wrapper.java.additional.10=-XX:MinHeapFreeRatio=40
>>>>>>>>>> wrapper.java.additional.11=-XX:MaxHeapFreeRatio=70
>>>>>>>>>> wrapper.java.additional.12=-XX:CMSInitiatingOccupancyFraction=75
>>>>>>>>>> wrapper.java.additional.13=-XX:+UseCMSInitiatingOccupancyOnly
>>>>>>>>>> wrapper.java.additional.15=-XX:MaxDirectMemorySize=4g
>>>>>>>>>> # Initial Java Heap Size (in MB)
>>>>>>>>>> wrapper.java.initmemory=%ES_HEAP_SIZE%
>>>>>>>>>>
>>>>>>>>>> *And these in elasticsearch.yml:*
>>>>>>>>>> ES_MIN_MEM: 5g
>>>>>>>>>> ES_MAX_MEM: 5g
>>>>>>>>>> #index.store.type=mmapfs
>>>>>>>>>> index.cache.field.type: soft
>>>>>>>>>> index.cache.field.max_size: 10000
>>>>>>>>>> index.cache.field.expire: 10m
>>>>>>>>>> index.term_index_interval: 256
>>>>>>>>>> index.term_index_divisor: 5
>>>>>>>>>>
>>>>>>>>>> *Java version:*
>>>>>>>>>> java version "1.7.0_51"
>>>>>>>>>> Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
>>>>>>>>>> Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
>>>>>>>>>>
>>>>>>>>>> *Elasticsearch version:*
>>>>>>>>>>  "version" : {
>>>>>>>>>>     "number" : "1.0.0",
>>>>>>>>>>     "build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32",
>>>>>>>>>>     "build_timestamp" : "2014-02-12T16:18:34Z",
>>>>>>>>>>     "build_snapshot" : false,
>>>>>>>>>>     "lucene_version" : "4.6"
>>>>>>>>>>   }
>>>>>>>>>>
>>>>>>>>>> Using the Elastica PHP client.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I have tried moving the values up and down to try to make it
>>>>>>>>>> work, but nothing changes.
>>>>>>>>>>
>>>>>>>>>> Please any help would be highly appreciated.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>
>>>
>>
>>
>
