Re: What to do if ES document count continuously increases - with zero indexing ongoing

2014-03-12 Thread Swaroop CH
Hi Clinton,

Thanks for the reply. I did look at the docs. I tried setting `marvel.agent.enabled: false` via the API, but ES logs an error saying it is "not dynamically updateable", and so on. Eventually, I solved it by disabling shard allocation, shutting down all the nodes of the cluster, uninstalling Marvel, and then starting the master-only nodes followed by the data-only nodes.

Regards,
Swaroop

12.03.2014, 16:18, "Clinton Gormley":
> Have you looked at the docs?
> http://www.elasticsearch.org/guide/en/marvel/current/index.html#configuration
>
> On 12 March 2014 06:42, Swaroop CH <swaroo...@yandex.com> wrote:
>> The source of the problem is Marvel. Is there any way to disable Marvel
>> indexing? Trying to set `marvel.agent.indices: "-*"` says "ignoring transient
>> setting [marvel.agent.indices], not dynamically updateable"
>> [...]
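For anyone who hits the same thing, the steps above map roughly to the following commands (the host is a placeholder and the 1.x allocation/plugin syntax here is from memory, so treat this as a sketch rather than the exact commands that were run):

# 1. Stop shards from being reallocated while nodes go down
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'

# 2. Shut down each node, then remove the Marvel agent plugin on it
bin/plugin --remove marvel

# 3. Start the master-only nodes, then the data-only nodes, and re-enable allocation
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{ "transient": { "cluster.routing.allocation.enable": "all" } }'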





Re: What to do if ES document count continuously increases - with zero indexing ongoing

2014-03-11 Thread Swaroop CH
The source of the problem is Marvel. Is there any way to disable Marvel 
indexing?

Trying to set `marvel.agent.indices: "-*"` says "ignoring transient setting 
[marvel.agent.indices], not dynamically updateable".
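
(For reference, the attempt was roughly the following cluster settings call; the exact JSON shape here is reconstructed rather than copied from my shell history:)

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{ "transient": { "marvel.agent.indices": "-*" } }'

This is what produces the "not dynamically updateable" message in the logs, so presumably this setting can only go into elasticsearch.yml and needs a node restart to take effect.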


Regards,
Swaroop



12.03.2014, 08:31, "Swaroop CH" :
> Hello,
>
> We bulk-indexed a fresh ES 1.0.1 cluster with index.refresh_interval: -1 
> (disabled), and then set index.refresh_interval: 5s about 20 hours ago. The 
> document count has been continuously increasing since then. Is this normal or 
> expected? How do I know when it will be done? Our expected document count was 
> about half of the current document count in the new cluster.
>
> 1394593149 02:59:09 167713281
> 1394593153 02:59:13 167723653
> 1394593156 02:59:16 167720017
> ...
> 1394593220 03:00:20 167800614
> 1394593224 03:00:24 167812913
> 1394593228 03:00:28 167812056
>
> ( while true; do curl -s http://54.xxx.xxx.xxx:9201/_cat/count; sleep 2; done 
> )
>
> Looking forward to any advice or suggestions on what to look into.
>
> Thanks.
>
> Regards,
> Swaroop


What to do if ES document count continuously increases - with zero indexing ongoing

2014-03-11 Thread Swaroop CH
Hello,

We bulk-indexed a fresh ES 1.0.1 cluster with index.refresh_interval: -1 
(disabled), and then set index.refresh_interval: 5s about 20 hours ago. The 
document count has been continuously increasing since then. Is this normal or 
expected? How do I know when it will be done? Our expected document count was 
about half of the current document count in the new cluster.

1394593149 02:59:09 167713281
1394593153 02:59:13 167723653
1394593156 02:59:16 167720017
...
1394593220 03:00:20 167800614
1394593224 03:00:24 167812913
1394593228 03:00:28 167812056

( while true; do curl -s http://54.xxx.xxx.xxx:9201/_cat/count; sleep 2; done )
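
(For reference, the refresh interval was re-enabled with a call along these lines; this is a sketch of the request, not the exact command:)

curl -XPUT 'http://54.xxx.xxx.xxx:9201/_settings' -d '
{ "index": { "refresh_interval": "5s" } }'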

Looking forward to any advice or suggestions on what to look into.

Thanks.

Regards,
Swaroop



Re: DELETE snapshot request (which was long-running and had not yet completed) is hung

2014-03-10 Thread Swaroop CH
Hi Igor,

It seems that the S3 bucket had "PUT only permissions".

Regards,
Swaroop

10.03.2014, 17:40, "Igor Motov":
> That's strange. Wrong S3 permissions should have caused it to fail immediately.
> Could you provide any more details about the permissions, so I can reproduce it?
> Meanwhile, restarting the nodes where the primary shards of the stuck index are
> located is the only option that I can think of. We are working on improving the
> performance of snapshot cancellation
> (https://github.com/elasticsearch/elasticsearch/pull/5244), but it hasn't made it
> into a release yet.
>
> On Monday, March 10, 2014 4:39:24 AM UTC-4, Swaroop wrote:
>> Hi, I had started a snapshot request on a freshly-indexed ES 1.0.1 cluster with
>> the cloud plugin installed, but unfortunately the EC2 access keys configured did
>> not have S3 permissions [...]
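(For reference, the S3 repository needs more than just PUT on the bucket; a minimal IAM policy would look something like the sketch below. The bucket name is a placeholder and the exact action list is my assumption, so double-check it against the cloud-aws plugin docs:)

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::my-es-snapshots" },
    { "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": "arn:aws:s3:::my-es-snapshots/*" }
  ]
}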





DELETE snapshot request (which was long-running and had not yet completed) is hung

2014-03-10 Thread Swaroop CH
Hi,

I had started a snapshot request on a freshly-indexed ES 1.0.1 cluster with the 
cloud plugin installed, but unfortunately the EC2 access keys configured did 
not have S3 permissions, so ES was left in a weird state. I then sent a DELETE 
snapshot request, and it has been stuck for more than a couple of hours. Any 
advice on what to do here to clean up the snapshot request? The logs don't 
reveal anything relevant.
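
(For reference, the calls involved were along these lines; the repository and snapshot names are placeholders, so this is a sketch of the requests rather than the exact ones:)

# the snapshot that got stuck because the keys had no S3 permissions
curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repo/snapshot_1'

# the delete that has been hanging for hours
curl -XDELETE 'http://localhost:9200/_snapshot/my_s3_repo/snapshot_1'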

Regards,
Swaroop



Re: After bulk indexing with refresh_interval disabled, now at 100% CPU usage for > 24 hours

2014-03-06 Thread Swaroop CH
For posterity: the problem was solved by specifying 
{"index.refresh_interval": "5s"} - note the "s". If you specify just "5", ES 
interprets it as 5 milliseconds!
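
(In other words, the difference is just the unit suffix; the host is a placeholder in this sketch:)

# interpreted as 5 milliseconds:
curl -XPUT 'http://localhost:9200/_settings' -d '{ "index.refresh_interval": "5" }'

# what we actually wanted, 5 seconds:
curl -XPUT 'http://localhost:9200/_settings' -d '{ "index.refresh_interval": "5s" }'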

Regards,
Swaroop



07.03.2014, 11:28, "Swaroop CH" :
> Hello,
>
> We have a brand-new ES 1.0.1 cluster of 3 m2.xlarge machines. We set 
> `index.refresh_interval` to -1, `index.number_of_replicas` to 0, and 
> `index.number_of_shards` to 10, and indexed about half a million documents into 
> about 2000 indexes; this completed successfully in about 10 hours.
>
> However, after the bulk indexing completed, I set `index.refresh_interval` to 
> 5, and since then one of the two CPUs on each of the 3 nodes has been pegged at 
> 100% for more than 24 hours. Is this normal and expected? (Note that the 
> cluster status is green.)
>
> From `/_nodes/hot_threads`, I can see that 
> `org.elasticsearch.index.shard.service.InternalIndexShard$EngineRefresher.run(InternalIndexShard.java:914)`
>  is what is taking up the CPU.
>
> Any advice on the same is welcome.
>
> Thank you.
>
> Regards,
> Swaroop



After bulk indexing with refresh_interval disabled, now at 100% CPU usage for > 24 hours

2014-03-06 Thread Swaroop CH
Hello,

We have a brand-new ES 1.0.1 cluster of 3 m2.xlarge machines. We set 
`index.refresh_interval` to -1, `index.number_of_replicas` to 0, and 
`index.number_of_shards` to 10, and indexed about half a million documents into 
about 2000 indexes; this completed successfully in about 10 hours.
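
(For reference, the per-index settings were along these lines; the index name and host are placeholders, and number_of_shards has to be set at index creation time while the other two can be changed afterwards, so this is a sketch:)

curl -XPUT 'http://localhost:9200/some_index' -d '
{ "settings": { "number_of_shards": 10, "number_of_replicas": 0, "refresh_interval": "-1" } }'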

However, after the bulk indexing completed, I set `index.refresh_interval` to 
5, and since then one of the two CPUs on each of the 3 nodes has been pegged at 
100% for more than 24 hours. Is this normal and expected? (Note that the 
cluster status is green.)

From `/_nodes/hot_threads`, I can see that 
`org.elasticsearch.index.shard.service.InternalIndexShard$EngineRefresher.run(InternalIndexShard.java:914)` 
is what is taking up the CPU.

Any advice on the same is welcome.

Thank you.

Regards,
Swaroop
