Problems upgrading to 1.5.0

2015-03-31 Thread Martin Forssen
We just encountered some mysterious problems when upgrading from 1.1.1 to 
1.5.0.

The cluster consists of three machines: two data nodes and one master-only
node. It hosts 86 indices, each of which has one replica.

I stopped writes, took a snapshot and stopped the entire cluster before I
upgraded the nodes and restarted them. The system came up and quickly
turned yellow, but it refused to become green: it failed to recover a
number of shards. The errors I got in the logs looked like this (there
were a lot of them):
[2015-03-31 07:33:39,704][WARN ][indices.cluster  ] [NODE1] [signal_bin][0] sending failed shard after recovery failure
org.elasticsearch.indices.recovery.RecoveryFailedException: [signal_bin][0]: Recovery failed from [NODE2][rpXLVgS8Qw2jgimXNYKn_A][NODE2][inet[/IP2:9300]]{aws_availability_zone=us-east-1d, max_local_storage_nodes=1} into [NODE1][tdXdf0MeS62DIO0KFZX-Rg][NODE1][inet[/IP1:9300]]{aws_availability_zone=us-east-1b, max_local_storage_nodes=1}
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:274)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$700(RecoveryTarget.java:69)
    at org.elasticsearch.indices.recovery.RecoveryTarget$RecoveryRunner.doRun(RecoveryTarget.java:550)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.transport.RemoteTransportException: [NODE2][inet[/IP2:9300]][internal:index/shard/recovery/start_recovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [signal_bin][0] Phase[1] Execution failed
    at org.elasticsearch.index.engine.InternalEngine.recover(InternalEngine.java:839)
    at org.elasticsearch.index.shard.IndexShard.recover(IndexShard.java:684)
    at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:125)
    at org.elasticsearch.indices.recovery.RecoverySource.access$200(RecoverySource.java:49)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:146)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:132)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.indices.recovery.RecoverFilesRecoveryException: [signal_bin][0] Failed to transfer [11] files with total size of [1.4mb]
    at org.elasticsearch.indices.recovery.RecoverySourceHandler.phase1(RecoverySourceHandler.java:413)
    at org.elasticsearch.index.engine.InternalEngine.recover(InternalEngine.java:834)
    ... 10 more
Caused by: org.elasticsearch.transport.RemoteTransportException: [NODE1][inet[/IP1:9300]][internal:index/shard/recovery/clean_files]
Caused by: org.elasticsearch.indices.recovery.RecoveryFailedException: [signal_bin][0]: Recovery failed from [NODE2][rpXLVgS8Qw2jgimXNYKn_A][NODE2][inet[/IP2:9300]]{aws_availability_zone=us-east-1d, max_local_storage_nodes=1} into [NODE1][tdXdf0MeS62DIO0KFZX-Rg][NODE1][inet[/IP1:9300]]{aws_availability_zone=us-east-1b, max_local_storage_nodes=1} (failed to clean after recovery)
    at org.elasticsearch.indices.recovery.RecoveryTarget$CleanFilesRequestHandler.messageReceived(RecoveryTarget.java:443)
    at org.elasticsearch.indices.recovery.RecoveryTarget$CleanFilesRequestHandler.messageReceived(RecoveryTarget.java:389)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: local version: name [_yor.si], length [363], checksum [1jnqbzx], writtenBy [null] is different from remote version after recovery: name [_yor.si], length [363], checksum [null], writtenBy [null]
    at org.elasticsearch.index.store.Store.verifyAfterCleanup(Store.java:645)
    at org.elasticsearch.index.store.Store.cleanupAndVerify(Store.java:613)
    at
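
For what it's worth, the one extra precaution usually recommended around a
full-cluster restart is disabling shard allocation while the nodes are down.
A minimal sketch with the 1.x Java API; the class and method names are only
for illustration, and `client` is assumed to be an already-connected Client:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Sketch only: flip cluster.routing.allocation.enable to "none" before the
// full-cluster restart and back to "all" afterwards, so shards are not
// shuffled around while nodes are coming back.
public class AllocationToggle {
    public static void setAllocation(Client client, String mode) {
        client.admin().cluster().prepareUpdateSettings()
            .setTransientSettings(ImmutableSettings.settingsBuilder()
                .put("cluster.routing.allocation.enable", mode) // "none" or "all"
                .build())
            .execute().actionGet();
    }
}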

Snapshots are failing

2015-03-16 Thread Martin Forssen
Hello,

I'm experimenting with snapshots to S3, but I'm having no luck. The cluster
consists of 8 nodes (i2.2xlarge). The index I'm trying to snapshot is 2.91 TB
and has 16 shards and 1 replica. I should perhaps also mention that this is
running Elasticsearch version 1.1.1.

Initially, when I initiate the snapshot, everything looks good, but after a
while shards start failing. In the logs I find messages like this:
[2015-02-05 07:57:59,029][WARN ][index.merge.scheduler] [machine_name] [cluster_day][13] failed to merge
java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
    at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:283)
    at org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:228)
    at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:195)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
    at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:473)
    at org.apache.lucene.codecs.lucene46.Lucene46FieldInfosReader.read(Lucene46FieldInfosReader.java:52)
    at org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:215)
    at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:95)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4273)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3743)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
    ... 14 more

After a while this stops happening, but by then the snapshot has failed:
[2015-02-05 13:26:25,232][WARN ][snapshots] [machine_name] [[cluster_day][13]] [rf_es_snapshot:cluster_full] failed to create snapshot
org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: [cluster_day][13] Failed to snapshot
    at org.elasticsearch.index.snapshots.IndexShardSnapshotAndRestoreService.snapshot(IndexShardSnapshotAndRestoreService.java:100)
    at org.elasticsearch.snapshots.SnapshotsService$5.run(SnapshotsService.java:694)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.index.engine.EngineClosedException: [cluster_day][13] CurrentState[CLOSED]
    at org.elasticsearch.index.engine.internal.InternalEngine.ensureOpen(InternalEngine.java:900)
    at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:746)
    at org.elasticsearch.index.engine.internal.InternalEngine.snapshotIndex(InternalEngine.java:1045)
    at org.elasticsearch.index.shard.service.InternalIndexShard.snapshotIndex(InternalIndexShard.java:618)
    at org.elasticsearch.index.snapshots.IndexShardSnapshotAndRestoreService.snapshot(IndexShardSnapshotAndRestoreService.java:83)
    ... 4 more

There have been a number of other exceptions in between, but most seem to be
related to running out of memory. Should a snapshot really require this much
memory?
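
For reference, the snapshot is driven through the standard repository API. A
rough sketch of how the repository and snapshot are set up with the 1.x Java
client, assuming the cloud-aws plugin's s3 repository type; the bucket and
region values are made up, while the repository and snapshot names are the
ones from the log above:

import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Sketch only: register an S3 repository (needs the cloud-aws plugin) and
// snapshot one index into it, blocking until the snapshot finishes or fails.
public class SnapshotSketch {
    public static void snapshot(Client client) {
        client.admin().cluster().preparePutRepository("rf_es_snapshot")
            .setType("s3")
            .setSettings(ImmutableSettings.settingsBuilder()
                .put("bucket", "my-snapshot-bucket")   // made-up bucket name
                .put("region", "us-east-1")
                .build())
            .execute().actionGet();

        CreateSnapshotResponse response = client.admin().cluster()
            .prepareCreateSnapshot("rf_es_snapshot", "cluster_full")
            .setIndices("cluster_day")
            .setWaitForCompletion(true)
            .execute().actionGet();
        System.out.println(response.getSnapshotInfo().state());
    }
}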




Limiting pagination?

2014-09-18 Thread Martin Forssen
Hello,

I wonder if it is possible to put a limit on the from parameter in
pagination requests, for example to refuse any paginated search where from
is above X. This would be useful to protect clusters, which are otherwise
easy to bring down.
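
As far as I know there is no server-side setting for this in 1.x, so a guard
in front of the search call on the client side is one workaround. A small
sketch; the MAX_FROM threshold is just an example, not an Elasticsearch
setting:

import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;

// Client-side sketch: refuse deep pagination before the request ever
// reaches the cluster.
public final class PaginationGuard {
    private static final int MAX_FROM = 10000;

    public static SearchResponse search(SearchRequestBuilder request, int from, int size) {
        if (from > MAX_FROM) {
            throw new IllegalArgumentException(
                "refusing search with from=" + from + " (limit is " + MAX_FROM + ")");
        }
        return request.setFrom(from).setSize(size).execute().actionGet();
    }
}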



Re: stuck thread problem?

2014-08-29 Thread Martin Forssen
FYI, this turned out to be a real bug. A fix has been committed and will be 
included in the next release.

On Wednesday, August 27, 2014 11:36:03 AM UTC+2, Martin Forssen wrote:

 I did report it https://github.com/elasticsearch/elasticsearch/issues/7478





Re: stuck thread problem?

2014-08-27 Thread Martin Forssen
I see the same problem. We are running 1.1.1 on a 13-node cluster (3 master
and 5+5 data). I see stuck threads on most of the data nodes, so I had a
look around on one of them. top in thread mode shows:
top - 08:08:20 up 62 days, 18:49,  1 user,  load average: 9.18, 13.21, 12.67
Threads: 528 total,  14 running, 514 sleeping,   0 stopped,   0 zombie
%Cpu(s): 39.0 us,  1.5 sy,  0.0 ni, 59.0 id,  0.2 wa,  0.2 hi,  0.0 si,  0.1 st
KiB Mem:  62227892 total, 61933428 used,   294464 free,    65808 buffers
KiB Swap: 61865980 total,    19384 used, 61846596 free. 24645668 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 3743 elastic+  20   0  1.151t 0.045t 0.013t S 93.4 78.1  17462:00 java
 3748 elastic+  20   0  1.151t 0.045t 0.013t S 93.4 78.1  17457:55 java
 3761 elastic+  20   0  1.151t 0.045t 0.013t S 93.1 78.1  17455:21 java
 3744 elastic+  20   0  1.151t 0.045t 0.013t S 92.7 78.1  17456:55 java
 1758 elastic+  20   0  1.151t 0.045t 0.013t R  5.9 78.1   3450:01 java
 1755 elastic+  20   0  1.151t 0.045t 0.013t R  5.6 78.1   3450:05 java

So I have four threads consuming way more CPU than the others. The node is
only doing a moderate amount of garbage collection. Running jstack I find
that all the stuck threads have a stack dump which looks like this:
Thread 3744: (state = IN_JAVA)
 - java.util.HashMap.getEntry(java.lang.Object) @bci=72, line=446 (Compiled frame; information may be imprecise)
 - java.util.HashMap.get(java.lang.Object) @bci=11, line=405 (Compiled frame)
 - org.elasticsearch.search.scan.ScanContext$ScanFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=8, line=156 (Compiled frame)
 - org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=6, line=45 (Compiled frame)
 - org.apache.lucene.search.FilteredQuery$1.scorer(org.apache.lucene.index.AtomicReaderContext, boolean, boolean, org.apache.lucene.util.Bits) @bci=34, line=130 (Compiled frame)
 - org.apache.lucene.search.IndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=68, line=618 (Compiled frame)
 - org.elasticsearch.search.internal.ContextIndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=225, line=173 (Compiled frame)
 - org.apache.lucene.search.IndexSearcher.search(org.apache.lucene.search.Query, org.apache.lucene.search.Collector) @bci=11, line=309 (Interpreted frame)
 - org.elasticsearch.search.scan.ScanContext.execute(org.elasticsearch.search.internal.SearchContext) @bci=54, line=52 (Interpreted frame)
 - org.elasticsearch.search.query.QueryPhase.execute(org.elasticsearch.search.internal.SearchContext) @bci=174, line=119 (Compiled frame)
 - org.elasticsearch.search.SearchService.executeScan(org.elasticsearch.search.internal.InternalScrollSearchRequest) @bci=49, line=233 (Interpreted frame)
 - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.search.internal.InternalScrollSearchRequest, org.elasticsearch.transport.TransportChannel) @bci=8, line=791 (Interpreted frame)
 - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.transport.TransportRequest, org.elasticsearch.transport.TransportChannel) @bci=6, line=780 (Interpreted frame)
 - org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run() @bci=12, line=270 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=95, line=1145 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=615 (Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=724 (Interpreted frame)

The state varies between IN_JAVA and BLOCKED. I took two stack traces 10
minutes apart and they were identical for the suspect threads.

I suppose this could be a very long-running query, but I wonder if it isn't
simply stuck. Perhaps we are seeing this issue:
http://stackoverflow.com/questions/17070184/hashmap-stuck-on-get
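
All of the stuck frames sit in the scan/scroll code path. For context, the
kind of request that exercises it looks roughly like this with the 1.x Java
API; the index name, page size and scroll timeout are placeholders:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;

// Rough sketch of the scan/scroll pattern whose server-side code shows up
// in the stack trace above.
public class ScanScrollExample {
    public static void scanAll(Client client) {
        SearchResponse scroll = client.prepareSearch("myindex")
            .setSearchType(SearchType.SCAN)
            .setScroll(TimeValue.timeValueMinutes(1))
            .setQuery(QueryBuilders.matchAllQuery())
            .setSize(500)                       // hits per shard per scroll round-trip
            .execute().actionGet();

        while (true) {
            scroll = client.prepareSearchScroll(scroll.getScrollId())
                .setScroll(TimeValue.timeValueMinutes(1))
                .execute().actionGet();
            if (scroll.getHits().getHits().length == 0) {
                break;                          // no more hits: the scan is done
            }
            // process scroll.getHits() here
        }
    }
}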


Re: stuck thread problem?

2014-08-27 Thread Martin Forssen
I did report it https://github.com/elasticsearch/elasticsearch/issues/7478



Mysterious sudden load increase

2014-08-13 Thread Martin Forssen
Hello,

We are running an ES cluster with 13 nodes, 10 data and 3 master, on Amazon
hi1.4xlarge machines. The cluster contains almost 10 TB of data (including
one replica). It is running Elasticsearch 1.1.1 on Oracle Java 1.7.0_25.

Our problem is that every now and then the CPU load suddenly increases on
one of the data nodes. The load average can suddenly jump from about 4 up to
10-16, and once it has jumped up it stays there. Then, after a couple of
days, another node is affected, and so on. Eventually most nodes in the
cluster are affected and we have to restart them. A restart of the Java
process brings the load back to normal.

We are not experiencing any abnormal levels of garbage collection on the 
affected nodes.

I did a Java stack dump on one of the affected nodes, and one thing which
stood out was that it had a number of threads in state IN_JAVA; the
non-loaded nodes had no such threads. The stack dump for these threads
invariably looks something like this:
Thread 23022: (state = IN_JAVA)
 - java.util.HashMap.getEntry(java.lang.Object) @bci=72, line=446 (Compiled frame; information may be imprecise)
 - java.util.HashMap.get(java.lang.Object) @bci=11, line=405 (Compiled frame)
 - org.elasticsearch.search.scan.ScanContext$ScanFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=8, line=156 (Compiled frame)
 - org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=6, line=45 (Compiled frame)
 - org.apache.lucene.search.FilteredQuery$1.scorer(org.apache.lucene.index.AtomicReaderContext, boolean, boolean, org.apache.lucene.util.Bits) @bci=34, line=130 (Compiled frame)
 - org.apache.lucene.search.IndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=68, line=618 (Compiled frame)
 - org.elasticsearch.search.internal.ContextIndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=225, line=173 (Compiled frame)
 - org.apache.lucene.search.IndexSearcher.search(org.apache.lucene.search.Query, org.apache.lucene.search.Collector) @bci=11, line=309 (Interpreted frame)
 - org.elasticsearch.search.scan.ScanContext.execute(org.elasticsearch.search.internal.SearchContext) @bci=54, line=52 (Interpreted frame)
 - org.elasticsearch.search.query.QueryPhase.execute(org.elasticsearch.search.internal.SearchContext) @bci=174, line=119 (Compiled frame)
 - org.elasticsearch.search.SearchService.executeScan(org.elasticsearch.search.internal.InternalScrollSearchRequest) @bci=49, line=233 (Interpreted frame)
 - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.search.internal.InternalScrollSearchRequest, org.elasticsearch.transport.TransportChannel) @bci=8, line=791 (Interpreted frame)
 - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.transport.TransportRequest, org.elasticsearch.transport.TransportChannel) @bci=6, line=780 (Interpreted frame)
 - org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run() @bci=12, line=270 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=95, line=1145 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=615 (Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=724 (Interpreted frame)

Does anybody know what we are experiencing, or have any tips on how to 
further debug this?

/MaF



Re: After upgrade to elastic search 1.2.1 getting org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodesIn

2014-06-17 Thread Martin Forssen
I have also encountered this; I did the debugging and created an issue:
https://github.com/elasticsearch/elasticsearch/issues/6325



Re: Upgrading from 0.20.6 to 1.1.1

2014-06-03 Thread Martin Forssen
We recently upgraded three Elasticsearch clusters from 0.20.2 to 1.1.1 in
one big step.

We did it without any downtime by setting up parallel clusters running
1.1.1. Since our data is changing all the time, we created the parallel
clusters by first adding machines and shards to the existing clusters; then
we manually cut off the extra machines, renamed the cluster and started
them, which gave us a complete copy of the existing system. We only had to
pause updates to the clusters for a couple of hours while rewiring some
logic, and reading never stopped.
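
One way to read the "adding machines and shards" step is as raising
number_of_replicas so the new machines receive a full copy of every index
before being split off; a sketch of that interpretation with the Java API,
where the index name and replica count are only illustrative:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Sketch of one interpretation of the "extra copies" step: bump the replica
// count so the newly added machines each hold a full copy before they are
// cut off into the new cluster. "my_index" and the count are placeholders.
public class ExtraReplicaSketch {
    public static void addReplica(Client client) {
        client.admin().indices().prepareUpdateSettings("my_index")
            .setSettings(ImmutableSettings.settingsBuilder()
                .put("index.number_of_replicas", 2)
                .build())
            .execute().actionGet();
    }
}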



Re: TransportClient of ES 1.1.1 on Ubantu 12.x throws 'No node available' exception

2014-06-03 Thread Martin Forssen
But be aware that there is a bug in Elasticsearch which can cause the
transport client to get the NoNodeAvailable exception if sniff is set to
false. It doesn't seem to have been the issue in this case, but I thought I
should mention it.



Re: Create new index in busy cluster

2014-04-29 Thread Martin Forssen
On Tuesday, April 29, 2014 8:59:35 AM UTC+2, Mark Walkom wrote:

 Yep, you can do this using 
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html


Thanks, that's exactly what I was looking for.

/MaF 
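
For anyone finding this later: the page linked above covers the
index.routing.allocation.* settings. A sketch of creating an index that is
pinned to a subset of nodes by a node attribute; the box_type attribute, its
value and the shard counts are made up:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Sketch: create an index with allocation filtering so its shards only go
// to nodes started with a matching attribute (e.g. node.box_type: fresh in
// elasticsearch.yml).
public class FilteredIndexSketch {
    public static void createIndex(Client client) {
        client.admin().indices().prepareCreate("new_index")
            .setSettings(ImmutableSettings.settingsBuilder()
                .put("index.number_of_shards", 8)
                .put("index.number_of_replicas", 1)
                .put("index.routing.allocation.include.box_type", "fresh")
                .build())
            .execute().actionGet();
    }
}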



Re: Dedicated Gateway node

2014-04-09 Thread Martin Forssen
If you use the Java client you probably also have to tell it not to sniff
out the other nodes in the cluster.
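
Concretely, that means setting client.transport.sniff to false and pointing
the client only at the gateway node. A sketch with the 1.x TransportClient;
the cluster name and host are placeholders:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Sketch: a 1.x TransportClient that talks only to the gateway node and
// never sniffs out the data nodes behind it.
public class GatewayClientSketch {
    public static TransportClient build() {
        Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "my-cluster")
            .put("client.transport.sniff", false)
            .build();
        return new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("gateway-node", 9300));
    }
}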



Re: Java API or REST API for client development ?

2014-03-26 Thread Martin Forssen
The Java API is said to have better performance (and I believe that). The
drawbacks are that you must use the exact same version of the Java API
library on the client as the server runs, as well as the same version of
Java. So upgrades suck.



Re: Elasticsearch cluster fails to stabilize

2013-12-18 Thread Martin Forssen
For further debugging I enabled debug logging on one of the nodes. Now when
I try to get the indices stats I get the following in the log on that node:
[2013-12-18 08:02:01,078][DEBUG][index.shard.service  ] [NODE6] [reference][10] Can not build 'completion stats' from engine shard state [RECOVERING]
org.elasticsearch.index.shard.IllegalIndexShardStateException: [reference][10] CurrentState[RECOVERING] operations only allowed when started/relocated
    at org.elasticsearch.index.shard.service.InternalIndexShard.readAllowed(InternalIndexShard.java:765)
    at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:600)
    at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:595)
    at org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:536)
    at org.elasticsearch.action.admin.indices.stats.CommonStats.init(CommonStats.java:151)
    at org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
    at org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
    at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
    at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
    at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$NodeTransportHandler.messageReceived(TransportNodesOperationAction.java:273)
    at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$NodeTransportHandler.messageReceived(TransportNodesOperationAction.java:264)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Looking in the head plugin I see that this node has a number of green
shards, but shard 10 is yellow (recovering). This smells like a bug.
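
For completeness, the stats request that triggers the message is nothing
unusual; via the 0.90/1.x Java API it is roughly this sketch:

import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;
import org.elasticsearch.client.Client;

// Sketch: an ordinary indices-stats request; the debug message above is
// logged for any shard that is still in state RECOVERING when completion
// stats are collected.
public class IndicesStatsSketch {
    public static void printDocCount(Client client) {
        IndicesStatsResponse stats = client.admin().indices().prepareStats()
            .all()
            .execute().actionGet();
        System.out.println("docs: " + stats.getTotal().getDocs().getCount());
    }
}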
