Re: $ES_HEAP_SIZE

2014-12-19 Thread Johan Öhr
What you need to do is just create another init.d script; the difference 
between them should be that they point at different 
/etc/sysconfig/elasticsearch files. Put the differing configs there, and 
don't set it in /usr/share/elasticsearch.in.sh (I think it's there by 
default).

With this you can also spin up a new instance as a master node; just put the 
differences (different heap, etc.) in the sysconfig file.
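For example, a rough sketch of that second-instance setup on an RPM-based system (the "-master" names and the variables shown are hypothetical examples, not package defaults):

```shell
# Sketch: clone the init script and its sysconfig file for a second instance.
cp /etc/init.d/elasticsearch /etc/init.d/elasticsearch-master
cp /etc/sysconfig/elasticsearch /etc/sysconfig/elasticsearch-master

# Point the new init script at the new sysconfig file.
sed -i 's|/etc/sysconfig/elasticsearch|/etc/sysconfig/elasticsearch-master|' \
    /etc/init.d/elasticsearch-master

# Then put the per-instance differences in /etc/sysconfig/elasticsearch-master,
# e.g. (variable names depend on your package's sysconfig file):
#   ES_HEAP_SIZE=2g
#   CONF_DIR=/etc/elasticsearch-master
#   DATA_DIR=/var/lib/elasticsearch-master
#   LOG_DIR=/var/log/elasticsearch-master
```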

On Friday, February 8, 2013 at 15:21:39 UTC+1, Shawn Ritchie wrote:
>
> Sorry again, what if I wanted to run them as services? I tried looking in 
> /service/elasticsearch.conf and init.d, but with no luck 
>
> On Friday, 8 February 2013 13:40:19 UTC+1, Clinton Gormley wrote:
>>
>> On Fri, 2013-02-08 at 04:27 -0800, Shawn Ritchie wrote: 
>> > Already read that post, but from what I understood (or misunderstood), 
>> > it's making the assumption that you will have 1 instance of 
>> > Elasticsearch running on a machine. 
>> > 
>> > What I'd like to do is, with 1 Elasticsearch installation, launch 2 
>> > instances of Elasticsearch with different /config, /data, /log and node 
>> > name. 
>> > 
>> > 
>> > Or is it that multiple instances of Elasticsearch on the same machine 
>> > run at a directory level, that is, 2 instances which share 
>> > the /config, /data and /log directories together with the node name? 
>>
>> You can run multiple instances with the same paths (including logging 
>> and data). 
>>
>> If you just want to specify a different node name, then you could do so 
>> on the command line: 
>>
>> ./bin/elasticsearch -Des.node.name=node_1 
>> ./bin/elasticsearch -Des.node.name=node_2 
>>
>> If you want to change more than that, you could specify a specific 
>> config file: 
>>
>> ./bin/elasticsearch -Des.config=/path/to/config/file_1 
>> ./bin/elasticsearch -Des.config=/path/to/config/file_2 
>>
>> clint 
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/e0192b3d-1ee3-4192-92e7-d4a3180a467d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Elasticsearch snapshots throttle problems

2014-12-04 Thread Johan Öhr
I noticed these warnings on some of my nodes while executing the snapshot; 
maybe it has something to do with why it's so slow.

[2014-12-03 15:57:35,699][WARN ][snapshots] [xx06] 
[[xxx-2014-11-20][7]] [my_backup:snapshot_test] failed to create snapshot
org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: [xxx-2014-11-20][7] Aborted
        at org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository$SnapshotContext$AbortableInputStream.checkAborted(BlobStoreIndexShardRepository.java:632)
        at org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository$SnapshotContext$AbortableInputStream.read(BlobStoreIndexShardRepository.java:625)
        at java.io.FilterInputStream.read(FilterInputStream.java:107)
        at org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository$SnapshotContext.snapshotFile(BlobStoreIndexShardRepository.java:557)
        at org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository$SnapshotContext.snapshot(BlobStoreIndexShardRepository.java:501)
        at org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository.snapshot(BlobStoreIndexShardRepository.java:139)
        at org.elasticsearch.index.snapshots.IndexShardSnapshotAndRestoreService.snapshot(IndexShardSnapshotAndRestoreService.java:86)
        at org.elasticsearch.snapshots.SnapshotsService$5.run(SnapshotsService.java:818)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
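(For reference: while a snapshot is running, the per-shard snapshot status API can show what each node is doing and where it is stuck; a sketch, assuming the repository and snapshot names from this thread and ES >= 1.1:)

```shell
curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_test/_status?pretty'
```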

On Wednesday, December 3, 2014 at 10:25:38 UTC+1, Johan Öhr wrote:
>
> Hi, 
>
> I have 12 Elasticsearch nodes, with 10 Gb Ethernet. 
>
> I've been having a lot of problems with snapshot performance: it 
> throttles to 20 MB/s even though I set max_snapshot_bytes_per_sec to 
> something else. I've tried to set it in bytes and in megabytes (500m, 500mb). 
>
> I've tried to move a 100 GB file from an elastic node to my NFS server; 
> it's about 1 GB/s. 
> I've tried to move a 100 GB file from an elastic node down to my share; 
> it's about 1 GB/s. 
> I've tried to just cp -rp my index from an elastic node to my share; it's 
> about 1 GB/s. 
>
> Am I missing something here? What is max_snapshot_bytes_per_sec supposed 
> to look like? 
> Are there any other settings (like recovery streams, etc.) that affect 
> this? 
>
> My backup dir: 
> curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{ 
>   "type": "fs", 
>   "settings": { 
>     "location": "/misc/backup_elastic/snapshot", 
>     "compress": true, 
>     "verify": true, 
>     "max_snapshot_bytes_per_sec": "1048576000", 
>     "max_restore_bytes_per_sec": "1048576000" 
>   } 
> }' 
>
> And snapshot: 
> curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_test" -d '{ 
>   "indices": "index-2014-01-03", 
>   "ignore_unavailable": "true", 
>   "include_global_state": "true", 
>   "partial": "true" 
> }' 
>
> Regards, Johan
>



Elasticsearch snapshots throttle problems

2014-12-03 Thread Johan Öhr
Hi, 

I have 12 Elasticsearch nodes, with 10 Gb Ethernet. 

I've been having a lot of problems with snapshot performance: it throttles 
to 20 MB/s even though I set max_snapshot_bytes_per_sec to something else. 
I've tried to set it in bytes and in megabytes (500m, 500mb). 

I've tried to move a 100 GB file from an elastic node to my NFS server; it's 
about 1 GB/s. 
I've tried to move a 100 GB file from an elastic node down to my share; it's 
about 1 GB/s. 
I've tried to just cp -rp my index from an elastic node to my share; it's 
about 1 GB/s. 

Am I missing something here? What is max_snapshot_bytes_per_sec supposed to 
look like? 
Are there any other settings (like recovery streams, etc.) that affect this? 

My backup dir: 
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/misc/backup_elastic/snapshot",
    "compress": true,
    "verify": true,
    "max_snapshot_bytes_per_sec": "1048576000",
    "max_restore_bytes_per_sec": "1048576000"
  }
}'

And snapshot: 
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_test" -d '{
  "indices": "index-2014-01-03",
  "ignore_unavailable": "true",
  "include_global_state": "true",
  "partial": "true"
}'
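If I remember right, max_snapshot_bytes_per_sec defaults to 20mb per second per node, which would match the 20 MB/s observed; byte-size settings also accept unit suffixes such as "mb" and "gb", which is less ambiguous than a raw number. A hedged sketch of re-registering the repository with explicit units (the 1gb value is only an example, not a recommendation):

```shell
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/misc/backup_elastic/snapshot",
    "compress": true,
    "max_snapshot_bytes_per_sec": "1gb",
    "max_restore_bytes_per_sec": "1gb"
  }
}'
```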

Regards, Johan



Re: accidently ran another instance of elasticsearch on a few nodes

2014-11-10 Thread Johan Öhr
Thank you

I fixed the problem by taking down a node, deleting the wrong directory, and 
starting the node again. It worked fine with the first two nodes, but not 
the third ..

See 
https://groups.google.com/forum/m/?utm_medium=email&utm_source=footer#!topic/elasticsearch/7LZVzQkcAtA



Re: Shards UNASSIGNED even tho they exist on disk

2014-11-10 Thread Johan Öhr
Another thing I just noticed:

The first problematic index is from 2014-10-08. 
We upgraded from 1.0.1 to 1.3.2 on 2014-10-07; indexes are created nightly.

On Monday, November 10, 2014 at 14:45:09 UTC+1, Johan Öhr wrote:
>
> Hi,
>
> I have a problem with a few indices: some of the shards (both replica and 
> primary) are UNASSIGNED, and my cluster stays yellow.
>
> This is what the master says about that:
> [2014-11-10 06:53:01,223][WARN ][cluster.action.shard ] [node-master] 
> [index][9] received shard failed for [index][9], 
> node[9g2_kOrDSt-57UVI1bLfFg], [P], s[STARTED], indexUUID 
> [20P5SMNFTZyrUEVyUPCsbQ], reason [master 
> [node-master][07ZcjsurR3iIVsH6iSX0jw][data-node][inet[/xx.xx.xx.xx:9300]]{data=false,
>  
> master=true} marked shard as started, but shard has not been created, mark 
> shard as failed]
>
> http://host:9200/index/_stats shows: "_shards": {"failed": 0, "successful": 13, "total": 20}
>
> This happened when I dropped a node and let the cluster replicate itself 
> back together; the replication factor is 1 (two identical copies of each 
> shard). I did it on two nodes and it worked perfectly; then on the third 
> node, I have 92 shards UNASSIGNED.
>
> The only difference between the first two nodes and the third is that it 
> ran with these settings:
>
>   "cluster.routing.allocation.disk.threshold_enabled": true,
>   "cluster.routing.allocation.disk.watermark.low": "0.85",
>   "cluster.routing.allocation.disk.watermark.high": "0.90",
>   "cluster.info.update.interval": "60s",
>   "indices.recovery.concurrent_streams": "10",
>   "cluster.routing.allocation.node_concurrent_recoveries": "40"
>
>
> Any idea how this can be fixed? 
>
> I've tried to clean up the masters and restart them; nothing. 
> I've tried to delete the _state of these indices on the data node; nothing.
>
> Thanks for help :)
>
>



Shards UNASSIGNED even tho they exist on disk

2014-11-10 Thread Johan Öhr
Hi,

I have a problem with a few indices: some of the shards (both replica and 
primary) are UNASSIGNED, and my cluster stays yellow.

This is what the master says about that:
[2014-11-10 06:53:01,223][WARN ][cluster.action.shard ] [node-master] 
[index][9] received shard failed for [index][9], 
node[9g2_kOrDSt-57UVI1bLfFg], [P], s[STARTED], indexUUID 
[20P5SMNFTZyrUEVyUPCsbQ], reason [master 
[node-master][07ZcjsurR3iIVsH6iSX0jw][data-node][inet[/xx.xx.xx.xx:9300]]{data=false,
 
master=true} marked shard as started, but shard has not been created, mark 
shard as failed]

http://host:9200/index/_stats shows: "_shards": {"failed": 0, "successful": 13, "total": 20}

This happened when I dropped a node and let the cluster replicate itself back 
together; the replication factor is 1 (two identical copies of each shard). 
I did it on two nodes and it worked perfectly; then on the third node, I have 
92 shards UNASSIGNED.

The only difference between the first two nodes and the third is that it ran 
with these settings:

  "cluster.routing.allocation.disk.threshold_enabled": true,
  "cluster.routing.allocation.disk.watermark.low": "0.85",
  "cluster.routing.allocation.disk.watermark.high": "0.90",
  "cluster.info.update.interval": "60s",
  "indices.recovery.concurrent_streams": "10",
  "cluster.routing.allocation.node_concurrent_recoveries": "40"


Any idea how this can be fixed? 

I've tried to clean up the masters and restart them; nothing.
I've tried to delete the _state of these indices on the data node; nothing.
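(In case it helps: in ES 1.x an unassigned shard can sometimes be force-assigned with the cluster reroute API. A sketch only; the index, shard and node values below are placeholders, and allow_primary can discard data if the on-disk copy is stale:)

```shell
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "allocate": {
        "index": "some-index",
        "shard": 9,
        "node": "some-node",
        "allow_primary": true
      }
    }
  ]
}'
```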

Thanks for help :)



accidently ran another instance of elasticsearch on a few nodes

2014-11-08 Thread Johan Öhr
Hi, while trying to set up another process as a master, I believe I for some 
time ran multiple instances of Elasticsearch on three nodes. 

On these nodes, it looks like this: 
/var/lib/elasticsearch/elasticsearch/indices/0 
/var/lib/elasticsearch/elasticsearch/indices/1 

On my other nodes, which are fine, it looks like: 
/var/lib/elasticsearch/elasticsearch/indices/0 

So there is a lot of data in the "1" directory on three nodes, and these 
shards will not be ASSIGNED; my cluster stays yellow. 

This mistake happened 1 week ago; since then I have restarted ES a couple of 
times, but it was only just now that I got the problem. 

How can I fix this? 

At the moment I'm running 5 nodes, where 3 run another instance of 
Elasticsearch as dedicated masters. 
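(As a guard for the future, a sketch: node.max_local_storage_nodes limits how many instances may share one data path, so a second accidental instance fails to start instead of silently creating a second data directory. The config file path below is an assumption based on a package install:)

```shell
# Sketch: refuse a second instance on the same path.data.
echo 'node.max_local_storage_nodes: 1' >> /etc/elasticsearch/elasticsearch.yml
```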
