Re: sort "missing" parameter from query string

2015-05-11 Thread sebastian
Is it possible to specify it from mapping?

On Monday, May 11, 2015 at 5:05:27 PM UTC-3, sebastian wrote:
>
> Hello,
>
> Can I specify the sort "missing" parameter from query string?
>
> From body:
>
> {
>   "sort" : [
>     { "price" : { "missing" : "_last" } }
>   ],
>   "query" : {
>     "term" : { "user" : "kimchy" }
>   }
> }
>
>
> Thanks!
>



sort "missing" parameter from query string

2015-05-11 Thread sebastian
Hello,

Can I specify the sort "missing" parameter from query string?

From body:

{
  "sort" : [
    { "price" : { "missing" : "_last" } }
  ],
  "query" : {
    "term" : { "user" : "kimchy" }
  }
}


Thanks!
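A hedged sketch of doing the same thing purely from the query string, assuming the generic `source` URI parameter (which accepts the whole request body) is available in your version, and using a hypothetical index name; the shorthand `sort=price:asc` form does not appear to accept `missing`, so the body is passed through instead, and special characters may need URL-encoding:

curl -XGET 'http://localhost:9200/my_index/_search?source={"sort":[{"price":{"missing":"_last"}}],"query":{"term":{"user":"kimchy"}}}'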



Get a fixed random sample from all documents

2015-04-24 Thread Sebastian Rickelt
Hi,

I want to fetch a fixed large number of documents randomly from 
Elasticsearch to compute some statistics (100,000 out of 10 M documents). 
The randomness has to be predictable so that I get the same documents with 
every request.

My problem is that scan and scroll is fast, but as I understand it the order is 
not predictable. On the other hand, I could use the 'random_score' function 
with a fixed seed in my query. That would fix the ordering problem, but deep 
pagination is very slow. Has anyone done this before? Any ideas or pointers on 
how to do this with Elasticsearch?

Any help appreciated.

Cheers,

Sebastian
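For reference, a minimal sketch of the seeded approach mentioned above, assuming a function_score query with random_score (the seed value is arbitrary); the same seed should give the same ordering on every request, but paging deep into the 100,000 hits with from/size remains the slow part:

{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "random_score": { "seed": 42 }
    }
  }
}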



issues with alias/routing name

2015-03-30 Thread sebastian
Hi,

I created the following alias/routing ("users" is the index name, "user" is 
the type name):

{
  "users": {
"aliases": {
  "acme": {
"filter": {
  "term": {
"tenant": "acme"
  }
},
"index_routing": "acme",
"search_routing": "acme"
  }
}
  }
}


And everything works OK:

curl -XGET 'http://localhost:9200/acme/user/_count'
 
{"count":8, "_shards": {"total":5,"successful":5,"failed":0}}


But if the alias name has a '-' char:

{
  "users": {
"aliases": {
  "foo-bar": {
"filter": {
  "term": {
"tenant": "foo-bar"
  }
},
"index_routing": "foo-bar",
"search_routing": "foo-bar"
  }
}
  }
}


The routing is not working:

curl -XGET 'http://localhost:9200/foo-bar/user/_count'
 
{"count":0, "_shards": {"total":5,"successful":5,"failed":0}}


curl -XGET 'http://localhost:9200/users/user/_count?q=tenant:"foo-bar"'

{"count":15, "_shards": {"total":5, "successful":5, "failed":0}}


Any clues?
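A possible cause, assuming the tenant field is an analyzed string: the standard analyzer splits "foo-bar" into the tokens "foo" and "bar", so the alias's term filter on the exact value "foo-bar" matches nothing, while the analyzed query string q=tenant:"foo-bar" still finds documents. A hedged sketch of a mapping that would make the term filter match (an existing field cannot be switched in place, so this would go into a rebuilt index, e.g. a hypothetical users_v2, followed by reindexing):

curl -XPUT 'http://localhost:9200/users_v2' -d '{
  "mappings": {
    "user": {
      "properties": {
        "tenant": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'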



reindex question

2015-03-27 Thread sebastian
Hi,

Can I reindex the data into the same index/type? My goal is to regenerate the 
mapping for a specified index/type.


Thanks!
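There is no built-in reindex API in the 1.x line, so the usual pattern is to create a new index with the desired mapping, copy the documents over with scan/scroll plus the bulk API, and switch an alias. A hedged sketch with hypothetical index names (users / users_v2) and a sample mapping:

curl -XPUT 'http://localhost:9200/users_v2' -d '{
  "mappings": { "user": { "properties": { "email": { "type": "string", "index": "not_analyzed" } } } }
}'
# open a scroll over the old index, then keep calling the scroll endpoint with the returned _scroll_id
curl -XGET 'http://localhost:9200/users/user/_search?search_type=scan&scroll=5m&size=500' -d '{ "query": { "match_all": {} } }'
curl -XGET 'http://localhost:9200/_search/scroll?scroll=5m' -d 'SCROLL_ID_FROM_PREVIOUS_RESPONSE'
# re-feed each batch of hits into users_v2 via the _bulk endpoint, then repoint an alias from users to users_v2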



Re: Elasticsearch with JSON-array, causing serialize -error

2015-03-24 Thread sebastian
same issue here. Any clues?

On Sunday, April 20, 2014 at 2:47:40 PM UTC-3, PyrK wrote:
>
> I'm using elasticsearch with mongodb -collection using elmongo 
> . I have a collection (elasticsearch 
> index's point of view json-array), that contains for example field: 
> "random_point": [  0.10007477086037397,  0 ]
>
> That's most likely the reason I get this error, when trying to index my 
> collection.
> [2014-04-20 16:48:51,228][DEBUG][action.bulk  ] [Emma Frost] [mediacontent-2014-04-20t16:48:44.116z][4] failed to execute bulk item (index) index {[mediacontent-2014-04$
>
> org.elasticsearch.index.mapper.MapperParsingException: object mapping [random_point] trying to serialize a value with no field associated with it, current value [0.1000747708603739$
>
> at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:595)
> at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:467)
> at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:599)
> at org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray(ObjectMapper.java:587)
> at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:459)
> at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:506)
> at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:450)
> at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:327)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:381)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
> at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction$
> at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:701)
>
> [2014-04-20 16:48:54,129][INFO ][cluster.metadata ] [Emma Frost] [mediacontent-2014-04-20t16:39:09.348z] deleting index
>
>
> Is there any way to bypass this? That array is a needed value in my 
> collection. Is there any way to give some option in Elasticsearch to not 
> index that JSON field, even though it's not going to be a searchable field at all?
>
>
> Best regards,
>
> PK
>
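A hedged sketch of one way to keep the field in _source without indexing it, assuming hypothetical index/type names (the real ones are truncated in the log above); setting "enabled": false on the object tells the mapper not to parse or index anything under it, which should avoid the serialize error while leaving the value retrievable with the document:

curl -XPUT 'http://localhost:9200/my_index/my_type/_mapping' -d '{
  "my_type": {
    "properties": {
      "random_point": { "type": "object", "enabled": false }
    }
  }
}'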



mappings: use wildcards from field name

2015-02-26 Thread sebastian
Hi,


Can I create a mapping and use wildcards in the field names? For example, I 
want to create a template with the following mapping:

{
  "user": {
    "dynamic": "false",
    "properties": {
      "email": { "type": "string" },
      "*_metadata": {
        "type": "object",
        "dynamic": "true"
      }
    }
  }
}

Then, the "foo_metadata", "bar_metadata", etc fields will be mapped.

Is it possible?
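Wildcards do not appear to be accepted as literal keys under properties, but dynamic_templates match new field names by pattern; a hedged sketch of that feature (whether it still fires when the root is set to "dynamic": "false", as in the mapping above, would need to be verified):

{
  "user": {
    "dynamic_templates": [
      {
        "metadata_fields": {
          "match": "*_metadata",
          "match_mapping_type": "object",
          "mapping": { "type": "object", "dynamic": true }
        }
      }
    ],
    "properties": {
      "email": { "type": "string" }
    }
  }
}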



another multitenant scenario

2015-02-26 Thread sebastian
Hi guys!

I'm looking to find the best indexing strategy for my multitenant data.

Here is my scenario:

   - Tenants: > 1
   - Each tenant has "User" documents; all of these docs share common fields, 
   but they also have tenant-specific fields:
      - { tenant: "contoso", name: "peter", location: { id: 1, City: "New 
      York", Street: "57th St" }, metadata: { foo: { bar: "test" } } }
      - { tenant: "fabrikam", name: "mary", location: "San Francisco", 
      metadata: { foo: ["bar", "test"] }, plan: "gold" }
      - { tenant: "fabrikam", name: "john", location: "Seattle", metadata: 
      { foo: ["abc", "xyz"] }, plan: "silver" }

Any ideas?
Thanks!
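One hedged observation on the sample documents above: "location" is an object for contoso but a plain string for fabrikam, and "metadata.foo" is an object in one document and an array of strings in another, so a single shared mapping would reject some of these documents. That usually pushes the design toward either an index per tenant or a normalized common schema in a shared index with a filtered alias (and routing) per tenant; a minimal sketch of the latter, assuming a shared "users" index with a not_analyzed "tenant" field:

curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "users", "alias": "fabrikam", "routing": "fabrikam",
               "filter": { "term": { "tenant": "fabrikam" } } } }
  ]
}'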



how to avoid MapperParsingException?

2015-02-19 Thread sebastian
Hi,

I'm indexing documents with the following structure:

{ "name": "peter", "email": "p...@p.com", "location": "MIA" }
{ "name": "mary", "email": "m...@m.com", "device": "ipad" }
{ "name": "mary", "email": "m...@m.com", "metadata": { ... } }

As you can see, I only know the types of the "name" and "email" fields. The 
location, device, metadata, or whatever other fields appear are dynamic fields.

So, in order to avoid a MapperParsingException, I want to persist all of the 
document fields, but ONLY mark the "name" and "email" fields as "searchable".

Can I do that using mappings?
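A hedged sketch of one option, assuming a type named "user": with "dynamic": "false" at the root, unknown fields are kept in _source (so the whole document is persisted and returned) but are not mapped or indexed, which should also avoid the type-guessing that triggers MapperParsingException; only "name" and "email" remain searchable:

{
  "user": {
    "dynamic": "false",
    "properties": {
      "name":  { "type": "string" },
      "email": { "type": "string" }
    }
  }
}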



Is there any correlation between ttl and version conflicts?

2014-10-30 Thread Sebastian
Hi folks,

I'm using elasticsearch to store and analyse logs. Since I wanted old logs
to be deleted automatically I added a ttl to my mapping. Now I sometimes get
version conflict exceptions when my (PHP) application tries to update a
timestamp in one of the fields. I'm trying to update the field using cURL
and because of session locking it is impossible for a single user to
generate more than one curl_exec()-call at a time. Furthermore I do not
provide version information explicitly in the update query. There are two
elasticsearch servers handling the log index however all queries are handled
by only one of them. The other one is just a standby. The exceptions were
thrown only after setting a ttl so I was wondering if there might be any
correlation? Can any of you shed some light on that matter?

Best regards, Sebastian
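Whatever the root cause turns out to be, an update that can race with a concurrent write (the TTL purge thread deletes expired documents in the background) is commonly made more robust with the update API's retry_on_conflict parameter; a hedged sketch with hypothetical index, type, id and field names:

curl -XPOST 'http://localhost:9200/logs/log/1/_update?retry_on_conflict=3' -d '{
  "doc": { "last_seen": "2014-10-30T12:00:00Z" }
}'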






Re: Accessing Search Templates via Rest

2014-06-17 Thread Sebastian Gräser
Thank you very much : ) good to know!

On Monday, June 16, 2014 at 15:46:43 UTC+2, Alexander Reelsen wrote:
>
> Hey,
>
> no, this is not yet possible, but this will be added sooner or later as 
> the search template API should behave like any other API.
>
>
> --Alex
>
>
> On Fri, Jun 13, 2014 at 9:51 AM, Sebastian Gräser  > wrote:
>
>> so i guess its not possible?
>>
>> On Tuesday, June 10, 2014 at 16:58:31 UTC+2, Sebastian Gräser wrote:
>>
>>> Hello,
>>>
>>> maybe someone can help me. Is there a way to get the available search 
>>> templates via rest api? havent found a way yet, hope you can help me.
>>>
>>> Best regards
>>> Sebastian
>>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/ae1fedb0-4c74-4407-9532-fe7ad705ceb0%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/ae1fedb0-4c74-4407-9532-fe7ad705ceb0%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Accessing Search Templates via Rest

2014-06-13 Thread Sebastian Gräser
So I guess it's not possible?

On Tuesday, June 10, 2014 at 16:58:31 UTC+2, Sebastian Gräser wrote:
>
> Hello,
>
> maybe someone can help me. Is there a way to get the available search 
> templates via rest api? havent found a way yet, hope you can help me.
>
> Best regards
> Sebastian
>



Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
That directory is being created automatically every time I start
elasticsearch.

[root@254020-web1 programs]# service elasticsearch stop

Stopping elasticsearch:[  OK  ]

[root@254020-web1 programs]# ls

elasticsearch-1.2.1

[root@254020-web1 programs]# rm -rf *

[root@254020-web1 programs]# ls

[root@254020-web1 programs]# service elasticsearch start

Starting elasticsearch:[  OK  ]

[root@254020-web1 programs]# ls

[root@254020-web1 programs]# tail /var/log/elasticsearch/elasticsearch.log

at
sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:176)

at java.nio.channels.FileChannel.open(FileChannel.java:287)

at java.nio.channels.FileChannel.open(FileChannel.java:334)

at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)

at
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)

at org.elasticsearch.index.store.Store.openInputRaw(Store.java:319)

at
org.elasticsearch.indices.recovery.RecoverySource$1$1.run(RecoverySource.java:189)

... 3 more

[2014-06-12 02:48:40,299][WARN ][cluster.action.shard ] [Doctor
Faustus] [blurays][2] sending failed shard for [blurays][2],
node[Vf24wKzHSNuEXJCL-rEFDQ], [R], s[INITIALIZING], indexUUID
[z6lXF3L0S1KwwNZUUaiKsw], reason [Failed to start shard, message
[RecoveryFailedException[[blurays][2]: Recovery failed from
[Stilt-Man][mxmoAlTaTkClmfpImcpb1A][254020-web1.8coupons.com][inet[/
192.168.100.218:9301]] into [Doctor Faustus][Vf24wKzHSNuEXJCL-rEFDQ][
254020-web1.8coupons.com][inet[/192.168.100.218:9300]]]; nested:
RemoteTransportException[[Stilt-Man][inet[/192.168.100.218:9301]][index/shard/recovery/startRecovery]];
nested: RecoveryEngineException[[blurays][2] Phase[1] Execution failed];
nested: RecoverFilesRecoveryException[[blurays][2] Failed to transfer [4]
files with total size of [6.6kb]]; nested:
NoSuchFileException[/home/programs/elasticsearch-1.2.1/data/elasticsearch/nodes/1/indices/blurays/2/index/_
0.si]; ]]

[2014-06-12 02:48:40,299][WARN ][cluster.action.shard ] [Doctor
Faustus] [blurays][3] sending failed shard for [blurays][3],
node[Vf24wKzHSNuEXJCL-rEFDQ], [R], s[INITIALIZING], indexUUID
[z6lXF3L0S1KwwNZUUaiKsw], reason [Failed to start shard, message
[RecoveryFailedException[[blurays][3]: Recovery failed from
[Stilt-Man][mxmoAlTaTkClmfpImcpb1A][254020-web1.8coupons.com][inet[/
192.168.100.218:9301]] into [Doctor Faustus][Vf24wKzHSNuEXJCL-rEFDQ][
254020-web1.8coupons.com][inet[/192.168.100.218:9300]]]; nested:
RemoteTransportException[[Stilt-Man][inet[/192.168.100.218:9301]][index/shard/recovery/startRecovery]];
nested: RecoveryEngineException[[blurays][3] Phase[1] Execution failed];
nested: RecoverFilesRecoveryException[[blurays][3] Failed to transfer [1]
files with total size of [71b]]; nested:
NoSuchFileException[/home/programs/elasticsearch-1.2.1/data/elasticsearch/nodes/1/indices/blurays/3/index/segments_1];
]]

[root@254020-web1 programs]# ls

elasticsearch-1.2.1

[root@254020-web1 programs]#
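One hedged observation on the log above: it shows two nodes on the same host ([Doctor Faustus] on transport port 9300 and [Stilt-Man] on 9301), which suggests a second Elasticsearch process, possibly the old tarball/servicewrapper install, is still running and still replicating the old shards back. A quick check, assuming the default HTTP ports:

ps -ef | grep elasticsearch
curl -XGET 'http://localhost:9200/_cat/nodes?v'
curl -XGET 'http://localhost:9201/_cat/nodes?v'   # a second local instance usually binds the next free port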




On Thu, Jun 12, 2014 at 9:36 AM, Mark Walkom 
wrote:

> Why does it mention this path /home/programs/elasticsearch-1.2.1/
> data/elasticsearch/nodes/1/indices/blurays/3/index/segments_1
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 12 June 2014 16:33, Sebastian Okser  wrote:
>
>> I don't see data, but I see it searching for the shards in the log files.
>> Other functionality regarding the JDBC rivers don't seem to be working
>> either in 1.2.1, but its extremely hard to debug those issues when my log
>> files are constantly being filled with the following every minute or so.
>>
>> [root@254020-web1 home]# tail /var/log/elasticsearch/elasticsearch.log
>> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>> at
>> sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:176)
>> at java.nio.channels.FileChannel.open(FileChannel.java:287)
>> at java.nio.channels.FileChannel.open(FileChannel.java:334)
>> at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>> at
>> org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
>> at org.elasticsearch.index.store.Store.openInputRaw(Store.java:319)
>> at
>> org.elasticsearch.indices.recovery.RecoverySource$1$1.run(RecoverySource.java:189)
>> ... 3 more
>> [2014-06-12 02:30:01,513][WARN ][cluster.action.shard ] [Metalhead]
>> [blurays][3] sending failed shard for [blurays][3],
>> node[UqeMw_y7RN-FQ3tuBNEdng], [R], s[INITIALIZING], indexUUID
>> [z6lXF3L0S1KwwNZUUaiKsw], reason [Failed to start shard, message
>> [RecoveryFailedException[[blurays][3]: Recovery failed from
>> [Stilt-

Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
I don't see data, but I see it searching for the shards in the log files.
Other functionality regarding the JDBC rivers doesn't seem to be working
either in 1.2.1, but it's extremely hard to debug those issues when my log
files are constantly being filled with the following every minute or so.

[root@254020-web1 home]# tail /var/log/elasticsearch/elasticsearch.log
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at
sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:176)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:334)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
at org.elasticsearch.index.store.Store.openInputRaw(Store.java:319)
at
org.elasticsearch.indices.recovery.RecoverySource$1$1.run(RecoverySource.java:189)
... 3 more
[2014-06-12 02:30:01,513][WARN ][cluster.action.shard ] [Metalhead]
[blurays][3] sending failed shard for [blurays][3],
node[UqeMw_y7RN-FQ3tuBNEdng], [R], s[INITIALIZING], indexUUID
[z6lXF3L0S1KwwNZUUaiKsw], reason [Failed to start shard, message
[RecoveryFailedException[[blurays][3]: Recovery failed from
[Stilt-Man][mxmoAlTaTkClmfpImcpb1A][254020-web1.8coupons.com][inet[/
192.168.100.218:9301]] into [Metalhead][UqeMw_y7RN-FQ3tuBNEdng][
254020-web1.8coupons.com][inet[/192.168.100.218:9300]]]; nested:
RemoteTransportException[[Stilt-Man][inet[/192.168.100.218:9301]][index/shard/recovery/startRecovery]];
nested: RecoveryEngineException[[blurays][3] Phase[1] Execution failed];
nested: RecoverFilesRecoveryException[[blurays][3] Failed to transfer [1]
files with total size of [71b]]; nested:
NoSuchFileException[/home/programs/elasticsearch-1.2.1/data/elasticsearch/nodes/1/indices/blurays/3/index/segments_1];
]]
[root@254020-web1 home]#


On Thu, Jun 12, 2014 at 9:27 AM, Mark Walkom 
wrote:

> Just to clarify, is it just the directory that appears again at
> /var/lib/elasticsearch/elasticsearch/nodes/0/indices, or are you saying
> the indexed data in the shards is reappearing as well?
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 12 June 2014 16:24, Sebastian Okser  wrote:
>
>>  Neither have data directories. I also deleted all files I could find
>> located at /var/lib/elasticsearch including
>> /var/lib/elasticsearch/elasticsearch/nodes/0/indices, /etc/elasticsearch,
>> their parent directories and every other directory I could find. I then
>> again reinstalled and the data again appears at
>> /var/lib/elasticsearch/elasticsearch/nodes/0/indices.
>>
>> [root@254020-web1 home]# ls /usr/local/share/elasticsearch
>>
>> ls: /usr/local/share/elasticsearch: No such file or directory
>>
>> [root@254020-web1 home]# ls /usr/share/elasticsearch/
>>
>> LICENSE.txt  NOTICE.txt  README.textile  bin  lib  plugins
>>
>>
>> On Thu, Jun 12, 2014 at 9:12 AM, Mark Walkom 
>> wrote:
>>
>>> You may have a data directory under /usr/local/share/elasticsearch, or
>>> under /usr/share/elasticsearch, if it's there and you don't want the data,
>>> delete them.
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>>
>>>
>>> On 12 June 2014 16:10, Sebastian Okser  wrote:
>>>
>>>> I should add that I have also tried uninstalling the RPM and
>>>> reinstalling it. I used rpm -e elasticsearch-1.2.1 for the uninstall.
>>>>
>>>>
>>>> On Thu, Jun 12, 2014 at 9:09 AM, Sebastian Okser 
>>>> wrote:
>>>>
>>>>> I originally installed it with:
>>>>>
>>>>> cd ~
>>>>> wget 
>>>>> https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.9.tar.gz
>>>>>  -O elasticsearch.tar.gz
>>>>>
>>>>> tar -xf elasticsearch.tar.gz
>>>>> rm elasticsearch.tar.gz
>>>>> mv elasticsearch-* elasticsearch
>>>>> sudo mv elasticsearch /usr/local/share
>>>>>
>>>>> curl -L 
>>>>> http://github.com/elasticsearch/elasticsea

Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
Neither have data directories. I also deleted all files I could find
located at /var/lib/elasticsearch including
/var/lib/elasticsearch/elasticsearch/nodes/0/indices, /etc/elasticsearch,
their parent directories and every other directory I could find. I then
again reinstalled and the data again appears at
/var/lib/elasticsearch/elasticsearch/nodes/0/indices.

[root@254020-web1 home]# ls /usr/local/share/elasticsearch

ls: /usr/local/share/elasticsearch: No such file or directory

[root@254020-web1 home]# ls /usr/share/elasticsearch/

LICENSE.txt  NOTICE.txt  README.textile  bin  lib  plugins


On Thu, Jun 12, 2014 at 9:12 AM, Mark Walkom 
wrote:

> You may have a data directory under /usr/local/share/elasticsearch, or
> under /usr/share/elasticsearch, if it's there and you don't want the data,
> delete them.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 12 June 2014 16:10, Sebastian Okser  wrote:
>
>> I should add that I have also tried uninstalling the RPM and reinstalling
>> it. I used rpm -e elasticsearch-1.2.1 for the uninstall.
>>
>>
>> On Thu, Jun 12, 2014 at 9:09 AM, Sebastian Okser 
>> wrote:
>>
>>> I originally installed it with:
>>>
>>> cd ~
>>> wget 
>>> https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.9.tar.gz
>>>  -O elasticsearch.tar.gz
>>>
>>> tar -xf elasticsearch.tar.gz
>>> rm elasticsearch.tar.gz
>>> mv elasticsearch-* elasticsearch
>>> sudo mv elasticsearch /usr/local/share
>>>
>>> curl -L 
>>> http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master 
>>> | tar -xz
>>>
>>> mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
>>> rm -Rf *servicewrapper*
>>> sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
>>> sudo /etc/init.d/elasticsearch start
>>>
>>>
>>> I then deleted all of these directories using rm -rf as specified in
>>> past messages. I then reinstalled it using an RPM using:
>>>
>>> rpm -Uvh elasticsearch-1.2.1.noarch.rpm
>>>
>>> It seems to me that there must be some hidden place where it is storing
>>> all of the settings. Thanks!
>>>
>>>
>>> On Thu, Jun 12, 2014 at 9:04 AM, Mark Walkom 
>>> wrote:
>>>
>>>> Is it a single node cluster?
>>>>
>>>> How are you installing things; deb, rpm, zip?
>>>> How are you uninstalling it?
>>>> Exact commands for the above will be helpful :)
>>>>
>>>> Regards,
>>>> Mark Walkom
>>>>
>>>> Infrastructure Engineer
>>>> Campaign Monitor
>>>> email: ma...@campaignmonitor.com
>>>> web: www.campaignmonitor.com
>>>>
>>>>
>>>> On 12 June 2014 16:00, Sebastian Okser  wrote:
>>>>
>>>>>  I should add that I want to do a 100% clean install it keeps trying
>>>>> to initiate a recovery which is creating problems.
>>>>>
>>>>>
>>>>> On Thu, Jun 12, 2014 at 8:42 AM, Sebastian Okser 
>>>>> wrote:
>>>>>
>>>>>> I tried deleting the directories an reinstalling but somehow it keeps
>>>>>> loading in my old environment even when I try to rm -rf the directories
>>>>>> after shutting down the server? Where is elasticsearch hiding these 
>>>>>> files?
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 11, 2014 at 7:41 PM, Ivan Brusic  wrote:
>>>>>>
>>>>>>> Regarding your problem, are you perhaps running into the fact that
>>>>>>> dynamic scripts are now disabled by default since 1.2?
>>>>>>>
>>>>>>>
>>>>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_enabling_dynamic_scripting
>>>>>>>
>>>>>>> In terms of deleting the existing version, it all depends on how you
>>>>>>> installed Elasticsearch. If you did not override the defaults for the
>>>>>>> various directories, you can just delete the directory.
>>>>>>>
>>>>>>>
>>>

Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
I should add that I have also tried uninstalling the RPM and reinstalling
it. I used rpm -e elasticsearch-1.2.1 for the uninstall.


On Thu, Jun 12, 2014 at 9:09 AM, Sebastian Okser  wrote:

> I originally installed it with:
>
>
> cd ~
> wget 
> https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.9.tar.gz
>  -O elasticsearch.tar.gz
>
>
> tar -xf elasticsearch.tar.gz
> rm elasticsearch.tar.gz
> mv elasticsearch-* elasticsearch
> sudo mv elasticsearch /usr/local/share
>
> curl -L 
> http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | 
> tar -xz
>
> mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
> rm -Rf *servicewrapper*
> sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
> sudo /etc/init.d/elasticsearch start
>
>
> I then deleted all of these directories using rm -rf as specified in past
> messages. I then reinstalled it using an RPM using:
>
> rpm -Uvh elasticsearch-1.2.1.noarch.rpm
>
> It seems to me that there must be some hidden place where it is storing
> all of the settings. Thanks!
>
>
> On Thu, Jun 12, 2014 at 9:04 AM, Mark Walkom 
> wrote:
>
>> Is it a single node cluster?
>>
>> How are you installing things; deb, rpm, zip?
>> How are you uninstalling it?
>> Exact commands for the above will be helpful :)
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>
>>
>> On 12 June 2014 16:00, Sebastian Okser  wrote:
>>
>>>  I should add that I want to do a 100% clean install it keeps trying to
>>> initiate a recovery which is creating problems.
>>>
>>>
>>> On Thu, Jun 12, 2014 at 8:42 AM, Sebastian Okser 
>>> wrote:
>>>
>>>> I tried deleting the directories an reinstalling but somehow it keeps
>>>> loading in my old environment even when I try to rm -rf the directories
>>>> after shutting down the server? Where is elasticsearch hiding these files?
>>>>
>>>>
>>>> On Wed, Jun 11, 2014 at 7:41 PM, Ivan Brusic  wrote:
>>>>
>>>>> Regarding your problem, are you perhaps running into the fact that
>>>>> dynamic scripts are now disabled by default since 1.2?
>>>>>
>>>>>
>>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_enabling_dynamic_scripting
>>>>>
>>>>> In terms of deleting the existing version, it all depends on how you
>>>>> installed Elasticsearch. If you did not override the defaults for the
>>>>> various directories, you can just delete the directory.
>>>>>
>>>>>
>>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-dir-layout.html#setup-dir-layout
>>>>>
>>>>> --
>>>>> Ivan
>>>>>
>>>>>
>>>>> On Wed, Jun 11, 2014 at 12:51 AM, John  wrote:
>>>>>
>>>>>> I recently upgraded to elasticsearch 1.2.1 and have since seen most
>>>>>> of my scripts stop working. Nothing seems to work properly and I think I
>>>>>> may have messed up during the upgrade. How can I properly delete
>>>>>> elasticsearch and all associated data so I can reinstall it from scratch
>>>>>> (all data, indices, etc removed).
>>>>>>
>>>>>> --
>>>>>> You received this message because you are subscribed to the Google
>>>>>> Groups "elasticsearch" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>> send an email to elasticsearch+unsubscr...@googlegroups.com.
>>>>>>
>>>>>> To view this discussion on the web visit
>>>>>> https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com
>>>>>> <https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>>> .
>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>>
>>>>>  --
>>>>> You received this message because you are subscribed to a topic in the
>>>>> Google Groups "elasticsearch" group.
>>>>> To unsu

Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
I originally installed it with:


cd ~
wget 
https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.9.tar.gz
-O elasticsearch.tar.gz

tar -xf elasticsearch.tar.gz
rm elasticsearch.tar.gz
mv elasticsearch-* elasticsearch
sudo mv elasticsearch /usr/local/share

curl -L 
http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master
| tar -xz
mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
rm -Rf *servicewrapper*
sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
sudo /etc/init.d/elasticsearch start


I then deleted all of these directories using rm -rf as specified in past
messages. I then reinstalled it using an RPM using:

rpm -Uvh elasticsearch-1.2.1.noarch.rpm

It seems to me that there must be some hidden place where it is storing all
of the settings. Thanks!
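A hedged checklist for a genuinely clean slate, assuming the default RPM locations plus the old tarball location used above (verify the package name with rpm -qa and the paths in /etc/elasticsearch/elasticsearch.yml before deleting anything); the old servicewrapper install under /usr/local/share kept its data inside its own directory, and any process still running from it will happily recreate shards:

service elasticsearch stop
ps -ef | grep elasticsearch                # make sure no leftover process from the old install is running
rpm -e elasticsearch-1.2.1
rm -rf /usr/local/share/elasticsearch      # old tarball/servicewrapper install, including its data directory
rm -rf /var/lib/elasticsearch /var/log/elasticsearch /etc/elasticsearch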


On Thu, Jun 12, 2014 at 9:04 AM, Mark Walkom 
wrote:

> Is it a single node cluster?
>
> How are you installing things; deb, rpm, zip?
> How are you uninstalling it?
> Exact commands for the above will be helpful :)
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 12 June 2014 16:00, Sebastian Okser  wrote:
>
>>  I should add that I want to do a 100% clean install it keeps trying to
>> initiate a recovery which is creating problems.
>>
>>
>> On Thu, Jun 12, 2014 at 8:42 AM, Sebastian Okser 
>> wrote:
>>
>>> I tried deleting the directories an reinstalling but somehow it keeps
>>> loading in my old environment even when I try to rm -rf the directories
>>> after shutting down the server? Where is elasticsearch hiding these files?
>>>
>>>
>>> On Wed, Jun 11, 2014 at 7:41 PM, Ivan Brusic  wrote:
>>>
>>>> Regarding your problem, are you perhaps running into the fact that
>>>> dynamic scripts are now disabled by default since 1.2?
>>>>
>>>>
>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_enabling_dynamic_scripting
>>>>
>>>> In terms of deleting the existing version, it all depends on how you
>>>> installed Elasticsearch. If you did not override the defaults for the
>>>> various directories, you can just delete the directory.
>>>>
>>>>
>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-dir-layout.html#setup-dir-layout
>>>>
>>>> --
>>>> Ivan
>>>>
>>>>
>>>> On Wed, Jun 11, 2014 at 12:51 AM, John  wrote:
>>>>
>>>>> I recently upgraded to elasticsearch 1.2.1 and have since seen most of
>>>>> my scripts stop working. Nothing seems to work properly and I think I may
>>>>> have messed up during the upgrade. How can I properly delete elasticsearch
>>>>> and all associated data so I can reinstall it from scratch (all data,
>>>>> indices, etc removed).
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "elasticsearch" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>>>>
>>>>> To view this discussion on the web visit
>>>>> https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com
>>>>> <https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>> .
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>>>  --
>>>> You received this message because you are subscribed to a topic in the
>>>> Google Groups "elasticsearch" group.
>>>> To unsubscribe from this topic, visit
>>>> https://groups.google.com/d/topic/elasticsearch/6YzVfTg-4ng/unsubscribe
>>>> .
>>>> To unsubscribe from this group and all its topics, send an email to
>>>> elasticsearch+unsubscr...@googlegroups.com.
>>>> To view this discussion on the web visit
>>>> https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAkb1XiEK7umZReNyPiQ-uE3YGhReUivPOdKcSbz81VuA%40mail.gmail.com
>>>> <https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAkb1XiEK7umZReNyPiQ-uE3YGhReUivPOdKcSbz81VuA%40mail.gmail.com?utm_medium=email&utm_source=foote

Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
I should add that I want to do a 100% clean install because it keeps trying to
initiate a recovery, which is creating problems.


On Thu, Jun 12, 2014 at 8:42 AM, Sebastian Okser  wrote:

> I tried deleting the directories an reinstalling but somehow it keeps
> loading in my old environment even when I try to rm -rf the directories
> after shutting down the server? Where is elasticsearch hiding these files?
>
>
> On Wed, Jun 11, 2014 at 7:41 PM, Ivan Brusic  wrote:
>
>> Regarding your problem, are you perhaps running into the fact that
>> dynamic scripts are now disabled by default since 1.2?
>>
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_enabling_dynamic_scripting
>>
>> In terms of deleting the existing version, it all depends on how you
>> installed Elasticsearch. If you did not override the defaults for the
>> various directories, you can just delete the directory.
>>
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-dir-layout.html#setup-dir-layout
>>
>> --
>> Ivan
>>
>>
>> On Wed, Jun 11, 2014 at 12:51 AM, John  wrote:
>>
>>> I recently upgraded to elasticsearch 1.2.1 and have since seen most of
>>> my scripts stop working. Nothing seems to work properly and I think I may
>>> have messed up during the upgrade. How can I properly delete elasticsearch
>>> and all associated data so I can reinstall it from scratch (all data,
>>> indices, etc removed).
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>>
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com
>>> <https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/6YzVfTg-4ng/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAkb1XiEK7umZReNyPiQ-uE3YGhReUivPOdKcSbz81VuA%40mail.gmail.com
>> <https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAkb1XiEK7umZReNyPiQ-uE3YGhReUivPOdKcSbz81VuA%40mail.gmail.com?utm_medium=email&utm_source=footer>
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Elasticsearch 1.2 Delete and Reinstall

2014-06-11 Thread Sebastian Okser
I tried deleting the directories and reinstalling, but somehow it keeps
loading my old environment even when I rm -rf the directories after shutting
down the server. Where is elasticsearch hiding these files?


On Wed, Jun 11, 2014 at 7:41 PM, Ivan Brusic  wrote:

> Regarding your problem, are you perhaps running into the fact that dynamic
> scripts are now disabled by default since 1.2?
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_enabling_dynamic_scripting
>
> In terms of deleting the existing version, it all depends on how you
> installed Elasticsearch. If you did not override the defaults for the
> various directories, you can just delete the directory.
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-dir-layout.html#setup-dir-layout
>
> --
> Ivan
>
>
> On Wed, Jun 11, 2014 at 12:51 AM, John  wrote:
>
>> I recently upgraded to elasticsearch 1.2.1 and have since seen most of my
>> scripts stop working. Nothing seems to work properly and I think I may have
>> messed up during the upgrade. How can I properly delete elasticsearch and
>> all associated data so I can reinstall it from scratch (all data, indices,
>> etc removed).
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>>
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/b7ca9761-05e7-4e24-b287-f98bbaeb4713%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/6YzVfTg-4ng/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAkb1XiEK7umZReNyPiQ-uE3YGhReUivPOdKcSbz81VuA%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



Accessing Search Templates via Rest

2014-06-10 Thread Sebastian Gräser
Hello,

Maybe someone can help me. Is there a way to get the available search 
templates via the REST API? I haven't found a way yet; hope you can help me.

Best regards
Sebastian



Re: Multi DC cluster or separate cluster per DC?

2014-05-15 Thread Sebastian Łaskawiec
We are still thinking about the production configuration, and here is a short 
list of the advantages and disadvantages of a single cluster vs. separate clusters...

Single cluster:

   - (+) If you have single cluster - you perform single query to the 
   database. In case of having cluster per DC - each cluster needs to query DB 
   separately
   - (+) Data consistency - in the matter of fact this is achieved by 
   single query to the DB
   - (+) You can introduce new DC easily
   - (+) True active-active configuration
   - (-) Split brain and pretty complicated configuration (to avoid split 
   brain in case when DC link is down)
   - (-) node.master setting can not be changed in runtime (take a look at 
   my first post and split brain solution)
   - (-) In case of a disaster we need to operate on a single DC. If you use 
   a single cluster across 2 DCs you can't really tell whether a single DC is 
   strong enough to handle the query and indexing load
   - (-) In pessimistic scenario data travels through WAN 2 times (first 
   time - database replication, second time - ES replication)
   - (-) You can't really tell which node will respond to the query. Let's 
   assume that you have full index in each DC (force awareness option). ES 
   might decide to gather results from the remote DC and not from the local 
   one. This way you need to add WAN latency into your query time.
   - (-) You need to turn off whole cluster or perform cycle restarts 
   during upgrade

Separate cluster per DC:

   - (+) No Split brain
   - (+) You can tell precisely when you are out of resources to handle 
   load in ES cluster in each DC
   - (+) You can experiment with different settings on production. If 
   something goes wrong - just switch clients to standby DC.
   - (+) Full failover - in case of any problems - just switch to the other 
   DC
   - (+) Upgrades are easy and you have no down time (upgrade first DC, 
   stabilize it, test it, and then to the same to the other DC)
   - (+) Since these are 2 separate clusters you can avoid data traveling 
   through WAN during queries. Each DC queries nodes locally.
   - (-) It is not a full active-active configuration. It's more like an 
   active-standby configuration
   - (-) Data inconsistency might occur (different results when queried 
   local and remote DC)
   - (-) Each DC will query DB separately. This will generate additional 
   load to the DB

Right now we think we should go for 2 separate clusters. DB load is the thing 
that worries me the most (we have a really complicated query with a lot of 
left joins). However, we think that in our case having two separate clusters 
has more advantages than disadvantages.

If you have some more arguments or comments - please let us know :)

Regards
Sebastian

On Monday, May 12, 2014 at 20:02:35 UTC+2, Deepak Jha wrote:
>
> Having a separate cluster is definitely a better way to go. OR, you can 
> control the shard, replica placement so that they are always placed in the 
> same DC. In this way, you can avoid interDC issues still having a single 
> cluster. I have the similar issue and I am looking at it as one of the 
> alternative. 
>
> On Saturday, May 10, 2014 1:05:08 AM UTC-7, Sebastian Łaskawiec wrote:
>>
>> Thanks for the answer! We've been talking with several other teams in our 
>> company and it looks like this is the most recommended and stable setup.
>>
>> Regards
>> Sebastian
>>
>> On Wednesday, May 7, 2014 at 03:23:43 UTC+2, Mark Walkom wrote:
>>>
>>> Go the latter method and have two clusters, ES can be very sensitive to 
>>> network latency and you'll likely end up with more problems than it is 
>>> worth. 
>>> Given you already have the data source of truth being replicated, it's 
>>> the sanest option to just read that locally.
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>>
>>>
>>> On 6 May 2014 23:51, Sebastian Łaskawiec  wrote:
>>>
>>>> Hi!
>>>>
>>>> I'd like to ask for advice about deployment in multi DC scenario.
>>>>
>>>> Currently we operate on 2 Data Centers in active/standby mode. In case of 
>>>> ES we'd like to have a different approach - we'd like to operate in 
>>>> active-active mode (we want to optimize our resources, especially for 
>>>> querying). 
>>>> Here are some details about target configuration:
>>>>
>>>>- 4 ES instances per DC. Full cluster will have 8 instances.
>>>>- Up to 1 TB of data 
>>>>- Data pulled from database using JDBC River
>

Re: Multi DC cluster or separate cluster per DC?

2014-05-10 Thread Sebastian Łaskawiec
Thanks for the answer! We've been talking with several other teams in our 
company and it looks like this is the most recommended and stable setup.

Regards
Sebastian

On Wednesday, May 7, 2014 at 03:23:43 UTC+2, Mark Walkom wrote:
>
> Go the latter method and have two clusters, ES can be very sensitive to 
> network latency and you'll likely end up with more problems than it is 
> worth. 
> Given you already have the data source of truth being replicated, it's the 
> sanest option to just read that locally.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 6 May 2014 23:51, Sebastian Łaskawiec 
> > wrote:
>
>> Hi!
>>
>> I'd like to ask for advice about deployment in multi DC scenario.
>>
>> Currently we operate on 2 Data Centers in active/standby mode. In case of 
>> ES we'd like to have a different approach - we'd like to operate in 
>> active-active mode (we want to optimize our resources, especially for 
>> querying). 
>> Here are some details about target configuration:
>>
>>- 4 ES instances per DC. Full cluster will have 8 instances.
>>- Up to 1 TB of data 
>>- Data pulled from database using JDBC River
>>- Database is replicated asynchronously between DCs. Each DC will 
>>have its own database instance to pull data. 
>>- Average latency between DCs is about several milliseconds
>>- We need to operate when passive DC is down
>>
>> We know that multi DC configuration might end with Split Brain issue. 
>> Here is how we want to prevent it:
>>
>>- Set node.master: true only in 4 nodes in active DC
>>- Set node.master: false in passive DC
>>- This way we'll be sure that new cluster will not be created in 
>>passive DC 
>>- Additionally we'd like to set discovery.zen.minimum_master_nodes: 3 
>>(to avoid Split Brain in active DC)
>>
>> Additionally there is problem with switchover (passive DC becomes active 
>> and active becomes passive). In our system it takes about 20 minutes and 
>> this is the maximum length of our maintenance window. We were thinking of 
>> shutting down whole ES cluster and switch node.master setting in 
>> configuration files (as far as I know this settings can not be changed via 
>> REST api). Then we'd need to start whole cluster.
>>
>> So my question is: is it better to have one big ES cluster operating on 
>> both DCs or should we change our approach and create 2 separate clusters 
>> (and rely on database replication)? I'd be grateful for advice.
>>
>> Regards
>> Sebastian
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/6be53754-63fd-4202-b940-750a3e0c1a8f%40googlegroups.com<https://groups.google.com/d/msgid/elasticsearch/6be53754-63fd-4202-b940-750a3e0c1a8f%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Multi DC cluster or separate cluster per DC?

2014-05-06 Thread Sebastian Łaskawiec
Hi!

I'd like to ask for advice about deployment in multi DC scenario.

Currently we operate on 2 Data Centers in active/standby mode. In case of ES 
we'd like to have a different approach - we'd like to operate in active-active 
mode (we want to optimize our resources, especially for querying). 
Here are some details about target configuration:

   - 4 ES instances per DC. Full cluster will have 8 instances.
   - Up to 1 TB of data
   - Data pulled from database using JDBC River
   - Database is replicated asynchronously between DCs. Each DC will have 
   its own database instance to pull data.
   - Average latency between DCs is about several milliseconds
   - We need to operate when passive DC is down

We know that multi DC configuration might end with Split Brain issue. Here 
is how we want to prevent it:

   - Set node.master: true only in 4 nodes in active DC
   - Set node.master: false in passive DC
   - This way we'll be sure that new cluster will not be created in passive 
   DC
   - Additionally we'd like to set discovery.zen.minimum_master_nodes: 3 
   (to avoid Split Brain in active DC)
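A minimal elasticsearch.yml sketch of the split-brain settings listed above, applied per data center as described (nothing beyond those settings is implied):

# active DC - all 4 nodes are master-eligible
node.master: true
discovery.zen.minimum_master_nodes: 3

# passive DC - nodes hold data but can never become master
node.master: false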

Additionally there is a problem with switchover (the passive DC becomes active 
and the active becomes passive). In our system it takes about 20 minutes and 
this is the maximum length of our maintenance window. We were thinking of 
shutting down the whole ES cluster and switching the node.master setting in the 
configuration files (as far as I know these settings cannot be changed via the 
REST API). Then we'd need to start the whole cluster.

So my question is: is it better to have one big ES cluster operating on 
both DCs or should we change our approach and create 2 separate clusters 
(and rely on database replication)? I'd be grateful for advice.

Regards
Sebastian



Re: Match every token position in the field when using synonyms

2014-01-22 Thread Sebastian Briesemeister
I am also very keen on an answer! If you find a solution, let me know!

Sebastian

On Thursday, 16 January 2014 15:12:23 UTC+1, Dany Gielow wrote:
>
> In my Elasticsearch index I have documents that have multiple tokens at 
> the same position.
>
> I want to get a document back when I match at least one token at every 
> position.
> The order of the tokens is not important. How can I accomplish that?
> I use Elasticsearch 0.90.5.
>
> *Example:*
>
> I index a document like this.
> 
> {
> "field":"red car"
> }
>
>
> I use a synonym token filter that adds synonyms at the same positions as 
> the original token.
> So now in the field, there are 2 positions:
>
>
>- Position 1: "red"
>- Position 2: "car", "automobile"
>
>
> *My solution for now:*
>
> To be able to ensure that all positions match, I index the maximum 
> position as well.
>
> {
> "field":"red car",
> "max_position": 2
> }
>
>
> I have a custom similarity that extends from DefaultSimilarity and returns 
> 1 tf(), idf() and lengthNorm(). The resulting score is the number of 
> matching terms in the field.
>
> Query:
>
> {
> "custom_score": {
> "query": {
>  "match": {
>  "field": "a car is an automobile"
>  }
> },
> "_script": "_score*100/doc[\"max_position\"]+_score"
> },
> "min_score":"100"
> }
>
>
>
> *Problem with my solution:*
> The above search should not match the document, because there is no token 
> "red" in the query string. But it matches, because Elasticsearch counts the 
> matches for car and automobile as two matches and that gives a score of 2 
> which leads to a script score of 102, which satisfies the "min_score".
>
