Re: Problems setting up user authentication with Shield

2015-03-15 Thread Sönke Liebau
Hi Mark,

thanks a lot, that did the trick!

Kind regards,
 Sönke

On Sunday, March 15, 2015 at 9:14:47 PM UTC+1, Mark Walkom wrote:
>
> There is a step in the longer version of getting started about 
> setting ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch" - 
> http://www.elastic.co/guide/en/shield/current/getting-started.html - did 
> you set that?
>
> On 15 March 2015 at 11:30, Sönke Liebau > 
> wrote:
>
>> Hi everybody,
>>
>> I just tried setting up Shield on my three node testcluster. I downloaded 
>> the license and shield plugins, ran syskeygen and distributed that to all 
>> cluster nodes.
>> I then ran the following command on all nodes as the es user:
>> ./esusers useradd es_admin -r admin -p foobar1234
>>
>> After a cluster restart I was asked to enter username and password when 
>> accessing 10.10.0.72:9200 in my browser (or via curl from localhost, 
>> same result). So far so good, but when I enter the user credentials created 
>> above I just get returned to the login prompt.
>>
>> In the cluster.log file I see the following lines:
>>
>> [2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [
>> elastic_node_3] user not found in cache, proceeding with normal 
>> authentication
>> [2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [
>> elastic_node_3] realm [esusers] could not authenticate [es_admin]
>>
>> I did not add anything to the elasticsearch.yml configuration, as the 
>> documentation stated that this is not strictly speaking necessary if I am 
>> happy to use the default esusers realm. 
>>
>> I am probably omitting a very simple but crucial step here, but I cannot 
>> figure out what is wrong and am thankful for any hints as to where I might 
>> look for the cause.
>>
>> Kind regards,
>>  Sönke
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/bc9c8044-e7d7-4435-8f9b-b120861fa921%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Elasticsearch high heap usage

2015-03-15 Thread Mark Walkom
Those are reasonably large documents. You also seem to have a lot of shards
for the data.

What sort of data is it, are you using doc values, how are you bucketing
data (ie time series indices)?
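
Before changing anything, it can help to see where the heap is actually going.
A rough sketch using the stock stats APIs (the localhost address and the
wildcard field list are placeholders):

# Per-node fielddata memory, broken down by field
curl 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'
# Compact per-field view across all nodes
curl 'localhost:9200/_cat/fielddata?v&fields=*'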

On 15 March 2015 at 20:39,  wrote:

> Hello,
>
> We have a 2 node elasticsearch cluster which is used by logstash to store
> log files. The current input is around 100 documents (logs) per second with
> a size of around 50kb - 150kb.
> Compared to what I have read so far this is not a high amount, but we
> already see high heap usage: 70% of the total 11GB heap size; the system
> has 32GB of RAM in total. CPU and IO are totally fine.
>
> Any suggestion highly appreciated!
>
> Cheers
> Chris
>
> 
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/c2679cad-72ec-472f-a009-a6c9e2abbb9d%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Elasticsearch high heap usage

2015-03-15 Thread chris85lang
Hello,

We have a 2 node elasticsearch cluster which is used by logstash to store 
log files. The current input is around 100 documents (logs) per second with 
a size of around 50kb - 150kb.
Compared to what I have read so far this is not a high amount, but we 
already see high heap usage: 70% of the total 11GB heap size; the system 
has 32GB of RAM in total. CPU and IO are totally fine.

Any suggestion highly appreciated!

Cheers
Chris 





Re: Aggregation / Sort and CircuitBreakingException

2015-03-15 Thread Lindsey Poole
Also, if I understand correctly, there are negative implications when 
sorting on a field that has been analyzed - in our case, to remove 
stop-words.

Since the total cardinality of our sort field exceeds the available heap, 
we can't sort a single user's documents when using stop-word analysis, since 
doc_values do not support analyzed fields.

It seems like we'll have to preprocess the field to remove stop-words?

On Sunday, March 15, 2015 at 7:01:21 PM UTC-7, Lindsey Poole wrote:
>
> Well, we have a field that is supporting a backward compatibility use 
> case. Clients are executing a partial match query on this field, so we used 
> the keyword tokenizer instead of not_analyzed. Since this is supporting 
> legacy functionality, the clients cannot be updated to change the 
> expectation that a partial match will return results.
>
> I can modify the schema and re-index so that we aggregate and sort over a 
> not_analyzed subfield instead, while executing any queries on the parent 
> field, but I wanted to verify that there is no other way to filter out 
> terms prior to loading them into the fielddata cache.
>
> The kind of filtering I'm looking for would be something like, "only 
> consider terms in field1 from documents where field2=valueA".
>
> -Lindsey
>
> On Sunday, March 15, 2015 at 4:43:56 PM UTC-7, Jörg Prante wrote:
>>
>> I mean, I do not understand what you mean by "I'm caught up on the 
>> advice to use doc_values where possible, but we have a use case where we do 
>> light analysis on a particular set of fields in our document" - what 
>> exactly prevents you from doc values?
>>
>> Jörg
>>
>> On Mon, Mar 16, 2015 at 12:41 AM, joerg...@gmail.com  
>> wrote:
>>
>>> Have you considered doc values?
>>>
>>>
>>> http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html
>>>
>>> Jörg
>>>
>>> On Sun, Mar 15, 2015 at 11:11 PM, Lindsey Poole  
>>> wrote:
>>>
 Hey guys,

 I have a question about the mechanics of aggregation and sorting w.r.t. 
 the fielddata cache. I know this has been covered in some detail 
 previously, and I'm caught up on the advice to use doc_values where 
 possible, but we have a use case where we do light analysis on a 
 particular 
 set of fields in our document, but also allow sorting on those fields.

 While we'll probably modify our schema to solve the issue, I was first 
 wondering whether it is possible to filter the set of documents that ES 
 aggregates / sorts over *before* pulling them into the fielddata cache? We 
 have extremely high cardinality fields, but very selective queries, and it 
 seems very inefficient to pull multiple gigabytes into the fielddata cache 
 to select relatively few matching documents.

 Thanks,

 Lindsey

 -- 
 You received this message because you are subscribed to the Google 
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/e32cf7c3-e2b3-48e9-bc7c-d7f2e0016835%40googlegroups.com
  
 
 .
 For more options, visit https://groups.google.com/d/optout.

>>>
>>>
>>



Re: Aggregation / Sort and CircuitBreakingException

2015-03-15 Thread Lindsey Poole
Well, we have a field that is supporting a backward compatibility use case. 
Clients are executing a partial match query on this field, so we used the 
keyword tokenizer instead of not_analyzed. Since this is supporting legacy 
functionality, the clients cannot be updated to change the expectation that 
a partial match will return results.

I can modify the schema and re-index so that we aggregate and sort over a 
not_analyzed subfield instead, while executing any queries on the parent 
field, but I wanted to verify that there is no other way to filter out 
terms prior to loading them into the fielddata cache.
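
For reference, the multi-field route would look roughly like this - a sketch
only, with hypothetical index/type/field names; the sub-field is not_analyzed
so it can use doc_values for sorting and aggregations while queries keep
hitting the keyword-analyzed parent field:

curl -XPUT 'localhost:9200/myindex/_mapping/mytype' -d '{
  "properties": {
    "field1": {
      "type": "string",
      "analyzer": "keyword",
      "fields": {
        "raw": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}'
# sort / aggregate on "field1.raw", keep querying "field1"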

The kind of filtering I'm looking for would be something like, "only 
consider terms in field1 from documents where field2=valueA".

-Lindsey

On Sunday, March 15, 2015 at 4:43:56 PM UTC-7, Jörg Prante wrote:
>
> I mean, I do not understand what you mean by "I'm caught up on the advice 
> to use doc_values where possible, but we have a use case where we do light 
> analysis on a particular set of fields in our document" - what exactly 
> prevents you from doc values?
>
> Jörg
>
> On Mon, Mar 16, 2015 at 12:41 AM, joerg...@gmail.com  <
> joerg...@gmail.com > wrote:
>
>> Have you considered doc values?
>>
>> http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html
>>
>> Jörg
>>
>> On Sun, Mar 15, 2015 at 11:11 PM, Lindsey Poole > > wrote:
>>
>>> Hey guys,
>>>
>>> I have a question about the mechanics of aggregation and sorting w.r.t. 
>>> the fielddata cache. I know this has been covered in some detail 
>>> previously, and I'm caught up on the advice to use doc_values where 
>>> possible, but we have a use case where we do light analysis on a particular 
>>> set of fields in our document, but also allow sorting on those fields.
>>>
>>> While we'll probably modify our schema to solve the issue, I was first 
>>> wondering whether it is possible to filter the set of documents that ES 
>>> aggregates / sorts over *before* pulling them into the fielddata cache? We 
>>> have extremely high cardinality fields, but very selective queries, and it 
>>> seems very inefficient to pull multiple gigabytes into the fielddata cache 
>>> to select relatively few matching documents.
>>>
>>> Thanks,
>>>
>>> Lindsey
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com .
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/e32cf7c3-e2b3-48e9-bc7c-d7f2e0016835%40googlegroups.com
>>>  
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>



Re: Aggregation / Sort and CircuitBreakingException

2015-03-15 Thread joergpra...@gmail.com
I mean, I do not understand what you mean by "I'm caught up on the advice
to use doc_values where possible, but we have a use case where we do light
analysis on a particular set of fields in our document" - what exactly
prevents you from doc values?

Jörg

On Mon, Mar 16, 2015 at 12:41 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> Have you considered doc values?
>
> http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html
>
> Jörg
>
> On Sun, Mar 15, 2015 at 11:11 PM, Lindsey Poole  wrote:
>
>> Hey guys,
>>
>> I have a question about the mechanics of aggregation and sorting w.r.t.
>> the fielddata cache. I know this has been covered in some detail
>> previously, and I'm caught up on the advice to use doc_values where
>> possible, but we have a use case where we do light analysis on a particular
>> set of fields in our document, but also allow sorting on those fields.
>>
>> While we'll probably modify our schema to solve the issue, I was first
>> wondering whether it is possible to filter the set of documents that ES
>> aggregates / sorts over *before* pulling them into the fielddata cache? We
>> have extremely high cardinality fields, but very selective queries, and it
>> seems very inefficient to pull multiple gigabytes into the fielddata cache
>> to select relatively few matching documents.
>>
>> Thanks,
>>
>> Lindsey
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/e32cf7c3-e2b3-48e9-bc7c-d7f2e0016835%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Aggregation / Sort and CircuitBreakingException

2015-03-15 Thread joergpra...@gmail.com
Have you considered doc values?

http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html

Jörg

On Sun, Mar 15, 2015 at 11:11 PM, Lindsey Poole  wrote:

> Hey guys,
>
> I have a question about the mechanics of aggregation and sorting w.r.t.
> the fielddata cache. I know this has been covered in some detail
> previously, and I'm caught up on the advice to use doc_values where
> possible, but we have a use case where we do light analysis on a particular
> set of fields in our document, but also allow sorting on those fields.
>
> While we'll probably modify our schema to solve the issue, I was first
> wondering whether it is possible to filter the set of documents that ES
> aggregates / sorts over *before* pulling them into the fielddata cache? We
> have extremely high cardinality fields, but very selective queries, and it
> seems very inefficient to pull multiple gigabytes into the fielddata cache
> to select relatively few matching documents.
>
> Thanks,
>
> Lindsey
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/e32cf7c3-e2b3-48e9-bc7c-d7f2e0016835%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: PerThreadIDAndVersionLookup - thread safety

2015-03-15 Thread joergpra...@gmail.com
It is not thread safe because of the TermsEnum array, which cannot be
shared between threads. By not sharing it, a thread can reuse the array, which
avoids expensive reinitialization.

The utility class was introduced at

https://github.com/elastic/elasticsearch/issues/6212

and, from what I understand, this replaced the previous version ID lookup based
on bloom filters (which came with a very noticeable RAM cost).

Maybe you have lots of segments?

Sometimes, ThreadLocals go crazy because of Java issues, and they are hard
to clean up. So I think if you can post some more detailed information
about what you have seen and what OS, JVM, and ES versions you use, it
would be helpful.

Jörg

On Sun, Mar 15, 2015 at 10:16 PM, Paweł Róg  wrote:

> Hi,
> Can anyone briefly describe why the PerThreadIDAndVersionLookup class is not
> thread safe and what would be needed to make it thread safe? I'm wondering if it
> is possible to keep only a single instance of VersionLookup and not have it
> stick to a thread. I see a big chunk of JVM memory being wasted only because
> of the PerThreadIDAndVersionLookup class.
>
> Thanks a lot for any suggestions/advice.
>
> --
> Paweł
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAHngsdi_u_gj0PAaahB%2B8fEhsqRQ0SNr5LrFw5_oPJcs4LqyYA%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Aggregation / Sort and CircuitBreakingException

2015-03-15 Thread Lindsey Poole
Hey guys,

I have a question about the mechanics of aggregation and sorting w.r.t. the 
fielddata cache. I know this has been covered in some detail previously, 
and I'm caught up on the advice to use doc_values where possible, but we 
have a use case where we do light analysis on a particular set of fields in 
our document, but also allow sorting on those fields.

While we'll probably modify our schema to solve the issue, I was first 
wondering whether it is possible to filter the set of documents that ES 
aggregates / sorts over *before* pulling them into the fielddata cache? We 
have extremely high cardinality fields, but very selective queries, and it 
seems very inefficient to pull multiple gigabytes into the fielddata cache 
to select relatively few matching documents.

Thanks,

Lindsey



Re: What configuration is available to control MemoryMapDirectory

2015-03-15 Thread Lindsey Poole
Just to close this out - we disabled EC2's health checks and spent some 
time tuning the batch thread-pool size to prevent overrunning the cluster 
once the memory map cache size exceeds available physical memory. This was 
successful (we're restricted to a surprisingly small threadpool size of 3).
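
If the "batch" pool above refers to Elasticsearch's bulk thread pool, the cap
can be expressed in elasticsearch.yml roughly as follows (a sketch only; the
values are illustrative):

# elasticsearch.yml - assuming the "batch" pool means the bulk thread pool
threadpool.bulk.size: 3
threadpool.bulk.queue_size: 100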

Thanks.

On Saturday, March 14, 2015 at 1:05:49 PM UTC-7, Mark Walkom wrote:
>
> Can you provide more info on what the error/problem is, logs might help.
>
> On 14 March 2015 at 10:12, joerg...@gmail.com  <
> joerg...@gmail.com > wrote:
>
>> I'm out - no experience with EC2. I avoid foreign servers at all cost. 
>> Maybe 120G RAM is affected by swap/memory overcommit.  Do not forget to 
>> check memlock and memory ballooning. The chances are few you can control 
>> host settings as a guest in a virtual server environment.
>>
>> Jörg
>>
>> On Sat, Mar 14, 2015 at 5:06 PM, Lindsey Poole > > wrote:
>>
>>> btw - we're on EC2 I2-4xl hosts, so we have ~120g ram and SSDs.
>>>
>>>
>>> On Saturday, March 14, 2015 at 9:04:34 AM UTC-7, Lindsey Poole wrote:

 I did see the ES_DIRECT_SIZE, but it seems to be ineffective.

 I will try setting -XX:MaxDirectMemorySize directly.

 On Saturday, March 14, 2015 at 4:43:22 AM UTC-7, Jörg Prante wrote:
>
> You may try limit direct memory on JVM level by 
> using -XX:MaxDirectMemorySize (default is unlimited). See also 
> ES_DIRECT_SIZE in http://www.elastic.co/guide/en/elasticsearch/
> reference/current/setup-service.html#_linux 
>
> I recommend at least 2GB
>
> Jörg
>
> On Sat, Mar 14, 2015 at 1:03 AM, Lindsey Poole  
> wrote:
>
>> Hey guys,
>>
>> We're running into some problems under heavy write, nominal read 
>> volume when the Lucene memory mapped files have exhausted available 
>> physical memory, and segments from disk must be paged into memory.
>>
>> Are there any configs available to control how much physical memory 
>> is available to MemoryMapDirectory?
>>
>> Thanks!
>>
>> -- 
>> You received this message because you are subscribed to the Google 
>> Groups "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, 
>> send an email to elasticsearc...@googlegroups.com.
>> To view this discussion on the web visit https://groups.google.com/d/
>> msgid/elasticsearch/12d0d4ba-8a32-4c85-9635-40d7791865e5%
>> 40googlegroups.com 
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com .
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/cf50ed46-cacf-414b-8b20-b82595dc2fd0%40googlegroups.com
>>>  
>>> 
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoHY_M2BAywrG%2BaNcg59xA4_ocph1oqE0bzba4HTqrdLqQ%40mail.gmail.com
>>  
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



PerThreadIDAndVersionLookup - thread safety

2015-03-15 Thread Paweł Róg
Hi,
Can anyone briefly describe why the PerThreadIDAndVersionLookup class is not
thread safe and what would be needed to make it thread safe? I'm wondering if it
is possible to keep only a single instance of VersionLookup and not have it
stick to a thread. I see a big chunk of JVM memory being wasted only because
of the PerThreadIDAndVersionLookup class.

Thanks a lot for any suggestions/advice.

--
Paweł



Re: java.io.IOException: Connection timed out error

2015-03-15 Thread Mark Walkom
I am guessing that is the client log, but what does your ES node show?

On 15 March 2015 at 10:27, sunil patil  wrote:

> I am using an Elasticsearch 1.1.0 server. I have a Java web application that
> connects to Elasticsearch using the Transport Client; I am also using the
> elasticsearch 1.1.0 client. When the application starts it is able to
> connect to the Elasticsearch server and all searches work, but after a couple
> of hours (this happens randomly) the application stops working and is not
> able to execute searches on Elasticsearch. When I enable the client transport and
> netty transport traces I see the following exception. Has anyone seen this issue
> before? Do you have any suggestions?
>
>
>
> [2015-Mar-14 18:37:56:647] TRACE [Gertrude Yorkes] close connection exception
> caught on transport layer [[id: 0x08525d42, /172.26.129.121:51561 =>
> p01sml002.aap.csaa.com/172.29.24.69:9300]
> ], disconnecting from
> relevant node
> java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> at
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> at
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at
> org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at
> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [2015-Mar-14 18:37:56:647] TRACE [Gertrude Yorkes] close connection exception
> caught on transport layer [[id: 0x08525d42, /172.26.129.121:51561 =>
> p01sml002.aap.csaa.com/172.29.24.69:9300]
> ], disconnecting from
> relevant node
> java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> at
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
> at
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> at
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at
> org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at
> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/67012934-7735-468d-a3a0-9cd5d674307a%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


Re: Problems setting up user authentication with Shield

2015-03-15 Thread Mark Walkom
There is a step in the longer version of the getting-started guide about
setting ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch" -
http://www.elastic.co/guide/en/shield/current/getting-started.html - did
you set that?
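
In practice that means pointing esusers at the same config directory the nodes
actually read. A minimal sketch, assuming a package install with the config
under /etc/elasticsearch (paths are illustrative):

export ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
./esusers useradd es_admin -r admin -p foobar1234
# esusers should then write the users file under /etc/elasticsearch/shield/
# (the directory the nodes read) instead of the plugin's bundled config location.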

On 15 March 2015 at 11:30, Sönke Liebau  wrote:

> Hi everybody,
>
> I just tried setting up Shield on my three node testcluster. I downloaded
> the license and shield plugins, ran syskeygen and distributed that to all
> cluster nodes.
> I then ran the following command on all nodes as the es user:
> ./esusers useradd es_admin -r admin -p foobar1234
>
> After a cluster restart I was asked to enter username and password when
> accessing 10.10.0.72:9200 in my browser (or via curl from localhost, same
> result). So far so good, but when I enter the user credentials created
> above I just get returned to the login prompt.
>
> In the cluster.log file I see the following lines:
>
> [2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [
> elastic_node_3] user not found in cache, proceeding with normal
> authentication
> [2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [
> elastic_node_3] realm [esusers] could not authenticate [es_admin]
>
> I did not add anything to the elasticsearch.yml configuration, as the
> documentation stated that this is not strictly speaking necessary if I am
> happy to use the default esusers realm.
>
> I am probably omitting a very simple but crucial step here, but I cannot
> figure out what is wrong and am thankful for any hints as to where I might
> look for the cause.
>
> Kind regards,
>  Sönke
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/bc9c8044-e7d7-4435-8f9b-b120861fa921%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Problems setting up user authentication with Shield

2015-03-15 Thread Sönke Liebau
Hi everybody,

I just tried setting up Shield on my three-node test cluster. I downloaded 
the license and shield plugins, ran syskeygen and distributed the generated 
system key to all cluster nodes.
I then ran the following command on all nodes as the es user:
./esusers useradd es_admin -r admin -p foobar1234

After a cluster restart I was asked to enter username and password when 
accessing 10.10.0.72:9200 in my browser (or via curl from localhost, same 
result). So far so good, but when I enter the user credentials created 
above I just get returned to the login prompt.

In the cluster.log file I see the following lines:

[2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [elastic_node_3] 
user not found in cache, proceeding with normal authentication
[2015-03-15 18:26:00,197][DEBUG][shield.authc.esusers ] [elastic_node_3] 
realm [esusers] could not authenticate [es_admin]

I did not add anything to the elasticsearch.yml configuration, as the 
documentation stated that this is not strictly speaking necessary if I am 
happy to use the default esusers realm. 

I am probably omitting a very simple but crucial step here, but I cannot 
figure out what is wrong and am thankful for any hints as to where I might 
look for the cause.

Kind regards,
 Sönke



Re: Embedded fields not searchable when addressed by name and type of same name exists?

2015-03-15 Thread Joel Potischman
Thanks, Vineeth. Much appreciated! I read your ticket and the associated 
issue and that definitely seems to be it. I think I found a workaround for 
my specific needs though:

It seems that if instead of querying against the root of the index, I 
supply the list of types, it works as expected. Using the example data from 
your issue:
curl -XPOST "http://dev-searchapi.beatport.com:9200/tests/
*country,sublocality*/_search" -d'
{
"query": {
"term": {
"country.name": "yyy"
}
}
}'

finds both records (one sublocality, one country) as I'd expect, whereas 
the same query payload without the types returns nothing.

That *seems* to remove my stumbling block but if you're aware of any other 
unexpected behavior I should watch out for in this regard please let me 
know so I can avoid additional frustrations!

Thanks again!

-joel

On Sunday, March 15, 2015 at 5:39:47 AM UTC-4, vineeth mohan wrote:
>
> Hello Joel , 
>
> This is a known issue.
> Internally each field is stored as typeName.fieldName format as type name 
> is just an abstraction.
> That is one of the reason for this issue.
>
> You can find more information here - 
> https://github.com/elastic/elasticsearch/issues/7411
>
>
> Thanks
>Vineeth Mohan,
>Elasticsearch consultant,
>qbox.io ( Elasticsearch service provider )
>
>
> On Sat, Mar 14, 2015 at 1:48 AM, Joel Potischman  > wrote:
>
>> I ran into the following today in Elasticsearch 1.4.4 and am trying to 
>> determine if this is a bug in Elasticsearch or a bug in my understanding of 
>> Elasticsearch. I'm more than willing to believe it is the latter. It should 
>> be very reproducible with the commands I've pasted.
>>
>> Let's say I have two types - books and authors - and I add one of each to 
>> my test index:
>>
>> POST /tests/authors
>> { "name": "mytest12345" }
>>
>> POST /tests/books
>> {
>> "title": "My big book",
>> "authors": [{ "name": "mytest12345" }]
>> }
>>
>> I can perform a simple query and I will get both records back - one 
>> artist, one book:
>> GET /tests/_search?q=mytest12345
>>
>> This is what I would expect.
>>
>> If I then query against the authors.name field within the books type, I 
>> get my book, also just as I expect:
>> GET /tests/books/_search?q=authors.name:mytest12345
>>
>> However, if I perform the exact same query against the root of the index 
>> instead of against the books type
>> GET /tests/_search?q=authors.name:mytest12345
>>
>> I instead get back the *author* record. The book no longer comes back at 
>> all even though to my understanding I'm performing the same query against 
>> *all* types instead of just books.
>>
>> If I delete the authors type (and I confirmed deleting just the authors 
>> records 
>> won't work)
>> DELETE /tests/authors
>>
>> Then the query against the index root behaves as expected
>> GET /tests/_search?q=authors.name:mytest12345
>>
>> It basically appears that the query
>> GET /{index}/_search?q=*{type}.{field}*:{query}
>>
>> appears to run internally as:
>> GET /{index}/*{type}*/_search?q=*{field}*:{query}
>>
>> when the {type} type exists in the index. So my question is, is this 
>> correct behavior that I don't understand? Or is it a bug? It feels like a 
>> bug to me, but I'll defer to the experts here. I'm happy to open a ticket 
>> if someone more experienced than me can verify it's an issue.
>>
>> Thanks,
>>
>> -joel
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/c8757abd-a99f-4c78-805b-17101ace7982%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Joining Multiple Aggregration Queries into one query

2015-03-15 Thread Prashant singh
Adding a reference to it:
https://gist.github.com/parasb/751baf0fca2a7682a8d1


On Sunday, March 15, 2015 at 11:32:20 PM UTC+5:30, Prashant singh wrote:
>
> Hi ,
> I have One query given below .
>
> 
>
> {
> "query": {
> "filtered": {
> "query": {
> "query_string": {
> "query": "*"
> }
> },
> "filter": {
> "bool": {
> "should": [
> {
> "fquery": {
> "query": {
> "query_string": {
> "query": "\"357539019562335\""
> }
> },
> "_cache": true
> }
> },
> {
> "fquery": {
> "query": {
> "query_string": {
> "query": "\"357539019562624\""
> }
> },
> "_cache": true
> }
> }
> ],
> "must": [
> {
> "fquery": {
> "query": {
> "query_string": {
> "query": "SMSATTEMPTSMO + 
> SMSATTEMPTSMT"
> }
> },
> "_cache": true
> }
> },
> {
> "range": {
> "@timestamp": {
> "to": "2015-03-05T00:00-05:00",
> "from": "2015-03-04T00:00-05:00"
> }
> }
> }
> ]
> }
> }
> }
> },
> "aggs": {
> "events_by_IMEI": {
> "terms": {
> "field": "IMEI"
> },
> "aggs": {
> "sms_over_time": {
> "date_histogram": {
> "field": "@timestamp",
> "interval": "1h",
> "post_zone": "-05:00",
> "min_doc_count": 0,
> "extended_bounds": {
> "min": "2015-03-04T00:00-05:00",
> "max": "2015-03-05T00:00-05:00"
> }
> },
> "aggs": {
> "total_sms": {
> "sum": {
> "field": "value"
> }
> }
> }
> }
> }
> }
> },
> "size": 0
> }
> Which gives me below response.
>
> -
> {
>   "took": 9279,
>   "timed_out": false,
>   "_shards": {
> "total": 20,
> "successful": 20,
> "failed": 0
>   },
>   "hits": {
> "total": 354,
> "max_score": 0,
> "hits": []
>   },
>   "aggregations": {
> "events_by_IMEI": {
>   "buckets": [
> {
>   "key": "357539019562624",
>   "doc_count": 178,
>   "sms_over_time": {
> "buckets": [
>   {
> "key_as_string": "2015-03-04T00:00:00.000Z",
> "key": 142542720,
> "doc_count": 0,
> "total_sms": {
>   "value": 0
> }
>   },
>   {
> "key_as_string": "2015-03-04T01:00:00.000Z",
> "key": 142543080,
> "doc_count": 0,
> "total_sms": {
>   "value": 0
> }
>   },
>   {
> "key_as_string": "2015-03-04T02:00:00.000Z",
> "key": 142543440,
> "doc_count": 8,
> "total_sms": {
>   "value": 0
> }
>   },
>   {
> "key_as_string": "2015-03-04T03:00:00.000Z",
> "key": 142543800,
> "doc_count": 8,
> "total_sms": {
>   "value": 0
> }
>   },
>   {
> "key_as_string": "2015-03-04T04:00:00.000Z",
> "key": 142544160

Joining Multiple Aggregration Queries into one query

2015-03-15 Thread Prashant singh
Hi,
I have one query, given below.


{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "*"
        }
      },
      "filter": {
        "bool": {
          "should": [
            {
              "fquery": {
                "query": {
                  "query_string": {
                    "query": "\"357539019562335\""
                  }
                },
                "_cache": true
              }
            },
            {
              "fquery": {
                "query": {
                  "query_string": {
                    "query": "\"357539019562624\""
                  }
                },
                "_cache": true
              }
            }
          ],
          "must": [
            {
              "fquery": {
                "query": {
                  "query_string": {
                    "query": "SMSATTEMPTSMO + SMSATTEMPTSMT"
                  }
                },
                "_cache": true
              }
            },
            {
              "range": {
                "@timestamp": {
                  "to": "2015-03-05T00:00-05:00",
                  "from": "2015-03-04T00:00-05:00"
                }
              }
            }
          ]
        }
      }
    }
  },
  "aggs": {
    "events_by_IMEI": {
      "terms": {
        "field": "IMEI"
      },
      "aggs": {
        "sms_over_time": {
          "date_histogram": {
            "field": "@timestamp",
            "interval": "1h",
            "post_zone": "-05:00",
            "min_doc_count": 0,
            "extended_bounds": {
              "min": "2015-03-04T00:00-05:00",
              "max": "2015-03-05T00:00-05:00"
            }
          },
          "aggs": {
            "total_sms": {
              "sum": {
                "field": "value"
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}
This gives me the response below.

-
{
  "took": 9279,
  "timed_out": false,
  "_shards": {
"total": 20,
"successful": 20,
"failed": 0
  },
  "hits": {
"total": 354,
"max_score": 0,
"hits": []
  },
  "aggregations": {
"events_by_IMEI": {
  "buckets": [
{
  "key": "357539019562624",
  "doc_count": 178,
  "sms_over_time": {
"buckets": [
  {
"key_as_string": "2015-03-04T00:00:00.000Z",
"key": 142542720,
"doc_count": 0,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T01:00:00.000Z",
"key": 142543080,
"doc_count": 0,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T02:00:00.000Z",
"key": 142543440,
"doc_count": 8,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T03:00:00.000Z",
"key": 142543800,
"doc_count": 8,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T04:00:00.000Z",
"key": 142544160,
"doc_count": 8,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T05:00:00.000Z",
"key": 142544520,
"doc_count": 8,
"total_sms": {
  "value": 0
}
  },
  {
"key_as_string": "2015-03-04T06:00:00.000Z",
  

java.io.IOException: Connection timed out error

2015-03-15 Thread sunil patil
I am using an Elasticsearch 1.1.0 server. I have a Java web application that 
connects to Elasticsearch using the Transport Client; I am also using the 
elasticsearch 1.1.0 client. When the application starts it is able to 
connect to the Elasticsearch server and all searches work, but after a couple 
of hours (this happens randomly) the application stops working and is not 
able to execute searches on Elasticsearch. When I enable the client transport and 
netty transport traces I see the following exception. Has anyone seen this issue 
before? Do you have any suggestions?



[2015-Mar-14 18:37:56:647] TRACE [Gertrude Yorkes] close connection exception 
caught on transport layer [[id: 0x08525d42, /172.26.129.121:51561 => 
p01sml002.aap.csaa.com/172.29.24.69:9300]], disconnecting from relevant 
node 
java.io.IOException: Connection timed out 
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) 
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
at sun.nio.ch.IOUtil.read(IOUtil.java:192) 
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 
at 
org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 
at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745) 
[2015-Mar-14 18:37:56:647] TRACE [Gertrude Yorkes] close connection exception 
caught on transport layer [[id: 0x08525d42, /172.26.129.121:51561 => 
p01sml002.aap.csaa.com/172.29.24.69:9300]], disconnecting from relevant 
node 
java.io.IOException: Connection timed out 
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) 
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
at sun.nio.ch.IOUtil.read(IOUtil.java:192) 
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
 
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
 
at 
org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 
at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)



Re: How to get existing index object, if index exists in php elastica

2015-03-15 Thread nitin birdi
My bad, it was that easy: $index = $client->getIndex($name); The problem was 
in my own code.
Thanks anyway.

On Sunday, March 15, 2015 at 3:30:01 PM UTC+5:30, nitin birdi wrote:
>
> Hi,
>
> I am new to elastic search. I am using php elastica client and facing a 
> problem:
> If an index exists, I want to get the object of this existing index and 
> not recreate it. How can this be done?
>
> client = new \Elastica\Client($arrServerConf, $callback);
>
> if ( $client->getIndex($name)->exists() ) {
>   //do something here to get this existing object -- what to do 
> here???
> } else {
>   // create a new one
>   $index = $client->getIndex($name);
>   $index->create(array('index' => array('number_of_shards' => 
> $shards, 'number_of_replicas' => 0)), $delete);
> }
> $type = $index->getType($typeName);
>
> Or is there some other way of doing this?
> I need this because, I'll be adding documents in this index and searching 
> among them.
>
> I think its a very trivial task, should have easily found a fix, but am 
> unable to fix this. Hope you guys will be kind enough to help me out.
>
> Thanks.
>



How to get existing index object, if index exists in php elastica

2015-03-15 Thread nitin birdi
Hi,

I am new to Elasticsearch. I am using the PHP Elastica client and facing a 
problem:
If an index exists, I want to get the object of this existing index and not 
recreate it. How can this be done?

$client = new \Elastica\Client($arrServerConf, $callback);

if ( $client->getIndex($name)->exists() ) {
    // do something here to get this existing object -- what to do here???
} else {
    // create a new one
    $index = $client->getIndex($name);
    $index->create(array('index' => array('number_of_shards' => $shards, 'number_of_replicas' => 0)), $delete);
}
$type = $index->getType($typeName);

Or is there some other way of doing this?
I need this because, I'll be adding documents in this index and searching 
among them.

I think it's a very trivial task and I should have easily found a fix, but I am 
unable to fix it. Hope you guys will be kind enough to help me out.

Thanks.



Re: Embedded fields not searchable when addressed by name and type of same name exists?

2015-03-15 Thread vineeth mohan
Hello Joel ,

This is a known issue.
Internally, each field is stored in typeName.fieldName format, as the type name
is just an abstraction.
That is one of the reasons for this issue.

You can find more information here -
https://github.com/elastic/elasticsearch/issues/7411


Thanks
   Vineeth Mohan,
   Elasticsearch consultant,
   qbox.io ( Elasticsearch service provider )


On Sat, Mar 14, 2015 at 1:48 AM, Joel Potischman <
joel.potisch...@beatport.com> wrote:

> I ran into the following today in Elasticsearch 1.4.4 and am trying to
> determine if this is a bug in Elasticsearch or a bug in my understanding of
> Elasticsearch. I'm more than willing to believe it is the latter. It should
> be very reproducible with the commands I've pasted.
>
> Let's say I have two types - books and authors - and I add one of each to
> my test index:
>
> POST /tests/authors
> { "name": "mytest12345" }
>
> POST /tests/books
> {
> "title": "My big book",
> "authors": [{ "name": "mytest12345" }]
> }
>
> I can perform a simple query and I will get both records back - one
> artist, one book:
> GET /tests/_search?q=mytest12345
>
> This is what I would expect.
>
> If I then query against the authors.name field within the books type, I
> get my book, also just as I expect:
> GET /tests/books/_search?q=authors.name:mytest12345
>
> However, if I perform the exact same query against the root of the index
> instead of against the books type
> GET /tests/_search?q=authors.name:mytest12345
>
> I instead get back the *author* record. The book no longer comes back at
> all even though to my understanding I'm performing the same query against
> *all* types instead of just books.
>
> If I delete the authors type (and I confirmed deleting just the authors 
> records
> won't work)
> DELETE /tests/authors
>
> Then the query against the index root behaves as expected
> GET /tests/_search?q=authors.name:mytest12345
>
> It basically appears that the query
> GET /{index}/_search?q=*{type}.{field}*:{query}
>
> appears to run internally as:
> GET /{index}/*{type}*/_search?q=*{field}*:{query}
>
> when the {type} type exists in the index. So my question is, is this
> correct behavior that I don't understand? Or is it a bug? It feels like a
> bug to me, but I'll defer to the experts here. I'm happy to open a ticket
> if someone more experienced than me can verify it's an issue.
>
> Thanks,
>
> -joel
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/c8757abd-a99f-4c78-805b-17101ace7982%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>
