KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Anikessh Jain
Hi all,

I want to install Kibana-3.0.0 with Nginx. Can anybody provide the steps, 
or any link that can help?

I want to install Elasticsearch + Kibana + Logstash to get syslog and 
nginx logs into the Kibana GUI.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/aa71b656-10f5-41a9-b7e5-d01a4b9cfc95%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Kibana RegEx not working as expected?

2014-03-23 Thread David Pilato
I think you did not set the query type to regex, as the generated query is a 
query_string:
{"query_string":{"query":"a{3}"}}
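For comparison, an explicit regexp query (a sketch only; the field name "message" is a placeholder for whatever field is being searched) would look like this:

```json
{
  "query": {
    "regexp": {
      "message": "a{3}"
    }
  }
}
```

Alternatively, if I recall the Lucene query-string syntax correctly, a regular expression can be used from the Kibana search box by wrapping it in forward slashes, e.g. message:/a{3}/.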
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


Le 22 mars 2014 à 22:23, Janet Sullivan jan...@nairial.net a écrit :

Here’s what I see in Kibana:
 
Oops! SearchParseException[[logstash-2014.03.22][5]: from[-1],size[-1]: Parse 
Failure [Failed to parse source 
[{"query":{"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"a{3}"}}]}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1395519710926,"to":"now"}}}]}}}},"highlight":{"fields":{},"fragment_size":2147483647,"pre_tags":["@start-highlight@"],"post_tags":["@end-highlight@"]},"size":500,"sort":[{"@timestamp":{"order":"desc"}},{"@timestamp":{"order":"desc"}}]}]]]
 
From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of Janet Sullivan
Sent: Saturday, March 22, 2014 2:16 PM
To: elasticsearch@googlegroups.com
Subject: Kibana RegEx not working as expected?
 
I’m running ES 1.0.1 and Kibana 3 milestone 5.   I’m trying to use regex in my 
kibana searches, as described at 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html#regexp-syntax.
  Some of the regex works, but any regex that uses square brackets [a-c] or 
curly braces a{3} fails.  What am I doing wrong?


ES instead of Cassandra.

2014-03-23 Thread Tim Uckun
Is ES a suitable replacement for Cassandra for an analytics platform? I 
need high-speed data ingestion, time-series analysis, rollups, 
aggregations, etc. Cassandra is often used for this kind of task, but it 
seems to me ES might be a suitable, if not better, replacement.

Cheers.



Re: KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Anikessh Jain

Hi Mark, thanks for the help, but my query is where to save the Kibana folder 
for Nginx, and what changes I have to make to the Nginx config.

Can you please help me with those steps as well? And is Redis also required? 
Please clear my doubt.



On Sunday, March 23, 2014 12:39:31 PM UTC+5:30, Mark Walkom wrote:

 Take a look at 
 http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash 
 This will help on the kibana front, just edit to suit 
 https://gist.github.com/markwalkom/8034326

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 18:03, Anikessh Jain anikess...@gmail.com wrote:

 Hi all,

 I want to Install Kibana-3.0.0 with Nginx ,can anybody provide me the 
 steps or any Link that can help.

 I want to Install ElasticsEarch + Kibana +Logstash for getting syslog 
 ,nginx logs in Kibana GUI.
  






Re: KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Mark Walkom
That gist is a vhost file, which I'd think is pretty self-explanatory if you
know basic nginx; if not, check out http://wiki.nginx.org/FullExample and
you'll be good.

You don't need redis.
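Since the original ask was getting syslog and nginx logs into Kibana, a minimal logstash 1.4 pipeline might look like the sketch below; the port, file path, and host are assumptions to adjust for your environment:

```conf
# logstash.conf — sketch only; adjust port, path, and host to your setup
input {
  syslog { port => 5514 }                          # listen for syslog messages
  file   { path => "/var/log/nginx/access.log"     # tail the nginx access log
           type => "nginx-access" }
}
output {
  elasticsearch { host => "localhost" }            # ship events to ES
}
```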

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 23 March 2014 20:32, Anikessh Jain anikesshjai...@gmail.com wrote:


 Hi Mark thanks for the help,but my query is where to save the Kibana
 folder in Nginx and what changes I have to make the Nginx

 Can you Please help me with that steps also and is Redis also required
 .Please clear my doubt.



 On Sunday, March 23, 2014 12:39:31 PM UTC+5:30, Mark Walkom wrote:

 Take a look at http://logstash.net/docs/1.4.0/tutorials/getting-started-
 with-logstash
 This will help on the kibana front, just edit to suit
 https://gist.github.com/markwalkom/8034326

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 18:03, Anikessh Jain anikess...@gmail.com wrote:

 Hi all,

 I want to Install Kibana-3.0.0 with Nginx ,can anybody provide me the
 steps or any Link that can help.

 I want to Install ElasticsEarch + Kibana +Logstash for getting syslog
 ,nginx logs in Kibana GUI.







Re: ES instead of Cassandra.

2014-03-23 Thread David Pilato
I don't know if it could be a replacement for Cassandra but for sure it could 
be an analytic platform!

I would say: just give it a try! It's easy enough to start playing with your 
data! 
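As a concrete starting point, a typical time-series rollup in ES is a date_histogram aggregation with sub-aggregations. A sketch (the field names @timestamp and value are assumptions about the mapping):

```json
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "interval": "hour" },
      "aggs": {
        "avg_value": { "avg": { "field": "value" } }
      }
    }
  }
}
```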


--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 Le 23 mars 2014 à 09:07, Tim Uckun timuc...@gmail.com a écrit :
 
 Is ES a suitable replacement for Cassandra for an analytics platform? I need 
 high speed data ingestion, time series analysis, rollups and aggregations 
 etc.  Cassandra is used for this kind of task often but it seems to me ES 
 might be a suitable if not better replacement.
 
 Cheers.



Re: KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Anikessh Jain
You did not get my query: where do I save my extracted Kibana files? For 
example, I saved them in /usr/share/nginx, so my root will be 
/usr/share/nginx/kibana.

And should I remove the default one? And how do I browse Kibana through a 
browser? For example, in my nginx file I have given kibana.myhost.com, 
so should I browse to kibana.myhost.com/ and will I get my Kibana page? 
And how do I integrate Logstash with Kibana as well? Any help will be good.

Hope you understand my problem. Sorry for stressing you.

For example, my nginx config is:

server {
    listen *:80;
    server_name kibana.myhost.com;

    access_log /var/log/nginx/kibana_access.log;
    error_log /var/log/nginx/kibana_error.log;

    location / {
        root /usr/share/nginx/kibana;
        index index.html index.htm;
    }

    location ~ ^/_aliases$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
    }

    location ~ ^/.*/_aliases$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
    }

    location ~ ^/_nodes$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
    }

    location ~ ^/.*/_search$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
    }

    location ~ ^/.*/_mapping$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
    }

    # Password protected end points
    location ~ ^/kibana-int/dashboard/.*$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
        limit_except GET {
            proxy_pass http://eshost.domain.com:9200;
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
        }
    }

    location ~ ^/kibana-int/temp.*$ {
        proxy_pass http://eshost.domain.com:9200;
        proxy_read_timeout 180;
        limit_except GET {
            proxy_pass http://eshost.domain.com:9200;
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
        }
    }
}


 

On Sunday, March 23, 2014 3:12:25 PM UTC+5:30, Mark Walkom wrote:

 That gist is a vhost file which I'd think is pretty explanatory if you 
 know basic nginx, if not check out http://wiki.nginx.org/FullExample and 
 you'll be good.

 You don't need redis.

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 20:32, Anikessh Jain anikess...@gmail.com wrote:


 Hi Mark thanks for the help,but my query is where to save the Kibana 
 folder in Nginx and what changes I have to make the Nginx

 Can you Please help me with that steps also and is Redis also required 
 .Please clear my doubt.



 On Sunday, March 23, 2014 12:39:31 PM UTC+5:30, Mark Walkom wrote:

 Take a look at http://logstash.net/docs/1.4.0/tutorials/getting-started-
 with-logstash 
 This will help on the kibana front, just edit to suit 
 https://gist.github.com/markwalkom/8034326

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 18:03, Anikessh Jain anikess...@gmail.com wrote:

 Hi all,

 I want to Install Kibana-3.0.0 with Nginx ,can anybody provide me the 
 steps or any Link that can help.

 I want to Install ElasticsEarch + Kibana +Logstash for getting syslog 
 ,nginx logs in Kibana GUI.
  

Re: KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Mark Walkom
You mean where do you put the dashboard files? If so then it's
$KIBANA_HOME/app/dashboards/

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 23 March 2014 20:59, Anikessh Jain anikesshjai...@gmail.com wrote:

 You did not get my query where do I save my Kibana extracted file and for
 example i saved in /usr/share/nginx so my root will be
 /usr/share/nginx/kibana

 and should i remove the default one and how do I browse Kibana through
 BROWSER.for example in my nginx file i have given kibana.myhost.com
 so should i browse like kibana.myhost.com/ and will i get my kibana page
 and how do i integrate logstash as well in kibana .any help will be good.

 hope you understand my problem.sorry for stressing you.

 for example i browse like

 server {
 listen *:80;
 server_name kibana.myhost.com;

 access_log /var/log/nginx/kibana_access.log;
 error_log /var/log/nginx/kibana_error.log;

 location / {
 root /usr/share/nginx/kibana;
 index index.htmli index.htm;
 }

 location ~ ^/_aliases$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 location ~ ^/.*/_aliases$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 location ~ ^/_nodes$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 location ~ ^/*/_search$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 location ~ ^/*/_mapping$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 # Password protected end points
 location ~ ^/kibana-int/dashboard/.*$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 limit_except GET {
   proxy_pass http://eshost.domain.com:9200;
   auth_basic Restricted;
   auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
 }
   }


 location ~ ^/kibana-int/temp.*$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 limit_except GET {
   proxy_pass http://eshost.domain.com:9200;
   auth_basic Restricted;
   auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
 }
   }
 }




 On Sunday, March 23, 2014 3:12:25 PM UTC+5:30, Mark Walkom wrote:

 That gist is a vhost file which I'd think is pretty explanatory if you
 know basic nginx, if not check out http://wiki.nginx.org/FullExample and
 you'll be good.

 You don't need redis.

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 20:32, Anikessh Jain anikess...@gmail.com wrote:


 Hi Mark thanks for the help,but my query is where to save the Kibana
 folder in Nginx and what changes I have to make the Nginx

 Can you Please help me with that steps also and is Redis also required
 .Please clear my doubt.



 On Sunday, March 23, 2014 12:39:31 PM UTC+5:30, Mark Walkom wrote:

 Take a look at http://logstash.net/docs/1.4.0
 /tutorials/getting-started-with-logstash
 This will help on the kibana front, just edit to suit
 https://gist.github.com/markwalkom/8034326

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 18:03, Anikessh Jain anikess...@gmail.com wrote:

 Hi all,

 I want to Install Kibana-3.0.0 with Nginx ,can anybody provide me the
 steps or any Link that can help.

 I want to Install ElasticsEarch + Kibana +Logstash for getting syslog
 ,nginx logs in Kibana GUI.


Re: KIBANA-3.0.0 install nginx step by step

2014-03-23 Thread Anikessh Jain
You are not getting me.

Please read my query, please. I am asking whether I have configured my 
document root for Kibana properly, as given above. I want to browse Kibana, 
not browse dashboards. And I want to know what changes are to be made in the 
config.js file of Kibana as well.



Please read and then revert.
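For what it's worth, the main change usually needed in Kibana 3's config.js is the elasticsearch setting, which must point at an Elasticsearch URL reachable from the browser. A hedged sketch of the relevant excerpt (the hostname is a placeholder, and the rest of the shipped file should keep its defaults):

```javascript
/* config.js (excerpt, sketch only) — normally only the elasticsearch line
   changes. The URL must be reachable from the *browser*, e.g. the nginx
   proxy in front of ES, not necessarily localhost. */
define(['settings'],
function (settings) {
  return new Settings({
    elasticsearch: "http://kibana.myhost.com:80",   // placeholder hostname
    kibana_index: "kibana-int",                     // where dashboards are saved
    default_route: '/dashboard/file/default.json'
  });
});
```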

On Sunday, March 23, 2014 3:33:56 PM UTC+5:30, Mark Walkom wrote:

 You mean where do you put the dashboard files? If so then it's 
 $KIBANA_HOME/app/dashboards/

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com
  

 On 23 March 2014 20:59, Anikessh Jain anikess...@gmail.com wrote:

 You did not get my query where do I save my Kibana extracted file and for 
 example i saved in /usr/share/nginx so my root will be 
 /usr/share/nginx/kibana

 and should i remove the default one and how do I browse Kibana through 
 BROWSER.for example in my nginx file i have given kibana.myhost.com 
 so should i browse like kibana.myhost.com/ and will i get my kibana page 
 and how do i integrate logstash as well in kibana .any help will be good.

 hope you understand my problem.sorry for stressing you. 

 for example i browse like 

 server {
 listen *:80;
 server_name kibana.myhost.com;
  
 access_log /var/log/nginx/kibana_access.log;
 error_log /var/log/nginx/kibana_error.log;
  
 location / {
 root /usr/share/nginx/kibana;
 index index.htmli index.htm;
 }
  
 location ~ ^/_aliases$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }
   
 location ~ ^/.*/_aliases$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }

 location ~ ^/_nodes$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }
  
 location ~ ^/*/_search$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }
  
 location ~ ^/*/_mapping$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 }
  
 # Password protected end points
 location ~ ^/kibana-int/dashboard/.*$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 limit_except GET {
   proxy_pass http://eshost.domain.com:9200;
   auth_basic Restricted;
   auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
 }
   }

  
 location ~ ^/kibana-int/temp.*$ {
 proxy_pass http://eshost.domain.com:9200;
 proxy_read_timeout 180;
 limit_except GET {
   proxy_pass http://eshost.domain.com:9200;
   auth_basic Restricted;
   auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
 }
   }
 }


  

 On Sunday, March 23, 2014 3:12:25 PM UTC+5:30, Mark Walkom wrote:

 That gist is a vhost file which I'd think is pretty explanatory if you 
 know basic nginx, if not check out http://wiki.nginx.org/FullExample and 
 you'll be good.

 You don't need redis.

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 23 March 2014 20:32, Anikessh Jain anikess...@gmail.com wrote:


 Hi Mark thanks for the help,but my query is where to save the Kibana 
 folder in Nginx and what changes I have to make the Nginx

 Can you Please help me with that steps also and is Redis also required 
 .Please clear my doubt.



 On Sunday, March 23, 2014 12:39:31 PM UTC+5:30, Mark Walkom wrote:

 Take a look at http://logstash.net/docs/1.4.0
 /tutorials/getting-started-with-logstash 
 This will help on the kibana front, just edit to suit 
 https://gist.github.com/markwalkom/8034326

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: 

Re: Kibana Setup

2014-03-23 Thread victor obaitor
Thank you. Got it working perfectly. Sent from my iPhone

On Sat, Mar 22, 2014 at 10:51 PM, Mark Walkom ma...@campaignmonitor.com
wrote:

 You need something to serve Kibana: Apache, nginx, etc.
 Regards,
 Mark Walkom
 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com
 On 23 March 2014 08:39, victor obaitor vicob...@gmail.com wrote:
 I am trying to integrate Kibana with my Elasticsearch installation.
 Elasticsearch runs well on http://localhost:9200. I extracted Kibana
 inside my ES folder. My questions are: 1. Am I supposed to extract Kibana
 into the ES folder or into a server such as XAMPP? 2. Given that ES runs at
 http://localhost:9200, what will be the FQDN of my Kibana
 installation, supposing I was on the right path all along? NB: I am using
 the latest versions of both Kibana and ES.


 Thanks





Windows Elasticsearch cluster performance tuning

2014-03-23 Thread Eric Brandes
Hey all, I have a 3 node Elasticsearch 1.0.1 cluster running on Windows 
Server 2012 (in Azure).  There's about 20 million documents that take up a 
total of 40GB (including replicas).  There's about 400 indexes in total, 
with some having millions of documents and some having just a few.  Each 
index is set to have 3 shards and 1 replica. The main cluster is running 
on three 4-core machines with 7GB of RAM. The min/max JVM heap size is 
set to 4GB.

The primary use case for this cluster is faceting/aggregations over the 
documents.  There's almost no full text searching, so everything is pretty 
much based on exact values (which are stored but not analyzed at index time)

When doing some term facets on a few of these indexes (the biggest one 
contains about 8 million documents) I'm seeing really long response times 
(> 5 sec). There are potentially thousands of distinct values for the term 
I'm faceting on, but I would have still expected faster performance.

So my goal is to speed up these queries to get the responses sub second if 
possible.  To that end I had some questions:
1) Would switching to Linux give me better performance in general?
2) I could collapse almost all of these 400 indexes in to a single big 
index and use aliases + filters instead.  Would this be advisable?
3) Would mucking with the field data cache yield any better results?
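(On question 2, the alias-plus-filter approach would use filtered aliases. A sketch against the _aliases endpoint — the index, alias, and field names here are made up for illustration:)

```shell
# Sketch only: "events", "tenant-42", and "tenant_id" are hypothetical names
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "events", "alias": "tenant-42",
               "filter": { "term": { "tenant_id": "42" } } } }
  ]
}'
```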


If I can add any more data to this discussion please let me know!
Thanks!
Eric



Re: Date calculation does not work in DELETE

2014-03-23 Thread Norbert Hartl
BTW, I forgot to add that I'm using Elasticsearch 1.0.1.

Norbert
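One generic way to see how ES parses that body (a debugging step, not a confirmed fix) is the validate API with explain enabled, reusing the same JSON file:

```shell
# Requires a running ES on localhost; reuses the delete-old.json body as-is
curl -XGET 'http://localhost:9200/myindex/_validate/query?explain' -d @delete-old.json
```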

Am 22.03.2014 um 17:28 schrieb Norbert Hartl norbert.ha...@googlemail.com:

 Hi,
 
 I'm trying to delete entries in my indexes that are older than 6 months. I'm 
 using the curl command
 
 curl -vv -XDELETE  http://localhost:9200/myindex/_query -d @delete-old.json
 
 delete-old.json content: 
 
 {
   "query" : {
     "filtered" : {
       "query" : {
         "match_all" : { }
       },
       "filter" : {
         "range" : {
           "timeStamp" : {
             "lte" : "now-6M"
           }
         }
       }
     }
   }
 }
 
 
 
 The query does work when doing a search. It does also work if I use
 
 "timeStamp" : {
   "lte" : "2013-09-01T00:00:00+00:00"
 }
 
 So is the date calculation "now-6M" not supposed to work in DELETEs?
 
 
 
 thanks,
 
 
 
 norbert
 
 



[ANN] Elastisch 2.0.0-beta2 is released

2014-03-23 Thread Michael Klishin
Elastisch [1] is a small, feature-complete client for Elasticsearch
that provides both REST and native clients.

Release notes:
http://blog.clojurewerkz.org/blog/2014/03/23/elastisch-2-dot-0-0-beta2-is-released/

Sister projects: http://clojurewerkz.org

1. http://clojureelasticsearch.info
-- 
MK

http://github.com/michaelklishin
http://twitter.com/michaelklishin

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAE3HoVRztvi6EH741xQsv02RsLByTeQx60SeNBZSG0W-aH44Ng%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Windows Elasticsearch cluster performance tuning

2014-03-23 Thread David Pilato
IMHO 800 shards per node is far too much. And with only 4GB of heap...

I guess you have a lot of GC, or you forgot to disable swap.

My 2 cents.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 On 23 March 2014 at 18:08, Eric Brandes eric.bran...@gmail.com wrote:
 
 Hey all, I have a 3 node Elasticsearch 1.0.1 cluster running on Windows 
 Server 2012 (in Azure).  There's about 20 million documents that take up a 
 total of 40GB (including replicas).  There's about 400 indexes in total, with 
 some having millions of documents and some having just a few.  Each index is 
 set to have 3 shards and 1 replica. The main cluster is running on three 
 4-core machines with 7GB of RAM. The min/max JVM heap size is set to 4GB.
 
 The primary use case for this cluster is faceting/aggregations over the 
 documents.  There's almost no full text searching, so everything is pretty 
 much based on exact values (which are stored but not analyzed at index time)
 
 When doing some term facets on a few of these indexes (the biggest one 
 contains about 8 million documents) I'm seeing really long response times 
 (> 5 sec). There are potentially thousands of distinct values for the term I'm 
 faceting on, but I would have still expected faster performance.
 
 So my goal is to speed up these queries to get the responses sub second if 
 possible.  To that end I had some questions:
 1) Would switching to Linux give me better performance in general?
 2) I could collapse almost all of these 400 indexes in to a single big index 
 and use aliases + filters instead.  Would this be advisable?
 3) Would mucking with the field data cache yield any better results?
 
 
 If I can add any more data to this discussion please let me know!
 Thanks!
 Eric
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/eb5fb6bf-be2c-4d5f-b73a-edc1ef5813f1%40googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/C6157A06-390B-45C0-8425-3723F37D3766%40pilato.fr.
For more options, visit https://groups.google.com/d/optout.


Re: Windows Elasticsearch cluster performance tuning

2014-03-23 Thread David Pilato
Forgot to say that you should use extra-large instances and not large ones.
With large instances, you could suffer from noisy neighbors.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 On 23 March 2014 at 19:54, David Pilato da...@pilato.fr wrote:
 

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0AB6FD4E-0C22-4256-8E29-2F40C06E793E%40pilato.fr.
For more options, visit https://groups.google.com/d/optout.


Re: Windows Elasticsearch cluster performance tuning

2014-03-23 Thread Eric Brandes
Interesting - so in general would you recommend consolidating all 400 
indexes into a single index and using aliases/filters to address them? 
(They're currently broken out by user, and all operations are scoped to a 
specific user.)

If I were to consolidate to a single index, how many shards would be 
recommended?
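
The aliases + filters approach can be sketched as follows (a sketch only; the index name, alias naming scheme, and the `user_id` field are assumptions, and the resulting body would be POSTed to the `_aliases` endpoint):

```python
import json


# Sketch of "one shared index, one filtered alias per user": every search
# that goes through the alias is automatically restricted to that user's
# documents, so per-user indexes become unnecessary.
def user_alias_action(index, user_id):
    return {
        "actions": [
            {
                "add": {
                    "index": index,
                    "alias": "%s-user-%s" % (index, user_id),
                    # Filter applied transparently to queries via the alias.
                    "filter": {"term": {"user_id": user_id}},
                }
            }
        ]
    }


# Body for POST /_aliases
print(json.dumps(user_alias_action("events", "42"), indent=2))
```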

On Sunday, March 23, 2014 2:00:18 PM UTC-5, David Pilato wrote:

 Forgot to say that you should use extra-large instances and not large ones.
 With large instances, you could suffer from noisy neighbors.

 --
 David ;-)
 Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs




-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/04ea8acb-a62f-4232-a483-bdde916c48c2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Windows Elasticsearch cluster performance tuning

2014-03-23 Thread David Pilato
Yes. It's hard to say; you need to test how many documents a single shard can 
hold. It depends on your use case.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 On 23 March 2014 at 20:11, Eric Brandes eric.bran...@gmail.com wrote:
 
 Interesting - so in general would you recommend consolidating all 400 indexes 
 into a single index and using aliases/filters to address them? (They're 
 currently broken out by user, and all operations are scoped to a specific 
 user.)
 
 If I were to consolidate to a single index, how many shards would be 
 recommended?
 

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/B736C9EB-1617-483C-8900-A5F7CDDAD8DA%40pilato.fr.
For more options, visit https://groups.google.com/d/optout.


Need Exact Search on Two Fields

2014-03-23 Thread Randy Jensen
I need to be able to do an exact search against two fields, Program Heading 
and Subheading, each with its own search term.

For example, I might need to search for Health Professions - Medical. So 
the Program Area field would be Health Professions and the Subheading 
field would be Medical. Again, I need exact searches, so Medical 
Marketing wouldn't come back as a Subheading if Medical is searched for, 
or if Medical appears as a subheading under another Program Area.

I've tried every variation of every query I can think of and can't get it 
exactly right. I have the mapping below:

'properties' => array(
  'program_area' => array(
    'type' => 'multi_field',
    'fields' => array(
      'program_area' => array(
        'type' => 'string',
        'index' => 'analyzed'
      ),
      'program_area_raw' => array(
        'type' => 'string',
        'index' => 'not_analyzed',
        'store' => true
      )
    )
  ),
  'subheading' => array(
    'type' => 'multi_field',
    'fields' => array(
      'subheading' => array(
        'type' => 'string',
        'index' => 'analyzed',
        'store' => true
      ),
      'subheading_raw' => array(
        'type' => 'string',
        'index' => 'not_analyzed',
        'store' => true
      )
    )
  ),
)
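
One way to get the exact behaviour described above is to filter on the not_analyzed sub-fields rather than query the analyzed ones. A sketch (untested; the field paths follow the multi_field mapping above, where sub-fields are addressed through the parent field):

```python
import json


# Sketch: exact matching via term filters on the not_analyzed *_raw
# sub-fields. Term filters are not analyzed at search time, so "Medical"
# will not match "Medical Marketing".
def exact_search(program_area, subheading):
    return {
        "query": {
            "filtered": {
                "query": {"match_all": {}},
                "filter": {
                    "bool": {
                        "must": [
                            {"term": {"program_area.program_area_raw": program_area}},
                            {"term": {"subheading.subheading_raw": subheading}},
                        ]
                    }
                },
            }
        }
    }


print(json.dumps(exact_search("Health Professions", "Medical"), indent=2))
```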

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/705e6394-1edc-4236-bbf1-a4f6b22f0eb9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


RE: Kibana RegEx not working as expected?

2014-03-23 Thread Janet Sullivan
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html#regexp-syntax
 states “Regular expression queries are supported by the regexp and the 
query_string queries.”  So why isn’t it working with a query_string?
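
For comparison, the regexp query from that documentation page addresses a field explicitly instead of going through query_string parsing. A sketch (the field name "message" is an assumption for log events):

```python
import json

# Sketch: an explicit regexp query against a named field. Unlike
# query_string, nothing here passes through the Lucene query parser,
# so brackets and braces in the pattern need no escaping.
regexp_query = {
    "query": {
        "regexp": {
            "message": "a{3}"
        }
    }
}

print(json.dumps(regexp_query))
```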

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of David Pilato
Sent: Sunday, March 23, 2014 12:15 AM
To: elasticsearch@googlegroups.com
Subject: Re: Kibana RegEx not working as expected?

I think you did not set the query type to regexp, as the generated query is a 
query_string:
{"query_string":{"query":"a{3}"}}
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


On 22 March 2014 at 22:23, Janet Sullivan jan...@nairial.net wrote:
Here’s what I see in Kibana:

Oops! SearchParseException[[logstash-2014.03.22][5]: from[-1],size[-1]: Parse 
Failure [Failed to parse source 
[{"query":{"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"a{3}"}}]}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1395519710926,"to":"now"}}}]}}}},"highlight":{"fields":{},"fragment_size":2147483647,"pre_tags":["@start-highlight@"],"post_tags":["@end-highlight@"]},"size":500,"sort":[{"@timestamp":{"order":"desc"}},{"@timestamp":{"order":"desc"}}]}]]]

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of Janet Sullivan
Sent: Saturday, March 22, 2014 2:16 PM
To: elasticsearch@googlegroups.com
Subject: Kibana RegEx not working as expected?

I’m running ES 1.0.1 and Kibana 3 milestone 5.   I’m trying to use regex in my 
kibana searches, as described at 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html#regexp-syntax.
  Some of the regex works, but any regex that uses square brackets [a-c] or 
curly braces a{3} fails.  What am I doing wrong?
--
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/70f64164605e4fcb965e1cfda0545991%40BY2PR07MB043.namprd07.prod.outlook.com.
For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/eaffb90d86d2485c85ddfc6705695031%40BY2PR07MB043.namprd07.prod.outlook.com.
For more options, visit https://groups.google.com/d/optout.


Re: Elastic search indexing documents

2014-03-23 Thread Deepikaa Subramaniam
I would like to index documents for full text search. 

On Friday, March 21, 2014 4:22:44 PM UTC-7, Deepikaa Subramaniam wrote:

 Hi guys, 

 I am new to Elasticsearch. I have set up my environment to use C# + NEST to 
 access ES. I am able to index txt files successfully. I downloaded the 
 Elasticsearch mapper plugin to extract data from other document types. 
 However, if I try to search for some keywords from within the doc, the 
 search doesn't return any results. Please help.

 public class Doc
 {
     public string file_id;
     public string created;
     [ElasticProperty(Type = Nest.FieldType.attachment, Store = true,
         TermVector = Nest.termVectorOption.with_positions_offsets)]
     public string content;
 }
 Doc doc = new Doc();
 // content must hold the base64-encoded file bytes for the attachment type
 doc.content = Convert.ToBase64String(File.ReadAllBytes(path));


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/07284ada-b85e-4c1c-8ba1-ef12c71c43cd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Indexing performance with doc values (particularly with larger number of fields)

2014-03-23 Thread Alex at Ikanow
This might be more of a Lucene question, but a quick google didn't throw up 
anything.

Has anyone done/seen any benchmarking on indexing performance (overhead) 
due to using doc values?

I often index quite large JSON objects with many fields (e.g. >50). I'm 
trying to get a feel for whether I can just let all of them be doc values 
on the off chance I'll want to aggregate over them, or whether I need to 
pick beforehand which fields will support aggregation.

(A related question: presumably allowing a mix of doc values fields and 
legacy fields is a bad idea, because if you use doc values fields you 
want a low max heap so that the file cache has lots of memory available, 
whereas if you use the field cache you need a large heap - is that about 
right, or am I missing something?)

Thanks for any insight!

Alex
Ikanow
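
As a reference point, choosing doc values per field is a mapping decision. A sketch (the field name is illustrative, and the exact syntax — here the fielddata-format style of the ES 1.0 era — should be checked against the docs for your version):

```python
import json

# Sketch: a mapping that keeps one field's fielddata on disk as doc values
# instead of in the Java heap, so it can be aggregated on without growing
# the field cache.
mapping = {
    "properties": {
        "status": {
            "type": "string",
            "index": "not_analyzed",
            # fielddata format selects the doc values implementation
            "fielddata": {"format": "doc_values"},
        }
    }
}

print(json.dumps(mapping, indent=2))
```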

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0361eda4-ab39-4536-b91a-ccb710921edd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Indexing performance with doc values (particularly with larger number of fields)

2014-03-23 Thread Robert Muir
Would be a nice benchmark to run (and if you find hotspots/slow things
to go improve in lucene...)!

The data structures for docvalues are less complex than the data
structures for the inverted index.

I've enabled docvalues for many fields as you suggest in the past, and
in my tests the time for e.g. segment merging was still dominated by
the inverted index (terms dict, postings lists, etc), as I had all the
fields indexed for search, too. But nothing is free: some of this
stuff is data-dependent so you have to test.

About the heap, you are right: it's probably best to adjust your heap
accordingly if you are using docvalues.


On Sun, Mar 23, 2014 at 10:01 PM, Alex at Ikanow apigg...@ikanow.com wrote:
 This might be more of a Lucene question, but a quick google didn't throw up
 anything.

 Has anyone done/seen any benchmarking on indexing performance (overhead) due
 to using doc values?

 I often index quite large JSON objects, with many fields (eg 50), I'm trying
 to get a feel for whether I can just let all of them be doc values on the
 off chance I'll want to aggregate over them, or whether I need to pick
 beforehand which fields will support aggregation.

 (A related question: presumably allowing a mix of doc values fields and
 legacy fields is a bad idea, because if you use doc values fields you want
 a low max heap so that the file cache has lots of memory available, whereas
 if you use the field cache you need a large heap - is that about right, or
 am i missing something?)

 Thanks for any insight!

 Alex
 Ikanow

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/0361eda4-ab39-4536-b91a-ccb710921edd%40googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAMUKNZVGxGcM_QrFHEXsaa%3DQcH_Er_h1s4LgBQDE0kU7c%2Bi2JQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Error in Elasticsearch NullPointerException at org.elasticsearch.plugins.PluginsService.loadSitePlugins

2014-03-23 Thread Preeti Jain
Finally got this working. Our Java application that communicates with 
elasticsearch is hosted in JBoss server. While creating the transport 
client we were not passing elasticsearch home dir, plugin dir path 
explicitly. The default path that was picked up had some wrong reference. 
We managed to resolve the issue after passing the correct value to 
path.home.
 
Regards,
Preeti

On Thursday, March 20, 2014 12:54:33 PM UTC+5:30, David Pilato wrote:

 No. I have no idea. 
 I think something is wrong with your installation.

 Could you just 

 download elasticsearch 1.0.1 (zip or tar.gz file)
 unzip it
 Run bin/plugin -install x (your plugin)
 Run bin/elasticsearch

 Only those commands please.

 If it works, then something went wrong when you installed your 
 elasticsearch version.
 Could you describe how you installed elasticsearch?

 --
 David ;-)
 Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


 On 20 March 2014 at 05:09, Preeti Jain itsp...@gmail.com wrote:

 Hi David

 I have only one subdir inside plugins and it has following permissions

 drwxrwxrwx 2 root root 4096 Mar 11 10:06 analysis-phonetic

 Any more thoughts?

 Regards,
 Preeti
 On Wednesday, March 19, 2014 2:44:01 PM UTC+5:30, David Pilato wrote:

 I see. This fix has not been added in 1.0 branch but in 1.x: 
 https://github.com/elasticsearch/elasticsearch/issues/5195

 Could you check subdirs?
 I think you have an issue with your directories.


 -- 
 *David Pilato* | *Technical Advocate* | *Elasticsearch.com http://Elasticsearch.com*
 @dadoonet https://twitter.com/dadoonet | @elasticsearchfr https://twitter.com/elasticsearchfr


 On 19 March 2014 at 10:03:55, Preeti Jain (itsp...@gmail.com) wrote:

 Hi David, 

 We are using version 1.0.1. We have following permission on plugins 
 directory :rwxr-xr-x

 Regards,
 Preeti

 On Wednesday, March 19, 2014 2:23:47 PM UTC+5:30, David Pilato wrote: 

  Which elasticsearch version are you using?
  
  This could happen if you don't have privileges on plugins dir.
  Often, you have installed elasticsearch with a given account but you 
 installed plugin using sudo which ends up creating a plugin dir not 
 readable by elasticsearch user.
  
  This NPE has been fixed in recent version. Wondering if it was not 
 correctly fixed or if you are running an old version or if it's something 
 else.

  -- 
 *David Pilato* | *Technical Advocate* | *Elasticsearch.com http://Elasticsearch.com*
 @dadoonet https://twitter.com/dadoonet | @elasticsearchfr https://twitter.com/elasticsearchfr
  

 On 19 March 2014 at 09:44:58, Preeti Jain (itsp...@gmail.com) wrote:

  Hi , 

 We suddenly started getting this error in Elasticsearch log

  [2014-03-19 09:40:57,419][DEBUG][action.admin.cluster.node.info] 
 [North] failed to execute on node [B5a2wTMvQpGHOpO5oIjnug]
 java.lang.NullPointerException
 at 
 org.elasticsearch.plugins.PluginsService.loadSitePlugins(PluginsService.java:441)
 at org.elasticsearch.plugins.PluginsService.info
 (PluginsService.java:308)
 at org.elasticsearch.node.service.NodeService.info
 (NodeService.java:122)
 at 
 org.elasticsearch.action.admin.cluster.node.info.TransportNodesInfoAction.nodeOperation(TransportNodesInfoAction.java:100)
 at 
 org.elasticsearch.action.admin.cluster.node.info.TransportNodesInfoAction.nodeOperation(TransportNodesInfoAction.java:43)
 at 
 org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
  
 We are unable to perform any action on elasticsearch.

 Any idea what could be the issue?

 Regards,
 Preeti
  --
 You received this message because you are subscribed to the Google 
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/9134e001-f459-463f-82d4-930dc3769893%40googlegroups.com.
 .
 For more options, visit https://groups.google.com/d/optout.
  
   --
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/90440ccf-66b5-4fc8-bbf0-c710d51fcf9f%40googlegroups.com.
 .
 

Re: Elasticsearch configuration for uninterrupted indexing

2014-03-23 Thread Rujuta Deshpande
Hi, 

Thank you for the response. However, in our scenario both nodes are on 
the same machine; our setup doesn't allow a separate machine for each 
node. Also, we're indexing logs using Logstash. Sometimes we have to 
query log data spanning two or three months, and then we get an 
out-of-memory error. This affects the indexing that is going on 
simultaneously, and we lose events. 

I'm not sure what elasticsearch configuration will help achieve this.
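For reference, the writer/reader split discussed in this thread maps to per-node settings in each instance's elasticsearch.yml. A minimal sketch for ES 1.x; the roles and ports follow the thread, and the node names are illustrative:

```yaml
# Instance 1 (elasticsearch.yml) - "writer" node: indexes and stores data
node.name: writer
node.master: true
node.data: true
http.port: 9201

# Instance 2 (elasticsearch.yml) - "reader" node: holds no data, not
# master-eligible; only receives and coordinates search requests
node.name: reader
node.master: false
node.data: false
http.port: 9200
```

Note that a data-less node only coordinates: the Lucene-level work of a query still runs on the node(s) holding the shards, which is why the writer node can still run into heap pressure under heavy queries.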

Thanks,
Rujuta

On Friday, March 21, 2014 10:36:51 PM UTC+5:30, Ivan Brusic wrote:

 One of the main uses of a data-less node is that it acts as a 
 coordinator between the other nodes: it gathers the responses from 
 the other nodes/shards and reduces them into one.

 In your case, the data-less node is gathering all the data from just one 
 node. In other words, it is not doing much, since the reduce phase is 
 basically a pass-through operation. With a two-node cluster, I would say you 
 are better off having both machines act as full nodes.

 Cheers,

 Ivan



 On Fri, Mar 21, 2014 at 5:04 AM, Rujuta Deshpande 
 ruj...@gmail.comjavascript:
  wrote:

 Hi, 

 I am setting up a system consisting of elasticsearch-logstash-kibana for 
 log analysis. I am using one machine (2 GB RAM, 2 CPUs) running logstash, 
 kibana and  two instances of elasticsearch. Two other machines, each 
 running  logstash-forwarder are pumping logs into the ELK system. 

  The reasoning behind using two ES instances was this: I needed one 
  uninterrupted instance to index the incoming logs, and I also needed to 
  query the existing indices. However, I didn't want complex queries to 
  cause loss of events through out-of-memory errors. 

  So, one elasticsearch node had master = true and data = true and did 
  the indexing (called the writer node), and the other node had master = 
  false and data = false (the workhorse, or reader, node).

  I assumed that, under heavy querying, although the data is stored on 
  the writer node, the reader node would handle the queries and all the 
  processing would take place on the reader, so that issues like 
  out-of-memory errors would be avoided and indexing would continue 
  uninterrupted. 

  However, while testing this, I realized that the reader hardly uses any 
  heap memory (checked this in Marvel), and when I fire a complex search 
  query - a search request via the Python API with the 'size' parameter 
  set to 1 - the writer node throws an out-of-memory error, indicating 
  that the processing takes place on the writer node only. Min and max 
  heap size were set to 256m for this test. I also made sure I was sending 
  the search query to the port the reader node was listening on (port 
  9200); the writer node was running on port 9201.  

  Was my previous understanding incorrect - i.e. does having one reader 
  and one writer node not help keep the indexing of documents 
  uninterrupted? If so, what is the use of a separate workhorse or reader 
  node? 

 My eventual aim is to be able to query elasticsearch and fetch large 
 amounts of data at a time without interrupting/slowing down the indexing of 
 documents. 

 Thank you. 

 Rujuta 







Error in bulk indexing - this IndexWriter is closed

2014-03-23 Thread Preeti Jain
Hi,

We are indexing documents into elasticsearch from a Java application.
We are unable to index all the required docs, as indexing fails for some of 
them with the following exception (found in the log):

org.elasticsearch.index.engine.CreateFailedEngineException: [investigations][0] Create failed for [Event#jEbSeiQASEemf6xTHSaUZw]
	at org.elasticsearch.index.engine.internal.InternalEngine.create(InternalEngine.java:397)
	at org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:382)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:401)
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:153)
	at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
	at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
	at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:645)
	at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:659)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1525)
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1199)
	at org.elasticsearch.index.engine.internal.InternalEngine.innerCreate(InternalEngine.java:462)
	at org.elasticsearch.index.engine.internal.InternalEngine.create(InternalEngine.java:384)

What could be the issue? This happens even when the number of documents 
being indexed is as low as 28, and none of them is particularly large.

Regards,
Preeti



Re: Elastic search indexing documents

2014-03-23 Thread David Pilato
What did you try so far?

Read this: http://www.elasticsearch.org/help/

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


On 24 Mar 2014, at 02:47, Deepikaa Subramaniam deeps.subraman...@gmail.com 
wrote:

I would like to index documents for full text search. 

 On Friday, March 21, 2014 4:22:44 PM UTC-7, Deepikaa Subramaniam wrote:
 Hi guys, 
 
 I am new to Elasticsearch. I have set up my environment and use C# + NEST to 
 access ES. I can index txt files successfully. I downloaded the elasticsearch 
 mapper plugin to extract data from other document types. However, if I try to 
 search for keywords from within the doc, the search doesn't return any results. 
 Please help.
 
 public class Doc
 {
     public string file_id;
     public string created;

     // attachment-mapped field; content is sent base64-encoded
     [ElasticProperty(Type = Nest.FieldType.Attachment, Store = true,
         TermVector = Nest.TermVectorOption.WithPositionsOffsets)]
     public string content;
 }

 Doc doc = new Doc();
 doc.content = Convert.ToBase64String(File.ReadAllBytes(path)); // not Doc.content - the field is not static
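For comparison, the attachment mapping that the NEST attribute above is meant to produce corresponds roughly to this sketch (index and type names are assumptions; the mapper-attachments plugin must be installed before the mapping is created):

```json
{
  "doc": {
    "properties": {
      "file_id": { "type": "string" },
      "created": { "type": "string" },
      "content": { "type": "attachment" }
    }
  }
}
```

If the index was created before the plugin was installed, or the attribute mapping was never applied, content gets dynamically mapped as a plain string and only the raw base64 text is indexed, so keyword searches return nothing. Checking the live mapping with GET /{index}/_mapping is a quick way to verify which case applies.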

