Hi guys,
It's my first post in this group, so feel free to point me to the
guidelines if there are any.
My query is regarding the SearchType QueryAndFetch
http://www.elastic.co/guide/en/elasticsearch/reference/1.5/search-request-search-type.html#query-and-fetch.
As far as I understand this
Based on your cryptic message, my guess is that the jar you are building is invalid because its
manifest is broken. The Spark jars are most likely signed, so the extra content breaks the signature.
See
http://www.elastic.co/guide/en/elasticsearch/hadoop/master/troubleshooting.html#help
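In case it helps, a common workaround is to strip the stale signature files from the uber-jar. A sketch only; the jar and entry names below are placeholders, not the actual Spark assembly:

```shell
# Build a stand-in "uber-jar" containing a leftover signature entry,
# then delete the .SF/.DSA/.RSA files that invalidate the manifest.
mkdir -p META-INF
echo "dummy digest" > META-INF/DUMMY.SF
echo "payload" > payload.txt
zip -q app.jar META-INF/DUMMY.SF payload.txt
# Removing the signature entries avoids the SecurityException on load:
zip -q -d app.jar 'META-INF/*.SF' 'META-INF/*.DSA' 'META-INF/*.RSA' || true
python3 -m zipfile -l app.jar
```

Alternatively, excluding the signed dependency's META-INF signature entries at assembly time (in the build tool's shade/assembly configuration) avoids the problem altogether.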
On
I have been able to solve this.
We were previously using 4 dual-core machines. I have now resorted to using
2 quad-core machines instead. On these machines, I have limited the index and
bulk threadpool sizes to 1 each. This ensures that even under heavy workload,
2 cores are free for the search threadpool.
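The cap described above might look like this as an elasticsearch.yml fragment; a sketch using the ES 1.x flat setting names, written to a local file here rather than the real config path:

```shell
# Hypothetical elasticsearch.yml fragment limiting the fixed index and
# bulk thread pools to one thread each, leaving cores free for search.
cat > elasticsearch.yml <<'EOF'
threadpool.index.size: 1
threadpool.bulk.size: 1
EOF
cat elasticsearch.yml
```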
Hello,
when I add the elasticsearch-hadoop jar,
I get this error:
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
Exception in thread "main" java.lang.SecurityException: Invalid signature
file digest for Manifest main attributes
at
On 4/15/15 3:49 PM, jean.freg...@gmail.com wrote:
I'm using one of the 4th April 2015 build of elasticsearch-hadoop-2.1.0.
Doesn't 'es.resource.write' only work for writing to an index? I'm trying to read
several of them.
Well that's your problem - you are trying to read not to write based on
On Wednesday, April 15, 2015 at 08:00 CEST,
vikas gopal vikas.ha...@gmail.com wrote:
Thank you for your quick response. I am totally new to this; is there any
document or website to understand nginx, or any guide to configure
nginx as a reverse proxy on Windows Server 2012?
Have you had a look at
Hi,
I deleted the index directories from */data/Cluster/nodes/0/indices
but I am still able to access logs from the Kibana dashboard. I don't know
where I went wrong. Please suggest how to delete indices older than 7 days.
index directory name format :
logstash-2015.04.02
Thanks in advance,
Regards,
Ravi
--
Elasticsearch Curator (https://github.com/elasticsearch/curator) is a
better way to manage deletion of indices.
Deleting them off the file system is messy.
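A sketch of both routes; the Curator flags match the 3.x-era CLI and should be checked against your installed version, and the REST call is commented out so this runs without a live cluster:

```shell
# Option 1: Curator (shown as a comment; verify flags with `curator --help`):
#   curator --host localhost delete indices --older-than 7 \
#     --time-unit days --timestring '%Y.%m.%d' --prefix logstash-
# Option 2: delete one day's index over the REST API:
OLD_INDEX="logstash-$(date -d '7 days ago' +%Y.%m.%d)"
echo "$OLD_INDEX"
# curl -XDELETE "localhost:9200/${OLD_INDEX}"
```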
On 16 April 2015 at 16:50, Ch Ravikishore ravikishore.ris...@gmail.com
wrote:
Hi,
I deleted the index directories from
Hi All,
TL;DR: Doing dynamic currency conversion via function_score works great for
scenarios where I want to sort by prices,
but I want the same functionality in queries that will be sorted by
relevance score when using terms, while still retaining
a dynamically calculated field for the converted
Hi,
I am new to Linux. I have Python 2.4.3 installed on my machine, and the pip
installation is throwing an error. Could you help me install and configure
Curator?
Thanks
On Thursday, April 16, 2015 at 1:01:30 PM UTC+5:30, Mark Walkom wrote:
Elasticsearch Curator
I was confused by the doc count value displayed in the head plugin when there
is a nested-type field defined in the mapping.
For example, I created a new index with following mapping:
{
  "mappings": {
    "doc": {
      "properties": {
        "QueryClicks": {
          "type":
Sorry, I missed out boost_mode: replace in my function_score example
above. I want the score to be the exact converted
currency, so I can make use of it in code.
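For the relevance-sorted case, one hedged option (the field name price_usd and the rate are invented, and dynamic scripting must be enabled) is to leave scoring untouched and return the converted value through script_fields:

```shell
# Query sketch: relevance scoring stays intact while converted_price is
# computed per hit. Validated locally; the actual request is commented out.
cat > currency_query.json <<'EOF'
{
  "query": { "match": { "title": "widget" } },
  "script_fields": {
    "converted_price": {
      "script": "doc['price_usd'].value * rate",
      "params": { "rate": 0.92 }
    }
  }
}
EOF
python3 -m json.tool currency_query.json > /dev/null
# curl -XPOST 'localhost:9200/products/_search' -d @currency_query.json
```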
On Thursday, April 16, 2015 at 10:44:35 AM UTC+1, David Dyball wrote:
Hi All,
TL;DR: Doing dynamic currency conversion via
On Wednesday, April 15, 2015 at 8:00:12 AM UTC+2, vikas gopal wrote:
Thank you for your quick response. I am totally new to this; is there any document
or website to understand nginx, or any guide to configure nginx as a reverse
proxy on Windows Server 2012?
Have a look here
Hi Friends,
I was using Elasticsearch 0.90.8. Now I have downloaded
elasticsearch-1.5.1.deb, but when I try to start Elasticsearch I get the
following error:
$ sudo service elasticsearch start
* Starting Elasticsearch Server
...fail!
...fail!
...fail!
...fail!
...fail!
Does anyone know how to change max_file_descriptors on Windows?
I built an ES cluster on Windows and got the following process information:
max_file_descriptors: -1,
open_file_descriptors: -1,
What does -1 mean?
Is it possible to change the max file descriptors on windows platform?
--
You received
Thank you for the suggestion. Yes, I am aware, and I am done with ES
clustering. Now I want the same for LS. Since LS does not have a built-in
clustering feature like ES has, what would be the best way to make LS highly
available in a Windows environment?
On Wednesday, April 1, 2015 at 12:03:24
Hi all,
As a complete newbie here, I am going to ask you a question which you
might find naive (or stupid!). I have a scenario where I would like to
restrict access from specific locations (say, IP addresses) to
*'specific'* dashboards in Kibana. As far as I know, Apache
A few days ago we started to receive a lot of timeouts across our cluster.
This is causing shard allocation to fail and a perpetual red/yellow state.
Examples:
[2015-04-16 15:04:50,970][DEBUG][action.admin.cluster.node.stats]
[coordinator02] failed to execute on node [1rfWT-mXTZmF_NzR_h1IZw]
On Thu, Apr 16, 2015 at 10:21 AM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
The time required for update depends on the peculiarities of the update
operations, the massive scripting overhead, the refresh operation, and the
segment merge activities that are related.
The number of
Hey Nik, you'll have to forgive me if any of my answers don't make sense.
I've only been familiar with Elasticsearch for about a week.
1. Here's a template for my documents:
https://gist.github.com/mkuchen/d71de53a80e078242af9
2. I interact with my search engine through django-haystack
In case your elasticsearch cluster is internet-accessible: Be aware folks
on the internet are probably trying to exploit it...
Found this in our logging today (This is only our staging environment
fortunately):
Caused by: org.elasticsearch.search.SearchParseException:
[logstash-2015.04.15][0]:
Hello,
based on the comments I could create a new index with _timestamp activated,
and it works great.
Now my problem arises when I want to activate the timestamp on an
existing index.
Since _timestamp is not stored by default, I wanted to set store to true,
but I get
{
error:
On Thu, Apr 16, 2015 at 10:54 AM, Mitch Kuchenberg mi...@getambassador.com
wrote:
Hey Nik, you'll have to forgive me if any of my answers don't make sense.
I've only been familiar with Elasticsearch for about a week.
1. Here's a template for my documents:
Hi,
I am using stop words for the first time.
I am trying to configure stop words and want to verify that the indexing
process omits them.
Can you let me know why the stop words are not being omitted during indexing?
I still see AND, AN, and THE in the index.
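One hedged guess at the cause: the built-in stopword list is lowercase, so uppercase tokens like AND or THE survive unless lowercasing runs before the stop filter. A sketch of an analyzer ordered that way (index and analyzer names invented):

```shell
# Settings sketch: lowercase runs before the stop filter, so "THE" becomes
# "the" and is removed. Validated locally; requests are commented out.
cat > stop_settings.json <<'EOF'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "stop"]
        }
      }
    }
  }
}
EOF
python3 -m json.tool stop_settings.json > /dev/null
# curl -XPUT 'localhost:9200/my_index' -d @stop_settings.json
# curl 'localhost:9200/my_index/_analyze?analyzer=my_stop_analyzer' -d 'THE CAT AND THE HAT'
```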
Hi,
We have an index per category of item we are indexing. Our search then
searches across all of the indexes. I would like to boost results from
some indexes. Reading the docs, this seems to be what I want:
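If what the docs pointed at is indices_boost, a sketch of a search body using it (index and field names invented):

```shell
# Query sketch: hits from the "books" index score twice as high as hits
# from "movies". Validated locally; the actual request is commented out.
cat > boost_query.json <<'EOF'
{
  "indices_boost": { "books": 2.0, "movies": 1.0 },
  "query": { "match": { "name": "widget" } }
}
EOF
python3 -m json.tool boost_query.json > /dev/null
# curl -XPOST 'localhost:9200/books,movies/_search' -d @boost_query.json
```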
Heya,
We are pleased to announce the release of the Elasticsearch Azure Cloud plugin,
version 2.6.0.
The Azure Cloud plugin allows you to use the Azure API for the unicast
discovery mechanism and to add Azure storage repositories.
https://github.com/elastic/elasticsearch-cloud-azure/
Release Notes -
Hi,
I just upgraded from 1.5.0 to 1.5.1
I got a bunch of errors; I think the following shows the issue:
[nested: ElasticsearchException[failed to read [dd][1428754566313]];
nested: ElasticsearchIllegalArgumentException[No version type match [99]];
]]
Any idea how to fix it? Somehow I can
Hi all,
I am trying to run a load test with ES to identify system requirements and
optimum configurations for my load. I have 10 data-publishing
tasks and 100 data-consuming tasks in my load test.
Data publisher: each publisher publishes data every minute, and it
publishes 1700
Did you assign different heap sizes? Please use the same heap size for all data
nodes. Do not limit the cache to 30%; this is very small. Let ES use the
default settings.
Jörg
On Thu, Apr 16, 2015 at 5:43 PM, Manjula Piyumal manjulapiyu...@gmail.com
wrote:
Hi all,
I am trying to run load test with
You need to reindex in a new index.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 16 Apr 2015 at 17:33, Antoine Brun a...@ubiqube.com wrote:
Hello,
based on the comments I could create a new index with _timestamp activated
and it works great.
Now my problem
No you need to reindex
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 16 Apr 2015 at 17:52, Antoine Brun a...@ubiqube.com wrote:
Hello,
is there any simple way to update a mapping and change the store value of a
field?
I'm trying to enable _timestamp:
curl
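The command above is cut off, but since _timestamp cannot simply be switched on for an existing mapping, the usual approach is a new index plus a reindex. A sketch (index and type names invented; in 1.x the reindex itself is a scan/scroll copy):

```shell
# Mapping sketch: create the replacement index with _timestamp enabled and
# stored, then copy documents into it. Validated locally; request commented.
cat > timestamp_mapping.json <<'EOF'
{
  "mappings": {
    "doc": {
      "_timestamp": { "enabled": true, "store": true }
    }
  }
}
EOF
python3 -m json.tool timestamp_mapping.json > /dev/null
# curl -XPUT 'localhost:9200/my_index_v2' -d @timestamp_mapping.json
```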
Doh! Thanks a lot for this :)
On Monday, April 13, 2015 at 7:52:11 PM UTC-4, Jay Modi wrote:
Have you tried transport.publish_port [1]?
[1]
http://www.elastic.co/guide/en/elasticsearch/reference/1.5/modules-transport.html#_tcp_transport
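For reference, the setting is a one-liner in elasticsearch.yml; the port value is an example, and it is written to a local file here rather than the real config path:

```shell
# Advertise a fixed transport port to other nodes, e.g. behind NAT or
# container port mapping.
cat > es_transport.yml <<'EOF'
transport.publish_port: 9300
EOF
grep publish_port es_transport.yml
```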
I'm currently working on implementing ElasticSearch on a Django-based REST
API. I hope to be able to search through roughly 5 million documents, but
I've struggled to find an answer to a question I've had from the beginning:
*how many fields is too many for a single indexed object?*
My setup
On Thursday, April 16, 2015 at 13:35 CEST,
vikas gopal vikas.ha...@gmail.com wrote:
Thank you for the suggestion. Yes, I am aware, and I am done with ES
clustering. Now I want the same for LS. Since LS does not have a built-in
clustering feature like ES has, what would be the best way for LS to
The time required for update depends on the peculiarities of the update
operations, the massive scripting overhead, the refresh operation, and the
segment merge activities that are related.
The number of fields does not matter.
My application has 5000 fields. I avoid updates at all costs. A new
On Thu, Apr 16, 2015 at 9:40 AM, Mitch Kuchenberg mi...@getambassador.com
wrote:
I'm currently working on implementing ElasticSearch on a Django-based REST
API. I hope to be able to search through roughly 5 million documents, but
I've struggled to find an answer to a question I've had from
Hi Jörg,
Sorry, my bad. It's a typo; I have used a 4GB heap for all three ES servers.
I tried without limiting the cache size as the first attempt, but it also got
the out-of-memory error. Am I missing any other configuration? Or is this
load too much for a 4GB heap?
Thanks
Manjula
Thanks!
On Thu, Apr 16, 2015 at 7:11 PM, David Pilato da...@pilato.fr wrote:
No you need to reindex
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 16 Apr 2015 at 17:52, Antoine Brun a...@ubiqube.com wrote:
Hello,
is there any simple way to update a mapping and
Hi Doug,
Your suggestion worked perfectly!
Thanks very much.
Andre
Hi,
I would like to store IP addresses and subnets (one or more per document),
and I would like to search for them exactly or by inclusion (is an IP
contained in any of the subnets stored in the documents?).
For example, a document could have the following:
ip: [192.168.0.1,192.168.1.0/24,1000::/64]
It is possible to write a plugin with IP/subnet as a new field type.
Jörg
On Thu, Apr 16, 2015 at 9:34 PM, Attila Nagy nagy.att...@gmail.com wrote:
Hi,
I would like to store IP addresses and subnets (one or more per document)
and I would like to search for them with exact or inclusion (does
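As a partial workaround for the exact/range direction only (asking whether a stored IP falls in a known subnet; matching against CIDR values stored in documents still needs the plugin Jörg mentions), the built-in ip field type supports range queries. A sketch with invented names:

```shell
# Query sketch: find documents whose ip field falls inside 192.168.1.0/24,
# expressed as an explicit range. Validated locally; request commented out.
cat > ip_query.json <<'EOF'
{
  "query": {
    "range": {
      "ip": { "gte": "192.168.1.0", "lte": "192.168.1.255" }
    }
  }
}
EOF
python3 -m json.tool ip_query.json > /dev/null
# curl -XPOST 'localhost:9200/hosts/_search' -d @ip_query.json
```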
You could do this with Apache/nginx ACLs, as Kibana 3 simply loads a path, either
a file from the server's FS or from ES.
If you load it up you will see it in the URL.
On 16 April 2015 at 21:58, Rubaiyat Islam Sadat
rubaiyatislam.sa...@gmail.com wrote:
Hi all,
As a complete newbie here, I am
Thanks, that's what I thought.
So, please help me with the template I gave above. I am familiar with how to
update it; I am just not really sure how to change it so that a specific
field uses doc values. Or if it is easier to make it the default for all
fields, I suppose that's fine too, since it
Thanks Mark for your kind reply. Would you be a bit more specific, as I am a
newbie? I am sorry if I have not been clear enough about what I want to
achieve. As far as I know, Apache-level access is based on a relative static
path/URL; it won't know the details of how Kibana works. I would like to restrict
Thanks Glen!
On Friday, April 17, 2015 at 11:14:36 AM UTC+8, Glen Smith wrote:
Go to the browser tab and select the type. That will show the count you
are looking for.
On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
Just figured out that the doc count is actually the
-1 means unbounded, i.e. unlimited.
On 16 April 2015 at 20:54, Xudong You xudong@gmail.com wrote:
Does anyone know how to change max_file_descriptors on Windows?
I built an ES cluster on Windows and got the following process information:
max_file_descriptors: -1,
open_file_descriptors: -1,
Just figured out that the doc count is actually the number of documents plus
the number of items in the nested field; in my case, the QueryClicks field has
two items, so the total number of docs shown in head is 1+2=3.
But this might confuse people who view doc counts in head or other UI
plug-ins.
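For reference, a sketch of the mapping shape involved (field names follow the thread): each nested object is indexed as a separate hidden Lucene document, which is what head's total is counting.

```shell
# Mapping sketch: a parent document with two QueryClicks items produces
# three Lucene documents (1 parent + 2 nested). Validated locally.
cat > nested_mapping.json <<'EOF'
{
  "mappings": {
    "doc": {
      "properties": {
        "QueryClicks": {
          "type": "nested",
          "properties": { "query": { "type": "string" } }
        }
      }
    }
  }
}
EOF
python3 -m json.tool nested_mapping.json > /dev/null
```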
Go to the browser tab and select the type. That will show the count you
are looking for.
On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
Just figured out that the doc count is actually the number of documents plus
the number of items in the nested field; in my case, the
This was tracked down to a problem with Ubuntu 14.04 running under Xen (in
AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a
rolling apt-get update; apt-get dist-upgrade; reboot on all nodes. This
appears to have resolved the issue.
For reference:
BTW: if I just remove the type: nested from the mapping, the doc count
is correct after inserting one document.
Does anyone have suggestions to resolve this issue?
On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
I was confused by the doc count value displayed in the head plugin when
Also related https://github.com/elastic/elasticsearch/issues/10447
On 17 April 2015 at 12:37, Charlie Moad charlie.m...@geofeedia.com wrote:
This was tracked down to a problem with Ubuntu 14.04 running under Xen (in
AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a
As per the docs, just add this:
"@version": {
  "index": "not_analyzed",
  "type": "string",
  "doc_values": true
}
On 17 April 2015 at 09:35, Scott Chapman scottedchap...@gmail.com wrote:
Thanks, that's what I thought.
So, please help me with my template I
Hi Jörg,
Thanks a lot for your help. May I know the size of one record that
you are publishing to ES? And did you make any changes to the
default configuration?
Thanks
Manjula
On Fri, Apr 17, 2015 at 2:51 AM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
I have