Thanks Mark for your kind reply. Would you be a bit more specific, as I am a
newbie? I am sorry if I was not clear enough about what I want to achieve.
As far as I know, Apache-level access control is based on relative static
paths/URLs; it won't know the details of how Kibana works. I would like to
restrict a
Also related https://github.com/elastic/elasticsearch/issues/10447
On 17 April 2015 at 12:37, Charlie Moad wrote:
> This was tracked down to a problem with Ubuntu 14.04 running under Xen (in
> AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a
> rolling "apt-get update; apt
Hi Jörg,
Thanks a lot for your help. May I know the size of one record that
you are publishing to ES? And did you make any changes to the
default configuration?
Thanks
Manjula
On Fri, Apr 17, 2015 at 2:51 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:
> I have th
Thanks Glen!
On Friday, April 17, 2015 at 11:14:36 AM UTC+8, Glen Smith wrote:
>
> Go to the "browser" tab and select the type. That will show the count you
> are looking for.
>
>
>
> On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
>>
>> Just figured out that the doc count is
Go to the "browser" tab and select the type. That will show the count you
are looking for.
On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
>
> Just figured out that the doc count is actually the number of document +
> the number of items of nested field, in my case, the Que
Just figured out that the doc count is actually the number of documents plus
the number of items in the nested field; in my case, the QueryClicks field has
two items, so the total number of docs shown in head is 1+2=3.
But this might confuse people who view doc counts in head or other UI
plug-ins.
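For reference, a minimal sketch of a mapping that reproduces this behaviour (the index and field names follow the example above; the inner properties are an assumption):

```json
{
  "mappings": {
    "doc": {
      "properties": {
        "QueryClicks": {
          "type": "nested",
          "properties": {
            "url": { "type": "string" }
          }
        }
      }
    }
  }
}
```

Each nested object is indexed as its own hidden Lucene document next to its parent, so low-level counters such as the one head displays report parents plus children, while the search API's total hits counts only parents.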
This was tracked down to a problem with Ubuntu 14.04 running under Xen (in
AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a
rolling "apt-get update; apt-get dist-upgrade; reboot" on all nodes. This
appears to have resolved the issue.
For reference: https://bugs.launchpad
BTW: If I just remove the "type":"nested" from the mapping, the doc count
is correct after inserting one document.
Does anyone have suggestions to resolve this issue?
On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
>
> I was confused by the docs count value displaying in head plugin i
Thanks. The field I wanted to map was @timestamp which isn't explicitly in
the template. What would it look like?
Also, once I have made the change to my template, what's the right way to
test it (validate that a new index is using doc values for the
specific field)?
On Thursday, April 16
As per the docs, just add this:
"@version" : {
  "type" : "string",
  "index" : "not_analyzed",
  "doc_values" : true
}
On 17 April 2015 at 09:35, Scott Chapman wrote:
> Thanks, that's what I thought.
>
> So, please help me with my template I gave above.
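In context, a field block like the one above typically sits inside an index template; a hedged sketch (the template pattern and the use of _default_ are assumptions, not taken from the template discussed in this thread):

```json
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@version": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}
```

To validate, create a new index that matches the template and fetch its mapping back with GET <index>/_mapping; the field should echo "doc_values": true.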
Thanks, that's what I thought.
So, please help me with my template I gave above. I am familiar with how to
update it, I am just not real sure how to change it so that a specific
field uses doc values. Or if it is easier to make it the default for all
fields, I suppose that's fine too since it so
You could do this with Apache/nginx ACLs, as KB3 simply loads a path, either
a file from the server's FS or from ES.
If you load it up, you will see it in the URL.
On 16 April 2015 at 21:58, Rubaiyat Islam Sadat <
rubaiyatislam.sa...@gmail.com> wrote:
> Hi all,
>
> As a completely newbie here, I a
-1 means unbound, i.e. unlimited.
On 16 April 2015 at 20:54, Xudong You wrote:
> Anyone knows how to change the max_file_descriptors on windows?
> I built ES cluster on Windows and got following process information:
>
> "max_file_descriptors" : -1,
> "open_file_descriptors" : -1,
>
> What does “-1
I have thousands of concurrent indexing/queries running per second on
non-virtualized servers.
A 4G heap is OK, it is more than enough; I am sure there are other reasons
for the OOM.
Maybe Bigdesk can help for monitoring heap and cache eviction rates.
I think I cannot help any more - no experie
It is possible to write a plugin with IP/subnet as a new field type.
Jörg
On Thu, Apr 16, 2015 at 9:34 PM, Attila Nagy wrote:
> Hi,
>
> I would like to store IP addresses and subnets (one or more per document)
> and I would like to search for them with exact or inclusion (does an IP is
> in any
Hi Doug,
Your suggestion worked perfectly!
Thanks very much.
Andre
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com.
Hi,
I would like to store IP addresses and subnets (one or more per document),
and I would like to search for them with exact or inclusion matching (whether
an IP is in any of the subnets stored in the documents).
For example a document could have the following:
"ip": ["192.168.0.1","192.168.1.0/24","1000::
Thanks!
On Thu, Apr 16, 2015 at 7:11 PM, David Pilato wrote:
> No you need to reindex
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
> On 16 Apr 2015 at 17:52, Antoine Brun wrote:
>
> Hello,
>
> is there any simple way to update a mapping and change the store val
Hi Jörg,
Sorry, my bad. It's a typo; I have used a 4GB heap for all three ES servers.
My first attempt was without limiting the cache size, but it also hit the
out-of-memory error. Am I missing any other configuration? Or is this load
too much for a 4GB heap?
Thanks
Manjula
No you need to reindex
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
> On 16 Apr 2015 at 17:52, Antoine Brun wrote:
>
> Hello,
>
> is there any simple way to update a mapping and change the store value of a
> field?
> I'm trying to enable _timestamp:
> curl -X PUT htt
You need to reindex in a new index.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
> On 16 Apr 2015 at 17:33, Antoine Brun wrote:
>
> Hello,
>
> based on the comments I could create a new index with _timestamp activated
> and it works great.
> Now my problem arrives wh
Did you assign different heap sizes? Please use the same heap size for all
data nodes. Do not limit the cache to 30%; this is very small. Let ES use
the default settings.
Jörg
On Thu, Apr 16, 2015 at 5:43 PM, Manjula Piyumal
wrote:
> Hi all,
>
> I am trying to run load test with ES to identify system r
Hello,
is there any simple way to update a mapping and change the store value of a
field?
I'm trying to enable _timestamp:
curl -X PUT http://localhost:9200/ubilogs-mbr/_mappings/logs -d '{
  "logs" : {
    "_timestamp" : {
      "enabled" : true,
      "store" : true,
      "format": "-MM-d
Hi,
I am using stop words for the first time.
I am trying to configure stop words and want to see the indexing process
omit these stop words.
Can you let me know why the stop words are not getting omitted during
indexing? I still see "AND", "AN", "THE", "The" are still
Hi all,
I am trying to run load test with ES to identify system requirements and
optimum configurations with respect to my load. I have 10 data publishing
tasks and 100 data consuming tasks in my load test.
Data publisher: each publisher publishes data every minute, and it
publishes 1700 re
On Thu, Apr 16, 2015 at 10:54 AM, Mitch Kuchenberg
wrote:
> Hey Nik, you'll have to forgive me if any of my answers don't make sense.
> I've only been familiar with Elasticsearch for about a week.
>
> 1. Here's a template for my documents:
> https://gist.github.com/mkuchen/d71de53a80e078242af9
>
Hello,
based on the comments I could create a new index with _timestamp activated
and it works great.
Now my problem arrives when I want to activate the timestamp on an
existing index.
Since _timestamp is not stored by default, I wanted to set "store" to true,
but I get
{
"error": "Merge
Hi,
I just upgraded from 1.5.0 to 1.5.1
I got a bunch of errors; I think the following shows the issue:
[nested: ElasticsearchException[failed to read [dd][1428754566313]];
nested: ElasticsearchIllegalArgumentException[No version type match [99]];
]]
Any idea how to fix it? Somehow I can stil
Heya,
We are pleased to announce the release of the Elasticsearch Azure cloud plugin,
version 2.6.0.
The Azure Cloud plugin allows using the Azure API for the unicast discovery
mechanism and adding Azure storage repositories.
https://github.com/elastic/elasticsearch-cloud-azure/
Release Notes - e
Hi,
We have an index per category of item we are indexing. Our search then
searches across all of the indexes. I would like to boost results from
some indexes. Reading the docs this seems to be what I want:
http://www.elastic.co/guide/en/elasticsearch/reference/1.x/search-request-index-boo
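The page being referenced describes the indices_boost search-body option; a hedged sketch of how it is used (the index names and query are illustrative):

```json
{
  "indices_boost": {
    "category-books": 2.0,
    "category-music": 1.0
  },
  "query": {
    "match": { "title": "guitar" }
  }
}
```

Posted to _search across both indexes, hits from category-books get their scores multiplied by 2.0, all else being equal.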
A few days ago we started to receive a lot of timeouts across our cluster.
This is causing shard allocation to fail and a perpetual red/yellow state.
Examples:
[2015-04-16 15:04:50,970][DEBUG][action.admin.cluster.node.stats]
[coordinator02] failed to execute on node [1rfWT-mXTZmF_NzR_h1IZw]
org
In case your elasticsearch cluster is internet-accessible: Be aware folks
on the internet are probably trying to exploit it...
Found this in our logging today (This is only our staging environment
fortunately):
Caused by: org.elasticsearch.search.SearchParseException:
[logstash-2015.04.15][0]: q
Hey Nik, you'll have to forgive me if any of my answers don't make sense.
I've only been familiar with Elasticsearch for about a week.
1. Here's a template for my documents:
https://gist.github.com/mkuchen/d71de53a80e078242af9
2. I interact with my search engine through django-haystack
On Thu, Apr 16, 2015 at 10:21 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:
> The time required for update depends on the peculiarities of the update
> operations, the massive scripting overhead, the refresh operation, and the
> segment merge activities that are related.
>
> The number
The time required for update depends on the peculiarities of the update
operations, the massive scripting overhead, the refresh operation, and the
segment merge activities that are related.
The number of fields does not matter.
My application has 5000 fields. I avoid updates at all costs. A new
d
On Thursday, April 16, 2015 at 13:35 CEST,
vikas gopal wrote:
> Thank you for the suggestion , yes I am aware and I am done with ES
> clustering . Now I want the same for LS . Since LS does not have in
> build feature like ES has , so what would be the best way for LS to
> make i highly avai
On Thu, Apr 16, 2015 at 9:40 AM, Mitch Kuchenberg
wrote:
> I'm currently working on implementing ElasticSearch on a Django-based REST
> API. I hope to be able to search through roughly 5 million documents, but
> I've struggled to find an answer to a question I've had from the beginning:
> *how
Doh! Thanks a lot for this :)
On Monday, April 13, 2015 at 7:52:11 PM UTC-4, Jay Modi wrote:
>
> Have you tried transport.publish_port [1]?
>
> [1]
> http://www.elastic.co/guide/en/elasticsearch/reference/1.5/modules-transport.html#_tcp_transport
I'm currently working on implementing ElasticSearch on a Django-based REST
API. I hope to be able to search through roughly 5 million documents, but
I've struggled to find an answer to a question I've had from the beginning:
*how many fields is too many for a single indexed object?*
My setup
Hi Friends,
I was using elasticsearch 0.90.8. Now I have downloaded
elasticsearch.1.5.1.deb, but when I tried to start Elasticsearch I got the
following error:
$ sudo service elasticsearch start
* Starting Elasticsearch Server
...fail!
...fail!
...fail!
...fail!
...fail!
...fail
Hi all,
As a complete newbie here, I am going to ask you a question which you
might find naive (or stupid!). I have a scenario where I would like to
restrict access from specific locations (say, IP addresses) to
'specific' dashboards in Kibana. As far as I know that Apache level
Thank you for the suggestion; yes, I am aware, and I am done with ES
clustering. Now I want the same for LS. Since LS does not have a built-in
feature like ES has, what would be the best way to make LS highly
available in a Windows environment?
On Wednesday, April 1, 2015 at 12:03:24 PM
Does anyone know how to change max_file_descriptors on Windows?
I built an ES cluster on Windows and got the following process information:
"max_file_descriptors" : -1,
"open_file_descriptors" : -1,
What does "-1" mean?
Is it possible to change the max file descriptors on the Windows platform?
On Wednesday, April 15, 2015 at 8:00:12 AM UTC+2, vikas gopal wrote:
>
> thank you for your quick response. I am totally new to this, any document
> or website to understand nginx or any guide to configure nginx as a reverse
> proxy on windows server 2012.
>
>
>
Have a look here http://nginx-wi
Sorry, I missed out "boost_mode": "replace" in my function_score example
above. I want the score to be the exact converted
currency, so I can make use of it in code.
On Thursday, April 16, 2015 at 10:44:35 AM UTC+1, David Dyball wrote:
>
> Hi All,
>
> TL;DR: Doing dynamic currency conversion via
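A hedged sketch of the pattern being described: a function_score query whose script_score computes the converted price, with "boost_mode": "replace" so that value becomes the document's score (the field name and rate parameter are assumptions):

```json
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "script_score": {
        "script": "doc['price_usd'].value * rate",
        "params": { "rate": 0.67 }
      },
      "boost_mode": "replace"
    }
  }
}
```

With "replace", the inner query's relevance score is discarded and each hit's _score is exactly the converted price, which can then be read back in code.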
Hi All,
TL;DR: Doing dynamic currency conversion via function_score works great for
scenarios where I want to sort by prices,
but I want the same functionality in queries that will be sorted by
relevance score when using terms while still retaining
a dynamically calculated field for converted pri
I was confused by the doc count value displayed in the head plugin when
there is a nested-type field defined in the mapping.
For example, I created a new index with the following mapping:
{
  "mappings" : {
    "doc" : {
      "properties" : {
        "QueryClicks" : {
          "type
Hi,
I am new to Linux. I have Python 2.4.3 installed on my machine, and the pip
installation is throwing an error. Could you help me install and configure
Curator?
Thanks
On Thursday, April 16, 2015 at 1:01:30 PM UTC+5:30, Mark Walkom wrote:
>
> Elasticsearch Curator (https://github.com/
Elasticsearch Curator (https://github.com/elasticsearch/curator) is a
better way to manage deletion of indices.
Deleting them off the file system is messy.
On 16 April 2015 at 16:50, Ch Ravikishore
wrote:
> Hi,
>
> I deleted the index directories from */data/Cluster/nodes/0/indices
> But still
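For illustration, the age-based selection that Curator performs over time-series index names can be sketched in Python (the logstash-%Y.%m.%d naming pattern is an assumption):

```python
from datetime import datetime, timedelta

def indices_older_than(names, days, fmt="%Y.%m.%d", prefix="logstash-", today=None):
    """Return the index names whose date suffix is more than `days` days old."""
    today = today or datetime.utcnow()
    cutoff = today - timedelta(days=days)
    old = []
    for name in names:
        if not name.startswith(prefix):
            continue
        try:
            stamp = datetime.strptime(name[len(prefix):], fmt)
        except ValueError:
            continue  # skip names that don't match the date pattern
        if stamp < cutoff:
            old.append(name)
    return old

names = ["logstash-2015.03.01", "logstash-2015.04.15", "kibana-int"]
print(indices_older_than(names, 30, today=datetime(2015, 4, 16)))
# ['logstash-2015.03.01']
```

Curator then issues the corresponding delete requests through the cluster API, which is far safer than removing index directories from the filesystem.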
Based on your cryptic message, I would guess the issue is likely that the jar you are building is incorrect, as its
manifest is invalid. Spark is most likely signed, and thus extra content breaks this.
See
http://www.elastic.co/guide/en/elasticsearch/hadoop/master/troubleshooting.html#help
On 4/