Yes Mark, I am running the same ES version and Java on both nodes. I have
iptables rules on my nodes; could this be a problem? Do we need to allow any
ports to resolve this issue? I already allowed the node communication ports
before..
Java version 1.7.0_75
Java(TM) SE Runtime Environment (build
How do you send your mapping?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 4 May 2015 at 08:32, Dharshana Ratnayake darthsh...@gmail.com wrote:
Hi Guys
I'm trying to save documents in Elasticsearch where I want to map one field
as a geo_point with lat and lon.
But
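For reference, a minimal geo_point mapping in 1.x looks like the sketch below (the type and field names are just illustrative):

```json
{
  "mappings": {
    "mytype": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}
```

Documents can then supply the field as an object, e.g. "location": { "lat": 41.12, "lon": -71.34 }.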
Yes, I am using the transport client, and I can add all the servers'
addresses when instantiating it. But how will the ES servers know about each
other? And are you telling me that I don't need to use Nginx as a load
balancer, that the ES transport client will automatically do it for me?
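On the second point, a sketch with the 1.x Java API: the transport client has a "sniff" option that discovers the rest of the cluster from whichever nodes you list, and the server nodes find each other through their own discovery (zen, over port 9300) regardless of the client. The cluster name and host below are placeholders:

```java
// Sketch only: assumes the ES 1.x client jars on the classpath and a reachable cluster.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "my-cluster")   // must match the server-side cluster name
        .put("client.transport.sniff", true) // discover the remaining nodes automatically
        .build();
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("es-host-1", 9300));
```

With sniffing on, the client round-robins requests across the nodes it has discovered, so a separate Nginx load balancer isn't needed for that.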
On Monday, May 4,
On Friday, May 01, 2015 at 21:04 CEST,
Sitka sitkaw...@gmail.com wrote:
I have a file of logging records I am using to debug some filter
parsing. I am using the file input and have set start_position to
beginning. So I start up Logstash, see what I get, kill it,
make fixes, and go
I don't think I am.. :-(
guess I'm missing
createIndexRequestBuilder.addMapping(documentType, mappingBuilder);
// MAPPING DONE
createIndexRequestBuilder.execute().actionGet();
I just ran
CreateIndexRequest request = new CreateIndexRequest(indexName);
CreateIndexResponse
Read these links; they should answer your question.
http://www.elastic.co/guide/en/elasticsearch/client/java-api/1.x/client.html#transport-client
http://www.elastic.co/guide/en/elasticsearch/guide/current/_transport_client_versus_node_client.html
jason
On Mon, May 4, 2015 at 2:45 PM, Shohedul
You need to open 9300 between nodes to allow them to communicate, that's
likely the cause of this.
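For example, with iptables that could look like this (the source address is an assumption for your network):

```shell
# Allow inter-node transport traffic on 9300 from the other node
iptables -A INPUT -p tcp -s 10.0.0.2 --dport 9300 -j ACCEPT
```

Port 9200 is only the HTTP interface for clients; node-to-node transport uses 9300 by default.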
On 4 May 2015 at 16:06, phani.nadimi...@goktree.com wrote:
Yes Mark, I am running the same ES version and Java on both nodes. I have
iptables rules on my nodes; could this be a problem? Do we need to
We have about 2.2B documents in our Elasticsearch, and we use facets and
function score queries on that data.
This loads a lot of data into fielddata, and replicas double it.
So my question is: how do I set up replicas so they won't use fielddata
except when the primary shard is down?
I'm still researching this, but I have too little experience with it to draw
a conclusion with certainty. Do any of you Elasticsearch experts know
whether ES is the right tool for the job?
On Saturday, May 2, 2015 at 5:54:11 PM UTC+2, be...@media2b.net wrote:
I have a large collection of
Replicas don't double the amount of fielddata loaded. A query will only
load what it needs from the shards it needs; it won't load both primary and
replica, just one or the other.
Ideally you should 1) move to aggregations, since facets are deprecated and
not as performant, and 2) switch to doc_values.
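For reference, doc_values are enabled per field in the mapping, which means reindexing; a sketch (the field name is taken from the stats output earlier in the thread, the type is an assumption):

```json
{
  "properties": {
    "commentsCount": {
      "type": "long",
      "doc_values": true
    }
  }
}
```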
Kibana dashboards are saved in kibana-int for Kibana 3 and .kibana for Kibana 4.
It looks like you are running out of heap, i.e. an OOM. How much memory have
you assigned to ES, and can you increase that?
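For reference, on 1.x the heap is usually set via the ES_HEAP_SIZE environment variable (the 4g value is only an example; common guidance is up to half of RAM, and below ~32 GB):

```shell
# e.g. in /etc/default/elasticsearch or /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=4g
```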
On 4 May 2015 at 18:12, xiro86 amir.gli...@gmx.at wrote:
Hi,
I've set up the ELK stack for monitoring
Thanks for your reply, Mark.
active: {
  primaries: {
    fielddata: {
      memory_size_in_bytes: 77076457764,
      evictions: 0,
      fields: {
        commentsCount: {
          memory_size_in_bytes: 2416090508
I am trying to build Elasticsearch Angular app. I am using require.js.
main.js
require.config({
  paths : {
    'angular' : '../bower_components/angular/angular',
    'elasticsearch' : '../node_modules/elasticsearch/src/elasticsearch.angular',
    'app' : 'app',
    'coreModule' : 'coreModule',
    'esClient' :
Hi
I'm learning how to keep my ES clean and I found this article today:
http://www.elastic.co/guide/en/elasticsearch/client/curator/current/examples.html
The first example is "Delete all indices older than...". I've installed the
curator tool to execute the command:
*# curator delete indices
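If I'm reading the curator 3.x docs right, the full command looks something like this (the 30-day cutoff and the timestring are assumptions for your index naming):

```shell
curator --host localhost delete indices --older-than 30 --time-unit days --timestring '%Y.%m.%d'
```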
How do I define an analyzer for an index, not just a field?
It seems that by default it is not using the default analyzer I defined in
elasticsearch.yml:
index.analysis.analyzer.default.type: ik
If I request http://localhost:9200/useridx/_search?q=Lin, it doesn't use the
default analyzer.
And
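For what it's worth, the same default can also be set per index at creation time instead of globally in elasticsearch.yml; a sketch using the ik analyzer from your config:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": { "type": "ik" }
      }
    }
  }
}
```

Note that this only affects indices created after the setting is in place; existing indices keep the analyzers they were created with.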
At search time, the sequence is slightly different
(http://www.elastic.co/guide/en/elasticsearch/guide/master/_controlling_analysis.html#id-1.5.4.12.18.5.1,
http://www.elastic.co/guide/en/elasticsearch/guide/master/_controlling_analysis.html#id-1.5.4.12.18.5.2):
- The analyzer defined in
ES version 1.5.2
Arch Linux on Amazon EC2
Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is
continuously increasing (225 MB per day).
Total no of documents is around 800k, 500 MB.
cat /proc/meminfo has
Slab: 3424728 kB
SReclaimable: 3407256 kB
curl -XGET
Hi All,
as we are very new to the ELK stack, I'm not sure if this is the right
place to ask, but perhaps you can help with the following problem.
We saved some visualizations with quotation marks in the name, for example:
'Count-exciis_page:-%22-slash-Microsoft-Server-ActiveSync%2Fdefault.eas%22'
I'd try disabling your firewall and eliminating it as a problem, checking
if the nodes can reach each other, and then going from there.
On 4 May 2015 at 20:23, phani.nadimi...@goktree.com wrote:
Mark, I opened that port already, but I am still getting that warning...
[2015-05-04
I put content into Elasticsearch without defining the index settings. It seems
that by default Elasticsearch isn't using the default analyzer I defined in
elasticsearch.yml:
index.analysis.analyzer.default.type: ik
If I request: http://localhost:9200/useridx/_search?q=Lin
it doesn't use the
Hi Everyone,
I went through the group and came across various replies by David Pilato,
Kimchy and other people which stated to use EBS volumes or local disks for
storing ES indices, and S3 for periodic snapshots. I wanted to know why
using S3 would be a bad choice for creating and
Thanks Brian for your reply. I have started using the jars that you
suggested here and they are easy to work with. Thank you.
Swati
On Thursday, April 30, 2015 at 5:47:43 PM UTC-4, Brian wrote:
Swati,
Well, I tend not to use the built-in Jackson parser anymore. The only
advantage I've
I have a small set of data, ~120 GB collected over 7 days, and a Kibana
dashboard with 4 widgets. None of the index fields are analyzed (I've
turned that off because splitting strings into terms is not needed for this
data). When I open the dashboard it takes Elasticsearch *minutes* to
process
Hi David,
On Fri, May 1, 2015 at 8:36 PM, David Reagan jer...@gmail.com wrote:
Why do you find IRC unfriendly? Have you tried using a web based client
like irccloud.com?
I use webchat.freenode.net.
There's a big difference between "Here's our live chat app" and "Learn
how to connect to IRC."
Worth thinking through, though I doubt we'll action this any time soon.
We're doing our best to maintain good coverage in our IRC channels and
folks would still prefer Elastic employees were around more often. Opening
up yet another chat mechanism seems like a way to make sure even more
Hi,
My question is related to boosting fields in Multi Match Query.
I have 3 fields: first name, second name, and description. I want the score
to be low when it matches the description field.
{
  function_score : {
    query : {
      bool : {
        must : [ {
          multi_match : {
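For completeness, per-field boosting in multi_match uses the field^boost syntax; a sketch with the three fields from the question (the boost values and query string are illustrative):

```json
{
  "multi_match": {
    "query": "john",
    "fields": ["first_name^3", "second_name^3", "description"]
  }
}
```

Boosting the two name fields up has the same ranking effect as scoring the description field low.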
Hello everyone,
We took in feedback on moving to a Discourse based forum for about a month,
and it sounds like most of the folks who thought it might not be optimal
were people who preferred to interact with mailing lists instead of forums.
We're pretty confident the email functionality of
On Mon, May 4, 2015 at 12:12 PM, leslie.hawthorn leslie.hawth...@elastic.co
wrote:
Hello everyone,
We took in feedback on moving to a Discourse based forum for about a
month, and it sounds like most of the folks who thought it might not be
optimal were people who preferred to interact with
On Mon, May 4, 2015 at 6:21 PM, Nikolas Everett nik9...@gmail.com wrote:
On Mon, May 4, 2015 at 12:12 PM, leslie.hawthorn
leslie.hawth...@elastic.co wrote:
Hello everyone,
We took in feedback on moving to a Discourse based forum for about a
month, and it sounds like most of the folks
The site is read-only. No signups possible. Hmm...
Good luck!
--Jürgen
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
On Mon, May 4, 2015 at 6:38 PM, Jürgen Wagner (DVT)
juergen.wag...@devoteam.com wrote:
The site is read-only. No signups possible. Hmm...
Known issue - we're addressing. Whee!
We'll update this thread when the issue is resolved.
Cheers,
LH
--
Leslie Hawthorn
Director of Developer
I suspect it's read-only while they sort out resourcing issues. The cache hit
rate is likely quite high while read-only.
On May 4, 2015 12:38 PM, Jürgen Wagner (DVT) juergen.wag...@devoteam.com
wrote:
The site is read-only. No signups possible. Hmm...
Good luck!
--Jürgen
On Mon, May 4, 2015 at 6:41 PM, Nikolas Everett nik9...@gmail.com wrote:
I suspect it's read-only while they sort out resourcing issues. The cache hit
rate is likely quite high while read-only.
On May 4, 2015 12:38 PM, Jürgen Wagner (DVT) juergen.wag...@devoteam.com
wrote:
The site is read-only.
Hi,
I would like to use pageviews to rank our documents. Since pageview data can
update daily, I thought it might make sense to put it into a child. Is it
even possible to boost a parent by using a value of a child? And what happens
if a child does not exist?
How did you solve this?
Any
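On the first question: in 1.x a parent can be scored by its children with a has_child query, if I remember the syntax right (score_type was renamed to score_mode in 2.0); the type and field names below are assumptions:

```json
{
  "has_child": {
    "type": "pageviews",
    "score_type": "max",
    "query": {
      "function_score": {
        "field_value_factor": { "field": "views" }
      }
    }
  }
}
```

A parent with no matching child simply gets no score contribution from this clause, so it's usually combined with the main query in a bool "should".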
It does not work. I cannot post messages with links.
After I try to post a new topic such as
- snip
To all of you who want a sneak peek at the features planned for ES 2.0, this
issue collects some of them:
https://github.com/elastic/elasticsearch/issues/9970
Best,
Jörg
snip
I
Hello,
I'm trying to add taxonomies to my index. I add them as shown below.
However, when I do an aggregation as shown below, the output breaks up "best
price" into two separate buckets.
I want "best price" to be only one bucket. What I mean is I don't want my
taxonomies being broken
You might have to extract the doc from the .kibana index, edit it and then
send it back to get immediate access.
However, it'd also be worth raising this as a GitHub issue; perhaps something
is not being properly escaped under the hood, which causes the behaviour.
On 4 May 2015 at
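A sketch of that round trip with curl (the document type and id are placeholders):

```shell
# Pull the saved object out of the .kibana index
curl -XGET 'localhost:9200/.kibana/visualization/MY-VIS-ID?pretty' > vis.json
# Edit the _source by hand, then put just the _source back
curl -XPUT 'localhost:9200/.kibana/visualization/MY-VIS-ID' -d @vis-source.json
```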
Hello, just out of curiosity: what is the cluster index size, and what is
the rationale for having so many shards?
jason
On Tue, May 5, 2015 at 5:05 AM, tomas.n...@despegar.com wrote:
Hi all,
We've been facing some trouble with our Elasticsearch installation (v1.5.1),
mostly trying
to bring
I'm wondering how / if there is support for automatic compression of
elasticsearch.log via logging.yml.
It looks like there might be, via issue #7927
https://github.com/elastic/elasticsearch/issues/7927 and PR #8464
https://github.com/elastic/elasticsearch/pull/8464 ...but maybe limited
There are examples here in Elasticsearch 1.5:
https://github.com/elastic/elasticsearch/blob/v1.5.0/config/logging.yml#L42-L51
jason
On Tue, May 5, 2015 at 8:13 AM, gleeco gle...@gmail.com wrote:
I'm wondering how / if there is support for automatic compression of
elasticsearch.log via
The rationale for queueing is to allow for instances where temporary load on
the cluster might otherwise cause a request to be rejected.
There is no way to prioritise tasks over other tasks.
Though it looks like your problem is that you are overloading your nodes.
32,192 primary shards is a massive number for only 12
This is not a dynamic setting; it needs to be defined in your config file,
and ES needs to be restarted to read it.
On 5 May 2015 at 08:03, Satnam Singh satnam6...@gmail.com wrote:
I would like to dynamically set discovery.zen.ping.multicast.enabled and
discovery.zen.ping.unicast.hosts -- is
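Those settings belong in elasticsearch.yml (the hosts below are placeholders) and are picked up on restart:

```yaml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```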
It might be possible, but it's not really recommended, as I'd imagine it'd be
pretty slow, which would impact your ES performance.
On 4 May 2015 at 23:56, Lavesh Gupta lavesh.gu...@druva.com wrote:
Hi Everyone,
I went through the group and came across various replies by David Pilato,
Kimchy
Hi all,
We've been facing some trouble with our Elasticsearch installation (v1.5.1),
mostly trying to bring it back up. Some questions have come up.
This is our situation: we're seeing about 200 unassigned shards, which are
not being reassigned automatically, which in turn leads our ES to