Yes, they are all the same machines, on which only ES with the same configuration is running.
2014-07-02 14:55 GMT+09:00 David Pilato da...@pilato.fr:
Are you using same physical machine for all your VMs?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 2 juil. 2014 à 07:09, Seungjin
Sorry. I meant: on how many physical bare-metal machines are your 5 VMs running?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 2 juil. 2014 à 07:59, Seungjin Lee sweetest0...@gmail.com a écrit :
Yes, they are all the same machines, on which only ES with the same configuration is running.
Hi, Clinton --
May I suggest:
- Some users (e.g., me) who read this list via an email subscription regard
ANY spam on the list as an unacceptable state of affairs. This is not a
problem with Apache lists, for example, so I would point the finger of
blame at Google Groups.
- Having N
I add the mappings and insert a record with 2 locations ([13, 13], [52,
52]),
and I want to search for results whose locations are all inside the polygon,
not just one of them. Would you please tell me how to write
that search?
curl -XPOST localhost:9200/test5 -d '{
Hi,
I do agree with Paul, 200%.
I've received at least 49 spam messages in my mailbox just for 06/30. I won't call
that a few spam emails. I've been subscribed to many mailing lists for years, and
I'm pretty sure that it would take years to get as much spam on those lists as
I get in 1 day on the ES mailing
oops there is an it that doesn't belong
On 07/02/2014 09:24 AM, surfer wrote:
That definitely helped. Thank you Vineeth
Regards
giovanni
On 07/01/2014 07:19 PM, vineeth mohan wrote:
Hello Giovanni ,
I feel this will help
-
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_literal_multi_match_literal_query_2.html#_wildcards_in_field_names
Hello Vineeth,
the items that are indexed in Elasticsearch really do contain a field named
response.user:
_source: {
  clientip: aaa.bbb..ddd,
  request: http://.aa/b/c,
  request.accept-encoding: gzip, deflate,
  request.accept-language: de-ch,
  response.content-type:
Peter, thanks so much for raising this. This looks awful! I think we
should move this into an issue on [1] (please feel free to create one). IMO
we should name the issue in a way that prevents this from happening
altogether. Along those lines, we should help you recover, but I don't know
Hi all,
I'm testing the indexing of 100 million documents; it took about 400GB of
hard drive space.
Is there a minimum amount of free hard drive space needed for the index to work OK?
I'm asking because after we indexed the 100 million documents we tested the
index and it worked OK,
but then when trying to
Hello,
I am trying to index a MySQL datetime like this: 2013-05-01 00:00:00
In ES it's represented like this: 2013-05-01T00:00:00.000Z
The real problem seems to be when I index this date: 0000-00-00 00:00:00
I have used this mapping:
type: date,
format: yyyy-MM-dd
It will work until it's full, but then ES will fall over.
Merging does require a certain amount of disk space, usually the same
amount as the segment being merged, as it has to take a copy of the
segment to work on. So for a 10GB segment, you'd need at least 10GB free.
How many shards do you
Here's the problem.
I have data with a date field that can be in either the English or the German
date format (or rather, week and month naming convention),
e.g. Mittwoch, 18. Juni 2012 or Wednesday, 18. June 2012.
I can set up separate mappings with separate fields for each nation's dates.
{
website :
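A mapping along those lines might look like the following sketch (the index, type, and field names date_en / date_de are hypothetical; note that parsing German day and month names such as "Mittwoch" or "Juni" usually also needs locale-aware handling, which older ES versions don't offer per field, so the German values may need normalizing before indexing):

```
curl -XPUT 'localhost:9200/website/_mapping/page' -d '{
  "page": {
    "properties": {
      "date_en": { "type": "date", "format": "EEEE, dd. MMMM yyyy" },
      "date_de": { "type": "date", "format": "EEEE, dd. MMMM yyyy" }
    }
  }
}'
```

Both fields share the same Joda pattern; the difference is which language's day/month names the incoming values use.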
Hey,
I have a question related to write consistency.
I have a elasticsearch cluster with 2 nodes. The nodes are configured as
number_of_shards = 5
number_of_replicas = 1
If i set the action.write_consistency value as `quorum`, what is the number
of active shards required to satisfy the
As it is, when I index the date 0000-00-00 00:00:00, indexing stops completely
with an error (it begins to work and stops instantly).
I have tried to map my date with type string, but it doesn't
work.
Do you have an idea how to solve my problem?
--
You received this message because you are
Hi Tanguy,
How is this a valid date string? java.io.IOException:
java.sql.SQLException: Value '7918-00-00 00:00:00'.
This value can't be mapped to any date format, nor is it valid in any way.
Thanks
Vineeth
On Wed, Jul 2, 2014 at 3:21 PM, Tanguy Bernard bernardtanguy1...@gmail.com
What you can do is set the mapping for the date field to have:
{ "type": "date", "format": "yyyy-MM-dd HH:mm:ss", "ignore_malformed": true }
then it will just ignore those invalid dates rather than throwing an error
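Spelled out as a full request, that suggestion might look like this (the index and type names here are hypothetical, matching the thread's date_source field):

```
curl -XPUT 'localhost:9200/myindex/_mapping/source' -d '{
  "source": {
    "properties": {
      "date_source": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss",
        "ignore_malformed": true
      }
    }
  }
}'
```

Documents with an unparseable date_source are then indexed with that field skipped instead of being rejected.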
In my mysql table (type : datetime) :
| date_source         |
+---------------------+
| 2008-09-15 18:29:07 |
| 2013-08-29 00:00:00 |
| 2013-07-04 00:00:00 |
| 2013-07-17 00:00:00 |
| 2013-07-17 00:00:00 |
| 0000-00-00 00:00:00 |
...
If I use a mapping (type :string)
And I index :
PUT
Hi,
I am new to Elasticsearch. I am using the Java API to establish a connection
with ES.
public void createIndex(final String index) {
getClient().admin().indices().prepareCreate(index).execute().actionGet();
}
public void createLocalCluster(final String
I've received at least 49 spam messages in my mailbox just for 06/30. I won't
call that a few spam emails. I've been subscribed to many mailing lists
for years, and I'm pretty sure that it would take years to get as much spam on
those lists as I get in 1 day on the ES mailing list.
That's
What is this date supposed to represent?
month = 0 or day = 0 does not exist, right?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
Le 2 juillet 2014 à 12:29:29, Tanguy Bernard (bernardtanguy1...@gmail.com) a
écrit:
In my mysql table (type : datetime)
In your case, quorum means that you need all primaries to be allocated,
which is the case here.
The docs explain that very well:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html#index-consistency
Have a look in detail at:
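For what it's worth, the arithmetic behind that answer can be sketched as a tiny function. This is an assumption drawn from the 1.x-era docs and behavior (the quorum check is only applied when a shard has more than 2 copies; with number_of_replicas = 1, only the primary must be active), not a quote of the implementation:

```shell
# Sketch: how many copies of a shard must be active for a write with
# consistency level "quorum". ASSUMPTION: mirrors 1.x-era behavior,
# where quorum is skipped for shards with 2 or fewer copies.
required_copies() {
  replicas=$1                   # the number_of_replicas setting
  copies=$(( replicas + 1 ))    # primary + replicas
  if [ "$copies" -gt 2 ]; then
    echo $(( copies / 2 + 1 ))  # strict majority of all copies
  else
    echo 1                      # just the primary
  fi
}

required_copies 1   # 1 replica: the 2-node case in this thread
required_copies 2   # 2 replicas: a majority of 3 copies
```

So with number_of_replicas = 1 the write proceeds as long as the primary is allocated, which is why a 2-node cluster with one node down can still index.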
Hello
Everything is in the subject.
I have to use fuzzy matching for my fields (title, content), and when I'm
searching I want to see the part of the sentence where my keyword is.
This, together, doesn't work:
$params['body']['highlight']['fields'][$value]['fragment_size']=30;
This date is set when a document is created, but an error occurs and I
get this 0000-00-00 ^^
I'm in a company that has existed for 10 years; the database is old and it has
this kind of error.
For the moment, I will use:
sql: select id_source as _id, title_source, date_source from source,
I would recommend updating the SQL database! :)
So maybe update all dates where the date is 0000-00-00 to 1970-01-01, if that
fits your use case.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
Le 2 juillet 2014 à 12:54:36, Tanguy Bernard
Hi all,
I just open sourced a set of AngularJS Directives for Elasticsearch. It
enables developers to rapidly build a frontend (e.g.: faceted search
engine) on top of Elasticsearch.
http://www.elasticui.com (or github https://github.com/YousefED/ElasticUI)
It makes creating an aggregation and
Ah. Cheers.
I had looked at that page a few times but missed that.
On Tuesday, 1 July 2014 19:04:56 UTC+1, Glen Smith wrote:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-analyzers.html
On Tuesday, July 1, 2014 6:23:54 AM UTC-4, mooky wrote:
Thanks.
So
Yes, it's just some date. I think it can be updated quickly. That's the
better way :)
Thank you all.
Le mercredi 2 juillet 2014 12:56:59 UTC+2, David Pilato a écrit :
I would recommend updating the SQL database! :)
So maybe update all dates where the date is 0000-00-00 to 1970-01-01 if it
Very cool, I'll pass this onto some of our devs :)
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 2 July 2014 20:56, Yousef El-Dardiry yousefdard...@gmail.com wrote:
Hi all,
I just open sourced a set of
Hi group, I have a special problem which I'm trying to solve.
I need search suggestions while typing text into a search box.
I tried different settings and options with ES, including term suggester,
completion suggester and so on, but no success.
What I'm looking for is if I type in a search and
Having used Elasticsearch aggregations for a little while (and having used Mongo
aggregations previously), I have been finding a couple of things a bit
difficult/awkward.
I am not sure if it's because I don't know how to do it properly, or whether
we're missing a feature/enhancement in Elasticsearch.
A common thing I
One thing you can consider is calling refresh() after indexing, which has
the effect I think you are looking for.
There are probably some performance considerations others here can comment
on better than I can.
In any case, calling refresh() is what we do.
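The refresh itself is a one-liner against the REST API (the index name here is a placeholder; the Java client has an equivalent admin call):

```
# Force a refresh so just-indexed documents become visible to search.
curl -XPOST 'localhost:9200/myindex/_refresh'
```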
On Thursday, 26 June 2014 10:25:12
Hi,
I am trying to use cluster.routing.allocation.enable to speed up node
restarts. As I understand it, if I set cluster.routing.allocation.enable to
none, restart a node, then set cluster.routing.allocation.enable to
all, the shards that go UNASSIGNED when the node goes down should start
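The toggling described can be done through the cluster update settings API; a sketch using transient settings, which reset on a full cluster restart:

```
# Before stopping the node: stop shards from being reallocated.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# After the node is back: allow allocation again.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```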
I fall on the side of caring less about spam emails (since I have a decent
spam filter on my email) and would rate easy access to the group much
higher.
I tend to add/remove myself from groups all the time, so adding a delay to
joining a group would be a big PITA for me.
-M
On
I am looking to build a logging solution and wanted to make sure that I am
not missing any key components.
The logs that I have are currently stored in a database to which there is
limited access due to locking risks from bad queries.
My plan is to have the DBAs write the logs from the database
Patrick,
* Well, I did answer your question. But probably not from the direction
you expected.
Hmm, no, you didn't. My question was: it looks like I can't
retrieve/display the [_all] field's content. Any idea? And you replied with
your Logstash template, where _all is disabled. I'm
Hi all,
We're trying to figure out how to access fields from within a native
AbstractSearchScript when it's called from a percolate request that
contains the document to percolate.
We tried to use the source mechanism and stored fields, to no avail (no errors,
but no matches).
The same
On Wed, Jul 2, 2014 at 6:47 AM, Tanguy Bernard bernardtanguy1...@gmail.com
wrote:
Hello
Everything is in the subject.
I have to use fuzzy matching for my fields (title, content), and when I'm
searching I want to see the part of the sentence where my keyword is.
This, together, doesn't work:
The behavior in my gmail-operated spam filter has been to toss out
lots of emails from this list as false positives. So, I keep sending
them back to my in box; pretty soon, gmail asks me to forward the good
ones to them to study, so I do. The result of that is that they catch
NONE of those spams.
Use gateway type local instead of none; then your index persists across
cluster restarts.
Jörg
On Wed, Jul 2, 2014 at 12:35 AM, venuchitta venu.chitta1...@gmail.com
wrote:
Hi,
I am new to elasticsearch. I am using JAVA Api to establish connection
with ES.
public void
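In elasticsearch.yml that is a single setting (this applies to pre-2.0 versions, where non-local gateway types still existed):

```yaml
# Persist index data on the local node's disk across full cluster restarts.
gateway.type: local
```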
Together with Zennet, we brainstormed a solution building on top of Itamar's
proposal.
In one string field we append the current path to all the previous ones, and
since we are talking about funnels we need to store them only on the last
event/document generated, e.g. SessionEndedEvent.
Then we
I'm not sure, but it looks like a node is trying to move some GB of document
hits around. This might have triggered timeouts at other places (probably
with node disconnects), and maybe the GB chunk is not yet GC-collected, so
you see this in your heap analyzer tool.
It depends on the search results and
We have been using an older Elasticsearch version here; upgrading to 1.2.1
shows 'unsupported class version' errors on JDK 1.6. The docs say that JDK
1.6 is supported (and it was). Is there some update here? What is the latest
Elasticsearch version available for JDK 1.6?
Thanks Mark,
Yeah, sorry, I realized after the post that I should have used pastebin, but
I couldn't edit my post. Yes, I am using the Logstash dashboard. I changed
the number of pages to a max record size of 10,000 results. I also realized
that my query in Kibana was only selecting the last
The docs say at least Java 7 is required from ES 1.2.0 on
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
For Java 6, you have to use ES versions before 1.2.0
Jörg
On Wed, Jul 2, 2014 at 4:21 PM, David Marko dmarko...@gmail.com wrote:
We have been using older
On Wed, Jul 02, 2014 at 05:43:26AM -0700, Andrew Davidoff wrote:
How can I avoid this, and make shards on a restarted node come back on the
same node?
Hello,
I have exactly the same issue.
My objective is to make a rolling restart script which waits for green
cluster state before restarting a
So, your search-only machines are running out of memory, while your
index-only machines are doing fine. Did I understand you correctly? Could
you send me node stats (curl localhost:9200/_nodes/stats?pretty) from
the machine that runs out of memory? Please run the stats a few times with 1
hour
Is it possible to apply a geo_polygon filter with a non-zero rule
https://en.wikipedia.org/wiki/Nonzero-rule ?
Hello Ben,
This is definitely an ambiguity.
By request.user, in the usual case ES expects data like
request : {
  user : vm
}
Try request\.user or something similar, some mechanism to escape the dot.
Thanks
Vineeth
On Wed, Jul 2, 2014 at 1:13 PM,
All,
This seems apropos to the current discussion and could help clear up some
confusion on recommendations etc. We, Elasticsearch, are hosting a webinar
on ELK, given by the Logstash creator, Jordan Sissel.
It's today, in 40 minutes.
Igor.
Yes, that's right. My index-only machines are booted just for the
indexing/snapshotting task. Once there are no more tasks in the queue, those
machines are terminated. They only handle a few indices each time (their
only purpose is to snapshot).
I will do as you tell
This memory issue report might be related
https://groups.google.com/forum/#!topic/elasticsearch/EH76o1CIeQQ
Jörg
On Wed, Jul 2, 2014 at 5:34 PM, JoeZ99 jzar...@gmail.com wrote:
Igor.
Yes, that's right. My index only machines are just machines that are
booted just for the
When I tried to optimize, the index had 51 shards.
Regards, Ophir
On Wednesday, July 2, 2014 11:27:50 AM UTC+3, Mark Walkom wrote:
It will work until it's full, but then ES will fall over.
Merging does require a certain amount of disk space, usually the same
amount as the segment that is
If you enable explanations, you can see the rationale behind Lucene's
scoring:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-explain.html
You are probably correct in that the array length is influencing the
scoring. By default, Lucene will rate higher
For geo search, it would be a good approach to respect the searcher's
preference by using a locale, so I suggest adding a locale "fr" filter to
the search.
Or an origin could be added to the start query, with all cities ordered by geo
distance in relation to the origin. For country search, the origin
Hi,
I have the following ES settings defined in my YAML file:
http.enabled: false
discovery.zen.ping.multicast.enabled: false
index:
  mappings:
    _default_:
      _timestamp:
        enabled: true
        store: true
  analysis:
    analyzer:
      mica_index_analyzer:
        type: custom
If you enable explanations, you would see that length normalization is
scoring the document with the shorter field higher than the document with a
term frequency of 2.
The fieldNorm is incredibly lossy since it uses only 1 byte, so there must
be some inconsistencies between the example and your
Heya,
We are pleased to announce the release of the Elasticsearch Servlet Transport
plugin, version 2.2.0.
The wares transport plugin allows you to use the REST interface over servlets.
https://github.com/elasticsearch/elasticsearch-transport-wares/
Release Notes -
Unfortunately, I tried with and without the region setting, no difference.
On Tuesday, July 1, 2014 7:43:21 PM UTC-4, Glen Smith wrote:
I'm not sure it matters, but I noticed you aren't setting a region in
either your config or when registering your repo.
On Tuesday, July 1, 2014 7:08:28 PM
The problem is as the topic says. I'm not sure if I misunderstood something or
missed some configuration. ES works fine in usual situations, but
doesn't work with the Rexster Gremlin extension.
In Java, I configured the graph as follows:
When you say "do not let a shard grow bigger than your JVM heap (this is
really a rough estimate) so segment merging will work flawlessly",
are we counting all the primary and replica shards of all indexes on that
node? So for example, if we had two indexes on a 10-node cluster. Each
Hello,
I am attempting to set up a large scale ELK setup at work. Here is a
basic setup of what we have so far:
```
          Nodes (approx 150)
             [logstash]
                 |
                 |
          +------+------+
          |             |
      Indexer1      Indexer2
      [Redis]       [Redis]
      [Logstash]    [Logstash]
          |             |
          |             |
Great idea. I'll give it a try ASAP.
On Wednesday, July 2, 2014 10:56:48 PM UTC+12, Yousef El-Dardiry wrote:
Hi all,
I just open sourced a set of AngularJS Directives for Elasticsearch. It
enables developers to rapidly build a frontend (e.g.: faceted search
engine) on top of
Ok, how many were you reducing to? How big is the index?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 3 July 2014 02:03, Ophir Michaeli ophirmicha...@gmail.com wrote:
When I tried to optimize the index had 51
Hi Aditya,
I'm looking to do something similar, did you have any success with this
problem?
Thanks
Matt
On Wednesday, January 22, 2014 11:53:36 PM UTC+13, Aditya Pavan Kumar
Vegesna wrote:
Hi
I am looking for way to co-relate multiple log events and then calculate
the time duration
We are using Logstash-Elasticsearch-Kibana and just want to be able to open
the index in Kibana. What is the necessary plugin that will allow us
to do this in something other than Firefox?
On Monday, June 2, 2014 11:56:35 AM UTC-7, Binh Ly wrote:
If you simply point the browser at the
Laura,
The simplest way is to install Kibana as a site plug-in on the same node on
which you run Elasticsearch. Not the best way from a performance and
security perspective, but certainly the easiest way to start with an
absolute minimum of extra levers to pull and knobs to turn, so to speak.
Hi,
I'm trying to get a lot more visibility and metrics into what's going on
under the hood.
Occasionally, we see spikes in memory. I'd like to get heap mem used on a
per shard basis. If I'm not mistaken, somewhere somehow, this Lucene index
that is a shard is using memory in the heap, and
Hi,
I am using the ES Java API to talk to an ES server. Sometimes I need to index a
single doc, sometimes dozens or hundreds at a time. I'd prefer to keep my
code simple (I am a contrarian thinker) and wonder if I can get away with
always using the bulk API (i.e. BulkRequestBuilder), so that my interface
The heap should be as big as your largest shard, irrespective of what index
it belongs to or if it's a replica.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 3 July 2014 05:50, mrno42 doug...@gmail.com wrote:
There was another thread on this very recently, and some people are using
riemann for this.
Take a look in the archives and you can probably find some useful info.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 2
Depends what you want to do really.
There are plugins like ElasticHQ, Marvel, kopf and bigdesk that will give
you some info. You can also hook collectd into the stack and take metrics,
or use plugins from nagios etc.
What monitoring platforms do you have in place now?
Regards,
Mark Walkom
Hi there,
I noticed that in the Java bulk API, some parameters can be set on both
the per-batch-request level and the per-operation level, e.g. the consistency
level parameter: BulkRequestBuilder#setConsistencyLevel vs.
IndexRequestBuilder#setConsistencyLevel.
What if the parameter has different
In the latest version of Logstash, you can use the elasticsearch output and
just set the protocol to http. The elasticsearch_http output will be
removed eventually.
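A minimal output section along those lines (Logstash 1.4-era syntax; the host value is a placeholder):

```
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
```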
On Monday, June 23, 2014 9:22:28 AM UTC-7, Ivan Brusic wrote:
I agree. I thought elasticsearch_http was actually the
Hi,
I have 5 clustered nodes and each node has 1 replica.
The total document size is 216 M across 853,000 docs.
I was suffering from very high CPU usage,
every hour and every early morning from about 05:00 to 09:00;
you can see my Cacti graph.
There is only Elasticsearch on this server.
I
I currently record basically everything in bigdesk: all the numerics from
cluster health, cluster state, nodes info, node stats, index status and
segments.
I want memory allocated on a per-shard level for Lucene-level actions,
query-level actions (outside field and filter cache) and hooks into
Hey Matthew,
Sorry, no luck with that.
Cheers
Aditya
On Jul 3, 2014 2:22 AM, Matthew Morrison mmorri...@broadsoft.com wrote:
Hi Aditya,
I'm looking to do something similar, did you have any success with this
problem?
Thanks
Matt
On Wednesday, January 22, 2014 11:53:36 PM UTC+13, Aditya