Hi,
Is it possible to use Azure Blob storage to store the indexes? Please note
I cannot use a mounted drive.
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
You could have one type per form, although the cluster state will be very big.
But you should test that option.
Or, if you don't really search for numbers as numbers (I mean with range
queries/filters), you could force each field to be a String and do the
transformation at the client level.
My 2 cents.
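A minimal sketch of that client-side transformation (the helper name and recursion strategy are my own, not part of any Elasticsearch client):

```python
# Coerce every leaf value to a string before indexing, so dynamic mapping
# always infers "string" regardless of the original type.
def stringify_values(doc):
    if isinstance(doc, dict):
        return {k: stringify_values(v) for k, v in doc.items()}
    if isinstance(doc, list):
        return [stringify_values(v) for v in doc]
    return str(doc)

print(stringify_values({"age": 42, "scores": [1, 2.5]}))
# {'age': '42', 'scores': ['1', '2.5']}
```

The client then converts back to numbers after fetching, at the cost of losing range queries on those fields.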
No, and you should not do that even if it were possible.
It would be dramatically slow.
You can use Blob storage for snapshots (backup).
But why do you want to use Blob storage and not attached disks?
--
David Pilato | Technical Advocate | elasticsearch.com
david.pil...@elasticsearch.com
@dadoonet |
It was immediate on my laptop.
On 18 September 2014 at 17:32:18, Jinyuan Zhou (zhou.jiny...@gmail.com) wrote:
David,
Thanks for taking time to look at my
Thanks David. Based on the system behavior, having all types as string is
fine for queries. But at the aggregation level it might be trouble. For
example, a type of address is a complex JSON object:
{ "field_1": { "country": "US", "province": "CA", "city": "New York", "address": "Street Address" } }
If we transform
On Thursday, September 18, 2014 at 12:40 CEST,
Foobar Geez foobarg...@gmail.com wrote:
Thanks. I provided a bad example; I guess I over-simplified it and
also edited it to remove proprietary data (hence the missing }).
The following example exhibits the same issue as described in my
Hi,
In my case there are about five thousand fields. Can ES support
this? How does the number of fields affect search speed?
Thanks.
terrs
Hi Mark,
I used the command GET http://xxx.xxx.xxx.xxx:9200/_nodes to get one node's
information:
direct_max_in_bytes=68518871040
heap_init_in_bytes=68719476736
heap_max_in_bytes=68518871040
non_heap_init_in_bytes=24313856
non_heap_max_in_bytes=136314880
total_in_bytes=135366918144
Java
Hi,
I have a big problem with zipcodes.
See my config below:
ville:
  mappings:
    ville_nom:
      index_analyzer: cities_index_analyzer
      search_analyzer: cities_search_analyzer
What does GET /yourindex/_mapping give?
Your mapping definition does not look like an Elasticsearch mapping here.
On 19 September 2014 at 10:44:31, Hari Rajaonarifetra
We are migrating from Mongo to ES.
We have got a Mongo query as below:
query.addCriteria(Criteria.where(FIELD_NAME).exists(true))
What is the equivalent query in Elasticsearch using QueryBuilder?
Thanks.
Hi Rashid!
On Thu, Sep 18, 2014 at 12:37 AM, Rashid Khan
rashid.k...@elasticsearch.com wrote:
Unfortunately I can’t give you an ETA other than soon ;-)
Will you communicate about Kibana 4 in the meantime, like a kind of
changelog overview? And will the source be available before release?
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-exists-filter.html#query-dsl-exists-filter
On 19 September 2014 at 11:42:43, Vipin
And you meant in Java:
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/query-dsl-filters.html#exists-filter
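For reference, the REST body that Java filter builds can be sketched as a plain dict, using the ES 1.x `filtered` query form ("my_field" is a placeholder field name):

```python
# Query body equivalent to wrapping an exists filter around match_all
# (ES 1.x "filtered" query syntax; the field name is a placeholder).
def exists_filter_query(field):
    return {
        "query": {
            "filtered": {
                "query": {"match_all": {}},
                "filter": {"exists": {"field": field}},
            }
        }
    }

print(exists_filter_query("my_field"))
```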
On 19 September 2014 at
I don't get it.
If field_1.country is a String, why can't you aggregate on it?
On 19 September 2014 at 08:27:19, Michael Chen (mechil...@gmail.com) wrote:
Thanks a lot.. :)
On Friday, September 19, 2014 3:18:27 PM UTC+5:30, David Pilato wrote:
And you meant in Java:
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/query-dsl-filters.html#exists-filter
We cannot guarantee that field_1 is always an address. In Form 1, field_1
might be an address, while in another form it might be a string or number or
whatever. We are thinking about designing the storage for Google Forms and its
data entries.
Re: "you could force each field to be a String and do the transformation"
How can we limit the query results using ElasticsearchTemplate (Spring Data
Elasticsearch integration)?
Thanks.
Let me make the question clearer. The challenge we have now is how to
index an EAV[1] model database.
Let's take Google Forms as an example. Every user can create a form. They
can choose from various field types, including text, number, choice, etc.
They construct one form like this:
Form 1: a
A search is independent from the number of fields as long as you do not
search over this number of fields.
You can create as much fields as your memory und resources will let you do
so.
Jörg
On Fri, Sep 19, 2014 at 9:53 AM, xiehai...@gmail.com wrote:
Hi,
In my case , there are about
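One common workaround for EAV-style data, sketched below, is to fold every answer into a small fixed set of typed fields instead of one mapping field per form field; the layout ("answers_str"/"answers_num" with name/value pairs) is illustrative, not a standard:

```python
# Fold EAV answers into a fixed set of typed fields so the mapping stays
# small no matter how many forms exist. Field names are illustrative.
def to_typed_doc(form_id, answers):
    doc = {"form_id": form_id, "answers_str": [], "answers_num": []}
    for name, value in answers.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            doc["answers_num"].append({"name": name, "value": value})
        else:
            doc["answers_str"].append({"name": name, "value": str(value)})
    return doc

print(to_typed_doc("form1", {"age": 30, "city": "Paris"}))
```

Indexed as nested objects, this keeps the cluster state bounded at the cost of slightly more complex queries.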
Thanks for your answer.
GET /ville/_mapping gives this:
{
  "my_project": {
    "mappings": {
      "ville": {
        "_meta": {
          "model": "MyProject\ReferenceBundle\Entity\Ville"
Could you GIST a full SENSE script which helps to reproduce your issue?
On 19 September 2014 at 14:06:04, Hari Rajaonarifetra (rhar...@gmail.com)
wrote:
Was there something wrong in what I did?
Thanks a lot,
+1 for this feature!
What I need is pretty similar: calculating a rolling sum, so for each day I
need to sum the previous 30 days (at each point). Oracle and Postgres make
this very easy with aggregation functions (and they can take advantage of
very interesting optimizations for sums, as each point
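Until such an aggregation exists, the rolling sum can be computed client-side from daily buckets; a sketch (the `(day, value)` pair representation is my assumption about how you'd flatten date_histogram output):

```python
from collections import deque

# Rolling 30-day sum computed client-side. `daily` is a list of
# (day_index, value) pairs sorted by day, e.g. flattened daily buckets.
def rolling_sum(daily, window=30):
    out, q, total = [], deque(), 0
    for day, value in daily:
        q.append((day, value))
        total += value
        # Drop points that fell out of the 30-day window.
        while q and q[0][0] <= day - window:
            total -= q.popleft()[1]
        out.append((day, total))
    return out

print(rolling_sum([(1, 5), (2, 3), (40, 7)]))  # [(1, 5), (2, 8), (40, 7)]
```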
After populating the index with this mapping, I just use
http://localhost:9200/_plugin/head/ for testing queries.
But when I try to find a city with zipcode 03000 (for example), there is no
result. I checked the index with the head navigator and I see that all
zipcodes beginning with 0 are cut off.
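For what it's worth, the usual cause is that the zipcode was dynamically mapped as a number, so "03000" is stored as 3000. A sketch of a mapping that keeps it as an unanalyzed string (index and type names below are placeholders):

```python
# Declaring zipcode as a non-analyzed string preserves leading zeros;
# under a numeric mapping they are silently dropped.
ville_mapping = {
    "mappings": {
        "ville": {
            "properties": {
                "zipcode": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}

# What happens under a numeric mapping:
assert int("03000") == 3000  # the leading zero is gone
```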
In our search we have configured text with two analyzers, english and
standard, so we can match phrases with the standard analyzer. We break the
keywords on spaces and create a bool query for each word.
This is working fine for all cases except where the query has standard
word separators like
On the other hand, if I use a single query_string instead of a bool of terms,
it works. Does ES/Lucene determine not to use the word separators by looking
at the definition of the fields?
On Friday, September 19, 2014 11:05:59 AM UTC-4, Ankush Jhalani wrote:
In our search we have configured text
Hi,
Is it possible to get a regexp group result from a regexp request in
Elasticsearch?
For example, if I make this request with a regexp group, I don't have the
group value in the response. So can I have it or not?
{
  "query": {
    "regexp": {
      "path": "Prods/([^/]+)",
      "flags": "all"
I'm in the same boat as Dan. Desperate for child aggregation!
Looks like the label has changed
too: https://github.com/elasticsearch/elasticsearch/labels/v1.4.0.Beta1
Tom.
On Wednesday, September 10, 2014 6:02:27 PM UTC+1, Ivan Brusic wrote:
I think this release might be their biggest one
I'm sure that this has been asked before on the forum, but I couldn't find
an answer specifically for this one:
Is there any way at all to disable writes over http for elasticsearch? It's
very easy for people to accidentally create indexes that they didn't mean
to create.
If there is no way,
Has anyone configured Kibana/Elasticsearch to use HTTPS? I'm new to it and
was wondering if there are any good tutorials out there? I'm using Apache as
my web server and have SSL enabled. If I try to connect to ES using HTTPS I
get a message in Kibana that says it was unable to connect to ES at
I would expect this question to be popular, but I still cannot google the
answer.
If I have multiple ES nodes in the cluster, each having its own
configuration file (elasticsearch.yml) - what happens if some settings in
those files go out of sync? For instance, index creation config? Which
There were two reasons for not enabling compression on data files. First of
all, the way chunking in the snapshot/restore API was implemented didn't
allow a simple implementation of compression on data files. Moreover, the
data files are already compressed to a certain degree. In my tests I was
I would have thought that range aggregations return the bucket list in the
order the ranges are listed, but I'm not seeing that result (ES 1.3.2).
Is there a way to enforce the original ordering that I'm missing?
My range aggregation:
day_of_week_range: {
range: {
field:
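One way to sidestep ordering entirely in the 1.x range aggregation is `keyed: true` with named ranges, so buckets come back as an object keyed by each range's name rather than a list. A sketch of the request body (field name and range boundaries are placeholders):

```python
# Range aggregation with keyed buckets: the response maps each range's
# "key" to its bucket, so list ordering no longer matters.
agg = {
    "aggs": {
        "day_of_week_range": {
            "range": {
                "field": "day_of_week",
                "keyed": True,
                "ranges": [
                    {"key": "weekday", "from": 0, "to": 5},
                    {"key": "weekend", "from": 5, "to": 7},
                ],
            }
        }
    }
}
```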
I have a setup of the ELK stack with Redis as a broker and a cluster of 2
Elasticsearch nodes. The stack was running well, and suddenly I started
facing the following error and warning messages at different levels of the
stack.
http://pastebin.com/eG7p0PCc
If I restart the redis broker, Logstash
You basically want to create your own aggregation; aggregations are
essentially collectors at the Lucene level. Look at existing plugins which
provide custom aggregations.
Elasticsearch uses a scatter-gather/map-reduce model for distributed
collections.
--
Ivan
On Sep 18, 2014 12:56 AM, tim
Hi, all,
To get the last entry from two different types, I am doing:
GET localhost:9200/index/type1
{ "size": 1, "sort": { "id": "desc" } }
GET localhost:9200/index/type2
{ "size": 1, "sort": { "id": "desc" } }
For more efficient queries over multiple types, I want to combine the two
Maybe multi-search could help in that case?
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-multi-search.html#search-multi-search
Or maybe a terms aggregation on the _type field (you'll need to index it) and
a top_hits sub-aggregation:
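A sketch of how an `_msearch` body could be assembled client-side (newline-delimited header/query pairs with a trailing newline; index and type names are placeholders matching the example above):

```python
import json

# Build an _msearch body: alternating header and query lines,
# newline-delimited, terminated by a trailing newline.
def msearch_body(index, types, query):
    lines = []
    for t in types:
        lines.append(json.dumps({"index": index, "type": t}))
        lines.append(json.dumps(query))
    return "\n".join(lines) + "\n"

body = msearch_body("index", ["type1", "type2"],
                    {"size": 1, "sort": {"id": "desc"}})
print(body)
```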
Hey guys, I'm getting a NullPointerException while using a significant_terms
aggregation. It happens in this line:
org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicStreams.read(SignificanceHeuristicStreams.java:38)
The error is in the deserialization:
You can disable HTTP entirely; that's it:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html#_disable_http
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 20 September 2014 01:41,
It looks like networking issues: lots of connections reset/closed/timed out.
It might help if you can put your configs into a pastebin too.
On 20 September 2014 04:33,
I missed a part of the error message:
[WARN] 2014-09-19 20:29:13.176 o.e.t.netty - [Sigyn] Message not fully read
(response) for [61] handler
org.elasticsearch.action.TransportActionNodeProxy$1@2e6201d0, error
[false], resetting
On Friday, September 19, 2014 5:58:15 PM UTC-3, Felipe
There are a few good results here:
https://www.google.com.au/?gws_rd=ssl#q=kibana+https+apache
Check out
http://blog.stevenmeyer.co.uk/2014/02/securing-kibana-and-elasticsearch-with-https-ssl.html
for example
More information. All 5 ES nodes are on 1.3.2 (checked with curl
localhost:9200/) with java 1.7.0_65. Client machine is also on 1.3.2
and 1.7.0_65
On Friday, September 19, 2014 6:21:06 PM UTC-3, Felipe Hummel wrote:
I missed a part of the error message:
[WARN] 2014-09-19 20:29:13.176
Which configuration would you be interested to look at?
-Shriyansh
On Friday, September 19, 2014 2:17:16 PM UTC-7, Mark Walkom wrote:
It looks like networking issues, lots of connection reset/closed/timeout.
It might help if you can put your configs into a pastebin too.
Regards,
Mark
You can filter out HTTP PUT (and DELETE) and get a pretty good approximation
of not accidentally removing or overwriting anything (due to REST semantics).
Jörg
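As a sketch of that idea, a reverse proxy in front of port 9200 could reject the mutating verbs before they reach Elasticsearch (an illustrative nginx fragment, not a complete setup):

```nginx
# Illustrative only: reject PUT and DELETE before they reach ES.
# POST is left through because _search uses it -- but note _bulk and
# index POSTs can still write, so a real setup needs path-based rules too.
server {
    listen 8080;
    location / {
        if ($request_method ~ ^(PUT|DELETE)$) {
            return 403;
        }
        proxy_pass http://localhost:9200;
    }
}
```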
On Fri, Sep 19, 2014 at 5:41 PM, Marie Jacob jacob.ma...@gmail.com wrote:
I'm sure that this has been asked before on the forum, but I couldn't
Thanks for the update Rashid.
It would be really great to get a look at the early bits. We are just
starting to use ES, so backwards compatibility would not be an issue.
Thanks
Doug
On Wednesday, September 17, 2014 5:37:59 PM UTC-5, Rashid Khan wrote:
Unfortunately I can’t give you an
Hi David,
Thanks a lot for your prompt help. I got both approaches working, which is
very exciting. I prefer the top_hits aggregation approach. The msearch
approach does not accept normal JSON payloads, which makes things a bit
harder to process in JavaScript.
My query payload is
I have the same problem with lowercase_terms. Is there a fix? The term
suggester lowercases them by default (not desirable), and the completion
suggester doesn't.
On Tuesday, July 8, 2014 4:14:22 PM UTC-5, Ryan Tanner wrote:
Side question:
If I try to set lowercase_terms to true, I get a
Hi,
Actually, the search needs to go over all fields; every field has a special
definition, and the number of fields is too big to remember.
I need to search all fields for any keywords and return the matched docs and
field names to the requester.
Why so many fields? Because the data come from tables of
Can ES scale to 30TB/day and still be usable?
This is a typical Logstash/Elasticsearch/Kibana setup. I have a small
environment logging 20GB/day that seems to work fine. At 30TB, very
little will be able to be cached in RAM; can ES still be usable at that
point?
Also, what is the best