... can it be that _id is treated as a string? If so, is there any way
to retrieve the max _id value while treating _id as an integer?
Am Dienstag, 27. Januar 2015 19:24:41 UTC+1 schrieb Abid Hussain:
Hi all,
I want to determine the docs with the max and min _id values. So, when I run
this query:
GET
Hey everyone,
We have been implementing ES for our API for a few months, and have finally
hit a wall with the top hits aggregation.
We are trying to use it to get similar items grouped under a parent. The
aggregation works perfectly, except when we want to add a geolocation
sort.
Yes, the _id field is a string; you are not limited to numbers. In fact, an
automatically generated ID contains many non-numeric characters.
For what you want, you should create an id field, map it as a long integer,
and copy your _id into that id field when you load the document. Then
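The reply above can be sketched as follows (a sketch only, assuming the 1.x REST API; the index/type names come from the thread, while the numeric `id` field is the copy of `_id` the reply suggests creating). With the numeric field in place, max/min become simple aggregations instead of `_uid` sorts:

```python
import json

# Mapping sketch: a long "id" field alongside the string _id.
mapping = {
    "order": {
        "properties": {
            "id": {"type": "long"}  # numeric copy of _id, set at index time
        }
    }
}

# Max/min of the numeric id via aggregations (no hits needed, so size: 0).
max_min_request = {
    "size": 0,
    "aggs": {
        "max_id": {"max": {"field": "id"}},
        "min_id": {"min": {"field": "id"}},
    },
}

payload = json.dumps(max_min_request)
```

Both bodies would be sent to the usual endpoints (`PUT` the mapping, then `POST /my_index/order/_search` with the aggregation body).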
That was it, I guess Windows 8 has it out of the box.
On Tuesday, January 27, 2015 at 3:31:28 AM UTC-5, Akshay Davis wrote:
Have you added the .json MIME type for the site in IIS?
On Monday, January 26, 2015 at 3:46:44 PM UTC, GWired wrote:
Yes,
It works when I'm on my localhost
Hi all,
I want to determine the docs with the max and min _id values. So, when I run
this query:
GET /my_index/order/_search
{
  "fields": [ "_id" ],
  "sort": [
    { "_uid": { "order": "desc" } }
  ],
  "size": 1
}
I get a result:
{
  ...
  "hits": {
    ...
    "hits": [
      {
I am using the query below to pull information from Logstash:
curl -XGET 'http://logs:xx00/_all/_search?pretty=true' -d '{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "_type": "pre"
          }
        },
        {
Can anyone help with this? Just bumping this email; sorry if I am breaking
any
--
View this message in context:
http://elasticsearch-users.115913.n3.nabble.com/not-able-to-refine-from-o-p-of-query-in-logstash-tp4069573p4069621.html
Sent from the ElasticSearch Users mailing list archive at
Hi Radu,
Thanks for the suggestion. Based on the criteria, I designed an
architecture using four nodes; please suggest the best way to arrange it. I
tried to satisfy all the conditions in my architecture.
node1: dedicated master
node2: master and data
node3: master and data
node 4:
Hello,
I want to be able to see exactly which terms are considered high frequency
terms at a specific cutoff_frequency.
I noticed that if I query the termvectors of different documents with
different routing values, the values of field_statistics[doc_count] and
the term[doc_freq] change.
That
Have you added the .json MIME type for the site in IIS?
On Monday, January 26, 2015 at 3:46:44 PM UTC, GWired wrote:
Yes,
It works when I'm on my localhost serving it to me connected to a remote
elasticsearch. It just isn't working when I'm serving it from a dedicated
Windows 2008 web
Indexes are only refreshed when a new document is added.
Best practice would be to use multiple machines; if you lose that one, you
lose your cluster!
Without knowing more about your cluster stats, you're probably just
reaching the limits of things, and either need less data, more nodes or
more
Hi, while executing an ES query through my Java code,
I am getting this exception:
Failed to parse query [(((\stew\ OR \kabobs\ OR \filet\ OR \brisket\ OR
\roast\ OR \steak\ OR \beef\ OR \burger\) AND
asdfqwer3456!@#$%dfghtyui%^\u0026(*) AND
-\vegan\)]]; nested:
I'm searching on an array of objects
The problem is when I search using query string
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-dsl-query-string-query,
it matches text split across different objects (different array positions).
Is
Hi,
I have an ES cluster running on Ubuntu 14 and created a file in
/etc/profile.d/es_vars.sh with this content:
export ES_HEAP_SIZE=7g
I have 14GB of memory, so I'm giving 7GB to the ES heap, but in ps aux I can see:
...
elastic+ 1474 17.0 2.2 5929120 325284 ? Sl 22:33 0:38
Hello!
I am trying to create an Elasticsearch index to achieve my goals.
The main problem of the task is its complexity, and after 3 days of tries
and retries I am turning to this group for suggestions.
I want to create a statistics page that would allow me to do the following
I have two Web Servers ws001 and ws002 working as load balance for
Elasticseach and I am trying to catch/count the hits for a specific page
which is something like this: mysite.com/listing/item-123/.
Using ES, I am running curl -XGET http:
How much data are you talking? Are you using bulk API? What is your bulk
sizing?
You can also set an index to not refresh while you ingest into it
(refresh_interval = -1), then once the data has been sent to ES turn refreshing back on.
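That toggle can be sketched as two settings updates (the index name "myindex" is hypothetical; the body shape is the standard index-settings update):

```python
import json

# Pause refresh during a bulk load, then restore it afterwards.
disable_refresh = {"index": {"refresh_interval": "-1"}}
restore_refresh = {"index": {"refresh_interval": "1s"}}  # 1s is the default

# Each body would be sent as PUT /myindex/_settings, before and after the load.
bodies = [json.dumps(disable_refresh), json.dumps(restore_refresh)]
```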
On 28 January 2015 at 11:45, webish greg...@yoursports.com wrote:
I have some
Not sure if it's what you are looking for.
Highlighting?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 28 janv. 2015 à 00:56, bvnrwork budda08n...@gmail.com a écrit :
Hi,
we have a scenario where we need to display small part of the document text
when
I was wondering what is going on behind the scenes when adding n number of
indices to an alias. Are there any performance implications?
So an alias with a single index that has a single shard will allocate a
single process to scan the index... So that would mean, with the same
data, when
So these two nodes are their own separate clusters?
How are you indexing data into them? Are you using the auto ID generation
within ES or specifying your own?
On 28 January 2015 at 11:56, Carlos Henrique de Oliveira
choliveira0...@gmail.com wrote:
I have two Web Servers ws001 and ws002
You need to run 2 queries in that case IMHO.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 28 janv. 2015 à 00:54, buddarapu nagaraju budda08n...@gmail.com a écrit :
Any answers for me :)?
Regards
Nagaraju
908 517 6981
On Sun, Jan 25, 2015 at 2:14 PM, buddarapu
I have some production indices that need a large amount of data
imported into them fairly frequently. Each time we import data, the ES
nodes become a huge bottleneck. I honestly expected much better
performance out of them. Regardless, I would like to import data into a
production ES
As with most performance related things in Elasticsearch context, it
depends on too many factors to really provide set figures.
On 28 January 2015 at 11:05, webish greg...@yoursports.com wrote:
I have often found myself looking into the performance of different
functionality with
Hi Jorg,
Thank you for the quick reply. Say I am setting up a small cluster and
don't have a client node in it; in that case, can I use HAProxy to forward
requests between servers?
Thanks
phani
On Tuesday, January 27, 2015 at 5:33:34 PM UTC+5:30, phani.n...@goktree.com
wrote:
Hi All,
Hello,
Is there a way to manually trigger the garbage collector on an Elasticsearch node?
I read that the JMX connection has been removed from ES since version 0.90.
Jason
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and
Hey guys,
I've been around this problem for quite a while and haven't got a clear
answer to it.
Like many of you out there, we are running an ES cluster on a single
strong server, moving older indices from the fast SSDs to slow, cheap HDDs
(about 25 TB of data).
To make this work we got 3
OK, following your previous responses about the "type is missing" error,
I corrected the JSON payload I send with the PUT and got another error:
$ curl -XPUT 'http://localhost:9200/_snapshot/amos0' -d '{
  "type": "s3",
  "settings": {
    "region": "ap-southeast-1",
    "bucket": "prod-es-backup",
Hi all,
I've seen lots of posts about this, and want to make sure I'm understanding
correctly.
Background:
- Our cluster has 6 servers. They are Dell R720xd with 64GB RAM,
2xE5-2600v2 CPU (2 sockets, 6 cores/socket), 16TB disk
- Elasticsearch is set to have 6 shards, and 1 replica,
Any answers for me :)?
Regards
Nagaraju
908 517 6981
On Sun, Jan 25, 2015 at 2:14 PM, buddarapu nagaraju budda08n...@gmail.com
wrote:
I don't get it exactly, so I'm explaining the doc structure with example docs. I
understand that has_child gets you the parent documents and has_parent
gets the
If you are installing using the deb, just use /etc/default/elasticsearch to
set it.
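For reference, a sketch of the relevant line (the deb init script reads this file before launching the JVM; the 7g value assumes the 14GB machine from the thread, following the "half of RAM" guideline):

```shell
# Sketch of /etc/default/elasticsearch (deb package);
# the init script exports this before starting the node.
ES_HEAP_SIZE=7g
```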
On 28 January 2015 at 09:48, Ali Kheyrollahi alios...@gmail.com wrote:
Sorry missed *echo* $ES_HEAP_SIZE
Sorry missed *echo* $ES_HEAP_SIZE
So I suppose that when I run a filtered query like this one, ES filters all
the documents in the database, and then performs the match query only to
the documents that fit the filter, right? I just want to make sure that it
doesn't perform the match query on all the documents and then drop the
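The "filtered" query shape being discussed can be sketched as below (1.x syntax; the term filter narrows the candidate set first, and the match query scores only the documents that pass it; field names here are illustrative, not from the thread):

```python
import json

# Filter-then-query: the (cacheable) term filter runs before scoring,
# so the match query never touches documents outside the filter.
filtered_query = {
    "query": {
        "filtered": {
            "query": {"match": {"title": "elasticsearch"}},
            "filter": {"term": {"status": "published"}},
        }
    }
}
payload = json.dumps(filtered_query)
```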
I have often found myself looking into the performance of different
functionality in Elasticsearch. I feel like this is a huge missing piece
of the ES documentation. The Redis documentation attempts to characterize
the performance of each operation using Big O notation, or what they refer to as
You should be able to query the child type with a has_parent query which
has a has_child query nested within it.
No idea how it would perform though.
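That suggestion can be sketched as follows (a sketch only; the parent/child type names and the match field are hypothetical, not from the thread):

```python
import json

# A has_parent query on the child type, with a has_child query
# nested inside it, as suggested above.
request = {
    "query": {
        "has_parent": {
            "parent_type": "parent_doc",
            "query": {
                "has_child": {
                    "type": "child_doc",
                    "query": {"match": {"some_field": "some value"}},
                }
            },
        }
    }
}
payload = json.dumps(request)
```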
On Sun, Jan 25, 2015 at 3:29 AM, bvnrwork budda08n...@gmail.com wrote:
For example:
Have three below documents , FakeDoc,Doc1Doc2
Now how to
Thanks David,
It seems that the "verify": false setting is specific to the fs type and
not recognised by the s3 type.
I tried it anyway and got the same worrying "type is missing" error:
$ curl -XPUT 'http://localhost:9200/_snapshot/amos0' -d '{
  "s3dev0": {
    "settings": {
Hi,
we have a scenario where we need to display a small part of the document
text when a document qualifies.
Any ideas would be appreciated
Regards,
Nagaraju
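As suggested elsewhere in the thread, highlighting covers this: ES returns a short fragment of the matching text with each hit. A minimal sketch (the "body" field name and fragment sizes are hypothetical):

```python
import json

# Search request that asks for one ~100-character highlighted
# fragment of the "body" field per hit.
search_body = {
    "query": {"match": {"body": "elasticsearch"}},
    "highlight": {
        "fields": {
            "body": {"fragment_size": 100, "number_of_fragments": 1}
        }
    },
}
payload = json.dumps(search_body)
```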
Not much point having master and data only nodes for such a small cluster.
Just make them all master and data and then set min masters to 3.
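The "min masters = 3" figure comes from the quorum rule: with four master-eligible nodes, a majority is (4 // 2) + 1 = 3, which avoids split brain. A tiny sketch:

```python
# Quorum of master-eligible nodes, per the standard split-brain guideline.
def minimum_master_nodes(master_eligible_nodes: int) -> int:
    return master_eligible_nodes // 2 + 1

# The corresponding elasticsearch.yml line for a 4-node cluster would be:
#   discovery.zen.minimum_master_nodes: 3
```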
On 27 January 2015 at 21:07, phani.nadimi...@goktree.com wrote:
Hi Radu,
Thanks for the suggestion and based on the criteria i designed one
Hi All,
I am new to Elasticsearch and am currently using it with MongoDB for
indexing and searching. I would like to implement a keyword auto-complete
feature like search engines have, which shows a list of suggested keywords
as the user types a partial keyword. I had a document
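One common way to build keyword auto-complete is the completion suggester. A sketch only (the index, field, and suggester names are hypothetical, not from the thread): map a completion field, then query the suggest endpoint with the partial keyword as a prefix.

```python
import json

# Mapping with a completion field for suggestions.
mapping = {
    "product": {
        "properties": {
            "name_suggest": {"type": "completion"}
        }
    }
}

# Suggest request: "elas" is the partial keyword typed by the user.
suggest_request = {
    "keyword_suggest": {
        "text": "elas",
        "completion": {"field": "name_suggest"},
    }
}
payload = json.dumps(suggest_request)
```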
Hi,
I'm trying to use an *ngram*-based solution as a shotgun approach to get
results which are not covered by more precise analyzers.
An article describing this approach is here, for example:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/ngrams-compound-words.html
The *match* query, it is
Looks like this was answered on StackOverflow?
Mike McCandless
http://blog.mikemccandless.com
On Mon, Jan 26, 2015 at 7:54 PM, Steve Pearlman slpearl...@gmail.com
wrote:
For a well formatted example, please see:
Hi,
We were wondering if anyone has had time to look at those issues. Are they
already known, or should we open a GitHub issue regarding them?
Thanks
--
Renaud Delbru
On Monday, January 26, 2015 at 1:03:19 PM UTC, ren...@sindicetech.com wrote:
Hi All,
I know the concept of a load balancer in Elasticsearch: a client node
which is HTTP-enabled, can never be a master, and doesn't hold any data. My
doubt is: if we introduce such a client node as a load balancer in the
cluster, is there any need to set up HAProxy to forward requests? If there is
On Mon, Jan 26, 2015 at 11:05 PM, Mike Sukmanowsky
mike.sukmanow...@gmail.com wrote:
I understand that the result of the bool is the bitset that's cached as
opposed to the individual term filters themselves. This had me concerned
that for certain complex bool filters (where we have 10 or so
Hi Mark,
Yes, they are two separated nodes.
We are indexing the data via PHP:
public function resetListingsIndex() {
    $es = new Elasticsearch\Client();
    // delete listings
    $deleteParams['index'] = 'listings';
    @$es->indices()->delete($deleteParams);
    // create listings index
I'm seeing some major performance difference depending on if I wrap my filter
in a query. I don't understand, because the docs say to use filters for exact
matching.
This query takes about 800ms, even after repeated executions (so caches are
hot):
{ "filter": { "term": { "ProjectId": 4191152 }
In 1.5, a new inner hits feature is coming.
https://github.com/elasticsearch/elasticsearch/pull/8153
David
Le 28 janv. 2015 à 04:29, bvnrwork budda08n...@gmail.com a écrit :
Okay, thank you. Do nested objects help here?
Is it possible to get only the inner objects (from nested objects)?
Because the first one is a post_filter (BTW we renamed it), it is applied
after the search, on the result set.
The second is applied first, and then the query is run.
I guess that is the difference here.
I would use the second one every time unless you need to compute aggregations on
the full
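The two request shapes being contrasted can be put side by side (field names are illustrative; the ProjectId term comes from the earlier post in the thread). post_filter runs after the query, so aggregations still see the unfiltered set; the filtered query restricts documents before scoring:

```python
# Filter applied after the search, on the result set only.
post_filtered = {
    "query": {"match": {"title": "foo"}},
    "post_filter": {"term": {"ProjectId": 4191152}},
}

# Filter applied first; the query then scores only the surviving documents.
pre_filtered = {
    "query": {
        "filtered": {
            "query": {"match": {"title": "foo"}},
            "filter": {"term": {"ProjectId": 4191152}},
        }
    }
}
```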
Hi,
Thanks for your response, Mark.
It looks like we are getting a second big server for our ELK stack
(unfortunately without any more storage, so I really can't create a
failover cluster yet), but I wonder what role I should give this server in
our system.
Would it be good to move the whole long
Perfect. Thanks.
David
Le 28 janv. 2015 à 03:55, Amos S amos.shap...@gmail.com a écrit :
I opened an issue for AWS plugin project on github, I hope this is what you
were referring to. Here is the issue:
https://github.com/elasticsearch/elasticsearch-cloud-aws/issues/167
About the type
Hi,
What are "terms" here?
As far as I know, there is no provision to get all the terms for a field of
a document by default.
The only workaround is to use term vectors:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-advanced-scripting.html#_term_vectors
Thanks
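The term-vectors workaround can be sketched as a per-document request (assuming the 1.x `_termvector` endpoint; the index, type, id, and field names here are hypothetical):

```python
import json

# Ask for the terms of one field of one document, with term and
# field statistics included in the response.
endpoint = "/my_index/doc/1/_termvector"
body = {
    "fields": ["body"],
    "term_statistics": True,
    "field_statistics": True,
}
payload = json.dumps(body)
```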
They are separate nodes, but are they in the same cluster, or are they
running as their own unique clusters? ie you have two clusters of one node
each, rather than one cluster with two nodes.
On 28 January 2015 at 15:42, Carlos Henrique de Oliveira
choliveira0...@gmail.com wrote:
Hi Mark,
It depends on the language/platform you use.
With Java, all is very easy: NodeClient connects to more than one node, and
so does TransportClient in sniff mode, so Java clients use a fault-tolerant
connection mode.
The official clients do the same, but only if configured properly. Please study
how to
Now I have found the right query:
you have to double-escape the reserved characters,
e.g. uri:\\/video\\-ondemand\\/video\\/flv\\/test\\/* ; with this query all
works as expected.
Best regards
Messias
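Messias's fix can be generalized with a small helper that escapes the query_string reserved characters (the reserved set below follows the query-string docs; when the query travels inside a JSON body, each backslash must be doubled again, which is why the post shows "\\/"):

```python
# Characters with special meaning in a Lucene query_string query.
RESERVED = set('+-=&|><!(){}[]^"~*?:\\/')

def escape_query_string(text: str) -> str:
    # Prefix every reserved character with a backslash.
    return "".join("\\" + ch if ch in RESERVED else ch for ch in text)
```

Note that a trailing `*` wildcard, as in the post, must be left unescaped on purpose if you want it to stay a wildcard.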
Hello everyone,
I have played with Elasticsearch for a while and ran into amazing and
useful plugins during development (Sense, HQ, Kopf, etc). However, I haven't
found any plugin that would help me create a new document by
offering term suggestions from previously submitted docs. Has
1. I opened an issue for AWS plugin project on github, I hope this is
what you were referring to. Here is the
issue: https://github.com/elasticsearch/elasticsearch-cloud-aws/issues/167
2. About the type missing error - it turned out to be my mistake in
trying to copy the
Could you open an issue in the AWS plugin project (and maybe in the Azure and
GCE ones) to support the verify option as well?
BTW, I think we should either try to support having type after settings, or
clearly document that it needs to be on the first line. Could you open an issue
for this in elasticsearch?
Coming back
def score = 0;
// terms: list of query tokens
for (term in terms) {
    q_term_freq = terms.countBy { it }[term];  // token frequency within the query
    term_freq = _index[field][term].tf();      // term frequency in this document
    doc_freq = _index[field][term].df();       // document frequency of the term
    score += term_freq * doc_freq * q_term_freq;
};
score;
The first one gives an error