is used for operations like filter
or sort. The higher the cardinality, the more effort is needed. This is
because the index is inverted.
Jörg
On Fri, Mar 20, 2015 at 3:30 AM, Ashish Mishra laughin...@gmail.com wrote:
Not sure I understand the difference between composable filters vs.
bitsets. Is #3 the correct interpretation of what this is implying?
The use of bitsets is what makes the filters composable: the
should/must/must_not filters use an internal Lucene bitset implementation
for efficient computation.
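As a sketch of the kind of composable filter being discussed (field names here are illustrative, not from the thread), a filtered query with a bool filter might look like:

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must":     [ { "term": { "status": "active" } } ],
          "must_not": [ { "term": { "type": "internal" } } ],
          "should":   [ { "term": { "priority": "high" } } ]
        }
      }
    }
  }
}
```

Each term filter's matching documents can be cached as a bitset, and the bool filter then combines those bitsets with cheap AND/OR/NOT operations rather than re-evaluating each clause per document.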
Jörg
On Thu, Mar 19, 2015 at 5:58 AM, Ashish Mishra laughin...@gmail.com wrote:
I'm trying to optimize filter queries for performance and am slightly
confused by the online docs. Looking at:
1) https://www.elastic.co/blog/all-about-elasticsearch-filter-bitsets
2)
http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-and-filter.html
3)
From
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway-local.html#_dangling_indices
When a node joins the cluster, any shards/indices stored in its local data/
directory which do not already exist in the cluster will be imported into
the cluster by default.
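In the 1.x local gateway module this behavior could be tuned in elasticsearch.yml; the setting names below are as that docs page described them, so verify against your version:

```yaml
# Controls what happens to dangling indices found in the local data/ directory:
#   yes    - import them into the cluster (the default)
#   closed - import them, but leave them in the closed state
#   no     - do not import; delete after the dangling timeout elapses
gateway.local.auto_import_dangled: yes
gateway.local.dangling_timeout: 2h
```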
Use the size parameter.
e.g.
$ curl -XGET 'http://localhost:9200/twitter/tweet/_search' -d '{
  "size": 200,
  "aggregations": {
    "my_agg": {
      "terms": {
        "field": "text"
      }
    }
  }
}'
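Note that the top-level size controls how many hits come back, not how many buckets. If the intent is to get more terms out of the aggregation itself, the terms aggregation takes its own size parameter; a sketch of that variant:

```json
{
  "size": 0,
  "aggregations": {
    "my_agg": {
      "terms": {
        "field": "text",
        "size": 200
      }
    }
  }
}
```

Setting the top-level size to 0 here suppresses the hits entirely when only the aggregation result is needed.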
On Thursday, August 14, 2014 2:59:52 AM UTC-7, julie dabbs wrote:
No, I have a query with
.
On Tue, Aug 12, 2014 at 11:33 PM, Ashish Mishra laughin...@gmail.com wrote:
The query size parameter is 200.
Actual hit totals vary widely, generally around 1000-1. A minority
are much lower. About 10% of queries end up with just 1 or 0 hits.
On Tuesday, August 12, 2014 6
How many documents do you typically retrieve? (the value of the
`size` parameter)
On Tue, Aug 12, 2014 at 12:48 AM, Ashish Mishra laughin...@gmail.com wrote:
I recently added a binary type field to all documents with mapping
`store: true`. The field contents are large and as a result the on-disk
index size rose by 3x, from 2.5 GB/shard to ~8 GB/shard.
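A mapping along those lines (the field name is illustrative) would look something like the following; with store set to true the binary payload is written into Lucene's stored-fields files in addition to _source, which accounts for the on-disk growth:

```json
{
  "type1": {
    "properties": {
      "payload": {
        "type": "binary",
        "store": true
      }
    }
  }
}
```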
After this change I've seen a big jump in query latency. Searches which
previously took 40-60ms
I'm evaluating Elasticsearch for a relatively large cluster, with a
per-user-like query pattern. I went through the forum post and the video
below.
https://groups.google.com/forum/?fromgroups#!searchin/elasticsearch/data$20flow/elasticsearch/49q-_AgQCp8/MRol0t9asEcJ
Jörg
On Tue, Jul 29, 2014 at 1:02 AM, Ashish Mishra laughin...@gmail.com wrote:
I'm uploading documents using syntax like the following.
curl -XPOST 'http://localhost:9200/test/type1/_bulk' -d '
{ "index" : { "_id" : "i1", "version": 3, "version_type": "external", "replication": "async", "timeout": "5m" } }
{ "fields": "values etc." }
{ "index" : { "_id" : "i2", "version": 1, "version_type": "external",
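As an aside on the bulk format itself: the body is newline-delimited JSON, with each action/metadata line followed by its source line and a trailing newline at the end. A minimal sketch of assembling such a body (field names are illustrative):

```python
import json

# Documents to index, each with an external version number.
docs = [
    {"_id": "i1", "version": 3, "source": {"field1": "value1"}},
    {"_id": "i2", "version": 1, "source": {"field1": "value2"}},
]

lines = []
for doc in docs:
    action = {"index": {"_id": doc["_id"],
                        "version": doc["version"],
                        "version_type": "external"}}
    lines.append(json.dumps(action))          # action/metadata line
    lines.append(json.dumps(doc["source"]))   # source document line

body = "\n".join(lines) + "\n"  # bulk bodies must end with a newline
```

The resulting string can be POSTed to the _bulk endpoint as in the curl example above.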