Ok, I'll try that as soon as I can. One (maybe dumb) question meanwhile, do
the credentials provided when creating the certificate (I followed these
steps :
http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/)
need to match the Azure account credentials (email /
Hi,
Filtering based on results of aggregations is not supported unfortunately.
There is no way to do the equivalent of your SQL query.
On Tue, May 27, 2014 at 4:06 AM, Choon Keat Chew choonk...@gmail.com wrote:
I have an index full of user transactions. credit is the amount of an
individual
Aggregations cannot run on the data that they produce, so this is something
that you would need to do on the client side for now.
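A minimal client-side sketch of that second pass, in Python (the response shape mirrors a terms aggregation with a sum sub-aggregation; all names are illustrative):

```python
# Hypothetical terms-aggregation response: per-user sums of "views".
response = {
    "aggregations": {
        "per_user": {
            "buckets": [
                {"key": "alice", "doc_count": 3, "views": {"value": 120.0}},
                {"key": "bob", "doc_count": 2, "views": {"value": 80.0}},
            ]
        }
    }
}

# The "aggregation of the aggregation" happens on the client:
# here, the average of the per-user view sums.
buckets = response["aggregations"]["per_user"]["buckets"]
avg_of_sums = sum(b["views"]["value"] for b in buckets) / len(buckets)
```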
On Mon, May 26, 2014 at 8:45 PM, John Smith java.dev@gmail.com wrote:
Using ES 1.2
Is there a way to aggregate an aggregation?
So say I have a query for views
Thank you for your reply.
Here are some observations from a couple of days testing:
- Setting up routing manually reduced the aggregation time by about 40%!
- ... however, manual routing caused data to distribute unevenly. I assume
we could have taken steps to improve the distribution, but we
from the documentation:
The size parameter defines how many top terms should be returned out of the
overall terms list. By default, the node coordinating the search process
will ask each shard to provide its own top size terms and once all shards
respond, it will reduce the results to the final
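To make the size/accuracy trade-off described above concrete, here is a request-body sketch (index and field names are made up; `shard_size` controls the per-shard candidate list):

```python
# Terms aggregation: return the top 10 tags overall, but ask each shard
# for its top 50 candidates so the reduced result is more accurate.
body = {
    "size": 0,
    "aggs": {
        "top_tags": {
            "terms": {
                "field": "tag",
                "size": 10,        # final number of terms in the response
                "shard_size": 50,  # per-shard candidate list size
            }
        }
    },
}
```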
Hi
On Thu, May 22, 2014 at 1:47 PM, bagui [via ElasticSearch Users]
ml-node+s115913n4056274...@n3.nabble.com wrote:
Hi,
I want to get the average value of MEMORY field from my ES document. Below
is the query I'm using for that. Here I'm getting the aggregation along
with the hits JSON as well.
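One way to drop the hits from the response is to set `size` to 0 in the request body. A sketch (the field name is taken from the question):

```python
# Request only the aggregation: "size": 0 suppresses the hit list.
body = {
    "size": 0,
    "aggs": {
        "avg_memory": {"avg": {"field": "MEMORY"}}
    },
}
```

On 1.x, adding `?search_type=count` to the URL achieves the same effect.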
Hi,
use bulk indexing!
This will speed you up by at least an order of magnitude.
em
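The `_bulk` endpoint takes newline-delimited JSON: an action line followed by the document, repeated. A minimal sketch of building such a body (index/type names are illustrative):

```python
import json

docs = [{"msg": "log line 1"}, {"msg": "log line 2"}]

# Build the NDJSON body: one action line per document, then the document.
lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "logs", "_type": "log"}}))
    lines.append(json.dumps(doc))
bulk_body = "\n".join(lines) + "\n"  # the trailing newline is required
```

In practice the `helpers.bulk` function in the official elasticsearch-py client builds and sends this for you.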
On Thu, May 22, 2014 at 11:09 AM, 潘飞 [via ElasticSearch Users]
ml-node+s115913n4056261...@n3.nabble.com wrote:
Hi all:
Now, I am trying to index my logs by using the Elasticsearch Python API,
but I only
Hi, I know that Kibana dashboard can easily draw the line or histogram on the
number of events per day. However, I would like to draw the moving average
(maybe 3-day MA or 7-day MA) on the number of events of each day. How can I
do this? Any information or suggestion would be helpful:)
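Kibana itself has no moving-average panel, but one can be computed client side from the daily counts a date_histogram returns. A sketch:

```python
def moving_average(counts, window=3):
    """Trailing moving average over a list of daily event counts."""
    out = []
    for i in range(len(counts)):
        lo = max(0, i - window + 1)   # shorter window at the start of the series
        chunk = counts[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```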
Hey Guys,
I have a 1 node Logstash/ElasticSearch server and the disk went full. In
order to recover I thought I'd just delete some old indices with:
curl -XDELETE 'http://localhost:9200/logstash-2014.05.##
Which went fine, I now have some free disk space. But then the current
shard still
Hey guys,
I had to delete a few indices on my Logstash/ElasticSearch one node
'cluster' since I ran out of diskspace.
After that the current index refused to come online with the following
messages (see below). Google isn't much help with the 'No version type
match' so I was wondering if you
Hi,
I may be wrong, but it seems to me you have a problem with your network. It
may be a flaky connection, a broken NIC, or something wrong with your
configuration for discovery and/or data transport.
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [elastic
ASIC nodo
Is there a new version of the S3 storage solution?
I am working with the elasticsearch 1.1.1 RPM package and I am trying to
connect to an S3 bucket, but the error is the same, and I am getting another
log error:
1) AmazonS3Exception[Status Code: 403, AWS Service: Amazon S3, AWS Request
ID:
May be using function_score?
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html#_using_function_score
The new score can be restricted to not exceed a certain limit by setting the
max_boost parameter. The default for max_boost is FLT_MAX.
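A request-body sketch of the max_boost cap quoted above (the query and function values are made up for illustration):

```python
# function_score query whose combined function score is capped by max_boost.
body = {
    "query": {
        "function_score": {
            "query": {"match": {"title": "elasticsearch"}},
            "functions": [
                {"boost_factor": 2.0}
            ],
            "max_boost": 3.0,  # the function score cannot exceed this value
        }
    }
}
```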
Hi,
we use an index with products that have a default price. For certain
periods of time a product can have a special price.
The mapping looks like this at the moment:
mappings : {
products : {
properties : {
defaultPrice : { type : float },
Gateways have been removed in 1.2:
https://github.com/elasticsearch/elasticsearch/issues/5422
You can snapshot your indices to S3 if you need a backup using snapshot and
restore feature.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 27 May 2014 at
I tried as you suggested :
curl --cert azure-certificate.pem --key azure-pk.pem -H "x-ms-version:
2013-03-01" -H "Content-Type: application/json"
"https://management.core.windows.net/1d4c95fb-d9f1-4594-af6b-bfd3941f1c64/services/hostedservices/elasticpoc?embed-detail=true"
and got the same error as
Sorry, there was a mistake. We had all of the data in Elasticsearch
version 1.1 and then we upgraded to Elasticsearch 1.2 (we did not
re-index the data when we upgraded).
Thanks
Pir
On Tuesday, May 27, 2014 10:33:16 AM UTC+2, Pir Abdul Rasool Qureshi wrote:
No change has been made to
This is similar to the question I asked last week.
https://groups.google.com/forum/#!topic/elasticsearch/w4C1m1u55FA
On Tuesday, May 27, 2014 12:39:22 AM UTC-4, John wrote:
Is there someway in elasticsearch to adapt the following query so that it
only produces unique results. For example
I guess i'll open a bug ticket on github then...see if there are any
thoughts over there.
On Saturday, May 24, 2014 8:50:49 PM UTC-5, JA e wrote:
I frequently need to utilize regular expressions and am having some
difficulties.
For example say I have a full_url of
Thanks, is this something planned?
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web
In the Head plugin web UI page
I could not find the Action and Info buttons.
At first, I thought it was an Elasticsearch version compatibility issue with
the Head plugin.
But I reinstalled Elasticsearch versions from 1.2.0 down to 1.0.0 and the
same issue occurred.
What could the problem be?
Please, somebody help.
We have recently changed some of our code to include an additional call to
SourceLookup.extractValue(path) in the fetch stage. Soon after, we started
experiencing some issues with search stability (with some of our searches
failing to finish, others taking very long).
I can see search lookup
Any idea where to start checking why the cluster restarts by itself?
Regards
On Monday, May 26, 2014 10:52:13 AM UTC-3, Rino Rondan wrote:
Hi:
I had this issue: a cluster node shut down alone, without any interaction
from people. Is that possible?
Server load OK.
Server disk OK.
Servers are in
In a blog post last
month, http://www.elasticsearch.org/blog/resiliency-elasticsearch/, Shay
pointed out that with Lucene 4.8 Elasticsearch will start using sequence
numbers for operations on primary shards. This will speed up recovery so
only data that has changed need be copied on recovery.
Hi,
I have never tried it, so take my advice with a grain of salt.
Your problem looks like a good usage of function scoring:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html#_using_function_score
Functions used in scoring can have
Hi Christof,
Another idea, more like a brute-force approach: keep a current_price
field and update it daily with a cron job, either with the default price or
with the promotional price. This way you only need to change very little
in your queries (sort by the new field) and just add a job to
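A sketch of what that nightly job would compute and write, assuming a partial-document update via the `_update` endpoint (field and function names are illustrative):

```python
def current_price(default_price, promo_price=None, promo_active=False):
    """Value the nightly cron job would write into a current_price field."""
    if promo_active and promo_price is not None:
        return promo_price
    return default_price

# Partial-document update body the job could send to /index/type/id/_update.
update_body = {"doc": {"current_price": current_price(10.0, 8.0, True)}}
```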
Could someone walk me through getting my cluster up and running. Came in
from long weekend and my cluster was red status, I am showing a lot of
unassigned shards.
jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "midlogcluster",
  "status" : "red",
Yes, by full cluster restart I meant shutting down all nodes and then
starting them up again, which means downtime. However, after thinking about
the issue over the long weekend, I wrote a simple utility that cleans up
snapshots without need to restart the cluster
-
xml-to-es is a new Node.js module that converts XML (or SGML) into a JSON
object suitable for indexing with ElasticSearch. The problem I am
addressing is the unsuitability of the raw JSON typically output by XML
parsers. xml-to-es offers configuration options for flattening and
streamlining
Is this a repeated test? There might be some cache loading going on during
the first request. Values must be loaded into the cache before they can be
filtered on. Try a repeated test.
--
Ivan
On Mon, May 26, 2014 at 11:12 PM, Hui dannyhui1...@gmail.com wrote:
Hi All,
My elasticsearch
Did you upgrade to 1.2? Dynamic scripts are now disabled by default.
https://github.com/elasticsearch/elasticsearch/pull/5943
--
Ivan
On Mon, May 26, 2014 at 11:16 PM, Pratik Poddar pratik.ph...@gmail.com wrote:
My server was running fine until I start getting this error for search
Hello,
I am using the JDBC river plugin (latest version with the name
elasticsearch-river-jdbc-2.2.1.jar on ES 0.90.5) and recently found that
sometimes a new river takes a long time to start. I am creating the river
and waiting for full river population by periodically counting indexed
Looks like I can answer my own question. From the docs:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations.html#_values_source
When both field and script settings are configured for the aggregation, the
script will be treated as a value script. While
It took quite a bit of figuring out, but we succeeded in registering a
rootMapper and using the ParseContext. Thanks.
We don't have any plans for implementing SPARQL on top of ElasticSearch.
Siren can do joins which perform better than blockjoins, especially for
deep nesting, but it still is a
Hey,
Can you post your solution if you figured it out?
I am having a similar issue: I need to filter search results based on a
script_field. I don't want to use filter_script, though, because I am using
facets and I want my records to be filtered out for facets too.
Do you know if I can extend any
I think (but I might be wrong) that you can use a function_score and run a
script like:
script_score : {
    script : "_score > 1.0 ? 1.0 : _score"
}
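Wrapped into a full request body, that idea might look like this (the match_all query and the boost_mode setting are illustrative assumptions, not from the original post):

```python
# Cap the relevance score at 1.0 via a script_score inside function_score.
body = {
    "query": {
        "function_score": {
            "query": {"match_all": {}},
            "script_score": {"script": "_score > 1.0 ? 1.0 : _score"},
            "boost_mode": "replace",  # use the script result as the final score
        }
    }
}
```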
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 27 May 2014 at 13:56:32, lyp...@gmail.com
I have this query:
POST /contents/post/_search
{
  "query" : {
    "match" : {
      "postMessage" : "game of thrones"
    }
  },
  "size" : 10
}
I'm getting several results but two of them have the same id and they're
the same exact document (as seen below).
On what terms can
Unfortunately I cannot help you but I am wondering how to do the same thing.
On Friday, March 7, 2014 12:29:18 AM UTC+9, eune...@gmail.com wrote:
Thanks!
I still can't seem to find these settings.
Apologies in advance if I am just missing them...
indices.memory.index_buffer_size
Looking through the docs, it doesn't seem like we can change the
index_buffer_size through the cluster update api.
For all my previous ES versions (up to and including 1.1.1), my Java code
passed the
org.elasticsearch.action.search.SearchOperationThreading.THREAD_PER_SHARD value
to the SearchRequestBuilder.setOperationThreading API.
Now with ES 1.2, the Javac build fails as the SearchOperationThreading
Removed: https://github.com/elasticsearch/elasticsearch/pull/6042
--
Ivan
On Tue, May 27, 2014 at 12:11 PM, InquiringMind brian.from...@gmail.com wrote:
For all my previous ES versions (up to and including 1.1.1), my Java code
passed the
Up through ES 1.1.1 the following Java snippet was able to take an array of
one or more index specifications (e.g. test*, ix*, sgen) and create a list
of index names that match (e.g. test1, test2, test3, ix1, ix2, sgen):
/* Create array of 1 or more index specifications */
String[]
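Since the API used above changed in 1.2, one client-side fallback is to fetch the full list of index names (e.g. from the cluster state) and expand the wildcard specifications locally. A Python sketch of just the matching step, with made-up index names:

```python
from fnmatch import fnmatch

# All concrete index names, as fetched from the cluster in practice.
all_indices = ["test1", "test2", "test3", "ix1", "ix2", "sgen", "other"]
specs = ["test*", "ix*", "sgen"]

# Expand the wildcard specifications against the concrete names.
matched = sorted({name for name in all_indices
                  for spec in specs if fnmatch(name, spec)})
```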
Thanks, Ivan!
Hi People,
I use the following query to filter my data. Everything works great, until
I want to use a price-range filter.
This range filter does not do exactly what I expect. I have a product in my
collection with the price=10. When I set the filter to gte:1 - lte:11, it
works, but gte:2 -
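The symptom described (gte:1 matches a price of 10 but gte:2 does not) is exactly what you would see if the price were indexed as a string rather than a number, since string ranges compare lexicographically. A quick illustration, assuming that is the cause; checking the mapping of the price field would confirm it:

```python
# Numerically, 10 is inside the range [2, 11] ...
numeric_in_range = 2 <= 10 <= 11

# ... but as strings it is not, because "10" sorts before "2".
string_in_range = "2" <= "10" <= "11"
```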
Hello,
This question relates to the order of execution of filters and queries.
I have two types of search criterion:
1. A terms query with a few hundred terms. --- (this is obviously
a very expensive query)
2. A geo_bounding_box filter. (this should be
Finally figured out the answer to my own question! Digging through the
deprecated REST API documentation and stumbling across
https://github.com/elasticsearch/elasticsearch/pull/5094/files I found
enough to be able to convert my code to use the new recovery-based API and
no longer use the
Hi all,
We are running 90.11 and we have a cluster with client, master and data
nodes.
Our client nodes are using dedicated 10g memory.
But we are seeing these outofmemory exceptions.
I tried to correlate this log time with logs in our exception but I did not
find any query which we could
I found some relevant info
here:
http://stackoverflow.com/questions/14069593/how-to-deploy-to-azure-with-powershell
The curl command now works. I'm currently redeploying my service, fingers
crossed!
On Monday, May 26, 2014 11:25:58 PM UTC+2, Nicolas Giraud wrote:
Hi,
I've deployed a two
I have never used the geo features, so I could be wrong, but I believe that
geo filters are expensive and should be used as post filters:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-post-filter.html
One of the reasons is that geo filters are not cached by
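A sketch of that layout, with the expensive geo filter moved into `post_filter` so it runs after the main query (field names and coordinates are made up):

```python
# Main query runs first; the expensive geo filter is applied afterwards
# to its results via post_filter.
body = {
    "query": {"terms": {"user_id": ["u1", "u2", "u3"]}},
    "post_filter": {
        "geo_bounding_box": {
            "location": {
                "top_left": {"lat": 40.8, "lon": -74.1},
                "bottom_right": {"lat": 40.7, "lon": -73.9},
            }
        }
    },
}
```

Note that post_filter does not affect facets/aggregations, which is sometimes the point and sometimes a gotcha.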
Lance Lakey wrote:
Lance Lakey wrote:
Leave port 9200 open to everything but only allow search requests, so
that all web clients can access ES but can only search.
Configure application servers to use port 9500 to communicate with ES.
Configure firewalls to only allow application servers access to ES
port 9500.
We are using 90.11 and we have a use case where we have the following type:
accountsearch : {
  dynamic : strict,
  properties : {
    Name : {
      index : not_analyzed,
      norms : {
        enabled : false
      },
      index_options : docs
Clearing the field cache solves this issue, but we need a final solution.
On Tuesday, 27 May 2014 14:55:18 UTC-7, VB wrote:
We are using 90.11 and we have a use case where we have the following type:
accountsearch : {
  dynamic : strict,
  properties : {
    Name : {
I confirm that this works. I simply needed to upload my PEM certificate to
Azure under Settings/Management Certificates.
Simply uploading it with the cloud service is not enough ... dumb mistake
in the end ;-)
On Tuesday, May 27, 2014 11:15:10 PM UTC+2, Nicolas Giraud wrote:
I found some
You should upgrade ES, there were bugs fixed regarding cluster update
service and rivers.
Jörg
On Tue, May 27, 2014 at 6:44 PM, André Morais ano...@gmail.com wrote:
Hello,
I am using the JDBC river plugin (latest version with the name
elasticsearch-river-jdbc-2.2.1.jar on ES 0.90.5) and
Yes, it is (not only) relevant to library catalog indexing, because
Bibframe, a new project by Library of Congress, is built on RDF, and
next-generation library systems will embrace W3C semantic web technologies.
The RDF data I generate is indexed in JSON-LD format into Elasticsearch but
for
I have a weighted set of labels for each document (e.g. TECHNOLOGY=0.5,
FUNNY=0.1, ...). The weight for each label is how strongly the document
has that label. Each document has a different set of labels, with different
weights. There are thousands of labels in total, although each document
Perhaps the top hits aggregation coming in 1.3 could help:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-top-hits-aggregation.html
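A request-body sketch of top_hits nested under a terms aggregation, which is the usual "one best document per group" pattern (index and field names are illustrative; top_hits requires ES >= 1.3):

```python
# Group by a field and return the single best-scoring document per bucket.
body = {
    "size": 0,
    "aggs": {
        "per_domain": {
            "terms": {"field": "domain"},
            "aggs": {
                "best_hit": {"top_hits": {"size": 1}}
            },
        }
    },
}
```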
--
Ivan
On Tue, May 27, 2014 at 5:36 AM, ES USER es.user.2...@gmail.com wrote:
This is similar to the
hi
I am writing a real-time analytics tool using Kafka, Storm and Elasticsearch,
and want an Elasticsearch setup that is write-optimized for about 80K/sec
inserts (7-machine cluster).
For high performance I am using bulk UDP to batch-insert my
docs (each doc is about 300B, only 4 fields are
What sort of hardware are you on?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 28 May 2014 13:44, Zealot Yin 0xmal...@gmail.com wrote:
hi
I am writing a real time analytics tool using kafka,storm and
CPU E5-2620 12 cores
MEM 64G
df -lhT
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
/dev/sda3   ext2  1.4T  379G  962G   29%   /home
uname -a
2.6.32_1-9-0-0 10 17:22:16 CST 2013 x86_64 x86_64 x86_64 GNU/Linux
elasticsearch started with *bin/elasticsearch -Xms10g -Xmx10g
You're probably running into I/O issues with multiple instances per
physical machine. Are you monitoring your disk performance?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 28 May 2014 13:56, Zealot Yin
I/O util is 50%+ and sometimes 100%.
I will try one big instance per machine right now.
Do you have any configuration optimization suggestions?
On Wednesday, May 28, 2014 11:59:45 AM UTC+8, Mark Walkom wrote:
You're probably running into I/O issues with multiple instances per
physical, are you
My understanding is that Lucene now provides checksums, but that sequence
numbers are functionality built into Elasticsearch.
I do not think that functionality has been released with 1.2, but I could
be wrong. The item below it, zen discovery, is definitely not part of 1.2
--
Ivan
On Tue,
You could test with a single index and get an idea of write performance on
a single one. One nice thing about ES is that it scales almost linearly:
the more concurrent indices set up for writing, the better the write
performance.
Hi All,
My Elasticsearch version is 1.1.1. I have a 20-shard, 0-replica index
with 4M docs.
It is fast (1ms) for my query.
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 20,
    "successful" : 20,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
My server was running fine until I started getting this error for search
queries:
RequestError: TransportError(400, u'SearchPhaseExecutionException[Failed to
execute phase [query], all shards failed; shardFailures
{[bgh1FwvvTmydjtDhY-reCA][article-index][3]:
Thank you David for your reply. I don't think port 9200 is open on my
server, as I am not able to telnet to that port. Should I go ahead and ask
my service provider to open this port?
But I feel it is a security issue if port 9200 is open.
Kindly suggest if there is a better way to access the elastic
Should I go ahead and ask my service provider to open this port?
If you need to access Elasticsearch from outside, you need to do whatever is
needed. Opening port 9200 is one solution.
But it is a security issue for sure.
That said, you could add an NGINX layer to secure access, or follow Nik's
advice
When we apply a terms aggregation and sort by a min sub-aggregation, the
result is not stable.
Sorting by values works great, but objects with the same values do not keep
a permanent position.
Example: we change the number of objects, and objects with values of 20
and 30 begin to behave strangely:
1:
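One workaround, assuming the tie between equal values is the cause, is a deterministic client-side re-sort that breaks ties on the term itself (the bucket shape below is illustrative):

```python
# Buckets as they might come back, ordered by the min value; "a" and "b"
# tie at 20.0 and can flip between requests.
buckets = [
    {"key": "b", "min_price": {"value": 20.0}},
    {"key": "a", "min_price": {"value": 20.0}},
    {"key": "c", "min_price": {"value": 30.0}},
]

# Re-sort with the key as an explicit tie-breaker, so equal values
# always come back in the same order.
stable = sorted(buckets, key=lambda b: (b["min_price"]["value"], b["key"]))
order = [b["key"] for b in stable]
```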
Hi everyone,
I have an index where one data field is of long type and the other of
string type: PID and Batch_Name. I'm trying to do a query that
returns the entry with PID = 25747 and batch name = ZEJINNSP05. That's
how I'm doing the query:
hi all,
I am wondering whether the shutdown API is blocking or not.
curl -XPOST 'http://localhost:9200/_shutdown'
I need a blocking shutdown API so I know the process has actually shut down.
Any ideas?
Ivan
How do I configure ES to listen on http.port 9200 and 9500 ?
I've tried a couple variations in the config but afaict ES always picks
only one port
You cannot do that.
What is the use case?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 27 May 2014 at 10:07:40, Lance Lakey (lancela...@gmail.com) wrote:
How do I configure ES to listen on http.port 9200 and 9500 ?
I've tried a couple variations
Leave port 9200 open to everything but only allow search requests so that
all web clients can access ES but can only search
Configure application servers to use port 9500 to communicate with ES
Configure firewalls to only allow application servers access to ES port 9500
Allow all commands on
No change has been made to settings. It is like when we were running
version 1.0.3 without any issues. We prepared another machine with
Elasticsearch 1.2 and re-indexed all of the data.
After that the issue started to appear.
I am thinking of creating new index and re-indexing the data again, any
Hi, not sure to be honest.
Kibana is a JS interface so I don't think it makes sense to alert from it.
You could monitor the results stored in ES with Nagios/Zabbix/your
monitoring of choice, parse the JSON result, and alert based on that.
We've used Logstash's statsd module to send data we are interested
Hello,
I tried the new top_hits aggregation and made it work on denormalized data.
However, when I tried to add a filter I ran into the following exception:
[2014-05-27 11:32:12,869][DEBUG][action.search.type ] [Cap 'N Hawk]
failed to reduce search
Hey Nicolas,
The 403 status code from Azure basically means that your credentials are
incorrect.
It means to me that your certificate is either invalid in
/home/elasticsearch/azurekeystore.pkcs12
You could try
curl --cert azure-cert.pem --key azure-pk.pem -H "x-ms-version: 2013-03-01" -H
I run one machine that gathers all the Marvel data from the main ES
cluster. Works fine and Marvel shows me all the information from the main
cluster.
The issue is when I start Sense: it only connects to the Marvel ES instance
and not the main one. How can I get it to connect to the main
Use this input filter in Logstash to search the logs
http://logstash.net/docs/1.4.1/inputs/elasticsearch
On Tuesday, May 27, 2014 9:02:35 AM UTC, NF wrote:
Hi,
We're using Kibana/Elasticsearch to visualize different kinds of logs in
our company. Now, we would need a feature that would allow
Hello guys,
I'm trying to find a way to solve this case with two types:
product : {
  properties : {
    kbi_id : { type : string, index : not_analyzed },
    agreement_status : { type : string, index : not_analyzed },  // values Y/N
    product_code : { type :