Exactly what I was looking for. Works like a charm :)
BTW, I don't use ElasticsearchIntegrationTest. Maybe I should, but since the
tests were written against version 0.90.3 I haven't had the time to upgrade
them to the new testing framework.
I'm not sure how much time this would take.
Anyway thanks
We had the same problem.
Reason: the application server uses an older version of log4j than ES needs.
On Wednesday, December 4, 2013 at 19:31:26 UTC+1, Justin Uang wrote:
I get the same problem too. Our configuration has us using the
TransportClient, and we have a single node cluster using the
Hi,
we use ES from an old JBoss to connect to a new ES server cluster (2 nodes)
with a transport client.
At this point (yes, it should be improved) we use Java 6 with JBoss, so we
use ES version 1.1.1 as the client. We also use Lucene in a JBoss application,
with the same version that ES uses (4.7.2)
Anyone would have a hint on this? :)
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the
Hi,
I've just given ElasticsearchIntegrationTest a try, as I used to create
my own local node with some pre-seeded data.
It seems nice, but I'm unable to reproduce what I had when setting up my own node.
I've defined my PluginTest class as follows:
@ClusterScope(scope = Scope.SUITE)
public class
This perhaps sounds like a non-ES use case, but I would like any document
that has a certain flag set to boost to the top of the results.
About 10% of my documents have the flag. All documents (disregarding the
flag) are sorted on _timestamp. All fields have a default _boost of 1.0.
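One way to sketch this (assuming the flag field is called `flagged`, which is my guess) is a `function_score` query that boosts flagged documents, then sorting on `_score` before `_timestamp`:

```json
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "functions": [
        {
          "filter": { "term": { "flagged": true } },
          "boost_factor": 10
        }
      ]
    }
  },
  "sort": [ "_score", { "_timestamp": { "order": "desc" } } ]
}
```

Note the sort order matters: with `_timestamp` first, the score would never be consulted.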
hi
Elasticsearch noob here, so forgive me if I get terminology wrong, etc.
Basically I'm loading a bunch of documents into an Elasticsearch index,
currently sitting at 17,000. One of the fields I'm using is called md5,
and it's not_analyzed in the mapping (I also tried without, to solve this
I did extensive searching but only found 'shut down the node, then remove
it'. We started with an ELK stack on a single box, then created a fresh
3-node ES cluster and re-pointed Kibana/Logstash to the new cluster. CentOS
6.5, latest ES version (1.3?)
My elasticsearch.yml file only has the
Hi, I started using Elasticsearch just recently.
Can someone help me write 3 queries?
1. The first searches for all the words entered in the search input in
field1, and matches only if it finds them all, exactly in that order
2. The second searches for one or more of the words you were looking for, always
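For what it's worth, query 1 sounds like a `match_phrase` (all words, in order) and query 2 like a plain `match` with `operator` set to `or` (one or more words). A sketch, with made-up search text:

```json
{ "query": { "match_phrase": { "field1": "quick brown fox" } } }
```

```json
{ "query": { "match": { "field1": { "query": "quick brown fox", "operator": "or" } } } }
```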
Hey guys,
Is there actually a way for Elasticsearch to cache some API requests in
memory, so that they don't have to be loaded from disk every time?
E.g. if you open an ES-stored dashboard in Kibana, Kibana executes curl
syslogserver:9200/_nodes and after that curl
Ramdev,
Do your Spark nodes all have access to all of the ES nodes on port 9200?
The connector joins the ES cluster and will query multiple nodes, not just
the one configured in the JobConf. This bit me in my tests, as I was
pointing Spark at a load balancer and hadn't given direct access to
Hm, I might have figured this out. I've been testing the various cluster
health plugins; one nice one is Kopf. It had an option 'shutdown
node'... so out of curiosity I clicked it. Success!? The node is gone...
So I'm thinking that if I'd run the shutdown command from one of the 'good'
cluster
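For reference, what Kopf's button does can also be done from the command line with the 1.x nodes shutdown API (the node name here is a placeholder):

```shell
curl -XPOST 'http://localhost:9200/_cluster/nodes/<node-name>/_shutdown'
```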
Hi,
So I have what you might consider a large set of data.
We have about 25k records in our index, and the disk space taken up is
around 2.5 GB, spread across a little more than 4,000 indices. Currently
our master node is set for 6 GB of RAM. We're seeing that after loading
this data
Hi,
I don't think 24k documents is large data.
What is strange to me is the 4,000 indices.
This is strange... how many indices do you need?
On my cluster I have: Nodes: 8, Indices: 89, Shards: 2070, Data: 4.87 TB
When are you running OOM? Example query(ies)? How many nodes? Some more
info please
Hi all,
When I search for an attribute value, I get JSON back.
My concerns are the following:
1. The JSON does not contain the actual data.
2. The JSON contains the ID.
Please let me know your comments.
thanks
navjyothi
Hi, Andrew
Each index has 5 shards, and each shard has 2 replicas. There are 670 indices
now, so there are 670 days' worth of data. But the data per index is not
even: a small one may only have 100 KB (old dates), a big one more than 1 GB
(recent dates).
curl 'localhost:9200/_cat/indices/?v'
health
Georgi,
Thanks for the quick reply!
I have 4k indices. We're creating an index per tenant; in this
environment we've created 4k tenants.
We're running out of memory just letting the loading of records run.
John
On Tuesday, November 4, 2014 10:15:15 AM UTC-5, Georgi Ivanov wrote:
Hi,
I
This was happening when I tried it with the plugin UI.
On Tuesday, November 4, 2014 9:24:13 PM UTC+5:30, naveen gayar wrote:
Hi all,
When I search for an attribute value, I get JSON back.
My concerns are the following:
1. The JSON does not contain the actual data.
2. The JSON contains the ID.
Hi,
is there a possibility to use HDFS directly as storage for
Elasticsearch (other than mounting it via NFS)?
So far I haven't found anything, but I saw that Solr has the ability to use
HDFS directly as storage (via an implementation of the Lucene Directory
abstraction). Is something like
So you run OOM when you index data ?
If so :
How do you index the data ?
Are you using BulkRequest ?
Which programming language are you using ?
Are you using multiple threads to index ?
If you are using bulk requests, you should limit the size of each bulk.
You can also tune the bulk request pool
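For anyone following along, a bulk request over REST interleaves one action/metadata line with one source line per document (index, type, and values here are made up):

```shell
curl -XPOST 'localhost:9200/_bulk' --data-binary '
{ "index": { "_index": "myindex", "_type": "mytype", "_id": "1" } }
{ "field1": "value1" }
{ "index": { "_index": "myindex", "_type": "mytype", "_id": "2" } }
{ "field1": "value2" }
'
```

Keeping each bulk to a few hundred or a few thousand documents (or a few MB) is the usual advice.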
Georgi,
I'm indexing the data through regular index request via java
final IndexResponse response = esClient.client()
        .prepareIndex(indexName, type)
        .setSource(json)
        .setRefresh(true)
        .execute()
        .actionGet();
json in this case is a byte[] with the json data in it.
The
Hi,
I know that facets are deprecated and going to be removed eventually.
But we currently use facets, and we're fine with their current behavior.
My question is: do aggregations offer any improvement in performance? Or
maybe a degradation due to the added functionality?
Any experience?
We're
And actually, now that I'm looking at it again, I wanted to ask why I need
to use setRefresh(true)?
In my case, we were not seeing index data updated quickly enough after
indexing a record; setting refresh = true fixed that for us. If there's
a way to avoid it, that might help me here?
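One alternative (a sketch; `myindex` is a placeholder) is to index without setRefresh(true) and either refresh once after a batch, or lower the index refresh interval:

```shell
curl -XPOST 'localhost:9200/myindex/_refresh'

curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index": { "refresh_interval": "1s" }
}'
```

That way each individual index request stays cheap.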
On
Should clauses used alongside must clauses only matter during
queries (not filters), since they contribute to the scoring of a document.
The should clauses will improve the score of the documents that match them.
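A minimal sketch of what that looks like (field names are invented):

```json
{
  "query": {
    "bool": {
      "must":   [ { "term": { "status": "active" } } ],
      "should": [ { "term": { "tag": "featured" } } ]
    }
  }
}
```

Only documents matching the must clause are returned; those that also match the should clause score higher.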
--
Ivan
On Mon, Nov 3, 2014 at 5:51 PM, kazoompa rha...@p3g.org wrote:
Are you using REST? If so, Jorg wrote a plugin to help with such a task:
https://github.com/jprante/elasticsearch-arrayformat
--
Ivan
On Mon, Nov 3, 2014 at 8:36 AM, Lasse Schou lassesc...@gmail.com wrote:
Hi,
I want to know if it's possible to disable the _index, _type, _id
and _score
Hello, have you fixed it? What was the problem?
E.g. an index has 100 documents with fields A, B, C ... Z.
I want to retrieve a list of all the A values from the 100 documents in the index.
If it's "only" 100 docs, you can add .setSize(100) to your search query plus
.addFields("a"):
client.prepareSearch().setSize(100).addFields("a").get();
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
Hey guys,
I have a few questions:
1) I was wondering if anyone knows how to limit how much memory
Elasticsearch uses while indexing.
During our loading jobs we typically only see 60-70% heap usage before the
garbage collector kicks in, but recently we saw some large traffic while loading
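On question 1, one knob that may be relevant (a sketch, not a complete answer) is the shared indexing buffer, set in elasticsearch.yml:

```yaml
# Fraction of the heap shared by the indexing buffers of active shards
indices.memory.index_buffer_size: 10%
```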
We are pretty comfortable with Lucene and have used it extensively but we'd
like to move from our existing proprietary application layer to using
ElasticSearch (support for merging results across shards, replication). We
have one key challenge, which I'm doing my best to describe below, I'd
Cool! I'll definitely take a look.
@Elasticsearch core team: you should consider adding this as a feature.
2014-11-04 17:59 GMT+01:00 Ivan Brusic i...@brusic.com:
Are you using REST? If so, Jorg wrote a plugin to help with such a task:
https://github.com/jprante/elasticsearch-arrayformat
If you implement your "tweets that mention apple" condition as a filter, then
it can be cached. Elasticsearch's cache is per segment, so it should stay sane
as you add more documents. That might be enough to make it fast.
The other option is to walk those 1,000,000 documents with a
scan/scroll
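The filter version could be sketched like this (assuming a field named `mentions`, which is invented; term filters are cached by default in 1.x):

```json
{
  "query": {
    "filtered": {
      "query":  { "match_all": {} },
      "filter": { "term": { "mentions": "apple" } }
    }
  }
}
```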
Thanks for the quick response.
Just to be clear: I want to be able to query (slice and dice) against the
set of documents, identified by ID, that a process external to
Elasticsearch has computed as having positive sentiment towards Apple.
So for subsequent queries against the result set (1
I'm running Elasticsearch 1.0 as a service on an Ubuntu 13.10 laptop,
strictly locally. To install the Index Termlist Plugin, I followed the
instructions at https://github.com/jprante/elasticsearch-index-termlist. I
issued the command
./bin/plugin -install index-termlist -url
Thanks!
On Tuesday, November 4, 2014 10:58:44 AM UTC-8, Brook Miller wrote:
We are pretty comfortable with Lucene and have used it extensively but
we'd like to move from our existing proprietary application layer to using
ElasticSearch (support for merging results across shards,
Hi all,
Is there a way to retrieve the term vectors of all documents of a given type
using the Elasticsearch Java API?
Thanks,
Evans
I am exploring term suggestion in Java. Can someone help me with the
following:
1. A Java code example to suggest/find similar terms from a specific field in
an index.
E.g. text entered: money
- Name field values: {abc, money, moneys, mone, xyz}
Possible suggestions with scores should be
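A term suggester request for this (assuming the field is called `name`) might look like:

```json
{
  "suggest": {
    "name-suggestions": {
      "text": "money",
      "term": { "field": "name" }
    }
  }
}
```

The response would typically include close terms like `moneys` and `mone`, each with a score and document frequency.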
Dear all,
I am running the following query:
curl -XPOST 'http://localhost:9200/logstash-2014.11.04/logs/_search?search_type=count' -d '{
  "aggs": {
    "sessions": {
      "terms": {
        "field": "sessionId",
        "size": 0
      },
      "aggs": {
        "sessions_hits": {
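The query is cut off above; guessing that sessions_hits is a top_hits aggregation (available from ES 1.3), a complete body of that shape would be:

```json
{
  "aggs": {
    "sessions": {
      "terms": { "field": "sessionId", "size": 0 },
      "aggs": {
        "sessions_hits": {
          "top_hits": { "size": 1 }
        }
      }
    }
  }
}
```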
The best way I can think of would be to use conditionals in your output,
one per type, then you can directly point to specific templates for each
type.
On 5 November 2014 15:41, Alejandro Alves alejandro.al...@gmail.com wrote:
Hello,
I want to have different indexes depending on the type, i.e.
Anybody? Any help please?
On Tuesday, November 4, 2014 10:30:09 AM UTC+5:30, vijay karmani wrote:
I have a document structure like:
{
  "username": "abc",
  "login_timestamp": "2014-04-11 11:05:00 AM",
  "logout_timestamp": "2014-04-11 11:30:00 AM"
},
{
  "username": "def",
  "login_timestamp": "2014-04-11
Anybody? Any help please?