Hi,
Yesterday I added a new node to my ES cluster, and after that some
of the shards remained unassigned. These shards are the
replicas of the shards on the new node. When I inspected the reason, I
found out that this new node has a different ES version. The new one is
1.3.4.
You shouldn't run multiple versions unless you are upgrading, so it would
make sense to upgrade the other nodes ASAP.
The logs on your nodes should also shed some more light on the problem.
On 6 November 2014 19:29, Umutcan umut...@gamegos.com wrote:
Hi,
From the link below, on parent-child relationships, here is the relevant excerpt.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/parent-child.html
memory vs doc values
At the time of going to press, the parent-child ID map is held in memory as
part of fielddata.
Can anybody help, please?
On Wednesday, November 5, 2014 12:29:39 PM UTC+5:30, vijay karmani wrote:
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from
I am new to elasticsearch. I want to get unique combinations of multiple
field values in the search results of elasticsearch.
Please let me know how we can execute such a query.
Thanks in advance
Hello Vijay,
I can't think of any straightforward and efficient way to implement this.
But you can write your own map-reduce script (a feature in 1.4.0) to
gather this information -
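As a non-scripted alternative, nested terms aggregations can surface every unique combination of two fields, assuming the fields are not_analyzed. This is just a sketch; the field names (`country`, `city`) are hypothetical placeholders.

```python
import json

# Nested terms aggregations: each country bucket contains a sub-bucket
# per city, so every (country, city) combination appears exactly once.
# "size": 0 inside a 1.x terms aggregation means "return all terms".
body = {
    "size": 0,
    "aggs": {
        "by_country": {
            "terms": {"field": "country", "size": 0},
            "aggs": {
                "by_city": {
                    "terms": {"field": "city", "size": 0}
                }
            }
        }
    }
}
print(json.dumps(body, indent=2))
```

The response's nested buckets then hold one entry per unique combination, each with its document count.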
Hi all
I cannot seem to get the has_child query (or filter) to function in Kibana
4. My code works in elasticsearch directly as a curl script, but not in
Kibana 4, yet I understood this was a key feature of the upgrade. Can
anybody shed any light?
The curl script as follows works in
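For reference, a minimal has_child query body looks like the sketch below; the child type (`comment`) and the term values are made-up placeholders, not taken from the original curl script.

```python
import json

# has_child returns parent documents that have at least one child of the
# given type matching the inner query.
query = {
    "query": {
        "has_child": {
            "type": "comment",
            "query": {
                "term": {"author": "alice"}
            }
        }
    }
}
print(json.dumps(query, indent=2))
```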
Hi,
I'm new to Elasticsearch and I tried using the Java API with bulk indexing.
I wrote a simple Java program that tries to insert 100 documents.
At the beginning I'm able to insert 1000 documents in less than 0.5
seconds, while after some thousands of documents it takes more than 2 seconds.
On Thu, Nov 6, 2014 at 11:09 AM, Moshe Recanati re.mo...@gmail.com wrote:
// bulkRequest = client.prepareBulk();
Please fix your code so it clearly only sends 1000 documents per bulk request.
It looks like you are just increasing the size of the bulk request and
executing it over and over.
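The fix being described can be sketched language-neutrally as follows: start a fresh batch every 1000 documents instead of letting one bulk request grow without bound. The `index_bulk` stub stands in for `client.prepareBulk()...execute()` in the Java API.

```python
# Batch documents into fixed-size bulk requests; a new batch is started
# after every flush so no single request grows unboundedly.
BATCH_SIZE = 1000

def index_bulk(batch):
    # Stand-in for sending one bulk request; returns the number of docs sent.
    return len(batch)

def index_all(docs):
    indexed = 0
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == BATCH_SIZE:
            indexed += index_bulk(batch)
            batch = []          # start a new bulk request
    if batch:                   # flush the remainder
        indexed += index_bulk(batch)
    return indexed

print(index_all(range(2500)))  # → 2500
```

Because each batch is discarded after being sent, the cost per bulk request stays constant instead of growing with the total number of documents.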
Here is my sample document:
{
  "jobID": "ace4c888-1907-4021-a808-4a816e99aa2e",
  "startTime": 1415255164835,
  "endTime": 1415255164898,
  "moduleCode": "STARTING_MODULE"
}
- I have thousands of documents.
- I have a pair of documents with the *same jobID* and the module code
would be
Hi,
In Sense plugin, when I made this search query on my elasticsearch cluster :
POST /ses_tis_v5_picardie/d2be6477-f186-4b18-8ffd-eb1ccc3a13c2/_search
{
  "_source": {
    "include": [
      "id",
      "datecrea",
      "datemaj",
      "supprimer",
      "structureid",
      "hdatemaj",
      "hmanip",
Hmm, not good. What does your autocomplete analyzer look like? Can you
post the full stack trace?
Mike McCandless
http://blog.mikemccandless.com
On Wed, Nov 5, 2014 at 7:05 PM, Richard Tier rika...@gmail.com wrote:
An internal error happens when I do a suggest query. I get TokenStream
Hi,
Is there any update on this? What should the workflow be? Deleting and
re-creating may be a bad idea: you may end up with nothing if it fails after
the deletion.
On Wednesday, May 21, 2014 3:33:26 PM UTC+5:30, Chetana wrote:
I am using ES 1.1.1 and hdfs as repository type (Hadoop
Hi Thomas,
Thank you for the hint :-)
I changed it; however, now I'm getting the following error although I'm not
using threads:
Thank you
Regards,
Moshe
Going to add 100
processed 1000 records from -1000 until 0 at 26834
Error VersionConflictEngineException[[twitter][1] [tweet][momo110]:
Thanks Vineeth for your answer. I will give it a shot.
On Thursday, November 6, 2014 3:24:28 PM UTC+5:30, vineeth mohan wrote:
I am in the process of upgrading ES from 0.20.2 to 1.3.4. Below are two
requests to test an analyzer / filter, and although the mapping files are
semantically the same, the results are slightly different.
Can anyone provide some insight as to why they differ (the start_offset,
end_offset and
We're trying to estimate the amount of memory we'll need for the fielddata
cache. Do replicas increase the amount of fielddata cache used? I imagine
that they would but I wanted to get confirmation of this. If I have 3
documents indexed with a string field of 1 character each in a single node
Hi,
Just upgraded to 1.4
I don't see any errors, but only the primary shards are initialized.
Any idea what is happening ?
Georgi
Hi Thomas,
I fixed the code per your suggestion and now initialize a new bulk request
every 1000 documents (code below).
However, the add-document time is still increasing.
Please let me know what's wrong. Thank you in advance.
Moshe
Output:
Going to add 100
processed 1000 records from -1000 until 0 at
I have a document signature indexed (the top 60 most significant terms in
the document) in an ES index, along with a document ID. I would like to
retrieve documents that match at least 3 words in a query string, in the
document signature field. Is it possible to limit search results that
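One way to express "match at least 3 words" is minimum_should_match on a match query; a sketch follows, where the field name `signature` and the query terms are assumptions for illustration.

```python
import json

# A match query requiring at least 3 of the analyzed query terms to
# appear in the document-signature field.
query = {
    "query": {
        "match": {
            "signature": {
                "query": "apple banana cherry date elderberry",
                "minimum_should_match": 3
            }
        }
    }
}
print(json.dumps(query, indent=2))
```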
Yes, I was running different versions. I have already figured it out.
On Thursday, November 6, 2014 1:45:10 AM UTC+5:30, Mark Walkom wrote:
What version of ES are you on, is it the same for both platforms?
On 6 November 2014 00:50, Vijay Tiwary vijaykr...@gmail.com wrote:
I have
My expectation was that the function_score filter would exclude results
where the parent had n number of children, but instead I'm getting all the
results rather than a filtered set.
I could really use some advice here... Or let me know what
I've tried to retrieve array values from my data by using script_fields,
but I received inconsistent results.
My mapping:
{
  "user": {
    "properties": {
      "id": {
        "type": "string",
        "index": "not_analyzed"
      },
      "other_ids": {
        "type": "string",
        "index": "not_analyzed"
(My ES version is 1.3.0)
On Thursday, November 6, 2014 4:10:17 PM UTC+1, Zoltan Balogh wrote:
Found it.
I had less than 15% free space on the disks, and allocation was disabled.
The annoying part is that I had to enable DEBUG in logging.yml just to see
this!
I will file a bug report.
This should be at least a WARNING.
Hope this helps someone else.
Georgi
On Thursday, November 6, 2014
Hello Ivan,
I know this post is pretty old.
I am definitely puzzled by the gist that you provided.
Why are there 2 matches?
"exclude": {
  "span_term": {
    "field1": "dog"
  }
}
I thought we should exclude matches with dog...
Could you please point me to proper information to
We did some performance testing and found that the performance hit from
using DFS was minor.
--
Ivan
On Wed, Nov 5, 2014 at 8:55 AM, Sofiane Cherchalli sofian...@gmail.com
wrote:
Answering myself:
According to ES blog
Node checked for transport / network stats:
"transport": {
  "server_open": 143,
  "rx_count": 33898549,
  "rx_size_in_bytes": 12864852481,
  "tx_count": 33898584,
  "tx_size_in_bytes": 1516049161
},
"network": {
  "tcp": {
    "active_opens":
Hi,
I can apply a terms filter with multiple values in a query like this:
"filter": {
  "terms": {
    "categories": [
      [
        "boats",
        "cars"
      ]
    ]
  }
}
It works!
But if I do the same with an integer field like year, it doesn't work; I get a
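For comparison, a sketch of how a terms filter on an integer field is usually written: the values array is flat, and integers are plain numbers rather than quoted strings. The field name `year` comes from the post; the values are made up.

```python
import json

# terms filter on an integer field: a flat array of numeric values.
filtered = {
    "filter": {
        "terms": {
            "year": [2012, 2013, 2014]
        }
    }
}
print(json.dumps(filtered, indent=2))
```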
Pretty old indeed. As explained briefly, I was migrating a Lucene system to
Elasticsearch and did not understand why the span not queries were not
working, only to discover we had a custom parser, to support syntaxes such
as the one you are expecting.
Span nots are tricky in Lucene, but basically
You would need to create a custom analyzer by basically repeating the
configuration of the snowball analyzer, but adding in the synonym filter.
You can't modify a stock analyzer, unless this has changed (if so, someone
please correct me).
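A sketch of what repeating that configuration might look like: the filter names and ordering below are an approximation of the stock snowball analyzer (standard tokenizer, lowercasing, then stemming), not taken verbatim from the Elasticsearch source, and the synonym list is made up.

```python
import json

# Index settings recreating a snowball-like analyzer with an added
# synonym filter; synonyms run before stemming so both sides stem alike.
settings = {
    "analysis": {
        "filter": {
            "my_synonyms": {
                "type": "synonym",
                "synonyms": ["car, automobile"]
            },
            "my_snowball": {
                "type": "snowball",
                "language": "English"
            }
        },
        "analyzer": {
            "snowball_with_synonyms": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase", "my_synonyms", "my_snowball"]
            }
        }
    }
}
print(json.dumps(settings, indent=2))
```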
--
Ivan
On Wed, Nov 5, 2014 at 6:43 PM, Iqbal Ahmed
You cannot search/filter on a non-indexed field.
--
Ivan
On Wed, Nov 5, 2014 at 11:45 PM, ramakrishna panguluri
panguluri.ramakris...@gmail.com wrote:
I have 10 fields inserted into elasticsearch, out of which 5 fields are
indexed.
Is it possible to search on a non-indexed field?
Thanks in
You can totally use a script filter checking the field against _source.
It's super duper duper slow, but you can do it if you need it rarely.
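A sketch of such a script filter, using 1.x-era inline script syntax; the field name `internal_note` and the compared value are hypothetical. As noted above, this is very slow because every candidate document's _source must be loaded and parsed.

```python
import json

# filtered query whose filter runs a script against _source, allowing a
# check on a field that was never indexed.
query = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {
                "script": {
                    "script": "_source.internal_note == 'urgent'"
                }
            }
        }
    }
}
print(json.dumps(query, indent=2))
```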
On Thu, Nov 6, 2014 at 11:13 AM, Ivan Brusic i...@brusic.com wrote:
Error during creation of the index 1ea7a62f-30cd-42e4-8d0f-4e4b9869916f
org.elasticsearch.client.transport.NoNodeAvailableException: No node
available
at
org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:202)
at
How would index aliases help here?
On Wednesday, November 5, 2014 11:50:34 AM UTC-5, Jörg Prante wrote:
Use index aliases: one physical index, 4000 aliases.
Jörg
On Tue, Nov 4, 2014 at 3:42 PM, John D. Ament john.d...@gmail.com wrote:
Hi,
So I have what you might want to
I've begun using the decay function in order to promote more recent results
in our index. In particular I'm using what's documented here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html
Here's the date example they use (let's assume
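A minimal gauss-decay sketch modeled on that reference page; the date field name `created_at` and the scale/offset/decay values are arbitrary placeholders, not copied from the docs example.

```python
import json

# function_score with a gauss decay on a date field: documents score
# highest near "now" and decay to 0.5 at scale (10 days) past the offset.
query = {
    "query": {
        "function_score": {
            "query": {"match_all": {}},
            "gauss": {
                "created_at": {
                    "origin": "now",
                    "scale": "10d",
                    "offset": "2d",
                    "decay": 0.5
                }
            }
        }
    }
}
print(json.dumps(query, indent=2))
```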
I think you should read
this https://wiki.apache.org/lucene-java/ScoresAsPercentages
it might help you to make a point.
simon
On Wednesday, November 5, 2014 8:42:59 PM UTC+1, Dustin Boswell wrote:
Is there a way to score documents so that the relevance score has a fixed
range, like from 0
We fixed the EdgeNGram tokenizer / filter in the 1.x series, but don't ask
me exactly when; I think it was Lucene 4.4 or so. Those offsets are now
correct, while they were broken before.
Not sure if this helps you debug your problem.
On Thursday, November 6, 2014 1:31:22 PM UTC+1, Ben George wrote:
See kimchy's explanation
https://groups.google.com/forum/#!msg/elasticsearch/49q-_AgQCp8/MRol0t9asEcJ
Jörg
On Thu, Nov 6, 2014 at 7:08 PM, John D. Ament john.d.am...@gmail.com
wrote:
Hi all,
I am new to ELK and my organization is interested in implementing this
framework.
I set up ELK on my machine and am trying to collect logs from a remote server.
But the logs on the remote server are huge (in gigabytes).
I see we can use the logstash shipper or logstash-forwarder. But I
@Ivan
I've picked up your git push and integrated it into the Elasticsearch source
at tag v1.3.5.
After a rebuild, it seems to work perfectly (I am still trying to find the
maximum values for pre and post, no luck so far).
I've been able to figure out how span_not works with this post.
I add it here if
How can I use phrase match in ES?
E.g. a Company Name field can have the following entries:
- USA Tech LLC
- USA Tech Ltd
- Asia USA Tech LLC
- Euro USA Tech
1. I want to write a Java algorithm that will suggest all 4 above as the same.
2. Also, how can I use Jaro-Winkler to perform this
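As a starting point, a fuzzy match query can catch close variants of a company name; a sketch follows, with the field name `company_name` assumed. Note that Elasticsearch's built-in fuzziness is Levenshtein-based, so a Jaro-Winkler comparison would have to be done client-side in Java on the returned candidates.

```python
import json

# Match query with edit-distance fuzziness: variants such as
# "USA Tech LLC" vs "USA Tech Ltd" can still match per-term.
query = {
    "query": {
        "match": {
            "company_name": {
                "query": "USA Tech LLC",
                "fuzziness": 1
            }
        }
    }
}
print(json.dumps(query, indent=2))
```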
Thanks for the reply.
The autocomplete analyzer:
{
  "analysis": {
    "analyzer": {
      "autocomplete": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [
          "filter_pair_shingle",
An announce list would be awesome, but at least something to this list with
the [ANN] or [ANNOUNCEMENT] prefix like David has been doing.
Elasticsearch 1.4.0 and 1.3.5 were released, but there is no announcement
on the list. Elasticsearch also announced a product called Shield, which
should
and of course I missed that announcement due to the overwhelming number
of emails... ;)
On 11/6/2014 12:08 PM, Ivan Brusic wrote:
An announce list would be awesome, but at least something to this list
with the [ANN] or [ANNOUNCEMENT] prefix like David has been doing.
Elasticsearch 1.4.0
Hi all,
Is there a way to retrieve the term vectors of all documents for a given type
using the Elasticsearch Java API?
Thanks,
Evans
How can I fetch all the records that match a text search across all
fields of a specific index using the Java API?
Can we get the list of fields for a specific index and type using Java?
Not answering your question, but you should look at the BulkProcessor class.
It would simplify your code a lot.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 6 nov. 2014 à 15:01, Moshe Recanati re.mo...@gmail.com a écrit :
I do this by getting the mappings for a specific index, then isolating by
type if desired. This takes care of all explicitly mapped fields, and also
any automatically detected and mapped fields.
Especially in the latter case, it's a good way to check and see if
Elasticsearch is guessing your
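The walk over a get-mappings response can be sketched as below; the sample mapping is made up, but the nesting mirrors the `properties` structure the get-mappings API returns, and the same recursion works on the parsed JSON from either the Java or REST client.

```python
# Recursively collect field names (dotted paths) from a mapping's
# "properties" tree, descending into object fields.
def list_fields(properties, prefix=""):
    fields = []
    for name, spec in properties.items():
        path = prefix + name
        if "properties" in spec:            # object field: recurse
            fields.extend(list_fields(spec["properties"], path + "."))
        else:
            fields.append(path)
    return fields

mapping = {
    "user": {
        "properties": {
            "id": {"type": "string"},
            "address": {
                "properties": {
                    "city": {"type": "string"}
                }
            }
        }
    }
}
print(sorted(list_fields(mapping["user"]["properties"])))
# → ['address.city', 'id']
```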
Hello everyone,
I was wondering if someone could clarify this for me: I am storing binary
arrays in an index as stored, un-indexed binary doc values and later
reading the fields from a custom query, which implements the required Lucene
classes (Weight, Scorer).
Inside the scorer I am loading
It may be worth looking at 2 things:
1. Using the latest Elasticsearch version (1.4). Much work went into
optimizing those types of scenarios on the server side.
2. Disabling refresh / flush. I assume this is an ETL process, and as such
this could greatly help.
--
Itamar Syn-Hershko
working in a testing environment currently with about 3.11TB of data,
spread out over 128 indices (one index per day for four months, each index
is about 12-18GB). Currently using 3 master-only nodes, and 6 data-only
nodes. Each index has 5 primary shards and 1 set of replicas for a total of
I have a document with a field message, that contains the following text
(truncated):
Welcome to test.com!
The assertion field is mapped to have an analyzer that breaks that string
into the following tokens:
welcome
to
test
com
But, when I search with a query like this:
{
  "query": {
Glad to know lots of other people have been asking for it too :)
I agree that dividing the default relevance score by some constant (or some
number derived from the results) is a bad idea, for all the reasons that
article describes.
I was hoping there was a non-default scorer that is built to
Elasticsearch 1.4 is out and I can't see any mention that rivers are
deprecated.
Has that (informal) decision been reversed? Or was the timeline
further out? What's the currently recommended approach?
Regards,
Alex.
Hi,
I am new to elasticsearch and want to create a new portal which reads
news from different sites' RSS feeds, analyses them, and shows only the
trending news in the market. I came across a site, republishan(dot)com, doing
the same and using elasticsearch.
Please guide me on how I can get
Hi there,
We started using ES 2 years ago; at that time, we created
multiple clusters, each cluster with a single index.
Current model:
1. Number of clusters: 10
2. Number of indexes per cluster: 1
3. Number of shards per cluster: 5
4. Number of replicas: 1
5. Per-index size: 50 GB
6.
Thanks, Nikolas Everett, for your quick reply.
Can you please provide an example of how to execute this? I tried multiple
times but was unable to get it working.
Thanks in advance
On Thursday, November 6, 2014 9:44:55 PM UTC+5:30, Nikolas Everett wrote: