I am querying elasticsearch with the below query:

    SearchResponse searchResponse = getElasticSearchClient()
            .prepareSearch(indexNameStr)
            .setTypes(typeNameStr)
            .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
So you're saying that the indexing operations associated with the unsuccessful create-index requests will nevertheless succeed (i.e. all data will be stored)?
On Sat, Jan 10, 2015 at 8:15 PM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
I think you can safely ignore "failed to process"
Is it possible to create an Elasticsearch connection pool, like a Hibernate pool?
Thanks
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
No, I don't, and I can't really help until I know exactly what you are doing.
But maybe others have ideas?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
Thank you Mark for the help!
I guess I'll go with replication, as it doesn't require stopping the indexing of new documents (and since my index is quite big, that could take some time).
Chris
On Mon, Jan 12, 2015 at 3:51 AM, Mark Walkom markwal...@gmail.com wrote:
Yes but once you move a shard to a newer node
Hi Hannes,
thanks for the reply, but I can't get it to work.
I made some changes. In my database I have the value "Scherzartikel für Feste", and when I search for "Feste" it is found. But if I try "Feste Scherz", it is not found. How can I solve this? Is there a way?
Thanks, Stefan
On Sunday,
Hello Chetan,
Can you confirm that there is enough disk space on that machine to replicate the shard?
Thanks
Vineeth Mohan,
Elasticsearch consultant,
qbox.io ( Elasticsearch service provider http://qbox.io/)
On Mon, Jan 12, 2015 at 4:24 PM, Chetan Dev
Hi Ed,
In my case, I have created a singleton TransportClient and am querying ES with it at frequent intervals to get data. I'm not doing any bulk operations, only searching the index. The threadPool needs to be shut down once Tomcat stops, I guess, but when my web application is up and running is there
What does your mapping for the index look like? Is there any possibility
there could be a mapping conflict?
Christian
On Friday, January 9, 2015 at 10:48:52 PM UTC, Stefanie wrote:
I am having an issue with searching results if the type is not specified.
The following search request works
It's hard to say what's truly causing it, but when your server goes down,
you should also see NoNodeAvailableException. Also, I don't think it
matters too much whether you are doing index or search requests, as all
requests queue handlers (Runnable) to the generic threadpool. What is your
Hi,
what would be the right way to join between two data sources using
Kibana 4 interface?
Assume 2 data sources:
1. source=jobs, fields = {jobid, user, host, exitstatus, starttime, finishtime}
Sample record:
type = jobs; jobid = 1234; user = john; host = myhost; exitstatus = -3002;
You either use parent / child
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/parent-child.html
or index denormalized data in the first place.
Elasticsearch isn't meant to be used with the same models as a relational database.
--
Itamar Syn-Hershko
http://code972.com |
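As a rough sketch of the parent/child option (the type and field names here are invented for illustration, not taken from the thread), the child type declares its parent in the index mapping, and each child document is then indexed with its parent's id as routing:

```json
{
  "mappings": {
    "user": {
      "properties": {
        "name": { "type": "string" }
      }
    },
    "activity": {
      "_parent": { "type": "user" },
      "properties": {
        "action": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

With that in place, has_child / has_parent queries can join the two sides at query time, at the cost of extra memory and slower queries compared to a denormalized index.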
I'd like to create a mapping on an index, specifying a particular field,
TCNAME, as a non-analyzed string. This seems straightforward when you're
mapping against a field at the root node, or even a field nested deeper
within a document in cases where there is a predictable pattern of field
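For the simple root-level case, a sketch of such a mapping might look like the following (the type name my_type is a placeholder; TCNAME is the field from the question):

```json
{
  "mappings": {
    "my_type": {
      "properties": {
        "TCNAME": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```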
I ran into an issue where logs are being written to / and not the /data partition we set up for storing logs.

    df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1        24G   19G  4.1G  83% /
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    /dev/vdb1        99G   60M   94G   1% /data
I have given my document structure for the time of ingestion. My
requirement is to create the schema mapping for only the particular set of
fixed fields at the time of index creation itself, and to further update
the schema mapping, whenever a new dynamic field is established in my
system.
What did you try so far? I mean that dynamic mapping is a default core feature
of elasticsearch. What does not work for you?
David
On 12 Jan 2015 at 22:17, buddarapu nagaraju budda08n...@gmail.com wrote:
I have given my document structure for the time of ingestion. My requirement
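For what it's worth, a minimal sketch of combining fixed fields with dynamic mapping (all names here are invented for the example): the properties section covers the fields known at index creation, and a dynamic_templates entry controls how any new field that shows up later is mapped automatically:

```json
{
  "mappings": {
    "mytype": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ],
      "properties": {
        "created": { "type": "date" },
        "title": { "type": "string" }
      }
    }
  }
}
```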
Is there a way to manage it via Kibana interface just at the query time?
Something like Splunk transaction statement, which allows to group events
into transactions
On Monday, January 12, 2015 at 9:38:56 PM UTC+2, Itamar Syn-Hershko wrote:
You either use parent / child
Ok, that worked. Sort of. I can see the results in the response now. However, I'm not able to refer to hits. I can see them, but there isn't a hit or hits type I can refer to in order to see what is in a hit.

    string? myid = searchResponse.Response.hits.hits[1].nestedObject[id].myid

Doesn't work.
Hey Guys,
I am seeking advice on designing a system that maintains a historical view of a user's activities over the past year. Each user can have different activities: email_open, email_click, item_view, add_to_cart, purchase, etc.
The query I would like to do is, for example,
Find all customers
Hi there,
The following works for me (searching for 'testing' also returns fields with 'test') when set up in my elasticsearch.yml file:

    index:
      analysis:
        analyzer:
          default:
            type: snowball
            language: english

I want to combine this with the soundex I have installed, so I have tried this
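One possible shape for that combination, assuming the phonetic analysis plugin is installed (the filter name my_soundex is invented for the example; check the plugin's documentation for the exact options it supports):

```yaml
index:
  analysis:
    filter:
      my_soundex:
        type: phonetic
        encoder: soundex
        replace: false
    analyzer:
      default:
        type: custom
        tokenizer: standard
        filter: [lowercase, snowball, my_soundex]
```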
Hi,
Yes, Elasticsearch is going to create a filter cache per filter.
But if you want to override this behavior, you can set _cache to false in your query, as follows:

    "filter": {
      "fquery": {
        "query": {
          "query_string": {
Hello
I wanted to give an update regarding a problem that only happens in a multiple-node configuration.
If there is an aggregation query with a top_hits aggregation but the SearchType is set to COUNT, Elasticsearch 1.4.2 will throw the following exception:
Failed to deserialize response of type
Hello,
I am building an application that performs aggregations over time-series
data.
The prevailing advice for my situation seems to be that I should use filters rather than queries to provide scope for my aggregations. The reasons being:
1) I have no need for scoring
2)
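A sketch of that kind of scoping (the field names timestamp and price are invented for the example), using a filter aggregation so the scoping clause is never scored:

```json
{
  "size": 0,
  "aggs": {
    "last_hour": {
      "filter": { "range": { "timestamp": { "gte": "now-1h" } } },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```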
Even more interesting is that, using the head plugin, I can see that the settings on each server JVM seem to report that the index-level settings are what I want them to be, but they are still getting changed to some other values...

    settings: {
      index: {
        codec: {
1. Remove any plugins. Just try to make your Java client connect to your cluster.
2. Use exactly the same Java version and the same elasticsearch version on both ends.
HTH
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 13 Jan 2015 at 07:43, Rikhi rikhi...@gmail.com
I hate to bump my own thread, but I still have not been able to figure this
one out. Just wanted to put it in front of people's eyes again in hopes
that someone might have an idea where these index settings are coming from.
Thanks everyone.
On Thu, Jan 8, 2015 at 3:51 AM, Chris Neal
*Java code -*
    m.put("cluster.name", elasticclustername);
    Settings s = ImmutableSettings.settingsBuilder().put(m).build();
    client = new TransportClient(s).addTransportAddress(
            new InetSocketTransportAddress(elastichost, elasticport));

*issue on elasticsearch server log -*
[2015-01-12
I used Kibana for a couple of tasks at work and I like the charts it generates. What library does it use?
Thanks.
Here are the javascript dependencies:
https://github.com/elasticsearch/kibana/blob/master/bower.json
I assume it's one of those.
On Mon, Jan 12, 2015 at 11:20 AM, Mauro Julián Fernández
mauroj.fernan...@gmail.com wrote:
I used Kibana for a couple of tasks at work and I like the charts it
Thank you, it's D3 (http://d3js.org/).
On Monday, January 12, 2015 at 1:26:26 PM UTC-3, Nikolas Everett wrote:
Here are the javascript dependencies:
https://github.com/elasticsearch/kibana/blob/master/bower.json
I assume it's one of those.
On Mon, Jan 12, 2015 at 11:20 AM, Mauro Julián
I would like to be able to filter documents based on the number of elements in a list of nested objects. In the classic example of posts, I am modeling the comments field as a nested field of the post type. I would like to be able to query all posts where # comments = N.
I do not have a clear
We have a scenario where we know the set of fields we need at the time of index creation. For these we know how to create the field mappings when the index is created.
But,
we have another scenario where users can create custom fields after index creation, and we don't know the field properties of