        = Preconditions.checkNotNull(settings);
    }

    @Override
    protected void configure() {
        bind(IndexCreator.class).asEagerSingleton();
    }
}
But it produces the aforementioned error. Any ideas?
Thanks,
Thomas
A colleague just pointed out that you can add a search to the dashboard.
Seems to work :)
On Tuesday, 14 April 2015 14:57:43 UTC+1, Thomas Bratt wrote:
Hi,
I can't seem to get access to the original data by drilling down on the
visualizations on the dashboard. Am I missing something? :)
Thomas
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https
Hi,
I can't seem to get access to the original data by drilling down on the
visualizations on the dashboard. Am I missing something?
Many thanks,
Thomas
solve my use case?
I guess I am missing something.
Regards,
Thomas Güttler
On Wednesday, 8 April 2015 at 11:02:35 UTC+2, James Green wrote:
Couldn't you update the document with a flag on a field?
On 8 April 2015 at 09:43, Thomas Güttler h...@tbz-pariv.de
wrote:
We are evaluating if ELK is the right tool for our logs and event messages.
We need a way to mark warnings as done. All warnings of this type should
be invisible in the future.
Use case:
There was a bug in our code and the dev team has created a fix. Continuous
Integration is running,
and
Hi,
I am planning to use ELK for our log files.
I read docs about logstash, elasticsearch and kibana.
Still, the whole picture is not solid for me.
Especially the reporting area is something I can't yet understand.
Kibana seems to be a great tool to do the visualization.
But can I get the
occurs after N hours the warning should be visible again.
Can you understand what I want?
Can this be done with ELK, or I am on the wrong track?
Regards,
Thomas Güttler
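The rule being described (mark a warning as done; re-show it only if it occurs again more than N hours later) can be sketched as plain logic, independent of ELK. The field names here (acked flag, acknowledgement time, reshow window) are assumptions for illustration, not taken from the thread:

```java
// Sketch of the proposed rule, not an ELK feature: a warning marked as done
// stays hidden unless it occurs again more than N hours after being acknowledged.
public class WarningVisibility {
    public static boolean isVisible(boolean acked, long ackedAtMillis,
                                    long lastSeenMillis, long reshowAfterHours) {
        if (!acked) {
            return true; // never acknowledged: always show
        }
        // re-show only if it reoccurred after the grace window
        return lastSeenMillis - ackedAtMillis > reshowAfterHours * 3_600_000L;
    }
}
```

In ELK terms this would presumably mean storing the flag and timestamp on the document and filtering on them at query time.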
will come in only from localhost.
The systems will run isolated.
I see these solutions:
- take a docker container
- do it by hand (RPM install)
- use Chef/Puppet. But up to now we don't use any of those tools.
- any other idea?
What do you think?
Regards,
Thomas Güttler
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-fields.html
On Wed, Feb 25, 2015 at 9:13 AM, James m...@employ.com wrote:
Hi,
I want to have certain data in my elasticsearch index but I don't want it
to be returned with a query. At the moment it seems to
. And I have just one URL rewrite rule (pictured).
Were you getting the same error when it was not working for you?
https://lh3.googleusercontent.com/-oDiu_ncjJlA/VMrEJL-Qj_I/Aic/so2IvrgTQbY/s1600/RewriteRule.png
On Thursday, January 29, 2015 at 3:31:56 PM UTC-8, Cijo Thomas wrote:
a minute.
Just to make sure - disable Output cache for the website - where is it
in IIS Management Console?
On Wednesday, January 28, 2015 at 4:38:01 PM UTC-8, Cijo Thomas wrote:
It's possible to use IIS with the following steps.
1) Disable Output cache for the website you are using
https://lh5.googleusercontent.com/-aBFCh_BZKn4/VMqgnM9ejhI/AiM/zxnsdD-VK8U/s1600/Error.png
Any ideas what I may be missing?
Thanks!
Konstantin
On Thursday, January 29, 2015 at 10:13:40 AM UTC-8, Cijo Thomas wrote:
I have been fighting with this for quite some time
It's possible to use IIS with the following steps.
1) Disable the Output cache for the website you are using as reverse proxy.
2) Run the website in a new app pool which does not have any managed code.
With the above two steps, kibana4 runs fine with IIS as reverse proxy.
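For reference, a minimal sketch of the rewrite rule such a setup might use, assuming the URL Rewrite module plus Application Request Routing are installed and Kibana listens on localhost:5601 (names and port are illustrative, not taken from the thread):

```xml
<!-- web.config fragment: forward all requests to a local Kibana instance -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="KibanaProxy" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://localhost:5601/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```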
On Saturday, December 27,
to update the
whole document when a simple email flag is changed.
Does this kind of index structure point to a particular bug or required
setting? Any rule of thumb for sizing memory relative to index size on disk?
Regards,
Thomas.
Hi,
By removing all my translog files, ES can start without error.
On Wednesday, January 14, 2015 at 2:56:48 PM UTC+1, Thomas Cataldo wrote:
Hi,
I encounter a problem with a large index (38GB) that prevents ES 1.4.2
from starting.
The problem looks pretty similar to the one in
https
being
48GiB for the operating system and 32GiB for ES heap and it still
fails with that.
Any idea or link to an open issue I could follow?
Regards,
Thomas.
1. debug output:
[2015-01-14 12:01:55,740][DEBUG][indices.cluster ] [Saint Elmo]
[mailspool][0] creating shard
[2015-01-14 12:01
I'm using the snapshot/restore feature of Elasticsearch, together with the
Azure plugin to backup snapshots to Azure blob storage. Everything works
when doing snapshots from a cluster and restoring to the same cluster. Now
I'm in a situation where I want to restore an entirely new cluster
?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 January 2015 at 08:19, Thomas Ardal thoma...@gmail.com
wrote:
I'm using the snapshot/restore feature of Elasticsearch, together with the
Azure plugin to backup snapshots to Azure blob storage. Everything works
important not
to constantly hit elasticsearch for those data.
I'm trying to first verify that I will not reinvent the wheel and build my
own solution
Thank you again
On Friday, 2 January 2015 17:04:59 UTC+2, Thomas wrote:
Hi,
I wish everybody a happy new year, all the best for 2015
module to do all these?
thank you in advance
Thomas
On Wednesday, December 10, 2014 4:33:12 PM UTC-3, thomas@beatport.com
wrote:
On Monday, August 11, 2014 1:29:56 PM UTC-4, Mike Topper wrote:
Hello,
I'm having trouble coming up with how to supply a field within a nested
object in the multi_match fields list. I'm using
On Monday, August 11, 2014 1:29:56 PM UTC-4, Mike Topper wrote:
Hello,
I'm having trouble coming up with how to supply a field within a nested
object in the multi_match fields list. I'm using the multi_match query in
order to perform query time field boosting, but something like:
On Thu, Nov 6, 2014 at 11:09 AM, Moshe Recanati re.mo...@gmail.com wrote:
// bulkRequest = client.prepareBulk();
Please fix your code so that it clearly sends only 1000 actions per bulk request.
It looks like you are just increasing the size of the bulk request and
executing it over and over.
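The fix being asked for is to flush and recreate the bulk request every 1000 actions instead of letting one request grow. The chunking itself can be sketched in plain Java (a hypothetical helper, not part of the ES client API):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the batching logic: cut the work into fixed-size
// chunks so each bulk request carries at most `batchSize` actions.
public class Batcher {
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            int end = Math.min(i + batchSize, items.size());
            batches.add(new ArrayList<>(items.subList(i, end)));
        }
        return batches;
    }
}
```

Each resulting chunk would then go into its own freshly created bulk request.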
Bump, I'm having the same problem.
On Thursday, June 12, 2014 10:32:14 PM UTC-5, Ivan Ji wrote:
Hi all,
I want to modify one field's search analyzer from standard to keyword
after the index is created. So I try to PUT the mapping:
$ curl -XPUT 'http://localhost:9200/qindex/main/_mapping' -d '
or later. I have noticed in marvel/sense that the
response is coming in this way and the transformation is happening client
side. Is there a way to change that in the response of ES?
Thank you very much
Thomas
answer in order to see
whether such an approach suits your needs: it depends on your hardware, and on
the structure and partitioning of your data.
Thomas
On Wednesday, 17 September 2014 13:41:55 UTC+3, P Suman wrote:
Hello,
We are planning to use ES as a primary datastore.
Here is my usecase
We receive
I think the correct way to check whether a field is missing is the following:
doc['countryid'].empty == true
Check also:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_document_fields
btw why such an old version of ES?
Thomas
On Wednesday, 17
the indices.memory.index_buffer_size will refer to those specific two
shards?
Thank you very much
Thomas
On Friday, 5 September 2014 11:44:42 UTC+3, Thomas wrote:
Hi,
I have been performing indexing operations in my elasticsearch cluster for
some time now. Suddenly, I have been facing some latency while indexing and
I'm
Hi,
I have been performing indexing operations in my elasticsearch cluster for
some time now. Suddenly, I have been facing some latency while indexing and
I'm trying to find the reason for it.
Details:
I have a custom process which is uploading every interval a number of logs
with bulk API.
What version of ES have you been using? AFAIK, in later versions you can
control the percentage of heap space to utilize with the update settings API.
Try to increase it a bit and see what happens; the default is 60%. Increase it,
for example, to 70%:
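The email cuts off at the colon; assuming the fielddata circuit breaker limit is what is meant (its default on ES 1.x is 60%), the change would presumably be applied along these lines against a running cluster:

```
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.fielddata.breaker.limit": "70%"
  }
}'
```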
Thx Michael,
I will read the post in detail and let you know for any findings
Thomas.
On Friday, 5 September 2014 11:44:42 UTC+3, Thomas wrote:
Hi,
I have been performing indexing operations in my elasticsearch cluster for
some time now. Suddenly, I have been facing some latency while
On Friday, 5 September 2014 11:44:42 UTC+3, Thomas wrote:
Hi,
I have been performing indexing operations in my elasticsearch cluster for
some time now. Suddenly, I have been facing some latency while indexing and
I'm trying to find the reason for it.
Details:
I have a custom process
Got it thanks
On Friday, 5 September 2014 11:44:42 UTC+3, Thomas wrote:
Hi,
I have been performing indexing operations in my elasticsearch cluster for
some time now. Suddenly, I have been facing some latency while indexing and
I'm trying to find the reason for it.
Details:
I have
the version of cloud-aws is 2.2.0.
Is this correct?
Thank you very much
Thomas
the node
Hope it helps
Thomas
On Wednesday, 30 July 2014 12:31:06 UTC+3, Nick T wrote:
Is there a way to have a native java script accessible in integration
tests? In my integration tests I am creating a test node in the /tmp
folder.
I've tried copying the script to /tmp/plugins/scripts
I have noticed that you mention a native Java script, so you have implemented
it as a plugin? If so, try the following in your settings:
final Settings settings = settingsBuilder()
    ...
    .put("plugin.types", YourPlugin.class.getName())
Thomas
On Wednesday, 30
Thnx Mark,
I can see that as you mentioned new version 1.3.1 has been released
Thomas
On Monday, 28 July 2014 11:11:57 UTC+3, Thomas wrote:
Hi,
I maintain a working cluster which is on version 1.1.1 and I'm planning to
upgrade to version 1.3.0, which was released the previous week. I wanted
Thomas
Great,
thanks 4 your reply Mark
On Monday, 28 July 2014 11:11:57 UTC+3, Thomas wrote:
Hi,
I maintain a working cluster which is on version 1.1.1 and I'm planning to
upgrade to version 1.3.0, which was released the previous week. I wanted to
ask whether it is compatible to upgrade
and
get back aggregations that contain fields of both parent and children
documents combined.
Any thoughts, future features to be added in the near releases, related to
the above?
Thank you
Thomas
Hi Adrien, and thank you for the reply,
This is exactly what I had in mind, alongside the reversed-search
equivalent with reverse_nested. This is planned for version 1.4.0
onwards as I see; I will keep track of any updates on this, thanks
Thomas
On Friday, 25 July 2014 14:54:50 UTC+3
The one from the elasticsearch CentOS rpm repository works fine here on EL6.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html
(there are also 1.0 and 1.1 repos, simply adjust the baseurl)
The source is here:
alternatives that you thought of?
Cheers,
On 7/7/14 10:48 PM, Brian Thomas wrote:
I am trying to update an elasticsearch index using elasticsearch-hadoop.
I am aware of the *es.mapping.id*
configuration where you can specify that field in the document to use as
an id, but in my case
,
On 7/7/14 10:48 PM, Brian Thomas wrote:
I am trying to update an elasticsearch index using elasticsearch-hadoop.
I am aware of the *es.mapping.id*
configuration where you can specify that field in the document to use as
an id, but in my case the source document does
not have the id (I
.
On Sunday, July 6, 2014 4:28:56 PM UTC-4, Costin Leau wrote:
Hi,
Glad to see you sorted out the problem. Out of curiosity what version of
Jackson were you using, and what was pulling it in? Can you share your Maven
pom/Gradle build?
On Sun, Jul 6, 2014 at 10:27 PM, Brian Thomas brianjt
I am trying to update an elasticsearch index using elasticsearch-hadoop. I
am aware of the *es.mapping.id* configuration where you can specify that
field in the document to use as an id, but in my case the source document
does not have the id (I used elasticsearch's autogenerated id when
Thomas wrote:
I am trying to test querying elasticsearch from Apache Spark using
elasticsearch-hadoop. I am just trying to run a query against the elasticsearch
server and return the count of results.
Below is my test class using the Java API:
import org.apache.hadoop.conf.Configuration
I am trying to test querying elasticsearch from Apache Spark using
elasticsearch-hadoop. I am just trying to run a query against the elasticsearch
server and return the count of results.
Below is my test class using the Java API:
import org.apache.hadoop.conf.Configuration;
import
to be exactly the same.
And please allow me to ask one more question: since Elasticsearch uses
Joda, is the start of the week always considered to be Monday,
independently of the timezone?
Thanks!!
Thomas
On Tuesday, 17 June 2014 18:31:37 UTC+3, Thomas wrote:
Hi,
I was wondering whether there is a proper
On Tuesday, 17 June 2014 18:31:37 UTC+3, Thomas wrote:
Hi,
I was wondering whether there is a proper Utility class to parse the given
values and get the duration in milliseconds probably for values such as 1m
(which means 1 minute) 1q (which means 1 quarter) etc.
I have found
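Since the thread cuts off here, a minimal self-contained sketch of such a parser. The calendar units are approximated with fixed day counts purely for illustration; Joda/Elasticsearch resolve them calendar-aware, so this is not their internal behavior:

```java
import java.util.concurrent.TimeUnit;

// Rough parser for interval strings like "1m" or "1q". Months, quarters and
// years are approximated (30/90/365 days) for illustration only.
public class IntervalParser {
    public static long toMillis(String value) {
        char unit = value.charAt(value.length() - 1);
        long n = Long.parseLong(value.substring(0, value.length() - 1));
        switch (unit) {
            case 's': return TimeUnit.SECONDS.toMillis(n);
            case 'm': return TimeUnit.MINUTES.toMillis(n);
            case 'h': return TimeUnit.HOURS.toMillis(n);
            case 'd': return TimeUnit.DAYS.toMillis(n);
            case 'w': return TimeUnit.DAYS.toMillis(7 * n);
            case 'M': return TimeUnit.DAYS.toMillis(30 * n);  // approximation
            case 'q': return TimeUnit.DAYS.toMillis(90 * n);  // approximation
            case 'y': return TimeUnit.DAYS.toMillis(365 * n); // approximation
            default: throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }
}
```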
Hi,
I wanted to ask whether it is possible to get with the aggregation
framework the distribution of one specific type of documents sent per user,
I'm interested for occurrences of documents per user, e.g. :
1000 users sent 1 document
500 users sent 2 documents
X number of unique users sent
like for 0 to 50 users
have 2 documents per user etc.
Just an idea
Thanks
Thomas
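What is being asked for is essentially a histogram over per-user counts. As a client-side sketch (assuming the per-user document counts are already available, e.g. from a terms aggregation):

```java
import java.util.Map;
import java.util.TreeMap;

// Client-side sketch: given how many documents each user sent, compute how
// many users sent exactly N documents (the distribution asked about above).
public class Distribution {
    public static Map<Long, Long> histogram(Map<String, Long> docsPerUser) {
        Map<Long, Long> dist = new TreeMap<>();
        for (long count : docsPerUser.values()) {
            dist.merge(count, 1L, Long::sum);
        }
        return dist;
    }
}
```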
On Tuesday, 24 June 2014 13:32:13 UTC+3, Thomas wrote:
Hi,
I wanted to ask whether it is possible to get with the aggregation
framework the distribution of one specific type of documents sent per user,
I'm
the distribution of documents (requests) per unique user
count, of course I can understand that it is a pretty heavy operation in
terms of memory, but we may limit to the top 100 rows for instance, or if
we can workaround it.
Thanks again for your time
Thomas
On Tuesday, 24 June 2014 13:32:13 UTC+3
We had a 2.2 TB/day installation of Splunk and ran it on VMware with 12
indexers and 2 search heads. Each indexer had 1000 IOPS guaranteed assigned.
The system is slow but ok to use.
We tried Elasticsearch and we were able to get the same performance with
the same amount of machines. Unfortunately
Hi,
I'm facing a performance issue with some aggregations I perform, and I need
your help if possible:
I have two documents, the *request* and the *event*. The request is the
parent of the event. Below is a (sample) mapping:
"event": {
    "dynamic": "strict",
    "_parent": {
        "type": "request"
    },
"aggs": {
    "metrics": {
        "terms": {
            "field": "event",
            "size": 10
        }
    }
}
}
}
}
}
}'
On Friday, 13 June 2014 10:09:46 UTC+3, Thomas wrote:
Hi,
I'm facing a performance issue with some
: {
    "date_histogram": {
        "field": "event_time",
        "interval": "minute"
    },
    "aggs": {
        "metrics": {
            "terms": {
                "field": "event",
                "size": 12
            }
        }
    }
}
}
}'
On Friday, 13 June 2014 10:09:46 UTC+3, Thomas wrote:
Hi,
I'm facing a performance
Reelsen wrote:
Hey,
you could index this as a geo shape (as this is valid GeoJSON). If you
really need the functionality for a geo_point, you need to change the
structure of the data.
--Alex
On Sat, May 31, 2014 at 3:36 PM, Brian Thomas mynam...@gmail.com
wrote:
I am new
I am new to Elasticsearch and I am trying to index a json document with a
nonstandard lat/long format.
I know the standard format for a geo_point array is [lon, lat], but the
documents I am indexing have the format [lat, lon].
This is what the JSON element looks like:
"geo": {
    "type": "Point",
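One common approach (a sketch, not specific to any plugin) is to swap the pair before indexing so the array matches the [lon, lat] order geo_point expects:

```java
// Sketch: convert a [lat, lon] coordinate pair into the [lon, lat] order
// that Elasticsearch's geo_point array format (per GeoJSON) expects.
public class GeoSwap {
    public static double[] toLonLat(double[] latLon) {
        return new double[] { latLon[1], latLon[0] };
    }
}
```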
Hello,
I stepped into a situation where I need to truncate a timestamp field
to the week, and I want to do it exactly the way Elasticsearch does
it in the date_histogram aggregation, in order to be able to perform
comparisons. Does anyone know how I should perform the truncation to the
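Joda (and ISO-8601 generally) treats Monday as the first day of the week, so a client-side truncation consistent with a weekly bucket can be sketched with java.time, which follows the same ISO week rules (UTC is assumed here for illustration; a date_histogram may apply a different zone):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoField;
import java.time.temporal.ChronoUnit;

// Sketch: round an epoch-millis timestamp down to Monday 00:00 UTC,
// mirroring how an ISO/Joda weekly bucket starts.
public class WeekTruncate {
    public static long truncateToWeek(long epochMillis) {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC);
        ZonedDateTime weekStart = t.with(ChronoField.DAY_OF_WEEK, 1) // Monday
                                   .truncatedTo(ChronoUnit.DAYS);
        return weekStart.toInstant().toEpochMilli();
    }
}
```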
Hi,
I'm trying to get some aggregated information by Querying Elasticsearch via
my app. What I notice is that after some time I get a CircuitBreaker
exception and my query fails. I can assume that I load too many fielddata
and eventually the CircuitBreaker stops my query. Inside my application
Hi,
I was wondering whether there is a way to reload the scripts on demand
provided under config/scripts. I'm facing a weird situation where, although
the documentation describes that the scripts are loaded every xx amount of
time (configuration) I do not see that happening and there is no way
Hello!
I have been progressing well with aggregations, but this one has got me
stumped.
I'm trying to figure out how to access the key of the parent bucket from a
child aggregation.
The parent bucket is geohash_grid, and the child aggregation is avg (trying
to get avg lat and lon, but only
I'm running a two-node cluster with Elasticsearch 0.90.11. I want to
upgrade to the newest version (1.1.1), but I'm not entirely sure on how to
do it. 0.90.11 is based on Lucene 4.6.1 and 1.1.1 on Lucene 4.7.2. Can I do
the following:
1. stop node 1.
2. install 1.1.1 on node 1.
3. copy data
Thanks Sven,
Yes this would solve a lot of use cases..
Is there anyone who can say whether we should create an issue for
that? The link provided does not mention whether this was finally
opened as an issue.
Thanks
Thomas
On Friday, 11 April 2014 18:53:08 UTC+3, Thomas wrote
}
}
}
}
}
}
}
},
"size": 0
}'
Thanks
Thomas
}
Thomas
Thanks for the examples. Looks quite interesting. If I understand that
correctly, I'd have to write a plugin doing my subquery. Too bad I don't
have much time right now :( Sounds like an interesting challenge :)
I have documents in a parent/child relation. In a query run on the parent,
I'd like to know, if the found parents have children matching some query. I
don't want to filter only parents with some conditions on the child, but
only get the information that they have children matching some query.
I want to return all parents (or those matching some other query
conditions) but in addition to the other data in the document, I want to
compute for each parent, if he has any child with a set error flag. I don't
want to filter on this condition in this case.
On Friday, 28 March 2014
disconnects because of GC, the cluster can fully recover and only one of
the two data nodes can accept data and searches while a node is
disconnected. Is there anything that needs to be changed in the
Elasticsearch code to fix this issue?
Thanks,
Thomas
I'm experiencing split brain problem on my Elasticsearch cluster on Azure,
consisting of two nodes. I've read about the zen.ping.timeout
and discovery.zen.minimum_master_nodes settings, but I guess that I can't
use those settings, when using the Azure plugin. Any ideas for avoiding
split brain
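For context, the usual zen-discovery guard (independent of the Azure plugin's host discovery) is set in elasticsearch.yml. A sketch only, with the caveat that a meaningful quorum needs at least three master-eligible nodes, so with just two nodes this setting alone cannot fully prevent split brain:

```yaml
# elasticsearch.yml sketch: require a quorum of master-eligible nodes
# (majority of 3 nodes = 2) before a master can be elected.
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.timeout: 30s
```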
Ok. Also using the zen.* keys?
, as you suggested.
Regarding Java 8: We're currently running Java 7 and haven't tweaked any GC
specific settings. Do you think it makes sense to already switch to Java 8
on production and enable the G1 garbage collector?
Thanks again,
Thomas
On Thursday, March 27, 2014 9:41:10 PM UTC+1, Jörg
Forgot to reply to your questions, Binh:
1) No I haven't set this. However I wonder if this has any significant
effect since swap space is barely used.
2) It seems to happen when the cluster is under high load but I haven't
seen any specific pattern so far.
3) No there's not. There's a very
is busy.
Thomas
Thanks Clint,
We have two nodes with 60 shards per node. I will increase the queue size.
Hopefully this will reduce the amount of rejections.
Thomas
On Tuesday, March 18, 2014 6:11:27 PM UTC+1, Clinton Gormley wrote:
Do you have lots of shards on just a few nodes? Delete by query is handled
Hi,
I'm trying to keep some scripts within config/scripts but Elasticsearch
seems unable to locate them. What could be a possible reason for this?
When I need to invoke one, ES fails with the following:
No such property: scriptname for class: Script1
Any ideas?
Thanks
Thanks David,
So this is a RabbitMQ river issue; is there a need to open a separate issue?
(I have never done the procedure; I will look into this one)
Thomas
On Wednesday, 26 February 2014 15:48:55 UTC+2, Thomas wrote:
Hi,
We have installed the RabbitMQ river plugin to pull data from our Queue
Just for any other people that might find this post useful, finally we
managed to get the expected functionality as described here
Thanks
Thomas
On Saturday, 15 February 2014 16:53:20 UTC+2, Thomas wrote:
Hi,
First of all congrats for the 1.0 release!! Thumbs up for the aggregation
operation of uploading documents?
Thanks
Thomas
:20 UTC+2, Thomas wrote:
Hi,
First of all congrats for the 1.0 release!! Thumbs up for the aggregation
framework :)
I'm trying to build a system which is kind of querying for analytics. I
have a document called *event*, and I have events of specific types (e.g.
click, open, etc.) per page
I upgraded elasticsearch to 0.90.11 and installed marvel. Congratulations
on a really nice tool!
Now I have a small issue: since marvel is generating quite a lot of data
(for our develop system), I would like to configure an automatic delete of
old data. Is there such an option? I didn't find
ES seems to have ability to run analytic queries. I have read about people
using it as an OLAP solution [1], although I have not yet read anyone
describe their experience. In that respect how does ES analytics
capabilities compare against:
1) Dremel clones [2] like Impala Presto (for near
Finally, I fixed my problem.
There was a mistake in the field discovery.ec2.groups. Instead of a
string, I had to put an array of strings.
And I also forgot to add the tag platform:prod to CloudFormation when
launching my stack.
Fixed!
On Friday, 7 February 2014 14:54:05 UTC+1, Thomas FATTAL
of the machine to something more powerful or to add a new node?
3) Is there a recommended configuration schema in term of number of nodes
in the cluster ?
Thanks a lot for your answer,
Thomas (@nypias)
at 20:51:14, Thomas Ardal
(thoma...@gmail.com)
wrote:
I know and that's the plan. But with 1.0.0 right around the corner and a
lot of data to migrate, I'll probably wait for that one.
Does Marvel only support the most recent versions of ES?
On Tuesday, January 28, 2014 8:43:26
When trying out Marvel on my Elasticsearch installation, I get the error
"There were no results because no indices were found that match your
selected time span" at the top of the page.
If I understand the documentation, Marvel automatically collects statistics
from all indexes on the node. What
authentication setup. Or?
On Tuesday, January 28, 2014 8:01:21 PM UTC+1, Thomas Ardal wrote:
When trying out Marvel on my Elasticsearch installation, I get the error
"There were no results because no indices were found that match your
selected time span" at the top of the page.
If I understand
.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 28 January 2014 at 20:11, Thomas Ardal thoma...@gmail.com
wrote:
As bonus info I'm running Elasticsearch 0.90.1 on windows server 2012. I'm
using the Jetty plugin to force https and basic authentication
Hi Adrien and thanks for the reply,
This sounds like what I was looking for :) Will investigate it
Thanks
Thomas
On Thursday, 23 January 2014 20:25:01 UTC+2, Thomas wrote:
Hi,
I have been working with a parent child schema creation and I was
wondering if there is a way to perform a search
} }
}
}
},
"size": 0
}'
Is there an alternative way of achieving that?
Thank you
On Thursday, 23 January 2014 20:25:01 UTC+2, Thomas wrote:
Hi,
I have been working with a parent child schema creation and I was
wondering if there is a way to perform a search in children documents with
the query
that somehow analyzed is not accepted when setting the mapping
Thomas
On Tuesday, December 17, 2013 5:10:58 PM UTC+2, Thomas wrote:
Hi,
I'm trying to create a mapping for a nested document and I realize that I
cannot set the index type of a string field to analyzed:
{
"action": {
    "_all"