Hi all,
Currently, I am trying to index documents that have conflicting mappings.
Elasticsearch cannot resolve that, so I am trying to reverse the indexing
approach.
I have disabled dynamic mapping at the root, but I would like to explicitly
provide the mapping for some fields or have the
Hi all,
We have a cluster of 4 nodes, and we constantly add new indexes and remove
old ones as a daily operation. We had a node down for 3 days, and
when starting it again it just added the indexes it had. The thing is that
the name of the new indexes we create everyday depend on the date
Hi,
I have 12 Elasticsearch nodes, with 10Gb ethernet.
I've been having a lot of problems with the performance of snapshots; it
throttles to 20 MB/s even though I set max_snapshot_bytes_per_sec to something
else. I've tried setting it in bytes and in megabytes (500m, 500mb).
I've tried to move a 100GB file
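For what it's worth, max_snapshot_bytes_per_sec is a repository-level setting, not a cluster-wide one, which would also explain it not showing up under _cluster/settings. A hedged sketch of re-registering an fs repository with a higher limit (repository name and path are placeholders):

```
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup",
    "max_snapshot_bytes_per_sec": "500mb"
  }
}
```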
Hi all,
With nested objects, I have noticed that sometimes you do not have to
provide the path (ancestors).
For example,
A : {
    properties : {
        B : {
            type : string,
            properties : {
                C : {
                    type : string
                }
That's dangling index recovery -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway-local.html#_dangling_indices
You can disable it (per that link).
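For reference, a sketch of the elasticsearch.yml knob that page describes for ES 1.x:

```
# elasticsearch.yml -- do not automatically import dangling indices
gateway.local.auto_import_dangled: no
```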
On 3 December 2014 at 20:20, Ernesto Reig erniru...@gmail.com wrote:
Hi all,
We have a cluster of 4 nodes,
Thank you very much. That's exactly what we were looking for :)
On 3 December 2014 at 11:04, Mark Walkom markwal...@gmail.com wrote:
That's dangling index recovery -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway-local.html#_dangling_indices
You can
Hi,
This version of Kibana requires at least Elasticsearch 1.4.0.Beta1
SetupError@
http://monitor-development-east.test.com:5601/index.js?_b=3998:42905:51
checkEsVersion/@
http://monitor-development-east.test.com:5601/index.js?_b=3998:43091:14
I am trying to do some data analysis on the user action logs stored in our
cluster. However, I always get OutOfMemoryError while doing some simple
aggregation queries.
The cluster is built with 10 EC2 r3.large instances (2 cpus, 15GB memory),
8GB is allocated to JVM, the rest is for filesystem
Maybe it's because you are looking at _node when you applied it to _cluster; see
if the settings are returned when you GET _cluster/settings.
Cluster settings are applied to the nodes, but they are not stored in the
same API endpoint.
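For reference, settings applied dynamically through the cluster API appear under those persistent/transient blocks; an illustrative example with a setting that is known to be dynamic:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
```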
On 3 December 2014 at 22:10, Ernesto Reig erniru...@gmail.com wrote:
That was the first place I looked into (GET _cluster/settings), and I just
got:
{
persistent: {},
transient: {}
}
On Wednesday, December 3, 2014 12:29:19 PM UTC+1, Mark Walkom wrote:
Maybe because you are looking at _node when you applied it to _cluster,
see if they are returned when you
There is no limit on the number of client connections imposed by ES.
If you see NoNodeAvailableException you may have hit a connect timeout of
the client. Connect timeout is 30 secs IIRC.
Jörg
On Wed, Dec 3, 2014 at 4:06 AM, Ramdev Wudali agasty...@gmail.com wrote:
Hi:
Is there a
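The connect timeout Jörg mentions can be raised on the client side; a hedged sketch of settings keys from the 1.x Java TransportClient (values are illustrative, not recommendations):

```
client.transport.ping_timeout: 60s
client.transport.nodes_sampler_interval: 10s
```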
The correct spelling is Elasticsearch.
The change was applied on 6 Jan 2014 almost a year ago for ES 1.0
See
https://github.com/elasticsearch/elasticsearch/commit/fa16969360ea43667ad11f827bbf86718c18fcc6
Jörg
On Wed, Dec 3, 2014 at 1:17 AM, TimOnGmail timbes...@gmail.com wrote:
Hey folks...
Then that's not a setting you can apply via the API, you need to edit the
elasticsearch.yml and restart, unfortunately.
On 3 December 2014 at 22:31, Ernesto Reig erniru...@gmail.com wrote:
That was the first place I looked into (GET _cluster/settings), and I just
got:
{
persistent: {},
I want to upgrade from ES 1.0.2 and just tried ES1.4.1 but completion
suggester throws exception:
Caused by: java.lang.ClassCastException:
org.elasticsearch.index.mapper.core.StringFieldMapper cannot be cast to
org.elasticsearch.index.mapper.core.CompletionFieldMapper
at
Ok, thank you very much for the answer :)
On 3 December 2014 at 12:59, Mark Walkom markwal...@gmail.com wrote:
Then that's not a setting you can apply via the API, you need to edit the
elasticsearch.yml and restart, unfortunately.
On 3 December 2014 at 22:31, Ernesto Reig erniru...@gmail.com
Snapshots are at the segment level. The more segments stored in the
repository, the more segments will have to be compared to those in each
successive snapshot. With merges taking place continually in an active
index, you may end up with a considerable number of orphaned segments
stored in
I will include my response to the original post:
Snapshots are at the segment level. The more segments stored in the
repository, the more segments will have to be compared to those in each
successive snapshot. With merges taking place continually in an active
index, you may end up with a
Thanks for bringing this issue up, I opened
https://github.com/elasticsearch/elasticsearch/issues/8760
On Tue, Dec 2, 2014 at 10:23 PM, SD shravanthid1...@gmail.com wrote:
Hi,
We are using Elasticsearch 1.3.2 and having issues running queries with
aggregations on unmapped fields. Some
How do I search for attachment content stored in Elasticsearch and
get the content as a suggestion in the search?
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an
Hi! Sorry to keep you waiting, but I've been traveling. I completely
misunderstood how snapshots worked when they first came out, so when I
first wrote Curator's snapshot module it would only snap a complete index
once, and never re-snapshot an index if it appeared in the repository.
Then I
That works fine for me, thank you! But I'd also like to be able to build
an object from the MapWritable values in the mapper.
Consider values as a MapWritable object.
When I try to get a specified value with values.get("title"), for example,
the returned value is null, but the field exists in the
That's because your MapWritables don't use Strings as keys but rather
org.apache.hadoop.io.Text
In other words, you can see the data is in the map however you cannot retrieve it since you are using the wrong key (try
inspecting the map object types).
Try values.get(new Text("title"))
On
I can think of 3 potential causes for it:
- fielddata: do you know how many unique values you have for the `locale`
field?
- norms: do you have index: analyzed fields in your mappings and don't
care about length normalization? Then you might want to disable norms.
- bloom filters: they take
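On the norms point, a hedged 1.x mapping sketch (field name is a placeholder) that disables norms on an analyzed string field:

```
{
  "properties": {
    "description": {
      "type": "string",
      "norms": { "enabled": false }
    }
  }
}
```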
I've tried that. It returns a
org.elasticsearch.hadoop.mr.WritableArrayWritable object. How can I get my
field content out of that?
On Wednesday, 3 December 2014 at 14:10:24 UTC+1, Costin Leau wrote:
That's because your MapWritables don't use Strings as keys but rather
Interesting...does the very large max_merged_segment not result in memory
issues when the largest segments are merged? When I run my cleanup
command (_optimize?only_expunge_deletes) I see a steep spike in memory as
each merge is completing, followed by an immediate drop, presumably as the
I don't think the current filter cache evicts entries that consume more
memory first. The weight part is only used to evict when the filter cache
size (in terms of bytes, not entries) grows beyond a configured limit.
I agree this behaviour is quite simplistic and there are two interesting
things
I don't know much about facets, but the new histogram aggregation supports
scripts, maybe you can try it out?
On Tue, Dec 2, 2014 at 2:49 PM, DH ciddp...@gmail.com wrote:
Hi everyone,
I'm trying to figure out how to do a histogram facet :
{
query : {
match_all : {}
},
Yes, you can have a field that is not indexed but has doc values. In that
case, elasticsearch will build columnar storage for that field, but no
inverted index.
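A hedged mapping sketch of such a field in 1.x (field name and type are placeholders): not indexed, but with columnar doc values built:

```
{
  "properties": {
    "price": {
      "type": "long",
      "index": "no",
      "doc_values": true
    }
  }
}
```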
On Tue, Dec 2, 2014 at 5:55 AM, vineeth mohan vm.vineethmo...@gmail.com
wrote:
Hi ,
I have a situation where a field is only used
Hi,
1.4 changed a lot of things, especially at the distributed system level, so
testing it in your staging environment will certainly help ensure that
things work as expected.
Filtered aliases have been available for a long time (even before
1.4.0.beta1), it's very likely that they are already
Unfortunately, if you mistakenly go to the wrong endpoint, it can happen
indeed :(
On Mon, Dec 1, 2014 at 12:13 PM, Anil Karaka anilkar...@gmail.com wrote:
Oh, things like that happen?
Thanks.
On Monday, December 1, 2014 3:14:27 PM UTC+5:30, Adrien Grand wrote:
aggs makes no sense in the
Hi, and thanks for the reply.
I should have said that, for reasons I (sadly) have no power over, I'm
stuck with ES v0.90.5, and thus don't get to use those impressive
aggregations.
I'm beginning to think that the only way of doing what I want is indeed
using a native script that will carve my
Thanks for the speedy reply.
As for 1, I understand that ES optimizes for *storage* as snapshots of the
same index accumulate; I just wish it could also optimize for performance.
Right now, with a measly 4.5 gig cluster, the difference between the
snapshot 1 and snapshot 24 is 8 minutes. If
Hi Jorg:
can the Transport settings be changed on the fly (using curl -XPUT )?
If so, what is the command ?
(I doubt it's a cluster setting (it is not mentioned on the cluster settings
page of the documentation))
Thanks for the assist
Ramdev
On Wed, Dec 3, 2014 at 5:46 AM,
A “thin” option cannot help with snapshots as they reference segments. If a
segment exists in a time series index from 1 month ago at 12pm and hasn’t
changed by 1 month ago at 1pm, it is only stored once in the repository.
Though you have multiple “snapshots,” each segment is only ever backed
On Wed, Dec 3, 2014 at 8:32 AM, Jonathan Foy the...@gmail.com wrote:
Interesting...does the very large max_merged_segment not result in memory
issues when the largest segments are merged? When I run my cleanup
command (_optimize?only_expunge_deletes) I see a steep spike in memory as
each
I ran my ELK setup on a large set of data with Elasticsearch default
settings but with Xms = 2G on a Red Hat server with 32 GB RAM, 600GB HD, and
after reading around 6.4 million (64 lakh) log lines my logstash console gives the error:
Exception in thread elasticsearch[logstash-XX-23664-4104][generic][T#6]
Anyone ?
On Thursday, 20 November 2014 at 16:27:44 UTC+1, Damien Montigny wrote:
Hi everyone,
I was experimenting with mappings for index size optimization purposes and I
have an issue; it seems like a bug to me, and I cannot find any documentation about
it.
When I declare a field of type *byte*, ES
Hello,
I have created a custom analyzer with (tokenizer: whitespace).
I would like to remove dots only at the end of words AND concatenate
letters/words if dots are between letters (ex: a.b.c = abc).
What is the way to handle this in ES?
I have tried word_delimiter but it splits words as soon as a dot
Sounds like a bug. If I had to guess I'd say Elasticsearch is rounding the
type up to support unsigned bytes and not doing the range check but I
haven't looked.
Nik
On Wed, Dec 3, 2014 at 9:34 AM, Damien Montigny damien.monti...@gmail.com
wrote:
Anyone ?
On Thursday, 20 November 2014 at 16:27:44
Anyone?
On Mon, Dec 1, 2014 at 11:22 AM, Michel Conrad
michel.con...@trendiction.com wrote:
Hi,
I just updated our test environment from 1.0.2 to 1.4.1 and some
indices failed to recover, which seems to be related to the checksum
verification introduced in 1.3.
[2014-11-28
You're getting back an array ([Samsung EF-C]) - a Writable wrapper around org.apache.hadoop.io.ArrayWritable (to actually
allow it to be serialized).
So call toStrings() or get() to get its content.
On 12/3/14 3:30 PM, Elias Abou Haydar wrote:
I've tried that. It returns a
I understand that the segments are only backed up once. But anecdotally --
and this has been seen by others on the link I started out with --
snapshots take longer as time goes on. With time-based indexes, only
today's segments should be changing whether I optimize the old ones or
not.
I've already tried that. It doesn't work... :/
On Wednesday, 3 December 2014 at 16:21:40 UTC+1, Costin Leau wrote:
You're getting back an array ([Samsung EF-C]) - a Writable wrapper
around org.apache.hadoop.io.ArrayWritable (to actually
allow it to be serialized).
So call toStrings() or get()
I've tried to call toStrings()
I got this :
title : [Ljava.lang.String;@35112ff7
with the get(), i'm getting this:
title : [Lorg.apache.hadoop.io.Writable;@666f5678
On Wednesday, 3 December 2014 at 16:21:40 UTC+1, Costin Leau wrote:
You're getting back an array ([Samsung EF-C]) -
I've been able to figure out how to do this with a char_filter
ref:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-pattern-replace-charfilter.html
char_filter: {
    remove_dot_pattern: {
        type: pattern_replace,
        pattern: \.,
        replacement:
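The effect of that pattern_replace char_filter can be sketched with Python's re module (the char_filter strips literal dots from the text before the whitespace tokenizer runs, so "a.b.c" becomes the single token "abc"):

```python
import re

# Minimal sketch of what the pattern_replace char_filter does before
# tokenization: remove every literal dot from the input text.
def remove_dots(text):
    return re.sub(r"\.", "", text)

print(remove_dots("a.b.c"))  # prints: abc
print(remove_dots("end."))   # prints: end
```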
Ok, actually toStrings() returns the String[] array that has the contents and
that solved my problem.
Thanks again Costin! :)
On Wednesday, 3 December 2014 at 16:29:38 UTC+1, Elias Abou Haydar wrote:
I've tried to call toStrings()
I got this :
title : [Ljava.lang.String;@35112ff7
with
I'm not sure what you are expecting since the results are as expected. See the
javadocs [1] for ArrayWritable.
toStrings() returns a String[] while get() returns a Writable[]. In other words, you get an array of Strings or
Writables, and neither implements toString natively.
To get the actual content
I have actually gone through the API and I get the big picture now.
I appreciate your help. Thanks! :)
On Wednesday, 3 December 2014 at 16:50:33 UTC+1, Costin Leau wrote:
I'm not sure what you are expecting since the results are as expected. See
the javadocs [1] for ArrayWritable.
Hi,
I am using the bulk API to update some of my records, but I get the
following error.
*ActionRequestValidationException[Validation Failed: 1: version type
[EXTERNAL] is not supported by the update API*
What are my alternatives if I do not want to send the entire record each
time I want to
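One alternative is a partial-document update in the bulk API, which relies on internal (not external) versioning; a hedged sketch (index, type, id, and field names are placeholders):

```
POST /_bulk
{ "update": { "_index": "myindex", "_type": "mytype", "_id": "1" } }
{ "doc": { "status": "updated" } }
```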
This is a newbie question about how the cluster works. I tried to find the
answer in the group, but it doesn't seem to be exactly the same question I have.
By reading the guide from elasticsearch.com, I understand when a master
node goes down, a new master node will be elected automatically. However, a
Time-series indices can grow to 300 segments per index or more. 30 days of that
is a rather large number of segments to test, especially over TCP/IP to Amazon
S3. Each segment has to be tested before it can be ignored.
—Aaron
On Wed, Dec 3, 2014 at 10:24 AM, Matt Hughes hughes.m...@gmail.com
wrote:
I understand
Right, but it’s still slow after optimizing each day to 2 segments. I actually
noticed no difference in snapshot speed pre/post optimizing old indexes for
what it’s worth.
On December 3, 2014 at 12:20:59 PM, Aaron Mildenstein (aa...@mildensteins.com)
wrote:
Time-series indices can grow to
Would setting up a symbolic link still necessitate re-installing the
Windows service after each upgrade? I noticed that the service, when
installed, contains version-specific information in places such as the
display name and description.
IE:
Elasticsearch 1.4.1 (node-01)
Elasticsearch 1.4.1
I'm having some difficulties getting some non-logstash data to show up in
kibana4. All logstash data works fine. I loaded up the french data as
suggested on the elasticsearch help page
(http://www.elasticsearch.org/help) and everything works as far as
elasticsearch is concerned. I can
Thank you for your response
Looks like I read it wrong in the documentation; only the "Fields referred
to in alias filters must exist in the mappings of the index/indices pointed
to by the alias." part was included in the 1.4.0.beta1
Anyway, I found the terms lookup mechanism
I know Marvel can be configured to store data separately from the cluster
that it is reporting on. Is the same type of configuration available for
Kibana as well? After some exhaustive searching, I haven't turned up any
information on the topic.
Can Kibana store its data separately from the
Upgraded to elasticsearch 1.4.1 - no change
On Wednesday, December 3, 2014 12:53:42 PM UTC-5, Brian Olson wrote:
I'm having some difficulties getting some non-logstash data to show up in
kibana4. All logstash data works fine. I loaded up the french data as
suggested on the elasticsearch
It's stored on the cluster it interfaces on.
You can export dashboards though.
On 4 December 2014 at 06:07, Steve Camire steve.cam...@gmail.com wrote:
I know Marvel can be configured to store data separately from the cluster
that it is reporting on. Is the same type of configuration available
Thanks Mark, about what I expected.
On Dec 3, 2014 2:50 PM, Mark Walkom markwal...@gmail.com wrote:
It's stored on the cluster it interfaces on.
You can export dashboards though.
On 4 December 2014 at 06:07, Steve Camire steve.cam...@gmail.com wrote:
I know Marvel can be configured to
Jonathan,
Your current setup doesn't look ideal. As Nikolas pointed out, optimize
should be run under exceptional circumstances, not for regular maintenance.
That's what the merge policy settings are for, and the right settings should
meet your needs, at least theoretically. That said, I can't say
I have a nice, performant cluster of 5 nodes. They're all on separate
machines on the same switch. Life is good.
Now...
Do I tell the consumers of my Elasticsearch cluster to hit any of the five
nodes as suits their fancy? Or do I give them the name of ONE node? If so,
is that node configured
It's really left up to you to decide.
Options are;
- Load balancer like HAProxy, nginx
- Multi entry DNS record (ie roundrobin)
- Having a client join the cluster for inserts
There may be more others can suggest.
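The load-balancer option can be sketched, e.g. with nginx (hostnames, ports, and upstream name are placeholders):

```
upstream elasticsearch_cluster {
    server es-node-1:9200;
    server es-node-2:9200;
    server es-node-3:9200;
}
server {
    listen 9200;
    location / {
        proxy_pass http://elasticsearch_cluster;
    }
}
```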
On 4 December 2014 at 07:19, Christopher Ambler const.dogbe...@gmail.com
So you're saying, in essence, YES, I should try to balance search requests
across all nodes and not just one node.
The method may be debatable, but the underlying answer is YES, distribute
among machines.
(Just being sure I understand).
Upgrading to 1.4 has been a clusterfuck for us. It's broken pretty much
everything we rely on. I need to go back to 1.3, can I use the snapshot feature?
Will a snapshot taken on a 1.4 cluster restore to a separate 1.3 cluster ?
I'm really just interested in the data, I'd like to reapply my own
ES Version: 1.3.5
OS: Ubuntu 14.04.1 LTS
Machine: 2 Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, 8 GB RAM at AWS
master (ip-10-0-1-18), 2 data nodes (ip-10-0-1-19, ip-10-0-1-20)
*After upgrading from ES 1.1.2...*
1. Startup ES on master
2. All nodes join cluster
3. [2014-12-03
No. A snapshot done on 1.4 won’t work in 1.3.
It might only work if no other indexing operations and no merges have happened.
But that said, what is your concern with 1.4?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
I'm pretty sure you can't due to different Lucene versions. I wouldn't even
try - just export and re-index.
I will be more than happy to hear about what went wrong for you with
upgrading?
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer
Thanks, Jörg! I thought it might be something like that (I haven't
upgraded for a long time :-) ).
- Tim
On Wednesday, December 3, 2014 3:50:06 AM UTC-8, Jörg Prante wrote:
The correct spelling is Elasticsearch.
The change was applied on 6 Jan 2014 almost a year ago for ES 1.0
See
In our environment our cluster is inside EC2/VPC. We have an ELB in front of
the cluster. We use DNS to assign a CNAME to the ELB for easier internal use.
The cluster is currently at 15 nodes, 3 of which are “master only, no data” and
associate themselves with the ELB. The ELB balances requests
thx
On Tue, Dec 2, 2014 at 2:45 PM, Mark Walkom markwal...@gmail.com wrote:
What specifically do you want to know?
There is this in the docs -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html#jvm-version
On 3 December 2014 at 09:34, Sitka sitkaw...@gmail.com
Greetings,
My primary data store has a date_created field stored as a unix timestamp.
I've verified that if I multiply that value by 1000 and then index
it, Elasticsearch correctly picks the field up as a date. But updating the
data in my primary data store is not very feasible, therefore,
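A hedged sketch of doing that multiplication at index time, in the indexing pipeline, rather than in the primary store (field name and value are illustrative):

```python
# Convert a seconds-resolution unix timestamp to the millisecond value
# that Elasticsearch auto-detects as a date.
def to_epoch_millis(unix_seconds):
    return int(unix_seconds) * 1000

doc = {"date_created": to_epoch_millis(1417612800)}
print(doc["date_created"])  # prints: 1417612800000
```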
I will be more than happy to hear about what went wrong for you with
upgrading?
Well, Kibana 4 is unusable for us; the lack of auto refresh killed it for us.
Most of the time K4 simply doesn't work even for browsing small sets. I'd
say 2 times in 3 we get the 30 ms timeout error, is
I'm a bit confused. Are you downgrading just because of Kibana compat
issues? seems to me like killing a fly with a bazooka.
Enabling CORS and using K3 dashboards seem like the better solution to me,
for now. K4 isn't even officially released yet. As for data disappearing,
I'm sure it wasn't and
This tripped me up too.
Are your logstash agents using the elasticsearch output ? If so, they'll be
running the embedded version of ES that comes with Logstash, and that's the
version that's stopping kibana from working. Basically *any* ES node that
connects to your cluster in any form must be
Hi
I have a mapping which is having the following 2 properties:
start_date: {
format: dateOptionalTime,
type: date
},
end_date: {
format: dateOptionalTime,
Well, you say "just", but at the moment Kibana is our only view into the ES
cluster, so yes it's a dealbreaker for us.
After enabling CORS, and what an unexpected knock-about of pure fun that
was, I still can't use the K3 dashboards; they're blank, no data, no errors,
just empty dashboards :(
Data
I'm not aware of compat issues with K3 and ES 1.4 other than
https://github.com/elasticsearch/kibana/issues/1637 . I'd check for
javascript errors, and try to see what's going on under the hood, really.
When you have more data about this, you can either quickly resolve, or open
a concrete bug :)
I am on Kibana 4.0.0-BETA2
On Wednesday, December 3, 2014 7:30:03 PM UTC+8, Mark Walkom wrote:
What version of Kibana?
On 3 December 2014 at 21:48, David Montgomery davidmo...@gmail.com
javascript: wrote:
Hi,
This version of Kibana requires at least Elasticsearch 1.4.0.Beta1
SetupError@
https://github.com/elasticsearch/kibana/issues/1637
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Author of RavenDB in Action http://manning.com/synhershko/
On Thu, Dec 4, 2014 at 1:49 AM, David Montgomery
Hi,
have no agents as of yet. Just logstash server with the below config
input {
    redis {
        host => "<%= @redis_host %>"
        data_type => "list"
        key => "logstash"
        codec => json
    }
}
output {
    stdout { }
    elasticsearch {
        host => "<%= node[:ipaddress] %>"
    }
}
On Thursday, December
You still need to set the protocol to either HTTP or transport though.
On 4 December 2014 at 10:53, David Montgomery davidmontgom...@gmail.com
wrote:
Hi,
have no agents as of yet. Just logstash server with the below config
input {
    redis {
        host => "<%= @redis_host %>"
        data_type =>
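A hedged sketch of that output section with the protocol set explicitly (host value is a placeholder; protocol was an option on the 1.4-era elasticsearch output):

```
output {
    elasticsearch {
        host => "es-host"
        protocol => "http"   # avoids joining the cluster as an embedded node
    }
}
```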
Well, you're right, there are JS errors, CORS related;
XMLHttpRequest cannot load
http://10.5.41.120:9200/logstash-2014.12.04/_search. Request header field
Content-Type is not allowed by Access-Control-Allow-Headers.
In my elasticsearch.yml I've got this on all nodes,
http.cors.allow-origin: /.*/
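That error message points at the allowed headers, not the allowed origin; a hedged elasticsearch.yml sketch (the header list is an assumption to verify against your version's http.cors.allow-headers default):

```
http.cors.enabled: true
http.cors.allow-origin: /.*/
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length"
```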
I have an issue with Elasticsearch version 1.4.1: I had indexes without
replicas, and recently I added 1 replica to all indexes, and they are not
getting allocated.
I also have an issue when using a cluster setting to exclude a node from
allocation; it just doesn't work, data is not moved from the
Hi,
I have two queries with identical nested aggregations using *min_doc_count* *=
0* option. The only difference is in the queries, first query is a
*match_all()* and the other a filtered query. The filtered query's
aggregations miss bucket keys here and there! I was expecting an exact
How many nodes do you have?
Can you provide the command you are sending ES to set the exclude?
On 4 December 2014 at 13:46, Sebastián Schepens
sebastian.schep...@mercadolibre.com wrote:
I have an issue with elasticsearch version 1.4.1, i had indexes without
replicas and i recently i added 1
Hi
We have a multi tenant SaaS solution and we expose our use of elasticsearch
through an API. We noticed that clients who use the API directly, and not
through our front-end, don't bother with paging and just use a count of 500k or
1m even if the actual count is much lower. I was wondering
Hi Boaz,
Yes, the REST interface has fewer compatibility issues. The only downside for us
is the lack of a proper authentication header in its REST requests. Is there any
plan to support both the REST and transport interfaces?
Another question: what's the version range supported by the existing Marvel?
If I
Sorry, you answered the version range question already :)
Thanks!
I want to expose some aggregate SUM/MIN/MAX data from ES through to my Hive
tables.
Currently I have created a table which needs to scroll through all the
rows and then re-aggregate these statistics in Hive.
This feels very inefficient. Is it possible to create an ES query with
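Full aggregation pushdown isn't available in es-hadoop, but a filter can be pushed into Elasticsearch via es.query so Hive scans fewer rows before re-aggregating; a hedged sketch (table, column, index, and query are placeholders):

```
-- Hedged sketch: the es.query table property pushes the filter to ES.
CREATE EXTERNAL TABLE es_stats (price BIGINT)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES(
  'es.resource' = 'myindex/mytype',
  'es.query'    = '?q=status:active'
);
```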
Has there been any progress on the Push Down Filtering mentioned by
Costin? (
http://ryrobes.com/systems/connecting-tableau-to-elasticsearch-read-how-to-query-elasticsearch-with-hive-sql-and-hadoop/#comment-1169375542
)
Right now I am working around this by creating a lot of specific table
Hi guys, I want to build a graph using d3 and a query from Elasticsearch;
can anyone here tell me what exactly is meant by this code?
client.search({
    index: 'nfl',
    size: 5,
    body: {
        // Begin query.
        query: {
            // Boolean query for matching
I added the below to the elasticsearch.yml config. Kibana still gives the
same error:
http.cors.enabled: true
http.cors.allow-origin: http://monitor-development-east.test.com:5601
For those at Elasticsearch: can you provide me some color on what may be going
on?
Thanks
On Thursday,
Hi there,
I have a very simple term query. It is not giving results if I
execute it against *localhost:9200*, not even with
*localhost:9200/index_name*, but if I use the type name, i.e.
*localhost:9200/index_name/type_name*, it's giving me results.
Can anyone please suggest what