This is a bug in Apache Phoenix. As long as this bug exists, Apache Phoenix
is not supported by the JDBC plugin.
Jörg
On Thu, Dec 11, 2014 at 8:19 AM, cto@TCS rimita.mit...@gmail.com wrote:
Hi,
I have an HBase database and I use Phoenix as an RDBMS skin over it.
Now I am trying to retrieve those
Shield is still in closed beta and is not accessible to the general
public at the moment.
I don't have an ETA on a general release either, sorry!
On 11 December 2014 at 08:48, Deepak Kumar deepmun1...@gmail.com wrote:
Hi Friends,
Tried downloading Shield. It is mentioned that you can also
Hi,
I create document indices based on document creation date.
Let's say we create a new index every month: doc_201412.
Each index is aliased with the docs alias, and we store percolator queries
under that alias.
My guess is that I have to copy the percolator queries to newly created indices
even
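A sketch of that copy step, assuming the ES 1.x convention of storing percolator queries under the `.percolator` type; this only builds the bulk payload (helper name and ids are made up), it does not contact a cluster:

```python
# Sketch: generate bulk-index actions that re-register stored percolator
# queries under a newly created monthly index (ES 1.x ".percolator" type).
import json

def copy_percolator_actions(queries, target_index):
    """queries: dict of query id -> query body.
    Returns a newline-delimited JSON payload for the _bulk endpoint."""
    lines = []
    for query_id, body in queries.items():
        lines.append(json.dumps({
            "index": {"_index": target_index,
                      "_type": ".percolator",
                      "_id": query_id}
        }))
        lines.append(json.dumps(body))
    # The bulk endpoint expects a trailing newline.
    return "\n".join(lines) + "\n"

queries = {"alert-1": {"query": {"match": {"log_message": "error"}}}}
payload = copy_percolator_actions(queries, "doc_201501")
```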
Hi ES community =)
I would like to use the Java API to connect to an ES cluster but run into
dependency problems with Lucene. (We use an embedded Neo4j which depends on
Lucene 3.x.)
When using the HTTP API directly, Lucene is not necessary!
So is there a way I can use the ES Java API without
I don’t think you can. You could have a look at the JEST project, which might
not depend on Lucene.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
https://twitter.com/scrutmydocs
Dear folks,
I have discovered that the missing filter doesn't work in ES 1.4 as it does in
1.1.1. I have two instances running and have run the following query on
both, but in 1.4 I get nothing every time, while 1.1.1 does exactly what I
want it to do. Could you please look into it.
GET
Hello EverySearcherbody! :D
I just wanted to say that I don't have those problems anymore, because I am
not using the xml filter any longer; instead I now use another grok filter for
this use-case.
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
Just tested this.
When I used a large number to get all of my documents matching some
criteria (4926 in the result) I got:
13.951 s when using a size of 1M
43.6 s when using scan/scroll (with a size of 100)
Looks like I should be using the not-recommended paging.
Can I make the scroll better?
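Part of the gap here may simply be round-trip count: with a scroll size of 100, draining ~5000 hits takes ~50 fetch requests. A back-of-the-envelope helper (plain Python, no cluster involved):

```python
# Rough illustration: how many scroll fetches it takes to drain a result
# set, ignoring the initial search request. Raising the scroll size cuts
# the number of round trips proportionally.
import math

def scroll_round_trips(total_hits, page_size):
    return math.ceil(total_hits / page_size)

small_pages = scroll_round_trips(4926, 100)    # size 100  -> 50 round trips
big_pages = scroll_round_trips(4926, 1000)     # size 1000 -> 5 round trips
```

A larger scroll size (e.g. 1000 per shard) is usually the first thing to try before falling back to deep paging.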
Hello all,
I want to run a simple *sql group by query* in the Kibana 4 Discover page.
Each record in my Elasticsearch index represents a log and has 3 columns:
process_id (not a unique value), log_time, log_message.
example:
process_id | log_time | log_message
Hello All,
First, thanks a lot for the great work on the ELK stack!
This post is about both Elasticsearch and Kibana 3. There is something I
don't get in the breaking-changes changelog of 1.4 about the alias API:
« The get alias api will return a section for aliases even if there are no
aliases.
Hi all,
We'd like to combine the query score with our own custom trending score for
a given document. Currently, our query looks like:
{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          {
            "range": {
Hi,
We are using elasticsearch 1.3.1 version.
And we are getting the below error in our production,
[2014-12-10 19:11:00,534][DEBUG][action.search.type ] [es_f2_02] [5031993]
Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id
Thanks.
Since the config format is flat, it should be trivial to just concat N files
together:
cat defaults.yml overrides.yml > /etc/elasticsearch.yml
This way, I’d never write elasticsearch.yml myself.
On December 11, 2014 at 1:45:36 AM, Mark Walkom (markwal...@gmail.com) wrote:
No there
Hi:
I have a single-node Elasticsearch instance running (version 1.0.0).
This instance was configured with multicast false and no unicast IPs
specified, and I changed the default ports from 9200/9300 to 9600/9700,
with 5 shards and no replication.
I just added a new node to this instance
Hmm... a few days ago I asked a similar question:
https://groups.google.com/forum/#!topic/elasticsearch/1TlJDwuKXiA
But we don't get timed-out or OOM errors.
Jason
On Thu, Dec 11, 2014 at 10:44 PM, Bala krishnan
balakrishnan.te...@gmail.com wrote:
Hi,
We are using elasticsearch 1.3.1
1 - Depends on how much data you have.
2 - Yes, two replicas will mean one will never be assigned. This is because
you have 2 nodes but 3 copies of the data. Set replica to just 1.
3 - That sounds very unusual. Have you tried to fetch one of these
documents via id?
On 11 December 2014 at 15:50,
I’m afraid you need to reindex.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
https://twitter.com/scrutmydocs
On 11 Dec 2014 at 17:28, Stefan stefan.tauche...@gmail.com wrote
Hi Mark:
Thanks, a few things were resolved.
1. I was running into heap-memory issues on the new node, and the cluster
state went from Yellow to Green almost immediately.
2. The problem with my query was not so much a lack of data or data not
being replicated/copied over to the new
If I remember correctly, version 1.4 can turn nodes that can't connect to
the cluster to read-only mode.
On Thursday, December 11, 2014 4:44:28 PM UTC, David Artus wrote:
My understanding is that recovery from split-brain situations is
troublesome and we are encouraged to ensure that a
If you can, use IIS ARR as a reverse proxy; it can do easy Kerberos
authentication as well.
On Thursday, December 11, 2014 7:48:41 AM UTC, Chetan Dev wrote:
Hi,
How do I generate an nginx htpasswd file on Windows?
Thanks
Setting up nginx on Windows is actually very easy, since they provide a
native binary.
But take a look at IIS ARR for reverse-proxying Elasticsearch: it can do
Kerberos authentication, grant access based on Active Directory, and limit
which methods are available.
On Thursday, December 11, 2014
I keep getting this error when using the init script. The error I receive
is:
Starting elasticsearch: runuser: unrecognized option '--pidfile'
Try `runuser --help' for more information.
[FAILED]
The line of code in the init script:
Also forgot to add, this is running on ES 1.4.1
On Thursday, December 11, 2014 1:16:38 PM UTC-6, Joey Nooner wrote:
I keep getting this error when using the init script. The error I receive
is:
Starting elasticsearch: runuser: unrecognized option '--pidfile'
Try `runuser --help' for more
Hi,
I am facing the same situation:
We would like to get all the ids of the documents matching certain
criteria. In the worst case (which is the one I am exposing here), the
documents matching the criteria would be around 200K, and in our first
tests it is really slow (around 15 seconds).
Rimita, this was fixed in Phoenix 3.1.0; please follow these instructions:
http://lessc0de.github.io/connecting_hbase_to_elasticsearch.html
On Thu, Dec 11, 2014 at 3:53 AM, cto@TCS rimita.mit...@gmail.com wrote:
Thank you so much
On Thursday, December 11, 2014 12:49:49 PM UTC+5:30, cto@TCS
I have just started with Elasticsearch and have set up a cluster with 4
data/master nodes, everything pretty much default. The nodes are called E1, E2,
E3 and E4.
I have implemented a few pieces of client software, and doing RESTful
communication against http://E1:9200/ is super easy.
But how are the
Most (all?) of the official clients have connection-pool support that will
query the cluster status and round-robin across all the nodes that have the
client capability enabled.
Here's the appropriate link to the python docs:
The only thing to keep in mind is that if a node is down you should just
retry on another one. The client might handle that for you, I don't know. It's
important, though, because you don't want to lose 1/4 of your traffic when
you restart a node.
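That retry-on-another-node behavior can be sketched without any client library. A minimal round-robin pool, assuming a caller-supplied `send` function that raises `ConnectionError` for a dead node (all names here are made up):

```python
# Minimal sketch of round-robin node selection with failover across the
# four example nodes. Each request advances the cursor; a dead node is
# skipped and the next one is tried.
class RoundRobinPool:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self._next = 0

    def request(self, send):
        """Try each host at most once, starting at the round-robin cursor."""
        last_error = None
        for _ in range(len(self.hosts)):
            host = self.hosts[self._next]
            self._next = (self._next + 1) % len(self.hosts)
            try:
                return send(host)
            except ConnectionError as err:
                last_error = err   # node down: fall through to the next one
        raise last_error

pool = RoundRobinPool(["E1:9200", "E2:9200", "E3:9200", "E4:9200"])

def send(host):
    if host.startswith("E1"):          # pretend E1 is being restarted
        raise ConnectionError(host)
    return "ok from " + host

result = pool.request(send)            # fails over from E1 to E2
```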
Nik
On Thu, Dec 11, 2014 at 3:11 PM, Nick Canzoneri
The documentation
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html#docs-update
for detect_noop suggests that it only works for doc updates versus script
updates. Am I interpreting that correctly?
Yes. If you want noop script updates you have to do something else. There
are docs on the script page.
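For doc (non-script) updates, the body in question is just the partial doc plus the detect_noop flag; a minimal sketch building that body as a plain dict:

```python
# Sketch of an update-request body using detect_noop. Per the thread this
# applies to doc updates only; script updates need their own noop handling.
def noop_update_body(partial_doc):
    return {
        "doc": partial_doc,
        "detect_noop": True,   # skip the write if the doc would not change
    }

body = noop_update_body({"status": "active"})
```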
On Dec 11, 2014 3:45 PM, Loren lo...@siebert.org wrote:
The documentation
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html#docs-update
for detect_noop
Per http://thinkbiganalytics.com/solr-vs-elastic-search/, Elasticsearch
does not support shard splitting, which Solr supports. Is it generally an
issue in production? If yes, what alternatives does a user have?
Shard Splitting
Shards are the partitioning unit for the Lucene index; both Solr and
It's never been a problem for me.
Normally for time series data you handle this by creating a new index every
day. For non-time series data I basically do this:
http://www.elasticsearch.org/blog/changing-mapping-with-zero-downtime/
It has the advantage of letting me change the mapping and
I just finished releasing the wikimedia extra
https://github.com/wikimedia/search-extra Elasticsearch plugin (versions
1.3.0 and 1.4.0). This release adds two things:
1. Elasticsearch 1.4.0 support (in the 1.4.0 version)
2. A new `safer` query (in the 1.4.0 and 1.3.0 versions). This query
Hi,
Is there any rough information you can share (privately is fine) about a
rough Shield release date? Anything rough would help (weeks, months,
quarters away, etc.)
Thanks,
Ben
Shard splitting is an anti-pattern if done on the server side. If you really
need more shards, you have not planned well, and you can always add
another index and use index aliasing to search over both. Also, there are
export/import tools if you want to reindex from the client side.
Jörg
Am
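Jörg's suggestion, putting a second index behind the same search alias instead of splitting shards, boils down to an `_aliases` call. A sketch that only builds the actions body (index and alias names are examples only):

```python
# Sketch: the _aliases actions that make a search alias span two indices,
# so new data can go to a new index without touching the old one.
def extend_alias(alias, old_index, new_index):
    return {
        "actions": [
            {"add": {"index": old_index, "alias": alias}},
            {"add": {"index": new_index, "alias": alias}},
        ]
    }

actions = extend_alias("docs", "docs_v1", "docs_v2")
```

Searches against the alias then hit both indices; writes can be pointed at the new index directly.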
There has been a change, in order to get better performance. Empty or null
values do not have the same semantics as a missing field, and this
unfortunately changed without notice. There is a workaround using a range
filter to get the old behavior.
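Jörg mentions a range-filter workaround without showing it; one guess at the shape, using a negated exists filter instead (so explicitly an assumption on my part, not his exact filter), built as a plain dict:

```python
# Assumption/sketch: approximate the pre-1.4 missing-filter behavior by
# negating an exists filter (ES 1.x filter syntax). Not Jörg's exact
# range-based workaround, just one plausible shape for it.
def missing_filter_workaround(field):
    return {"not": {"exists": {"field": field}}}

f = missing_filter_workaround("user")
```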
Jörg
On 11.12.2014 at 13:26, arshad Ali wrote
I ran into this problem, and discovered this StackOverflow post (with no
answer):
http://stackoverflow.com/questions/23348172/elasticsearch-index-name-with-multi-field
That original poster included a gist that reproduces it quite nicely
(confirmed on 1.4.1):
Hi,
I am capable of creating a snapshot using Sense or curl with the command:
PUT /_snapshot/gridshore-repo/gridshore_4
{
  "indices": "gridshore*"
}
The problem is when I start doing something like this using the JavaScript
library in AngularJS.
Hi,
I am using the following Java code to create an Elasticsearch instance
and create an index called testindex.
Node node =
NodeBuilder.nodeBuilder().settings(ImmutableSettings.settingsBuilder()
.put("path.data",
I am using version v1.4.0.
On Thursday, December 11, 2014 4:30:23 PM UTC-8, Saurabh Saxena wrote:
Hi,
I am using the following Java code to create an Elasticsearch instance
and create an index called testindex.
Node node =
I would agree that shard splitting is not the best approach. Much better to
design for expansion by building in layers of indirection into your application
through the techniques of over-sharding, index aliasing, and multiple indices.
The non-mutually exclusive techniques are as follows.
Adding another index is adding shards... you're just going about it the
wrong way.
The point of shard splitting is that you always have the right number of
shards.
FULLY re-indexing all your data is silly. You have to re-read ALL of it
again, re-emit it, and re-index it.
With shard
It seems to me that most people arguing this have trivial scalability
requirements. Not trying to be rude by saying that, btw. But shard
splitting is really the only way to scale from 250 GB indexed to 500 TB
indexed.
On Thursday, December 11, 2014 4:58:42 PM UTC-8, Andrew Selden wrote:
I
We observed a situation recently where a large increase in the amount of
data being ingested into ES caused log files in /var to fill to capacity on
each node of a six-node cluster.
Attempts to curl a master VIP fronting the cluster would return a
portion of the full JSON, but not all
Hello
I have a use case that feels like a good fit for ElasticSearch except for
one problem. I'm hoping someone might be able to suggest an approach for
overcoming it using ElasticSearch.
I have a lot of time-series data from sensors. Extremely simplified, a
reading looks a bit like this
{
Hi,
We are storing lots of mail messages in ES, with multiple fields: 600
million+ messages across 3 ES nodes.
There is a custom algorithm which works on batches of messages to correlate
them based on fields and other message semantics.
The final result involves groups of messages returned, similar to say
Hi
We have one cluster with 32 nodes (16 nodes in each data center). After
every 18 to 20 hours or so, some of the nodes are removed from the cluster
automatically. We tried to increase the ping interval, but the issue is
still not getting resolved.
Elasticsearch version:
Hi,
We are using ES 1.0.3. In our application we do frequent updates to
documents, and this causes the delete count to increase quickly and triggers
frequent merges.
Due to Lucene version conflicts between our application and the ES API, we
are not able to use the API, so we have written a module to interact
A small change costs as much as a large one. Your best bet is to batch
multiple updates for the same document together if possible. Also make sure
that your updates actually change something. Sending the exact same
document with the same ID still does an update.
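The batching advice can be sketched as a client-side coalescing step: merge several partial updates to the same document before sending, so one write covers them all (hypothetical helper, no ES calls):

```python
# Sketch: collapse several partial updates per document id into a single
# merged partial doc. Later values win per field, so one update request
# replaces several.
def coalesce_updates(updates):
    """updates: list of (doc_id, partial_doc) pairs, in arrival order."""
    merged = {}
    for doc_id, partial in updates:
        merged.setdefault(doc_id, {}).update(partial)
    return merged

batch = coalesce_updates([
    ("msg-1", {"status": "read"}),
    ("msg-1", {"folder": "archive"}),
    ("msg-2", {"status": "read"}),
])
```

Two updates to msg-1 become one, halving the write (and merge) load for that document.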
On Dec 12, 2014 12:24 AM, Jinal
Maybe with:
if(!res.isExists())
:)
David
On 12 Dec 2014 at 01:37, Saurabh Saxena sau...@gmail.com wrote:
I am using version v1.4.0.
On Thursday, December 11, 2014 4:30:23 PM UTC-8, Saurabh Saxena wrote:
Hi,
I am using the following Java code to create an instance of Elasticsearch
Hi Ram,
we have built something similar for a compliance-analytics application.
Consider the following:
- The feeding pipeline should perform any tagging, extraction,
enrichment, and classification as much as possible. The results will be
indexed. Usually, that takes care of some computationally
Hi David,
I made a mistake while copying; it is actually
if(!res.isExists())
- Saurabh
On Thursday, December 11, 2014 10:09:11 PM UTC-8, David Pilato wrote:
Maybe with:
if(!res.isExists())
:)
David
On 12 Dec 2014 at 01:37, Saurabh Saxena sau...@gmail.com wrote:
I
And what would be the best way to achieve this?
The thing is that we don't have enough storage to generate a new index
and leave the old one as it is.
On Thursday, 11 December 2014 17:28:52 UTC+1, Stefan wrote:
Hello again!
I am having another problem with the ELK-Stack.
I have now
Ha!
TBH I prefer to do a try/catch (IndexAlreadyExistsException) rather than testing first.
Will try to reproduce your case.
Is it the actual code you are running?
David
On 12 Dec 2014 at 08:32, Saurabh Saxena sau...@gmail.com wrote:
Hi David,
I made a mistake while copying; it is actually
May be you could do this one index at a time?
David
On 12 Dec 2014 at 08:34, Stefan stefan.tauche...@gmail.com wrote:
And what would be the best way to achieve this?
The thing is that we don't have enough storage to generate a new index and
leave the old one as it is.
Am