On Sunday, September 28, 2014 at 18:48 CEST,
naveen gayar navind...@gmail.com wrote:
I wish to export the data from remote environment and import into my
local server.
Look into snapshots.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html
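As a rough sketch of that workflow (repository name, paths, and hosts below are made up for illustration): register a filesystem repository on the remote cluster, snapshot into it, copy the repository directory to your local machine, and restore there:

```shell
# Register a shared filesystem repository on the remote cluster
curl -XPUT 'http://remote-host:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}'

# Take a snapshot; wait_for_completion blocks until it finishes
curl -XPUT 'http://remote-host:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# After copying /mnt/backups/my_backup to the local server and registering
# the same repository there, restore the snapshot locally
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
```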
--
Most of the time, the logs around the problem relate to attempts to index a
document that doesn't conform to the mappings (we expect this can happen
from time to time). We are running a big cluster with 50 data nodes and 3 masters.
Data is about a TB. We create 5 indexes per day, with approximately
Forgot to mention: ES 1.3.2, Java 7.
--
View this message in context:
http://elasticsearch-users.115913.n3.nabble.com/Nodes-randomly-not-getting-latest-cluster-state-tp4063966p4064115.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.
--
You received this message
My Elasticsearch, wrapped in Tomcat using the transport-wares plugin, is up and
running in a cluster of 20 machines under a specific path, /essearch. I have a
load balancer with DNS in front of the cluster, so I access Elasticsearch via
www.dns.com/essearch/
Now I'm trying to use the BulkProcessor to add/update docs.
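For what it's worth, the BulkProcessor ultimately issues bulk requests; the same batching can be exercised over REST through the load balancer path with the _bulk endpoint (index, type, and ids here are hypothetical):

```shell
# One index action and one update action in a single bulk request;
# note the body must end with a trailing newline
curl -XPOST 'http://www.dns.com/essearch/_bulk' -d '
{"index":{"_index":"myindex","_type":"doc","_id":"1"}}
{"title":"first version"}
{"update":{"_index":"myindex","_type":"doc","_id":"1"}}
{"doc":{"title":"second version"}}
'
```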
We have deployed Marvel in our setup. It was working fine. Then there was an
issue with the cluster and one of the query nodes left. We restarted the
service on the query nodes and since then we are seeing the Marvel dashboard
showing the following error,
SearchParseException[[.marvel-2014.09.26][0]: from[-1],size[-1]:
Hi, sorry, I intended to say the HHmmssSSS field. When I apply sorting or
aggregations on the HHmmssSSS field, how much memory will it take? In this case
the number of unique values for the HHmmssSSS field can be 864 (~80.6 million).
FYI: We are
Hi,
On Friday, September 26, 2014 5:22:49 PM UTC+2, David Klotz wrote:
I'm currently having some issues with a search that's using Fuzziness.AUTO.
[...]
Is my interpretation of the docs off, or is the implementation of AUTO
inconsistent with the docs?
to make the implicit question of my
If you are using the XDCR protocol for Couchbase-Elasticsearch
replication (which is what the transport client plugin uses), there's
currently no way of doing this at that replication level, and you will
have to do it using an external tool. Considering percolation is an
offline process, it
Hi all,
I have been struggling to put together a backup solution for my ES cluster.
Based on my reading of the documentation at
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html
I can't understand why the following might be failing:
I have exported an
Hey all,
I am new to ES and I wonder if this is possible:
I have a user and a video type.
In my RDB structure the user and video tables have a many-to-many relation,
meaning a user can have multiple videos and a video can have multiple users.
And I'm still figuring out how to do a local join (any ideas?)
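One common approach is to denormalize: store the related user ids on each video document and resolve the "join" on the application side. A sketch, with made-up index and field names:

```shell
# Each video document carries the ids of its users
curl -XPUT 'http://localhost:9200/media/video/1' -d '{
  "title": "My video",
  "user_ids": [1, 2, 3]
}'

# Fetch all videos belonging to user 2 with a terms filter
curl -XGET 'http://localhost:9200/media/video/_search' -d '{
  "query": { "filtered": { "filter": { "terms": { "user_ids": [2] } } } }
}'
```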
I'm trying to connect to the JUnit embedded node in my code under test.
Normally the code under test gets a node client on port 9300 to join the
cluster.
But I guess that's not an option with the embedded local(true) node.
So what is the answer?
One option is to rewrite the original code (under
Can you do an ls -ld /srv/backup and provide the output?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 29 September 2014 18:45, Alex Harvey alexharv...@gmail.com wrote:
Hi all,
I have been struggling to put
Are you running the same version across all the nodes?
Is there anything in the logs on your master?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 26 September 2014 22:49, satishmallik satishmal...@gmail.com wrote:
Yes, I upgraded from 0.90 (meaning some of my data was created with 0.90,
then some with 1.0) to the latest version of Elasticsearch. Is there a
way to compress older data?
Thanks!
On 19 September 2014 at 19:42, Igor Motov wrote:
There were two reasons for not enabling compression on data
Hi,
Sorry, sending posts from my phone caused these ugly replies.
Jörg, can you please look at this question when you find time?
Hi Alexios,
I hope you have scaled to 2 TB per day. We are in almost the same situation:
we are getting 1 TB of data per day with 30 days' retention.
Can you share your hardware details (# of nodes, vCPUs, RAM, storage per
node)?
Please share your experience; this will be helpful to us and save our
If you sort on a field with 80.6 million unique values, ES will load these
values into RAM and sort on them.
Lucene's packed-int feature is not important here; it is designed for
high-frequency terms:
http://blog.mikemccandless.com/2012/08/lucenes-new-blockpostingsformat-thanks.html
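One way to keep that field data off the heap in ES 1.x is to enable doc_values in the mapping, so the values live on disk instead of RAM (index and field names below are illustrative):

```shell
# Map the timestamp-like field as a numeric with on-disk doc_values
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "hhmmssSSS": { "type": "long", "doc_values": true }
      }
    }
  }
}'
```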
Jörg
On Sat,
So it is set up as:
Server 1: Logstash, ES, Kibana
Server 2: ES only
When I go to the ElasticHQ website I see the two nodes joined up and healthy.
I will check Logstash, thanks
On Saturday, September 27, 2014 4:34:45 AM UTC-4, Mark Walkom wrote:
Is the cluster external to the LS instance?
Check the
So here is the, I think, really relevant log message I got:
log4j, [2014-09-29T08:35:22.061] WARN: org.elasticsearch.discovery:
[Asmodeus] waited for 30s and no initial state was set by the discovery
then a bit later:
log4j, [2014-09-29T08:35:22.061] WARN: org.elasticsearch.discovery:
Hi,
What about 1.3.3? It's been a few days since the last issues were closed.
Is it going to be released together with 1.4.0beta?
Thanks,
Thibaut
On Fri, Sep 19, 2014 at 5:31 PM, Tom Miller tom.mil...@ebiz.co.uk wrote:
I'm in the same boat as Dan. Desperate for child aggregation!
Looks
The Transport Client does not do REST calls, so there is no path involved
and you don't have to set one.
Note that transport uses 93xx ports while the HTTP REST layer uses 92xx ports.
If you want to use the Transport Client, don't forget to use the right port.
My 2 cents.
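A quick way to see the distinction (host name illustrative): the 92xx HTTP port answers REST calls, while 93xx speaks only the binary transport protocol:

```shell
# REST over HTTP on port 9200 returns cluster info as JSON
curl -XGET 'http://es-node:9200/'

# Port 9300 is the binary transport protocol used by the Transport Client;
# plain HTTP requests against it will not get a meaningful response
```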
--
David Pilato |
Think i may have fixed it by upgrading to Logstash 1.4.2
On Monday, September 29, 2014 8:40:32 AM UTC-4, Kevin M wrote:
So here is the, I think, really relevant log message I got:
log4j, [2014-09-29T08:35:22.061] WARN: org.elasticsearch.discovery:
[Asmodeus] waited for 30s and no initial
We're happy to announce that Elasticsearch 1.3.3 has been released! This
is a bug fix release.
The blog post at [1] describes the release content at a high level. The
release notes at [2] give details and a direct link to download.
Please download and try it out. Feedback, bug reports
Lance et al.,
My post probably sounded more critical than intended.
Kibana is a great tool, no question about it. It is my go-to tool for most
of my work: getting a high-level view and being able to quickly drill down
to specifics. I spend most of my time in it.
I understand that the Roadmap
This is probably a bug in the .NET client API, and you should log it on
GitHub, where they monitor issues for it.
You might find this alternative library useful:
https://github.com/synhershko/NElasticsearch available from nuget as well
https://www.nuget.org/packages/NElasticsearch/
--
Itamar
It is quite easy to add a wrapper as a plugin in ES, in the REST output
routine around search responses; see
https://github.com/jprante/elasticsearch-arrayformat
or
https://github.com/jprante/elasticsearch-csv
If the CSV plugin has deficiencies, I would like feedback on what is
missing/what
Hi everyone,
I am having the following issue: I want to remove some docs from the results
based on conditions. I think this is the behavior that 'filters' provide. The
problem is that I couldn't come up with the correct solution.
Example:
If I have an index with shops like:
{
Shop: {
name: 'Bakery My
Hi,
I am new to the ELK stack and planning to use ELK for one of my log
analytics projects.
I am wondering if Kibana supports pivot charts with a secondary axis. As per
the requirement, I have to generate charts with a secondary axis.
Please refer attached image:
On Thu, May 22, 2014 at 4:31 PM, Erik Rose grinche...@gmail.com wrote:
Alright, try this on for size. :-)
Since the built-in regex-ish filters want to be all clever and
index-based, why not use the JS script plugin, which is happy to run as a
post-processing phase?
curl -s -XGET
You have to use a technique known as anti-phrasing; 'not' filters work well.
In your case of a shop query, something like
{
query : {
filtered : {
query : {
simple_query_string : {
query : shop,
default_operator: and
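Filling that in, a complete request of that shape with a hypothetical 'not' filter excluding unwanted shop names might look like:

```shell
curl -XGET 'http://localhost:9200/shops/_search' -d '{
  "query": {
    "filtered": {
      "query": {
        "simple_query_string": {
          "query": "shop",
          "default_operator": "and"
        }
      },
      "filter": {
        "not": { "term": { "name": "bakery" } }
      }
    }
  }
}'
```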
Jörg, this worked perfectly for my problem!
Thank you very much for your response and your time.
Marcelo
On Monday, September 29, 2014 3:33:49 PM UTC-3, Jörg Prante wrote:
You have to use a technique known as anti-phrasing; 'not' filters work
well.
In your case of a shop query, something like
Hi,
a new version of the Knapsack plugin was just released.
https://github.com/jprante/elasticsearch-knapsack
Knapsack 1.3.2.0
- new: support for Elasticsearch 1.3.2
- new: all knapsack actions are reimplemented as Java API transport actions
- new: Elasticsearch bulk format support
- new: byte
I get this in the logs when starting up Elasticsearch and am looking for some
guidance:
I can see Logstash sending over the logs, but Elasticsearch isn't creating
indexes and nothing shows in Kibana.
Grateful if anyone could point me in the right direction.
[2014-09-29 09:48:23,121][WARN
I'd like to stop using the stopwords filter in favor of Common Terms
queries, but I'm not getting how this relates to terms aggregations. Is
there a technique or option I missed that would allow me to run a simple
terms aggregation but not get likely stopwords?
Thanks, Ryan
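One possible workaround, sketched here with a made-up field name and stopword set: the terms aggregation accepts an exclude regex, so likely stopwords can be filtered out of the buckets explicitly:

```shell
# Terms aggregation that excludes a hand-picked list of likely stopwords
curl -XGET 'http://localhost:9200/myindex/_search' -d '{
  "size": 0,
  "aggs": {
    "top_terms": {
      "terms": {
        "field": "body",
        "exclude": "a|an|and|in|of|the|to"
      }
    }
  }
}'
```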
--
You received
The current charts are always scaled to the second to avoid confusion when
the chart, which always displays 30 data points, changes scale. You can
disable that behavior (or create a new chart with different settings) by
clicking on the little cog at the top of the chart. Then you'd need to
untick
It appears fetching index status information in Elasticsearch 1.3.3 causes
log spam and massive CPU utilization.
All of the loglines are of the form:
2014/09/29 21:38:56.663000 [DEBUG] action.admin.indices.status
[local-alias] [logstash-2014.09.29][4], node[KlgByctJTyiO7iC_wwXcjg], [R],
s[STARTED]:
Seems like a version mismatch. What versions of Elasticsearch/Logstash are
you using? Are you using the 'elasticsearch' output in Logstash or
'elasticsearch_http'? Try the latter.
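A minimal sketch of the HTTP-based output in Logstash 1.4 (host is illustrative); unlike the node/transport 'elasticsearch' output, it talks to the REST port and so avoids the tight ES/LS version coupling:

```conf
output {
  elasticsearch_http {
    host => "localhost"
    port => 9200
  }
}
```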
--
Ivan
On Mon, Sep 29, 2014 at 1:24 PM, larrychu...@gmail.com wrote:
I get this in the logs when starting
Hi all,
I'm fairly new to Elasticsearch, so please don't throw things at me for
asking silly questions. As a newb, I got my instance up with minimal
fuss with the following:
redis
nginx
elasticsearch
logstash
I currently have a syslog entry already in my logstash.conf file and I'm
ready to
The behavior does not seem to occur with 1.3.2.
Both kopf and bigdesk cause the error in 1.3.3, and I was running the
versions of them appropriate for 1.3.
On Monday, September 29, 2014 2:43:00 PM UTC-7, schmichael wrote:
It appears fetching index status information in Elasticsearch 1.3.3
Are percolator requests synchronous/blocking, meaning one would have to
wait for a response back? So if I had 100,000 requests, would I have to wait
for matching results for every one of these percolator requests?
It doesn't seem right to me; can someone elaborate, please?
Thanks
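Percolate calls are request/response, but they can at least be batched; a sketch using the multi percolate API (index, type, and documents here are made up):

```shell
# Two percolation requests in one round trip, bulk-style NDJSON body
curl -XGET 'http://localhost:9200/_mpercolate' -d '
{"percolate":{"index":"myindex","type":"doc"}}
{"doc":{"message":"some text to match"}}
{"percolate":{"index":"myindex","type":"doc"}}
{"doc":{"message":"other text to match"}}
'
```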
The log4j syslog appender uses UDP, which does not seem to work in your case.
I do not recommend using UDP because it is not reliable.
Check log4j2 for async logging and TCP for syslog:
http://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
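For reference, a log4j2 syslog appender over TCP is a small config fragment (host, port, and facility below are illustrative and must match your syslog daemon):

```xml
<!-- TCP syslog appender sketch for log4j2 -->
<Appenders>
  <Syslog name="RemoteSyslog" host="syslog.example.com" port="514"
          protocol="TCP" facility="LOCAL0"/>
</Appenders>
```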
Jörg
On Mon, Sep 29, 2014 at 11:51
There is already an issue opened for this:
https://github.com/elasticsearch/elasticsearch/issues/7916
Jörg
On Mon, Sep 29, 2014 at 11:52 PM, schmichael michael.schur...@gmail.com
wrote:
The behavior does not seem to occur with 1.3.2.
Both kopf and bigdesk cause the error in 1.3.3, and I was
Thanks, Jörg. I will need to find some time to look into this, as it seems
exactly like what I was looking for.
Thanks again!
Brian
On Monday, September 29, 2014 12:21:00 PM UTC-4, Jörg Prante wrote:
It is quite easy to add a wrapper as a plugin in ES in the REST output
routine around
Thanks, Jörg! I'm marking my thread as complete since the issue already has
a better summary of the problem than I've posted here.
On Monday, September 29, 2014 3:18:08 PM UTC-7, Jörg Prante wrote:
There is already an issue opened for this:
I'm using ES 1.1.1 and LS 1.4.2.
I'm using the elasticsearch output, not the http one, but I can give it a try.
On Monday, September 29, 2014 4:44:48 PM UTC-5, Ivan Brusic wrote:
Seems like a version mismatch. What versions of elasticsearch/logstash are
you using? Are you using the 'elasticsearch' output in
Thanks for responding.
It doesn't seem to be a permissions problem -
[root@logdata01 ~]# ls -ld /srv/backup
drwxrwx---. 3 elasticsearch elasticsearch 4096 Sep 29 18:43 /srv/backup
[root@logdata01 ~]# find /srv/backup/ \! -user elasticsearch -or \! -group
elasticsearch
[root@logdata01 ~]#
Hi Jörg,
Thanks for replying !
Hi Alex,
Any chance you have a disk quota enabled for the NFS share? I see this in the
snapshot output:
IndexShardSnapshotFailedException[[logstash-2014.09.19][4] Failed to
perform snapshot (index files)]; nested: IOException[No space left on
device];
Can you try copying a larger file to