issue when accessing the site using the DNS name. When I am on the machine itself, http://localhost/ works though. Have not figured out the fix for this yet. It seems like a Kibana 4 CORS issue.
On Thu, Jan 29, 2015 at 3:38 PM, Konstantin Erman kon...@gmail.com wrote:
Yes, Kibana 4 beta 3
at 4:19:31 PM UTC-8, Konstantin Erman wrote:
Konstantin
On Thursday, January 29, 2015 at 10:13:40 AM UTC-8, Cijo Thomas wrote:
I have been fighting with this for quite some time. Finally found the workaround. Let me know if it helps you!
On Thu, Jan 29, 2015 at 10:12 AM, Konstantin Erman kon...@gmail.com wrote:
Thank you
, Cijo Thomas wrote:
Can you show your URL rewrite rules? Also, are you using Kibana 4 beta 3?
On Thu, Jan 29, 2015 at 1:09 PM, Konstantin Erman kon...@gmail.com wrote:
Unfortunately I could not replicate your success :-(
Let me show you what I did in case you notice any
We currently use Kibana 3 hosted in IIS behind an IIS reverse proxy for authentication. Naturally we look at Kibana 4 Beta 3 expecting it to replace
Kibana 3 soon. Kibana 4 is self hosted and works nicely when accessed directly,
but we need authentication and whatever I do I cannot make it work
search platform altogether imo.
On Friday, December 12, 2014 5:11:05 PM UTC-5, Konstantin Erman wrote:
I noticed that occasionally I need to shield my ES cluster from some
documents, which are too many or too big or otherwise poison ES.
Usually I can formulate a pretty simple query or criteria to detect those documents, and I'm looking for a way to block them from entering the index.
Is there such
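One way to do this in the 1.x-era stack is to reject the offending documents at the shipper rather than in Elasticsearch itself. A minimal Logstash filter sketch; the field name `bytes` and the type value are hypothetical stand-ins for whatever the actual "poison" criteria are:

```
# Hypothetical criteria: drop oversized events and events from a
# noisy source before they ever reach the elasticsearch output.
filter {
  if [bytes] and [bytes] > 1048576 {
    drop { }
  }
  if [type] == "noisy_debug_log" {
    drop { }
  }
}
```

Anything matched by `drop { }` is discarded from the pipeline, so it never gets indexed.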
I index exception messages, and among other fields there are a couple (Code, Status) which used to be numbers, so the default dynamic mapping happily mapped them as integers.
Then after some time it appeared that those fields are not necessarily
integers, they can be strings. When this happens
The default indices recovery performance is limited by 3 concurrent
streams and 20MB/sec. This is very slow on my machines. YMMV.
Jörg
On Sun, Nov 23, 2014 at 9:01 PM, Konstantin Erman kon...@gmail.com wrote:
Advice to increase indices.recovery.concurrent_streams sounds
takes less than a minute.
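Both recovery limits were dynamic settings in ES 1.x, so they can be raised cluster-wide without a restart. A sketch; the exact values below are just an illustration, not a recommendation:

```
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.concurrent_streams": 6,
    "indices.recovery.max_bytes_per_sec": "200mb"
  }
}
```

Transient settings revert after a full cluster restart; use "persistent" instead to keep them.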
Jörg
On Sat, Nov 22, 2014 at 7:43 PM, Konstantin Erman kon...@gmail.com wrote:
Yes, I noticed that article right away, simply because I keep googling ES-related questions every day :-)
Unfortunately the only practical advice I could learn from
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
On Thursday, November 20, 2014 9:48:56 PM UTC-5, Konstantin Erman wrote:
I work on an experimental cluster of ES nodes running on Windows Server
machines. Once in a while we have a need to reboot machines. The initial state: the cluster is green and well balanced. One machine is gracefully taken offline, and then after the necessary service is performed it comes back
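For a planned reboot like this, the usual trick in ES 1.x is to pause shard allocation first, so the cluster does not start rebalancing the moment the node drops out, and to re-enable it once the node is back:

```
# Before taking the node offline:
PUT /_cluster/settings
{ "transient": { "cluster.routing.allocation.enable": "none" } }

# After the node rejoins:
PUT /_cluster/settings
{ "transient": { "cluster.routing.allocation.enable": "all" } }
```

With allocation disabled, the cluster goes yellow while the node is away but skips the expensive relocate-and-relocate-back cycle.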
Kibana 4 says "This version of Kibana requires at least Elasticsearch 1.4.0.Beta1."
On Friday, October 10, 2014 10:45:49 AM UTC-7, Konstantin Erman wrote:
After all I've figured out (with some help) that Kibana 4 uses the node info API to check that ALL nodes in the cluster are at least 1.4.0.Beta1. Logstash connected via the node protocol shows up as client nodes, which are obviously not of the right version. Switching Logstash to the http protocol does the trick of unblocking Kibana 4.0.0.Beta1.
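For reference, the switch is a one-line change in the Logstash elasticsearch output: with `protocol => "http"` Logstash talks to port 9200 over HTTP instead of joining the cluster as a client node. The host name below is an assumption:

```
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"   # the node protocol is the default in Logstash 1.4
  }
}
```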
Konstantin
On Wednesday, October 8, 2014 5:54:59 PM UTC-7, Konstantin Erman wrote:
Doh, it is on the very first page, right where the Connect button is:
Optional:
Quick Connect with 'URL' Parameter:
http://domain/?url=http://localhost:9200
I'm embarrassed.
Konstantin
On Tuesday, October 7, 2014 10:19:12 AM UTC-7, Konstantin Erman wrote:
I would like to use ElasticHQ
Today I have upgraded a 3-machine development ES cluster from 1.3.2 to 1.4.0.Beta1. As far as I can tell it went successfully: the cluster state is green, it responds to commands, and all the data seems intact. New data keeps coming in and getting indexed.
BUT somehow I completely lost Kibana cooperation!
I would like to use ElasticHQ, hosted or as a plugin, but the fact that I have to enter my cluster URL and hit Connect manually each time gets on my nerves. I wonder if anybody has found a way to invoke it as a browser bookmark and make it immediately and automatically connect to my cluster URL?
I have documents in ES with the field Message, which normally contains a multi-word text string. I am trying to query it with Kibana to see which strings occur in this field most frequently. What I actually get back is a table which shows the frequency of specific *words*, but not the whole
On Monday, October 6, 2014 8:50:22 PM UTC-7, Doug Nelson wrote:
I use multi-fields to have several different analysis types supported as needed, and also to have the raw version available, like in your example.
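A sketch of such a mapping for the Message field discussed above, in ES 1.x multi-field syntax: the top-level field stays analyzed for full-text search, while a `raw` sub-field keeps the untouched string:

```
"Message": {
  "type": "string",
  "fields": {
    "raw": { "type": "string", "index": "not_analyzed" }
  }
}
```

Pointing a Kibana terms panel at `Message.raw` then aggregates whole strings instead of individual words.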
On Monday, October 6, 2014 8:34:34 PM UTC-5, Konstantin Erman wrote:
I would expect this question to be popular, but still cannot google the
answer.
If I have multiple ES nodes in the cluster, each having its own
configuration file (elasticsearch.yml) - what happens if some settings in
those files go out of sync? For instance, index creation config? Which
We have different indexes for different log types and one index of each
type is generated every day.
What I'm looking for is a convenient user interface to see things like how many shards are allocated for a particular index (a ton of tools can show me that) and then I want to create
We use Elasticsearch to aggregate several types of logs - web server logs,
application logs, windows event logs, statistics, etc.
As far as I understand I can do one of the following:
1. Send each log to its own index and, when I need to combine them in a query, specify several indices in Kibana.
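Option 1 works because Elasticsearch accepts comma-separated index lists and wildcards anywhere an index name goes, including Kibana's index setting. The index names here are hypothetical:

```
# Explicit list:
GET /weblogs-2014.12.12,applogs-2014.12.12/_search

# Wildcard patterns, e.g. everything for both log types:
GET /weblogs-*,applogs-*/_search
```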