Hi All,
I am new to ElasticSearch and the ELK stack.
I am using Logstash to parse my Apache logs, generating fields with kv{},
and sending the output to ES with the default config.
What happens is that when I parse the first 15 days of logs, which are nearly
equivalent to
Nevermind folks, thanks anyway. I took a look at the ES source. Each
object in the doc has its own methods, such as remove():
POST /test-timbr-2015.02.19/event/0f78c6b6-a30b-436e-bad7-4234654fc5bb/_update
{
  "script" : "ctx._source.realm.remove(\"name\")"
}
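For a top-level field, the same scripted update from the docs-update page works directly on _source (the field name some_field here is just a placeholder):

```json
POST /test-timbr-2015.02.19/event/0f78c6b6-a30b-436e-bad7-4234654fc5bb/_update
{
  "script" : "ctx._source.remove(\"some_field\")"
}
```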
On Friday, February
Maybe you forgot the last \n?
David
On 22 Feb 2015, at 12:03, Raz Lachyani raz.lachy...@gmail.com wrote:
I'm getting this error while sending bulks from php client:
Elasticsearch Bulk failed ! cause :
{error:ElasticsearchParseException[Failed to derive xcontent from
It means that your bulk is incorrect. Check its format.
David
On 22 Feb 2015, at 12:03, Raz Lachyani raz.lachy...@gmail.com wrote:
I'm getting this error while sending bulks from php client:
Elasticsearch Bulk failed ! cause :
{error:ElasticsearchParseException[Failed to derive
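For reference, the bulk endpoint expects newline-delimited JSON — one action line, then (for index/create) one source line, and the body must end with a trailing newline, which is the missing \n mentioned above. A minimal sketch (index, type, and ids here are made up):

```json
POST /_bulk
{ "index" : { "_index" : "myindex", "_type" : "event", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "myindex", "_type" : "event", "_id" : "2" } }
{ "field1" : "value2" }
```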
I'm getting this error while sending bulks from php client:
Elasticsearch Bulk failed ! cause :
{"error":"ElasticsearchParseException[Failed to derive xcontent from org.elasticsearch.common.bytes.BytesArray@1]","status":400}
Does anyone have an idea what it means?
One way I found was to use a script as was mentioned here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html
There is an example of how to remove a field from a document.
Is there any other way this can be achieved?
For example, if I want to delete the field
I need to index a HTML column (nvarchar(MAX)) in a MS SQL Server database.
I have set up a JDBC river https://github.com/jprante/elasticsearch-river-jdbc
and the database is indexed.
Using
"settings": {
  "analysis": {
    "analyzer": {
      "default": {
        "type": "custom",
No,
It was actually empty :-o .
Thanks for the quick response.
On Sunday, February 22, 2015 at 3:58:07 PM UTC+2, David Pilato wrote:
Maybe you forgot the last \n?
David
On 22 Feb 2015, at 12:03, Raz Lachyani raz.la...@gmail.com wrote:
I'm getting this error while
Hi Jörg
A bit off topic: I wonder if you are indexing blobs as base64 encoded fields
in JDBC river?
(I did not look at the doc)
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 22 Feb 2015, at 18:11, joergpra...@gmail.com wrote:
Can you
For java.sql.Types.BLOB, I use the builder.value(Object object) method in
XContentBuilder, with a byte array as parameter.
For java.sql.Types.CLOB/NCLOB, I use just a string as returned by JDBC in
Clob.getSubString
There are DBs which store blobs as java.sql.Types.BINARY, and this can be
passed
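As an illustration of what the base64 step looks like outside the river (a Python sketch, not part of the river itself; the HTML content is made up):

```python
import base64

# Hypothetical HTML content as it might come back from an
# nvarchar(MAX) column via JDBC.
html = "<html><body>Hello</body></html>"

# The mapper attachment expects the field value to be the
# base64-encoded bytes of the original document.
encoded = base64.b64encode(html.encode("utf-8")).decode("ascii")

# Decoding restores the original HTML.
assert base64.b64decode(encoded).decode("utf-8") == html
```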
Hello! This is a question I posted on StackOverflow
(http://stackoverflow.com/questions/28644501/replace-values-with-predefined-mapping-with-elk)
for which I didn't get any answer yet:
I have a file that I read with Logstash containing a certain parameter
called type. The possible values
Can you give some information about the mapper attachment setup you used
successfully?
There is no good reason why this should not be possible with JDBC river.
Jörg
On Sun, Feb 22, 2015 at 5:20 PM, Jiri Pik jiri@googlemail.com wrote:
I need to index a HTML column (nvarchar(MAX)) in a MS
Hi,
I have a problem with a full search using several columns in an index.
I've tested it on a few versions, 1.1.x and 1.3.x.
Preparation:
(curl -XDELETE 'http://localhost:9200/index1';)
curl -XPUT 'http://localhost:9200/index1' -d '
{
  "mappings": {
    "af": {
      "dynamic": false,
I assume search actions got stuck and blocked the subsequent ones, which
resulted in the search queue filling up. Maybe the cause is printed in the
server logs.
Setting replica to 0 with just one node helps to fix the 15 shards/30 total
shards count but that is an unrelated story.
Jörg
On Fri,
Hi,
I have a system with about 50k documents indexed. It's a single data
master node (4gb heap I believe) w/ 2 client nodes connected. Aside from
redundancy issues, we noticed that when there were 45 concurrent users,
response times on our queries were taking upwards of 6s. With user load
It's not a bug; ES allocates based on the overall primary shard count and
doesn't take into account which index a shard belongs to. This is where
allocation awareness and routing come into play.
Take a look at
If you're using the default settings for ES then you should change some of
them. Take a look at this chapter in the docs -
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/deploy.html
The most likely cause of your problems is your heap size.
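For example, with a package install the heap is usually raised via the ES_HEAP_SIZE environment variable (the 2g value here is only an illustration; roughly half of available RAM is the usual guidance):

```
# /etc/sysconfig/elasticsearch (RPM) or /etc/default/elasticsearch (DEB)
ES_HEAP_SIZE=2g
```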
On 22
Hi All,
We have several indexes in our ES cluster. ES is not our canonical
system record, we use it only for searching.
Some of our applications have very high write throughput, so for these we
allocate a singular primary shard for each of our nodes. For example, we
have 6 nodes, and we
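The one-primary-shard-per-node layout described above is set at index creation time; a sketch (index name and counts are illustrative):

```json
PUT /myindex
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}
```

The `index.routing.allocation.total_shards_per_node` setting can additionally cap how many shards of the index land on any one node.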
I still get zero hits whether I write "or" or "and": default_operator: "and"
On Sunday, 22 February 2015 17:44:31 UTC+1, Mahyar SEPEHR wrote:
Hi,
I have a problem with a full search using several columns in an index.
I've tested it on a few versions, 1.1.x and 1.3.x.
Preparation:
(curl -XDELETE
You're best off doing this in Logstash -
http://logstash.net/docs/1.4.2/filters/translate
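A minimal sketch of that filter (the field name type and the dictionary entries are assumptions based on the question):

```
filter {
  translate {
    field      => "type"
    dictionary => [ "1", "OPEN",
                    "2", "CLOSED" ]
  }
}
```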
On 23 February 2015 at 06:01, Lucía Pasarin lupasa...@gmail.com wrote:
Hello! This is a question I posted on StackOverflow (
Hello,
Does elasticsearch have the ability to return the most common *adjacent*
words for a given search query?
That is, given some documents:
{ "text": "To be or not to be, that is the question" }
{ "text": "We know what we are, but know not what we may be" }
{ "text": "If music be the food of love, play
You should check out monitoring tools like Marvel, kopf or elastichq and
the things they report.
On 23 February 2015 at 07:38, John D. Ament john.d.am...@gmail.com wrote:
Hi,
I have a system with about 50k documents indexed. It's a single data
master node (4gb heap I believe) w/ 2 client
Are you running a single cluster with all of those nodes included?
Have you changed the roles that these play (i.e. master, data, client), or
are they the defaults?
On 20 February 2015 at 16:30, Deva Raj devarajcse...@gmail.com wrote:
I listed the instances and their heap size details below.
Medium
Thank you very much for your kind answer. If I encode the html file into
Base64, and use the enclosed script, then all works just fine.
So, Joerg:
1. Is there a way for the JDBC river to transform the nvarchar(MAX) into
Base64 by itself?
2. If not, do you recommend
Use https
See
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
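Concretely, the 1.4 Debian repo line from that page, with https (adjust the version as needed):

```
deb https://packages.elasticsearch.org/elasticsearch/1.4/debian stable main
```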
On 23 Feb 2015, at 06:54, Amos S amos.shap...@gmail.com wrote:
Hi,
Today I tried to install a server using
I have the following requirements.
Every project that I index has 2 values:
1) Id: Each project has a unique ID
2) Name: Name of the project. But Name of the project can change.
For every project, we index a document with different attributes. Say each
document in the index will have an Attribute
Hi,
I have an instance of Elasticsearch 1.4.1 on AWS, Amazon Linux OS. The
version was installed with yum. I am trying to upgrade Elasticsearch to the
latest, 1.4.4.
Since the installation is not in production yet, downtime is not an issue, so I
do not need to do a rolling upgrade
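Assuming the elasticsearch.org yum repo from the original install is still configured, a full-stop upgrade is roughly (a sketch; service name per the RPM layout):

```
sudo service elasticsearch stop
sudo yum update elasticsearch    # picks up 1.4.4 from the configured repo
sudo service elasticsearch start
```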
Apologies to everyone for sending these emails with a digital signature, which
may have caused some issues:
Summary for Joerg:
1. Is there a way for the JDBC river to transform the nvarchar(MAX) into
Base64 by itself? I can do it on SQL Server – see below (1) for David – but it's
Hi,
Today I tried to install a server using ElasticSearch's Ubuntu repo
at http://packages.elasticsearch.org but get 404's.
Tried it multiple times.
Is there something wrong with this repository?
Is anyone aware of a good public mirror available?
Thanks.
--Amos
I just tested the repos and they work fine.
Take a look at
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html#setup-repositories
On 23 February 2015 at 16:54, Amos S amos.shap...@gmail.com wrote:
Hi,
Today I tried to install a server using
I don't see a way to do exactly what you are looking for.
But, with a little effort on the client, you could give the highlighting
feature a try, which could give you something similar.
Or maybe an aggregation with a first-level agg as a filter for the term, then
a Terms agg on the field but with a
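A third approach often used for "most common adjacent words" (not mentioned in the replies above, and all names here are made up) is a shingle analyzer, so the index stores word pairs that a terms aggregation can then count:

```json
PUT /texts
{
  "settings": {
    "analysis": {
      "analyzer": {
        "shingler": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "shingle" ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "text": { "type": "string", "analyzer": "shingler" }
      }
    }
  }
}
```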
And David:
Would it be possible to index text/html given as text rather than Base64?
From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On
Behalf Of David Pilato
Sent: Sunday, February 22, 2015 6:15 PM
To: elasticsearch@googlegroups.com
Subject: Re: Indexing of
David: Do I need to use copy_to into a new dummy column in order for the
highlighting to work?
work???
From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On
Behalf Of David Pilato
Sent: Sunday, February 22, 2015 6:15 PM
To: elasticsearch@googlegroups.com
Subject: Re: