Hi
I would like to use Kibana. I'm able to load my index, however it didn't
find a time-field name.
I saw it searching for '@timestamp'.
I'm using Java and ObjectMapper to write my data into ES.
I would like to know which field I need to define in order to have this
time-field.
Thank you,
Moshe
--
I have one physical server and I work only on it (no other servers).
On this server I have Elasticsearch 1.4.2 running - I use this version as it is the
last one for which the Elasticsearch OSGi bundle is available. Also on this server I
have GlassFish 4.1 as the Java EE server.
I run an Elasticsearch node client inside my
I've installed Hortonworks Sandbox 2.0 and then installed
Elasticsearch 1.4.0 on it.
Now I want to install Kibana, BUT here is the issue:
the sandbox comes as a terminal, and thus when I run ES, this is what
happens:
I had a field called _timestamp, which I had to add in the meta-fields list
in the advanced settings. Maybe similar?
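For reference, in Elasticsearch 1.x the _timestamp meta-field can be enabled per type in the mapping so that indexed documents get a date field Kibana can pick as the time field; a minimal sketch (index and type names are placeholders):

```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "_timestamp": {
        "enabled": true,
        "store": true
      }
    }
  }
}
```

Alternatively, map an explicit date field in your documents and select it as the time field when configuring the index pattern in Kibana.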
On Tuesday, March 17, 2015 at 10:10:51 AM UTC-5, Moshe Recanati wrote:
Hi
I would like to use Kibana. I'm able to load my index however it didn't
find time-field name.
I a
Hello,
dear community members.
I want to know about the accuracy of sum aggregation results.
Is a result with 100% confidence possible?
http://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-sum-aggregation.html
It has different accuracy for the terms aggregation and the sum
I think we need to release the latest version we have.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 17 March 2015 at 16:46, Jun Ohtani joht...@gmail.com wrote:
Hi,
I’m not sure about that.
Did you install ICU plugin version 2.4.1 on Elasticsearch 1.4.3?
If you
I need to use PayloadTermQuery from Lucene.
Does anyone know how I can use this in ElasticSearch?
I am using ES 1.4.4, with the Java API.
In Lucene, I could use this by directly instantiating PayloadTermQuery, but
there are no APIs in ES QueryBuilders for this.
I don't need a query parser,
I was trying to work out the example from the link below:
Search Engine with PHP Elasticsearch - YouTube
https://www.youtube.com/watch?v=3xb1dHLg-Lk
I cannot index the document (add.php example).
Only when I remove the indexing code does the HTML form appear.
?php
require
I was searching Google for ES performance and found some documents.
They say modifying the ES config is good for ES performance.
So, I edited my ES config like below.
*/etc/elasticsearch/elasticsearch.yml*
index.number_of_replicas: 0
index.number_of_shards: 3
index.translog.flush_threshold_ops: 5
I keep getting an error like this: Courier Fetch: 5 of 270 shards failed.
in Kibana 4.0.1.
After some Googling, I think it has something to do with @timestamp not
existing for some of my data. But I'm not sure, because
https://groups.google.com/d/topic/elasticsearch/L6AG3dZOGJ8/discussion was
Like the error suggests, No mapping found for [@timestamp] in order to
sort on
Kibana expects a @timestamp field - make sure to push that in your source
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Lucene.NET committer and
I am a newbie in Elasticsearch and I don't understand how I should work with
transport client connections. Should I use a singleton for Client, something like
class ElasticClientManager {
private static Client client;
public static Client getClient(){
if (client==null) {
Settings settings
Take a look at
http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html
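For context, in Elasticsearch 1.x doc values are switched on per field in the mapping; a minimal sketch (type and field names are placeholders; doc values require a not_analyzed string or a numeric/date field):

```
{
  "mappings": {
    "my_type": {
      "properties": {
        "my_field": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}
```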
On 16 March 2015 at 20:29, chris85l...@googlemail.com wrote:
Hello Mark,
Thanks for your answer! We are using the default values, so no doc_values.
I did some research about it and it sounds very
es-hadoop doesn't depend on akka, only on Spark. The Scala version that
es-hadoop is compiled against matches the one used by the targeted Spark
version for each release - typically this shouldn't pose a problem.
Unfortunately, despite the minor version increments, some of the Spark APIs
There are practical limits, based on your dataset, node sizing, version etc.
You'd be better off segregating indices by a higher level definition (eg
customer number, 1-999, 1000-1999 etc), using routing and then aliases on
top. This way you conceptually get the same layout as a single index per
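A filtered alias with routing along those lines might look like this (index, alias, and field names are placeholders):

```
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "customers_0-999",
        "alias": "customer_42",
        "routing": "42",
        "filter": { "term": { "customer_id": "42" } }
      }
    }
  ]
}
```

Queries through the alias then hit a single shard and only see that customer's documents, which is what gives you the per-customer layout on top of a shared index.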
Could you help me model an architecture for storing posts and comments in
Elasticsearch?
Currently I have a simple data structure - I store posts as documents
in an ES index. I search that index to find posts with particular words.
Posts are not related to anything. Every post has
For the record, I had to add the _timestamp field into the meta-fields in
the Kibana advanced configuration settings ...
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an
The concrete implementation depends on what you store in the payload (e.g.
scores)
Jörg
On Tue, Mar 17, 2015 at 7:01 AM, Devaraja Swami devarajasw...@gmail.com
wrote:
I need to use PayloadTermQuery from Lucene.
Does anyone know how I can use this in ElasticSearch?
I am using ES 1.4.4, with
I am quite a newbie to Elasticsearch. Could you explain with Java code what you mean?
Tuesday, March 17, 2015, 9:46 -07:00, from aa...@definemg.com:
Is there a reason not to just specify the IP address and to try and rely on
multicast?
I realize this is all on one node as you have stated that, but that
I am a newbie in Elasticsearch and I don't understand how I should work with
transport client connections. Should I use a singleton for Client, something like
class ElasticClientManager {
private static Client client;
public static Client getClient(){
if (client==null) {
Settings settings
According to http://www.kernelcrash.com/blog/nfs-uidgid-mapping/2007/09/10/
the method described in that post only applies to old, out of date,
systems.
I also found no mention of a map file in
http://linux.die.net/man/8/mount.nfs or http://linux.die.net/man/5/nfs
The closest I found to
I'd recommend that you use Logstash with the rabbitmq input instead. Rivers
are being deprecated so fewer people will likely be able to help.
On 17 March 2015 at 10:23, Olalekan Elesin elesin.olale...@gmail.com
wrote:
After properly setting up the RabbitMQ river for Elasticsearch, I issued the
Look at this example on how to use multiple filters:
http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-filtered-query.html#_multiple_filters
You should wrap them in a bool filter
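For example, two term filters combined under a bool filter inside a filtered query might look like this (field names and values are placeholders):

```
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            { "term": { "status": "active" } },
            { "term": { "category": "news" } }
          ]
        }
      }
    }
  }
}
```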
2015-03-17 15:32 GMT-03:00 jrkroeg jrkr...@gmail.com:
I'm trying to get the top 100
We do recommend using unicast in production.
On 17 March 2015 at 09:46, aa...@definemg.com wrote:
Is there a reason not to just specify the IP address and to try and rely
on multicast?
I realize this is all on one node as you have stated that, but that seems
even more reason that it would
iSCSI can be mounted as a block device that you can format however you
want, if you do it that way the uid problem won't show up as the system
sees it as a local FS.
On 17 March 2015 at 09:00, David Reagan jer...@gmail.com wrote:
@Mark Walkom, So, I'm looking into iscsi. From what I have
After properly setting up the RabbitMQ river for Elasticsearch, I issued the
command GET my_ip:9200/_river/my_river/status,
{
  "_index": "_river",
  "_type": "my_river",
  "_id": "_status",
  "_version": 2,
  "found": true,
  "_source": {
    "node": {
      "id": "-nA8mbDEQ4e3l4HVqlIToA",
Thank you. I did it this way:
Settings settings = ImmutableSettings.settingsBuilder()
    .put("cluster.name", "elasticsearch")
    .put("client.transport.sniff", true).build();
Client client = new TransportClient(settings)
I agree with you that in a single-node environment only the transport layer should
be used. But I want to know how to make the node client work, because maybe I will
need it in the future and I want to know what I can do with the Elasticsearch Java API.
Tuesday, March 17, 2015, 11:56 -06:00, from Aaron Mefford
Take a look at
http://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html
for other settings.
On 17 March 2015 at 00:48, Hoon Cho deman...@gmail.com wrote:
I was searching Google for ES performance and found some documents.
They say modifying the ES config is good for ES
This is what I use in my code; not sure how correct it is given the abysmal
state of the Java API documentation.
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.client.Client;
import
It's my bad.
I defined the index in the wrong way: once I moved the properties under
user_activity_v2 to _default_,
it started working.
Chen
On Tuesday, March 17, 2015 at 5:36:57 PM UTC-7, Chen Wang wrote:
the index definition is this:
settings: {
index: {
You're close:
elasticsearch-hadoop snapshot (aka dev aka master) works on spark 1.2, 1.1
and 1.0, both core and sql
elasticsearch-hadoop beta3 (not snapshot) works on spark 1.1 and spark 1.0,
both core and sql
elasticsearch-hadoop beta2 (not snapshot) works on spark 1.0 (core and sql)
The support
This is a super timely blog from the Found crew -
https://found.no/foundation/multi-tenancy/
On 17 March 2015 at 14:11, Mark Walkom markwal...@gmail.com wrote:
There are practical limits, based on your dataset, node sizing, version
etc.
You'd be better off segregating indices by a higher
Folks,
I have defined a nested object with a multi_fields attribute: the cat in
store_purchase.
I loaded some data into ES:
{
  "_index": "user_activity_v2",
  "_type": "combined",
  "_id": "1229369",
  "_score": 1,
  "_source": {
    "store_purchase": [
the index definition is this:
"settings": {
  "index": {
    "number_of_shards": 7,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "analyzer_raw": {
It strongly depends on the method you use to convert XML to JSON and
vice versa.
Maybe this plugin can give you some hints about Jackson XML parsing
and formatting:
https://github.com/jprante/elasticsearch-xml
Do not expect XML schema, validation, or XSL stylesheet, this is not
@timestamp is generated automatically by Logstash; any documents not added
by Logstash will not have it.
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Lucene.NET committer and PMC member
On Wed, Mar 18, 2015 at 12:51 AM,
I have a requirement to index and search millions of XML documents related
to mortgage (Uniform Closing Dataset XMLs).
Indexed data will be requested by web services of many internal
applications through a REST API.
Output should be in XML format.
How do I implement this in ELK stack? How to
Yes!
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 17 March 2015 at 11:23, Александр Свиридов ooo_satu...@mail.ru wrote:
I am newbie in elastic and I don't understand how should I work with
transport client connections. Should I use singleton for Client, something
@timestamp has always been applied automatically. The only time I've ever
touched it is when I've adjusted the date to what the log message holds,
rather than when the log message is processed by Logstash.
So, I have no idea where it comes from, or how I could have turned it off
on something.
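Adjusting @timestamp to the time carried in the log message is typically done with Logstash's date filter; a minimal sketch (the source field name and the pattern are assumptions):

```
filter {
  date {
    # parse the log's own time field into @timestamp
    match => [ "logdate", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```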
Is
For anyone who has a similar problem, I have figured out the issue. By
default, it appears to me that only the _all field is searched. The _all
field contains pharmacy_docs but not pharmacy. If the search is
modified to search the name fields then the search works. And if you
wanted to
I created an example payload plugin
https://github.com/jprante/elasticsearch-payload
but I can't get a custom per-field similarity to work. Setting up a field
with a prebuilt similarity works flawlessly, but with a custom one, it is
not even listed in the mapping.
It looks like
You can use Logstash to change the XML into JSON, but you will need to do
the JSON to XML output yourself.
On 17 March 2015 at 15:17, Venkat Ankam ven...@cloudwick.com wrote:
I have a requirement to index and search millions of XML documents related
to mortgage (Uniform Closing Dataset XMLs).
Hi,
I’m not sure about that.
Did you install ICU plugin version 2.4.1 on Elasticsearch 1.4.3?
If you would like to install the ICU plugin on Elasticsearch 1.4.3, you should use
ICU plugin 2.4.2.
bin/plugin install elasticsearch/elasticsearch-analysis-icu/2.4.2
Jun Ohtani
Hi,
Try using “dynamic_templates”:
http://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-root-object-type.html#_dynamic_templates
I hope this helps you out.
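A minimal dynamic_templates sketch for a 1.x mapping (type, template, and field names are placeholders) that maps every new string field as not_analyzed:

```
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}
```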
Jun Ohtani
joht...@gmail.com
blog : http://blog.johtani.info
twitter :
Thank you for the summary - you are confirming (as a sanity check for
myself):
elasticsearch-hadoop beta3 (not snapshot) on spark core 1.1 only
elasticsearch-hadoop-beta3-SNAPSHOT with spark core 1.1, 1.2 and 1.3 -- as
long as I don't use Spark SQL when using 1.2 and 1.3
Costin - I am amazed
What sort of data do you have, time based or static? If it's the former
then going with any arbitrary number is less of a problem, as you can change
it at the next rollover period. If it's static then 4 would be a good start.
There aren't any metrics around this, other than *not* creating a large
I'm trying to get the top 100 documents which match the filtered criteria,
and sort by distance from pin.location.
Here's my query - it isn't resulting in an error, but it should be returning
results:
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": [
        {
          "term": {
            "searchTerm1": "N
I typically suggest starting with the default of 5 shards. A single shard can
hold several tens of gigabytes. Certainly in your case it seems like 20 shards
is overkill for a 4-node cluster.
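Since the shard count is fixed at index creation time, it is set in the index settings; a minimal sketch (index name and counts are placeholders):

```
PUT /my_index
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
```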
On Mar 17, 2015, at 11:00 AM, John S bun...@gmail.com wrote:
Hi All,
Is there any best
Did you ever find a good solution for this? I am trying to solve the same
problem (just sorting, not range filtering).
On Monday, January 26, 2015 at 2:47:30 AM UTC-5, Eric Smith wrote:
I am trying to figure out some sort of indexing scheme where I can do
range filters on semantic versions
There are plenty of spark / akka / scala / elasticsearch-hadoop
dependencies to keep track of.
Is it true that elasticsearch-hadoop needs to be compiled for a specific
spark version to run correctly on the cluster? I'm also trying to keep
track of the akka version and scala version. i.e, wil
http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html#_enabling_doc_values
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Lucene.NET committer and PMC member
On Tue, Mar 17, 2015 at 5:35 AM,
I have one physical server and I work only on it (no other servers).
On this server I have Elasticsearch 1.4.2 running - I use this version as it is
the last one for which the Elasticsearch OSGi bundle is available. Also on this
server I have GlassFish 4.1 as the Java EE server.
I run an Elasticsearch node client inside my
Hi team, I am new to Logstash and Elasticsearch.
In my Logstash I get a lot of logs; a few examples are celery-logs, nginx-logs,
and management-logs.
I have created queries like category==celery-logs, category==nginx-logs and
category==mgmt-logs,
and created three panels attaching each specific
I noticed some strange behavior of the highlighter. It works differently
from search.
See the example.
request:
{
  "highlight": {
    "pre_tags": [
      "[b]"
    ],
    "post_tags": [
      "[/b]"
    ],
    "fields": {
      "message": {}
    }
  },
  "query": {
Hi everyone,
I want to run the query below but I am getting no results. Please let me know
if it is feasible.
{
  "query": {
    "bool": {
      "must": [
        {
          "nested": {
            "path": "tokens",
            "query": {
Hi all,
Is there any way to achieve field comparison?
If I index a type
{
  "manager": "<id of the manager, a string>",
  "teamMember": "<id of the team member, a string>"
}
how do I write a query for when a manager is also a teamMember?
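One way to compare two fields of the same document in Elasticsearch 1.x is a script filter; a sketch along those lines (field names taken from the question; dynamic scripting must be enabled on the cluster):

```
{
  "query": {
    "filtered": {
      "filter": {
        "script": {
          "script": "doc['manager'].value == doc['teamMember'].value"
        }
      }
    }
  }
}
```

Note that a script filter runs against every document considered, so it can be slow on large indices.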
I have a JSON like this for a document
"_source": {
  "timestamp": 213234234,
  "links": [
    {
      "mention": "audi",
      "entity": { "rank": 3, "name": "some name" }
    },
    {
      "mention": "ford",
      "entity": { "rank": 0, "name": "some other name" }
    }
  ]
}
I'm interested in retrieving only the mention
Hi Adrien,
it works fine: docFieldStrings(_index) and docFieldStrings(_uid)
Thanks for your help.
On Monday, March 16, 2015 at 9:41:46 PM UTC+1, Adrien Grand wrote:
I haven't tried, but getting the value of the _index field should work.
On Mon, Mar 16, 2015 at 12:42 PM, Sergey Novikov
No one? :-(
Petr
On Wednesday, February 18, 2015 at 12:35:15 UTC+1, Petr Janský wrote:
Hi Lukas,
thank you for your answer. I checked the Proximity Match -
match_phrase and it's what I was looking for. I'm just not able to find a way
to create queries like:
1. Obama BEFORE Iraq - the first
No one? :-(
Petr
On Friday, February 20, 2015 at 15:29:15 UTC+1, Petr Janský wrote:
Hi there,
I've tried to use shingles for getting bigrams and trigrams:
curl -X POST 'localhost:9200/idnes/' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "czech_stop": {
          "type":
While ES does compress by default, it also stores data in data structures,
that increase the size of the data. The net is that your data will be much
larger than the equivalent log file gzipped. However, running logstash to
ingest 1.5 years of logs may well take much longer than you would
It'll be able to read geoip.coordinates if you point to it.
On 17 March 2015 at 09:07, Michael bun...@gmail.com wrote:
What do you mean exactly?
These are the fields I'm able to obtain, whereas geoip.coordinates is
built by using
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
On Tue, Mar 17, 2015 at 8:56 AM, Vlad Zaitsev vest...@gmail.com wrote:
But it seems that the highlighter ignores the “and” operator and highlights any
term from the queries.
It's much more than that. For the most part highlighters reduce the query
to a list of terms blindly. Some do phrases. They don't
I plan to store floats in the payload and boost the score
(multiplicatively) based on the average value of the payloads over the
occurrences of the matching term in the document, i.e., exactly as in
AveragePayloadFunction in Lucene.
On Tue, Mar 17, 2015 at 2:16 AM, joergpra...@gmail.com
@Mark Walkom, So, I'm looking into iscsi. From what I have learned so far,
you actually format the LUN with whatever file system you want. So,
wouldn't the gid/uid issue show up there as well, if I formatted to ext3 or
ext4? Since Ubuntu would treat it like a normal partition and use typical
linux
I imagine the right way to do this is with a plugin but I'm not 100% sure.
On Tue, Mar 17, 2015 at 11:47 AM, Devaraja Swami devarajasw...@gmail.com
wrote:
I plan to store floats in the payload and boost the score
(multiplicatively) based on the average value of the payloads over the
What do you mean exactly?
These are the fields I'm able to obtain, whereas geoip.coordinates is built
by using
add_field = [ [geoip][coordinates], %{[geoip][longitude]} ]
add_field = [ [geoip][coordinates], %{[geoip][latitude]} ]
in my logstash.conf.
geoip.city_name
Is there a reason not to just specify the IP address and to try and rely on
multicast?
I realize this is all on one node as you have stated that, but that seems
even more reason that it would be little issue to specify the IP. While
multicast makes it easy to stand up a cluster in an ideal
Hello!
I'm a newbie in Elasticsearch, so forgive me if the question is lame.
I have implemented a custom plugin using a custom lemmatizer and a
tokenizer. The simplified class sequence: