Guys,
Any thoughts about this issue?
Regards
Sergey
On Thursday, April 2, 2015 at 10:47:40 AM UTC+3, Sergey Zemlyanoy wrote:
Hi,
/var/log/nginx/error.log is empty
in /var/log/nginx/access.log this is generated while trying to open
Kibana in Firefox
ip_address - - [02/Apr/2015:09:29:28
What's missing is a culture in which consumers put questions directly
into documentation, to be answered by the next person who reads it and
knows the answer.
Note that it's not good enough to implement append-only authorship for
Q&As; the question and the answer need to feed back into a
hi,
Just want to give an example of an open source project that uses Discourse:
https://discuss.aerospike.com/
NodeBB is probably the closest alternative to Discourse as modern forum
software.
regards,
mingfai
On Fri, Apr 17, 2015 at 4:16 PM, James Green james.mk.gr...@gmail.com
wrote:
What's
Check out the talk I gave at elasticon on entity centric indexing
https://www.elastic.co/elasticon/2015/sf/building-entity-centric-indexes
The video is yet to be released but the slides are there.
Web session analysis is one example use case.
Cheers
Mark
On Friday, April 17, 2015 at 9:11:30
Dear all!
I am trying to perform a cohort analysis with Elasticsearch. For a quick
primer on what cohort analysis actually is, please take a look at this
Wikipedia article (which IMO does not carry a lot of information, but is
good enough to get an idea):
On 16/04/2015 20:34, Attila Nagy wrote:
Hi,
I would like to store IP addresses and subnets (one or more per
document) and I would like to search for them with exact or inclusion
(i.e., whether an IP falls within any of the subnets stored in the documents).
For example a document could have the following:
ip:
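For readers landing on this thread: a common workaround on 1.x (a sketch only — the helper names below are my own, not from the thread) is to store IPv4 addresses as unsigned integers and each subnet as a numeric [min, max] pair, so "is this IP in any stored subnet" becomes a numeric range filter. The client-side conversion might look like:

```python
import ipaddress

def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to its unsigned integer value."""
    return int(ipaddress.IPv4Address(ip))

def subnet_to_range(cidr):
    """Return the (first, last) integer addresses covered by a CIDR subnet."""
    net = ipaddress.IPv4Network(cidr)
    return int(net.network_address), int(net.broadcast_address)

def contains(cidr, ip):
    """True if `ip` falls inside `cidr`, using only integer comparison —
    exactly what a numeric range filter would do server-side."""
    lo, hi = subnet_to_range(cidr)
    return lo <= ip_to_int(ip) <= hi
```

The same trick does not extend cleanly to IPv6, since 128-bit values exceed the range of a long field — presumably why the question distinguishes v4 and v6.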
Can you provide a bit more of the log? This may imply corruption but it's
hard to tell without context.
On 17 April 2015 at 01:32, Ted Smith tedsmithgr...@gmail.com wrote:
Hi,
I just upgraded from 1.5.0 to 1.5.1
I got a bunch of errors; the following, I think, shows the issue:
[nested:
I forgot to mention:
- all document fields are not_analyzed (hence the filtered query)
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
My Spark job is failing.
This is the error:
org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection
error (check network and/or proxy settings) - all nodes failed
My code:
val conf = new SparkConf()
conf.set("es.nodes", "1.1.1.1")
datainfo.saveJsonToEs("index/type")
I need
Thanks Magnus Bäck, you are right.
My Kibana version is 4.0.1; I changed usernum from string to number
and it works. Thank you again, Magnus Bäck.
On Wednesday, April 15, 2015 at 3:58:08 PM UTC+8, way way wrote:
My app outputs data to ELK every 1h.
The JSON format is like:
{"usernum": 208}
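Magnus's fix amounts to mapping usernum as a numeric type before indexing, so Kibana can treat it as a number. A minimal sketch of such a mapping body (the type name "logs" is a placeholder, not from the thread):

```python
import json

# Explicit mapping so `usernum` is indexed as a number rather than a
# string. The "logs" type name is a placeholder.
mapping = {
    "mappings": {
        "logs": {
            "properties": {
                "usernum": {"type": "long"}
            }
        }
    }
}

# This body would be sent when creating the index.
body = json.dumps(mapping, indent=2)
```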
Short question about geo_polygon filter as I haven't found anything about
it in the reference: can the geo_polygon filter handle polygons with holes?
If yes, how are they defined?
Thanks in advance!
Andrej
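For what it's worth (background knowledge, not confirmed in the thread): the geo_polygon filter takes a single flat list of points, so holes would have to go through the geo_shape type, whose GeoJSON polygons treat the first coordinate ring as the outer boundary and any further rings as holes. A sketch, with a placeholder field name:

```python
# GeoJSON polygon with a hole: the first ring is the outer boundary,
# each subsequent ring cuts a hole out of it. Coordinates are [lon, lat],
# and each ring is closed (first point repeated last).
polygon_with_hole = {
    "type": "polygon",
    "coordinates": [
        # outer ring
        [[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0], [0.0, 0.0]],
        # inner ring = hole
        [[4.0, 4.0], [6.0, 4.0], [6.0, 6.0], [4.0, 6.0], [4.0, 4.0]],
    ],
}

# A geo_shape query would then wrap this shape; "location" is a placeholder.
geo_shape_filter = {"geo_shape": {"location": {"shape": polygon_with_hole}}}
```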
Hi,
I was trying a filtered query (the default search type) to fetch the first 8k
out of approx 170k matched records. I noticed that on average a query took
around 500ms (response.getTookInMillis()). But when I tried 4 concurrent
searches over the same dataset (in the ideal scenario the dataset will be
Hi,
Is it possible to read an _fields entry in a native script?
When I try it I get this error message: ElasticsearchIllegalArgumentException[No
field found for [fieldKey] in mapping with types []]
Hi,
The ES guide states that when computing the score, *the same query
normalization factor is applied to every document* - viz
http://www.elastic.co/guide/en/elasticsearch/guide/master/practical-scoring-function.html#query-norm
But when I try this example:
curl -s -XDELETE 'localhost:9200/ttt'
Sorry I overlooked it, you use getTookInMillis()
Maybe the extra time is spent because you use a range filter which is not
cached?
Jörg
On Fri, Apr 17, 2015 at 3:02 PM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
What time do you measure? The ES query time, or the network latency?
The error should be self-explanatory - the Elasticsearch cluster is not
accessible. Make sure that 1.1.1.1 is accessible from the Spark
cluster and that the REST interface is enabled and exposed.
On Fri, Apr 17, 2015 at 11:50 AM, guoyiqi...@gmail.com wrote:
My Spark job is failing.
This is the error:
BTW - the reason I'm bothering with this is more complicated. Example in
the question is already simplified to the core. In my real scenario, I use
bool query composing more fuzzy queries. Then, the resulting score
penalizes some documents when only one field matches, which in that case
has
What time do you measure? The ES query time, or the network latency?
Jörg
On Fri, Apr 17, 2015 at 2:25 PM, Vishal Mahajan vishal...@gmail.com wrote:
Hi,
I was trying Filtered query (default search type) to fetch first 8k out of
approx 170k matched records. I noticed that on an average query
ES query time as given by search response object
(response.getTookInMillis())
Regards,
Vishal
On Friday, April 17, 2015 at 6:32:56 PM UTC+5:30, Jörg Prante wrote:
What time do you measure? The ES query time, or the network latency?
Jörg
On Fri, Apr 17, 2015 at 2:25 PM, Vishal Mahajan
Hi,
I am new to Elasticsearch and am using
https://github.com/jprante/elasticsearch-jdbc and my river setting is:
PUT _river/userentriessdatariver/_meta
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/alterduden",
    "user": "root",
    "password": "",
    "poll": "6s",
Ugh, it's always something really simple. I appreciate the help, you're a
lifesaver!
On Wednesday, April 15, 2015 at 7:35:50 PM UTC-4, Glen Smith wrote:
This is usually due to the field in question being analyzed with the
standard analyzer, which includes the lowercase token filter.
So the
Hi,
Those are the hacks I thought about, although I don't clearly see yet how
that would be useful for subnet searches and v4/v6.
Basically my problem boils down to:
1. having arbitrarily sized (well, 32-bit for v4 and 128-bit for v6) integers
2. searching for range inclusions
The first can be
You cannot expect a single-node cluster to work faster when searched
concurrently. Four concurrent searches require four times the resources,
such as CPU and memory.
Jörg
On Fri, Apr 17, 2015 at 4:21 PM, Vishal Mahajan vishal...@gmail.com wrote:
I have single node cluster. Not sure
Hi,
Simple question: how do I tell Elasticsearch that a field is not analyzed
when I write data through the Hadoop connector via Hive?
Thanks for helping!
Do you round-robin the four concurrent searches over the cluster nodes?
Jörg
On Fri, Apr 17, 2015 at 3:38 PM, Vishal Mahajan vishal...@gmail.com wrote:
I doubt that's the cause as it should also affect sequential searches.
Regards,
Vishal
On Friday, April 17, 2015 at 6:34:24 PM UTC+5:30,
I understand the problem now. I used */type to access all indices with a
given type. That did the trick for my problem.
I'm starting to think that including the date in my index name wasn't a good
idea.
On Thursday, April 16, 2015 at 08:45:48 UTC+2, Costin Leau wrote:
On 4/15/15 3:49 PM,
By creating the index mapping before hand in Elasticsearch. This is
also explained in the docs [1]
[1]
http://www.elastic.co/guide/en/elasticsearch/hadoop/master/mapping.html#explicit-mapping
On Fri, Apr 17, 2015 at 4:22 PM, jean.freg...@gmail.com wrote:
Hi,
Simple question : How do i tell
I doubt that's the cause as it should also affect sequential searches.
Regards,
Vishal
On Friday, April 17, 2015 at 6:34:24 PM UTC+5:30, Jörg Prante wrote:
Sorry I overlooked it, you use getTookInMillis()
Maybe the extra time is spent because you use a range filter which is not
cached?
Thank you, Jörg Prante, for your quick reply.
On Friday, April 17, 2015 at 4:31:54 PM UTC+5, Jörg Prante wrote:
You must delete the river instance userentriessdatariver, and create a
new one.
Jörg
On Fri, Apr 17, 2015 at 12:51 PM, James Crone araf...@gmail.com
Hello elasticUs, I want to know the best way to index text that contains
codes (for example 1/56DTH) along with normal text, without losing the
benefits of analyzed text.
For example, I want to index the text:
Every night I need to restart my robots 1/56DTH
If I use a standard English
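One common answer to this kind of question (a sketch, not necessarily what the list would recommend): a multi-field, so the same text is indexed both through an analyzer for full-text search and as a not_analyzed raw value for exact matches on the codes. Field names below are placeholders:

```python
# Multi-field mapping (1.x syntax): `body` is analyzed for full-text
# search, while `body.raw` keeps the original string untouched so a
# code like "1/56DTH" can be matched exactly. Names are placeholders.
mapping = {
    "properties": {
        "body": {
            "type": "string",
            "analyzer": "english",
            "fields": {
                "raw": {"type": "string", "index": "not_analyzed"}
            },
        }
    }
}
```

A term query on body.raw then finds the exact code, while ordinary queries on body keep all the benefits of analyzed text.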
You must delete the river instance userentriessdatariver, and create a new
one.
Jörg
On Fri, Apr 17, 2015 at 12:51 PM, James Crone arafay...@gmail.com wrote:
Hi,
I am new to Elasticsearch and am using
https://github.com/jprante/elasticsearch-jdbc and my river setting is:
PUT
I have a single-node cluster. I'm not sure what you mean by round-robin in
concurrent searches.
Regards,
Vishal
On Apr 17, 2015 7:26 PM, joergpra...@gmail.com joergpra...@gmail.com
wrote:
Do you round-robin the four concurrent searches over the cluster nodes?
Jörg
On Fri, Apr 17, 2015 at 3:38 PM,
On Friday, July 4, 2014 at 10:09:39 AM UTC-6, JoeZ99 wrote:
The standard procedure to register a repository is to issue a PUT command
to the cluster.
I'd like to automate the process, so that a script can build a
search engine server and register a repository with it.
However,
Dear elasticUs,
I am indexing text data which has special characters, spaces, and
alphanumerics. I am pasting sample indexed data from one document:
{"timestamp": "Fri Apr 17 16:16:47 IST 2015",
"NODE_TYPE_NAME": "CAEPART",
"CONTENT": ["CMATRIX=1.00 0.00 0.00 0.00 0.00 1.00
0.00
Hi,
I'm trying to understand how I can remove multiple documents by query,
where the query is done on some id field (not the ES _id field) in the
document, and I want to delete a number of documents.
For example, I have 5 documents to delete, with internal ids 3, 5, 6, 7, 8.
What is more efficient or best
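Assuming the id field is called internal_id (a placeholder; the post does not name it), the single-round-trip option on 1.x would be a delete-by-query whose body matches all five ids at once via a terms filter, along these lines:

```python
# Delete-by-query body matching every document whose `internal_id`
# (an application-level field, NOT the ES _id) is in the given set.
# The field name is a placeholder.
ids_to_delete = [3, 5, 6, 7, 8]
delete_body = {
    "query": {
        "filtered": {
            "filter": {"terms": {"internal_id": ids_to_delete}}
        }
    }
}
```

This sends one request instead of five individual deletes; whether it is actually cheaper depends on how many documents each id matches.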
Hi all,
We have an elasticsearch v1.4.4-1 server deployed on RHEL6 via RPM, and
we're having trouble getting marvel to display data.
The elasticsearch.log file shows the following error logged over and over
again:
[2015-04-17 18:47:16,046][ERROR][marvel.agent.exporter] [visim-test]
error
Yes, I saw the same issue yesterday on a test system. For me it started
after the node crashed and rebooted. It looks like it's trying to use the
IPv6 address to connect. I didn't really dig too far for a real fix since
this was a test system, but setting network.bind_host to 0.0.0.0 got Marvel
On Friday, 17 April 2015 20:12:43 UTC+2, Kimbro Staken wrote:
Yes, I saw the same issue yesterday on a test system. For me it started
after the node crashed and rebooted. It looks like it's trying to use the
IPv6 address to connect. I didn't really dig too far for a real fix since
this
Hi,
I've been playing around with the completion suggester and it's really
nice. But, there's something I'm either not understanding or it can't be
done.
Let's say I have a title of a song: Somewhere Over the Rainbow
I want to match that title if someone types somewhere, or the first few
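A workaround often suggested for this (a sketch only): the completion suggester matches prefixes of its inputs, so mid-title words need to be indexed as their own inputs, all resolving to the same output. Client-side, that could look like:

```python
def suggest_inputs(title):
    """Build a completion-suggester field value: one input per word of
    the title, so typing a prefix of any word surfaces the full title."""
    words = title.split()
    return {"input": words, "output": title}

doc = suggest_inputs("Somewhere Over the Rainbow")
```

With this, typing "rain" or "somew" both suggest the full title, at the cost of one extra input per word.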
Hi there!
I've got a little problem with the geo_point type.
I have a lot of data with geo_point information indexed like this:
"iplocation": [
  {
    "ip": "92.89.xx.xx",
    "location": {
      "lat": 48.86,
      "lon": 2.35
    }
  }
],
Here is my mapping:
"iplocation": {
  "properties": {
    "ip": {
      "index": "not_analyzed",
      "type": "string",
In the example
here
http://www.elastic.co/guide/en/elasticsearch/reference/0.90/query-dsl-terms-filter.html#_terms_lookup_twitter_example,
it is indicated that the id field can only be for a single document. Is
there a way to do:
curl -XGET localhost:9200/tweets/_search -d '{
  "query" : {
Yes, merges can hurt, but you can throttle
them: http://search-lucene.com/?q=throttle+mergefc_project=ElasticSearch
You can easily correlate search latency with merges, flushes, and refreshes
with something like SPM for Elasticsearch. This could help you figure out
how much you need to
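For reference, the 1.x store-level merge throttle mentioned above is a cluster setting; a sketch of the settings body (the 20mb value is an arbitrary example, not a recommendation):

```python
# Cluster-settings body throttling merge I/O (ES 1.x setting names).
# The 20mb figure is an arbitrary example value.
throttle_settings = {
    "persistent": {
        "indices.store.throttle.type": "merge",
        "indices.store.throttle.max_bytes_per_sec": "20mb",
    }
}
```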
It's a little difficult to see what is currently in field data to check
(you need a heap dump). You could probably keep an eye on existing field
data and see if it increases slower than before but that's a little
abstract.
Really, as long as it doesn't complain about the mapping you're good.
On
Hello Daniel,
Feel free to use a should clause in a bool filter:
http://www.elastic.co/guide/en/elasticsearch/reference/1.4/query-dsl-bool-filter.html
.
Here you can give multiple terms filters, and each of them can point to a
different document/field.
Thanks
Vineeth Mohan,
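Spelling Vineeth's suggestion out as a sketch (field names and values below are placeholders): a bool filter whose should clauses each hold a terms filter, so a document passes if at least one clause matches:

```python
# bool filter with two should clauses; a document passes if either
# terms filter matches. Field names and values are placeholders.
bool_filter = {
    "bool": {
        "should": [
            {"terms": {"tags": ["search", "lucene"]}},
            {"terms": {"category": ["database"]}},
        ]
    }
}
```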
Can someone help me design, or help me understand how I should design,
the class/JSON structure?
I have a document that can have a standard set of fields which I know, but
it also has dynamic fields (a user can create his own fields).
What should the document/JSON structure be for
Hi,
Is there a way to know the minimum memory required for X number of fields
created in an index?
Please help me with some thoughts. For example: if an index has 20 fields,
how much memory does Elasticsearch need to keep the index in memory? Or how
does it work?
You can add it in and it'll map it correctly.
"@timestamp" : {
  "index" : "not_analyzed",
  "type" : "date",
  "doc_values" : true
}
On 17 April 2015 at 10:14, Scott Chapman scottedchap...@gmail.com wrote:
Thanks. The field I wanted to map was @timestamp which
Thanks Mark. Exactly what I was looking for. Once I make the change is
there any way I can tell it is being used properly for a specific field?
On Friday, April 17, 2015 at 10:23:15 PM UTC-4, Mark Walkom wrote:
You can add it in and it'll map it correctly.
@timestamp : {
index :