Install the carrot2 plugin and see if it fits your requirements:
http://download.carrotsearch.com/lingo3g/manual/#section.es
Jörg
On Mon, Feb 24, 2014 at 7:00 AM, prashant.agrawal
prashant.agra...@paladion.net wrote:
Hi Hannes,
Thanks for the info; I also came to know about Lingo3G/Carrot
Hi all,
I am using a nested object, and there is strange behavior with a field in
the nested object.
All sample mappings and data: https://gist.github.com/johtani/9183848
1. _source is stored
2. book.title is not stored
3. book.contents is stored
If the fields parameter specifies "book.title" and "book.contents",
If you want classification then Carrot2/Lingo3G won't be of much use.
In short, classification is assigning an unlabeled example to a pool
of (previously known or computed) labels; Lingo3G and Carrot2 are for
clustering (finding labels in an otherwise untagged set of documents
or search results).
Also the kernel complains about too many connections being made at
once on the joining node.
(This seems to occur after 30 nodes have joined the cluster.)
TCP: Possible SYN flooding on port 9300. Sending cookies. Check
SNMP counters.
On Fri, Feb 21, 2014 at 6:44 PM, Thibaut Britz
I was writing some tests to check if my mappings were being deployed
correctly, and came across this: if you have a geo_point field inside a
nested object, it will inherit the 'path' attribute from the nested object.
I.e. if you create an index like this:
curl -XPOST 'localhost:9200/test' -d
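The truncated command above might look something like this sketch (the type and field names "doc", "places", and "location" are hypothetical, not from the original post):

```shell
# Sketch: create an index with a geo_point field inside a nested object,
# the setup described above. Type/field names are hypothetical placeholders.
curl -XPOST 'localhost:9200/test' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "places": {
          "type": "nested",
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'
```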
Hi,
I am a newbie about Elasticsearch. I am trying ES for some of my projects.
I want to connect ES cluster using Thrift with python client. I installed
plugin to all machines and restarted them. Then, I try to connect via
Thrift and it gives me a socket.timeout: timed out error. I can connect
Hi Costin,
What I'd love to see is a step-by-step tutorial on getting ES and Hadoop
working together.
Is there somewhere I can find something like this?
Regards,
Yann
On Thursday, February 20, 2014 at 16:25:28 UTC+1, John Pauley wrote:
Any more tutorials, say append to list?
On Wednesday, February 19,
Hi,
I would like to know whether a release date for the first .NET client has
already been planned, and if so, when it is.
Thanks for your replies.
Loïc
On Thursday, October 24, 2013 at 16:47:20 UTC+2, cdhall wrote:
Yes, there will be an official client for .NET.
Our decision to create our own
Hi guys, I'm hoping somebody on here can help me, I feel like I'm just
missing something really basic but I can't for the life of me figure out
what... I have the following index set up (it's very cut down for clarity's
sake):
{
  index: 'products',
  body: {
    settings: {
Greetings everyone,
I've run into the following issue, and I'm not sure whether it's an ES
peculiarity or something is wrong with my setup.
1. Created an ES mapping with two string fields, one of them with a
customized boost.
2. Performed a term search over the default catch-all field and observed
Hi all,
I was wondering if it's possible to do RRA-like consolidation on collectd
indices stored in my ES cluster?
I've seen this kind of option in EMR (Hadoop), so I thought it was possible to
do nearly the same with ES.
Any clues appreciated :)
Regards,
Rachid
--
You received this message
Hey @David, since Luca had zeroed in on the issue, I'll skip providing the
query this time.
@Luca - Thanks! That was exactly the problem! Another thing that is
inconsistent right now is that I can use date math against the _search
endpoint for filters but not against the _count endpoint in 0.90:
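For illustration, date math in a range filter against the _search endpoint might look like this sketch (the index name, field name, and date-math expression are placeholders, not from the original post):

```shell
# Sketch: date math ("now-7d/d") in a range filter on _search.
# "myindex" and "timestamp" are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "range": {
          "timestamp": { "gte": "now-7d/d" }
        }
      }
    }
  }
}'
```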
Have you looked at the video? It does exactly that.
Is there something missing?
On 2/24/2014 12:41 PM, Yann Barraud wrote:
Hi Costin,
What I'd love to see is a step-by-step tutorial on getting ES and Hadoop working together.
Is there somewhere I can find something like this?
Regards,
Yann
On Thursday
Argh, knew I'd forget something, the blooming ES version number! I'm on the
latest 1.0 version.
On Monday, February 24, 2014 11:39:00 AM UTC, Garry Welding wrote:
Hi guys, I'm hoping somebody on here can help me, I feel like I'm just
missing something really basic but I can't for the life of
Hey,
first, you should really upgrade elasticsearch, this is quite an old beta
version.
Second, if you are using something like Amazon EC2, multicast is disabled.
You might want to test with unicast in that case, see
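The unicast setup mentioned above can be sketched in elasticsearch.yml (the host addresses are placeholders):

```yaml
# elasticsearch.yml — disable multicast and list the other nodes explicitly.
# The host IPs below are hypothetical placeholders.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```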
Hi All,
At the moment in Kibana I can use the stats or terms panel with terms_stats
mode to show stats by term.
So for example I could get a total for term A. What I would like to do
however is show the value {total for term A} / {total for all terms}.
So if the total for term A is 5 and the
It will be possible in the future once this issue is fixed:
https://github.com/elasticsearch/elasticsearch/issues/2114
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On February 24, 2014 at 13:47:07, Thomas Andres (thomas.and...@ergon.ch) wrote:
I
Hi,
I'm going my way using the Hortonworks Sandbox with Elasticsearch Hadoop.
I can't get Hive to query Elasticsearch. It seems it attempts to connect to ES
by hostname while I provided an IP address in the Hive query...
I found a workaround by setting up /etc/hosts in the sandbox, but can't figure out
why it
How about, while the scan is being done, let updates go to the old index
but with an extra field? Once the alias points to the new index, it's just
a query to fetch the fields with that new field from the old index and then
reindex them into the new one. If the alias changing/new index creation
Hello:
I have a cluster in yellow with 27 instances, 10 shards and 2 replicas.
size: 56.2G (129G)
docs: 158.935.862 (159.132.956)
I have the head plugin refreshing quickly, and the replicas are jumping
from instance to instance, always trying instances with no disk space.
All the instances
Hey,
do you use forced allocation awareness for some indices or the
cluster, which might lead to this?
--Alex
On Thu, Feb 20, 2014 at 10:12 PM, kondapallinar...@gmail.com wrote:
Hi,
we had a network glitch, then we bounced all four nodes.
After restarting we are
Thank you. But what if multiple nodes have to be started to balance the
load?
Regards,
FFA
On Friday, February 21, 2014 8:50:19 PM UTC-6, David Pilato wrote:
Just set transport port in elasticsearch.yml.
But you can stay with defaults. As long as you start only one node, only
9300 needs
Hey,
if there is an error, can you please open a GitHub issue? However, the
envelope shape expects you to set an upper-left and lower-right boundary.
Your coordinates look more like lower-left and upper-right (meaning you
might create quite a huge envelope actually), which obviously does not
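An envelope with upper-left then lower-right coordinates might be sketched like this (the index, field name, and coordinates are placeholders; points are in GeoJSON [lon, lat] order):

```shell
# Sketch: envelope = [upper-left, lower-right] in [lon, lat] order.
# Upper-left is (minLon, maxLat); lower-right is (maxLon, minLat).
# "myindex" and "location" are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "geo_shape": {
          "location": {
            "shape": {
              "type": "envelope",
              "coordinates": [ [ -45.0, 45.0 ], [ 45.0, -45.0 ] ]
            }
          }
        }
      }
    }
  }
}'
```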
Hey,
please be more verbose than that. Use the nodes stats and nodes info APIs
to find out how many files the elasticsearch process is allowed to open,
and how many files are open. Show us the output here.
Any special Linux distribution or any special security settings which might
prevent
Hey,
you have to change the mapping to accept your custom date time format. You
can set it in your mapping using the format parameter, like this:
{"sales_date": {"type": "date", "format": "-MM-dd ... AND THE REST HERE"}}
It uses the Java SimpleDateFormat conventions for defining the date format.
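A fuller sketch of such a mapping (the index/type names and the exact format pattern here are hypothetical, not from the original post):

```shell
# Sketch: set a custom date format on a date field via the format parameter.
# "myindex", "sales", and the "yyyy-MM-dd HH:mm:ss" pattern are placeholders.
curl -XPUT 'localhost:9200/myindex/sales/_mapping' -d '{
  "sales": {
    "properties": {
      "sales_date": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" }
    }
  }
}'
```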
Hey,
isn't it all in the provided link from the first reply? Maybe you can be a bit
more specific about your problem and what info you are missing, and we can
try to help...
--Alex
On Sun, Feb 23, 2014 at 9:48 PM, Daniel Winterstein
daniel.winterst...@gmail.com wrote:
Thank you Hariharan, but
Hey,
the recently released logstash 1.4 beta includes support for elasticsearch
1.0. See http://www.elasticsearch.org/blog/logstash-1-4-0-beta1/
However, when using the elasticsearch_http output you don't need the
node_name directive, for example. Please try whether you can do a seamless
switch, it
Hello Alex,
To take the example from that page:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/_mapping' -d '
{
  "tweet" : {
    "properties" : {
      "message" : { "type" : "string",
        // What can go here??
        // I've seen analyzer, store, enabled used in passing in
        // examples without
What are the expected semantics of the from/to fields in a DateRange
aggregation?
Are the from/to values included? Should there be an include_lower/
include_upper option like with filters?
I want the aggregation to include the lower and upper values, but I
discovered today that it doesn't
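For reference, in a date_range (and range) aggregation the from value is included and the to value is excluded. A sketch of such an aggregation (the index name and dates are placeholders; the field name follows the later post in this thread):

```shell
# Sketch: date_range aggregation. "from" is inclusive, "to" is exclusive.
# "myindex" and the dates are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search?search_type=count' -d '{
  "aggregations": {
    "intentDate": {
      "date_range": {
        "field": "intentDate",
        "ranges": [
          { "from": "2014-01-01", "to": "2014-02-01" }
        ]
      }
    }
  }
}'
```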
Ahh ok. I'll have to give the keyword analyzer a try then!
Thanks,
Jamil
On Friday, February 21, 2014 2:23:06 PM UTC-8, Binh Ly wrote:
Assuming you have no prior mappings, your first example will put @message
through the standard analyzer, i.e. it will chop it up into pieces using
this
I'll add more info:
I have the head plugin refreshing quickly and I see that some replicas
are constantly jumping between nodes. Does it make any sense? For instance,
in one of the nodes I see the replica of shard 1 in yellow, then the replica of
2, then no shard nor replica at all, then the
version of the Elasticsearch node:
{
  "ok" : true,
  "status" : 200,
  "name" : "e-43ea",
  "version" : {
    "number" : "0.90.7",
    "build_hash" : "36897d07dadcb70886db7f149e645ed3d44eb5f2",
    "build_timestamp" : "2013-11-13T12:06:54Z",
    "build_snapshot" : false,
    "lucene_version" : "4.5.1"
  },
  "tagline" : "You Know,
I am not sure what the complaints are all about.
Over the past 20 years, my best practice has been to treat the installed
configuration as a template that is subject to change upon reinstallation.
Then, I always create my own configuration and point the server to it, and
never point a server to
You define your custom ports for additional nodes by
setting transport.tcp.port and http.port in elasticsearch.yml,
and accordingly punch firewall rules only for those ports.
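The per-node port settings described above can be sketched in elasticsearch.yml (the port values are placeholders):

```yaml
# elasticsearch.yml for a second node on the same host — pin both ports
# so firewall rules can target them. The values are hypothetical placeholders.
transport.tcp.port: 9301
http.port: 9201
```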
On Monday, February 24, 2014 8:56:31 AM UTC-6, FFA wrote:
Thank you. But what if multiple nodes have to be started to
Hi,
I have a field whose value is between 0 and 1.
I need to draw a bar graph which tells how many feeds have a value of this
field between (0, .1), (.1, .2), (.2, .3), (.3, .4) and so on.
Can I do this using Kibana?
Thanks
Vineeth
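Outside of Kibana, the bucketing described above might be sketched with a histogram aggregation (assuming ES 1.0 aggregations; the index and field names are placeholders):

```shell
# Sketch: bucket a 0..1 numeric field into 0.1-wide bars.
# "myindex" and "score" are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search?search_type=count' -d '{
  "aggregations": {
    "score_buckets": {
      "histogram": { "field": "score", "interval": 0.1 }
    }
  }
}'
```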
Yup, this is a known bug. Since _boost is being deprecated and replaced by
function_score, this will likely not be fixed. For now if you want to sort
on a boost value, either remove the _boost from your mapping, or
introduce another field that you don't refer to from _boost.
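The workaround of sorting on a separate field might be sketched like this (the index name and the field name "rank" are hypothetical):

```shell
# Sketch: instead of sorting on _boost, sort on an ordinary numeric field.
# "myindex" and "rank" are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": { "match_all": {} },
  "sort": [ { "rank": { "order": "desc" } } ]
}'
```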
Unfortunately not at the moment. But if you're up to it, you can probably
easily write a custom panel that will do this for you.
Hello everyone,
this weekend I've had the pleasure of reading "Mastering ElasticSearch"
(http://www.packtpub.com/mastering-elasticsearch-querying-and-data-handling/book)
by Rafał Kuć. Everyone who is in the Lucene/Solr/ElasticSearch ecosystem already
knows him, also for his blog and the
Technically, you can probably do this with a little scripting and the
script_fields functionality:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-script-fields.html
However, Kibana does not expose this at the moment so for now, you'll need
to produce the
Some ideas:
1) You can turn dynamic mapping to false and then explicitly specify only a
handful of fields that will be indexed/searchable. Or, if you don't want to
do this, just send in a smaller JSON document with only the fields you want
searched or indexed.
2) RAM is mostly dependent on
I have two indices each storing a specific type Products and Stores.
Some of the attribute names of each type overlap. For instance, both
Products and Stores have a name attribute. How can I search across both
indices while giving different boost values to the same attribute? I want
to
If I understand you correctly, let's say you have one or more indexes. Then
you have 2 types named "product" and "store". "product" and "store" both
have the field "name", but you want to boost the product name independently
of the store name. You should be able to do something like this:
{
query: {
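The truncated query above might continue along these lines (a sketch using a query_string query; the index name, boost values, and search text are placeholders):

```shell
# Sketch: boost "name" per type by prefixing the field path with the type.
# "myindex", the ^3 boost, and "coffee" are hypothetical placeholders.
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": {
    "query_string": {
      "fields": [ "product.name^3", "store.name" ],
      "query": "coffee"
    }
  }
}'
```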
I'm seeing duplicate concatenated values when using the combo analyzer for
_all with a multi-field defined in a dynamic template.
E.g. instead of seeing "Foo Bar" when listing the _all terms aggregation,
I'm seeing "Foo Bar Foo Bar" for the token because my multi-field defines 2
sub-fields. If
Thanks, nice list of ES HA reminders.
On Wednesday, December 12, 2012 3:22:24 AM UTC-7, Karel Minařík wrote:
Hello,
first, there's a great presentation from Shay on the topic available at
http://www.elasticsearch.org/videos/2011/08/09/road-to-a-distributed-searchengine-berlinbuzzwords.html
That works perfectly, Thanks! I had no idea you could preface the field
paths with the type for boosting like that.
On Monday, February 24, 2014 7:08:31 PM UTC-5, Binh Ly wrote:
If I understand you correctly, let's say you have one or more indexes.
Then you have 2 types named product and
I read Elasticsearch Server several months ago and found it helpful. But
I'm hesitant to get any more books that aren't focused on 1.x - hopefully
we'll see some pop up soon (nudge nudge).
Ok.
Good to know, I wish I knew this a few days ago ;-) I was really losing
my mind over this!!
Yet another reason for dropping index-time defined boost, I guess. I
really wish there were some way of defining per-document boost at index
time.
Thanks!
On Monday, February 24, 2014 5:14:48
Thanks for your response.
I have to use numeric and date fields in range queries, so the string field
of a multi-field will not work.
Any other thoughts?
On Friday, February 21, 2014 6:44:09 PM UTC+5:30, Binh Ly wrote:
You can do a multi-field on numeric fields with a string/not_analyzed
field
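The multi-field arrangement suggested above might be sketched like this (ES 0.90 multi_field syntax; the index, type, and field names are hypothetical):

```shell
# Sketch: a numeric field plus a not_analyzed string sub-field.
# "myindex", "mytype", "price", and "as_string" are hypothetical placeholders.
curl -XPUT 'localhost:9200/myindex/mytype/_mapping' -d '{
  "mytype": {
    "properties": {
      "price": {
        "type": "multi_field",
        "fields": {
          "price": { "type": "integer" },
          "as_string": { "type": "string", "index": "not_analyzed" }
        }
      }
    }
  }
}'
```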
Hi all,
I am wondering whether the delete API supports returning the content of the
deleted documents, because I need to know the content of the deleted
documents.
Of course, I can get the document first and then delete it with two REST
calls. But is there any API to achieve it?
Any
Hi,
I am using elasticsearch embedded in a tomcat 7 webapp container
(everything running under java 7.) All libs for elasticsearch are in
WEB-INF/lib. In v0.90 everything is running swimmingly. We upgraded to v1.0
(libs and all, and paid attention to breaking API calls) but now on Ubuntu
Linux
Snippet from
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping.html:
Explicit mapping is defined on an index/type level. By default, there
isn’t a need to define an explicit mapping, since one is automatically
created and registered when a new type or new field is
Oops, I copy-pasted the aggregation I was hacking about with.
The actual aggregation looks like this (I had added an additional
millisecond as a hack to include the upper bound):
"aggregations" : {
  "intentDate" : {
    "date_range" : {
      "field" : "intentDate",
      "ranges" : [ {
        "key" : "Overdue",