Yes. Because « Hello. How are you? » is a sentence that can be broken in «
hello », « how », « are », « you ».
But in « I paid it 2.50 euros », I would most likely keep « 2.50 » as a whole
token.
--
David Pilato - Developer | Evangelist
elastic.co
@dadoonet | https://twitter.com/dadoonet
Thanks for the reply! However, it doesn't immediately make sense to me.
If I use the dot as an additional separator, I will end up with the tokens
swarmvars and json, but not swarmvars.json. Right?
On Friday, May 29, 2015 at 10:47:56 UTC+2, David Pilato wrote:
I would probably go with a Pattern
Mark
This seems counterintuitive. "Incremental", to me and in the technologies I
have worked with before, means that in order to restore to a known state you
need A+B+C, in temporal order. Consider this example:
DELETE /test
POST /test
{
  "settings" : {
    "number_of_shards" : 1
  },
I would probably go with a Pattern Tokenizer and define whatever regex you need.
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-pattern-tokenizer.html
The standard one is
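For the swarmvars.json case discussed above, a minimal sketch of such a pattern-tokenizer setup might look like the following. The index name, analyzer name, and regex here are illustrative assumptions, not something from the thread:

```
PUT /test
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "filename_tokenizer": {
          "type": "pattern",
          "pattern": "[^\\w.]+"
        }
      },
      "analyzer": {
        "filename_analyzer": {
          "tokenizer": "filename_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```

The regex splits on runs of anything that is not a word character or a dot, so swarmvars.json would survive as a single token; adapt the pattern to whatever splitting you actually need.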
Hi All,
I am using the elasticsearch-knapsack plugin to update settings and for a few
other actions, but I am unable to start it. I just started with normal
client creation as:
public class KnapSackImport {
    private static Client client = null;

    @Inject
    public static Client getClient()
We have ElasticSearch 1.5 set up with a very simple mapping to perform full
text search in our docs (https://docs.giantswarm.io/). When searching for
swarmvars we get no hits, although swarmvars.json appears in documents.
The field text is used as a catch-all field for all searchable content
On Friday, May 29, 2015 at 11:02:25 UTC+2, David Pilato wrote:
Yes. Because « Hello. How are you? » is a sentence that can be broken in
« hello », « how », « are », « you ».
But in « I paid it 2.50 euros », I would most likely keep « 2.50 » as a
whole token.
So far, so easy. And my
I would use 2 analyzers and multi field:
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-core-types.html#_multi_fields_3
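A rough sketch of that multi-field approach for this case, in the ES 1.x "fields" syntax — the mapping type, field, and analyzer names here are assumptions for illustration:

```
PUT /docs/_mapping/page
{
  "page": {
    "properties": {
      "text": {
        "type": "string",
        "analyzer": "standard",
        "fields": {
          "filename": {
            "type": "string",
            "analyzer": "filename_analyzer"
          }
        }
      }
    }
  }
}
```

A query against both text and text.filename would then find swarmvars via the sub-field while keeping the standard behaviour on the main field.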
--
David Pilato - Developer | Evangelist
You can not run the Knapsack plugin at transport client side. It must run
at server side in a node being part of the cluster.
Jörg
On Fri, May 29, 2015 at 11:07 AM, Muddadi Hemaanusha
hemaanusha.bu...@gmail.com wrote:
Hi All,
I am using the elasticsearch-knapsack plugin to update settings and
I haven't figured out what is causing the parse error. I have a nested mapping
in place (I'm just omitting it here).
If I do a nested filter by itself it works as expected, but I want to
combine a should-not nested filter with a must nested filter; below is what
I'm posting:
{
  "from": 0,
  "size": 10,
What you just described should work fine. exclude._ip will move the shards
off of the nodes you exclude, but queries and updates can proceed while this
is happening because the data is still on the old nodes. The updates will
make their way to the new copies via a transaction log replay mechanism.
Thanks for your answer. I didn't know that exclude doesn't disable searching.
But regarding the restart:
- I cannot restart the cluster because I have 9 TB of data, so it could take
a day to restart the whole cluster. In addition to that, the old machines and
new machines will not know each other( I
You certainly will have to think of a way to add new cluster masters - but
you can still do it with unicast discovery I think.
Yeah - exclude doesn't disable searches - it just causes Elasticsearch to
move the shards using its normal shard-moving mechanism. It's really just
adding a routing rule.
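The routing rule described above could be set roughly like this — the IP addresses are placeholders:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2"
  }
}
```

Once applied, Elasticsearch drains shards off the excluded nodes while the cluster keeps serving queries and updates.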
That's great. Thanks for your help!
On Friday, May 29, 2015 at 15:09:21 UTC+2, Nikolas Everett wrote:
You certainly will have to think of a way to add new cluster masters - but
you can still do it with unicast discovery I think.
Yeah - exclude doesn't disable searches - it just causes
Hello,
I am running a cluster which contains many indexes. In the near future I
will have new machines, so I will need to migrate my indexes to those
machines. I have thought of some scenarios, but they turned out not to be
possible. Let me explain what I thought and why it was not possible:
-
Hi ,
I am loading some data into Elasticsearch, and the timestamps I am passing
are in UTC. Is there a way to set a time zone in Kibana so all my time graphs
are displayed based on the local time zone, or based on a time zone I
set in Kibana?
Thanks,
Deepak Subhramanian
Dedicated master nodes are super convenient if you have the IT
infrastructure to host them on shared machines, because they are very low
load and it's useful to be able to restart the master nodes quickly. We
don't have that kind of infrastructure, and our cluster is pretty large, and
not having it
Right now we only need 4 ES nodes due to the small data volume, and all 4
nodes are master data nodes.
Q1:
I am wondering, in this case, is it necessary to have dedicated master and
client nodes? Any benefit of having dedicated master nodes?
Someone said that dedicated master nodes (say, three
Hi
We have an index with settings like:
curl -XGET 'http://localhost:9200/some index name/_settings' -d '{
...
  "analysis" : {
    "filter" : {
      "truncate_filter" : {
        "type" : "truncate",
        "length" : 7000
      }
    }
  }
Now we want to change the
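Changing an analysis filter on an existing index generally requires closing the index first; a hedged sketch, where the index name and the new length are placeholders:

```
POST /my_index/_close

PUT /my_index/_settings
{
  "analysis": {
    "filter": {
      "truncate_filter": {
        "type": "truncate",
        "length": 5000
      }
    }
  }
}

POST /my_index/_open
```

Note that already-indexed documents are not re-analyzed by this change, so a reindex may still be needed for the new filter to affect existing data.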