Please mark as completed.
On Sunday, 1 February 2015 06:16:16 UTC, Olav Grønås Gjerde wrote:
I found the problem
settingsBuilder.put("gateway.type", "none");
This should be set to "local":
settingsBuilder.put("gateway.type", "local");
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
Anyone please??
On Saturday, 31 January 2015 09:56:38 UTC, Ali Kheyrollahi wrote:
Hi,
I really haven't found a consistent way to use query window in Discover or
Visualize tabs. My results become hit and miss and inconsistent.
So I am searching for types of my_type and I have a field
That's fine, but I still want to know what I'm doing wrong in my example
(taken pretty much verbatim from the link provided).
That said, I am planning on evaluating both for my use case. The problem I
have with the suggester is the duplication, but I'd rather not debate that
right now.
I have included all the required libraries.
java.lang.NoClassDefFoundError:
org.elasticsearch.common.settings.ImmutableSettings
Sounds like you did not include the elasticsearch jar.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 1 Feb 2015, at 17:41, 4m7u1
I'm stuck with the same problem. I tried adding $ES_HOME/plugins/sheild/ to
my CLASSPATH as well, but still no go.
I also set the path to the plugins directory in my elasticsearch.yml file
like so:
path.plugins: /usr/local/elasticsearch/plugins/
Help!
On Wednesday, January 28, 2015 at
I have included all the jars, Elasticsearch as well as Lucene. What else
could be the issue? Or did I miss any of the jars?
On Sunday, February 1, 2015, David Pilato da...@pilato.fr wrote:
I have included all the required libraries.
java.lang.NoClassDefFoundError:
For implementing good autocomplete I recommend you look at the completion
suggester. It's much faster and has more capabilities; it was built
especially for that.
See http://www.elasticsearch.org/blog/you-complete-me/ and
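To make the recommendation concrete, here is a minimal sketch of how the completion suggester looked in the Elasticsearch 1.x REST API. The index name (hotels), type, and field names are made up for illustration, not taken from the thread:

```json
PUT /hotels
{
  "mappings": {
    "hotel": {
      "properties": {
        "name_suggest": { "type": "completion" }
      }
    }
  }
}

PUT /hotels/hotel/1
{
  "name_suggest": { "input": ["Marriott Downtown"] }
}

POST /hotels/_suggest
{
  "hotel-suggest": {
    "text": "mar",
    "completion": { "field": "name_suggest" }
  }
}
```

The suggester serves prefix lookups from an in-memory FST rather than running a normal query, which is where the speed advantage comes from.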
I don't know.
I guess you are not using Maven, right?
Not sure what you did, what your classpath looks like, how you deploy your
app...
No clue here.
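For reference, org.elasticsearch.common.settings.ImmutableSettings lives in the core elasticsearch jar, so if the project were switched to Maven the dependency would be declared roughly like this (the version shown is an assumption appropriate to early 2015; adjust to the cluster's version):

```xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>1.4.2</version>
</dependency>
```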
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 1 Feb 2015, at 20:07, Amtul Nazneen amtulnazne...@gmail.com wrote:
Hi,
I'm trying to implement the search-as-you-type example
from
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_index_time_search_as_you_type.html
Can someone see what I'm doing wrong?
curl -XDELETE localhost:9200/my_index
echo
curl -XPUT localhost:9200/my_index -d '
{
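The command above is cut off right after the opening brace. The chapter it references builds an edge n-gram autocomplete analyzer at index time; a minimal sketch of such a settings body (a reconstruction along those lines, not the guide's verbatim example) would be:

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete_filter"]
        }
      }
    }
  }
}
```

A field then needs this analyzer applied in its mapping for index-time search-as-you-type to work; a common mistake is also using the autocomplete analyzer at search time, which over-generates n-grams from the query.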
FYI, the answer is no.
I did a simple test using pmap to make multiple scroll queries in
parallel. Out of 500 results, there were only 221 distinct values, so more
than half were duplicates :)
On Wednesday, January 28, 2015 at 12:56:21 PM UTC+1, David Smith wrote:
Can I share a scroll id
Sorry, I should also explain the problem ;-)
For both of the searches I'm getting the following:
{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},
"hits":{"total":0,"max_score":null,"hits":[]}}
On Sunday, February 1, 2015 at 11:12:55 AM UTC-6, Craig Ching wrote:
Hi,
I'm trying to
It's normal to see 40-60% deleted docs if you frequently update existing
documents. See this recent blog post I wrote for some details:
http://www.elasticsearch.org/blog/lucenes-handling-of-deleted-documents/
Mike McCandless
http://blog.mikemccandless.com
On Sun, Feb 1, 2015 at 3:50 PM, Mark
Yes, it should be.
On 30 January 2015 at 21:35, Ernesto Reig erniru...@gmail.com wrote:
Any comments on this?
On Thursday, January 29, 2015 at 5:06:12 PM UTC+1, Ernesto Reig wrote:
Hi all,
This might sound like a dumb question but just in case: is the fielddata
belonging to a given index
Hi Mark, hi Mike,
Thanks both for helping on this. The article's great, I didn't know about
the 40-60 sawtooth pattern.
So, I guess this means we should be ok for now as long as we monitor CPU
usage, correct?
Don't have access to the server OS (that's on qBox side), so moving to
Oracle JDK
All that really matters here (at least, on a high level) is the size of the
index.
On 31 January 2015 at 02:21, Chris Neal chris.n...@derbysoft.net wrote:
Thank you for the reply, Mark.
Heaps are adjusted to 30GB (I liked round numbers :)).
50GB is a good max shard size to keep in mind, and
No, because there are still a number of variables that factor into this to
make it another "it depends".
Things like heap, CPU, disk, query types, ES version, cluster and index
size and setup,
On 31 January 2015 at 04:39, Tony Neil captaintn...@gmail.com wrote:
Greetings,
I have found
If you're updating single documents often then you should expect high
delete rates. Your heap and CPU use seem to be OK though, so it's not
anything to be super concerned about (for now). You do have the option of
forcing an optimise (which does a merge and removes deleted docs), but this
is
No, I am not using Maven. I'm deploying my app through an IDE (JDev). I added
all the relevant jars to my project. Please help me out here.
On Monday, February 2, 2015, David Pilato da...@pilato.fr wrote:
I don't know.
I guess you are not using Maven, right?
Not sure what you did, what your
Hi David,
It works with 100.0. It rounds to 2 decimal places.
But what if we need to round to 3 or x decimal places?
Rounding a FLOAT or DOUBLE datatype to x decimal places is a common
business requirement. I wonder why there isn't any built-in function in
Elasticsearch to round
Which unicast settings did you set?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 2 Feb 2015, at 08:35, Arulmozhi Devi T tdev...@gmail.com wrote:
I have an AWS EC2 instance to run Elasticsearch. I launched it with one IP
(54.xxx.xxx.109). Then I changed the IP of
I guess something like this:
Math.round(7.8151*1000)/1000.0
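David's one-liner generalizes to any number of decimal places by scaling, rounding, and unscaling. A minimal client-side sketch (the helper name roundTo is made up; this is done in application code because the thread establishes there is no built-in Elasticsearch function for it):

```java
public class RoundTo {
    // Round a double to the given number of decimal places using the
    // same scale-round-unscale trick as Math.round(7.8151*1000)/1000.0.
    static double roundTo(double value, int places) {
        double factor = Math.pow(10, places);
        return Math.round(value * factor) / factor;
    }

    public static void main(String[] args) {
        System.out.println(roundTo(7.8151, 3)); // prints 7.815
        System.out.println(roundTo(7.8151, 2)); // prints 7.82
    }
}
```

Note the usual caveat that doubles cannot represent most decimal fractions exactly; for strict business rounding, java.math.BigDecimal with an explicit RoundingMode is the safer choice.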
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 2 Feb 2015, at 04:03, Lee Chuen Ooi leech...@gmail.com wrote:
Hi David,
It works with 100.0. It rounds to 2 decimal places.
But what if we need to
Maybe you also have the same tid 10 times?
David
On 2 Feb 2015, at 07:49, Shwetha Raghavendra shwetha.raghaven...@pearson.com wrote:
Hi,
After importing, I am checking through the Kibana tool.
Logstash configuration is:
input {
  stdin {
    type => "stdin-type"
  }
Could you run a query outside Kibana? For example:
GET /_search?size=0
David
On 2 Feb 2015, at 07:49, Shwetha Raghavendra shwetha.raghaven...@pearson.com wrote:
Hi,
After importing, I am checking through the Kibana tool.
Logstash configuration is:
input {
  stdin {
Hi All,
I am new to Elasticsearch. Kindly let me know the system requirements for
Elasticsearch in a development environment. What is the best way to import
data into Elasticsearch using CSV?
I have been trying to bulk import a CSV into Elasticsearch for
100 records using
Was it raw POS-tagged data or just raw data? Can you share the code /
process you used?
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Lucene.NET committer and PMC member
On Thu, Jan 29, 2015 at 3:34 PM, Mark Harwood
I have an AWS EC2 instance to run Elasticsearch. I launched it with one IP
(54.xxx.xxx.109). Then I changed the IP of the instance to another IP. I
have changed the IP in the hosts files and elasticsearch.yml configs etc. But
I'm getting this error. It shows the old IP in the error. Can anybody
What if there are thousands of types (or even more) within an index, and the
number keeps increasing quickly? Will that consume more heap and impact the
performance of the cluster?
Thanks.
--
If you don't learn, you don't know.
How do you know that you have imported only 10 records?
David
On 2 Feb 2015, at 07:05, Shwetha Raghavendra shwetha.raghaven...@pearson.com wrote:
Hi All,
I am new to Elasticsearch. Kindly let me know the system requirements
for Elasticsearch in a development environment. What is
Hi,
After importing, I am checking through the Kibana tool.
Logstash configuration is:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    path => ["C:/ELK/data/teacher_det.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["tid", "tname"]
    separator => ","
  }
}
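The excerpt above cuts off before any output section, yet for the rows to reach Elasticsearch (and be visible in Kibana) the pipeline needs one. A minimal sketch of what presumably follows, assuming Logstash 1.4.x option names and a hypothetical index name:

```
output {
  elasticsearch {
    host => "localhost"    # 1.4.x option; later versions use "hosts"
    index => "teachers"    # hypothetical index name
  }
  stdout { codec => rubydebug }  # echo each event for debugging
}
```

The stdout block makes it easy to see whether all CSV rows are actually being parsed, which would help answer David's question about how many records were imported.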