Is it possible to connect Elasticsearch with VoltDB?
Thanks & Regards,
Jithin
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
Do you want to download VoltDB data with JDBC into Elasticsearch? You can
try the JDBC river.
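For reference, a rough sketch of what a JDBC river registration for VoltDB might look like. The river name, index/type names, SQL, and connection details here are all invented for illustration; the VoltDB JDBC driver class and port should be checked against VoltDB's own documentation:

```json
PUT /_river/voltdb_river/_meta
{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "org.voltdb.jdbc.Driver",
    "url" : "jdbc:voltdb://localhost:21212",
    "user" : "",
    "password" : "",
    "sql" : "SELECT * FROM my_table",
    "index" : "voltdb",
    "type" : "my_table"
  }
}
```

Note that a river only pulls data into ES; it does not push writes from ES back into the source database.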
Jörg
I want to do CRUD operations on VoltDB through Elasticsearch.
That is, after connecting Elasticsearch to VoltDB, I need to query ES to
access data from VoltDB. Also, if VoltDB data is updated, it should become
available in ES.
Hi there,
The reason I'm looking at Elasticsearch is a totally different one
^1: I set up a development environment with about 20 servers that use
rsyslog to send off their logs to a Logstash server (input, you guessed
it, syslog), and through Redis the syslog entries ultimately end
Is there any river to connect to VoltDB from ES?
Thanks
Jithin
All -
I am working on a POC to replace Lucene implementation in our product with
ElasticSearch.
I was looking for a way in Elasticsearch to create an object and set the
field's indexing behavior (i.e., store, type, index_type) dynamically while
indexing a document.
I understand that you
Thanks for the replies, we are now considering and discussing our balance
policy, all the information is helpful.
Hi Adrien,
Good news! The problem is solved.
Can't wait for the release containing the fix, but for now I will use my
own build :)
On Thursday, February 6, 2014 5:25:11 PM UTC+1, Nils Dijk wrote:
Yay!
I will try this somewhere tomorrow. Thanks for fixing, much appreciated!
Seems like it
My bet is that you are looking for templates or so?
Not sure I fully understood the use case though.
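In case dynamic templates are indeed what's being asked for, a minimal sketch; the index name, type name, and the *_raw field pattern are invented for illustration:

```json
PUT /myindex
{
  "mappings": {
    "mytype": {
      "dynamic_templates": [
        {
          "raw_strings": {
            "match": "*_raw",
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed", "store": "yes" }
          }
        }
      ]
    }
  }
}
```

With this, any new string field whose name ends in _raw would be stored and left unanalyzed, without declaring it in the mapping up front.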
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 Feb 2014, at 10:20, Srividhya Umashanker srividhya.umashan...@gmail.com
wrote:
All -
I am working on a POC to
Have a look at filters, specifically the terms filter. They execute very
fast and are cached too.
http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-terms-filter.html
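For illustration, a minimal terms-filter query; the index and field names here are made up:

```json
GET /myindex/_search
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "terms": { "status": [ "active", "pending" ] }
      }
    }
  }
}
```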
On Friday, February 7,
Lavanya,
Something like this?
{
  "name": [
    { "blue": "black" },
    { "white": "red" }
  ]
}
Also how would you want to search it, for example?
Jeroen,
If your objective is to keep the ES storage as minimal as possible, you'd
probably want to understand first what your search requirements are and
then optimize the ES indexes accordingly. For example, if you don't need
replicas, then you can set it to 0. If you don't need the _all
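A sketch of what such a slimmed-down index might look like; the names are invented, and _all should only be disabled if you never search without naming fields explicitly:

```json
PUT /myindex
{
  "settings": {
    "number_of_replicas": 0
  },
  "mappings": {
    "mytype": {
      "_all": { "enabled": false }
    }
  }
}
```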
Hi,
you can use a script to find out term frequencies and more of a
particular word in a field. This feature is available since 0.90.10.
Check it out here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-advanced-scripting.html
You can use this either inside a
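For illustration, a script_fields request that pulls the term frequency of a term in a field; the index, field, and term here are all made up:

```json
GET /myindex/_search
{
  "query": { "match_all": {} },
  "script_fields": {
    "tf_of_term": {
      "script": "_index['body']['elasticsearch'].tf()"
    }
  }
}
```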
Tom,
You might be interested in the script_fields functionality:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-script-fields.html
It allows you to introduce a dynamically computed field at query-time where
you can script the logic on how the field value
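A small sketch of such a dynamically computed field; the index, the price field, and the 1.21 multiplier are invented for illustration:

```json
GET /myindex/_search
{
  "query": { "match_all": {} },
  "script_fields": {
    "price_with_tax": {
      "script": "doc['price'].value * 1.21"
    }
  }
}
```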
Excellent news, thanks for checking! RC2 was the last release candidate, so
the next release containing the fix should be 1.0 GA. Hopefully it will be
out soon.
On Fri, Feb 7, 2014 at 12:39 PM, Nils Dijk m...@thanod.nl wrote:
Hi Adrien,
Good news! The problem is solved.
Can't wait for the
Florent,
You probably just want to be careful with your index names, type, mapping,
and actual document. So if you do something like this, it should work:
1) PUT http://localhost:9200/try
2) PUT http://localhost:9200/try/pin/_mapping
{
  "pin": {
    "properties": {
      "location": {
Hi Nik,
would a script field also work for that? Something like:
{
  "script_fields": {
    "field_match1": {
      "script": "if (_index['field1']['searchterm'].tf() > 0) {
        return 1; } else { return 0; }"
    },
    "field_match2": {
      "script": "if (_index['field2']['searchterm'].tf() > 0) {
Hi,
you can also get raw term statistics stored in the index such as doc
frequency, term frequency, etc. within a script (>= 0.90.10):
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-advanced-scripting.html
You can use this information to calculate your own score. If
Hi,
The score is not a field of a document but is computed dynamically, so you
need to use a script to get it. Replacing `{ field : _score }` with
`{ script : _doc.score }` should have the desired effect.
On Fri, Feb 7, 2014 at 2:57 AM, Toan V Luu luuvinht...@gmail.com wrote:
Hi all,
Have
I have the following setup:
Application Server with NGINX and Kibana and configured as reverse proxy
Two node Elasticsearch cluster
Both hosted on EC2
When I try to query Elasticsearch through Kibana from behind a corporate
firewall I get the following error message:
Could not contact
Hello,
I am using completion suggester for user search on my website. I basically
followed the howto from elasticsearch webpage.
Created the mapping:
POST /gruppu/user/_mapping
{
  "user": {
    "properties": {
      "name": { "type": "string" },
      "suggest": { "type": "completion",
Finally, I fixed my problem.
There was a mistake in the field discovery.ec2.groups. Instead of a
string, I had to put an array of strings.
And I also forgot to add the tag platform:prod to CloudFormation when
launching my stack.
Fixed!
On Friday, 7 February 2014 14:54:05 UTC+1, Thomas FATTAL
Maybe you updated 50 docs (same ID)?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 7 February 2014 at 15:58:15, ZenMaster80 (sabdall...@gmail.com) wrote:
I am indexing about 5000 documents; when indexing is done, I use the HEAD plugin,
and it says it
Without changing a firewall setting on the IP address you're trying to
connect to? Or changing to a port that is already open?
IMO it has to be open, there is no alternative. Someone may opine
differently, but from what I've seen Kibana does not support a transport
other than IP (like pipes).
I am using elasticsearch to index and query a fairly large document
collection. Most of the data is in a single property text of a doctype
article. The index is sometimes slow, and my log has many messages about
the garbage collection:
For example, the following is right after starting the
Wouter,
Yes it is possible that you have memory pressure. I'd probably:
1) Set bootstrap.mlockall: true in the elasticsearch.yml file
2) Once you're up and running (or when these GC pauses start to happen),
check the node stats to see what you have in memory:
curl
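Presumably the call Binh had in mind is something along these lines (host and port assumed):

```json
GET localhost:9200/_nodes/stats?pretty
```

The jvm.mem section of the response shows heap usage per node.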
hi,
can Elasticsearch sort by a rounded date?
I want to sort by document creation date, but rounded to 1 day, something
like this:
"sort": [
  "CreationDate/1d"
]
Solr supports this type of sorting, so I expected Elasticsearch to support
it too, but I can't find any
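There is no built-in /1d sort syntax, but a script-based sort could emulate the rounding; the index and field names are assumed, and 86400000 is the number of milliseconds in a day:

```json
GET /myindex/_search
{
  "sort": {
    "_script": {
      "script": "doc['CreationDate'].value / 86400000",
      "type": "number",
      "order": "desc"
    }
  }
}
```

Within the same day the order is then undefined, so a secondary sort field may be wanted.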
Hi,
I have this following version:
elasticsearch-0.90.10
PS: I can open the server on a public IP so it is reachable from the internet,
if you would like to see the behavior yourself.
Regards,
Jorge
On Friday, 7 February 2014 12:25:13 UTC-6, Binh Ly wrote:
Jorge,
May I ask what version of ES
Thale,
I played with your data a little and it turns out it is more complex than I
thought. Something like this works somewhat but may require some
fine-tuning depending on your exact requirements. Anyway give this a try
and see how it works (BTW I did this in ES 1.0 RC 2):
1) PUT
Hi Jettro,
It seems like the cpu data is missing from the documents marvel queries.
Those documents are based on the output of the Node Stats API. That one
uses Sigar to extract this information, with fall back to Jvm based
metrics. My guess is that neither could get the cpu usage on the OS
Fixed,
It turns out the ARM platform was not supported. You can compile your own,
or download the .so file from the following post:
https://groups.google.com/forum/#!topic/openhab/18C7FYpxWTQ
Now I need to make Marvel nicer for my Raspberries; two of three are
constantly over 90%
Strange, I tried it on 0.90.10 and it works as expected also. Can you
please post the complete sequence of steps (exact commands starting with an
empty cluster) from beginning to end so that I can try it and see if I can
reproduce your problem? Thanks.
It'd probably depend on your nginx setup. If you can gist it it'd help.
We use a reverse proxy so the users browser doesn't do a direct connect to
ES, and this works fine.
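For comparison, a minimal nginx reverse-proxy location for ES might look like this; the listen port, path, and upstream address are assumptions:

```nginx
server {
    listen 80;
    location /es/ {
        proxy_pass http://127.0.0.1:9200/;
        proxy_set_header Host $host;
    }
}
```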
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
Hello,
I wrote a river for GitHub data (events, issues, open pull requests). You
can find it here[1], I hope it will be useful to some people :).
There's a section for community-supported plugins [1] for elasticsearch,
and I was wondering: what's the process for having it added to that list?
Awesome - I could definitely see using the script for calculating the
adjustments. Thanks!
Any other ideas where ES should/should not be used on the site?
On Friday, February 7, 2014 5:10:36 AM UTC-8, Binh Ly wrote:
Tom,
You might be interested in the script_fields functionality:
You should be able to get the textual field values by explicitly requesting
them from fields. For example:
GET localhost:9200/_search
{
  "fields": "*",
  "query": {
    "match_all": {}
  }
}
In past lives I was involved in integrating Solr to power all kinds of
funky little shelves on a page. Elasticsearch can handle it. The trick
is to make sure that you don't do too many slow things against too many
documents. If the shelf is built by matches and sorting (by a field or
relevance)
You have sorting and paging which are very straightforward to implement in
ES. You can also do full text search using ES easily - for example, if I
type like club snap into the search box, you can run a single query that
looks for that text across multiple fields in ES easily. You can also
Hi,
here is the sequence:
PUT /gruppu
POST /gruppu/user/_mapping
{
  "user": {
    "properties": {
      "name": { "type": "string" },
      "suggest": { "type": "completion",
        "index_analyzer": "simple",
        "search_analyzer": "simple",
Nice!
You should use BulkProcessor to process your data instead of running single
index operations. It will be much faster.
To add your plugin, send a pull request in elasticsearch/elasticsearch repo.
(Look inside /docs)
Best
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr /
A number of categories (e.g. watches) have hundreds of thousands of possible
results that ultimately need to be sorted - price/brand/time. The default
sorting will be by time. Are you saying that this type of sorting may be
difficult/slow? Nothing bugs me more than a slow site - that's one of the
Good idea w/ the facets - I'll definitely be looking into that more! Thanks!
On Friday, February 7, 2014 1:54:33 PM UTC-8, Binh Ly wrote:
You have sorting and paging which are very straightforward to implement in
ES. You can also do full text search using ES easily - for example, if I
type
Jorge,
I ran your sequence and I got these results which look correct:
{
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "user-suggest" : [ {
    "text" : "j",
    "offset" : 0,
    "length" : 1,
    "options" : [ {
      "text" : "jorge",
      "score" : 1.0, "payload" :
So, what's wrong with this?
GET localhost:9200/_search
{
  "fields": "file",
  "query": {
    "match_all": {}
  }
}
..
"hits": {
  "total": 1,
  "max_score": 1,
  "hits": [
    {
      "_index": "docs",
      "_type": "pdf",
      "_id": 1,
      "_score": 1,
      "fields":
It looks like that indexing code might not be correct. I just tried this
code and it works for me:
try {
    String fileContents = readContent(new File("fn6742.pdf"));
    try {
        DeleteIndexResponse deleteIndexResponse = new
            DeleteIndexRequestBuilder(
Hi,
I've released the first version of GridFS repository plugin. It allows to
store snapshot data in MongoDB GridFS.
https://github.com/kzwang/elasticsearch-repository-gridfs
Thanks,
Kevin
Hi,
really appreciate that you are looking into this. I got it working with the
elasticsearch-1.0.0.RC2 release. I don't know what was wrong with 0.90. Now
the deletes and updates are OK. Anyway, I have a question: if there are two
documents with exactly the same input - in my case I put two
You are correct, my JSON mapping had a wrong entry. Thanks for the help!
On Friday, February 7, 2014 6:10:50 PM UTC-5, Binh Ly wrote:
It looks like that indexing code might not be correct. I just tried this
code and it works for me:
try {
String fileContents = readContent(