Re: EC2 cluster storage question

2015-02-25 Thread Paul Sanwald
Thanks, the rsync to EBS is what I was rolling around in my head, but 
wasn't sure if it was a dumb idea.

We used to use Elastic Block Store, but have gotten incredible performance 
gains from moving to SSD local storage. The ES team doesn't recommend any 
kind of NAS, and they reiterated in their recent webinar that they couldn't 
really recommend EBS. This was exactly in line with our experience: it will 
work, but performance is less predictable and certainly degraded compared 
to ephemeral storage.

Sounds like I have two options:
1 - shut down and just restore from snapshot when we start back up.
2 - sync local storage to EBS when we shut down, and the reverse when we 
start up.
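To make option 2 concrete, here's a minimal runnable sketch of the sync step. It uses temp directories as stand-ins for the real paths (both hypothetical): the ES data dir and a mounted, formatted EBS volume.

```shell
# Toy sketch of option 2: mirror the ES data path to an EBS-backed mount
# on shutdown, and reverse the direction on startup. Paths are stand-ins.
DATA_DIR=$(mktemp -d)    # stands in for e.g. /var/lib/elasticsearch
EBS_MOUNT=$(mktemp -d)   # stands in for e.g. /mnt/ebs-backup

echo "shard data" > "$DATA_DIR/segment.si"

# On shutdown: push local (ephemeral) data to the EBS volume.
# --delete keeps the EBS copy an exact mirror of the data dir.
rsync -a --delete "$DATA_DIR/" "$EBS_MOUNT/"

# On startup: pull the data back before starting elasticsearch.
rsync -a --delete "$EBS_MOUNT/" "$DATA_DIR/"
```

The same pair of rsync commands would hang off the instance's shutdown and startup hooks.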

Not sure if the juice is going to be worth the squeeze for either of these 
options, but I appreciate everyone's thoughts.
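For reference, Mark's suggestion of disabling allocation before a graceful shutdown would look roughly like this on a 1.x cluster (a sketch from memory; host and the exact workflow around it are assumptions about your setup):

```shell
# Disable shard allocation so the cluster doesn't start rebalancing
# as nodes drop out (cluster update settings API on ES 1.x).
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# ...gracefully stop each node...

# On startup, re-enable allocation once all nodes have rejoined:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```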

Thanks!

--paul

On Wednesday, February 25, 2015 at 2:15:01 AM UTC-5, Norberto Meijome wrote:
>
> OP points out he is using ephemeral storage...hence shutdown will destroy 
> the data...but it can be rsynced to EBS as part of the shutdown 
> process...and then repeat in reverse when starting things up again...
>
> Though I guess you could let ES take care of it by tagging nodes 
> accordingly and updating the index settings. (Hope it makes sense...)
> On 25/02/2015 4:58 pm, "Mark Walkom" wrote:
>
>> Why not just shut the cluster down, disable allocation first and then 
>> just gracefully power things off?
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CAEYi1X_D15Aq62TzhbTN8kWKDPGpsuoYP2e2RJta9N5_tu4_ZA%40mail.gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
-- 
*Important Notice:*  The information contained in or attached to this email 
message is confidential and proprietary information of RedOwl Analytics, 
Inc., and by opening this email or any attachment the recipient agrees to 
keep such information strictly confidential and not to use or disclose the 
information other than as expressly authorized by RedOwl Analytics, Inc. 
 If you are not the intended recipient, please be aware that any use, 
printing, copying, disclosure, dissemination, or the taking of any act in 
reliance on this communication or the information contained herein is 
strictly prohibited. If you think that you have received this email message 
in error, please delete it and notify the sender.



EC2 cluster storage question

2015-02-24 Thread Paul Sanwald
More detail below, but the crux of my question is: What's the best way 
to spin up/down an "on demand" ES cluster on EC2 that uses ephemeral local 
storage? Essentially, I want to run the cluster during the week and spin it 
down over the weekend. Other than brute-force snapshot/restore, is there 
any more creative way to do this, like mirroring local storage to EBS or 
similar?

Some more background:
We run multiple ES clusters on ec2 (we use opsworks for deployment 
automation). We started out several years back using EBS because we didn't 
know any better, and have switched over to using SSD based local storage. 
The performance improvements have been unbelievable.

Obviously, using ephemeral local storage comes at a cost: we use 
replication, take frequent snapshots, and store all source data to mitigate 
the risk of data loss. The other thing local storage means is that our 
cluster essentially needs to be up and running 24/7, which I think is 
fairly normal.

I'm investigating ways to save on cost for a large-ish cluster, and one 
observation is that we don't necessarily need it to run 24/7; specifically, 
we want to turn the cluster off over the weekend. That said, restoring 
terabytes from snapshot doesn't seem like a very efficient way to do this, 
so I want to consider options, and was hoping the community could help me 
identify options I'm missing.
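For reference, the brute-force snapshot/restore path looks roughly like this on 1.x (repository name and bucket are made up, and the "s3" repository type assumes the AWS cloud plugin is installed):

```shell
# Register a hypothetical S3-backed snapshot repository (names made up).
curl -XPUT 'localhost:9200/_snapshot/weekend_backup' -d '{
  "type": "s3",
  "settings": { "bucket": "my-es-snapshots", "region": "us-east-1" }
}'

# Friday night: snapshot everything, then spin the instances down.
curl -XPUT 'localhost:9200/_snapshot/weekend_backup/friday?wait_for_completion=true'

# Monday morning: bring empty nodes up, then restore.
curl -XPOST 'localhost:9200/_snapshot/weekend_backup/friday/_restore'
```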

Thanks in advance for any thoughts you may have.

--paul

-- 
*Important Notice:*  The information contained in or attached to this email 
message is confidential and proprietary information of RedOwl Analytics, 
Inc., and by opening this email or any attachment the recipient agrees to 
keep such information strictly confidential and not to use or disclose the 
information other than as expressly authorized by RedOwl Analytics, Inc. 
 If you are not the intended recipient, please be aware that any use, 
printing, copying, disclosure, dissemination, or the taking of any act in 
reliance on this communication or the information contained herein is 
strictly prohibited. If you think that you have received this email message 
in error, please delete it and notify the sender.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/588e485c-6029-4ded-a3ce-a8dd01213510%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: ES OutOfMemory on a 30GB index

2014-05-29 Thread Paul Sanwald
We've narrowed the problem down to a multi_match clause in our query:
 {"multi_match":{"fields":["attachments.*.bodies"], "query":"foobar"}}

This has to do with the way we've structured our index. We are searching an 
index that contains emails, and we are indexing attachments in the 
attachments.*.bodies fields. For example, attachments.1.bodies would 
contain the text body of an attachment.

This structure is clearly sub-optimal in terms of multi_match queries, but 
I need to structure our index in some way that lets us search the contents 
of an email and the parsed contents of its attachments, and get back the 
email as a result.

From reading the docs, it seems like the better way to solve this is with 
nested types:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html
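For what it's worth, here's a sketch of the nested mapping I'm considering (field names are mine, and whether this is the right shape is exactly what I'm asking):

```json
{
  "email": {
    "properties": {
      "body": { "type": "string" },
      "attachments": {
        "type": "nested",
        "properties": {
          "body": { "type": "string" }
        }
      }
    }
  }
}
```

With something like this, attachments become an array of nested docs instead of attachments.1.bodies, attachments.2.bodies, ..., and a nested query against attachments.body would return the parent email as the hit.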

--paul

On Wednesday, May 28, 2014 7:11:05 PM UTC-4, Paul Sanwald wrote:
>
> Sorry, it's Java 7:
>
> jvm: {
> pid: 20424
> version: 1.7.0_09-icedtea
> vm_name: OpenJDK 64-Bit Server VM
> vm_version: 23.7-b01
> vm_vendor: Oracle Corporation
> start_time: 1401309063644
> mem: {
> heap_init_in_bytes: 1073741824
> heap_max_in_bytes: 10498867200
> non_heap_init_in_bytes: 24313856
> non_heap_max_in_bytes: 318767104
> direct_max_in_bytes: 10498867200
> }
> gc_collectors: [
> PS Scavenge
> PS MarkSweep
> ]
> memory_pools: [
> Code Cache
> PS Eden Space
> PS Survivor Space
> PS Old Gen
> PS Perm Gen
> ]
>
> On Wednesday, May 28, 2014 6:58:26 PM UTC-4, Mark Walkom wrote:
>>
>> What java version are you running, it's not in the stats gist.
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>  
>>
>> On 29 May 2014 08:33, Paul Sanwald  wrote:
>>
>>> I apologize about the signature, it's automatic. I've created a gist 
>>> with the cluster node stats:
>>> https://gist.github.com/pcsanwald/e11ba02ac591757c8d92
>>>
>>> We are using 1.1.0, using aggregations a lot but nothing crazy. We run 
>>> our app on much much larger indices successfully. But the problem seems to 
>>> present itself even in basic search cases. The one thing that's 
>>> different about this dataset is that a lot of it is in Spanish.
>>>
>>> thanks for your help!
>>>
>>> On Wednesday, May 28, 2014 6:22:59 PM UTC-4, Mark Walkom wrote:
>>>>
>>>> Can you provide some specs on your cluster, OS, RAM, heap, disk, java 
>>>> and ES versions?
>>>> Are you using parent/child relationships, TTLs, large facet or other 
>>>> queries?
>>>>
>>>>
>>>> (Also, your elaborate legalese signature is kind of moot given you're 
>>>> posting to a public mailing list :p)
>>>>
>>>> Regards,
>>>> Mark Walkom
>>>>
>>>> Infrastructure Engineer
>>>> Campaign Monitor
>>>> email: ma...@campaignmonitor.com
>>>> web: www.campaignmonitor.com
>>>>  
>>>>
>>>> On 29 May 2014 07:27, Paul Sanwald  wrote:
>>>>
>>>>> Hi Everyone,
>>>>>We are seeing continual OOM exceptions on one of our 1.1.0 
>>>>> elasticsearch clusters, the index is ~30GB, quite small. I'm trying to 
>>>>> work 
>>>>> out the root cause via heap dump analysis, but not having a lot of luck. 
>>>>> I 
>>>>> don't want to include a bunch of unnecessary info, but the stacktrace 
>>>>> we're 
>>>>> seeing is pasted below. Has anyone seen this before? I've been using the 
>>>>> cluster stats and node stats APIs to try and find a smoking gun, but I'm 
>>>>> not seeing anything that looks out of the ordinary.
>>>>>
>>>>> Any ideas?
>>>>>
>>>>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Failed to send 
>>>>> error message back to client for action [search/phase/query]
>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Actual Exception
>>>>> org.elasticsearch.search.query.QueryPhaseExecutionException: 
>>>>> [eventdata][2]: query[ConstantScore(*:*)],from[0],size[0]: Query Failed 
>>>>> [Failed to execute main query]

Re: ES OutOfMemory on a 30GB index

2014-05-28 Thread Paul Sanwald
Sorry, it's Java 7:

jvm: {
pid: 20424
version: 1.7.0_09-icedtea
vm_name: OpenJDK 64-Bit Server VM
vm_version: 23.7-b01
vm_vendor: Oracle Corporation
start_time: 1401309063644
mem: {
heap_init_in_bytes: 1073741824
heap_max_in_bytes: 10498867200
non_heap_init_in_bytes: 24313856
non_heap_max_in_bytes: 318767104
direct_max_in_bytes: 10498867200
}
gc_collectors: [
PS Scavenge
PS MarkSweep
]
memory_pools: [
Code Cache
PS Eden Space
PS Survivor Space
PS Old Gen
PS Perm Gen
]

On Wednesday, May 28, 2014 6:58:26 PM UTC-4, Mark Walkom wrote:
>
> What java version are you running, it's not in the stats gist.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 29 May 2014 08:33, Paul Sanwald wrote:
>
>> I apologize about the signature, it's automatic. I've created a gist with 
>> the cluster node stats:
>> https://gist.github.com/pcsanwald/e11ba02ac591757c8d92
>>
>> We are using 1.1.0, using aggregations a lot but nothing crazy. We run 
>> our app on much much larger indices successfully. But the problem seems to 
>> present itself even in basic search cases. The one thing that's 
>> different about this dataset is that a lot of it is in Spanish.
>>
>> thanks for your help!
>>
>> On Wednesday, May 28, 2014 6:22:59 PM UTC-4, Mark Walkom wrote:
>>>
>>> Can you provide some specs on your cluster, OS, RAM, heap, disk, java 
>>> and ES versions?
>>> Are you using parent/child relationships, TTLs, large facet or other 
>>> queries?
>>>
>>>
>>> (Also, your elaborate legalese signature is kind of moot given you're 
>>> posting to a public mailing list :p)
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>>  
>>>
>>> On 29 May 2014 07:27, Paul Sanwald  wrote:
>>>
>>>> Hi Everyone,
>>>>We are seeing continual OOM exceptions on one of our 1.1.0 
>>>> elasticsearch clusters, the index is ~30GB, quite small. I'm trying to 
>>>> work 
>>>> out the root cause via heap dump analysis, but not having a lot of luck. I 
>>>> don't want to include a bunch of unnecessary info, but the stacktrace 
>>>> we're 
>>>> seeing is pasted below. Has anyone seen this before? I've been using the 
>>>> cluster stats and node stats APIs to try and find a smoking gun, but I'm 
>>>> not seeing anything that looks out of the ordinary.
>>>>
>>>> Any ideas?
>>>>
>>>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Failed to send 
>>>> error message back to client for action [search/phase/query]
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Actual Exception
>>>> org.elasticsearch.search.query.QueryPhaseExecutionException: 
>>>> [eventdata][2]: query[ConstantScore(*:*)],from[0],size[0]: Query Failed 
>>>> [Failed to execute main query]
>>>> at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:127)
>>>> at org.elasticsearch.search.SearchService.executeQueryPhase(
>>>> SearchService.java:257)
>>>> at org.elasticsearch.search.action.
>>>> SearchServiceTransportAction$SearchQueryTransportHandler.
>>>> messageReceived(SearchServiceTransportAction.java:623)
>>>> at org.elasticsearch.search.action.
>>>> SearchServiceTransportAction$SearchQueryTransportHandler.
>>>> messageReceived(SearchServiceTransportAction.java:612)
>>>> at org.elasticsearch.transport.netty.MessageChannelHandler$
>>>> RequestHandler.run(MessageChannelHandler.java:270)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>> ThreadPoolExecutor.java:1145)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>> ThreadPoolExecutor.java:615)
>>>> at java.lang.Thread.run(Thread.java:722)
>>>> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>  

Re: ES OutOfMemory on a 30GB index

2014-05-28 Thread Paul Sanwald
I apologize about the signature, it's automatic. I've created a gist with 
the cluster node stats:
https://gist.github.com/pcsanwald/e11ba02ac591757c8d92

We are using 1.1.0, using aggregations a lot but nothing crazy. We run our 
app on much much larger indices successfully. But the problem seems to 
present itself even in basic search cases. The one thing that's different 
about this dataset is that a lot of it is in Spanish.

thanks for your help!

On Wednesday, May 28, 2014 6:22:59 PM UTC-4, Mark Walkom wrote:
>
> Can you provide some specs on your cluster, OS, RAM, heap, disk, java and 
> ES versions?
> Are you using parent/child relationships, TTLs, large facet or other 
> queries?
>
>
> (Also, your elaborate legalese signature is kind of moot given you're 
> posting to a public mailing list :p)
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 29 May 2014 07:27, Paul Sanwald wrote:
>
>> Hi Everyone,
>>We are seeing continual OOM exceptions on one of our 1.1.0 
>> elasticsearch clusters, the index is ~30GB, quite small. I'm trying to work 
>> out the root cause via heap dump analysis, but not having a lot of luck. I 
>> don't want to include a bunch of unnecessary info, but the stacktrace we're 
>> seeing is pasted below. Has anyone seen this before? I've been using the 
>> cluster stats and node stats APIs to try and find a smoking gun, but I'm 
>> not seeing anything that looks out of the ordinary.
>>
>> Any ideas?
>>
>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Failed to send error 
>> message back to client for action [search/phase/query]
>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>> 14/05/27 20:37:08 WARN transport.netty: [Strongarm] Actual Exception
>> org.elasticsearch.search.query.QueryPhaseExecutionException: 
>> [eventdata][2]: query[ConstantScore(*:*)],from[0],size[0]: Query Failed 
>> [Failed to execute main query]
>> at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:127)
>> at 
>> org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
>> at 
>> org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)
>> at 
>> org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)
>> at 
>> org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:722)
>> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>>  

ES OutOfMemory on a 30GB index

2014-05-28 Thread Paul Sanwald
Hi Everyone,
   We are seeing continual OOM exceptions on one of our 1.1.0 elasticsearch 
clusters; the index is ~30GB, quite small. I'm trying to work out the root 
cause via heap dump analysis, but not having a lot of luck. I don't want to 
include a bunch of unnecessary info, but the stack trace we're seeing is 
pasted below. Has anyone seen this before? I've been using the cluster 
stats and node stats APIs to try to find a smoking gun, but I'm not seeing 
anything that looks out of the ordinary.

Any ideas?

14/05/27 20:37:08 WARN transport.netty: [Strongarm] Failed to send error 
message back to client for action [search/phase/query]
java.lang.OutOfMemoryError: GC overhead limit exceeded
14/05/27 20:37:08 WARN transport.netty: [Strongarm] Actual Exception
org.elasticsearch.search.query.QueryPhaseExecutionException: 
[eventdata][2]: query[ConstantScore(*:*)],from[0],size[0]: Query Failed 
[Failed to execute main query]
at 
org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:127)
at 
org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
at 
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)
at 
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)
at 
org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded



Re: java 8, elasticsearch, and MVEL

2014-05-16 Thread Paul Sanwald
It's a little hard to tell from the MVEL and ES commit histories and the 
GitHub issue.

It looks like this isn't fixed, and isn't going to get fixed in MVEL? Am I 
misreading something?

--paul

On Monday, April 21, 2014 8:39:43 AM UTC-4, Alexander Reelsen wrote:
>
> Hey,
>
> this commit upgrades MVEL, which seems to have fixed the Java 8 issues 
> (still requires more testing on our side though): 
> https://github.com/elasticsearch/elasticsearch/commit/21a36678883c159e50a03b76309d3da2a8e5d7b4
>
> IIRC this bug has also been fixed in the new MVEL version: 
> https://github.com/elasticsearch/elasticsearch/issues/5483
>
>
> --Alex
>
>
> On Tue, Apr 15, 2014 at 11:40 AM, Bernhard Berger wrote:
>
>>  Is there an open issue so that I can watch the progress for this bug? I 
>> cannot find any issue for this on GitHub.
>>
>> On 07.04.2014 at 01:12, Shay Banon wrote:
>>  
>> We will report back with findings and progress.
>>
>>   
>>
>
>


Re: java 8, elasticsearch, and MVEL

2014-04-07 Thread Paul Sanwald
Thanks, Shay. If there's anything I can do to help with the effort, please 
do let me know.

On Sunday, April 6, 2014 7:12:39 PM UTC-4, kimchy wrote:
>
> We are planning to address this in Elasticsearch itself. The tricky bit is 
> the fact that we want to have a highly optimized concurrent scripting 
> engine. You can install the Rhino one, which should work for now; it's pretty 
> fast, and it allows for the type of execution we are after.
>
> We will report back with findings and progress.
>
> On Apr 6, 2014, at 14:29, joerg...@gmail.com  wrote:
>
> No, you are not the only one. MVEL breaks under Java 8 here. I use Java 8 
> with ES without scripting right now. For doc boosting, I will need 
> scripting desperately.
>
> I also want to migrate away from MVEL. My favorite is Nashorn because it 
> is part of the Java 8 JDK, but I'm wrestling with thread-safety issues - and my 
> tests show surprisingly low performance. 
>
> So I have tried to implement some other script languages as a plugin with 
> focus on JSR 223 (dynjs, jav8, luaj), but I'm stuck in the middle of getting 
> them to run and sorting out which script language implementation gives the best 
> performance and smartest resource usage behavior under ES.
>
> Jörg
>
>
> On Fri, Apr 4, 2014 at 9:11 PM, Paul Sanwald wrote:
>
>> it seems I'm the only one with this problem. perhaps I will migrate our 
>> scripts to javascript. I'll post back to the group with results.
>>
>>
>
>
>
>
>



Re: java 8, elasticsearch, and MVEL

2014-04-04 Thread Paul Sanwald
It seems I'm the only one with this problem. Perhaps I will migrate our 
scripts to JavaScript; I'll post back to the group with results.



java 8, elasticsearch, and MVEL

2014-03-28 Thread Paul Sanwald
I've been testing ES with Java 8, and everything is working fantastically, 
with the exception of MVEL, which is fairly broken. I've looked on the MVEL 
mailing lists and on GitHub issues, and there's not a lot of activity. I'm 
trying to decide if I should just migrate my MVEL scripts to a different 
language, which seems like the easiest path. Any thoughts? Have others 
moved ES installs to Java 8 successfully?
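For concreteness, this is the kind of migration I mean (the field name is made up, and the JavaScript variant assumes the lang-javascript plugin is installed): a 1.x script can name its language via the lang parameter, so an MVEL script like the first snippet would become the second:

```js
// MVEL (the 1.x default scripting language):
{ "script_score": { "script": "_score * doc['boost'].value" } }

// JavaScript equivalent; requires "lang" plus the lang-javascript plugin:
{ "script_score": { "script": "_score * doc['boost'].value", "lang": "js" } }
```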

--paul
