Fwd: Riak Cannot allocate bytes of memory (of type "heap")

2016-01-25 Thread Luke Bakken
Hi Byron -

I strongly suggest you monitor the number of siblings and the object
sizes of your Riak objects. These sorts of allocation errors can often
be caused by a very large object somewhere in your cluster.

This page gives information about which statistics to monitor:
http://docs.basho.com/riak/latest/ops/running/stats-and-monitoring/
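As a rough illustration, a check over the parsed JSON from a node's /stats endpoint might look like the sketch below. The stat names are the standard riak_kv FSM histogram stats; the thresholds are illustrative assumptions, not Basho recommendations.

```python
def check_object_stats(stats, max_objsize=1024 * 1024, max_siblings=25):
    """Return warnings for oversized objects or excessive siblings.

    `stats` is the parsed JSON from GET /stats on a Riak node, e.g.
    json.load(urllib.request.urlopen("http://127.0.0.1:8098/stats")).
    The thresholds are illustrative defaults -- tune them for your data.
    """
    warnings = []
    # 100th-percentile object size (bytes) and sibling count seen by get FSMs
    objsize = stats.get("node_get_fsm_objsize_100", 0)
    siblings = stats.get("node_get_fsm_siblings_100", 0)
    if objsize > max_objsize:
        warnings.append("object of %d bytes exceeds %d" % (objsize, max_objsize))
    if siblings > max_siblings:
        warnings.append("%d siblings exceeds %d" % (siblings, max_siblings))
    return warnings
```

Polling this on each node and alerting when the list is non-empty will usually surface the offending object long before the allocator gives up.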

> Are m3.large instances an issue for Riak?

It all depends on your use case. Since you are running into these
allocation errors, either something is off in how Riak is being used,
or the instances simply do not have enough resources.

> Can you let me know what we might expect if we disable Active Anti-Entropy - 
> will that make our solr queries return stale data?

Not necessarily. I will forward this thread to others who can speak to
that better than I can.

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Jan 25, 2016 at 11:34 AM, Sakoulas, Byron
 wrote:
> Luke - thanks for replying.
>
> Are m3.large instances an issue for Riak? We were originally told those would
> be fine by Dimitri at Basho.
> We raised the Solr RAM to GB after having issues with Solr running out of
> memory.
>
> Can you let me know what we might expect if we disable Active Anti-Entropy - 
> will that make our solr queries return stale data?
>
> On 1/25/16, 12:17 PM, "Luke Bakken"  wrote:
>
>>Hello Byron -
>>
>>m3.large instances only support 7.5 GiB of RAM. You can see that Riak
>>crashed while attempting to allocate 2.12 GiB of RAM for leveldb.
>>
>>I suggest decreasing the JVM (Solr) RAM back to the 1 GiB setting that
>>ships with Riak. You can also experiment with disabling Active
>>Anti-Entropy to reduce memory usage. Hopefully someone with more
>>experience with Riak Search (Yokozuna) interaction with Active
>>Anti-Entropy will chime in on this thread.
>>
>>Or, increase the amount of RAM available to these VMs.
>>
>>Thanks
>>
>>--
>>Luke Bakken
>>Engineer
>>lbak...@basho.com
>>
>>
>>On Mon, Jan 25, 2016 at 10:10 AM, Sakoulas, Byron
>> wrote:
>>> We are running an 8 node cluster of Riak on AWS, and our nodes are 
>>> consistently crashing with the error: Cannot allocate x bytes of memory 
>>> (of type "heap").
>>>
>>> Here are some of the specs for our env:
>>>
>>> 8 nodes - running on M3 Larges
>>> Level DB with 50% allocated
>>> Solr with 2Gig
>>> We use only Immutable and CRDT data
>>> We have a Custom search schema
>>> System config matches basho recommendations
>>> CentOs 7
>>> Riak 2.0.2
>>> Riak java client 2.0.0

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Cannot allocate bytes of memory (of type "heap")

2016-01-25 Thread Luke Bakken
Hello Byron -

m3.large instances only support 7.5 GiB of RAM. You can see that Riak
crashed while attempting to allocate 2.12 GiB of RAM for leveldb.
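A back-of-the-envelope budget using the figures from this thread shows why that allocation fails. (The leveldb share assumes the 50% setting Byron quoted; any OS/Erlang-VM overhead would only make the squeeze worse.)

```python
# Memory budget for an m3.large node, using the numbers in this thread.
total_gib = 7.5                 # m3.large RAM
leveldb_gib = 0.5 * total_gib   # 50% allocated to leveldb -> 3.75 GiB
solr_gib = 2.0                  # Solr JVM heap as configured
failed_alloc_gib = 2.12         # the allocation that crashed the node

headroom = total_gib - leveldb_gib - solr_gib
print(headroom)                       # 1.75 GiB left for the OS, beam, and AAE trees
print(headroom >= failed_alloc_gib)   # False: a 2.12 GiB request cannot fit
```

So even before counting the OS and the Erlang VM itself, a single 2.12 GiB leveldb allocation exceeds what is left over.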

I suggest decreasing the JVM (Solr) RAM back to the 1 GiB setting that
ships with Riak. You can also experiment with disabling Active
Anti-Entropy to reduce memory usage. Hopefully someone with more
experience with Riak Search (Yokozuna) interaction with Active
Anti-Entropy will chime in on this thread.

Or, increase the amount of RAM available to these VMs.
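For reference, the two configuration changes above would land in riak.conf. A sketch, assuming the Riak 2.0.x schema (verify the option names against the docs for your version):

```
## riak.conf fragment (Riak 2.0.x schema -- check your version's docs)

## Drop the Solr JVM heap back to the shipped 1 GiB default:
search.solr.jvm_options = -d64 -Xms1g -Xmx1g -XX:+UseStringCache -XX:+UseCompressedOops

## Stop active anti-entropy exchanges (trees are kept but not exchanged):
anti_entropy = passive
```

Both take effect after a node restart.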

Thanks

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Jan 25, 2016 at 10:10 AM, Sakoulas, Byron
 wrote:
> We are running an 8 node cluster of Riak on AWS, and our nodes are 
> consistently crashing with the error: Cannot allocate x bytes of memory (of 
> type "heap").
>
> Here are some of the specs for our env:
>
> 8 nodes - running on M3 Larges
> Level DB with 50% allocated
> Solr with 2Gig
> We use only Immutable and CRDT data
> We have a Custom search schema
> System config matches basho recommendations
> CentOs 7
> Riak 2.0.2
> Riak java client 2.0.0
>
> Below is the console log leading up to the crash. I have also attached the 
> erl_crash.dump file. Any help is greatly appreciated.
>
> 2016-01-25 16:34:16.822 [info] 
> <0.2681.4>@riak_kv_exchange_fsm:key_exchange:263 Repaired 1 keys during 
> active anti-entropy exchange of 
> {707914855582156101004909840846949587645842325504,3} between 
> {730750818665451459101842416358141509827966271488,'riakaws@172.16.65.8'}
>  and 
> {753586781748746817198774991869333432010090217472,'riakaws@172.16.65.12'}
> 2016-01-25 16:34:56.867 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,180682}]
>  
> [{old_heap_block_size,0},{heap_block_size,22177879},{mbuf_size,0},{stack_size,26},{old_heap_size,0},{heap_size,8755966}]
> 2016-01-25 16:35:00.231 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,203278}]
>  
> [{old_heap_block_size,0},{heap_block_size,26613454},{mbuf_size,0},{stack_size,26},{old_heap_size,0},{heap_size,9839470}]
> 2016-01-25 16:35:08.857 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,256704}]
>  
> [{old_heap_block_size,0},{heap_block_size,31936144},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,12371527}]
> 2016-01-25 16:35:15.731 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,299047}]
>  
> [{old_heap_block_size,0},{heap_block_size,38323372},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,14501169}]
> 2016-01-25 16:35:21.285 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,330848}]
>  
> [{old_heap_block_size,0},{heap_block_size,45988046},{mbuf_size,0},{stack_size,26},{old_heap_size,0},{heap_size,16029792}]
> 2016-01-25 16:35:36.034 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,382846}]
>  
> [{old_heap_block_size,0},{heap_block_size,55185655},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,18521726}]
> 2016-01-25 16:35:49.409 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,455689}]
>  
> [{old_heap_block_size,0},{heap_block_size,66222786},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,21841438}]
> 2016-01-25 16:35:59.878 [info] <0.71.0> alarm_handler: 
> {set,{process_memory_high_watermark,<0.1369.0>}}
> 2016-01-25 16:36:00.267 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{eleveldb,get,3}},{message_queue_len,515497}]
>  
> [{old_heap_block_size,0},{heap_block_size,79467343},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,24737674}]
> 2016-01-25 16:36:08.497 [info] 
> <0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor large_heap 
> <0.1369.0> 
> [{initial_call,{yz_index_hashtree,init,1}},{almost_current_function,{hashtree,should_insert,3}},{message_queue_len,560639}]
>  
> [{old_heap_block_size,0},{heap_block_size,95360811},{mbuf_size,0},{stack_size,19},{old_heap_size,0},{heap_size,26973030}]
> 2016-01-25 16:36:34.80

Re: accessor for FetchDatatype's location?

2016-01-25 Thread David Byron

Finally got around to this:

https://github.com/basho/riak-java-client/pull/590

Thanks for the help.

-DB

On 1/20/16 7:58 AM, Alex Moore wrote:

Hey David,

We include/shade that jar in the final riak-client jar so we stopped
shipping it separately.
You'll want to clone riak_pb, and build this tag to install it locally
for dev: https://github.com/basho/riak_pb/tree/java-2.1.1.0

You'll need protocol buffers 2.5.0 to build, and you should be able to
build/install the riak_pb lib with `mvn install`. If you use homebrew
you can use this procedure to install the older protobuf lib (just swap
protobuf241 for protobuf250).

Thanks,
Alex

On Thu, Jan 14, 2016 at 2:22 PM, David Byron mailto:dby...@dbyron.com>> wrote:

On 1/14/16 7:40 AM, Alex Moore wrote:
> Hi David,
>
> It doesn't look like we expose that property anywhere, but it can
> probably be chalked up to YAGNI when it was written.   Go forth and
> PR :)

Excellent...except for this at the HEAD of develop (24e1404).

$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Riak Client for Java 2.0.5-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for com.basho.riak.protobuf:riak-pb:jar:2.1.1.0 is
missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.285 s
[INFO] Finished at: 2016-01-14T11:17:22-08:00
[INFO] Final Memory: 10M/309M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project riak-client: Could not
resolve dependencies for project
com.basho.riak:riak-client:jar:2.0.5-SNAPSHOT: Failure to find
com.basho.riak.protobuf:riak-pb:jar:2.1.1.0 in
https://repo.maven.apache.org/maven2 was cached in the local
repository, resolution will not be reattempted until the update
interval of central has elapsed or updates are forced -> [Help 1]

The latest version I see at
http://mvnrepository.com/artifact/com.basho.riak.protobuf/riak-pb is
2.0.0.16.  When I change pom.xml to use that version I get
truckloads of errors.

-DB




Re: Get all keys from a bucket

2016-01-25 Thread Russell Brown
Hi Markus,
Are you using leveldb backend?

Russell
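(Context for the question: with the leveldb backend, the usual alternative to a full key list is paging over the $key secondary index. A sketch of the request URLs, assuming Riak's standard 2i pagination parameters; host, bucket, and the key range bounds are placeholders:)

```python
from urllib.parse import quote, urlencode

def keys_page_url(host, bucket, max_results=1000, continuation=None):
    """URL for one page of keys from `bucket` via the $key index.

    Requires the leveldb backend. The 0..zzzzzzzzzz range is a placeholder
    covering typical ASCII keys; widen it for your actual key space. Pass the
    `continuation` token from the previous response to fetch the next page.
    """
    base = "http://%s/buckets/%s/index/$key/0/zzzzzzzzzz" % (host, quote(bucket))
    params = {"max_results": max_results}
    if continuation:
        params["continuation"] = continuation
    return base + "?" + urlencode(params)
```

Paging in bounded chunks avoids the full-cluster key fold that makes `keys=stream` time out.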

On 22 Jan 2016, at 19:05, Markus Geck  wrote:

> Hello,
> 
> is there any way to get all keys from a bucket?
> 
> I've already tried this guide:
> http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html, but Riak
> always goes unresponsive with a huge server load,
> 
> and "GET /buckets/bucket/keys?keys=stream" returns a timeout error.
> 
> Is there any other way?




Get all keys from a bucket

2016-01-25 Thread Markus Geck
Hello,

is there any way to get all keys from a bucket?

I've already tried this guide:
http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html, but Riak
always goes unresponsive with a huge server load,

and "GET /buckets/bucket/keys?keys=stream" returns a timeout error.

Is there any other way?