Re: Secondary indexes or Riak search ?

2017-02-02 Thread Jason Voegele
Hi Alex, There is some info on this page that can help you decide: http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/ See the sections titled "When to Use Secondary Indexes" and "When Not to Use Secondary Indexes". Sent from my iPad > On Feb 2, 2017, at 4:43 AM, Alex

Re: Reg:Continuous Periodic crashes after long operation

2017-02-01 Thread Luke Bakken
n > > Luke Bakken <lbak...@basho.com> writes: > >> Hi Steven, >> >> At this point I suspect you're using the Python client in such a way >> that too many connections are being created. Are you re-using the >> RiakClient object or repeatedly creating new ones? Can you

Re: Reg:Continuous Periodic crashes after long operation

2017-02-01 Thread Steven Joseph
experiment with using process ids as keys to access a process-specific riak client in a forked child? Regards Steven Luke Bakken <lbak...@basho.com> writes: > Hi Steven, > > At this point I suspect you're using the Python client in such a way > that too many connections are being c

Re: Reg:Continuous Periodic crashes after long operation

2017-02-01 Thread Luke Bakken
Hi Steven, At this point I suspect you're using the Python client in such a way that too many connections are being created. Are you re-using the RiakClient object or repeatedly creating new ones? Can you provide any code that reproduces your issue? -- Luke Bakken Engineer lbak...@basho.com
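A minimal sketch of the reuse pattern suggested above, shown with the Erlang client (the same principle applies to the Python RiakClient); host, port, bucket, and key are illustrative:
```
%% Open ONE connection at application startup and reuse it for every
%% request, instead of creating a new client per operation.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% ...many operations share the same Pid...
{ok, _Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),
ok = riakc_pb_socket:stop(Pid).
```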

Re: Reg:Continuous Periodic crashes after long operation

2017-01-31 Thread Steven Joseph
Hi Luke, Here's the output of $ sysctl fs.file-max fs.file-max = 2500 Regards Steven On Wed, Feb 1, 2017 at 9:30 AM Luke Bakken wrote: > Hi Steven, > > What is the output of this command on your systems? > > $ sysctl fs.file-max > > Mine is: > > fs.file-max = 1620211

Re: Reg:Continuous Periodic crashes after long operation

2017-01-31 Thread Luke Bakken
Hi Steven, What is the output of this command on your systems? $ sysctl fs.file-max Mine is: fs.file-max = 1620211 -- Luke Bakken Engineer lbak...@basho.com On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph wrote: > Hi Shaun, > > Im having this issue again, this time I

Re: Reg:Continuous Periodic crashes after long operation

2017-01-31 Thread Steven Joseph
Hi Shaun, I'm having this issue again; this time I have captured the system limits while riak is still crashing. Please note the lsof and prlimit outputs at the bottom. steven@hawk5:log/riak:» tail error.log

Re: Efficient way to fetch all records from a bucket

2017-01-30 Thread Russell Brown
If you use leveldb, there is a function in the riak-erlang-client that gets all objects in a bucket. I don’t know if it has been implemented in the Java client, as it was written specifically for riak-cs. https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130

Re: Efficient way to fetch all records from a bucket

2017-01-28 Thread Russell Brown
The link I provided gives you the _objects_ too. list_keys gives only keys. On 28 Jan 2017, at 12:21, Grigory Fateyev wrote: > Hello! > > I think this link > https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L506 > ? You need list_keys/2
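For contrast, a sketch of the keys-then-objects route with the Erlang client (connection details and bucket name are illustrative); list_keys/2 returns only the keys, so each object still costs a separate get:
```
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Keys} = riakc_pb_socket:list_keys(Pid, <<"bucket">>),
%% Fetch every object behind the listed keys, one get per key.
Objects = [begin
               {ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, K),
               Obj
           end || K <- Keys].
```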

Re: Efficient way to fetch all records from a bucket

2017-01-28 Thread Grigory Fateyev
Hello! I think this link https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L506 ? You need the list_keys/2 function. 2017-01-28 13:33 GMT+03:00 Russell Brown : > IF you use leveldb, there is a function in the riak-erlang-client that > gets all

Re: Efficient way to fetch all records from a bucket

2017-01-28 Thread Russell Brown
If you use leveldb, there is a function in the riak-erlang-client that gets all objects in a bucket. I don’t know if it has been implemented in the Java client, as it was written specifically for riak-cs. https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130

Re: Reg:Continuous Periodic crashes after long operation

2017-01-27 Thread Steven Joseph
Hi Shaun, I have already set this to a very high value (r...@hawk1.streethawk.com)1> os:cmd("ulimit -n"). "2500\n" (r...@hawk1.streethawk.com)2> So the issue is not that the limit is low, but maybe a resource leak? As I mentioned, our application processes continuously run queries on the

Re: Reg:Continuous Periodic crashes after long operation

2017-01-27 Thread Steven Joseph
I've had this issue again. This time I checked the output of lsof, and it seems the number of established connections is way too high. I've configured my application tasks to exit and clean up connections periodically. That should solve it. Thanks guys. Steven On Fri, Jan 27, 2017 at 3:07 AM

Re: How many keys inside a bucket ?

2017-01-27 Thread Alexander Sicular
Hello Alex, As long as each bucket does not have its own properties but rather shares one or a handful of bucket types you should be fine and it wouldn't make a difference. One way to record data temporally, aka in a time series fashion, from a data model perspective is via a pattern called

Re: How many keys inside a bucket ?

2017-01-26 Thread Alex Feng
Hi Alexander, Yes, I should consider the possibility of switching to Riak TS. But I guess the question is still valid, isn't it? Should I divide millions of keys into different buckets, and does it make any difference in performance, memory, or space? Br, Alex 2017-01-27 2:50 GMT+01:00 Alexander Sicular

Re: How many keys inside a bucket ?

2017-01-26 Thread Alexander Sicular
Hi, you should consider using Riak TS for this use case. -Alexander @siculars http://siculars.posthaven.com Sent from my iRotaryPhone > On Jan 27, 2017, at 01:54, Alex Feng wrote: > > Hi, > > I am wondering if there are some best practice or recommendation for how

Re: Riak: leveldb vs multi backend disk usage

2017-01-26 Thread Alexander Sicular
Riak CS stores data chunks in bitcask and the index/metadata file in leveldb. Bitcask, as noted, has no compression. When you force Riak to use level for the data chunks you get compression for that data which may or may not be good for your use case. If it's not good for your use case I

Re: memory usage of Riak.

2017-01-26 Thread Alex Feng
Hi Matthew, Thank you for the help. I got an answer from Shaun already; it seems LevelDB and Bitcask are the same in that their memory usage does not show up in Erlang. Br, Alex 2017-01-26 14:47 GMT+01:00 Matthew Von-Maszewski : > Alex, > > Which backend are you using? Leveldb's memory

Re: Reg:Continuous Periodic crashes after long operation

2017-01-26 Thread Matthew Von-Maszewski
FYI: this is the function that is crashing:

get_uint32_measurement(Request, #internal{os_type = {unix, linux}}) ->
    {ok,F} = file:open("/proc/loadavg",[read,raw]),  %% <--- crash line
    {ok,D} = file:read(F,24),
    ok = file:close(F),

Re: Reg:Continuous Periodic crashes after long operation

2017-01-26 Thread Luke Bakken
Steven, You may be able to get information via the lsof command as to what process(es) are using many file handles (if that is the cause). I searched for that particular error and found this GH issue: https://github.com/emqtt/emqttd/issues/426 Which directed me to this page:

Re: memory usage of Riak.

2017-01-26 Thread Matthew Von-Maszewski
Alex, Which backend are you using? Leveldb's memory usage does not show up within Erlang. Maybe that is what you are experiencing? Matthew Sent from my iPad > On Jan 26, 2017, at 5:47 AM, Alex Feng wrote: > > Hi Riak Users, > > One of my riak nodes, it has 4G

Re: Reg:Continuous Periodic crashes after long operation

2017-01-26 Thread Steven Joseph
Hi Shaun, I have already set this to a very high value (r...@hawk1.streethawk.com)1> os:cmd("ulimit -n"). "2500\n" (r...@hawk1.streethawk.com)2> So the issue is not that the limit is low, but maybe a resource leak? As I mentioned, our application processes continuously run queries on the

Re: Reg:Continuous Periodic crashes after long operation

2017-01-26 Thread Shaun McVey
Hi Steven, Based on that log output, it looks like you're running into issues with system limits, probably open file limits. You can check the value that Riak has available by connecting to one of the nodes with riak attach, then executing: ``` os:cmd("ulimit -n"). ``` (After, disconnect with

Re: Active Anti Entropy Directory when AAE is disabled

2017-01-26 Thread Magnus Kessler
On 25 January 2017 at 21:09, Arun Rajagopalan wrote: > Thanks Luke. Sorry it took me some time to experiment ... > > I am not sure what happens in a couple of scenarios. Maybe you can explain > > Let's say I lose a node completely and want to replace it. Will the

Re: Object not found after successful PUT on S3 API

2017-01-25 Thread Daniel Miller
Thanks for the quick response, Luke. There is nothing unusual about the keys. The format is a name + UUID + some other random URL-encoded characters, like most other keys in our cluster. There are no errors near the time of the incident in any of the logs (the last [error] is from over a month

Re: Active Anti Entropy Directory when AAE is disabled

2017-01-25 Thread Arun Rajagopalan
Thanks Luke. Sorry it took me some time to experiment ... I am not sure what happens in a couple of scenarios. Maybe you can explain. Let's say I lose a node completely and want to replace it. Will the keys yet to be "anti-entropied" by that node be distributed correctly when I restore that node?

Re: Object not found after successful PUT on S3 API

2017-01-25 Thread Luke Bakken
Hi Daniel - This is a strange scenario. I recommend looking at all of the log files for "[error]" or other entries at about the same time as these PUTs or 404 responses. Is there anything unusual about the key being used? -- Luke Bakken Engineer lbak...@basho.com On Wed, Jan 25, 2017 at 6:40

Re: Riak CS race condition at start-up (was: Riak-CS issues when Riak endpoint fails-over to new server)

2017-01-22 Thread Toby Corkindale
> > Hi guys, > > I've switched our configuration around, so that Riak CS now talks to > > 127.0.0.1:8087 instead of the haproxy version. > > > > We have immediately re-encountered the problems that caused us to move to > > haproxy. > > On start-up, riak ta

Re: Riak CS race condition at start-up (was: Riak-CS issues when Riak endpoint fails-over to new server)

2017-01-20 Thread Luke Bakken
19, 2017 at 5:38 PM, Toby Corkindale <t...@dryft.net> wrote: > Hi guys, > I've switched our configuration around, so that Riak CS now talks to > 127.0.0.1:8087 instead of the haproxy version. > > We have immediately re-encountered the problems that caused us to move to >

Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Matthew Von-Maszewski
Damion, I will explain what happened. ring_size = 8: The default ring_size is 64. It is based on the recommendation of five servers for a minimum cluster. You stated you are using only one machine. 64 divided by 5 is 12.8 vnodes per server ... and ring size needs to be a power of 2. So

Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Junk, Damion A
Matthew - That did it! Actually, I tried with both settings, and also with just the ring_size change. Setting ring_size to 8 got rid of crashing. I'll have to do a bit more reading on this setting I suppose. I have a much more memory-constrained virtual machine running on my local desktop

Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Matthew Von-Maszewski
Damion, Add the following settings within riak.conf: leveldb.limited_developer_mem = on ring_size = 8 Erase all data / vnodes and start over. Matthew > On Jan 19, 2017, at 8:51 AM, Junk, Damion A wrote: > > Hi Magnus - > > I've tried a wide range of parameters for

Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Junk, Damion A
Hi Magnus - I've tried a wide range of parameters for leveldb.maximum_memory_percent ranging from 5 to 70. I also tried the leveldb.maximum_memory setting in bytes, ranging from 500MB to 4GB. I get the same results in the crash/console log no matter what the settings. But the log messages seem

Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Magnus Kessler
gure out what's going on with the server, so I > completely wiped /var/lib/riak and re-installed from packagecould). Ulimit > -n is set appropriately as well. > > If I make the following changes to /etc/riak/riak.conf I get crash error > messages: > > storage_backend = leveldb > search =

Re: Active Anti Entropy Directory when AAE is disabled

2017-01-18 Thread Luke Bakken
Hi Arun - I don't know the answer off the top of my head, but I suspect that disabling AAE will leave that directory and the files in it untouched afterward. One way to find out would be to disable AAE and monitor the access time of the files in the anti_entropy directory. -- Luke Bakken

Re: Deleted bucket key but still showing up in yokozuna search results

2017-01-18 Thread Luke Bakken
Hi Pulin, Did this document eventually disappear from search results? You should check your Riak logs and solr.log files for errors with regard to communication between Riak and the Solr process. -- Luke Bakken Engineer lbak...@basho.com On Fri, Dec 16, 2016 at 12:10 PM, Pulin Gupta

Re: Question about claimant node.

2017-01-18 Thread Alex Feng
ative commands that you'd find it > doesn't work. You just simply mark it as down though and the cluster will > re-elect a new claimant to take over the role and you can continue. > > Kind Regards, > Shaun > > On Wed, Jan 18, 2017 at 9:05 AM, Alex Feng <sweden.f...@gmail.com> wrot

Re: Question about claimant node.

2017-01-18 Thread Shaun McVey
. During normal operations, it's just like any other node. It's only when attempting to run certain administrative commands that you'd find it doesn't work. You just simply mark it as down though and the cluster will re-elect a new claimant to take over the role and you can continue. Kind

Re: secondary indexes

2017-01-17 Thread Russell Brown
Tuesday, January 17, 2017 3:45:34 PM > To: Andy leu > Cc: riak-users@lists.basho.com > Subject: Re: secondary indexes > > Hi, > Riak's secondary indexes require a sorted backend, either of the memory or > leveldb backends will work, bitcask does not support secondary indexes

Re: secondary indexes

2017-01-17 Thread Andy leu
ubject: Re: secondary indexes Hi, Riak's secondary indexes require a sorted backend, either of the memory or leveldb backends will work, bitcask does not support secondary indexes. More details here http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/ Cheers Russell On

Re: consult some questions about riak,thank you

2017-01-17 Thread Luke Bakken
std::bad_alloc is thrown when memory can't be allocated. This can happen when there is no more free RAM. Do you have monitoring enabled on these servers where you can watch memory consumption? -- Luke Bakken Engineer lbak...@basho.com On Fri, Jan 13, 2017 at 8:21 AM, 270917674

Re: secondary indexes

2017-01-16 Thread Russell Brown
Hi, Riak's secondary indexes require a sorted backend, either of the memory or leveldb backends will work, bitcask does not support secondary indexes. More details here  http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/ Cheers Russell On Jan 17, 2017, at 07:13 AM, Andy
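A hedged sketch of 2i usage with the Erlang client, assuming a node running the leveldb backend; the bucket, key, index name, and values are illustrative:
```
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% Attach a binary secondary index to the object's metadata on write.
Obj0 = riakc_obj:new(<<"users">>, <<"alex">>, <<"some value">>),
MD0 = riakc_obj:get_update_metadata(Obj0),
MD1 = riakc_obj:set_secondary_index(MD0, [{{binary_index, "group"}, [<<"admins">>]}]),
ok = riakc_pb_socket:put(Pid, riakc_obj:update_metadata(Obj0, MD1)),
%% Query the index for all keys whose group index equals "admins".
{ok, Results} = riakc_pb_socket:get_index_eq(Pid, <<"users">>, {binary_index, "group"}, <<"admins">>).
```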

Re: Riak CS - admin keys changing

2017-01-16 Thread Shaun McVey
Hi Toby, If you put another user into the config, that's all it takes to make them the admin user. There's no special value that's set in the database itself. Any user can be an admin user; it doesn't even have to be the first one created. It's just whatever user you have set in the config.

Re: Riak CS - admin keys changing

2017-01-15 Thread Toby Corkindale
Hi, I have a follow-up question around this security aspect. If the riak-cs.conf and stanchion.conf files are changed so that their admin.key and admin.secret match a different user (e.g. not that first-created admin user), then will that user now have admin-like privileges? Or are the
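For reference, the settings in question live in both files and must carry the same pair; a sketch with placeholder values (not real credentials):
```
## riak-cs.conf and stanchion.conf must agree on these two lines
admin.key = EXAMPLEADMINKEY
admin.secret = exampleadminsecretvalue
```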

Re: Riak CS - admin keys changing

2017-01-12 Thread Toby Corkindale
Thanks, Luke! On Fri, 13 Jan 2017 at 12:10 Luke Bakken wrote: Hi Toby, When you create the user, the data is stored in Riak (and is the authoritative location). The values must match in the config files to provide credentials used when connecting to various parts of your CS

Re: Riak CS - admin keys changing

2017-01-12 Thread Luke Bakken
Hi Toby, When you create the user, the data is stored in Riak (and is the authoritative location). The values must match in the config files to provide credentials used when connecting to various parts of your CS cluster. -- Luke Bakken Engineer lbak...@basho.com On Thu, Jan 12, 2017 at 3:47

Re: Export Riak TS tables to CSV

2017-01-12 Thread Alexander Sicular
Hi Ricardo, Riak itself won't do this. Afaik, the client libraries return language-specific array-like data structures. Actually converting those to CSV would be an exercise for the developer. Thankfully most languages have a readily available array-to-CSV library which will basically do it for
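A sketch of that exercise with the Erlang client's riakc_ts module, assuming the query returns {ok, {ColumnNames, Rows}} with rows as tuples; the table, WHERE clause, and file name are illustrative:
```
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, {Cols, Rows}} = riakc_ts:query(Pid,
    "SELECT * FROM GeoCheckin WHERE time > 1 AND time < 1000"
    " AND family = 'f1' AND series = 's1'"),
%% Render each field as text; extend the clauses for other column types.
ToS = fun(B) when is_binary(B)  -> binary_to_list(B);
         (I) when is_integer(I) -> integer_to_list(I);
         (F) when is_float(F)   -> float_to_list(F)
      end,
Header = string:join([binary_to_list(C) || C <- Cols], ","),
Lines  = [string:join([ToS(V) || V <- tuple_to_list(R)], ",") || R <- Rows],
ok = file:write_file("export.csv", string:join([Header | Lines], "\n")).
```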

Re: Unable to use riak-ts shell

2017-01-12 Thread Ricardo Mayerhofer
Hi Gordon, It was a good addition! Very descriptive; you can walk through the extensions and commands without having to check the docs. However, the concept of an extension is not common and may cause confusion IMO. An option would be to just have commands and categories. Ricardo On Fri, Jan

Re: Situations where a fetch can retun no causal context

2017-01-10 Thread Alex Moore
Hi Michael, For the Set, Map, and Counter data types the only other situation I can think of is if the user explicitly set the "INCLUDE_CONTEXT" option to false. That option defaults to true, so it should always return one if the data type you fetched isn't a bottom (initial) value. If it is

Re: Reg: value error in mochiglobal:compile/2 line 51

2017-01-09 Thread DeadZen
Seems to be not actually a mochiglobal error so much as os_mon reporting a system limit. You can up some of your values in vm.args: max processes, ets table limits, etc. On Mon, Jan 9, 2017 at 5:14 AM Steven Joseph wrote: > Hi Folks, > > I've started getting this error in my riak

Re: Is every node share the same metadata ?

2017-01-06 Thread Alexander Sicular
Hi Alex, You seem to be referring to bitcask metadata. That metadata is loaded into ram on each node. As you note, the bitcask capacity planner calculates ram required by a cluster to service a certain number of keys. I think where you are confused is that this metadata is not synced across the

Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-05 Thread Toby Corkindale
nect to its KV node should be passed to the > client/front-end, which should have all the proper logic for re-attempts or > error reporting. > > > I'm surprised more people with highly-available Riak CS installations > haven't hit the same issues. > > As I mentioned, ou

Re: I2 queries fail when few nodes are down

2017-01-05 Thread Magnus Kessler
On 4 January 2017 at 23:22, Tomi Takussaari wrote: > Hello Riak-users > > We have 9 node Riak-cluster, that we use to store user accounts. > > Some of the crucial data fields of user account are indexed using I2, so > that we can do secondary index queries based on

Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-05 Thread Shaun McVey
nue to deal with new requests without problems. Any failures to connect to its KV node should be passed to the client/front-end, which should have all the proper logic for re-attempts or error reporting. > I'm surprised more people with highly-available Riak CS installations haven't hit th

Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-04 Thread Toby Corkindale
and Riak solved the problem of needing the local Riak to be started first. But it seems we were just putting the core problem off, rather than solving it, i.e. that Riak CS doesn't understand it needs to re-connect and retry. I'm surprised more people with highly-available Riak CS installations

Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-04 Thread Magnus Kessler
Hi Toby, As far as I know Riak CS has none of the more advanced retry capabilities that Riak KV has. However, in the design of CS there seems to be an assumption that a CS instance will talk to a co-located KV node on the same host. To achieve high availability, in CS deployments HAProxy is often

Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-03 Thread Toby Corkindale
Hello all, Now that we're all back from the end-of-year holidays, I'd like to bump this question up. I feel like this has been a long-standing problem with Riak CS not handling dropped TCP connections. Last time the cause was haproxy dropping idle TCP connections after too long, but we solved that

Re: Reaping Tombstones

2017-01-03 Thread Nick Marino
We have new tombstone reaping functionality in the works for Riak KV 2.3, which should allow for safe and automatic removal of old leftover tombstones. In the meantime, you can potentially trigger the deletion of old, leftover tombstones by doing reads to those keys; if all the primary replicas
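A hedged sketch of that read-to-reap trigger with the Erlang client; the bucket and keys are illustrative, and a reap only happens once all primary replicas have seen the delete:
```
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
DeletedKeys = [<<"k1">>, <<"k2">>, <<"k3">>],
%% Each read of a fully-deleted key should come back notfound and can
%% prompt the cluster to reap the leftover tombstone.
[{error, notfound} = riakc_pb_socket:get(Pid, <<"bucket">>, K) || K <- DeletedKeys].
```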

Re: Reaping Tombstones

2016-12-30 Thread Matthew Von-Maszewski
lan <arun.v.rajagopa...@gmail.com> > wrote: > > Thanks Matthew & Luca > > Re: global expiry - will that option retroactively remove objects? That is > remove objects that became "unneeded" before the option was set ? > Same question w.r.t delete_mode > >

Re: Reaping Tombstones

2016-12-30 Thread Arun Rajagopalan
Thanks Matthew & Luca Re: global expiry - will that option retroactively remove objects? That is, remove objects that became "unneeded" before the option was set? Same question w.r.t. delete_mode Re: Map / Reduce - I am not sure the delete would remove the tombstone unless I set t

Re: Reaping Tombstones

2016-12-30 Thread Luca Favatella
On 30 December 2016 at 15:06, Matthew Von-Maszewski wrote: > Greetings, > > I am not able to answer your tombstone questions. That question needs a > better expert. > > Just wanted to point out that Riak now has global expiry in both the leveldb > and bitcask backends. That

Re: Reaping Tombstones

2016-12-30 Thread Matthew Von-Maszewski
Greetings, I am not able to answer your tombstone questions. That question needs a better expert. Just wanted to point out that Riak now has global expiry in both the leveldb and bitcask backends. That might be a quicker solution for your frequent delete operations:

Re: Storing Timestamps With Microsecond Resolution in RiakTS

2016-12-20 Thread Alexander Sicular
Hi Joe, You could do that. Riak currently supports millisecond resolution. Look at the primary key composition. There are two lines: the first is the partition key and the second is the local key. The local key denotes the sort order and is the actual unique key for that grouping (quanta).
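For reference, a sketch of the two-part key in a table definition, modeled on the stock GeoCheckin example (names and quantum are illustrative):
```
CREATE TABLE GeoCheckin (
   family  VARCHAR   NOT NULL,
   series  VARCHAR   NOT NULL,
   time    TIMESTAMP NOT NULL,
   PRIMARY KEY (
     (family, series, QUANTUM(time, 15, 'm')),  -- partition key
      family, series, time                      -- local key: sort order / uniqueness
   )
)
```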

Re: Error creating bucket with LevelDB bucket-type

2016-12-15 Thread Felipe Esteves
Hi, Luke, It worked! Thanks a lot for the help and the feedback! Felipe Esteves Tecnologia felipe.este...@b2wdigital.com Tel.: (21) 3504-7162 ramal 57162 Skype: felipe2esteves 2016-12-15 14:19 GMT-02:00 Luke Bakken : > What is the output of

Re: Error creating bucket with LevelDB bucket-type

2016-12-15 Thread Felipe Esteves
Hi, I've managed to correct this error of mine by using the correct backend name, that is, *leveldb_mult* instead of *leveldb*. Now I have another problem: the Python client returns no error and the riak log is also clean, but I can't find the created bucket when I run buckets?buckets=true. Seems to me

Re: List all keys on a small bucket

2016-12-08 Thread John Daily
I'd completely forgotten leveldb had that advantage. Russell is correct. Sent from my iPhone > On Dec 8, 2016, at 4:51 PM, Russell Brown wrote: > > Depends on what backend you are running, no? If leveldb then this list keys > operation can be pretty cheap. > > It’s a

Re: List all keys on a small bucket

2016-12-08 Thread Russell Brown
Depends on what backend you are running, no? If leveldb then this list keys operation can be pretty cheap. It’s a coverage query, but if it’s leveldb at least you will seek to the start of the bucket and iterate over only the keys in that bucket. Cheers Russell On 8 Dec 2016, at 21:19, John

Re: List all keys on a small bucket

2016-12-08 Thread John Daily
The size of the bucket has no real impact on the cost of a list keys operation because each key on the cluster must be examined to determine whether it resides in the relevant bucket. -John > On Dec 8, 2016, at 4:17 PM, Arun Rajagopalan > wrote: > > Hello Riak

Re: Monitor Riak Network Port and IO Port

2016-12-08 Thread Fred Dushin
The process is (typically) beam.smp, though you may have multiple on your machine if, for example, you are connected to riak via the console, or if you are running administrative commands (e.g., riak-admin). For the ports (if that is also what you are looking for) see:

Re: Percentile in Riak TS

2016-12-07 Thread Pavel Hardak
Hi Ricardo, Yes, we do plan to add PERCENTILE aggregation function to the next version of Riak TS, it is high on our to-do list. We also want to add several other aggregators, like MEDIAN, LAST, TOP, BOTTOM, etc. You probably noticed we have a number of aggregation functions already, they are

Re: using Riak TS for KV buckets

2016-12-06 Thread Pavel Hardak
Hi Gal, I am the product manager for Riak TS and the Spark Connector. Thanks for your questions and PRs. We have two versions of Riak - KV and TS. They share most of the code, but Riak TS is optimised for time series operations, while Riak KV is optimised for key-value operations. It is correct that most KV

Re: [ANN] Basho move Lager to erlang-lager organization on Github

2016-12-05 Thread Mark Allen
Just to follow up on this:  Andrew Thompson and I are going to start hosting a monthly lager issue/PR triage meeting on freenode, in #lager every third Thursday of the month starting 19 January 2017 at 1600 US/Central time, 1400 US/Pacific time, 2200 UTC for about an hour or so. During that

Re: Clarification questions re bucket types

2016-12-05 Thread Shaun McVey
Hi Henning, So normally, a bucket's custom properties are stored in the ring file. It's this file which is gossiped around regularly in a cluster. When users create hundreds/thousands of custom bucket properties (it's been done), it can grind the cluster almost to a standstill. A bucket-type
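A sketch of the bucket-type route, creating the custom properties once and reusing them across buckets (the type name and props are illustrative):
```
riak-admin bucket-type create mytype '{"props":{"allow_mult":true}}'
riak-admin bucket-type activate mytype
```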

Clarification questions re bucket types

2016-12-05 Thread Henning Verbeek
Hi guys, I still struggle with bucket types and have some questions. Going back a year I could not find many threads about it, but forgive me if I missed something and am asking already-answered questions. ## Cluster-awareness I've understood so far that bucket types are used as part of the

Re: Question about RiakCluster - Java client - 2.x

2016-12-01 Thread Alex Moore
Hi Konstantin, The RiakClient class is reentrant and thread-safe, so you should be able to share it among the different workers. You may have to adjust the min / max connection settings to get the most performance, but that's relatively easy. One other thing to notice is RiakClient's cleanup()

Re: Uneven distribution of partitions in RIAK cluster

2016-11-29 Thread Semov, Raymond
Cc: riak-users@lists.basho.com Subject: Re: Uneven distribution of partitions in RIAK cluster Hi Ray, Riak's partition distributi

Re: S3ResponseError: 403 Forbidden,backend configuration and riak cs version.

2016-11-29 Thread Shaun McVey
Hi Neo, 1) When building from source, the version numbers never show. They only appear in the packaged versions. That's why you're not seeing them. 2) After creating the admin user - did you change anonymous_user_creation back to 'off'? 3) I'm not entirely clear on what you did there - did

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-28 Thread Daniel Miller
Hi Alexander, Thanks a lot for your input. I have a few follow-up questions. > Stupid math: > > 3e7 x 3 (replication) / 9 = 1e7 minimum objects per node ( absolutely more > due to obj > 1MB size ) > > 1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4 > GB. > > You

Re: Java client -- slow to shutdown

2016-11-28 Thread Luke Bakken
Hi Toby - Thanks for reporting this. We can continue the discussion via GH issue #689. -- Luke Bakken Engineer lbak...@basho.com On Wed, Nov 23, 2016 at 9:58 PM, Toby Corkindale wrote: > Hi, > I'm using the Java client via protocol buffers to Riak. > (Actually I'm using it via

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-23 Thread Alexander Sicular
Hello DeadZen, Yes, networking interconnect becomes a bigger issue with more nodes in the cluster. A Riak cluster is actually a fully meshed network of erlang virtual machines. Multiple 1/10 gig nics dedicated to inter/intra networking are your friends. That said, we have many customers

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-23 Thread DeadZen
ok I loled at this. then got worried trump could win a node election. anyways. 24 gigs per riak server is not a bad safe bet. Erlang in general is ram heavy. It uses it more effectively than most languages wrt concurrency, but ram is the fuel for concurrency and a buffer for operations, especially

Re: rpberrorresp - Unable to access functioning riak node

2016-11-23 Thread Luke Bakken
er 23, 2016 10:33 AM > To: A R <roman.an...@outlook.com> > Cc: riak-users@lists.basho.com > Subject: Re: rpberrorresp - Unable to access functioning riak node > > Hi Andre, > > If you remove the load balancer, does it work? > > -- > Luke Bakken > Engineer > lb

Re: rpberrorresp - Unable to access functioning riak node

2016-11-23 Thread Luke Bakken
Hi Andre, If you remove the load balancer, does it work? -- Luke Bakken Engineer lbak...@basho.com On Tue, Nov 22, 2016 at 10:56 AM, A R wrote: > To whom it may concern, > > > I've set up a 2 riak ts nodes and a load-balancer on independent machines. > I'm able to

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-23 Thread Daniel Miller
Hi Alexander, Thanks for responding. > How many nodes? We currently have 9 nodes in our cluster. > How much ram per node? Each node has 4GB of ram and 4GB of swap. The memory levels (ram + swap) on each node are currently between 4GB and 5.5GB. > How many objects (files)? What is the average

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-22 Thread Alexander Sicular
Hi Daniel, Ya, I'm not surprised you're having issues. 4GB ram is woefully under-spec'd. Stupid math: 3e7 x 3 (replication) / 9 = 1e7 minimum objects per node (absolutely more due to obj > 1MB size) 1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4 GB. You

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-21 Thread Alexander Sicular
Hi Daniel, How many nodes? -You should be using 5 minimum if you're using the default config. There are reasons. How much ram per node? -As you noted, in Riak CS, 1MB file chunks are stored in bitcask. Their key names and some overhead consume memory. How many objects (files)? What is the average

Re: Riak CS: avoiding RAM overflow and OOM killer

2016-11-21 Thread Daniel Miller
I found a similar question from over a year ago ( http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-July/017327.html), and it sounds like leveldb is the way to go, although possibly not well tested. Has anything changed with regard to Basho's (or anyone else's) experience with using

Re: Initializing a commit hook

2016-11-19 Thread Luke Bakken
Hi Mav, I opened the following issue to continue investigation: https://github.com/basho/riak_kv/issues/1541 That would be the best place to continue discussion. I'll find time to reproduce what you have reported. Thanks - -- Luke Bakken Engineer lbak...@basho.com On Fri, Nov 18, 2016 at

Re: Initializing a commit hook

2016-11-18 Thread Mav erick
No luck :( I set up a bucket type called test-bucket-type. I did NOT set a data type. I set the hooks. Ran your curl -X PUT. The hook was not called. Tried several times, no luck. I changed the curl to hit my non-typed bucket, and the commit hook hit. $ riak-admin bucket-type list default (active)

Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Thanks for correcting that. Everything looks set up correctly. How are you saving objects? If you're using HTTP, what is the URL? Can you associate your precommit hook with a bucket type ("test-bucket-type" below) that is *not* set up for the "map" data type and see if your hook is called

Re: Initializing a commit hook

2016-11-18 Thread Mav erick
Here you go ... For the second URL - I think you meant testbucket without the hyphen. Also, I think you have an extra "props" in there. * Hostname was NOT found in DNS cache * Trying 127.0.0.1... * Connected to localhost (127.0.0.1) port 8098 (#0) > GET /types/maps/props HTTP/1.1 >

Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
What is the output of these commands? curl -4vvv localhost:8098/types/maps/props curl -4vvv localhost:8098/types/maps/props/buckets/test-bucket/props On Fri, Nov 18, 2016 at 2:21 PM, Mav erick wrote: > Luke > > I was able to change the properties with your URL, but still the

Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav - You're not using the correct HTTP URL. You can use this command: http://docs.basho.com/riak/kv/2.1.4/using/reference/bucket-types/#updating-a-bucket-type Or this URL: curl -XPUT localhost:8098/types/maps/props -H 'Content-Type: application/json' -d
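A hedged completion of that shape of command, with hypothetical module and function names (my_hooks:my_precommit) standing in for a real hook:
```
# hypothetical hook module/function; adjust to your own
curl -XPUT localhost:8098/types/maps/props \
  -H 'Content-Type: application/json' \
  -d '{"props":{"precommit":[{"mod":"my_hooks","fun":"my_precommit"}]}}'
```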

Re: Initializing a commit hook

2016-11-18 Thread Mav erick
Hi Luke I tried that and it didn't work for a bucket with bucket type = maps. My erlang code below does work for buckets without types, but I think it's because I didn't set the hook for the typed bucket correctly. Could you check my curl below, please? I did this to set the hook curl -X PUT

Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav - Please remember to use "Reply All" so that the riak-users list can learn from what you find out. Thanks. Thebucket = riak_object:bucket(Object), Can you check to see if "Thebucket" is really a two-tuple of "{BucketType, Bucket}"? I believe that is what is returned. -- Luke Bakken
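A minimal sketch of a hook module (hypothetical names, matching the curl sketch above) that allows for both return shapes of riak_object:bucket/1:
```
-module(my_hooks).
-export([my_precommit/1]).

%% Precommit hooks receive the object and must return it (or fail).
my_precommit(Object) ->
    case riak_object:bucket(Object) of
        {Type, Bucket} ->
            %% Typed buckets come back as a {BucketType, Bucket} two-tuple.
            error_logger:info_msg("typed bucket ~p/~p~n", [Type, Bucket]);
        Bucket ->
            error_logger:info_msg("untyped bucket ~p~n", [Bucket])
    end,
    Object.
```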

Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav - Can you go into more detail? The subject of your message is "initializing a commit hook". -- Luke Bakken Engineer lbak...@basho.com On Thu, Nov 17, 2016 at 9:09 AM, Mav erick wrote: > Folks > > Is there way RIAK can call an erlang function in a module when RIAK starts

RE: Riak TS Agility on handling Petabytes of Data

2016-11-18 Thread rajaa.krishnamurthy
ailable 24 X 7 [Blue_Pil] From: Alexander Sicular [mailto:sicul...@basho.com] Sent: 18 November 2016 10:03 To: Krishnamurthy,R,Rajaa,TAB13 C; riak-users@lists.basho.com Cc: Balaji,H,Hari,TAB13 C Subject: Re: Riak TS Agility on handling Petabytes of Data Hi Rajaa, What's your retention policy? At

Re: Associating Riak CRDT Sets to Buckets / Keys via Erlang Client

2016-11-17 Thread Vikram Lalit
This is awesome...! Many thanks Magnus - much appreciated... Must have overlooked some of these details in my initial analysis but am sure I have a very good starting point / details now! Thanks again! On Thu, Nov 17, 2016 at 7:48 AM, Magnus Kessler wrote: > On 16 November

Re: Associating Riak CRDT Sets to Buckets / Keys via Erlang Client

2016-11-17 Thread Magnus Kessler
On 16 November 2016 at 17:40, Vikram Lalit wrote: > Hi - I am trying to leveraging CRDT sets to store chat messages that my > distributed Riak infrastructure would store. Given the intrinsic > conflict-resolution, I thought this might be more beneficial than me > putting
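A minimal sketch of the approach with the Erlang client, assuming a bucket type (here "sets") created with datatype = set; the bucket, key, and message are illustrative:
```
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% Add a chat message to a CRDT set; Riak merges concurrent adds for us.
S0 = riakc_set:new(),
S1 = riakc_set:add_element(<<"2016-11-16T17:40 alice: hello">>, S0),
ok = riakc_pb_socket:update_type(Pid, {<<"sets">>, <<"chat">>}, <<"room-1">>,
                                 riakc_set:to_op(S1)),
%% Fetch the merged set back and read its members.
{ok, S2} = riakc_pb_socket:fetch_type(Pid, {<<"sets">>, <<"chat">>}, <<"room-1">>),
Messages = riakc_set:value(S2).
```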

Re: Doc typo

2016-11-15 Thread sean mcevoy
Cheers Luca, easy when you know how ;-) PR has been made. //Sean. On Tue, Nov 15, 2016 at 9:31 AM, Luca Favatella < luca.favate...@erlang-solutions.com> wrote: > On 15 November 2016 at 09:17, sean mcevoy wrote: > [...] > >> Hi Basho guys, >> >> What's your procedure on
