Hi Alex,
There is some info on this page that can help you decide:
http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/
See the sections titled "When to Use Secondary Indexes" and "When Not to Use
Secondary Indexes".
Sent from my iPad
> On Feb 2, 2017, at 4:43 AM, Alex
>
> Luke Bakken <lbak...@basho.com> writes:
>
>> Hi Steven,
>>
>> At this point I suspect you're using the Python client in such a way
>> that too many connections are being created. Are you re-using the
>> RiakClient object or repeatedly creating new ones? Can you
experiment with using process ids as keys to access a process
specific riak client in forked child ?
Regards
Steven
Luke Bakken <lbak...@basho.com> writes:
> Hi Steven,
>
> At this point I suspect you're using the Python client in such a way
> that too many connections are being c
Hi Steven,
At this point I suspect you're using the Python client in such a way
that too many connections are being created. Are you re-using the
RiakClient object or repeatedly creating new ones? Can you provide any
code that reproduces your issue?
--
Luke Bakken
Engineer
lbak...@basho.com
Hi Luke,
Here's the output of
$ sysctl fs.file-max
fs.file-max = 2500
Regards
Steven
On Wed, Feb 1, 2017 at 9:30 AM Luke Bakken wrote:
> Hi Steven,
>
> What is the output of this command on your systems?
>
> $ sysctl fs.file-max
>
> Mine is:
>
> fs.file-max = 1620211
Hi Steven,
What is the output of this command on your systems?
$ sysctl fs.file-max
Mine is:
fs.file-max = 1620211
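Besides the system-wide fs.file-max, the per-process descriptor limit can be checked from Python with the stdlib resource module (Unix only; shown here as a quick diagnostic sketch):

```python
import resource

# Soft limit is what the process may currently use; the hard limit is the
# ceiling an unprivileged process can raise the soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```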
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph wrote:
> Hi Shaun,
>
> I'm having this issue again, this time I
Hi Shaun,
I'm having this issue again; this time I have captured the system limits,
while riak is still crashing.
Please note lsof and prlimit outputs at bottom.
steven@hawk5:log/riak:» tail error.log
IF you use leveldb, there is a function in the riak-erlang-client that gets all
objects in a bucket, I don’t know if it has been implemented in the java client
as it was written specifically for riak-cs.
https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130
The link I provided gives you the _objects_ too. list_keys gives only keys.
On 28 Jan 2017, at 12:21, Grigory Fateyev wrote:
> Hello!
>
> I think this link
> https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L506
> ? You need list_keys/2
Hello!
I think this link
https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L506
? You need list_keys/2 function.
2017-01-28 13:33 GMT+03:00 Russell Brown :
> IF you use leveldb, there is a function in the riak-erlang-client that
> gets all
IF you use leveldb, there is a function in the riak-erlang-client that gets all
objects in a bucket, I don’t know if it has been implemented in the java client
as it was written specifically for riak-cs.
https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130
Hi Shaun,
I have already set this to a very high value
(r...@hawk1.streethawk.com)1> os:cmd("ulimit -n").
"2500\n"
(r...@hawk1.streethawk.com)2>
So the issue is not that the limit is low, but maybe a resource leak? As I
mentioned our application processes continuously run queries on the
I've had this issue again; this time I checked the output of lsof and it
seems the number of established connections is way too high. I've
configured my application tasks to exit and clean up connections
periodically. That should solve it.
Thanks guys.
Steven
On Fri, Jan 27, 2017 at 3:07 AM
Hello Alex,
As long as each bucket does not have its own properties but rather shares one
or a handful of bucket types you should be fine and it wouldn't make a
difference.
One way to record data temporally, aka in a time series fashion, from a data
model perspective is via a pattern called
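One common version of that pattern is time-boxed keys, where each key embeds the start of its time quantum so all writes for one window share a predictable prefix. A sketch (names are hypothetical):

```python
from datetime import datetime, timezone

def timeboxed_key(source: str, ts: datetime, minutes: int = 60) -> str:
    # Round the timestamp down to the start of its quantum so every event
    # in the same window lands under the same key prefix.
    epoch_min = int(ts.timestamp()) // 60
    boxed = (epoch_min // minutes) * minutes
    return f"{source}:{boxed}"

k = timeboxed_key("sensor-17", datetime(2017, 1, 27, 10, 42, tzinfo=timezone.utc))
```

Reads for a window then only need to compute the same boxed value, rather than listing keys.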
Hi Alexander,
Yes, I should consider the possibility of switching to Riak TS.
But I guess the question is still valid, isn't it? Should I divide millions
of keys across different buckets, and does it make any difference in
performance, memory, or space?
Br,
Alex
2017-01-27 2:50 GMT+01:00 Alexander Sicular
Hi, you should consider using Riak TS for this use case.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Jan 27, 2017, at 01:54, Alex Feng wrote:
>
> Hi,
>
> I am wondering if there are some best practice or recommendation for how
Riak CS stores data chunks in bitcask and the index/metadata file in leveldb.
Bitcask, as noted, has no compression. When you force Riak to use level for the
data chunks you get compression for that data which may or may not be good for
your use case. If it's not good for your use case I
Hi Matthew,
Thank you for the help.
I got an answer from Shaun already; it seems LevelDB and Bitcask are the same
in that their memory usage does not show up in Erlang.
Br,
Alex
2017-01-26 14:47 GMT+01:00 Matthew Von-Maszewski :
> Alex,
>
> Which backend are you using? Leveldb's memory
FYI: this is the function that is crashing:
get_uint32_measurement(Request, #internal{os_type = {unix, linux}}) ->
    {ok,F} = file:open("/proc/loadavg",[read,raw]),  %% <--- crash line
    {ok,D} = file:read(F,24),
    ok = file:close(F),
Steven,
You may be able to get information via the lsof command as to what
process(es) are using many file handles (if that is the cause).
I searched for that particular error and found this GH issue:
https://github.com/emqtt/emqttd/issues/426
Which directed me to this page:
Alex,
Which backend are you using? Leveldb's memory usage does not show up within
Erlang. Maybe that is what you are experiencing?
Matthew
Sent from my iPad
> On Jan 26, 2017, at 5:47 AM, Alex Feng wrote:
>
> Hi Riak Users,
>
> One of my riak nodes, it has 4G
Hi Shaun,
I have already set this to a very high value
(r...@hawk1.streethawk.com)1> os:cmd("ulimit -n").
"2500\n"
(r...@hawk1.streethawk.com)2>
So the issue is not that the limit is low, but maybe a resource leak? As I
mentioned our application processes continuously run queries on the
Hi Steven,
Based on that log output, it looks like you're running into issues with
system limits, probably open file limits. You can check the value that
Riak has available by connecting to one of the nodes with riak attach, then
executing:
```
os:cmd("ulimit -n").
```
(After, disconnect with
On 25 January 2017 at 21:09, Arun Rajagopalan
wrote:
> Thanks Luke. Sorry it took me some time to experiment ...
>
> I am not sure what happens in a couple of scenarios. Maybe you can explain
>
> Lets say I lose a node completely and want to replace it. Will the
Thanks for the quick response, Luke.
There is nothing unusual about the keys. The format is a name + UUID + some
other random URL-encoded characters, like most other keys in our cluster.
There are no errors near the time of the incident in any of the logs (the
last [error] is from over a month
Thanks Luke. Sorry it took me some time to experiment ...
I am not sure what happens in a couple of scenarios. Maybe you can explain.
Let's say I lose a node completely and want to replace it. Will the keys yet
to be "anti-entropied" by that node be distributed correctly when I restore
that node?
Hi Daniel -
This is a strange scenario. I recommend looking at all of the log
files for "[error]" or other entries at about the same time as these
PUTs or 404 responses.
Is there anything unusual about the key being used?
--
Luke Bakken
Engineer
lbak...@basho.com
On Wed, Jan 25, 2017 at 6:40
> > Hi guys,
> > I've switched our configuration around, so that Riak CS now talks to
> > 127.0.0.1:8087 instead of the haproxy version.
> >
> > We have immediately re-encountered the problems that caused us to move to
> > haproxy.
> > On start-up, riak ta
19, 2017 at 5:38 PM, Toby Corkindale <t...@dryft.net> wrote:
> Hi guys,
> I've switched our configuration around, so that Riak CS now talks to
> 127.0.0.1:8087 instead of the haproxy version.
>
> We have immediately re-encountered the problems that caused us to move to
>
Damion,
I will explain what happened.
ring_size = 8: The default ring_size is 64. It is based on the recommendation
of five servers for a minimum cluster. You stated you are using only one
machine. 64 divided by 5 is 12.8 vnodes per server ... and ring size needs to
be a power of 2. So
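Matthew's arithmetic can be checked directly (a quick sanity calculation, not an official sizing formula):

```python
# Matthew's numbers: default ring size 64, recommended minimum of 5 nodes.
default_ring, min_nodes = 64, 5
vnodes_per_node = default_ring / min_nodes  # 12.8 vnodes per server

# For a single dev machine, a ring of 8 keeps the vnode count small,
# and 8 satisfies the power-of-two requirement.
single_node_ring = 8
is_power_of_two = single_node_ring & (single_node_ring - 1) == 0
```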
Matthew -
That did it!
Actually, I tried with both settings, and also with just the ring_size change.
Setting ring_size to 8 got rid of crashing. I'll have to do a bit more reading
on this setting I suppose. I have a much more memory-constrained virtual
machine running on my local desktop
Damion,
Add the following settings within riak.conf:
leveldb.limited_developer_mem = on
ring_size = 8
Erase all data / vnodes and start over.
Matthew
> On Jan 19, 2017, at 8:51 AM, Junk, Damion A wrote:
>
> Hi Magnus -
>
> I've tried a wide range of parameters for
Hi Magnus -
I've tried a wide range of parameters for leveldb.maximum_memory_percent
ranging from 5 to 70. I also tried the leveldb.maximum_memory setting in bytes,
ranging from 500MB to 4GB. I get the same results in the crash/console log no
matter what the settings. But the log messages seem
gure out what's going on with the server, so I
> completely wiped /var/lib/riak and re-installed from package). Ulimit
> -n is set appropriately as well.
>
> If I make the following changes to /etc/riak/riak.conf I get crash error
> messages:
>
> storage_backend = leveldb
> search =
Hi Arun -
I don't know the answer off the top of my head, but I suspect that
disabling AAE will leave that directory and the files in it untouched
afterward.
One way to find out would be to disable AAE and monitor the access
time of the files in the anti_entropy directory.
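The "monitor the access time" idea could be scripted with the standard library alone (a sketch; the anti_entropy path in the comment is an assumption and varies per install):

```python
import os
import time

def last_access_ages(directory: str):
    """Return {filename: seconds since last access} for files in directory."""
    now = time.time()
    ages = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            ages[name] = now - os.stat(path).st_atime
    return ages

# Hypothetical path; adjust to your platform's Riak data directory:
# print(last_access_ages("/var/lib/riak/anti_entropy"))
```

Note that filesystems mounted with noatime/relatime will not update access times reliably, so modification time may be a better signal there.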
--
Luke Bakken
Hi Pulin,
Did this document eventually disappear from search results? You should
check your Riak logs and solr.log files for errors with regard to
communication between Riak and the Solr process.
--
Luke Bakken
Engineer
lbak...@basho.com
On Fri, Dec 16, 2016 at 12:10 PM, Pulin Gupta
ative commands that you'd find it
> doesn't work. You just simply mark it as down though and the cluster will
> re-elect a new claimant to take over the role and you can continue.
>
> Kind Regards,
> Shaun
>
> On Wed, Jan 18, 2017 at 9:05 AM, Alex Feng <sweden.f...@gmail.com> wrot
. During normal operations, it's just like any other node. It's only
when attempting to run certain administrative commands that you'd find it
doesn't work. You just simply mark it as down though and the cluster will
re-elect a new claimant to take over the role and you can continue.
Kind
Tuesday, January 17, 2017 3:45:34 PM
> To: Andy leu
> Cc: riak-users@lists.basho.com
> Subject: Re: secondary indexes
>
> Hi,
> Riak's secondary indexes require a sorted backend, either of the memory or
> leveldb backends will work, bitcask does not support secondary indexes
ubject: Re: secondary indexes
Hi,
Riak's secondary indexes require a sorted backend, either of the memory or
leveldb backends will work, bitcask does not support secondary indexes.
More details here
http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/
Cheers
Russell
On
std::bad_alloc is thrown when memory can't be allocated. This can
happen when there is no more free RAM.
Do you have monitoring enabled on these servers where you can watch
memory consumption?
--
Luke Bakken
Engineer
lbak...@basho.com
On Fri, Jan 13, 2017 at 8:21 AM, 270917674
Hi,
Riak's secondary indexes require a sorted backend, either of the memory or
leveldb backends will work, bitcask does not support secondary indexes.
More details here
http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/
Cheers
Russell
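For reference, the HTTP form of a 2i query is a GET against the index endpoint. A small helper that builds the documented URL shapes, exact match and range, assuming a node on localhost:8098:

```python
def index_url(bucket: str, field: str, start, end=None, host="localhost:8098"):
    # Exact match: /buckets/B/index/F/value
    # Range query: /buckets/B/index/F/start/end
    base = f"http://{host}/buckets/{bucket}/index/{field}/{start}"
    return f"{base}/{end}" if end is not None else base
```

The `_bin` / `_int` suffix on the field name tells Riak how to interpret and sort the index values.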
On Jan 17, 2017, at 07:13 AM, Andy
Hi Toby,
If you put another user into the config, that's all it takes to make them
the admin user. There's no special value that's set in the database
itself. Any user can be an admin user, it doesn't even have to be the
first one created. It's just whatever user you have set in the config.
Hi,
I have a follow-up question around this security aspect.
If the riak-cs.conf and stanchion.conf files are changed so that their
admin.key and admin.secret match a different user (eg. not that
first-created admin user) then will that user now have admin-like
privileges?
Or are the
Thanks, Luke!
On Fri, 13 Jan 2017 at 12:10 Luke Bakken wrote:
Hi Toby,
When you create the user, the data is stored in Riak (and is the
authoritative location). The values must match in the config files to
provide credentials used when connecting to various parts of your CS
Hi Toby,
When you create the user, the data is stored in Riak (and is the
authoritative location). The values must match in the config files to
provide credentials used when connecting to various parts of your CS
cluster.
--
Luke Bakken
Engineer
lbak...@basho.com
On Thu, Jan 12, 2017 at 3:47
Hi Ricardo,
Riak itself won't do this. Afaik, the client libraries return language
specific array like data structures. To actually convert those to csv would
be an exercise for the developer. Thankfully most languages have a readily
available array to csv library which will basically do it for
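In Python, for instance, the rows a client hands back can be turned into CSV with the standard library alone (the rows here are made up for illustration):

```python
import csv
import io

# Hypothetical rows, as a client library might return them.
rows = [
    ["key", "region", "count"],
    ["user:1", "eu", 42],
    ["user:2", "us", 7],
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```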
Hi Gordon,
It was a good addition! Very descriptive, you can walk through the
extensions and commands without having to check to the docs. However the
concept of extension is not common and may cause confusion IMO. An option
would be just have it as commands and categories.
Ricardo
On Fri, Jan
Hi Michael,
For the Set, Map, and Counter data types the only other situation I can
think of is if the user explicitly set the "INCLUDE_CONTEXT" option to
false. That option defaults to true, so it should always return one if the
data type you fetched isn't a bottom (initial) value. If it is
Seems not actually a mochiglobal error so much as os_mon reporting a system
limit. You can up some of your values in vm.args: max processes, ETS table
limits, etc.
On Mon,
Jan 9, 2017 at 5:14 AM Steven Joseph wrote:
> Hi Folks,
>
> I've started getting this error in my riak
Hi Alex,
You seem to be referring to bitcask metadata. That metadata is loaded into
ram on each node. As you note, the bitcask capacity planner calculates ram
required by a cluster to service a certain number of keys. I think where
you are confused is that this metadata is not synced across the
nect to its KV node should be passed to the
> client/front-end, which should have all the proper logic for re-attempts or
> error reporting.
>
> > I'm surprised more people with highly-available Riak CS installations
> haven't hit the same issues.
>
> As I mentioned, ou
On 4 January 2017 at 23:22, Tomi Takussaari
wrote:
> Hello Riak-users
>
> We have 9 node Riak-cluster, that we use to store user accounts.
>
> Some of the crucial data fields of user account are indexed using I2, so
> that we can do secondary index queries based on
nue to deal with new requests without
problems. Any failures to connect to its KV node should be passed to the
client/front-end, which should have all the proper logic for re-attempts or
error reporting.
> I'm surprised more people with highly-available Riak CS installations
haven't hit th
and Riak solved the problem of needing the
local Riak to be started first.
But it seems we just were putting the core problem off, rather than solving
it. ie. That Riak CS doesn't understand it needs to re-connect and retry.
I'm surprised more people with highly-available Riak CS installations
Hi Toby,
As far as I know Riak CS has none of the more advanced retry capabilities
that Riak KV has. However, in the design of CS there seems to be an
assumption that a CS instance will talk to a co-located KV node on the same
host. To achieve high availability, in CS deployments HAProxy is often
Hello all,
Now that we're all back from the end-of-year holidays, I'd like to bump
this question up.
I feel like this has been a long-standing problem with Riak CS not handling
dropped TCP connections.
Last time the cause was haproxy dropping idle TCP connections after too
long, but we solved that
We have new tombstone reaping functionality in the works for Riak KV 2.3,
which should allow for safe and automatic removal of old leftover
tombstones. In the mean time, you can potentially trigger the deletion of
old, leftover tombstones by doing reads to those keys; if all the primary
replicas
lan <arun.v.rajagopa...@gmail.com>
> wrote:
>
> Thanks Matthew & Luca
>
> Re: global expiry - will that option retroactively remove objects? That is
> remove objects that became "unneeded" before the option was set ?
> Same question w.r.t delete_mode
>
>
Thanks Matthew & Luca
Re: global expiry - will that option retroactively remove objects? That is
remove objects that became "unneeded" before the option was set ?
Same question w.r.t delete_mode
Re: Map / Reduce - I am not sure the delete would remove the tombstone
unless I set t
On 30 December 2016 at 15:06, Matthew Von-Maszewski wrote:
> Greetings,
>
> I am not able to answer your tombstone questions. That question needs a
> better expert.
>
> Just wanted to point out that Riak now has global expiry in both the leveldb
> and bitcask backends. That
Greetings,
I am not able to answer your tombstone questions. That question needs a better
expert.
Just wanted to point out that Riak now has global expiry in both the leveldb
and bitcask backends. That might be a quicker solution for your frequent
delete operations:
Hi Joe,
You could do that. Riak currently supports millisecond resolution. Look at the
primary key composition. There are two lines, the first is the partition key
and the second is the local key. The local key denotes the sort order and is
the actual unique key for that grouping (quanta).
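The quantum mentioned here is just the millisecond timestamp rounded down to its window boundary; two points in the same quantum share a partition. A sketch of the idea (not TS internals):

```python
def quantum_start(ts_ms: int, unit_ms: int = 15 * 60 * 1000) -> int:
    # Round a millisecond timestamp down to the start of its 15-minute quantum.
    return ts_ms - (ts_ms % unit_ms)

def partition_key(family: str, series: str, ts_ms: int):
    # First line of the primary key: decides where the row is stored.
    return (family, series, quantum_start(ts_ms))

def local_key(family: str, series: str, ts_ms: int):
    # Second line: sort order and uniqueness within that grouping.
    return (family, series, ts_ms)
```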
Hi, Luke,
It worked! Thanks a lot for the help and the feedback!
Felipe Esteves
Tecnologia
felipe.este...@b2wdigital.com
Tel.: (21) 3504-7162 ramal 57162
Skype: felipe2esteves
2016-12-15 14:19 GMT-02:00 Luke Bakken :
> What is the output of
Hi,
I've managed to correct this error of mine, using the correct backend name,
that is, *leveldb_mult* instead of *leveldb*
Now I have another problem: the python client returns no error, the riak
log is also clean.
But I can't find the created bucket when I run buckets?buckets=true
Seems to me
I'd completely forgotten leveldb had that advantage. Russell is correct.
Sent from my iPhone
> On Dec 8, 2016, at 4:51 PM, Russell Brown wrote:
>
> Depends on what backend you are running, no? If leveldb then this list keys
> operation can be pretty cheap.
>
> It’s a
Depends on what backend you are running, no? If leveldb then this list keys
operation can be pretty cheap.
It’s a coverage query, but if it’s leveldb at least you will seek to the start
of the bucket and iterate over only the keys in that bucket.
Cheers
Russell
On 8 Dec 2016, at 21:19, John
The size of the bucket has no real impact on the cost of a list keys operation
because each key on the cluster must be examined to determine whether it
resides in the relevant bucket.
-John
> On Dec 8, 2016, at 4:17 PM, Arun Rajagopalan
> wrote:
>
> Hello Riak
The process is (typically) beam.smp, though you may have multiple on your
machine, if for example, you are connected to riak via the console, or if you
are running administrative commands (e.g., riak-admin). For the ports (if that
is also what you are looking for) see:
Hi Ricardo,
Yes, we do plan to add PERCENTILE aggregation function to the next version
of Riak TS, it is high on our to-do list. We also want to add several other
aggregators, like MEDIAN, LAST, TOP, BOTTOM, etc.
You probably noticed we have a number of aggregation functions already,
they are
Hi Gal,
I am the product manager for Riak TS and Spark Connector. Thanks for
your questions and PRs.
We have two versions of Riak - KV and TS. They share most of the code,
but RIak TS is optimised for time series operations, while Riak KV -
for key value operations. It is correct that most KV
Just to follow up on this:
Andrew Thompson and I are going to start hosting a monthly lager issue/PR
triage meeting on freenode, in #lager every third Thursday of the month
starting 19 January 2017 at 1600 US/Central time, 1400 US/Pacific time, 2200
UTC for about an hour or so.
During that
Hi Henning,
So normally, a bucket's custom properties are stored in the ring file.
It's this file which is gossiped around regularly in a cluster. When users
create hundreds/thousands of custom customer properties (it's been done),
it can grind the cluster almost to a standstill.
A bucket-type
Hi guys,
I still struggle with bucket types and have some questions. Going back
a year I could not find many threads about it, but forgive me if I
missed something and am asking already-answered questions.
## Cluster-awareness
I've understood so far that bucket types are used as part of the
Hi Konstantin,
The RiakClient class is reentrant and thread-safe, so you should be able to
share it among the different workers. You may have to adjust the min / max
connection settings to get the most performance, but that's relatively
easy.
One other thing to notice is RiakClient's cleanup()
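The "share one client" advice looks like this in outline (a dummy class stands in for `riak.RiakClient`; the point is one shared instance rather than one per worker):

```python
import threading

class DummyClient:
    """Stands in for a thread-safe riak.RiakClient."""
    def ping(self):
        return True

shared = DummyClient()  # construct once, at startup
seen = []
lock = threading.Lock()

def worker():
    # Every worker reuses the same client instead of building its own,
    # so the connection pool is shared rather than multiplied.
    assert shared.ping()
    with lock:
        seen.append(id(shared))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```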
<mailto:rse...@ebay.com>>
Cc: "riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>"
<riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>>
Subject: Re: Uneven distribution of partitions in RIAK cluster
Hi Ray,
Riak's partition distributi
Hi Neo,
1) When building from source, the version numbers never show. They only
appear in the packaged versions. That's why you're not seeing them.
2) After creating the admin user - did you change anonymous_user_creation
back to 'off'?
3) I'm not entirely clear on what you did there - did
Hi Alexander,
Thanks a lot for your input. I have a few follow-up questions.
> Stupid math:
>
> 3e7 x 3 (replication) / 9 = 1e7 minimum objects per node ( absolutely more
> due to obj > 1MB size )
>
> 1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4
> GB.
>
> You
Hi Toby -
Thanks for reporting this. We can continue the discussion via GH issue #689.
--
Luke Bakken
Engineer
lbak...@basho.com
On Wed, Nov 23, 2016 at 9:58 PM, Toby Corkindale wrote:
> Hi,
> I'm using the Java client via protocol buffers to Riak.
> (Actually I'm using it via
Hello DeadZen,
Yes, networking interconnect becomes a bigger issue with more nodes in the
cluster. A Riak cluster is actually a fully meshed network of erlang virtual
machines. Multiple 1/10 gig nics dedicated to inter/intra networking are your
friends. That said, we have many customers
OK, I LOLed at this, then got worried Trump could win a node election.
Anyway, 24 gigs per Riak server is not a bad safe bet.
Erlang in general is RAM heavy. It uses it more effectively than most
languages wrt concurrency, but RAM is the fuel for concurrency and the buffer
for operations, especially
er 23, 2016 10:33 AM
> To: A R <roman.an...@outlook.com>
> Cc: riak-users@lists.basho.com
> Subject: Re: rpberrorresp - Unable to access functioning riak node
>
> Hi Andre,
>
> If you remove the load balancer, does it work?
>
> --
> Luke Bakken
> Engineer
> lb
Hi Andre,
If you remove the load balancer, does it work?
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, Nov 22, 2016 at 10:56 AM, A R wrote:
> To whom it may concern,
>
>
> I've set up a 2 riak ts nodes and a load-balancer on independent machines.
> I'm able to
Hi Alexander,
Thanks for responding.
> How many nodes?
We currently have 9 nodes in our cluster.
> How much ram per node?
Each node has 4GB of ram and 4GB of swap. The memory levels (ram + swap) on
each node are currently between 4GB and 5.5GB.
> How many objects (files)? What is the average
Hi Daniel,
Ya, I'm not surprised you're having issues. 4GB RAM is woefully under-specced.
Stupid math:
3e7 x 3 (replication) / 9 = 1e7 minimum objects per node ( absolutely more due
to obj > 1MB size )
1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4 GB.
You
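The back-of-envelope numbers above, written out as arithmetic:

```python
objects = 3e7          # ~30M objects
replication = 3        # n_val
nodes = 9
bytes_per_key = 400    # rough bitcask keydir overhead per key (estimate)

per_node_objects = objects * replication / nodes   # 1e7
per_node_ram = per_node_objects * bytes_per_key    # 4e9 bytes, i.e. ~4 GB
```

That 4 GB is for the bitcask keydir alone, before the OS, Erlang VM, and page cache get any headroom.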
Hi Daniel,
How many nodes?
-You should be using 5 minimum if you using the default config. There
are reasons.
How much ram per node?
-As you noted, in Riak CS, 1MB file chunks are stored in bitcask.
Their key names and some overhead consume memory.
How many objects (files)? What is the average
I found a similar question from over a year ago (
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-July/017327.html),
and it sounds like leveldb is the way to go, although possibly not well
tested. Has anything changed with regard to Basho's (or anyone else)
experience with using
Hi Mav,
I opened the following issue to continue investigation:
https://github.com/basho/riak_kv/issues/1541
That would be the best place to continue discussion. I'll find time to
reproduce what you have reported.
Thanks -
--
Luke Bakken
Engineer
lbak...@basho.com
On Fri, Nov 18, 2016 at
No luck :(
I set up a bucket type called test-bucket-type. I did NOT set data type.
I set the hooks
Ran your curl -X PUT. The Hook was not called. Tried several times, no luck
I changed the curl to hit my non-typed bucket, and the commit hook hit
$ riak-admin bucket-type list
default (active)
Thanks for correcting that. Everything looks set up correctly.
How are you saving objects? If you're using HTTP, what is the URL?
Can you associate your precommit hook with a bucket type
("test-bucket-type" below) that is *not* set up for the "map" data
type and see if your hook is called
Here you go ...
For the second URL, I think you meant testbucket without the hyphen. Also,
I think you have an extra "props" in there.
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /types/maps/props HTTP/1.1
>
What is the output of these commands?
curl -4vvv localhost:8098/types/maps/props
curl -4vvv localhost:8098/types/maps/props/buckets/test-bucket/props
On Fri, Nov 18, 2016 at 2:21 PM, Mav erick wrote:
> Luke
>
> I was able to change the properties with your URL, but still the
Mav -
You're not using the correct HTTP URL. You can use this command:
http://docs.basho.com/riak/kv/2.1.4/using/reference/bucket-types/#updating-a-bucket-type
Or this URL:
curl -XPUT localhost:8098/types/maps/props -H 'Content-Type:
application/json' -d
Hi Luke
I tried that and didn't work for a bucket with bucket type = maps. My
erlang code below does work for buckets without types.
But I think it's because I didn't set the hook for the typed bucket
correctly. Could you check my curl below, please?
I did this to set the hook
curl -X PUT
Mav -
Please remember to use "Reply All" so that the riak-users list can
learn from what you find out. Thanks.
Thebucket = riak_object:bucket(Object),
Can you check to see if "Thebucket" is really a two-tuple of
"{BucketType, Bucket}"? I believe that is what is returned.
--
Luke Bakken
Mav -
Can you go into more detail? The subject of your message is
"initializing a commit hook".
--
Luke Bakken
Engineer
lbak...@basho.com
On Thu, Nov 17, 2016 at 9:09 AM, Mav erick wrote:
> Folks
>
> Is there way RIAK can call an erlang function in a module when RIAK starts
ailable 24 X 7
[Blue_Pil]
From: Alexander Sicular [mailto:sicul...@basho.com]
Sent: 18 November 2016 10:03
To: Krishnamurthy,R,Rajaa,TAB13 C; riak-users@lists.basho.com
Cc: Balaji,H,Hari,TAB13 C
Subject: Re: Riak TS Agility on handling Petabytes of Data
Hi Rajaa,
What's your retention policy? At
This is awesome...! Many thanks Magnus - much appreciated...
Must have overlooked some of these details in my initial analysis but am
sure I have a very good starting point / details now!
Thanks again!
On Thu, Nov 17, 2016 at 7:48 AM, Magnus Kessler wrote:
> On 16 November
On 16 November 2016 at 17:40, Vikram Lalit wrote:
> Hi - I am trying to leveraging CRDT sets to store chat messages that my
> distributed Riak infrastructure would store. Given the intrinsic
> conflict-resolution, I thought this might be more beneficial than me
> putting
Cheers Luca, easy when you know how ;-)
PR has been made.
//Sean.
On Tue, Nov 15, 2016 at 9:31 AM, Luca Favatella <
luca.favate...@erlang-solutions.com> wrote:
> On 15 November 2016 at 09:17, sean mcevoy wrote:
> [...]
>
>> Hi Basho guys,
>>
>> What's your procedure on