I am putting in 2i right now.
Obvious first thoughts:
1) Why is there a "bucket" as well as index->value? Why are indexes
not "global?" Since getting all keys for a bucket requires a keyspace
walk, why do we need to pass a bucket parameter? What is the
relationship here?
2) No ordering of
3 node cluster of 1.0.2, leveldb backend, pb interface. Build up a
store of 9 connections (3 to each node) and pull one out randomly.
--snip
62> riakc_pb_socket:list_keys(
        gen_server:call(forum_store:get_random_pid(alliance), get_connector_pid),
        <<"alliance_overview">>).
{ok,[]}
63> riakc_pb_socket:list_keys(
        gen_server:call(forum_store:get_random_pid(alliance), get_connector_pid),
        <<"alliance_overview">>).
{ok,[]}
Console again shows
= Sat Jan 21 13:55:06 PST 2012
13:55:06.888 [info] Handoff receiver for partition undefined exited
after processing 0 objects
-mox
On Sat, Jan 21, 2012 at 10
I get this on a riakc_pb_socket:ping() ...
20:24:04.361 [error] Handoff receiver for partition undefined exited
abnormally after processing 0 objects:
{noproc,{gen_fsm,sync_send_all_state_event,[undefined,{handoff_data,<<>>},6]}}
-mox
On Sat, Jan 21, 2012 at 8:43 AM, Mike Oxford
…doesn't work so well when running against a handoff port. /shame
Taunts/jeers accepted off-list. Thanks.
-mox
On Sat, Jan 21, 2012 at 8:29 PM, Mike Oxford wrote:
> I get this on a riakc_pb_socket:ping() ...
>
> 20:24:04.361 [error] Handoff receiver for partition undefined exited
Used to be we had to write "resolvers" due to write-heavy collisions on the
same kv, to handle the children.
Does Riak have any built-in resolvers (yet?) or is everyone still using
statebox or something else?
What's the current state of Riak in this regard, and is there anything in
progress along
On Mon, Aug 27, 2012 at 10:56 PM, Mark Phillips wrote:
> Hi mox,
>
> Welcome back. :)
>
>
Thanks, good to be back. Been an odd run lately.
> As it stands right now, no. Still plain old boring Riak. If you want
> custom resolution logic you'll still be doing that client-side with
> something li
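The client-side resolution mentioned above can be sketched roughly like this (a minimal resolve-on-read sketch assuming the riak-erlang-client API, term_to_binary-encoded values, and set-like list values so a union merge makes sense; the merge strategy itself is the hypothetical part):

```erlang
%% Sketch: read, merge siblings client-side, write the winner back.
%% Assumes riakc (riak-erlang-client) and set-like, term_to_binary'd values.
resolve_on_read(Pid, Bucket, Key) ->
    {ok, Obj} = riakc_pb_socket:get(Pid, Bucket, Key),
    case riakc_obj:get_contents(Obj) of
        [{_MD, _Single}] ->
            {ok, Obj};                                %% no conflict
        Siblings ->
            %% hypothetical merge: union of the set-like sibling values
            Values = [binary_to_term(V) || {_MD, V} <- Siblings],
            Merged = ordsets:union([ordsets:from_list(V) || V <- Values]),
            Obj2 = riakc_obj:update_value(Obj, term_to_binary(Merged)),
            riakc_pb_socket:put(Pid, Obj2, [return_body])
    end.
```

Statebox is essentially this pattern generalized: the value carries enough history that the merge is deterministic.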
Use the https://github.com/basho/riak-erlang-client directly, instead of
calling os:cmd and pushing through CURL.
You can also parallelize it at that time, because right now you're doing
25million os:cmd calls and making 25million curl calls. Open up a pool of
connections (or even just N and round
Oh, also, your logging is going to cause a HUGE performance hit, especially
if that machine is one of the Riak nodes. Too much disk and IO thrash.
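The pooled/parallel approach above could look something like this (a sketch assuming riakc; the pool size, the chunking, and `chunk/2` are illustrative, not a real helper):

```erlang
%% Sketch: a pool of PB connections driven in parallel, instead of
%% 25 million os:cmd/curl round trips. chunk/2 is hypothetical: it
%% splits Objs into length(Pids) roughly equal lists.
start_pool(Host, Port, N) ->
    [begin
         {ok, Pid} = riakc_pb_socket:start_link(Host, Port),
         Pid
     end || _ <- lists:seq(1, N)].

put_all(Pids, Objs) ->
    Parent = self(),
    Chunks = chunk(Objs, length(Pids)),
    Refs = [begin
                Ref = make_ref(),
                spawn_link(fun() ->
                    [ok = riakc_pb_socket:put(Pid, O) || O <- Chunk],
                    Parent ! {done, Ref}
                end),
                Ref
            end || {Pid, Chunk} <- lists:zip(Pids, Chunks)],
    [receive {done, R} -> ok end || R <- Refs].
```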
-mox
On Tue, Aug 28, 2012 at 10:56 AM, Mike Oxford wrote:
> Use the https://github.com/basho/riak-erlang-client directly, instead of
> c
When a key comes up on expiration, I would like to know about it.
Is this supported anywhere in the Bitcask backend (like {expiry_secs,N}),
by any chance?
Thanks!
-mox
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
I am stacking my application and Riak on the same machine.
It seems wasteful to use 127.0.0.1 or even AF_UNIX sockets. Is there
anything that precludes me from running it directly within the same VM as
Riak, to cut down/eliminate the impedance mismatch/marshalling of going
through the external cli
What is the time between "node down" and "read with R=3" ?
Did you "shut down" the node or kill it by brutally powering the box down
or yanking the network cable?
It possible that Riak noticed the node_down and had already done the
recovery. While net_ticktime can be as long as 60 seconds by defa
Check that you have your ports correct. If you connect your client up to
the ring distribution ports you can get these kinds of errors.
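For reference, the client-facing ports and the handoff port are different things in app.config; pointing a PB client at the handoff port produces exactly the "Handoff receiver for partition undefined" errors quoted earlier in the thread. The values below are the stock defaults for a 1.2-era install (an excerpt, not a full config):

```erlang
%% excerpt from etc/app.config -- clients connect to 8087 (PB) or
%% 8098 (HTTP); 8099 is internal handoff only, never for clients.
{riak_api, [
    {pb_ip,   "127.0.0.1"},
    {pb_port, 8087}
]},
{riak_core, [
    {http, [{"127.0.0.1", 8098}]},
    {handoff_port, 8099}
]}
```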
-mox
On Wed, Oct 10, 2012 at 10:29 PM, Mikhail Kuznetsov <
kuznetsov.m...@gmail.com> wrote:
> I'm deploying a test environment for a server-app demo for clients. Riak 1.2.0
I think the key may lie here ",{checkout,false,5000}"
Are you releasing your connections back to the pool? Is your throughput
greater than the system can handle due to limited connection pool sizes?
What is your ulimit set to (ulimit -n) ... maybe you're running out of
FD's?
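That "{checkout,false,5000}" looks like a poolboy-style non-blocking checkout. The usual pattern to guarantee connections go back to the pool (a sketch assuming a poolboy-style API and a hypothetical pool name `riak_pool`):

```erlang
%% Sketch: always check the worker back in, even if the work crashes.
%% Assumes poolboy (or similar); riak_pool is a hypothetical pool name.
with_conn(F) ->
    case poolboy:checkout(riak_pool, false, 5000) of
        full ->
            %% pool exhausted: raise the pool size or apply backpressure
            {error, pool_exhausted};
        Worker ->
            try F(Worker)
            after poolboy:checkin(riak_pool, Worker)
            end
    end.
```

If checkins are missing on the error path, the pool drains under load and every later checkout times out exactly like this.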
-mox
On Wed,
One more use-case for backups: If you're running a big cluster and UserX
makes a bad code deploy which horks a bunch of data ... restore may be the
only option.
It happens.
-mox
On Wed, Oct 2, 2013 at 12:12 PM, John E. Vincent <
lusis.org+riak-us...@gmail.com> wrote:
> I'm going to take a com
We know that transactions can be "synthesized" in application spaces,
kinda-sorta, with lots of caveats and limitations.
We know Riak does not have transactions.
Does Riak have, or plan to have, the ability to say "here's a function
which may update N buckets (CRDT or otherwise) please execute al
I thought I understood Riak, then I ran across the fact that riak_core was
split out separately.
When would you use riak_core that you wouldn't use Riak? Is it more
ephemeral, with shared state
in an ETS ring compared to a storage-backed node?
Thanks...
-mox
blog post. Specifically, I want to focus on the mechanics
> of the "vnode" as this, AFAICT, is the main player when you want to leverage
> riak-core. Consider this a teaser to make sure I follow thru on my word :)
>
> -Ryan
>
> [1]: https://github.com/rklophaus/BashoBan
Thanks!
-mox
On Wed, Mar 30, 2011 at 6:00 PM, Justin Sheehy wrote:
> Hi, Mike.
>
> On Wed, Mar 30, 2011 at 5:46 PM, Mike Oxford wrote:
>
> > I thought I understood Riak, then I ran across the fact that riak_core
> was
> > split out separately.
> > When would y
In Riak design, with a given bucket of "subscribers" would it be better to
use the bucket as an actual container of users (denormalized) or use Riak's
linking feature?
While both will work, there are surely some under-the-covers things to
consider...
Thanks!
-mox
___
d...@basho.com
>
>
> On Fri, Apr 1, 2011 at 2:40 PM, Mike Oxford wrote:
>
>> In Riak design, with a given bucket of "subscribers" would it be better to
>> use the bucket as an actual container of users (denormalized) or use Riak's
>> linking feature?
>
> …the bucket and then read its value.
>
> 3. Depending on your needs, you may want to read the next key
> immediately or stop reading keys until the next subscriber query.
>
> 4. If you unsubscribe, you just remove the bucket using 'DROP TABLE'.
>
>
> On Mon, Apr 4, 20
Best practices and pitfalls for building the plugins. Not just "do this"
but "do this because X and watch out for Y."
Rock on!
-mox
On Mon, Apr 4, 2011 at 11:59 AM, Mark Phillips wrote:
> Hey Jon,
>
> A webcast (or better yet a series of webcasts) on riak_core is a great
> idea, and somethi
Does Riak or riak_core do any memory caching of the database on a
per-partition level?
eg, MachineX has 3 partitions on it... does it cache those partitions in
memory for access or pull them off storage each time?
eg, will I need to hook in a memcache-like system/layer on top of it, since
RAM acce
Riak's Erlang pb client rebar.config was updated to use protobuffs 0.6 but
the protobuffs repo itself still only has 0.5.1
{expected,"0.6.*"},
{has,"0.5.1"}}}.
Just a heads-up for whoever is coordinating those internally. :)
-mox
wrote:
> On Wed, Apr 06, 2011 at 01:50:24PM -0700, Mike Oxford wrote:
> > Riak's Erlang pb client rebar.config was updated to use protobuffs 0.6
> but
> > the protobuffs repo itself still only has 0.5.1
> >
> >{expected,"0.6.*"},
> &
When you crash a riak_core vnode with a console attached, what is the
expected behaviour (ba-dum, dump)?
I ran a riak_core vnode with some bad code (1/0) to cause a bad arith crash
in the FSM, initiated from a console (via node "attach").
It crashed as expected, obviously, "** State machine <
Good call, that was it.
Seems dangerous.
-mox
On Wed, Apr 6, 2011 at 5:24 PM, Ryan Zezeski wrote:
> Lemme guess, sync_spawn_command? If so it sets timeout to infinity so your
> shell process is locked in a receive.
>
> [Sent from my iPhone]
>
> On Apr 6, 2011, at 7:41 PM,
Skimmed it, killer job. Going to require more time than I have right now
but I'm excited to get some time to go over it.
Thanks for making this available!
-mox
On Wed, Apr 6, 2011 at 8:40 PM, Ryan Zezeski wrote:
> Everyone,
>
>
> https://github.com/rzezeski/try-try-try/tree/master/2011/riak-c
Be careful here..
I do not think Riak's "protocol buffers" are the same as Google's protocol
buffers.
Google's does bit-level packing and some other tricks that Riak does not do,
even though they both use the ".proto" file extension and very very similar
proto semantics.
That said, if they ARE th
pper code in his Ripple
> repo... Though he eventually abandoned it after C++ left him permanently
> cross-eyed (I think that's why).
>
> Scott
>
> On Apr 8, 2011, at 5:20 PM, Mike Oxford wrote:
>
> Be careful here..
>
> I do not think Riak's "protocol buffe
by).
>
> I believe Sean Cribbs has some initial C++-wrapper code in his Ripple
> repo... Though he eventually abandoned it after C++ left him permanently
> cross-eyed (I think that's why).
>
> Scott
>
> On Apr 8, 2011, at 5:20 PM, Mike Oxford wrote:
>
> Be careful here.
On a given Riak ring we already know that the majority of the buckets will
*not* "live locally" when a request comes in to a given ring member.
Now, obviously the more machines you have, the greater the chance of poor
locality for that particular dataset.
A request comes in to node 1 of a 500 node ri
Even exposing CPU load through the client interface would be a big win, as
that logic could be
acquired and cached application-side, poll-style.
I would spend 1 extra call per second to be able to say "you know, NodeX
is over 70% CPU, let's kick to
nodeY instead." All requests in a 1 second wi
On Wed, May 4, 2011 at 2:50 PM, Luc Castera wrote:
> Hi,
>
> Sorry for the Lil Kim reference on the subject line, I couldn't help it :-)
>
>
Tootsie Roll pops owl > Lil Kim
I suppose it would depend on not just your raw # of links but also your link
depth.
One more thing to check out...
-mox
On Wed, May 4, 2011 at 3:01 PM, Sean Cribbs wrote:
> Note that this is an HTTP limitation, not the datastore in general. That
> said, don't go crazy with them.
>
> Sean Cribbs
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
>
Think 100k links is doable, or is that "crazy?"
As someone not familiar with Riak's internals...
You have xN replication.
Make the transition-nodes be part of an x(N+1) and write-only (eg, they
don't count in the read quorum.)
If you're set up for x3 replication, then the transition bucket ends up as
part of an x4 replication.
As more queries
gonyea/pabst/tree/master/ext
>>
>> But as part of an Objective-C library (called ObjFW). So, the code is
>> actually an Objective-C++ wrapper around the C++ PB code, that exchanges
>> messages with Objective-C code (that hooks into Ruby).
>>
>> I believe Se
This may be slightly nit-picky, but this tool is marginal at best.
http://wiki.basho.com/Bitcask-Capacity-Planning.html
Total number of keys
Average Bucket Size (bytes)
Average Key size
Average Value Size (Bytes)
Some of these are kind of ambiguous, and I'm not sure why this setup was
chosen...
1
Unavailable.
On Thu, May 12, 2011 at 4:28 PM, Bob Ippolito wrote:
> statebox is the (Erlang) code we use at Mochi to model eventual
> consistency with automatic conflict resolution on read. Now it has a
> blog post:
> http://labs.mochimedia.com/archive/2011/05/08/statebox/
>
> See also:
>
> * ht
I do not see a facility in Rebar to pass variables through to the command
line after a generate.
The .app file is transient (will be destroyed and rebuilt on a clean), and
generating nodes kinda needs -f which blows stuff away there too.
I need to pass an option to gproc to turn on gproc_dist and
> which is what the ebin/foo.app is generated from?
>
> Then you put whatever you want in there?
>
> -Anthony
>
> On Wed, May 18, 2011 at 12:23:20PM -0700, Mike Oxford wrote:
> > I do not see a facility in Rebar to pass variables through to the command
> > line after
…/etc/app.config does not have the values either.
I'll keep looking ...
Thanks,
-mox
On Wed, May 18, 2011 at 2:41 PM, Mike Oxford wrote:
> Yes, you are correct. I completely forgot about the baseline file.
>
> Back to my hole...and thank you.
>
> -mox
>
> On Wed, May 18, 201
…ist, but happily rebar is Doing the Right Thing.
-mox
On Wed, May 18, 2011 at 3:04 PM, Mike Oxford wrote:
> Perhaps I spoke too soon. :)
>
> 'rebar compile' will nuke the ebin/.app file.
> Setting the {env, []} setting inside this file and then generating (or in
> the srv/.
On Thu, May 26, 2011 at 11:21 AM, Sean Cribbs wrote:
> With all of this discussion it has been pointed out to me there are two
> issues at hand, possibly conflated as one:
>
> * Which is the least surprise: caching the key list, or incurring the
> large cost of the operation? Or is it that it
With enough RAM you could just have it keep the whole thing in disk-cache...
-mox
On Fri, May 27, 2011 at 11:11 PM, Greg Nelson wrote:
> Michael,
>
> You might want to check out riak_kv_ets_backend, riak_kv_gb_trees_backend,
> and riak_kv_cache_backend.
>
> http://wiki.basho.com/Configuration-
Protobufs is "tighter on the wire" than BERT, due to predefined schema
and better packing of things like numbers. The same goes for Thrift.
If you need more languages for protobuf, have a look at
http://code.google.com/p/protobuf/wiki/ThirdPartyAddOns
Rock on.
-mox
On Tue, Jun 7, 2011 at 3:39
> Am 08.06.2011 16:11, schrieb Justin Sheehy:
> The saving seems even smaller if you consider the overhead imposed by the
> memory allocator. I wrote a small test program in C++ which allocates one
> million blocks of memory of a given size and prints the overhead for each
> allocation. Turns out t
You're changing the "id" field? (Gold's Gym vs Super Cuts)
-mox
On Mon, Jun 27, 2011 at 3:48 PM, James Linder wrote:
> I forgot to note. The data in the cluster being queried is not
> changing. (i.e., no inserts, no updates, no deletes happening)
>
>
> On Mon, Jun 27, 2011 at 6:44 PM, James Lin
Sorry, misread the output reply. Nevermind..
-mox
On Mon, Jun 27, 2011 at 5:42 PM, Mike Oxford wrote:
> You're changing the "id" field? (Gold's Gym vs Super Cuts)
>
> -mox
>
> On Mon, Jun 27, 2011 at 3:48 PM, James Linder
> wrote:
>> I forgot to
High performance updates to a single bucket/key space where ordering
isn't critical. Say, 5k TPS into a single bucket/key. Data is
written out such that it can be ordered later.
I'm aware of sharding/fragmenting/splitting and what not ... I'm
looking purely at intra-bucket performance. Yes, 5k
Now that Riak has secondary indexing, are there any plans to "auto
index" a Keyspace into a Bucket?
Such that walking a Bucket doesn't do a global scan?
Yes, we can do it in the application-space, but it seems like a good
initial application of it, and having it done automatically
server-side has
You cannot rename buckets. Riak is a "flat" KV store, and there is no
difference between a bucket and a key; they're just squished together
to form a single name.
As such, there is no "container," and this is also why getting all of
the keys in a bucket is so expensive (without a container, you're j
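Conceptually (this is a sketch of the idea, not Riak's exact code path), the object's ring position comes from hashing the bucket/key *pair*, which is why the bucket behaves like a name prefix rather than a container:

```erlang
%% Sketch: an object's location on the ring is derived from the
%% {Bucket, Key} pair as one flat name -- there is no bucket "container"
%% to enumerate, hence listing a bucket's keys means a full keyspace scan.
ring_position(Bucket, Key) when is_binary(Bucket), is_binary(Key) ->
    crypto:hash(sha, term_to_binary({Bucket, Key})).
```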
Is anyone using this as a frontend for Riak?
I'm looking at using it, but the packaging is kind of wonky, though
it's kind of based around Rebar (I'm trying to use it as a dep) so it
doesn't build clean in that situation.
Zotonic didn't work real well, even following tutorials and whatnot.
I hav
Bucket+Key gives us the flattened Key for a value
Bucket+Key2 gives us another Key for a different value
While we cannot say "give us all keys in a bucket" because it's flat
and causes a full scan, is there any way to say "give me all matching
buckets" without doing a full scan, or does that still
thought would be to batch those writes locally for a given period
> of time and then flush to Riak.
> To your question, if you really have 5k/s then that's 300k siblings for one
> minute. Given that Riak uses lists for siblings underneath I highly doubt
> this will be feasible.
…have to. Enough other things to do. :)
Rock on.
-mox
On Tue, Oct 4, 2011 at 6:18 AM, Ryan Zezeski wrote:
>
>
> On Tue, Oct 4, 2011 at 12:07 AM, Mike Oxford wrote:
>>
>> SSDs are an option, sure. I have one in my laptop; we have a bunch
>> of X25s on the way already
You'll want to run protobufs if you're looking to optimize your
response time; HTTP sockets (even to localhost) will require much more
overhead and time.
Even better would be unix sockets if they're available, and you can
bypass the whole TCP stack.
I would go A over B. Less thrash, more control,
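For what it's worth, the PB setup is minimal (a sketch assuming riakc on the code path and the default PB port of a local node):

```erlang
%% Minimal PB connection + liveness check against a local node.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
pong = riakc_pb_socket:ping(Pid).
```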
On Tue, Oct 4, 2011 at 3:59 PM, Greg Stein wrote:
> Regarding security: it is the same for option A and B and C (you're
> just shifting stuff around, but it is pretty much all the same). Put
> your webservers in one security group, and the Riak nodes in another.
> Open the Riak ports *only* to the
Any update/ETA on protobuf exposure?
Thanks,
-mox
On Fri, Oct 28, 2011 at 5:36 PM, Sean Cribbs wrote:
> You can already store indexes with pbufs. Use a mapreduce with a single
> reduce phase of riak_kv_mapreduce:reduce_identity if you want the equivalent
> of the http query. This is what the Erlang client does.
Ew. That's not very friendly.