So, I've been doing some testing lately. Riak sounds like it'll meet
your use cases, with some caveats. LevelDB on SSDs handles frequently
updated data fairly well, as long as your disks have enough throughput
and your CPUs can keep up with compaction. If you're in GCE, I
recommend using persistent SSDs only, and if you're in AWS, local SSDs
would be ideal. If your cluster can't keep up, you may see distributed
Erlang getting backed up, and that can be very problematic, because it
raises latency for other requests too. You can work around this by
provisioning the cluster with enough capacity up front. Other than
that, I don't find a workload with heavy updates too concerning.
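If it helps, here's roughly where I'd start in riak.conf for an
update-heavy LevelDB cluster. The values are illustrative starting
points you'd tune for your hardware, not recommendations; the keys
themselves are standard Riak 2.0 settings, and raising the distribution
buffer is one way to give distributed Erlang more headroom before it
starts backing up:

```
## Sketch of riak.conf settings for an update-heavy LevelDB workload.
## All values below are placeholders to tune, not recommendations.

# Use the LevelDB backend instead of the default (bitcask)
storage_backend = leveldb

# Percentage of total RAM that LevelDB may use for its caches and
# write buffers across all vnodes on the node
leveldb.maximum_memory.percent = 70

# A larger distribution buffer gives distributed Erlang more headroom
# before backpressure starts raising latencies for other requests
erlang.distribution_buffer_size = 128MB
```

You'd want to watch compaction and inter-node message queues under a
realistic load test before settling on any of these.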

For the second class of keys, I have some other concerns. Some of what
you're describing is possible with Riak 2.0.0 features like strong
consistency. You could actually use RAMP (Read Atomic Multi-Partition)
transactions to get what you're looking for without any external
tools, or you could pair Riak with an external toolkit to get full
serializability.
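For the strongly consistent subset, the Riak 2.0 setup is roughly:
enable the consensus subsystem in riak.conf, then create a bucket type
with consistent=true and keep the trade-related keys under it. A sketch
(the bucket-type name "strong" is just an example):

```
# riak.conf: enable the strong consistency subsystem
# (needs n_val >= 3 and at least three nodes in the cluster)
strong_consistency = on

# Then, from the shell, create and activate a strongly consistent
# bucket type; the name is arbitrary
riak-admin bucket-type create strong '{"props":{"consistent":true}}'
riak-admin bucket-type activate strong
riak-admin bucket-type status strong
```

Keys in buckets under that type get linearizable single-key operations;
multi-key atomicity would still need something on top, which is where
RAMP or an external transaction layer comes in.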

On Wed, Oct 1, 2014 at 2:20 AM, snacktime <snackt...@gmail.com> wrote:
>
> 1) What are your requirements in terms of data durability?
>
> If we are talking write failures, then family 1 can tolerate some loss of
> durability.   My caching layer has a write-behind cache that already trades
> a few seconds of durability on node failure for the ability to buffer writes
> to the backend.  And for a rather significant part of the data in certain
> types of games, that is acceptable.
>
> 2) Do you have real hardware?
>
> Large games mostly yes.  Small games no.
>
> 3) Do you have the capability to use SSDs?
>
> Yes
>
> 4) What's the size of your total keyspace? What about your working set?
>
> For larger games total keyspace would probably range from 10-40 million
> keys.   For a stable game with 10 million keys, I'd say an average might be
> around 200k active keys.  But a number of things can make that number swing
> up wildly over short periods of time.  Launch week, promotions, etc..  Also,
> it's worth noting that some values will increase in size over time as
> players collect more and more items.
>
> 5) Do you need atomic, multi-key updates?
>
> Will always have a small set of stuff that requires this.
>
> Chris
>
> On Tue, Sep 30, 2014 at 11:53 PM, snacktime <snackt...@gmail.com> wrote:
>>
>> I'm going to be testing out Riak for use in an open source game server
>> platform I created, and was looking for some tips on use and configuration.
>>
>> Read/write volume, as is normal for many games, is about 50/50.
>> Consistent response times with low latency are a priority, as is not
>> letting eventual consistency get out of control like it can with that
>> 'other' nosql db (yeah, another horror story not worth going into here).
>>
>> The data being saved would be a combination of json and binary.  It's
>> protocol buffers and I provide the option to serialize to json or native
>> protobuf.
>>
>> Currently all keys are compound.  A scope followed by a delimiter followed
>> by the actual key.
>>
>> Usage patterns basically break down into two groups.  One is data that can
>> change up to several times a second.  Local caching is available, but
>> updates to the database every few seconds would be required.
>>
>> The other class of data does not change that often, and in some cases
>> requires atomic updates.  This would be in game trading, etc..
>>
>>
>> Any tips people have on getting the most out of Riak for this type of
>> environment would be appreciated.
>>
>> Chris
>
>
>
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
