Re: GPG error in Apt

2013-08-07 Thread Matt Black
Confirmed as working from my end.


On 8 August 2013 05:58, Hector Castro  wrote:

> Hey Matt,
>
> That issue should be resolved now. Please ping us if you hit any further
> issues.
>
> --
> Hector
>
>
> On Tue, Aug 6, 2013 at 7:42 PM, Matt Black 
> wrote:
> > Hey Basho peeps,
> >
> > Looks like you might have signed the latest Riak release with new
> > certificate (or something) - Apt is reporting that the key from
> > http://apt.basho.com/gpg/basho.apt.key is incorrect this morning.
> >
> >> apt-get update
> > W: GPG error: http://apt.basho.com precise Release: The following
> signatures
> > were invalid: BADSIG F933E597DDF2E833 Basho Technologies (Debian / Ubuntu
> > signing key) 
> >
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>


Re: Backup/restore progress/ETA?

2013-08-07 Thread Justin
Would you expect a restore of this size to take this long?

My cluster comprises 6 nodes, uses bitcask, and has a ring size of 128.

Total of 1.2 million keys/objects in a single bucket.

I have some fairly large objects. A few (fewer than 10) are around, if
not greater than, 1GB. Fewer than 1000 are greater than 10MB but less
than 100MB.

Hope this helps. Thanks
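For a rough sense of whether three days is plausible, the 613GB figure from the thread can be turned into a back-of-the-envelope throughput estimate (illustrative arithmetic only; real restore speed also depends on object sizes, ring size, and per-object write overhead):

```python
# Back-of-the-envelope: what streaming rate does a 613 GiB backup
# imply if the restore were to finish in a given number of days?

GIB = 1024 ** 3
BACKUP_BYTES = 613 * GIB  # size reported by ls -lh in this thread

def required_mb_per_sec(days):
    """MB/s needed to stream the whole file in `days` days."""
    seconds = days * 24 * 3600
    return BACKUP_BYTES / seconds / 1e6

# Three days of runtime corresponds to roughly 2.5 MB/s effective
# throughput -- slow enough that a progress indicator would help.
print(f"{required_mb_per_sec(3):.1f} MB/s")  # → 2.5 MB/s
```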
On Aug 6, 2013 3:34 PM, "Justin"  wrote:

> Hey Mark,
>
> ~ 613GB
>
> ls -lhtra /var/lib/riak/backups/all_nodes.20130725.bak
> -rw-r--r-- 1 riak riak 613G Jul 25 20:47
> /var/lib/riak/backups/all_nodes.20130725.bak
>
>
> On Tue, Aug 6, 2013 at 2:24 PM, Mark Phillips  wrote:
>
>> Hi Justin,
>>
>> For starters, how much data are you restoring?
>>
>> Mark
>>
>>
>> On Mon, Aug 5, 2013 at 2:46 PM, Justin  wrote:
>>
>>> Hello all,
>>>
>>> Is there any way to determine progress/percent complete?
>>>
>>> This has been running for 3 days now. I figured it would finish over the
>>> weekend but it hasn't.
>>>
>>> # riak-admin restore riak@ riak
>>> /var/lib/riak/backups/all_nodes.20130725.bak
>>> Restoring from '/var/lib/riak/backups/all_nodes.20130725.bak' to cluster
>>> to which 'riak@ belongs.
>>>
>>> I'm reluctant to kill it. It could be nearly complete OR it could be 3
>>> more days from finishing. The alternative is to clear my Riak cluster and
>>> re-import/ETL the data from scratch, which takes at least 5 days.
>>>
>>> In addition to determining progress for "restore", I'm also interested
>>> in determining progress for "backup".
>>>
>>> Thanks for the help.
>>>
>>> Kind regards,
>>>


Re: GPG error in Apt

2013-08-07 Thread Hector Castro
Hey Matt,

That issue should be resolved now. Please ping us if you hit any further issues.

--
Hector


On Tue, Aug 6, 2013 at 7:42 PM, Matt Black  wrote:
> Hey Basho peeps,
>
> Looks like you might have signed the latest Riak release with new
> certificate (or something) - Apt is reporting that the key from
> http://apt.basho.com/gpg/basho.apt.key is incorrect this morning.
>
>> apt-get update
> W: GPG error: http://apt.basho.com precise Release: The following signatures
> were invalid: BADSIG F933E597DDF2E833 Basho Technologies (Debian / Ubuntu
> signing key) 
>
>
>


Re: Ripple Gem Help

2013-08-07 Thread Bryce Kerley
On Aug 7, 2013, at 10:35 AM, Jace Poirier-Pinto  wrote:

> Hello,
> 
> I am trying to use Riak's gem 'ripple' for a Ruby on Rails project and I was 
> wondering if there was any 'advanced' documentation for data manipulation in 
> the Document Model. Specifically, setting up and using secondary indexes, 
> accessing 'self' variables, etc.

I've been working on new documentation that's more prose-like: 
http://ripple-docs.herokuapp.com/

Right now it's in a Rails app awkwardly jammed into a side-branch in my fork: 
https://github.com/bkerley/ripple/tree/page-generator

If you'd like anything more specific, please let me know!

Bryce Kerley




Re: Ripple Gem Help

2013-08-07 Thread Chris Meiklejohn
Hi Jace,

The rubydoc is probably the best resource to use.  For instance, here's
more information on how to manipulate secondary indexes:

http://rubydoc.info/github/seancribbs/ripple/Ripple/Index

Let me know if that's not what you're looking for.

- Chris



On Wed, Aug 7, 2013 at 7:35 AM, Jace Poirier-Pinto  wrote:

> Hello,
>
> I am trying to use Riak's gem 'ripple' for a Ruby on Rails project and I
> was wondering if there was any 'advanced' documentation for data
> manipulation in the Document Model. Specifically, setting up and using
> secondary indexes, accessing 'self' variables, etc.
>


Re: PB Java Client API 1.4 runtime exception

2013-08-07 Thread Brian Roach
On Wed, Aug 7, 2013 at 10:44 AM, rsb  wrote:
> I have tried updating my project to use the new PB 1.4, however during
> runtime I get the following exception:
> ...
> Any ideas what is causing the issue, and how can I resolve it? - Thanks.

Yes; don't do that.

The 1.4.0 version of the riak-pb jar is for the 1.4.x version of the
Java client. It won't work with any previous version, nor is it meant
to.

Thanks,
- Roach
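For anyone hitting the same mismatch: the client and protobuf jars have to move together. A minimal sketch of matched Maven coordinates for the 1.4 client line (version numbers are illustrative; check the release notes for the exact pairing your project needs):

```xml
<!-- Use the 1.4.x Java client, which expects the matching riak-pb;
     don't drop riak-pb-1.4.0.jar into a riak-client 1.1.x project. -->
<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.4.0</version>
</dependency>
```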



PB Java Client API 1.4 runtime exception

2013-08-07 Thread rsb
I have tried updating my project to use the new PB 1.4; however, at
runtime I get the following exception:

[exception not preserved in the archive]

This occurs on the following line:

[code not preserved in the archive]

My project is using com.basho.riak:riak-client:1.1.1, retrieved using
Maven. Within the library I replaced riak-pb-1.2.jar with the new
riak-pb-1.4.0.jar.

Any ideas what is causing the issue, and how can I resolve it? - Thanks.






Re: memory consumption

2013-08-07 Thread Alexander Ilyin
Ok, thank you! Looking forward to the next release.

On 6 August 2013 17:32, Evan Vigil-McClanahan  wrote:

> 11 + 4 + 16, so 31.
>
> 18 bytes there are the actual data, so that can't go away.  Since the
> allocation sizes are going to be word-aligned, the least overhead
> there is going to be word aligning the entire structure, i.e. where
> (key_len + bucket_len + 2 + 18) % 8 == 0, but that sort of
> optimization only works with fixed length keys.
>
> Khash overheads are expected to be small-ish, but are
> under-researched, at least by me.  I suspect most of the overhead is
> coming from the allocator.   So moving to tcmalloc is a possible win
> there, because it does a better job keeping amortized per-allocation
> overheads low for small allocations than libc's malloc, but of course
> with the caveats mentioned in my last email (tl;dr test
> *exhaustively*, because we don't and likely won't).
>
> Another possible improvement would be to move to a fixed-length
> structure (one that points to an allocated location for oversized
> key-bucket binaries), but that has a very bad pathological case: when
> someone selects all keys larger than your fixed size, you have
> the fixed len - 8 as an additional overhead.
>
> On Tue, Aug 6, 2013 at 2:56 AM, Alexander Ilyin 
> wrote:
> > So if you succeed with all your patches, the memory overhead will
> > decrease by 22 (= 16 + 4 + 2) bytes, am I right?
> >
> >
> > On 5 August 2013 16:38, Evan Vigil-McClanahan 
> wrote:
> >>
> >> Before I'd done the research, I too thought that the overheads were
> >> much lower, near to what the calculator said, or at least not too far off.
> >>
> >> There are a few things that I plan on addressing this release cycle:
> >>   - 16b per-allocation overhead from using enif_alloc.  This allows us
> >> a lot of flexibility about which allocator to use, but I suspect that
> >> since allocation speed isn't a big bitcask bottleneck, this overhead
> >> simply isn't worth it.
> >>   - 13b per value overhead from naive serialization of the bucket/key
> >> value.  I have a branch that reduced this by 11 bytes.
> >>   - 4b per value overhead from a single bit flag that is stored in an
> >> int. No patch for this thus far.
> >>
> >> Additionally, I've found that running with tcmalloc using LD_PRELOAD
> >> reduces the cost for bitcask's many allocations, but a) I've never
> >> done so in production and b) they say that it never releases memory,
> >> which is worrying, although the paging system theoretically should
> >> take care of it fairly easily as long as its page usage isn't
> >> insane.
> >>
> >> My original notes looked like this:
> >>
> >> 1)   ~32 bytes for the OS/malloc + khash overhead @ 50M keys
> >> (amortized, so bigger for fewer keys, smaller for more keys).
> >> 2) + 16 bytes of erlang allocator overhead
> >> 3) + 22 bytes for the NIF C structure
> >> 4) +  8 bytes for the entry pointer stored in the khash
> >> 5) + 13 bytes of kv overhead
> >>
> >> tcmalloc does what it can for line 1.
> >> My patches do what I can for lines 2, 3, and 5.
> >>
> >> 4 isn't amenable to anything other than a change in the way the keydir
> >> is stored, which could also potentially help with 1 (fewer
> >> allocations, etc).  That, unfortunately, is not very likely to happen
> >> soon.
> >>
> >> So things will get better relatively soon, but there are some
> >> architectural limits that will be harder to address.
> >>
> >> On Mon, Aug 5, 2013 at 1:49 AM, Alexander Ilyin 
> >> wrote:
> >> > Evan,
> >> >
> >> > The news about a per-key overhead of 91 bytes is quite frustrating.
> >> > When we were choosing a key-value store, per-key metadata size was a
> >> > crucial point for us. We have a simple use case but a lot of data
> >> > (hundreds of millions of items), so we were looking for ways to reduce
> >> > memory consumption. Here and here a value of 40 bytes is stated. The
> >> > 22 bytes in the RAM calculator seemed like a mistake, because the
> >> > following example obviously uses a value of 40.
> >> >
> >> > Anyway, thanks for your response.
> >> >
> >> >
> >> > On 4 August 2013 04:39, Evan Vigil-McClanahan 
> >> > wrote:
> >> >>
> >> >> Some responses inline.
> >> >>
> >> >> On Fri, Aug 2, 2013 at 3:11 AM, Alexander Ilyin <
> alexan...@rutarget.ru>
> >> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I have a few questions about Riak memory usage.
> >> >> > We're using Riak 1.3.1 on a 3-node cluster. According to the bitcask
> >> >> > capacity calculator
> >> >> > (http://docs.basho.com/riak/1.3.1/references/appendices/Bitcask-Capacity-Planning/),
> >> >> > Riak should use about 30GB of RAM for our data. Actually, it uses
> >> >> > about 45GB and I can't figure out why. I'm looking at the %MEM
> >> >> > column in top on each node for the beam.smp process.
> >> >>
> >> >> I've recently done some research on this and have filed bugs against
> >> >> the calculator, it's a bit wrong and ha
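Evan's line items above can be rolled into a quick capacity sketch (a rough estimate only: the 91-byte sum and the word-alignment rule come from his notes, and the example key sizes are made up):

```python
# Per-key bitcask keydir overhead, summing the line items from
# Evan's notes (amortized figures, so real numbers will vary):
#   32B OS/malloc + khash  + 16B erlang allocator (enif_alloc)
#   + 22B NIF C structure  +  8B khash entry pointer
#   + 13B bucket/key serialization overhead
PER_KEY_OVERHEAD = 32 + 16 + 22 + 8 + 13  # = 91 bytes

def word_aligned(nbytes, word=8):
    """Round an allocation size up to the next word boundary."""
    return (nbytes + word - 1) // word * word

def keydir_gib(num_keys, bucket_len, key_len):
    """Rough keydir RAM in GiB for num_keys entries."""
    per_key = PER_KEY_OVERHEAD + word_aligned(bucket_len + key_len)
    return num_keys * per_key / 1024 ** 3

# Example (hypothetical): 300M keys, 10-byte bucket names, 20-byte keys
print(PER_KEY_OVERHEAD, round(keydir_gib(300_000_000, 10, 20), 1))
```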

Re: Nailing down the bucket design?

2013-08-07 Thread Guido Medina
As a second thought, you could have a key per player in the players bucket,
and a key per player holding the collection of units in the units bucket.


Guido.

On 07/08/13 15:52, Guido Medina wrote:
What's the size of each unit, JSON-wise? If it is small, you could
have the player's units inside a single key as a collection; that way,
when you fetch a player, your key will contain the units, and you could
play around with mutations/locking of that player's key. It will also
leverage your multi-threading code, making each player thread your
multi-threading pattern (removing the need for a thread pool to fetch
units).

The drawback of such a design is that each player's key would be fetched
as a whole just to add/remove a unit. If you have a single
reader/writer (server), you could add some in-memory cache (for
example, in Java you could use Google's Guava library's LoadingCache).

Maybe I proposed something way out of proportion; I think I would need
more details.


Anyway, HTH,

Guido.

On 07/08/13 10:28, Maksymilian Strzelecki wrote:
Hi. I've read somewhere around the Internet that Riak performs better
when there are more buckets rather than a massive number of keys in
them. Now, if this is true, I'm asking for advice on how to go about my
data model.

I've started off with one bucket called accounts_units. Every player
has his set of units in-game (like 10-20), and they all have their
individual keys in there. Since knowing what your units are is a pretty
important thing, I would be doing a lot of queries on that bucket. Then
I thought I could create individual buckets for every player and his
units, though that would create A LOT of buckets with few keys. Would
that be better? What do you think?


Thanks for your time,
Max.




Re: Nailing down the bucket design?

2013-08-07 Thread Jeremiah Peschka
Responses inline.

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop


On Wed, Aug 7, 2013 at 2:28 AM, Maksymilian Strzelecki wrote:

> Hi. I've read somewhere around the Internet that Riak performs better
> when there are more buckets rather than a massive number of keys in
> them. Now, if this is true, I'm asking for advice on how to go about
> my data model.
>

Riak can benefit from having a large number of buckets, but it's very use
case specific.


> I've started off with one bucket called accounts_units. Every player
> has his set of units in-game (like 10-20), and they all have their
> individual keys in there. Since knowing what your units are is a pretty
> important thing, I would be doing a lot of queries on that bucket. Then
> I thought I could create individual buckets for every player and his
> units, though that would create A LOT of buckets with few keys. Would
> that be better? What do you think?
>

In direct answer to your question, your data model might be better served
by storing all of a player's units in a single value; e.g. units/peschkaj
instead of having between 10 and 20 keys in a units_peschkaj bucket. The
reasoning here is that a read of a single key is going to be less load
on the system than 20 reads, even though that one key might be 20x larger.

You need to be aware of data access patterns - if multiple actors in your
system can modify the same value, then you need to account for that in your
data access patterns and either serialize writes through a single point
(not a great idea for concurrency) or potentially allow conflicts to occur
and have your application manage sibling resolution.

You also need to be aware of the size of each value that you're storing,
especially if you have siblings. There are performance implications of
storing objects around 4MB in size.
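If you do allow conflicts and resolve siblings in the application, the merge needs to be deterministic. A minimal sketch for a units-per-player value (the JSON shape is hypothetical; it just illustrates a set-union merge, which suits add-mostly data but cannot express deletes):

```python
import json

def resolve_unit_siblings(sibling_payloads):
    """Merge JSON sibling values like '{"units": [...]}' by set union.

    Union suits add-mostly data; note it cannot express deletes, so
    removed units would "resurrect" on conflict.
    """
    merged = set()
    for payload in sibling_payloads:
        merged.update(json.loads(payload)["units"])
    # Sort for a deterministic result regardless of sibling order.
    return json.dumps({"units": sorted(merged)})

a = json.dumps({"units": ["archer", "knight"]})
b = json.dumps({"units": ["knight", "catapult"]})
print(resolve_unit_siblings([a, b]))
# → {"units": ["archer", "catapult", "knight"]}
```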


>
> Thanks for your time,
> Max.
>


Re: Nailing down the bucket design?

2013-08-07 Thread Guido Medina
What's the size of each unit, JSON-wise? If it is small, you could
have the player's units inside a single key as a collection; that way,
when you fetch a player, your key will contain the units, and you could
play around with mutations/locking of that player's key. It will also
leverage your multi-threading code, making each player thread your
multi-threading pattern (removing the need for a thread pool to fetch
units).

The drawback of such a design is that each player's key would be fetched
as a whole just to add/remove a unit. If you have a single
reader/writer (server), you could add some in-memory cache (for
example, in Java you could use Google's Guava library's LoadingCache).

Maybe I proposed something way out of proportion; I think I would need
more details.


Anyway, HTH,

Guido.

On 07/08/13 10:28, Maksymilian Strzelecki wrote:
Hi. I've read somewhere around the Internet that Riak performs better
when there are more buckets rather than a massive number of keys in
them. Now, if this is true, I'm asking for advice on how to go about my
data model.

I've started off with one bucket called accounts_units. Every player
has his set of units in-game (like 10-20), and they all have their
individual keys in there. Since knowing what your units are is a pretty
important thing, I would be doing a lot of queries on that bucket. Then
I thought I could create individual buckets for every player and his
units, though that would create A LOT of buckets with few keys. Would
that be better? What do you think?


Thanks for your time,
Max.




Nailing down the bucket design?

2013-08-07 Thread Maksymilian Strzelecki
Hi. I've read somewhere around the Internet that Riak performs better
when there are more buckets rather than a massive number of keys in
them. Now, if this is true, I'm asking for advice on how to go about my
data model.

I've started off with one bucket called accounts_units. Every player
has his set of units in-game (like 10-20), and they all have their
individual keys in there. Since knowing what your units are is a pretty
important thing, I would be doing a lot of queries on that bucket. Then
I thought I could create individual buckets for every player and his
units, though that would create A LOT of buckets with few keys. Would
that be better? What do you think?

Thanks for your time,
Max.


Ripple Gem Help

2013-08-07 Thread Jace Poirier-Pinto
Hello,

I am trying to use Riak's gem 'ripple' for a Ruby on Rails project and I
was wondering if there was any 'advanced' documentation for data
manipulation in the Document Model. Specifically, setting up and using
secondary indexes, accessing 'self' variables, etc.


Re: Riak 1.3.1 Errors

2013-08-07 Thread Bryan Fink
On Wed, Aug 7, 2013 at 4:37 AM, Shane McEwan  wrote:
> It doesn't seem to be causing us any problems and @beerriot's comments on
> #49 seem to indicate the messages are safe to ignore so that's what I'll do
> until we're ready to upgrade.

Indeed, the problem is harmless, other than generating log spam.
Unfortunately, while the patches that Jared mentioned address similar
errors, the specific "exit with reason noproc in context
shutdown_error" might still happen on 1.3.2 and 1.4.x. It's a race in
the shutdown monitoring logic of Erlang/OTP's 'supervisor' module (see
my last comment on riak_pipe issue #49 [1]). The fix requires more
restructuring than we had time to do for the 1.4 release. Despite the
'error' level of the log (which is set by code we don't control), the
race is harmless, and the correct logical thing (making sure the
process is dead) is happening.

-Bryan

[1] https://github.com/basho/riak_pipe/issues/49#issuecomment-17147806



WARNING: Not all replicas will be on distinct nodes (with 5 nodes)

2013-08-07 Thread Guillermo
Hi. I have seen this warning before on clusters without enough nodes.

In this case, the environment is Amazon EC2 c1.xlarge machines in the same
availability zone: 5 machines, managed with Chef and the official Riak
cookbook 2.2.0, which installs Riak 1.4.0 build 1.

The process:


knife ssh "roles:riak" "riak-admin cluster join riak@node01"
ssh node01 riak-admin cluster plan
ssh node01 riak-admin cluster commit


In the plan phase, it already says:


WARNING: Not all replicas will be on distinct nodes


And after commit, riak-admin diag confirms that:

[warning] The following preflists do not satisfy the n_val:
[[{0, 'riak@node02'},
  {2854495385411919762116571938898990272765493248, 'riak@node02'},
  {5708990770823839524233143877797980545530986496, ...
(and so on, with mentions of all the nodes)


The solution I found was:

On node01 (or probably any node): riak-admin cluster leave. Wait
till the transfers finish, and join again.
After this tedious process (30 minutes per cluster commit), riak-admin diag
no longer reports the warning.

I have done this process two times, with the same result, and needed to
apply the same solution each time.
Is there any way to skip this? Am I doing something wrong?

Thanks.


-- 
Guillermo Álvarez


Re: Riak 1.3.1 Errors

2013-08-07 Thread Shane McEwan
Thanks Jared. I wonder why my searching didn't locate these issues on
GitHub. My Google-fu must be lacking. :-)


It doesn't seem to be causing us any problems and @beerriot's comments 
on #49 seem to indicate the messages are safe to ignore so that's what 
I'll do until we're ready to upgrade.


Thanks again!

On 07/08/13 00:22, Jared Morrow wrote:

Shane,

This bug was fixed in Riak 1.4.x in this commit
https://github.com/basho/riak_pipe/pull/73 which was backported to 1.3.2
with this https://github.com/basho/riak_pipe/pull/74.  I'm not an expert on
the issue itself, so I'll have to ask if those messages are something that
are safe to ignore.

-Jared


On Mon, Aug 5, 2013 at 8:43 AM, Shane McEwan  wrote:


G'day!

Nearly every day since upgrading to Riak 1.3.1 I've been seeing the
following errors on random nodes:

2013-08-05 01:00:05.775 [error] <0.212.0> Supervisor riak_pipe_fitting_sup
had child undefined started with riak_pipe_fitting:start_link() at
<0.26698.882> exit with reason noproc in context shutdown_error

2013-08-05 14:14:35.431 [error] <0.709.0> Supervisor riak_kv_mrc_sink_sup
had child undefined started with riak_kv_mrc_sink:start_link() at
<0.18229.970> exit with reason noproc in context shutdown_error

What do they mean? I suspect I can ignore them as they're very similar to
the "{sink_died,normal}" messages that we would occasionally get with
previous versions. However, if they're not indicative of a problem why are
they reported as an [error]?

Can someone shed some light on these errors?

Thanks!

Shane.





