Re: riak 1.4.0 upgrade failed

2013-07-19 Thread Jared Morrow
So 'epmd' (the Erlang Port Mapper Daemon) is probably still running under the
riak user.  You can try killing epmd and making sure no other processes are
running under the riak user.
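A quick way to run that check, as a sketch (it assumes a `riak` system user and a standard `ps`; adjust the user name for your install):

```shell
# Look for any processes still owned by the riak user (epmd, beam.smp, ...).
leftover=$(ps -u riak -o pid=,comm= 2>/dev/null || true)
if [ -n "$leftover" ]; then
    echo "still running as riak:"
    echo "$leftover"
    # Stop them before retrying the upgrade; epmd supports a graceful stop:
    # epmd -kill
    # pkill -u riak
else
    echo "no processes running as riak"
fi
```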

I'll put in an issue to not fail so hard if the usermod command fails.
Sorry for the trouble.

-Jared


On Fri, Jul 19, 2013 at 8:58 AM, kzhang  wrote:

> Hi,
>
> Thanks for the reply. Sorry, I tried a few times, I did stop it at some
> point, still the same result. Just tried again and here is the result:
>
> # riak stop
> Attempting to restart script through sudo -H -u riak ok
>
> # riak-admin status
> Attempting to restart script through sudo -H -u riak Node is not running!
>
> # sudo rpm -Uvhv riak-1.4.0-1.el6.x86_64.rpm
> [verbose rpm debug output snipped; it is quoted in full in kzhang's
> original message below]
>

Re: Yokozuna in Riak Control

2013-07-19 Thread Chris Meiklejohn
Good idea!

I've filed the following issue in GitHub for tracking this:
https://github.com/basho/riak_control/issues/116

- Chris


On Fri, Jul 19, 2013 at 3:03 PM, Dave Martorana  wrote:

> Hey everyone,
>
> A feature request, if I may: RAM monitoring in Riak Control currently
> shows Riak's RAM usage and all other usage. I would love it if it also
> showed Solr/Lucene RAM usage in future integrated Yokozuna builds.
>
> Cheers!
>
> Dave
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Comparing Riak MapReduce and Hadoop MapReduce

2013-07-19 Thread Xiaoming Gao
Hi everyone,

I am trying to learn about Riak MapReduce and compare it with Hadoop
MapReduce, and there are some details I am interested in that are not
covered in the online documentation. Hopefully I can get some help here
with the following questions. Thanks in advance!

1. For a given MapReduce request (or to say, job), how does Riak decide how
many mappers to use for the job? For example, if I have 8 nodes and my data
are distributed across all nodes with an "N" value of 2, will I have 4
mappers running on 4 nodes concurrently? Is it possible to have multiple
mappers (e.g., 4 or even 6) for the same MR job running on each node (for
better processing speed)?

2. If I run a MapReduce job over the results of a Riak Search query, how
does Riak schedule the mappers based on the search results?

3. How does Riak handle intermediate data generated by mappers?
Specifically:
(1) In Hadoop MapReduce, the output of each mapper is a set of key/value
pairs; the output from all mappers is first grouped by key and then handed
over to the reducer. Does Riak do similar grouping of intermediate data?

(2) How are mapper outputs transmitted to the reducer? Does Riak use local
disks on the mapper nodes or reducer nodes to store the intermediate data
temporarily?

4. According to the document
http://docs.basho.com/riak/latest/dev/advanced/mapreduce/#How-Phases-Work ,
each MR job schedules only one reducer, which runs on the coordinating node.
Is there any way to configure an MR job to use multiple reducers?

Best regards,
Xiaoming



--
View this message in context: 
http://riak-users.197444.n3.nabble.com/Comparing-Riak-MapReduce-and-Hadoop-MapReduce-tp4028454.html
Sent from the Riak Users mailing list archive at Nabble.com.



Re: riak 1.4.0 upgrade failed

2013-07-19 Thread kzhang
Hi, 

Thanks for the reply. Sorry, I tried a few times, I did stop it at some
point, still the same result. Just tried again and here is the result:

# riak stop
Attempting to restart script through sudo -H -u riak ok

# riak-admin status
Attempting to restart script through sudo -H -u riak Node is not running!

# sudo rpm -Uvhv riak-1.4.0-1.el6.x86_64.rpm
D: == riak-1.4.0-1.el6.x86_64.rpm
D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
D: loading keyring from rpmdb
D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
D: opening  db index   /var/lib/rpm/Packages rdonly mode=0x0
D: locked   db index   /var/lib/rpm/Packages
D: opening  db index   /var/lib/rpm/Name rdonly mode=0x0
D:  read h# 457 Header sanity check: OK
D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
D: Using legacy gpg-pubkey(s) from rpmdb
D: Expected size: 25326908 = lead(96)+sigs(180)+pad(4)+data(25326628)
D:   Actual size: 25326908
D: riak-1.4.0-1.el6.x86_64.rpm: Header SHA1 digest: OK (f2fc385c419ea7010a2691e6c7e7e8ad51a045a6)
D: == relocations
D:  read h# 496 Header SHA1 digest: OK (7b0796f33bb3381f63e1d28c779198e8313729d0)
D:  added binary package [0]
D: found 0 source and 1 binary packages
D: == +++ riak-1.4.0-1.el6 x86_64/linux 0x2
D: opening  db index   /var/lib/rpm/Basenames rdonly mode=0x0
D:  read h#  13 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: /bin/bash YES (db files)
D:  Requires: /bin/sh   YES (db files)
D:  Requires: /bin/sh   YES (cached)
D:  Requires: /bin/sh   YES (cached)
D:  read h# 155 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: /usr/bin/env  YES (db files)
D:  Requires: config(riak) = 1.4.0-1.el6   YES (added provide)
D: opening  db index   /var/lib/rpm/Providename rdonly mode=0x0
D:  read h#  11 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libc.so.6()(64bit)   YES (db provides)
D:  Requires: libc.so.6(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  Requires: libc.so.6(GLIBC_2.3)(64bit)   YES (db provides)
D:  Requires: libc.so.6(GLIBC_2.3.2)(64bit)   YES (db provides)
D:  Requires: libc.so.6(GLIBC_2.3.4)(64bit)   YES (db provides)
D:  read h# 157 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libcrypto.so.10()(64bit)   YES (db provides)
D:  Requires: libdl.so.2()(64bit)   YES (db provides)
D:  Requires: libdl.so.2(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  read h#   1 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libgcc_s.so.1()(64bit)   YES (db provides)
D:  Requires: libgcc_s.so.1(GCC_3.0)(64bit)   YES (db provides)
D:  Requires: libm.so.6()(64bit)   YES (db provides)
D:  Requires: libm.so.6(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  read h#  12 Header V3 RSA/SHA256 Signature, key ID c105b9de: OK
D:  Requires: libncurses.so.5()(64bit)   YES (db provides)
D:  Requires: libpthread.so.0()(64bit)   YES (db provides)
D:  Requires: libpthread.so.0(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  Requires: libpthread.so.0(GLIBC_2.3.2)(64bit)   YES (db provides)
D:  Requires: librt.so.1()(64bit)   YES (db provides)
D:  Requires: librt.so.1(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  Requires: libssl.so.10()(64bit)   YES (db provides)
D:  read h#  27 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libstdc++.so.6()(64bit)   YES (db provides)
D:  Requires: libstdc++.so.6(CXXABI_1.3)(64bit)   YES (db provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4)(64bit)   YES (db provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4.11)(64bit)   YES (db provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4.9)(64bit)   YES (db provides)
D:  Requires: libtinfo.so.5()(64bit)   YES (db provides)
D:  Requires: libutil.so.1()(64bit)   YES (db provides)
D:  Requires: libutil.so.1(GLIBC_2.2.5)(64bit)   YES (db provides)
D:  Requires: rpmlib(CompressedFileNames) <= 3.0.4-1   YES (rpmlib provides)
D:  Requires: rpmlib(FileDigests) <= 4.6.0-1   YES (rpmlib provides)
D:  Requires: rpmlib(PayloadFilesHavePrefix) <= 4.0-1   YES (rpmlib provides)
D:  Requires: rtld(GNU_HASH)   YES (db provides)
D:  Requires: rpmlib(PayloadIsXz) <= 5.2-1   YES (rpmlib provides)
D: opening  db index   /var/lib/rpm/Confli

Re: riak 1.4.0 upgrade failed

2013-07-19 Thread Andrew Thompson
On Fri, Jul 19, 2013 at 09:53:02AM -0700, kzhang wrote:
> Thanks!
> 
> install went through. 
> # sudo rpm -Uvh riak-1.4.0-1.el6.x86_64.rpm
> Preparing...                ########################################### [100%]
>    1:riak                   warning: /etc/riak/app.config created as /etc/riak/app.config.rpmnew
> warning: /etc/riak/vm.args created as /etc/riak/vm.args.rpmnew
>                             ########################################### [100%]
> 
> but failed on start.
> # riak start
> riak failed to start within 15 seconds,
> see the output of 'riak console' for more information.
> If you want to wait longer, set the environment variable
> WAIT_FOR_ERLANG to the number of seconds to wait.
> 
It looks like Riak is still running as well; can you kill it and try to
start it again?

Andrew



Javascript count of keys using mapreduce gives different counts

2013-07-19 Thread kpandey
I'm using a JavaScript MapReduce job to count keys on a bucket with the
memory backend.  The count seems to randomly increase or decrease on
subsequent calls even when no new data has been inserted. The calls hit the
same instance in the cluster.
Any idea why this might be happening?

Each count request is made a few seconds apart using an HTTP POST, like so:

curl -XPOST http://xx:8098/mapred -H 'Content-Type:
application/json' -d '
{"inputs":"TEST_BUCKET",
 "query":[{"map":{"language":"javascript",
  "keep":false,
  "source":"function(riakobj) {return [1]; }"}},
  {"reduce":{"language":"javascript",
 "keep":true,
 "name":"Riak.reduceSum"}}]}'

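As a debugging cross-check (a sketch, not something from the thread), the bucket's keys can be listed over the HTTP API and counted client-side; `127.0.0.1:8098` is a placeholder for the node's HTTP listener, and key listing is expensive, so this is for debugging only:

```shell
# Build the list-keys URL for the bucket used above; counting the returned
# "keys" array gives a key count that is independent of the MR job.
HOST="127.0.0.1:8098"                      # placeholder node address
BUCKET="TEST_BUCKET"
URL="http://$HOST/buckets/$BUCKET/keys?keys=true"
echo "$URL"
# Against a live node (curl plus python for JSON parsing):
# curl -s "$URL" | python -c 'import json,sys; print(len(json.load(sys.stdin)["keys"]))'
```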

Thanks
Kumar



--
View this message in context: 
http://riak-users.197444.n3.nabble.com/Javascript-count-of-keys-using-mapreduce-gives-different-counts-tp4028455.html
Sent from the Riak Users mailing list archive at Nabble.com.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Quickly deleting + recreating item in Riak deletes new item

2013-07-19 Thread Kelly McLaughlin
Matthew,

I did some testing with your code and I was able to reproduce what you were
seeing. I would occasionally see an error similar to the following:

Failed to fetch item 460 err Object not found

This behavior is a result of the trade-offs of using an eventually
consistent database like Riak. It is not the case that your inserts are
failing to write or the data is being lost, but what is actually happening
is that the quick read after writing with the default request options does
not provide any guarantee that you will read your writes. So basically when
you make the read, the replicas that are responding to your request have
not seen the latest value yet and so you end up with "Not Found" as the
response. If you did another read attempt for one of those objects reported
missing, it would succeed because Riak's read-repair would have kicked in
to make sure each replica has the value. To increase the likelihood of
reading your writes you should set the optional request parameters pr and
pw to ensure that all of the primary replicas are available prior to
performing a read or write request. I altered your code to use those
options and put the updates in a gist [1] (it's my first stab at Go so my
changes may not be very idiomatic). Additionally, I changed the riak driver
options so that NotFoundOk was false instead of true. With these changes I was able
to run the test 50 times in a row with no errors where previously I would
see at least one error every 10 iterations or so. Hope that helps.

[1] : https://gist.github.com/kellymclaughlin/6041109
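Kelly's suggestion can also be sketched against the HTTP API (a sketch only: host, bucket, and key are placeholders, `pr=3` assumes the default n_val of 3, and the thread's actual fix was made through the Go client):

```shell
# A read that only succeeds when all primary replicas are reachable,
# with notfound_ok=false so a single not-found vote is not conclusive.
HOST="127.0.0.1:8098"                      # placeholder node address
URL="http://$HOST/buckets/items/keys/460?pr=3&notfound_ok=false"
echo "$URL"
# curl -s "$URL"   # run against a live node
```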

Kelly


On Wed, Jul 17, 2013 at 4:07 PM, Matthew Dawson wrote:

> On July 17, 2013 08:45:01 AM Kelly McLaughlin wrote:
> > Matthew,
> >
> > I find it really surprising that you don't see any difference in behavior
> > when you set delete_mode to keep. I think it would be helpful if you
> could
> > outline your specific setup and give the steps to reproduce what you're
> > seeing to be able to make a determination if this represents a bug or
> not.
> > Thanks.
> >
> > Kelly
> Hi Kelly,
>
> Sure, no problem.  Hardware-wise, I have:
>  - An AMD Phenom II X6 desktop with 16G memory and an HDD with an SSD
> cache.
>  - An Intel Ivy Bridge dual-core (+HT) laptop with 16G memory and an SSD.
> Both have lots of free memory and disk space for running my tests, and my
> desktop never seems to be IO bound.  Both machines are connected over
> Ethernet on the same LAN.
>
> On top of that hardware, both are running two instances of Riak each, all
> forming one 4-node cluster.  I'm using the default ring size of 64.  I've
> also upgraded all the nodes to the latest release, 1.4, using the 1.4 tag
> from Git.
> I'm not using this to seriously benchmark Riak, so I don't think this setup
> should cause any issues.  I'm also going to set up a real cluster for
> production use, so ring size is not a concern.
> Each Riak instance uses LevelDB as the datastore; Riak Search is disabled.
> I'm using Riak's PB API for access, and I've bumped up the backlog
> parameter to 1024 for now.  Originally my program would connect to a
> single node, but recently I've been playing with HAProxy locally, and now
> I use that to connect to all four instances.  The problem existed before I
> implemented HAProxy.  Riak Control is also enabled on one node per
> computer.
>
> For my application, it effectively stores two pieces of information in
> Riak.  First it stores a list of keys associated with an object, and then
> stores an individual item at each key.  I limit the number of keys to 1
> per object.
>
> For my test suite, I automatically clean up after each test by listing all
> the keys associated with a bucket and then deleting each key individually.
> I only store items in two buckets, so this cleans the slate before each
> run.
>
> The test that has the highest chance of failing is one that tests how the
> system deals with inserting 1 items against one object.  The key list
> remains below 1M.
> Occasionally I see other tests fail, but I think this one fails more often
> as it stresses the entire system the most.  If I stop the automatic
> cleanup, the not-found key is not findable via curl either.
>
> Before posting, I would delete and insert keys without using a vclock.  I
> had figured this was safe as I ran with allow_mult=true on both buckets,
> and I implemented conflict resolution first.  As suggested on this list, I
> now have the 1 item test suite use vclocks from start to finish.  However,
> I still see this behaviour.
>
> I've attached a program (written in Go, as that is what I'm using) to
> this email which triggers the behaviour.  As far as I understand Riak, it
> is properly fetching vclocks whenever possible.  The library I'm using
> (located at github.com/tpjg/goriakpbc ) was just recently updated to
> ensure that vclocks are fetched even if the item is deleted.  I am using
> an up-to-date version of the library.  The program acts similarly to my ap

More efficient JavaScript map jobs

2013-07-19 Thread David Pollak
Howdy,

I'm working on using ClojureScript (which compiles down to JavaScript) to
define MapReduce jobs in Riak.

The compiled output includes the core ClojureScript libraries, also
compiled down to JavaScript, so even the simplest functions result in 10K
to 100K of library code.

Is there any way to run a JavaScript MapReduce job such that an
environment-setup chunk of JavaScript is run in the JavaScript VM before
the job starts, and each map function call is then a call into that
environment?

Thanks,

David



-- 
Telegram, Simply Beautiful CMS https://telegr.am
Lift, the simply functional web framework http://liftweb.net
Follow me: http://twitter.com/dpp
Blog: http://goodstuff.im


Re: Querying 2i using erlang native api

2013-07-19 Thread luigi max
What I am trying to do is access 2i information from a post commit hook
written in erlang. What is the best way to do it from there?


On Fri, Jul 19, 2013 at 1:32 PM, Sean Cribbs  wrote:

> You should avoid using the internal client (which isn't a client, really),
> except for debugging. Instead, use
> https://github.com/basho/riak-erlang-client or any of our other
> language-specific clients. See also
> http://docs.basho.com/riak/latest/references/Client-Libraries/
>
>
> On Fri, Jul 19, 2013 at 3:21 PM, luigi max  wrote:
>
>>  I have been looking around the Riak docs and as many other places that I
>> can think of and I can't seem to find how to query 2i with the Riak
>> internal erlang client.
>>
>> I can do a query to 2i with the http interface with:
>>
>> /buckets/TEST/index/pos_int/1/15
>>
>> and it returns
>>
>> {"keys":["set2i"]}
>>
>> I can create an entry with the following code (loaded into riak):
>>
>> Robj =  riak_object:new(<<"TEST">>, <<"set2i">>, void, "application/json"),
>> Lst = [{"pos_int", 5}],
>> Meta = dict:store(<<"index">>,Lst, dict:new()),
>> I2obj = riak_object:update_metadata(Robj, Meta)
>> {ok,C} = riak:local_client().
>> C:put(I2obj).
>>
>> This works nicely, but the problem I have is with trying to figure out
>> how to do the same query using the native api for riak. The documentation
>> for anything to do with the riak internal client is effectively non
>> existent.
>>
>> What all I need:
>>
>>- information on how to do a integer range query
>>- information on my other 2i query options, in case I need it
>>
>> Any help is much appreciated.
>>
>> --
>> By Luke Harvey;
>>
>> LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00
>>
>>
>>
>
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>



-- 
By Luke Harvey;

LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00


Re: Querying 2i using erlang native api

2013-07-19 Thread luigi max
I saw those before; the part that was not clear to me is what constitutes a
 -Query :: riak_index:query_def().
Every test I did in the riak console timed out.


On Fri, Jul 19, 2013 at 2:20 PM, Sean Cribbs  wrote:

> Aha, that clears things up. I'd be careful about that (the effects of
> post-commit hooks will be invisible to your application) but you can use
> these functions in riak_client:
>
> -export([get_index/4,get_index/3]).
> -export([stream_get_index/4,stream_get_index/3]).
>
>
>
> On Fri, Jul 19, 2013 at 3:48 PM, luigi max  wrote:
>
>> What I am trying to do is access 2i information from a post commit hook
>> written in erlang. What is the best way to do it from there?
>>
>>
>> On Fri, Jul 19, 2013 at 1:32 PM, Sean Cribbs  wrote:
>>
>>> You should avoid using the internal client (which isn't a client,
>>> really), except for debugging. Instead, use
>>> https://github.com/basho/riak-erlang-client or any of our other
>>> language-specific clients. See also
>>> http://docs.basho.com/riak/latest/references/Client-Libraries/
>>>
>>>
>>> On Fri, Jul 19, 2013 at 3:21 PM, luigi max  wrote:
>>>
  I have been looking around the Riak docs and as many other places that
 I can think of and I can't seem to find how to query 2i with the Riak
 internal erlang client.

 I can do a query to 2i with the http interface with:

 /buckets/TEST/index/pos_int/1/15

 and it returns

 {"keys":["set2i"]}

 I can create an entry with the following code (loaded into riak):

 Robj =  riak_object:new(<<"TEST">>, <<"set2i">>, void, "application/json"),
 Lst = [{"pos_int", 5}],
 Meta = dict:store(<<"index">>,Lst, dict:new()),
 I2obj = riak_object:update_metadata(Robj, Meta)
 {ok,C} = riak:local_client().
 C:put(I2obj).

 This works nicely, but the problem I have is with trying to figure out
 how to do the same query using the native api for riak. The documentation
 for anything to do with the riak internal client is effectively non
 existent.

 What all I need:

- information on how to do a integer range query
- information on my other 2i query options, in case I need it

 Any help is much appreciated.

 --
 By Luke Harvey;

 LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00



>>>
>>>
>>> --
>>> Sean Cribbs 
>>> Software Engineer
>>> Basho Technologies, Inc.
>>> http://basho.com/
>>>
>>
>>
>>
>> --
>> By Luke Harvey;
>>
>> LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00
>>
>
>
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>



-- 
By Luke Harvey;

LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00


Re: RiakCS invalid module: nofile

2013-07-19 Thread Kelly McLaughlin
Dimitri,

It looks like you do not have Riak properly configured for use with RiakCS.
Specifically, it looks like the add_paths setting is missing or incorrect.
You can find more info on the specifics of configuring Riak for RiakCS
here:
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/.
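For reference, the relevant fragment of /etc/riak/app.config looks roughly like the following (a sketch: the ebin path is an assumption and depends on where, and which version of, Riak CS is installed):

```erlang
%% In the riak_kv section of /etc/riak/app.config. The path below is a
%% placeholder; point it at the ebin directory of your Riak CS install.
{riak_kv, [
    %% ... existing riak_kv settings ...
    {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-X.Y.Z/ebin"]}
]}
```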
Hope that helps.

Kelly


On Fri, Jul 19, 2013 at 11:10 AM, Dimitri Aivaliotis wrote:

> Hi,
>
> I'm kicking the tires on a RiakCS-on-SmartOS install, and noticed the
> following message in riak-cs's console.log at cloud storage
> calculation time:
>
> 2013-07-19 06:00:00.936 UTC [error] <0.32509.27> gen_fsm
> riak_cs_storage_d in state calculating terminated with reason:
> {json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
> named in PhaseSpec function:\n must be a valid module name (failed to
> load riak_cs_storage: nofile)">>}}}
> 2013-07-19 06:00:00.938 UTC [error] <0.32509.27> CRASH REPORT Process
> riak_cs_storage_d with 1 neighbours exited with reason:
> {json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
> named in PhaseSpec function:\n must be a valid module name (failed to
> load riak_cs_storage: nofile)">>}}} in gen_fsm:terminate/7 line 611
> 2013-07-19 06:00:00.940 UTC [error] <0.125.0> Supervisor riak_cs_sup
> had child riak_cs_storage_d started with
> riak_cs_storage_d:start_link() at <0.32509.27> exit with reason
> {json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
> named in PhaseSpec function:\n must be a valid module name (failed to
> load riak_cs_storage: nofile)">>}}} in context child_terminated
>
> Predictably, the storage statistics are empty.
>
>
> I also get the same error when doing an 'ls' with s3cmd:
>
> 2013-07-19 10:42:01.848 UTC [error] <0.24146.40> gen_fsm <0.24146.40>
> in state waiting_map_reduce terminated with reason: <<"Phase
> {xform_map,0}: invalid module named in PhaseSpec function:\n must be a
> valid module name (failed to load riak_cs_utils: nofile)">>
> 2013-07-19 10:42:01.854 UTC [error] <0.24146.40> CRASH REPORT Process
> <0.24146.40> with 1 neighbours exited with reason: <<"Phase
> {xform_map,0}: invalid module named in PhaseSpec function:\n must be a
> valid module name (failed to load riak_cs_utils: nofile)">> in
> gen_fsm:terminate/7 line 611
>
>
> I looked through the riak-cs packages, but found no 'nofile.beam' - is
> there supposed to be one?
>
> Thanks for any help.
>
> - Dimitri
>
>


Re: Querying 2i using erlang native api

2013-07-19 Thread Sean Cribbs
Aha, that clears things up. I'd be careful about that (the effects of
post-commit hooks will be invisible to your application) but you can use
these functions in riak_client:

-export([get_index/4,get_index/3]).
-export([stream_get_index/4,stream_get_index/3]).



On Fri, Jul 19, 2013 at 3:48 PM, luigi max  wrote:

> What I am trying to do is access 2i information from a post commit hook
> written in erlang. What is the best way to do it from there?
>
>
> On Fri, Jul 19, 2013 at 1:32 PM, Sean Cribbs  wrote:
>
>> You should avoid using the internal client (which isn't a client,
>> really), except for debugging. Instead, use
>> https://github.com/basho/riak-erlang-client or any of our other
>> language-specific clients. See also
>> http://docs.basho.com/riak/latest/references/Client-Libraries/
>>
>>
>> On Fri, Jul 19, 2013 at 3:21 PM, luigi max  wrote:
>>
>>>  I have been looking around the Riak docs and as many other places that
>>> I can think of and I can't seem to find how to query 2i with the Riak
>>> internal erlang client.
>>>
>>> I can do a query to 2i with the http interface with:
>>>
>>> /buckets/TEST/index/pos_int/1/15
>>>
>>> and it returns
>>>
>>> {"keys":["set2i"]}
>>>
>>> I can create an entry with the following code (loaded into riak):
>>>
>>> Robj =  riak_object:new(<<"TEST">>, <<"set2i">>, void, "application/json"),
>>> Lst = [{"pos_int", 5}],
>>> Meta = dict:store(<<"index">>,Lst, dict:new()),
>>> I2obj = riak_object:update_metadata(Robj, Meta)
>>> {ok,C} = riak:local_client().
>>> C:put(I2obj).
>>>
>>> This works nicely, but the problem I have is with trying to figure out
>>> how to do the same query using the native api for riak. The documentation
>>> for anything to do with the riak internal client is effectively non
>>> existent.
>>>
>>> What all I need:
>>>
>>>- information on how to do a integer range query
>>>- information on my other 2i query options, in case I need it
>>>
>>> Any help is much appreciated.
>>>
>>> --
>>> By Luke Harvey;
>>>
>>> LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00
>>>
>>>
>>>
>>
>>
>> --
>> Sean Cribbs 
>> Software Engineer
>> Basho Technologies, Inc.
>> http://basho.com/
>>
>
>
>
> --
> By Luke Harvey;
>
> LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00
>



-- 
Sean Cribbs 
Software Engineer
Basho Technologies, Inc.
http://basho.com/


Querying 2i using erlang native api

2013-07-19 Thread luigi max
I have been looking around the Riak docs and as many other places as I can
think of, and I can't seem to find how to query 2i with the Riak internal
Erlang client.

I can do a query to 2i with the http interface with:

/buckets/TEST/index/pos_int/1/15

and it returns

{"keys":["set2i"]}

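For completeness, the HTTP request above as a curl invocation (a sketch; `127.0.0.1:8098` is a placeholder for the node's HTTP listener):

```shell
# 2i integer range query over HTTP: keys in bucket TEST whose pos_int
# index value falls in the range [1, 15].
URL="http://127.0.0.1:8098/buckets/TEST/index/pos_int/1/15"
echo "$URL"
# curl -s "$URL"   # run against a live node; returns a JSON "keys" list
```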
I can create an entry with the following code (loaded into riak):

Robj =  riak_object:new(<<"TEST">>, <<"set2i">>, void, "application/json"),
Lst = [{"pos_int", 5}],
Meta = dict:store(<<"index">>,Lst, dict:new()),
I2obj = riak_object:update_metadata(Robj, Meta)
{ok,C} = riak:local_client().
C:put(I2obj).

This works nicely, but the problem I have is with trying to figure out how
to do the same query using the native API for Riak. The documentation for
anything to do with the Riak internal client is effectively nonexistent.

What I need:

   - information on how to do a integer range query
   - information on my other 2i query options, in case I need it

Any help is much appreciated.

-- 
By Luke Harvey;

LinkedIn Profile:  http://www.linkedin.com/in/lukeharvey00


Re: riak 1.4.0 upgrade failed

2013-07-19 Thread kzhang
Thanks!

install went through. 
# sudo rpm -Uvh riak-1.4.0-1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:riak                   warning: /etc/riak/app.config created as /etc/riak/app.config.rpmnew
warning: /etc/riak/vm.args created as /etc/riak/vm.args.rpmnew
                            ########################################### [100%]

but failed on start.
# riak start
riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.



# riak console
config is OK
Exec: /usr/lib64/riak/erts-5.9.1/bin/erlexec -boot /usr/lib64/riak/releases/1.4.0/riak -config /etc/riak/app.config -pa /usr/lib64/riak/lib/basho-patches -args_file /etc/riak/vm.args -- console
Root: /usr/lib64/riak
{error_logger,{{2013,7,19},{12,22,35}},"Protocol: ~p: register error:
~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}},[{inet_tcp_dist,listen,1,[{file,"inet_tcp_dist.erl"},{line,70}]},{net_kernel,start_protos,4,[{file,"net_kernel.erl"},{line,1314}]},{net_kernel,start_protos,3,[{file,"net_kernel.erl"},{line,1307}]},{net_kernel,init_node,2,[{file,"net_kernel.erl"},{line,1197}]},{net_kernel,init,1,[{file,"net_kernel.erl"},{line,357}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}]}
{error_logger,{{2013,7,19},{12,22,35}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.204>,<0.17.0>]},{dictionary,[{longnames,true}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,503}],[]]}
{error_logger,{{2013,7,19},{12,22,35}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[['riak@10.24.16.39',longnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
{error_logger,{{2013,7,19},{12,22,35}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
{error_logger,{{2013,7,19},{12,22,35}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
{"Kernel pid
terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

Crash dump was written to: /var/log/riak/erl_crash.dump
Kernel pid terminated (application_controller)
({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
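The {error,duplicate_name} in the register error above means a node named riak is already registered with epmd on this host, so the new node cannot take the name (this matches the leftover-epmd diagnosis elsewhere in this thread). One way to confirm from a throwaway Erlang shell, using the standard OTP function net_adm:names/0, which returns {ok, [{Name, Port}]} or {error, address} when no epmd is running:

```erlang
%% Run inside any local Erlang shell (e.g. started with: erl).
%% If "riak" shows up here while `riak ping` says the node is down,
%% a leftover riak/epmd process survived the upgrade and must be
%% killed before `riak start` can succeed.
case net_adm:names() of
    {ok, Names} ->
        io:format("registered with epmd: ~p~n", [Names]);
    {error, address} ->
        io:format("no epmd running on this host~n")
end.
```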







--
View this message in context: 
http://riak-users.197444.n3.nabble.com/riak-1-4-0-upgrade-failed-tp4028429p4028441.html
Sent from the Riak Users mailing list archive at Nabble.com.



Re: Riak-CS Question

2013-07-19 Thread Christian Dahlqvist
Hi Vahric,

The reason you are getting errors related to Bitcask and indexing is that you 
have only configured the required multi backend on one of the Riak nodes. All 
the Riak nodes need to have the same backend configuration, as data will be 
replicated across the cluster.

Best regards,

Christian



On 19 Jul 2013, at 15:32, Vahric Muhtaryan  wrote:

> attached, thanks
> VM
> 
> 
> On Fri, Jul 19, 2013 at 4:00 PM, Christian Dahlqvist  
> wrote:
> Hi Vahric,
> 
> Please can you send me the logs and configuration files from Riak, Riak-CS 
> and Stanchion from all nodes in the cluster?
> 
> Best regards,
> 
> Christian
> 
> 
> 
> 
> On 18 Jul 2013, at 21:45, Vahric Muhtaryan  wrote:
> 
>> Hello All,
>> 
>> i got such error
>> 
>> [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning] 
>> <0.121.0>@stanchion_utils:email_available:591 Error occurred trying to check 
>> if the address <<"vah...@doruk.net.tr">> has been registered. Reason: 
>> <<"{error,{indexes_not_supported,riak_kv_bitcask_backend}}">>
>> 
>> but my config should be okay
>> 
>> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
>> {storage_backend, riak_cs_kv_multi_backend},
>> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
>> {multi_backend_default, be_default},
>> {multi_backend, [
>>  {be_default, riak_kv_eleveldb_backend, [
>>{max_open_files, 50},
>>  {data_root, "/var/lib/riak/leveldb"}
>>  ]},
>>{be_blocks, riak_kv_bitcask_backend, [
>>  {data_root, "/var/lib/riak/bitcask"}
>>  ]}
>> ]},
>> 
>> (r...@xxx.xxx.xxx.xxx)3> code:which(riak_cs_kv_multi_backend).   
>>
>> "/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin/riak_cs_kv_multi_backend.beam"
>> 
>> Any idea why i can not create admin user when i try to create ? My config 
>> said default db backend is level not bits cask, could be a bug ? Any body 
>> know ? 
>> 
>> Regards
>> VM ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 



Re: Yokozuna kv write timeouts on 1.4 (yz-merge-1.4.0)

2013-07-19 Thread Dave Martorana
Looks great so far. Importing lots of data now; I'll let you know if I run
into anything else. Thanks!


On Thu, Jul 18, 2013 at 5:06 PM, Ryan Zezeski  wrote:

> Okay.  Yokozuna has been targeted to Riak 1.4.0.  Please notice the
> integration branch names changed to rz-yz-merge-1.4.0 (notice the addition
> of rz- prefix and different version).
>
>
> https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md#install-from-github
>
> Make sure to do a fresh checkout to avoid any lingering old dependencies.
>  Let me know if you run into more issues.
>
> -Z
>
>
> On Thu, Jul 18, 2013 at 9:33 AM, Ryan Zezeski  wrote:
>
>> Dave,
>>
>> I'm currently in the process re-targeting Yokozuna to 1.4.0 for the 0.8.0
>> release.  I'll ping this thread when the transition is complete.
>>
>> -Z
>>
>>
>> On Wed, Jul 17, 2013 at 8:53 PM, Eric Redmond  wrote:
>>
>>> Dave,
>>>
>>> Your initial line was correct. Yokozuna is not yet compatible with 1.4.
>>>
>>> Eric
>>>
>>> On Jul 15, 2013, at 1:00 PM, Dave Martorana  wrote:
>>>
>>> Hi everyone. First post, if I leave anything out just let me know.
>>>
>>> I have been using Vagrant to test Yokozuna with 1.3.0 (the official
>>> 0.7.0 "release") and it runs swimmingly. When 1.4 was released and someone
>>> pointed me to the YZ integration branch, I decided to give it a go.
>>>
>>> I realize that YZ probably doesn’t support 1.4 yet, but here are my
>>> experiences.
>>>
>>> - Installs fine
>>> - Using default stagedevrel with 5 node setup
>>> - Without yz enabled in app.config, kv accepts writes and reads
>>> - With yz enabled on dev1 and nowhere else, kv accepts writes and reads,
>>> creates yz index, associates index with bucket, does not index content
>>> - With yz enabled on 4/5 nodes, kv stops accepting writes (timeout)
>>>
>>> Ex:
>>>
>>> (env)➜  curl -v -H 'content-type: text/plain' -XPUT '
>>> http://localhost:10018/buckets/players/keys/name' -d "Ryan Zezeski"
>>> * Adding handle: conn: 0x7f995a804000
>>> * Adding handle: send: 0
>>> * Adding handle: recv: 0
>>> * Curl_addHandleToPipeline: length: 1
>>> * - Conn 0 (0x7f995a804000) send_pipe: 1, recv_pipe: 0
>>> * About to connect() to localhost port 10018 (#0)
>>> *   Trying 127.0.0.1...
>>> * Connected to localhost (127.0.0.1) port 10018 (#0)
>>> > PUT /buckets/players/keys/name HTTP/1.1
>>> > User-Agent: curl/7.30.0
>>> > Host: localhost:10018
>>> > Accept: */*
>>> > content-type: text/plain
>>> > Content-Length: 12
>>> >
>>> * upload completely sent off: 12 out of 12 bytes
>>> < HTTP/1.1 503 Service Unavailable
>>> < Vary: Accept-Encoding
>>> * Server MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue) is
>>> not blacklisted
>>> < Server: MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue)
>>> < Date: Mon, 15 Jul 2013 19:54:50 GMT
>>> < Content-Type: text/plain
>>> < Content-Length: 18
>>> <
>>> request timed out
>>> * Connection #0 to host localhost left intact
>>>
>>> Here is my Vagrant file:
>>>
>>> https://gist.github.com/themartorana/460a52bb3f840010ecde
>>>
>>> and build script for the server:
>>>
>>> https://gist.github.com/themartorana/e2e0126c01b8ef01cc53
>>>
>>> Hope this helps.
>>>
>>> Dave
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>


Yokozuna in Riak Control

2013-07-19 Thread Dave Martorana
Hey everyone,

A feature request, if I may: RAM monitoring in Riak Control currently
shows Riak RAM usage and all-other usage. I would love it if future,
integrated Yokozuna builds also showed Solr/Lucene RAM usage.

Cheers!

Dave
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RiakCS invalid module: nofile

2013-07-19 Thread Dimitri Aivaliotis
Hi,

I'm kicking the tires on a RiakCS-on-SmartOS install, and noticed the
following message in riak-cs's console.log at cloud storage
calculation time:

2013-07-19 06:00:00.936 UTC [error] <0.32509.27> gen_fsm
riak_cs_storage_d in state calculating terminated with reason:
{json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
named in PhaseSpec function:\n must be a valid module name (failed to
load riak_cs_storage: nofile)">>}}}
2013-07-19 06:00:00.938 UTC [error] <0.32509.27> CRASH REPORT Process
riak_cs_storage_d with 1 neighbours exited with reason:
{json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
named in PhaseSpec function:\n must be a valid module name (failed to
load riak_cs_storage: nofile)">>}}} in gen_fsm:terminate/7 line 611
2013-07-19 06:00:00.940 UTC [error] <0.125.0> Supervisor riak_cs_sup
had child riak_cs_storage_d started with
riak_cs_storage_d:start_link() at <0.32509.27> exit with reason
{json_encode,{bad_term,{error,<<"Phase {xform_map,0}: invalid module
named in PhaseSpec function:\n must be a valid module name (failed to
load riak_cs_storage: nofile)">>}}} in context child_terminated

Predictably, the storage statistics are empty.


I also get the same error when doing an 'ls' with s3cmd:

2013-07-19 10:42:01.848 UTC [error] <0.24146.40> gen_fsm <0.24146.40>
in state waiting_map_reduce terminated with reason: <<"Phase
{xform_map,0}: invalid module named in PhaseSpec function:\n must be a
valid module name (failed to load riak_cs_utils: nofile)">>
2013-07-19 10:42:01.854 UTC [error] <0.24146.40> CRASH REPORT Process
<0.24146.40> with 1 neighbours exited with reason: <<"Phase
{xform_map,0}: invalid module named in PhaseSpec function:\n must be a
valid module name (failed to load riak_cs_utils: nofile)">> in
gen_fsm:terminate/7 line 611


I looked through the riak-cs packages, but found no 'nofile.beam' - is
there supposed to be one?

Thanks for any help.

- Dimitri
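For what it's worth, there is no nofile.beam: "(failed to load riak_cs_storage: nofile)" is the Erlang code loader reporting that the module could not be found on the Riak node running the MapReduce phase. A sketch of the usual fix, with the caveat that the ebin path below is the RHEL layout quoted elsewhere in this digest and is an assumption; adjust it for the SmartOS package layout:

```erlang
%% In the *Riak* node's app.config (not Riak CS's), under the riak_kv
%% section, so the KV MapReduce workers can load riak_cs_storage and
%% riak_cs_utils. Path is an assumption; adjust for your install.
{riak_kv, [
    {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]}
    %% ... existing riak_kv settings ...
]}
```

All Riak nodes that can run the storage-calculation MapReduce need this path, then a restart of each node.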



Re: Riak-CS Question

2013-07-19 Thread Vahric Muhtaryan
Sorry, I see: the issue is because of replication. Right.
On 19 Jul 2013 at 19:20, "Vahric Muhtaryan"  wrote:

> Okay, I will configure it and check. Because I am sending requests only to
> 245, I assumed there was no need to configure the others. Why are the others
> involved if I'm sending requests only to 245?
> On 19 Jul 2013 at 19:07, "Christian Dahlqvist"  wrote:
>
>> Hi Vahric,
>>
>> The reason you are getting errors related to bit cask and indexing is
>> that you only have configured the required multi backend on one of the Riak
>> nodes. All the Riak nodes need to have the same backend configuration, as
>> data will be replicated across the cluster.
>>
>> Best regards,
>>
>> Christian
>>
>>
>>
>> On 19 Jul 2013, at 15:32, Vahric Muhtaryan  wrote:
>>
>> attached, thanks
>> VM
>>
>>
>> On Fri, Jul 19, 2013 at 4:00 PM, Christian Dahlqvist > > wrote:
>>
>>> Hi Vahric,
>>>
>>> Please can you send me the logs and configuration files from Riak,
>>> Riak-CS and Stanchion from all nodes in the cluster?
>>>
>>> Best regards,
>>>
>>> Christian
>>>
>>>
>>>
>>>
>>> On 18 Jul 2013, at 21:45, Vahric Muhtaryan  wrote:
>>>
>>> Hello All,
>>>
>>> i got such error
>>>
>>> [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning]
>>> <0.121.0>@stanchion_utils:email_available:591 Error occurred trying to
>>> check if the address <<"vah...@doruk.net.tr">> has been registered.
>>> Reason: <<"{error,{indexes_not_supported,riak_kv_bitcask_backend}}">>
>>>
>>> but my config should be okay
>>>
>>> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
>>> {storage_backend, riak_cs_kv_multi_backend},
>>> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
>>> {multi_backend_default, be_default},
>>> {multi_backend, [
>>>  {be_default, riak_kv_eleveldb_backend, [
>>>{max_open_files, 50},
>>>  {data_root, "/var/lib/riak/leveldb"}
>>>  ]},
>>>{be_blocks, riak_kv_bitcask_backend, [
>>>  {data_root, "/var/lib/riak/bitcask"}
>>>  ]}
>>> ]},
>>>
>>> (r...@xxx.xxx.xxx.xxx)3> code:which(riak_cs_kv_multi_backend).
>>>
>>> "/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin/riak_cs_kv_multi_backend.beam"
>>>
>>> Any idea why i can not create admin user when i try to create ? My
>>> config said default db backend is level not bits cask, could be a bug ? Any
>>> body know ?
>>>
>>> Regards
>>> VM ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>>
>> 
>>
>>
>>


riak 1.4.0 upgrade failed

2013-07-19 Thread kzhang
We were on 1.3.0, I was able to upgrade it to 1.3.2 (sudo rpm -Uvh
riak-1.3.2-2.el6.x86_64.rpm). After that, I was trying to upgrade it to
1.4.0 (sudo rpm -Uvh riak-1.4.0-1.el6.x86_64.rpm). I got:
error: %pre(riak-1.4.0-1.el6.x86_64) scriptlet failed, exit status 8
error:   install: %pre scriptlet failed (2), skipping riak-1.4.0-1.el6

Here is the detailed info:

# sudo rpm -Uvhv riak-1.4.0-1.el6.x86_64.rpm
D: == riak-1.4.0-1.el6.x86_64.rpm
D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
D: loading keyring from rpmdb
D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
D: opening  db index   /var/lib/rpm/Packages rdonly mode=0x0
D: locked   db index   /var/lib/rpm/Packages
D: opening  db index   /var/lib/rpm/Name rdonly mode=0x0
D:  read h# 457 Header sanity check: OK
D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
D: Using legacy gpg-pubkey(s) from rpmdb
D: Expected size: 25326908 = lead(96)+sigs(180)+pad(4)+data(25326628)
D:   Actual size: 25326908
D: riak-1.4.0-1.el6.x86_64.rpm: Header SHA1 digest: OK
(f2fc385c419ea7010a2691e6c7e7e8ad51a045a6)
D: == relocations
D:  read h# 496 Header SHA1 digest: OK
(7b0796f33bb3381f63e1d28c779198e8313729d0)
D:  added binary package [0]
D: found 0 source and 1 binary packages
D: == +++ riak-1.4.0-1.el6 x86_64/linux 0x2
D: opening  db index   /var/lib/rpm/Basenames rdonly mode=0x0
D:  read h#  13 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: /bin/bash YES (db files)
D:  Requires: /bin/sh   YES (db files)
D:  Requires: /bin/sh   YES (cached)
D:  Requires: /bin/sh   YES (cached)
D:  read h# 155 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: /usr/bin/env  YES (db files)
D:  Requires: config(riak) = 1.4.0-1.el6YES (added
provide)
D: opening  db index   /var/lib/rpm/Providename rdonly mode=0x0
D:  read h#  11 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libc.so.6()(64bit)YES (db
provides)
D:  Requires: libc.so.6(GLIBC_2.2.5)(64bit) YES (db
provides)
D:  Requires: libc.so.6(GLIBC_2.3)(64bit)   YES (db
provides)
D:  Requires: libc.so.6(GLIBC_2.3.2)(64bit) YES (db
provides)
D:  Requires: libc.so.6(GLIBC_2.3.4)(64bit) YES (db
provides)
D:  read h# 157 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libcrypto.so.10()(64bit)  YES (db
provides)
D:  Requires: libdl.so.2()(64bit)   YES (db
provides)
D:  Requires: libdl.so.2(GLIBC_2.2.5)(64bit)YES (db
provides)
D:  read h#   1 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libgcc_s.so.1()(64bit)YES (db
provides)
D:  Requires: libgcc_s.so.1(GCC_3.0)(64bit) YES (db
provides)
D:  Requires: libm.so.6()(64bit)YES (db
provides)
D:  Requires: libm.so.6(GLIBC_2.2.5)(64bit) YES (db
provides)
D:  read h#  12 Header V3 RSA/SHA256 Signature, key ID c105b9de: OK
D:  Requires: libncurses.so.5()(64bit)  YES (db
provides)
D:  Requires: libpthread.so.0()(64bit)  YES (db
provides)
D:  Requires: libpthread.so.0(GLIBC_2.2.5)(64bit)   YES (db
provides)
D:  Requires: libpthread.so.0(GLIBC_2.3.2)(64bit)   YES (db
provides)
D:  Requires: librt.so.1()(64bit)   YES (db
provides)
D:  Requires: librt.so.1(GLIBC_2.2.5)(64bit)YES (db
provides)
D:  Requires: libssl.so.10()(64bit) YES (db
provides)
D:  read h#  27 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
D:  Requires: libstdc++.so.6()(64bit)   YES (db
provides)
D:  Requires: libstdc++.so.6(CXXABI_1.3)(64bit) YES (db
provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4)(64bit)YES (db
provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4.11)(64bit) YES (db
provides)
D:  Requires: libstdc++.so.6(GLIBCXX_3.4.9)(64bit)  YES (db
provides)
D:  Requires: libtinfo.so.5()(64bit)YES (db
provides)
D:  Requires: libutil.so.1()(64bit) YES (db
provides)
D:  Requires: libutil.so.1(GLIBC_2.2.5)(64bit)  YES (db
provides)
D:  Requires: rpmlib(CompressedFileNames) <= 3.0.4-1YES (rpmlib
provides)
D:  Requires: rpmlib(FileDigests) <= 4.6.0-1YES (rpmlib
provides)
D:  Requires: rpmlib(PayloadFilesHavePrefix) <= 4.0-1   YES (rpmlib
provides)
D:  Requires: rtld(GNU_HASH)YES (db
provides)
D:  Requires: rpmlib(PayloadIsXz) <= 5.2-1  YES (rpmlib
prov

Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
Only after restarting the Riak instance on this node were the awaiting
handoffs processed... this is weird :(

On Fri, 19 Jul 2013 15:55:43 +0200
Simon Effenberg  wrote:

> It looked good for some hours but now again we got 
> 
> 2013-07-19 13:27:07.800 UTC [error] 
> <0.18747.29>@riak_core_handoff_sender:start_fold:216 hinted_handoff transfer 
> of riak_kv_vnode from 'riak@10.46.109.207' 
> 1136089163393944065322395631681798128560666312704 to 'riak@10.47.109.202' 
> 1136089163393944065322395631681798128560666312704 failed because of TCP recv 
> timeout
> 
> and on the destination host I see:
> 
> 
> 2013-07-19 13:25:04.455 UTC [error] 
> <0.28632.25>@riak_core_handoff_receiver:handle_info:80 Handoff receiver for 
> partition 1136089163393944065322395631681798128560666312704 exited abnormally 
> after processing 2 objects: 
> {timeout,{gen_fsm,sync_send_all_state_event,[<0.1107.0>,{handoff_data,<<141,146,205,110,211,64,20,133,237,4,211,132,2,170,80,69,37,150,22,203,186,216,249,105,210,172,42,149,95,137,162,2,5,177,129,232,120,102,156,153,137,61,78,237,113,72,10,172,186,101,195,51,176,224,1,120,12,158,130,55,97,198,173,68,83,177,192,35,223,197,55,231,156,185,158,235,27,155,36,87,115,86,148,208,34,87,227,146,145,130,233,242,206,173,46,153,204,59,60,18,125,61,91,208,123,223,188,51,190,70,157,86,49,206,99,201,136,206,28,199,249,167,209,110,172,122,83,67,92,222,164,78,187,24,27,135,102,74,243,54,117,174,81,65,52,60,108,152,213,194,17,66,190,33,175,60,220,189,204,108,78,195,150,117,123,198,205,139,168,64,47,103,12,26,12,11,83,31,96,134,20,128,128,170,245,91,86,186,254,46,120,37,48,13,222,30,99,130,1,158,152,213,67,132,199,168,240,26,7,72,12,123,134,23,198,25,154,247,33,30,225,16,18,39,56,56,63,210,173,139,205,241,132,162,108,33,175,226,205,139,248,231,40,117,112,152,83,145,8,70,121,51,54,134,15,177,211,252,252,59,118,218,223,127,94,114,93,183,174,53,194,81,148,76,227,13,142,77,43,1,134,82,90,254,227,147,111,238,212,31,69,219,126,44,168,63,242,211,124,206,210,101,86,149,130,116,250,251,147,12,34,221,33,121,230,111,251,101,189,207,243,100,63,143,89,161,4,83,59,148,25,30,151,6,79,39,162,43,62,46,79,213,105,181,103,181,150,173,140,197,64,208,58,33,234,134,123,195,97,212,11,13,210,70,23,117,7,189,78,103,216,31,12,118,67,211,6,169,69,187,211,98,113,50,226,18,75,213,77,184,255,229,252,115,120,195,246,220,58,186,251,244,236,101,182,117,159,55,224,42,207,193,215,247,191,110,203,191,67,118,255,127,200,114,229,122,169,227,145,148,65,153,32,93,84,76,74,243,19,85,102,8,137,80,140,254,1>>},6]}}
> 
> so both sides show a timeout. How can I track this down?
> 
> - could this happen when many read repairs occur (through AAE)?
> 
> Also, our fsm PUT time is going up, but not really the GET time.. is this 
> the normal behavior under load/read-repair situations?
> 
> Also, is this more of a problem with eLevelDB, or would it be the same for 
> Bitcask?
> 
> Cheers
> Simon
> 
> 
> On Fri, 19 Jul 2013 10:25:05 +0200
> Simon Effenberg  wrote:
> 
> > once again with the list included... argh
> > 
> > Hey Christian,
> > 
> > So it could also be an Erlang limit? I found out why my riak instances
> > all have different process limits: my mcollectived daemons have
> > different limits, and when I triggered a Puppet run through
> > mcollective, the riak processes inherited those limits as well.
> > 
> > Also, in the crash log I see:
> > 
> > exception exit: {{system_limit,[{erlang,spawn
> > 
> > for the too-many-processes case. So it doesn't look like an Erlang limit,
> > does it? But I will keep this +P in mind!! Thanks a lot.
> > 
> > The zdbbl is now at 100MB.
> > 
> > Cheers
> > Simon
> > 
> > On Fri, 19 Jul 2013 08:49:50 +0100
> > Christian Dahlqvist  wrote:
> > 
> > > Hi Simon,
> > > 
> > > If you have objects that can be as big as 15MB, it is probably wise to 
> > > increase the size of +zdbbl in order to avoid filling up buffers when 
> > > these large objects need to be transferred between nodes. The 
> > > appropriate level depends a lot on the size distribution of your data 
> > > and your access patterns, so I would recommend benchmarking to find a 
> > > suitable value.
> > > 
> > > Erlang also has a default process limit of 32768 (at least in R15B01), 
> > > which may be what you are hitting. You can override this to 256k by 
> > > adding the following line to the vm.args file:
> > > 
> > > +P 262144
> > > 
> > > Best regards,
> > > 
> > > Christian
> > > 
> > > 
> > > 
> > > On 19 Jul 2013, at 08:24, Simon Effenberg  
> > > wrote:
> > > 
> > > > The +zdbbl parameter helped a lot, but the hinted handoffs didn't
> > > > disappear completely. I have no more busy dist port errors in the
> > > > _console.log_ (why aren't they in the error.log? It looks to me like a
> > > > serious problem.. at least our cluster was not behaving that
> > > > nicely).
> > > > 
> > > > I'll try to increase the buffer size to a higher value because my
> > > > suspicion is that the objects sent from one node to another are also
> > > > stored 

Re: Riak-CS Question

2013-07-19 Thread Vahric Muhtaryan
My mail is awaiting moderator approval because of its size.
FYI,
VM
On 19 Jul 2013 at 16:00, "Christian Dahlqvist"  wrote:

> Hi Vahric,
>
> Please can you send me the logs and configuration files from Riak, Riak-CS
> and Stanchion from all nodes in the cluster?
>
> Best regards,
>
> Christian
>
>
>
>
> On 18 Jul 2013, at 21:45, Vahric Muhtaryan  wrote:
>
> Hello All,
>
> i got such error
>
> [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning]
> <0.121.0>@stanchion_utils:email_available:591 Error occurred trying to
> check if the address <<"vah...@doruk.net.tr">> has been registered.
> Reason: <<"{error,{indexes_not_supported,riak_kv_bitcask_backend}}">>
>
> but my config should be okay
>
> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
> {storage_backend, riak_cs_kv_multi_backend},
> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
> {multi_backend_default, be_default},
> {multi_backend, [
>  {be_default, riak_kv_eleveldb_backend, [
>{max_open_files, 50},
>  {data_root, "/var/lib/riak/leveldb"}
>  ]},
>{be_blocks, riak_kv_bitcask_backend, [
>  {data_root, "/var/lib/riak/bitcask"}
>  ]}
> ]},
>
> (r...@xxx.xxx.xxx.xxx)3> code:which(riak_cs_kv_multi_backend).
>
> "/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin/riak_cs_kv_multi_backend.beam"
>
> Any idea why i can not create admin user when i try to create ? My config
> said default db backend is level not bits cask, could be a bug ? Any body
> know ?
>
> Regards
> VM ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>


Re: Riak-CS Question

2013-07-19 Thread Vahric Muhtaryan
Okay, I will configure it and check. Because I am sending requests only to
245, I assumed there was no need to configure the others. Why are the others
involved if I'm sending requests only to 245?
On 19 Jul 2013 at 19:07, "Christian Dahlqvist"  wrote:

> Hi Vahric,
>
> The reason you are getting errors related to bit cask and indexing is that
> you only have configured the required multi backend on one of the Riak
> nodes. All the Riak nodes need to have the same backend configuration, as
> data will be replicated across the cluster.
>
> Best regards,
>
> Christian
>
>
>
> On 19 Jul 2013, at 15:32, Vahric Muhtaryan  wrote:
>
> attached, thanks
> VM
>
>
> On Fri, Jul 19, 2013 at 4:00 PM, Christian Dahlqvist 
> wrote:
>
>> Hi Vahric,
>>
>> Please can you send me the logs and configuration files from Riak,
>> Riak-CS and Stanchion from all nodes in the cluster?
>>
>> Best regards,
>>
>> Christian
>>
>>
>>
>>
>> On 18 Jul 2013, at 21:45, Vahric Muhtaryan  wrote:
>>
>> Hello All,
>>
>> i got such error
>>
>> [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning]
>> <0.121.0>@stanchion_utils:email_available:591 Error occurred trying to
>> check if the address <<"vah...@doruk.net.tr">> has been registered.
>> Reason: <<"{error,{indexes_not_supported,riak_kv_bitcask_backend}}">>
>>
>> but my config should be okay
>>
>> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
>> {storage_backend, riak_cs_kv_multi_backend},
>> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
>> {multi_backend_default, be_default},
>> {multi_backend, [
>>  {be_default, riak_kv_eleveldb_backend, [
>>{max_open_files, 50},
>>  {data_root, "/var/lib/riak/leveldb"}
>>  ]},
>>{be_blocks, riak_kv_bitcask_backend, [
>>  {data_root, "/var/lib/riak/bitcask"}
>>  ]}
>> ]},
>>
>> (r...@xxx.xxx.xxx.xxx)3> code:which(riak_cs_kv_multi_backend).
>>
>> "/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin/riak_cs_kv_multi_backend.beam"
>>
>> Any idea why i can not create admin user when i try to create ? My config
>> said default db backend is level not bits cask, could be a bug ? Any body
>> know ?
>>
>> Regards
>> VM ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>>
> 
>
>
>


Re: riak 1.4.0 upgrade failed

2013-07-19 Thread Hector Castro
Hey Kathleen,

Did you make sure to stop Riak before upgrading as described here? [0]
Looks like the Riak service may still be running.

--
Hector

[0] http://docs.basho.com/riak/latest/cookbooks/Rolling-Upgrades/#RHEL-CentOS

On Fri, Jul 19, 2013 at 10:10 AM, kzhang  wrote:
> We were on 1.3.0, I was able to upgrade it to 1.3.2 (sudo rpm -Uvh
> riak-1.3.2-2.el6.x86_64.rpm). After that, I was trying to upgrade it to
> 1.4.0 (sudo rpm -Uvh riak-1.4.0-1.el6.x86_64.rpm). I got:
> error: %pre(riak-1.4.0-1.el6.x86_64) scriptlet failed, exit status 8
> error:   install: %pre scriptlet failed (2), skipping riak-1.4.0-1.el6
>
> Here is the detailed info:
>
> # sudo rpm -Uvhv riak-1.4.0-1.el6.x86_64.rpm
> D: == riak-1.4.0-1.el6.x86_64.rpm
> D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
> D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
> D: loading keyring from rpmdb
> D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
> D: opening  db index   /var/lib/rpm/Packages rdonly mode=0x0
> D: locked   db index   /var/lib/rpm/Packages
> D: opening  db index   /var/lib/rpm/Name rdonly mode=0x0
> D:  read h# 457 Header sanity check: OK
> D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
> D: Using legacy gpg-pubkey(s) from rpmdb
> D: Expected size: 25326908 = lead(96)+sigs(180)+pad(4)+data(25326628)
> D:   Actual size: 25326908
> D: riak-1.4.0-1.el6.x86_64.rpm: Header SHA1 digest: OK
> (f2fc385c419ea7010a2691e6c7e7e8ad51a045a6)
> D: == relocations
> D:  read h# 496 Header SHA1 digest: OK
> (7b0796f33bb3381f63e1d28c779198e8313729d0)
> D:  added binary package [0]
> D: found 0 source and 1 binary packages
> D: == +++ riak-1.4.0-1.el6 x86_64/linux 0x2
> D: opening  db index   /var/lib/rpm/Basenames rdonly mode=0x0
> D:  read h#  13 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: /bin/bash YES (db files)
> D:  Requires: /bin/sh   YES (db files)
> D:  Requires: /bin/sh   YES (cached)
> D:  Requires: /bin/sh   YES (cached)
> D:  read h# 155 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: /usr/bin/env  YES (db files)
> D:  Requires: config(riak) = 1.4.0-1.el6YES (added
> provide)
> D: opening  db index   /var/lib/rpm/Providename rdonly mode=0x0
> D:  read h#  11 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: libc.so.6()(64bit)YES (db
> provides)
> D:  Requires: libc.so.6(GLIBC_2.2.5)(64bit) YES (db
> provides)
> D:  Requires: libc.so.6(GLIBC_2.3)(64bit)   YES (db
> provides)
> D:  Requires: libc.so.6(GLIBC_2.3.2)(64bit) YES (db
> provides)
> D:  Requires: libc.so.6(GLIBC_2.3.4)(64bit) YES (db
> provides)
> D:  read h# 157 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: libcrypto.so.10()(64bit)  YES (db
> provides)
> D:  Requires: libdl.so.2()(64bit)   YES (db
> provides)
> D:  Requires: libdl.so.2(GLIBC_2.2.5)(64bit)YES (db
> provides)
> D:  read h#   1 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: libgcc_s.so.1()(64bit)YES (db
> provides)
> D:  Requires: libgcc_s.so.1(GCC_3.0)(64bit) YES (db
> provides)
> D:  Requires: libm.so.6()(64bit)YES (db
> provides)
> D:  Requires: libm.so.6(GLIBC_2.2.5)(64bit) YES (db
> provides)
> D:  read h#  12 Header V3 RSA/SHA256 Signature, key ID c105b9de: OK
> D:  Requires: libncurses.so.5()(64bit)  YES (db
> provides)
> D:  Requires: libpthread.so.0()(64bit)  YES (db
> provides)
> D:  Requires: libpthread.so.0(GLIBC_2.2.5)(64bit)   YES (db
> provides)
> D:  Requires: libpthread.so.0(GLIBC_2.3.2)(64bit)   YES (db
> provides)
> D:  Requires: librt.so.1()(64bit)   YES (db
> provides)
> D:  Requires: librt.so.1(GLIBC_2.2.5)(64bit)YES (db
> provides)
> D:  Requires: libssl.so.10()(64bit) YES (db
> provides)
> D:  read h#  27 Header V3 RSA/SHA1 Signature, key ID c105b9de: OK
> D:  Requires: libstdc++.so.6()(64bit)   YES (db
> provides)
> D:  Requires: libstdc++.so.6(CXXABI_1.3)(64bit) YES (db
> provides)
> D:  Requires: libstdc++.so.6(GLIBCXX_3.4)(64bit)YES (db
> provides)
> D:  Requires: libstdc++.so.6(GLIBCXX_3.4.11)(64bit) YES (db
> provides)
> D:  Requires: libstdc++.so.6(GLIBCXX_3.4.9)(64bit)  YES (db
> provides)
> D:  Requires: libtinfo.so.5()(64bit)YES (db
> provides)
> D:  Requires: libutil.so.1()(64bit) YES (db
> provides)
> D:  Re

CorrugatedIron v1.4.0 released

2013-07-19 Thread Jeremiah Peschka
Get it while it's hot. Features include Riak 1.4 support and... stuff.

Release notes -
https://github.com/DistributedNonsense/CorrugatedIron/blob/v1.4.0/RELEASE_NOTES.md

Blog post - http://www.brentozar.com/archive/2013/07/corrugatediron-1-4/
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop


Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
I'm again getting crash reports about system_limit:

2013-07-19 14:30:58 UTC =CRASH REPORT
  crasher:
initial call: riak_kv_exchange_fsm:init/1
pid: <0.25883.24>
registered_name: []
exception exit: 
{{{system_limit,[{erlang,spawn,[riak_kv_get_put_monitor,spawned,[gets,<0.11717.31>]],[]},{riak_kv_get_put_monitor,get_fsm_spawned,1,[{file,"src/riak_kv_get_put_monitor.erl"},{line,53}]},{riak_kv_get_fsm,init,1,[{file,"src/riak_kv_get_fsm.erl"},{line,135}]},{gen_fsm,init_it,6,[{file,"gen_fsm.erl"},{line,361}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]},{gen_server,call,[<0.1187.0>,{compare,{856348615623575928634971581669697081829647974400,3},#Fun,#Fun},infinity]}},[{gen_fsm,terminate,7,[{file,"gen_fsm.erl"},{line,611}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [riak_kv_entropy_manager,riak_kv_sup,<0.569.0>]
messages: 
[{'DOWN',#Ref<0.0.26.196075>,process,<0.1187.0>,{system_limit,[{erlang,spawn,[riak_kv_get_put_monitor,spawned,[gets,<0.11717.31>]],[]},{riak_kv_get_put_monitor,get_fsm_spawned,1,[{file,"src/riak_kv_get_put_monitor.erl"},{line,53}]},{riak_kv_get_fsm,init,1,[{file,"src/riak_kv_get_fsm.erl"},{line,135}]},{gen_fsm,init_it,6,[{file,"gen_fsm.erl"},{line,361}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}}]
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 1597
stack_size: 24
reductions: 380
  neighbours:

I'm now trying to increase the Erlang process limit, although "system_limit"
always reads like a "system" limit rather than an "erlang" limit?!

These are the limits for the process:

[root@kriak47-9:/var/log/riak]# cat /proc/17313/limits
Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            8388608      unlimited    bytes
Max core file size        0            unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             unlimited    unlimited    processes
Max open files            33                        files
Max locked memory         65536        65536        bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       16382        16382        signals
Max msgqueue size         819200       819200       bytes
Max nice priority         0            0
Max realtime priority     0            0
Max realtime timeout      unlimited    unlimited    us
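The two limits in question can be checked separately: the OS cap is what /proc reports above, while the `system_limit` raised by `erlang:spawn` refers to the Erlang VM's own process table. A small sketch (using the current shell's pid for illustration; in practice you would use the beam.smp pid of the Riak node):

```shell
# OS-level cap, as shown in the /proc listing above (this shell's pid is
# used here; substitute the beam.smp pid of the Riak node in practice).
pid=$$
awk '/^Max processes/ {print "OS limit:", $(NF-2), "/", $(NF-1)}' /proc/"$pid"/limits

# The VM-side limit (default 32768 in R15B01) is separate and is queried
# from inside the node, e.g. via `riak attach`:
#   erlang:system_info(process_limit).
```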



On Fri, 19 Jul 2013 16:08:44 +0200
Simon Effenberg  wrote:

> only after restarting the Riak instance on this node the awaiting
> handoffs where processed.. this is weird :(
> 
> On Fri, 19 Jul 2013 15:55:43 +0200
> Simon Effenberg  wrote:
> 
> > It looked good for some hours but now again we got 
> > 
> > 2013-07-19 13:27:07.800 UTC [error] 
> > <0.18747.29>@riak_core_handoff_sender:start_fold:216 hinted_handoff 
> > transfer of riak_kv_vnode from 'riak@10.46.109.207' 
> > 1136089163393944065322395631681798128560666312704 to 'riak@10.47.109.202' 
> > 1136089163393944065322395631681798128560666312704 failed because of TCP 
> > recv timeout
> > 
> > and on the destination host I see:
> > 
> > 
> > 2013-07-19 13:25:04.455 UTC [error] 
> > <0.28632.25>@riak_core_handoff_receiver:handle_info:80 Handoff receiver for 
> > partition 1136089163393944065322395631681798128560666312704 exited 
> > abnormally after processing 2 objects: 
> > {timeout,{gen_fsm,sync_send_all_state_event,[<0.1107.0>,{handoff_data,<<141,146,205,110,211,64,20,133,237,4,211,132,2,170,80,69,37,150,22,203,186,216,249,105,210,172,42,149,95,137,162,2,5,177,129,232,120,102,156,153,137,61,78,237,113,72,10,172,186,101,195,51,176,224,1,120,12,158,130,55,97,198,173,68,83,177,192,35,223,197,55,231,156,185,158,235,27,155,36,87,115,86,148,208,34,87,227,146,145,130,233,242,206,173,46,153,204,59,60,18,125,61,91,208,123,223,188,51,190,70,157,86,49,206,99,201,136,206,28,199,249,167,209,110,172,122,83,67,92,222,164,78,187,24,27,135,102,74,243,54,117,174,81,65,52,60,108,152,213,194,17,66,190,33,175,60,220,189,204,108,78,195,150,117,123,198,205,139,168,64,47,103,12,26,12,11,83,31,96,134,20,128,128,170,245,91,86,186,254,46,120,37,48,13,222,30,99,130,1,158,152,213,67,132,199,168,240,26,7,72,12,123,134,23,198,25,154,247,33,30,225,16,18,39,56,56,63,210,173,139,205,241,132,162,108,33,175,226,205,139,248,231,40,117,112,152,83,145,8,70,121,51,54,134,15,177,211,252,252,59,118,218,223,127,

Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
It looked good for some hours but now again we got 

2013-07-19 13:27:07.800 UTC [error] 
<0.18747.29>@riak_core_handoff_sender:start_fold:216 hinted_handoff transfer of 
riak_kv_vnode from 'riak@10.46.109.207' 
1136089163393944065322395631681798128560666312704 to 'riak@10.47.109.202' 
1136089163393944065322395631681798128560666312704 failed because of TCP recv 
timeout

and on the destination host I see:


2013-07-19 13:25:04.455 UTC [error] 
<0.28632.25>@riak_core_handoff_receiver:handle_info:80 Handoff receiver for 
partition 1136089163393944065322395631681798128560666312704 exited abnormally 
after processing 2 objects: 
{timeout,{gen_fsm,sync_send_all_state_event,[<0.1107.0>,{handoff_data,<<141,146,205,110,211,64,20,133,237,4,211,132,2,170,80,69,37,150,22,203,186,216,249,105,210,172,42,149,95,137,162,2,5,177,129,232,120,102,156,153,137,61,78,237,113,72,10,172,186,101,195,51,176,224,1,120,12,158,130,55,97,198,173,68,83,177,192,35,223,197,55,231,156,185,158,235,27,155,36,87,115,86,148,208,34,87,227,146,145,130,233,242,206,173,46,153,204,59,60,18,125,61,91,208,123,223,188,51,190,70,157,86,49,206,99,201,136,206,28,199,249,167,209,110,172,122,83,67,92,222,164,78,187,24,27,135,102,74,243,54,117,174,81,65,52,60,108,152,213,194,17,66,190,33,175,60,220,189,204,108,78,195,150,117,123,198,205,139,168,64,47,103,12,26,12,11,83,31,96,134,20,128,128,170,245,91,86,186,254,46,120,37,48,13,222,30,99,130,1,158,152,213,67,132,199,168,240,26,7,72,12,123,134,23,198,25,154,247,33,30,225,16,18,39,56,56,63,210,173,139,205,241,132,162,108,33,175,226,205,139,248,231,40,117,112,152,83,145,8,70,121,51,54,134,15,177,211,252,252,59,118,218,223,127,94,114,93,183,174,53,194,81,148,76,227,13,142,77,43,1,134,82,90,254,227,147,111,238,212,31,69,219,126,44,168,63,242,211,124,206,210,101,86,149,130,116,250,251,147,12,34,221,33,121,230,111,251,101,189,207,243,100,63,143,89,161,4,83,59,148,25,30,151,6,79,39,162,43,62,46,79,213,105,181,103,181,150,173,140,197,64,208,58,33,234,134,123,195,97,212,11,13,210,70,23,117,7,189,78,103,216,31,12,118,67,211,6,169,69,187,211,98,113,50,226,18,75,213,77,184,255,229,252,115,120,195,246,220,58,186,251,244,236,101,182,117,159,55,224,42,207,193,215,247,191,110,203,191,67,118,255,127,200,114,229,122,169,227,145,148,65,153,32,93,84,76,74,243,19,85,102,8,137,80,140,254,1>>},6]}}

so both show a timeout. How can I track this down?

- could this happen when many read repairs occur (through AAE)?

Also, our FSM PUT time is going up, but not really the GET time. Is this the
normal behavior under load/read-repair situations?

Also, is this a bigger problem with eLevelDB, or would it be the same for
Bitcask?

Cheers
Simon


On Fri, 19 Jul 2013 10:25:05 +0200
Simon Effenberg  wrote:

> once again with the list included... argh
> 
> Hey Christian,
> 
> so it could also be an Erlang limit? I found out why my Riak instances
> all have different process limits. My mcollectived daemons have
> different limits, and when I triggered a Puppet run through
> mcollective the Riak processes inherited that process limit as well.
> 
> Also in the crash log I see:
> 
> exception exit: {{system_limit,[{erlang,spawn
> 
> for the too many processes. So it doesn't look like an Erlang limit, does
> it? But I will keep this +P in mind!! Thanks a lot.
> 
> The zdbbl is now at 100MB.
> 
> Cheers
> Simon
> 
> On Fri, 19 Jul 2013 08:49:50 +0100
> Christian Dahlqvist  wrote:
> 
> > Hi Simon,
> > 
> > If you have objects that can be as big as 15MB, it is probably wise to 
> > increase the size of +zdbbl in order to avoid filling up buffers when these 
> > large objects need to be transferred between nodes. What an appropriate 
> > level is depends a lot on the size distribution of your data and your 
> > access patterns, so I would recommend benchmarking to find a suitable value.
> > 
> > Erlang also has a default process limit of 32768 (at least in R15B01), 
> > which may be what you are hitting. You can override this to 256k by adding 
> > the following line to the vm.args file:
> > 
> > +P 262144
> > 
> > Best regards,
> > 
> > Christian
> > 
> > 
> > 
> > On 19 Jul 2013, at 08:24, Simon Effenberg  wrote:
> > 
> > > The +zdbbl parameter helped a lot but the hinted handoffs didn't
> > > disappear completely. I have no more busy dist port errors in the
> > > _console.log_ (why aren't they in the error.log? it looks for me like a
> > > serious problem you have.. at least our cluster was behaving not that
> > > nice).
> > > 
> > > I'll try to increase the buffer size to a higher value because my
> > > suggestion is that also the objects send from one to another are also
> > > stored therein and we have sometimes objects which are up to 15MB.
> > > 
> > > But I saw now also some crashes in the last 6 hours on only two machines
> > > complaining about too many processes
> > > 
> > > 
> > > console.log
> > > 2013-07-19 02:04:21.962 UTC [error] <0.12813.29> CRASH REPORT Process 
> > > <0.12813.29> with 15 neighbours exited with r

Re: Riak-CS Question

2013-07-19 Thread Christian Dahlqvist
Hi Vahric,

Please can you send me the logs and configuration files from Riak, Riak-CS and 
Stanchion from all nodes in the cluster?

Best regards,

Christian




On 18 Jul 2013, at 21:45, Vahric Muhtaryan  wrote:

> Hello All,
> 
> I got this error:
> 
> [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning] 
> <0.121.0>@stanchion_utils:email_available:591 Error occurred trying to check 
> if the address <<"vah...@doruk.net.tr">> has been registered. Reason: 
> <<"{error,{indexes_not_supported,riak_kv_bitcask_backend}}">>
> 
> but my config should be okay
> 
> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
> {storage_backend, riak_cs_kv_multi_backend},
> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
> {multi_backend_default, be_default},
> {multi_backend, [
>  {be_default, riak_kv_eleveldb_backend, [
>{max_open_files, 50},
>  {data_root, "/var/lib/riak/leveldb"}
>  ]},
>{be_blocks, riak_kv_bitcask_backend, [
>  {data_root, "/var/lib/riak/bitcask"}
>  ]}
> ]},
> 
> (r...@xxx.xxx.xxx.xxx)3> code:which(riak_cs_kv_multi_backend).
>   
> "/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin/riak_cs_kv_multi_backend.beam"
> 
> Any idea why I cannot create the admin user when I try to? My config
> says the default backend is leveldb, not bitcask; could this be a bug?
> Anybody know?
> 
> Regards
> VM



Re: using 2i with protocol buffers

2013-07-19 Thread Joey Yandle
Hi Christian,

Thanks for the quick reply.  It turns out 2i was working fine, I was
just using a different timestamp format for search that didn't range
match the format used for put.

cheers,

Joey




Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
once again with the list included... argh

Hey Christian,

so it could also be an Erlang limit? I found out why my Riak instances
all have different process limits. My mcollectived daemons have
different limits, and when I triggered a Puppet run through
mcollective the Riak processes inherited that process limit as well.

Also in the crash log I see:

exception exit: {{system_limit,[{erlang,spawn

for the too many processes. So it doesn't look like an Erlang limit, does
it? But I will keep this +P in mind!! Thanks a lot.

The zdbbl is now at 100MB.
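A note on units: `+zdbbl` takes its value in kilobytes in vm.args, so a 100MB
buffer corresponds to `+zdbbl 102400`. The conversion, as a trivial sketch:

```shell
# +zdbbl is specified in KB, so 100MB -> 100 * 1024 KB
mb=100
echo "+zdbbl $((mb * 1024))"
# prints: +zdbbl 102400
```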

Cheers
Simon

On Fri, 19 Jul 2013 08:49:50 +0100
Christian Dahlqvist  wrote:

> Hi Simon,
> 
> If you have objects that can be as big as 15MB, it is probably wise to 
> increase the size of +zdbbl in order to avoid filling up buffers when these 
> large objects need to be transferred between nodes. What an appropriate level 
> is depends a lot on the size distribution of your data and your access 
> patterns, so I would recommend benchmarking to find a suitable value.
> 
> Erlang also has a default process limit of 32768 (at least in R15B01), which 
> may be what you are hitting. You can override this to 256k by adding the 
> following line to the vm.args file:
> 
> +P 262144
> 
> Best regards,
> 
> Christian
> 
> 
> 
> On 19 Jul 2013, at 08:24, Simon Effenberg  wrote:
> 
> > The +zdbbl parameter helped a lot but the hinted handoffs didn't
> > disappear completely. I have no more busy dist port errors in the
> > _console.log_ (why aren't they in the error.log? it looks for me like a
> > serious problem you have.. at least our cluster was behaving not that
> > nice).
> > 
> > I'll try to increase the buffer size to a higher value because my
> > suggestion is that also the objects send from one to another are also
> > stored therein and we have sometimes objects which are up to 15MB.
> > 
> > But I saw now also some crashes in the last 6 hours on only two machines
> > complaining about too many processes
> > 
> > 
> > console.log
> > 2013-07-19 02:04:21.962 UTC [error] <0.12813.29> CRASH REPORT Process 
> > <0.12813.29> with 15 neighbours exited with reason: {system_limit
> > 
> > crash.log
> > 2013-07-19 02:04:21 UTC =ERROR REPORT
> > Too many processes
> > 
> > 
> > the process has a process limit of 95142. So I will increase it now but I 
> > never saw any information about such problems on the linux tuning page. Am 
> > I missing something?
> > 
> > Cheers
> > Simon
> > 
> > 
> > On Thu, 18 Jul 2013 19:34:18 +0100
> > Guido Medina  wrote:
> > 
> >> If what you are describing is happening for 1.4, type riak-admin diag 
> >> and see the new recommended kernel parameters, also, on vm.args 
> >> uncomment the +zdbbl 32768 parameter, since what you are describing is 
> >> similar to what happened to us when we upgraded to 1.4.
> >> 
> >> HTH,
> >> 
> >> Guido.
> >> 
> >> On 18/07/13 19:21, Simon Effenberg wrote:
> >>> Hi @list,
> >>> 
> >>> I see sometimes logs talking about "hinted_handoff transfer of .. failed 
> >>> because of TCP recv timeout".
> >>> Also riak-admin transfers shows me many handoffs (is it possible to give 
> >>> some insights about "how many" handoffs happened through "riak-admin 
> >>> status"?).
> >>> 
> >>> - Is it a normal behavior to have up to 30 handoffs from/to different 
> >>> nodes?
> >>> - How can I get down to the problem with the TCP recv timeout? I'm not 
> >>> sure if this is a network problem or if the other node is too slow. The 
> >>> load is ok on the machines (some IOwait but not 100%). Maybe interfering 
> >>> with AAE?
> >>> 
> >>> Here the log information about the TCP recv timeout. But that is not that 
> >>> often but handoffs happens really often:
> >>> 
> >>> 2013-07-18 16:22:05.654 UTC [error] 
> >>> <0.28933.14>@riak_core_handoff_sender:start_fold:216 hinted_handoff 
> >>> transfer of riak_kv_vnode from 'riak@10.46.109.207' 
> >>> 1118962191081472546749696200048404186924073353216 to 'riak@10.46.109.205' 
> >>> 1118962191081472546749696200048404186924073353216 failed because of TCP 
> >>> recv timeout
> >>> 2013-07-18 16:22:05.673 UTC [error] 
> >>> <0.202.0>@riak_core_handoff_manager:handle_info:282 An outbound handoff 
> >>> of partition riak_kv_vnode 
> >>> 1118962191081472546749696200048404186924073353216 was terminated for 
> >>> reason: {shutdown,timeout}
> >>> 
> >>> 
> >>> Thanks in advance
> >>> Simon
> >>> 
> >> 
> >> 
> > 
> > 
> > -- 
> > Simon Effenberg | Site Ops Engineer | mobile.international GmbH
> > Fon: + 49-(0)30-8109 - 7173
> > Fax: + 49-(0)30-8109 - 7131
> > 
> > Mail: seffenb...@team.mob

Re: using 2i with protocol buffers

2013-07-19 Thread Christian Dahlqvist
Hi Joey,

When using secondary indexes the index names must end in either `_int` for 
integer indexes or `_bin` for binary indexes in order for them to be recognised 
by the system [1]. All indexes must be set when you write the objects.

You must also ensure that the bucket you are writing the objects to is using 
either the memory or leveldb backend, as bitcask does not support secondary 
indexes. Can you verify that this is the case? Are there any error messages in 
the logs?

Which exact client version are you using?

[1] 
http://docs.basho.com/riak/latest/cookbooks/Secondary-Indexes---Configuration/

Best regards,

Christian
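The naming rule above can be sketched as a tiny validator (the shell function
name is hypothetical; the real check happens inside Riak):

```shell
# Index names must end in _int (integer index) or _bin (binary index);
# anything else fails, e.g. with {unknown_field_type,<<"time">>}.
valid_2i_name() {
  case "$1" in
    *_int|*_bin) return 0 ;;
    *)           return 1 ;;
  esac
}
valid_2i_name time_bin && echo "time_bin: ok"
valid_2i_name time     || echo "time: rejected"
# prints: time_bin: ok
#         time: rejected
```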



On 19 Jul 2013, at 07:09, Joey Yandle  wrote:

> Hi everybody,
> 
> I'm using riak to store large numbers of FX rates, to allow for deep
> backtests of trading strategies.  My code seems to be working, except
> that I don't get any keys back from 2i range requests.
> 
> Here's the code which puts the rate into riak (the rate is itself a
> protocol buffers object):
> 
> std::string s = p->rate().time();
> std::string rate;
> p->rate().SerializeToString(&rate);
> 
> mrx::riak::RpbPutReqPtr req(new mrx::riak::RpbPutReq());
> 
> req->set_bucket(symbol);
> req->set_key(s);
> 
> mrx::riak::RpbContent* content = req->mutable_content();
> 
> content->set_value(rate);
> 
> mrx::riak::RpbPair* index = content->add_indexes();
> 
> index->set_key(vm["riak.index"].as<std::string>());
> index->set_value(s);
> 
> Here's the code which does the index search:
> 
> mrx::riak::RpbIndexReqPtr req(new mrx::riak::RpbIndexReq());
> 
> req->set_bucket(symbol);
> req->set_index(vm["riak.index"].as<std::string>());
> req->set_qtype(mrx::riak::RpbIndexReq::range);
> 
> req->set_range_min(vm["rates.start"].as<std::string>());
> req->set_range_max(vm["rates.end"].as<std::string>());
> 
> I originally used the string "time" as the index key, and there was no
> error when doing the put command.  However, when trying to search with
> this index, I got the following error:
> 
> [{unknown_field_type,<<"time">>}]
> 
> This error went away when I used "time_bin" as the index.  However,
> regardless of what I use as the index key when doing the put, I never
> get any keys returned during the search.
> 
> Can someone point me to a working example of using protocol buffers to
> do a put and subsequent index search?  The examples online only show the
> latter.
> 
> thanks,
> 
> Joey
> 



Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Christian Dahlqvist
Hi Simon,

If you have objects that can be as big as 15MB, it is probably wise to 
increase the size of +zdbbl in order to avoid filling up buffers when these 
large objects need to be transferred between nodes. What an appropriate level 
is depends a lot on the size distribution of your data and your access 
patterns, so I would recommend benchmarking to find a suitable value.

Erlang also has a default process limit of 32768 (at least in R15B01), which 
may be what you are hitting. You can override this to 256k by adding the 
following line to the vm.args file:

+P 262144

Best regards,

Christian



On 19 Jul 2013, at 08:24, Simon Effenberg  wrote:

> The +zdbbl parameter helped a lot but the hinted handoffs didn't
> disappear completely. I have no more busy dist port errors in the
> _console.log_ (why aren't they in the error.log? it looks for me like a
> serious problem you have.. at least our cluster was behaving not that
> nice).
> 
> I'll try to increase the buffer size to a higher value because my
> suggestion is that also the objects send from one to another are also
> stored therein and we have sometimes objects which are up to 15MB.
> 
> But I saw now also some crashes in the last 6 hours on only two machines
> complaining about too many processes
> 
> 
> console.log
> 2013-07-19 02:04:21.962 UTC [error] <0.12813.29> CRASH REPORT Process 
> <0.12813.29> with 15 neighbours exited with reason: {system_limit
> 
> crash.log
> 2013-07-19 02:04:21 UTC =ERROR REPORT
> Too many processes
> 
> 
> the process has a process limit of 95142. So I will increase it now but I 
> never saw any information about such problems on the linux tuning page. Am I 
> missing something?
> 
> Cheers
> Simon
> 
> 
> On Thu, 18 Jul 2013 19:34:18 +0100
> Guido Medina  wrote:
> 
>> If what you are describing is happening for 1.4, type riak-admin diag 
>> and see the new recommended kernel parameters, also, on vm.args 
>> uncomment the +zdbbl 32768 parameter, since what you are describing is 
>> similar to what happened to us when we upgraded to 1.4.
>> 
>> HTH,
>> 
>> Guido.
>> 
>> On 18/07/13 19:21, Simon Effenberg wrote:
>>> Hi @list,
>>> 
>>> I see sometimes logs talking about "hinted_handoff transfer of .. failed 
>>> because of TCP recv timeout".
>>> Also riak-admin transfers shows me many handoffs (is it possible to give 
>>> some insights about "how many" handoffs happened through "riak-admin 
>>> status"?).
>>> 
>>> - Is it a normal behavior to have up to 30 handoffs from/to different nodes?
>>> - How can I get down to the problem with the TCP recv timeout? I'm not sure 
>>> if this is a network problem or if the other node is too slow. The load is 
>>> ok on the machines (some IOwait but not 100%). Maybe interfering with AAE?
>>> 
>>> Here the log information about the TCP recv timeout. But that is not that 
>>> often but handoffs happens really often:
>>> 
>>> 2013-07-18 16:22:05.654 UTC [error] 
>>> <0.28933.14>@riak_core_handoff_sender:start_fold:216 hinted_handoff 
>>> transfer of riak_kv_vnode from 'riak@10.46.109.207' 
>>> 1118962191081472546749696200048404186924073353216 to 'riak@10.46.109.205' 
>>> 1118962191081472546749696200048404186924073353216 failed because of TCP 
>>> recv timeout
>>> 2013-07-18 16:22:05.673 UTC [error] 
>>> <0.202.0>@riak_core_handoff_manager:handle_info:282 An outbound handoff of 
>>> partition riak_kv_vnode 1118962191081472546749696200048404186924073353216 
>>> was terminated for reason: {shutdown,timeout}
>>> 
>>> 
>>> Thanks in advance
>>> Simon
>>> 
>> 
>> 
> 
> 
> -- 
> Simon Effenberg | Site Ops Engineer | mobile.international GmbH
> Fon: + 49-(0)30-8109 - 7173
> Fax: + 49-(0)30-8109 - 7131
> 
> Mail: seffenb...@team.mobile.de
> Web:www.mobile.de
> 
> Marktplatz 1 | 14532 Europarc Dreilinden | Germany
> 
> 
> Geschäftsführer: Malte Krüger
> HRB Nr.: 18517 P, Amtsgericht Potsdam
> Sitz der Gesellschaft: Kleinmachnow 
> 



Fallback node

2013-07-19 Thread fenix . serega
Riak 1.3.1
5 nodes

The cluster is healthy. There are 39 stale handoffs on nodes 1, 2, 3 and 5.
On the 4th node, all KV vnodes are in fallback mode.

Could you please clarify what this means? Why is the 4th node in fallback
mode?

riak@de3:/opt/riak/etc$ ../bin/riak-admin ring-status
== Claimant
===
Claimant:  'riak@de3 '
Status: up
Ring Ready: true

== Ownership Handoff
==
No pending changes.

== Unreachable Nodes
==
All nodes are up and reachable




riak@de3:/opt/riak/etc$ ../bin/riak-admin member-status
= Membership
==
Status RingPendingNode
---
valid  19.9%  --  'riak@de1 '
valid  19.9%  --  'riak@de2 '
valid  19.9%  --  'riak@de3 '
valid  20.3%  --  'riak@de4 '
valid  19.9%  --  'riak@de5 '
---
Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0





riak@de3:/opt/riak/etc$ ../bin/riak-admin transfers
'riak@de5 ' waiting to handoff 39 partitions
'riak@de3 ' waiting to handoff 39 partitions
'riak@de2 ' waiting to handoff 39 partitions
'riak@de1 ' waiting to handoff 39 partitions

Active Transfers:


Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
wow.. now I have something to search for..

riak46-1 Max processes  unlimited  unlimited  processes
riak46-2 Max processes  unlimited  unlimited  processes
riak46-3 Max processes  unlimited  unlimited  processes
riak46-4 Max processes  unlimited  unlimited  processes
riak46-5 Max processes  unlimited  unlimited  processes
riak46-6 Max processes  unlimited  unlimited  processes
riak46-7 Max processes  95142      95142      processes
riak46-8 Max processes  unlimited  unlimited  processes
riak46-9 Max processes  95142      95142      processes
riak47-1 Max processes  191896     191896     processes
riak47-2 Max processes  192920     192920     processes
riak47-3 Max processes  unlimited  unlimited  processes
riak47-4 Max processes  unlimited  unlimited  processes
riak47-5 Max processes  unlimited  unlimited  processes
riak47-6 Max processes  unlimited  unlimited  processes
riak47-7 Max processes  95142      95142      processes
riak47-8 Max processes  95142      95142      processes
riak47-9 Max processes  95142      95142      processes


riak46-{7..9}, riak47-1 and riak47-{7..9} are quite newly reinstalled, but all
with Puppet, and in theory there is nothing special about them compared to the
other ones.

I need to have a look and probably try to enforce an "unlimited" process limit.
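One way to lift the cap consistently would be per-user entries in limits.conf
(the path and values are assumptions; setting `ulimit -u` in the init script
would work as well). Sketched against a scratch file:

```shell
# Sketch: limits.conf entries lifting the per-user process cap for the
# riak user. A temp file stands in for /etc/security/limits.conf here.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
riak  soft  nproc  unlimited
riak  hard  nproc  unlimited
EOF
grep -c 'nproc' "$conf"   # prints: 2
rm -f "$conf"
```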

Cheers
Simon

On Fri, 19 Jul 2013 09:24:07 +0200
Simon Effenberg  wrote:

> The +zdbbl parameter helped a lot but the hinted handoffs didn't
> disappear completely. I have no more busy dist port errors in the
> _console.log_ (why aren't they in the error.log? it looks for me like a
> serious problem you have.. at least our cluster was behaving not that
> nice).
> 
> I'll try to increase the buffer size to a higher value because my
> suggestion is that also the objects send from one to another are also
> stored therein and we have sometimes objects which are up to 15MB.
> 
> But I saw now also some crashes in the last 6 hours on only two machines
> complaining about too many processes
> 
> 
> console.log
> 2013-07-19 02:04:21.962 UTC [error] <0.12813.29> CRASH REPORT Process 
> <0.12813.29> with 15 neighbours exited with reason: {system_limit
> 
> crash.log
> 2013-07-19 02:04:21 UTC =ERROR REPORT
> Too many processes
> 
> 
> the process has a process limit of 95142. So I will increase it now but I 
> never saw any information about such problems on the linux tuning page. Am I 
> missing something?
> 
> Cheers
> Simon
> 
> 
> On Thu, 18 Jul 2013 19:34:18 +0100
> Guido Medina  wrote:
> 
> > If what you are describing is happening for 1.4, type riak-admin diag 
> > and see the new recommended kernel parameters, also, on vm.args 
> > uncomment the +zdbbl 32768 parameter, since what you are describing is 
> > similar to what happened to us when we upgraded to 1.4.
> > 
> > HTH,
> > 
> > Guido.
> > 
> > On 18/07/13 19:21, Simon Effenberg wrote:
> > > Hi @list,
> > >
> > > I see sometimes logs talking about "hinted_handoff transfer of .. failed 
> > > because of TCP recv timeout".
> > > Also riak-admin transfers shows me many handoffs (is it possible to give 
> > > some insights about "how many" handoffs happened through "riak-admin 
> > > status"?).
> > >
> > > - Is it a normal behavior to have up to 30 handoffs from/to different 
> > > nodes?
> > > - How can I get down to the problem with the TCP recv timeout? I'm not 
> > > sure if this is a network problem or if the other node is too slow. The 
> > > load is ok on the machines (some IOwait but not 100%). Maybe interfering 
> > > with AAE?
> > >
> > > Here the log information about the TCP recv timeout. But that is not that 
> > > often but handoffs happens really often:
> > >
> > > 2013-07-18 16:22:05.654 UTC [error] 
> > > <0.28933.14>@riak_core_handoff_sender:start_fold:216 hinted_handoff 
> > > transfer of riak_kv_vnode from 'riak@10.46.109.207' 
> > > 1118962191081472546749696200048404186924073353216 to 'riak@10.46.109.205' 
> > > 1118962191081472546749696200048404186924073353216 failed because of TCP 
> > > recv timeout
> > > 2013-07-18 16:22:05.673 UTC [error] 
> > > <0.202.0>@riak_core_handoff_manager:handle_info:282 An outbound handoff 
> > > of partition riak_kv_vnode 
> > > 1118962191081472546749696200048404186924073353216 was terminated for 
> > > reason: {shutdown,timeout}
> > >
> > >
> > > Thanks in advance
> > > Simon
> > >

Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Simon Effenberg
The +zdbbl parameter helped a lot, but the hinted handoffs didn't
disappear completely. I no longer see busy dist port errors in the
_console.log_ (why aren't they in the error.log? It looks to me like a
serious problem.. at least our cluster was not behaving that nicely).

I'll try to increase the buffer size further, because my suspicion is
that objects sent from one node to another are also stored in that
buffer, and we sometimes have objects of up to 15MB.

But I also saw some crashes in the last 6 hours, on only two machines,
complaining about too many processes:


console.log
2013-07-19 02:04:21.962 UTC [error] <0.12813.29> CRASH REPORT Process 
<0.12813.29> with 15 neighbours exited with reason: {system_limit

crash.log
2013-07-19 02:04:21 UTC =ERROR REPORT
Too many processes


the process has a process limit of 95142. So I will increase it now, but I never
saw any information about such problems on the Linux tuning page. Am I missing
something?

Cheers
Simon


On Thu, 18 Jul 2013 19:34:18 +0100
Guido Medina  wrote:

> If what you are describing is happening for 1.4, type riak-admin diag 
> and see the new recommended kernel parameters, also, on vm.args 
> uncomment the +zdbbl 32768 parameter, since what you are describing is 
> similar to what happened to us when we upgraded to 1.4.
> 
> HTH,
> 
> Guido.
> 
> On 18/07/13 19:21, Simon Effenberg wrote:
> > Hi @list,
> >
> > I see sometimes logs talking about "hinted_handoff transfer of .. failed 
> > because of TCP recv timeout".
> > Also riak-admin transfers shows me many handoffs (is it possible to give 
> > some insights about "how many" handoffs happened through "riak-admin 
> > status"?).
> >
> > - Is it a normal behavior to have up to 30 handoffs from/to different nodes?
> > - How can I get down to the problem with the TCP recv timeout? I'm not sure 
> > if this is a network problem or if the other node is too slow. The load is 
> > ok on the machines (some IOwait but not 100%). Maybe interfering with AAE?
> >
> > Here the log information about the TCP recv timeout. But that is not that 
> > often but handoffs happens really often:
> >
> > 2013-07-18 16:22:05.654 UTC [error] 
> > <0.28933.14>@riak_core_handoff_sender:start_fold:216 hinted_handoff 
> > transfer of riak_kv_vnode from 'riak@10.46.109.207' 
> > 1118962191081472546749696200048404186924073353216 to 'riak@10.46.109.205' 
> > 1118962191081472546749696200048404186924073353216 failed because of TCP 
> > recv timeout
> > 2013-07-18 16:22:05.673 UTC [error] 
> > <0.202.0>@riak_core_handoff_manager:handle_info:282 An outbound handoff of 
> > partition riak_kv_vnode 1118962191081472546749696200048404186924073353216 
> > was terminated for reason: {shutdown,timeout}
> >
> >
> > Thanks in advance
> > Simon
> >
> 
> 


-- 
Simon Effenberg | Site Ops Engineer | mobile.international GmbH
Fon: + 49-(0)30-8109 - 7173
Fax: + 49-(0)30-8109 - 7131

Mail: seffenb...@team.mobile.de
Web:www.mobile.de

Marktplatz 1 | 14532 Europarc Dreilinden | Germany


Geschäftsführer: Malte Krüger
HRB Nr.: 18517 P, Amtsgericht Potsdam
Sitz der Gesellschaft: Kleinmachnow 
