Re: object_lager_event: what is it?

2019-10-21 Thread Luke Bakken
Hey Bryan,

Something similar in the RabbitMQ code base made me go "huh" the other
day: calls to rabbit_log_ldap module functions when no such module
exists. It turns out there's a parse transform defined that turns these
calls into lager function calls to an extra sink:

https://github.com/rabbitmq/rabbitmq-common/blob/master/mk/rabbitmq-build.mk#L46

I couldn't find any instances of object_lager_event:info (or debug,
warning, etc.) function calls in the Riak code I have lying around,
though, so /shrug
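For anyone who lands here later: lager's parse transform is what makes such calls compile even though no module exists. A sketch of how it is wired up (untested here; the sink short name `object` is inferred from lager's convention of appending `_lager_event` to the name given in `lager_extra_sinks`):

```erlang
%% rebar.config (sketch): declare an extra sink named "object".
%% The parse transform rewrites object:info/1,2 (and debug, warning, ...)
%% into notifications on the object_lager_event sink.
{erl_opts, [
    {parse_transform, lager_transform},
    {lager_extra_sinks, [object]}
]}.
```

With something like that in place, a call such as `object:info("stored ~p", [Key])` would be routed to whatever handlers are configured under `object_lager_event` in the application config.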

Good luck -
Luke

On Mon, Oct 21, 2019 at 9:18 AM Bryan Hunt
 wrote:
>
> Given the following configuration, can anyone explain to me what the 
> object_lager_event section does?
>
> I see this code all over the place (including snippets I provided in the 
> distant past).
>
> However, a search on GitHub (basho org / erlang-lager / lager) doesn't turn
> up any code/module matches.
>
> (GitHub indexing doesn't work well here, as GitHub only indexes master and
> the basho repositories are a mess branch-wise.)
>
> Anyone got any idea?
>
> [
> {lager,
>[
>   {extra_sinks,
>[
> {object_lager_event,
>  [{handlers,
>[{lager_file_backend,
>  [{file, "/var/log/riak/object.log"},
>   {level, info},
>   {formatter_config, [date, " ", time, " [", severity, "] ", message, "\n"]}
>  ]
> }]
>   },
>   {async_threshold, 500},
>   {async_threshold_window, 50}]
> }
> ]
>   }
> ]
> }]

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Wants/Choose claim functions

2019-02-19 Thread Luke Bakken
Hi Guido,

Searching google for "riak wants_claim_fun" turns up this document.
Please see the section "Q: A node left the cluster before handing off
all data. How can I resolve this?"

https://docs.basho.com/riak/kv/2.2.3/developing/faq/

This may be helpful as well -

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/038435.html
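For future searchers: as far as I know, these settings live in the riak_core section of /etc/riak/advanced.config, which is merged with riak.conf at startup. A sketch (using the v3 claim functions from Guido's snippet below; double-check against your Riak version before relying on it):

```erlang
%% /etc/riak/advanced.config (sketch) -- merged with riak.conf at startup.
[
    {riak_core, [
        {wants_claim_fun,  {riak_core_claim, wants_claim_v3}},
        {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
    ]}
].
```

I believe you would still need to trigger a cluster plan/commit cycle afterwards for the ring to be re-claimed under the new functions.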

On Tue, Feb 19, 2019 at 3:22 AM Guido Medina  wrote:
>
> Hi,
>
> Can someone please point me to the guide explaining how to change the wants
> and choose claim functions for Riak's partition distribution percentages?
>
> We would like to set these permanently and trigger a cluster redistribution.
> I just can't find that documentation anymore. I was able to find this, but I
> can't remember how to use it:
>
> {wants_claim_fun, {riak_core_claim, wants_claim_v3}},
> {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
>
>
> Kind regards,
> Guido.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


A new home for basho/cuttlefish

2017-09-27 Thread Luke Bakken
Hi again everyone -

Apologies for not setting a subject in my first email.

Of course this is Bet365's decision. The purpose of this message
thread is to let people know that the RabbitMQ team is willing to
maintain cuttlefish. I also found the following more-up-to-date fork
and will be looking at the changes there -

https://github.com/Kyorai/cuttlefish

Thanks again!
Luke

On Wed, Sep 27, 2017 at 11:27 AM, Bryan Hunt
 wrote:
> Hi Luke,
>
> It’s for bet365 to make that decision.
>
> For the good of the general population the following would be nice:
>
> a) Upgrade to rebar3
> b) Upgrade Erlang version support so it compiles under Erlang 20 (if it 
> doesn’t already)
> c) Package uploaded to hex.pm
>
> Bryan
>
>
>> On 27 Sep 2017, at 18:59, Luke Bakken  wrote:
>>
>> Hello Riak users -
>>
>> The next RabbitMQ release (3.7.0) will use cuttlefish for its
>> configuration. I'm writing to express interest on behalf of the
>> RabbitMQ team in taking over maintenance of the project. At the
>> moment, we forked cuttlefish to the RabbitMQ organization [0] to fix a
>> couple pressing issues. After that, it would be great if the
>> repository could be transferred to either its own, new organization or
>> to the RabbitMQ organization entirely. Basho transferred both
>> Webmachine and Lager to their own independent organizations, for
>> instance (github.com/webmachine, github.com/erlang-lager).
>>
>> Once transferred, GitHub will take care of all the necessary
>> redirections from the Basho organization to cuttlefish's new home.
>>
>> Thanks,
>> Luke Bakken
>>
>> [0] - https://github.com/rabbitmq/cuttlefish

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[no subject]

2017-09-27 Thread Luke Bakken
Hello Riak users -

The next RabbitMQ release (3.7.0) will use cuttlefish for its
configuration. I'm writing to express interest on behalf of the
RabbitMQ team in taking over maintenance of the project. At the
moment, we forked cuttlefish to the RabbitMQ organization [0] to fix a
couple pressing issues. After that, it would be great if the
repository could be transferred to either its own, new organization or
to the RabbitMQ organization entirely. Basho transferred both
Webmachine and Lager to their own independent organizations, for
instance (github.com/webmachine, github.com/erlang-lager).

Once transferred, GitHub will take care of all the necessary
redirections from the Basho organization to cuttlefish's new home.

Thanks,
Luke Bakken

[0] - https://github.com/rabbitmq/cuttlefish

--
Staff Software Engineer
Pivotal / RabbitMQ

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: User creation not possible Riak CS

2017-05-03 Thread Luke Bakken
Hi Henry,

Please use "Reply All" so that your responses are sent to the list.

At this point, I recommend looking through the console.log and
error.log files for Riak CS and Riak to see if anything looks obvious
as to the source of the error.

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, May 2, 2017 at 6:02 AM, Luke Bakken  wrote:
> In addition, the correct field name should be "email", not name.
>
> http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-cs/#specifying-the-admin-user
>
> On Tue, May 2, 2017 at 6:01 AM, Luke Bakken  wrote:
>> Hi Henry,
>>
>> In the JSON you provided, "name" is followed by a semicolon. Is this a
>> typo? If not, that could be the cause of the error.
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Mon, May 1, 2017 at 10:26 AM, Henry- Norbert Cocos
>>  wrote:
>>> Hello,
>>>
>>>
>>> I'm trying to run Riak CS on an RPI 3. I got it compiled and running, but now
>>> I'm having trouble.
>>>
>>>
>>> I already changed the necessary attributes in the riak-cs.conf. I set up
>>> stanchion, riak kv and riak cs.
>>>
>>> The attributes I changed include anonymous_user_creation=on and
>>> admin.listener=:8000.
>>>
>>> I wanted to know if there is a different way to create an admin user, or if
>>> there is something wrong in the way I'm trying to create the user.
>>>
>>> Every time I run the following command I get the following error:
>>>
>>> curl -v -H 'Content-Type: application/json' -XPOST
>>> http://:8000/riak-cs/user --data '{"name";"m...@email.com",
>>> "name":"adminuser"}'
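Putting Luke's two fixes together (the field name should be "email", and the separator must be ":" rather than ";"), the corrected request would look something like this. The email address below is a hypothetical placeholder, and the host/port placeholder from the original command is kept as-is:

```shell
# Corrected payload: "email" as the field name, ":" (not ";") as the separator.
PAYLOAD='{"email":"admin@example.com","name":"adminuser"}'

# Sanity-check that the payload is valid JSON before sending it:
echo "$PAYLOAD" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'
# prints "valid JSON"

# Then POST it to the Riak CS user endpoint (host/port as in your setup):
# curl -v -H 'Content-Type: application/json' -XPOST \
#      http://<riak-cs-host>:8000/riak-cs/user --data "$PAYLOAD"
```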

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.4 crashes with Out of Memory Error

2017-05-03 Thread Luke Bakken
Hi Jerald -

What is the average size of an object that you are storing in Riak?

I'm also seeing a lot of errors in the logs related to AAE, like you
note. I know there have been fixes in that part of Riak since 2.1.4
and recommend upgrading.

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, May 2, 2017 at 6:56 AM, Arulappan, Jerald (Jerald)
 wrote:
> Hi,
>
> I am using a single-node Riak server 2.1.4 with Bitcask as the backend for
> storing files.
> The Riak node stops working about every week (it looks like this happens when
> the active anti-entropy process recreates the hash tree).
> The syslog shows an Out of Memory error, but the console.log shows "sst: No
> such file or directory".
> Syslog Error:
>
> Apr 26 17:39:37 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:37 TLCCBAPRO2 kernel: Killed process 16987 (sh)
> total-vm:106168kB, anon-rss:116kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 30374 (memsup)
> total-vm:4112kB, anon-rss:80kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 14351 (cpu_sup)
> total-vm:4112kB, anon-rss:68kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 30385 (sh)
> total-vm:106164kB, anon-rss:136kB, file-rss:416kB
> Apr 26 17:44:48 TLCCBAPRO2 run_erl[16682]: Erlang closed the connection.
>
> Console.log:
>
> 2017-04-26 17:37:03.493 [info]
> <0.625.0>@riak_kv_vnode:maybe_create_hashtrees:227
> riak_kv/91343852333181432387730302044767688728495783936: unable to start
> index_hashtree: {error,{{badmatch,{error,{db_open,"IO error:
> ./data/anti_entropy/91343852333181432387730302044767688728495783936/sst_0/001954.sst:
> No such file or
> directory"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,675}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,610}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,474}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,268}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
> 2017-04-26 17:37:03.515 [error] <0.30178.2881> CRASH REPORT Process
> <0.30178.2881> with 0 neighbours exited with reason: no match of right hand
> value {error,{db_open,"IO error:
> ./data/anti_entropy/936274486415109681974235595958868809467081785344/37.sst:
> No such file or directory"}} in hashtree:new_segment_store/2 line 675 in
> gen_server:init_it/6 line 328
> 2017-04-26 17:37:03.515 [info]
> <0.623.0>@riak_kv_vnode:maybe_create_hashtrees:227
> riak_kv/45671926166590716193865151022383844364247891968: unable to start
> index_hashtree: {error,{{badmatch,{error,{db_open,"IO error:
> ./data/anti_entropy/45671926166590716193865151022383844364247891968/sst_0/002239.sst:
> No such file or
> directory"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,675}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,610}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,474}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,268}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
> 2017-04-26 17:37:03.516 [error] <0.30207.2881> CRASH REPORT Process
> <0.30207.2881> with 0 neighbours exited with reason: no match of right hand
> value {error,{db_open,"IO error:
> ./data/anti_entropy/45671926166590716193865151022383844364247891968/sst_0/002239.sst:
> No such file or directory"}} in hashtree:new_segment_store/2 line 675 in
> gen_server:init_it/6 line 328
>
>
>
> The complete logs are in the attached zip file. Any thoughts on the root
> cause and possible solution to overcome this is much appreciated.
>
>
>
> Regards,
>
> Jerald
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: User creation not possible Riak CS

2017-05-02 Thread Luke Bakken
In addition, the correct field name should be "email", not name.

http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-cs/#specifying-the-admin-user

On Tue, May 2, 2017 at 6:01 AM, Luke Bakken  wrote:
> Hi Henry,
>
> In the JSON you provided, "name" is followed by a semicolon. Is this a
> typo? If not, that could be the cause of the error.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 1, 2017 at 10:26 AM, Henry- Norbert Cocos
>  wrote:
>> Hello,
>>
>>
>> I'm trying to run Riak CS on an RPI 3. I got it compiled and running, but now
>> I'm having trouble.
>>
>>
>> I already changed the necessary attributes in the riak-cs.conf. I set up
>> stanchion, riak kv and riak cs.
>>
>> The attributes I changed include anonymous_user_creation=on and
>> admin.listener=:8000.
>>
>> I wanted to know if there is a different way to create an admin user, or if
>> there is something wrong in the way I'm trying to create the user.
>>
>> Every time I run the following command I get the following error:
>>
>> curl -v -H 'Content-Type: application/json' -XPOST
>> http://:8000/riak-cs/user --data '{"name";"m...@email.com",
>> "name":"adminuser"}'

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: User creation not possible Riak CS

2017-05-02 Thread Luke Bakken
Hi Henry,

In the JSON you provided, "name" is followed by a semicolon. Is this a
typo? If not, that could be the cause of the error.

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, May 1, 2017 at 10:26 AM, Henry- Norbert Cocos
 wrote:
> Hello,
>
>
> I'm trying to run Riak CS on an RPI 3. I got it compiled and running, but now
> I'm having trouble.
>
>
> I already changed the necessary attributes in the riak-cs.conf. I set up
> stanchion, riak kv and riak cs.
>
> The attributes I changed include anonymous_user_creation=on and
> admin.listener=:8000.
>
> I wanted to know if there is a different way to create an admin user, or if
> there is something wrong in the way I'm trying to create the user.
>
> Every time I run the following command I get the following error:
>
> curl -v -H 'Content-Type: application/json' -XPOST
> http://:8000/riak-cs/user --data '{"name";"m...@email.com",
> "name":"adminuser"}'

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ClusterOffline Unable to access functioning Riak node

2017-04-17 Thread Luke Bakken
Thanks for letting us know the outcome.
--
Luke Bakken
Engineer
lbak...@basho.com


On Fri, Apr 14, 2017 at 12:49 PM, Charles Solar  wrote:
> Thanks for the tip Luke - I updated those timeouts to 30s each and I'm not
> seeing any more failures.  I guess ideally updates should happen in under 4
> seconds though so I'll have to find out why certain saves are taking so
> long!
>
> Charles
>
> On Fri, Apr 14, 2017 at 12:33 PM, Luke Bakken  wrote:
>>
>> Hi Charles -
>>
>> Extend the read and write timeouts using this setting:
>>
>>
>> https://github.com/basho/riak-dotnet-client/blob/develop/src/RiakClientTests.Live/App.config#L24
>>
>> The above example extends it to 60 seconds.
>>
>> The default is 4 seconds which may be too short if you are running
>> long queries. If 4 seconds is exceeded, the socket read times out and
>> the client assumes that there is an issue with the node, marking it
>> down. Eventually, all nodes can be marked down.
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Wed, Apr 12, 2017 at 11:03 AM, Charles Solar 
>> wrote:
>> > Hi list - I'm currently running both Riak and RiakTS in a lab
>> > environment
>> > for testing and my clients writing data get
>> >
>> > "ClusterOffline Unable to access functioning Riak node"
>> >
>> > errors fairly often.  I am wondering if this is an indication that I
>> > need to
>> > add more nodes to increase capacity? Or tune some other settings?
>> >
>> > I've looked through Riak logs and there is no indication of a problem,
>> > are
>> > there other diagnostics I can do?
>> >
>> >
>> > I'm finding RiakTS commits fail with this error far more often.
>> >
>> > I'm using the C# client, nodePollTime 5000, retryWaitTime 100,
>> > retryCount 3
>> >
>> > with 7 riak nodes and 3 riakts nodes.
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ClusterOffline Unable to access functioning Riak node

2017-04-14 Thread Luke Bakken
Hi Charles -

Extend the read and write timeouts using this setting:

https://github.com/basho/riak-dotnet-client/blob/develop/src/RiakClientTests.Live/App.config#L24

The above example extends it to 60 seconds.

The default is 4 seconds which may be too short if you are running
long queries. If 4 seconds is exceeded, the socket read times out and
the client assumes that there is an issue with the node, marking it
down. Eventually, all nodes can be marked down.
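Concretely, in the client's App.config the change would look something like this sketch. The element and attribute names here are from memory of the riak-dotnet-client configuration schema and may differ between client versions, so treat them as assumptions and compare against the linked example file:

```xml
<!-- Sketch: raise network read/write timeouts to 60s (values in milliseconds),
     alongside the poll/retry settings Charles mentioned. -->
<riakConfig nodePollTime="5000" defaultRetryWaitTime="100" defaultRetryCount="3">
  <nodes>
    <node name="node1" hostAddress="riak-node-1" pbcPort="8087"
          networkReadTimeout="60000" networkWriteTimeout="60000" />
  </nodes>
</riakConfig>
```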
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Apr 12, 2017 at 11:03 AM, Charles Solar  wrote:
> Hi list - I'm currently running both Riak and RiakTS in a lab environment
> for testing and my clients writing data get
>
> "ClusterOffline Unable to access functioning Riak node"
>
> errors fairly often.  I am wondering if this is an indication that I need to
> add more nodes to increase capacity? Or tune some other settings?
>
> I've looked through Riak logs and there is no indication of a problem, are
> there other diagnostics I can do?
>
>
> I'm finding RiakTS commits fail with this error far more often.
>
> I'm using the C# client, nodePollTime 5000, retryWaitTime 100, retryCount 3
>
> with 7 riak nodes and 3 riakts nodes.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: API question about conflict resolution and getValue(s)

2017-04-06 Thread Luke Bakken
Hi Johan,

I'm assuming you're using the Java client, and the getValues method shown here:

https://github.com/basho/riak-java-client/blob/develop/src/main/java/com/basho/riak/client/api/commands/kv/KvResponseBase.java#L80-L91

> does: "fetch -> getValues -> ... pick one ... -> modify -> store", work?

Yes. Please refer to the docs here:

https://docs.basho.com/riak/kv/2.2.3/developing/usage/conflict-resolution/java/#conflict-resolution-and-writes
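For anyone doing the same from Erlang, the equivalent fetch / inspect siblings / store flow with the riakc client would look roughly like this. This is a from-memory sketch, not run against a live cluster; the host, bucket, key, and replacement value are placeholders:

```erlang
%% Sketch: fetch -> inspect siblings -> pick/modify one -> store.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),

%% get_contents/1 returns every sibling as a {Metadata, Value} pair.
[{MD, _Val} | _Rest] = riakc_obj:get_contents(Obj),

%% Update the object we fetched: it carries the causal context, so this
%% put resolves the siblings instead of creating yet another one.
Obj1 = riakc_obj:update_metadata(riakc_obj:update_value(Obj, <<"resolved">>), MD),
ok = riakc_pb_socket:put(Pid, Obj1).
```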

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Apr 6, 2017 at 4:48 AM, ジョハンガル  wrote:
>
> Hello,
>
> I have a simple question regarding FetchValue.Response/getValue, 
> FetchValue.Response/getValues and conflict resolution.
>
> In the documentation 
> http://docs.basho.com/riak/kv/2.2.3/developing/usage/conflict-resolution/
> the described sequence is: "fetch -> getValue -> modify -> store"
>
> does: "fetch -> getValues -> ... pick one ... -> modify -> store", work?
>
> Is the causal context from the implicitly resolved object obtained from
> getValue the same as the causal context in the siblings recovered with
> getValues?

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Does RiakKV require a lot of memory?

2017-03-27 Thread Luke Bakken
Hi Tsutomu,

You should only use objects up to a maximum of 1MiB in size with Riak
KV. If you wish to store larger objects or files, please use Riak CS.

--
Luke Bakken
Engineer
lbak...@basho.com

On Sun, Mar 26, 2017 at 5:38 PM,   wrote:
> Hello.
>
> The following problem occurred. Please advise on a solution.
>
> (Problem)
> I registered files ranging from 10 MB to 1000 MB.
> Then RiakKV hung up when the batch program was accessing files from 100 MB to
> 600 MB. RiakKV hung up due to insufficient memory.
>
> (Question)
> 1. Is the problem that occurred this time caused by accessing the 20-1000 MB
>    files, or is it something else?
>
> 2. When accessing RiakKV data while an update is in progress, do you need to
>    do anything special? Does it require a lot of memory?
>
> (Objects)
> file: 3,882,892
> file size(total): 332.46GB(346,243,500,913byte)
> file objects:
>   1-  10MB = 3,800,000(all)
>  20-  50MB = 40 to 100
> 100-1000MB = 50
>
> (Riak Sever)
> OS:Red Hat Enterprise Linux Server release 6.7 (Santiago)
>  (Linux patdevsrv02 2.6.32-573.el6.x86_64)
> CPU:Intel(R) Xeon(R) CPU E5640  @2.67GHz * 2
> Memory:12GB
> Swap:14GB
> Network:1Gbps
> Disk:
>  Filesystem  Size  Used Avail Use% Mounted on
>  /dev/sda3   261G   67G  182G  27% /
>  tmpfs   5.9G  300K  5.9G   1% /dev/shm
>  /dev/sda1   477M   71M  381M  16% /boot
>  /dev/sdb1   275G  243G   19G  93% /USR1
>  /dev/sdc1   1.7T  1.5T   76G  96% /USR2
>  /dev/sdd1   1.1T  736G  309G  71% /USR3 <<< Store
>  /dev/sde1   1.1T  1.1T   18G  99% /USR4
>  /dev/sdf1   1.4T  1.1T  365G  74% /media/USB-HDD1
>
> (Riak)
> RiakKV 2.2.0 (riak-2.2.0-1.el6.x86_64.rpm)
>
> (Riak Node)
> 1 Node.
>
> !!!We plan to add two nodes to the cluster at a later date.!!!
>
> (Java)
> Java1.8 (jre-8u121-linux-x64.rpm)
>
> (riak setting)
> /etc/riak/riak.conf
> storage_backend = leveldb
> leveldb.maximum_memory.percent = 50
> object.size.maximum = 2GB
> listener.http.internal = 0.0.0.0:8098
> platform_data_dir = /USR3/riak
> nodename = riak@...
> riak_control = on
>
> !!!Settings other than these remain the default.!!!
>
> (linux setting)
> /etc/security/limits.conf
> * soft nofile 65536
> * hard nofile 65536
>
> Thank you.
>
> Tsutomu Wakuda
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-03-10 Thread Luke Bakken
Just to clarify ...

What Alexander is suggesting is what Daniel is currently using, and
what I suspect may be causing Daniel's issues.

If you wish to run a leveldb-only Riak CS cluster, you still *must*
use the advanced.config file and the riak_cs_kv_multi_backend, and the
other settings that I mention in my response and in the docs. Notice
the multi_backend_prefix_list setting, for one thing.

Daniel -

The storage_backend setting in advanced.config will *override*
storage_backend in riak.conf. If you wish to ensure the riak.conf
setting is overridden, you may comment it out in that file.

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Mar 10, 2017 at 9:08 AM, Alexander Sicular  wrote:
>
> Hi Daniel,
>
> Riak CS uses multi by default. By default the manifests are stored in leveldb 
> and the blobs/chunks are stored in bitcask. If you're looking to force 
> everything to level you should remove multi and use level as the backend 
> setting. As Luke noted elsewhere, this configuration hasn't been fully tested 
> and is not supported.
>
> Off the top of my head, take a look at the email Martin (?) sent about his 
> modified level backend a few weeks ago for reasons why using level for data 
> chunks may not be the best idea at this time.
>
> Thanks,
> Alexander
>
> @siculars
> http://siculars.posthaven.com
>
> Sent from my iRotaryPhone
>
> On Mar 10, 2017, at 10:50, Daniel Miller  wrote:
>
> Hi Luke,
>
> Again, thanks for your help. We are currently preparing to move all objects 
> into a new cluster using the S3 API. One question on configuration: currently 
> I have "storage_backend = leveldb" in my riak.conf. I assume that on the new 
> cluster, in addition to using the advanced.config you provided, I also need 
> to set "storage_backend = multi" in riak.conf – is that correct?
>
> Referring back to the subject of this thread for a bit, I'm assuming your 
> current theory for why the (most recent) object went missing is because we 
> have a bad backend configuration. Note that that object went missing weeks 
> after it was originally written into riak, and it was successfully retrieved 
> many times before it went missing. Is there a way I can query riak to verify 
> your theory that the manifest was overwritten? Russel Brown suggested: "I 
> wonder if you can get the manifest and then see if any/all of the chunks are 
> present?" Would that help to answer the question about why the object went 
> missing? Can you provide any hints on how to do that?
>
> While bad configuration may be the cause of this most recent object going 
> missing, it does not explain the original two objects that went missing 
> immediately after they were PUT. Those original incidents happened when our 
> cluster was still using bitcask/mutli backend, so should not have been 
> affected by bad configuration.
>
> ~ Daniel
>
> On Tue, Mar 7, 2017 at 3:58 PM, Luke Bakken  wrote:
>>
>> Hi Daniel,
>>
>> Thanks for providing all of that information.
>>
>> You are missing important configuration for riak_kv that can only be 
>> provided in an /etc/riak/advanced.config file. Please see the following 
>> document, especially the section to which I link here:
>>
>> http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-for-cs/#setting-up-the-proper-riak-backend
>>
>> [
>> {riak_kv, [
>> % NOTE: double-check this path for your environment:
>> {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-2.1.1/ebin"]},
>> {storage_backend, riak_cs_kv_multi_backend},
>> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
>> {multi_backend_default, be_default},
>> {multi_backend, [
>> {be_default, riak_kv_eleveldb_backend, [
>> {data_root, "/opt/data/ecryptfs/riak"}
>> ]},
>> {be_blocks, riak_kv_eleveldb_backend, [
>> {data_root, "/opt/data/ecryptfs/riak_blocks"}
>> ]}
>> ]}
>> ]}
>> ].
>>
>> Your configuration will look like the above. The contents of this file are 
>> merged with the contents of /etc/riak/riak.conf to produce the configuration 
>> that Riak uses.
>>
>> Notice that I chose riak_kv_eleveldb_backend twice because of the discussion 
>> you had previously about RAM usage and bitcask 
>> (http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/018801.html)
>>
>> In your current configuration, you are not using the expected prefix for the 
> block data. My guess is that on very rare occasions your data happens to
> overwrite the manifest for a file. You may also have corrupted files at this
> point without noticing it at all.

Fwd: Object not found after successful PUT on S3 API

2017-03-08 Thread Luke Bakken
> Thanks for taking the time to look into this Luke. I should have asked
> more questions when I set up the configuration for the leveldb backend, since
> there is no clear documentation for how to configure CS with leveldb only.

The reason for this is that a leveldb-only configuration is neither
supported nor tested. I re-read your previous thread and found this message
which gave instructions to not use the multi backend (https://goo.gl/BL6HXI).
At this time, I believe those to be incorrect instructions and that you
still must use riak_cs_kv_multi_backend where each sub-backend is
riak_kv_eleveldb_backend.

> Do you have a recommendation to get my data to a new state? Will it
> work if I create new nodes and replace each existing node with a new node
> configured correctly? Or do I need a more involved migration process?

Since you are changing your backend configuration completely, the best path
forward is to set up an entirely new cluster and re-save your data there
through the API. As I mentioned in my last email, there is no guarantee
your current data isn't corrupted somehow.

--
Luke Bakken
Engineer
lbak...@basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-03-07 Thread Luke Bakken
Hi Daniel,

Thanks for providing all of that information.

You are missing important configuration for riak_kv that can only be
provided in an /etc/riak/advanced.config file. Please see the following
document, especially the section to which I link here:

http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-for-cs/#setting-up-the-proper-riak-backend

[
{riak_kv, [
% NOTE: double-check this path for your environment:
{add_paths, ["/usr/lib/riak-cs/lib/riak_cs-2.1.1/ebin"]},
{storage_backend, riak_cs_kv_multi_backend},
{multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
{multi_backend_default, be_default},
{multi_backend, [
{be_default, riak_kv_eleveldb_backend, [
{data_root, "/opt/data/ecryptfs/riak"}
]},
{be_blocks, riak_kv_eleveldb_backend, [
{data_root, "/opt/data/ecryptfs/riak_blocks"}
]}
]}
]}
].

Your configuration will look like the above. The contents of this file are
merged with the contents of /etc/riak/riak.conf to produce the
configuration that Riak uses.

Notice that I chose riak_kv_eleveldb_backend twice because of the
discussion you had previously about RAM usage and bitcask
(http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/018801.html)

In your current configuration, you are not using the expected prefix for
the block data. My guess is that on very rare occasions your data happens
to overwrite the manifest for a file. You may also have corrupted files at
this point without noticing it at all.

IMPORTANT: you can't switch from your current configuration to this new
one without re-saving all of your data.

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Mar 7, 2017 at 6:47 AM, Daniel Miller  wrote:

> Responses inline.
>
> On Mon, Mar 6, 2017 at 3:04 PM, Luke Bakken  wrote:
>
>> Hi Daniel,
>>
>> Two questions:
>>
>> * Do you happen to have an /etc/riak/app.config file present?
>>
>
> No.
>
> Not sure if relevant, but I did notice that /etc/riak-cs/advanced.config
> does exist, which contradicts with what I said earlier. This is surprising
> to me because I did not create this file. Maybe it was created by the riak
> installer? Anyway, the content is:
>
> $ cat /etc/riak-cs/advanced.config
> [
>  {riak_cs,
>   [
>   ]}
> ].
>
>
>>
>> * On one of your Riak nodes, could you please execute the following
>> commands:
>>
>> riak attach
>> rp(application:get_all_env(riak_kv)).
>>
>> Copy the output of the previous command and attach as a separate file
>> to your response. Please note that the period is significant. Use
>> CTRL-C CTRL-C to exit the "riak attach" session.
>>
>
> Attached.
>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-03-06 Thread Luke Bakken
Hi Daniel,

Two questions:

* Do you happen to have an /etc/riak/app.config file present?

* On one of your Riak nodes, could you please execute the following commands:

riak attach
rp(application:get_all_env(riak_kv)).

Copy the output of the previous command and attach as a separate file
to your response. Please note that the period is significant. Use
CTRL-C CTRL-C to exit the "riak attach" session.

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Mar 6, 2017 at 11:29 AM, Daniel Miller  wrote:
> Hi Luke,
>
> I do not have an advanced.config file since I switched to leveldb storage
> backend. Generated configs attached.
>
> Hopefully not relevant, the data root is on an ecryptfs volume.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-03-06 Thread Luke Bakken
Hi Daniel -

Did you forget to include the advanced.config file in your archive of
configuration files? I only see three *.conf.j2 files. The reason I
ask is that the following settings are critical to Riak CS functioning
correctly:

http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-for-cs/#setting-up-the-proper-riak-backend

I realize you have replaced the "be_blocks" backed with leveldb, but I
would like to confirm that you have the other settings.

In fact it would be best to archive the generated.configs directory
from one of your Riak nodes to include here.

Thanks

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Mar 6, 2017 at 7:07 AM, Daniel Miller  wrote:
> I recently had another case of a disappearing object. This time the object
> was successfully PUT, and (unlike the previous cases reported in this
> thread) for a period of time GETs were also successful. Then GETs started
> 404ing for no apparent reason. There are no errors in the logs to indicate
> that anything unusual happened. This is quite disconcerting. Is it normal
> that Riak CS just loses track of objects? At this point we are using CS as
> primary object storage, meaning we do not have the data stored in another
> database so it's critical that the data is not randomly lost



Riak Erlang Client 2.5.3 released

2017-03-06 Thread Luke Bakken
Hi everyone -

Version 2.5.3 of the Riak Erlang client is available from both GitHub
and hex.pm. This is the first version published to hex.pm from Basho,
and I'd like to thank Drew Kerrigan for his assistance in getting that
set up.

https://hex.pm/packages/riakc

https://github.com/basho/riak-erlang-client/releases/tag/2.5.3

https://github.com/basho/riak-erlang-client/blob/master/RELNOTES.md

https://github.com/basho/riak-erlang-client/issues?q=milestone%3Ariak-erlang-client-2.5.3

Previous release milestones can be found here:

https://github.com/basho/riak-erlang-client/milestones?state=closed

Thanks! As always, issues and PRs are welcome via the project's GitHub page.

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Issue with yokozuna_extractor_map (riak 2.1.1)

2017-02-28 Thread Luke Bakken
Hi Simon -

Did you copy the .beam file for your custom extractor to a directory
in the Erlang VM's code path?

If you run "pgrep -a beam.smp" you'll see an argument to beam.smp like this:

-pa /home/lbakken/Projects/basho/riak_ee-2.1.1/rel/riak/bin/../lib/basho-patches

On my machine, that adds the
"/home/lbakken/Projects/basho/riak_ee-2.1.1/rel/riak/lib/basho-patches"
directory to the code path. You will see something that starts with
"/usr/lib/riak/.." or "/usr/lib64/riak/..." in your environment.

You must copy the .beam file to the "basho-patches" directory, and
re-start Riak. Then your extractor code will be found.
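As a sanity check, the `-pa` entries can be pulled out of that `pgrep -a beam.smp` output programmatically. A small stdlib-only sketch (the sample command line below is hypothetical; feed it the real `pgrep` output):

```python
# Extract the directories added to the Erlang code path via -pa
# from a beam.smp command line, as printed by `pgrep -a beam.smp`.
def code_path_dirs(cmdline):
    args = cmdline.split()
    return [args[i + 1] for i, arg in enumerate(args[:-1]) if arg == "-pa"]

# Hypothetical sample; substitute the actual pgrep output.
sample = ("12345 beam.smp -K true "
          "-pa /usr/lib64/riak/lib/basho-patches -- -root /usr/lib64/riak")
print(code_path_dirs(sample))  # -> ['/usr/lib64/riak/lib/basho-patches']
```

If the directory holding your extractor's .beam file is not in that list, the module cannot be loaded.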

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Feb 28, 2017 at 3:54 AM, Simon Jaspar
 wrote:
> Hi,
>
> I’m currently experimenting with riak 2.1.1 for a project. I recently ran
> into some trouble with yokozuna trying to register a custom extractor.
>
> I’m not sure how I ended up in that situation, but I’m currently stuck with
> my cluster's yokozuna_extractor_map equal to the atom ignore…
>
> I remember having the default extractor map there, before I try to register
> a custom extractor (following basho documentation
> https://docs.basho.com/riak/kv/2.2.0/developing/usage/custom-extractors/ ),
> and end up here.
>
> While attached to one of my riak's node, running yz_extractor:get_map().
> returns ignore.
>
> And trying to register a new extractor
> yz_extractor:register("custom_extractor",yz_noop_extractor). returns
> already_registered , with this in my logs :
>
> 2017-02-28 11:41:39.265 [error]
> <0.180.0>@riak_core_ring_manager:handle_call:406 ring_trans: invalid return
> value:
> {'EXIT',{function_clause,[{orddict,find,["custom_extractor",ignore],[{file,"orddict.erl"},{line,80}]},{yz_extractor,get_def,3,[{file,"src/yz_extractor.erl"},{line,67}]},{yz_extractor,register_map,2,[{file,"src/yz_extractor.erl"},{line,138}]},{yz_misc,set_ring_trans,2,[{file,"src/yz_misc.erl"},{line,302}]},{riak_core_ring_manager,handle_call,3,[{file,"src/riak_core_ring_manager.erl"},{line,389}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,585}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
>
> I have been trying to bypass that issue by reseting the extractor map to its
> default value using lower level functions from yokozuna source code, but
> with no success.
>
> If anyone has any idea or solution that’d be great !
>
> Thanks in advance for your help.
>
> Best,
> Simon JASPAR
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: Update metadata of entries in bucket

2017-02-24 Thread Luke Bakken
Hi Grigory,

Check out this article:
http://basho.com/posts/technical/webinar-recap-mapreduce-querying-in-riak/

Specifically, the use of riak:local_client() and C:put
--
Luke Bakken
Engineer
lbak...@basho.com


On Thu, Feb 23, 2017 at 10:38 AM, Grigory Fateyev  wrote:
> Hello!
>
> I'm trying to write riak_pipe command that updates metadata, the code:
> https://gist.github.com/greggy/7d7fa3102d89673019410c6e244650cd
>
> I'm getting every entry in update_metadata/1 then creating a new metadata,
> updating it in Item.
>
> My question is how to update r_object in a bucket?
>
> Thank you!
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: Start up problem talking to Riak

2017-02-17 Thread Luke Bakken
Another option would be to use Riak via Docker:
https://hub.docker.com/r/basho/riak-kv/

Or, use VMs, but that's pretty heavy.

If you have issues building Riak, that's what this mailing list is
for. Good luck!

Luke

On Fri, Feb 17, 2017 at 10:20 AM, AWS  wrote:
> Total Linux newby - scared of that one. I will try though :-)
>
> I am building this to help with my university project course so not going
> live at all.
> Thanks
>
> David
>
>
> ----- Original Message -
> From: "Luke Bakken" 
> To: "AWS" 
> Cc: "riak-users" 
> Subject: Re: Start up problem talking to Riak
> Date: 02/17/2017 17:49:18 (Fri)
>
> On Fri, Feb 17, 2017 at 9:30 AM, AWS  wrote:
>> When I had three AWS servers it was easy to get them working together as
>> they each had a separate install. I would like to install three copies of
>> Riak KV on the same machine. How would I do that given that the install
>> package always installs to "Riak"?
>
> The packages are not meant to be used this way. If you need to run
> multiple nodes on the same server (only for development purposes,
> right??) then building Riak from source is the best way to do so:
>
> http://docs.basho.com/riak/kv/2.2.0/setup/installing/source/
>
> You will want to use a command like this:
>
> make DEVNODES=3 stagedevrel
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>



Re: Start up problem talking to Riak

2017-02-17 Thread Luke Bakken
On Fri, Feb 17, 2017 at 9:30 AM, AWS  wrote:
> When I had three AWS servers it was easy to get them working together as
> they each had a separate install. I would like to install three copies of
> Riak KV on the same machine. How would I do that given that the install
> package always installs to "Riak"?

The packages are not meant to be used this way. If you need to run
multiple nodes on the same server (only for development purposes,
right??) then building Riak from source is the best way to do so:

http://docs.basho.com/riak/kv/2.2.0/setup/installing/source/

You will want to use a command like this:

make DEVNODES=3 stagedevrel

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Object not found after successful PUT on S3 API

2017-02-09 Thread Luke Bakken
Hi Daniel -

I don't have any ideas at this point. Has this scenario happened again?

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jan 25, 2017 at 2:11 PM, Daniel Miller  wrote:
> Thanks for the quick response, Luke.
>
> There is nothing unusual about the keys. The format is a name + UUID + some
> other random URL-encoded charaters, like most other keys in our cluster.
>
> There are no errors near the time of the incident in any of the logs (the
> last [error] is from over a month before). I see lots of messages like this
> in console.log:
>
> /var/log/riak/console.log
> 2017-01-20 15:38:10.184 [info]
> <0.22902.1193>@riak_kv_exchange_fsm:key_exchange:263 Repaired 2 keys during
> active anti-entropy exchange of
> {776422744832042175295707567380525354192214163456,3} between
> {776422744832042175295707567380525354192214163456,'riak-fa...@fake3.fake.com'}
> and
> {822094670998632891489572718402909198556462055424,'riak-fa...@fake9.fake.com'}
> 2017-01-20 15:40:39.640 [info]
> <0.21789.1193>@riak_kv_exchange_fsm:key_exchange:263 Repaired 1 keys during
> active anti-entropy exchange of
> {936274486415109681974235595958868809467081785344,3} between
> {959110449498405040071168171470060731649205731328,'riak-fa...@fake3.fake.com'}
> and
> {981946412581700398168100746981252653831329677312,'riak-fa...@fake5.fake.com'}
> 2017-01-20 15:46:40.918 [info]
> <0.13986.1193>@riak_kv_exchange_fsm:key_exchange:263 Repaired 2 keys during
> active anti-entropy exchange of
> {662242929415565384811044689824565743281594433536,3} between
> {685078892498860742907977265335757665463718379520,'riak-fa...@fake3.fake.com'}
> and
> {707914855582156101004909840846949587645842325504,'riak-fa...@fake6.fake.com'}
> 2017-01-20 15:48:25.597 [info]
> <0.29943.1193>@riak_kv_exchange_fsm:key_exchange:263 Repaired 2 keys during
> active anti-entropy exchange of
> {776422744832042175295707567380525354192214163456,3} between
> {776422744832042175295707567380525354192214163456,'riak-fa...@fake3.fake.com'}
> and
> {799258707915337533392640142891717276374338109440,'riak-fa...@fake0.fake.com'}
>
> Thanks!
> Daniel
>
>
>
> On Wed, Jan 25, 2017 at 9:45 AM, Luke Bakken  wrote:
>>
>> Hi Daniel -
>>
>> This is a strange scenario. I recommend looking at all of the log
>> files for "[error]" or other entries at about the same time as these
>> PUTs or 404 responses.
>>
>> Is there anything unusual about the key being used?
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Wed, Jan 25, 2017 at 6:40 AM, Daniel Miller  wrote:
>> > I have a 9-node Riak CS cluster that has been working flawlessly for
>> > about 3
>> > months. The cluster configuration, including backend and bucket
>> > parameters
>> > such as N-value are using default settings. I'm using the S3 API to
>> > communicate with the cluster.
>> >
>> > Within the past week I had an issue where two objects were PUT resulting
>> > in
>> > a 200 (success) response, but all subsequent GET requests for those two
>> > keys
>> > return status of 404 (not found). Other than the fact that they are now
>> > missing, there was nothing out of the ordinary with these particular to
>> > PUTs. Maybe I'm missing something, but this seems like a scenario that
>> > should never happen. All information included here about PUTs and GETs
>> > comes
>> > from reviewing the CS access logs. Both objects were PUT on the same
>> > node,
>> > however GET requests returning 404 have been observed on all nodes.
>> > There is
>> > plenty of other traffic on the cluster involving GETs and PUTs that are
>> > not
>> > failing. I'm unsure of how to troubleshoot further to find out what may
>> > have
>> > happened to those objects and why they are now missing. What is the best
>> > approach to figure out why an object that was successfully PUT seems to
>> > be
>> > missing?
>> >
>> > Thanks!
>> > Daniel Miller
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>
>



Re: Cross-Compile Riak for Embedded Device

2017-02-09 Thread Luke Bakken
Hi Darshan,

Riak's memory requirements don't lend it to use on embedded devices.
Which device are you targeting?

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Feb 8, 2017 at 5:49 AM, Darshan Shah  wrote:
> Hi,
>
> I am new to Riak and I would like to know steps to cross compile Riak for
> Embedded Unit.
> From website I found steps to compile with source code but I need steps to
> cross compile.
> It will be appreciated if you can give steps or links from which I can get
> idea to cross compile.
>
> Thanking in advance,
> Darshan Shah
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: riak-cs fails to start after reimporting Docker container

2017-02-09 Thread Luke Bakken
Hi Jean-Marc -

Can you provide a complete archive of the log directory? I wonder if
another file might have more information.

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Feb 9, 2017 at 1:58 AM, Jean-Marc Le Roux
 wrote:
>
> Hello,
>
> here is the original github issue :
>
> https://github.com/basho/riak_cs/issues/1329
>
> I'm using riak-cs 2.1.1-1.el6 with stanchion 1.5.0-1.el6 on CentOS 6.8 in a 
> Docker container.
> To make the data persistent, the following directories are mounted from 
> outside the container :
>
> /var/log
> /var/lib/riak/
>
> Everything works fine except when I remove/reimport the container.
> Even when it's the same container.
> The riak data is here in /var/lib/riak (bitcask and leveldb stuff). ACLs look 
> fine on those files.
>
> Riak starts. Stanchion starts. But riak-cs won't start.
> With a riak-cs concole, it looks like the problem is here :
>>
>> (riak-cs@127.0.0.1)1> [os_mon] memory supervisor port (memsup): Erlang has 
>> closed
>>
>> =INFO REPORT 18-Jan-2017::09:38:31 ===
>> alarm_handler: {clear,system_memory_high_watermark}
>> [os_mon] cpu supervisor port (cpu_sup): Erlang has closed
>> {"Kernel pid 
>> terminated",application_controller,"{application_start_failure,riak_cs,{notfound,{riak_cs_app,start,[normal,[]]}}}"}
>
> var/log/riak-cs/access.log.2017_01_18_09 is empty.
> Here is what /var/log/riak-cs/crash.log says:
>>
>> 2017-01-18 09:38:31 =CRASH REPORT
>>   crasher:
>> initial call: application_master:init/4
>> pid: <0.148.0>
>> registered_name: []
>> exception exit: 
>> {{notfound,{riak_cs_app,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>> ancestors: [<0.147.0>]
>> messages: [{'EXIT',<0.149.0>,normal}]
>> links: [<0.147.0>,<0.7.0>]
>> dictionary: []
>> trap_exit: true
>> status: running
>> heap_size: 376
>> stack_size: 27
>> reductions: 119
>>   neighbours:



Re: Periodically solr down issue.

2017-02-04 Thread Luke Bakken
Hi Alex -

What is in solr.log ?
--
Luke Bakken
Engineer
lbak...@basho.com


On Sat, Feb 4, 2017 at 3:37 AM, Alex Feng  wrote:
> Hello Riak users,
>
> I recently found our solr system went down after running for some time; it
> recovered after a restart. But then it happened again.
>
> Below is the error output, do you guys have any clue about this ?
>
>
> 2017-02-04 18:12:22.329 [error] <0.830.0>@yz_kv:index_internal:237 failed to
> index object
> {{<<"process_test_result">>,<<83,104,105,76,111,110,103,90,105,9
> 5,77,66,80,65,67,75,45,231,148,159,228,186,167,232,189,166,233,151,180,50,48,49,55,50,52>>},<<"9H5AlII0OZm8xwnGbI">>}
> with error {"Failed to index docs",o
> ther,{error,{conn_failed,{error,econnrefused because
> [{yz_solr,index,3,[{file,"src/yz_solr.erl"},{line,205}]},{yz_kv,index,7,[{file,"src/yz_kv.erl"},{
> line,293}]},{yz_kv,index_internal,5,[{file,"src/yz_kv.erl"},{line,224}]},{riak_kv_vnode,actual_put,6,[{file,"src/riak_kv_vnode.erl"},{line,1619}]},{riak_k
> v_vnode,perform_put,3,[{file,"src/riak_kv_vnode.erl"},{line,1607}]},{riak_kv_vnode,do_put,7,[{file,"src/riak_kv_vnode.erl"},{line,1398}]},{riak_kv_vnode,h
> andle_command,3,[{file,"src/riak_kv_vnode.erl"},{line,558}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,346}]}]
>
>
>
> After restarting the node, we got this:
>
>
> 2017-02-04 18:13:47.026 [error] <0.4340.0>@yz_pb_search:maybe_process:111
> {solr_error,{500,"http://localhost:8093/internal_solr/process_test_result/select
> ",<<"{\"error\":{\"msg\":\"org.apache.solr.client.solrj.SolrServerException:
> Server refused connection at:
> http://nosql-3.dsdb:8093/internal_solr/process_
> test_result\",\"trace\":\"org.apache.solr.common.SolrException:
> org.apache.solr.client.solrj.SolrServerException: Server refused connection
> at: http://nos
> ql-3.dsdb:8093/internal_solr/process_test_result\\n\\tat
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:308)\\n\\tat
>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\\n\\tat
> org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)\\
> n\\tat
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)\\n\\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDis
> patchFilter.java:427)\\n\\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)\\n\\tat
> org.eclipse.jetty.servlet.ServletHa
> ndler$CachedChain.doFilter(ServletHandler.java:1419)\\n\\tat
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\\n\\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\\n\\tat
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\\n\\tat
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\\n\\tat
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\\n\\tat
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\\n\\tat
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\\n\\tat
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\\n\\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\\n\\tat
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\\n\\tat
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\\n\\tat
> org.eclipse.jetty.server.handler.HandlerWrap...">>}}
> [{yz_solr,search,3,[{file,"src/yz_solr.erl"},{line,292}]},{yz_pb_search,maybe_process,3,[{file,"src/yz_pb_search.erl"},{line,76}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,388}]},{riak_api_pb_server,connected,2,[{file,"src/riak_api_pb_server.erl"},{line,226}]},{riak_api_pb_server,decode_buffer,2,[{file,"src/riak_api_pb_server.erl"},{line,364}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]
> 2017-02-04 18:17:02.756 [error]
> <0.6871.0>@riak_core_handoff_receiver:handle_info:97 Handoff receiver for
> partition 0 exited abnormally after processing 0 objects from
> {"192.168.9.247",39483}:
> {error,{vnode_timeout,6,844,<<131,104,2,100,0,10,101,110,99,111,100,101,95,114,97,119,104,3,104,2,109,0,0,0,19,112,114,111,99,101,115,115,95,116,101,115,116,95,114,101,115,117,108,116,109,

Re: Reg:Continuous Periodic crashes after long operation

2017-02-01 Thread Luke Bakken
Thanks for the information. Yes, one RiakClient instance per Unix
process is correct.

I will see if there is a way for you to keep track of connections from
the client to Riak. Off the top of my head the Python client doesn't
have the ability to set limits.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Feb 1, 2017 at 1:59 PM, Steven Joseph  wrote:
> Hi Luke,
>
> Yes I am creating new client objects for each of my tasks.
>
> Please see this github issuse against the python client for some
> background as to why.
>
> https://github.com/basho/riak-python-client/issues/497
>
> Basicaly I ran into issues with concurrency when processes are forked.
>
> I might experiment with using process ids as keys to access a process
> specific riak client in forked child ?
>
>
> Regards
>
> Steven
>
> Luke Bakken  writes:
>
>> Hi Steven,
>>
>> At this point I suspect you're using the Python client in such a way
>> that too many connections are being created. Are you re-using the
>> RiakClient object or repeatedly creating new ones? Can you provide any
>> code that reproduces your issue?
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Tue, Jan 31, 2017 at 7:47 PM, Steven Joseph  wrote:
>>> Hi Luke,
>>>
>>> Here's the output of
>>>
>>> $ sysctl fs.file-max
>>>
>>> fs.file-max = 2500
>>>
>>> Regards
>>>
>>> Steven
>>>
>>> On Wed, Feb 1, 2017 at 9:30 AM Luke Bakken  wrote:
>>>>
>>>> Hi Steven,
>>>>
>>>> What is the output of this command on your systems?
>>>>
>>>> $ sysctl fs.file-max
>>>>
>>>> Mine is:
>>>>
>>>> fs.file-max = 1620211
>>>>
>>>> --
>>>> Luke Bakken
>>>> Engineer
>>>> lbak...@basho.com
>>>>
>>>>
>>>> On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph 
>>>> wrote:
>>>> > Hi Shaun,
>>>> >
>>>> > Im having this issue again, this time I have captured the system limits,
>>>> > while riak is still crashing.
>>>> >
>>>> > Please note lsof and prlimit outputs at bottom.



Re: Reg:Continuous Periodic crashes after long operation

2017-02-01 Thread Luke Bakken
Hi Steven,

At this point I suspect you're using the Python client in such a way
that too many connections are being created. Are you re-using the
RiakClient object or repeatedly creating new ones? Can you provide any
code that reproduces your issue?

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Jan 31, 2017 at 7:47 PM, Steven Joseph  wrote:
> Hi Luke,
>
> Here's the output of
>
> $ sysctl fs.file-max
>
> fs.file-max = 2500
>
> Regards
>
> Steven
>
> On Wed, Feb 1, 2017 at 9:30 AM Luke Bakken  wrote:
>>
>> Hi Steven,
>>
>> What is the output of this command on your systems?
>>
>> $ sysctl fs.file-max
>>
>> Mine is:
>>
>> fs.file-max = 1620211
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph 
>> wrote:
>> > Hi Shaun,
>> >
>> > Im having this issue again, this time I have captured the system limits,
>> > while riak is still crashing.
>> >
>> > Please note lsof and prlimit outputs at bottom.



Re: Reg:Continuous Periodic crashes after long operation

2017-01-31 Thread Luke Bakken
Hi Steven,

What is the output of this command on your systems?

$ sysctl fs.file-max

Mine is:

fs.file-max = 1620211

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Jan 31, 2017 at 12:22 PM, Steven Joseph  wrote:
> Hi Shaun,
>
> Im having this issue again, this time I have captured the system limits,
> while riak is still crashing.
>
> Please note lsof and prlimit outputs at bottom.



Re: Reg:Continuous Periodic crashes after long operation

2017-01-26 Thread Luke Bakken
Steven,

You may be able to get information via the lsof command as to what
process(es) are using many file handles (if that is the cause).

I searched for that particular error and found this GH issue:
https://github.com/emqtt/emqttd/issues/426

Which directed me to this page:
https://github.com/emqtt/emqttd/wiki/linux-kernel-tuning

Basho also has a set of recommended tuning parameters:
http://docs.basho.com/riak/kv/2.2.0/using/performance/

Do you have other error entries in any of Riak's logs at around the
same time as these messages? Particularly crash.log.

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Jan 26, 2017 at 4:42 AM, Steven Joseph  wrote:
> Hi Shaun,
>
> I have already set this to a very high value
>
> (r...@hawk1.streethawk.com)1> os:cmd("ulimit -n").
> "2500\n"
> (r...@hawk1.streethawk.com)2>
>
>
> So the issue is not that the limit is low, but maybe a resource leak ? As I
> mentioned our application processes continuously run queries on the cluster.
>
> Kind Regards
>
> Steven
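On the client side of a "too many open files" hunt, the current process's soft and hard descriptor limits (what `ulimit -n` reports) can be read directly; a stdlib-only sketch (the `resource` module is Unix-only):

```python
# Read the file-descriptor limits for the current process.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```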



Re: Object not found after successful PUT on S3 API

2017-01-25 Thread Luke Bakken
Hi Daniel -

This is a strange scenario. I recommend looking at all of the log
files for "[error]" or other entries at about the same time as these
PUTs or 404 responses.

Is there anything unusual about the key being used?
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jan 25, 2017 at 6:40 AM, Daniel Miller  wrote:
> I have a 9-node Riak CS cluster that has been working flawlessly for about 3
> months. The cluster configuration, including backend and bucket parameters
> such as N-value are using default settings. I'm using the S3 API to
> communicate with the cluster.
>
> Within the past week I had an issue where two objects were PUT resulting in
> a 200 (success) response, but all subsequent GET requests for those two keys
> return status of 404 (not found). Other than the fact that they are now
> missing, there was nothing out of the ordinary with these particular to
> PUTs. Maybe I'm missing something, but this seems like a scenario that
> should never happen. All information included here about PUTs and GETs comes
> from reviewing the CS access logs. Both objects were PUT on the same node,
> however GET requests returning 404 have been observed on all nodes. There is
> plenty of other traffic on the cluster involving GETs and PUTs that are not
> failing. I'm unsure of how to troubleshoot further to find out what may have
> happened to those objects and why they are now missing. What is the best
> approach to figure out why an object that was successfully PUT seems to be
> missing?
>
> Thanks!
> Daniel Miller
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: Riak CS race condition at start-up (was: Riak-CS issues when Riak endpoint fails-over to new server)

2017-01-20 Thread Luke Bakken
Hi Toby,

The process you use to run "riak-cs start" could use the "riak-admin
wait-for-service riak_kv" command to ensure Riak is ready first:

http://docs.basho.com/riak/kv/2.2.0/using/admin/riak-admin/#wait-for-service

--
Luke Bakken
Engineer
lbak...@basho.com


On Thu, Jan 19, 2017 at 5:38 PM, Toby Corkindale  wrote:
> Hi guys,
> I've switched our configuration around, so that Riak CS now talks to
> 127.0.0.1:8087 instead of the haproxy version.
>
> We have immediately re-encountered the problems that caused us to move to
> haproxy.
> On start-up, riak takes slightly longer than riak-cs to get ready, and so
> riak-cs logs the following then exits.
> Restarting riak-cs again (so now 15 seconds after Riak started) results in a
> successful start-up, but obviously this is really annoying for our ops guys
> to have to remember to do after restarting riak or rebooting a machine.
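Where the init system cannot invoke `riak-admin` directly, the same guard is just a retry loop. A generic stdlib sketch, in which `check` stands in for shelling out to `riak-admin wait-for-service riak_kv <node>` or probing the protocol buffers port:

```python
# Poll a readiness check until it passes or we give up.
import time

def wait_for(check, attempts=30, delay=1.0):
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```

A wrapper would call `wait_for(...)` between `riak start` and `riak-cs start` and abort if it returns False.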



Re: Active Anti Entropy Directory when AAE is disabled

2017-01-18 Thread Luke Bakken
Hi Arun -

I don't know the answer off the top of my head, but I suspect that
disabling AAE will leave that directory and the files in it untouched
afterward.

One way to find out would be to disable AAE and monitor the access
time of the files in the anti_entropy directory.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Jan 18, 2017 at 11:49 AM, Arun Rajagopalan
 wrote:
> Hello Riak Users
>
> Lets say I stop Active Anti-Entropy by disabling it. Will the node continue
> to populate the anti_entropy ?
>
> This is part of a thinking exercise in case you wonder why I would want to
> do that :)
>
> Thanks
> Arun
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
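One way to do that monitoring is a small script over the directory tree. A sketch (the anti_entropy path is a placeholder for your data dir; note that filesystems mounted with `noatime` will not update access times at all):

```python
# Report the newest access time of any file under a directory tree,
# e.g. Riak's anti_entropy data dir.
import os

def latest_atime(root):
    newest = 0.0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            newest = max(newest, os.stat(os.path.join(dirpath, name)).st_atime)
    return newest

# print(latest_atime("/var/lib/riak/anti_entropy"))  # hypothetical path
```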



Re: Deleted bucket key but still showing up in yokozuna search results

2017-01-18 Thread Luke Bakken
Hi Pulin,

Did this document eventually disappear from search results? You should
check your Riak logs and solr.log files for errors with regard to
communication between Riak and the Solr process.

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Dec 16, 2016 at 12:10 PM, Pulin Gupta  wrote:
> Hi,
>
> I am trying to delete a key with following command
> curl -XDELETE
> http://195.197.177.53:8098/types/buckettypename/buckets/bucketname/keys/2321845853185375
>
> Even though Riak KV results in 404 or not found, riak search i.e yokozuna
> still results the key with valid document id. something like following:
>  "_yz_id": "1*buckettypename*bucketname* 2321845853185375*81",
> "_yz_rb": "bucket name",
> "_yz_rk": "04054881627186",
> "_yz_rt": "buckettypename",
> "content": "dsds\n"
>
> Please help how to get it removed from yokozuna search results as well?
>
> Br,
> Pulin
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: consult some questions about riak,thank you

2017-01-17 Thread Luke Bakken
std::bad_alloc is thrown when memory can't be allocated. This can
happen when there is no more free RAM.

Do you have monitoring enabled on these servers where you can watch
memory consumption?

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Jan 13, 2017 at 8:21 AM, 270917674 <270917...@qq.com> wrote:
> Dear basho team:
>   I ran into a strange problem. We use riak/riak-cs to build a cluster; the
> cluster has 7 nodes, the riak KV backend is multi (leveldb & bitcask), and each
> node has 16G RAM. total_leveldb_mem_percent=50.
>   Today we found one node crashed, but without a crash log, and the erlang log
> has something like this:
>terminate called after throwing an instance of 'std::bad_alloc'  what():
> std::bad_alloc
>(riak kv version:2.0.5)
>[os_mon] memory supervisor port(memsup):Erlang has closed;
>[os_mon] cpu supervisor port(cpu_sup):Erlang has close.
>  We restarted this node and it works properly again. Using the (free -m)
> command to view memory, +/- buffers/cache shows used=2000Mb, available<14000Mb,
> while the memory status of the other nodes is: used 6000Mb~7000Mb.
> Can you help me analyze why the node crashed?
> I also found another problem on a different cluster. Under normal conditions
> the memory used was 4000Mb~5000Mb, but
> every 20 minutes it rises, roughly doubling, and then returns to normal.
> Why is that?
>
> thank you very much!
> --
> Microsoft (China) Co., Ltd.
> Mobile: +86
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
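Luke's suggestion to monitor memory consumption can be made concrete. A minimal sketch, assuming a Linux node: it parses /proc/meminfo-style text, and the 1 GiB threshold is an arbitrary example, not a Riak recommendation.

```python
# Watch free memory on a Riak node so the conditions that lead to
# std::bad_alloc (allocation failure) are caught before a crash.

def parse_meminfo(text):
    """Return {field: kilobytes} from /proc/meminfo-style input."""
    info = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        key, _, rest = line.partition(':')
        parts = rest.split()
        if parts and parts[0].isdigit():
            info[key.strip()] = int(parts[0])
    return info

def low_memory(info, min_free_kb=1024 * 1024):
    """True when available memory drops below min_free_kb (default 1 GiB)."""
    avail = info.get('MemAvailable', info.get('MemFree', 0))
    return avail < min_free_kb

sample = """MemTotal:       16336840 kB
MemFree:          412360 kB
MemAvailable:     903224 kB
"""
print(low_memory(parse_meminfo(sample)))  # True: under the 1 GiB threshold
```

In practice you would read the live file (`open("/proc/meminfo").read()`) on a timer and alert when `low_memory` flips to True.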


Re: Riak CS - admin keys changing

2017-01-12 Thread Luke Bakken
Hi Toby,

When you create the user, the data is stored in Riak (and is the
authoritative location). The values must match in the config files to
provide credentials used when connecting to various parts of your CS
cluster.

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Jan 12, 2017 at 3:47 PM, Toby Corkindale  wrote:
> Hi,
> In Riak CS, the admin key and secret is in the config files for both CS and
> Stanchion.
> Is that the authoritative location for the secrets, or is the
> initially-created admin user the source, and those just have to match?
>
> I tried to figure this out from the source code, but my Erlang really isn't
> up to scratch :(
>
> Toby
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Error creating bucket with LevelDB bucket-type

2016-12-15 Thread Luke Bakken
What is the output of this command:

curl 127.0.1.1:8098/types/ldb/buckets?buckets=true

I see that our docs do *not* give an example of listing buckets in a
bucket type:

http://docs.basho.com/riak/kv/2.2.0/developing/api/http/list-buckets/

I have opened an issue here to improve the documentation:
https://github.com/basho/basho_docs/issues/2337

On Thu, Dec 15, 2016 at 8:13 AM, Felipe Esteves
 wrote:
> Hi, Luke,
>
> The bucket_type with leveldb is called ldb. Buckets "teste and "books"
> already existed.
>
> Python3:
 myClient = riak.RiakClient(http_port=8098, protocol='http',
 host='127.0.1.1')
 myBucket = myClient.bucket('foo', bucket_type='ldb')
 keyb = myBucket.new('bar', data='foobar')
 keyb.store()
> 
 myClient.get_buckets()
> [, ]
>
> HTTP:
> curl 127.0.1.1:8098/buckets?buckets=true
>
> {"buckets":["teste","books"]}
>
> If I run the same procedure without specifying bucket_type, it works well.
> Couldn't figure out yet what's the problem.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
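The distinction in this thread — a bucket created under a bucket type is invisible to the default /buckets listing — can be sketched from Python with only the stdlib. Host and port are assumptions; listing buckets is expensive and should not be done in production hot paths.

```python
# Listing buckets with and without a bucket-type scope over the HTTP API.

import json
from urllib.request import urlopen

def buckets_url(host="127.0.0.1", port=8098, bucket_type=None):
    """Build the list-buckets URL, scoped to a bucket type if given."""
    base = "http://{}:{}".format(host, port)
    if bucket_type:
        return "{}/types/{}/buckets?buckets=true".format(base, bucket_type)
    return "{}/buckets?buckets=true".format(base)

def list_buckets(bucket_type=None):
    """Hit the node and return the bucket-name list (needs a running Riak)."""
    with urlopen(buckets_url(bucket_type=bucket_type)) as resp:
        return json.load(resp)["buckets"]

print(buckets_url())                   # default-type buckets only
print(buckets_url(bucket_type="ldb"))  # buckets created under type "ldb"
```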


Re: Error creating bucket with LevelDB bucket-type

2016-12-15 Thread Luke Bakken
> But I can find the created bucket when I run buckets?buckets=true
> Seems to me it isn't being persisted, I'm investigating.

What is the command you are running? Please provide the complete command.

--
Luke Bakken
Engineer
lbak...@basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client -- slow to shutdown

2016-11-28 Thread Luke Bakken
Hi Toby -

Thanks for reporting this. We can continue the discussion via GH issue #689.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Nov 23, 2016 at 9:58 PM, Toby Corkindale  wrote:
> Hi,
> I'm using the Java client via protocol buffers to Riak.
> (Actually I'm using it via Scala 2.11.8 on OpenJDK 8)
>
> After calling client.shutdown(), there is always a delay of 4 seconds before
> the app actually exits. Why is this, and what can I do about it?
>
> To demonstrate the issue, use these files:
> https://gist.github.com/TJC/9a6a174cb1419a7c32e8018c5a495e3d
>
> If you put both of them in a fresh directory and then run "sbt", it should
> grab various dependencies and stuff, and then you can use "compile" and
> "run" commands.
> (You'll need to do "export RIAK_SERVER=my.riak.cluster.net" in the shell
> before you run sbt)
>
> If you do "run" a few times, you'll see it always takes four seconds to get
> back to the sbt prompt. If you comment out the two riak statements in the
> source code (the connection and shutdown), then "run" a few times, it takes
> zero seconds.
>
> I've tested this outside of sbt and the same issue exists.. it's just easier
> to make a quick demo that works inside sbt.
>
> Also reported as https://github.com/basho/riak-java-client/issues/689
>
> Cheers
> Toby

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: rpberrorresp - Unable to access functioning riak node

2016-11-23 Thread Luke Bakken
Hi Andre,

Please remember to use "reply all" so the entire list can learn from
our communication.

There may be a bug where you aren't getting the text of the
"rpberrorresp" message. What you should do is check the error.log file
on both of your Riak nodes and look at the contents. There may be a
data format issue in the data you are sending to Riak TS

Please run riak-debug and attach the generated archives to this GitHub issue:

https://github.com/basho/riak-dotnet-client/issues/328

On the above issue, please include the following information:

* Riak .NET client version
* Riak TS version
* Table definition you're using
* Example data
* Example C# code to reproduce the issue (if possible)

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Nov 23, 2016 at 7:39 AM, Andre Roman
 wrote:
> Hey Luke,
>
> I wasn't able to push data into tables without either, received same error.
> Added load balancer since the error references a load balancer, even though
> originally there wasn't any in place.
>
> The load balancer I'm using is HAProxy.
>
> Best Regards,
>
>
> Andre Roman
> Automation Engineer
> Energy Metrics
> e. andre.ro...@energymetricsllc.com
> www.energymetricsllc.com | LinkedIn | Twitter
>
> -Original Message-
> From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf Of
> Luke Bakken
> Sent: Wednesday, November 23, 2016 10:33 AM
> To: A R 
> Cc: riak-users@lists.basho.com
> Subject: Re: rpberrorresp - Unable to access functioning riak node
>
> Hi Andre,
>
> If you remove the load balancer, does it work?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Tue, Nov 22, 2016 at 10:56 AM, A R  wrote:
>> To whom it may concern,
>>
>>
>> I've set up a 2 riak ts nodes and a load-balancer on independent machines.
>> I'm able to successfully create tables, list keys and buckets using
>> the C# SDK but am unable to push data in. It returns the following
>> error from
>>
>>
>> "Expected tsputresp, got rpberrorresp, Expected tsputresp, got
>> rpberrorresp, Unable to access functioning Riak node (load balancer
>> returned no nodes)."
>>
>>
>> I've configured the nodes/load-balancer as per the installation
>> procedure and was able to successfully connect the nodes, and
>> configure the load-balancer to direct traffic accordingly. May i be
>> missing a setting in the riak.conf to enable writing?
>>
>>
>> Best Regards,
>>
>> Andre
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: rpberrorresp - Unable to access functioning riak node

2016-11-23 Thread Luke Bakken
Hi Andre,

If you remove the load balancer, does it work?

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Nov 22, 2016 at 10:56 AM, A R  wrote:
> To whom it may concern,
>
>
> I've set up a 2 riak ts nodes and a load-balancer on independent machines.
> I'm able to successfully create tables, list keys and buckets using the C#
> SDK but am unable to push data in. It returns the following error from
>
>
> "Expected tsputresp, got rpberrorresp, Expected tsputresp, got rpberrorresp,
> Unable to access functioning Riak node (load balancer returned no nodes)."
>
>
> I've configured the nodes/load-balancer as per the installation procedure
> and was able to successfully connect the nodes, and configure the
> load-balancer to direct traffic accordingly. May i be missing a setting in
> the riak.conf to enable writing?
>
>
> Best Regards,
>
> Andre
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Initializing a commit hook

2016-11-19 Thread Luke Bakken
Hi Mav,

I opened the following issue to continue investigation:

https://github.com/basho/riak_kv/issues/1541

That would be the best place to continue discussion. I'll find time to
reproduce what you have reported.

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Nov 18, 2016 at 4:57 PM, Mav erick  wrote:
> No luck :(
>
> I set up a bucket type called test-bucket-type. I did NOT set data type.
> I set the hooks
> Ran your curl -X PUT. The Hook was not called. Tried several times, no luck
> I changed the curl to hit my non-typed bucket, and the commit hook hit
>
> $ riak-admin bucket-type list
> default (active)
> test-bucket-type (active)
> sets (active)
> maps (active)
> counters (active)
>
> I made sure the hooks are applied
> Also note there is **no** data type associated
> {
>   "props": {
> "active": true,
> "allow_mult": true,
> "basic_quorum": false,
> "big_vclock": 50,
> "chash_keyfun": {
>   "fun": "chash_std_keyfun",
>   "mod": "riak_core_util"
> },
> "claimant": "riak@10.243.44.165",
> "dvv_enabled": true,
> "dw": "quorum",
> "last_write_wins": false,
> "linkfun": {
>   "fun": "mapreduce_linkfun",
>   "mod": "riak_kv_wm_link_walker"
> },
> "n_val": 3,
> "notfound_ok": true,
> "old_vclock": 86400,
> "postcommit": [],
> "pr": 0,
> "precommit": [
>   {
> "fun": "precommit_hook",
> "mod": "commit_hooks"
>   }
> ],
> "pw": 0,
> "r": "quorum",
> "rw": "quorum",
> "small_vclock": 50,
> "w": "quorum",
> "young_vclock": 20
>   }
> }
>
> curl -4vvv -H 'Content-Type: text/plain'
> localhost:8098/types/test-bucket-type/buckets/test-bucket/keys/test-key -d
> "THIS IS THE DATA FOR TEST-KEY"

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Thanks for correcting that. Everything looks set up correctly.

How are you saving objects? If you're using HTTP, what is the URL?

Can you associate your precommit hook with a bucket type
("test-bucket-type" below) that is *not* set up for the "map" data
type and see if your hook is called correctly?

This command should save an object and trigger the precommit hook
*with* a bucket and bucket type:

curl -4vvv -H 'Content-Type: text/plain'
localhost:8098/types/test-bucket-type/buckets/test-bucket/keys/test-key
-d "THIS IS THE DATA FOR TEST-KEY"

Luke

On Fri, Nov 18, 2016 at 3:03 PM, Mav erick  wrote:
>
> * Hostname was NOT found in DNS cache
> *   Trying 127.0.0.1...
> * Connected to localhost (127.0.0.1) port 8098 (#0)
>> GET /types/maps/props HTTP/1.1
>> User-Agent: curl/7.35.0
>> Host: localhost:8098
>> Accept: */*
>>
> < HTTP/1.1 200 OK
> < Vary: Accept-Encoding
> * Server MochiWeb/1.1 WebMachine/1.10.6 (no drinks) is not blacklisted
> < Server: MochiWeb/1.1 WebMachine/1.10.6 (no drinks)
> < Date: Fri, 18 Nov 2016 22:27:56 GMT
> < Content-Type: application/json
> < Content-Length: 545
> <
> * Connection #0 to host localhost left intact
> {
>   "props": {
>     "active": true,
>     "allow_mult": true,
>     "basic_quorum": false,
>     "big_vclock": 50,
>     "chash_keyfun": {
>       "mod": "riak_core_util",
>       "fun": "chash_std_keyfun"
>     },
>     "claimant": "riak@10.243.44.165",
>     "datatype": "map",
>     "dvv_enabled": true,
>     "dw": "quorum",
>     "last_write_wins": false,
>     "linkfun": {
>       "mod": "riak_kv_wm_link_walker",
>       "fun": "mapreduce_linkfun"
>     },
>     "n_val": 3,
>     "notfound_ok": true,
>     "old_vclock": 86400,
>     "postcommit": [],
>     "pr": 0,
>     "precommit": [
>       {
>         "mod": "commit_hooks",
>         "fun": "precommit_hook"
>       }
>     ],
>     "pw": 0,
>     "r": "quorum",
>     "rw": "quorum",
>     "small_vclock": 50,
>     "w": "quorum",
>     "young_vclock": 20
>   }
> }
>
> $ curl -4vvv localhost:8098/types/maps/buckets/testbucket/props
> * Hostname was NOT found in DNS cache
> *   Trying 127.0.0.1...
> * Connected to localhost (127.0.0.1) port 8098 (#0)
>> GET /types/maps//buckets/testbucket/props HTTP/1.1
>> User-Agent: curl/7.35.0
>> Host: localhost:8098
>> Accept: */*
>>
> < HTTP/1.1 200 OK
> < Vary: Accept-Encoding
> * Server MochiWeb/1.1 WebMachine/1.10.6 (no drinks) is not blacklisted
> < Server: MochiWeb/1.1 WebMachine/1.10.6 (no drinks)
> < Date: Fri, 18 Nov 2016 22:59:22 GMT
> < Content-Type: application/json
> < Content-Length: 565
> <
> * Connection #0 to host localhost left intact
> {
>   "props": {
>     "name": "testbucket",
>     "active": true,
>     "allow_mult": true,
>     "basic_quorum": false,
>     "big_vclock": 50,
>     "chash_keyfun": {
>       "mod": "riak_core_util",
>       "fun": "chash_std_keyfun"
>     },
>     "claimant": "riak@10.243.44.165",
>     "datatype": "map",
>     "dvv_enabled": true,
>     "dw": "quorum",
>     "last_write_wins": false,
>     "linkfun": {
>       "mod": "riak_kv_wm_link_walker",
>       "fun": "mapreduce_linkfun"
>     },
>     "n_val": 3,
>     "notfound_ok": true,
>     "old_vclock": 86400,
>     "postcommit": [],
>     "pr": 0,
>     "precommit": [
>       {
>         "mod": "commit_hooks",
>         "fun": "precommit_hook"
>       }
>     ],
>     "pw": 0,
>     "r": "quorum",
>     "rw": "quorum",
>     "small_vclock": 50,
>     "w": "quorum",
>     "young_vclock": 20
>   }
> }

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
What is the output of these commands?

curl -4vvv localhost:8098/types/maps/props

curl -4vvv localhost:8098/types/maps/buckets/test-bucket/props

On Fri, Nov 18, 2016 at 2:21 PM, Mav erick  wrote:
> Luke
>
> I was able to change the properties with your URL, but still the hooks are
> not being called for typed buckets ONLY.
>
> The hook is being called for all buckets with no type. So I am sure that
> riak can find the beam file on all my nodes.
>
> I tried restarting riak on all nodes. Still same problem. The hook is called
> for buckets without a type, but wont be called for buckets with a type
>
>
> On 18 November 2016 at 16:45, Luke Bakken  wrote:
>>
>> Mav -
>>
>> You're not using the correct HTTP URL. You can use this command:
>>
>>
>> http://docs.basho.com/riak/kv/2.1.4/using/reference/bucket-types/#updating-a-bucket-type
>>
>> Or this URL:
>>
>> curl -XPUT localhost:8098/types/maps/props -H 'Content-Type:
>> application/json' -d
>> '{"props":{"precommit":[{"mod":"myhooks","fun":"precommit_hook"}]}}'
>>
>> Please ensure that the "myhooks" beam file is on all Riak servers in a
>> directory that will be picked up by Riak when it starts:
>>
>> http://docs.basho.com/riak/kv/2.1.4/using/reference/custom-code/
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav -

You're not using the correct HTTP URL. You can use this command:

http://docs.basho.com/riak/kv/2.1.4/using/reference/bucket-types/#updating-a-bucket-type

Or this URL:

curl -XPUT localhost:8098/types/maps/props -H 'Content-Type:
application/json' -d
'{"props":{"precommit":[{"mod":"myhooks","fun":"precommit_hook"}]}}'

Please ensure that the "myhooks" beam file is on all Riak servers in a
directory that will be picked up by Riak when it starts:

http://docs.basho.com/riak/kv/2.1.4/using/reference/custom-code/

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Nov 18, 2016 at 1:36 PM, Mav erick  wrote:
> Hi Luke
>
> I tried that and didn't work for a bucket with bucket type = maps. My erlang
> code below does work for buckets without types.
>
> But I think its because I didn't set the hook for the typed bucket
> correctly.Could you check my curl below, please ?
>
> I did this to set the hook
> curl -X PUT localhost:8098/riak/types/maps -H 'Content-Type:
> application/json' -d
> '{"props":{"precommit":[{"mod":"myhooks","fun":"precommit_hook"}]}}'
>
> That returns 204, but when I get the props ...
> curl http://localhost:8098/types/maps/props
> {
>   "props": {
> "active": true,
> "allow_mult": true,
> "basic_quorum": false,
> "big_vclock": 50,
> "chash_keyfun": {
>   "fun": "chash_std_keyfun",
>   "mod": "riak_core_util"
> },
> "claimant": "riak@10.243.44.165",
> "datatype": "map",
> "dvv_enabled": true,
> "dw": "quorum",
> "last_write_wins": false,
> "linkfun": {
>   "fun": "mapreduce_linkfun",
>   "mod": "riak_kv_wm_link_walker"
> },
> "n_val": 3,
> "notfound_ok": true,
> "old_vclock": 86400,
> "postcommit": [],
> "pr": 0,
> "precommit": [],
> "pw": 0,
> "r": "quorum",
> "rw": "quorum",
> "small_vclock": 50,
> "w": "quorum",
> "young_vclock": 20
>   }
> }
>
> The hook code is ...
>
> precommit_hook(Object) ->
>case riak_object:bucket(Object) of
>   {BucketType, Bucket} -> Bstr = binary_to_list(Bucket), Btstr =
> binary_to_list(BucketType);
>   Bucket -> Bstr = binary_to_list(Bucket), Btstr = <<"">>
>end,
>K = riak_object:key(Object),
>Kstr = binary_to_list(K),
>lager:info("MyHook Bucket type ~s, bucket ~s, key ~s", [Btstr, Bstr,
> Kstr]),
>Object.
>
>
> On 18 November 2016 at 14:15, Luke Bakken  wrote:
>>
>> Mav -
>>
>> Please remember to use "Reply All" so that the riak-users list can
>> learn from what you find out. Thanks.
>>
>> Thebucket = riak_object:bucket(Object),
>>
>> Can you check to see if "Thebucket" is really a two-tuple of
>> "{BucketType, Bucket}"? I believe that is what is returned.
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>> On Fri, Nov 18, 2016 at 10:54 AM, Mav erick  wrote:
>> > I have some initializing to do - like connecting to a notification
>> > server,
>> > before I can use the commit hook. But I think I have figured that out
>> > now
>> > that I learnt about supervisors and OTP
>> >
>> > I have one other question though ...
>> > How do I get the bucket type of the bucket of the key that was committed
>> > ?
>> >
>> > I am using these to get the key and bucket names. But I cant seem to
>> > find a
>> > call to get the bucket's bucket type
>> >
>> >   Thebucket = riak_object:bucket(Object),
>> >   Thekey = riak_object:key(Object),
>> >
>> > Thanks !
>> >
>> > On 18 November 2016 at 12:14, Luke Bakken  wrote:
>> >>
>> >> Mav -
>> >>
>> >> Can you go into more detail? The subject of your message is
>> >> "initializing a commit hook".
>> >>
>> >> --
>> >> Luke Bakken
>> >> Engineer
>> >> lbak...@basho.com
>> >>
>> >> On Thu, Nov 17, 2016 at 9:09 AM, Mav erick  wrote:
>> >> > Folks
>> >> >
>> >> > Is there way RIAK can call an erlang function in a module when RIAK
>> >> > starts
>> >> > up ?
>> >> >
>> >> > Thanks
>> >> > Mav
>> >
>> >
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
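A minimal Python sketch of the working approach from this thread: PUT the hook onto /types/<type>/props (not a /riak/types/... URL). Host, port, and the module/function names mirror the thread and are adjustable assumptions; Riak answers 204 on success, and the hook module's .beam must be on every node.

```python
# Installing an Erlang precommit hook on a bucket type over HTTP.

import json
from urllib.request import Request, urlopen

def precommit_props(module, function):
    """JSON body installing a single Erlang precommit hook."""
    return {"props": {"precommit": [{"mod": module, "fun": function}]}}

def set_precommit(bucket_type, module, function,
                  host="127.0.0.1", port=8098):
    """PUT the props onto the bucket type (needs a running node)."""
    url = "http://{}:{}/types/{}/props".format(host, port, bucket_type)
    req = Request(url,
                  data=json.dumps(precommit_props(module, function)).encode(),
                  headers={"Content-Type": "application/json"},
                  method="PUT")
    with urlopen(req) as resp:
        return resp.status  # expect 204
```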


Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav -

Please remember to use "Reply All" so that the riak-users list can
learn from what you find out. Thanks.

Thebucket = riak_object:bucket(Object),

Can you check to see if "Thebucket" is really a two-tuple of
"{BucketType, Bucket}"? I believe that is what is returned.

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Nov 18, 2016 at 10:54 AM, Mav erick  wrote:
> I have some initializing to do - like connecting to a notification server,
> before I can use the commit hook. But I think I have figured that out now
> that I learnt about supervisors and OTP
>
> I have one other question though ...
> How do I get the bucket type of the bucket of the key that was committed ?
>
> I am using these to get the key and bucket names. But I cant seem to find a
> call to get the bucket's bucket type
>
>   Thebucket = riak_object:bucket(Object),
>   Thekey = riak_object:key(Object),
>
> Thanks !
>
> On 18 November 2016 at 12:14, Luke Bakken  wrote:
>>
>> Mav -
>>
>> Can you go into more detail? The subject of your message is
>> "initializing a commit hook".
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>> On Thu, Nov 17, 2016 at 9:09 AM, Mav erick  wrote:
>> > Folks
>> >
>> > Is there way RIAK can call an erlang function in a module when RIAK
>> > starts
>> > up ?
>> >
>> > Thanks
>> > Mav
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
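The shape Luke describes — riak_object:bucket/1 returns either Bucket or the two-tuple {BucketType, Bucket} — means hook code must handle both. The same normalising pattern, sketched in Python (the "default" type name matches Riak's untyped-bucket convention):

```python
# Normalise a bucket identifier that may or may not carry a bucket type.

def split_bucket(bucket):
    """Return (bucket_type, bucket); untyped buckets get the 'default' type."""
    if isinstance(bucket, tuple):
        btype, name = bucket
        return btype, name
    return "default", bucket

print(split_bucket(("maps", "testbucket")))  # ('maps', 'testbucket')
print(split_bucket("testbucket"))            # ('default', 'testbucket')
```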


Re: Initializing a commit hook

2016-11-18 Thread Luke Bakken
Mav -

Can you go into more detail? The subject of your message is
"initializing a commit hook".

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Nov 17, 2016 at 9:09 AM, Mav erick  wrote:
> Folks
>
> Is there way RIAK can call an erlang function in a module when RIAK starts
> up ?
>
> Thanks
> Mav

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Calling erlang shell from the command line

2016-11-14 Thread Luke Bakken
Hi Arun -

When you install Riak it installs the Erlang VM to a well-known location,
like /usr/lib/riak/erts-5.9.1

You can use /usr/lib/riak/erts-5.9.1/bin/erlc and know that it is the same
Erlang that Riak is using.

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Nov 14, 2016 at 11:20 AM, Arun Rajagopalan <
arun.v.rajagopa...@gmail.com> wrote:
> Hi RIAK users
>
> I would like to attach to the riak shell and compile an .erl program and
> quit. The reason is I want to be absolutely sure I am building the erl
> program with the version of erlang that my riak installation has
>
> Something like
> riak attach c(myprog.erl).
> Ctrl-C a
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
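Luke's advice can be scripted: invoke the erlc bundled with Riak so the resulting .beam is built by the same Erlang the node runs. The erts path below is an example from this thread; check /usr/lib/riak for the version actually installed on your box.

```python
# Compile an .erl module with Riak's own bundled erlc.

import subprocess

RIAK_ERLC = "/usr/lib/riak/erts-5.9.1/bin/erlc"

def erlc_cmd(src, out_dir=".", erlc=RIAK_ERLC):
    """Command line for compiling src with Riak's bundled erlc."""
    return [erlc, "-o", out_dir, src]

def compile_module(src, out_dir="."):
    """Run the compile; raises CalledProcessError if erlc reports errors."""
    return subprocess.run(erlc_cmd(src, out_dir), check=True)
```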


Re: Riak-CS False 503 Service Unavailable Error on Client

2016-11-14 Thread Luke Bakken
Hi Anthony,

Looking at the linked issue it appears that the 503 response can be
returned erroneously when communication between a Riak CS and Riak
node has an error ("If the first member of the preflist is down").

Is there anything predictable about these errors? You say they come
from 1 client only?
--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Nov 1, 2016 at 7:51 AM, Valenti, Anthony
 wrote:
> We are having a lot of 503 Service Unvailable errors for 1 particular
> application client(s) when connecting to Riak-CS.  Everything looks fine in
> Riak/Riak-CS and when I check the Riak-CS access logs, I can see access from
> other applications to other buckets before, during and after the reported
> error time.  We have a 5 node cluster that are load balanced and all of them
> seem to be operating normally and they should be able to handle the incoming
> connections.  We did find this Jira in Github which looks like our exact
> problem (https://github.com/basho/riak_cs/issues/1283) , but there is 1
> comment and it was closed and I’m not sure of the fix/workaround/resolution.
> Has this been resolved in a later version than we are using – riak cs
> 1.5.3-1?  Is there a way to correct the issue from the client side?
>
>
>
> Thanks,
>
> Anthony
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Increasing listen() backlog on riak's HTTP api

2016-10-27 Thread Luke Bakken
Hi Rohit,

Mochiweb's max connections are set as an argument to the start()
function. I don't believe there is a way to increase it at run time.

If you're hitting the listen backlog, your servers aren't able to keep
up with the request workload. Are you doing any listing or mapreduce
operations?
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Oct 19, 2016 at 10:20 PM, Rohit Sanbhadti
 wrote:
> Hi all,
>
> I’ve been trying to increase the backlog that riak uses when opening a 
> listening socket with HTTP, as I’ve seen a fair number of backlog overflow 
> errors in my use case (we have a 10 node riak cluster which takes a lot of 
> traffic, and we certainly expect the peak of concurrent traffic to exceed the 
> default backlog size of 128). I just found out that there appears to be no 
> way to customize the backlog that riak passes to webmachine/mochiweb, as 
> indicated by this issue (https://github.com/basho/riak_api/issues/108).  Can 
> anyone recommend a way to increase this backlog without having to modify and 
> recompile the riak_api, or without switching to protocol buffers? Is there 
> any set of erlang commands I can run from the attachable riak console to 
> change the backlog and restart the listening socket?
>
> --
> Rohit S.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: enotconn error

2016-10-12 Thread Luke Bakken
Travis -

What are the client failures you are seeing? What Riak client library
are you using, and are you using the PB or HTTP interface to Riak?

The error message you provided indicates that the ping request
returned from Riak after haproxy closed the socket for the request.
One cause would be very high server load causing timeouts.
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Sep 14, 2016 at 7:23 AM, Travis Kirstine
 wrote:
> Hi all,
>
>
>
> I’m using haproxy with a riak 2.1.4 in a 5 node cluster.  I’m getting fairly
> consistent enotconn errors in riak which happen to coincide with client
> failures.  We’ve setup haproxy as recommended
> (https://gist.github.com/gburd/1507077) see below.  I’m running a leveldb
> backend with 9 GB max memory (I can go higher if needed).  I’m not sure at
> this point if I have a network issue or leveldb / riak issue.
>
>
>
> 2016-09-14 08:10:16 =CRASH REPORT
>
>   crasher:
>
> initial call: mochiweb_acceptor:init/3
>
> pid: <0.28104.442>
>
> registered_name: []
>
> exception error:
> {function_clause,[{webmachine_request,peer_from_peername,[{error,enotconn},{webmachine_request,{wm_reqstate,#Port<0.6539192>,[],undefined,undefined,undefined,{wm_reqdata,'GET',http,{1,0},"defined_in_wm_req_srv_init","defined_in_wm_req_srv_init",defined_on_call,defined_in_load_dispatch_data,"/ping","/ping",[],defined_in_load_dispatch_data,"defined_in_load_dispatch_data",500,1073741824,67108864,[],[],{0,nil},not_fetched_yet,false,{0,nil},<<>>,follow_request,undefined,undefined,[]},undefined,undefined,undefined}}],[{file,"src/webmachine_request.erl"},{line,150}]},{webmachine_request,get_peer,1,[{file,"src/webmachine_request.erl"},{line,124}]},{webmachine,new_request,2,[{file,"src/webmachine.erl"},{line,69}]},{webmachine_mochiweb,loop,2,[{file,"src/webmachine_mochiweb.erl"},{line,49}]},{mochiweb_http,headers,5,[{file,"src/mochiweb_http.erl"},{line,96}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> ancestors: ['http://192.168.18.64:8098_mochiweb',riak_api_sup,<0.319.0>]
>
> messages: []
>
> links: [<0.325.0>,#Port<0.6539192>]
>
> dictionary: []
>
> trap_exit: false
>
> status: running
>
> heap_size: 987
>
> stack_size: 27
>
> reductions: 963
>
>   neighbours:
>
>
>
>
>
> # haproxy.cfg
>
> global
>
> log 127.0.0.1 local2 info
>
> chroot  /var/lib/haproxy
>
> pidfile /var/run/haproxy.pid
>
> maxconn 256000
>
> userhaproxy
>
> group   haproxy
>
> spread-checks   5
>
> daemon
>
> quiet
>
> stats socket /var/lib/haproxy/stats
>
> defaults
>
> log global
>
> option  httplog
>
> option  dontlognull
>
> option  redispatch
>
> timeout connect 5000
>
> maxconn 256000
>
>
>
> frontend  main *:80
>
> modehttp
>
> acl url_static   path_beg   -i /static /images /javascript
> /stylesheets
>
> acl url_static   path_end   -i .jpg .gif .png .css .js
>
>
>
> backend static
>
> balance roundrobin
>
> server  static 127.0.0.1:4331 check
>
>
>
> backend app
>
> modehttp
>
> balance roundrobin
>
> server  wmts1riak 192.168.18.72:80 check
>
> server  wmts2riak 192.168.18.73:80 check
>
>
>
> backend riak_rest_backend
>
>mode   http
>
>balanceroundrobin
>
>option httpchk GET /ping
>
>option httplog
>
>server riak1 192.168.18.64:8098 weight 1 maxconn 1024  check
>
>server riak2 192.168.18.65:8098 weight 1 maxconn 1024  check
>
>server riak3 192.168.18.66:8098 weight 1 maxconn 1024  check
>
>server riak4 192.168.18.67:8098 weight 1 maxconn 1024  check
>
>server riak5 192.168.18.68:8098 weight 1 maxconn 1024  check
>
>
>
> frontend riak_rest
>
>bind   *:8098
>
>mode   http
>
>option contstats
>
>default_backendriak_rest_backend
>
>
>
> backend riak_protocol_buffer_backend
>
>balanceleastconn
>
>mode   tcp
>
>option tcpka
>
> 

Re: Erlang client map reduce?

2016-10-12 Thread Luke Bakken
Hi Brandon -

The riak_object module exports a type() function that will return the
bucket type of an object in Riak
(https://github.com/basho/riak_kv/blob/develop/src/riak_object.erl#L589-L592).

MapReduce docs:
http://docs.basho.com/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce/

In addition, here is a repository containing some exampe map/reduce
code: https://github.com/basho/riak_function_contrib/wiki

Having said all that, your use case may be better suited to Riak
Search. MapReduce is best run on an intermittent basis due to the load
it places on a cluster. Is your query something that will frequently
be run or could the output of the query be saved in Riak on a daily
basis, for instance?

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Sep 29, 2016 at 5:57 PM, Brandon Martin  wrote:
> So I am trying to figure out how to do a map reduce on a bucket type with
> the erlang client in erl. I didn’t see in the documentation how to do a map
> reduce with a bucket type. I have the bucket type and the bucket. I want to
> map reduce to basically filter out any documents whose createTime(which is
> just int/number) is less then 24 hours and return those. I have only been
> using riak for a few weeks and erlang for about a day. Any pointers or help
> would be appreciated.
>
> Thanks
>
> --
> Brandon Martin

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
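A hedged sketch of Brandon's 24-hour filter as an HTTP MapReduce job (POST the JSON to /mapred with Content-Type application/json). The createTime field and the JavaScript map phase follow his description; the bucket name is a placeholder, and typed buckets need a different "inputs" form (see the MapReduce docs). As Luke notes, run such full-bucket jobs sparingly: they touch every object.

```python
# Build a /mapred job keeping objects whose createTime (milliseconds)
# falls within the last 24 hours.

import time

def recent_posts_job(bucket, now_ms=None):
    """Job spec filtering JSON objects by a numeric createTime field."""
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    cutoff = now_ms - 24 * 60 * 60 * 1000
    map_src = ("function(v){"
               "var d = JSON.parse(v.values[0].data);"
               "return d.createTime >= " + str(cutoff) + " ? [d] : [];}")
    return {"inputs": bucket,
            "query": [{"map": {"language": "javascript",
                               "source": map_src,
                               "keep": True}}]}
```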


Re: High number of Riak buckets

2016-09-30 Thread Luke Bakken
Hi Vikram,

If all of your buckets use the same bucket type with your custom
n_val, there won't be a performance issue. Just be sure to set n_val
on the bucket type, and that all buckets are part of that bucket type.

http://docs.basho.com/riak/kv/2.1.4/developing/usage/bucket-types/

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Sep 29, 2016 at 4:42 PM, Vikram Lalit  wrote:
> Hi - I am creating a messaging platform wherein am modeling each topic to
> serve as a separate bucket. That means there can potentially be millions of
> buckets, with each message from a user becoming a value on a distinct
> timestamp key.
>
> My question is there any downside to modeling my data in such a manner? Or
> can folks advise a better way of storing the same in Riak?
>
> Secondly, I would like to modify the default bucket properties (n_val) - I
> understand that such 'custom' buckets have a higher performance overhead due
> to the extra load on the gossip protocol. Is there a way the default n_val
> of newly created buckets be changed so that even if I have the above said
> high number of buckets, there is no performance degrade? Believe there was
> such a config allowed in app.config but not sure that file is leveraged any
> more after riak.conf was introduced.
>
> Thanks much.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
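The pattern Luke recommends, sketched end to end: create one bucket type carrying the custom n_val, then put every topic bucket under it, so millions of buckets share a single property set with no gossip overhead. The type name "topics", n_val 5, host, and port are all assumptions. Type creation happens once via the CLI:

    riak-admin bucket-type create topics '{"props":{"n_val":5}}'
    riak-admin bucket-type activate topics

```python
# Build the HTTP key URL for a typed bucket; every bucket addressed this
# way inherits its props (including n_val) from the bucket type.

def typed_key_url(bucket, key, bucket_type="topics",
                  host="127.0.0.1", port=8098):
    """URL for a key in a typed bucket."""
    return "http://{}:{}/types/{}/buckets/{}/keys/{}".format(
        host, port, bucket_type, bucket, key)

print(typed_key_url("topic-42", "1475190000"))
# http://127.0.0.1:8098/types/topics/buckets/topic-42/keys/1475190000
```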


Re: a weird error while post request to server for store object

2016-09-07 Thread Luke Bakken
Hello Alan -

Which PHP client library are you using?

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Sep 6, 2016 at 10:29 PM, HQS^∞^  wrote:
> dear everyone:
> I followed the tutorial at
> http://docs.basho.com/riak/kv/2.1.4/developing/usage/document-store/
> step by step. When I POST a request to store an object, the
> Riak server responds 400 (Bad Request). I reviewed my code again and again,
> but found no problem. See below:
>
> 
>
> class BlogPost {
>  var  $_title = '';
>  var  $_author = '';
>  var  $_content = '';
>  var  $_keywords = [];
>  var  $_datePosted = '';
>  var  $_published = false;
>  var  $_bucketType = "cms";
>  var  $_bucket = null;
>  var  $_riak = null;
>  var  $_location = null;
>   public function __construct(Riak $riak, $bucket, $title, $author,
> $content, array $keywords, $date, $published)
>   {
> $this->_riak = $riak;
> $this->_bucket = new Bucket($bucket, "cms");
> $this->_location = new Riak\Location('blog1',$this->_bucket,"cms");
> $this->_title = $title;
> $this->_author = $author;
> $this->_content = $content;
> $this->_keywords = $keywords;
> $this->_datePosted = $date;
> $this->_published = $published;
>   }
>
>   public function store()
>   {
> $setBuilder = (new UpdateSet($this->_riak));
>
> foreach($this->_keywords as $keyword) {
>   $setBuilder->add($keyword);
> }
> /*
>(new UpdateMap($this->_riak))
>   ->updateRegister('title', $this->_title)
>   ->updateRegister('author', $this->_author)
>   ->updateRegister('content', $this->_content)
>   ->updateRegister('date', $this->_datePosted)
>   ->updateFlag('published', $this->_published)
>   ->updateSet('keywords', $setBuilder)
>   ->withBucket($this->_bucket)
>   ->build()
>   ->execute();
>
> */
>$response = (new UpdateMap($this->_riak))
>   ->updateRegister('title', $this->_title)
>   ->updateRegister('author', $this->_author)
>   ->updateRegister('content', $this->_content)
>   ->updateRegister('date', $this->_datePosted)
>   ->updateFlag('published', $this->_published)
>   ->updateSet('keywords', $setBuilder)
>   ->atLocation($this->_location)
>   ->build()
>   ->execute();
>
> echo '';
>   var_dump($response);
> echo '';
>   }
> }
>
>  $node = (new Node\Builder)
> ->atHost('192.168.111.2')
> ->onPort(8098)
> ->build();
>
> $riak = new Riak([$node]);
>
>
> $keywords = ['adorbs', 'cheshire'];
> $date = new \DateTime('now');
>
>
> $post1 = new BlogPost(
>   $riak,
>   'cat_pics', // bucket
>   'This one is so lulz!', // title
>   'Cat Stevens', // author
>   'Please check out these cat pics!', // content
>   $keywords, // keywords
>   $date, // date posted
>   true // published
> );
> $post1->store();
>
> the wireshark captured packet :
>
> 192.168.171.124 (client ip) => 192.168.111.2 (riak server ip)   HTTP   511
>   POST /types/cms/buckets/cat_pics/datatypes/alldoc? HTTP/1.1   (application/json)
> 192.168.111.2 (riak server ip) => 192.168.171.124 (client ip)   HTTP   251
>   HTTP/1.1 400 Bad Request
>
>  GET http://192.168.111.2:8098//types/cms/buckets/cat_pics/props
> {"props":{"name":"cat_pics","young_vclock":20,"w":"quorum","small_vclock":50,"search_index":"blog_posts","rw":"quorum","r":"quorum","pw":0,"precommit":[],"pr":0,"postcommit":[],"old_vclock":86400,"notfound_ok":true,"n_val":3,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"last_write_wins":false,"dw":"quorum","dvv_enabled":true,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"big_vclock":50,"basic_quorum":false,"allow_mult":true,"datatype":"map","active":true,"claimant":"node1@192.168.111.1"}}
>
>please help me catch the bugs  thanks in advance!
>
> regards
>
> Alan



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-31 Thread Luke Bakken
Kyle -

Verify return code: 19 (self signed certificate in certificate chain)

Since your server cert is self-signed, there's not much more that can
be done at this point I believe. My security tests use a dedicated CA
where the Root cert is available for validation
(https://github.com/basho/riak-client-tools/tree/master/test-ca)

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Aug 31, 2016 at 3:11 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> I am getting the following information:
>
> Verify return code: 19 (self signed certificate in certificate chain)



Re: Using Riak CS with Hadoop

2016-08-31 Thread Luke Bakken
Riak CS provides an S3-capable API, so theoretically it could work.
Have you tried? If so and you're having issues, follow up here.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Aug 31, 2016 at 7:38 AM, Valenti, Anthony
 wrote:
> Has anyone set up Hadoop to be able to use Riak CS as an S3 source/destination
> instead of or in addition to Amazon S3?  Hadoop assumes that it should go to
> Amazon S3 by default.  Specifically, I am trying to use Hadoop distcp to
> copy files to Riak CS.



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Luke Bakken
This command will show the handshake used for HTTPS. It will show if
the server's certificate (the same one used for TLS) can be validated.

Using "openssl s_client" is a good way to start diagnosing what's
actually happening when SSL/TLS is enabled in Riak.

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Aug 30, 2016 at 2:18 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> I am using TLS for protocol buffer - not sure if you're thinking of HTTP only.
>
> Thanks
>
> -Kyle-
>
> -Original Message-
> From: Luke Bakken [mailto:lbak...@basho.com]
> Sent: Tuesday, August 30, 2016 2:14 PM
> To: Nguyen, Kyle
> Cc: Riak Users
> Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
> using Java client
>
> Kyle,
>
> I would be interested to see the output of this command run on the same 
> server as your Riak node:
>
> openssl s_client -debug -connect localhost:8098
>
> Please replace "8098" with the HTTPS port used in this configuration setting 
> in your /etc/riak.conf file:
>
> listener.https.internal



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Luke Bakken
Kyle,

I would be interested to see the output of this command run on the
same server as your Riak node:

openssl s_client -debug -connect localhost:8098

Please replace "8098" with the HTTPS port used in this configuration
setting in your /etc/riak.conf file:

listener.https.internal

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Aug 30, 2016 at 12:01 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> I believe this is not the case. The Java riak-client (version 2.0.6) that I 
> used does validate the server's cert but does not check the server's CN. If I
> replaced getACert CA in the trustor with another unknown CA then SSL will 
> fail with "unable to find valid certification path to requested target". I 
> don't even see an option to ignore server cert validation on the client side. 
> I am wondering if you can help provide some details related to SSL 
> certification validation configuration.
>
> My riak node builder code:
> RiakNode.Builder builder = new 
> RiakNode.Builder().withRemoteAddress("127.0.0.1").withRemotePort(8087);
> builder.withAuth(username, password, trustStore, keyStore, 
> keyPasswd);
>
> Thanks
>
> -Kyle-



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Luke Bakken
Kyle -

The CN should be either the DNS-resolvable host name of the Riak node,
or its IP address (without "riak@"). Then, the Java client should be
configured to use that to connect to the node (either DNS or IP).
Without doing that, I really don't have any idea how the Java client
is validating the server certificate during TLS handshake. Did you
configure the client to *not* validate the server cert?

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Aug 29, 2016 at 3:18 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> The CN for client's certificate is "kyle" and the CN for riak cert 
> (ssl.certfile) is "riak@127.0.0.1" which matches the nodename in the 
> riak.conf. Riak ssl.cacertfile.pem contains the same CA (getACert) which I 
> used to sign both client and riak public keys. It appears that riak also 
> validated the client certificate following this SSL debug info. I do see *** 
> CertificateVerify (toward the end) after the client certificate is requested 
> by Riak. Please let me know if it looks right to you.



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-29 Thread Luke Bakken
Hi Kyle -

Thanks for the info. Just so you know, setting check_clr = off means
that Riak will not validate the signing chain of your client
certificate.

What value are you using for "CN=" for the certificates pointed to by
the various "ssl.*" settings in riak.conf?

http://docs.basho.com/riak/kv/2.1.4/using/security/basics/#certificate-configuration

I ask because the validation of the server certificate by the client
during the TLS handshake depends on the CN= value.

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Aug 29, 2016 at 2:07 PM, Nguyen, Kyle  wrote:
> Thanks a lot, Luke! I finally got the mutual certificate based authentication 
> working by setting check_clr = off since I don't see any documentation on how 
> to set this up and we might not need this feature. Another thing that I added 
> to make it work is to add the correct entry for cidr. I was using 
> 127.0.0.1/32 instead of 10.0.2.2/32 which is the Ubuntu ip that my laptop 
> localhost is sending the request to.
>
> +-------+-------------+-------------+---------+
> | users | cidr        | source      | options |
> +-------+-------------+-------------+---------+
> | kyle  | 10.0.2.2/32 | certificate | []      |
> +-------+-------------+-------------+---------+
>
> TLS also works without using the DNS-resolvable hostname with protocol 
> buffer. Hence, I thought you must have referred to HTTPS.
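The cidr fix described above can be checked offline: Riak only applies a security source when the connecting client's address falls inside the source's CIDR, which is why a 127.0.0.1/32 entry never matched a client arriving from 10.0.2.2. A small sketch with Python's stdlib ipaddress module; the function name is ours, not Riak's:

```python
import ipaddress

def source_matches(client_ip: str, source_cidr: str) -> bool:
    """True if the client's address falls inside the security source's CIDR."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(source_cidr)

# The Ubuntu guest saw the laptop's requests arriving from 10.0.2.2:
assert not source_matches("10.0.2.2", "127.0.0.1/32")  # original entry: no match
assert source_matches("10.0.2.2", "10.0.2.2/32")       # corrected entry: match
```

Running a check like this against the output of "riak-admin security print-sources" is a quick way to confirm the source table actually covers the addresses your clients connect from.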
>
> -Kyle-
>
> -Original Message-
> From: Luke Bakken [mailto:lbak...@basho.com]
> Sent: Monday, August 29, 2016 7:59 AM
> To: Nguyen, Kyle
> Cc: Riak Users
> Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
> using Java client
>
> Kyle -
>
> What is the output of these commands?
>
> riak-admin security print-users
> riak-admin security print-sources
>
> http://docs.basho.com/riak/kv/2.1.4/using/security/basics/#user-management
>
> Please note that setting up certificate authentication *requires* that you 
> have set up SSL / TLS in Riak as well.
>
> http://docs.basho.com/riak/kv/2.1.4/using/security/basics/#enabling-ssl
>
> The SSL certificates used by Riak *must* have their "CN=" section match the 
> server's DNS-resolvable host name. This is an SSL/TLS requirement, not 
> specific to Riak. Then, when you connect via the Java client, you must use 
> the DNS name and not IP address. The client must have the appropriate public 
> key information to validate the server cert as well (from Get a Cert).
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com



Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-29 Thread Luke Bakken
Kyle -

What is the output of these commands?

riak-admin security print-users
riak-admin security print-sources

http://docs.basho.com/riak/kv/2.1.4/using/security/basics/#user-management

Please note that setting up certificate authentication *requires* that
you have set up SSL / TLS in Riak as well.

http://docs.basho.com/riak/kv/2.1.4/using/security/basics/#enabling-ssl

The SSL certificates used by Riak *must* have their "CN=" section
match the server's DNS-resolvable host name. This is an SSL/TLS
requirement, not specific to Riak. Then, when you connect via the Java
client, you must use the DNS name and not IP address. The client must
have the appropriate public key information to validate the server
cert as well (from Get a Cert).
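A much-simplified sketch of the name check the client performs during the TLS handshake. Real validation follows RFC 6125 (and prefers subjectAltName entries over CN); this toy only compares a CN pattern against the host name the client dialed, which is enough to see why connecting by IP fails when the CN holds a DNS name. The function name is ours, not any client API:

```python
import fnmatch

def cn_matches(cn: str, dialed_host: str) -> bool:
    """Case-insensitive match of a certificate CN (possibly wildcarded)
    against the host name the client used to connect. Simplified: real
    TLS matching restricts wildcards and checks subjectAltName too."""
    return fnmatch.fnmatchcase(dialed_host.lower(), cn.lower())

assert cn_matches("riak1.example.com", "riak1.example.com")
assert cn_matches("*.example.com", "riak1.example.com")
# Dialing by raw IP cannot match a DNS-name CN, so the handshake fails:
assert not cn_matches("riak1.example.com", "192.168.1.10")
```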

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Aug 26, 2016 at 3:34 PM, Nguyen, Kyle  wrote:
> Update – the handshake was successful after I opted out of the mutual authentication
> option; the client no longer sends its certificate to Riak. However, getting the
> following error after TLS is established:
>
>
>
> *** Finished
>
> verify_data:  { 149, 140, 49, 23, 238, 152, 45, 212, 158, 44, 189, 155 }
>
> ***
>
> %% Cached client session: [Session-12, TLS_RSA_WITH_AES_128_CBC_SHA256]
>
> nioEventLoopGroup-2-4, WRITE: TLSv1.2 Application Data, length = 21
>
> nioEventLoopGroup-2-4, called closeOutbound()
>
> …..
>
> Caused by: com.basho.riak.client.core.NoNodesAvailableException
>
> at
> com.basho.riak.client.core.RiakCluster.retryOperation(RiakCluster.java:469)
>
> at
> com.basho.riak.client.core.RiakCluster.access$1000(RiakCluster.java:48)
>
> at
> com.basho.riak.client.core.RiakCluster$RetryTask.run(RiakCluster.java:554)
>
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
> ... 1 more



Re: riak bitcask calculation

2016-07-21 Thread Luke Bakken
Hi Travis,

The memory used by Riak (beam.smp) is for more than key storage, of course.

You can assume that in your case RES will get to about the same amount
after a reboot, given a similar workload in Riak.

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Jul 18, 2016 at 8:54 AM, Travis Kirstine
 wrote:
> Yes, the reason I'm concerned is that we projected much lower memory usage 
> based on the calculations.   We originally provisioned 2x the required memory 
> and it appears that this will not be enough.
>
> Am I correct that top's RES memory for the beam.smp process is the
> memory being used by Riak for "key storage"? If the server were rebooted,
> would the memory eventually climb back to this level?
>
> Thanks for your help
>
> -Original Message-
> From: Luke Bakken [mailto:lbak...@basho.com]
> Sent: July-18-16 11:35 AM
> To: Travis Kirstine 
> Cc: riak-users@lists.basho.com; ac...@jdbarnes.com
> Subject: Re: riak bitcask calculation
>
> Hi Travis -
>
> The calculation provided for bitcask memory consumption is only a rough 
> guideline. Using more memory than the calculation suggests is normal and 
> expected with Riak. As you increase load on this cluster memory use may go up 
> further as the operating system manages disk operations and buffers.
>
> Is there a reason you're concerned about this usage?
>
> On Mon, Jul 18, 2016 at 8:28 AM, Travis Kirstine 
>  wrote:
>> Yes from the free command
>>
>> [root@riak1 ~]# free -g
>>   totalusedfree  shared  buff/cache   
>> available
>> Mem: 45   9   0   0  36  
>> 35
>> Swap:23   0  22
>>
>> Or from top
>>
>> PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
>> 24421 riak  20   0 16.391g 8.492g  41956 S  82.7 18.5  10542:10 beam.smp



Riak Python Client 2.5.5 released

2016-07-18 Thread Luke Bakken
Hello everyone,

I released version 2.5.5 of the Python client today. This fixes a
long-standing issue with multi-get and multi-put operations where the
thread pool did not shut down cleanly when the interpreter shut down.
Remember to "close()" your RiakClient instances to ensure cleanup.

https://pypi.python.org/pypi/riak/2.5.5

https://github.com/basho/riak-python-client/blob/master/RELNOTES.md

API docs: http://basho.github.io/riak-python-client/

https://github.com/basho/riak-python-client/releases/tag/2.5.5

Milestone in GH:

https://github.com/basho/riak-python-client/issues?q=milestone%3Ariak-python-client-2.5.5
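The close() recommendation above can be made automatic with the stdlib's contextlib.closing, which guarantees cleanup even when the body raises. A minimal sketch; DummyClient is a hypothetical stand-in for riak.RiakClient and assumes only that the real client exposes a close() method, as the release notes describe:

```python
from contextlib import closing

class DummyClient:
    """Hypothetical stand-in for riak.RiakClient; assumes only that the
    real client exposes close(), per the release notes above."""
    def __init__(self):
        self.closed = False

    def close(self):
        # The real client tears down its multi-get/multi-put thread pool here.
        self.closed = True

# closing() calls close() on exit, exception or not, so the thread pool
# is always shut down before the interpreter exits.
with closing(DummyClient()) as client:
    pass  # ... multiget / multiput work would go here ...

assert client.closed
```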

--
Luke Bakken
Engineer
lbak...@basho.com



Re: riak bitcask calculation

2016-07-18 Thread Luke Bakken
Hi Travis -

The calculation provided for bitcask memory consumption is only a
rough guideline. Using more memory than the calculation suggests is
normal and expected with Riak. As you increase load on this cluster
memory use may go up further as the operating system manages disk
operations and buffers.

Is there a reason you're concerned about this usage?

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Jul 18, 2016 at 8:28 AM, Travis Kirstine
 wrote:
> Yes from the free command
>
> [root@riak1 ~]# free -g
>   totalusedfree  shared  buff/cache   
> available
> Mem: 45   9   0   0  36  
> 35
> Swap:23   0  22
>
> Or from top
>
> PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 24421 riak  20   0 16.391g 8.492g  41956 S  82.7 18.5  10542:10 beam.smp
>
>
> I don't think that we are IO bound
> dstat
>
> total-cpu-usage -dsk/total- -net/total- ---paging-- ---system--
> usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
>   0   0  99   0   0   0| 150k  633k|   0 0 |   1B   25B| 702  2030
>   0   5  95   0   0   0|   0 0 |  10k 2172B|   0 0 |1125   765
>   2   6  92   0   0   0|   0 0 | 213k  135k|   0 0 |2817  7502
>   2   5  92   0   0   0|   0 0 | 159k   88k|   0 0 |2758  9834
>   2   5  93   0   0   0|   0  4884k| 278k   70k|   0 0 |2923  7453
>   0   5  95   0   0   0|4096B   10M|  21k 1066B|   0 0 |3121   781
>   4   7  89   0   0   0|   010M| 258k  160k|   0 0 |  13k   16k
>   0   5  95   0   0   0|   0  4096B| 200k   65k|   0 0 |1413  1589
>   1   5  92   1   0   0|   026k| 287k  206k|   0 0 |2124  4990
>   1   4  95   0   0   0|   0  2048B|  67k   78k|   0 0 |1667  4504
>   1   4  95   0   0   0|   0  1560k| 102k  105k|   0 0 |1639  4146
>   3   8  88   1   0   0|   086M| 453k  335k|   0 0 |609716k
>   4  14  81   0   0   0|   015k| 635k  564k|   0 0 |538314k
>   0   4  96   0   0   0|   0 0 |  29k 1697B|   0 0 |1121   769
>   4   7  89   0   0   0|   0 0 | 339k  376k|   0 0 |801715k
>   5  16  79   0   0   0|   011M| 847k  824k|   0 0 |  13k   30k
>   2  12  86   1   0   0|4096B   10M| 301k  272k|   0 0 |463911k
>   3  10  87   0   0   0|   010M| 508k  610k|   0 0 |826017k
>   2   9  87   2   0   0|   013k| 523k  354k|   0 0 |343210k
>   0   4  96   0   0   0|   0 0 |3434B 1468B|   0 0 |1063   774



Re: riak bitcask calculation

2016-07-18 Thread Luke Bakken
Hi Travis,

Could you go into detail about how you're coming up with 9GiB per
node? Is this from the output of the "free" command?

Bitcask uses the operating system's buffers for file operations, and
will happily use as much free ram as it can get to speed up
operations. However, the OS will use that memory for other programs
that need it should that need arise.

If you're using Linux, these settings may improve bitcask performance
in your cluster:

http://docs.basho.com/riak/kv/2.1.4/using/performance/#optional-i-o-settings

Benchmarking before and after making changes is the recommended way to proceed.

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com


On Fri, Jul 15, 2016 at 8:48 AM, Travis Kirstine
 wrote:
> I've put ~74 million objects in my riak cluster with a bucket size of 9
> bytes and key size of 21 bytes.  According to the Riak capacity calculator
> this should require ~4 GiB of RAM per node.  Right now my servers are
> showing ~9 GiB used per node.  Is this caused by hashing or something
> else?
>
> # capacity calculator output
>
> To manage your estimated 73.9 million key/bucket pairs where bucket names
> are ~9 bytes, keys are ~21 bytes, values are ~36 bytes and you are setting
> aside 16.0 GiB of RAM per-node for in-memory data management within a
> cluster that is configured to maintain 3 replicas per key (N = 3) then Riak,
> using the Bitcask storage engine, will require at least:
>
> 5 nodes
> 3.9 GiB of RAM per node (19.7 GiB total across all nodes)
> 11.4 GiB of storage space per node (56.8 GiB total storage space used across
> all nodes)
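The rough shape of that RAM estimate can be reproduced in a few lines. This is a sketch, not Basho's exact calculator: the per-entry overhead constant is an assumption (the real fixed cost per Bitcask keydir entry depends on Riak version and word size), and as Luke notes, actual resident memory will sit well above any such figure because of OS buffering.

```python
def bitcask_ram_estimate_gib(num_keys, bucket_bytes, key_bytes,
                             n_val=3, per_entry_overhead=44.5):
    """Rough cluster-wide Bitcask keydir RAM estimate.

    per_entry_overhead is an ASSUMED fixed cost per keydir entry
    (file id, offsets, timestamp, pointers); tune it for your version.
    """
    entries = num_keys * n_val  # every replica keeps its own keydir entry
    bytes_per_entry = per_entry_overhead + bucket_bytes + key_bytes
    return entries * bytes_per_entry / 2 ** 30

# Travis's workload: ~73.9M key/bucket pairs, 9-byte buckets, 21-byte keys, N=3.
total_gib = bitcask_ram_estimate_gib(73.9e6, 9, 21)
# Same ballpark as the calculator's 19.7 GiB total; the residency reported
# by top/free is expected to exceed any such estimate, per the thread above.
```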



Re: Create Bucket failed

2016-07-18 Thread Luke Bakken
Hi Salman,

Please re-read the Riak CS instructions carefully. You *must* only
have one Stanchion service running in your entire cluster:

https://docs.basho.com/riak/cs/2.1.1/cookbooks/installing/#installing-stanchion-on-a-node

Based on your latest email, it sounds as though Stanchion is running
on every node.

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Jul 18, 2016 at 8:00 AM, Salman Khaleghian  wrote:
> We have 3 servers. One of them failed, and we marked it as a down server. As far
> as I know, that should not be a problem, so I did not mention it and
> focused on other parts. Today, after the server came back, the problem
> was suddenly solved. The Stanchion service was OK on the two working servers, AND we had set that
> node as down.
>
> ---- Original message ----
> From: Luke Bakken 
> Date: 07/18/2016 7:26 PM (GMT+03:30)
> To: Salman Khaleghian 
> Cc: riak-users 
> Subject: Re: Create Bucket failed
>
> Salman -
>
> Please use "reply all" to include the mailing list in the discussion.
>
> This is the first time you have mentioned a "failed server". Can you
> go into more detail? How many servers are in this cluster?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Mon, Jul 18, 2016 at 2:39 AM, Salman Khaleghian 
> wrote:
>> Hello
>> After the failed server came back and we added it, the problem was solved.
>> But
>> I do not understand the reason. Any ideas, please?



Re: Create Bucket failed

2016-07-18 Thread Luke Bakken
Salman -

Please use "reply all" to include the mailing list in the discussion.

This is the first time you have mentioned a "failed server". Can you
go into more detail? How many servers are in this cluster?

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Jul 18, 2016 at 2:39 AM, Salman Khaleghian  wrote:
> Hello
> After the failed server came back and we added it, the problem was solved. But
> I do not understand the reason. Any ideas, please?
>
>
> ---- On Thu, 14 Jul 2016 17:25:34 +0430 Luke Bakken wrote
> 
>
> Salman -
>
> Can you provide more detailed debugging logs from s3cmd or some way to
> reproduce this?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Wed, Jul 13, 2016 at 3:34 AM, Salman Khaleghian 
> wrote:
>> Hello
>> I use cloudberry and s3cmd both. Both of them show internal server error.
>> Bests
>>
>>
>>  On Tue, 12 Jul 2016 19:33:37 +0430 Luke Bakken
>> wrote
>> 
>>
>> What tool are you using to create buckets? If you can provide debug
>> output, it looks as though the message sent to Riak CS is bad ("error,
>> malformed_xml")
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Sat, Jul 9, 2016 at 1:11 AM, s251251251  wrote:
>>> Hello
>>> after some day after riak-cs Installation, I can not create bucket.
>>> Server
>>> Error is:
>>>
>>> 2016-07-09 12:36:15.401 [error] <0.796.0> Webmachine error at path
>>> "/buckets/test" :
>>>
>>>
>>> {error,{error,{badmatch,{error,malformed_xml}},[{riak_cs_s3_response,xml_error_code,1,[{file,"src/riak_cs_s3_response.erl"},{line,396}]},{riak_cs_s3_response,error_response,1,[{file,"src/riak_cs_s3_response.erl"},{line,273}]},{riak_cs_wm_bucket,accept_body,2,[{file,"src/riak_cs_wm_bucket.erl"},{line,130}]},{riak_cs_wm_common,accept_body,2,[{file,"src/riak_cs_wm_common.erl"},{line,342}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]},{webmachine_resource,...},...]}}
>>> in riak_cs_s3_response:xml_error_code/1 line 396
>>>
>>> however i can get and put files. stanchion started.
>
>



Re: Create Bucket failed

2016-07-14 Thread Luke Bakken
Salman -

Can you provide more detailed debugging logs from s3cmd or some way to
reproduce this?

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jul 13, 2016 at 3:34 AM, Salman Khaleghian  wrote:
> Hello
> I use cloudberry and s3cmd both. Both of them show internal server error.
> Bests
>
>
>  On Tue, 12 Jul 2016 19:33:37 +0430 Luke Bakken wrote
> 
>
> What tool are you using to create buckets? If you can provide debug
> output, it looks as though the message sent to Riak CS is bad ("error,
> malformed_xml")
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Sat, Jul 9, 2016 at 1:11 AM, s251251251  wrote:
>> Hello
>> after some day after riak-cs Installation, I can not create bucket. Server
>> Error is:
>>
>> 2016-07-09 12:36:15.401 [error] <0.796.0> Webmachine error at path
>> "/buckets/test" :
>>
>> {error,{error,{badmatch,{error,malformed_xml}},[{riak_cs_s3_response,xml_error_code,1,[{file,"src/riak_cs_s3_response.erl"},{line,396}]},{riak_cs_s3_response,error_response,1,[{file,"src/riak_cs_s3_response.erl"},{line,273}]},{riak_cs_wm_bucket,accept_body,2,[{file,"src/riak_cs_wm_bucket.erl"},{line,130}]},{riak_cs_wm_common,accept_body,2,[{file,"src/riak_cs_wm_common.erl"},{line,342}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]},{webmachine_resource,...},...]}}
>> in riak_cs_s3_response:xml_error_code/1 line 396
>>
>> however i can get and put files. stanchion started.



Re: java client: overload leads to BlockingOperationException

2016-07-14 Thread Luke Bakken
Hi Henning,

The best place to continue discussion would be to file an issue in
GitHub. This sounds like a bug or at least a place for improvement.

https://github.com/basho/riak-java-client/issues

> How many active, busy connections does Riak KV support?

You're correct that "it depends" is the right answer. In doing some
benchmarks with the .NET client, I found that there was little benefit
to the maximum number of connections exceeding the ring size in the
cluster. This is probably specific to the benchmarks I was doing at
the time, too. The best option is always to simulate your workload,
tweak settings, and benchmark.

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jul 13, 2016 at 7:03 AM, Henning Verbeek  wrote:
> I'm still struggling with a BlockingOperationException thrown by
> riak-java-client 2.0.6, which occurs when I put heavy load on Riak KV.
> Since https://github.com/basho/riak-java-client/issues/523 is fixed,
> this happens only in - what I assume is - an overload-scenario.
>
> The exception:
>
> 2016-07-13 14:41:12.789  localhost [nioEventLoopGroup-2-2] ERROR
> com.basho.riak.client.core.RiakNode - Operation onException() channel:
> id:237445453 localhost:8087 {}
> io.netty.util.concurrent.BlockingOperationException:
> DefaultChannelPromise@77ccd827(incomplete)
> at 
> io.netty.util.concurrent.DefaultPromise.checkDeadLock(DefaultPromise.java:390)
> at 
> io.netty.channel.DefaultChannelPromise.checkDeadLock(DefaultChannelPromise.java:157)
> at 
> io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:251)
> at 
> io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:129)
> at 
> io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:28)
> at 
> com.basho.riak.client.core.RiakNode.doGetConnection(RiakNode.java:697)
> at 
> com.basho.riak.client.core.RiakNode.getConnection(RiakNode.java:656)
> at com.basho.riak.client.core.RiakNode.execute(RiakNode.java:587)
> at 
> com.basho.riak.client.core.DefaultNodeManager.executeOnNode(DefaultNodeManager.java:91)
> at 
> com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:322)
> at 
> com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:240)
> at 
> com.basho.riak.client.api.commands.kv.StoreValue.executeAsync(StoreValue.java:117)
> at 
> com.basho.riak.client.api.commands.kv.UpdateValue$1.handle(UpdateValue.java:182)
> at 
> com.basho.riak.client.api.commands.ListenableFuture.notifyListeners(ListenableFuture.java:78)
> at 
> com.basho.riak.client.api.commands.CoreFutureAdapter.handle(CoreFutureAdapter.java:120)
> at 
> com.basho.riak.client.core.FutureOperation.fireListeners(FutureOperation.java:176)
> at 
> com.basho.riak.client.core.FutureOperation.setComplete(FutureOperation.java:224)
> at com.basho.riak.client.core.RiakNode.onSuccess(RiakNode.java:878)
> at 
> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:30)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
> at 
> io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
>

Re: Riak on Solaris/OmniOS/illumos

2016-07-12 Thread Luke Bakken
Hi Henrik,

Sorry for the delay in responding to you via this channel. Solaris /
Illumos support will be dropped in a future Riak release, as will
FreeBSD. However, this does not preclude the community from continuing to
support these platforms. All the necessary code to build
platform-specific Riak packages is in this repository:

https://github.com/basho/node_package

Maintaining support for a platform basically entails that the build
for that platform continues to work on that platform's supported
versions.

If you'd like to contribute, please give building the packages for
your platform a try. If you have difficulty or find issues, file them
on GitHub, or better yet, send in a PR.

Thanks

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Jun 13, 2016 at 5:30 AM, Henrik Johansson  wrote:
> Hi,
>
> I've recently been told that Riak will no longer be supported on 
> Solaris/illumos based distributions. At the same time ZFS was recommended 
> which I find a bit strange, since ZFS comes from Solaris/illumos and is still
> the most tested on that platform. There are also Riak probes for DTrace.
>
> Can someone confirm this and/or give me some background to this?
>
> Regards
> Henrik
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



Re: Riak Java client question

2016-07-12 Thread Luke Bakken
Hi Guido,

I see that you opened up this PR, thanks -
https://github.com/basho/riak-java-client/pull/631

Would you mind filing these questions in an issue on GitHub to
continue the discussion there?

https://github.com/basho/riak-java-client/issues/new

Thanks!

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jun 29, 2016 at 7:02 AM, Guido Medina  wrote:
> Hi,
>
> Are there any plans on releasing a Riak Java client with Netty-4.1.x?
>
> The reasoning for this is that some projects like Vert.x 3.3.0 for example
> are already on Netty 4.1.x, and AFAIK Netty 4.1.x isn't just a drop-in
> replacement for 4.0.x.
>
> Would it make sense to support another Riak Java client, say version 2.1.x
> with Netty-4.1.x as a way to move forward?
>
> Or maybe Riak 2.0.x works with Netty 4.1.x? But I doubt it.
>
> Best regards,
>
> Guido.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



Re: Create Bucket failed

2016-07-12 Thread Luke Bakken
What tool are you using to create buckets? Based on the error ("error,
malformed_xml"), it looks as though the message sent to Riak CS is
malformed. If you can, please provide debug output.

--
Luke Bakken
Engineer
lbak...@basho.com


On Sat, Jul 9, 2016 at 1:11 AM, s251251251  wrote:
> Hello
> Some days after the Riak CS installation, I cannot create a bucket. The
> server error is:
>
> 2016-07-09 12:36:15.401 [error] <0.796.0> Webmachine error at path
> "/buckets/test" :
> {error,{error,{badmatch,{error,malformed_xml}},[{riak_cs_s3_response,xml_error_code,1,[{file,"src/riak_cs_s3_response.erl"},{line,396}]},{riak_cs_s3_response,error_response,1,[{file,"src/riak_cs_s3_response.erl"},{line,273}]},{riak_cs_wm_bucket,accept_body,2,[{file,"src/riak_cs_wm_bucket.erl"},{line,130}]},{riak_cs_wm_common,accept_body,2,[{file,"src/riak_cs_wm_common.erl"},{line,342}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]},{webmachine_resource,...},...]}}
> in riak_cs_s3_response:xml_error_code/1 line 396
>
> However, I can get and put files. Stanchion is started.
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Create-Bucket-failed-tp4034449.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>



Riak Python Client 2.5.4 released

2016-07-11 Thread Luke Bakken
Hello everyone,

I released version 2.5.4 of the Python client today. This fixes a
couple bugs found while testing with Riak TS 1.3.0, and addresses the
issue of time zones when datetime objects are stored in Riak TS.

https://pypi.python.org/pypi/riak/2.5.4

https://github.com/basho/riak-python-client/blob/master/RELNOTES.md

API docs: http://basho.github.io/riak-python-client/

https://github.com/basho/riak-python-client/releases/tag/2.5.4

Milestone in GH:

https://github.com/basho/riak-python-client/issues?q=milestone%3Ariak-python-client-2.5.4

--
Luke Bakken
Engineer
lbak...@basho.com



Riak Erlang Client 2.4.1 released

2016-07-11 Thread Luke Bakken
Hi everyone -

I just released version 2.4.1 of the Erlang client. The main feature
is support for OTP 19 in the client itself and its dependencies.
Dialyzer appears to be making erroneous warnings at this point that
will be addressed in the future.

https://github.com/basho/riak-erlang-client/releases/tag/2.4.1

https://github.com/basho/riak-erlang-client/blob/master/RELNOTES.md

https://github.com/basho/riak-erlang-client/issues?q=milestone%3Ariak-erlang-client-2.4.1

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com



Riak Erlang Client 2.4.0 released

2016-07-08 Thread Luke Bakken
Hi everyone -

I just released version 2.4.0 of the Erlang client. The main feature
is support for both Riak KV and Riak TS.

https://github.com/basho/riak-erlang-client/releases/tag/2.4.0

https://github.com/basho/riak-erlang-client/blob/master/RELNOTES.md

https://github.com/basho/riak-erlang-client/issues?q=milestone%3Ariak-erlang-client-2.4.0

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com



Riak Node.js client 2.2.2 released

2016-07-07 Thread Luke Bakken
Hello everyone,

I released version 2.2.2 of the Riak Node.js client today, which
contains a fix for a bug in how the client decodes Solr documents
after executing a Search command.

More information is available at the following links:

https://github.com/basho/riak-nodejs-client/blob/master/RELNOTES.md

https://github.com/basho/riak-nodejs-client/milestone/12?closed=1

https://github.com/basho/riak-nodejs-client/releases/tag/v2.2.2

https://www.npmjs.com/package/basho-riak-client

Thanks!

--
Luke Bakken
Engineer
lbak...@basho.com



Riak Node.js client 2.2.1 released

2016-07-06 Thread Luke Bakken
Hello everyone,

I released version 2.2.1 of the Riak Node.js client today, which
contains a fix for a bug that leads to an unhandled exception.

More information is available at the following links:

https://github.com/basho/riak-nodejs-client/blob/master/RELNOTES.md

https://github.com/basho/riak-nodejs-client/milestone/11?closed=1

https://github.com/basho/riak-nodejs-client/releases/tag/v2.2.1

https://www.npmjs.com/package/basho-riak-client

Thanks!

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Creating riak-kv bucket type programmatically

2016-07-05 Thread Luke Bakken
Hi Kyle,

There is no support for this at the moment, but keep an eye on this
issue and "vote" for it if you like.

https://github.com/basho/riak_kv/issues/1123

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Jul 5, 2016 at 11:15 AM, Nguyen, Kyle  wrote:
> Hi all,
>
>
>
> Is there a way for us to create a bucket type programmatically without using
> the cli “sudo riak-admin bucket-type create my_bucket_type” command?
>
>
>
> Thanks
>
>
>
> -Kyle-
>
>
>
>
> 
> The information contained in this message may be confidential and legally
> protected under applicable law. The message is intended solely for the
> addressee(s). If you are not the intended recipient, you are hereby notified
> that any use, forwarding, dissemination, or reproduction of this message is
> strictly prohibited and may be unlawful. If you are not the intended
> recipient, please contact the sender by return e-mail and destroy all copies
> of the original message.
>
>



Re: Memory backend doesn't work (with multi backend)?

2016-06-23 Thread Luke Bakken
Hello,

I think the documentation for multi backend could use some
improvement. Once you configure Riak to use the "multi" storage
backend, you must then *name* backends to be used by your bucket
types:

Here is a configuration you can use that creates two backends: one
using the memory backend, *named* "my_memory_backend", and one using
the leveldb backend, named "my_leveldb_backend":

In riak.conf:

storage_backend = multi
multi_backend.my_memory_backend.storage_backend = memory
multi_backend.my_leveldb_backend.storage_backend = leveldb

The following is an example of setting a leveldb-specific setting for
the "my_leveldb_backend" named backend:

multi_backend.my_leveldb_backend.leveldb.maximum_memory.percent = 70

Then, you associate your "os_cache" bucket type with
"my_memory_backend" this way:

riak-admin bucket-type create os_cache '{"props": {"write_once":
true,"backend":"my_memory_backend","n_val":2,"r":1,"w":1,"dw":1}}'
riak-admin bucket-type activate os_cache

Please let me know if this resolves your issue. You may have to remove
the contents of Riak's data and ring directories to overwrite your
old settings.
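For reference, the properties passed to riak-admin above can also be built and serialized programmatically; a minimal Python sketch (the type and backend names come from this thread, everything else is illustrative):

```python
import json

# Bucket-type properties for "os_cache", pointing at the *named*
# multi-backend entry ("my_memory_backend"), not the backend module.
os_cache_props = {
    "props": {
        "write_once": True,
        "backend": "my_memory_backend",
        "n_val": 2,
        "r": 1,
        "w": 1,
        "dw": 1,
    }
}

# This JSON string is what `riak-admin bucket-type create` takes as
# its argument.
print(json.dumps(os_cache_props))
```

Building the document this way avoids hand-editing the quoted JSON string in shell commands.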

I have filed an issue in GitHub to improve the documentation for
multi-backend: https://github.com/basho/basho_docs/issues/2123

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Jun 22, 2016 at 5:34 AM, Nagy, Attila  wrote:
> Hi,
>
> Trying to use the memory backend via multi.
>
> What I did:
> in riak.conf:
> storage_backend = multi
>
> Then creating a bucket type:
> riak-admin bucket-type create os_cache '{"props": {"write_once":
> true,"backend":"memory","n_val":2,"r":1,"w":1,"dw":1}}'
> riak-admin bucket-type activate os_cache
>
> Then I PUT an object:
> curl -v -XPUT -H "Content-Type: application/octet-stream"
> 'http://localhost:8098/types/os_cache/buckets/os_cache/keys/test1' -d 'test'
>
> Next, I shut down all of the servers at once, leaving nothing in the
> cluster, then restarting them.
> And a new get for the above URL returns "test".
>
> When I set storage_backend to memory, it works as expected, after a full
> cluster stop, the bucket is empty.
>



Riak Node.js client 2.2.0 released

2016-06-17 Thread Luke Bakken
Hello everyone,

I released version 2.2.0 of the Riak Node.js client today. More
information is available at the following links:

https://github.com/basho/riak-nodejs-client/blob/master/RELNOTES.md

https://github.com/basho/riak-nodejs-client/issues?q=milestone%3Ariak-nodejs-client-2.2.0

https://github.com/basho/riak-nodejs-client/releases/tag/v2.2.0

https://www.npmjs.com/package/basho-riak-client

Thanks!

--
Luke Bakken
Engineer
lbak...@basho.com



Re: bitcask merges & deletions

2016-06-15 Thread Luke Bakken
Hi Johnny,

Since this seems to happen regularly on one node on your cluster (not
necessarily the same node), do you have a repetitive process that
performs a *lot* of updates or deletes on a single key that could be
correlated to these merges?
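One way to check for such a correlation is to count the "Merged" entries in console.log per day; a small Python sketch, assuming the log line format quoted later in this thread:

```python
import re
from collections import Counter

# Matches lines like:
# 2016-06-10 05:27:39.426 UTC [info] <0.15230.160> Merged {...} in 11.9 seconds.
MERGE_RE = re.compile(r'^(\d{4}-\d{2}-\d{2}) .*? Merged ')

def merges_per_day(lines):
    """Count bitcask merge log entries per calendar day."""
    counts = Counter()
    for line in lines:
        m = MERGE_RE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Running this over each node's console.log makes a several-hundred-merge day stand out against the usual 50-70.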
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jun 15, 2016 at 10:22 AM, Johnny Tan  wrote:
> We're running riak-1.4.2
>
> Every few weeks, we have a riak node that starts to slowly fill up on disk
> space for several days, and then suddenly gain that space back again.
>
> In looking into this more today, I think I see what's going on.
>
> Per the console.log on the node it's happening to right now, there is an
> unusually large number of merges happening. There are 6 total
> nodes in our cluster, it's only happening to this node today. (In previous
> weeks, it's been other nodes, but it's always been one node at a time.)
>
> Normally, we get 50-70 merges per day per node (according to various nodes'
> console.log, including the node in question). Yesterday and today, the node
> in question has several hundred merges happening.
>
> When I look inside the bitcask directory, I see a lot of files with this set
> of permissions:
> -rwSrw-r--
>
> My understanding is that those are files marked for deletion after bitcask
> merging.
>
> The number of those files is currently growing, and from a spot-check, they
> indeed match up as the files that have been merged.
>
> So it seems the two are related: a lot of merges are happening, which then
> causes a large number of files to be marked for deletion, and those marked
> files are piling up and not getting deleted for some reason.
>
> If I don't do anything, those files eventually get deleted, and everything
> is good again for another couple weeks until it happens to another node. But
> the disk usage does get high enough to alert us, and obviously we don't want
> it to get anywhere near 100%.
>
>
> I'm trying to figure out why there are times when this happens. One thing I
> noticed is a difference in the merge log entries.
>
> Here's one from a "normal" day, nearly all the entries for that day are
> roughly this same length and same amount of time merging:
> 2016-06-10 05:27:39.426 UTC [info] <0.15230.160> Merged
> {["/var/lib/riak/bitcask/890602560248518965780370444936484965102833893376/84000.bitcask.data","/var/lib/riak/bitcask/890602560248518965780370444936484965102833893376/83999.bitcask.data"],[]}
> in 11.902028 seconds.
>
> But here's one from today on the problematic node:
> 2016-06-15 17:13:40.626 UTC [info] <0.17903.500> Merged
> {["/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83633.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83632.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83631.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83630.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83629.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83628.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83627.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83626.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83625.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83624.bitcask.data","/var/lib/riak/bitcask/12331420064979493372343590776043637978346
 
93083136/83623.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83622.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83621.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83620.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83619.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83618.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83617.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83616.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83615.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83614.bitcask.data","/var/lib/riak/bitcask/1233142006497949337234359077604363797834693083136/83613.bi

Ruby 2.4.0 client released (Was: Python 2.5.3 client released)

2016-06-15 Thread Luke Bakken
For those keeping track, I sent the following email with the wrong subject.

Have a great day -

On Wed, Jun 15, 2016 at 8:44 AM, Luke Bakken  wrote:
> Hello everyone,
>
> Version 2.4.0 of the Ruby client was released today. This release
> contains various community PRs and a breaking change to how the client
> processes timestamps returned from Riak TS:
>
> https://rubygems.org/gems/riak-client
>
> https://github.com/basho/riak-ruby-client/blob/master/RELNOTES.md
>
> https://github.com/basho/riak-ruby-client/releases/tag/2.4.0
>
> The 2.4.0 release milestone in GitHub can be found here:
>
> https://github.com/basho/riak-ruby-client/issues?q=milestone%3Ariak-ruby-client-2.4.0
>
> Thank you for your continued use and support of Riak and the Riak Ruby
> client. Please don't hesitate to file an issue in GitHub if you have
> questions about using the client or find a bug.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com



Re: Python 2.5.3 client released

2016-06-15 Thread Luke Bakken
Hello everyone,

Version 2.4.0 of the Ruby client was released today. This release
contains various community PRs and a breaking change to how the client
processes timestamps returned from Riak TS:

https://rubygems.org/gems/riak-client

https://github.com/basho/riak-ruby-client/blob/master/RELNOTES.md

https://github.com/basho/riak-ruby-client/releases/tag/2.4.0

The 2.4.0 release milestone in GitHub can be found here:

https://github.com/basho/riak-ruby-client/issues?q=milestone%3Ariak-ruby-client-2.4.0

Thank you for your continued use and support of Riak and the Riak Ruby
client. Please don't hesitate to file an issue in GitHub if you have
questions about using the client or find a bug.

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Updating Riak values in batch

2016-06-14 Thread Luke Bakken
Hi Ricardo,

I don't have examples right at hand, here is a link to our advanced
map/reduce examples:

http://docs.basho.com/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce/

Our own Joe Caswell provided this example: http://stackoverflow.com/a/23633491

If you run into issues after following our examples, feel free to
share what you've done via a gist and we can continue to help out.
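A rough sketch of the parallel fetch/update/store approach, assuming the official `riak` Python client, JSON values, and a key list known in advance (the bucket wiring and helper names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def strip_attr(doc, attr):
    """Return a copy of a decoded JSON document without `attr`."""
    return {k: v for k, v in doc.items() if k != attr}

def update_key(bucket, key, attr):
    # Read-modify-write of a single key; relies on the client's JSON
    # (de)serialization for application/json objects.
    obj = bucket.get(key)
    obj.data = strip_attr(obj.data, attr)
    obj.store()

def update_all(bucket, keys, attr, workers=16):
    # Fetch, update, and store keys in parallel, as suggested above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Force evaluation so worker exceptions surface here.
        list(pool.map(lambda k: update_key(bucket, k, attr), keys))
```

Tune `workers` to what the cluster tolerates; with ~20 million keys, throttling matters more than raw parallelism.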

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Jun 14, 2016 at 10:51 AM, Ricardo Mayerhofer
 wrote:
> Hi Luke,
> Thanks for your reply! Can I use map/reduce operation to update documents?
> Do you have any link with an example?
>
> On Tue, Jun 14, 2016 at 2:44 PM, Luke Bakken  wrote:
>>
>> Hi Ricardo,
>>
>> If you know your keys in advance, you can fetch the keys in parallel,
>> update them, and write them back to Riak in parallel.
>>
>> Other options include map/reduce jobs that iterate over all the keys,
>> but keep in mind that any key listing operation will be resource
>> intensive.
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Mon, Jun 13, 2016 at 11:06 AM, Ricardo Mayerhofer
>>  wrote:
>> > Hi all,
>> > I've a large dataset in Riak (about 20 million keys), storing JSON
>> > documents. I'd like to update those documents to remove a JSON
>> > attribute.
>> > What's the best way to approach this problem in Riak?
>> >
>> > Thanks.
>> >
>> > --
>> > Ricardo Mayerhofer
>> >
>> >
>
>
>
>
> --
> Ricardo Mayerhofer



Re: get and put operations are slow on Riak KV cluster

2016-06-14 Thread Luke Bakken
Hello Abhinav,

Have you followed all of our suggested tuning in this document?

http://docs.basho.com/riak/kv/2.1.4/using/performance/

Specifically, these settings may help a lot in your environment:

http://docs.basho.com/riak/kv/2.1.4/using/performance/#optional-i-o-settings
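One note when reading stats like those quoted below: the node_*_fsm_time_* values are reported in microseconds, so a quick conversion helps when comparing against millisecond targets. A trivial sketch:

```python
def usec_to_ms(v):
    """Convert a Riak fsm_time stat (microseconds) to milliseconds."""
    return v / 1000.0

# For example, a node_get_fsm_time_95 of 16551 is about 16.55 ms,
# well above a 1 ms target.
```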


--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Jun 13, 2016 at 12:00 AM, Abhinav Tripathi
 wrote:
> Hi,
> I am seeing quite slow get and put operations on our Riak KV cluster. We are
> new to Riak and have tried to follow the instructions in the documentation.
> It would be great if you could take a look and let us know what are we doing
> wrong.
>
> We are using Riak as a distributed cache for our application. The 95th
> percentile (and above) get operations are >10ms. Similarly, 99th and above
> percentile for put operations is getting close to 10ms. Ideally, we want the
> 99% gets to stay around 1ms and 99% puts to stay <5ms.
>
> I have attached the configurations for all 3 nodes in our cluster. Also,
> attached are recent console.log files. I have attached a recent one-minute
> stats file as well from which the below values are.
>
> GETs
> "node_get_fsm_time_100": 71429,
> "node_get_fsm_time_95": 16551,
> "node_get_fsm_time_99": 27513,
> "node_get_fsm_time_mean": 4393,
> "node_get_fsm_time_median": 989
>
> PUTs
> "node_put_fsm_time_100": 10785,
> "node_put_fsm_time_95": 2679,
> "node_put_fsm_time_99": 7800,
> "node_put_fsm_time_mean": 1752,
> "node_put_fsm_time_median": 1608
>
> I have recorded many one-minute stats for our cluster. Most of them for the
> past couple of days show high values like these.
>
> All three Riak nodes are 4-core, 16GB RAM, 128 GB SSD machines ... basically
> m3.xlarge AWS instances.
>
> Also, I can see many such lines in the console.log,
>
> 2016-06-13 00:03:32.545 [info] <0.23701.3269> Merged
> {["/apps/riak/lib/bitcask/50239118783249787813251666124688006726811648/75.bitcask.data","/apps/riak/lib/bitcask/50239118783249787813251666124688006726811648/76.bitcask.data","/apps/riak/lib/bitcask/50239118783249787813251666124688006726811648/77.bitcask.data","/apps/riak/lib/bitcask/50239118783249787813251666124688006726811648/78.bitcask.data"],[]}
> in 596.530484 seconds.
>
> These merge operations are usually taking close to 600 seconds. Could that
> be a problem as well?
>
> Thanks,
> Abhinav.
>
>



Re: Updating Riak values in batch

2016-06-14 Thread Luke Bakken
Hi Ricardo,

If you know your keys in advance, you can fetch the keys in parallel,
update them, and write them back to Riak in parallel.

Other options include map/reduce jobs that iterate over all the keys,
but keep in mind that any key listing operation will be resource
intensive.
--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Jun 13, 2016 at 11:06 AM, Ricardo Mayerhofer
 wrote:
> Hi all,
> I've a large dataset in Riak (about 20 million keys), storing JSON
> documents. I'd like to update those documents to remove a JSON attribute.
> What's the best way to approach this problem in Riak?
>
> Thanks.
>
> --
> Ricardo Mayerhofer
>
>



Re: Put failure: too many siblings

2016-06-03 Thread Luke Bakken
Hi Vladyslav,

If you recognize the full name of the object raising the sibling
warning, it is most likely a manifest object. Sometimes, during hinted
handoff, you can see these messages. They should resolve after handoff
completes.

Please see the documentation for the transfer-limit command as well:

http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#transfer-limit

--
Luke Bakken
Engineer
lbak...@basho.com


On Fri, Jun 3, 2016 at 2:55 AM, Vladyslav Zakhozhai
 wrote:
> Hi.
>
> I have trouble with PUTs to a Riak CS cluster. During this process I
> periodically see the following message in Riak error.log:
>
> 2016-06-03 11:15:55.201 [error]
> <0.15536.142>@riak_kv_vnode:encode_and_put:2253 Put failure: too many
> siblings for object OBJECT_NAME (101)
>
> and also
>
> 2016-06-03 12:41:50.678 [error]
> <0.20448.515>@riak_api_pb_server:handle_info:331 Unrecognized message
> {7345880,{error,{too_many_siblings,101}}}
>
> Here OBJECT_NAME - is the name of object in Riak which has too many
> siblings.
>
> I am definitely sure that these objects are static. Nobody deletes them,
> nobody rewrites them. I have no idea why more than 100 siblings of this
> object occur.
>
> The following effect of this issue occurs:
>
> A great amount of keys are loaded into RAM. I am almost out of RAM (does
> each sibling have its own key, or a duplicate of the key?).
> Nodes are slow - adding new nodes is too slow.
> Presence of "too many siblings" affects ownership handoffs.
>
> So I have several questions:
>
> Can hinted or ownership handoffs affect the sibling count (I mean, can
> siblings be created during ownership or hinted handoffs)?
> Is there any workaround for this issue? Do I need to remove siblings
> manually, or are they removed during merges, read repairs, and so on?
>
>
> My configuration:
>
> riak from basho's packages - 2.1.3-1
> riak cs from basho's packages - 2.1.0-1
> 24 riak/riak-cs nodes
> 32 GB RAM per node
> AAE is disabled
>
>
> I appreciate you help.
>
>



Re: using riak to store user sessions

2016-06-02 Thread Luke Bakken
Hi Norman,

A quick search turns up this Node.js module:
https://www.npmjs.com/package/express-session

There is currently not a session store for Riak
(https://www.npmjs.com/package/express-session#compatible-session-stores)

Since memcached operates as a key/value store, forking the connector
for that session store as a starting point would be your best bet.

https://github.com/balor/connect-memcached

You would then use the Riak Node.js client to fetch and store data:
https://github.com/basho/riak-nodejs-client
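Whatever client you end up with, a session store reduces to three operations keyed by session id. A minimal Python sketch of that interface, with a plain dict standing in for the Riak bucket (all names here are illustrative):

```python
class KVSessionStore:
    """Minimal session-store interface: get/set/destroy by session id.

    `backend` is anything dict-like; in a real deployment it would wrap
    a Riak bucket (bucket.get / obj.store / obj.delete) instead.
    """

    def __init__(self, backend=None):
        self.backend = backend if backend is not None else {}

    def set(self, sid, session):
        # Store the session data under its session id.
        self.backend[sid] = session

    def get(self, sid):
        # Return the session, or None if it does not exist.
        return self.backend.get(sid)

    def destroy(self, sid):
        # Remove the session; missing ids are ignored.
        self.backend.pop(sid, None)
```

Porting connect-memcached mostly means mapping these three calls onto the Riak client's fetch/store/delete.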

--
Luke Bakken
Engineer
lbak...@basho.com


On Thu, Jun 2, 2016 at 6:19 AM, Norman Khine  wrote:
> Hello, I am trying to set up a Node.js/Express application to use Riak to
> store logged-in users' sessions as detailed at
> http://basho.com/use-cases/session-data/
>
> i have setup the dev cluster on my machine and everything is running fine.
>
> what is the correct way to set this up?
>
> Any advice much appreciated.
>
>
>
> --
> %>>> "".join( [ {'*':'@','^':'.'}.get(c,None) or chr(97+(ord(c)-83)%26) for
> c in ",adym,*)&uzq^zqf" ] )
>
>



Re: Changing ring size on 1.4 cluster

2016-06-01 Thread Luke Bakken
Alex,

While a command does exist to resize the ring in Riak 2, it is for all
intents and purposes deprecated.

The best solution, as always, is to benchmark your cluster's
performance using real world load, and plan for a bit of head room if
expansion is likely.

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jun 1, 2016 at 11:51 AM, Alex De la rosa
 wrote:
> Can the ring size be changed easily in Riak 2.X?
>
> Imagine I have 5 servers originally with a ring_size = 64... If later on I
> add 5 more servers (10 in total) and I also want to double the number of
> partitions, can I just edit the ring_size to 128?
>
> How would be the process to do it? Will it rebalance properly and have no
> issues?
>
> Thanks,
> Alex
>
> On Wed, Jun 1, 2016 at 10:46 PM, Luke Bakken  wrote:
>>
>> Hi Johnny,
>>
>> Yes, the latter two are your main options. For a 1.4 series Riak
>> installation, your only option is to bring up a new cluster with the
>> desired ring size and replicate data.
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Fri, May 27, 2016 at 12:11 PM, Johnny Tan  wrote:
>> > The docs
>> > http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size
>> > seem to imply that there's no easy, non-destructive way to change a
>> > cluster's ring size live for Riak-1.4x.
>> >
>> > I thought about replacing one node at a time, but you can't join a new
>> > node
>> > or replace an existing one with a node that has a different ring size.
>> >
>> > I was also thinking of bringing up a completely new cluster with the new
>> > ring
>> > size, and then replicating the data from the original cluster, and take
>> > a
>> > quick maintenance window to failover to the new cluster.
>> >
>> > One other alternative seems to be to upgrade to 2.0, and then use 2.x's
>> > ability to resize the ring.
>> >
>> > Are these latter two my main options?
>> >
>> > johnny
>> >
>> >
>>
>
>



Re: Changing ring size on 1.4 cluster

2016-06-01 Thread Luke Bakken
Hi Johnny,

Yes, the latter two are your main options. For a 1.4 series Riak
installation, your only option is to bring up a new cluster with the
desired ring size and replicate data.
--
Luke Bakken
Engineer
lbak...@basho.com


On Fri, May 27, 2016 at 12:11 PM, Johnny Tan  wrote:
> The docs http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size
> seem to imply that there's no easy, non-destructive way to change a
> cluster's ring size live for Riak-1.4x.
>
> I thought about replacing one node at a time, but you can't join a new node
> or replace an existing one with a node that has a different ring size.
>
> I was also thinking of bringing up a completely new cluster with the new ring
> size, and then replicating the data from the original cluster, and take a
> quick maintenance window to failover to the new cluster.
>
> One other alternative seems to be to upgrade to 2.0, and then use 2.x's
> ability to resize the ring.
>
> Are these latter two my main options?
>
> johnny
>
>



Re: Riak Search and custom Solr components

2016-05-31 Thread Luke Bakken
Hi Steve and Guillaume -

If you subscribe to this GitHub issue you can provide input and follow
progress on improving access to solrconfig.xml:
https://github.com/basho/yokozuna/issues/537

You might be able to restart a Solr core via the HTTP /solr endpoint -
that may be easier than restarting all of Riak.

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, May 31, 2016 at 8:54 AM, Steve Garon  wrote:
> Hey Guillaume,
>
> We had to alter the solrconfig.xml in our deployment because the default
> solrconfig.xml does not give good search throughput with a large amount of
> documents updating often. I've opened a ticket at Basho to get the
> solrconfig.xml saved into the bucket settings; that way it would be the
> same across the cluster. I'm not sure when and if they'll have it in. I
> guess the more people that ask for this, the sooner we'll get it.
>
> In the meantime, what we did is that during our riak node installation
> procedure, we create the index using curl then we override the
> solrconfig.xml and then restart riak which in turn restarts solr. From then,
> the solrconfig.xml should remain intact. There is one more thing though.
> Sometimes, for unknown reasons, riak decides to recreate the solrconfig.xml.
> So you might wanna have a script that runs on the node to validate the
> solrconfig.xml has not changed and to fix it automatically if it does.
>
> I know this is not ideal, solrconfig.xml really needs to be saved in bucket
> settings like the schema.xml. Let's just hope Basho works on fixing this
> ASAP, because for big deployments this is crucial.
>
>
>
> Steve
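Steve's suggested guard -- a script that detects and repairs a regenerated solrconfig.xml -- could be sketched like this; the "golden" path named in the docstring is hypothetical, and both paths must be adjusted to the deployment:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def ensure_solrconfig(golden: Path, live: Path) -> bool:
    """Restore the live solrconfig.xml from a known-good copy if it drifted.

    `golden` is the solrconfig.xml installed on purpose (e.g. a hypothetical
    /etc/riak/solrconfig.xml.golden); `live` is the copy Riak may silently
    regenerate. Returns True when a restore happened, False when the files
    already matched.
    """
    if live.exists() and sha256(live) == sha256(golden):
        return False
    shutil.copy2(golden, live)
    return True
```

Run it from cron; note that after a restore Solr still has to reload the core (or Riak be restarted) before the repaired file takes effect.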
>
> On 30 May 2016 at 13:18, Guillaume Boddaert
>  wrote:
>>
>> Please allow me to bump this previous message that was sent late last
>> friday and that didn't attract much attention due to a well deserved
>> week-end.
>>
>>
>> On 27/05/2016 19:14, Guillaume Boddaert wrote:
>>
>> Hi there,
>>
>> I'm currently testing custom Component in my Riak Search system. As I need
>> a suggestion mechanism from the Solr index, I implemented the Suggester
>> component (https://wiki.apache.org/solr/Suggester).
>> It seems to work correctly, yet I have some questions regarding the usage
>> of custom Solr configuration inside of Riak. The only caveat is that Riak
>> commits too often and that the suggestion index must be built manually and
>> not on commit. That's fine for me; I'll cron that once per day.
>>
>> First of all, how do I stop/start/reload the Solr instance without
>> disturbing the Riak core? For the time being I'm stuck with service
>> start/stop. How can I reload my Solr cores' configuration without stopping
>> Riak?
>>
>> Secondly, is it a good pattern to start tweaking defaults in my Solr
>> cores (solrconfig.xml)? Should I stop that right now and consider using a
>> distinct Solr instance if I require those modifications? Or should I
>> consider it safe to alter cache/performance settings as well for Solr to
>> match my needs? Can I play with other Solr files such as stopwords and the
>> like?
>>
>> Finally, is there a proper way to alter the default solrconfig.xml? It is
>> auto-generated at index creation by Riak, yet it's a BIG and complex file
>> that may evolve between Riak releases. I'm creating indexes
>> programmatically, sending my index through the Riak interface
>> (RiakClient.create_search_index in the Riak Python lib), but if I start to
>> alter configuration I guess I need more than that.
>> How would you guys handle this?
>>
>> Thanks,
>>
>> Guillaume
>>
>>
>>
>>
>>
>>
>
>
>



Re: Ranking search results by relevancy

2016-05-25 Thread Luke Bakken
Alex,

Here's what you asked in your original email, and why I mentioned OR:
"Can it be done all at once in just 1 search query? or should I
compile results from 3 queries?"

These documents indicate that the default sort is descending relevancy score:

* 
https://wiki.apache.org/solr/SolrRelevancyFAQ#Why_are_search_results_returned_in_the_order_they_are.3F
* https://wiki.apache.org/solr/CommonQueryParameters#sort

The relevancy FAQ link I provided has useful information and links to
other documents that should be able to give you more information about
what kinds of sorting you can do.

--
Luke Bakken
Engineer
lbak...@basho.com
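As a concrete illustration of overriding the default relevancy sort, here is a sketch that builds an HTTP search query with an explicit `sort` clause. The `/search/query/<index>` path follows the Riak 2.x HTTP search docs linked above, and the field names are only examples:

```python
from urllib.parse import urlencode

def build_search_url(host: str, index: str, query: str,
                     sort: str = "score desc") -> str:
    """Build a Riak HTTP search URL with an explicit sort clause.

    Solr sorts by descending relevancy score by default; passing a different
    `sort` value (e.g. "lastname_register asc") overrides that ordering.
    """
    params = urlencode({"q": query, "sort": sort, "wt": "json"})
    return f"http://{host}/search/query/{index}?{params}"
```

Pointing curl at the resulting URL is a quick way to experiment with different sort clauses before wiring them into client code.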


On Wed, May 25, 2016 at 9:33 AM, Alex De la rosa
 wrote:
> Hi Luke,
>
> That was not the question... I know that I can use ORs, etc... I wanted to
> know how to sort them by relevancy or higher equality score.
>
> Thanks,
> Alex
>
> On Wed, May 25, 2016 at 8:08 PM, Luke Bakken  wrote:
>>
>> Hi Alex,
>>
>> You can use the HTTP search endpoint to see what information Riak
>> returns for Solr queries as well as to try out queries:
>> https://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>>
>> Since you're indexing first and last name, I'm not sure what indexing
>> a full name buys you on top of that.
>>
>> It should be possible to combine your queries using OR.
>>
>> More info about Solr ranking can be found online (such as
>> https://wiki.apache.org/solr/SolrRelevancyFAQ).
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Wed, May 18, 2016 at 10:07 AM, Alex De la rosa
>>  wrote:
>> > Hi all,
>> >
>> > I would like to perform a search on Riak/Solr of people given an input
>> > containing its full name (or part of it), like when searching for
>> > members in
>> > Facebook's search bar.
>> >
>> > search input [ alex garcia ]
>> >
>> > results = client.fulltext_search('people', 'firstname_register:*alex* OR
>> > lastname_register:*garcia*')
>> >
>> > this would give me members like:
>> >
>> > alex garcia
>> > alexis garcia
>> > alex fernandez
>> > jose garcia
>> >
>> > Is there any way to get these results ranked/ordered by the most precise
>> > match? "alex garcia" would be the most relevant because it matches the
>> > search input exactly... "alexis garcia" may come second as, even though
>> > not an exact match, it is a very similar pattern; the other two would
>> > come after as they match only 1 of the 2 search parameters.
>> >
>> > Would it be convenient to index also fullname_register:alex garcia in
>> > order
>> > to find exact matches too?
>> >
>> > Can it be done all at once in just 1 search query? or should I compile
>> > results from 3 queries?
>> >
>> > result_1 = client.fulltext_search('people', 'fullname_register:alex
>> > garcia')
>> > result_2 = client.fulltext_search('people', 'firstname_register:*alex*
>> > AND
>> > lastname_register:*garcia*')
>> > result_3 = client.fulltext_search('people', 'firstname_register:*alex*
>> > OR
>> > lastname_register:*garcia*')
>> >
>> > Thanks and Best Regards,
>> > Alex
>> >
>> >
>
>



Re: Ranking search results by relevancy

2016-05-25 Thread Luke Bakken
Hi Alex,

You can use the HTTP search endpoint to see what information Riak
returns for Solr queries as well as to try out queries:
https://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying

Since you're indexing first and last name, I'm not sure what indexing
a full name buys you on top of that.

It should be possible to combine your queries using OR.

More info about Solr ranking can be found online (such as
https://wiki.apache.org/solr/SolrRelevancyFAQ).

--
Luke Bakken
Engineer
lbak...@basho.com
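One hedged sketch of combining the three variants into a single query uses Solr boosts (`^N`) so that stricter clauses rank higher. The field names come from Alex's example; the boost weights are arbitrary illustrations, not tuned values:

```python
def build_name_query(first: str, last: str) -> str:
    """Combine exact, AND, and OR name matches into one boosted Solr query.

    Higher boosts push closer matches up the relevancy ranking, so a single
    query can replace three separate ones: the exact full-name phrase ranks
    first, both-fields matches next, single-field wildcard matches last.
    """
    return (
        f'fullname_register:"{first} {last}"^4 OR '
        f"(firstname_register:{first} AND lastname_register:{last})^2 OR "
        f"firstname_register:*{first}* OR lastname_register:*{last}*"
    )
```

Run against the search endpoint with the default descending score order, exact full-name matches should then surface first.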


On Wed, May 18, 2016 at 10:07 AM, Alex De la rosa
 wrote:
> Hi all,
>
> I would like to perform a search on Riak/Solr of people given an input
> containing its full name (or part of it), like when searching for members in
> Facebook's search bar.
>
> search input [ alex garcia ]
>
> results = client.fulltext_search('people', 'firstname_register:*alex* OR
> lastname_register:*garcia*')
>
> this would give me members like:
>
> alex garcia
> alexis garcia
> alex fernandez
> jose garcia
>
> Is there any way to get these results ranked/ordered by the most precise
> match? "alex garcia" would be the most relevant because it matches the
> search input exactly... "alexis garcia" may come second as, even though not
> an exact match, it is a very similar pattern; the other two would come after
> as they match only 1 of the 2 search parameters.
>
> Would it be convenient to also index fullname_register:alex garcia in order
> to find exact matches too?
>
> Can it be done all at once in just 1 search query? or should I compile
> results from 3 queries?
>
> result_1 = client.fulltext_search('people', 'fullname_register:alex garcia')
> result_2 = client.fulltext_search('people', 'firstname_register:*alex* AND
> lastname_register:*garcia*')
> result_3 = client.fulltext_search('people', 'firstname_register:*alex* OR
> lastname_register:*garcia*')
>
> Thanks and Best Regards,
> Alex
>
>



Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Luke Bakken
On Mon, May 23, 2016 at 11:07 AM, Alex De la rosa
 wrote:
> So if the node with Stanchion fatally crashed and can not be recovered I can
> install Stanchion in another node and this node will get the "master" role?

Yes. There is no concept of "master" or "slave" with Stanchion, since
only one Stanchion process should ever be running at a time and
servicing requests.

> Also, you said that if Stanchion is down it can not create users and
> buckets... but can it still create keys inside the existing buckets? and
> also read data from the nodes?

Yes, since these operations do not involve Stanchion.

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Luke Bakken
Alex -

You won't be able to create new users or buckets while Stanchion is
offline. You would follow normal procedures to rebuild Riak KV on the
crashed node, and in the meantime could bring up Stanchion on an
existing node.
--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, May 23, 2016 at 9:29 AM, Alex De la rosa
 wrote:
> Hi Luke,
>
> OK, understood. What if I don't have a load balancer and the node running
> Stanchion crashes? What will happen to the cluster, and how do I rebuild it?
>
> Thanks,
> Alex
>
> On Mon, May 23, 2016 at 8:09 PM, Luke Bakken  wrote:
>>
>> Hi Alex,
>>
>> You should only have one active Stanchion process running in your
>> cluster, since its purpose is to ensure consistent, ordered operations
>> with regard to users and buckets. You can have a hot-backup if you
>> configure a load balancer to proxy requests from the Riak CS
>> processes.
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Sat, May 21, 2016 at 11:41 AM, Alex De la rosa
>>  wrote:
>> > Hi there,
>> >
>> > I'm creating a Riak CS cluster and I got some questions about the
>> > following
>> > sentence from the documentation:
>> >
>> > Riak KV and Riak CS must be installed on each node in your cluster.
>> > Stanchion, however, needs to be installed on only one node.
>> >
>> > Is this statement saying that only 1 node can have Stanchion? Or can it
>> > be
>> > placed in more servers? like Riak KV and Riak CS must be in 5 out of
>> > 5
>> > nodes but Stanchion can be in 1 to 5 out of 5 nodes?
>> >
>> > If is referring that ONLY 1 out of 5 nodes can have Stanchion and the
>> > other
>> > 4 nodes are not allowed to have it installed, what happens if the
>> > "master"
>> > node that has Stanchion crashes?
>> >
>> > Thanks and Best Regards,
>> > Alex
>> >
>> >
>
>



Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Luke Bakken
Hi Alex,

You should only have one active Stanchion process running in your
cluster, since its purpose is to ensure consistent, ordered operations
with regard to users and buckets. You can have a hot-backup if you
configure a load balancer to proxy requests from the Riak CS
processes.
--
Luke Bakken
Engineer
lbak...@basho.com


On Sat, May 21, 2016 at 11:41 AM, Alex De la rosa
 wrote:
> Hi there,
>
> I'm creating a Riak CS cluster and I got some questions about the following
> sentence from the documentation:
>
> Riak KV and Riak CS must be installed on each node in your cluster.
> Stanchion, however, needs to be installed on only one node.
>
> Is this statement saying that only 1 node can have Stanchion? Or can it be
> placed on more servers? Like, Riak KV and Riak CS must be on 5 out of 5
> nodes, but Stanchion can be on 1 to 5 out of 5 nodes?
>
> If it means that ONLY 1 out of 5 nodes can have Stanchion and the other
> 4 nodes are not allowed to have it installed, what happens if the "master"
> node that has Stanchion crashes?
>
> Thanks and Best Regards,
> Alex
>
>



Re: Riak install on instance with multiple OTP's

2016-05-23 Thread Luke Bakken
Hi Robert,

When you install the official Riak package for Ubuntu 14 Riak will use
an OTP release bundled with the package
(http://docs.basho.com/riak/kv/2.1.4/downloads/).

Unless you have specific requirements otherwise, installing Riak from
packages is the recommended, supported method.

Thanks -
--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, May 23, 2016 at 7:35 AM, Robert Latko  wrote:
> Hi Sargun,
>
> I installed it from source on Ubuntu14.04 LTS.
>
> I'll take a look at kerl as well; right now this issue is more academic than
> anything else.
>
> Sincerely,
>
> Robert
>
>
> On 05/21/2016 12:42 PM, Sargun Dhillon wrote:
>>
>> How did you install OTP18? When dealing with Erlang, and multiple
>> installs of it, I might suggest using kerl
>> (https://github.com/kerl/kerl). It's an excellent tool for dealing
>> with the problem.
>>
>>
>> On Sat, May 21, 2016 at 12:01 PM, Robert Latko 
>> wrote:
>>>
>>> Hi all,
>>>
>>> Quick question:
>>>
>>> I have an instance with OTP18 and I want to make it a Riak node. I
>>> downloaded/installed OTP16 patch 8 for Riak 2.1.4. How then do I get
>>> make rel to use OTP16 instead of the default OTP18?
>>>
>>>
>>> Thanks in advance.
>>>
>>> Sincerely,
>>>
>>> Robert
>>>
>
>
>



Re: Precommit hook function - no error log - how to debug?

2016-05-17 Thread Luke Bakken
Hello everyone -

I have confirmed that precommit hooks are correctly called in a 3-node
environment, running on the same VM. Riak was both started by and is
running as a local user account (no "root" permissions necessary).

Overview of the verification process can be found here:
https://gist.github.com/lukebakken/4c097233e2bfc4bf81233ce5581bde1d

I built Riak using these commands:

git clone git://github.com/basho/riak
cd riak && git checkout riak-2.1.4
make locked-deps
make stagedevrel DEVNODES=3

Sanket -

At this point, all indication is that this issue is due to something
specific to your environment. Since Amazon Linux is based on CentOS /
RedHat, the official packages for one of those distros *should* work.
I recommend that you use three separate servers and install official
packages on them, which should resolve this issue.

If you start over from a source build *and* require multiple nodes to
run on the same server, please use "make stagedevrel DEVNODES=3"
rather than "make rel". This should correctly create three dev
directories, each with a separate Riak ready to run in them. Note that
this is *not* a supported production environment, but is appropriate
for testing.

Thanks -

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, May 17, 2016 at 11:10 AM, Luke Bakken  wrote:
>
> Riak users subscribers -
>
> I would like to add the caveat that this workaround indicates an 
> environment-specific issue and not (at this point) a bug in Riak. I am 
> working to get to the root cause (pun intended)
>
> Thanks,
> Luke
> lbak...@basho.com
>
> On May 17, 2016 10:54 AM, "Sanket Agrawal"  wrote:
> >
> > All, the issue is fixed now - the problem is that Riak precommit and 
> > postcommit hooks seem to work only in root user mode. On AWS, switching 
> > from ec2-user to root user fixed the precommit trigger issue.
> > That does raise the question of whether Riak should really be using a root
> > user account. For example, PostgreSQL runs fine as a non-root user.



Re: Precommit hook function - no error log - how to debug?

2016-05-17 Thread Luke Bakken
Riak users subscribers -

I would like to add the caveat that this workaround indicates an
environment-specific issue and not (at this point) a bug in Riak. I am
working to get to the root cause (pun intended)

Thanks,
Luke
lbak...@basho.com

On May 17, 2016 10:54 AM, "Sanket Agrawal"  wrote:
>
> All, the issue is fixed now - the problem is that Riak precommit and
> postcommit hooks seem to work only in root user mode. On AWS, switching
> from ec2-user to root user fixed the precommit trigger issue.
> That does raise the question of whether Riak should really be using a root
> user account. For example, PostgreSQL runs fine as a non-root user.


Re: RIAK TS - FreeBSD

2016-05-16 Thread Luke Bakken
Hello,

There is no plan for official FreeBSD support in Riak TS. Support for
FreeBSD packages for Riak KV will be discontinued in the future.

--
Luke Bakken
Engineer
lbak...@basho.com

On Sun, May 15, 2016 at 5:41 PM, Outback Dingo  wrote:
> Curious why there is no FreeBSD pkg for RIAK TS on the web site.
>
>


