Re: Download link for basho-bench tool

2017-05-31 Thread Magnus Kessler
On 31 May 2017 at 18:38, Magnus Kessler <mkess...@basho.com> wrote:

> On 31 May 2017 at 10:04, helsing <patrik.hels...@klarna.com> wrote:
>
>> Hi,
>>
>> In the Benchmarking docs on the basho homepage
>> (http://docs.basho.com/riak/kv/2.2.3/using/performance/benchmarking/)
>> there
>> is a reference to download RPMs (i.e.
>> http://ps-tools.s3.amazonaws.com/basho-bench-0.10.0.53.g0e15158-1.el7.centos.x86_64.rpm)
>> However this link is not accessible.
>>
>> Where can I find downloadable binaries nowadays?
>>
>> Cheers
>>
>
>
> Hi Patrik,
>
> That particular file still seems to be available at
> https://web-beta.archive.org/web/*/http://ps-tools.s3.amazonaws.com/basho-bench_0.10.0.53.g0e15158-ubuntu14.04LTS-1_amd64.deb.
>
> I do not have admin rights to the Amazon S3 buckets, and am not in a
> position to establish alternative download locations.
>
> HTH.
>
> Regards,
>
> Magnus
>
>
And the RPM is actually here:
https://web-beta.archive.org/web/*/http://ps-tools.s3.amazonaws.com/basho-bench-0.10.0.53.g0e15158-1.el7.centos.x86_64.rpm

-M.


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Download link for basho-bench tool

2017-05-31 Thread Magnus Kessler
On 31 May 2017 at 10:04, helsing <patrik.hels...@klarna.com> wrote:

> Hi,
>
> In the Benchmarking docs on the basho homepage
> (http://docs.basho.com/riak/kv/2.2.3/using/performance/benchmarking/)
> there
> is a reference to download RPMs (i.e.
> http://ps-tools.s3.amazonaws.com/basho-bench-0.10.0.53.g0e15158-1.el7.centos.x86_64.rpm)
> However this link is not accessible.
>
> Where can I find downloadable binaries nowadays?
>
> Cheers
>


Hi Patrik,

That particular file still seems to be available at
https://web-beta.archive.org/web/*/http://ps-tools.s3.amazonaws.com/basho-bench_0.10.0.53.g0e15158-ubuntu14.04LTS-1_amd64.deb.

I do not have admin rights to the Amazon S3 buckets, and am not in a
position to establish alternative download locations.

HTH.

Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OTP 19 support

2017-05-29 Thread Magnus Kessler
On 27 May 2017 at 10:43, Senthilkumar Peelikkampatti <sendto...@gmail.com>
wrote:

> Any timeline for upgrading Riak to OTP 19? OTP 20 is expected in a few
> weeks. Being stuck on an older OTP is keeping us from niceties like
> improved maps etc. that would come with the upgrade.
>
> Thanks,
> Senthil
>
>
Hi Senthil,

AFAIK, there are still quite a few remaining issues in Riak's code base
that need to be resolved before the database itself can be compiled and run
on OTP-19+. However, IIRC the Riak Erlang Client can be compiled on OTP-19,
and some preliminary testing I did last year suggests that it works OK. If
you go down that route, please test your own build of the Riak Erlang
Client carefully.

Kind Regards,

Magnus



-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Put failure: too many siblings

2017-05-25 Thread Magnus Kessler
On 25 May 2017 at 09:39, Vladyslav Zakhozhai <v.zakhoz...@smartweb.com.ua>
wrote:

> Hi,
>
> I've been trying to change dvv_enabled for default bucket type. But this
> is impossible with riak-admin:
>
> riak-admin bucket-type update default '{"props":{"dvv_enabled":true}}'
> Error updating bucket type default:
> no_default_update
>
> I think that workaround for this is changing default props in riak config:
>
> {riak_core, [
>
>  {default_bucket_props, [
>  {allow_mult, true},
>  {dvv_enabled, true}
>  ]},
> ...
>
> (yes, I still use old-style configs)
>
> And then I need to restart all riak nodes. Here are two questions:
> 1. Is this approach correct?
> 2. Is it ok to have different default_bucket_props values on different
> nodes of the same cluster (for a short period of time)?
>
> I have to restart 27 riak nodes. There are several billion keys in riak
> and each node starts very slowly (20-30-60 min; bitcask backend). So I
> can't change default_bucket_props simultaneously in such a way.
>
> I can also change this parameter in the riak console, i.e.
>
> application:set_env(riak_core, default_bucket_props, [{dvv_enabled, true},
> ..]). But what do I need to do to apply these changes?
>
>
Hi Vladyslav,

The recommended approach for changing the default bucket type's properties
is to change the settings in `riak.conf` or `advanced.config`. However, I
just checked that any settings changed through a set_env call also seem to
be reflected in the runtime configuration.

If you'd like to try this, I recommend making the change on a test cluster
first, as I have not verified if this causes issues on a production CS
cluster. The set_env call should pass in the complete set of bucket type
properties, not just the changes. You can try the following (shown here
with the default bucket-type properties):

riak_core_util:rpc_every_member_ann(application, set_env,
    [riak_core, default_bucket_props,
     [{allow_mult, false}, {big_vclock, 50},
      {chash_keyfun, {riak_core_util, chash_std_keyfun}},
      {dvv_enabled, false}, {dw, quorum}, {last_write_wins, false},
      {linkfun, {modfun, riak_kv_wm_link_walker, mapreduce_linkfun}},
      {n_val, 3}, {notfound_ok, true}, {old_vclock, 86400},
      {postcommit, []}, {pr, 0}, {precommit, []}, {pw, 0}, {r, quorum},
      {repl, true}, {rw, quorum}, {small_vclock, 50}, {w, quorum},
      {write_once, false}, {young_vclock, 20}]],
    5000).


If you go down this route, please don't forget to also make the changes in
the configuration files, in order for these settings to persist across a
restart.
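
For illustration, a minimal `advanced.config` fragment that would persist
the two settings from your message (this mirrors your own old-style
config; merge it with any riak_core entries you already have):

[
 {riak_core, [
   {default_bucket_props, [
     {allow_mult, true},
     {dvv_enabled, true}
   ]}
 ]}
].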

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Long delays when trying to recover from errors in Java client at startup

2017-05-22 Thread Magnus Kessler
On 22 May 2017 at 07:02, Toby Corkindale <t...@dryft.net> wrote:

> Hi,
> I've been trying to make a JVM-based app have better error recovery when
> the Riak cluster is still in a starting-up state.
> I have a fairly naive wait-loop that tries to connect and list buckets,
> and if there's an exception, retry again after a short delay.
>
> However once the Riak cluster comes good, the java client hangs on the
> first operation it makes, for a really long time. Minutes.
>  -- in particular, at
> com.basho.riak.client.core.RiakCluster.retryOperation(RiakCluster.java:479)
>
> I've tried shutting down and recreating the RiakClient between attempts,
> but this doesn't seem to help.
> I guess the node manager has its own back-offs and delays.. Is there a way
> to reduce these timeouts?
>
> Thanks,
> Toby
>
>
Hi Toby,

Using bucket listing as a method to determine liveness is a really bad
idea. Bucket listing, just like key listing, requires a coverage query
across ALL objects stored in the cluster, and will take a really long time
if the cluster contains many objects.

A better alternative would be to have a canary object with a known key,
that can be read quickly.

In startup scripts that need to wait until Riak KV is operational, we
recommend using `riak-admin wait-for-service riak_kv`.
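
For example (the names here are illustrative only — assuming your
application writes a small canary object under bucket "health", key
"canary"):

curl -s -o /dev/null -w '%{http_code}\n' \
    http://localhost:8098/types/default/buckets/health/keys/canary

and, in a startup script (the node name is an example):

riak-admin wait-for-service riak_kv riak@192.168.1.10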

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Monitoring Riak-KV using Golang

2017-05-08 Thread Magnus Kessler
On 4 May 2017 at 18:46, Prakash Parmar <prakash.par...@outlook.com> wrote:

> Hi Magnus,
>
> How can I interpret the health statistics and decide whether the riak
> cluster is healthy or not? I have gone through all the statistics but am
> not able to conclude which ones I have to use to monitor the health of
> the cluster, and based on what I can draw a conclusion. E.g. disk status
> gives percent utilised, so I monitor it, and if it's >85% I can say it's
> a warning, >95% an alarm (not healthy), and <85% is normal.
>
> Thanks for your kind help,
> Prakash Parmar
>
>

Hi Prakash,

Please have a look at the documentation of Riak's stats module [0]. In
addition, as a general rule, Riak's data disk should never get more than
about 80% full (assuming it holds only Riak data), as additional disk
space may be needed for fallback partitions created while another node is
down. Rather than looking at point-in-time values as reported by
`riak-admin status` and the HTTP `/stats` endpoint, it is often more
valuable to graph the development of the reported statistics over time. Of
particular interest are the GET and PUT times reported (both node and
vnode), as these show the latencies the cluster exhibits when returning
results.
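
For example, a sketch of pulling the latency percentiles out of /stats
(assuming `jq` is installed; the *_fsm_time_* values are in microseconds):

curl -s http://localhost:8098/stats | jq '{get_median: .node_get_fsm_time_median,
    get_99: .node_get_fsm_time_99, put_median: .node_put_fsm_time_median,
    put_99: .node_put_fsm_time_99}'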

Kind Regards,

Magnus

[0]:
http://docs.basho.com/riak/kv/2.2.3/using/reference/statistics-monitoring/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.4 crashes with Out of Memory Error

2017-05-04 Thread Magnus Kessler
On 4 May 2017 at 09:56, Arulappan, Jerald (Jerald) <ajer...@avaya.com>
wrote:

> Hi Magnus Kessler,
>
>
>
> The configuration looks good.
>
>
>
> [root@server205 bin]# ./riak config effective | grep "_dir"
>
> anti_entropy.data_dir = $(platform_data_dir)/anti_entropy
>
> bitcask.data_root = $(platform_data_dir)/bitcask
>
> leveldb.data_root = $(platform_data_dir)/leveldb
>
> log.console.file = $(platform_log_dir)/console.log
>
> log.crash.file = $(platform_log_dir)/crash.log
>
> log.error.file = $(platform_log_dir)/error.log
>
> platform_bin_dir = ./bin
>
> platform_data_dir = ./data
>
> platform_etc_dir = ./etc
>
> platform_lib_dir = ./lib
>
> platform_log_dir = ./log
>
> ring.state_dir = $(platform_data_dir)/ring
>
> search.anti_entropy.data_dir = $(platform_data_dir)/yz_anti_entropy
>
> search.root_dir = $(platform_data_dir)/yz
>
> search.temp_dir = $(platform_data_dir)/yz_temp
>
>
>
> Regards,
>
> Jerald
>
>
>
Hi Jerald,

The errors that fill up the logs at a very high rate are due to the use of
relative file paths for platform_{bin,data,etc,lib,log}_dir. These entries
should generally contain absolute paths, such as /var/lib/riak, as init
systems may start the application from an arbitrary working directory.
Please check if the errors go away after adjusting platform_data_dir.
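
For example, the typical packaged-install locations look like this (adjust
them to wherever your installation actually lives):

platform_data_dir = /var/lib/riak
platform_log_dir = /var/log/riak
platform_etc_dir = /etc/riak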

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.4 crashes with Out of Memory Error

2017-05-04 Thread Magnus Kessler
On 2 May 2017 at 14:56, Arulappan, Jerald (Jerald) <ajer...@avaya.com>
wrote:

> Hi,
>
> I am using a single node riak server 2.1.4 with bitcask as backend for
> storing files.
> The riak node stops working after every week. (Looks like when the active
> anti-entropy process recreates the hash tree)
> The sylog shows Out of memory Error. But the console.log shows "sst: No
> such file or directory"
> *Syslog Error:*
>
> Apr 26 17:39:37 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:37 TLCCBAPRO2 kernel: Killed process 16987 (sh)
> total-vm:106168kB, anon-rss:116kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 30374 (memsup)
> total-vm:4112kB, anon-rss:80kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 14351 (cpu_sup)
> total-vm:4112kB, anon-rss:68kB, file-rss:0kB
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Out of memory: Kill process 16685
> (beam.smp) score 824 or sacrifice child
> Apr 26 17:39:41 TLCCBAPRO2 kernel: Killed process 30385 (sh)
> total-vm:106164kB, anon-rss:136kB, file-rss:416kB
> Apr 26 17:44:48 TLCCBAPRO2 run_erl[16682]: Erlang closed the connection.
>
> *Console.log:*
>
> 2017-04-26 17:37:03.493 [info] 
> <0.625.0>@riak_kv_vnode:maybe_create_hashtrees:227
> riak_kv/91343852333181432387730302044767688728495783936: unable to start
> index_hashtree: {error,{{badmatch,{error,{db_open,"IO error:
> ./data/anti_entropy/91343852333181432387730302044767688728495783936/sst_0/001954.sst:
> No such file or directory"}}},[{hashtree,new_segment_store,2,[{file,"src/
> hashtree.erl"},{line,675}]},{hashtree,new,2,[{file,"src/
> hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_
> tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,
> 610}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{
> riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_
> index_hashtree.erl"},{line,474}]},{riak_kv_index_
> hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{
> line,268}]},{gen_server,init_it,6,[{file,"gen_server.erl"},
> {line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.
> erl"},{line,239}]}]}}
> 2017-04-26 17:37:03.515 [error] <0.30178.2881> CRASH REPORT Process
> <0.30178.2881> with 0 neighbours exited with reason: no match of right hand
> value {error,{db_open,"IO error: ./data/anti_entropy/
> 936274486415109681974235595958868809467081785344/37.sst: No such file
> or directory"}} in hashtree:new_segment_store/2 line 675 in
> gen_server:init_it/6 line 328
> 2017-04-26 17:37:03.515 [info] 
> <0.623.0>@riak_kv_vnode:maybe_create_hashtrees:227
> riak_kv/45671926166590716193865151022383844364247891968: unable to start
> index_hashtree: {error,{{badmatch,{error,{db_open,"IO error:
> ./data/anti_entropy/45671926166590716193865151022383844364247891968/sst_0/002239.sst:
> No such file or directory"}}},[{hashtree,new_segment_store,2,[{file,"src/
> hashtree.erl"},{line,675}]},{hashtree,new,2,[{file,"src/
> hashtree.erl"},{line,246}]},{riak_kv_index_hashtree,do_new_
> tree,3,[{file,"src/riak_kv_index_hashtree.erl"},{line,
> 610}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{
> riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_kv_
> index_hashtree.erl"},{line,474}]},{riak_kv_index_
> hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{
> line,268}]},{gen_server,init_it,6,[{file,"gen_server.erl"},
> {line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.
> erl"},{line,239}]}]}}
> 2017-04-26 17:37:03.516 [error] <0.30207.2881> CRASH REPORT Process
> <0.30207.2881> with 0 neighbours exited with reason: no match of right hand
> value {error,{db_open,"IO error: ./data/anti_entropy/
> 45671926166590716193865151022383844364247891968/sst_0/002239.sst: No such
> file or directory"}} in hashtree:new_segment_store/2 line 675 in
> gen_server:init_it/6 line 328
>
>
>
> The complete logs are in the attached zip file. Any thoughts on the root
> cause and possible solution to overcome this is much appreciated.
>
>
>
> Regards,
>
> Jerald
>
>
>


Hi Jerald,

I suspect that there is a misconfiguration in your setup. Please check, by
running `riak config effective | grep "_dir"`, what the values of
`platform_data_dir` and `anti_entropy.data_dir` are set to.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Monitoring Riak-KV using Golang

2017-05-02 Thread Magnus Kessler
On 30 April 2017 at 13:06, Prakash Parmar <prakash.par...@outlook.com>
wrote:

> Hi Magnus,
>
> Thanks for your quick response.
>
> One more question. How I can get Cluster Status ?
>
> Regards,
> Prakash Parmar
>
>
Hi Prakash,

The command 'riak-admin cluster status' does not have an HTTP equivalent.
Like most sub-commands of 'riak-admin', it is implemented as an RPC call
[0] into the riak_core_console Erlang module.


[0]:
https://github.com/basho/riak/blob/109e61812e8cd1014e5439cc33b72c904144fe39/rel/files/riak-admin#L308-L311

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Monitoring Riak-KV using Golang

2017-04-30 Thread Magnus Kessler
On 30 April 2017 at 10:34, Prakash Parmar <prakash.par...@outlook.com>
wrote:

> Hi All,
>
> For my project I have to read the riak status periodically. From the
> CLI, I can do this using the command 'riak-admin status'. How can I get
> it in a Golang program?
>
> I have gone through riak-go-client but didn't find any API.
>
> Any help is really appreciated :)
>
> You can answer on stackoverflow :
> http://stackoverflow.com/questions/43704208/monitoring-riak-from-golang
>
> Thanks,
> Prakash Parmar
>
>
Hi Prakash,

You can use the HTTP /stats endpoint [0], which returns all monitored
properties in JSON format.
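
For instance, from the command line (jq is used here only to pick out a
few fields; a JSON decoder in your Go program would do the same):

curl -s http://localhost:8098/stats | \
    jq '{connected_nodes, ring_members, node_get_fsm_time_median}'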

Kind Regards,

Magnus

[0]:
http://docs.basho.com/riak/kv/2.2.3/using/reference/statistics-monitoring/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-04-28 Thread Magnus Kessler
On 28 April 2017 at 15:15, Matthew Von-Maszewski <matth...@basho.com> wrote:

> Daniel,
>
> Something is wrong.  All instances of leveldb within a node share the
> total memory configuration.  The memory is equally divided between all
> active vnodes.  It is possible to create an OOM situation if total RAM is
> low and vnodes count per node is high relative to RAM size.
>
> The best next step would be for you to execute the riak-debug program on
> one of the nodes known to experience OOM.  Send the resulting .tar.gz file
> directly to me (no need to share that with the mailing list).  I will
> review the memory situation and suggest options.
>
> Matthew
>
> On Apr 28, 2017, at 8:22 AM, Daniel Miller <dmil...@dimagi.com> wrote:
>
> Hi Luke,
>
> I'm reviving this thread from March where we discussed a new backend
> configuration for our riak cluster. We have had a chance to test out the
> new recommended configuration, and so far we have not been successful in
> limiting the RAM usage of leveldb with multi_backend. We have tried various
> configurations to limit memory usage without success.
>
> First try (default config).
> riak.conf: leveldb.maximum_memory.percent = 70
>
> Second try.
> riak.conf: leveldb.maximum_memory.percent = 40
>
> Third try
> riak.conf: #leveldb.maximum_memory.percent = 40 (commented out)
> advanced.config: [{eleveldb, [{total_leveldb_mem_percent, 30}]}, ...
>
> In all cases (under load) riak consumes all available RAM and eventually
> becomes unresponsive, presumably due to OOM conditions. Is there a way to
> limit the amount of RAM consumed by riak with the new multi_backend
> configuration? For example, do we need to consider ring size or other
> configuration parameters when calculating the value of
> total_leveldb_mem_percent?
>
> Notably, the old (storage_backend = leveldb in riak.conf, empty
> advanced.config) clusters have had very good RAM and disk usage
> characteristics. Is there any way we can make riak or riak cs avoid the
> rare occasions where it overwrites the manifest file while using this
> (non-multi) backend?
>
> Thank you,
> Daniel Miller
>
>
> On Tue, Mar 7, 2017 at 3:58 PM, Luke Bakken <lbak...@basho.com> wrote:
>
>> Hi Daniel,
>>
>> Thanks for providing all of that information.
>>
>> You are missing important configuration for riak_kv that can only be
>> provided in an /etc/riak/advanced.config file. Please see the following
>> document, especially the section to which I link here:
>>
>> http://docs.basho.com/riak/cs/2.1.1/cookbooks/configuration/riak-for-cs/#setting-up-the-proper-riak-backend
>>
>> [
>> {riak_kv, [
>> % NOTE: double-check this path for your environment:
>> {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-2.1.1/ebin"]},
>> {storage_backend, riak_cs_kv_multi_backend},
>> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
>> {multi_backend_default, be_default},
>> {multi_backend, [
>> {be_default, riak_kv_eleveldb_backend, [
>> {data_root, "/opt/data/ecryptfs/riak"}
>> ]},
>> {be_blocks, riak_kv_eleveldb_backend, [
>> {data_root, "/opt/data/ecryptfs/riak_blocks"}
>> ]}
>> ]}
>> ]}
>> ].
>>
>> Your configuration will look like the above. The contents of this file
>> are merged with the contents of /etc/riak/riak.conf to produce the
>> configuration that Riak uses.
>>
>> Notice that I chose riak_kv_eleveldb_backend twice because of the
>> discussion you had previously about RAM usage and bitcask (
>> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/018801.html)
>>
>> In your current configuration, you are not using the expected prefix for
>> the block data. My guess is that on very rare occasions your data happens
>> to overwrite the manifest for a file. You may also have corrupted files at
>> this point without noticing it at all.
>>
>> IMPORTANT: you can't switch from your current configuration to this
>> new one without re-saving all of your data.
>>
>
>
Hi Daniel,

In a typical CS setup, both the bitcask and leveldb backends are in use.
Bitcask's memory use depends directly on the number of keys stored in that
backend. Can you please let me know how many files and how much total data
is stored in the CS cluster?

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 5-node low-cost, low-power Riak KV cluster -- this sound feasible?

2017-04-27 Thread Magnus Kessler
On 27 April 2017 at 04:01, Lloyd R. Prentice <ll...@writersglen.com> wrote:

> Hello,
>
> I'd like to build a five-node low-cost, low-power Riak KV cluster. Use
> case: learning Riak,  development, testing, storage of personal data.
>
> -- Based on pure hand-waving, I've plugged numbers that seem more than
> adequate for my purposes into the Riak Cluster Capacity Planning
> Calculator. Here's the recommendation:
>
> To manage your estimated 100.0 thousand key/bucket pairs where bucket
> names are ~10 bytes, keys are ~36 bytes, values are ~97.7 KiB and you are
> setting aside 2.0 GiB of RAM per-node for in-memory data management within
> a cluster that is configured to maintain 3 replicas per key (N = 3) then
> Riak, using the Bitcask storage engine, will require at least:
>
> 5 nodes
> 6.4 MiB of RAM per node (31.9 MiB total across all nodes)
> 5.6 GiB of storage space per node (28.0 GiB total storage space used
> across all nodes)
> Based on this, I'm considering the following hardware:
>
> 5 ODROID XU-4
> http://www.hardkernel.com/main/products/prdt_info.php?g_code=G143452239825
>
> Each with 8 Gb eMMC storage
>
> This provides a 64-bit 2 GHz processor and 2 GB RAM per node, running
> Ubuntu 16.04, at a total cost of something under $650 for the cluster.
>
> Does this sound like a feasible way to go? Any downsides?
>
> Many thanks,
>
> LRP
>
>
Hi Lloyd,

The hardware you've linked to is based on the ARM processor architecture.
Please be aware that Basho does not supply prebuilt binaries for this
architecture, and you would have to compile your own binaries.

If you simply want to learn how to operate a multi-node cluster, you can
build a so-called DEVREL release from source. This will install a
configurable number of separately configured nodes on a standard OS X or
Linux workstation and should be sufficient for experimenting with most
aspects of operating a multi-node cluster.
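
A sketch of that workflow (assuming build prerequisites, including a
supported Erlang/OTP, are already in place; the DEVNODES variable is
supported in recent Riak releases):

git clone https://github.com/basho/riak.git
cd riak
make devrel DEVNODES=5
dev/dev1/bin/riak start        # repeat for dev2 .. dev5
dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1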

As far as memory requirements for all but a small test workload are
concerned, we recommend providing at least 350 MB of memory per VNode if
using the leveldb backend in production. Typically, a Riak node should
have between 6 and 25 VNodes (depending on ring size), and a 5-node
cluster in its default configuration will have 13 or 12 VNodes per node.
If you plan to use Solr-based search, add at least another 1 GB for Solr.
I have successfully run a test cluster with heavy Solr use on 8 GB nodes.

That said, I am aware that with some extra work Riak can be built on ARM
hardware, such as the Raspberry Pi, and has been used for demonstration
purposes on this platform. You can certainly run a Riak cluster on
constrained hardware if needed for testing purposes. However, if you'd
like to evaluate performance characteristics, you should use a setup that
mirrors an eventual production setup more closely.

I hope this helps somewhat towards finding the best platform for you.

Kind Regards,

Magnus




-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: raik-admin status vs /stats

2017-04-26 Thread Magnus Kessler
On 26 April 2017 at 16:09, Travis Kirstine <tkirst...@firstbasesolutions.com
> wrote:

> All
>
>
>
> According to the 2.1.4 docs there is an issue: executing the riak-admin
> status command more than once a minute can impact performance.  Is
> performance impacted when accessing stats via the HTTP interface /stats
> more than once a minute?
>
>
>
> Regards
>
>
>
>
>
Hi Travis,

Thank you for bringing this issue to our attention. It appears that the
warning in the documentation is a left-over from a much earlier version of
Riak. The paragraph following the warning is correct; you can call
`riak-admin status` and access the HTTP `/stats` endpoint more than once a
minute without negative effects. We will correct this on the documentation
site soon.

Please be aware, though, that many of the properties returned are
accumulated over a floating one-minute window. We therefore still advise
gathering the stats no more often than about once a minute, as more
frequent access will not yield more meaningful fine-grained values.

Kind Regards

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Re:

2017-04-18 Thread Magnus Kessler
On 19 April 2017 at 00:41, Cesar Stuardo <castua...@uchicago.edu> wrote:

> and apparently, is for riak ts. Is that what you installed?


That's correct. Riak-shell is currently only offered as part of Riak TS.

@Wagner, Riak KV is a key-value store with a REST API and client
implementations in a number of supported client libraries. Riak TS is a
variant of Riak KV optimised for time-series data. Riak TS also has a
SQL-like query language (exposed via riak-shell), which can make some
time-series queries easier to write for end users.

If you want to check out the basic CRUD operations in Riak KV, please
follow the "Getting Started" section of Riak's documentation [0]. There are
examples for many programming languages [1][2][3][4][5].
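
For a quick first taste over HTTP (the bucket and key names here are just
examples; these are the same operations the client libraries wrap):

curl -XPUT http://localhost:8098/types/default/buckets/welcome/keys/demo \
    -H 'Content-Type: text/plain' -d 'hello riak'
curl http://localhost:8098/types/default/buckets/welcome/keys/demo
curl -XDELETE http://localhost:8098/types/default/buckets/welcome/keys/demo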

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/
[1]:
http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/python/crud-operations/
[2]:
http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/java/crud-operations/
[3]:
http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/erlang/crud-operations/
[4]:
http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/ruby/crud-operations/
[5]:
http://docs.basho.com/riak/kv/2.2.3/developing/getting-started/csharp/crud-operations/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Siblings on first write to a key

2017-04-18 Thread Magnus Kessler
On 18 April 2017 at 08:20, Daniel Abrahamsson <hams...@gmail.com> wrote:

> I've run into a case where I got a sibling error/response on the first
> ever write to a key. I would like to understand how this could happen.
> Normally when you get siblings, it is because you have written a value
> with an out-of-date vclock. But since this is the first write, there
> is no vclock. Could someone shed some light on this for me?
>
> It is worth mentioning that it took 3 seconds for Riak to deliver
> the response, so it is possible there was some kind of network issue
> at the time.
>
> Here are some details about my setup:
> Number of nodes: 8.
> n_val: 5
> write options: pw: 3 (quorum), return_body
>
> Regards,
> Daniel Abrahamsson
>


Hi Daniel,

Please let me know if all nodes in this cluster were set up completely
fresh, with empty backend directories, or if any of them had been used
before for a Riak installation. If the latter is the case, it may be that
the key in question had already been used once before. Cluster nodes pick
up data from pre-existing backends.

How do you access the key for read and write operations?

Kind Regards,

Magnus


Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Node is not running!

2017-03-11 Thread Magnus Kessler
On 8 March 2017 at 08:40, Jurgen Ramaliu <jurgenrama...@gmail.com> wrote:

> Hello,
>
> I have two questions :
>
> 1.
>
> Problem with the Riak KV Bitcask installation. I tried this link to install
> Bitcask on Riak KV:
>
> http://docs.basho.com/riak/kv/2.2.0/setup/planning/backend/bitcask/
>
> And when I go on step
> Bitcask Database Files
> <http://docs.basho.com/riak/kv/2.2.0/setup/planning/backend/bitcask/#bitcask-database-files>
>
> I try to do this command :
> curl -XPUT http://localhost:8098/types/default/buckets/test/keys/test \
> -H "Content-Type: text/plain" \ -d "hello
>
> I receive this error :
>
> curl: (6) Couldn't resolve host ' -H'
> curl: (6) Couldn't resolve host 'Content-Type: text'
> curl: (6) Couldn't resolve host ' -d'
> curl: (6) Couldn't resolve host 'hello'
> 500 Internal Server Error: The server encountered an error while
> processing this request: {error,function_clause} (mochiweb+webmachine
> web server)
>


Hi Jurgen,

I am not sure what causes this issue on your machine, but it may be due to
a copy and paste error. Please try putting the entire command onto one line
in the shell, like this:

curl -XPUT http://localhost:8098/types/default/buckets/test/keys/test -H
"Content-Type: text/plain" -d "hello"



>
>
> 2.
>
> Bucket type activation problem.
> I have created bucket-type hello and I can't activate it.
> When I try to activate it with the command: riak-admin bucket-type
> activate hello, I receive this message: 'hello has been created but
> cannot be activated yet'
>
>
>
> Attached are screen shots.
> Can you please give to me one solution to resolve those problems?
>

Please ensure that all nodes in the cluster are up. Before a bucket type
can be activated, it must have been distributed to all nodes.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Cannot add node to cluster: ring_creation_size

2017-03-11 Thread Magnus Kessler
On 8 March 2017 at 19:45, Scites, Troy <troy_sci...@comcast.com> wrote:

> Hello,
>
> I have 3 nodes in a development environment that were configured in a
> cluster.  It was working fine until one of the servers crashed.  I tried
> clearing the data directory and was able to removed the crashed node from
> the cluster.  When I try to add the third node back in, I get the following
> error:
> Failed: riak@ has a different ring_creation_size
>
> All 3 nodes have ring_size set to 64 in the riak.conf.  Has anyone
> experienced this error and have a suggestion for adding the node back into
> the ring?
>
> Thanks,
> Troy
>
>
Hi Troy,

Please check with 'riak status | egrep
"ring_creation_size|ring_num_partitions"' what the different nodes believe
their actual ring size to be.

As you have already wiped (some of) the data directory on the crashed node,
you can stop this node and then it's safe to also remove the 'ring'
directory if this node reports a different ring size. Afterwards, restart
the node and attempt to join it to the rest of the cluster.
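
Roughly (the paths assume a default package install under /var/lib/riak,
and the node name is an example):

riak stop
mv /var/lib/riak/ring /var/lib/riak/ring.bak
riak start
riak-admin cluster join riak@node1.example.com
riak-admin cluster plan
riak-admin cluster commit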

Please let me know if this solves your issue.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: 2.2 CPU usage

2017-03-11 Thread Magnus Kessler
On 6 March 2017 at 11:24, Alexander Popov <mogada...@gmail.com> wrote:

> I upgraded the production cluster to 2.2.0 and it looks like it eats
> more CPU: in idle time always 10-30% (on 2.1.4 it was up to 10% with the
> same load); in peaks it takes up to 100% (on 2.1.4 peaks were up to 75%).
>

Hi Alexander,

There have been significant changes in Riak-2.2 regarding the Active
Anti-Entropy (AAE) implementation. Please see the upgrading documentation
[0] for more details. When you first upgrade to Riak-2.2, all AAE trees
need to be rebuilt and it may be that the additional CPU load is caused by
this process. Depending on your AAE settings, and data volume this may take
a few days to complete. You can monitor progress with the 'riak-admin
aae-status' command.
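
For example:

riak-admin aae-status

The "Entropy Trees" section of the output shows when the tree for each
partition was last built; once every partition reports a recent build, the
post-upgrade rebuild phase is over.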

I would also recommend checking that there are no other 'beam.smp'
processes running on your system. We have observed elevated CPU load when
the beam process created by running 'riak attach' gets detached from its
controlling TTY. If you find additional 'beam.smp' processes that are not
otherwise accounted for, it is safe to kill them.

Let us know if this answers your question, and if after a few days the CPU
load returned to normal levels.

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.2.0/setup/upgrading/version/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: AAE Off

2017-03-01 Thread Magnus Kessler
On 1 March 2017 at 07:23, al so <volks...@gmail.com> wrote:

> How would the data get repaired then? I.e., I'm looking for the complete
> list of cons when AAE is off.
>
> On Tue, Feb 28, 2017 at 9:45 AM, Alexander Sicular <sicul...@basho.com>
> wrote:
>
>> Right. AAE does not come for free. It consumes disk, memory and CPU.
>> Depending on your circumstances it may or may not be advantageous for your
>> system.
>>
>> On Tue, Feb 28, 2017 at 11:39 Matthew Von-Maszewski <matth...@basho.com>
>> wrote:
>>
>>> Performance gains on write intensive applications.
>>>
>>> > On Feb 28, 2017, at 11:18 AM, al so <volks...@gmail.com> wrote:
>>> >
>>> > Why would anyone disable AAE in riak 2.x?
>>>
>>
There are several mechanisms in Riak that repair data. AAE [0] is intended
to detect corrupted data that is not regularly accessed in other ways. When
objects are read, the read-repair mechanism [1] will also fix up lost or
corrupted data. Finally, if a partition is lost and AAE is not enabled, you
can perform a manual partition repair operation [2].

So, if your use case involves short-lived data, or data that is regularly
read or updated, turning off AAE may allow the cluster to handle a higher
peak load. However, there are several cases where having AAE enabled is
important. These include the use of Riak Search / Yokozuna, which without
AAE will not be able to detect objects missing or not yet deleted from a
Riak core under high load, and AAE based MDC replication. Overall, leaving
AAE turned on is recommended for most use cases, but you should give the
cluster enough resources to handle the maximum expected load while also
doing IO and CPU intensive AAE operations like AAE tree rebuilds.
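
For reference, the manual partition repair mentioned in [2] is run from an
attached console, along these lines (the partition ID shown is an example;
use the IDs of the partitions you actually need to repair):

riak attach
1> riak_kv_vnode:repair(25119559391624893906625833062344003363405824).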

Kind Regards,

Magnus


[0]:
https://docs.basho.com/riak/kv/2.2.0/learn/concepts/active-anti-entropy/
[1]:
https://docs.basho.com/riak/kv/2.2.0/learn/concepts/replication/#read-repair
[2]:
https://docs.basho.com/riak/kv/2.2.0/using/repair-recovery/repairs/#repairing-partitions
-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Node is not running!

2017-02-24 Thread Magnus Kessler
On 23 February 2017 at 13:58, Magnus Kessler <mkess...@basho.com> wrote:

> On 23 February 2017 at 13:38, Jurgen Ramaliu <jurgenrama...@gmail.com>
> wrote:
>
>> Hello Magnus,
>>
>> Attached is console.log.
>>
>>
>
>
Hi Jurgen,

If you haven't deleted any older logs since, could you please also run
'riak-debug' and post the resulting archive file? A Basho engineer would
like to see if there were similarities to another issue he's investigating.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Node is not running!

2017-02-23 Thread Magnus Kessler
On 23 February 2017 at 13:38, Jurgen Ramaliu <jurgenrama...@gmail.com>
wrote:

> Hello Magnus,
>
> Attached is console.log.
>
>
Hi Jurgen,

The log contains these lines:

2017-02-23 14:36:17.949 [error] <0.707.0>@riak_kv_vnode:init:512 Failed to
start riak_kv_eleveldb_backend backend for index
91343852333181432387730302044767688728495783936 error:
{db_open,"Corruption: bad record length"}
2017-02-23 14:36:17.950 [error] <0.718.0>@riak_kv_vnode:init:512 Failed to
start riak_kv_eleveldb_backend backend for index
25119559391624893906625833062344003363405824 error:
{db_open,"Corruption: bad record length"}
2017-02-23 14:36:17.970 [notice] <0.707.0>@riak:stop:43 "backend module
failed to start."
2017-02-23 14:36:17.970 [notice] <0.718.0>@riak:stop:43 "backend module
failed to start."

This suggests that a previous unclean shutdown of Riak left some leveldb
data files damaged, specifically in partitions
91343852333181432387730302044767688728495783936 and
25119559391624893906625833062344003363405824. Please follow the
instructions in the documentation[0] to repair the affected partitions. If
this is still a single test node without any valuable data, you can also
simply delete the leveldb backend directory.
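
The procedure in [0] essentially boils down to running eleveldb's repair
function from an Erlang shell that has Riak's libraries on its code path,
roughly like this (all paths are examples for a default package install;
adjust them for your system):

riak stop
$(riak ertspath)/erl -pa /usr/lib64/riak/lib/eleveldb-*/ebin
1> [eleveldb:repair(Dir, []) || Dir <- filelib:wildcard("/var/lib/riak/leveldb/*")].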

Kind Regards,

Magnus

[0]:
https://docs.basho.com/riak/kv/2.2.0/using/repair-recovery/repairs/#healing-corrupted-leveldbs

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Node is not running!

2017-02-23 Thread Magnus Kessler
On 23 February 2017 at 07:19, Jurgen Ramaliu <jurgenrama...@gmail.com>
wrote:

> Hi Paul and Magnus,
> I have resolve it by using command :
>
>
>- riak stop
>
>
>- changing the nodename in riak.conf from nodename = riak@127.0.0.1
>to riak@192.168.1.10
>
>
>- riak-admin reip riak@127.0.0.1 riak@192.168.1.10
>
> But I have another problem: riak starts with this IP but shuts down
> after about 15 seconds.
>
>
> Can you tell me how can I resolve this?
>
> Regards,
> Jurgen
>

Hi Jurgen,

Can you post the portion of the console.log since the last restart of
Riak, please? Without further information it's hard to guess what may have
gone wrong.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Node is not running!

2017-02-22 Thread Magnus Kessler
On 22 February 2017 at 10:34, Jurgen Ramaliu <jurgenrama...@gmail.com>
wrote:

> Hello RIAK,
>
> I have one problem with riak node configuration. I have install RIAK KV
> and everything is ok.
> I run *riak start* on my CentOS 6.5 and it starts ok.
> I have configure file */etc/hosts* :
>
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
> 192.168.1.10  riak1.mydomain.com   riak1
>
> and this file */etc/sysconfig/network* :
>
> NETWORKING=yes
> HOSTNAME=riak1.mydomain.com
>
> But, when i try to do this steps, on this link :
> http://docs.basho.com/riak/kv/2.2.0/using/running-a-cluster/#configure-the-first-node
>
> to configure the first node, when I change nodename from:
> nodename = riak@127.0.0.1 to nodename = riak@192.168.1.10, riak doesn't
> start; I get back these error messages :
>
> [root@riak1 ~]# riak start
> Node 'riak@192.168.1.10' not responding to pings.
> Node 'riak@192.168.1.10' not responding to pings.
> Node is not running!
>
>
>
> or nodename = r...@riak1.mydomain.com, riak doesn't start; I get back
> these error messages :
>
> [root@riak1 ~]# riak start
>
> Node 'r...@riak1.mydomain.com' not responding to pings.
> Node 'r...@riak1.mydomain.com' not responding to pings.
> Node is not running!
>
> Can you please tell me how can I resolve this problem?
>
> Thank you in advance,
>
> Jurgen
>
>
Hi Jurgen,

When a node is started for the first time, its name is recorded in the ring
file (a file under /var/lib/riak/ring by default on CentOS). For a single
node that hasn't joined a cluster, yet, the easiest way to start the node
with a new node name is to simply remove the ring directory and then start
the node. Please note that any changes to bucket properties for buckets
under the default bucket type get lost in this process. If any such changes
have been made, re-apply them once the node has been restarted.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Confusion about node_puts and vnode_puts.

2017-02-15 Thread Magnus Kessler
On 15 February 2017 at 10:09, xiao新 <luna.f...@foxmail.com> wrote:

> Hi,
>
> I am confused about the relation between the statistics "node_puts" and
> "vnode_puts", yes, I have read the user guide.
> For instance, this is the output from "riak-admin status | grep
> node_puts". I don't understand why vnode_puts is much bigger (40 times)
> than node_puts; my cluster has 3 nodes.
>
> Line 178: node_puts : 568
> Line 179: node_puts_counter : 0
> Line 180: node_puts_counter_total : 0
> Line 181: node_puts_map : 0
> Line 182: node_puts_map_total : 0
> Line 183: node_puts_set : 0
> Line 184: node_puts_set_total : 0
> Line 185: node_puts_total : 55476
> Line 347: vnode_puts : 23888
> Line 348: vnode_puts_total : 10305729
> vnode_puts Number of PUT operations coordinated by local vnodes on this
> node in the last minute
>
> node_puts Number of PUTs coordinated by this node, where a PUT is sent to
> a local vnode in the last minute
> Best Regards
> Luna
>
>
Hi Luna,

VNode puts and gets are registered each time a VNode performs an
operation. Any time a node coordinates a client operation, there will be
`n_val` VNode operations. However, not all VNode operations occur as a
result of client-side operations. In particular, AAE tree rebuilds, AAE
repairs, and realtime or fullsync MDC replication are counted at the VNode
level, but don't register at the node level. Assuming the default n_val of
3, your 568 node_puts in the last minute would account for only about
1,700 vnode_puts; the remainder most likely comes from such background
activity.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Truncated bit-cask files

2017-02-14 Thread Magnus Kessler
On 14 February 2017 at 14:46, Arun Rajagopalan <arun.v.rajagopa...@gmail.com
> wrote:

> Hi Magnus
>
> RIAK crashes on startup when I have a truncated bitcask file.
>
> It also crashes when the AAE files are bad, I think. Example below
>
> 2017-02-13 21:18:30 =CRASH REPORT
>
>   crasher:
>
> initial call: riak_kv_index_hashtree:init/1
>
> pid: <0.6037.0>
>
> registered_name: []
>
> exception exit: {{{badmatch,{error,{db_open,"Corruption: truncated
> record at end of file"}}},[{hashtree,new_segment_
>
> store,2,[{file,"src/hashtree.erl"},{line,675}]},{hashtree,
> new,2,[{file,"src/hashtree.erl"},{line,246}]},{riak_kv_index_h
>
> ashtree,do_new_tree,3,[{file,"src/riak_kv_index_hashtree.
> erl"},{line,610}]},{lists,foldl,3,[{file,"lists.erl"},{line,124
>
> 8}]},{riak_kv_index_hashtree,init_trees,3,[{file,"src/riak_
> kv_index_hashtree.erl"},{line,474}]},{riak_kv_index_hashtree,
>
> init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,
> 268}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]}
>
> ,{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,
> 239}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line
>
> ,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> ancestors: [<0.715.0>,riak_core_vnode_sup,riak_core_sup,<0.160.0>]
>
> messages: []
>
> links: []
>
> dictionary: []
>
> trap_exit: false
>
> status: running
>
> heap_size: 1598
>
> stack_size: 27
>
> reductions: 889
>
>   neighbours:
>
>
>
> Regards
> Arun
>
>
Hi Arun,

The crash log you provided shows that there is a corrupted file in the AAE
(anti_entropy) backend. Entries in console.log should have more
information about which partition is affected. Please post output from the
affected node at around 2017-02-13T21:18:30. As this is AAE data, it is
safe to remove the directory named after the affected partition from the
anti_entropy directory before restarting the node. You may find that there
is more than one affected partition, the next of which will only be
encountered after the attempted restart. If this is the case, simply
identify the next partition in the same way and remove it, too, until the
node starts up successfully again.
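
For example (the data path is an example for a default install; take the
partition ID from your console.log):

riak stop
mv /var/lib/riak/anti_entropy/<partition-id> /tmp/
riak start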

Is there a reason why the nodes aren't shut down in the regular way?

Kind Regards,

Magnus



-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Cross-Compile Riak for Embedded Device

2017-02-14 Thread Magnus Kessler
On 13 February 2017 at 16:06, Darshan Shah <dg.shah1...@gmail.com> wrote:

> Our main use case is to create a database in an embedded system to store
> values received from one server.
> For our use case a key-value database is best suited, and we found
> Riak is one of the best for this.
> So we want to cross-compile the Riak database for the embedded system.
>
> On Fri, Feb 10, 2017, 3:46 PM Stephen Etheridge <setheri...@basho.com>
> wrote:
>
>> Darshan,
>>
>> Perhaps if you gave some more details of what you are trying to do I
>> might be able to help further?
>>
>> Stephen
>>
>>
>>
Hi Darshan,

If I understand you correctly, your embedded devices will locally store
data and will communicate with a central server, but not other peer
devices. Riak's strength lies in being a centralised distributed database
optimised for dealing with very large data sets. Riak installations
typically distribute the data set over a small(-ish) number of nodes to
achieve high availability and resilience.

In your use case I expect there to be a large number of embedded devices,
each responsible for a small amount of data. This is not a good fit for
Riak.

A quick search shows embeddable key-value stores, such as RocksDB [0],
unqlite [1], and others (*), which you could use to store data locally in
your embedded devices. If you'd like to use Riak on the central server, I'd
recommend using one of the Riak client libraries [2] to transfer data
between the central server and your devices.

Kind Regards,

Magnus

(*) The quoted embeddable DBs are examples only. I haven't personally used
them yet, and can't vouch for their suitability for your project.

[0]: http://rocksdb.org
[1]: http://unqlite.org
[2]: https://docs.basho.com/riak/kv/2.2.0/developing/client-libraries/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: cluster clear without stopping node

2017-02-09 Thread Magnus Kessler
On 9 February 2017 at 14:46, Daniel Miller <dmil...@dimagi.com> wrote:

> Hi Magnus,
>
> Thanks, this is great news! I was worried that clearing a planned node
> removal would stop the node from which I was issuing the clear command,
> which would be bad. Sounds like I have nothing to worry about in that case.
>
The behaviour of `riak-admin cluster plan` is quite complex, depending on
the staged changes. If a `leave` operation has been staged, it will just
undo the staged change; no node will be stopped. However, if a `join`
operation has been staged, the joining node will be shut down after its
ring has been cleared. When this node restarts, it will behave like a fresh
unjoined node and can be joined again. If `riak-admin cluster clear` was
run from a node that remains in the cluster, this node will be unaffected.


> On running cluster plan multiple times to generate a new plan: this is a
> little surprising since I think I’ve done that before but I haven’t
> observed it generating a new plan. I’ll try it again.
>
You may see the effect more pronounced if you use `claim_v3`, which is more
aggressive. For a discussion about how to rebalance the ring without adding
or removing nodes, please see
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/018815.html


> Thanks for very quick response.
> Daniel
> ​
>

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: cluster clear without stopping node

2017-02-09 Thread Magnus Kessler
On 8 February 2017 at 21:33, Daniel Miller <dmil...@dimagi.com> wrote:

> According to the docs
> <http://docs.basho.com/riak/kv/2.2.0/using/admin/commands/#clear>
> “Running [riak-admin cluster clear] will also stop the current node in
> addition to clearing any staged changes.”
>
> Is there a way to clear the current cluster plan without stopping the
> current node?
>
Hi Daniel,

That statement from the docs is wrong. [riak-admin cluster clear] does not
stop the current node in general. It may stop the node if the command is
run on a joining node, and the join is cancelled, but should be safe to run
on all other nodes. I have reached out to the documentation team to get
this fixed.

Even better (shameless feature request) would be to have a way to “replan”
> the current plan. That is, clear the plan and then redo the currently
> planned actions with a single command. Often I have a plan that ends up
> with an uneven ring allocation and I’ve noticed that I can sometimes get a
> better allocation if I clear and re-plan several times, but this is tedious.
>
You can run  [riak-admin cluster plan] more than once before  [riak-admin
cluster commit], and it may generate a different transition plan every time
depending on cluster state.


> Thanks!
> Daniel​
>
Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak mapreduce error

2017-02-08 Thread Magnus Kessler
On 7 February 2017 at 03:50, raghuveer sj <raghuvee...@gmail.com> wrote:

> Hi Magnus,
>
> Previously i had used developer branch as master build was failing. Now
> with the latest changes i see :
>
> 10> riakc_pb_socket:mapred_bucket(Riak, <<"training">>, [{map, {qfun,
> ReFun}, Re, true}]).
> {error,<<"{\"phase\":0,\"error\":\"{badfun,#Fun 50752066>}\",\"input\":\"{ok,{r_object,<<\\\"training\\\">>,
> <<\\\"bar\\\">>"...>>}
>
> Please help me out.
>
> Regards,
> Raghuveer
>
>
>
Hi Raghuveer,

I have been able to reproduce the error message you are seeing after I
built riak-erlang-client with OTP-19. It appears that there are some
incompatibilities between newer OTP versions and the map-reduce code. I
haven't yet had the time to dig deeper, but suspect that changes in
Erlang's string handling may play a role.

Please try to re-compile the Erlang client with OTP-16, and let me know if
you can successfully run the example code you posted earlier under OTP-16
(e.g. start Erlang with "$(riak ertspath)/erl").
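
A sketch of the rebuild, assuming a git checkout of riak-erlang-client and
an OTP-16 (R16) installation under /opt/otp16 (both paths are examples):

export PATH=/opt/otp16/bin:$PATH
cd riak-erlang-client
make clean && make
erl -pa ebin deps/*/ebin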

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Multiple nodes leaving cluster

2017-02-08 Thread Magnus Kessler
On 7 February 2017 at 23:25, Daniel Miller <dmil...@dimagi.com> wrote:

> Hi Riak Users,
>
> In the documentation
> <http://docs.basho.com/riak/kv/2.2.0/using/admin/commands/#leave> for 
> riak-admin
> cluster leave it says “You can stage multiple leave command before
> planning/committing.” This implies that it is safe to stage multiple nodes
> leaving the cluster simultaneously. Is that true? Will all data in the
> cluster be continuously available during the removal period if, for
> example, I setup and commit a plan for 3 nodes to be leave a 9-node cluster
> (assuming there is enough space for the data on the remaining 6 nodes)?
>
> I had asked a similar question on IRC a couple weeks ago. In that case I
> was asking about replacing multiple nodes simultaneously using riak-admin
> cluster replace. The answer I got there left some doubt in my mind as to
> whether it is safe (i.e., will not result in a period of data unavailability) to
> have multiple nodes leaving the cluster at once. The documentation for
> replace implies that it is safe to replace multiple nodes simultaneously as
> well: “You can stage multiple replace actions before planning/committing.”
>
> Note that I am not asking about force-remove or force-replace, which I
> would expect to result in permanent data loss if multiple nodes are
> force-removed/replaced simultaneously.
>
> My cluster is running Riak 2.1.1 with standard nval of 3.
>
> Thanks!
> Daniel
> ​
>

Hi Daniel,

Yes, staging several riak-admin cluster leave steps before  riak-admin
cluster commit is safe. The leaving nodes will perform an ownership handoff
of all their partitions to other nodes in the cluster before shutting
themselves down. While this is happening, these nodes remain in the cluster
as fully functional nodes. The same is true for  riak-admin cluster replace.

The reason we recommend staging multiple leave or join operations is that
this minimises the reshuffling of data throughout the cluster. Only one set of
ownership handoffs needs to happen, whereas if you were to perform several
consecutive leave or join operations, each commit would trigger a substantial
amount of handoff activity throughout the cluster.
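
If it helps, a minimal sketch of the staged workflow looks like this (node
names are hypothetical):

    riak-admin cluster leave riak@node7.example.com
    riak-admin cluster leave riak@node8.example.com
    riak-admin cluster leave riak@node9.example.com
    riak-admin cluster plan      # review the proposed transitions
    riak-admin cluster commit

Nothing changes until the commit step, and staged changes can be abandoned
with `riak-admin cluster clear`.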

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak mapreduce error

2017-02-06 Thread Magnus Kessler
On 3 February 2017 at 18:31, raghuveer sj <raghuvee...@gmail.com> wrote:

> Hi Team,
>
> I am trying to run mapreduce in erlang.
>
> curl -XPUT http://localhost:8098/buckets/training/keys/foo -H
> 'Content-Type: text/plain' -d 'caremad data goes here'
> curl -XPUT http://localhost:8098/buckets/training/keys/bar -H
> 'Content-Type: text/plain' -d 'caremad caremad caremad caremad'
> curl -XPUT http://localhost:8098/buckets/training/keys/baz -H
> 'Content-Type: text/plain' -d 'nothing to see here'
> curl -XPUT http://localhost:8098/buckets/training/keys/bam -H
> 'Content-Type: text/plain' -d 'caremad caremad caremad'
>
> *Running in erlang shell :*
>
> ReFun = fun(O, _, Re) -> case re:run(riak_object:get_value(O), Re,
> [global]) of
> {match, Matches} -> [{riak_object:key(O), length(Matches)}];
> nomatch -> [{riak_object:key(O), 0}]
> end end.
>
> code:which(riakc_pb_socket).
> "./ebin/riakc_pb_socket.beam"
>
> {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087).
> {ok,<0.36.0>}
>
> riakc_pb_socket:ping(Pid).
> pong
>
> {ok, Re} = re:compile("caremad").
> {ok,{re_pattern,0,0,0,
> <<69,82,67,80,85,0,0,0,0,0,0,0,81,0,0,0,255,255,255,255,
>   255,255,...>>}}
>
> {ok, Riak} = riakc_pb_socket:start_link("127.0.0.1", 8087).
> {ok,<0.42.0>}
>
> riakc_pb_socket:mapred_bucket(Riak, <<"training">>, [{map, {qfun, ReFun},
> Re, true}]).
> * 1: variable 'ReFun' is unbound
>
> Trying to run the famous erlang sample program sample. I am stuck at this
> error. Kindly help me out.
>
> Regards,
> Raghuveer
>
>
Hi Raghuveer,

I have run the steps you provided, and found that they work fine for me.
Can you let me know which version of Riak you are running this against, and
which version of Erlang is used on the client side? Has the
riak-erlang-client been compiled with the same Erlang version?

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Active Anti Entropy Directory when AAE is disabled

2017-01-26 Thread Magnus Kessler
On 25 January 2017 at 21:09, Arun Rajagopalan <arun.v.rajagopa...@gmail.com>
wrote:

> Thanks Luke. Sorry it took me some time to experiment ...
>
> I am not sure what happens in a couple of scenarios. Maybe you can explain
>
> Lets say I lose a node completely and want to replace it. Will the keys
> yet to be "anti-entropied" by that node be distributed correctly when I
> restore that node ?
>
> Secondly restore multiple nodes from a backup, should I replace the
> anti-entropy directory also ?
>
>
Hi Arun,

You can consider AAE data as ephemeral; AAE trees will be recalculated
automatically if missing, or if trees are encountered that are too old
(which may happen if you turn AAE off for some time and then on again).

In a forced replacement scenario (after completely losing a node), the
replacement node would first calculate its own set of AAE trees, which
would essentially be an empty set. Subsequent AAE exchanges with other
nodes will detect the differences and cause missing KV objects to be
repaired.
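
Should you ever want to force a rebuild manually, one common approach is to
remove the AAE data while the node is stopped. A sketch, assuming the default
platform data directory:

    riak stop
    rm -rf /var/lib/riak/anti_entropy
    riak start

On restart the node rebuilds its AAE trees from the KV data over the following
hours, subject to the configured anti-entropy build limits.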

For more information about AAE and partition repairs, please see the
documentation [0][1].

Kind Regards,

Magnus

[0]:
https://docs.basho.com/riak/kv/2.2.0/learn/concepts/active-anti-entropy/
[1]:
https://docs.basho.com/riak/kv/2.2.0/using/repair-recovery/repairs/#repairing-partitions

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Crash Log: yz_anti_entropy

2017-01-19 Thread Magnus Kessler
admatch,{error,{db_open,"IO error: lock
> /var/lib/riak/yz_anti_entropy/639406966332270026714112114313373821099470487552/LOCK:
> Cannot allocate memory"}}},[{hashtree,new_segment_store,2,[{file,"src/
> hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/
> hashtree.erl"},{line,246}]},{yz_index_hashtree,do_new_tree,
> 3,[{file,"src/yz_index_hashtree.erl"},{line,377}]},{
> lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{yz_index_
> hashtree,init_trees,3,[{file,"src/yz_index_hashtree.erl"},{line,340}]},...]}}
> in yz_entropy_mgr:'-reload_hashtrees/3-fun-0-'/2 line 371
> 2017-01-18 15:35:39.101 [error] <0.3030.0> CRASH REPORT Process
> yz_entropy_mgr with 0 neighbours exited with reason: no match of right hand
> value {error,{{badmatch,{error,{db_open,"IO error: lock
> /var/lib/riak/yz_anti_entropy/639406966332270026714112114313373821099470487552/LOCK:
> Cannot allocate memory"}}},[{hashtree,new_segment_store,2,[{file,"src/
> hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/
> hashtree.erl"},{line,246}]},{yz_index_hashtree,do_new_tree,
> 3,[{file,"src/yz_index_hashtree.erl"},{line,377}]},{
> lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{yz_index_
> hashtree,init_trees,3,[{file,"src/yz_index_hashtree.erl"},{line,340}]},...]}}
> in yz_entropy_mgr:'-reload_hashtrees/3-fun-0-'/2 line 371 in
> gen_server:terminate/6 line 744
> 2017-01-18 15:35:39.102 [error] <0.2010.0> Supervisor yz_general_sup had
> child yz_entropy_mgr started with yz_entropy_mgr:start_link() at <0.3030.0>
> exit with reason no match of right hand value 
> {error,{{badmatch,{error,{db_open,"IO
> error: lock 
> /var/lib/riak/yz_anti_entropy/639406966332270026714112114313373821099470487552/LOCK:
> Cannot allocate memory"}}},[{hashtree,new_segment_store,2,[{file,"src/
> hashtree.erl"},{line,725}]},{hashtree,new,2,[{file,"src/
> hashtree.erl"},{line,246}]},{yz_index_hashtree,do_new_tree,
> 3,[{file,"src/yz_index_hashtree.erl"},{line,377}]},{
> lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{yz_index_
> hashtree,init_trees,3,[{file,"src/yz_index_hashtree.erl"},{line,340}]},...]}}
> in yz_entropy_mgr:'-reload_hashtrees/3-fun-0-'/2 line 371 in context
> child_terminated
>

Hi Damion,

Let me first state that AAE always uses leveldb, regardless of the storage
backend chosen for Riak KV data. Could you please state how much physical
memory your Riak nodes have, and what you have configured for
"leveldb.maximum_memory.percent" in "riak.conf"? Have you changed the
settings for "search.solr.jvm_options", in particular the memory allocated
to Solr?

As a general rule, leveldb should have at least 350MB of memory available
per partition, and performance has been shown to increase with up to 2GB
(2.5 GB when also using Search and AAE) per partition. Please check that
you have enough memory available in your system.
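
For reference, the relevant riak.conf settings and their default values look
like this (illustrative; adjust to your hardware):

    leveldb.maximum_memory.percent = 70
    search.solr.jvm_options = -d64 -Xms1g -Xmx1g -XX:+UseStringCache -XX:+UseCompressedOops

Lowering leveldb.maximum_memory.percent or the Solr heap size (-Xmx) leaves
more headroom for the AAE leveldb instances.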

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: I2 queries fail when few nodes are down

2017-01-05 Thread Magnus Kessler
On 4 January 2017 at 23:22, Tomi Takussaari <tomi.takussa...@gmail.com>
wrote:

> Hello Riak-users
>
> We have 9 node Riak-cluster, that we use to store user accounts.
>
> Some of the crucial data fields of user account are indexed using I2, so
> that we can do secondary index queries based on them.
>
> Today, we tested how our cluster performs when few nodes go down, and
> results were not very good.
>
> If more than 2 nodes go down, all I2 queries will start failing, returning
> HTTP 500, with "insufficient vnodes available" error. After nodes are up
> again, things start working again.
> Normal object CRUD operations worked fine.
>
> Is this to be expected behaviour ?
>
> Funny thing is, that we have other cluster, with same configuration but
> with 6 nodes, for other environment, and that also experiences same
> problems when more than 2 nodes go down, so it does not seem to have
> anything to do with percentage of nodes being down..
>
> Our ring size is 256, and current Riak version is 2.2.
>
> Both clusters were first created years ago, with Riak 1.4, if memory
> serves, and I believe we tested this same thing back then, and I2 queries
> did not stop working this easily then..
>
> Any help would be appreciated!
>
>
Hi Tomi,

For a cluster that uses the default replication factor (`n_val`) of 3, the
behaviour you observed is expected. Secondary index queries work on a
covering set of VNodes that includes one replica of each KV object. With
`n_val=3` the covering set can only be guaranteed if no more than 2 nodes
are offline at any given time. This behaviour has not changed since the 1.4
release.

As our documentation [0] states:

"Riak stores 3 replicas of all objects by default, although this can be
changed using bucket types, which manage buckets’ replication properties.
The system is capable of generating a full set of results from one third of
the system’s partitions as long as it chooses the right set of partitions.
The query is sent to each partition, the index data is read, and a list of
keys is generated and then sent back to the requesting node."

Other operations work fine due to Riak's ability to spin up fallback
partitions (VNodes on one of the remaining nodes) that will accept and
temporarily store data while the node that should own the data is down.
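
As an illustration, a 2i query such as the following (index name and value are
hypothetical) needs a full covering set of partitions to be reachable, while a
plain GET of a single key does not:

    curl 'http://127.0.0.1:8098/buckets/users/index/email_bin/foo@example.com'

If no complete covering set can be assembled, the query fails with the
"insufficient vnodes available" error you observed.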

Kind Regards,

Magnus

[0]:
http://docs.basho.com/riak/kv/2.2.0/using/reference/secondary-indexes/#how-it-works

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak-CS issues when Riak endpoint fails-over to new server

2017-01-04 Thread Magnus Kessler
Hi Toby,

As far as I know Riak CS has none of the more advanced retry capabilities
that Riak KV has. However, in the design of CS there seems to be an
assumption that a CS instance will talk to a co-located KV node on the same
host. To achieve high availability, in CS deployments HAProxy is often
deployed in front of the CS nodes. Could you please let me know if this is
an option for your setup?
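
For illustration, a minimal TCP front end for the protocol buffers port could
look like the haproxy.cfg sketch below; names, addresses, and the balancing
policy are placeholders to adapt:

    listen riak_pb
        bind 0.0.0.0:8087
        mode tcp
        balance leastconn
        server riak1 10.0.1.1:8087 check
        server riak2 10.0.1.2:8087 check
        server riak3 10.0.1.3:8087 check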

Kind Regards,

Magnus


On 4 January 2017 at 01:04, Toby Corkindale <t...@dryft.net> wrote:

> Hello all,
> Now that we're all back from the end-of-year holidays, I'd like to bump
> this question up.
> I feel like this has been a long-standing problem with Riak CS not
> handling dropped TCP connections.
> Last time the cause was haproxy dropping idle TCP connections after too
> long, but we solved that at the haproxy end.
>
> This time, it's harder -- we're failing over to a different Riak backend,
> so the TCP connections between Riak CS and Riak PBC *have* to go down, but
> Riak CS just doesn't handle it well at all.
>
> Is there a trick to configuring it better?
>
> Thanks
> Toby
>
>
> On Thu, 22 Dec 2016 at 16:48 Toby Corkindale <t...@dryft.net> wrote:
>
>> Hi,
>> We've been seeing some issues with Riak CS for a while in a specific
>> situation. Maybe you can advise if we're doing something wrong?
>>
>> Our setup has redundant haproxy instances in front of a cluster of riak
>> nodes, for both HTTP and PBC. The haproxy instances share a floating IP
>> address.
>> Only one node holds the IP, but if it goes down, another takes it up.
>>
>> Our Riak CS nodes are configured to talk to the haproxy on that floating
>> IP.
>>
>> The problem occurs if the floating IP moves from one haproxy to another.
>>
>> Suddenly we see a flurry of errors in riak-cs log files.
>>
>> This is presumably because it was holding open TCP connections, and the
>> new haproxy instance doesn't know anything about them, so they get TCP
>> RESET and shutdown.
>>
>> The problem is that riak-cs doesn't try to reconnect and retry
>> immediately, instead it just throws a 503 error back to the client. Who
>> then retries, but Riak-CS has a pool of a couple of hundred connections to
>> cycle through, all of which throw the error!
>>
>> Does this sound like it is a likely description of the fault?
>> Do you have any ways to mitigate this issue in Riak CS when using TCP
>> load balancing above Riak PBC?
>>
>> Toby
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Associating Riak CRDT Sets to Buckets / Keys via Erlang Client

2016-11-17 Thread Magnus Kessler
On 16 November 2016 at 17:40, Vikram Lalit <vikramla...@gmail.com> wrote:

> Hi - I am trying to leveraging CRDT sets to store chat messages that my
> distributed Riak infrastructure would store. Given the intrinsic
> conflict-resolution, I thought this might be more beneficial than me
> putting together a merge implementation based on the causal context.
>
> However, my data model requires each chat message to be associated to
> something like a post, hence I was thinking of having the post reference as
> the bucket, and chat references as keys in that bucket. With of course the
> bucket-type datasource equated to 'set'. Unfortunately though, from the
> documentation, I'm not able to ascertain how to associate a created set
> with an existing bucket and a new key reference if I use the Erlang client.
> This seems possible for other languages but not for Erlang, with the Basho
> doc mentioning  "%% Sets in the Erlang client are opaque data structures
> that collect operations as you mutate them. We will associate the data  
> structure
> with a bucket type, bucket, and key later on.".
>
> Subsequent code only seems to fetch the set from the bucket / key but
> where exactly is the allocation happening?
>
> {ok, SetX} = riakc_pb_socket:fetch_type(Pid, {<<"sets">>,<<"travel">>}, <<
> "cities">>).
>
> Perhaps I'm missing something, or is there a code snippet that I can
> leverage?
>
> Thanks!
>
>
Hi Vikram,

Please have a look at the following snippet, that shows the complete set of
operations used to update a CRDT set with the Erlang client:

update_crdt_set(Server, BType, Bucket, Key, Val) ->
    T = unicode:characters_to_binary(BType),
    B = unicode:characters_to_binary(Bucket),
    K = unicode:characters_to_binary(Key),

    {ok, Pid} = riakc_pb_socket:start_link(Server, 8087),

    Set = case riakc_pb_socket:fetch_type(Pid, {T, B}, K) of
              {ok, O} -> O;
              {error, {notfound, set}} -> riakc_set:new()
          end,

    Set1 = riakc_set:add_element(unicode:characters_to_binary(Val), Set),

    {ok, {set, Vals, _Adds, _Dels, _Ctx}} = riakc_pb_socket:update_type(
        Pid, {T, B}, K, riakc_set:to_op(Set1), [return_body]),
    Vals.

The set is updated with riakc_set:add_element/2, and sent back to the
server with riakc_pb_socket:update_type/5, which in turn takes an argument
returned from riakc_set:to_op/1.
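
For example, a call using the bucket type, bucket, and key from the
documentation example (the value is made up) would be:

    Vals = update_crdt_set("127.0.0.1", "sets", "travel", "cities", "Toronto").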

More information and samples can be found in the Riak documentation [0] and
the Riak Erlang client API docs [1][2].

Please let me know if this answered your question.

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.1.4/developing/data-types/sets/
[1]: https://basho.github.io/riak-erlang-client/riakc_set.html
[2]:
https://basho.github.io/riak-erlang-client/riakc_pb_socket.html#update_type-5

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: reads/writes during node replacement

2016-11-14 Thread Magnus Kessler
On 12 November 2016 at 00:08, Johnny Tan <johnnyd...@gmail.com> wrote:

> When doing a node replace (http://docs.basho.com/riak/1.
> 4.12/ops/running/nodes/replacing/), after commit-ing the plan, how does
> the cluster handle reads/writes? Do I include the new node in my app's
> config as soon as I commit, and let riak internally handle which node(s)
> will do the reads/writes? Or do I wait until the ringready on the new node
> before being able to do reads/writes to it?
>
> johnny
>
>
Hi Johnny,

As soon as a node has been joined to the cluster it is capable of taking on
requests. `riak-admin ringready` returns true after a join or leave
operation when the new ring state has been communicated successfully to all
nodes in the cluster.

During a replacement operation, the leaving node will hand off [0] all its
partitions to the joining node. Both nodes can handle requests during this
phase and store data in the partitions they own. Once the leaving node has
handed off all its partitions, it will automatically shut down. Please keep
this in mind when configuring your clients or load balancers. Clients
should deal with nodes being temporarily or permanently unavailable.

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.1.4/using/reference/handoff/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: throughput test & connection fail

2016-11-14 Thread Magnus Kessler
On 12 November 2016 at 03:00, Jing Liu <jingliu...@gmail.com> wrote:

> Hi,
>
> When I try to simply test the throughput of Riak in a setting that
> just starts a single node and uses two clients to issue requests, I
> get connection refused once the number of client threads sending GET
> requests exceeds about 400. Actually the server crashed. Why is it and
> how can I fix it?
>
> Thanks
> J.
>

Hi Jing,

You do not specify exactly how you perform your throughput test, so let me
first ask a few questions. How many concurrent requests are active against
the server at any given time? Can you clarify what you mean by "the server
crashed"? Did the riak process actually terminate?

What hardware environment does Riak run on? Please provide some information
about CPU, memory and most importantly your disk IO subsystem.

How big are the objects that you send to / request from Riak?

Even on moderate hardware, a single node should be able to serve hundreds
of requests per second. However, every system can be pushed to the limits
of what the hardware can support, and Riak is no exception. Depending on
the bottlenecks, an overload can manifest itself in a multitude of
different ways.

For load tests, I recommend starting with a small, sustainable load and
ramp it up to establish which subsystem is the bottleneck. Please monitor
the OS and Riak's performance carefully during the test. Riak exposes many
performance metrics as JSON through its HTTP stats endpoint
(http://<node>:8098/stats).
Consider ingesting these into your favourite monitoring solution. OS
metrics like CPU, network and IO usage should also be collected and
graphed. One easy open source solution that's gaining traction recently is
netdata (https://github.com/firehol/netdata).
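
For example, to take a quick look at the GET/PUT counters and FSM timings on a
default local node:

    curl -s http://127.0.0.1:8098/stats | python -m json.tool | \
        grep -E 'node_gets|node_puts|node_get_fsm_time'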

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Monitor RIAK process with Supervisord

2016-10-31 Thread Magnus Kessler
On 29 October 2016 at 19:59, vmalhotra <varun.malho...@minjar.com> wrote:

> We run 8 nodes RIAK cluster in our Prod environment. Lot of time, RIAK
> process stops and we also noticed out of memory issues. Typically, we run
> restart the affected node to recover from the issue. I thought of using
> Supervisor to control the RIAK processes so the idea is if any of the
> process crash SupervisorD daemon will automatically restart that process on
> a crash.
>
> Wanted to know what you guys think? Can it cause any other issue or it
> should work fine?
>

Hi Varun,

I would recommend against blindly restarting Riak nodes, in particular if
these were shut down uncleanly, as may happen in out of memory situations.
There is a risk that an unclean shutdown leaves behind corrupted files and
that a subsequent restart is unsuccessful.

You should instead investigate why Riak stops being responsive.

Please have a look at the documentation, in particular memory
requirements[0], and OS tuning[1].

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.1.4/setup/planning/cluster-capacity/
[1]: http://docs.basho.com/riak/kv/2.1.4/using/performance/




>
> Thanks in advance.
>
>
>
> --
> View this message in context: http://riak-users.197444.n3.
> nabble.com/Monitor-RIAK-process-with-Supervisord-tp4034655.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak_explorer stopped working after turn on security on cluster

2016-10-21 Thread Magnus Kessler
On 21 October 2016 at 03:45, AJAX DoneBy Jack <ajaxd...@gmail.com> wrote:

> Hello Basho,
>
> Today I turned on security on my cluster but riak_explorer stopped working
> after that.
> Anything I need to check on riak_explorer to make it works again?
>
> Thanks,
> Ajax
>
>
Hi Ajax,

After turning on Riak security, all clients must communicate with Riak over
a TLS secured connection, and must also send valid security credentials
with each request. AFAIK, this functionality has not yet been added to Riak
Explorer. There is an open github issue to add this functionality [0].

Kind Regards,

Magnus

[0]: https://github.com/basho-labs/riak_explorer/issues/91

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread Magnus Kessler
On 14 October 2016 at 20:18, AJAX DoneBy Jack <ajaxd...@gmail.com> wrote:

> Hello Basho,
>
> I am very new on Riak Search, I know can add {!dismax}before query string
> to use it, but don't know how to specify qf or other dismax related
> parameters in Riak Java Client. Could you advise?
>
> Thanks,
> Ajax
>

Hi Ajax,

The Riak Java Client, like most other Riak clients, uses the Protocol Buffer
API to communicate with Riak. Yokozuna's implementation of the Protocol
Buffer API allows only for a small set of query parameters [0], which have
been chosen to support the standard query parser. As such, there is
currently no easy way to use the extended set of query parameters through
the Java API.

However, you may have better luck if you talk directly to the HTTP API, exposed
at http://<node>:8098/search/query/<index>. This will accept all
queries supported by Solr 4.7. Please be aware, though, that some query
results that require accumulating data from all Solr nodes (such as stats
queries), may not work as expected. Yokozuna constructs a new coverage
query very frequently, and the actual results returned depend on which
nodes are chosen in this query.
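
For example, a dismax query with a qf parameter could be issued like this,
using the defType parameter rather than the {!dismax} prefix (the index name
and fields are hypothetical; the parameters are passed through to Solr):

    curl 'http://127.0.0.1:8098/search/query/my_index?wt=json&defType=dismax&qf=title_s%5E2%20body_t&q=caremad'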

Kind Regards,

Magnus

[0]:
https://github.com/basho/yokozuna/blob/develop/src/yz_pb_search.erl#L144-L150



>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak claimant

2016-10-17 Thread Magnus Kessler
On 12 October 2016 at 19:07, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:

> Does the riak claimant node have higher load than the other nodes?
>

Hi Travis,

The role of the claimant node is simply to coordinate certain cluster-related
operations that involve changes to the ring, such as nodes joining
or leaving the cluster. Otherwise this node has no special role during
operations that manipulate data stored in the cluster.
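
You can check which node currently acts as claimant with riak-admin; the
output below is illustrative:

    $ riak-admin ring-status
    ================================== Claimant ===================================
    Claimant:  'riak@node1.example.com'
    Status:     up
    Ring Ready: true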

Kind Regards,

Magnus



>
>
> Travis Kirstine
>
> *Project Supervisor *
>
> 140 Renfrew Drive, Suite 100
>
> Markham, Ontario L3R 6B3 Canada
>
> <http://www.firstbasesolutions.com/>
> T: 905.477.3600 Ext 267 | C: 647
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Delete bucket type

2016-09-29 Thread Magnus Kessler
On 28 September 2016 at 17:59, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:

> Thank you for your quick reply, Magnus! We’re considering using bucket
> type to support multi-tenancy in our system. Hence, all objects stored
> within the namespace of the bucket type and bucket type need to be removed
> once the client has decided to opt-out.
>
>
>
> Thanks
>
>
>
> -Kyle-
>
>
>

Hi Kyle,

Riak uses bucket-types and buckets primarily as a name space. In a default
configuration, the bucket type and bucket names are hashed together with
the key, and this hash determines the location of a given object on the
ring. This concept is known as consistent hashing and ensures that there
are no hot spots and that data is evenly distributed across the partitions.

Riak does *NOT* keep data stored under different bucket types or buckets
separate from each other. Therefore, in order to delete all data stored
under a given bucket type or bucket, it is necessary to delete each
matching object individually. Furthermore, Riak does not keep an index of
objects stored in a bucket type or bucket.

It is possible to obtain lists of these objects via key-listing or
mapreduce. However, these are very expensive operations in Riak and should
be avoided in a high availability production cluster.
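
For completeness, a streaming key list over HTTP looks like this (the bucket
type and bucket names are hypothetical; again, avoid this on a busy cluster):

    curl 'http://127.0.0.1:8098/types/tenant_a/buckets/chats/keys?keys=stream'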

You should also be aware of how deletion in Riak works. When an object is
deleted, a tombstone is placed into the database (effectively an empty
object with some additional metadata). Tombstones are eventually reaped
during merging (bitcask) or compaction (leveldb) phases of the backend
storage. However, there is no guarantee when this actually happens, and in
particular with leveldb it can take a long time for any particular object
to be actually deleted from disk.

Please let me know if you have any additional questions regarding this
topic.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Delete bucket type

2016-09-28 Thread Magnus Kessler
On 27 September 2016 at 20:50, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:

> Hi all,
>
>
>
> Is deleting bucket type possible in version 2.1.4? If not, is there any
> workaround or available script/code that we can do this in a production
> environment without too much performance impact?
>
>
>
> Thanks
>
>
>
> -Kyle-
>
>
>

Hi Kyle,

There is currently no option to delete bucket types once they have been
created. Are you trying to delete a bucket type and any objects stored
within the namespace of the bucket type, or just remove a previously
configured bucket type?

In Riak, bucket types have very little operational overhead. They get
stored in the cluster meta data and take up a small amount of disk space
there. Bucket types are not gossiped around the ring on a regular basis,
though, and therefore have not got the negative impact a large number of
custom buckets would have. With Riak-2.x we generally recommend storing any
configuration in bucket types, rather than creating custom buckets, even if
there is only one bucket using that specific configuration.


Kind Regards,

Magnus




-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-13 Thread Magnus Kessler
On 11 September 2016 at 02:27, Weixi Yen <we...@blitzchat.com> wrote:

> Sort of a unique case, my app was under heavy stress and one of my riak
> nodes got backed up (other 4 nodes were fine).
>
> I think this caused Riak.update to create an extra index in Solr for the
> same object when users began running .update on that object.
>

Hi Weixi,

Can you please confirm what you mean by "extra index"? Do you mean that an
object was indexed more than once and gets counted / returned by Solr
queries? If that's the case, can you please let me know how you query Solr?



>
> I have basically 2 questions:
>
> 1) Is what I'm describing something that is possible?
>

Riak/Yokozuna indexes each replica of a Riak object into Solr. With the
default n_val of 3, there will be 3 copies of any given object indexed in
Solr. Depending on the version of Riak you are using, it's also possible
that siblings of Riak objects get indexed independently. So yes, it is
possible to find several additional objects in Solr for each KV object.
When querying Solr through Riak/Yokozuna, the internal queries are
structured in a way that only one replica is returned. Querying Solr nodes
directly will typically lack these filters and may return more than one
copy of an object.


>
> 2) Is there a way to tell Solr to re-index one single item and get rid of
> all other indexes of that item?
>

You can perform a GET/PUT cycle through Riak KV on an object. This will
result in n_val copies of the object across the Solr instances, replacing
previous versions. It is not possible to have just one copy, unless
the n_val for the object is exactly 1. AFAIK, there have been some fixes to
Yokozuna in 2.0.7 and the upcoming 2.2 release that deal better with
indexed siblings. Discrepancies between KV objects and their Solr
counterparts should be detected and resolved by active anti-entropy (AAE).
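
A minimal sketch of such a read/rewrite cycle with the Erlang client, where
the bucket and key are hypothetical:

    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Obj} = riakc_pb_socket:get(Pid, <<"my_bucket">>, <<"my_key">>),
    ok = riakc_pb_socket:put(Pid, Obj).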


>
> Considering RiakTS to resolve these issues long term, but have to stick
> with Solr for at least the next 3 months, would appreciate any insight into
> how to solve this duplicate index problem.
>
> Thanks,
>
> Weixi
>
>
Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: speeding up bulk loading

2016-08-31 Thread Magnus Kessler
On 26 August 2016 at 22:20, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:

> Is there any way to speed up bulk loading?  I wondering if I should be
> tweeking the erlang, aae or other config options?
>
>
>
>
>
Hi Travis,

Excuse the late reply; your message had been stuck in the moderation queue.
Please consider subscribing to this list.

Without knowing more about how you perform bulk uploads, it's difficult to
recommend any changes. Are you using the HTTP REST API or one of the client
libraries, which use protocol buffers by default? What concerns do you have
about the upload performance? Please let us know a bit more about your
setup.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Migrating to riak 2.1.4, selecting a specific multi-backend from the Java Client

2016-08-10 Thread Magnus Kessler
On 9 August 2016 at 15:42, Henning Verbeek <hankipa...@gmail.com> wrote:

> We're slowly migrating our application from Riak KV 1.4 to Riak KV
> 2.1.4, and at the same time riak-java-client 1.4.x to riak-java-client
> 2.0.6. We are doing this by adding seven new Riak KV 2.1 nodes to the
> ring, and later removing the seven Riak KV 1.4 nodes from the ring.
> For a few days, the cluster will consist of a mix.
>

Hi Henning,

This plan will generate a lot of unnecessary data shuffling in the entire
cluster. Each time a node is added or removed from the ring, the claim
algorithm will re-assign partitions to/from all nodes. Please consider
updating the existing nodes in place. For more information see the
documentation[0]. Please let us know if any of the documentation is
unclear, so that we can make it better.


> [...]
>

Kind Regards,

Magnus


[0]: http://docs.basho.com/riak/kv/2.1.4/setup/upgrading/cluster/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak high IO usage

2016-07-16 Thread Magnus Kessler
Hi "!"

Could you please sign with a name, so that we know how to address you?


On 15 July 2016 at 07:45, ! <linwater...@qq.com> wrote:

> hi all,
>   there is a problem confusing me: Riak produces continuous high IO
>

[graph showing IO spikes reaching 100% for minutes at a time removed]

Unfortunately you don't give us a lot of information to work with. The
graphs show spikes in IO reaching 100%, but I would not call that
continuous, unless you mean that whenever there is traffic it reaches 100%
IO and stays that way until traffic drops again.

It is quite possible to max out the available IO bandwidth when PUTting or
GETting large amounts of data from Riak. What workloads are you running
against your Riak installation? How are your storage volumes configured?

We always recommend to use the fastest storage solutions you can afford,
and to run tests with workloads that are as similar as possible to the
expected production workload of your application.



>   when the IO comes, I observe a phenomenon:
>

[image of console showing file tmp_agent_at5SIa removed]

>    what is tmp_agent_ ? what is the relationship between it and the
> high IO ?
>

There is no mention of tmp_agent anywhere in Riak's code base. Please
check your OS installation for possible causes of this file appearing. You
may want to remove the file from /var/lib/riak, as it has no place there.

Please give us some more information, if you'd like to pursue this
investigation further.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak TS error

2016-06-30 Thread Magnus Kessler
On 30 June 2016 at 14:02, Humberto Rodriguez <rhumber...@outlook.com> wrote:

> Hi, has anyone got this error: "vm.args needs to have a -name parameter."?
> I was trying to test the latest version (1.3.0) of Riak TS, but every time I
> try to start it, it fails and shows me that message.
>
> I am using OS X El Capitan
>
> Thanks,
> Humberto
>


Hi Humberto,

Please check that your riak.conf file contains a line like

nodename = riak@ip.addr

or

nodename = riak@fqdn

This line should contain a distributed Erlang node name [0]. You can chose
any valid node name.

Kind Regards,

Magnus

[0]: http://erlang.org/doc/reference_manual/distributed.html

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Control

2016-06-08 Thread Magnus Kessler
On 8 June 2016 at 00:40, satish bhatti <extr...@gmail.com> wrote:

> I am trying to run Riak Control for my Riak 1.4.12 setup on OSX.  I
> followed the instructions here:
>
> http://docs.basho.com/riak/1.4.12/ops/advanced/riak-control/
>
> for setting it up. When I try to access:
>
> https://localhost:8069/admin
>
> it generates errors in the logs, which are attached. What am I doing wrong?
>
> Satish
>
>
Hi Satish,

Your logs show these errors:

2016-06-07 16:36:55.824 [info] <0.354.0>@riak_core:wait_for_service:464
> Wait complete for service riak_kv (9 seconds)

2016-06-07 16:37:00.346 [error] <0.1952.0> gen_fsm <0.1952.0> in state
> hello terminated with reason: no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174

2016-06-07 16:37:00.346 [error] <0.1952.0> CRASH REPORT Process <0.1952.0>
> with 0 neighbours exited with reason: no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174 in
> gen_fsm:terminate/7 line 611
> 2016-06-07 16:37:00.346 [error] <0.86.0> Supervisor ssl_connection_sup had
> child undefined started with {ssl_connection,start_link,undefined} at
> <0.1952.0> exit with reason no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174 in context
> child_terminated
> 2016-06-07 16:37:00.346 [error] <0.1954.0> gen_fsm <0.1954.0> in state
> hello terminated with reason: no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174
> 2016-06-07 16:37:00.346 [error] <0.1954.0> CRASH REPORT Process <0.1954.0>
> with 0 neighbours exited with reason: no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174 in
> gen_fsm:terminate/7 line 611
> 2016-06-07 16:37:00.346 [error] <0.86.0> Supervisor ssl_connection_sup had
> child undefined started with {ssl_connection,start_link,undefined} at
> <0.1954.0> exit with reason no function clause matching
> ssl_certificate:signature_type({1,2,840,113549,1,1,11}) line 174 in context
> child_terminated
> 2016-06-07 16:37:00.347 [error] <0.198.0> application: mochiweb, "Accept
> failed error", "{'EXIT',\n{{function_clause,\n
> [{ssl_certificate,signature_type,\n
>  [{1,2,840,113549,1,1,11}],\n
>  [{file,\"ssl_certificate.erl\"},{line,174}]},\n
>  {ssl_cipher,filter,2,[{file,\"ssl_cipher.erl\"},{line,401}]},\n
>  {ssl_handshake,select_session,8,\n
>  [{file,\"ssl_handshake.erl\"},{line,593}]},\n
>  {ssl_handshake,hello,4,[{file,\"ssl_handshake.erl\"},{line,152}]},\n
>{ssl_connection,hello,2,[{file,\"ssl_connection.erl\"},{line,413}]},\n
>{ssl_connection,next_state,4,\n
>  [{file,\"ssl_connection.erl\"},{line,1929}]},\n
>  {gen_fsm,handle_msg,7,[{file,\"gen_fsm.erl\"},{line,494}]},\n
>  {proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,227}]}]},\n
> {gen_fsm,sync_send_all_state_event,[<0.1954.0>,start,infinity]}}}"
> 2016-06-07 16:37:00.347 [error] <0.198.0> CRASH REPORT Process <0.198.0>
> with 0 neighbours exited with reason: {error,accept_failed} in
> mochiweb_acceptor:init/3 line 34
> 2016-06-07 16:37:00.347 [error] <0.196.0>
> {mochiweb_socket_server,310,{acceptor_error,{error,accept_failed}}}
> 2016-06-07 16:37:00.347 [error] <0.197.0> application: mochiweb, "Accept
> failed error", "{'EXIT',\n{{function_clause,\n
> [{ssl_certificate,signature_type,\n
>  [{1,2,840,113549,1,1,11}],\n
>  [{file,\"ssl_certificate.erl\"},{line,174}]},\n
>  {ssl_cipher,filter,2,[{file,\"ssl_cipher.erl\"},{line,401}]},\n
>  {ssl_handshake,select_session,8,\n
>  [{file,\"ssl_handshake.erl\"},{line,593}]},\n
>  {ssl_handshake,hello,4,[{file,\"ssl_handshake.erl\"},{line,152}]},\n
>{ssl_connection,hello,2,[{file,\"ssl_connection.erl\"},{line,413}]},\n
>{ssl_connection,next_state,4,\n
>  [{file,\"ssl_connection.erl\"},{line,1929}]},\n
>  {gen_fsm,handle_msg,7,[{file,\"gen_fsm.erl\"},{line,494}]},\n
>  {proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,227}]}]},\n
> {gen_fsm,sync_send_all_state_event,[<0.1952.0>,start,infinity]}}}"


Note the "no function clause matching ssl_certificate:signature_type" error
message. This is most likely caused by an incompatibility between the quite
old SSL implementation in OTP-15 (which is used by Riak 1.4.12) and the
certificate you generated. How did you generate your certificate? If it is
using SHA-256 as the signature algorithm, could you please try to generate
a cer

Re: Compilation issues with OTP 18.3: "Failed to load erlang_js_drv.so"

2016-05-16 Thread Magnus Kessler
On 15 May 2016 at 23:38, Humberto Rodríguez Avila <rhumber...@gmail.com>
wrote:

> Hello, I have been trying to compile erlang_js with OTP 18.3, but allways
> I get this message "Failed to load erlang_js_drv.so". I tried in Ubunto
> 14.0.04 and OSX 10.11.4.
>
> Here you can find the full log of my error:
> https://gist.github.com/rhumbertgz/ee0bf432edfa89ffa0a47405f3250fcd
>
> Any suggestion?
> Thanks in advance
>
>
Hi Humberto,

I saw that you also opened a github issue about this (
https://github.com/basho/erlang_js/issues/61), and that you were pointed to
this pull request (https://github.com/basho/erlang_js/pull/58) that may fix
the issue for you.

The development engineers are working on an OTP-18 compatible code base and
a release planned for later this year should work with OTP-18. According to
the program managers this will also then include OTP-18 compatible versions
of Erlang based client libraries.

Please bear with us while the development work is being done.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Getting key of the map in erlang

2016-04-29 Thread Magnus Kessler
On 28 April 2016 at 17:02, Sanket Agrawal <sanket.agra...@gmail.com> wrote:

> Not sure if this has been asked before - given a map, how does one go
> about retrieving the key of the map?
>
> For example, in Riak example for map
> <http://docs.basho.com/riak/kv/2.0.0/developing/data-types/#maps>, a map
> is created with "ahmed_info" key.
>
> If we were to write a commit hook in Erlang where we want to do some kind
> of action based on the key, it will be helpful to have a way to extract the
> key.
>
> I looked in basho erlang client documentation here for map, but don't see
> any function to extract the key. Perhaps we have to do pattern match to
> extract the key?
> http://basho.github.io/riak-erlang-client/riakc_map.html
>
> I also see erlang libraries under riak installation (one of them
> "riak_object" is called in "commit hook" example in documentation) - I can
> check there as well if there is online documentation somewhere for them.
>
> I am thinking of storing user info as immutable maps, something like
> __, and have an erlang commit hook that updates
> __ map with the latest entry. For that, we need to
> extract the map key.
>
>
Hi Sanket,

The actual association between an Erlang CRDT object and a bucket-type,
bucket, and key has to be done separately from creating the object itself.
Please have a look at the "counters" example on the same page.
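
In short, the map is created and mutated locally, and the bucket type, bucket,
and key only come into play when the object is written. A sketch, with all
names hypothetical:

    Map = riakc_map:update({<<"name">>, register},
                           fun(R) -> riakc_register:set(<<"Ahmed">>, R) end,
                           riakc_map:new()),
    ok = riakc_pb_socket:update_type(Pid, {<<"maps">>, <<"customers">>},
                                     <<"ahmed_info">>, riakc_map:to_op(Map)).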

I agree that this could be documented better and have mentioned your
question to our documentations team.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RIAK Docs are messed up ?

2016-04-18 Thread Magnus Kessler
Hi Surajit,

Thank you for making us aware of this missing document. I have forwarded
your report to the documentation team. We are in the process of upgrading
the documentation site, and unfortunately have hit a few snags on the way.
Please bear with us, while we are fixing the site.

Kind Regards,

Magnus


On 16 April 2016 at 12:01, Surajit Ray <sura...@edusynapse.com> wrote:

> Hi,
>
> All the riak docs are seemed to be messed up.
>
> http://docs.basho.com/riak/kv/2.1.4/dev/using/keyfilters/
> Getting file not found
>
> --
> Surajit Ray
> CEO and Founder
> EduSynapse Pvt Ltd
> Phone : +91 9871838911
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Yokozuna inconsistent search results

2016-03-24 Thread Magnus Kessler
Hi Oleksiy,

On 24 March 2016 at 14:55, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> Hi Magnus,
>
> Thanks! I guess I will go with index deletion because I've already tried
> expiring the trees before.
>
> Do I need to delete AAE data somehow or removing the index is enough?
>

If you expire the AAE trees with the commands I posted earlier, there
should be no need to remove the AAE data directories manually.

I hope this works for you. Please monitor the tree rebuild and exchanges
with `riak-admin search aae-status` for the next few days. In particular
the exchanges should be ongoing on a continuous basis once all trees have
been rebuilt. If they don't, please let me know. At that point you should
also gather `riak-debug` output from all nodes before it gets rotated out
after 5 days by default.

Kind Regards,

Magnus


>
> On 24 March 2016 at 13:28, Magnus Kessler <mkess...@basho.com> wrote:
>
>> Hi Oleksiy,
>>
>> As a first step, I suggest to simply expire the Yokozuna AAE trees again
>> if the output of `riak-admin search aae-status` still suggests that no
>> recent exchanges have taken place. To do this, run `riak attach` on one
>> node and then
>>
>> riak_core_util:rpc_every_member_ann(yz_entropy_mgr, expire_trees, [], 5000).
>>
>>
>> Exit from the riak console with `Ctrl+G q`.
>>
>> Depending on your settings and amount of data the full index should be
>> rebuilt within the next 2.5 days (for a cluster with ring size 128 and
>> default settings). You can monitor the progress with `riak-admin search
>> aae-status` and also in the logs, which should have messages along the
>> lines of
>>
>> 2016-03-24 10:28:25.372 [info]
>> <0.4647.6477>@yz_exchange_fsm:key_exchange:179 Repaired 83055 keys during
>> active anti-entropy exchange of partition
>> 1210306043414653979137426502093171875652569137152 for preflist
>> {1164634117248063262943561351070788031288321245184,3}
>>
>>
>> Re-indexing can put additional strain on the cluster and may cause
>> elevated latency on a cluster already under heavy load. Please monitor the
>> response times while the cluster is re-indexing data.
>>
>> If the cluster load allows it, you can force more rapid re-indexing by
>> changing a few parameters. Again at the `riak attach` console, run
>>
>> riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, 
>> anti_entropy_build_limit, {4, 6}], 5000).
>> riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, 
>> anti_entropy_concurrency, 5], 5000).
>>
>> This will allow up to 4 trees per node to be built/exchanged per hour,
>> with up to 5 concurrent exchanges throughout the cluster. To return back to
>> the default settings, use
>>
>> riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, 
>> anti_entropy_build_limit, {1, 36}], 5000).
>> riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, 
>> anti_entropy_concurrency, 2], 5000).
>>
>>
>> If the cluster still doesn't make any progress with automatically
>> re-indexing data, the next steps are pretty much what you already
>> suggested, to drop the existing index and re-index from scratch. I'm
>> assuming that losing the indexes temporarily is acceptable to you at this
>> point.
>>
>> Using any client API that supports RpbYokozunaIndexDeleteReq, you can
>> drop the index from all Solr instances, losing any data stored there
>> immediately. Next, you'll have to re-create the index. I have tried this
>> with the python API, where I deleted the index and re-created it with the
>> same already uploaded schema:
>>
>> from riak import RiakClient
>>
>> c = RiakClient()
>> c.delete_search_index('my_index')
>> c.create_search_index('my_index', 'my_schema')
>>
>> Note that simply deleting the index does not remove its existing
>> association with any bucket or bucket type. Any PUT operations on these
>> buckets will lead to indexing failures being logged until the index has
>> been recreated. However, this also means that no separate operation in
>> `riak-admin` is required to associate the newly recreated index with the
>> buckets again.
>>
>> After recreating the index expire the trees as explained previously.
>>
>> Let us know if this solves your issue.
>>
>> Kind Regards,
>>
>> Magnus
>>
>>
>> On 24 March 2016 at 08:44, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>>
>>> This is how things are looking after two weeks:
>>>
>>> - there are no solr indexing issues for a long period (2 weeks)
>>> [...]

Re: Yokozuna inconsistent search results

2016-03-24 Thread Magnus Kessler
Hi Oleksiy,

As a first step, I suggest to simply expire the Yokozuna AAE trees again if
the output of `riak-admin search aae-status` still suggests that no recent
exchanges have taken place. To do this, run `riak attach` on one node and
then

riak_core_util:rpc_every_member_ann(yz_entropy_mgr, expire_trees, [], 5000).


Exit from the riak console with `Ctrl+G q`.

Depending on your settings and amount of data the full index should be
rebuilt within the next 2.5 days (for a cluster with ring size 128 and
default settings). You can monitor the progress with `riak-admin search
aae-status` and also in the logs, which should have messages along the
lines of

2016-03-24 10:28:25.372 [info]
<0.4647.6477>@yz_exchange_fsm:key_exchange:179 Repaired 83055 keys during
active anti-entropy exchange of partition
1210306043414653979137426502093171875652569137152 for preflist
{1164634117248063262943561351070788031288321245184,3}


Re-indexing can put additional strain on the cluster and may cause elevated
latency on a cluster already under heavy load. Please monitor the response
times while the cluster is re-indexing data.

If the cluster load allows it, you can force more rapid re-indexing by
changing a few parameters. Again at the `riak attach` console, run

riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna,
anti_entropy_build_limit, {4, 6}], 5000).
riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna,
anti_entropy_concurrency, 5], 5000).

This will allow up to 4 trees per node to be built/exchanged per hour, with
up to 5 concurrent exchanges throughout the cluster. To return back to the
default settings, use

riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna,
anti_entropy_build_limit, {1, 36}], 5000).
riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna,
anti_entropy_concurrency, 2], 5000).


If the cluster still doesn't make any progress with automatically
re-indexing data, the next steps are pretty much what you already
suggested, to drop the existing index and re-index from scratch. I'm
assuming that losing the indexes temporarily is acceptable to you at this
point.

Using any client API that supports RpbYokozunaIndexDeleteReq, you can drop
the index from all Solr instances, losing any data stored there
immediately. Next, you'll have to re-create the index. I have tried this
with the python API, where I deleted the index and re-created it with the
same already uploaded schema:

from riak import RiakClient

c = RiakClient()
c.delete_search_index('my_index')
c.create_search_index('my_index', 'my_schema')

Note that simply deleting the index does not remove its existing
association with any bucket or bucket type. Any PUT operations on these
buckets will lead to indexing failures being logged until the index has
been recreated. However, this also means that no separate operation in
`riak-admin` is required to associate the newly recreated index with the
buckets again.

After recreating the index expire the trees as explained previously.

Let us know if this solves your issue.

Kind Regards,

Magnus


On 24 March 2016 at 08:44, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> This is how things are looking after two weeks:
>
> - there are no solr indexing issues for a long period (2 weeks)
> - there are no yokozuna errors at all for 2 weeks
> - there is an index with all empty schema, just _yz_* fields, objects
> stored in a bucket(s) are binary and so are not analysed by yokozuna
> - same yokozuna query repeated gives different number for num_found,
> typically the difference between real number of keys in a bucket and
> num_found is about 25%
> - number of keys repaired by AAE (according to logs) is about 1-2 per few
> hours (number of keys "missing" in index is close to 1,000,000)
>
> Should I now try to delete the index and yokozuna AAE data and wait
> another 2 weeks? If yes - how should I delete the index and AAE data?
> Will RpbYokozunaIndexDeleteReq be enough?
>
>
>
-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Yokozuna inconsistent search results

2016-03-14 Thread Magnus Kessler
Hi Oleksiy,

Would you mind sharing the output of 'riak-debug' from all nodes? You can
upload the files to a location of your choice and PM me the details. As far
as we are aware we have fixed all previously existing issues that would
prevent a full YZ AAE tree rebuild from succeeding when non-indexable data
was present. However, the logs may still contain hints that may help us to
identify the root cause of your issue.

Many Thanks,

Magnus

On 14 March 2016 at 09:45, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> I would like to continue as this seems to me like a serious problem, on a
> bucket with 700,000 keys the difference in num_found can be up to 200,000!
> And that's a search index that doesn't index, analyse or store ANY of the
> document fields, the schema has only required _yz_* fields and nothing else.
>
> I have tried deleting the search index (with PBC call) and tried expiring
> AAE trees. Nothing helps. I can't get consistent search results from
> Yokozuna.
>
> Please help.
>
> On 11 March 2016 at 18:18, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>
>> Hi Fred,
>>
>> This is production environment but I can delete the index. However this
>> index covers ~3500 buckets and there are probably 10,000,000 keys.
>>
>> The index was created after the buckets. The schema for the index is just
>> the basic required fields (_yz_*) and nothing else.
>>
>> Yes, I'm willing to resolve this. When you say to delete chunks_index, do
>> you mean the simple RpbYokozunaIndexDeleteReq or something else is required?
>>
>> Thanks!
>>
>>
>>
>>
>> On 11 March 2016 at 17:08, Fred Dushin <fdus...@basho.com> wrote:
>>
>>> Hi Oleksiy,
>>>
>>> This is definitely pointing to an issue either in the coverage plan
>>> (which determines the distributed query you are seeing) or in the data you
>>> have in Solr.  I am wondering if it is possible that you have some data in
>>> Solr that is causing the rebuild of the YZ AAE tree to incorrectly
>>> represent what is actually stored in Solr.
>>>
>>> What you did was to manually expire the YZ (Riak Search) AAE trees,
>>> which caused them to rebuild from the entropy data stored in Solr.  Another
>>> thing we could try (if you are willing) would be to delete the
>>> 'chunks_index' data in Solr (as well as the Yokozuna AAE data), and then
>>> let AAE repair the missing data.  What Riak will essentially do is compare
>>> the KV hash trees with the YZ hash trees (which will be empty), too that it
>>> is missing in Solr, and add it to Solr, as a result.  This would
>>> effectively result in re-indexing all of your data, but we are only talking
>>> about ~30k entries (times 3, presumably, if your n_val is 3), so that
>>> shouldn't take too much time, I wouldn't think.  There is even some
>>> configuration you can use to accelerate this process, if necessary.
>>>
>>> Is that something you would be willing to try?  It would result in down
>>> time on query.  Is this production data or a test environment?
>>>
>>> -Fred
>>>
>>> --
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using $bucket index for listing keys

2016-03-11 Thread Magnus Kessler
Hi Oleksiy,

As Russell pointed out, 2i queries, including $bucket queries, are only
supported when the backend supports ordered keys. This is currently not the
case with bitcask.

It appears, though, that you have discovered a bug where the multi-backend
module accepts the query despite the fact that the actually configured
backend for the bucket(-type) cannot support the query. I'll make the
engineering team aware of this.

At this point in time I can only recommend that you do not use the $bucket
query with your configuration and use an alternative, such as Solr-based
search instead.
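
For illustration, a rough Solr-based equivalent through your existing
'chunks_index' (the _yz_rb and _yz_rk fields are Yokozuna built-ins;
'BUCKET_NAME' is a placeholder):

curl "http://localhost:8098/search/query/chunks_index?q=_yz_rb:BUCKET_NAME&fl=_yz_rk&rows=100&wt=json"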

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using $bucket index for listing keys

2016-03-11 Thread Magnus Kessler
Hi Oleksiy,

Could you please share the bucket or bucket-type properties for that small
bucket? If you open an issue on github, please add the properties there,
too.
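
For reference, they can be fetched over HTTP with the type and bucket names
from your earlier message:

curl "http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/props"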

Many Thanks,

On 11 March 2016 at 13:46, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> I got the recursive behavior with other, larger buckets but I had no
> logging so when I enabled debugging this was the first bucket to replicate
> the problem. I have a lot of buckets of the same type, some have many
> thousands keys some are small. My task is to iterate the keys (once only)
> of all buckets. Either with 2i or with Yokozuna.
> On Fri, Mar 11, 2016 at 15:32 Russell Brown <russell.br...@me.com> wrote:
>
>> Not the answer, but why pagination for 200 keys? Why the cost of doing the
>> query 20 times vs once?
>>
>> On 11 Mar 2016, at 13:28, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>>
>> > Unfortunately there are just 200 keys in that bucket. So with larger
>> max_results I just get all the keys without continuation. I'll try to
>> replicate this with a bigger bucket.
>> > On Fri, Mar 11, 2016 at 15:21 Russell Brown <russell.br...@me.com>
>> wrote:
>> > That seems very wrong. Can you do me a favour and try with a larger
>> max_results? I remember a bug with small result sets; I thought it was
>> fixed. I’m looking into the past issues, but can you try “max_results=1000”
>> or something, and let me know what you see?
>> >
>> > On 11 Mar 2016, at 13:03, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>> >
>> > > Here it is without the `value` part of request:
>> > >
>> > > curl 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
>> > >
>> > >
>> {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20AAAAja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
>> > >
>> > > On 11 March 2016 at 14:58, Oleksiy Krivoshey <oleks...@gmail.com>
>> wrote:
>> > > I'm actually using PB interface, but I can replicate the problem with
>> HTTP as in my previous email. Request with '=' returns the
>> result set with the same continuation .
>> > >
>> > > On 11 March 2016 at 14:55, Magnus Kessler <mkess...@basho.com> wrote:
>> > > Hi Oleksiy,
>> > >
>> > > How are you performing your 2i-based key listing? Querying with
>> pagination as shown in the documentation[0] should work.
>> > >
>> > > As an example here is the HTTP invocation:
>> > >
>> > > curl "
>> https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10=g20CNTM=
>> "
>> > >
>> > > Once the end of the key list is reached, the server returns an empty
>> keys list and no further continuation value.
>> > >
>> > > Please let me know if this works for you.
>> > >
>> > > Kind Regards,
>> > >
>> > > Magnus
>> > >
>> > >
>> > > [0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying
>> > >
>> > > On 11 March 2016 at 10:06, Oleksiy Krivoshey <oleks...@gmail.com>
>> wrote:
>> > > Anyone?
>> > >
>> > > On 4 March 2016 at 19:11, Oleksiy Krivoshey <oleks...@gmail.com>
>> wrote:
>> > > I have a bucket with ~200 keys in it and I wanted to iterate them
>> with the help of $bucket index and 2i request, however I'm facing the
>> recursive behaviour, for example I send the following 2i request:
>> > >
>> > > {
>> > > bucket: 'BUCKET_NAME',
>> > > type: 'BUCKET_TYPE',
>> > > index: '$bucket',
>> > > key: 'BUCKET_NAME',
>> > > qtype: 0,
>> > > max_results: 10,
>> > > continuation: ''
>> > > }
>> > >
>> > > I receive 10 keys and continuation '', I then repeat the request
>> with continuation '' and at this point I can receive a reply

Re: Using $bucket index for listing keys

2016-03-11 Thread Magnus Kessler
Hi Oleksiy,

How are you performing your 2i-based key listing? Querying with pagination
as shown in the documentation[0] should work.

As an example here is the HTTP invocation:

curl "
https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10=g20CNTM=
"

Once the end of the key list is reached, the server returns an empty keys
list and no further continuation value.

Please let me know if this works for you.

Kind Regards,

Magnus


[0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying

On 11 March 2016 at 10:06, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> Anyone?
>
> On 4 March 2016 at 19:11, Oleksiy Krivoshey <oleks...@gmail.com> wrote:
>
>> I have a bucket with ~200 keys in it and I wanted to iterate them with
>> the help of $bucket index and 2i request, however I'm facing the recursive
>> behaviour, for example I send the following 2i request:
>>
>> {
>> bucket: 'BUCKET_NAME',
>> type: 'BUCKET_TYPE',
>> index: '$bucket',
>> key: 'BUCKET_NAME',
>> qtype: 0,
>> max_results: 10,
>> continuation: ''
>> }
>>
>> I receive 10 keys and continuation '', I then repeat the request with
>> continuation '' and at this point I can receive a reply with
>> continuation '' or '' or even '' and its going in never ending
>> recursion.
>>
>> I'm running this on a 5 node 2.1.3 cluster.
>>
>> What I'm doing wrong? Or is this not supported at all?
>>
>> Thanks!
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Yokozuna inconsistent search results

2016-02-29 Thread Magnus Kessler
On 26 February 2016 at 15:51, Oleksiy Krivoshey <oleks...@gmail.com> wrote:

> Yes, AAE is enabled:
>
> anti_entropy = active
>
> [...]

> However the output of "riak-admin search aae-status" looks like this:
> http://oleksiy.sirv.com/misc/search-aae.png
>
>
Hi Oleksiy,

There are two partitions on the node that haven't seen their AAE tree
rebuilt in a long time. The reason for this is not clear at the moment,
although we have seen this happening when a partition contains data that
for some reason cannot be indexed with the configured Solr schema.

Please run on the 'riak attach' console:

riak_core_util:rpc_every_member_ann(yz_entropy_mgr, expire_trees, [], 5000).

Afterwards exit the console with "Ctrl-G q".

The AAE trees should start to be rebuilt shortly after. With default
settings, on a cluster with ring size 64 the whole process should finish in
about 1 day and any missing but indexable data should appear in all
assigned Solr instances. During this time you may still observe
inconsistent search results due to the way the coverage query is performed.
Keep an eye open for any errors from one of the yz_ modules in the logs
during this time.
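
Progress can be tracked on each node with the command you ran earlier; the
"Entropy Trees" section shows when each partition's tree was last built:

riak-admin search aae-status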

Please let us know if the Search AAE trees can be rebuilt successfully and
if this solved your issue.

Kind Regards,

Magnus


PS: Please consider subscribing to this subscribers only mailing list to
avoid your messages being held in the moderation queue. Instructions on how
to join can be found at
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com.


> On Fri, 26 Feb 2016 at 17:13 Fred Dushin <fdus...@basho.com> wrote:
>
>> I would check the coverage plans that are being used for the different
>> queries, which you can usually see in the headers of the resulting
>> document.  When you run a search query though yokozuna, it will use a
>> coverage plan from riak core to find a minimal set of nodes (and
>> partitions) to query to get a set of results, and the coverage plan may
>> change every few seconds.  You might be hitting nodes that have
>> inconsistencies or are in need of repair.  Do you have AAE enabled?
>>
>> -Fred
>>
>> > On Feb 26, 2016, at 8:36 AM, Oleksiy Krivoshey <oleks...@gmail.com>
>> wrote:
>> >
>> > Hi!
>> >
>> > Riak 2.1.3
>> >
>> > Having a stable data set (no documents deleted in months) I'm receiving
>> inconsistent search results with Yokozuna. For example first query can
>> return num_found: 3000 (correct), the same query repeated in next seconds
>> can return 2998, or 2995, then 3000 again. Similar inconsistency happens
>> when trying to receive data in pages (using start/rows options): sometimes
>> I get the same document twice (in different pages), sometimes some
>> documents are missing completely.
>> >
>> > There are no errors or warning in Yokozuna logs. What should I look for
>> in order to debug the problem?
>> >
>> > Thanks!
>
>
-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Re[5]: Get all keys from an bucket

2016-02-29 Thread Magnus Kessler
Hi Markus,

You may want to subscribe to the mailing list to avoid having your messages
held in the moderation queue; only subscribed members can post directly to
this list. Also, please always reply to the list address, not to individual
subscribers, to keep the discussion public.

Russell just beat me to an answer. Please let us know if this solved your
issue.
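
For reference, a minimal sketch of the paginated key listing Russell
described, using the HTTP 2i endpoint ('mybucket' and '<token>' are
placeholders; this requires the leveldb backend):

curl "http://localhost:8098/buckets/mybucket/index/\$bucket/_?max_results=1000"
# repeat, passing back the continuation token from the previous response,
# until a response arrives without one:
curl "http://localhost:8098/buckets/mybucket/index/\$bucket/_?max_results=1000&continuation=<token>"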

Kind Regards,

Magnus




On 26 February 2016 at 17:17, Markus Geck <zerebo...@mail.ru> wrote:

>
> Anyone?
>
> Saturday, January 30, 2016 1:42 AM +03:00 from Markus Geck <
> zerebo...@mail.ru>:
>
> Do you have an example how to stream them? The url I've posted in my
> initial mail explains how to use that index, but not how to stream the
> results. Unfortunately accessing the keys that way overloads the node.
>
>
> Friday, January 29, 2016 2:06 PM UTC from Russell Brown
> <russell.br...@me.com>:
>
>
> With leveldb you can use the special $bucket index. You can also stream
> the keys, and paginate them, meaning you can get them in smaller lumps,
> hopefully this will appear faster and avoid the timeout you're seeing.
>
>
> On 29 Jan 2016, at 14:03, Markus Geck <zerebo...@mail.ru> wrote:
>
> Yes, sorry I forgot to mention that.
>
>
> Monday, January 25, 2016 10:10 AM UTC from Russell Brown
> <russell.br...@me.com>:
>
> Hi Markus,
> Are you using leveldb backend?
>
> Russell
>
> On 22 Jan 2016, at 19:05, Markus Geck <zerebo...@mail.ru> wrote:
>
> > Hello,
> >
> > is there any way to get all keys from a bucket?
> >
> > I've already tried this guide:
> http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html But
> riak always goes unresponsive with a huge server load.
> >
> > and "GET /buckets/bucket/keys?keys=stream" returns an timeout error.
> >
> > Is there any other way?
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Increase number of partitions above 1024

2016-02-23 Thread Magnus Kessler
On 22 February 2016 at 16:10, Chathuri Gunawardhana <
lanch.gunawardh...@gmail.com> wrote:

> I'm using riak master version on riak github (riak_kv_version :
> <<"2.1.1-38-ga8bc9e0">>)
> . I don't use coverage queries.
>
> When I try to set the partition count over 1024, it suggests doing it
> via advanced config (in the cuttlefish schema for riak core, there is a
> validation to see whether it is above 1024 and, if so, it gives this
> suggestion). But I don't know how I can add this parameter to
> advanced.config.
>
> Thank you very much!
>

Hi Chathuri,

$ riak config describe ring_size
Documentation for ring_size
Number of partitions in the cluster (only valid when first
creating the cluster). Must be a power of 2, minimum 8 and maximum
1024.

   Valid Values:
 - an integer
   Default Value : 64
   Value not set in /etc/riak/riak.conf
   Internal key  : riak_core.ring_creation_size

Cuttlefish configuration schemas are self-describing (via the riak config
describe command). You can use the name of the internal key in
advanced.config. If there's nothing else in advanced.config yet, the
setting looks like

[
 {riak_core,
  [
   {ring_creation_size, 2048}
  ]
 }
].

Also see
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/#Advanced-Configuration
for more details about using the advanced configuration file.

I hope this helps.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: vm.swappiness

2016-02-02 Thread Magnus Kessler
On 29 January 2016 at 18:26, Sakoulas, Byron
<byronsakou...@catholichealth.net> wrote:

> Is it possible that Basho’s recommendation on vm.swappiness is out of date?
>
> The current recommendation (per
> http://docs.basho.com/riak/latest/ops/tuning/linux/) is still to set
> vm.swappiness to 0.
>
> Based on
> https://www.percona.com/blog/2014/04/28/oom-relation-vm-swappiness0-new-kernel/
> it appears that in kernel 3.5 and up, the behavior of vm.swappiness=0 was
> changed.
>
> Due to that, should we modify our vm.swappiness to 1?
>
> Thanks
>
>
Hi Byron,

Yes, you are correct. If you want to keep the swapping behaviour prior to
Linux 3.5 (minimum amount of swapping without disabling it entirely) on
Linux 3.5+, you should use vm.swappiness=1.

I will make sure that the documentation is updated to reflect this.

Generally we recommend turning swapping off to avoid long, unpredictable
latencies when the kernel starts to swap out memory. Regardless of which
setting you use, please check the behaviour of your system under high
memory load.
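
For reference, a quick sketch of applying the setting on a Linux 3.5+
kernel (run as root):

sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' >> /etc/sysctl.conf   # persist across reboots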

Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: High CPU usage by beam.smp

2016-01-28 Thread Magnus Kessler
On 28 January 2016 at 09:05, Fasil K <fa...@gnisir.com> wrote:

>
> Hello, I am using riak 2.1.1 for saving some data (5 records).
> I am running riak with a single node so far. My problem is that riak is
> consuming almost 40% of CPU in the idle state. Can anyone help me to solve
> this issue?
>
>
> With Regards,
>
> Fasil K
>


Hi Fasil,

The CPU usage you observe is due to the way Erlang schedulers work. When
they run out of work, they don't go to sleep immediately, but perform a
busy wait for some time instead. This increases responsiveness. For a nice,
detailed discussion of this please see
http://jlouisramblings.blogspot.co.uk/2013/01/how-erlang-does-scheduling.html

On a single node with the default ring size of 64, you also have a lot more
VNodes than on a typical production node. A VNode is handled by an Erlang
process, and all these processes require their own share of CPU and memory.
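
If this is a development box, a smaller ring reduces that baseline
overhead. A sketch of the riak.conf setting, which is only valid before
the node first starts and creates its cluster:

ring_size = 8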

I hope this answers your question.

Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak versioning and handoff bug in 2.1.0

2016-01-13 Thread Magnus Kessler
On 13 January 2016 at 08:28, Vladyslav Zakhozhai <
v.zakhoz...@smartweb.com.ua> wrote:

> Hi,
>
> According to the following link there is a bug in 2.1.0 which may cause
> data loss:
> https://docs.basho.com/riak/2.0.5/community/product-advisories/210-dataloss/
>
> I've installed 2.1.1 riak deb package from basho's repository on
> packagecloud. But from riak stats I see that there is riak_core_version
> 2.1.1 in that package but riak_kv - 2.1.0. I think that this version of
> package is affected by this bug. Am I right?
>

Hi Vladyslav,

The Riak-KV 2.1.0 release had a misconfigured default handoff_ip setting
which caused nodes to hand off partition data to themselves, leading to
potential data loss. This was fixed with an update to the riak_core module.
As the riak_kv module was not affected, its version number remained the
same for the bug-fix release 2.1.1 of Riak-KV.
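
For reference, the installed component versions can be confirmed on a
running node with:

riak-admin status | grep -E 'riak_(core|kv)_version'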


>
> Is there available compatability list of riak, riak cs and stanchion for
> recent versions?
> http://docs.basho.com/riakcs/latest/cookbooks/Version-Compatibility/ is
> outdated. Can I use basho's package 2.1.3 (riak_core_version: 2.1.5,
> riak_kv_version: 2.1.2) with riak cs and stanchion 2.1.0 basho's package?
>
> Thank you in advance.
>
>
While largely accurate for historical releases, this page has not yet been
updated to take the compatibility statement from
https://github.com/basho/riak_cs/blob/2.1/RELEASE-NOTES.md into account:
"Riak S2 2.1 is designed to work with both Riak KV 2.0.5+ and 2.1.1+."

You can also use the currently released Riak-KV versions 2.1.3 and 2.0.6
with Riak-S2 2.1.1.

I hope this helps.

Kind regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Pending handoff when node offline

2016-01-05 Thread Magnus Kessler
On 5 January 2016 at 10:07, Daniel Iwan <iwan.dan...@gmail.com> wrote:

> Hi all
>
> Am I right in thinking that when a node goes offline *riak-admin transfers*
> will always show transfers to be done? E.g.
>
> riak-admin transfers
> Attempting to restart script through sudo -H -u riak
> [sudo] password for myuser:
> Nodes ['riak@10.173.240.12'] are currently down.
> 'riak@10.173.240.9' waiting to handoff 18 partitions
> 'riak@10.173.240.11' waiting to handoff 13 partitions
> 'riak@10.173.240.10' waiting to handoff 13 partitions
>
> Active Transfers:
>
>
> Node 'riak@10.173.240.12' could not be contacted
>
>
> Even when I mark node as down with *riak-admin down* transfers are still
> waiting.
> Is that normal?
>

> The reason I ask is because our services before they start are checking if
> all transfers are complete (normal process during riak startup). This is
> because in the past we've had issues with using 2i queries while Riak was
> still performing handoffs.
>
> Unfortunately this means that after e.g. reboot our services won't start
> until timeout expires or missing node comes back and handoff finishes.
>
> Maybe there is a better way to check if Riak cluster is ready for 2i
> queries?
> We are still on Riak 1.3.1
>
> Regards
> Daniel
>


Hi Daniel,

This behaviour is completely normal and expected. As part of Riak's high
availability capabilities, when a target VNode is not available to write
data to, other nodes will spin up fallback VNodes that temporarily store
incoming data. These will show up in the "riak-admin transfers" output as
partitions waiting to be handed off. The presence of pending partition
handoffs does not automatically mean that a node is incapable of handling
queries, and is therefore not a good indicator for your use case.

You may want to use "riak-admin wait-for-service riak_kv <nodename>" to
detect that a restarted node is capable of handling requests again. See
http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#wait-for-service
for more details.
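
For example, with one of the node names from your output:

riak-admin wait-for-service riak_kv riak@10.173.240.9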

Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Getting Caused by: java.lang.ClassNotFoundException: io.netty.handler.codec.ByteToMessageDecoder

2015-11-18 Thread Magnus Kessler
On 18 November 2015 at 03:40, Rohan Kalambekar <r.kalambe...@tvilight.com>
wrote:

> Hi,
>
> I am new to RIAK, I am trying to search data in RIAK bucket using
> secondary indexes.
>
> I am getting this exception while executing the sample TasteOfRiak.java
> class available on RIAK website.
>
> Caused by: java.lang.ClassNotFoundException:
> io.netty.handler.codec.ByteToMessageDecoder
>
> The problem the logs point to is with RiakNode
>
>
>  RiakNode node = new RiakNode.Builder().withRemoteAddress("10.0.0.250")
> .withRemotePort(8087).build();
>
> Please find the attached error logs.
>
> Regards,
> Rohan
>
>
Hi Rohan,

please make sure that all necessary dependencies are available on the
classpath. For netty, you'll need the netty jar (e.g.
netty-all-4.0.17.Final.jar).

Please have a look at the maven pom here[0] for required dependencies.
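
For illustration, a rough sketch of running the sample with the netty jar
on the classpath (jar names and versions here are placeholders; include the
client jar and any other dependencies from the pom as well):

java -cp .:riak-client.jar:netty-all-4.0.17.Final.jar TasteOfRiak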

Regards,

Magnus

[0] https://github.com/basho/riak-java-client/blob/develop/pom.xml

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: MapReduce "not_found" from Solr search index

2015-11-17 Thread Magnus Kessler
On 16 November 2015 at 11:47, Ellis Pritchard <ellis.pritch...@ft.com>
wrote:

> Hi,
>
> I've configured a Solr search index ("erights-users") for my bucket (named
> "missing", default type), containing a bunch of JSON documents, with a
> search schema ("erightsuser"); this seems to be working OK for simple
> queries, i.e. I can run a Solr query against it and it returns expected
> results:
>
> $ curl http://localhost:8098/types/default/buckets/missing/props
>
>
> {"props":{"allow_mult":false,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"missing","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","search_index":"erights-users","small_vclock":50,"w":"quorum","write_once":false,"young_vclock":20}}
>
>
> $ curl http://localhost:8098/search/query/erights-users?q=country.code:ANT
>
> <?xml version="1.0" encoding="UTF-8"?>
> <response>
>   <lst name="responseHeader">
>     <int name="status">0</int>
>     <int name="QTime">58</int>
>     <lst name="params">
>       <str name="q">country.code:ANT</str>
>       <str name="shards">127.0.0.1:8093/internal_solr/erights-users</str>
>       <str name="127.0.0.1:8093">_yz_pn:64 OR (_yz_pn:61 AND (_yz_fpn:61)) OR _yz_pn:60 OR _yz_pn:57 OR _yz_pn:54 OR _yz_pn:51 OR _yz_pn:48 OR _yz_pn:45 OR _yz_pn:42 OR _yz_pn:39 OR _yz_pn:36 OR _yz_pn:33 OR _yz_pn:30 OR _yz_pn:27 OR _yz_pn:24 OR _yz_pn:21 OR _yz_pn:18 OR _yz_pn:15 OR _yz_pn:12 OR _yz_pn:9 OR _yz_pn:6 OR _yz_pn:3</str>
>     </lst>
>   </lst>
>   <result name="response" numFound="..." start="0" maxScore="12.103038">
>     <doc>
>       <str name="...">PR</str>
>       <str name="country.code">ANT</str>
>       <str name="...">FIN</str>
>       <str name="industry.code">ENC</str>
>       <str name="contactAddress.country.code">ANT</str>
>       <str name="email">x...@xxx.com</str>
>       <str name="gid">10783b99-9483-414d-a6f8-eb330ff6dfac</str>
>       <str name="userId">10422205</str>
>       <str name="_yz_id">1*default*missing*10783b99-9483-414d-a6f8-eb330ff6dfac*51</str>
>       <str name="_yz_rk">10783b99-9483-414d-a6f8-eb330ff6dfac</str>
>       <str name="_yz_rt">default</str>
>       <str name="_yz_rb">missing</str>
>     </doc>
>     ...
>
>
> However, I'm trying to do a simple MapReduce on it, initially to count the
> documents (following the example in the 2.1.1 riakdocs) and I always seem
> to get 0 as a result:
>
> $ curl -XPOST http://localhost:8098/mapred  -H 'Content-Type:
> application/json'  -d
> '{"inputs":{"module":"yokozuna","function":"mapred_search","arg":["erights-users","country.code:ANT"]},"query":[{"map":{"language":"javascript","keep":false,"source":"function(v)
> { return [1];
> }"}},{"reduce":{"language":"javascript","keep":true,"name":"Riak.reduceSum"}}]}'
>
> [0]
>
>
> If I run with {"keep": true} on the map operation, I get the following:
>
>
> [[{"not_found":{"bucket_type":"default","bucket":"missing","key":"0063aac8-bb45-4051-a502-d541b41d327b","keydata":{}}},...
>
> (NB confusingly, my bucket is called "missing"!).
>
> Doing a GET for the keys that come back "not_found" works fine.
>
>
> What am I missing?
>
>
> Ellis.
>
> (Riak 2.1.1 MacOS X)
>

Hi Ellis,

You don't mention why your use case requires MapReduce, but to simply
obtain the number of indexed objects there's a much easier way, using only
Solr query features:

curl -s "
http://localhost:8098/search/query/erights-users?wt=json=country.code:ANT=0
<http://localhost:8098/search/query/erights-users?wt=json=country.code:ANT?=*:*=0>"
| python -mjson.tool | grep numFound

The above asks Solr to return its results as JSON ('wt=json') and, via
'rows=0', requests no actual documents, just the header information. The
remainder of the line
uses the python 'json.tool' module to pretty-print the response, and
extracts the number.

The various Riak clients also offer APIs to obtain search results and may
make it easier to extract the desired information.

Please let me know if this helped.

Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakKV and RiakS2 co-existing

2015-11-09 Thread Magnus Kessler
On 24 October 2015 at 18:15, Emyr James <e...@sanger.ac.uk> wrote:

> Hi,
> I've started to use RiakKV on an 11 node cluster. So far it seems to be
> working reasonably well for my application although I occasionally hit
> Timeouts on inserts and have to keep retrying in my python client.
> I'm interested in giving RiakS2 a trial and given that I already have Riak
> installed on my nodes can I layer RiakS2 on top of my existing Riak cluster
> that I'm using as a key value store or should I run multiple riak instances
> listening on different ports for the 2 differing use cases ?
> Cheers,
> Emyr
>
>
Hi Emyr,

Using one and the same cluster for both Riak-KV and Riak-S2 is not
officially supported by Basho. Riak-S2 uses a specialised backend
configuration and very specific buckets, which a generic KV configuration
may interfere with. So, while technically possible, we recommend not to mix
a KV and an S2 use case within the same cluster.

If you have access to additional computing resources (e.g. cloud
instances), it would be easiest to spin up an entirely separate cluster for
your S2 trial. This avoids having to adjust KV and S2 configuration files
for non-standard ports. You can also spin up a new cluster on the existing
hardware, but please make sure that you change the cookie for the new
cluster to a unique value and that you adjust the ports in use so that they
don't overlap with the default ports.
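
A sketch of the relevant riak.conf entries for such a second instance (all
values illustrative):

distributed_cookie = riak_s2_trial
listener.http.internal = 0.0.0.0:18098
listener.protobuf.internal = 0.0.0.0:18087
handoff.port = 18099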

Regards,

Magnus


>
> --
> The Wellcome Trust Sanger Institute is operated by Genome Research
> Limited, a charity registered in England with number 1021457 and a company
> registered in England with number 2742969, whose registered office is 215
> Euston Road, London, NW1 2BE.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.1 : Build error on Opensuse 13.1

2015-09-16 Thread Magnus Kessler
On 10 September 2015 at 14:33, Shing Hing Man <mat...@yahoo.com> wrote:

> Hi,
>
>  I have followed  the instructions on :
> http://docs.basho.com/riak/latest/ops/building/installing/from-source/
>
> curl -O http://s3.amazonaws.com/downloads.basho.com/riak/2.1/2.1.1/riak-2.1.1.tar.gz
> tar zxvf riak-2.1.1.tar.gz
> cd riak-2.1.1
> make locked-deps
> make rel
>
> But when I do  "make locked-deps", I get the following error.
>
>
> shing@cauchy:~/Downloads/riak/riak-2.1.1> make locked-deps
> fatal: Not a git repository (or any parent up to mount point /home)
> Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
> Using rebar.config.lock file to fetch dependencies
> ./rebar -C rebar.config.lock get-deps
> Uncaught error in rebar_core: {'EXIT',
> {undef,
> [{crypto,start,[],[]},
> {rebar,run_aux,2,
> [{file,"src/rebar.erl"},{line,212}]},
> {rebar,main,1,
> [{file,"src/rebar.erl"},{line,58}]},
> {escript,run,2,
> [{file,"escript.erl"},{line,747}]},
> {escript,start,1,
> [{file,"escript.erl"},{line,277}]},
> {init,start_it,1,[]},
> {init,start_em,1,[]}]}}
> make: *** [locked-deps] Error 1
> shing@cauchy:~/Downloads/riak/riak-2.1.1>
>
>
> I have installed the Basho version of Erlang.
> shing@cauchy:~/Downloads/riak/riak-2.1.1> erl
> Erlang R16B02_basho8 (erts-5.10.3) [source] [64-bit] [smp:8:8]
> [async-threads:10] [hipe] [kernel-poll:false]
>
> Eshell V5.10.3  (abort with ^G)
> 1>
>
> Thanks in advance for any assistance !
>
> Shing
>

Hi Shing,

The error you observed may occur when the Erlang crypto module tries to use
ciphers that aren't available in the openssl library. We provide a
workaround for this for Red Hat / CentOS in our documentation [0]. Other
Linux distributions may also be affected.

Regards,

Magnus

[0]:
http://docs.basho.com/riak/latest/ops/building/installing/erlang/#Installing-on-RHEL-CentOS


-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak restart does not restart yokozuka well

2015-09-16 Thread Magnus Kessler
On 14 September 2015 at 16:43, Alexander Popov <mogada...@gmail.com> wrote:

> Yes, there are plenty of errors there, like
> Committed before 500 {msg=GC overhead limit
> exceeded,trace=java.lang.OutOfMemoryError: GC overhead limit exceeded
>  null:org.eclipse.jetty.io.EofException
>
> and so on; this is the reason why I try to restart the node
>
> My concerns are:
> * search on this node comes to a non-working state and does not repair itself
> * a halted node requires manual actions
> * false positive report of *riak restart*
>
>
>
>
Hi Alexander,

If you see garbage collection related Solr errors, you may want to revisit
your Java VM settings in 'riak.conf'. By default, only 1 GB of heap space
is given to the JVM. This is sufficient for light loads, but in production
you'd typically want to increase the heap allocation. Solr memory tuning
may also involve switching to a different GC algorithm. See e.g.
https://wiki.apache.org/solr/ShawnHeisey or
https://wiki.apache.org/solr/SolrPerformanceProblems for more details.
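
For illustration, the heap can be raised via the JVM options in riak.conf
(sizes here are only an example; tune them to your host's RAM):

search.solr.jvm_options = -d64 -Xms2g -Xmx2g -XX:+UseStringCache -XX:+UseCompressedOops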

Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to fix an stale index situation after a bitcask data restore

2015-09-16 Thread Magnus Kessler
On 7 August 2015 at 18:47, Hao <jusf...@163.com> wrote:

> Single node with search=on, I previously indexed 2 map objects. Then I did
> a restore of my previous bitcask folder. I can see newly created bucket
> type still there. And when I search by index, the index data are also
> there. So I removed the index folder (under
> /var/lib/riak/yz/todoriak_main_movie_idx/data), expecting riak to start
> indexing anew. But it turned out, no. The index folder was re-created by
> Riak, though.
>
> When I search by index, I still see the old 2 map objects. And when I add
> new objects, they are not indexed at all. Always the old 2 map objects only.
>
> I thought AAE would fix the index. No? Is AAE only for clusters?
> What can I do now? Delete index and re-create? What's the business of
> re-attach?
>
> Thank you
> -Hao
>

Hi Hao,

Riak's search AAE should eventually re-index your data. However, by default
AAE trees are rebuilt only once a week, and then only one tree is rebuilt
per hour on each node. It is possible to speed this up, though.

On one node, run 'riak attach'. Then, at the Erlang console run

riak_core_util:rpc_every_member_ann(application, set_env, [riak_kv, anti_entropy_build_limit, {4, 360}], 60).
riak_core_util:rpc_every_member_ann(application, set_env, [riak_kv, anti_entropy_concurrency, 4], 60).
riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, anti_entropy_build_limit, {4, 360}], 60).
riak_core_util:rpc_every_member_ann(application, set_env, [yokozuna, anti_entropy_concurrency, 4], 60).

The above will allow concurrent rebuilding of up to 4 trees per node per
hour. Be careful not to increase this too much on production systems. To
force expiration of the search AAE trees you can then run in the Erlang
console

riak_core_util:rpc_every_member_ann(yz_entropy_mgr, expire_trees, [], 60).

Hope this helps.

Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Moving the anti-entropy directory (riak 1.4)

2015-09-15 Thread Magnus Kessler
On 11 September 2015 at 11:45, Sujay Mansingh <su...@edited.com> wrote:

> On our riak cluster machines, we have a separate disk which stores all the
> bitcask data (in /data)
>
> This is so that we can have a (relatively) small disk for the OS, and have
> a big disk for /data.
>
> $ df -h / /data
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/root20G   11G  7.9G  57% /
> /dev/md33.6T  277G  3.1T   9% /data
>
> However, especially more so lately, we see that the /var/lib/riak
> directory often becomes very large.
>
> $ pwd
> /var/lib/riak
> $ du -hcs *
> 9.3G    anti_entropy
> 1.1M    kv_vnode
> 104K    ring
> 9.3G    total
>
> So we see that the anti_entropy directory takes up 9.3GB (almost half of /).
> (Sometimes it balloons to 18GB, almost taking up the disk. We then restart
> riak to get the disk usage down.)
>
> I want to move that directory off /.
>
> Here are the steps I want to perform on each node.
>
>1. Stop riak
>2. Move /data to /riak/data (I will have to change the name of the
>mount point)
>3. Move /var/lib/riak/anti_entropy to /riak/anti_entropy
>4. Change the following items in app.config
>   - Change {anti_entropy_data_dir, "/var/lib/riak/anti_entropy"}, to 
> {anti_entropy_data_dir,
>   "/riak/anti_entropy"},
>   - Change {data_root, "/data"} to {data_root, "/riak/data"}
>5. Start riak
>
> I just wanted to confirm that if I move the files and change the config,
> riak will simply start up and continue as before.
>
>
Hi Sujay,

The steps you outlined are going to work fine. Please make sure that the
newly created '/riak' directory has the correct file permissions for the
'riak' user.
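
Condensed, the sequence would look something like this (assuming the big
disk has already been remounted at /riak, as per your step 2):

riak stop
mv /var/lib/riak/anti_entropy /riak/anti_entropy
chown -R riak:riak /riak
# adjust anti_entropy_data_dir and data_root in app.config, then:
riak start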

Regards,

Magnus


> Thanks!
>
> Sujay
> ​
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Search limitations

2015-08-25 Thread Magnus Kessler
On 18 August 2015 at 00:20, Brant Fitzsimmons <brant.fitzsimm...@gmail.com>
wrote:

 Hello all,

 Are the search suggestions on
 http://docs.basho.com/riak/latest/dev/using/application-guide/#Search
 still valid?

 Specifically, is it still advisable to use 2i when deep pagination is
 required, and if the cluster is going to be larger than 8-10 nodes should I
 still use something else for search?


Hi Brant,

Regarding deep pagination, you may want to try Solr's deep
paging [0][1] for your use case. You can issue an appropriate HTTP request
through Riak's HTTP endpoint for Solr.

Regards,

Magnus

[0]: http://solr.pl/en/2014/03/10/solr-4-7-efficient-deep-paging/
[1]:
https://wiki.apache.org/solr/CommonQueryParameters#Deep_paging_with_cursorMark
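
A hedged sketch of such a request through Riak's Solr endpoint ('myindex'
is a placeholder; cursorMark requires the bundled Solr to be 4.7+ and a
sort on a unique field such as Yokozuna's _yz_id):

curl "http://localhost:8098/search/query/myindex?q=*:*&rows=100&sort=_yz_id+asc&cursorMark=*&wt=json"

Each response includes a nextCursorMark value to pass back as cursorMark on
the following request.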

-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Rolling upgrade from 1.4.2 to 2.0.5

2015-08-06 Thread Magnus Kessler
On 5 August 2015 at 18:53, John Daily <jda...@basho.com> wrote:

 That’s correct: upgrades to either 2.0.x or 2.1.x are supported from the
 1.4 series.

 Side note: I definitely recommend testing the upgrade process in a QA
 environment first.

 -John


Hi Sujay,

The latest release in the 2.0 series is 2.0.6 [0]. Please use this version
if you upgrade to 2.0.

Please also review the documentation about the new 'riak.conf'
configuration file [1][2]. 2.x installations should use the new format, but
you can continue to use the 'app.config' format from Riak 1.x. To maintain
complete backwards compatibility when using 'app.config', please add

[{default_bucket_props,
  [{allow_mult, false},    %% have Riak resolve conflicts and do not return siblings
   {dvv_enabled, false}]}, %% use vector clocks for conflict resolution
 %% other settings
]

to 'app.config'. This will ensure that your existing application continues
to work exactly as before. When using 'riak.conf', these settings will be
applied automatically.


Magnus

[0] http://docs.basho.com/riak/2.0.6/downloads/
[1]
http://docs.basho.com/riak/latest/intro-v20/#Simplified-Configuration-Management
[2]
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/

On Aug 5, 2015, at 12:13 PM, Sujay Mansingh <su...@editd.com> wrote:


 Hello all, I have a 5 node riak cluster, all nodes running 1.4.2.

 I want to upgrade to riak 2.x

 According to this:
 http://docs.basho.com/riak/latest/ops/upgrading/rolling-upgrades/ I can
 perform a rolling upgrade (a mixed cluster)
 as long as the versions aren't more than 2 versions apart.

 There is no riak 1.5 so would riak 1.4.2 - 2.0.5 count as 1 version apart?

 Sujay
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Loopback Address as Handoff IP/Interface

2015-05-04 Thread Magnus Kessler
On 3 May 2015 at 12:19, Praveen Baratam <praveen.bara...@gmail.com> wrote:

 Hello Everybody,

 I am trying to set up a single-node Riak cluster and want to use a loopback
 address (127.0.0.x) for the Handoff IP to keep the instance private and
 invisible to others.

 But the Riak node fails to start and throws an error: “handoff.ip
 invalid, must be a valid IP address”

 Is it illegal to use the loopback interface for the Handoff IP? I came
 across some JIRA issues (https://github.com/basho/riak_core/issues/670)
 but couldn't find a solution without having to build packages from source.

 Any workarounds? I am trying to use Riak 1.4.12 on Ubuntu 14.04!

 Any advice in this regard will be greatly helpful.

 Thank you.

 Praveen


Hi Praveen,

As you are using Riak 1.4, you cannot use the new configuration format (via
the riak.conf file, a.k.a. cuttlefish, introduced in Riak 2.0). The
handoff_ip setting belongs to riak_core and has to be changed in
app.config[0].

If you really want to set the handoff_ip and change it from its default
setting of 0.0.0.0, then you need to add an entry into the riak_core
settings in app.config

{riak_core, [
    %% Other configs
    {handoff_ip, "127.0.0.1"},
    %% Other configs
]}

However, I would suggest simply firewalling the handoff port on your machine
(8099) so that it cannot be reached from the network.
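
An illustrative iptables rule for that (adjust to your distribution's
firewall tooling):

iptables -A INPUT -p tcp --dport 8099 ! -s 127.0.0.1 -j DROP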

Hope this helps.

Regards,

Magnus

[0]:
http://docs.basho.com/riak/1.4.12/ops/advanced/configs/configuration-files/#-code-riak_core-code-Settings

-- 
Magnus Kessler
Client Services Engineer @ Basho
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 1.3.1 crashing with segfault

2015-02-25 Thread Magnus Kessler
On 25 February 2015 at 12:11, Daniel Iwan <iwan.dan...@gmail.com> wrote:

 Hi

 I've checked all logs and there is nothing regarding memory issues.
 Since then I've had several Riak crashes, but it looks like other
 processes are failing as well

 Feb  2 22:05:28 node2 kernel: [20052.901884] beam.smp[1830]: segfault at
 8523111 ip 08523111 sp 7f03ba821be8 error 14 in
 13.log[7f01b689+140]
 Feb  4 10:16:01 node2 kernel: [150082.149742] MegaCli[29635]: segfault at
 8549b14 ip 08549b14 sp 7fff3c5f7840 error 14 in
 libtinfo.so.5.9[7f3ee87ba000+22000]
 Feb  4 16:35:27 node2 kernel: [172812.410113] MegaCli[23019]: segfault at
 85473d6 ip 085473d6 sp 7fffc8f49f20 error 14 in
 libtinfo.so.5.9[7fa9e90c3000+22000]
 Feb  6 14:50:38 node2 kernel: [339062.483587] sh[24637]: segfault at
 840460d ip 0840460d sp 7fff35251160 error 14 in
 libc-2.15.so[7fd4cc395000+1b5000]
 Feb  7 11:59:32 node2 kernel: [415077.342034] df[6393]: segfault at
 84029a0 ip 084029a0 sp 7fff758406d0 error 14 in
 libc-2.15.so[7f5433243000+1b5000]

 Feb  8 10:04:31 node2 kernel: [494451.877635] df[22107]: segfault at
 8404b00 ip 08404b00 sp 7eee1b08 error 14 in
 libc-2.15.so[7fbd82ce2000+1b5000]
 Feb  9 16:30:20 node2 kernel: [603829.476142] ls[2873]: segfault at
 840d04e ip 0840d04e sp 7fffaff38c60 error 14 in
 libnss_files-2.15.so[7f257c9c4000+c000]
 Feb  9 18:26:13 node2 kernel: [ 6503.710549] beam.smp[2140]: segfault at
 8523a00 ip 08523a00 sp 7f3955ff2d80 error 14 in
 06.log[7f377f27b000+140]
 Feb 10 17:34:46 node2 kernel: [36949.199740] beam.smp[1877]: segfault at
 85650b2 ip 085650b2 sp 7faba120fa70 error 14 in
 09.log[7fa99827c000+140]
 Feb 11 20:37:15 node2 kernel: [134145.969112] beam.smp[7276]: segfault at
 852287e ip 0852287e sp 7ff8625c9be0 error 14 in
 12.log[7ff66f703000+140]
 Feb 13 08:58:57 node2 kernel: [ 6414.659327] beam.smp[1877]: segfault at
 8569cfc ip 08569cfc sp 7f55aa48bab0 error 14 in
 12.log[7f537dc0e000+140]
 Feb 15 03:20:30 node2 kernel: [133707.360153] MegaCli[7442]: segfault at
 85473d6 ip 085473d6 sp 7fff39ba1570 error 14 in
 libtinfo.so.5.9[7f2728f79000+22000]

 Feb 15 10:02:23 node2 kernel: [157782.787481] beam.smp[2023]: segfault at
 85239d0 ip 085239d0 sp 7f47e3fe6d68 error 14 in
 61.log[7f463e32f000+140]
 Feb 16 17:30:18 node2 kernel: [270880.717532] console-kit-dae[1548]:
 segfault at84123e8  ip 084123e8 sp 7fffc6d8c0c0 error 14
 Feb 16 19:31:45 node2 kernel: [278156.348900] beam.smp[16617]: segfault at
 85650b2 ip 085650b2 sp 7f3dbe74ba70 error 14 in
 19.log[7f3b89f65000+140]
 Feb 16 21:45:34 node2 kernel: [286172.695110] sh[12432]: segfault at
 840460d ip 0840460d sp 7fffe6e2b3b0 error 14 in
 libc-2.15.so[7f9b57c77000+1b5000]
 Feb 17 07:27:23 node2 kernel: [  457.418215] beam.smp[1824]: segfault at
 8523111 ip 08523111 sp 7f36e9574be8 error 14 in
 21.log[7f34c24f3000+140]

 Feb 25 10:46:04 node2 kernel: [702478.037041] beam.smp[8832]: segfault at
 8522980 ip 08522980 sp 7fe8bbffede8 error 14 in
 06.log[7fe713e2c000+140]

 Riak is always touching anti-entropy files, like in the last example:


 /var/lib/riak/anti_entropy/22835963083295358096932575511191922182123945984/06.log

 Could it be an SSD failing?

 Daniel


Hi Daniel,

Random segfaults in all sorts of different programs and libraries are a
strong indicator of hardware failure, most likely memory failure.  You
might want to check the memory modules of your server using memtest86 (
http://www.memtest86.com).

For an online tool explaining the different segfault error codes, please
have a look at http://rgeissert.blogspot.com/p/segmentation-fault-error.html

Regards,

Magnus
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Starting riak with init.d-script on Debian 8 fails

2015-02-23 Thread Magnus Kessler
On 23 February 2015 at 10:12, Karsten Hauser <kl.hau...@epages.com> wrote:

 Hi together,

 when I try to start my riak installation with the init-script, I run into
 the following error message:

 root@unity-backend-dev:~# /etc/init.d/riak start
 [] Starting riak (via systemctl): riak.service
 Failed to start riak.service: Unit riak.service failed to load: No such
 file or directory.
 failed!

 So “riak.service” seems to be missing, but I don’t know where.

 My system is “Debian GNU/Linux 8” and I have installed
 “riak_2.0.4-1_amd64.deb”.

 “riak start” without init.d-script just works well.

 Can somebody please help me with this?

 Regards

 Karsten


Hi Karsten,

This is a known issue with Riak on the most recent Debian systems. Debian
has now moved to systemd, and Riak does not yet distribute a systemd
*.service file.

I will raise this with our engineering team.
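
In the meantime, a rough, untested sketch of what such a unit file could
look like (paths assume the Debian package layout; save as
/etc/systemd/system/riak.service and adjust as needed):

[Unit]
Description=Riak distributed KV store
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/riak start
ExecStop=/usr/sbin/riak stop
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target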

Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer @ Basho
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com