Re: object_lager_event what is it?

2019-10-22 Thread Bryan Hunt
Thanks Luke, 

Grep’d and ditto; it seems like a vestigial limb, so I'm deleting it from my configs. 

Thanks

Bryan

> On 21 Oct 2019, at 18:58, Luke Bakken  wrote:
> 
> Hey Bryan,
> 
> Something similar in the RabbitMQ code base made me go "huh" the other
> day. Calls to rabbit_log_ldap module functions when no such module
> exists. Turns out there's a parse transform defined that turns these
> calls into lager function calls to an extra sink:
> 
> https://github.com/rabbitmq/rabbitmq-common/blob/master/mk/rabbitmq-build.mk#L46
> 
> I couldn't find any instance of object_lager_event:info (debug,
> warning, etc) function calls in the Riak code I have lying around,
> though, so /shrug
> 
> Good luck -
> Luke
> 
> On Mon, Oct 21, 2019 at 9:18 AM Bryan Hunt
>  wrote:
>> 
>> Given the following configuration, can anyone explain to me what the 
>> object_lager_event section does?
>> 
>> I see this code all over the place (including snippets I provided in the 
>> distant past).
>> 
>> However, a search on GitHub (basho org/erlang-lager/lager) doesn't turn up 
>> any code/module matches.
>> 
>> (GitHub indexing doesn't work well, as GitHub only indexes master and the 
>> basho repositories are a mess branch-wise.)
>> 
>> Anyone got any idea?
>> 
>> [
>> {lager,
>>   [
>>  {extra_sinks,
>>   [
>>{object_lager_event,
>> [{handlers,
>>   [{lager_file_backend,
>> [{file, "/var/log/riak/object.log"},
>>  {level, info},
>>  {formatter_config, [date, " ", time," [",severity,"] 
>> ",message, "\n"]}
>> ]
>>}]
>>  },
>>  {async_threshold, 500},
>>  {async_threshold_window, 50}]
>>}
>>]
>>  }
>>]
>> }]
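(For anyone who lands here later: an `extra_sinks` entry like the one above only matters if some code actually logs to that sink. With lager's parse transform that would look roughly like the sketch below - the module and function names are hypothetical, and the exact mechanics depend on your lager version:)

```erlang
%% Hypothetical sketch: compiling with lager's parse transform and the
%% extra-sink option lets code "call" a pseudo-module named after the sink
%% prefix; lager_transform rewrites the call into a log event on the
%% object_lager_event sink.
-module(object_log_example).
-compile([{parse_transform, lager_transform},
          {lager_extra_sinks, [object]}]).
-export([log_put/2]).

log_put(Bucket, Key) ->
    %% Rewritten at compile time into an event on object_lager_event,
    %% which the config above writes to /var/log/riak/object.log.
    object:info("PUT ~p/~p", [Bucket, Key]).
```

Since no such calls appear anywhere in the Riak code, the sink configured above never receives events - consistent with treating it as vestigial.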


-- 


Code Sync & Erlang Solutions Conferences

Code BEAM Lite 
<https://www2.erlang-solutions.com/l/23452/2019-08-22/69x6h1> - Berlin: 11 
October 2019

RabbitMQ Summit 
<https://www2.erlang-solutions.com/l/23452/2019-06-24/66sd8l> - London: 4 
November 2019

Code Mesh LDN 
<https://www2.erlang-solutions.com/l/23452/2019-06-24/66sd8x> - London: 7-8 
November 2019

Code BEAM Lite 
<https://www2.erlang-solutions.com/l/23452/2019-08-22/69x6jc> - Bangalore: 
14 November 2019

Code BEAM Lite 
<https://www2.erlang-solutions.com/l/23452/2019-06-24/66sdbs> - Amsterdam: 
28 November 2019

Lambda Days 
<https://www2.erlang-solutions.com/l/23452/2019-06-24/66sdcd> - Kraków: 
13-14 February 2020

Code BEAM SF 
<https://www2.erlang-solutions.com/l/23452/2019-08-22/69x6cm> - San 
Francisco: 5-6 March 2020

Code BEAM STO - Stockholm: 28-29 May 2020





Erlang Solutions cares about your data and privacy; please find all details 
about the basis for communicating with you and the way we process your data 
in our Privacy Policy 
<https://www.erlang-solutions.com/privacy-policy.html>. You can update your 
email preferences or opt-out from receiving Marketing emails here 
<https://www2.erlang-solutions.com/email-preference?epc_hash=JtO6C7Q2rJwCdZxBx3Ad8jI2D4TJum7XcUWcgfjZ8YY>.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


object_lager_event what is it?

2019-10-21 Thread Bryan Hunt
Given the following configuration, can anyone explain to me what the 
object_lager_event section does?

I see this code all over the place (including snippets I provided in the 
distant past).

However, a search on GitHub (basho org/erlang-lager/lager) doesn't turn up any 
code/module matches.

(GitHub indexing doesn't work well, as GitHub only indexes master and the basho 
repositories are a mess branch-wise.)

Anyone got any idea?

[
{lager,
   [
  {extra_sinks,
   [
{object_lager_event,
 [{handlers,
   [{lager_file_backend,
 [{file, "/var/log/riak/object.log"},
  {level, info},
  {formatter_config, [date, " ", time," [",severity,"] 
",message, "\n"]}
 ]
}]
  },
  {async_threshold, 500},
  {async_threshold_window, 50}]
}
]
  }
]
}].

Thanks,

Bryan 



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: File corruption after power cut

2019-09-04 Thread Bryan Hunt
The easiest solution is to just delete the vnode data and let it recover from 
replicas - the vnode directory will be 844930634081928249586505293914101120738586001408



> On 4 Sep 2019, at 10:01, Guido Medina  wrote:
> 
> Hi all,
> 
> We had a power cut which caused one of the nodes to corrupt one of the 
> LevelDB files, after this that node doesn't even want to start, here is the 
> error we are seeing:
>> 2019-09-04 08:46:41.584 [error] <0.2329.0>@riak_kv_vnode:init:527 Failed to 
>> start riak_kv_eleveldb_backend backend for index 
>> 844930634081928249586505293914101120738586001408 error: 
>> {db_open,"Corruption: CURRENT file does not end with newline"}
> 
> Thanks in advance for your help ;-)
> Guido.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Pull request for eleveldb

2019-08-14 Thread Bryan Hunt
Created a PR for the eleveldb develop branch:

https://github.com/basho/eleveldb/pull/254 


Let's operate on the basis that:

- The future of Riak is leveled, particularly as there's been next to no 
maintenance on this since 2016.
- The develop branch is for users of rebar3; that's a reasonable presumption in 
2019.
- People would like to be able to build on OSX, Alpine, and Debian.
- People are using riak_core, and it needs hashtree/metadata etc., which are 
all dependent upon eleveldb (for now).
- People are using Erlang 20+, and in many cases Erlang 22.
- People are using riak_core in Elixir applications.
- Lots of people do their daily work on OSX.

This PR:

- Removes the usage of rebar (the binary blob that you don't really know where 
it came from or what it does).
- Provides two Dockerfiles, one for Alpine and one for Debian, that demonstrate 
what's necessary to get eleveldb to build on those two platforms using rebar3 
with these changes. Perhaps those Docker builds could be added to Travis (if 
it's still running) in a future PR.
- Uses the operating-system-provided version of snappy instead of an arbitrary, 
unverified (no checksum) version of the snappy library marked as 1.0.4. 
Incidentally, there is an existing security advisory for snappy version 1.1.4: 
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-7577 - in any event 
the intention was to switch to lz4 as the default. I think we can agree this 
doesn't represent very good practice now (it made sense at the time).

Downside:

- The tools for inspecting SST files don't build; I can't get them to work in 
any of the environments (OSX, Alpine, Debian). Perhaps someone with better 
C/C++ experience can figure that out. In any event, nobody uses them apart 
from @matthewvon - maybe he can offer a suggestion/tweak to get them working 
again. I think that's acceptable for now.




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.9.0 - Update Available

2019-06-28 Thread Bryan Hunt
Top quality spelunking - always fun to read - thanks, Martin!
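(The sub-binary retention behaviour Martin describes below can be reproduced with plain stdlib Erlang - this is an illustrative sketch, not output from the actual investigation:)

```erlang
%% A 2MB reference-counted binary, standing in for a large object value.
Big = binary:copy(<<0>>, 2097152),
%% Matching out a 32-byte prefix yields a sub-binary: a tiny term that
%% still references the whole 2MB allocation.
<<Meta:32/binary, _/binary>> = Big,
%% process_info/2 lists the off-heap binaries this process references;
%% while Meta is retained, the full 2097152-byte binary stays alive.
{binary, _Refs} = process_info(self(), binary),
%% binary:copy/1 makes an independent 32-byte binary, so a process
%% retaining only MetaCopy no longer pins the 2MB allocation.
MetaCopy = binary:copy(Meta).
```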

> On 28 Jun 2019, at 10:24, Martin Sumner  wrote:
> 
> Bryan,
> 
> We saw that Riak was using much more memory than was expected at the end of 
> the handoffs.  Using `riak-admin top` we could see that this wasn't process 
> memory, but binaries.  Firstly did some work via attach looping over 
> processes and running GC to confirm that this wasn't a failure to collect 
> garbage - the references to memory were real.  Then did a bit of work in 
> attach writing some functions to analyse process_info/2 for each process 
> (looking at binary and memory), and discovered that there were penciller 
> processes that had lots of references to lots of large binaries (and this 
> accounted for all the unexpected memory use), and where the penciller was the 
> only process with a reference to the binary.  This made no sense initially as 
> the penciller should only have small binaries (metadata).  Then looked at the 
> running state of the penciller processes and could see no large binaries in 
> the state, but could see that a lot of the active keys in the penciller were 
> keys that were known to have large object values (but small amounts of 
> metadata) - and that the size of the object values were the same as the size 
> of the binary references found on the penciller process via process_info/2. 
> 
> I then recalled the first part of this: 
> https://dieswaytoofast.blogspot.com/2012/12/erlang-binaries-and-garbage-collection.html
>  
> .
>   It was obvious that in extracting the metadata the beam was naturally 
> retaining a reference to the whole binary, as long as the sub-binary was 
> retained by a process (the Penciller).  Forcing a binary copy resolved 
> this referencing issue.  It was nice that the same tools used to detect the 
> issue, made it quite easy to write a test to confirm resolution - 
> https://github.com/martinsumner/leveled/blob/master/test/end_to_end/riak_SUITE.erl#L1214-L1239
>  
> .
> 
> The memory leak section of Fred Herbert's http://www.erlang-in-anger.com/ 
>  is great reading for helping with these 
> types of issues. 
> 
> Thanks
> 
> Martin
> 
> 
> On Fri, 28 Jun 2019 at 09:46, b h  > wrote:
> Nice work - I've read issue / PR - how did you discover / track it down - 
> tools or just reading the code ? 
> 
> On Fri, 28 Jun 2019 at 09:35, Martin Sumner  > wrote:
> There is now a second update available for 2.9.0: 
> https://github.com/basho/riak/tree/riak-2.9.0p2 
> .
> 
> This patch, like the patch before, resolves a memory management issue in 
> leveled, which this time could be triggered by sending many large objects in 
> a short period of time.  The underlying problem is described a bit further 
> here https://github.com/martinsumner/leveled/issues/285 
> , and is resolved by 
> leveled working more sympathetically with the beam binary memory management. 
> 
> Switching to the patched version is not urgent unless you are using the 
> leveled backend, and may send a large number of large objects in a burst.  
> 
> Updated packages are available (thanks to Nick Adams at TI Tokyo) - 
> https://files.tiot.jp/riak/kv/2.9/2.9.0p2/ 
> 
> 
> Thanks again to the testing team at the NHS Spine project, Aaron Gibbon 
> (BJSS) and Ramen Sen, who discovered the problem.  The issue was discovered 
> in a handoff scenario where there were tens of thousands of 2MB objects 
> stored in a portion of the keyspace at the end of the handoff - which led to 
> memory issues until either more PUTs were received (to force a persist to 
> disk) or a restart occurred.
> 
> Regards
> 
> 
> On Sat, 25 May 2019 at 09:35, Martin Sumner  > wrote:
> Unfortunately, Riak 2.9.0 was released with an issue whereby a race condition 
> in heavy-PUT scenarios (e.g. handoffs), could cause a leak of file 
> descriptors.
> 
> The issue is described here - https://github.com/basho/riak_kv/issues/1699 
> , and the underlying issue here 
> - https://github.com/martinsumner/leveled/issues/278 
> .
> 
> There is a new patched version of the release available (2.9.0p1) at 
> https://github.com/basho/riak/tree/riak-2.9.0p1 
> .  This should be used in 
> preference to the original release of 2.9.0.
> 
> Updated packages are available (thanks to Nick Adams at TI Tokyo) - 
> https://files.tiot.jp/riak/kv/2.9/2.9.0p1/ 
> 

Re: [ANN] Riak 2.9.0 - Release Candidate 5 Available

2019-03-14 Thread Bryan Hunt
Riak running without a trace of leveldb would be a truly wonderful thing - it's 
one of the things that got me most excited about leveled in the first place. 

I have access to some Power 9 servers - but porting leveldb is not on my 
roadmap.

> On 14 Mar 2019, at 10:55, Martin Sumner  wrote:

> I have had some thoughts about getting Riak running without leveldb 
> (https://github.com/basho/riak/issues/961 
> ), but have nothing currently 
> available to help you right now.
> 
> Martin

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available

2019-02-04 Thread Bryan Hunt
Yeah - it used to be typical to rolling-upgrade (replace/leave) individual 
cluster nodes - but that was the era when:

a) It cost $5,000 per node for a riak_repl (enterprise) license
b) Most folk ran on physical hardware 

Now, with most deployments (Bet365/NHS being notable exceptions) running on 
cloud, and no license fee to pay, it's a no-brainer to go the route of: create 
a new cluster, replicate, verify, switch over, archive the old data.

B

> On 1 Feb 2019, at 14:08, Guido Medina  wrote:
> 
> We could probably go the "replace" way, but maybe we take this as a good 
> opportunity to increase our ring size, it is still something we are 
> considering.
> 
> Resizing the cluster while operating daily is no joke, too much data and Riak 
> becomes extremely slow when we have to add a new node, so it would either go 
> the "replace" way or replicate and switch to the new cluster.
> 
> Thanks for all the answers ;-)
> Guido.
> 
> On 01/02/2019 13:50, Nicholas Adams wrote:
>> If he has an extra node he can add then a replace would be much better. I 
>> provided this example under the assumption that he had no additional 
>> resources to play with.
>>  
>> Nicholas
>>  
>> From: riak-users  
>> <mailto:riak-users-boun...@lists.basho.com> On Behalf Of Fred Dushin
>> Sent: 01 February 2019 22:47
>> To: riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>> Subject: Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available
>>  
>> Wouldn't it be better to do a `riak-admin replace`?  Leave could be 
>> problematic if there are other nodes in the cluster that are 
>> under-provisioned (disk space, for example).  Plus a leave and add would 
>> move the data around the cluster twice, for each node in the cluster, 
>> whereas a replace would just move the data to the new node once, no?
>>  
>> -Fred
>> 
>> 
>> On Feb 1, 2019, at 8:32 AM, Nicholas Adams > <mailto:nicholas.ad...@tiot.jp>> wrote:
>>  
>> Hi Guido,
>> Although Martin would be the most qualified to comment on this, I believe 
>> you should be able to do a slow migration.
>>  
>> Choose target node.
>> Make target node leave cluster as in a full “riak-admin cluster leave”, 
>> commit and wait for transfers to finish.
>> Set up target node with leveled and TicTac AAE.
>> Have node rejoin cluster and wait for transfers to finish.
>> Repeat with every single node in the cluster until all have been done.
>>  
>> Unless you are using specific features restricted to your current backend 
>> then Riak will usually put up with multiple backends in the same cluster.
>>  
>> Failing that, I’d go with Bryan’s suggestion to use MDC to replicate from 
>> your existing cluster to a separate cluster that is using the leveled 
>> backend and TicTac AAE.
>>  
>> Either way, be sure to try in a dev environment first and only proceed when 
>> you are happy with the process.
>>  
>> Best regards,
>>  
>> Nicholas
>>  
>> From: riak-users > <mailto:riak-users-boun...@lists.basho.com>> On Behalf Of Bryan Hunt
>> Sent: 01 February 2019 19:22
>> To: Guido Medina mailto:gmed...@temetra.com>>
>> Cc: riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>> Subject: Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available
>>  
>> Replication would be the optimum solution - in theory anything can be 
>> implemented - but it would be enormously painful in comparison to simply 
>> standing up a new cluster.
>> 
>> 
>> 
>> On 1 Feb 2019, at 10:14, Guido Medina > <mailto:gmed...@temetra.com>> wrote:
>>  
>> Hi all,
>> 
>> Nice work on the upcoming 2.9.0 release, I have a quick question:
>> Will it be possible to switch from the eleveldb to the new leveled backend 
>> and Tictac AAE for an existing cluster?
>> 
>> In case it is not possible we are thinking to use the new replication and 
>> move to a brand new cluster.
>> 
>> Kind regards,
>> Guido.
>> 
>> 
>> On 30/01/2019 15:23, Nicholas Adams wrote:
>> Dear All,
>> Following on from Martin’s email, the initial builds of the KV 2.9.0rc1 
>> packages are complete. Currently we are limited to Redhat/CentOS 6 and 7 as 
>> well as Ubuntu Bionic. We shall be adding more packages over the next few 
>> days. Please see https://files.tiot.jp/riak/kv/2.9/2.9.0rc1/ 
>> <https://files.tiot.jp/riak/kv/2.9/2.9.0rc1/> to download.
>>  
>> Disclaimer: This is a release candidate. Please do not

Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available

2019-02-01 Thread Bryan Hunt
Replication would be the optimum solution - in theory anything can be 
implemented - but it would be enormously painful in comparison to simply 
standing up a new cluster.

> On 1 Feb 2019, at 10:14, Guido Medina  wrote:
> 
> Hi all,
> 
> Nice work on the upcoming 2.9.0 release, I have a quick question:
> Will it be possible to switch from the eleveldb to the new leveled backend 
> and Tictac AAE for an existing cluster?
> 
> In case it is not possible we are thinking to use the new replication and 
> move to a brand new cluster.
> 
> Kind regards,
> Guido.
> 
> On 30/01/2019 15:23, Nicholas Adams wrote:
>> Dear All,
>> Following on from Martin’s email, the initial builds of the KV 2.9.0rc1 
>> packages are complete. Currently we are limited to Redhat/CentOS 6 and 7 as 
>> well as Ubuntu Bionic. We shall be adding more packages over the next few 
>> days. Please see https://files.tiot.jp/riak/kv/2.9/2.9.0rc1/ 
>>  to download.
>>  
>> Disclaimer: This is a release candidate. Please do not use in production as 
>> we expect bugs and abnormal functionality may occur in certain conditions. 
>> However, please test this lots and place any issues on github so that they 
>> can be fixed before the final release.
>>  
>> Best regards,
>>  
>> Nicholas
>>  
>> From: riak-users  
>>  On Behalf Of Martin Sumner
>> Sent: 30 January 2019 21:07
>> To: riak-users@lists.basho.com 
>> Subject: [ANN] Riak 2.9.0 - Release Candidate 1 Available
>>  
>> All,
>>  
>> There is now a publicly available release candidate for Riak 2.9.0 for users 
>> to test in their own environment.  The release candidate is available to 
>> build from source here - https://github.com/basho/riak/tree/riak-2.9.0rc1 
>> 
>>  
>> There is only one significant change to Riak 2.2.6 for users maintaining 
>> existing configuration settings.  The release adds the `vnode_soft_limits` 
>> feature that aims to reduce high percentile PUT latency, by checking the 
>> outstanding work queue for a vnode before selecting it as a coordinator of a 
>> PUT.
>>  
>> With additional configuration, there are two major features added in riak 
>> 2.9.0:
>>  
>> - Support for the leveled backend (as an alternative to bitcask or 
>> eleveldb), to provide for improved throughput in some use cases and lower 
>> tail latency in some use cases 
>> -https://github.com/martinsumner/riak_testing_notes/blob/master/Release%202.9%20-%20Choosing%20a%20Backend.md
>>  
>> ;
>> - Support for a new anti-entropy mechanism (Tictac AAE) as an alternative 
>> for the existing anti-entropy mechanism (to provide both intra-cluster and 
>> inter-cluster data repair, with greater flexibility and reliability).
>>  
>> The release also adds support for:
>> - The riak_core node_worker_pool - which provides a mechanism for queueing 
>> background jobs and queries to control the resource consumed on the node by 
>> different queries.  No pre-existing features will use the node-worker_pool.
>> - AAE folds which allow for both cluster-wide AAE queries (e.g. produce a 
>> merkle tree representing all or a partial range of the cluster data), and 
>> administrative queries (e.g. discovering object sizes and sibling counts 
>> within the database depending on bucket name, key range and last modified 
>> date).  AAE folds depend on the Tictac AAE feature being activated.
>> - An interface to request re-replication of an object (where variances 
>> between clusters have been discovered).
>>  
>> Further details of the release, and the release plan in general can be found 
>> here - 
>> https://github.com/basho/riak/blob/develop-2.9/doc/Release%202.9%20Series%20-%20Overview.md
>>  
>> .
>>  
>> It is hoped that there will be a short period of about 1 month before the 
>> release candidate will be converted into a formal release.  The period will 
>> allow for more testing by Riak users, and also there will be further 
>> enhanced testing of the new modules with the help of Quviq 
>> (http://www.quviq.com/ ).  
>>  
>> There will follow additional releases under the 2.9 banner, targeted at 
>> enhancing both new and existing inter-cluster replication features.  In 
>> parallel to this, work will continue on Release 3.0 which is intended to 
>> modernise the OTP/rebar platform used by Riak.
>>  
>> Thanks to all those who contributed to the release.  Apologies to those who 
>> have been kept waiting over the past few months as finalising the changes 
>> and completing the testing has dragged on.
>>  
>> Regards
>>  
>> Martin
>>  
>>  
>>  
>> 
>> 

Re: Failed to start riak_kv_multi_backend

2018-07-12 Thread Bryan Hunt
Amol,

Although a repair tool exists for eleveldb, the simplest solution is to delete 
the corrupted partition and run a repair job. 

1. Stop the node

2. Locate the directory you have configured for eleveldb data storage and 
delete the subdirectory 479555224749202520035584085735030365824602865664

3. Start the node

4. Follow these instructions : 
https://docs.basho.com/riak/kv/2.0.0/using/repair-recovery/repairs/#repairing-partitions
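(Step 4 boils down to triggering a partition repair from the Erlang console. Per the linked docs, from `riak attach` on the affected node it looks roughly like this - treat it as a sketch and confirm against the docs for your Riak version:)

```erlang
%% Run from `riak attach` on the node that owned the deleted partition.
%% The partition id is the directory name deleted in step 2.
riak_kv_vnode:repair(479555224749202520035584085735030365824602865664).
```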

PS - It’s better to run a cluster with 5 nodes. 

Best Regards,

Bryan Hunt



> On 12 Jul 2018, at 14:32, Amol Zambare(HO)  
> wrote:
> 
> Hello all,
> 
> We have 4 node cluster, one node is unable to start because of below error
> 
> Failed to start riak_kv_multi_backend backend for index 
> 479555224749202520035584085735030365824602865664 error: 
> [{riak_kv_eleveldb_backend,{db_open,"Corruption: truncated record at end of 
> file"}}]
> 
> 
> We have tried to start it after deleting the anti-entropy data but are still 
> getting the same error; kindly guide us - is there any way to repair this node?
> 
> Thanks. 
> 
> 
> *
> 
> PRIVILEGED AND CONFIDENTIAL COMMUNICATION 
> 
> This e-mail transmission, and any documents, files or previous e-mail 
> messages attached to it, may contain confidential information that is legally 
> privileged. If you are not the intended recipient or a person responsible for 
> delivering it to the intended recipient, you are hereby notified that any 
> disclosure, copying, distribution or use of any of the information contained 
> in or attached to this transmission is STRICTLY PROHIBITED. If you have 
> received this transmission in error, please: 
> (1) immediately notify me by reply e-mail, or by telephone call; and 
> (2) destroy the original transmission and its attachments without reading or 
> saving in any manner.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak One partition handoff stall

2018-05-28 Thread Bryan Hunt
Are you constantly executing a particular riak command in your system 
monitoring scripts - for example, `riak-admin vnode-status`? 

What size is your data per server? 

How many objects are you storing? 


> On 28 May 2018, at 08:29, Gaurav Sood  
> wrote:
> 
> Hi All - Good Day!
> 
> I have a 7-node Riak KV cluster. Recently I upgraded this cluster from 
> 1.4.2 to 1.4.12 on Ubuntu 16.04. After upgrading the cluster, whenever I 
> make a node leave the cluster, one partition handoff stalls every time and 
> Active Transfers shows "waiting to handoff 1 partitions"; to complete the 
> process I need to restart the riak service on all nodes one by one. 
> 
> I am not sure if it's a configuration problem. Here is the current state of 
> the cluster.
> 
> #output of riak-admin member-status
> = Membership 
> ==
> Status RingPendingNode
> ---
> leaving 0.0%  --  'riak@192.168.2.10 '
> valid  14.1%  --  'riak@192.168.2.11 '
> valid  14.1%  --  'riak@192.168.2.12 '
> valid  15.6%  --  'riak@192.168.2.13 '
> valid  14.1%  --  'riak@192.168.2.14 '
> valid  14.1%  --  'riak@192.168.2.15 '
> valid  14.1%  --  'riak@192.168.2.16 '
> valid  14.1%  --  'riak@192.168.2.17 '
> ---
> Valid:7 / Leaving:1 / Exiting:0 / Joining:0 / Down:0
> 
> #output of riak-admin transfers
> 
> 'riak@192.168.2.10 ' waiting to handoff 1 partitions
> 
> Active Transfers:
> 
> (nothing here)
> 
> 
> #Output of riak-admin ring_status
> == Claimant 
> ===
> Claimant:  'riak@192.168.2.10 '
> Status: up
> Ring Ready: true
> 
> == Ownership Handoff 
> ==
> No pending changes.
> 
> == Unreachable Nodes 
> ==
> All nodes are up and reachable
> 
> current Transfer Limit is 2.
> 
> Thanks
> Gaurav
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RIAK repl enhancements

2018-05-23 Thread Bryan Hunt
Andrew, 

Your proposed changes sound like very well considered and welcome improvements. 

What is going to be the strategy for merging? 

Do you intend to publish to feature branches and make pull requests main 
development branches with community review?

Are the changes going to be targeted at the existing 2.* line or 3.0? 

Progress towards 3.* (switch to modern Erlang/rebar3/leveled/new AAE) has been 
slow, it may be expeditious to target the 2.* line. 

Bryan

> On 23 May 2018, at 13:17,   
> wrote:
> 
> Hi,
>  
> A quick note to highlight bet365 have published a summary of the replication 
> work currently in final stages of dev.
>  
> http://bet365techblog.com/riak-update 
>  
> These enhancements will be pushed to the canonical repo in the coming weeks, 
> at which point we’ll also publish a more in-depth explanation of the changes.
>  
> Thanks,
> Andy.
>  
> Andrew Deane
> Systems Development Manager - Core Systems
> Hillside (Technology) Limited
> andrew.de...@bet365.com 
> bet365.com 
>  
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: N = 3 and RW = 2 not finding some keys

2018-05-18 Thread Bryan Hunt
Russell, good question. I'm guessing they are iterating and requesting a 
different object for each request? 

Guido, given the behaviour you initially described - before applying the 
configuration I suggested - did you receive a successful response upon 
subsequent requests for the same object?

> On 18 May 2018, at 13:13, Russell Brown  wrote:
> 
> But why isn’t read repair “working”?
> 
>> On 18 May 2018, at 11:07, Bryan Hunt  wrote:
>> 
>> Of course, AAE will eventually repair the missing object replicas but it 
>> seems like you need something more immediate. 
>> 
>>> On 18 May 2018, at 11:00, Bryan Hunt  
>>> wrote:
>>> 
>>> Hi Guido, 
>>> 
>>> You should attempt to change the bucket property ‘notfound_ok’ from the 
>>> default of 'true' to 'false'.
>>> 
>>> I.e 
>>> 
>>> curl -XPUT 127.0.0.1:10018/buckets/foo/props -H "Content-Type: 
>>> application/json" -d '{"props":{"notfound_ok": false}}'
>>> 
>>> This makes GET operations for non-existent keys slower as it forces an 
>>> internal GET for each of the three copies.
>>> 
>>> https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
>>> 
>>> From what you describe, it sounds like only a single copy (out of the 
>>> original three) somehow remains present in your cluster.
>>> 
>>> Best Regards,
>>> 
>>> Bryan Hunt
>>> 
>>>> On 17 May 2018, at 15:42, Guido Medina  wrote:
>>>> 
>>>> Hi all,
>>>> 
>>>> After some big rebalance of our cluster some keys are not found anymore 
>>>> unless we set R = 3, we had N = 3 and R = W = 2
>>>> 
>>>> Is there any sort of repair that would correct such situation for Riak 
>>>> 2.2.3, this is really driving us nuts.
>>>> 
>>>> Any help will be truly appreciated.
>>>> 
>>>> Kind regards,
>>>> Guido.
>>>> ___
>>>> riak-users mailing list
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: N = 3 and RW = 2 not finding some keys

2018-05-18 Thread Bryan Hunt
Of course, AAE will eventually repair the missing object replicas but it seems 
like you need something more immediate. 

> On 18 May 2018, at 11:00, Bryan Hunt  wrote:
> 
> Hi Guido, 
> 
> You should attempt to change the bucket property ‘notfound_ok’ from the 
> default of ‘true' to ‘false'.
> 
> I.e 
> 
> curl -XPUT 127.0.0.1:10018/buckets/foo/props -H "Content-Type: 
> application/json" -d '{"props":{"notfound_ok": false}}'
> 
> This makes GET operations for non-existent keys slower as it forces an 
> internal GET for each of the three copies.
> 
> https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
>  
> <https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok>
> 
> From what you describe, it sounds as though only a single copy (out of the 
> original three) somehow remains present in your cluster.
> 
> Best Regards,
> 
> Bryan Hunt
> 
>> On 17 May 2018, at 15:42, Guido Medina > <mailto:gmed...@temetra.com>> wrote:
>> 
>> Hi all,
>> 
>> After some big rebalance of our cluster some keys are not found anymore 
>> unless we set R = 3, we had N = 3 and R = W = 2
>> 
>> Is there any sort of repair that would correct such situation for Riak 
>> 2.2.3, this is really driving us nuts.
>> 
>> Any help will be truly appreciated.
>> 
>> Kind regards,
>> Guido.
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: N = 3 and RW = 2 not finding some keys

2018-05-18 Thread Bryan Hunt
Hi Guido, 

You should attempt to change the bucket property ‘notfound_ok’ from the default 
of ‘true’ to ‘false’.

i.e.:

curl -XPUT 127.0.0.1:10018/buckets/foo/props -H "Content-Type: 
application/json" -d '{"props":{"notfound_ok": false}}'

This makes GET operations for non-existent keys slower as it forces an internal 
GET for each of the three copies.

https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
 
<https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok>

From what you describe, it sounds as though only a single copy (out of the 
original three) somehow remains present in your cluster.
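To see why R=2 can miss a key that still exists on one replica, here is a back-of-the-envelope model of the quorum arithmetic. This is my own simplification (real Riak coordinates reads per preference list), but it captures why `notfound_ok=true` lets two empty replicas out-vote one surviving copy:

```python
# Quorum arithmetic sketch (hypothetical values, not Riak's implementation).
# With N=3 and R=2, a read completes as soon as 2 vnodes answer; with
# notfound_ok=true (the default), "not found" counts as a valid answer.

def read_result(replicas_with_key, n=3, r=2, notfound_ok=True):
    """Return True if a quorum read is guaranteed to find the key.

    replicas_with_key: how many of the n replicas still hold the object.
    """
    empty = n - replicas_with_key
    if notfound_ok:
        # Worst case: the r fastest responders are all empty replicas.
        return empty < r
    # notfound_ok=false forces an internal GET on every replica, so any
    # surviving copy is found (and read repair can restore the others).
    return replicas_with_key > 0

assert read_result(1, r=3) is True                     # R=3 consults all copies
assert read_result(1, r=2, notfound_ok=True) is False  # may miss the key
assert read_result(1, r=2, notfound_ok=False) is True  # finds the lone copy
```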

Best Regards,

Bryan Hunt

> On 17 May 2018, at 15:42, Guido Medina  wrote:
> 
> Hi all,
> 
> After some big rebalance of our cluster some keys are not found anymore 
> unless we set R = 3, we had N = 3 and R = W = 2
> 
> Is there any sort of repair that would correct such situation for Riak 2.2.3, 
> this is really driving us nuts.
> 
> Any help will be truly appreciated.
> 
> Kind regards,
> Guido.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak CS Backup Solutions

2018-04-25 Thread Bryan Hunt
Anthony, 

Your snapshot solution is not bad if you just want a daily version you can roll 
back to for disaster recovery.

I presume: 

1. You’re running on VMs.
2. You stop the world at the time of the snapshot.

Riak CS makes multiple (3 by default) copies of your data and, generally 
speaking, stores them on multiple physical servers, with the following caveats: 

1. Verify that the partitions are evenly spread across the machines; sometimes 
old Riak puts two copies of the data on a single physical machine.
2. Always run a cluster of 5 or more machines.
3. Take care that you’re not storing all your data on the same physical device 
(a SAN server in the basement, anyone?).
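The first caveat can be illustrated with a toy ring simulation. This is not riak_core's actual claim algorithm (which uses target_n_val and is considerably smarter than round-robin); it only shows how, on a small cluster, the ring wrap can put two replicas of the same partition on one physical node:

```python
# Toy ring model (not riak_core's real claim algorithm): assign 64 partitions
# round-robin to k nodes, then check whether every preference list of 3
# consecutive partitions lands on 3 distinct physical nodes.

def preflists_ok(num_nodes, ring_size=64, n_val=3):
    owners = [i % num_nodes for i in range(ring_size)]
    for i in range(ring_size):
        preflist = [owners[(i + j) % ring_size] for j in range(n_val)]
        if len(set(preflist)) < n_val:
            return False  # two replicas share a physical node
    return True

# With 3 nodes the ring wraps badly (64 % 3 != 0), so some preference lists
# put two copies on one machine; with 5 nodes this toy example is clean.
assert preflists_ok(3) is False
assert preflists_ok(5) is True
```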

I suppose the reason you want backups is to mitigate the situation where an 
adversary gets inside your servers and begins silently deleting or tampering 
with data, in which case, snapshots are prudent. 

If you wanted to do something more sophisticated, you could try : 

1. Rebuilding your cluster using the soon-to-be-released Riak 2.2.5.
2. Replicating off-site to a physically remote location.
3. Performing snapshots there more regularly (as they won’t impact latencies on 
your main Riak CS cluster).

Does that make sense? Any questions? 

Bryan 




> On 25 Apr 2018, at 14:30, Anthony Valenti  wrote:
> 
> 
> We are looking to improve on our Riak CS backup strategy.  We have had Riak 
> CS in place for a while and we are currently taking a snapshot of each 
> server.  I'm not sure if this is the best method in terms of recoverability, 
> space used, timeliness of the backups, etc.  What methods has anyone else 
> used?  What is the basho recommended solution?
> 
> Thanks,
> Anthony Valenti
> 
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


riak-2.2.5rc1 release candidate packages

2018-04-03 Thread Bryan Hunt
Hola!  

I’ve built packages for the following distributions: 

- Ubuntu xenial 
- Ubuntu artful 
- Ubuntu trusty
- RHEL 7 / Centos 7
- RHEL 6 / Centos 6

The packages can be downloaded from the following package cloud repository: 

https://packagecloud.io/erlang-solutions/riak_release_candidates 


The instructions for using the package cloud repository are on the linked page.

Notes: 

- I haven’t built packages for Ubuntu zesty as that product has recently been 
retired.

- I haven’t built for Ubuntu bionic as we don’t yet have a build of basho 
r16b02_basho10 for this (new) distribution.

- I’ll take a look at Alpine Linux packaging later this week.

If you experience any problems, please get in contact over Slack and I’ll 
investigate. 

Bryan
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak-2.2.5 progress update

2018-03-29 Thread Bryan Hunt
From our side, it’s bank holiday weekend now, so we shall start building 
packages on Monday/Tuesday and share them out via package cloud. 
Will keep you updated. 
B

> On 29 Mar 2018, at 16:15, Russell Brown  wrote:
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak-2.2.5 progress update

2018-03-27 Thread Bryan Hunt
Could you share the URL for those outstanding failures, please? B

> On 27 Mar 2018, at 16:37, Russell Brown  wrote:
> 
> Hi Again,
> 
> More progress update. All the PRs are merged (thanks Bryan Hunt (ESL)
> and Nick/Martin (bet365)).
> 
> I’m planning on tagging Riak with riak-2.2.5RC1 this week.
> 
> We haven’t been able to run load tests. I’m not 100% sure that all
> basho’s releases had extensive load tests run, though I know that
> later releases had a perf team, and MvM always load tested leveldb.
> 
> The aim is to have those willing parties that deploy riak, and
> therefore have perf testing already set up, to test and report
> back. The NHS will run load tests and report results back. I hope that others 
> can do the same.
> To that end we’ll probably tag RC1-noperftest.
> 
> There are a few failing riak tests (was Riak ever released without a
> failing riak-test?) If you have the time/capacity to run riak-test,
> and you’re interested in helping out, get in touch and I’ll help you
> get started.
> 
> The failures, should one pique your interest:
> 
> datatypes - riak667_mixed-eleveldb
> ensemble - ensemble_basic3-eleveldb ensemble_basic4-eleveldb
> yoko - yz_crdt-eleveldb yz_solr_upgrade_downgrade-eleveldb
> 
> Let me know if you want to look into ensemble or yoko.
> 
> Still aiming to have this tagged by end-of-week.
> 
> Cheers
> 
> Russell
> 
> On 2 Mar 2018, at 09:44, Russell Brown  wrote:
> 
>> Hi,
>> 
>> Just an update on the progress of riak-2.2.5 release. I realize we said
>> "end of 2017" and then "end of Jan" and then "end of Feb" and here we
>> are, 1st March, spring is upon us, and still no 2.2.5. I thought it best
>> to at least keep you posted.
>> 
>> Why no release? Well, we're not quite finished yet. In terms of what is
>> left:
>> 
>> - a few PRs need review and merge against the upstream Basho repo (see
>> [1] if you want to help there);
>> - Opening of PRs for gsets;
>> - Docs for the changes;
>> - Release notes;
>> - Tagging;
>> - A final round of testing (and fixing?) after all is merged;
>> - Client support. This is crucial, but all the above work only has
>> client support in the Basho erlang clients. I'm hoping the community
>> that uses the Java/Python/Ruby etc clients can step up here. But we can 
>> release Riak before all client work is done.
>> 
>> The optimist in me says 2 weeks.
>> 
>> What do I mean "release"?
>> 
>> From my point of view the release is the tags. After that I'm sincerely
>> hoping ESL will continue to kindly build and host the actual artifacts.
>> 
>> What's in the release?
>> 
>> As a reminder, since it's been a while since StokeCon17, this release
>> contains:
>> 
>> - Some developer clean-up around `make test` and `riak_test` to make
>> them more reliable/trustworthy;
>> - Open source MDC Repl (thanks bet365!);
>> - A fix for a bug in riak core claim that led to unbalanced rings[2];
>> - `node_confirms` a feature like `w` or `pw` but for physical diversity in 
>> durability[3];
>> - `participate in coverage` an admin setting that takes a node out of
>> the coverage plan (for example after adding a node while transfers
>> take place);
>> - Riak repl fix, and change to unsafe default behaviour[4];
>> - Addition of a GSet to riak data types;
>> - Fix to repl stats[5].
>> 
>> Sorry if I missed anything. The release notes will have it all.
>> 
>> Work is already begun on Riak 3.0 with OTP20 support well under way.
>> Some candidates for inclusion that we're working on are a new
>> pure-Erlang backend[6], and a radical overhaul of AAE for both intra and 
>> inter-cluster anti-entropy.
>> 
>> Sorry for the delay, thanks for your patience. We’ll keep you posted.
>> 
>> Cheers
>> 
>> Russell
>> 
>> Titus Systems - ti-sys.co.uk
>> 
>> [1] Coverage:
>> https://github.com/basho/riak_core/pull/917
>> https://github.com/basho/riak_kv/pull/1664
>> https://github.com/basho/riak_test/pull/1300
>> 
>> Repl:
>> https://github.com/basho/riak_repl/pull/777
>> https://github.com/basho/riak_test/pull/1301
>> 
>> Node Confirms:
>> https://github.com/basho/riak_test/pull/1299
>> https://github.com/basho/riak_test/pull/1298
>> https://github.com/basho/riak-erlang-client/pull/371
>> https://github.com/basho/riak_core/pull/915
>> https://github.com/basho/riak-erlang-http-client/pull/69
>> 
>> [2]
>> https

London Riak meetup - April 2018

2018-03-12 Thread Bryan Hunt
Erlang Solutions is organising the next London Riak Meetup in April. We’re 
looking for speakers; any volunteers?

Bryan 




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Meetup at ESL offices riak_core vs partisan

2018-01-16 Thread Bryan Hunt
It will be. The talks are posted on the Erlang Solutions channel 
<https://www.youtube.com/channel/UCKrD_GYN3iDpG_uMmADPzJQ> on YouTube; I’ll 
post the link afterward. 

Bryan 

> On 16 Jan 2018, at 15:55, Nick Marino  wrote:
> 
> That sounds like a great talk, any chance of it being videoed and posted 
> online?
> 
> Nick
> 
> On Tue, Jan 16, 2018 at 9:37 AM, Bryan Hunt  <mailto:bryan.h...@erlang-solutions.com>> wrote:
> Meetup event at ESL London offices tomorrow (Wednesday) evening 
> 
> https://www.meetup.com/riak-london/events/246719322/ 
> <https://www.meetup.com/riak-london/events/246719322/>
> 
> Mariano Guerra is in London for his talk on building distributed 
> applications: riak_core vs partisan. 
> 
> Mariano will explore the similarities and differences between riak_core and 
> partisan.
> 
> In addition, Mariano will be taking us through the process of building a 
> basic app using both plumtree for gossip and hyparview for peer membership.
> 
> Enjoy ! 
> 
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Meetup at ESL offices riak_core vs partisan

2018-01-16 Thread Bryan Hunt
Meetup event at ESL London offices tomorrow (Wednesday) evening 

https://www.meetup.com/riak-london/events/246719322/ 


Mariano Guerra is in London for his talk on building distributed applications: 
riak_core vs partisan. 

Mariano will explore the similarities and differences between riak_core and 
partisan.

In addition, Mariano will be taking us through the process of building a basic 
app using both plumtree for gossip and hyparview for peer membership.

Enjoy ! 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: K8n Riak statefulset ?

2017-12-15 Thread Bryan Hunt
Excellent. Thank you so much Deepthi. 

> On 15 Dec 2017, at 12:50, Deepthi Devaki  wrote:
> 
> We have done a test deployment of Antidote on AWS with kubernetes. Here is 
> the setup https://github.com/deepthidevaki/antidote-aws 
> <https://github.com/deepthidevaki/antidote-aws>. 
> 
> On Fri, Dec 15, 2017 at 12:26 PM, Bryan Hunt  <mailto:bryan.h...@erlang-solutions.com>> wrote:
> Thanks Chris. Much appreciated ! 
> 
> 
>> On 14 Dec 2017, at 21:57, Christopher Meiklejohn 
>> mailto:christopher.meiklej...@gmail.com>> 
>> wrote:
>> 
>> I believe the Antidote folks (Riak Core application) have.  I can forward 
>> this message on to them.
>> 
>> On Thu, Dec 14, 2017, 22:38 Bryan Hunt > <mailto:ad...@binarytemple.co.uk>> wrote:
>> Anyone done any work with Riak and k8n statefulset ? I see people already 
>> doing stuff with rabbitmq HA 
>> https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ 
>> <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
>> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
>> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
> 
> 
> 
> 
> -- 
> Regards,
> Deepthi Akkoorath
> http://dd.thekkedam.org/ <http://dd.thekkedam.org/>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: K8n Riak statefulset ?

2017-12-15 Thread Bryan Hunt
Thanks Chris. Much appreciated ! 

> On 14 Dec 2017, at 21:57, Christopher Meiklejohn 
>  wrote:
> 
> I believe the Antidote folks (Riak Core application) have.  I can forward 
> this message on to them.
> 
> On Thu, Dec 14, 2017, 22:38 Bryan Hunt  <mailto:ad...@binarytemple.co.uk>> wrote:
> Anyone done any work with Riak and k8n statefulset ? I see people already 
> doing stuff with rabbitmq HA 
> https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ 
> <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


K8n Riak statefulset ?

2017-12-14 Thread Bryan Hunt
Anyone done any work with Riak and k8n statefulset ? I see people already
doing stuff with rabbitmq HA
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak_repl integration

2017-12-05 Thread Bryan Hunt
Raghu,

We’re midway through building packages. In the meantime, if you’re feeling 
impatient, you can try the following: 

https://gist.github.com/bryanhuntesl/cc0eefa6939b757ca7b86ebb1d7b29b2 
<https://gist.github.com/bryanhuntesl/cc0eefa6939b757ca7b86ebb1d7b29b2>

The above recipe requires a reasonably recent Docker daemon and CLI tools 
installed on the host. 

Best Regards,

Bryan Hunt



> On 5 Dec 2017, at 19:17, Raghavendra Sayana 
>  wrote:
> 
> Hey Bryan,
> 
> We are running riak 2.1.4 on RHEL 6
> 
> Thanks
> Raghu
> 
> -----Original Message-
> From: Bryan Hunt [mailto:bryan.h...@erlang-solutions.com] 
> Sent: Tuesday, December 05, 2017 12:59 PM
> To: Raghavendra Sayana 
> Cc: Dan Sweeney ; riak-users 
> 
> Subject: Re: riak_repl integration
> 
> Raghu, 
> 
> What distro are you running? 
> 
> Bryan 
> 
>> On 5 Dec 2017, at 17:10, Russell Brown  wrote:
>> 
>> Hi Raghu,
>> At present Riak is still stuck on the r16 basho OTP. This will change next 
>> year. For now, and the next release of riak, r16 is the compiler you need.
>> 
>> If you want to use riak_repl with open source riak, there are a couple of 
>> options. There is the 2.2.4 tag in the basho riak repo, which bet365 kindly 
>> put together when riak_repl was open sourced. Cloning 
>> https://github.com/basho/riak and checking out tag 2.2.4, then running `make 
>> rel` will get you a local release of OS riak as it was at 2.2.3 + riak_repl. 
>> Someone did mention maybe building packages of 2.2.4, if you’d rather 
>> download than build your own, but I don’t know if that happened yet.
>> 
>> In order to _use_ MDC you can follow the docs on basho’s site 
>> http://docs.basho.com/riak/kv/2.2.3/configuring/v3-multi-datacenter/.
>> 
>> There is a branch develop-2.2.5 which is the active work for the next 
>> release of riak, which will contain riak_repl.  We plan to release 
>> riak-2.2.5 (OS riak+repl, and some small features+bug fixes) in early 2018.
>> 
>> Hope that helps
>> 
>> Cheers
>> 
>> Russell
>> 
>> 
>> On 5 Dec 2017, at 16:11, Raghavendra Sayana 
>>  wrote:
>> 
>>> Hi All,
>>> 
>>> I want to use riak MDC replication using riak_repl module. Is there any 
>>> documentation on how I can perform this integration? Do you know what 
>>> erlang compiler version I should be using to compile the code? Any help on 
>>> this is appreciated.
>>> 
>>> Thanks
>>> Raghu
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak_repl integration

2017-12-05 Thread Bryan Hunt
Raghu, 

What distro are you running? 

Bryan 

> On 5 Dec 2017, at 17:10, Russell Brown  wrote:
> 
> Hi Raghu,
> At present Riak is still stuck on the r16 basho OTP. This will change next 
> year. For now, and the next release of riak, r16 is the compiler you need.
> 
> If you want to use riak_repl with open source riak, there are a couple of 
> options. There is the 2.2.4 tag in the basho riak repo, which bet365 kindly 
> put together when riak_repl was open sourced. Cloning 
> https://github.com/basho/riak and checking out tag 2.2.4, then running `make 
> rel` will get you a local release of OS riak as it was at 2.2.3 + riak_repl. 
> Someone did mention maybe building packages of 2.2.4, if you’d rather 
> download than build your own, but I don’t know if that happened yet.
> 
> In order to _use_ MDC you can follow the docs on basho’s site 
> http://docs.basho.com/riak/kv/2.2.3/configuring/v3-multi-datacenter/.
> 
> There is a branch develop-2.2.5 which is the active work for the next release 
> of riak, which will contain riak_repl.  We plan to release riak-2.2.5 (OS 
> riak+repl, and some small features+bug fixes) in early 2018.
> 
> Hope that helps
> 
> Cheers
> 
> Russell
> 
> 
> On 5 Dec 2017, at 16:11, Raghavendra Sayana 
>  wrote:
> 
>> Hi All,
>> 
>> I want to use riak MDC replication using riak_repl module. Is there any 
>> documentation on how I can perform this integration? Do you know what erlang 
>> compiler version I should be using to compile the code? Any help on this is 
>> appreciated.
>> 
>> Thanks
>> Raghu
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Slack (was: Re: Deprecation of Riak SNMP and Riak JMX?)

2017-11-13 Thread Bryan Hunt
To not include them in the open-source release. 

> On 13 Nov 2017, at 16:00, martin@bet365.com wrote:
> 
> I'm in support of deprecating. Is the intention to include them in the 
> open-source release, only to remove them in the following release? May as 
> well not include them to start with, I'd have thought?
> 
> Martin Cox
> Software Developer


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Deprecation of Riak SNMP and Riak JMX?

2017-11-13 Thread Bryan Hunt
Already discussed on Slack but just to be public:

I’m in favour of keeping the repositories around but not including them as Riak 
dependencies or supporting them going forward.

Any others ?

> On 13 Nov 2017, at 10:48, Russell Brown  wrote:
> 
> Hi all,
> It looks like we’re moving toward shipping the open source repl code with the 
> next release of Riak.
> 
> I’m canvassing for opinions about the riak_snmp and riak_jmx portions of the 
> enterprise code. Is there anyone out there that depends on these features? 
> I’d like to deprecate them in the next release, and remove them in the 
> release following that.
> 
> Cheers
> 
> Russell
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OTP migration

2017-11-13 Thread Bryan Hunt
Done. 

> On 13 Nov 2017, at 12:10, Jean Parpaillon  wrote:
> 
> Hi Bryan,
> The wiki page and tickets have been created on purpose, as I've guessed many 
> work has already been done here and there and the hardest part is now to 
> integrate it :)
> 
> -> can you add a line in the wiki page: 
> https://github.com/basho/riak/wiki/OTP-migration-task-follow-up 
> <https://github.com/basho/riak/wiki/OTP-migration-task-follow-up> ?
> 
> Jean
> 
> Le lundi 13 novembre 2017 à 12:05 +, Bryan Hunt a écrit :
>> To get the ball rolling, I’ve got a very low risk p/r for riak-erlang-client 
>> to enable Erlang 20 support. 
>> 
>> Would be great to get it merged in. 
>> 
>> https://github.com/basho/riak-erlang-client/pull/367 
>> <https://github.com/basho/riak-erlang-client/pull/367>
>> 
>> 
>>> On 13 Nov 2017, at 11:17, Jean Parpaillon >> <mailto:jean.parpail...@free.fr>> wrote:
>>> 
>>> Hi all,
>>> As announced during the meetup, at KBRW we want to help on the OTP 
>>> migration task.
>>> IIRC, the migration is targeted for post 2.2.5 release, as to not postpone 
>>> this release.
>>> As suggested by Russell, we're starting from nhs-riak-2.2.5 branch.
>>> 
>>> I've opened a ticket for following-up our work on it:
>>> https://github.com/basho/riak/issues/929 
>>> <https://github.com/basho/riak/issues/929>
>>> 
>>> I've also created a wiki page for more detailed infos:
>>> https://github.com/basho/riak/wiki/OTP-migration-task-follow-up 
>>> <https://github.com/basho/riak/wiki/OTP-migration-task-follow-up>
>>> 
>>> Don't hesitate to put any relevant information there : existing pr, 
>>> branches, etc
>>> 
>>> Regards,
>>> -- 
>>> Jean Parpaillon
>>> --
>>> Senior Developper @ KBRW Adventure
>>> Chairman @ OW2 Consortium
>>> --
>>> Phone: +33 6 30 10 92 86
>>> im: jean.parpail...@gmail.com <mailto:jean.parpail...@gmail.com>
>>> skype: jean.parpaillon
>>> linkedin: http://www.linkedin.com/in/jeanparpaillon/en 
>>> <http://www.linkedin.com/in/jeanparpaillon/en>___
>>> riak-users mailing list
>>> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> -- 
> Jean Parpaillon
> --
> Senior Developper @ KBRW Adventure
> Chairman @ OW2 Consortium
> --
> Phone: +33 6 30 10 92 86
> im: jean.parpail...@gmail.com <mailto:jean.parpail...@gmail.com>
> skype: jean.parpaillon
> linkedin: http://www.linkedin.com/in/jeanparpaillon/en 
> <http://www.linkedin.com/in/jeanparpaillon/en>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OTP migration

2017-11-13 Thread Bryan Hunt
To get the ball rolling, I’ve got a very low risk p/r for riak-erlang-client to 
enable Erlang 20 support. 

Would be great to get it merged in. 

https://github.com/basho/riak-erlang-client/pull/367 



> On 13 Nov 2017, at 11:17, Jean Parpaillon  wrote:
> 
> Hi all,
> As announced during the meetup, at KBRW we want to help on the OTP migration 
> task.
> IIRC, the migration is targeted for post 2.2.5 release, as to not postpone 
> this release.
> As suggested by Russell, we're starting from nhs-riak-2.2.5 branch.
> 
> I've opened a ticket for following-up our work on it:
> https://github.com/basho/riak/issues/929 
> 
> 
> I've also created a wiki page for more detailed infos:
> https://github.com/basho/riak/wiki/OTP-migration-task-follow-up 
> 
> 
> Don't hesitate to put any relevant information there : existing pr, branches, 
> etc
> 
> Regards,
> -- 
> Jean Parpaillon
> --
> Senior Developper @ KBRW Adventure
> Chairman @ OW2 Consortium
> --
> Phone: +33 6 30 10 92 86
> im: jean.parpail...@gmail.com 
> skype: jean.parpaillon
> linkedin: http://www.linkedin.com/in/jeanparpaillon/en 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re:

2017-09-27 Thread Bryan Hunt
Hi Luke,

It’s for bet365 to make that decision.

For the good of the general population the following would be nice:

a) Upgrade to rebar3 
b) Upgrade Erlang version support so it compiles under Erlang 20 (if it doesn’t 
already)
c) Publish the package to hex.pm

Bryan


> On 27 Sep 2017, at 18:59, Luke Bakken  wrote:
> 
> Hello Riak users -
> 
> The next RabbitMQ release (3.7.0) will use cuttlefish for its
> configuration. I'm writing to express interest on behalf of the
> RabbitMQ team in taking over maintenance of the project. At the
> moment, we forked cuttlefish to the RabbitMQ organization [0] to fix a
> couple pressing issues. After that, it would be great if the
> repository could be transferred to either its own, new organization or
> to the RabbitMQ organization entirely. Basho transferred both
> Webmachine and Lager to their own independent organizations, for
> instance (github.com/webmachine, github.com/erlang-lager)
> 
> Once transferred, GitHub will take care of all the necessary
> redirections from the Basho organization to cuttlefish's new home.
> 
> Thanks,
> Luke Bakken
> 
> [0] - https://github.com/rabbitmq/cuttlefish
> 
> --
> Staff Software Engineer
> Pivotal / RabbitMQ
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Help with handling Riak disk failure

2017-09-19 Thread Bryan Hunt
Sorry Leo, 

That’s completely impossible to guess :-D 

Factors include I/O, network cards, network switch, SELinux, block size, CPU, 
size of objects, number of objects, CRDT usage, Riak version, etc. 

Best, 

Bryan 

> On 19 Sep 2017, at 18:53, Leo  wrote:
> 
> Dear Bryan,
> 
> Thank you very much for your answers. They are very helpful to me.
> I will use more nodes (>=5) in future.
> 
> From your experience with using Riak, what would your guess be for the
> time taken to finish all the AAE transfers and be done with the
> recovery for about 1 TB worth of data (assuming my cluster is
> otherwise completely idle without any user accessing the cluster
> during this process and that  I am continuously watching the transfers
> and re-enabling disabled AAE trees gradually )?  I am just asking for
> rough estimate from your past experience (please quote from your
> experience with a different-sized cluster / data size too). My guess
> is that it will take approx. 2 days or more. Do you concur?
> 
> Thanks,
> Leo
> 
> 
> On Tue, Sep 19, 2017 at 12:41 PM, Bryan Hunt
>  wrote:
>> (0) Three nodes are insufficient; you should have 5 nodes
>> (1) You could iterate and read every object in the cluster - this would also
>> trigger read repair for every object
>> (2) - copied from Engel Sanchez response to a similar question  April 10th
>> 2014 )
>> 
>> * If AAE is disabled, you don't have to stop the node to delete the data in
>> the anti_entropy directories
>> * If AAE is enabled, deleting the AAE data in a rolling manner may trigger
>> an avalanche of read repairs between nodes with the bad trees and nodes
>> with good trees as the data seems to diverge.
>> 
>> If your nodes are already up, with AAE enabled and with old incorrect trees
>> in the mix, there is a better way.  You can dynamically disable AAE with
>> some console commands. At that point, without stopping the nodes, you can
>> delete all AAE data across the cluster.  At a convenient time, re-enable
>> AAE.  I say convenient because all trees will start to rebuild, and that
>> can be problematic in an overloaded cluster.  Doing this over the weekend
>> might be a good idea unless your cluster can take the extra load.
>> 
>> To dynamically disable AAE from the Riak console, you can run this command:
>> 
>>> riak_core_util:rpc_every_member_ann(riak_kv_entropy_manager, disable, [],
>> 6).
>> 
>> and enable with the similar:
>> 
>>> riak_core_util:rpc_every_member_ann(riak_kv_entropy_manager, enable, [],
>> 6).
>> 
>> That last number is just a timeout for the RPC operation.  I hope this
>> saves you some extra load on your clusters.
>> 
>> (3) That’s going to be :
>> (3a) List all keys using the client of your choice
>> (3b) Fetch each object
>> 
>> https://www.tiot.jp/riak-docs/riak/kv/2.2.3/developing/usage/reading-objects/
>> 
>> https://www.tiot.jp/riak-docs/riak/kv/2.2.3/developing/usage/secondary-indexes/
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On 19 Sep 2017, at 18:31, Leo  wrote:
>> 
>> Dear Riak users and experts,
>> 
>> I really appreciate any help with my questions below.
>> 
>> I have a 3 node Riak cluster with each having approx. 1 TB disk usage.
>> All of a sudden, one node's hard disk failed unrecoverably. So, I
>> added a new node using the following steps:
>> 
>> 1) riak-admin cluster join 2) down the failed node 3) riak-admin
>> force-replace failed-node new-node 4) riak-admin cluster plan 5)
>> riak-admin cluster commit.
>> 
>> This almost fixed the problem except that after lots of data transfers
>> and handoffs, now not all three nodes have 1 TB disk usage. Only two
>> of them have 1 TB disk usage. The other one is almost empty (few 10s
>> of GBs). This means there are no longer 3 copies on disk anymore. My
>> data is completely random (no two keys have same data associated with
>> them. So, compression of data cannot be the reason for less data on
>> disk),
>> 
>> I also tried using the "riak-admin cluster replace failednode newnode"
>> command so that the leaving node handsoff data to the joining node.
>> This however is not helpful if the leaving node has a failed hard
>> disk. I want the remaining live vnodes to help the new node recreate
>> the lost data using their replica copies.
>> 
>> I have three questions:
>> 
>> 1) What commands should I run to forcefully make sure there are three
>> replicas on disk overall without waiting for read

Re: Help with handling Riak disk failure

2017-09-19 Thread Bryan Hunt
(0) Three nodes are insufficient; you should have 5 nodes
(1) You could iterate and read every object in the cluster - this would also 
trigger read repair for every object
(2) - copied from Engel Sanchez's response to a similar question (April 10th 2014):
* If AAE is disabled, you don't have to stop the node to delete the data in
the anti_entropy directories
* If AAE is enabled, deleting the AAE data in a rolling manner may trigger
an avalanche of read repairs between nodes with the bad trees and nodes
with good trees as the data seems to diverge.

If your nodes are already up, with AAE enabled and with old incorrect trees
in the mix, there is a better way.  You can dynamically disable AAE with
some console commands. At that point, without stopping the nodes, you can
delete all AAE data across the cluster.  At a convenient time, re-enable
AAE.  I say convenient because all trees will start to rebuild, and that
can be problematic in an overloaded cluster.  Doing this over the weekend
might be a good idea unless your cluster can take the extra load.

To dynamically disable AAE from the Riak console, you can run this command:

> riak_core_util:rpc_every_member_ann(riak_kv_entropy_manager, disable, [],
6).

and enable with the similar:

> riak_core_util:rpc_every_member_ann(riak_kv_entropy_manager, enable, [],
6).

That last number is just a timeout for the RPC operation.  I hope this
saves you some extra load on your clusters.
(3) That’s going to be:
(3a) List all keys using the client of your choice
(3b) Fetch each object

https://www.tiot.jp/riak-docs/riak/kv/2.2.3/developing/usage/reading-objects/ 


https://www.tiot.jp/riak-docs/riak/kv/2.2.3/developing/usage/secondary-indexes/ 
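Steps (3a)/(3b) can be sketched roughly like this in Python. `FakeBucket` is a hypothetical stand-in so the sketch runs without a cluster; with the real Python client you would pass a bucket handle whose `stream_keys`/`get` I assume have this shape — check your client's API:

```python
def read_repair_sweep(bucket):
    """Fetch every object in a bucket; each read gives Riak the chance to
    read-repair divergent replicas. Returns the number of objects fetched."""
    fetched = 0
    for batch in bucket.stream_keys():   # keys arrive in batches
        for key in batch:
            bucket.get(key)              # the fetch itself triggers read repair
            fetched += 1
    return fetched


# Hypothetical stand-in for a real bucket handle so this runs anywhere.
class FakeBucket:
    def __init__(self, data):
        self.data = data

    def stream_keys(self):
        yield list(self.data)

    def get(self, key):
        return self.data[key]


print(read_repair_sweep(FakeBucket({"a": 1, "b": 2, "c": 3})))  # 3
```

On a real cluster you would stream keys rather than list them all at once, and throttle the loop — a full sweep generates the same load concerns as the AAE rebuild discussed above.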


 





> On 19 Sep 2017, at 18:31, Leo  wrote:
> 
> Dear Riak users and experts,
> 
> I really appreciate any help with my questions below.
> 
> I have a 3 node Riak cluster with each having approx. 1 TB disk usage.
> All of a sudden, one node's hard disk failed unrecoverably. So, I
> added a new node using the following steps:
> 
> 1) riak-admin cluster join 2) down the failed node 3) riak-admin
> force-replace failed-node new-node 4) riak-admin cluster plan 5)
> riak-admin cluster commit.
> 
> This almost fixed the problem except that after lots of data transfers
> and handoffs, now not all three nodes have 1 TB disk usage. Only two
> of them have 1 TB disk usage. The other one is almost empty (few 10s
> of GBs). This means there are no longer 3 copies on disk anymore. My
> data is completely random (no two keys have same data associated with
> them. So, compression of data cannot be the reason for less data on
> disk),
> 
> I also tried using the "riak-admin cluster replace failednode newnode"
> command so that the leaving node handsoff data to the joining node.
> This however is not helpful if the leaving node has a failed hard
> disk. I want the remaining live vnodes to help the new node recreate
> the lost data using their replica copies.
> 
> I have three questions:
> 
> 1) What commands should I run to forcefully make sure there are three
> replicas on disk overall without waiting for read-repair or
> anti-entropy to make three copies ? Bandwidth usage or CPU usage is
> not a huge concern for me.
> 
> 2) Also, I will be very grateful if someone lists the commands that I
> can run using "riak attach" so that I can clear the AAE trees and
> forcefully make sure all data has 3 copies.
> 
> 3) I will be very thankful if someone helps me with the commands that
> I should run to ensure that all data has 3 replicas on disk after the
> disk failure (instead of just looking at the disk space usage in all
> the nodes as hints)?
> 
> Thanks,
> Leo
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] rabl, RabbitMQ based, open source, realtime replication for Riak

2017-09-06 Thread Bryan Hunt
Nice work Russell!

On Mon, 4 Sep 2017, 11:35 Russell Brown  wrote:

> Hi,
>
> Before I knew about bet365's acquisition of Basho's assets I started
> work for the NHS on an open source realtime replication application
> for Riak. It's called rabl, and uses RabbitMQ. I wrote an introductory
> blog post, which you can read here:
>
> https://github.com/nhs-riak/rabl/blob/master/docs/introducing.md
>
> This is pre-release software. Probably best described as alpha.
>
> Feel free to give it a poke and let me know what you think. If you run
> it (not in production!) and find problems please use the `issues` for
> the github repository. Time will tell whether we iterate on it or give
> it up for Open Source Basho/bet365 realtime.
>
> Cheers
>
> Russell
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Connection multiplexing in the Erlang client

2015-12-30 Thread Bryan Hunt
Paulo,

You can find a list of community-maintained PBC pooling libraries in the Erlang sub-section of this page: http://docs.basho.com/riak/latest/dev/using/libraries/#Community-Libraries

I was under the impression that the Riak Erlang client ships with poolboy, so I'm uncertain of the distinction between the different libraries listed. Perhaps someone could comment to clarify?

Bryan

  Original Message  
From: Paulo Almeida
Sent: Wednesday, December 30, 2015 10:40 PM
To: riak-users@lists.basho.com
Subject: Connection multiplexing in the Erlang client

Hi,

Does the Erlang Riak client support multiplexing multiple concurrent calls in a single TCP connection? Specifically when using the PB interface (riakc_pb_socket:start_link).

Thanks.

Regards,
Paulo
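For what it's worth, the checkout/checkin model that poolboy-style pools implement (as opposed to multiplexing many calls over one socket) can be sketched in a few lines of Python. This illustrates the pooling model only — it is not poolboy's or any Riak client's actual API:

```python
import queue

class ConnectionPool:
    """Checkout/checkin pool: each connection serves one request at a time,
    which matches how the Riak PB clients are typically used (pooling,
    rather than multiplexing concurrent calls over a single TCP socket)."""

    def __init__(self, make_conn, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(make_conn())

    def with_conn(self, fn):
        conn = self._q.get()      # blocks until a connection is free
        try:
            return fn(conn)
        finally:
            self._q.put(conn)     # check the connection back in

pool = ConnectionPool(lambda: object(), size=2)
result = pool.with_conn(lambda conn: "pong")
print(result)
```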


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Forming inputs to MR job

2015-12-30 Thread Bryan Hunt
Take a look at this: https://gist.github.com/binarytemple/6bf84b041db0fabbdc74

Specifically: the "Sample mapreduce job" section.

  Original Message  
From: Timur Fayruzov
Sent: Wednesday, December 30, 2015 8:01 PM
To: riak-users
Subject: Forming inputs to MR job

Hello,

I'm trying to write a simple MR job using JavaScript and hit a wall right at start. I can't figure out how to specify "inputs". Here's the code:

curl -XPOST "my_riak_server:8098/mapred" -H "Content-Type: application/json" -d @- <<EOF
{
  "input": "my_bucket",
  "query": [{
    "map": {
      "language": "javascript",
      "source": "function(riakObject, keydata, arg) {
        var m = riakObject.values[0].data;
        return [m];
      }"
    }
  }]
}
EOF

this returns an empty array.

Aside: I know that listing all keys is slow but for now I can live with this.

Note that I'm using a non-default bucket type, so the actual location of my keys is my_riak_server/types/my_bucket_type/buckets/my_bucket/my_key, but I can't figure out how to communicate this location properly in the "input" field. I have found this "documentation": https://github.com/basho/riak_kv/blob/2.1/src/riak_kv_mapred_json.erl#L101, but it does not explain how to specify bucket type and I'm not proficient enough in Erlang to follow the code easily. I did not find any other documentation on this field.

The following returns all keys successfully, so the data is there:

curl 'http://my_riak_cluster:8098/types/my_bucket_type/buckets/my_bucket/keys?keys=true'

Any pointers are highly appreciated.

Thanks,
Timur
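One hedged observation on the question itself: as far as I can tell from riak_kv_mapred_json, the /mapred JSON body's top-level field is "inputs" (plural), so a bare "input" key would select nothing — which would explain an empty result. A sketch of building such a payload (bucket name and function body are placeholders; the bucket-type addressing question remains one for the parser source):

```python
import json

# Build the JSON body for POST /mapred. Note the top-level field is
# "inputs" (plural); "input" is, I believe, simply ignored.
job = {
    "inputs": "my_bucket",
    "query": [
        {
            "map": {
                "language": "javascript",
                "source": "function(riakObject, keydata, arg) {"
                          " return [riakObject.values[0].data]; }",
            }
        }
    ],
}
body = json.dumps(job)
print(body)
```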


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: cannot start solr, no logs

2015-12-28 Thread Bryan Hunt
Hao, 

The Solr process is crashing because the JVM cannot allocate enough heap
memory. You need to provide more RAM. The message is:

2015-12-28 16:34:41.735 [info] <0.573.0>@yz_solr_proc:handle_info:135 solr
stdout/err: Error occurred during initialization of VM
Could not reserve enough space for object heap

The configuration is made through riak.conf.

The setting for the JVM heap is: "-d64","-Xms1g","-Xmx1g"

The setting in use gives a 1 GB minimum and maximum, i.e. a fixed heap size of 1 GB.

You will need to increase the max heap size setting (-Xmx1g).

Bryan

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Pagination

2015-12-21 Thread Bryan Hunt
It would seem the order of recreation may differ from that of the original
ingest. Isn't sorting best performed on the application-server side, in order
to reduce demands on cluster RAM, anyway? When you migrated to the new cluster,
did you make any change to the storage configuration?

  Original Message  
From: Garrido
Sent: Tuesday, December 22, 2015 1:09 AM
To: Bryan Hunt
Cc: riak-users@lists.basho.com
Subject: Re: Riak Search Pagination

Solr (2.x), 
> On Dec 21, 2015, at 7:08 PM, Bryan Hunt  wrote:
> 
> In the context of Solr (2.x), legacy (1.4), or secondary indexes (2i) (1.x+)? 
> 
> 
> Original Message 
> From: Garrido
> Sent: Monday, December 21, 2015 11:36 PM
> To: riak-users@lists.basho.com
> Subject: Riak Search Pagination
> 
> Hello, 
> 
> Recently we migrated our Riak nodes to another network, so we backup the data 
> and then regenerate the ring, all is well, but there is a strange behaviour 
> in a riak search, for example if we execute a query using the 
> riak_erlang_client, returns the objects in the order:
> 
> A, B, C
> 
> And then if we execute again the same query the result is:
> 
> B, A, C, 
> 
> So, in other order, do you know what is causing this?, before to change our 
> riak ring to another network, it was working perfectly.
> 
> Thank you
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Pagination

2015-12-21 Thread Bryan Hunt
In the context of Solr (2.x), legacy (1.4), or secondary indexes (2i) (1.x+)? 


  Original Message  
From: Garrido
Sent: Monday, December 21, 2015 11:36 PM
To: riak-users@lists.basho.com
Subject: Riak Search Pagination

Hello, 

Recently we migrated our Riak nodes to another network, so we backup the data 
and then regenerate the ring, all is well, but there is a strange behaviour in 
a riak search, for example if we execute a query using the riak_erlang_client, 
returns the objects in the order:

A, B, C

And then if we execute again the same query the result is:

B, A, C, 

So, in other order, do you know what is causing this?, before to change our 
riak ring to another network, it was working perfectly.

Thank you
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak-2.1.1 compilation from source

2015-07-29 Thread Bryan Hunt
For the second error, increase the ulimit for the riak user to 20 via /etc/security/limits.conf or equivalent.

Sent from my BlackBerry 10 smartphone.

  Original Message  
From: Humberto Rodríguez Avila
Sent: Wednesday, 29 July 2015 12:17 AM
To: riak-users@lists.basho.com
Subject: riak-2.1.1 compilation from source

Hi, I am trying to compile riak from the source code, but at the end I obtain this WARN:
==> rel (generate)
Schema: ["/Users/Humberto/ErlangProjects/riak/rel/riak/lib/10-riak.schema",
         "/Users/Humberto/ErlangProjects/riak/rel/riak/lib/11-erlang_vm.schema",
         
  ..
         "/Users/Humberto/ErlangProjects/riak/rel/riak/lib/30-yokozuna.schema"]
WARN:  'generate' command does not apply to directory /Users/Humberto/ErlangProjects/riak

Furthermore, when I try to start riak I obtain this error:

Humbertos-MacBook-Pro-2:bin Humberto$ ./riak start

 WARNING: ulimit -n is 2560; 65536 is the recommended minimum.

riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.

What am I doing wrong?

I am using Mac OS 10.10.

Thanks,
Humberto
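As an aside, the effective open-file limit that the startup warning complains about can be checked from code. A minimal sketch (Unix-only, stdlib `resource` module; the 65536 default matches the recommendation in the warning above):

```python
import resource

def nofile_ok(minimum=65536):
    """Report the current open-file limits and whether the soft limit
    meets the recommended minimum from riak's startup warning."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    ok = soft == resource.RLIM_INFINITY or soft >= minimum
    return soft, hard, ok

soft, hard, ok = nofile_ok()
print(f"soft={soft} hard={hard} ok={ok}")
```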


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak crashing until I delete AEE folders

2015-06-15 Thread bryan hunt
Hi Alex, 

Yes, I see the error messages, such as this: 

2015-06-14 22:21:13.005 [error] emulator Error in process <0.20175.11> on node 
‘riak@x.x.x.x' with exit value: 
{function_clause,[{proplists,get_value,[n_val,{error,no_type},undefined],[{file,"proplists.erl"},{line,225}]},{riak_kv_util,get_index_n,1,[{file,"src/riak_kv_util.erl"},{line,190}]},{yz_index_hashtree,'-fold_keys/2-fun-0-'...

It’s an old bug, a race condition related to the creation and activation of 
bucket types.
https://github.com/basho/riak_kv/issues/872 
<https://github.com/basho/riak_kv/issues/872>

It was fixed in the following commit, although it doesn’t seem to have
made it into your 2.0.1 build.

https://github.com/basho/riak_test/commit/7cd264556498a8e6ebdf1f383f5d59fbaecf7f0a?diff=split
 
<https://github.com/basho/riak_test/commit/7cd264556498a8e6ebdf1f383f5d59fbaecf7f0a?diff=split>

2.0.1 is way, way old. I’d recommend a jump to 2.0.6, which is a much more stable
release, or 2.1.1 for the latest hotness.

Best Regards,

Bryan Hunt




> On 15 Jun 2015, at 17:25, Alex De la rosa  wrote:
> 
> Hi Bryan
> 
> Thanks for your help : )
> 
> Here you have the /var/logs/folder in a zip file.
> 
> I was using Ubuntu 14.04 LTS, just 1 node with 3 replicas (not the best, but 
> just for testing purposes).
> 
> I was curious on why deleting the AAE subfolders fixed it, and wondered if in 
> the future i found something similar if that could be a solution... but what 
> if you have massive amount of data, nodes, etc... would this be viable?
> 
> Thanks!
> Alex
> 
> On Mon, Jun 15, 2015 at 5:12 PM, bryan hunt  <mailto:bh...@basho.com>> wrote:
> Hi Alex,
> 
> Slight confusion in the last response.  Just zip up the log files from that 
> node and email them directly to me.
> 
> We’ll take a look.
> 
> B
> 
> > On 15 Jun 2015, at 15:50, Matthew Brender  > <mailto:mbren...@basho.com>> wrote:
> >
> > Hey Alex,
> >
> > The best place to start on troubleshooting behaviors like the one you
> > saw is to run `riak-debug`, which is in `/bin`. There's a great deal
> > of data in there which you could upload to a Gist [1] and share with
> > us.
> >
> > [1] https://gist.github.com/ <https://gist.github.com/>
> >
> > Best,
> > Matt
> >
> > Developer Advocacy Lead
> > Basho Technologies
> > t: @mjbrender
> >
> > On Sun, Jun 14, 2015 at 3:52 AM, Alex De la rosa
> > mailto:alex.rosa@gmail.com>> wrote:
> >> My riak node (i was doing some tests with Riak 2.0.0-1 in Ubuntu having 
> >> just
> >> 1 node) and after some time, out of a sudden started crashing and even if i
> >> start it again, it hold running only a few seconds before crashing again.
> >>
> >> Then I deleted the AEE folders within "anti_entropy" and "yz_anti_entropy"
> >> and is working fine again... I would like to know the reason for that : )
> >>
> >> Thanks,
> >> Alex
> >>
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> >> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
> >>
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> > <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
> 
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak crashing until I delete AEE folders

2015-06-15 Thread bryan hunt
Hi Alex, 

Slight confusion in the last response.  Just zip up the log files from that 
node and email them directly to me. 

We’ll take a look. 

B

> On 15 Jun 2015, at 15:50, Matthew Brender  wrote:
> 
> Hey Alex,
> 
> The best place to start on troubleshooting behaviors like the one you
> saw is to run `riak-debug`, which is in `/bin`. There's a great deal
> of data in there which you could upload to a Gist [1] and share with
> us.
> 
> [1] https://gist.github.com/
> 
> Best,
> Matt
> 
> Developer Advocacy Lead
> Basho Technologies
> t: @mjbrender
> 
> On Sun, Jun 14, 2015 at 3:52 AM, Alex De la rosa
>  wrote:
>> My riak node (i was doing some tests with Riak 2.0.0-1 in Ubuntu having just
>> 1 node) and after some time, out of a sudden started crashing and even if i
>> start it again, it hold running only a few seconds before crashing again.
>> 
>> Then I deleted the AEE folders within "anti_entropy" and "yz_anti_entropy"
>> and is working fine again... I would like to know the reason for that : )
>> 
>> Thanks,
>> Alex
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: No WM route

2015-05-12 Thread Bryan Hunt
Hi Amit. Did you resolve your issue?
On 23 Apr 2015 16:46, "Amit Anand"  wrote:

> Hi all Im very new to Riak CS so please bear with me! Finally managed to
> get Riak, Riak CS and Stanchion all running and now Im trying to add the
> admin user. When I run my curl command I get nothing back and in the logs I
> see this error:
>
> 2015-04-23 11:37:18.024 [error] <0.94.0> No WM route: 'POST' /riak-cs/user
> {7,{"user-agent",{'User-Agent',"curl/7.29.0"},{"host",{'Host',"
> 10.7.2.113:8080
> "},{"accept",{'Accept',"*/*"},nil,{"content-type",{'Content-Type',"application/json"},{"content-length",{'Content-Length',"45"},nil,nil},nil}},nil},{"x-rcs-rewrite-path",{"x-rcs-rewrite-path","/riak-cs/user"},{"x-rcs-raw-url",{"x-rcs-raw-url","/riak-cs/user"},nil,nil},nil}}}
>
>
> I have set anonymous user to on in the riak-cs.conf AS WELL as to true in
> the advanced.config. I tried with just in .conf and that didnt work either.
> Would anybody have any ideas on how to get this work I would really
> appreciate it. Thanks and sorry if Im asking a basic question I just cant
> figure it out.
>
> Im running CentOS 7 and:
>
> [root@riakcs riak-cs]# riak version
> 2.1.0
> [root@riakcs riak-cs]# riak-cs version
> 2.0.0
> [root@riakcs riak-cs]# stanchion version
> 2.0.0
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: my cluster spontaneously loses a node after ~48hrs

2015-05-12 Thread Bryan Hunt
Also ensure ulimit is set according to the recommendations on
docs.basho.com. A ulimit set too low is a common cause of node termination.
On 5 May 2015 21:23, "Jason Golubock"  wrote:

>
> Scott - thanks for the response,
> yes i've used all those tools at one point, but i'm not sure
> exactly what i'm looking for or what to do with the output.
>
> i've restarted my cluster again but next time it happens,
> i'll attach some output/snapshot files.
>
> ~ Jason
>
>
>
>
>
>
> On 04.05.2015 19:32, Scott Lystig Fritchie wrote:
>
>> Hi, Jason.  Have you tried using the system inspection utilities bundled
>> with Riak?
>>
>> http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#top
>>
>> http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#cluster-info
>>
>> http://docs.basho.com/riak/latest/ops/upgrading/production-checklist/#Confirming-Configuration-with-Riaknostic
>>
>> The "top" utility can show very quickly the most active processes within
>> the virtual machine.
>>
>> -Scott
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java client 1.1.4 and headOnly() in domain buckets

2015-05-12 Thread Bryan Hunt
Hi Daniel, do you have reasons not to upgrade to the latest driver? I know
the semantics have changed somewhat. Are you running a fork of the client?
Bryan
On 12 May 2015 00:40, "Daniel Iwan"  wrote:

> Hi all
>
> Am I right thinking that v1.1.4 does not support headOnly() on domain
> buckets?
>
> During domain.fetch()  line 237 in
>
> https://github.com/basho/riak-java-client/blob/1.1.4/src/main/java/com/basho/riak/client/bucket/DomainBucket.java
>
> there is no check/call headOnly() on FetchMeta object.
>
>
> Code snippet
>
> public T fetch(String key) throws RiakException {
> final FetchObject fo = bucket.fetch(key, clazz)
> .withConverter(converter)
> .withResolver(resolver)
> .withRetrier(retrier);
>
> if (fetchMeta.hasR()) {
> fo.r(fetchMeta.getR());
> }
>
> if (fetchMeta.hasPr()) {
> fo.pr(fetchMeta.getPr());
> }
>
> if (fetchMeta.hasBasicQuorum()) {
> fo.basicQuorum(fetchMeta.getBasicQuorum());
> }
>
> if (fetchMeta.hasNotFoundOk()) {
> fo.notFoundOK(fetchMeta.getNotFoundOK());
> }
>
> if (fetchMeta.hasReturnDeletedVClock()) {
> fo.returnDeletedVClock(fetchMeta.getReturnDeletedVClock());
> }
> return fo.execute();
> }
>
>
>
> --
> View this message in context:
> http://riak-users.197444.n3.nabble.com/Java-client-1-1-4-and-headOnly-in-domain-buckets-tp4033042.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to retrieve an image from Riak using UGameDB

2015-05-12 Thread Bryan Hunt
Didn't realise Chris and Christopher had already replied to this.
On 12 May 2015 13:25, "Bryan Hunt"  wrote:

> Hi Syed. I can't comment upon UGameDB but generally speaking you will need
> to set a content-type header indicating the mime type. For a JPEG image
> that type would be image/jpeg for a PNG it would be image/png. Riak just
> regards data as opaque blobs so it is probably defaulting to text/plain.
> Take a look at the examples on the docs website -
> http://docs.basho.com/riak/latest/dev/using/basics/
> On 11 May 2015 05:28, "syed shabeer"  wrote:
>
>> I am new to Riak and UGameDB, seeking help to retrieve image stored in
>> Riak using UGameDB.
>> I've used Bucket.Set(key,object) and inside the object i am passing an
>> image.
>> When i retrieve the key values using Bucket.Get(Key), i see the values in
>> ASCII.
>> Please let me know the correct approach to retrieve the image.
>>
>> Thank you
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to retrieve an image from Riak using UGameDB

2015-05-12 Thread Bryan Hunt
Hi Syed. I can't comment upon UGameDB but generally speaking you will need
to set a content-type header indicating the mime type. For a JPEG image
that type would be image/jpeg for a PNG it would be image/png. Riak just
regards data as opaque blobs so it is probably defaulting to text/plain.
Take a look at the examples on the docs website -
http://docs.basho.com/riak/latest/dev/using/basics/
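A small sketch of deriving the content-type to send — the stdlib `mimetypes` lookup is real; how you attach the value to a stored object depends on your client library, so treat that part as an assumption:

```python
import mimetypes

def content_type_for(filename, default="application/octet-stream"):
    """Guess the MIME type to send as the content-type when storing a
    binary in Riak; fall back to a generic binary type, not text/plain."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or default

print(content_type_for("avatar.png"))  # image/png
print(content_type_for("photo.jpg"))   # image/jpeg
```

With the Python client the result would then, I believe, be assigned to the object's content_type before storing; other clients expose an equivalent setting or header.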
On 11 May 2015 05:28, "syed shabeer"  wrote:

> I am new to Riak and UGameDB, seeking help to retrieve image stored in
> Riak using UGameDB.
> I've used Bucket.Set(key,object) and inside the object i am passing an
> image.
> When i retrieve the key values using Bucket.Get(Key), i see the values in
> ASCII.
> Please let me know the correct approach to retrieve the image.
>
> Thank you
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: nodes with 100% HD usage

2015-04-13 Thread bryan hunt
Result: failed writes, reduced AAE availability, system errors, and probably
other (OS-level) processes terminating.

100% disk usage is never good. However, our storage systems are write-append,
which mitigates data corruption.

If the node becomes completely unavailable, the other nodes will attempt to
rebalance the data; with fewer nodes, each remaining node is responsible for
more storage, which could potentially cause a cascading failure.

Moral of the story - monitor, and start sending SMS messages when disk use goes 
above 80%, a standard devops chore, and applicable to any business critical 
computer system.
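That 80% alert policy is a one-liner in most languages; a minimal Python sketch (the SMS/paging transport is an assumption left to you):

```python
import shutil

def over_threshold(used, total, threshold=0.80):
    """Pure policy check, kept separate so it is testable without a real disk."""
    return used / total > threshold

def disk_alert(path="/", threshold=0.80):
    """True when disk usage at `path` crosses the alert threshold; wire the
    True case into whatever sends your SMS or pages your on-call rota."""
    usage = shutil.disk_usage(path)
    return over_threshold(usage.used, usage.total, threshold)

print(disk_alert("/"))
```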

Bryan

> On 9 Apr 2015, at 14:10, Alex De la rosa  wrote:
> 
> Hi there,
> 
> One theoretical question; what happens when a node (or more) hits a 100% HD 
> usage?
> 
> Riak can easily scale horizontally adding new nodes to the cluster, but what 
> if one of them is full? will the system have troubles? will this node only be 
> used only for reading and new items get saved in the other nodes? will the 
> data rebalance in newly added servers freeing some space in the fully used 
> node?
> 
> Thanks!
> Alex
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: object sizes

2015-04-13 Thread bryan hunt
Alex,


Maps and Sets are stored just like a regular Riak object, but using a 
particular data structure and object serialization format. As you have 
observed, there is an overhead, and you want to monitor the growth of these 
data structures.

It is possible to write a MapReduce map function (in Erlang) which retrieves a
provided object by type/bucket/id and returns the size of its data. Would such
a thing be of use?

It would not be hard to write such a module, and I might even have some code 
for doing so if you are interested. There are also reasonably good examples in 
our documentation - http://docs.basho.com/riak/latest/dev/advanced/mapreduce

I haven't looked at the Python PB API in a while, but I'm reasonably certain it 
supports the invocation of MapReduce jobs.
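For a purely client-side approximation, something like this works without any MapReduce. It measures only the payload — Riak adds vclock/object metadata, and map/set CRDT internals add more — so treat the result as a lower bound on the stored size:

```python
import json

LIMIT = 1_000_000  # rough 1 MB ceiling discussed in this thread

def payload_size(value):
    """Approximate wire size of a value before storing it. Bytes are
    measured directly; anything else is sized via its JSON encoding."""
    if isinstance(value, (bytes, bytearray)):
        return len(value)
    return len(json.dumps(value).encode("utf-8"))

print(payload_size(b"abc"), payload_size({"a": 1}))
```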

Bryan


> On 10 Apr 2015, at 13:51, Alex De la rosa  wrote:
> 
> Also, I forgot, i'm most interested on bucket_types instead of simple riak 
> buckets. Being able how my mutable data inside a MAP/SET has grown.
> 
> For a traditional standard bucket I can calculate the size of what I'm 
> sending before, so Riak won't get data bigger than 1MB. Problem arise in 
> MAPS/SETS that can grown.
> 
> Thanks,
> Alex
> 
> On Fri, Apr 10, 2015 at 2:47 PM, Alex De la rosa  > wrote:
> Well... using the HTTP Rest API would make no sense when using the PB API... 
> would be extremely costly to maintain, also it may include some extra bytes 
> on the transport.
> 
> I would be interested on being able to know the size via Python itself using 
> the PB API as I'm doing.
> 
> Thanks anyway,
> Alex
> 
> On Fri, Apr 10, 2015 at 1:58 PM, Ciprian Manea  > wrote:
> Hi Alex,
> 
> You can always query the size of a riak object using `curl` and the REST API:
> 
> i.e. curl -I :8098/buckets/test/keys/demo
> 
> 
> Regards,
> Ciprian
> 
> On Thu, Apr 9, 2015 at 12:11 PM, Alex De la rosa  > wrote:
> Hi there,
> 
> I'm using the python client (by the way).
> 
> obj = RIAK.bucket('my_bucket').get('my_key')
> 
> Is there any way to know the actual size of an object stored in Riak? to make 
> sure something mutable (like a set) didn't added up to more than 1MB in 
> storage size.
> 
> Thanks!
> Alex
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 
> 
> 
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Erlang crash dump viewer with Riak crash dumps

2015-02-17 Thread bryan hunt
Good tip, thanks Scott!

Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

> On 17 Feb 2015, at 02:18, Scott Lystig Fritchie  
> wrote:
> 
> Hi, Bryan, sorry to jump in so late.  Have you tried this?
> 
>webtool:start().
> 
> Webtool has a non-native-GUI version of the CrashDumpViewer: load
> http://127.0.0.1:/ in a local web browser, then start the
> CrashDumpViewer, then load the dump file, then browse.
> 
> -Scott


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Erlang crash dump viewer with Riak crash dumps

2015-02-12 Thread Bryan Hunt
I can't even get the GUI to initialize on OS X. I've been using the viewer
from R17 with success. I use kerl to maintain multiple Erlang versions on
my system.
On 12 Feb 2015 11:16, "Simon Hartley"  wrote:

>  I am attempting to use the Erlang crash dump viewer application
> (erl5.9.1\lib\observer-1.1\priv\bin\cdv.bat) to view a crash dump generated
> by Riak 1.4.9
>
>
>
> The app hangs on the message “Processing Timers” and the following output
> is seen in the console:
>
>
>
> =ERROR REPORT 12-Feb-2015::11:12:26 ===
>
> Error in process <0.81.0> on node 'cdv@WH5001654' with exit value:
> {badarg,[{erlang,list_to_pid,["riak_kv_entropy_manager"],[]},{crashdump_viewer,'-get_timers/2-fun-0-',2,[{file,"crashdump_viewer.erl"},{line,1498}]},{crashdump_viewer,progress_map,4,[{file,"crashdump_viewer.erl"},{line,2585}]}]}
>
>
>
>
>
> =ERROR REPORT 12-Feb-2015::11:12:26 ===
>
> Error in process <0.84.0> on node 'cdv@WH5001654' with exit value:
> {badarg,[{erlang,list_to_pid,["riak_core_capability"],[]},{crashdump_viewer,'-get_timers/2-fun-0-',2,[{file,"crashdump_viewer.erl"},{line,1498}]},{crashdump_viewer,progress_map,4,[{file,"crashdump_viewer.erl"},{line,2585}]}]}
>
>
>
>
>
> =ERROR REPORT 12-Feb-2015::11:12:26 ===
>
> Error in process <0.78.0> on node 'cdv@WH5001654' with exit value:
> {badarg,[{erlang,list_to_pid,["riak_core_vnode_manager"],[]},{crashdump_viewer,'-get_timers/2-fun-0-',2,[{file,"crashdump_viewer.erl"},{line,1498}]},{crashdump_viewer,progress_map,4,[{file,"crashdump_viewer.erl"},{line,2585}]}]}
>
>
>
>
>
> =ERROR REPORT 12-Feb-2015::11:12:26 ===
>
> Error in process <0.83.0> on node 'cdv@WH5001654' with exit value:
> {badarg,[{erlang,list_to_pid,["riak_core_claimant"],[]},{crashdump_viewer,'-get_timers/2-fun-0-',2,[{file,"crashdump_viewer.erl"},{line,1498}]},{crashdump_viewer,progress_map,4,[{file,"crashdump_viewer.erl"},{line,2585}]}]}
>
>
>
>
>
> It looks very much like the crash dump file is not valid for the viewer
> app, surely this can’t be the case?
>
>
>
> What am I doing wrong?
>
>
>
> *Simon Hartley*
>
> Solutions Architect
>
>
>
> Email: simon.hart...@williamhill.com
>
> Skype: *+44 (0)113 397 6747*
>
> Skype: *sijomons*
>
>
>
> *William Hill Online*, St. Johns, Merrion St. Leeds, LS2 8LQ
>
>
>
>  Confidentiality: The contents of this e-mail and any attachments
> transmitted with it are intended to be confidential to the intended
> recipient; and may be privileged or otherwise protected from disclosure. If
> you are not an intended recipient of this e-mail, do not duplicate or
> redistribute it by any means. Please delete it and any attachments and
> notify the sender that you have received it in error. This e-mail is sent
> by a William Hill PLC group company. The William Hill group companies
> include, among others, William Hill PLC (registered number 4212563),
> William Hill Organization Limited (registered number 278208), William Hill
> US HoldCo Inc, WHG (International) Limited (registered number 99191) and
> WHG Trading Limited (registered number 101439). Each of William Hill PLC,
> William Hill Organization Limited is registered in England and Wales and
> has its registered office at Greenside House, 50 Station Road, Wood Green,
> London N22 7TP. William Hill U.S. HoldCo, Inc. is 160 Greentree Drive,
> Suite 101, Dover 19904, Kent, Delaware, United States of America. Each of
> WHG (International) Limited and WHG Trading Limited is registered in
> Gibraltar and has its registered office at 6/1 Waterport Place, Gibraltar.
> Unless specifically indicated otherwise, the contents of this e-mail are
> subject to contract; and are not an official statement, and do not
> necessarily represent the views, of William Hill PLC, its subsidiaries or
> affiliated companies. Please note that neither William Hill PLC, nor its
> subsidiaries and affiliated companies can accept any responsibility for any
> viruses contained within this e-mail and it is your responsibility to scan
> any emails and their attachments. William Hill PLC, its subsidiaries and
> affiliated companies may monitor e-mail traffic data and also the content
> of e-mails for effective operation of the e-mail system, or for security,
> purposes..
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How you are dealing with spikes?

2014-12-09 Thread Bryan Hunt
Hey Alexander,
The times are missing from the leftmost graph (requests). We also don't
recommend MapReduce and 2i for user-initiated requests. You would have much
more stable response times if you precalculated that data (just as when
using a relational database). MapReduce and 2i are coverage queries,
meaning that typically they will hit 1/3 of your nodes.

Bryan
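Bryan's "1/3 of your nodes" figure comes from how a coverage plan is built; a back-of-envelope sketch (the ring size and n_val below are the stock defaults, not values read from Alexander's cluster):

```shell
# Rough size of a coverage plan: a 2i/MapReduce query must visit about
# ring_size / n_val vnodes (one member of each preference list).
ring_size=64   # default ring_creation_size
n_val=3        # default replication factor
covering=$(( (ring_size + n_val - 1) / n_val ))   # ceiling division
echo "a coverage query visits ~${covering} of ${ring_size} vnodes"
```

With the defaults this works out to ~22 of 64 vnodes, i.e. roughly a third of the cluster, which is why coverage-query latency tracks the slowest covered node.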
On 9 Dec 2014 16:55, "Alexander Popov"  wrote:

> Stats when  recent spike  happens for 15 minutes around it
>  get  (826)
>  save  (341)
>  listByIndex  (1161)
>  mapReduce  (621)  //Input is IDs list
>  SOLR  (4294)
>
> 6 Solr requests were longer than 9 sec (all returned 0 rows)
> 4 Solr requests were within 4-5 s (both returned 0 rows)
> 11 listByIndex requests were within 4-5 s (both returned 0 rows)
> all other requests were less than 300 ms
>
>
> Sometimes more load do not make such spikes
> Some graphs from  maintanance tasks:
> 1. http://i.imgur.com/xAE6B06.png
> 3 simple tasks, first 2  of them reads  all keys, decide to do
> nothing and continue so just read happens, third task resave all data
> in bucket.
> since  rate is pretty good, some peaks happens
>
> 2. More complex task
> http://i.imgur.com/7nwHb3Q.png,  it have  more serious computing, and
> updating typed bucked( map ), but no peaks to 9s
>
>
>
> sysctl -a | fgrep vm.dirty_:
>
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 10
> vm.dirty_bytes = 0
> vm.dirty_expire_centisecs = 3000
> vm.dirty_ratio = 20
> vm.dirty_writeback_centisecs = 500
>
> On Tue, Dec 9, 2014 at 5:46 PM, Luke Bakken  wrote:
> > Hi Alexander,
> >
> > Can you comment on the read vs. write load of this cluster/
> >
> > Could you please run the following command and reply with the output?
> >
> > sysctl -a | fgrep vm.dirty_
> >
> > We've seen cases where dirty pages get written in a synchronous manner
> > all at once, causing latency spikes due to I/O blocking.
> > --
> > Luke Bakken
> > Engineer / CSE
> > lbak...@basho.com
> >
> >
> > On Tue, Dec 9, 2014 at 4:58 AM, Alexander Popov 
> wrote:
> >> I have Riak 2.0.1 cluster with 5 nodes ( ec2 m3-large ) with elnm in
> front
> >> sometimes I got spikes  up to 10 seconds
> >>
> >> I can't say that I have  huge load at this time,  max 200 requests per
> >> second for all 5 nodes.
> >>
> >> Most expensive queries is
> >> * list by secondary index ( usually returns from 0 to 100 records  )
> >> * and solr queries( max 10 records )
> >>
> >> save operations  is slowdown sometimes but not so much ( up to 1 sec )
> >>
> >> It's slowdown not for specific requests, same one work pretty fast
> later.
> >>
> >> Does it any possibilities to profile|log somehow to determine reason
> >> why this happen?
> >>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
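On Luke's point about dirty pages being flushed synchronously all at once: a common mitigation is to cap dirty memory in absolute bytes rather than as a percentage of RAM, so writeback starts earlier and in smaller bursts. The values below are purely illustrative, not a tested recommendation (note that setting the `*_bytes` variants makes the kernel ignore the corresponding `*_ratio` settings):

```
# /etc/sysctl.d/90-writeback.conf -- illustrative values only
vm.dirty_background_bytes = 67108864    # start background writeback at 64 MB
vm.dirty_bytes = 268435456              # block writers once 256 MB is dirty
```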


Re: Bucket type - change of properties

2014-12-04 Thread bryan hunt
Jason,

When you change the conflict resolution settings, the change only applies from 
that point onward - but you had already generated siblings before 
your change. 

It is not necessary to restart Riak after changing bucket-type or bucket 
properties. 

Are you still generating siblings or are you just encountering siblings which 
were generated prior to your bucket-type properties change?

Bryan 
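For reference, the property change Jason describes can be applied without a restart via `riak-admin bucket-type update`. A sketch (the type name "mytype" is a placeholder, and the commands are only printed here rather than executed):

```shell
# Build the props payload and show the riak-admin invocations that would
# apply and verify it; "mytype" is a hypothetical bucket-type name.
props='{"props":{"allow_mult":false,"last_write_wins":true}}'
echo "riak-admin bucket-type update mytype '${props}'"
echo "riak-admin bucket-type status mytype   # verify the change took"
```

Remember that objects which already have siblings keep them; the new settings only affect subsequent writes.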


> On 1 Dec 2014, at 17:37, Jason Ryan  wrote:
> 
> Hi all,
> 
> 
> I have a simple riak 1 node install on a VM for development - I created a new 
> bucket type, activated it and was using it, and then noticed I was getting 
> siblings back, I'd forgotten to set allow_mult:false and last_write_wins:true
> I did an update on the bucket-type, restarted Riak and checked the bucket 
> type status and all was correct. But I'm still getting siblings.
> 
> Any reason why? very confused!
> 
> Thanks
> Jason
> 
> 
> This message is for the named person's use only. If you received this message 
> in error, please immediately delete it and all copies and notify the sender. 
> You must not, directly or indirectly, use, disclose, distribute, print, or 
> copy any part of this message if you are not the intended recipient. Any 
> views expressed in this message are those of the individual sender and not 
> Trustev Ltd. Trustev is registered in Ireland No. 516425 and trades from 2100 
> Cork Airport Business Park, Cork, Ireland.
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Multiple Versions on RHEL

2014-11-12 Thread Bryan Hunt
Short of running virtual machines or Docker instances, this is not
currently possible.
On 11 Nov 2014 22:01, "Andrew Zeneski"  wrote:

> I have a cluster of RHEL 6 test servers currently running 1.4.10. I'd like
> to install 2.0.2 on these servers along side of 1.4.10, without building
> from source. I need the ability to switch between both while testing our
> application with the latest Riak. Both versions do not need to be running
> at the same time. Is there a proven process for doing this?
>
> Thanks!
>
> Andrew
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Generic server memsup terminating Mountain Lion

2014-09-15 Thread bryan hunt
Hi Spiro,

What version are you running, if you don’t mind me asking?

Bryan

On 14 Sep 2014, at 21:34, Spiro N  wrote:

> Hi Bryan, thanks for your response. I had used the same commands with the 
> same settings you used before I raised it to no avail. After inspecting the 
> process I noticed Active anti_entropy was the culprit. Since I am running 
> only a single node I ended up disabling Active anti_entropy. The problem 
> disappeared after that. If I had multiple nodes it would be a concern but for 
> now it's a quick fix. I wonder if 2.0 will give me the same errors.
> 
> On Sep 14, 2014 11:57 AM, "Bryan Hunt"  wrote:
> Spiro,
> 
> I am somewhat clueless on OSX, but I use the following command when starting 
> Riak, and it seems to work for me:
> 
> sudo launchctl limit maxfiles 65536 65536
> ulimit -n 65536
> 
> Bryan
> 
> On Wed, Sep 10, 2014 at 1:54 AM, Toby Corkindale  wrote:
> Are you trying to use Riak CS for file storage, or are you just using Riak 
> and storing 20M against a single key?
> It's not clear from your email.
> 
> I ask because if you're in the latter case, it's just not going to work -- I 
> believe the maximum per key is around a single megabyte.
> 
> On 10 September 2014 07:30, Spiro N  wrote:
> Sorry, I am sure you have posted in regards to this topic before but I am at 
> a stand still. It just started after doing a "get"
> the video was about 20 MB. The beam.smp  spikes at 100 and riak crashes. I 
> have done everything the Docs ask for and I provided all that I feel may be 
> relevant below. However I don't know what I don't know and could use some 
> help. Mountain Lion does not let you set the ulimit to unlimited. Thanks in 
> advance for anything at all that may help.
> 
> Spiro
> 
> 
> This is my limit, I am running Mountain Lion 10.8.5
> 
> server:riak gvs$ launchctl limit
> cpu unlimited  unlimited  
> filesizeunlimited  unlimited  
> dataunlimited  unlimited  
> stack   838860867104768   
> core0  unlimited  
> rss unlimited  unlimited  
> memlock unlimited  unlimited  
> maxproc 7091064   
> maxfiles65336  100
> --
> This is my Bitcask  content
> 
> server:lib gvs$ cd /usr/local/var/lib/riak/
> server:riak gvs$ ls bitcask/*/* |wc -l
>  206
> --
> This is the crash.log message
> 
> 
> 2014-09-09 14:34:51 =ERROR REPORT
> ** Generic server memsup terminating 
> ** Last message in was 
> {'EXIT',<0.20807.0>,{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 
> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}}
> ** When Server state == 
> {state,{unix,darwin},false,undefined,undefined,false,6,3,0.8,0.05,<0.20807.0>,#Ref<0.0.0.120573>,undefined,[reg],[]}
> ** Reason for termination == 
> ** {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 
> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
> 2014-09-09 14:34:51 =CRASH REPORT
>   crasher:
> initial call: memsup:init/1
> pid: <0.20806.0>
> registered_name: memsup
> exception exit: {{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 
> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: [os_mon_sup,<0.96.0>]
> messages: []
> links: [<0.97.0>]
> dictionary: []
> trap_exit: true
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 204
>   neighbours:
> 2014-09-09 14:34:51 =SUPERVISOR REPORT
>  Supervisor: {local,os_mon_sup}
>  Context:child_terminated
>  Reason: {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 
> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
>  Offender:   
> [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,start_link,[]}},{restart_type,permanent

Re: Generic server memsup terminating Mountain Lion

2014-09-14 Thread Bryan Hunt
Spiro,

I am somewhat clueless on OSX, but I use the following command when
starting Riak, and it seems to work for me:

sudo launchctl limit maxfiles 65536 65536
ulimit -n 65536

Bryan
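The emfile errors quoted below mean the Erlang VM ran out of file descriptors. A small sketch of the kind of headroom check that catches this before the crash (the numbers passed in are illustrative; on a live node you would take them from `lsof -p <beam pid> | wc -l` and `ulimit -n`):

```shell
# Warn when file-descriptor usage approaches the process limit.
fd_headroom() {
  local used=$1 limit=$2
  if [ $(( used * 100 / limit )) -ge 90 ]; then
    echo "WARNING: ${used}/${limit} fds in use"
  else
    echo "OK: ${used}/${limit} fds in use"
  fi
}
fd_headroom 65000 65536   # prints WARNING: 65000/65536 fds in use
fd_headroom 206 65536     # prints OK: 206/65536 fds in use
```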

On Wed, Sep 10, 2014 at 1:54 AM, Toby Corkindale  wrote:

> Are you trying to use Riak CS for file storage, or are you just using Riak
> and storing 20M against a single key?
> It's not clear from your email.
>
> I ask because if you're in the latter case, it's just not going to work --
> I believe the maximum per key is around a single megabyte.
>
> On 10 September 2014 07:30, Spiro N 
> wrote:
>
>> *Sorry, I am sure you have posted in regards to this topic before but I am
>> at a stand still. It just started after doing a "get"*
>> *the video was about 20 MB. The beam.smp  spikes at 100 and riak crashes.
>> I have done everything the Docs ask for and I provided all that I feel may
>> be relevant below. However I don't know what I don't know and could use
>> some help. Mountain Lion does not let you set the ulimit to unlimited.*
>>
>> *Thanks in advance for anything at all that may help.*
>>
>> *Spiro*
>>
>>
>> *This is my limit, I am running Mountain Lion 10.8.5*
>>
>> server:riak gvs$ launchctl limit
>> cpu unlimited  unlimited
>> filesizeunlimited  unlimited
>> dataunlimited  unlimited
>> stack   838860867104768
>> core0  unlimited
>> rss unlimited  unlimited
>> memlock unlimited  unlimited
>> maxproc 7091064
>> *maxfiles65336  100*
>> --
>>
>> *This is my Bitcask  content*
>> server:lib gvs$ cd /usr/local/var/lib/riak/
>> server:riak gvs$ ls bitcask/*/* |wc -l
>>  206
>> --
>> *This is the crash.log message*
>>
>>
>> 2014-09-09 14:34:51 =ERROR REPORT
>> ** Generic server memsup terminating
>> ** Last message in was
>> {'EXIT',<0.20807.0>,{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}}
>> ** When Server state ==
>> {state,{unix,darwin},false,undefined,undefined,false,6,3,0.8,0.05,<0.20807.0>,#Ref<0.0.0.120573>,undefined,[reg],[]}
>> ** Reason for termination ==
>> ** {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
>> 2014-09-09 14:34:51 =CRASH REPORT
>>   crasher:
>> initial call: memsup:init/1
>> pid: <0.20806.0>
>> registered_name: memsup
>> exception exit: {{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s
>> unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: [os_mon_sup,<0.96.0>]
>> messages: []
>> links: [<0.97.0>]
>> dictionary: []
>> trap_exit: true
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 204
>>   neighbours:
>> 2014-09-09 14:34:51 =SUPERVISOR REPORT
>>  Supervisor: {local,os_mon_sup}
>>  Context:child_terminated
>>  Reason: {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd
>> 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
>>  Offender:
>> [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]
>>
>> 2014-09-09 14:34:51 =SUPERVISOR REPORT
>>  Supervisor: {local,os_mon_sup}
>>  Context:shutdown
>>  Reason: reached_max_restart_intensity
>>  Offender:   [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,st
>>
>> 
>> *This is the error.log message*
>>
>> server:riak gvs$ tail error.log
>> 2014-09-09 17:00:25.907 [error] <0.439.1> gen_server memsup terminated
>> with reason: maximum number of file descriptors exhausted, check ulimit -n
>> 2014-09-09 17:00:25.908 [error] <0.439.1> CRASH REPORT Process memsup
>> with 0 neighbours exited with reason: maximum number of file descriptors
>> exhausted, check ulimit -n in gen_server:terminate/6 line 747
>> 2014-09-09 17:00:25.908 [error] <0.97.0> Supervisor os_mon_sup had child
>> memsup started with memsup:start_link() at <0.439.1> exit with reason
>> maximum number of file descriptors exhausted, check ulimit -n in context
>> child_terminated
>> 2014-09-09 17:00:25.908 [error] <0.442.1> gen_server memsup terminated
>> with reason: maximum number of file descriptors exh

Re: How to call riak_core_ring:remove_member/3 from erlang shell?

2014-08-20 Thread bryan hunt
Hi Sebastian,

Wow, that’s a really old version. I know that with modern versions the ring file 
can be nuked, at the expense of a ton of transfer activity when you join the nodes 
back into a single cluster; you shouldn’t lose data, though. Anyone want to chime in 
with an opinion on this version? 

Bryan
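For what it's worth, the "nuke the ring file" procedure amounts to stopping the node and moving the ring directory aside; here it is rehearsed against a scratch directory (paths and the file name are illustrative, and on a real node you would run `riak stop` first and keep the backup):

```shell
# Rehearsal of moving ring metadata aside; mktemp keeps this safely out
# of any real Riak data directory.
ring_dir=$(mktemp -d)/ring
mkdir -p "$ring_dir"
touch "$ring_dir/riak_core_ring.default.20140820"   # stand-in ring file
mv "$ring_dir" "${ring_dir}.bak"   # node rebuilds a fresh ring on next start
mkdir -p "$ring_dir"
echo "ring metadata moved to ${ring_dir}.bak"
```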

On 20 Aug 2014, at 20:28, Sebastian Wittenkamp  wrote:

> Hi Bryan, thanks for getting back to me.
> 
> We are running riak as a service on CentOS.
> 
> We are running individual VMs with a riak node on each VM. 
> 
> We are running version 1.0.0 (yeah, I know...).
> 
> Regarding the -name parameter, it may be helpful to understand what happened 
> here:
> 
> One of our customers cloned 3 VMs running a riak cluster. He accidentally 
> booted up the three new VMs with the network interfaces hot before he had a 
> chance to re-ip them. The nodes joined themselves to the existing cluster and 
> things went south. He changed the -name parameter to be something distinct 
> for each node it may have been done after the nodes were already joined to 
> the cluster.
> 
> When we did a 'riak member_status' we see output that looks like this:
> 
> ================================= Membership ==================================
> Status RingPendingNode
> ---
> valid  25.0% 25.0%'riak@192.168.0.19'
> valid  25.0% 25.0%'riak@192.168.0.20'
> valid  25.0% 25.0%'riak@192.168.0.21'
> valid  25.0% 25.0%'riak@192.168.0.22'
> valid  25.0% 25.0%'riak@192.168.0.22'
> valid  25.0% 25.0%'riak@192.168.0.22'
> 
> I looked at the docs you sent me and I think the version we are running is 
> too old to have the 'riak-admin cluster replace' command. 
> 
> I'm wondering - if we nuke the ring metadata on all the nodes that shouldn't 
> cause any data loss, correct?
> 
> Thanks so much! Let me know if there is any other information I can provide.
> 
> From: Bryan Hunt 
> Sent: Wednesday, August 20, 2014 1:08 AM
> To: Sebastian Wittenkamp
> Subject: Re: How to call riak_core_ring:remove_member/3 from erlang shell?
>  
> How are you running Riak?
> Are you running individual vm's, using 'make devrel', or other?
> What version are you running?
> Have you set the -name parameter in vm.args to a different value for each 
> node in your cluster?
> This page gives information about ring manipulation and should provide you 
> with what you need to get back up and running:
> http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/
> On 19 Aug 2014 22:54, "Sebastian Wittenkamp"  wrote:
> Hello all, riak shell newbie here. I have a cluster running Riak 1.0.0 which 
> is showing duplicate entries in its ring_member list. 
> 
> E.g. I have 'riak@192.168.10.22' listed multiple times when I do 'riak-admin  
> member_status'. If I tell the node to leave or force-remove it, only one 
> entry is removed from the list.
> 
> From looking at the docs it appears that there's a function 
> http://basho.github.io/riak_core/riak_core_ring.html#remove_member-3 which 
> can be used to remove a member from the ring. I'm wondering how to call that 
> from an erlang shell and also if that's the best/only option? 
> 
> Basically, I just want to forcibly remove the node through any means 
> necessary. Please let me know if more information is needed. Thanks in 
> advance.
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-08-06 Thread bryan hunt
Simon,

If you want to get more verbose logging information, you could perform the 
following to change the logging level, to debug, then run `repair-2i`, and 
finally switching back to the normal logging level.

- `riak attach`
- `(riak@nodename)1> SetDebug = fun() -> {node(), 
lager:set_loglevel(lager_file_backend, "/var/log/riak/console.log", debug)} 
end.`
- `(riak@nodename)2> rp(rpc:multicall(erlang, apply, [SetDebug,[]])).`
(don't forget the period at the end of these statements)
- Hit CTRL+C twice to quit from the node

You can then revert back to the normal `info` logging level by running the 
following command via `riak attach`:

- `riak attach`
- `(riak@nodename)1> SetInfo = fun() -> {node(), 
lager:set_loglevel(lager_file_backend, "/var/log/riak/console.log", info)} end.`
- `(riak@nodename)2> rp(rpc:multicall(erlang, apply, [SetInfo,[]])).`
(don't forget the period at the end of these statements)
- Hit CTRL+C twice to quit from the node

Please also see the docs for info on `riak attach` monitoring of repairs.

http://docs.basho.com/riak/1.4.9/ops/running/recovery/repairing-partitions/#Monitoring-Repairs

Repairs can also be monitored using the `riak-admin transfers` command.

http://docs.basho.com/riak/1.4.9/ops/running/recovery/repairing-partitions/#Running-a-Repair

Best Regards,

Bryan Hunt 

Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

On 6 Aug 2014, at 12:53, Effenberg, Simon  wrote:

> Hi Engel,
> 
> I tried it yesterday but it was the same:
> 
> 2014-08-05 17:53:14.728 UTC [info] 
> <0.24306.9>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
> 548063113999088594326381812268606132370974703616
> 2014-08-05 17:53:14.728 UTC [info] 
> <0.24306.9>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> partition 548063113999088594326381812268606132370974703616
> 2014-08-05 17:53:14.753 UTC [info] 
> <0.24306.9>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> 2014-08-05 17:53:14.772 UTC [info] 
> <0.24306.9>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
> for partition 548063113999088594326381812268606132370974703616
> 2014-08-05 17:58:14.773 UTC [info] 
> <0.24305.9>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
>Total partitions: 1
>Finished partitions: 1
>Speed: 100
>Total 2i items scanned: 0
>Total tree objects: 0
>Total objects fixed: 0
> With errors:
> Partition: 548063113999088594326381812268606132370974703616
> Error: index_scan_timeout
> 
> Can't we use some erlang commands to execute parts of this manually to check 
> where the timeout actually happens? Or at least who is timing out?
> 
> Cheers
> Simon
> 
> On Tue, Aug 05, 2014 at 10:21:57AM -0400, Engel Sanchez wrote:
>>   Simon:  The data scan for that partition seems to be taking more than 5
>>   minutes to collect a batch of 1000 items, so the 2i repair process is
>>   giving up on it before it has a chance to finish.   You can reduce the
>>   likelihood of this happening by configuring the batch parameter to
>>   something small.  In the riak_kv section of the configuration file, set
>>   this:
>>   {riak_kv, [
>>  {aae_2i_batch_size, 10},
>>  ...
>>   Let us know if that allows it to finish the repair.  You should still look
>>   into what may be causing the slowness.  A combination of slow disks or
>>   very large data sets might do it.
>> 
>>   On Fri, Aug 1, 2014 at 5:24 AM, Russell Brown 
>>   wrote:
>> 
>> Hi Simon,
>> Sorry for the delays. I'm on vacation for a couple of days. Will pick
>> this up on Monday.
>> 
>> Cheers
>> Russell
>> On 1 Aug 2014, at 09:56, Effenberg, Simon 
>> wrote:
>> 
>>> Hi Russell, @basho
>>> 
>>> any updates on this? We still have the issues with 2i (repair is also
>>> still not possible) and searching for the 2i indexes is reproducable
>>> creating (for one range I tested) 3 different values.
>>> 
>>> I would love to provide anything you need to debug that issue.
>>> 
>>> Cheers
>>> Simon
>>> 
>>> On Wed, Jul 30, 2014 at 09:22:56AM +, Effenberg, Simon wrote:
>>>> Great. Thanks Russell..
>>>> 
>>>> if you need me to do something.. feel free to ask.
>>>> 
>>>> Cheers
>>>> Simon
>>>> 
>>>> On Wed, Jul 30, 2014 at 10:19:56AM +0100, Russell 

Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread bryan hunt
Hi Simon,

Does the problem persist if you run it again? 

Does it happen if you run it against any other partition?

Best Regards,

Bryan



Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

On 29 Jul 2014, at 09:35, Effenberg, Simon  wrote:

> Hi,
> 
> we have some issues with 2i queries like that:
> 
> seffenberg@kriak46-1:~$ while :; do curl -s 
> localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
> -rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
> 
> 13853
> 13853
> 0
> 557
> 557
> 557
> 13853
> 0
> 
> 
> ...
> 
> So I tried to start a repair-2i first on one vnode/partition on one node
> (which is quiet new in the cluster.. 2 weeks or so).
> 
> The command is failing with the following log entries:
> 
> seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
> 22835963083295358096932575511191922182123945984
> Will repair 2i on these partitions:
>22835963083295358096932575511191922182123945984
> Watch the logs for 2i repair progress reports
> seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
> partitions [22835963083295358096932575511191922182123945984]
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
> 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.740 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> 2014-07-29 08:20:22.751 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
> for partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:25:22.752 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
>Total partitions: 1
>Finished partitions: 1
>Speed: 100
>Total 2i items scanned: 0
>Total tree objects: 0
>Total objects fixed: 0
> With errors:
> Partition: 22835963083295358096932575511191922182123945984
> Error: index_scan_timeout
> 
> 
> 2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
> terminated with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155
> 2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
> <0.4711.1061> with 0 neighbours exited with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
> 2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
> {<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> {riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with 
> reason bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, 
> <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in context child_terminated
> 
> 
> Anything I can do about that? What's the issue here?
> 
> I'm using Riak 1.4.8 (.deb package).
> 
> Cheers
> Simon


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread bryan hunt
Sounds like disk corruption to me.



Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

On 29 Jul 2014, at 13:06, Effenberg, Simon  wrote:

> Said to say but the issue stays the same.. even after the upgrade to
> 1.4.10.
> 
> Any ideas what is happening here?
> 
> Cheers
> Simon
> 
> On Tue, Jul 29, 2014 at 08:46:42AM +, Effenberg, Simon wrote:
>> Already started to prepare everything for it.. :)
>> 
>> On Tue, Jul 29, 2014 at 09:43:22AM +0100, Guido Medina wrote:
>>> Hi Simon,
>>> 
>>> There are some (maybe related) Level DB fixes in 1.4.9 and 1.4.10, I don't
>>> think there isn't any harm for you to do a rolling upgrade since nothing
>>> major changed, just bug fixes, here is the release notes' link for
>>> reference:
>>> 
>>> https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
>>> 
>>> Best regards,
>>> 
>>> Guido.
>>> 
>>> On 29/07/14 09:35, Effenberg, Simon wrote:
>>>> Hi,
>>>> 
>>>> we have some issues with 2i queries like that:
>>>> 
>>>> seffenberg@kriak46-1:~$ while :; do curl -s 
>>>> localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
>>>> -rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
>>>> 
>>>> 13853
>>>> 13853
>>>> 0
>>>> 557
>>>> 557
>>>> 557
>>>> 13853
>>>> 0
>>>> 
>>>> 
>>>> ...
>>>> 
>>>> So I tried to start a repair-2i first on one vnode/partition on one node
>>>> (which is quiet new in the cluster.. 2 weeks or so).
>>>> 
>>>> The command is failing with the following log entries:
>>>> 
>>>> seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
>>>> 22835963083295358096932575511191922182123945984
>>>> Will repair 2i on these partitions:
>>>>22835963083295358096932575511191922182123945984
>>>> Watch the logs for 2i repair progress reports
>>>> seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
>>>> <0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
>>>> partitions [22835963083295358096932575511191922182123945984]
>>>> 2014-07-29 08:20:22.729 UTC [info] 
>>>> <0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on 
>>>> partition 22835963083295358096932575511191922182123945984
>>>> 2014-07-29 08:20:22.729 UTC [info] 
>>>> <0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
>>>> partition 22835963083295358096932575511191922182123945984
>>>> 2014-07-29 08:20:22.740 UTC [info] 
>>>> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
>>>> database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
>>>> 2014-07-29 08:20:22.751 UTC [info] 
>>>> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index 
>>>> data for partition 22835963083295358096932575511191922182123945984
>>>> 2014-07-29 08:25:22.752 UTC [info] 
>>>> <0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
>>>>Total partitions: 1
>>>>Finished partitions: 1
>>>>Speed: 100
>>>>Total 2i items scanned: 0
>>>>Total tree objects: 0
>>>>Total objects fixed: 0
>>>> With errors:
>>>> Partition: 22835963083295358096932575511191922182123945984
>>>> Error: index_scan_timeout
>>>> 
>>>> 
>>>> 2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
>>>> terminated with reason: bad argument in call to 
>>>> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
>>>> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>>>>  []) in eleveldb:write/3 line 155
>>>> 2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
>>>> <0.4711.1061> with 0 neighbours exited with reason: bad argument in call 
>>>> to eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
>>>> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110

Re: Issues with high node load and very slow response

2014-07-27 Thread Bryan Hunt
Hello Chaim,

How big is the object you are trying to index? Are siblings (allow_mult)
enabled? What type of object is it: a CRDT, a binary blob, text? Can you paste
the output from riak-admin member-status and ring-status, and also riak-admin status?
Thanks,

Bryan
On 27 Jul 2014 04:55, "Chaim Solomon"  wrote:

> Hi,
>
> I've been having issues for some days with the cluster I am running (or
> now - trying to run). The last major operation was going through all items
> of a bucket and doing a read/update operation on them from Python code -
> nothing fancy, just modifying some data.
>
> I got issues with one node being VERY busy - 2 CPUs fully loaded - and the
> requests were timing out or at least VERY slow.
> I took the node out of the cluster and added another node - then after
> clearing the node brought it back. After some days of waiting for AAE and
> the migration of data to finish, I am still having one node at 2 CPUs at
> 100% and one node with one CPU at 100% and the requests are still VERY
> slow. I am getting a lot of these in the console.log:
>
> 2014-07-26 23:37:33.459 [error] <0.7427.266>@yz_kv:index:206 failed to
> index object {<<"linkmeta">>,<<"...">>} with error {"Failed to index
> docs",{error,req_timedout}} because
> [{yz_solr,index,3,[{file,"src/yz_solr.erl"},{line,192}]},{yz_kv,index,7,[{file,"src/yz_kv.erl"},{line,258}]},{yz_kv,index,3,[{file,"src/yz_kv.erl"},{line,193}]},{riak_kv_vnode,actual_put,6,[{file,"src/riak_kv_vnode.erl"},{line,1416}]},{riak_kv_vnode,perform_put,3,[{file,"src/riak_kv_vnode.erl"},{line,1404}]},{riak_kv_vnode,do_put,7,[{file,"src/riak_kv_vnode.erl"},{line,1199}]},{riak_kv_vnode,handle_command,3,[{file,"src/riak_kv_vnode.erl"},{line,485}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]}]
>
> And on almost all other nodes I am getting a lot of these:
> 2014-07-26 23:50:58.796 [info] <0.12268.221>@yz_kv:should_handoff:157
> waiting for bucket types prefix and indexes to agree between '
> riak@10.128.138.25' and 'riak@10.128.137.185'
>
> How can I get the cluster to work again?
>
> It is a 2.0.0rc1 cluster.
>
> Chaim Solomon
>
>
>
>


Re: Framework to Migrate data from SQL Server and from DB2 to RIAK for Java application

2014-06-30 Thread bryan hunt
Hi Sangeetha,

We can’t really advise users on the best way to pull data out of a relational 
database; you really need an RDBMS specialist to advise you on that 
matter.

Riak (at least in its current incarnation) is a key-value datastore, and we 
generally advise users not to store objects of more than 1MB in size, so that’s 
something you should definitely take into account in your planning.

There are a variety of options for getting data into Riak, covering 
pretty much the entire spectrum of clients (Basho- and community-supported), 
from C to Smalltalk:

http://docs.basho.com/riak/latest/dev/using/libraries

You might find it initially useful to interact with Riak using the curl client:

http://docs.basho.com/riak/latest/dev/references/http/#Object-Key-Operations
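
A first experiment with the HTTP interface might look like the sketch below. 
The endpoint, bucket, and key names here are hypothetical placeholders; adjust 
them to your own installation:

```shell
# Hypothetical endpoint, bucket, and key -- adjust to your cluster.
RIAK_HOST="http://127.0.0.1:8098"
BUCKET="migrated_customers"
KEY="cust-1001"
URL="$RIAK_HOST/buckets/$BUCKET/keys/$KEY"

# Store one row pulled from the RDBMS as a JSON object
# (keep individual objects well under the ~1MB guideline):
curl -XPUT -H "Content-Type: application/json" \
     -d '{"name":"Ada","source":"DB2"}' "$URL"

# Read it back:
curl -XGET "$URL"
```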

Best Regards,

Bryan Hunt




On 30 Jun 2014, at 12:59, Sangeetha  wrote:

> Could someone help me on this please... I am totally new to Riak and I would
> like to hear suggestions on this...
> 
> 
> 
> 
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Framework-to-Migrate-data-from-SQL-Server-and-from-DB2-to-RIAK-for-Java-application-tp4031318p4031324.html
> Sent from the Riak Users mailing list archive at Nabble.com.
> 




Re: Timeout when accessing a key in a strongly consistent bucket

2014-06-30 Thread bryan hunt

Hi Zsolt, 

Good to see you are trying out these pre-release features; there are still 
significant changes happening around strong consistency, and it’s really helpful 
to have people trying it out right now.

There are a couple of things worth checking now in order to ensure your system 
is in the correct state.

Firstly, can you try running the command `riak-admin ensemble-status` on each 
of your nodes?

It should give you output for each node which looks somewhat like the following:

```
== Consensus System ===
Enabled: true
Active:  true
Ring Ready:  true
Paranoia:medium (AAE syncing required)
AAE enabled: true

== Ensembles ==
 Ensemble QuorumNodes  Leader
---
   root   5 / 5 5 / 5  dev1@127.0.0.1
2 0 / 3 3 / 3  --
3 3 / 3 3 / 3  dev2@127.0.0.1
4 3 / 3 3 / 3  dev3@127.0.0.1
5 3 / 3 3 / 3  dev2@127.0.0.1
```

If it does not, can you also run `riak-admin transfers` and see if there are 
any active transfers happening.
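
If the cluster has several nodes, a small dry-run helper can print the per-node 
commands to run. This is only a sketch, and the hostnames below are hypothetical 
placeholders:

```shell
# Hypothetical node list -- replace with your real hostnames.
NODES="riak1.example.com riak2.example.com riak3.example.com"

# Print one ssh command per node; pipe the output into `sh`
# to actually run the checks.
for host in $NODES; do
  echo "ssh $host 'riak-admin ensemble-status && riak-admin transfers'"
done
```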

Thanks, 

Bryan Hunt


On 27 Jun 2014, at 16:15, Zsolt Laky  wrote:

> Hi Christian,
> 
> Exactly the same with beta1.
> 
> brg
> Zsolt
> 
> On Jun 27, 2014, at 13:25, Christian Dahlqvist  wrote:
> 
>> Hi Zsolt,
>> 
>> Riak 2.0.0pre5 is a quite old pre-release. Please upgrade to the latest 
>> release, which can be found here:
>> 
>> http://docs.basho.com/riak/2.0.0beta1/downloads/
>> 
>> Best regards,
>> 
>> Christian
>> 
>> 
>> 
>> On Thu, Jun 26, 2014 at 9:07 PM,  wrote:
>> Hello There,
>> 
>>  
>> 
>> I installed 2.0.0pre5 on OSX and I am experiencing timeouts. Here are 
>> the commands I ran, and I have attached the config file.
>> 
>>  
>> 
>> It seems Riak is aware of the strongly_consistent bucket type but I am not 
>> able to get or put data from/into it.
>> 
>>  
>> 
>> Only the first query for the keys in the bucket succeeds with {}. Other 
>> commands are experimental to describe the problem.
>> 
>>  
>> 
>> Could somebody help to resolve this?
>> 
>>  
>> 
>> Thanks in advance and kind regards
>> 
>> Zsolt
>> 
>> mac:bin zsoci$ ./riak-admin member-status
>> 
>> = Membership 
>> ==
>> 
>> Status RingPendingNode
>> 
>> ---
>> 
>> valid  34.4%  --  'riak1@127.0.0.1'
>> 
>> valid  32.8%  --  'riak2@127.0.0.1'
>> 
>> valid  32.8%  --  'riak3@127.0.0.1'
>> 
>> ---
>> 
>> Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
>> 
>> mac:bin zsoci$ ../riak-admin bucket-type list
>> 
>> strongly_consistent (active)
>> 
>> mac:bin zsoci$ curl -XGET 
>> http://127.0.0.1:8101/types/strongly_consistent/buckets/test1/keys
>> 
>> {}mac:bin zsoci$
>> 
>> mac:bin zsoci$ curl -XGET 
>> http://127.0.0.1:8101/types/strongly_consistent/buckets/test1/keys/a
>> 
>> request timed out
>> 
>> mac:bin zsoci$ curl -XGET 
>> http://127.0.0.1:8101/types/strongly_consistentx/buckets/test1/keys/a
>> 
>> Unknown bucket type: strongly_consistentxmac:bin zsoci$
>> 
>> mac:bin zsoci$ curl -XPUT -H "Content-Type: text/plain" -d "haho" 
>> http://127.0.0.1:8101/types/strongly_consistent/buckets/test1/keys/user1
>> 
>> request timed out
>> 
>> mac:bin zsoci$
>> 
>> 
>> 
>>  
>> 
>> 
>> 
>> 
>> 



Re: Error starting riak server

2014-05-20 Thread bryan hunt
Rachana,

I recently had a similar problem. I solved it with the following.

On Ubuntu 12.04 I ran the following commands to compile and install Erlang from 
source.

sudo apt-get install libssl-dev
./configure --enable-m64-build
make
sudo make install

Now rebar runs without any problem.

I believe on Redhat variants the equivalent commands would be 

sudo yum install openssl-devel
./configure --enable-m64-build
make
sudo make install
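
As a quick sanity check after installing, the sketch below confirms the VM can 
start the crypto application, which is what fails when Erlang was built without 
OpenSSL headers:

```shell
# Verify the freshly built Erlang found OpenSSL: starting `crypto`
# fails on a VM built without the OpenSSL development headers.
if command -v erl >/dev/null 2>&1; then
  erl -noshell -eval 'ok = application:ensure_started(crypto), io:format("crypto ok~n"), halt().'
else
  echo "erl not on PATH"
fi
```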

Regards,

Bryan

On 20 May 2014, at 17:06, Hector Castro  wrote:

> Rachana,
> 
> This appears to be an issue with how the Erlang you are trying to run
> was built. It looks like it did not build with OpenSSL support. Please
> ensure that the `openssl-devel` package is installed before
> recompiling Erlang.
> 
> --
> Hector
> 
> 
> On Tue, May 20, 2014 at 8:08 AM, Rachana Shroff  wrote:
>> Hi Hector,
>> 
>> 
>> 
>> I installed R16B02 as you suggested but it is still not working, ending up
>> with the error message below:
>> 
>> 
>> 
>> I have OTP 17 installed, which I guess is overriding R16B02. If so, could
>> you please tell me the steps to uninstall it? I didn’t find any such
>> uninstall script.
>> 
>> 
>> 
>> If you can send me steps to follow from scratch to run basho-bench, that
>> would be great.
>> 
>> 
>> 
>> [root@fdxrhel6142 riak-1.4.8]# cd basho_bench/
>> 
>> [root@fdxrhel6142 basho_bench]# make all
>> 
>> ls: cannot access tests/: No such file or directory
>> 
>> ./rebar get-deps
>> 
>> Uncaught error in rebar_core: {'EXIT',
>> 
>>   {undef,
>> 
>>[{crypto,start,[],[]},
>> 
>> {rebar,run_aux,2,[]},
>> 
>> {rebar,main,1,[]},
>> 
>> {escript,run,2,
>> 
>>  [{file,"escript.erl"},{line,747}]},
>> 
>> {escript,start,1,
>> 
>>  [{file,"escript.erl"},{line,277}]},
>> 
>> {init,start_it,1,[]},
>> 
>> {init,start_em,1,[]}]}}
>> 
>> make: *** [deps] Error 1
>> 
>> 
>> 
>> Thanks,
>> 
>> Rachana
>> 
>> 
>> 
>> From: Rachana Shroff [mailto:rshr...@tibco.com]
>> Sent: Tuesday, May 20, 2014 9:40 AM
>> To: Hector Castro
>> Cc: riak-users
>> Subject: Re: Error starting riak server
>> 
>> 
>> 
>> Hi Hector,
>> 
>> Thanks for prompt reply.
>> 
>> I would try installing R16B02 and check if things works.
>> 
>> 
>> 
>> IMT, could you please let me know a quick point of contact where I can
>> consult and get a solution right away?
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Thanks,
>> 
>> Rachana
>> 
>> 
>> 
>> 
>> 
>> On Tue, May 20, 2014 at 12:13 AM, Hector Castro  wrote:
>> 
>> Rachana,
>> 
>> What version of Erlang are you trying to build with? I do not think
>> that basho_bench has support for OTP 17.
>> 
>> I built basho_bench successfully with R16B02.
>> 
>> --
>> Hector
>> 
>> 
>> 
>> On Mon, May 19, 2014 at 5:23 AM, Rachana Shroff  wrote:
>>> Hi Brian, Hector,
>>> 
>>> I want to run the Riak benchmark tests. I am following the steps below and
>>> am stuck at the error below:
>>> 
>>> $ git clone git://github.com/basho/basho_bench.git
>>> $ cd basho_bench
>>> $ make [also tried make all]
>>> 
>>> I am hitting below error -
>>> 
>>> ERROR: compile failed while processing
>>> /home/apps/AS/riak-1.4.8/basho_bench/deps/riakc: rebar_abort
>>> make: *** [compile] Error 1
>>> 
>>> Could you please help me with some pointers to try out?
>>> 
>>> Also, based on this I have a few questions: what kind of schema is used
>>> in benchmarking? Are there any multi-threading features enabled? Can
>>> multiple clients be launched at the same time? Is there any batching
>>> enabled for data moving to and from the cache?
>>> 
>>> 
>>> Appreciate quick help.
>>> 
>>> Thanks,
>>> Rachana
>>> 
>>> 
>>> 
>>> On Thu, May 15, 2014 at 4:02 PM, Rachana Shroff  wrote:
 
 Hi Hector, Brian:
 Thanks for your quick response and sorry couldn't get back to you.
 I did the installation using RPM earlier but it didn't work for me, so I went
 for a source installation.
 Anyway, I did the installation again using RPM and it worked.
 
 I wanted to run the benchmark bundled with it.
 
 Currently i am exploring this and would get back to you for any issues
 running same.
 
 
 
 Thanks & Regards,
 Rachana
 
 
 On Sat, May 10, 2014 at 12:31 AM, Brian Roach  wrote:
> 
> Looking at your logs, they all show:
> 
> Error loading "erlang_js_drv": "Driver compiled with incorrect version
> of erl_driver.h"
> 
> In your post you say you installed:
> 
> "Erlang - First otp_src_R14B02 and then otp_src_R15B01"
> 
> More than likely you've compiled Riak with the old version of Erlang.
> 
> As Hector mentioned, you may want to use an RPM rather than building
> from source. You also want to remove R14B02 or at the very least make
> sure