Re: Riak 2.9.0 - Update Available

2019-06-28 Thread Russell Brown via riak-users

Good job on finding and fixing so fast.

I have to ask. What's with the naming scheme? Why not 2.9.2 instead of 
2.9.0p2?


Cheers

Russell

On 28/06/2019 10:24, Martin Sumner wrote:

Bryan,

We saw that Riak was using much more memory than expected at the 
end of the handoffs.  Using `riak-admin top` we could see that this 
wasn't process memory, but binaries. First we did some work via attach, 
looping over processes and running GC, to confirm that this wasn't a 
failure to collect garbage - the references to memory were real.  Then 
we did a bit of work in attach writing some functions to analyse 
process_info/2 for each process (looking at binary and memory), and 
discovered that there were penciller processes holding references to 
lots of large binaries (and this accounted for all the unexpected 
memory use), where the penciller was the only process with a reference 
to the binary.  This made no sense initially, as the penciller should 
only have small binaries (metadata).  We then looked at the running 
state of the penciller processes and could see no large binaries in the 
state, but could see that many of the active keys in the penciller were 
keys known to have large object values (but small amounts of metadata) 
- and that the sizes of the object values matched the sizes of the 
binary references found on the penciller process via process_info/2.
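
The per-process walk described above can be sketched as a pair of helpers 
to run in a shell on the node (e.g. via `riak attach`). These are 
hypothetical illustrations of the technique, not the actual functions used:

```erlang
%% Step 1: force a GC on every process, to rule out uncollected garbage.
gc_all() ->
    [erlang:garbage_collect(P) || P <- erlang:processes()],
    ok.

%% Step 2: for each process, sum the sizes of the off-heap binaries it
%% references. process_info(P, binary) returns {binary, BinInfo} where
%% BinInfo is a list of {Id, Size, RefCount} tuples (the tuple layout is
%% a VM implementation detail). Returns the TopN processes by
%% referenced-binary bytes, with heap memory alongside for comparison.
binary_hogs(TopN) ->
    Sized =
        lists:filtermap(
          fun(P) ->
                  case erlang:process_info(P, [binary, memory]) of
                      undefined ->
                          false;  %% process exited while we were walking
                      [{binary, Bins}, {memory, Mem}] ->
                          BinBytes = lists:sum([S || {_Id, S, _Refs} <- Bins]),
                          {true, {BinBytes, Mem, P}}
                  end
          end,
          erlang:processes()),
    lists:sublist(lists:reverse(lists:sort(Sized)), TopN).
```

A large gap between BinBytes and Mem for a process is the signature 
described above: binaries referenced but not owned on the heap.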


I then recalled the first part of this: 
https://dieswaytoofast.blogspot.com/2012/12/erlang-binaries-and-garbage-collection.html. 
It was obvious that in extracting the metadata the beam was naturally 
retaining a reference to the whole binary for as long as the sub-binary 
was retained by a process (the Penciller).  Forcing a binary copy 
resolved this referencing issue.  It was nice that the same tools used 
to detect the issue made it quite easy to write a test to confirm the 
resolution - 
https://github.com/martinsumner/leveled/blob/master/test/end_to_end/riak_SUITE.erl#L1214-L1239.
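
A minimal sketch of the class of bug (the binary layout and function names 
are illustrative, not leveled's actual code): matching a small metadata 
sub-binary out of a large object binary keeps the whole parent alive for 
as long as the sub-binary is held.

```erlang
%% Leaky: MD is a sub-binary, so it holds a reference to (and prevents
%% collection of) the entire object binary, body included.
extract_md_leaky(<<MDSize:32/integer, Rest/binary>>) ->
    <<MD:MDSize/binary, _Body/binary>> = Rest,
    MD.

%% Fixed: binary:copy/1 materialises the metadata as its own binary, so
%% the large parent can be collected once nothing else references it.
extract_md_fixed(<<MDSize:32/integer, Rest/binary>>) ->
    <<MD:MDSize/binary, _Body/binary>> = Rest,
    binary:copy(MD).
```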


The memory leak section of Fred Hebert's 
http://www.erlang-in-anger.com/ is great reading for helping with 
these types of issues.


Thanks

Martin


On Fri, 28 Jun 2019 at 09:46, b h wrote:


Nice work - I've read the issue / PR - how did you discover / track it
down - tools or just reading the code?

On Fri, 28 Jun 2019 at 09:35, Martin Sumner
<martin.sum...@adaptip.co.uk> wrote:

There is now a second update available for 2.9.0:
https://github.com/basho/riak/tree/riak-2.9.0p2.

This patch, like the patch before, resolves a memory
management issue in leveled, which this time could be
triggered by sending many large objects in a short period of
time.  The underlying problem is described a bit further here
https://github.com/martinsumner/leveled/issues/285, and is
resolved by leveled working more sympathetically with the beam
binary memory management.

Switching to the patched version is not urgent unless you are
using the leveled backend, and may send a large number of
large objects in a burst.

Updated packages are available (thanks to Nick Adams at TI
Tokyo) - https://files.tiot.jp/riak/kv/2.9/2.9.0p2/

Thanks again to the testing team at the NHS Spine project,
Aaron Gibbon (BJSS) and Ramen Sen, who discovered the
problem.  The issue was discovered in a handoff scenario where
tens of thousands of 2MB objects were stored in a
portion of the keyspace at the end of the handoff - which led
to memory issues until either more PUTs were received (to
force a persist to disk) or a restart occurred.

Regards


On Sat, 25 May 2019 at 09:35, Martin Sumner
<martin.sum...@adaptip.co.uk> wrote:

Unfortunately, Riak 2.9.0 was released with an issue
whereby a race condition in heavy-PUT scenarios (e.g.
handoffs) could cause a leak of file descriptors.

The issue is described here -
https://github.com/basho/riak_kv/issues/1699, and the
underlying issue here -
https://github.com/martinsumner/leveled/issues/278.

There is a new patched version of the release available
(2.9.0p1) at
https://github.com/basho/riak/tree/riak-2.9.0p1. This
should be used in preference to the original release of 2.9.0.

Updated packages are available (thanks to Nick Adams at TI
Tokyo) - https://files.tiot.jp/riak/kv/2.9/2.9.0p1/

Thanks also to the testing team at the NHS Spine project,
Aaron Gibbon (BJSS) and Ramen Sen, who discovered the problem.

Regards

Martin




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

Re: Riak KV 2.9.0 released at Code BEAM STO 2019

2019-05-17 Thread Russell Brown via riak-users

Congratulations all! Great news!

On 17/05/2019 16:04, Nicholas Adams wrote:


Dear All,

I am extremely pleased to announce with Martin Sumner at Code BEAM STO 
2019 that Riak KV 2.9.0 has officially been released!


GitHub

https://github.com/basho/riak/tree/riak-2.9.0

Packages

https://files.tiot.jp/riak/kv/2.9/2.9.0/

Many thanks to everybody who contributed to this great achievement.

As always, for any issues or questions, please post to GitHub, this mailing 
list or the Slack channel.


Best regards,

Nicholas Adams

Director

TI Tokyo

https://www.tiot.jp/en/


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: riak 2.2.3, allow_mult=false but still siblings

2019-04-15 Thread Russell Brown via riak-users
That's actually very useful. I hope you're OK with me re-adding the list, 
so that this thread can be indexed/used in future.



On 15/04/2019 15:14, ジョハンガル wrote:
After -X DELETE types/???/buckets/???/props the properties reverted to 
the bucket type properties!

Thank you a lot!

-Original Message-
*From:* "Russell Brown"
*To:* "ジョハンガル";
*Cc:*
*Sent:* 2019-04-15 (月) 19:10:24 (GMT+09:00)
*Subject:* Re: riak 2.2.3, allow_mult=false but still siblings

I'm sorry, I don't know, I've not worked much on that code.


Here is how I assume it works


1, create a bucket type T (allow_mult=true by default)

2, create a bucket B of type T (allow_mult "inherited" from T)

3, change property allow_mult of B to false (it is false on B only, not T)


Default buckets have a default set of properties, allow_mult=false

Typed buckets have a default set of properties, allow_mult=true


buckets (named) inherit their Types properties, unless you change the
bucket properties.
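
The inheritance above can be exercised with the Riak Erlang client. This 
is a sketch only: the host/port and the type/bucket names are assumptions, 
and it presumes a riakc version that addresses typed buckets as 
{Type, Bucket} tuples.

```erlang
%% Connect to a local node's protocol buffers port.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
B = {<<"T">>, <<"B">>},  %% typed bucket B of type T
%% Props here are inherited from type T (allow_mult=true by default
%% for typed buckets).
{ok, InheritedProps} = riakc_pb_socket:get_bucket(Pid, B),
%% Override allow_mult on bucket B only; type T's own properties, and
%% other buckets of type T, are unchanged.
ok = riakc_pb_socket:set_bucket(Pid, B, [{allow_mult, false}]),
%% proplists:get_value(allow_mult, ...) on a fresh get_bucket for B
%% should now reflect the bucket-level override.
{ok, OverriddenProps} = riakc_pb_socket:get_bucket(Pid, B).
```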


On 15/04/2019 09:41, ジョハンガル wrote:
> So if I understand well
> There are basic bucket properties who are ignored
I don't understand "who are ignored" - bucket properties are not ignored.
> /buckets/???/props
> ​
> Then there are bucket types
> riak-admin bucket-type ...​
> ​
> Then there are bucket type properties under that type
> /types/???/buckets/???/props
> ​
> Are /types/??A/buckets/???/props and /types/??B/buckets/???/props
> different

Only if you changed the properties of A or B on creation of the type.
Further, you can change the properties of buckets that are of type A or B
(unless the type has immutable properties, like the allow_mult property
of a datatyped bucket).


> ​
> Just to make sure, I am quite confused.
> ​
> If I run -X DELETE /types/???/buckets/???/props would it revert it to
> the default bucket type properties??

I don't think so, no - I don't think you can delete bucket props that
way. You need to set the properties you want.


If it helps, I'm confused too. What do the docs say on the matter?


Cheers


Russell

> ​
>
> -Original Message-
> *From:* "Russell Brown"
> *To:* "ジョハンガル";
> *Cc:*
> *Sent:* 2019-04-15 (月) 17:14:34 (GMT+09:00)
> *Subject:* Re: riak 2.2.3, allow_mult=false but still siblings
>
> That bucket has allow_mult=true, so the siblings are expected. How the
> bucket props managed to be changed from the bucket-type defaults is
> worth investigating though.
>
>
> On 15/04/2019 08:49, ジョハンガル wrote:
> > Sorry for the late answer!
> > ​
> > ​​
> > ​
> > Is it a case of bucket props overriding the type?
> > We deleted all the bucket props recently. (curl -X DELETE  /props)
> >
> > -Original Message-
> > *From:* "Russell Brown"
> > *To:* "ジョハンガル";
> > *Cc:*
> > *Sent:* 2019-04-13 (土) 05:05:33 (GMT+09:00)
> > *Subject:* Re: riak 2.2.3, allow_mult=false but still siblings
> >
> > Can I see the bucket properties for the bucket in question, please?
> > Buckets can override their type's properties, iirc
> >
> >
> > Cheers
> >
> > Russell
> >
> >
> > On 12/04/2019 13:36, ジョハンガル wrote:
> > > for the bucket type definition:
> > > ​
> > > ​
> > > For the headers
> > > ​
> > > ​
> > > These buckets formerly allowed siblings (more of a default thing than
> > > anything).
> > > Following sibling explosion problems we modified all buckets that
> > > received updates from a single source to not allow siblings anymore.
> > > During some time we had bucket properties and types used at the same
> > > time; following the previously mentioned (property broadcast bug)
> > > repeatedly bringing our machines down, we identified the problem, made
> > > sure everything ran with >2.0 clients, reset (deleted) all bucket
> > > properties and made sure to use types. Then the cluster became stable.

> > > Then while monitoring I noticed these siblings that shouldn't be.
> > > ​
> > > The entire content of these buckets is batch regenerated multiple
> > > times per minute (~1 to 10). There is very little total content (a few
> > > megabytes in that bucket) and the machines used are ridiculously
> > > overprovisioned (many cores, 64GB RAM machines).
> > > ​
> > >
> > > -Original Message-
> > > *From:* "Russell Brown"
> > > *To:* "ジョハンガル";
> > > ;
> > > *Cc:*
> > > *Sent:* 2019-04-12 (金) 20:47:56 (GMT+09:00)
> > > *Subject:* Re: riak 2.2.3, allo

Re: riak 2.2.3, allow_mult=false but still siblings

2019-04-12 Thread Russell Brown via riak-users

Can you let us see the bucket type definition, please?


Can you show me the headers from the curl command that returns siblings, 
please?



I want to say that what you are seeing is unpossible (from what I 
remember of the code.) But I don't remember the 2.2.3 release process, 
and I'd like to see some more evidence before I look at the code again.



I wonder if you remember the history of the bucket/bucket type in 
question. Has it had changes to allow_mult etc?



Cheers

Russell

On 12/04/2019 12:29, ジョハンガル wrote:

Hello,

I would be thankful if somebody could help me with some weird 
abnormalities.

curl https://~~~/types/THE-TYPE/bucket/~~~/keys/~~~ returns siblings

However the corresponding type has "allow_mult" set to false...

The phenomenon appears with both 
"last_write_wins=true;dvv_enabled=false" and 
"last_write_wins=false;dvv_enabled=true".
The backend is leveldb (in a multi configuration); secondary indices 
are used.

The cluster is an old cluster (pre 2.0) that received rolling updates 
to 2.0 (remove node, reinstall riak with new config, re-add node).
Currently 2.2.3.
We used to hit that problem a lot until we found out about it and 
fixed the problem: https://github.com/basho/yokozuna/issues/389
I don't know if it has any kind of relevancy though.

Does somebody have some kind of hint? Or a diagnostic command I might 
run? Am I missing something obvious?

Best regards,


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Smarter put FSM coordinator selection

2018-06-20 Thread Russell Brown
Hi,
Just a quick post to share some work we’ve been doing at the NHS on Riak tail 
latencies for PUTS.

Here’s a blog post:


https://github.com/russelldb/riak_kv/blob/rdb/gh1661/docs/soft-limit-vnode.md

And here are the PRs:

https://github.com/basho/riak_core/pull/921
https://github.com/basho/riak_kv/pull/1670

We’re hoping to get this in the next release of Riak.

Let me know if you have opinions,

Cheers

Russell
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Partitions repair isn't working anymore?

2018-06-01 Thread Russell Brown

> On 1 Jun 2018, at 10:07, Guido Medina  wrote:
> 
> The last command should have been like the following:
>   • {ok, Ring} = riak_core_ring_manager:get_my_ring().
>   • Partitions = [P || {P, 'r...@node4.domain.com'} <- 
> riak_core_ring:all_owners(Ring)].
>   • [riak_kv_vnode:repair(P) || P <- Partitions].
> My bad, I think I copied the command from the wrong instructions before.

And does that work?

From your mail you said that 
>>>>>>> )3> [riak_search_vnode:repair(P) || P <- Partitions].
>>>>>>> ** exception error: undefined function riak_search_vnode:repair/1
>>>>>>> 

Does not work. This I would expect, and I was asking, why do you run this? Are 
you using legacy search?

If you run 

> • [riak_kv_vnode:repair(P) || P <- Partitions].

Does it work?

Cheers

Russell

> 
> Guido.
> 
> On 01/06/18 09:37, Russell Brown wrote:
>> I don’t see a call to `riak_search_vnode:repair` in those docs
>> 
>> Do you still run legacy riak search (i.e. not yokozuna/solr)?
>> 
>> 
>>> On 1 Jun 2018, at 09:35, Guido Medina 
>>>  wrote:
>>> 
>>> Sorry, not repairing a single partition but all partitions per node:
>>> 
>>> https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
>>> 
>>> 
>>> On 01/06/18 09:34, Guido Medina wrote:
>>> 
>>>> Hi Russell,
>>>> 
>>>> I was repairing each node as specified in this guide 
>>>> https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-a-single-partition
>>>> 
>>>> 
>>>> Guido.
>>>> 
>>>> On 01/06/18 09:16, Russell Brown wrote:
>>>> 
>>>>> riak_search has been removed from riak-2.2.5. Looks like some vestige 
>>>>> survived. 
>>>>> 
>>>>> Can you tell me what command you ran, it looks to me from the output 
>>>>> below that you’re connected to node and typing commands in the console?
>>>>> 
>>>>> Is this some snippet that you attach and run?
>>>>> 
>>>>> Cheers
>>>>> 
>>>>> Russell
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 1 Jun 2018, at 09:07, Guido Medina 
>>>>>> 
>>>>>>  wrote:
>>>>>> 
>>>>>> Hi all,
>>>>>> 
>>>>>> We started the partitions repair a couple of weeks ago, so far so good 
>>>>>> (3 nodes out of 7 done), then we started getting this error:
>>>>>> 
>>>>>> 
>>>>>>> (r...@node4.domain.com)3> [riak_search_vnode:repair(P) || P <- Partitions].
>>>>>>> ** exception error: undefined function riak_search_vnode:repair/1
>>>>>>> 
>>>>>>> 
>>>>>> The first two steps for the node repair executed fine:
>>>>>> 
>>>>>> 
>>>>>>> {ok, Ring} = riak_core_ring_manager:get_my_ring().
>>>>>>> Partitions = [P || {P, 'r...@node4.domain.com'} <- riak_core_ring:all_owners(Ring)].
>>>>>>> 
>>>>>>> 
>>>>>> We are running on 2.2.5
>>>>>> Guido.
>>>>>> ___
>>>>>> riak-users mailing list
>>>>>> 
>>>>>> 
>>>>>> riak-users@lists.basho.com
>>>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> ___
>>> riak-users mailing list
>>> 
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Partitions repair isn't working anymore?

2018-06-01 Thread Russell Brown
I don’t see a call to `riak_search_vnode:repair` in those docs

Do you still run legacy riak search (i.e. not yokozuna/solr)?

> On 1 Jun 2018, at 09:35, Guido Medina  wrote:
> 
> Sorry, not repairing a single partition but all partitions per node:
> https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
> 
> On 01/06/18 09:34, Guido Medina wrote:
>> Hi Russell,
>> 
>> I was repairing each node as specified in this guide 
>> https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-a-single-partition
>> 
>> Guido.
>> 
>> On 01/06/18 09:16, Russell Brown wrote:
>>> riak_search has been removed from riak-2.2.5. Looks like some vestige 
>>> survived. 
>>> 
>>> Can you tell me what command you ran, it looks to me from the output below 
>>> that you’re connected to node and typing commands in the console?
>>> 
>>> Is this some snippet that you attach and run?
>>> 
>>> Cheers
>>> 
>>> Russell
>>> 
>>> 
>>>> On 1 Jun 2018, at 09:07, Guido Medina 
>>>>  wrote:
>>>> 
>>>> Hi all,
>>>> 
>>>> We started the partitions repair a couple of weeks ago, so far so good (3 
>>>> nodes out of 7 done), then we started getting this error:
>>>> 
>>>>> (r...@node4.domain.com)3> [riak_search_vnode:repair(P) || P <- Partitions].
>>>>> ** exception error: undefined function riak_search_vnode:repair/1
>>>>> 
>>>> The first two steps for the node repair executed fine:
>>>> 
>>>>> {ok, Ring} = riak_core_ring_manager:get_my_ring().
>>>>> Partitions = [P || {P, 'r...@node4.domain.com'} <- riak_core_ring:all_owners(Ring)].
>>>>> 
>>>> We are running on 2.2.5
>>>> Guido.
>>>> ___
>>>> riak-users mailing list
>>>> 
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Partitions repair isn't working anymore?

2018-06-01 Thread Russell Brown
riak_search has been removed from riak-2.2.5. Looks like some vestige survived. 

Can you tell me what command you ran, it looks to me from the output below that 
you’re connected to node and typing commands in the console?

Is this some snippet that you attach and run?

Cheers

Russell

> On 1 Jun 2018, at 09:07, Guido Medina  wrote:
> 
> Hi all,
> 
> We started the partitions repair a couple of weeks ago, so far so good (3 
> nodes out of 7 done), then we started getting this error:
>> (r...@node4.domain.com)3> [riak_search_vnode:repair(P) || P <- Partitions].
>> ** exception error: undefined function riak_search_vnode:repair/1
> 
> The first two steps for the node repair executed fine:
>> {ok, Ring} = riak_core_ring_manager:get_my_ring().
>> Partitions = [P || {P, 'r...@node4.domain.com'} <- 
>> riak_core_ring:all_owners(Ring)].
> 
> We are running on 2.2.5
> Guido.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[ANN] Riak 2.2.6 release

2018-05-31 Thread Russell Brown
Hi,

Riak-2.2.6 has been tagged and will be/is available from the usual
outlets (see follow up from Nicholas Adams.)

Riak-2.2.6 has no changes/differences from riak-2.2.5.

When I tagged riak-2.2.5 I did not notice that there already existed a
tag `2.1.7-225` for riak_kv. I created a new tag with the same
name. Thankfully all the build/release tools used the new tag, but there
is still risk, and confusion, due to the duplicate tag on riak_kv.

Our options were to delete both tags, and re-create the latter tag,
which seemed wrong, or create a new tag for riak_kv (at the exact same
SHA) which is what we have done. The new tag is 2.1.7-226.

Creating this new tag meant updating the rebar.config for yokozuna and
riak_repl, and riak itself, which led to new SHAs, and new tags for
those repos. And that is how we came to have a riak-2.2.6 which is
exactly the same code as riak-2.2.5.

I have updated the release wiki instructions to include a step that
checks for existing branch/tag with the planned new tag name, so that we
don’t do this again.

Cheers

Russell



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: N = 3 and RW = 2 not finding some keys

2018-05-18 Thread Russell Brown
But why isn’t read repair “working”?

> On 18 May 2018, at 11:07, Bryan Hunt  wrote:
> 
> Of course, AAE will eventually repair the missing object replicas but it 
> seems like you need something more immediate. 
> 
>> On 18 May 2018, at 11:00, Bryan Hunt  wrote:
>> 
>> Hi Guido, 
>> 
>> You should attempt to change the bucket property ‘notfound_ok’ from the 
>> default of ‘true' to ‘false'.
>> 
>> I.e 
>> 
>> curl -XPUT 127.0.0.1:10018/buckets/foo/props -H "Content-Type: 
>> application/json" -d '{"props":{"notfound_ok": false}}'
>> 
>> This makes GET operations for non-existent keys slower as it forces an 
>> internal GET for each of the three copies.
>> 
>> https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
>> 
>> From what you describe, it sounds like only a single copy (out of the 
>> original three) somehow remains present in your cluster.
>> 
>> Best Regards,
>> 
>> Bryan Hunt
>> 
>>> On 17 May 2018, at 15:42, Guido Medina  wrote:
>>> 
>>> Hi all,
>>> 
>>> After some big rebalance of our cluster some keys are not found anymore 
>>> unless we set R = 3, we had N = 3 and R = W = 2
>>> 
>>> Is there any sort of repair that would correct such situation for Riak 
>>> 2.2.3, this is really driving us nuts.
>>> 
>>> Any help will be truly appreciated.
>>> 
>>> Kind regards,
>>> Guido.
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[ANN] Riak 2.2.5 release

2018-04-26 Thread Russell Brown
Hi,

Thanks all for your patience and hard work. I just pushed the tag
riak-2.2.5 to the basho repo. Packages are being made.

Release notes are here 
https://github.com/basho/riak/blob/riak-2.2.5/RELEASE-NOTES.md

Cheers

Russell


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak-2.2.5 progress update

2018-04-10 Thread Russell Brown
More update.

There were some minor changes that missed the rc1, so we added them

* yokozuna/746: [remove -XX:+UseStringCache](https://github.com/basho/yokozuna/pull/746)
* yokozuna/747: [Remove jvm directive from test too](https://github.com/basho/yokozuna/pull/747)
* riak_repl/782: [Change ETS queue table permissions to protected](https://github.com/basho/riak_repl/pull/782)

And today tagged RC2 (riak-2.2.5rc2)

Same rules as before:

At this point I DID NOT tag all the dependencies so it is _VERY IMPORTANT_ that 
the rebar.config.lock file is used when you build a release to test (e.g. `make 
locked-all`)

If anyone (Tiot? ESL?) can/wants to build packages please post to this list 
about location for those who’d rather test against packages.

We’ve got perf test results upcoming, and then we’ll tag final.

Thanks for your patience, almost there now.

Cheers

Russell

On 29 Mar 2018, at 18:18, Bryan Hunt  wrote:

> From our side, it’s bank holiday weekend now, so we shall start building 
> packages on Monday/Tuesday and share them out via package cloud. 
> Will keep you updated. 
> B
> 
>> On 29 Mar 2018, at 16:15, Russell Brown  wrote:
>> 
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak-2.2.5 progress update

2018-03-29 Thread Russell Brown
More update.

I just pushed riak-2.2.5rc1 to basho/riak.

At this point I DID NOT tag all the dependencies so it is _VERY IMPORTANT_ that 
the rebar.config.lock file is used when you build a release to test (e.g. `make 
locked-all`)

If anyone (Bryan Hunt, ESL?) can/wants to build packages please post to this 
list about location for those who’d rather test against packages.

Are we done? Not yet.

What to expect next:

1. Results of load tests from any who have said they can (Damien, Martin Cox, 
Martin Sumner?)
2. Docs and release notes (ESL, Tiot.jp, me?)
3. Final flakey riak tests (yokozuna and ensemble)
4. Tag all deps, tag final
5. Celebrate

It’s way early for celebrations, back slapping etc, but on a selfish, personal 
note, I really want to thank the NHS for enabling Ramen, Martin, and I to 
contribute to this project.

Cheers

Russell

On 29 Mar 2018, at 14:03, Damien Krotkine  wrote:

> Hi,
> 
> I'm probably coming late to the party, but we have a big number of riak boxes 
> running at work, in meta-clusters, so some rings are redundantly storing 
> data. I could move one of them to the RC and compare its 
> performance/errors/whatever with the non upgraded rings, if you people think 
> it's useful. Poke me if that's useful.
> 
> Cheers
> 
> On Wed, Mar 28, 2018, at 09:07, martin@bet365.com wrote:
>> Awesome progress.
>> 
>> We're happy to run generalised load tests against RC and share results 
>> when completed - probably in the following week.
>> 
>> Cheers
>> ________
>> From: riak-users [riak-users-boun...@lists.basho.com] on behalf of 
>> Russell Brown [russell.br...@icloud.com]
>> Sent: 27 March 2018 19:04
>> To: Fred Dushin
>> Cc: riak-users
>> Subject: Re: Riak-2.2.5 progress update
>> 
>> Hey Fred,
>> 
>> I can probably share my configs, yes. I have 4 configs!
>> 
>> current is riak-2.2.5, the develop-2.2 branch of basho/riak for all 
>> configs
>> previous is either riak-2.2.3 or riak-2.0.5 (and I have 4 configs as 
>> there has to be an EE config, for the repl upgrade tests, since 
>> riak-2.2.3 didn’t have repl, but riak_ee-2.2.3 did)
>> legacy is always 1.4.12
>> 
>> And since some tests have explicit version there are also explicit 
>> entries for 2.0.6, 2.0.4, 2.0.2, in each config.
>> 
>> Does that help?
>> 
>> Cheers
>> 
>> Russell
>> 
>> On 27 Mar 2018, at 18:51, Fred Dushin  wrote:
>> 
>>> @russeldb do you have ordained values for `current`, `previous`, and 
>>> `legacy` to test against?
>>> 
>>> Always the :bane: of riak_test
>>> 
>>> -Fred
>>> 
>>>> On Mar 27, 2018, at 1:47 PM, Russell Brown  wrote:
>>>> 
>>>> Giddyup died when basho was shuttered. These test runs have all been on 
>>>> private infrastructure that doesn’t have a public facing interface.
>>>> 
>>>> I guess I could probably cut and paste logs into a gist for you. But I’d 
>>>> have to take the time to sanitize them, just in case. I’d rather not take 
>>>> that time unless absolutely necessary.
>>>> 
>>>> On 27 Mar 2018, at 18:43, Bryan Hunt  
>>>> wrote:
>>>> 
>>>>> Could you share URL for those outstanding failures please. B
>>>>> 
>>>>>> On 27 Mar 2018, at 16:37, Russell Brown  wrote:
>>>>>> 
>>>>>> Hi Again,
>>>>>> 
>>>>>> More progress update. All the PRs are merged (thanks Bryan Hunt (ESL)
>>>>>> and Nick/Martin (bet365)).
>>>>>> 
>>>>>> I’m planning on tagging Riak with riak-2.2.5RC1 this week.
>>>>>> 
>>>>>> We haven’t been able to run load tests. I’m not 100% sure that all
>>>>>> basho’s releases had extensive load tests run, though I know that
>>>>>> later releases had a perf team, and MvM always load tested leveldb.
>>>>>> 
>>>>>> The aim is to have those willing parties that deploy riak, and
>>>>>> therefore have perf testing already set up, to test and report
>>>>>> back. The NHS will run load tests and report results back. I hope that 
>>>>>> others can do the same.
>>>>>> To that end we’ll probably tag RC1-noperftest.
>>>>>> 
>>>>>> There are a few failing riak tests (was Riak ever released without a
>>>>>> failing riak-test?) If you have the time/capacity to run riak-test,
&

Re: Riak-2.2.5 progress update

2018-03-27 Thread Russell Brown
Hey Fred,

I can probably share my configs, yes. I have 4 configs!

current is riak-2.2.5, the develop-2.2 branch of basho/riak for all configs
previous is either riak-2.2.3 or riak-2.0.5 (and I have 4 configs as there has 
to be an EE config, for the repl upgrade tests, since riak-2.2.3 didn’t have 
repl, but riak_ee-2.2.3 did)
legacy is always 1.4.12

And since some tests have explicit version there are also explicit entries for 
2.0.6, 2.0.4, 2.0.2, in each config.
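
For reference, in riak_test these version names are bound to install paths 
in the harness config file. A hypothetical ~/.riak_test.config stanza (the 
paths and profile name are assumptions, not Russell's actual config):

```erlang
%% riak_test rtdev harness profile: each symbolic version name maps to a
%% devrel build installed under the root path.
{my_devrel, [
    {rt_harness, rtdev},
    {rtdev_path, [{root,     "/opt/rt/riak"},
                  {current,  "/opt/rt/riak/riak-2.2.5"},
                  {previous, "/opt/rt/riak/riak-2.2.3"},
                  {legacy,   "/opt/rt/riak/riak-1.4.12"},
                  %% explicit versions referenced by individual tests
                  {"2.0.6",  "/opt/rt/riak/riak-2.0.6"},
                  {"2.0.4",  "/opt/rt/riak/riak-2.0.4"},
                  {"2.0.2",  "/opt/rt/riak/riak-2.0.2"}]}
]}.
```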

Does that help?

Cheers

Russell

On 27 Mar 2018, at 18:51, Fred Dushin  wrote:

> @russeldb do you have ordained values for `current`, `previous`, and `legacy` 
> to test against?
> 
> Always the :bane: of riak_test
> 
> -Fred
> 
>> On Mar 27, 2018, at 1:47 PM, Russell Brown  wrote:
>> 
>> Giddyup died when basho was shuttered. These test runs have all been on 
>> private infrastructure that doesn’t have a public facing interface.
>> 
>> I guess I could probably cut and paste logs into a gist for you. But I’d 
>> have to take the time to sanitize them, just in case. I’d rather not take 
>> that time unless absolutely necessary.
>> 
>> On 27 Mar 2018, at 18:43, Bryan Hunt  wrote:
>> 
>>> Could you share URL for those outstanding failures please. B
>>> 
>>>> On 27 Mar 2018, at 16:37, Russell Brown  wrote:
>>>> 
>>>> Hi Again,
>>>> 
>>>> More progress update. All the PRs are merged (thanks Bryan Hunt (ESL)
>>>> and Nick/Martin (bet365)).
>>>> 
>>>> I’m planning on tagging Riak with riak-2.2.5RC1 this week.
>>>> 
>>>> We haven’t been able to run load tests. I’m not 100% sure that all
>>>> basho’s releases had extensive load tests run, though I know that
>>>> later releases had a perf team, and MvM always load tested leveldb.
>>>> 
>>>> The aim is to have those willing parties that deploy riak, and
>>>> therefore have perf testing already set up, to test and report
>>>> back. The NHS will run load tests and report results back. I hope that 
>>>> others can do the same.
>>>> To that end we’ll probably tag RC1-noperftest.
>>>> 
>>>> There are a few failing riak tests (was Riak ever released without a
>>>> failing riak-test?) If you have the time/capacity to run riak-test,
>>>> and you’re interested in helping out, get in touch and I’ll help you
>>>> get started.
>>>> 
>>>> The failures, should one pique your interest:
>>>> 
>>>> datatypes - riak667_mixed-eleveldb
>>>> ensemble - ensemble_basic3-eleveldb ensemble_basic4-eleveldb
>>>> yoko - yz_crdt-eleveldb yz_solr_upgrade_downgrade-eleveldb
>>>> 
>>>> Let me know if you want to look into ensemble or yoko.
>>>> 
>>>> Still aiming to have this tagged by end-of-week.
>>>> 
>>>> Cheers
>>>> 
>>>> Russell
>>>> 
>>>> On 2 Mar 2018, at 09:44, Russell Brown  wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> Just an update on the progress of riak-2.2.5 release. I realize we said
>>>>> "end of 2017" and then "end of Jan" and then "end of Feb" and here we
>>>>> are, 1st March, spring is upon us, and still no 2.2.5. I thought it best
>>>>> to at least keep you posted.
>>>>> 
>>>>> Why no release? Well, we're not quite finished yet. In terms of what is
>>>>> left:
>>>>> 
>>>>> - a few PRs need review and merge against the upstream Basho repo (see
>>>>> [1] if you want to help there);
>>>>> - Opening of PRs for gsets;
>>>>> - Docs for the changes;
>>>>> - Release notes;
>>>>> - Tagging;
>>>>> - A final round of testing (and fixing?) after all is merged;
>>>>> - Client support. This is crucial, but all the above work only has
>>>>> client support in the Basho erlang clients. I'm hoping the community
>>>>> that uses the Java/Python/Ruby etc clients can step up here. But we can 
>>>>> release Riak before all client work is done.
>>>>> 
>>>>> The optimist in me says 2 weeks.
>>>>> 
>>>>> What do I mean "release"?
>>>>> 
>>>>> From my point of view the release is the tags. After that I'm sincerely
>>>>> hoping ESL will continue to kindly build and host the actual artifacts.
>>>&g

Re: Riak-2.2.5 progress update

2018-03-27 Thread Russell Brown
Giddyup died when basho was shuttered. These test runs have all been on private 
infrastructure that doesn’t have a public facing interface.

I guess I could probably cut and paste logs into a gist for you. But I’d have 
to take the time to sanitize them, just in case. I’d rather not take that time 
unless absolutely necessary.

On 27 Mar 2018, at 18:43, Bryan Hunt  wrote:

> Could you share URL for those outstanding failures please. B
> 
>> On 27 Mar 2018, at 16:37, Russell Brown  wrote:
>> 
>> Hi Again,
>> 
>> More progress update. All the PRs are merged (thanks Bryan Hunt (ESL)
>> and Nick/Martin (bet365)).
>> 
>> I’m planning on tagging Riak with riak-2.2.5RC1 this week.
>> 
>> We haven’t been able to run load tests. I’m not 100% sure that all
>> basho’s releases had extensive load tests run, though I know that
>> later releases had a perf team, and MvM always load tested leveldb.
>> 
>> The aim is to have those willing parties that deploy riak, and
>> therefore have perf testing already set up, to test and report
>> back. The NHS will run load tests and report results back. I hope that 
>> others can do the same.
>> To that end we’ll probably tag RC1-noperftest.
>> 
>> There are a few failing riak tests (was Riak ever released without a
>> failing riak-test?) If you have the time/capacity to run riak-test,
>> and you’re interested in helping out, get in touch and I’ll help you
>> get started.
>> 
>> The failures, should one pique your interest:
>> 
>> datatypes - riak667_mixed-eleveldb
>> ensemble - ensemble_basic3-eleveldb ensemble_basic4-eleveldb
>> yoko - yz_crdt-eleveldb yz_solr_upgrade_downgrade-eleveldb
>> 
>> Let me know if you want to look into ensemble or yoko.
>> 
>> Still aiming to have this tagged by end-of-week.
>> 
>> Cheers
>> 
>> Russell
>> 
>> On 2 Mar 2018, at 09:44, Russell Brown  wrote:
>> 
>>> Hi,
>>> 
>>> Just an update on the progress of riak-2.2.5 release. I realize we said
>>> "end of 2017" and then "end of Jan" and then "end of Feb" and here we
>>> are, 1st March, spring is upon us, and still no 2.2.5. I thought it best
>>> to at least keep you posted.
>>> 
>>> Why no release? Well, we're not quite finished yet. In terms of what is
>>> left:
>>> 
>>> - a few PRs need review and merge against the upstream Basho repo (see
>>> [1] if you want to help there);
>>> - Opening of PRs for gsets;
>>> - Docs for the changes;
>>> - Release notes;
>>> - Tagging;
>>> - A final round of testing (and fixing?) after all is merged;
>>> - Client support. This is crucial, but all the above work only has
>>> client support in the Basho erlang clients. I'm hoping the community
>>> that uses the Java/Python/Ruby etc clients can step up here. But we can 
>>> release Riak before all client work is done.
>>> 
>>> The optimist in me says 2 weeks.
>>> 
>>> What do I mean "release"?
>>> 
>>> From my point of view the release is the tags. After that I'm sincerely
>>> hoping ESL will continue to kindly build and host the actual artifacts.
>>> 
>>> What's in the release?
>>> 
>>> As a reminder, since it's been a while since StokeCon17, this release
>>> contains:
>>> 
>>> - Some developer clean-up around `make test` and `riak_test` to make
>>> them more reliable/trustworthy;
>>> - Open source MDC Repl (thanks bet365!);
>>> - A fix for a bug in riak core claim that led to unbalanced rings[2];
>>> - `node_confirms` a feature like `w` or `pw` but for physical diversity in 
>>> durability[3];
>>> - `participate in coverage` an admin setting that takes a node out of
>>> the coverage plan (for example after adding a node while transfers
>>> take place);
>>> - Riak repl fix, and change to unsafe default behaviour[4];
>>> - Addition of a GSet to riak data types;
>>> - Fix to repl stats[5].
>>> 
>>> Sorry if I missed anything. The release notes will have it all.
>>> 
>>> Work has already begun on Riak 3.0, with OTP20 support well under way.
>>> Some candidates for inclusion that we're working on are a new
>>> pure-Erlang backend[6], and a radical overhaul of AAE for both intra and 
>>> inter-cluster anti-entropy.
>>> 
>>> Sorry for the delay, thanks for your patience. We’ll keep you posted.
>>> 
>>>

Re: Riak-2.2.5 progress update

2018-03-27 Thread Russell Brown
Hi Again,

Another progress update. All the PRs are merged (thanks Bryan Hunt (ESL)
and Nick/Martin (bet365)).

I’m planning on tagging Riak with riak-2.2.5RC1 this week.

We haven’t been able to run load tests. I’m not 100% sure that all
basho’s releases had extensive load tests run, though I know that
later releases had a perf team, and MvM always load tested leveldb.

The aim is to have those willing parties that deploy riak, and
therefore have perf testing already set up, to test and report
back. The NHS will run load tests and report results back. I hope that others 
can do the same.
To that end we’ll probably tag RC1-noperftest.

There are a few failing riak tests (was Riak ever released without a
failing riak-test?) If you have the time/capacity to run riak-test,
and you’re interested in helping out, get in touch and I’ll help you
get started.

The failures, should one pique your interest:

datatypes - riak667_mixed-eleveldb
ensemble - ensemble_basic3-eleveldb ensemble_basic4-eleveldb
yoko - yz_crdt-eleveldb yz_solr_upgrade_downgrade-eleveldb

Let me know if you want to look into ensemble or yoko.

Still aiming to have this tagged by end-of-week.

Cheers

Russell

On 2 Mar 2018, at 09:44, Russell Brown  wrote:

> Hi,
> 
> Just an update on the progress of riak-2.2.5 release. I realize we said
> "end of 2017" and then "end of Jan" and then "end of Feb" and here we
> are, 1st March, spring is upon us, and still no 2.2.5. I thought it best
> to at least keep you posted.
> 
> Why no release? Well, we're not quite finished yet. In terms of what is
> left:
> 
> - a few PRs need review and merge against the upstream Basho repo (see
> [1] if you want to help there);
> - Opening of PRs for gsets;
> - Docs for the changes;
> - Release notes;
> - Tagging;
> - A final round of testing (and fixing?) after all is merged;
> - Client support. This is crucial, but all the above work only has
> client support in the Basho erlang clients. I'm hoping the community
> that uses the Java/Python/Ruby etc clients can step up here. But we can 
> release Riak before all client work is done.
> 
> The optimist in me says 2 weeks.
> 
> What do I mean "release"?
> 
> From my point of view the release is the tags. After that I'm sincerely
> hoping ESL will continue to kindly build and host the actual artifacts.
> 
> What's in the release?
> 
> As a reminder, since it's been a while since StokeCon17, this release
> contains:
> 
> - Some developer clean-up around `make test` and `riak_test` to make
> them more reliable/trustworthy;
> - Open source MDC Repl (thanks bet365!);
> - A fix for a bug in riak core claim that led to unbalanced rings[2];
> - `node_confirms` a feature like `w` or `pw` but for physical diversity in 
> durability[3];
> - `participate in coverage` an admin setting that takes a node out of
> the coverage plan (for example after adding a node while transfers
> take place);
> - Riak repl fix, and change to unsafe default behaviour[4];
> - Addition of a GSet to riak data types;
> - Fix to repl stats[5].
> 
> Sorry if I missed anything. The release notes will have it all.
> 
> Work has already begun on Riak 3.0, with OTP20 support well under way.
> Some candidates for inclusion that we're working on are a new
> pure-Erlang backend[6], and a radical overhaul of AAE for both intra and 
> inter-cluster anti-entropy.
> 
> Sorry for the delay, thanks for your patience. We’ll keep you posted.
> 
> Cheers
> 
> Russell
> 
> Titus Systems - ti-sys.co.uk
> 
> [1] Coverage:
> https://github.com/basho/riak_core/pull/917
> https://github.com/basho/riak_kv/pull/1664
> https://github.com/basho/riak_test/pull/1300
> 
> Repl:
> https://github.com/basho/riak_repl/pull/777
> https://github.com/basho/riak_test/pull/1301
> 
> Node Confirms:
> https://github.com/basho/riak_test/pull/1299
> https://github.com/basho/riak_test/pull/1298
> https://github.com/basho/riak-erlang-client/pull/371
> https://github.com/basho/riak_core/pull/915
> https://github.com/basho/riak-erlang-http-client/pull/69
> 
> [2]
> https://github.com/basho/riak_core/blob/develop-2.2/docs/claim-fixes.md
> [3]
> https://github.com/ramensen/riak_kv/blob/rs-physical-promises/docs/Node-Diversity.md
> [4] https://github.com/basho/riak_repl/issues/774
> https://github.com/basho/riak_repl/issues/772
> [5] https://github.com/basho/riak_repl/pull/776
> [6] https://github.com/martinsumner/leveled/
> 
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Riak-2.2.5 progress update

2018-03-02 Thread Russell Brown
Hi,

Just an update on the progress of riak-2.2.5 release. I realize we said
"end of 2017" and then "end of Jan" and then "end of Feb" and here we
are, 1st March, spring is upon us, and still no 2.2.5. I thought it best
to at least keep you posted.

Why no release? Well, we're not quite finished yet. In terms of what is
left:

- a few PRs need review and merge against the upstream Basho repo (see
[1] if you want to help there);
- Opening of PRs for gsets;
- Docs for the changes;
- Release notes;
- Tagging;
- A final round of testing (and fixing?) after all is merged;
- Client support. This is crucial, but all the above work only has
client support in the Basho erlang clients. I'm hoping the community
that uses the Java/Python/Ruby etc clients can step up here. But we can release 
Riak before all client work is done.

The optimist in me says 2 weeks.

What do I mean "release"?

From my point of view the release is the tags. After that I'm sincerely
hoping ESL will continue to kindly build and host the actual artifacts.

What's in the release?

As a reminder, since it's been a while since StokeCon17, this release
contains:

- Some developer clean-up around `make test` and `riak_test` to make
them more reliable/trustworthy;
- Open source MDC Repl (thanks bet365!);
- A fix for a bug in riak core claim that led to unbalanced rings[2];
- `node_confirms`, a feature like `w` or `pw` but for physical diversity in
durability[3];
- `participate in coverage`, an admin setting that takes a node out of
the coverage plan (for example after adding a node while transfers
take place);
- Riak repl fix, and change to unsafe default behaviour[4];
- Addition of a GSet to riak data types;
- Fix to repl stats[5].
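As background on the G-Set item in the list above: a grow-only set is the simplest state-based CRDT, and its semantics fit in a few lines. The following is an illustrative Python sketch of the data type's behaviour only, not Riak's actual Erlang implementation — class and method names are mine, not Riak's.

```python
# Illustrative sketch of G-Set (grow-only set) CRDT semantics.
# Not Riak's implementation; names here are hypothetical.

class GSet:
    """A grow-only set: the only mutation is add; merge is set union."""

    def __init__(self, elems=None):
        self._elems = set(elems or ())

    def add(self, elem):
        # Adds are monotone: elements are never removed, so no
        # tombstones or per-element metadata are needed.
        self._elems.add(elem)

    def merge(self, other):
        # Union is commutative, associative, and idempotent, so any
        # two replicas converge regardless of message order or repeats.
        return GSet(self._elems | other._elems)

    def value(self):
        return frozenset(self._elems)

# Two replicas accept writes independently, then converge on merge.
r1, r2 = GSet(), GSet()
r1.add("a")
r2.add("b")
assert r1.merge(r2).value() == frozenset({"a", "b"})
```

The grow-only restriction is what makes it cheap: because nothing is ever removed, plain set union is a safe merge with no extra bookkeeping.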

Sorry if I missed anything. The release notes will have it all.

Work has already begun on Riak 3.0, with OTP20 support well under way.
Some candidates for inclusion that we're working on are a new
pure-Erlang backend[6], and a radical overhaul of AAE for both intra and 
inter-cluster anti-entropy.

Sorry for the delay, thanks for your patience. We’ll keep you posted.

Cheers

Russell

Titus Systems - ti-sys.co.uk

[1] Coverage:
https://github.com/basho/riak_core/pull/917
https://github.com/basho/riak_kv/pull/1664
https://github.com/basho/riak_test/pull/1300

Repl:
https://github.com/basho/riak_repl/pull/777
https://github.com/basho/riak_test/pull/1301

Node Confirms:
https://github.com/basho/riak_test/pull/1299
https://github.com/basho/riak_test/pull/1298
https://github.com/basho/riak-erlang-client/pull/371
https://github.com/basho/riak_core/pull/915
https://github.com/basho/riak-erlang-http-client/pull/69

[2]
https://github.com/basho/riak_core/blob/develop-2.2/docs/claim-fixes.md
[3]
https://github.com/ramensen/riak_kv/blob/rs-physical-promises/docs/Node-Diversity.md
[4] https://github.com/basho/riak_repl/issues/774
https://github.com/basho/riak_repl/issues/772
[5] https://github.com/basho/riak_repl/pull/776
[6] https://github.com/martinsumner/leveled/





OTP20+ and rebar3 for Riak 3.0

2018-01-29 Thread Russell Brown
It _really_ needs doing. And a whole host of people have said they’re keen to 
work on it.

Ted Burghart did a lot of work on this at Basho (talk here: 
https://www.youtube.com/watch?v=TcUTsYjEon4) and of course Heinz Gies has made 
riak_core_ng available for some time.

At least 3 organisations have expressed a willingness to do the work, and as 
much as I hate meetings, I think we need to talk to get organised to get this 
done, rather than each org/individual attacking alone.

I hate Wednesdays almost as much as I hate meetings, so how does 1700 GMT, 
Wednesday 7th February sound?

Cheers

Russell





JMX/SNMP in upcoming OS Riak release

2017-12-19 Thread Russell Brown
Hi,
Luckily, some riak_tests failed when upgrading from riak_ee to the next open
source Riak. JMX config was why.

Basho's old Riak EE came with replication, JMX, and SNMP. bet365 opensourced 
all these, and have added replication to open source Riak. Running the 
replication riak_test suite leads to node start up failures when upgrading in 
place from Riak EE to OS Riak >= 2.2.4. The failures are caused by existing JMX 
properties from cuttlefish's inclusion of the JMX schema file when it generated 
the Riak EE riak.conf. I have no idea if this would be an issue in a real life 
upgrade scenario (do people keep their riak.conf between upgrades?). Here are
the options as I see them; if you think of more or better ones, please say so.

• Do not add JMX and SNMP to OS Riak, and document that you need to 
remove all JMX/SNMP properties from riak.conf files before starting up. People 
will notice if their nodes don't start.

• Do not add JMX and SNMP to OS Riak and add a bunch of dummy JMX/SNMP 
properties to riak_kv's schema file. On start up, detect these properties and 
log an ERROR that JMX/SNMP are not supported.

• Add JMX and SNMP to OS Riak and both document and log a warning that 
they're deprecated. Remove after 2.2.5

I prefer solution 1. Let me know what you think.

Cheers

Russell


Re: riak_repl integration

2017-12-05 Thread Russell Brown
Hi Raghu,
At present Riak is still stuck on the r16 basho OTP. This will change next 
year. For now, and the next release of riak, r16 is the compiler you need.

If you want to use riak_repl with open source riak, there are a couple of 
options. There is the 2.2.4 tag in the basho riak repo, which bet365 kindly put 
together when riak_repl was open sourced. Cloning https://github.com/basho/riak 
and checking out tag 2.2.4, then running `make rel` will get you a local 
release of OS riak as it was at 2.2.3 + riak_repl. Someone did mention maybe 
building packages of 2.2.4, if you’d rather download than build your own, but I 
don’t know if that happened yet.

In order to _use_ MDC you can follow the docs on basho’s site 
http://docs.basho.com/riak/kv/2.2.3/configuring/v3-multi-datacenter/.

There is a branch develop-2.2.5 which is the active work for the next release 
of riak, which will contain riak_repl.  We plan to release riak-2.2.5 (OS 
riak+repl, and some small features+bug fixes) in early 2018.

Hope that helps

Cheers

Russell


On 5 Dec 2017, at 16:11, Raghavendra Sayana wrote:

> Hi All,
>  
> I want to use riak MDC replication using riak_repl module. Is there any 
> documentation on how I can perform this integration? Do you know what erlang 
> compiler version I should be using to compile the code? Any help on this is 
> appreciated.
>  
> Thanks
> Raghu
>  




Re: OTP migration

2017-11-13 Thread Russell Brown
Ted Burghart did a lot of this work already. It would be great if we could 
re-use some of the work.

He talks about it here - 
http://www.erlang-factory.com/sfbay2017/ted-burghart.html

The NHS commissioned a report on this work a while back, and I’ll ask and see 
if they are willing to share it.

Cheers

Russell

On 13 Nov 2017, at 12:15, Bryan Hunt  wrote:

> Done. 
> 
>> On 13 Nov 2017, at 12:10, Jean Parpaillon  wrote:
>> 
>> Hi Bryan,
>> The wiki page and tickets have been created on purpose, as I'd guessed much
>> work has already been done here and there, and the hardest part is now to
>> integrate it :)
>> 
>> -> can you add a line in the wiki page: 
>> https://github.com/basho/riak/wiki/OTP-migration-task-follow-up ?
>> 
>> Jean
>> 
>>> On Monday, 13 November 2017 at 12:05, Bryan Hunt wrote:
>>> To get the ball rolling, I’ve got a very low risk p/r for 
>>> riak-erlang-client to enable Erlang 20 support. 
>>> 
>>> Would be great to get it merged in. 
>>> 
>>> https://github.com/basho/riak-erlang-client/pull/367
>>> 
>>> 
 On 13 Nov 2017, at 11:17, Jean Parpaillon  wrote:
 
 Hi all,
 As announced during the meetup, at KBRW we want to help on the OTP 
 migration task.
 IIRC, the migration is targeted for post-2.2.5 release, so as not to postpone
 this release.
 As suggested by Russell, we're starting from nhs-riak-2.2.5 branch.
 
 I've opened a ticket for following-up our work on it:
 https://github.com/basho/riak/issues/929
 
 I've also created a wiki page for more detailed infos:
 https://github.com/basho/riak/wiki/OTP-migration-task-follow-up
 
 Don't hesitate to put any relevant information there : existing pr, 
 branches, etc
 
 Regards,
 -- 
 Jean Parpaillon
 --
 Senior Developper @ KBRW Adventure
 Chairman @ OW2 Consortium
 --
 Phone: +33 6 30 10 92 86
 im: jean.parpail...@gmail.com
 skype: jean.parpaillon
 linkedin: http://www.linkedin.com/in/jeanparpaillon/en
>>> 
>> -- 
>> Jean Parpaillon
>> --
>> Senior Developer @ KBRW Adventure
>> Chairman @ OW2 Consortium
>> --
>> Phone: +33 6 30 10 92 86
>> im: jean.parpail...@gmail.com
>> skype: jean.parpaillon
>> linkedin: http://www.linkedin.com/in/jeanparpaillon/en
> 




Deprecation of Riak SNMP and Riak JMX?

2017-11-13 Thread Russell Brown
Hi all,
It looks like we’re moving toward shipping the open source repl code with the 
next release of Riak.

I’m canvassing for opinions about the riak_snmp and riak_jmx portions of the 
enterprise code. Is there anyone out there that depends on these features? I’d 
like to deprecate them in the next release, and remove them in the release 
following that.

Cheers

Russell


NHS Riak release work, update

2017-11-09 Thread Russell Brown
Hi,
I’ve spent a little time lately getting Riak’s `make test` command to work. I 
wrote about it here


https://github.com/russelldb/russelldb.github.io/blob/master/make_test.md

It includes my take on what I think we agreed the immediate road map is.

Let me know if you find problems with the nhs-riak-2.2.5 branches, please. Or 
think I’m wrong about the roadmap.

The riak_test runs are going well. IMO we’re on target for an end of year 
release.

Cheers

Russell


Re: Workshop Summary

2017-10-25 Thread Russell Brown
Hi,

TL;DR - https://github.com/nhs-riak/riak/tree/rdb/nhs-riak-2.2.5 is the 
fork/branch that I’ve been making progress on.

From my understanding of what we discussed, or at least my view: in order that
we generate momentum and show signs of life, the best thing to do is release a
new Riak, but a low-risk Riak. Based on that, we (NHS) thought we'd take the
last known good release (riak-2.2.3) and branch off that. Use it as a basis for
some low-risk features/fixes and release it soon. The aim was this year, or at
latest January 2018. That release would contain some NHS features, a fix for
claim, G-Sets, and maybe some AAE fullsync repl fixes. I have been working on
the NHS fork for this for a while. My initial aim has been that a developer can
clone Riak, run `make test`, and have it pass. Next is to make riak-test pass.
Set up CI. Then add in the new features already fixed, and finally build and
release. Then there should be a reasonably low-risk, community Riak-2.2.5 for
us to deploy. After that, we build on it for the riskier work (OTP20, etc.)

The fork I’ve been working off is https://github.com/nhs-riak/riak. The branch 
that one should be able to build is nhs-riak-2.2.5 
(https://github.com/nhs-riak/riak/tree/rdb/nhs-riak-2.2.5). Cloning 
https://github.com/nhs-riak/riak.git and then `git checkout nhs-riak-2.2.5` and 
then `make test` should work. Though I have one or two test failures still on 
linux (not on OS X afaict.)

Each of the deps that I had to change in order to get the build working is also 
forked in the https://github.com/nhs-riak/ organisation. Each dependency's 
branch is named after the last released tag + “nhs-2.2.5”. For example, 
https://github.com/nhs-riak/riak_kv has branch 2.1.7-nhs-2.2.5 (since 2.1.7 was 
the last released tag.) When it all works the plan was to push each new branch 
to the canonical basho organisation and name it develop-2.2.5 (or something) 
and from there we can work towards the next release (and the one after, maybe a 
develop-3.0 branch based off 2.2.5.) We also sidestep the issue of reviewing 
and fixing up all of the basho unreleased dev code, which we can cherry-pick as 
we need.

I don’t know if this is the best idea, but it seems like a pragmatic approach 
to building momentum and getting something of value released, as well as 
setting us up for longer term improvements.

It is sort of what we agreed at the meet-up, but I’d love to hear views from 
the wider community.

Cheers

Russell

On 25 Oct 2017, at 10:28, Jean Parpaillon  wrote:

> Hi Andrew, all,
> IIRC, we have mentioned an important step to bootstrap the community is to 
> identify baselines.
> Can anyone confirm that contributions should be based on NHS branch (which 
> one ?) ? In particular, I'm interested in contributing to OTP19/20 upgrade.
> 
> Furthermore, I've mentioned OW2 organization would be glad to provide the 
> community with infrastructure: gitlab, CI (tools + servers), etc.
> Are you interested in it ? Just let me know if it is the case or have a look 
> at the project submission 
> page:https://projects.ow2.org/bin/view/wiki/submission#
> 
> Best regards,
> Jean
> 
>> On Tuesday, 24 October 2017 at 10:32, andrew.de...@bet365.com wrote:
>> Hi,
>>  
>> A summary of the RIAK Development Roadmap Workshop can be found here 
>> http://bet365techblog.com/riak-workshop-summary
>>  
>> Thanks,
>> Andy.
>>  
>> Andrew Deane
>> Systems Development Manager - Middleware
>> Hillside (Technology) Limited
>> andrew.de...@bet365.com
>> bet365.com
>>  
>> This email and any files transmitted with it are confidential and contain 
>> information which may be privileged or confidential and are intended solely 
>> to be for the use of the individual(s) or entity to which they are 
>> addressed. If you are not the intended recipient be aware that any 
>> disclosure, copying, distribution or use of the contents of this information 
>> is strictly prohibited and may be illegal. If you have received this email 
>> in error, please notify us by telephone or email immediately and delete it 
>> from your system. Activity and use of our email system is monitored to 
>> secure its effective operation and for other lawful business purposes. 
>> Communications using this system will also be monitored and may be recorded 
>> to secure effective operation and for other lawful business purposes. 
>> Internet emails are not necessarily secure. We do not accept responsibility 
>> for changes made to this message after it was sent. You are advised to scan 
>> this message for viruses and we cannot accept liability for any loss or 
>> damage which may be caused as a result of any computer virus.
>> 
>> This email is sent by a bet365 group entity. The bet365 group includes the 
>> following entities: Hillside (Shared Services) Limited (registration no. 
>> 3958393), Hillside (Spain New Media) Plc (registration no. 07833226), bet365 
>> Group Limited (registration no. 4241161),

Re: RIAK Roadmap Workshop

2017-10-04 Thread Russell Brown
Hi,
Interesting to hear about forks out there running in production.

WRT the meet-up, things we’d be happy to talk about/like to discuss if there’s 
time on the agenda. We’d be able to give talks/lead discussions on these topics:

- Repl, Rabl, AAE, and Fullsync
- Leveled
- New indexing and querying and the future CRDT work
- A review of the estate from the NHS POV (tech debt, OTP20, Build and release 
etc)

Cheers

Russell

On 3 Oct 2017, at 08:26, Jean Parpaillon  wrote:

> Hi all,
> Thanks for taking over riak management.
> I am strongly committed to the success of this takeover, as we intensively use 
> riak for our customers' solutions.
> Arnaud Wetzel, founder and CTO of KBRW, the company I'm currently working for, 
> has been contributing to riak but, due to the lack of visibility on the 
> roadmap, we have also been maintaining our own build of riak.
> Precisely, we use 3 flavours of riak:
> * our own build, based on riak 1.1, with additional patches: for instance, 
> https://github.com/basho/merge_index/pull/32
> * a vanilla riak 2.2
> * riak TS 1.3.0
> 
> As many of us, I suppose, we are really interested in sharing the efforts on 
> riak and want to contribute to it.
> 
> In a few words, in the past we (i.e. Arnaud Wetzel :) ) have contributed to 
> the following features that are now integrated upstream:
> * IPV6 support (mainly through BEAM)
> * Cassandra search algorithm integration into riak_search (pending pull 
> request)
> * geoloc search, now merged into SOLR
> * Symfony bundle for riak (yeah, some use it :) )
> 
> Regarding the license, we also agree on Apache 2, and would like to avoid GPL.
> 
> Regarding the governance, we support the following points:
> * limit community disruption
> * looking at how to mitigate the roadmap uncertainty and lack of 
> transparency of the recent period.
> We are convinced that a better balance between community and business should 
> help to avoid this issue.
> 
> I need now to introduce myself with my other hat of OW2 consortium chairman. 
> This organization aims at supporting open source projects providing:
> * supporting infrastructure: gitlab, continuous integration, mailing lists, 
> etc (in case project need them)
> * maturity assessment tools
> * best practices and governance guidelines to help reaching it
> * logistic support with a strong presence into international conferences, 
> dedicated booths for our corporate members, etc
> 
> I would be glad to join the meeting at Bet365's facilities to discuss these 
> points.
> 
> Regards,
> Jean
> 
> 
>> On Friday, 22 September 2017 at 13:56, andrew.de...@bet365.com wrote:
>>  Hi,
>>  
>> We are beginning to plan out the two day workshop [1]. We are thinking the 
>> first day will start with an introduction from Martin Davies, in which 
>> Martin will detail current status and the asset list.
>>  
>> After which we will get into project questions, such as:
>>  
>> Which license?
>> The general consensus is Apache2. We need to complete the discussion as 
>> HyperLogLog has been introduced which is GPL’d. Our thoughts are to replace 
>> HLL.
>>  
>> Baseline?
>> We have heard some discussion around where to baseline the codebase. The 
>> concern is that during the demise of Basho the correct level of diligence 
>> was not applied to promotes, meaning code such as HLL has crept in.
>>  
>> Governance?
>> How are we to manage the project going forward? Our feeling is we do not 
>> need a formal body as being discussed in the slack group. We see the project 
>> going forward as a simple repo into which likeminded developers agree and 
>> contribute changes. Placing overbearing procedure and structure on what is 
>> currently a small community will dissuade other from joining and 
>> contributing.
>>  
>> Supporting infrastructure.
>> The Basho websites, github account, and mailing group are all included in 
>> the deal. We are currently supporting the mailing list, and once the deal is 
>> complete we will continue to support the Basho domains; restricting 
>> community disruption.
>>  
>> Roadmap
>> Below, in no particular order, are the items we wish to address internally. 
>> If we can all share our lists we can come to an agreed community roadmap 
>> between us.
>>  
>> · Review / rationalisation of Basho JIRA tickets / git issues
>>  
>> · Replication
>>   o Address known issues (30+) to add stability
>>   o Selective replication
>>   o Persistent realtime queue
>>   o Review of general approach
>>     § In / out of band
>>     § Snapshot vs deltas
>> 
>> · General
>>   o Address known issues (20+) to add stability
>>   o Silent data loss bug in riak_kv
>> 
>> · Enhancements
>>   o G Sets
>>   o Big Sets
>>   o CRDT Maps
>>   o Async read / write vnodes
>>   o Split backend vnodes
>>   o Head requests – don’t return body to co-ordinator
>>   o Rudimentary indexing (leverage TS work)
>> 
>> · Consolidation / feature cross population of KV, CS, and TS

Re: RIAK Roadmap Workshop

2017-09-22 Thread Russell Brown
Cool, thanks Andy.

On 22 Sep 2017, at 18:14, andrew.de...@bet365.com wrote:

> Thanks Russ.
> 
> I wasn't passing comment on any of the HLL work, only the GPL dep.
> 
> Thanks,
> Andy.
> 
> 
> Andrew Deane
> Systems Development Manager - Middleware
> Hillside (Technology) Limited
> andrew.de...@bet365.com
> bet365.com
> -Original Message-
> From: Russell Brown [mailto:russell.br...@icloud.com] 
> Sent: 22 September 2017 17:54
> To: Andrew Deane
> Cc: riak-users
> Subject: Re: RIAK Roadmap Workshop
> 
> Hi,
> I don't want to hijack the thread, thanks for posting the roadmap, I hope all 
> other interested parties can do the same. It's going to be a very valuable 
> meet up and I look forward to attending.
> 
> However, I'm sorry for being a picky picky pedant but I worked with the guy 
> who built the HLL feature, and it is a thorough, diligent, and professional 
> feature added as part of the release cycle. I was concerned about the GPL 
> dependancy (Proper) but after some investigation it looks like it is OK since 
> it is not shipped with Riak or linked in anyway, instead it is a test only 
> dependancy brought in by the dependancy on Hyper (but we should still get 
> more clarification there.) But the HLL work itself is top drawer, thoroughly 
> tested, and reviewed (perhaps more so than many Riak features.)
> 
> I do think that in the develop and unreleased branches there is probably 
> questionable code that isn't yet production ready, but I need to defend the 
> HLL work. It's good work. It's my fault that you've got the wrong end of the 
> stick about the HLL work because of my concerns over the GPL dependency. 
> Sorry.
> 
> Cheers
> 
> Russell
> 
> On 22 Sep 2017, at 14:56, andrew.de...@bet365.com wrote:
> 
>> Hi,
>> 
>> We are beginning to plan out the two day workshop [1]. We are thinking the 
>> first day will start with an introduction from Martin Davies, in which 
>> Martin will detail current status and the asset list.
>> 
>> After which we will get into project questions, such as:
>> 
>> Which license?
>> The general consensus is Apache2. We need to complete the discussion, as 
>> HyperLogLog has been introduced, which is GPL'd. Our thoughts are to replace 
>> HLL.
>> 
>> Baseline?
>> We have heard some discussion around where to baseline the codebase. The 
>> concern is that during the demise of Basho the correct level of diligence was 
>> not applied to promotions, meaning code such as HLL has crept in.
>> 
>> Governance?
>> How are we to manage the project going forward? Our feeling is we do not 
>> need a formal body, as is being discussed in the slack group. We see the project 
>> going forward as a simple repo into which like-minded developers agree and 
>> contribute changes. Placing overbearing procedure and structure on what is 
>> currently a small community will dissuade others from joining and 
>> contributing.
>> 
>> Supporting infrastructure.
>> The Basho websites, github account, and mailing group are all included in 
>> the deal. We are currently supporting the mailing list, and once the deal is 
>> complete we will continue to support the Basho domains, minimising 
>> community disruption.
>> 
>> Roadmap
>> Below, in no particular order, are the items we wish to address internally. 
>> If we can all share our lists we can come to an agreed community roadmap 
>> between us.
>> 
>> - Review / rationalisation of Basho JIRA tickets / git issues
>> 
>> - Replication
>>   - Address known issues (30+) to add stability
>>   - Selective replication
>>   - Persistent realtime queue
>>   - Review of general approach
>>     - In / out of band
>>     - Snapshot vs deltas
>> 
>> - General
>>   - Address known issues (20+) to add stability
>>   - Silent data loss bug in riak_kv
>> 
>> - Enhancements
>>   - G Sets
>>   - Big Sets
>>   - CRDT Maps
>>   - Async read / write vnodes
>>   - Split backend vnodes
>>   - Head requests - don't return body to co-ordinator
>>   - Rudimentary indexing (leverage TS work)
>> 
>> - Consolidation / feature cross-population of KV, CS, and TS codebases
>> 
>> - Erlang / OTP upgrade
>> 
>> 
>> At this point we want to keep the structure of the day loose to allow the 
>> conversations to flow, our collective priorities to take precedence.
>> 
>> The sessions will be live streamed, with details to follow.
>> 
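An aside on the HLL question raised above: HyperLogLog estimates set cardinality in a small, fixed amount of memory by hashing each element and keeping, per register, the longest run of leading zero bits observed. The toy sketch below is purely illustrative; it is not Riak's implementation (which, as the thread notes, rests on the Hyper library), and the precision parameter and hash choice here are arbitrary:

```python
import hashlib
import math

class HyperLogLog:
    """Toy HyperLogLog estimator (illustrative only, not Riak's implementation)."""

    def __init__(self, p=14):
        self.p = p                      # precision: 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        # Take 64 bits of a hash; the first p bits pick a register.
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)
        rest = h & ((1 << (64 - self.p)) - 1)
        # Rank = position of the leftmost 1-bit in the remaining bits.
        rank = (64 - self.p) - rest.bit_length() + 1
        if rank > self.registers[idx]:
            self.registers[idx] = rank

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)   # standard bias constant
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        if raw <= 2.5 * self.m:                 # small-range (linear counting) correction
            zeros = self.registers.count(0)
            if zeros:
                return self.m * math.log(self.m / zeros)
        return raw

hll = HyperLogLog()
for i in range(10_000):
    hll.add(f"user-{i}")
print(round(hll.count()))   # close to 10000, within a few percent
```

Note that re-adding an element never changes any register, which is what makes the estimate insensitive to duplicates.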

[ANN] rabl, RabbitMQ based, open source, realtime replication for Riak

2017-09-04 Thread Russell Brown
Hi,

Before I knew about bet365's acquisition of Basho's assets I started
work for the NHS on an open source realtime replication application
for Riak. It's called rabl, and uses RabbitMQ. I wrote an introductory
blog post, which you can read here:

https://github.com/nhs-riak/rabl/blob/master/docs/introducing.md

This is pre-release software. Probably best described as alpha.

Feel free to give it a poke and let me know what you think. If you run
it (not in production!) and find problems please use the `issues` for
the github repository. Time will tell whether we iterate on it or give
it up for Open Source Basho/bet365 realtime.
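For readers who skip the blog post, the broad pattern is: a hook on the source cluster publishes each write to a broker, and a consumer on the sink side drains the broker and applies the writes, so the two clusters never talk to each other directly. A minimal sketch of that pattern follows, with an in-memory queue standing in for RabbitMQ; the names and structure are my stand-ins for illustration, not rabl's API:

```python
import queue
import threading

# In-memory stand-ins: two "clusters" and a broker queue.
source, sink = {}, {}
changes = queue.Queue()

def put(key, value):
    """Write to the source cluster; a hook publishes the change."""
    source[key] = value
    changes.put((key, value))       # rabl would publish to RabbitMQ here

def consume(stop):
    """Sink-side consumer: drain the queue, apply writes to the sink."""
    while not stop.is_set():
        try:
            key, value = changes.get(timeout=0.05)
        except queue.Empty:
            continue
        sink[key] = value
        changes.task_done()

stop = threading.Event()
worker = threading.Thread(target=consume, args=(stop,), daemon=True)
worker.start()

for i in range(100):
    put(f"bucket/key{i}", i)

changes.join()                      # block until every change is applied
stop.set()
worker.join()
assert sink == source               # sink has converged with source
```

The design choice the broker buys you is decoupling: if the sink is down, changes accumulate in the queue instead of failing the source's writes.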

Cheers

Russell

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Lets come together - how riak(_core) should be developed in the future?

2017-08-17 Thread Russell Brown
Great idea Heinz, I think speaking together is the right start.

I’m free next week, but a key member of our team is not. I know we’re never 
going to get everyone to agree on a time, but if we could do it early the 
following week it would suit more. Otherwise, I will attend and pass info 
along, so don’t worry. I think bet365 should be involved too. They probably 
have the largest team anywhere working on Riak.

Maybe a doodle http://doodle.com/en_GB/ would be a good thing?

Russell

On 17 Aug 2017, at 15:47, Heinz N. Gies  wrote:

> Heya everyone, 
> I figured I’ll pick up the initiative here and see if we all can come 
> together before fragmentation hits. Annette mentioned this today and I think 
> she’s right.
> 
> There are many people that want to see riak(_core) continue to evolve and 
> improve. Off the top of my head there are ES - who picked up support for 
> riak, NHS/UK - which is using it and seem to keep doing so, there is Gordon, 
> Annette, Chris, Tristan, Mariano and me who all have an interest to keep 
> things going. I’m pretty sure I forgot a good few people (sorry! It doesn’t 
> mean I love you any less, I just have a brain like a sieve).
> 
> Amongst all of us I’m sure we have quite a few forks. Tristan, Chris, Mariano 
> and me started https://github.com/Kyorai to consolidate the git repos that 
> correspond to the current hex packages as an example, NHS runs a fork of core 
> at https://github.com/ramensen/riak_core and I guess ES will have one too by 
> now. (Again apologies if I forgot someone).
> 
> I am quite sure that all of us have the same goal and probably are interested 
> in the same things (keeping riak(_core) strong). So what I’d like to suggest 
> is we find a time to sit together and chat about how we pool resources 
> instead of trying to cross merge in N different places. Perhaps late next 
> week EU afternoon US morning? Please raise a hand if interested, discuss and 
> share times that would work for you :)
> 
> Cheers,
> Heinz
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Riak Core Claim Bug Fixes

2017-08-17 Thread Russell Brown

Hi,
I wrote up a blog post about two fixes to riak_core_claim made by/with
NHS-Digital.

https://github.com/ramensen/riak_core/blob/rdb/rs-claim-tail-violations/docs/claim-fixes.md

This work follows on from Martin Sumner's post to the list about the
discovery of these issues and some potential fixes
https://github.com/infinityworks/riak_core/blob/mas-claimv2issues/docs/ring_claim.md
back in May of this year.

TL;DR:
Claim no longer generates rings with avoidable tail violations and
unbalanced vnode distributions.
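For context: a preference list is a run of target_n_val consecutive partitions in the ring, and the ring wraps around, so the last few partitions form preflists with the first few; a tail violation is a wrap-around preflist that repeats a node and therefore cannot hold n_val distinct replicas. A minimal sketch of the check, my own illustration rather than the riak_core code:

```python
def preflist_violations(ring, target_n_val=3):
    """Return starting indices whose preflist (with wrap-around) repeats a node."""
    size = len(ring)
    bad = []
    for i in range(size):
        # Each preflist is target_n_val consecutive owners, wrapping at the end.
        preflist = [ring[(i + j) % size] for j in range(target_n_val)]
        if len(set(preflist)) < target_n_val:
            bad.append(i)           # fewer than n_val distinct nodes
    return bad

# 6 partitions over 3 nodes: every preflist, including the wrapping ones, is distinct.
print(preflist_violations(["a", "b", "c", "a", "b", "c"]))            # []

# 8 partitions over 3 nodes: the preflists starting in the tail wrap and repeat a node.
print(preflist_violations(["a", "b", "c", "a", "b", "c", "a", "b"]))  # [6, 7]
```

The second ring shows why this is a "tail" problem: the body of the ring is fine, and only the preflists that straddle the wrap point repeat an owner.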

We're still figuring out the best way to release these as part of a Riak build.

Cheers

Russell


Re: Status and maturity of riak-cs

2017-08-01 Thread Russell Brown
Not sure how great an idea it is to inject an unfinished solution into the 
conversation, but Scott Fritchie and a small team were working on Machi, before 
it was short-sightedly shelved by management.

https://github.com/basho/machi

Maybe the world could use another blob store after all?


On 1 Aug 2017, at 08:50, Javier Palacios  wrote:

> 
>> -Mensaje original-
>> De: riak-users [mailto:riak-users-boun...@lists.basho.com] En nombre de
>> Toby Corkindale
>> 
>> I've been actively trying to migrate over to Minio for S3 storage, instead.
>> Worth a look, and unlike riak CS, has had lots of active development in 
>> recent
>> times.
> 
> Regarding minio, we had a look a few months ago when deciding about object 
> storage. We discarded it mainly because at that time there was no way to 
> replace a damaged disk. That is, if you got 4 disk and 1 breaks, you will 
> keep with 3 disks. Self-healing was close to beta stage, but the fact that it 
> weren't considered a zero-day feature discouraged me a bit. It was also 
> unclear about growing capabilities respect to the allocated storage size. 
> Actually, I was told on the slack channel that it wasn't possible.
> The other primary alternative we evaluate was swift, but discarded because it 
> seemed too big for our current needs & infrastructure.
> 
> Javier Palacios
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Status and maturity of riak-cs

2017-07-19 Thread Russell Brown
I was really only thinking of LeoFS, Ceph, and Joyent Manta.

There must be others.

Or stay on CS. What do you need to stay on it? Basho were certainly not doing 
anything.

On 20 Jul 2017, at 06:57, Alex De la rosa  wrote:

> "However there are alternatives under active development."... can you please 
> list them? I'm also interested on a CS alternative.
> 
> Thanks,
> Alex
> 
> On Wed, Jul 19, 2017 at 9:27 PM, Russell Brown  
> wrote:
> Hi Stefan
> On the one hand, even before the demise of Basho, they’d stopped supporting 
> Riak CS. On the other, there is an organisation based in Japan, but with an 
> international remote team, that supports other CS customers, so may well be a 
> choice of support.
> 
> The CS code base has not had a huge amount of recent attention, but there are 
> plenty of people running it in industry, at reasonable scale.
> 
> There’s a genuine market of providers of ex-Basho products, and a community 
> of CS users. However there are alternatives under active development.
> 
> Regards
> 
> Russell
> 
> On 19 Jul 2017, at 17:27, Stefan Funk  wrote:
> 
> > Hi everybody,
> >
> > I'm new to Riak-CS and just joined the group.
> >
> > We've been exploring Riak-CS now for a couple of days and consider it as a 
> > potential inhouse-alternative to external S3-based storage providers.
> >
> > Given the last commit was in January 2016, the question rose as to how well 
> > the project is supported and how mature the solution is.
> >
> > I'd be very thankful for any comments from the community on this.
> >
> > Best regards
> > Stefan
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Status and maturity of riak-cs

2017-07-19 Thread Russell Brown
Hi Stefan
On the one hand, even before the demise of Basho, they’d stopped supporting 
Riak CS. On the other, there is an organisation based in Japan, but with an 
international remote team, that supports other CS customers, so may well be a 
choice of support.

The CS code base has not had a huge amount of recent attention, but there are 
plenty of people running it in industry, at reasonable scale.

There’s a genuine market of providers of ex-Basho products, and a community of 
CS users. However there are alternatives under active development.

Regards

Russell

On 19 Jul 2017, at 17:27, Stefan Funk  wrote:

> Hi everybody,
> 
> I'm new to Riak-CS and just joined the group.
> 
> We've been exploring Riak-CS now for a couple of days and consider it as a 
> potential inhouse-alternative to external S3-based storage providers.
> 
> Given the last commit was in January 2016, the question rose as to how well 
> the project is supported and how mature the solution is.
> 
> I'd be very thankful for any comments from the community on this.
> 
> Best regards
> Stefan
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Is Riak dead?

2017-07-15 Thread Russell Brown

On 15 Jul 2017, at 01:29, Alexander Sicular  wrote:

> I'm not dancing on anyone's grave. Im a big fan of Riak, you know? I'm just 
> pointing out that there is in fact a grave and if there is any hope for a 
> Riak future the project will have to rebrand. 

This is the thing we disagree on. But like I said before, neither of us is a 
lawyer, and even if you’re right, it hardly matters. It’s inconsequential.

> 
> I'm sorry, but no company in their right mind would seriously consider using 
> Riak today if they weren't already using Riak. That's today, the story may 
> change tomorrow. 

Again, I disagree, but I’m personally not that interested in looking for new 
Riak users at the moment. I think now is the time to shore up the existing 
customers who want to stay with Riak. Chasing non-existent new business whilst 
ignoring the core users is a large part of what did for Basho.

> 
> 
>> On Fri, Jul 14, 2017 at 5:41 PM, Russell Brown  wrote:
>> 
>>> On 14 Jul 2017, at 23:36, Dave King  wrote:
>>> 
>>> You asked what difference the tm made.  The tm invokes clause 6.  So yes it 
>>> means moving forward means a new name.  That's one of the things to be 
>>> considered.
>> 
>> Agree, so if we’re sticking to the subject, “Is Riak dead?” the worst case 
>> is: “No, but the name could potentially have to change some time in the future”, right?
>> 
>> I asked rhetorically what difference the TM made. I’ve also read the 
>> license, and taken legal advice. This was not in any way a surprise.
>> 
>>> 
>>> - Peace
>>> Dave
>>> 
>>> 
>>> On Fri, Jul 14, 2017 at 4:30 PM, Russell Brown  
>>> wrote:
>>> 
>>> On 14 Jul 2017, at 23:28, Dave King  wrote:
>>> 
>>>> No, but I can read.
>>> 
>>> I read it too. I’m not sure what your point is. A day may come when the 
>>> name has to change?
>>> 
>>>> 
>>>> - Peace
>>>> Dave
>>>> 
>>>> 
>>>> On Fri, Jul 14, 2017 at 4:27 PM, Russell Brown  
>>>> wrote:
>>>> 
>>>> On 14 Jul 2017, at 23:26, Dave King  wrote:
>>>> 
>>>>> 6. Trademarks. This License does not grant permission to use the trade 
>>>>> names, trademarks, service marks, or product names of the Licensor, 
>>>>> except as required for reasonable and customary use in describing the 
>>>>> origin of the Work and reproducing the content of the NOTICE file.
>>>>> 
>>>>> So At minimum I read that as there would need to be a new product name, 
>>>>> and the docs for that product could say formerly known as Riak.
>>>> 
>>>> You are a lawyer?
>>>> 
>>>>> 
>>>>> - Peace
>>>>> Dave
>>>>> 
>>>>> On Fri, Jul 14, 2017 at 4:22 PM, Russell Brown  
>>>>> wrote:
>>>>> 
>>>>> On 14 Jul 2017, at 23:17, Dave King  wrote:
>>>>> 
>>>>>> Looks like Basho has the trademark on Riak.
>>>>>> http://www.trademarkia.com/riak-77954950.html
>>>>> 
>>>>> They sure do. And if you look at the Apache2 license you’ll see that it 
>>>>> grants use of the name to identify the source.
>>>>> 
>>>>> Look, I’m not a lawyer, and I’m guessing you’re not a lawyer. What 
>>>>> difference does the name or trademark make? There are a bunch of active 
>>>>> users of Riak, some of whom have made it clear they plan to continue 
>>>>> using it. Dance on Basho’s grave all you want, but Riak isn’t dead yet.
>>>>> 
>>>>>> 
>>>>>> - Peace
>>>>>> Dave
>>>>>> 
>>>>>> 
>>>>>> On Fri, Jul 14, 2017 at 4:08 PM, Alexander Sicular  
>>>>>> wrote:
>>>>>> 
>>>>>> 
>>>>>>> On Jul 14, 2017, at 11:32, Russell Brown  wrote:
>>>>>>> 
>>>>>>> What do you mean “encumbered”? Riak is the name of an Apache2 licensed 
>>>>>>> open source database, so it can continue to be used to describe that 
>>>>>>> apache 2 licensed database, please don’t spread FUD.
>>>>>> 
>>>>>> You willing to invest time in a project that could be legally encumbered 
>>>>>> at some point in the future because lawyers? You willing to put that 
>>>>>> beyond the clown fiesta that is Basho? I don't think so.
>>>>>> ___
>>>>>> riak-users mailing list
>>>>>> riak-users@lists.basho.com
>>>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 




Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 23:36, Dave King  wrote:

> You asked what difference the tm made.  The tm invokes clause 6.  So yes it 
> means moving forward means a new name.  That's one of the things to be 
> considered.   

Agree, so if we’re sticking to the subject, “Is Riak dead?” the worst case is: 
“No, but the name could potentially have to change some time in the future”, right?

I asked rhetorically what difference the TM made. I’ve also read the license, 
and taken legal advice. This was not in any way a surprise.

> 
> - Peace
> Dave
> 
> 
> On Fri, Jul 14, 2017 at 4:30 PM, Russell Brown  wrote:
> 
> On 14 Jul 2017, at 23:28, Dave King  wrote:
> 
> > No, but I can read.
> 
> I read it too. I’m not sure what your point is. A day may come when the name 
> has to change?
> 
> >
> > - Peace
> > Dave
> >
> >
> > On Fri, Jul 14, 2017 at 4:27 PM, Russell Brown  
> > wrote:
> >
> > On 14 Jul 2017, at 23:26, Dave King  wrote:
> >
> > > 6. Trademarks. This License does not grant permission to use the trade 
> > > names, trademarks, service marks, or product names of the Licensor, 
> > > except as required for reasonable and customary use in describing the 
> > > origin of the Work and reproducing the content of the NOTICE file.
> > >
> > > So At minimum I read that as there would need to be a new product name, 
> > > and the docs for that product could say formerly known as Riak.
> >
> > You are a lawyer?
> >
> > >
> > > - Peace
> > > Dave
> > >
> > > On Fri, Jul 14, 2017 at 4:22 PM, Russell Brown  
> > > wrote:
> > >
> > > On 14 Jul 2017, at 23:17, Dave King  wrote:
> > >
> > > > Looks like Basho has the trademark on Riak.
> > > > http://www.trademarkia.com/riak-77954950.html
> > >
> > > They sure do. And if you look at the Apache2 license you’ll see that it 
> > > grants use of the name to identify the source.
> > >
> > > Look, I’m not a lawyer, and I’m guessing you’re not a lawyer. What 
> > > difference does the name or trademark make? There are a bunch of active 
> > > users of Riak, some of whom have made it clear they plan to continue 
> > > using it. Dance on Basho’s grave all you want, but Riak isn’t dead yet.
> > >
> > > >
> > > > - Peace
> > > > Dave
> > > >
> > > >
> > > > On Fri, Jul 14, 2017 at 4:08 PM, Alexander Sicular  
> > > > wrote:
> > > >
> > > >
> > > > > On Jul 14, 2017, at 11:32, Russell Brown  
> > > > > wrote:
> > > > >
> > > > > What do you mean “encumbered”? Riak is the name of an Apache2 
> > > > > licensed open source database, so it can continue to be used to 
> > > > > describe that apache 2 licensed database, please don’t spread FUD.
> > > >
> > > > You willing to invest time in a project that could be legally 
> > > > encumbered at some point in the future because lawyers? You willing to 
> > > > put that beyond the clown fiesta that is Basho? I don't think so.
> > > > ___
> > > > riak-users mailing list
> > > > riak-users@lists.basho.com
> > > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> > > >
> > >
> > >
> >
> >
> 
> 




Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 23:28, Dave King  wrote:

> No, but I can read.

I read it too. I’m not sure what your point is. A day may come when the name 
has to change?

> 
> - Peace
> Dave
> 
> 
> On Fri, Jul 14, 2017 at 4:27 PM, Russell Brown  wrote:
> 
> On 14 Jul 2017, at 23:26, Dave King  wrote:
> 
> > 6. Trademarks. This License does not grant permission to use the trade 
> > names, trademarks, service marks, or product names of the Licensor, except 
> > as required for reasonable and customary use in describing the origin of 
> > the Work and reproducing the content of the NOTICE file.
> >
> > So At minimum I read that as there would need to be a new product name, and 
> > the docs for that product could say formerly known as Riak.
> 
> You are a lawyer?
> 
> >
> > - Peace
> > Dave
> >
> > On Fri, Jul 14, 2017 at 4:22 PM, Russell Brown  
> > wrote:
> >
> > On 14 Jul 2017, at 23:17, Dave King  wrote:
> >
> > > Looks like Basho has the trademark on Riak.
> > > http://www.trademarkia.com/riak-77954950.html
> >
> > They sure do. And if you look at the Apache2 license you’ll see that it 
> > grants use of the name to identify the source.
> >
> > Look, I’m not a lawyer, and I’m guessing you’re not a lawyer. What 
> > difference does the name or trademark make? There are a bunch of active 
> > users of Riak, some of whom have made it clear they plan to continue using 
> > it. Dance on Basho’s grave all you want, but Riak isn’t dead yet.
> >
> > >
> > > - Peace
> > > Dave
> > >
> > >
> > > On Fri, Jul 14, 2017 at 4:08 PM, Alexander Sicular  
> > > wrote:
> > >
> > >
> > > > On Jul 14, 2017, at 11:32, Russell Brown  wrote:
> > > >
> > > > What do you mean “encumbered”? Riak is the name of an Apache2 licensed 
> > > > open source database, so it can continue to be used to describe that 
> > > > apache 2 licensed database, please don’t spread FUD.
> > >
> > > You willing to invest time in a project that could be legally encumbered 
> > > at some point in the future because lawyers? You willing to put that 
> > > beyond the clown fiesta that is Basho? I don't think so.
> > > ___
> > > riak-users mailing list
> > > riak-users@lists.basho.com
> > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> > >
> >
> >
> 
> 




Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 23:26, Dave King  wrote:

> 6. Trademarks. This License does not grant permission to use the trade names, 
> trademarks, service marks, or product names of the Licensor, except as 
> required for reasonable and customary use in describing the origin of the 
> Work and reproducing the content of the NOTICE file.
> 
> So At minimum I read that as there would need to be a new product name, and 
> the docs for that product could say formerly known as Riak.

You are a lawyer?

> 
> - Peace
> Dave
> 
> On Fri, Jul 14, 2017 at 4:22 PM, Russell Brown  wrote:
> 
> On 14 Jul 2017, at 23:17, Dave King  wrote:
> 
> > Looks like Basho has the trademark on Riak.
> > http://www.trademarkia.com/riak-77954950.html
> 
> They sure do. And if you look at the Apache2 license you’ll see that it 
> grants use of the name to identify the source.
> 
> Look, I’m not a lawyer, and I’m guessing you’re not a lawyer. What difference 
> does the name or trademark make? There are a bunch of active users of Riak, 
> some of whom have made it clear they plan to continue using it. Dance on 
> Basho’s grave all you want, but Riak isn’t dead yet.
> 
> >
> > - Peace
> > Dave
> >
> >
> > On Fri, Jul 14, 2017 at 4:08 PM, Alexander Sicular  
> > wrote:
> >
> >
> > > On Jul 14, 2017, at 11:32, Russell Brown  wrote:
> > >
> > > What do you mean “encumbered”? Riak is the name of an Apache2 licensed 
> > > open source database, so it can continue to be used to describe that 
> > > apache 2 licensed database, please don’t spread FUD.
> >
> > You willing to invest time in a project that could be legally encumbered at 
> > some point in the future because lawyers? You willing to put that beyond 
> > the clown fiesta that is Basho? I don't think so.
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> 
> 




Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 23:17, Dave King  wrote:

> Looks like Basho has the trademark on Riak.
> http://www.trademarkia.com/riak-77954950.html

They sure do. And if you look at the Apache2 license you’ll see that it grants 
use of the name to identify the source.

Look, I’m not a lawyer, and I’m guessing you’re not a lawyer. What difference 
does the name or trademark make? There are a bunch of active users of Riak, 
some of whom have made it clear they plan to continue using it. Dance on 
Basho’s grave all you want, but Riak isn’t dead yet.

> 
> - Peace
> Dave
> 
> 
> On Fri, Jul 14, 2017 at 4:08 PM, Alexander Sicular  wrote:
> 
> 
> > On Jul 14, 2017, at 11:32, Russell Brown  wrote:
> >
> > What do you mean “encumbered”? Riak is the name of an Apache2 licensed open 
> > source database, so it can continue to be used to describe that apache 2 
> > licensed database, please don’t spread FUD.
> 
> You willing to invest time in a project that could be legally encumbered at 
> some point in the future because lawyers? You willing to put that beyond the 
> clown fiesta that is Basho? I don't think so.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 




Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 23:08, Alexander Sicular  wrote:

> 
> 
>> On Jul 14, 2017, at 11:32, Russell Brown  wrote:
>> 
>> What do you mean “encumbered”? Riak is the name of an Apache2 licensed open 
>> source database, so it can continue to be used to describe that apache 2 
>> licensed database, please don’t spread FUD.
> 
> You willing to invest time in a project that could be legally encumbered at 
> some point in the future because lawyers? You willing to put that beyond the 
> clown fiesta that is Basho?

Yup. Not just me. It’s just a name.

> I don't think so.
 Think different(tm)





Re: Is Riak dead?

2017-07-14 Thread Russell Brown

On 14 Jul 2017, at 17:26, Alexander Sicular  wrote:

> I love your enthusiasm Lloyd. How about starting with a slack channel
> and take it from there…

What’s wrong with the existing IRC channel?

> 
> Things that would kinda need to happen for Riak to grow beyond where it is 
> now:

Grow? Surely we’re just talking survival and maintenance in the first instance. 
Security for those orgs that depend on it. Growth was the problem, right?

> 
> - complete rebranding. Basho is dead. The term "Riak" may be encumbered.

What do you mean “encumbered”? Riak is the name of an Apache2 licensed open 
source database, so it can continue to be used to describe that apache 2 
licensed database, please don’t spread FUD.

> - new everything. name. domain. github repo. bla bla bla.

Again, I don’t agree. Why a new name? Obviously the basho domain is off limits, 
but why does that matter, any great content on there? Or just cartoon guys?

Numerous orgs already have full forks of all repos, and all the issues copied, 
and pull request histories etc. This wasn’t a surprise.

> 
> Instead of a framework, how can the Riak codebase actually become an
> Apache project?

Personally not sure of the benefit of being an apache project.

Russell

> 
> 
> 
> 
> 
> On Thu, Jul 13, 2017 at 8:06 PM,   wrote:
>> Hi,
>> 
>> 
>> 
>> If there is enough interest, I'd be happy to do the legwork in Boston and
>> put up a very modest contribution toward up-front cost with hope of recovery
>> through registrations.
>> 
>> 
>> 
>> Boston has conference spaces of all sizes. I would imagine a conference on
>> the future of open-source Riak would draw a fairly intimate group. I'm also
>> imagining a fairly bare-bones event to minimize cost of registration.
>> Registration cost should be just enough to cover the venue and nibbles.
>> 
>> 
>> 
>> Just to kick off brain-storming:
>> 
>> 
>> 
>> 1. Say two days--- Friday and Saturday
>> 
>> 2. Friday: Invite short (20 minute) proposals on key topics
>> 
>> a. Clarifying legal issues
>> 
>> b. Technical road map
>> 
>> c. Organizational framework; e.g. ala Apache Foundation or some such
>> 
>> d. Marketing; e.g. expanding the open-source community
>> 
>> 3. Saturday (or Saturday morning)
>> 
>> Discussion sessions with goal of recommendations for moving forward
>> 
>> 
>> 
>> How to get from here to there:
>> 
>> 
>> 
>> 1. Is there enough interest on the mailing list?
>> 
>> 2. How can we pin down an agenda?
>> 
>> 3. Can we get corporate sponsorships to minimize registration?
>> 
>> 4. How can we most widely promote the event?
>> 
>> 5. What else?
>> 
>> 
>> 
>> All the best,
>> 
>> 
>> 
>> Lloyd
>> 
>> 
>> 
>> 
>> 
>> -Original Message-
>> From: "Russell Brown" 
>> Sent: Thursday, July 13, 2017 4:00pm
>> To: ll...@writersglen.com
>> Cc: "riak-users" , "Senthilkumar Peelikkampatti"
>> , "Russell Brown" 
>> Subject: Re: Is Riak dead?
>> 
>> We have talked about it. Let's do it!
>> 
>> On Jul 13, 2017 7:56 PM, ll...@writersglen.com wrote:
>> 
>> Hmmm--- I wonder if anyone has given thought to a conference focused on the
>> future of open-source Riak?
>> 
>> 
>> 
>> I'm sure there are many good ideas out there re: roadmap and governance.
>> 
>> 
>> 
>> It's just too great not to be worth deep thought and prudent action.
>> 
>> 
>> 
>> All the best,
>> 
>> 
>> 
>> LRP
>> 
>> 
>> 
>> 
>> 
>> -Original Message-
>> From: "Russell Brown" 
>> Sent: Thursday, July 13, 2017 12:40pm
>> To: "Senthilkumar Peelikkampatti" 
>> Cc: "riak-users" 
>> Subject: Re: Is Riak dead?
>> 
>> Hi Senthilkumar,
>> No, Riak is not dead. Its parent company, Basho, is dead. And they kept the
>> keys. There are numerous forks that are being worked on. Hopefully something
>> that is a canonical, community-backed fork will emerge. Past mailing list
>> posts show strong support from large commercial and govt. organisations.
>> Riak is open source and will continue to be critical infrastructure for
>> numerous organisations in its niche for some time.
>> 
>> There are also a few support organisations springing up, as well as the
>> existing partners (Erlang Solutions, Trifork etc.) If you’re asking me, this
>> could be a very good thing for Riak in the medium to long term. It and its
>> users and community were served very badly by Basho these last couple of
>> years. It’s a little trodden under, but no way dead. Give it a little time
>> to bounce back.

Re: Is Riak dead?

2017-07-13 Thread Russell Brown

On 13 Jul 2017, at 22:52, Christopher Meiklejohn 
 wrote:

> It was a joke. :)

Always better explained. (https://en.wikipedia.org/wiki/DEC_Alpha)

> 
> On Thu, Jul 13, 2017 at 2:29 PM, Outback Dingo  wrote:
> VMS as in Virtual Machine(S)
> 
> On Thu, Jul 13, 2017 at 5:01 PM, Christopher Meiklejohn 
>  wrote:
> VMS?  I don't think we ever got Riak compiling on Alpha. 💛😀
> 
> On Jul 13, 2017 1:30 PM, "Outback Dingo"  wrote:
> I don't believe abandon is the proper term; resurrect and carry forth,
> we will not let you go quietly into the night
> 
> LETS DO THIS SHIT! I've got cloud resources I own... servers/VMS/XEN
> I'll do it myself if I need to.
> 
> On Thu, Jul 13, 2017 at 4:24 PM, Alexander Sicular  wrote:
> > Abandon hope all ye who enter here.
> >
> > On Thu, Jul 13, 2017 at 3:15 PM, Tom Santero  wrote:
> >> RICON: A New Hope
> >>
> >> On Thu, Jul 13, 2017 at 4:00 PM, Russell Brown 
> >> wrote:
> >>>
> >>> We have talked about it. Let's do it!
> >>>
> >>> On Jul 13, 2017 7:56 PM, ll...@writersglen.com wrote:
> >>>
> >>> Hmmm--- I wonder if anyone has given thought to a conference focused on
> >>> the future of open-source Riak?
> >>>
> >>>
> >>>
> >>> I'm sure there are many good ideas out there re: roadmap and governance.
> >>>
> >>>
> >>>
> >>> It's just too great not to be worth deep thought and prudent action.
> >>>
> >>>
> >>>
> >>> All the best,
> >>>
> >>>
> >>>
> >>> LRP
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> -Original Message-
> >>> From: "Russell Brown" 
> >>> Sent: Thursday, July 13, 2017 12:40pm
> >>> To: "Senthilkumar Peelikkampatti" 
> >>> Cc: "riak-users" 
> >>> Subject: Re: Is Riak dead?
> >>>
> >>> Hi Senthilkumar,
> >>> No, Riak is not dead. Its parent company, Basho, is dead. And they kept the
> >>> keys. There are numerous forks that are being worked on. Hopefully 
> >>> something
> >>> that is a canonical, community-backed fork will emerge. Past mailing list
> >>> posts show strong support from large commercial and govt. organisations.
> >>> Riak is open source and will continue to be critical infrastructure for
> >>> numerous organisations in its niche for some time.
> >>>
> >>> There are also a few support organisations springing up, as well as the
> >>> existing partners (Erlang Solutions, Trifork etc.) If you’re asking me,
> >>> this could be a very good thing for Riak in the medium to long term. It
> >>> and its users and community were served very badly by Basho these last
> >>> couple of years. It’s a little trodden under, but no way dead. Give it a
> >>> little time to bounce back.
> >>>
> >>> Cheers
> >>>
> >>> Russell
> >>>
> >>>
> >>> On 11 Jul 2017, at 08:15, Senthilkumar Peelikkampatti
> >>>  wrote:
> >>>
> >>> > Is Riak dead? No new activities on Riak GitHub repo.
> >>> >
> >>> > https://github.com/basho/riak_kv/pulse/monthly
> >>> > https://github.com/basho/riak/pulse/monthly
> >>> >
> >>> > Thanks,
> >>> > Senthil
> >>> > ___
> >>> > riak-users mailing list
> >>> > riak-users@lists.basho.com
> >>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>>
> >>>
> >>> ___
> >>> riak-users mailing list
> >>> riak-users@lists.basho.com
> >>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>>
> >>>
> >>>
> >>
> >>
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Is Riak dead?

2017-07-13 Thread Russell Brown
We have talked about it. Let's do it!

On Jul 13, 2017 7:56 PM, ll...@writersglen.com wrote:

Hmmm--- I wonder if anyone has given thought to a conference focused on the future of open-source Riak?
 
I'm sure there are many good ideas out there re: roadmap and governance.
 
It's just too great not to be worth deep thought and prudent action.
 
All the best,
 
LRP
 
 
-Original Message-
From: "Russell Brown"
Sent: Thursday, July 13, 2017 12:40pm
To: "Senthilkumar Peelikkampatti"
Cc: "riak-users"
Subject: Re: Is Riak dead?

Hi Senthilkumar,

No, Riak is not dead. Its parent company, Basho, is dead. And they kept the keys. There are numerous forks that are being worked on. Hopefully something that is a canonical, community-backed fork will emerge. Past mailing list posts show strong support from large commercial and govt. organisations. Riak is open source and will continue to be critical infrastructure for numerous organisations in its niche for some time.

There are also a few support organisations springing up, as well as the existing partners (Erlang Solutions, Trifork etc.) If you’re asking me, this could be a very good thing for Riak in the medium to long term. It and its users and community were served very badly by Basho these last couple of years. It’s a little trodden under, but no way dead. Give it a little time to bounce back.

Cheers

Russell

On 11 Jul 2017, at 08:15, Senthilkumar Peelikkampatti wrote:

> Is Riak dead? No new activities on Riak GitHub repo.
>
> https://github.com/basho/riak_kv/pulse/monthly
> https://github.com/basho/riak/pulse/monthly
>
> Thanks,
> Senthil
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Is Riak dead?

2017-07-13 Thread Russell Brown
Agree.

But at the moment they’re also not dead. Somewhat of a zombie, rumoured to be 
litigious. Circumspection is required.

It’ll happen. Don’t worry.

On 13 Jul 2017, at 17:51, Outback Dingo  wrote:

> well, if that's the case someone best be starting a new site and
> mailing list soon in case this one disappears
> 
> On Thu, Jul 13, 2017 at 12:40 PM, Russell Brown
>  wrote:
>> Hi Senthilkumar,
>> No, Riak is not dead. Its parent company, Basho, is dead. And they kept the
>> keys. There are numerous forks that are being worked on. Hopefully something
>> that is a canonical, community-backed fork will emerge. Past mailing list
>> posts show strong support from large commercial and govt. organisations.
>> Riak is open source and will continue to be critical infrastructure for
>> numerous organisations in its niche for some time.
>> 
>> There are also a few support organisations springing up, as well as the
>> existing partners (Erlang Solutions, Trifork etc.) If you’re asking me, this
>> could be a very good thing for Riak in the medium to long term. It and its
>> users and community were served very badly by Basho these last couple of
>> years. It’s a little trodden under, but no way dead. Give it a little time
>> to bounce back.
>> 
>> Cheers
>> 
>> Russell
>> 
>> 
>> On 11 Jul 2017, at 08:15, Senthilkumar Peelikkampatti  
>> wrote:
>> 
>>> Is Riak dead? No new activities on Riak GitHub repo.
>>> 
>>> https://github.com/basho/riak_kv/pulse/monthly
>>> https://github.com/basho/riak/pulse/monthly
>>> 
>>> Thanks,
>>> Senthil
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Is Riak dead?

2017-07-13 Thread Russell Brown
Hi Senthilkumar,
No, Riak is not dead. Its parent company, Basho, is dead. And they kept the
keys. There are numerous forks that are being worked on. Hopefully something
that is a canonical, community-backed fork will emerge. Past mailing list posts
show strong support from large commercial and govt. organisations. Riak is open
source and will continue to be critical infrastructure for numerous
organisations in its niche for some time.

There are also a few support organisations springing up, as well as the
existing partners (Erlang Solutions, Trifork etc.) If you’re asking me, this
could be a very good thing for Riak in the medium to long term. It and its
users and community were served very badly by Basho these last couple of years.
It’s a little trodden under, but no way dead. Give it a little time to bounce
back.

Cheers

Russell


On 11 Jul 2017, at 08:15, Senthilkumar Peelikkampatti  
wrote:

> Is Riak dead? No new activities on Riak GitHub repo. 
> 
> https://github.com/basho/riak_kv/pulse/monthly
> https://github.com/basho/riak/pulse/monthly
> 
> Thanks,
> Senthil
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Udon - mrallen1 (Mark Allen) · GitHub

2017-07-09 Thread Russell Brown

On 9 Jul 2017, at 21:54, Lloyd R. Prentice  wrote:

> Hello,
> 
> Just finished viewing Mark Allen's excellent 2015 Riak Core presentation and 
> have been reviewing his Udon code on GitHub.
> 
> https://github.com/mrallen1
> 
> Now I have two Micky-the-Dunce questions:
> 
> 1. Assuming a web-based system similar to Udon on a cluster of five physical 
> cores with data distributed across three virtual nodes, can one enter and 
> access the data through any one of the five physical nodes?
> 
> In other words, can one put a load balancer in front of the five physical 
> nodes to enter and access data regardless of where it's ultimately stored in 
> the cluster?

Yes, usually the node that handles the request has access to both the ring (a 
routing table/mapping of partitions to nodes) and the hash function.
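To make that concrete, here is a toy sketch of the local computation every node can do. This is not riak_core's actual code (the real implementation lives in the chash/riak_core_apl modules); the module name and details here are made up for illustration.

```erlang
%% Toy illustration only: hash the {Bucket, Key} pair onto the ring and take
%% the partition index. Since every node holds the same ring (the mapping of
%% partitions to nodes), any node behind a load balancer can route a request.
-module(ring_demo).
-export([partition_for/3]).

%% RingSize is the number of partitions, a power of two (e.g. 64).
partition_for(Bucket, Key, RingSize) ->
    <<HashInt:160/integer>> = crypto:hash(sha, term_to_binary({Bucket, Key})),
    HashInt rem RingSize.
```

The shared ring state then maps that partition index to the physical node(s) that own it, so the receiving node can forward or handle the request itself.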

> 
> 2. Can KV functionality be relatively easily built on top of Udon--- 
> providing storage of, say, Erlang terms as well as files?

I haven’t looked at Udon, but a KV store on riak core is a very common thing. 
There are quite a few examples out there, the largest (and most complex) being 
Riak itself https://github.com/basho/riak

> 
> Thanks to all,
> 
> LRP
> 
> 
> Sent from my iPad
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [IE] [IE] Re: [IE] Re: merge_contents error

2017-06-26 Thread Russell Brown
Good news.

No problem.

Cheers

Russell

On 26 Jun 2017, at 12:48, Mark Richard Thomas  wrote:

> Thanks Russell.
> 
> You're a star.
> 
> Right again - dvv_enabled property on a bucket.
> 
> My cluster is finally looking stable.
> 
> Mark
> Equifax Limited is registered in England with Registered No. 2425920. 
> Registered Office: Capital House, 25 Chapel Street, London NW1 5DS. Equifax 
> Limited is authorised and regulated by the Financial Conduct Authority.
> Equifax Touchstone Limited is registered in Scotland with Registered No. 
> SC113401. Registered Office: Exchange Tower,19 Canning Street, Edinburgh, EH3 
> 8EH.
> Equifax Commercial Services Limited is registered in the Republic of Ireland 
> with Registered No. 215393. Registered Office: IDA Business & Technology 
> Park, Rosslare Road, Drinagh, Wexford.
> 
> This message contains information from Equifax which may be confidential and 
> privileged. If you are not an intended recipient, please refrain from any 
> disclosure, copying, distribution or use of this information and note that 
> such actions are prohibited. If you have received this transmission in error, 
> please notify by e-mail postmas...@equifax.com.


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [IE] Re: [IE] Re: merge_contents error

2017-06-26 Thread Russell Brown
,[{file,"src/riak_kv_get_core.erl"},{line,262}]},{riak_kv_get_core,response,1,[{file,"src/riak_kv_get_core.erl"},{line,140}]},{riak_kv_get_fsm,waiting_vnode_r,2,[{file,"src/riak_kv_get_fsm.erl"},{line,334}]}]},[{gen_fsm,terminate,7,[{file,"gen_fsm.erl"},{line,622}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>ancestors: 
> [riak_kv_get_fsm_sj_4,riak_kv_get_fsm_sj_worker_sup,riak_kv_get_fsm_sj,sidejob_sup,<0.372.0>]
>messages: 
> [{'$gen_event',{r,{ok,{r_object,{<<"commercial">>,<<"commercial_danda">>},<<"02565069">>,[{r_content,{dict,7,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[[<<"Links">>]],[],[],[],[],[],[],[],[[<<"content-type">>,97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110],[<<"X-Riak-VTag">>,54,110,74,113,77,100,117,79,84,109,119,78,110,119,120,84,48,51,106,119,79,103]],[[<<"index">>]],[],[[<<"X-Riak-Last-Modified">>|{1494,930206,640553}]],[],[[<<"X-Riak-Meta">>],[<<"charset">>,117,116,102,45,56]]}}},<<"{\"company\":\"02565069\",\"identification\":{\"companyNo\":\"02565069\",\"name\":\"TCS
>  0\",\"alpha\":\"TCS\",\"roAddress\":{\"line1\":\"st cross street 
> C\",\"line2\":\"ross Road T\",\"line3\":\"hiruvanmayur 
> C\",\"line4\":\"hennai\",\"postCode\":\"cb6 
> 1ad\",\"kpostcode\":\"cb61ad\"},\"accountReferenceDate\":\"3004\",\"incorporationDate\":\"2009-01-01\",\"returnMadeUpDate\":\"2009-03-03\",\"latestAccountsFiledDate\":\"2009-02-02\",\"accountsType\":5,\"latestAnalysedAccountDate\":\"2008-04-30\",\"nextAnnualReturnDue\":\"2010-03-31\",\"companyType\":\"0\",\"companyTypeDescription\":\"Other\",\"dissolved\":false}}">>}],[{<<90,0,32,188,84,53,62,41>>,{1,63662149406}}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[[clean|true]],[]}}},undefined}},433883298582611803841718934712646521460354973696,17576867}}]
>links: [<0.476.0>]
>dictionary: []
>trap_exit: false
>status: running
>heap_size: 4185
>stack_size: 27
>reductions: 2002
>  neighbours:
> 
> 
> -Original Message-
> From: Russell Brown [mailto:russell.br...@icloud.com]
> Sent: 26 June 2017 10:54
> To: Mark Richard Thomas
> Cc: riak-users
> Subject: [IE] Re: [IE] Re: merge_contents error
> 
> The crash report is for the same reason, the 3rd argument to 
> riak_object:merge_contents/3 is <<"false">> when it should be the atom 
> 'false'.
> 
> Is that crash report from _after_ you updated the bucket properties?
> 
> Cheers
> 
> Russell
> 
> On 26 Jun 2017, at 10:50, Mark Richard Thomas  wrote:
> 
>> Hello
>> 
>> Riak version 2.2.3
>> 
>> Yes, this is related to a cluster restore from backup.
>> 
>> .../types/commercial/props
>> 
>> {"props":{"name":"commercial_systest","active":true,"allow_mult":false,"basic_quorum":true,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"r...@wrkd1lrkd002.app.c9.equifax.com","dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"notfound_ok":false,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
>> 
>> 
>> I modified the commercial bucket properties. Good spot on the dvv_enabled. I 
>> updated with:
>> 
>> riak-admin bucket-type update commercial '{"props":{"last_write_wins":true, 
>> "allow_mult":false, "dvv_enabled":false, "basic_quorum":true, 
>> "notfound_ok":false}}'
>> 
>> Any idea on what this crash report means?
>> 
>> A

Re: [IE] Re: merge_contents error

2017-06-26 Thread Russell Brown
The crash report is for the same reason, the 3rd argument to 
riak_object:merge_contents/3 is <<"false">> when it should be the atom 'false'.

Is that crash report from _after_ you updated the bucket properties?

Cheers

Russell

On 26 Jun 2017, at 10:50, Mark Richard Thomas  wrote:

> Hello
> 
> Riak version 2.2.3
> 
> Yes, this is related to a cluster restore from backup.
> 
> .../types/commercial/props
> 
> {"props":{"name":"commercial_systest","active":true,"allow_mult":false,"basic_quorum":true,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"r...@wrkd1lrkd002.app.c9.equifax.com","dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"notfound_ok":false,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
> 
> 
> I modified the commercial bucket properties. Good spot on the dvv_enabled. I 
> updated with:
> 
> riak-admin bucket-type update commercial '{"props":{"last_write_wins":true, 
> "allow_mult":false, "dvv_enabled":false, "basic_quorum":true, 
> "notfound_ok":false}}'
> 
> Any idea on what this crash report means?
> 
> And how do I merge locally?
> 
> 2017-06-26 09:39:17 =CRASH REPORT
>  crasher:
>initial call: riak_kv_get_fsm:init/1
>pid: <0.29113.100>
>registered_name: []
>exception exit: 
> {{function_clause,[{riak_object,merge_contents,[{r_object,{<<"commercial">>,<<"commercial_danda">>},<<"SC412063">>,[{r_content,{dict,7,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[[<<"Links">>]],[],[],[],[],[],[],[],[[<<"content-type">>,97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110],[<<"X-Riak-VTag">>,50,51,118,107,78,122,101,98,57,52,73,81,65,87,100,112,109,86,102,99,97,82]],[[<<"index">>]],[],[[<<"X-Riak-Last-Modified">>|{1494,933827,646892}]],[],[[<<"charset">>,117,116,102,45,56],[<<"X-Riak-Meta">>]]}}},<<"{\"company\":\"SC412063\",\"identification\":{\"companyNo\":\"SC412063\",\"name\":\"TCS
>  0\",\"alpha\":\"TCS\",\"roAddress\":{\"line1\":\"st cross street 
> C\",\"line2\":\"ross Road T\",\"line3\":\"hiruvanmayur 
> C\",\"line4\":\"hennai\",\"postCode\":\"cb6 
> 1ad\",\"kpostcode\":\"cb61ad\"},\"accountReferenceDate\":\"3004\",\"incorporationDate\":\"2009-01-01\",\"returnMadeUpDate\":\"2009-03-03\",\"latestAccountsFiledDate\":\"2009-02-02\",\"accountsType\":5,\"latestAnalysedAccountDate\":\"2008-04-30\",\"nextAnnualReturnDue\":\"2010-03-31\",\"companyType\":\"0\",\"companyTypeDescription\":\"Other\",\"dissolved\":false},
>\"groupstructure\": {\"shareholder\": [{\"shareholderType\": 
> \"P\",\"title\": \"MR\",\"forename\": \"Veronica\",\"initial\": 
> \"J\",\"surname\": \"DEF\",\"percentageOfOrdinary\": 
> 15.5},{\"shareholderType\": \"P\",\"title\": \"MR\",\"forename\": \"NICHOLAS 
> ANTHONY\",\"initial\": \"J\",\"surname\": \"VIS\",\"percentageOfOrdinary\": 
> 15.5}]}}">>}],[{<<90,0,32,188,84,53,62,38>>,{1,63662153027}}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[[clean|true]],[]}}},undefined},{r_object,{<<"commercial">>,<<"commercial_danda">>},<<"SC412063">>,[{r_content,{dict,7,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[[<<"Links">>]],[],[],[],[],[],[],[],[[<<"content-type">>,97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110],[<<"X-Riak-VTag">>,50,51,118,107,78,122,101,98,57,52,73,81,65,87,100,112,109,86,102,99,97,82]],[[<<"index">>]],[],[[<<"X-Riak-Last-Modified">>|{1494,933827,646892}]],[],[[<<"X-Riak-Meta">>],[<<"charset">>,117,116,102,45,56]]}}},<<"{\"company\":\"SC412063\",\"identification\":{\"companyNo\":\"SC412063\",\"name\":\"TCS
>  0\",\"alpha\":\"TCS\",\"roAddress\":{\"line1\":\"st cross street 
> C\",\"line2\":\"ross Road T\",\"line3\":\"hiruvanmayur 
> C\",\"line4\":\"hennai\",\"postCode\":\"cb6 
> 1ad\",\"kpostcode\":\"cb61ad\"},\"accountReferenceDate\":\"3004\",\"incorporationDate\":\"2009-01-01\",\"returnMadeUpDate\":\"2009-03-03\",\"latestAccountsFiledDate\":\"2009-02-02\",\"accountsType\":5,\"latestAnalysedAccountDate\":\"2008-04-30\",\"nextAnnualReturnDue\":\"2010-03-31\",\"companyType\":\"0\",\"companyTypeDescription\":\"Other\",\"dissolved\":false},
>\"groupstructure\": {\"shareholder\": [{\"shareholderType\": 
> \"P\",\"title\": \"MR\",\"forename\": \"Veronica\",\"initial\": 
> \"J\",\"surname\": \"DEF\",\"percentageOfOrdinary\": 
> 15.5},{\"shareholderType\": \"P\",\"title\": \"MR\",\"forename\": \"NICHOLAS 
> ANTHONY\",\"initial\": \"J\",\"surname\": \"VIS\",\"percentageOfOrdinary\": 
> 15.5}]}}">>}],[{<<90,0,32,188,84,53,62,38>>,{1,63662153027}}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[[clean|true]],[]}}},undefined},<<"false">>],[{file,"src/riak_object.erl"},{line,306}]},{timer,tc,2,[{file,"timer.erl"},{line

Re: merge_contents error

2017-06-26 Thread Russell Brown
Ah wait, no, I see it: you seem to have set the dvv_enabled bucket property to
the binary <<"false">>, when it should be a boolean atom, either 'true' or 'false'.

Something weird has gone on in your bucket props validation, though, as that
should have been caught or coerced to a boolean long before this point. How did
you set that bucket property on this bucket?
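For anyone hitting the same crash: the fix confirmed elsewhere in this thread was re-applying the property as a bare JSON boolean rather than a quoted string. A sketch, using the bucket-type name from this thread:

```shell
# Wrong: '{"props":{"dvv_enabled":"false"}}' stores the binary <<"false">>.
# Right: a bare JSON boolean, which arrives in Riak as the atom 'false'.
riak-admin bucket-type update commercial '{"props":{"dvv_enabled":false}}'
riak-admin bucket-type status commercial   # confirm the new value took effect
```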

Cheers

Russell

On 26 Jun 2017, at 10:03, Russell Brown  wrote:

> Hi Mark,
> It’s an error that means there is no function clause in 
> riak_object:merge_contents that matches the given arguments. It is hard to 
> tell from the snippet of log you posted what the issue is here since the 
> arguments are truncated. Is this related to your cluster restore from a 
> backup? It looks like it is related to your periodic read timeouts. 
> 
> What version of Riak are you running, and what are the bucket properties for 
> this object? Is there more information in the error log, any messages about 
> hd([]) for example?
> 
> Can you grab the actual object in question from each of the primary vnodes 
> and in some console manually run the merge function and see what the results 
> are? If you need help with how to do this let me know.
> 
> Cheers
> 
> Russell
> 
> On 26 Jun 2017, at 09:34, Mark Richard Thomas  wrote:
> 
>> Hello
>> 
>> I'm seeing the following error message for a number of objects:
>> 
>> console.log:2017-06-26 02:06:00.351 [error] <0.12343.73> gen_fsm 
>> <0.12343.73> in state waiting_vnode_r terminated with reason: no function 
>> clause matching 
>> riak_object:merge_contents({r_object,{<<"commercial">>,<<"precalculatedchar">>},<<"2065">>,[{r_content,{dict,7,16,16,8,...},...}],...},
>>  
>> {r_object,{<<"commercial">>,<<"precalculatedchar">>},<<"2065">>,[{r_content,{dict,7,16,16,8,...},...}],...},
>>  <<"false">>) line 306
>> 
>> What does "no function clause matching riak_object:merge_contents" mean?
>> 
>> Thanks
>> 
>> Mark
>> 
>> Mark Thomas 
>> Technical Lead, UK IT
>> Equifax Inc.
>> 
>> O +44 (0)7908 798 270
>> mark.tho...@equifax.com 
>> 
>>  
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: merge_contents error

2017-06-26 Thread Russell Brown
Hi Mark,
It’s an error that means there is no function clause in 
riak_object:merge_contents that matches the given arguments. It is hard to tell 
from the snippet of log you posted what the issue is here since the arguments 
are truncated. Is this related to your cluster restore from a backup? It looks 
like it is related to your periodic read timeouts. 
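As an aside, a "no function clause matching" error of this shape is easy to reproduce: it means no head of the function matches the given arguments. A minimal made-up sketch (fc_demo is not a Riak module) with clauses only for the boolean atoms, as the merge code expects for dvv_enabled:

```erlang
%% fc_demo is a made-up module for illustration only.
-module(fc_demo).
-export([merge_mode/1]).

%% Clauses exist only for the boolean atoms...
merge_mode(true)  -> dvv_merge;
merge_mode(false) -> lww_merge.

%% ...so the binary <<"false">> matches nothing:
%% 1> fc_demo:merge_mode(false).
%% lww_merge
%% 2> fc_demo:merge_mode(<<"false">>).
%% ** exception error: no function clause matching fc_demo:merge_mode(<<"false">>)
```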

What version of Riak are you running, and what are the bucket properties for 
this object? Is there more information in the error log, any messages about 
hd([]) for example?

Can you grab the actual object in question from each of the primary vnodes and 
in some console manually run the merge function and see what the results are? 
If you need help with how to do this let me know.

Cheers

Russell

On 26 Jun 2017, at 09:34, Mark Richard Thomas  wrote:

> Hello
>  
> I'm seeing the following error message for a number of objects:
>  
> console.log:2017-06-26 02:06:00.351 [error] <0.12343.73> gen_fsm <0.12343.73> 
> in state waiting_vnode_r terminated with reason: no function clause matching 
> riak_object:merge_contents({r_object,{<<"commercial">>,<<"precalculatedchar">>},<<"2065">>,[{r_content,{dict,7,16,16,8,...},...}],...},
>  
> {r_object,{<<"commercial">>,<<"precalculatedchar">>},<<"2065">>,[{r_content,{dict,7,16,16,8,...},...}],...},
>  <<"false">>) line 306
>  
> What does "no function clause matching riak_object:merge_contents" mean?
>  
> Thanks
>  
> Mark
>  
> Mark Thomas 
> Technical Lead, UK IT
> Equifax Inc.
>  
> O +44 (0)7908 798 270
> mark.tho...@equifax.com 
> 
>  
>  
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Cluster Backup - cluster_meta

2017-06-23 Thread Russell Brown
Hi Mark,

The docs suggest you do. I don’t know what bad things happen if you don’t
include it, though. Cluster metadata is the gossip and storage subsystem for
things like bucket-types and other internal Riak metadata; you probably want it.
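A per-node backup covering all of the directories listed in the docs might look like this. This is only a sketch for a bitcask-backed node with default package paths; stopping the node first avoids a fuzzy snapshot of live files.

```shell
# Sketch of a per-node backup (adjust paths to your platform/config).
riak stop
tar czf "riak-backup-$(hostname)-$(date +%F).tar.gz" \
    /var/lib/riak/bitcask \
    /var/lib/riak/ring \
    /var/lib/riak/cluster_meta \
    /var/lib/riak/yz \
    /etc/riak
riak start
```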

Cheers

Russell

On 23 Jun 2017, at 09:04, Mark Richard Thomas  wrote:

> Hello
>  
> I’m trying to create a cluster from backups, as per:
>  
> https://docs.basho.com/riak/kv/2.2.3/using/cluster-operations/backing-up/
> https://docs.basho.com/riak/kv/2.2.3/using/cluster-operations/changing-cluster-info/#clusters-from-backups
>  
> I’ve backup-up:
>  
> Bitcask  /var/lib/riak/bitcask
> Ring   /var/lib/riak/ring
> Configuration/etc/riak
> Cluster Metadata /var/lib/riak/cluster_meta
> Search  /var/lib/riak/yz
>  
> Do I need the ‘cluster_meta’ folder?
>  
> Thanks
>  
> Mark
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Core: VNodes of different flavors

2017-06-09 Thread Russell Brown
Hi Lloyd,


On 9 Jun 2017, at 19:16, Lloyd R. Prentice  wrote:

> Hello,
> 
> A number of definitions and explanations of VNodes can be found in various 
> sites on the web.  I think I have a somewhat fuzzy understanding of VNodes 
> and how they are replicated around the ring in a Riak Core cluster. 
> 
> Slide 11 in the slide deck below shows "Your App" resting on top of Riak Core 
> and around Riak KV:
> 
> https://www.slideshare.net/mobile/argv0/riak-coredevnation
> 
> What's not clear to me is, if "Your App"  has multiple functionalities or 
> services, how are these different functionalities mapped to VNodes?
> 
> For instance, say one functionality requires user input, another obtains a 
> value from a sensor, another computes and stores a second value, and yet 
> another graphs and displays those values to the user. 
> 
> How is a system such as this mapped to VNodes in Riak Core--- all wrapped 
> into a single VNode implementation or parsed across VNodes of different type 
> or functionality and all replicated in the same ring? 

It’s really up to you. It makes sense if you have multiple separate units of 
work to have multiple vnodes. RiakKV (the database) has vnodes for 
reading/writing data, and other vnodes (riak_pipe vnodes) for map/reduce, for 
example. If you look at Ryan Zezeski’s seminal and still relevant TryTryTry 
blog series, you’ll see it has different vnodes for different tasks 
(https://github.com/rzezeski/try-try-try.)
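As a bare-bones sketch of "one vnode type per concern", a hypothetical sensor vnode might look like the following. The module and commands are made up, and most of the callbacks the riak_core_vnode behaviour requires (handoff, coverage, terminate, etc.) are omitted for brevity.

```erlang
%% Hypothetical sketch; a real vnode must implement the full
%% riak_core_vnode behaviour, not just these callbacks.
-module(sensor_vnode).
-behaviour(riak_core_vnode).
-export([start_vnode/1, init/1, handle_command/3]).

start_vnode(Index) ->
    riak_core_vnode_master:get_vnode_pid(Index, ?MODULE).

init([Partition]) ->
    {ok, {Partition, orddict:new()}}.

%% One command per piece of functionality; a separate vnode module could
%% own computation or graphing if that workload is distinct.
handle_command({store, Sensor, Value}, _Sender, {P, Readings}) ->
    {reply, ok, {P, orddict:store(Sensor, Value, Readings)}};
handle_command({fetch, Sensor}, _Sender, {P, Readings} = State) ->
    {reply, orddict:find(Sensor, Readings), State}.
```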

Cheers

Russell

> 
> Many thanks,
> 
> LRP
> 
> Sent from my iPad
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OTP 19 support

2017-05-30 Thread Russell Brown
Ugh, sent it too fast, sorry. I also meant to add:

>> AFAIK, there are still quite a few remaining issues in Riak's code base that 
>> need to be resolved before the database itself can be compiled and run on 
>> OTP-19+

Is there a list of them somewhere so that the community can pitch in and help?

On 30 May 2017, at 07:26, Russell Brown  wrote:

> 
> On 29 May 2017, at 10:11, Magnus Kessler  wrote:
> 
>> On 27 May 2017 at 10:43, Senthilkumar Peelikkampatti  
>> wrote:
>> Any timeline for upgrading Riak to OTP 19? OTP 20 is expected in a few weeks.
>> It is holding us back from niceties like improved maps, etc.
>> 
>> Thanks,
>> Senthil
>> 
>> 
>> Hi Senthil,
>> 
>> AFAIK, there are still quite a few remaining issues in Riak's code base that 
>> need to be resolved before the database itself can be compiled and run on 
>> OTP-19+.
> 
> Is there a timeline for when they might be resolved? An idea as to when an 
> OTP19 Riak might be released?
> 
>> However, IIRC the Riak Erlang Client can be compiled on OTP-19, and some 
>> preliminary testing I did last year suggests that it works OK. If you go 
>> down that route, please test your own build of the Riak Erlang Client 
>> carefully.
>> 
>> Kind Regards,
>> 
>> Magnus
>> 
>> 
>> 
>> -- 
>> Magnus Kessler
>> Client Services Engineer
>> Basho Technologies Limited
>> 
>> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OTP 19 support

2017-05-29 Thread Russell Brown

On 29 May 2017, at 10:11, Magnus Kessler  wrote:

> On 27 May 2017 at 10:43, Senthilkumar Peelikkampatti  
> wrote:
> Any timeline for upgrading Riak to OTP 19? OTP 20 is expected in a few weeks.
> It is holding us back from niceties like improved maps, etc.
> 
> Thanks,
> Senthil
> 
> 
> Hi Senthil,
> 
> AFAIK, there are still quite a few remaining issues in Riak's code base that 
> need to be resolved before the database itself can be compiled and run on 
> OTP-19+.

Is there a timeline for when they might be resolved? An idea as to when an 
OTP19 Riak might be released?

> However, IIRC the Riak Erlang Client can be compiled on OTP-19, and some 
> preliminary testing I did last year suggests that it works OK. If you go down 
> that route, please test your own build of the Riak Erlang Client carefully.
> 
> Kind Regards,
> 
> Magnus
>   
> 
> 
> -- 
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
> 
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Issues with partition distribution across nodes

2017-05-24 Thread Russell Brown

On 24 May 2017, at 15:44, Denis  wrote:

> Hi Russell
> 
> Thank you for your suggestions. I found diag is saying: "The following 
> preflists do not satisfy the n_val. Please add more nodes". It seems to be 
> because a ring size of 128 is hard to distribute evenly across 6 nodes. 
> The history of our cluster is a long story, since we are testing it in our 
> lab. Initially it was deployed with 5 nodes without issues. Then it was 
> expanded to 6 nodes, again without issues. After some time all storage space 
> on the whole cluster was fully utilized and we had to remove all data from 
> the leveldb dir, flush the ring dir, and rebuild the cluster. This was done 
> by adding all 6 nodes at one time, so this may be the cause. We can try to 
> flush the cluster data once again and then add nodes one by one (committing 
> the cluster change each time), waiting for each partition transfer to finish.

Or add two more nodes in one go, might be quicker, and if time is money, 
cheaper.

> 
> 2017-05-24 15:44 GMT+03:00 Russell Brown :
> Hi,
> 
> This is just a quick reply since this is somewhat of a current topic on the ML.
> 
> On 24 May 2017, at 12:57, Denis Gudtsov  wrote:
> 
> > Hello
> >
> > We have a 6-node cluster with ring size 128 configured. The problem is that
> > two partitions have replicas on only two nodes rather than three as required
> > (n_val=3). We have tried several times to clean the leveldb and ring directories
> > and then rebuild the cluster, but this issue is still present.
> 
> There was a fairly long discussion about this very issue recently (see 
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2017-May/019281.html)
> 
> I ran a little code and the following {RingSize, NodeCount, IsViolated} 
> tuples were the result. If you built any of these clusters from scratch (i.e. 
> you started NodeCount nodes, and used riak-admin cluster join, riak-admin 
> cluster plan, riak-admin cluster commit to create a cluster of NodeCount from 
> scratch) then you have tail violations in your ring.
> 
> [{16,3,true},
>  {16,5,true},
>  {16,7,true},
>  {16,13,true},
>  {16,14,true},
>  {32,3,true},
>  {32,5,true},
>  {32,6,true},
>  {32,10,true},
>  {64,3,true},
>  {64,7,true},
>  {64,9,true},
>  {128,3,true},
>  {128,5,true},
>  {128,6,true},
>  {128,7,true},
>  {128,9,true},
>  {128,14,true},
>  {256,3,true},
>  {256,5,true},
>  {256,11,true},
>  {512,3,true},
>  {512,5,true},
>  {512,6,true},
>  {512,7,true},
>  {512,10,true}]
> 
> 
> > How can we diagnose where the issue is and fix it?
> 
> WRT your problem, a quick experiment suggests that adding 2 new nodes will 
> solve it; just adding one doesn’t look like it does. I tried adding only one 
> new node and still had a single violated preflist, but I have just thrown 
> a little experiment together so I could well be wrong. It doesn’t actually 
> build any clusters, and uses the claim code out of context, so ymmv.
> 
> > Is there any way we
> > can assign a partition to a node manually?
> 
> I don’t know of a way, but that would be very useful.
> 
> Do you remember if this cluster was built all at once as a 6-node cluster, or 
> has it grown over time? Have you run the command riak-admin diag 
> ring_preflists as documented here 
> http://docs.basho.com/riak/kv/2.2.3/setup/upgrading/checklist/#confirming-configuration-with-riaknostic?
> 
> Sorry I can’t be more help
> 
> Cheers
> 
> Russell
> 
> >
> > Please find output of member-status below and screen from riak control ring
> > status:
> > [root@riak01 ~]# riak-admin  member-status
> > = Membership
> > ==
> > Status RingPendingNode
> > ---
> > valid  17.2%  --  'riak@riak01.
> > valid  17.2%  --  'riak@riak02.
> > valid  16.4%  --  'riak@riak03.
> > valid  16.4%  --  'riak@riak04.
> > valid  16.4%  --  'riak@riak05.
> > valid  16.4%  --  'riak@riak06.
> > ---
> > Valid:6 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
> >
> > <http://riak-users.197444.n3.nabble.com/file/n4035179/10.png>
> >
> > Thank you.
> >
> >
> >
> > --
> > View this message in context: 
> > http://riak-users.197444.n3.nabble.com/Issues-with-partition-distribution-across-nodes-tp4035179.html
> > Sent from the Riak Users mailing list archive at Nabble.com.
> >
> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Issues with partition distribution across nodes

2017-05-24 Thread Russell Brown
Hi,

This is just a quick reply since this is somewhat of a current topic on the ML.

On 24 May 2017, at 12:57, Denis Gudtsov  wrote:

> Hello
> 
> We have a 6-node cluster with ring size 128 configured. The problem is that
> two partitions have replicas on only two nodes rather than three as required
> (n_val=3). We have tried several times to clean the leveldb and ring directories
> and then rebuild the cluster, but this issue is still present. 

There was a fairly long discussion about this very issue recently (see 
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2017-May/019281.html)

I ran a little code and the following {RingSize, NodeCount, IsViolated} tuples 
were the result. If you built any of these clusters from scratch (i.e. you 
started NodeCount nodes, and used riak-admin cluster join, riak-admin cluster 
plan, riak-admin cluster commit to create a cluster of NodeCount from scratch) 
then you have tail violations in your ring.

[{16,3,true},
 {16,5,true},
 {16,7,true},
 {16,13,true},
 {16,14,true},
 {32,3,true},
 {32,5,true},
 {32,6,true},
 {32,10,true},
 {64,3,true},
 {64,7,true},
 {64,9,true},
 {128,3,true},
 {128,5,true},
 {128,6,true},
 {128,7,true},
 {128,9,true},
 {128,14,true},
 {256,3,true},
 {256,5,true},
 {256,11,true},
 {512,3,true},
 {512,5,true},
 {512,6,true},
 {512,7,true},
 {512,10,true}]


> How can we diagnose where the issue is and fix it?

WRT your problem, a quick experiment suggests that adding 2 new nodes will solve 
it; just adding one doesn’t look like it does. I tried adding only one new node 
and still had a single violated preflist, but I have just thrown a little 
experiment together so I could well be wrong. It doesn’t actually build any 
clusters, and uses the claim code out of context, so ymmv.

> Is there any way we
> can assign a partition to a node manually? 

I don’t know of a way, but that would be very useful.

Do you remember if this cluster was built all at once as a 6-node cluster, or 
has it grown over time? Have you run the command riak-admin diag ring_preflists 
as documented here 
http://docs.basho.com/riak/kv/2.2.3/setup/upgrading/checklist/#confirming-configuration-with-riaknostic?

Sorry I can’t be more help

Cheers

Russell

> 
> Please find output of member-status below and screen from riak control ring
> status:
> [root@riak01 ~]# riak-admin  member-status
> = Membership
> ==
> Status RingPendingNode
> ---
> valid  17.2%  --  'riak@riak01.
> valid  17.2%  --  'riak@riak02.
> valid  16.4%  --  'riak@riak03.
> valid  16.4%  --  'riak@riak04.
> valid  16.4%  --  'riak@riak05.
> valid  16.4%  --  'riak@riak06.
> ---
> Valid:6 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
> 
>  
> 
> Thank you.
> 
> 
> 
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Issues-with-partition-distribution-across-nodes-tp4035179.html
> Sent from the Riak Users mailing list archive at Nabble.com.
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Put failure: too many siblings

2017-05-24 Thread Russell Brown
Also, this issue https://github.com/basho/riak_kv/issues/1188 suggests that 
adding the property `riak_kv.retry_put_coordinator_failure=false` may help in 
future. But won’t help with your keys with too many siblings.

On 24 May 2017, at 09:22, Russell Brown  wrote:

> 
> On 24 May 2017, at 09:11, Vladyslav Zakhozhai  
> wrote:
> 
>> Hello,
>> 
>> My riak cluster still experiences "too many siblings", and hinted handoffs 
>> are not able to finish completely. So "siblings will be resolved after 
>> hinted handoffs are finished" is not my case, unfortunately.
>> 
>> According to basho's docs 
>> (http://docs.basho.com/riak/kv/2.2.3/learn/concepts/causal-context/#sibling-explosion)
>>  I need to enable the DVV conflict resolution mechanism. So here is a question:
>> 
>> Is it safe to enable dvv on default bucket type and how it affects existing 
>> data?
> 
> It might not affect existing data enough. All the existing siblings are 
> “undotted” and would need a read-put cycle to resolve.
> 
>> It may be a solution, isn't it?
> 
> You may require further action. I remember basho support helping someone with 
> a similar issue, and there was some manual intervention/scripted solution, 
> but I can’t remember what it was right now. I think those objects (as logged) 
> with the sibling issues need to be read and resolved. Maybe one of the 
> ex-basho support people remembers? I’ll prod one in a back channel and see if 
> they can help.
> 
>> 
>> Why do I talk about the default bucket type? Because there is only one riak 
>> client - Riak CS - and it does not manage bucket types of PUT objects (so the 
>> default bucket type is always used during PUTs). Is that correct?
> 
> Yes.
> 
>> 
>> Thank you in advance.
>> 
>> On Fri, Jun 17, 2016 at 11:45 AM Vladyslav Zakhozhai 
>>  wrote:
>> Hi Russel,
>> 
>> thank you for your answer. I really appreciate your help.
>> 
>> 2.1.3 is not actually the riak_kv version. It is the version of Basho's riak 
>> package. You can see the versions of the riak subsystems below.
>> 
>> Bucket properties:
>> # riak-admin bucket-type list
>> default (active)
>> 
>> # riak-admin bucket-type status default
>> default is active
>> 
>> allow_mult: true
>> basic_quorum: false
>> big_vclock: 50
>> chash_keyfun: {riak_core_util,chash_std_keyfun}
>> dvv_enabled: false
>> dw: quorum
>> last_write_wins: false
>> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
>> n_val: 3
>> notfound_ok: true
>> old_vclock: 86400
>> postcommit: []
>> pr: 0
>> precommit: []
>> pw: 0
>> r: quorum
>> rw: quorum
>> small_vclock: 50
>> w: quorum
>> write_once: false
>> young_vclock: 20
>> 
>> I did not mention that an upgrade from riak 1.5.4 took place a couple of 
>> months ago (about 6 months). As I understand it, DVV is disabled. Is it safe 
>> to migrate from Vector Clocks to DVV?
>> 
>> Package versions:
>> # dpkg -l | grep riak
>> ii  riak2.1.3-1  
>> amd64Riak is a distributed data store
>> ii  riak-cs 2.1.0-1  
>> amd64Riak CS
>> 
>> Subsystems versions:
>> "clique_version" : "0.3.2-0-ge332c8f",
>> "bitcask_version" : "1.7.2",
>> "sys_driver_version" : "2.2",
>> "riak_core_version" : "2.1.5-0-gb02ab53",
>> "riak_kv_version" : "2.1.2-0-gf969bba",
>> "riak_pipe_version" : "2.1.1-0-gb1ac2cf",
>> "cluster_info_version" : "2.0.3-0-g76c73fc",
>> "riak_auth_mods_version" : "2.1.0-0-g31b8b30",
>> "erlydtl_version" : "0.7.0",
>> "os_mon_version" : "2.2.13",
>> "inets_version" : "5.9.6",
>> "erlang_js_version" : "1.3.0-0-g07467d8",
>> "riak_control_version" : "2.1.2-0-gab3f924",
>> "xmerl_version" : "1.3.4",
>> "protobuffs_version" : "0.8.1p5-0-gf88fc3c",
>> "riak_sysmon_version" : "2.0.0",
>> "compiler_version" : "4.9.3",
>> "eleveldb_version" : "2.1.10-0-g0537ca9",
>> "lager_version" : "2.1.1",
>> "sasl_version" : "2.3.3",
>> "riak_dt_version" : "2.1.1-0-ga2986bc&

Re: Put failure: too many siblings

2017-05-24 Thread Russell Brown

On 24 May 2017, at 09:11, Vladyslav Zakhozhai  
wrote:

> Hello,
> 
> My riak cluster still experiences "too many siblings", and hinted handoffs 
> are not able to finish completely. So "siblings will be resolved after 
> hinted handoffs are finished" is not my case, unfortunately.
> 
> According to basho's docs 
> (http://docs.basho.com/riak/kv/2.2.3/learn/concepts/causal-context/#sibling-explosion)
>  I need to enable the DVV conflict resolution mechanism. So here is a question:
> 
> Is it safe to enable dvv on default bucket type and how it affects existing 
> data?

It might not affect existing data enough. All the existing siblings are 
“undotted” and would need a read-put cycle to resolve.

> It may be a solution, isn't it?

You may require further action. I remember basho support helping someone with a 
similar issue, and there was some manual intervention/scripted solution, but I 
can’t remember what it was right now. I think those objects (as logged) with 
the sibling issues need to be read and resolved. Maybe one of the ex-basho 
support people remembers? I’ll prod one in a back channel and see if they can 
help.

> 
> Why do I talk about the default bucket type? Because there is only one riak 
> client - Riak CS - and it does not manage bucket types of PUT objects (so the 
> default bucket type is always used during PUTs). Is that correct?

Yes.

> 
> Thank you in advance.
> 
> On Fri, Jun 17, 2016 at 11:45 AM Vladyslav Zakhozhai 
>  wrote:
> Hi Russel,
> 
> thank you for your answer. I really appreciate your help.
> 
> 2.1.3 is not actually the riak_kv version. It is the version of Basho's riak 
> package. You can see the versions of the riak subsystems below.
> 
> Bucket properties:
> # riak-admin bucket-type list
> default (active)
> 
> # riak-admin bucket-type status default
> default is active
> 
> allow_mult: true
> basic_quorum: false
> big_vclock: 50
> chash_keyfun: {riak_core_util,chash_std_keyfun}
> dvv_enabled: false
> dw: quorum
> last_write_wins: false
> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
> n_val: 3
> notfound_ok: true
> old_vclock: 86400
> postcommit: []
> pr: 0
> precommit: []
> pw: 0
> r: quorum
> rw: quorum
> small_vclock: 50
> w: quorum
> write_once: false
> young_vclock: 20
> 
> I did not mention that an upgrade from riak 1.5.4 took place a couple of 
> months ago (about 6 months). As I understand it, DVV is disabled. Is it safe 
> to migrate from Vector Clocks to DVV?
> 
> Package versions:
> # dpkg -l | grep riak
> ii  riak2.1.3-1  
> amd64Riak is a distributed data store
> ii  riak-cs 2.1.0-1  
> amd64Riak CS
> 
> Subsystems versions:
> "clique_version" : "0.3.2-0-ge332c8f",
> "bitcask_version" : "1.7.2",
> "sys_driver_version" : "2.2",
> "riak_core_version" : "2.1.5-0-gb02ab53",
> "riak_kv_version" : "2.1.2-0-gf969bba",
> "riak_pipe_version" : "2.1.1-0-gb1ac2cf",
> "cluster_info_version" : "2.0.3-0-g76c73fc",
> "riak_auth_mods_version" : "2.1.0-0-g31b8b30",
> "erlydtl_version" : "0.7.0",
> "os_mon_version" : "2.2.13",
> "inets_version" : "5.9.6",
> "erlang_js_version" : "1.3.0-0-g07467d8",
> "riak_control_version" : "2.1.2-0-gab3f924",
> "xmerl_version" : "1.3.4",
> "protobuffs_version" : "0.8.1p5-0-gf88fc3c",
> "riak_sysmon_version" : "2.0.0",
> "compiler_version" : "4.9.3",
> "eleveldb_version" : "2.1.10-0-g0537ca9",
> "lager_version" : "2.1.1",
> "sasl_version" : "2.3.3",
> "riak_dt_version" : "2.1.1-0-ga2986bc",
> "runtime_tools_version" : "1.8.12",
> "yokozuna_version" : "2.1.2-0-g3520d11",
> "riak_search_version" : "2.1.1-0-gffe2113",
> "sys_system_version" : "Erlang R16B02_basho8 (erts-5.10.3) [source] [64-bit] 
> [smp:4:4] [async-threads:64] [kernel-poll:true] [frame-pointer]",
> "basho_stats_version" : "1.0.3",
> "crypto_version" : "3.1",
> "merge_index_version" : "2.0.1-0-g0c8f77c",
> "kernel_version" : "2.16.3",
> "stdlib_version" : "1.19.3",
> "riak_pb_version" : "2.1.0.2-0-g620bc7

Re: Core Claim and Property-Based Tests

2017-05-17 Thread Russell Brown
Back to the original post, the important point for me is that this is not 
really about riak-core, but Riak, the database.

The OP in TL;DR form:

1. A thorough report of a long-lived bug in claim that means many node/ring 
combos end up with multiple replicas on one physical node, silently!
2. A proposed fix (which I consider very important for anyone running Riak.)
3. The most important question: If the OP fixes this, how can everyone benefit?

I had some data-loss fixes merged by Basho in March/April; will they ever be 
released?
Will each of the major users hardfork Riak and struggle to benefit from each 
others work?

Cheers

Russell

On 16 May 2017, at 21:02, DeadZen  wrote:

> I'd like to keep the core project going, just depends on how much interest 
> there is.
> There are a lot of separate issues and stalled initiatives, if anyone would 
> like to discuss them. Some have to do simply with scaling Distributed Erlang. 
> There's a riak_core mailing list as well that could probably use some fresh 
> air. 
> 
> Thanks,
> Pedram
> 
> On Tue, May 16, 2017 at 3:29 PM Christopher Meiklejohn 
>  wrote:
> We're looking at mainly leveraging partisan for changing the
> underlying communication structure -- we hope to have via support in
> Partisan soon along with connection multiplexing, so we hope to avoid
> bottlenecks related to head-of-line-blocking in distributed Erlang, be
> able to support SSL/TLS easier for intra-cluster communication and
> have more robust visibility into how the cluster is operating.
> 
> One thing we learned from Riak MDC is that the single connections
> used in distributed Erlang are a bottleneck and difficult to apply
> flow and congestion control to -- where, we believe a solution based
> completely on gen_tcp would be more flexible.
> 
> [Keep in mind this is a ~1 year vision at the moment.]
> 
> Thanks,
> - Christopher
> 
> On Tue, May 16, 2017 at 9:20 PM, Martin Sumner
>  wrote:
> > Chris,
> >
> > Is this only the communications part, so the core concepts like the Ring,
> > preflists, the Claimant role, the claim algo etc will remain the same?
> >
> > Where's the best place to start reading about Partisan, I'm interested in
> > the motivation for changing that part of Core.  Is there a special use case
> > or problem you're focused on (e,g. gossip problems in much larger clusters)?
> >
> > Ta
> >
> > Martin
> >
> > On 16 May 2017 at 20:06, Christopher Meiklejohn
> >  wrote:
> >>
> >> For what it's worth, the Lasp community is looking at doing a fork of
> >> Riak Core replacing all communication with our Partisan library and
> >> moving it completely off of distributed Erlang.  We'd love to hear
> >> from more folks that are interested in this work.
> >>
> >> - Christopher
> >>
> >> On Tue, May 16, 2017 at 6:53 PM, Tom Santero  wrote:
> >> > I'm aware of a few other companies and individuals who are interested in
> >> > continued development and support in a post-Basho world. Ideally the
> >> > community can come together and contribute to a single, canonical fork.
> >> >
> >> > Semi-related, there's a good chance this mailing list won't last much
> >> > longer, either. I'm happy to personally contribute time and resources to
> >> > help maintain the community.
> >> >
> >> > Tom
> >> >
> >> > On Tue, May 16, 2017 at 11:51 AM, Martin Sumner
> >> >  wrote:
> >> >>
> >> >>
> >> >> I've raised an issue with Core today
> >> >> (https://github.com/basho/riak_core/issues/908), related to the claim
> >> >> algorithms.
> >> >>
> >> >> There's a long-read associated with this, which provides a broader
> >> >> analysis of how claim works with the ring:
> >> >>
> >> >>
> >> >>
> >> >> https://github.com/martinsumner/riak_core/blob/mas-claimv2issues/docs/ring_claim.md
> >> >>
> >> >> I believe the long-read explains some of the common mysterious issues
> >> >> which can occur with claim.
> >> >>
> >> >> We're in the process of fixing up the property-based tests for
> >> >> riak_core_claim.erl, and will then be looking to make some improvements
> >> >> to
> >> >> claim v2 to try and pass the improved tests.
> >> >>
> >> >> Big question is though, how can we progress any contribution we make
> >> >> into
> >> >> the Riak codebase?  What is the plan going forward for open-source
> >> >> contributions to Riak?  Do Basho have any contingency plans for
> >> >> smoothly
> >> >> handing over open-source code to the community, before the list of
> >> >> Basho's
> >> >> Github people (https://github.com/orgs/basho/people) who still work at
> >> >> Basho
> >> >> is reduced to zero?
> >> >>
> >> >> Is this something of concern to others?
> >> >>
> >> >> Regards
> >> >>
> >> >> Martin
> >> >>
> >> >>
> >> >>
> >> >
> >> >

Re: API question about conflict resolution and getValue(s)

2017-04-06 Thread Russell Brown
Hi,
Which client are you using?


On 6 Apr 2017, at 12:48, ジョハンガル  wrote:

> Hello,
>  
> I have a simple question regarding FetchValue.Response/getValue, 
> FetchValue.Response/getValues and conflict resolution.
>  
> In the documentation 
> http://docs.basho.com/riak/kv/2.2.3/developing/usage/conflict-resolution/
> the described sequence is: "fetch -> getValue -> modify -> store"
> does: "fetch -> getValues -> ... pick one ... -> modify -> store", work?

The context you get from a fetch is the one you should send back with the 
resolved value, whatever strategy you use to converge the sibling values into a 
single value (even “pick one”!)

You should check the docs for the client library you use whether it handles 
passing the context around “behind” the scenes for you or not.
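With the Erlang client (riakc), for instance, the fetched object record carries the causal context, so a "pick one" resolve can look something like the sketch below. This assumes the riakc API; other clients differ in whether they pass the context for you:

```erlang
%% Sketch with the Erlang client (riakc): the object returned by get/3
%% carries the causal context, so writing the resolved value back
%% through that same object preserves it. Any deterministic merge of
%% Values works in place of the arbitrary "pick one" shown here.
resolve_and_store(Pid, Bucket, Key) ->
    {ok, Obj} = riakc_pb_socket:get(Pid, Bucket, Key),
    case riakc_obj:get_values(Obj) of
        [_NoSiblings] ->
            ok;                                   % single value, nothing to resolve
        Values ->
            Winner = hd(lists:sort(Values)),      % arbitrary but deterministic pick
            Obj2 = riakc_obj:update_value(Obj, Winner),
            ok = riakc_pb_socket:put(Pid, Obj2)   % context from the get rides along
    end.
```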

>  
> Is the causal context from the implicitly resolved object obtained from 
> getValue the same as the causal context in the siblings recovered with 
> getValues?

Is there a client that implicitly resolves siblings via a call to “getValue”? 
Even if there is, I would hope and expect that it uses the same causal context 
as it fetched, and should therefore be the same as the one returned by 
”getValues”. If you let us know which client you use I could probably give a 
more certain answer.

Russell

>  
>  
> Best regards,
> Johan Gall
>  
> ps: riak 2.1.4, buckets with types so by default DVV enabled, 
> com.basho.riak/riak-client "2.1.1" 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Object not found after successful PUT on S3 API

2017-03-06 Thread Russell Brown
Genuinely stumped then.

I’m surprised that dvv_enabled=false is the default as sibling explosion is bad.

I don’t know the CS code very well, but I assume a not_found means that either 
the manifest or some chunk is not found. I wonder if you can get the manifest 
and then see if any/all of the chunks are present?

On 6 Mar 2017, at 17:21, Daniel Miller  wrote:

> > Would be good to know the riak version
> 
> Riak 2.1.1
> Riak CS 2.1.0
> Stanchion 2.1.0
> 
> > why the dvv_enabled bucket property is set to false, please?
> 
> Looks like that's the default. I haven't changed it.
> 
>  > Also, is there multi-datacentre replication involved?
> 
> no
> 
> > Do you re-use your keys, for example, have the keys in question been 
> > created, deleted, and then re-created?
> 
> no
> 
> Thank you for the prompt follow-up.
> 
> Daniel
> 
> 
> On Mon, Mar 6, 2017 at 10:38 AM, Russell Brown  
> wrote:
> Hi,
> Would be good to know the riak version, and why the dvv_enabled bucket 
> property is set to false, please? Also, is there multi-datacentre replication 
> involved? Do you re-use your keys, for example, have the keys in question 
> been created, deleted, and then re-created?
> 
> Cheers
> 
> Russell
> 
> On 6 Mar 2017, at 15:07, Daniel Miller  wrote:
> 
> > I recently had another case of a disappearing object. This time the object 
> > was successfully PUT, and (unlike the previous cases reported in this 
> > thread) for a period of time GETs were also successful. Then GETs started 
> > 404ing for no apparent reason. There are no errors in the logs to indicate 
> > that anything unusual happened. This is quite disconcerting. Is it normal 
> > that Riak CS just loses track of objects? At this point we are using CS as 
> > primary object storage, meaning we do not have the data stored in another 
> > database so it's critical that the data is not randomly lost.
> >
> > In the CS access logs I see
> >
> > # all prior GET requests for this object succeeding like this one. This is 
> > the last successful GET request:
> > [28/Feb/2017:14:42:35 +] "GET 
> > /buckets/blobdb/objects/commcarehq__apps%2F3d2b... HTTP/1.0" 200 14923 "" 
> > "Boto3/1.4.0 Python/2.7.6 Linux/3.13.0-86-generic Botocore/1.4.53 Resource"
> > ...
> > # all GET requests for this object are now failing like this one (the first 
> > 404):
> > [02/Mar/2017:08:36:11 +] "GET 
> > /buckets/blobdb/objects/commcarehq__apps%2F3d2b... HTTP/1.0" 404 240 "" 
> > "Boto3/1.4.0 Python/2.7.6 Linux/3.13.0-86-generic Botocore/1.4.53 Resource"
> >
> > The object name has been elided for readability. I do not know when this 
> > object was PUT into the cluster because I only have logs for the past 
> > month. Is there any way to dig further into Riak or Riak CS data to 
> > determine if the object content is actually completely lost or if there are 
> > any other details that might explain why it is now missing? Could I 
> > increase some logging parameters to get more information about what is 
> > going wrong when something like this happens?
> >
> > I have searched the logs for other 404 responses but found none (other than 
> > the two reported earlier), so this is the 3rd known missing object in the 
> > cluster. We retain logs for one month only (I'm increasing this now because 
> > of this issue), so it is possible that other objects have also gone 
> > missing, but I cannot see them since the logs have been truncated.
> >
> > The cluster now has 7 nodes instead of 9 (see earlier emails in this 
> > thread), and the riak storage backend is now leveldb instead of multi. I 
> > have attached config file templates for riak, riak-cs and stanchion (these 
> > are deployed with ansible).
> >
> > Bucket properties:
> > {
> >   "props": {
> > "notfound_ok": true,
> > "n_val": 3,
> > "last_write_wins": false,
> > "allow_mult": true,
> > "dvv_enabled": false,
> > "name": "blobdb",
> > "r": "quorum",
> > "precommit": [],
> > "old_vclock": 86400,
> > "dw": "quorum",
> > "rw": "quorum",
> > "small_vclock": 50,
> > "write_once": false,
> > "basic_quorum": false,
> > "big_vclock": 50,
> > "chash_keyfun": {
> >   "fun"

Re: Object not found after successful PUT on S3 API

2017-03-06 Thread Russell Brown
Hi,
Would be good to know the riak version, and why the dvv_enabled bucket property 
is set to false, please? Also, is there multi-datacentre replication involved? 
Do you re-use your keys, for example, have the keys in question been created, 
deleted, and then re-created?

Cheers

Russell

On 6 Mar 2017, at 15:07, Daniel Miller  wrote:

> I recently had another case of a disappearing object. This time the object 
> was successfully PUT, and (unlike the previous cases reported in this thread) 
> for a period of time GETs were also successful. Then GETs started 404ing for 
> no apparent reason. There are no errors in the logs to indicate that anything 
> unusual happened. This is quite disconcerting. Is it normal that Riak CS just 
> loses track of objects? At this point we are using CS as primary object 
> storage, meaning we do not have the data stored in another database so it's 
> critical that the data is not randomly lost.
> 
> In the CS access logs I see
> 
> # all prior GET requests for this object succeeding like this one. This is 
> the last successful GET request:
> [28/Feb/2017:14:42:35 +] "GET 
> /buckets/blobdb/objects/commcarehq__apps%2F3d2b... HTTP/1.0" 200 14923 "" 
> "Boto3/1.4.0 Python/2.7.6 Linux/3.13.0-86-generic Botocore/1.4.53 Resource"
> ...
> # all GET requests for this object are now failing like this one (the first 
> 404):
> [02/Mar/2017:08:36:11 +] "GET 
> /buckets/blobdb/objects/commcarehq__apps%2F3d2b... HTTP/1.0" 404 240 "" 
> "Boto3/1.4.0 Python/2.7.6 Linux/3.13.0-86-generic Botocore/1.4.53 Resource"
> 
> The object name has been elided for readability. I do not know when this 
> object was PUT into the cluster because I only have logs for the past month. 
> Is there any way to dig further into Riak or Riak CS data to determine if the 
> object content is actually completely lost or if there are any other details 
> that might explain why it is now missing? Could I increase some logging 
> parameters to get more information about what is going wrong when something 
> like this happens?
> 
> I have searched the logs for other 404 responses but found none (other than 
> the two reported earlier), so this is the 3rd known missing object in the 
> cluster. We retain logs for one month only (I'm increasing this now because 
> of this issue), so it is possible that other objects have also gone missing, 
> but I cannot see them since the logs have been truncated.
> 
> The cluster now has 7 nodes instead of 9 (see earlier emails in this thread), 
> and the riak storage backend is now leveldb instead of multi. I have attached 
> config file templates for riak, riak-cs and stanchion (these are deployed 
> with ansible).
> 
> Bucket properties:
> {
>   "props": {
> "notfound_ok": true,
> "n_val": 3,
> "last_write_wins": false,
> "allow_mult": true,
> "dvv_enabled": false,
> "name": "blobdb",
> "r": "quorum",
> "precommit": [],
> "old_vclock": 86400,
> "dw": "quorum",
> "rw": "quorum",
> "small_vclock": 50,
> "write_once": false,
> "basic_quorum": false,
> "big_vclock": 50,
> "chash_keyfun": {
>   "fun": "chash_std_keyfun",
>   "mod": "riak_core_util"
> },
> "postcommit": [],
> "pw": 0,
> "w": "quorum",
> "young_vclock": 20,
> "pr": 0,
> "linkfun": {
>   "fun": "mapreduce_linkfun",
>   "mod": "riak_kv_wm_link_walker"
> }
>   }
> }
> 
> I'll be happy to provide more context to help troubleshoot this issue.
> 
> Thanks in advance for any help you can provide.
> 
> Daniel
> 
> 
> On Tue, Feb 14, 2017 at 11:52 AM, Daniel Miller  wrote:
> Hi Luke,
> 
> Sorry for the late response and thanks for following up. I haven't seen it 
> happen since. At this point I'm going to wait and see if it happens again and 
> hopefully get more details about what might be causing it.
> 
> Daniel
> 
> On Thu, Feb 9, 2017 at 1:02 PM, Luke Bakken  wrote:
> Hi Daniel -
> 
> I don't have any ideas at this point. Has this scenario happened again?
> 
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
> 
> 
> On Wed, Jan 25, 2017 at 2:11 PM, Daniel Miller  wrote:
> > Thanks for the quick response, Luke.
> >
> > There is nothing unusual about the keys. The format is a name + UUID + some
> > other random URL-encoded charaters, like most other keys in our cluster.
> >
> > There are no errors near the time of the incident in any of the logs (the
> > last [error] is from over a month before). I see lots of messages like this
> > in console.log:
> >
> > /var/log/riak/console.log
> > 2017-01-20 15:38:10.184 [info]
> > <0.22902.1193>@riak_kv_exchange_fsm:key_exchange:263 Repaired 2 keys during
> > active anti-entropy exchange of
> > {776422744832042175295707567380525354192214163456,3} between
> > {776422744832042175295707567380525354192214163456,'riak-fa...@fake3.fake.com'}
> > and
> > {822094670998632891489572718402909198556462055424,'riak-fa...@fake9.fake.com'}
> > 2017-01-20 15:40:39.640 [info]
> > <0.217

Re: [CRDT_OP-in-CommitHook]

2017-03-02 Thread Russell Brown
Hi,

You’re using internal details of the CRDT implementation, and I’m not sure that 
is such a great idea. You always have your `Context` set to `undefined`, but if 
your ops are all adds that shouldn’t matter in this case.

The issue is that you’re calling `riak_kv_crdt:update`, which needs to be called 
from within the vnode. The `ThisNode` value is not a serial actor: you may 
have many concurrent updates with the actor `ThisNode`, and that’s not how to 
do it. It is crucial that the actor updating the CRDT acts serially, issuing an 
increasing count of events. That’s why we put the CRDT code inside Riak, inside 
the vnode.

You’re doing neither one thing nor the other, in that you’re using a datatype 
bucket but not the datatype API (which sends ops from the client, whereas you’re 
doing read/modify/write).

I can see the issue here is that the API is external only. If you _must_ use 
the internal API, have a look at 
https://github.com/basho/riak_kv/blob/develop/src/riak_kv_pb_crdt.erl and you’ll 
see that the CRDT_OP is all that is sent by the client.

https://github.com/basho/riak_kv/blob/develop/src/riak_kv_pb_crdt.erl#L162

Those put options matter too, especially for counters, less so for sets.
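For contrast, here is a minimal sketch of the external datatype API path using the Erlang client (riakc); the module and function names are illustrative, not from the original hook. The point is that the client only builds and ships an operation, and the vnode applies it with its own serial actor:

```erlang
-module(set_add_example).
-export([add_to_set/5]).

%% Hypothetical helper: ship a set-add as an operation via the
%% external datatype API. The client never does read/modify/write on
%% the CRDT state; the vnode applies the op with a serial actor.
add_to_set(Pid, BucketType, Bucket, Key, Element) ->
    Set = riakc_set:add_element(Element, riakc_set:new()),
    %% riakc_set:to_op/1 builds the op that is sent to the server;
    %% adds do not require a causal context.
    riakc_pb_socket:update_type(Pid, {BucketType, Bucket}, Key,
                                riakc_set:to_op(Set)).
```

This is the same shape as what riak_kv_pb_crdt receives from the protocol-buffers client, just invoked through riakc rather than from inside a hook.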

I guess it would be great if riak_client had some internal API functions that 
made this easier to do from a hook. If you open an issue on 
github.com/basho/riak_kv I can look into that and make a PR.

Hope that helps

Russell

On 1 Mar 2017, at 09:30, 李明  wrote:

> Hi 
>I am new to Erlang and Riak. I started to use Riak as a KV store a couple 
> of months ago. Now I want to implement a commit hook in Riak so that Riak 
> can help me gather some statistics.
> I read some docs and wrote a pre-commit hook script, which fetches the object 
> key and stores it into a set.
>This hook works fine if there is only one client writing to Riak, but if I 
> increase the number of writing connections, I find that some elements are lost in the 
> set. It looks like the crdt_op did not do the merge operation, and there is no 
> obvious error in the log files.
> 
>Could someone help me figure out what happened or what I have missed.
> 
> I am using Riak 2.1.3.
> 
> Thanks all!
> 
> 
> Here is the hook scripts:
> 
> --
> 
> -module(myhook).
> -export([pretest/1]).
> 
> now_to_local_string({MegaSecs, Secs, MicroSecs}) ->
> LocalTime = calendar:now_to_local_time({MegaSecs, Secs, MicroSecs}),
> {{Year, Month, Day}, {Hour, Minute, _}} = LocalTime,
> TimeStr = lists:flatten(io_lib:format("~4..0w~2..0w~2..0w~2..0w~2..0w",
> [Year, Month, Day, Hour, Minute])),
> TimeStr.
> 
> is_deleted(Object)->
> case dict:find(<<"X-Riak-Deleted">>,riak_object:get_metadata(Object)) of
> {ok,_} ->
> true;
> _ ->
> false
> end.
> 
> pretest(Object) ->
> % timer:sleep(1),
> try
>   ObjBucket = riak_object:bucket(Object),
>   %   riak_object:bucket(Obj).
>   % {<<"cn-archive">>,<<"local-test">>}
> 
>   Bucket = element(2, ObjBucket),
>   BucketType = element(1, ObjBucket),
> 
>   ObjKey = riak_object:key(Object),
>   % Key = binary_to_list(ObjKey),
>   % ObjData = riak_object:get_value(Object),
>   % Msg = binary_to_list(ObjData),
>   CommitItem = iolist_to_binary(mochijson2:encode({struct, [{b, 
> Bucket}, {k, ObjKey}, {t, BucketType}]})),
> 
>   case is_deleted(Object) of
>   true ->
>   KeyPrefix = "delete";
>   _ ->
>   KeyPrefix = "update"
>   end,
> 
>   CurMin = now_to_local_string(os:timestamp()),
>   IndexKey = binary:list_to_bin(io_lib:format("~s-~s", [CurMin, 
> KeyPrefix])),
> 
>   %% Get a riak client
>   {ok, C} = riak:local_client(),
>   % get node obj
>   ThisNode = atom_to_binary(node(), latin1),
> 
>   % get index obj and set context
>   BType = <<"archive">>,
>   B = <<"local-test">>,
>   
>   {SetObj, Context} = case C:get({BType, B}, IndexKey) of
>   {error, notfound} -> 
>   ThisSetObj = riak_kv_crdt:new({BType, B}, IndexKey, 
> riak_dt_orswot),
>   {ThisSetObj, undefined};
>   {ok, ThisSetObj} ->
>   % The datatype update requires the context if the value 
> exists
>   {{Ctx, _}, _} = riak_kv_crdt:value(ThisSetObj, 
> riak_dt_orswot),
>   {ThisSetObj, Ctx}
>   end,
> 
>   UpdateIndex = [{add, CommitItem}],
>   % UpdateOp = {crdt_op, riak_dt_orswot, {update, UpdateIndex}, 
> Context},
>   UpdateOp = {crdt_op, riak_dt_orswot, {update, UpdateIndex}, 
> undefined},
>   

Re: Updating particular version of CRDT

2017-02-22 Thread Russell Brown
Hi Andrey,
The register inside a map is a LWW-register.

Long story: when we added CRDTs to Riak, we did it for those users who wanted to 
avoid writing custom sibling resolution code; following on from that decision, 
we decided not to add the MVRegister type. Riak’s default object is an 
MVRegister, after all. When I say “we” I mean the CRDT team as it was then, not 
Basho, as I am no longer at Basho.
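The point that a plain Riak object already has MVRegister semantics can be sketched with the Erlang client; a hedged sketch assuming `allow_mult=true` on the bucket, with illustrative bucket/key names:

```erlang
-module(mv_example).
-export([concurrent_writes/1]).

%% With allow_mult=true, two puts based on the same fetched object
%% (i.e. the same causal context) are treated as concurrent, so both
%% values survive as siblings -- multi-value register behaviour.
concurrent_writes(Pid) ->
    {ok, Obj0} = riakc_pb_socket:get(Pid, <<"phones">>, <<"ahmed">>),
    ok = riakc_pb_socket:put(Pid, riakc_obj:update_value(Obj0, <<"456">>)),
    ok = riakc_pb_socket:put(Pid, riakc_obj:update_value(Obj0, <<"789">>)),
    {ok, Obj1} = riakc_pb_socket:get(Pid, <<"phones">>, <<"ahmed">>),
    %% Both sibling values should be visible here.
    riakc_obj:get_values(Obj1).
```

The register inside a map, by contrast, resolves concurrent assigns by last-write-wins, which is why only "789" is observed.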

Cheers

Russell

On 21 Feb 2017, at 08:35, Andrey Ershov  wrote:

> Hi all,
> 
> I'm new to Riak and trying to figure out how to work with CRDTs properly. 
> First of all, I decided to try MVRegister support. I'm using this tutorial 
> http://docs.basho.com/riak/kv/2.2.0/developing/data-types/maps/. Language of 
> my choice is Java.
> 
> Find source code here: 
> https://gist.github.com/andrershov/d0ebb8fd111eca013b302f8abaf14445
> 
> I've created ahmedMap with two registers (name="Ahmed", phone="123")
> Now I would like to simulate that there are two concurrent updates to phone 
> register
> phone="456" and phone="789"
> 
> For that, I'm fetching initial record and get context from it. After that, 
> I'm performing two updates passing the same context to each one. 
> I'm expecting that after fetching the record and reading the phone register, 
> I should find two concurrent values in it: phone = ("456", "789"). But I get 
> only the latest value phone = "789".
> Also, the Register API seems confusing, because there is no method in it that 
> can return multiple values. 
> 
> 
> Could you please help me?
> 
> -- 
> Thanks, 
> Andrey
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Secondary indexes or Riak search ?

2017-02-13 Thread Russell Brown
Hi Alex,
A quick look at riak_test suggests that the `return_body` parameter is 
generally supported for both HTTP and PB now. I wasn’t aware of that.

This test exercises the API.

https://github.com/basho/riak_test/blob/develop/tests/verify_2i_returnbody.erl

The `ref` returned from the link I gave you is a way to correlate messages 
received with the query. If you have a look at the code in the test here 

https://github.com/basho/riak_test/blob/develop/tests/secondary_index_tests.erl#L204

You can get an idea of the general pattern. I guess line 227

https://github.com/basho/riak_test/blob/develop/tests/secondary_index_tests.erl#L227

Is the pertinent one for you.
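The general pattern from that test can be sketched roughly as below. The message shapes (`{Ref, {ok, Objects}}` for a batch of results, `{Ref, {done, _}}` at the end) are my reading of the test code, so treat them as assumptions:

```erlang
-module(fold_example).
-export([collect/1]).

%% Receive loop for a cs_bucket_fold stream. The Ref returned by the
%% call tags every incoming message, which is how results are
%% correlated with the query that produced them.
collect(Ref) ->
    collect(Ref, []).

collect(Ref, Acc) ->
    receive
        {Ref, {ok, Objects}} ->
            %% A batch of whole riak objects, not just keys.
            collect(Ref, [Objects | Acc]);
        {Ref, {done, _Continuation}} ->
            {ok, lists:append(lists:reverse(Acc))}
    after 60000 ->
            {error, timeout}
    end.
```

Usage would be something like `{ok, Ref} = riakc_pb_socket:cs_bucket_fold(Pid, Bucket, [{max_results, 1000}])` followed by `fold_example:collect(Ref)`.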

Cheers

Russell

On 13 Feb 2017, at 11:08, Alex Feng  wrote:

> Hi Russell,
> 
> We have tried this, but it returns only a reference (some number). We have 
> googled and haven't found any clue about how to use this API.
> The API user guide doesn't say much about the "reference". What is the next 
> step with the returned reference?
> 
> 
> cs_bucket_fold(Pid::pid(), Bucket::bucket() | bucket_and_type(), 
> Opts::cs_opts()) -> {ok, reference()} | {error, term()}
> secret function, do not use, or I come to your house and keeel you.
> 
> cs_opt() = {timeout, timeout()} | {continuation, binary()} | {max_results, 
> non_neg_integer() | all} | {start_key, binary()} | {start_incl, boolean()} | 
> {end_key, binary()} | {end_incl, boolean()}
> 
> 
> Br,
> Alex
> 
> 2017-02-13 16:34 GMT+08:00 Russell Brown :
> If you look at the riak-erlang-client here 
> https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1129
>  there is a client API call that was implemented for CS (riak s2, or whatever 
> it is called, the large object store thingy that basho had) that will use the 
> $keys index, but in the back end calls fold_objects rather than fold_keys on 
> eleveldb, and so returns the whole object. NOTE: this is equivalent to r=1, 
> you don’t get a quorum read here, but a single vnode per-value only. I don’t 
> know if other clients added this API, but it would not be hard to add to any 
> client that supports 2i. Like I say, originally it was for riakCS, but it’s 
> open source and part of the release for 4 years now, so hardly a secret.
> 
> Cheers
> 
> Russell
> 
> On 13 Feb 2017, at 06:18, Alex Feng  wrote:
> 
> > Hi Russell,
> >
> > In your reply, you mentioned this,
> >
> > >> There is also the feature that can return the actual riak objects for a 
> > >> $keys index search,
> > >>You can pack the index terms with data and return the terms in a query so 
> > >>that you don’t need a further object fetch (see >>return_terms in docs.)
> >
> > If I understood correctly, it is possible to fetch the object (value) by 2i in 
> > one call. But we have tried using "return_terms = true", and it returns the 
> > index term along with the key, versus only the key without it. The 
> > extra index term doesn't help much; what we want to achieve is to fetch the 
> > object in one call.
> >
> >
> > Our use case: the client searches the DB by 2i every 10 seconds, Riak returns a 
> > list of around 5000 results (keys), then the client queries the DB to fetch the value 
> > for each key, which is around 5000 requests; the client easily runs 
> > into issues most of the time. Any suggestion here?
> >
> > Many thanks in advance.
> >
> > Br,
> > Alex
> >
> > 2017-02-06 19:02 GMT+08:00 Alex Feng :
> > Hi Russell,
> >
> > It is really helpful, thank you a lot.
> > We are suffering from solr crash now, are considering to switch to 2i.
> >
> > Br,
> > Alex
> >
> > 2017-02-06 16:53 GMT+08:00 Russell Brown :
> > It’s worth noting that secondary indexes (2i) have some other advantages 
> > over solr search. If you _can_ model your queries in 2i then I'd recommend 
> > it.
> > 
> > Secondary indexes have a richer API than is currently documented: if you 
> > look at https://docs.basho.com/riak/1.4.7/dev/using/2i/ you’ll see that it 
> > documents a feature that allows the index terms to be filtered via regex. 
> > There is also the feature that can return the actual riak objects for a 
> > $keys index search,
> > You can pack the index terms with data and return the terms in a query so 
> > that you don’t need a further object fetch (see return_terms in docs.)
> > Secondary indexes are written atomically with the object they index.
> > Operationally they don’t require you run a JVM and Solr alongside your riak 
> > nodes.
> >
> > You have the tools

Re: Secondary indexes or Riak search ?

2017-02-13 Thread Russell Brown
If you look at the riak-erlang-client here 
https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1129
 there is a client API call that was implemented for CS (riak s2, or whatever 
it is called, the large object store thingy that basho had) that will use the 
$keys index, but in the back end calls fold_objects rather than fold_keys on 
eleveldb, and so returns the whole object. NOTE: this is equivalent to r=1, you 
don’t get a quorum read here, but a single vnode per-value only. I don’t know 
if other clients added this API, but it would not be hard to add to any client 
that supports 2i. Like I say, originally it was for riakCS, but it’s open 
source and part of the release for 4 years now, so hardly a secret.

Cheers

Russell

On 13 Feb 2017, at 06:18, Alex Feng  wrote:

> Hi Russell,
> 
> In your reply, you mentioned this,
> 
> >> There is also the feature that can return the actual riak objects for a 
> >> $keys index search,
> >>You can pack the index terms with data and return the terms in a query so 
> >>that you don’t need a further object fetch (see >>return_terms in docs.)
> 
> If I understood correctly, it is possible to fetch the object (value) by 2i in 
> one call. But we have tried using "return_terms = true", and it returns the index 
> term along with the key, versus only the key without it. The extra index 
> term doesn't help much; what we want to achieve is to fetch the object in 
> one call.
> 
> 
> Our use case: the client searches the DB by 2i every 10 seconds, Riak returns a 
> list of around 5000 results (keys), then the client queries the DB to fetch the value 
> for each key, which is around 5000 requests; the client easily runs into 
> some issues most of the time. Any suggestion here?  
> 
> Many thanks in advance.
> 
> Br,
> Alex
> 
> 2017-02-06 19:02 GMT+08:00 Alex Feng :
> Hi Russell,
> 
> It is really helpful, thank you a lot.
> We are suffering from solr crash now, are considering to switch to 2i.
> 
> Br,
> Alex
> 
> 2017-02-06 16:53 GMT+08:00 Russell Brown :
> It’s worth noting that secondary indexes (2i) have some other advantages over 
> solr search. If you _can_ model your queries in 2i then I'd recommend it.
> 
> Secondary indexes have a richer API than is currently documented: if you 
> look at https://docs.basho.com/riak/1.4.7/dev/using/2i/ you’ll see that it 
> documents a feature that allows the index terms to be filtered via regex. 
> There is also the feature that can return the actual riak objects for a $keys 
> index search,
> You can pack the index terms with data and return the terms in a query so 
> that you don’t need a further object fetch (see return_terms in docs.)
> Secondary indexes are written atomically with the object they index.
> Operationally they don’t require you run a JVM and Solr alongside your riak 
> nodes.
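The term-packing idea in the quoted message can be sketched with the Erlang client as follows; the "status" index name and the `Status|Key` packing format are illustrative, not from the original message:

```erlang
-module(packed_2i_example).
-export([put_packed/4, query_packed/2]).

%% Embed data in the index term itself, so a return_terms query
%% returns {Term, Key} pairs and no second object fetch is needed.
put_packed(Pid, Bucket, Key, Status) ->
    Obj0 = riakc_obj:new(Bucket, Key, <<"value">>),
    MD0 = riakc_obj:get_update_metadata(Obj0),
    %% Hypothetical packing: "<status>|<key>" as the index term.
    Term = <<Status/binary, "|", Key/binary>>,
    MD1 = riakc_obj:set_secondary_index(
            MD0, [{{binary_index, "status"}, [Term]}]),
    riakc_pb_socket:put(Pid, riakc_obj:update_metadata(Obj0, MD1)).

query_packed(Pid, Bucket) ->
    %% return_terms hands back the packed terms with the keys, so the
    %% data arrives as part of the range query result.
    riakc_pb_socket:get_index_range(Pid, Bucket,
        {binary_index, "status"},
        <<"active|">>, <<"active|~">>,
        [{return_terms, true}]).
```

This is the classic "do the work at write time" 2i modelling trade-off: one range query replaces one query plus thousands of per-key fetches.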
> 
> You have the tools with basho_bench to answer the question about performance 
> and overhead for your workload. I suspect for “overhead” 2i wins, as there is 
> no JVM-per-node.
> 
> Modelling for 2i is perhaps harder: in the classical nosql way, you have to 
> do more work upfront when designing your querying.
> 
> I hope that helps a little. I worked quite a lot on 2i and never really 
> understood why riak-search was seen as a replacement; imo they’re 
> complementary, and you pick the one that best fits.
> 
> Cheers
> 
> Russell
> 
> On 2 Feb 2017, at 09:43, Alex Feng  wrote:
> 
> > Hello Riak-users,
> >
> > I am currently using Riak search to do some queries, since my queries are 
> > very simple, it should be fulfilled by secondary indexes as well.
> > So, my question is which one has better performance and less overhead, 
> > let's say both can fulfill the query requirement.
> >
> > Many thanks in advance.
> >
> > Br,
> > Alex
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [Basho Riak] Fail To Update Document Repeatly With Cluster of 5 Nodes

2017-02-07 Thread Russell Brown

On 7 Feb 2017, at 10:27, my hue  wrote:

> Dear Russell,
> 
> Yes, I updated all registers in one go.
> I have not yet tried updating a single register at a time; 
> let me try and see. But I wonder whether updating all in one go 
> has any effect on conflict resolution in the riak 
> cluster? 
> 

Just trying to make the search space as small as possible. I don’t think _any_ 
of this should fail. The maps code is very well tested and well used, so it’s 
all kind of odd.

Without hands on it’s hard to debug, and email back and forth is slow, so if 
you try the simplest possible thing and that still fails, it helps.

IMO the simplest possible thing is to start with a new, empty key and use 
modify_type to update a single register.
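That suggestion can be sketched with the Erlang client as below; the bucket, key, and field names are illustrative, taken loosely from the data in this thread:

```erlang
-module(modify_example).
-export([update_one_register/4]).

%% modify_type/5 performs the fetch (including the opaque context)
%% and the update in one call; the [create] option lets it work on a
%% new, empty key.
update_one_register(Pid, BucketType, Bucket, Key) ->
    riakc_pb_socket:modify_type(
      Pid,
      fun(Map) ->
              riakc_map:update(
                {<<"name">>, register},
                fun(R) -> riakc_register:set(<<"fullmenu">>, R) end,
                Map)
      end,
      {BucketType, Bucket}, Key, [create]).
```

Because the fetch and update happen in sequence inside one call, this removes the hand-rolled fetch/extract-context/update steps as a source of error.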

Many thanks

Russell

> 
> 
> On Tue, Feb 7, 2017 at 5:18 PM, Russell Brown  wrote:
> So you’re updating all those registers in one go? Out of interest, what 
> happens if you update a single register at a time?
> 
> On 7 Feb 2017, at 10:02, my hue  wrote:
> 
> > Dear Russell,
> >
> > > Can you run riakc_map:to_op(Map). and show me the output of that, please?
> >
> > The following is output of riakc_map:to_op(Map) :
> >
> > {map, {update, [{update, 
> > {<<"updated_time_dt">>,register},{assign,<<"2017-02-06T17:22:39Z">>}}, 
> > {update,{<<"updated_by_id">>,register}, 
> > {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{update,{<<"status_id">>,register},{assign,<<"show">>}},{update,{<<"start_time">>,register},{assign,<<"dont_use">>}},{update,{<<"restaurant_status_id">>,register},
> >  {assign,<<"inactive">>}}, {update,{<<"restaurant_id">>,register}, 
> > {assign,<<"rest848e042b3a0488640981c8a6dc4a8281">>}},{update,{<<"rest_location_p">>,register},
> >  {assign,<<"10.844117421366443,106.63982392275398">>}}, 
> > {update,{<<"order_i">>,register},{assign,<<"0">>}}, 
> > {update,{<<"name">>,register},{assign,<<"fullmenu">>}}, 
> > {update,{<<"menu_category_revision_id">>,register}, 
> > {assign,<<"0-634736bc14e0bd3ed7e3fe0f1ee64443">>}}, 
> > {update,{<<"maintain_mode_b">>,register},{assign,<<"false">>}}, 
> > {update,{<<"id">>,register}, 
> > {assign,<<"menufe89488afa948875cab6b0b18d579f21">>}}, 
> > {update,{<<"end_time">>,register},{assign,<<"dont_use">>}},{update,{<<"currency">>,register},{assign,<<"cad">>}},
> >  {update,{<<"created_time_dt">>,register}, 
> > {assign,<<"2017-01-27T03:34:04Z">>}}, 
> > {update,{<<"created_by_id">>,register}, 
> > {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}}, 
> > {update,{<<"account_id">>,register}, 
> > {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}}]}, 
> > <<131,108,0,0,0,3,104,2,109,0,0,0,12,39,21,84,209,219,42,57,233,0,0,156,252,97,34,104,2,109,0,0,0,12,132,107,248,226,103,5,182,208,0,0,118,2,97,39,104,2,109,0,0,0,12,137,252,139,186,176,202,25,96,0,0,195,164,97,53,106>>}
> >
> >
> >
> >
> > On Tue, Feb 7, 2017 at 4:36 PM, Russell Brown  wrote:
> >
> > On 7 Feb 2017, at 09:34, my hue  wrote:
> >
> > > Dear Russell,
> > >
> > > >What operation are you performing? What is the update you perform? Do 
> > > >you set a register value, add a register, remove a register?
> > >
> > > I used riakc_map:update to update value with map. I do the following 
> > > steps :
> > >
> > > - Get FetchData map with  fetch_type
> > > - Extract key, value, context from FetchData
> > > - Obtain UpdateData with:
> > >
> > > + Init map with context
> >
> > I don’t understand this step
> >
> > > + Use :
> > >
> > >riakc_map:update({K, register}, fun(R) -> riakc_register:set(V,  R) 
> > > end,  InitMap)
> > >
> > > to obtain UpdateData
> > >
> > > Note:
> > > K : key
> > > V:  value
> > >
> > > - Then  update UpdateData with update_type
> > >
> >
> > Can you run 

Re: [Basho Riak] Fail To Update Document Repeatly With Cluster of 5 Nodes

2017-02-07 Thread Russell Brown
So you’re updating all those registers in one go? Out of interest, what 
happens if you update a single register at a time?

On 7 Feb 2017, at 10:02, my hue  wrote:

> Dear Russell,
> 
> > Can you run riakc_map:to_op(Map). and show me the output of that, please? 
> 
> The following is output of riakc_map:to_op(Map) :
> 
> {map, {update, [{update, 
> {<<"updated_time_dt">>,register},{assign,<<"2017-02-06T17:22:39Z">>}}, 
> {update,{<<"updated_by_id">>,register}, 
> {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{update,{<<"status_id">>,register},{assign,<<"show">>}},{update,{<<"start_time">>,register},{assign,<<"dont_use">>}},{update,{<<"restaurant_status_id">>,register},
>  {assign,<<"inactive">>}}, {update,{<<"restaurant_id">>,register}, 
> {assign,<<"rest848e042b3a0488640981c8a6dc4a8281">>}},{update,{<<"rest_location_p">>,register},
>  {assign,<<"10.844117421366443,106.63982392275398">>}}, 
> {update,{<<"order_i">>,register},{assign,<<"0">>}}, 
> {update,{<<"name">>,register},{assign,<<"fullmenu">>}}, 
> {update,{<<"menu_category_revision_id">>,register}, 
> {assign,<<"0-634736bc14e0bd3ed7e3fe0f1ee64443">>}}, 
> {update,{<<"maintain_mode_b">>,register},{assign,<<"false">>}}, 
> {update,{<<"id">>,register}, 
> {assign,<<"menufe89488afa948875cab6b0b18d579f21">>}}, 
> {update,{<<"end_time">>,register},{assign,<<"dont_use">>}},{update,{<<"currency">>,register},{assign,<<"cad">>}},
>  {update,{<<"created_time_dt">>,register}, 
> {assign,<<"2017-01-27T03:34:04Z">>}}, {update,{<<"created_by_id">>,register}, 
> {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}}, 
> {update,{<<"account_id">>,register}, 
> {assign,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}}]}, 
> <<131,108,0,0,0,3,104,2,109,0,0,0,12,39,21,84,209,219,42,57,233,0,0,156,252,97,34,104,2,109,0,0,0,12,132,107,248,226,103,5,182,208,0,0,118,2,97,39,104,2,109,0,0,0,12,137,252,139,186,176,202,25,96,0,0,195,164,97,53,106>>}
> 
> 
>  
> 
> On Tue, Feb 7, 2017 at 4:36 PM, Russell Brown  wrote:
> 
> On 7 Feb 2017, at 09:34, my hue  wrote:
> 
> > Dear Russell,
> >
> > >What operation are you performing? What is the update you perform? Do you 
> > >set a register value, add a register, remove a register?
> >
> > I used riakc_map:update to update value with map. I do the following steps :
> >
> > - Get FetchData map with  fetch_type
> > - Extract key, value, context from FetchData
> > - Obtain UpdateData with:
> >
> > + Init map with context
> 
> I don’t understand this step
> 
> > + Use :
> >
> >riakc_map:update({K, register}, fun(R) -> riakc_register:set(V,  R) end, 
> >  InitMap)
> >
> > to obtain UpdateData
> >
> > Note:
> > K : key
> > V:  value
> >
> > - Then  update UpdateData with update_type
> >
> 
> Can you run riakc_map:to_op(Map). and show me the output of that, please?
> 
> > The following is sample about Update data :
> >
> > {map, [] ,
> >  
> > [{{<<"account_id">>,register},{register,<<>>,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{{<<"created_by_id">>,register},{register,<<>>,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{{<<"created_time_dt">>,register},{register,<<>>,<<"2017-01-27T03:34:04Z">>}},{{<<"currency">>,register},{register,<<>>,<<"cad">>}},{{<<"end_time">>,register},{register,<<>>,<<"dont_use">>}},{{<<"id">>,register},{register,<<>>,<<"menufe89488afa948875cab6b0b18d579f21">>}},{{<<"maintain_mode_b">>,register},{register,<<>>,<<"false">>}},{{<<"menu_category_revision_id">>,register},{register,<<>>,<<"0-634736bc14e0bd3ed

Re: [Basho Riak] Fail To Update Document Repeatly With Cluster of 5 Nodes

2017-02-07 Thread Russell Brown

On 7 Feb 2017, at 09:34, my hue  wrote:

> Dear Russell,
> 
> >What operation are you performing? What is the update you perform? Do you 
> >set a register value, add a register, remove a register?
> 
> I used riakc_map:update to update a value in the map. I follow these steps:
> 
> - Get FetchData map with  fetch_type 
> - Extract key, value, context from FetchData
> - Obtain UpdateData with:   
> 
> + Init map with context 

I don’t understand this step

> + Use :
> 
>riakc_map:update({K, register}, fun(R) -> riakc_register:set(V,  R) end,  
> InitMap)
>  
> to obtain UpdateData
> 
> Note: 
> K : key 
> V:  value 
> 
> - Then  update UpdateData with update_type
> 

Can you run riakc_map:to_op(Map). and show me the output of that, please?

> The following is sample about Update data :
> 
> {map, [] ,  
>  
> [{{<<"account_id">>,register},{register,<<>>,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{{<<"created_by_id">>,register},{register,<<>>,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{{<<"created_time_dt">>,register},{register,<<>>,<<"2017-01-27T03:34:04Z">>}},{{<<"currency">>,register},{register,<<>>,<<"cad">>}},{{<<"end_time">>,register},{register,<<>>,<<"dont_use">>}},{{<<"id">>,register},{register,<<>>,<<"menufe89488afa948875cab6b0b18d579f21">>}},{{<<"maintain_mode_b">>,register},{register,<<>>,<<"false">>}},{{<<"menu_category_revision_id">>,register},{register,<<>>,<<"0-634736bc14e0bd3ed7e3fe0f1ee64443">>}},{{<<"name">>,register},{register,<<>>,<<"fullmenu">>}},{{<<"order_i">>,register},{register,<<>>,<<"0">>}},{{<<"rest_location_p">>,register},{register,<<>>,<<"10.844117421366443,106.63982392275398">>}},{{<<"restaurant_id">>,register},{register,<<>>,<<"rest848e042b3a0488640981c8a6dc4a8281">>}},{{<<"restaurant_status_id">>,register},{register,<<>>,<<"inactive">>}},{{<<"start_time">>,register},{register,<<>>,<<"dont_use">>}},{{<<"status_id">>,register},{register,<<>>,<<"show">>}},{{<<"updated_by_id">>,register},{register,<<>>,<<"accounta25a424b8484181e8ba1bec25bf7c491">>}},{{<<"updated_time_dt">>,register},{register,<<>>,<<"2017-02-06T17:22:39Z">>}}],
>  
>  [] ,  
> <<131,108,0,0,0,3,104,2,109,0,0,0,12,39,21,84,209,219,42,57,233,0,0,156,252,97,34,104,2,109,0,0,0,12,132,107,248,226,103,5,182,208,0,0,118,2,97,39,104,2,109,0,0,0,12,137,252,139,186,176,202,25,96,0,0,195,164,97,53,106>>
> }
> 
> 
> On Tue, Feb 7, 2017 at 3:43 PM, Russell Brown  wrote:
> 
> On 7 Feb 2017, at 08:17, my hue  wrote:
> 
> > Dear John and Russell Brown,
> >
> > * How fast is your turnaround time between an update and a fetch?
> >
> > The turnaround time between an update and a fetch is about 1 second.
> > While my team and I debugged, we adjusted haproxy with the following 
> > scenarios:
> >
> > Scenario 1: round robin across the 5 nodes of the cluster
> >
> > We met the issue in scenario 1, and we were afraid that a timeout could occur 
> > between nodes,
> > leaving us with stale data. Then we performed scenario 2.
> >
> > Scenario 2: disable round robin and route requests only to node 1. The cluster 
> > is still 5 nodes.
> > In this case we ensure that update and fetch requests always go to and 
> > come from node 1.
> > And the issue still occurs.
> >
> > At the time of failure, I hoped to get an error log from the riak nodes to 
> > give me some information.
> > But the riak log shows me nothing, and everything is ok.
> >
> > * What operation are you performing?
> >
> > I used :
> >
> > riakc_pb_socket:update_type(Pid, {Bucket-Type, Bucket}, Key, 
> > riakc_map:to_op(Map), []).
> > riakc_pb_socket:fetch_type(Pid, {BucketType, Bucket}, Key, []).
> 
> What operation are you performing? What is the update you perform? Do you set 
> a register value, add a register, remove a

Re: [Basho Riak] Fail To Update Document Repeatly With Cluster of 5 Nodes

2017-02-07 Thread Russell Brown

On 7 Feb 2017, at 08:17, my hue  wrote:

> Dear John and Russell Brown,
> 
> * How fast is your turnaround time between an update and a fetch?  
> 
> The turnaround time between an update and a fetch is about 1 second. 
> While my team and I debugged, we adjusted haproxy with the following scenarios:
> 
> Scenario 1: round robin across the 5 nodes of the cluster 
> 
> We met the issue in scenario 1, and we were afraid that a timeout could occur 
> between nodes,
> leaving us with stale data. Then we performed scenario 2. 
> 
> Scenario 2: disable round robin and route requests only to node 1. The cluster 
> is still 5 nodes. 
> In this case we ensure that update and fetch requests always go to and 
> come from node 1.
> And the issue still occurs. 
> 
> At the time of failure, I hoped to get an error log from the riak nodes to give 
> me some information.
> But the riak log shows me nothing, and everything is ok. 
> 
> * What operation are you performing? 
> 
> I used :
> 
> riakc_pb_socket:update_type(Pid, {Bucket-Type, Bucket}, Key, 
> riakc_map:to_op(Map), []).
> riakc_pb_socket:fetch_type(Pid, {BucketType, Bucket}, Key, []). 

What operation are you performing? What is the update you perform? Do you set a 
register value, add a register, remove a register?
> 
> * It looks like the map is a single level map of last-write-wins registers. 
> Is there a chance that the time on the node handling the update is behind the 
> value in the lww-register? 
> 
> => I am not sure about the internal logic riak nodes use to resolve conflicts. 
> And the issue never happens when I use a single node. 
> My bucket properties are as follows:
> 
> {"props":{"name":"menu","active":true,"allow_mult":true,"backend":"bitcask_mult","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak-node1@64.137.190.244","datatype":"map","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"menu","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","search_index":"menu_idx","small_vclock":50,"w":"quorum","young_vclock":20}}
> 
> Note :   
> + "datatype":"map" 
> + "last_write_wins": false
> + "dvv_enabled": true
> + "allow_mult": true 
> 
> 
> * Have you tried using the `modify_type` operation in riakc_pb_socket which 
> does the fetch/update operation in sequence for you?
> 
> => I do not use it yet, but my actions are sequential: fetch and then update. 
> Maybe I will try modify_type to see. 
> 
> * Anything in the error logs on any of the nodes?
> 
> => From the node log, there was no error report at the time of failure. 
> 
> * Is the opaque context identical from the fetch and then later after the 
> update? 
> 
> => The context is taken from the fetch, and that context is used with the update. 
> And during our debugging, with a sequence of fetch, update, fetch, 
> update, the context I saw at 
> fetch was always the same. 
> 
> Best regards,
> Hue Tran
>  
> 
> 
> On Tue, Feb 7, 2017 at 2:11 AM, John Daily  wrote:
> Originally I suspected the context which allows Riak to resolve conflicts was 
> not present in your data, but I see it in your map structure. Thanks for 
> supplying such a detailed description.
> 
> How fast is your turnaround time between an update and a fetch? Even if the 
> cluster is healthy it’s not impossible to see a timeout between nodes, which 
> could result in a stale retrieval. Have you verified whether the stale data 
> persists?
> 
> A single node cluster gives an advantage that you’ll never see in a real 
> cluster: a perfectly synchronized clock. It also reduces (but does not 
> completely eliminate) the possibility of an internal timeout between 
> processes.
> 
> -John
> 
>> On Feb 6, 2017, at 1:02 PM, my hue  wrote:
>> 
>> Dear Riak Team,
>> 
>> My team and I use riak as the database for our production system, with a cluster 
>> of 5 nodes. 
>> While running in production, we hit a critical bug: updating a document 
>> sometimes fails. 
>> My colleagues and I debugged it and detected an issue with the scenario 
>&g

Re: [Basho Riak] Fail To Update Document Repeatly With Cluster of 5 Nodes

2017-02-06 Thread Russell Brown
What operation are you performing? It looks like the map is a single level map 
of last-write-wins registers. Are you updating a value? Is there a chance that 
the time on the node handling the update is behind the value in the 
lww-register?

Have you tried using the `modify_type` operation in riakc_pb_socket which does 
the fetch/update operation in sequence for you?

Anything in the error logs on any of the nodes?

Is the opaque context identical from the fetch and then later after the update?

Thanks

Russell

On 6 Feb 2017, at 19:11, John Daily  wrote:

> Originally I suspected the context which allows Riak to resolve conflicts was 
> not present in your data, but I see it in your map structure. Thanks for 
> supplying such a detailed description.
> 
> How fast is your turnaround time between an update and a fetch? Even if the 
> cluster is healthy it’s not impossible to see a timeout between nodes, which 
> could result in a stale retrieval. Have you verified whether the stale data 
> persists?
> 
> A single node cluster gives an advantage that you’ll never see in a real 
> cluster: a perfectly synchronized clock. It also reduces (but does not 
> completely eliminate) the possibility of an internal timeout between 
> processes.
> 
> -John
> 
>> On Feb 6, 2017, at 1:02 PM, my hue  wrote:
>> 
>> Dear Riak Team,
>> 
>> My team and I use riak as the database for our production system, with a 
>> cluster of 5 nodes. 
>> While running in production, we hit a critical bug: updating a document 
>> sometimes fails. 
>> My colleagues and I debugged it and detected an issue with the following 
>> scenario: 
>> 
>> +  fetch document  
>> +  change value of document 
>> +  update document
>> 
>> Repeating this about 10 times reliably produces a failure: when a document 
>> is updated continually, 
>> some updates fail.
>> 
>> Initially, all 5 nodes of the cluster ran Riak version 2.1.1.  
>> After hitting the bug above we upgraded to Riak version 2.2.0, but the issue 
>> still occurs.
>> 
>> After many test runs, debugging with tcpdump on a Riak node:
>> 
>> tcpdump -A -ttt  -i {interface} src host {host} and dst port {port} 
>> 
>> And together with the command: 
>> 
>> riak-admin status | grep "node_puts_map\|node_puts_map_total\|node_puts_total\|vnode_map_update_total\|vnode_puts_total"
>> 
>> we confirmed that the Riak server does receive the update request. 
>> However, we do not know why the Riak backend fails to update the document.  
>> At the time of the failure the Riak server logs show nothing unusual. 
>> 
>> We then tore down the cluster and used a single Riak server, and saw that 
>> the above bug never happens.
>>  
>> For that reason we think it only happens when running as a cluster. We 
>> researched the Basho Riak documentation, and our Riak configuration 
>> seems to follow its suggestions.  We are completely blocked on this issue 
>> and hope we can get support from you,  
>> so that we can get stable behaviour from the Riak database in production. 
>> Thank you so much.  We hope to get your reply soon.
>> 
>> 
>> * The following is our riak node information : 
>> 
>> Riak version:  2.2.0
>> OS :  CentOS Linux release 7.2.1511
>> Cpu :  4 core
>> Memory : 4G  
>> Riak configure : the attached file "riak.conf"
>> 
>> Note : 
>> 
>> - We mostly use the default Riak configuration, except that the storage 
>> backend is set to multi: 
>> 
>> storage_backend = multi
>> multi_backend.bitcask_mult.storage_backend = bitcask
>> multi_backend.bitcask_mult.bitcask.data_root = /var/lib/riak/bitcask_mult
>> multi_backend.default = bitcask_mult
>> 
>> -
>> 
>> - Bucket type created with the following command:
>> 
>> riak-admin bucket-type create dev_restor 
>> '{"props":{"backend":"bitcask_mult","datatype":"map"}}'
>> riak-admin bucket-type activate dev_restor
>> 
>> -
>> 
>> - Bucket Type Status :
>> 
>> >> riak-admin bucket-type status dev_restor
>> 
>> dev_restor is active
>> young_vclock: 20
>> w: quorum
>> small_vclock: 50
>> rw: quorum
>> r: quorum
>> pw: 0
>> precommit: []
>> pr: 0
>> postcommit: []
>> old_vclock: 86400
>> notfound_ok: true
>> n_val: 3
>> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
>> last_write_wins: false
>> dw: quorum
>> dvv_enabled: true
>> chash_keyfun: {riak_core_util,chash_std_keyfun}
>> big_vclock: 50
>> basic_quorum: false
>> backend: <<"bitcask_mult">>
>> allow_mult: true
>> datatype: map
>> active: true
>> claimant: 'riak-node1@64.137.190.244'
>> 
>> -
>> 
>> - Bucket Property :
>> 
>> {"props":{"name":"menu","active":true,"allow_mult":true,"backend":"bitcask_mult","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_

Re: Secondary indexes or Riak search ?

2017-02-06 Thread Russell Brown
It’s worth noting that secondary indexes (2i) has some other advantages over 
solr search. If you _can_ model your queries in 2i then I'd recommend it.

Secondary indexes have a richer API than is currently documented. If you look 
at https://docs.basho.com/riak/1.4.7/dev/using/2i/ you’ll see that it documents 
a feature that allows the index terms to be filtered via a regex. There is also 
a feature that can return the actual Riak objects for a $keys index search.
You can pack the index terms with data and return the terms in a query so that 
you don’t need a further object fetch (see return_terms in the docs).
Secondary indexes are written atomically with the object they index.
Operationally they don’t require you run a JVM and Solr alongside your riak 
nodes.
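The "pack data into the index terms" trick can be sketched in pure Python. The term layout (`date|order_id|total`) and the sorted list of (term, key) pairs are assumptions for illustration, roughly mimicking how a sorted backend stores 2i entries:

```python
import bisect
import re

# A 2i index modelled as a sorted list of (term, key) pairs. Because the
# terms carry the order id and total, a range query with return_terms-style
# results answers the question with no second object fetch; term_regex
# mimics the regex filtering mentioned above.

index = sorted([
    ("2013-09-30|o17|12.50", "o17"),
    ("2013-10-02|o18|99.00", "o18"),
    ("2013-10-15|o19|42.00", "o19"),
    ("2013-11-01|o20|7.25",  "o20"),
])

def range_query(index, start, end, term_regex=None):
    """Return (term, key) pairs whose term falls in [start, end]."""
    lo = bisect.bisect_left(index, (start, ""))
    hi = bisect.bisect_right(index, (end + "\xff", ""))
    hits = index[lo:hi]
    if term_regex:
        hits = [h for h in hits if re.search(term_regex, h[0])]
    return hits

october = range_query(index, "2013-10-01", "2013-10-31")
# Each returned term already carries the order id and total.
totals = [float(term.split("|")[2]) for term, _ in october]
```

The design cost Russell mentions shows up here too: the composite term layout has to be decided when the objects are written, not at query time.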

You have the tools with basho_bench to answer the question about performance 
and overhead for your workload. I suspect for “overhead” 2i wins, as there is 
no JVM-per-node.

Modelling for 2i is perhaps harder: in the classical NoSQL way, you have to do 
more work upfront when designing your queries.

I hope that helps a little. I worked quite a lot on 2i and never really 
understood why riak-search was seen as a replacement; imo they’re complementary, 
and you pick the one that best fits.

Cheers

Russell

On 2 Feb 2017, at 09:43, Alex Feng  wrote:

> Hello Riak-users,
> 
> I am currently using Riak search to do some queries, since my queries are 
> very simple, it should be fulfilled by secondary indexes as well. 
> So, my question is which one has better performance and less overhead, let's 
> say both can fulfill the query requirement.
> 
> Many thanks in advance.
> 
> Br,
> Alex
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Efficient way to fetch all records from a bucket

2017-01-30 Thread Russell Brown
If you use leveldb, there is a function in the riak-erlang-client that gets all 
objects in a bucket. I don’t know if it has been implemented in the Java client, 
as it was written specifically for riak-cs.

https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130
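For clients without that helper, the fallback is the two-step flow the question describes: stream the keys, then fetch each object. The sketch below runs against an invented in-memory stand-in (`FakeBucket` is not a real client API):

```python
# Two-step fetch-all: list every key, then fetch each object. In a real
# cluster stream_keys is the expensive coverage query touching every vnode,
# and each get is a further round trip, which is why a single-pass helper
# like the Erlang client's is preferable when available.

class FakeBucket:
    def __init__(self, data):
        self.data = data                 # key -> stored object

    def stream_keys(self):
        yield from self.data             # the costly listing step

    def get(self, key):
        return self.data[key]            # one round trip per object

def fetch_all(bucket):
    """List every key, then fetch each object (two steps, as described)."""
    return {key: bucket.get(key) for key in bucket.stream_keys()}

bucket = FakeBucket({"a": 1, "b": 2, "c": 3})
objects = fetch_all(bucket)
```

In practice the follow-up fetches are worth batching or parallelising, since they dominate once the key list is in hand.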

Cheers

Russell

On 24 Jan 2017, at 07:54, Sat  wrote:

> Hi ,
> I am planning to use riak-client 2.2.0 with java technology.
> What I understood is, If I have to fetch all records which are falling under
> one bucket, I need to first fetch all keys then based on those keys I need
> to fetch these values. This is going to be in two steps. And first step
> seems to be very costly operation.
> Just wanted to know is there any efficient way by which I can fetch all
> record from the bucket in a single call ?
> 
> Help would be appreciated.
> 
> 
> 
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Efficient-way-to-fetch-all-records-from-a-bucket-tp4034810.html
> Sent from the Riak Users mailing list archive at Nabble.com.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Efficient way to fetch all records from a bucket

2017-01-30 Thread Russell Brown
The link I provided gives you the _objects_ too; list_keys gives only keys.

On 28 Jan 2017, at 12:21, Grigory Fateyev  wrote:

> Hello!
> 
> I think this link 
> https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L506
>  ? You need list_keys/2 function.
> 
> 2017-01-28 13:33 GMT+03:00 Russell Brown :
> IF you use leveldb, there is a function in the riak-erlang-client that gets 
> all objects in a bucket, I don’t know if it has been implemented in the java 
> client as it was written specifically for riak-cs.
> 
> https://github.com/basho/riak-erlang-client/blob/develop/src/riakc_pb_socket.erl#L1130
> 
> Cheers
> 
> Russell
> 
> On 24 Jan 2017, at 07:54, Sat  wrote:
> 
> > Hi ,
> > I am planning to use riak-client 2.2.0 with java technology.
> > What I understood is, If I have to fetch all records which are falling under
> > one bucket, I need to first fetch all keys then based on those keys I need
> > to fetch these values. This is going to be in two steps. And first step
> > seems to be very costly operation.
> > Just wanted to know is there any efficient way by which I can fetch all
> > record from the bucket in a single call ?
> >
> > Help would be appreciated.
> >
> >
> >
> > --
> > View this message in context: 
> > http://riak-users.197444.n3.nabble.com/Efficient-way-to-fetch-all-records-from-a-bucket-tp4034810.html
> > Sent from the Riak Users mailing list archive at Nabble.com.
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com






Re: secondary indexes

2017-01-17 Thread Russell Brown

On 18 Jan 2017, at 03:07, Andy leu  wrote:

> thank you guys, that helps a lot.
> one more question, do you think solr is a better solution than seconary 
> indexes?

It depends on your use case. There are Riak users who have great success with 
secondary indexes for their use case, for others solr makes more sense. I 
personally disagree with the statement at the top of 
http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/

Cheers

Russell

> 
> andrew
> 
> From: Russell Brown 
> Sent: Tuesday, January 17, 2017 3:45:34 PM
> To: Andy leu
> Cc: riak-users@lists.basho.com
> Subject: Re: secondary indexes
>  
> Hi,
> Riak's secondary indexes require a sorted backend, either of the memory or 
> leveldb backends will work, bitcask does not support secondary indexes.
> 
> More details here 
> http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/
> 
> Cheers
> 
> Russell
> 
> On Jan 17, 2017, at 07:13 AM, Andy leu  wrote:
> 
>> hi:
>> I ran the code as shown in 
>> http://docs.basho.com/riak/kv/2.2.0/developing/getting-started/python/querying/.
>> when I tried the lines about secondary index :
>> 
>> october_orders = order_bucket.get_index("order_date_bin",
>>     "2013-10-01", "2013-10-31")
>> october_orders.results
>> 
>> 
>> an exception was raised:
>> 
>> riak.riak_error.RiakError: 
>> '{error,{indexes_not_supported,riak_kv_bitcask_backend}}'
>> 
>> 
>> can anyone tell me why and how to fix it?
>> Thanks
>> 
>> Andrew
>> 
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





Re: List all keys on a small bucket

2016-12-08 Thread Russell Brown
Depends on what backend you are running, no? If leveldb then this list keys 
operation can be pretty cheap.

It’s a coverage query, but if it’s leveldb at least you will seek to the start 
of the bucket and iterate over only the keys in that bucket.
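Why that makes a small-bucket listing cheap can be sketched in pure Python: with keys ordered as (bucket, key) pairs, the scan seeks to the bucket's first key and stops as soon as the bucket prefix changes. An illustration of the idea, not leveldb itself:

```python
import bisect

# A sorted backend modelled as an ordered list of (bucket, key) pairs.
# Listing the keys of one bucket seeks to the bucket's first entry and
# iterates until the bucket changes, never touching the other buckets'
# keys -- here, 3 entries examined instead of 1003.

store = sorted(
    [("big", "k%07d" % i) for i in range(1000)]
    + [("small", "a"), ("small", "b"), ("small", "c")]
)

def list_keys(store, bucket):
    i = bisect.bisect_left(store, (bucket, ""))   # seek to bucket start
    keys = []
    while i < len(store) and store[i][0] == bucket:
        keys.append(store[i][1])                  # stay within the bucket
        i += 1
    return keys

keys = list_keys(store, "small")
```

This is why the backend matters for Arun's question: on bitcask there is no such ordering, so the coverage query really does examine every key.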

Cheers

Russell

On 8 Dec 2016, at 21:19, John Daily  wrote:

> The size of the bucket has no real impact on the cost of a list-keys 
> operation, because each key on the cluster must be examined to determine 
> whether it resides in the relevant bucket. Have you verified whether the stale data 
> 
> -John
> 
>> On Dec 8, 2016, at 4:17 PM, Arun Rajagopalan  
>> wrote:
>> 
>> Hello Riak Users
>> 
>> I have a use case where I would really like to list all keys of a bucket 
>> despite all the warnings about performance. The number of keys is relatively 
>> small - a few thousand at the very most; usually it's no more than 100.
>> 
>> I also have other buckets in the same cluster that have millions of keys and 
>> tens of tera bytes of data
>> 
>> Question: Will listing all keys on the small bucket adversely impact 
>> performance of the other larger buckets?
>> 
>> Thanks
>> Arun
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Detecting Hinted Handoff

2016-07-25 Thread Russell Brown
But I still don’t understand _why_? What is the aim, ‘cos I’m not sure it’s 
possible.

For quorum reads on indexes then Sargun’s “roll your own” idea is the better 
answer at the moment.

On 25 Jul 2016, at 11:25, DeadZen  wrote:

> Hinted handoff from adding a node, removing a node or failing a node? Could 
> probably get some idea from a ring handler, hinted handoff could likely very 
> well trigger its own event as well without a large modification to riak_core 
> 
> 
> On Sunday, July 24, 2016, Sargun Dhillon  wrote:
> It might also make a lot of sense to roll your own secondary indices. That 
> is, have a CRDT set represent the primary key of the rows which meet the 2i 
> condition. In that, you can query the CRDT set, and ensure some level of 
> consistency. There are further tricks to be played here if interested. 
> 
> I'm curious, what is your data model?
> 
> On Fri, Jul 22, 2016 at 11:01 AM, Alexander Sicular  
> wrote:
> Take a look at the "pw" and "pr" tunable consistency options for gets and 
> puts. The base level of abstraction in Riak is the virtual node - not the 
> physical machine. When data is replicated it is replicated to a replica set 
> of virtual nodes. Those virtual nodes have primary and secondary (due to 
> failures) allocations to physical machines. When using "pr" and "pw" options 
> you instruct Riak to only service the request from virtual nodes that are 
> residing on their primarily allocated physical machines. In short, by abusing 
> pr/pw you can infer the state of your cluster from your application. 
> 
> Obviously, this is not foolproof. There may also be additional 2i specific 
> issues to consider. Nevertheless, I always liked this trick. 
> 
> Also, review this four part series on tunable consistency :
> 
> http://basho.com/posts/technical/understanding-riaks-configurable-behaviors-part-1/
> http://basho.com/posts/technical/riaks-config-behaviors-part-2/
> http://basho.com/posts/technical/riaks-config-behaviors-part-3/
> http://basho.com/posts/technical/riaks-config-behaviors-part-4/
> 
> -Alexander 
> 
> 
> @siculars
> http://siculars.posthaven.com
> 
> Sent from my iRotaryPhone
> 
> On Jul 22, 2016, at 12:28, Hawk Newton  wrote:
> 
>> I've got a use case in which I'd like to use a secondary index but can't
>> tolerate partial result sets caused by hinted handoffs.  I'm not currently
>> running riak search and, as this is a fringe case, would prefer not add the
>> additional overhead and complexity if I can help it.
>> 
>> I'd like to detect a hinted handoff operation and throw a 503, if possible.
>> 
>> Does anyone know of a way I can programatically detect if a hinted handoff
>> is underway without having to shell out to riak-admin (yuck!) and parse the
>> results? I'm running riak 2.0.5 at the moment.
>> 
>> Thank you in advance.
>> 
>> -- Hawk
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://riak-users.197444.n3.nabble.com/Detecting-Hinted-Handoff-tp4034489.html
>> Sent from the Riak Users mailing list archive at Nabble.com.
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: How to best store arbitrarily large Java objects

2016-07-22 Thread Russell Brown

On 22 Jul 2016, at 09:12, Henning Verbeek  wrote:

> Alex,
> thanks for the very quick response.
> 
> On Thu, Jul 21, 2016 at 5:36 PM, Alex Moore  wrote:
>>> I'm beginning to think that I'll need to remodel my data and use CRDTs
>>> for individual fields such as the `TreeMap`. Would that be a better
>>> way?
>> 
>> 
>> This sounds like a plausible idea.  If you do a lot of possibly conflicting
>> updates to the Tree, then a CRDT map would be the way to go.  You could
>> reuse the key from the main object, and just put it in the new
>> buckettype/bucket.
> 
> Looking at the 
> [documentation](http://docs.basho.com/riak/kv/2.1.4/developing/data-types/maps/)
> I assume there are no limits to the amount of entries, right?

There is a size limit as a map is just a riak object like any other. We’re 
working on decomposed CRDTs, where the Set/Map/etc are split across many keys. 
We expect Sets are coming soon, Maps are a little further out.

> 
>> If you don't need to update the tree much, you could also just serialize the
>> tree into it's own object - split up the static data and the often updated
>> data, and put them in different buckets that share the same key.
> 
> The tree is built once and read often, rarely appended to. The problem
> with splitting up the object is that the tree makes up about 95% of
> the size, so unless I can split up the tree, it wont help much.

Splitting up CRDTs that are related is probably going to be a problem too, as 
they need to share some common causal information to merge correctly. See above.
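The shared per-actor state that replicas need in order to merge is easy to see in a minimal state-based G-counter — a pure-Python illustration, not riak_dt's code:

```python
# A state-based grow-only counter: state is a per-actor map of increments.
# Merge is a per-actor max, so merging is commutative, associative and
# idempotent -- but only if every replica carries the full per-actor map,
# which is the common information that can't simply be split across keys.

def increment(counter, actor, amount=1):
    c = dict(counter)
    c[actor] = c.get(actor, 0) + amount
    return c

def merge(a, b):
    """Per-actor max: replays and reordering of merges are harmless."""
    return {actor: max(a.get(actor, 0), b.get(actor, 0))
            for actor in set(a) | set(b)}

def value(counter):
    return sum(counter.values())

replica_a = increment({}, "a", 3)                        # {"a": 3}
replica_b = increment({}, "b", 2)                        # {"b": 2}
merged = merge(replica_a, merge(replica_a, replica_b))   # idempotent merge
```

Drop one actor's entry from either side and the merge silently undercounts, which is the flavour of problem splitting a CRDT across keys introduces.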

> 
> Thanks again!
> Henning
> 
> PS: It'd be great to have a `Converter` that can be instructed to map
> fields to CRDT through annotations :)

Is there not a Java converter that maps an object to a CRDT map already? That 
would seem like a nice thing to have; though you’d be limited to 
sets/registers/booleans/counters/maps for your field types, it should work nicely.

> 
> -- 
> My other signature is a regular expression.
> http://www.pray4snow.de
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: Put failure: too many siblings

2016-06-16 Thread Russell Brown
What version of riak_kv is behind this riak_cs install, please? Is it really 
2.1.3 as stated below? This looks and sounds like sibling explosion, which is 
fixed in riak 2.0 and above. Are you sure that you are using the dvv_enabled 
setting for riak_cs bucket properties? Can you post your bucket properties 
please?
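Sibling explosion can be sketched with a toy key model: writes that omit the causal context of what they read pile up as siblings, while a write carrying the latest context collapses them. This is loosely the read-before-write idea behind DVV; the model is invented for illustration, not Riak's code:

```python
# Toy model of sibling accumulation. A write that does not carry the causal
# context of the value it read is concurrent with everything stored, so it
# is added as a sibling; a write that supplies the latest context causally
# dominates and replaces what it saw.

class Key:
    def __init__(self):
        self.version = 0          # stands in for the causal context
        self.siblings = []

    def put(self, value, context=None):
        if context == self.version:
            self.siblings = [value]        # dominates: replace
        else:
            self.siblings.append(value)    # concurrent: keep as sibling
        self.version += 1
        return self.version                # new context for the writer

k = Key()
for i in range(100):
    k.put("blind-write-%d" % i)   # no context: one new sibling per write

blind_siblings = len(k.siblings)  # 100 -- a sibling explosion

ctx = k.version
k.put("resolved", context=ctx)    # fetch-then-write: collapses to one value
```

One hundred context-free writes produce one hundred siblings; a single write that fetched first resolves them, which is why the fix for this class of problem is in the write path, not the storage backend.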

On 16 Jun 2016, at 23:54, Vladyslav Zakhozhai  
wrote:

> Hello.
> 
> I see very interesting and confusing thing.
> 
> From my previous letter you can see that the sibling count on manifest 
> objects is about 100 (actually it is in the range 100-300). Unfortunately my 
> problem is that almost all PUT requests fail with a 500 Internal Server error.
> 
> Today I tried setting the max_siblings Riak option to 500. There were 
> successful PUT requests, but not for long. Now I see "max siblings" errors 
> in the Riak logs, but the actual count is now 500+ (earlier it was 100-300, 
> as mentioned).
> 
> About 30 minutes passed between setting max_siblings=500 and the errors 
> appearing in the log. I should also point out that I have forbidden the PUT 
> method on haproxy - the frontend for Riak CS.
> 
> 
> 
> On Mon, Jun 6, 2016 at 1:17 AM Vladyslav Zakhozhai 
>  wrote:
> Hi, Luke.
> 
> Thank you for your answer. I did not completely understand you about 
> transfer-limit. How does it relate to my problem? Transfer limit is a limit 
> on concurrent data transfers from different nodes. Am I right? Do you mean 
> that Riak can hand off one partition from several nodes concurrently?
> 
> Now I have transfer-limit 1 on all Riak nodes.
> 
> But I am not sure that my cluster will ever converge. All nodes experience 
> low memory and are killed by the OOM killer periodically. I try to add new 
> nodes to the cluster, but due to the OOM-killer problem this process is 
> very, very slow.
> 
> In the official docs I've read:
> 
> "Sibling explosion occurs when an object rapidly collects siblings that are 
> not reconciled. This can lead to a variety of problems, including degraded 
> performance, especially if many objects in a cluster suffer from siblings 
> explosion. At the extreme, having an enormous object in a node can cause 
> reads of that object to crash the entire node. Other issues include undue 
> latency and out-of-memory errors."
> 
> I noticed that new nodes in the cluster do not experience such problems (I 
> mean running out of RAM).
> 
> Regarding siblings, maybe you are right and this is a manifest object. I can 
> recognize the key name but not the bucket name. But more than 100 siblings on 
> many keys really confuses me. Each time I try to PUT some object to Riak via 
> the Riak CS S3 interface I get sibling errors.
> 
> On Fri, Jun 3, 2016 at 6:43 PM Luke Bakken  wrote:
> Hi Vladyslav,
> 
> If you recognize the full name of the object raising the sibling
> warning, it is most likely a manifest object. Sometimes, during hinted
> handoff, you can see these messages. They should resolve after handoff
> completes.
> 
> Please see the documentation for the transfer-limit command as well:
> 
> http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#transfer-limit
> 
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
> 
> 
> On Fri, Jun 3, 2016 at 2:55 AM, Vladyslav Zakhozhai
>  wrote:
> > Hi.
> >
> > I have a trouble with PUT to Riak CS cluster. During this process I
> > periodically see the following message in Riak error.log:
> >
> > 2016-06-03 11:15:55.201 [error]
> > <0.15536.142>@riak_kv_vnode:encode_and_put:2253 Put failure: too many
> > siblings for object OBJECT_NAME (101)
> >
> > and also
> >
> > 2016-06-03 12:41:50.678 [error]
> > <0.20448.515>@riak_api_pb_server:handle_info:331 Unrecognized message
> > {7345880,{error,{too_many_siblings,101}}}
> >
> > Here OBJECT_NAME - is the name of object in Riak which has too many
> > siblings.
> >
> > I am definitely sure that these objects are static. Nobody deletes them,
> > nobody rewrites them. I have no idea why more than 100 siblings of these
> > objects occur.
> >
> > The following effect of this issue occurs:
> >
> > A great number of keys are loaded into RAM. I am almost out of RAM (does
> > each sibling have its own key, or a duplicate key?).
> > Nodes are slow - adding new nodes is too slow
> > The presence of "too many siblings" affects ownership handoffs
> >
> > So I have several questions:
> >
> > Can hinted or ownership handoffs affect the sibling count (I mean, can
> > siblings be created during ownership or hinted handoffs)?
> > Is there any workaround for this issue? Do I need to remove siblings
> > manually, or are they removed during merges, read repairs and so on?
> >
> >
> > My configuration:
> >
> > riak from basho's packages - 2.1.3-1
> > riak cs from basho's packages - 2.1.0-1
> > 24 riak/riak-cs nodes
> > 32 GB RAM per node
> > AAE is disabled
> >
> >
> > I appreciate you help.
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> 

Re: Riak DT refresh

2016-05-17 Thread Russell Brown
On 17 May 2016, at 09:19, Benoit Chesneau  wrote:

> 
> 
> On Tue, May 17, 2016 at 10:15 AM Russell Brown  wrote:
> There’ll still be the need of a version vector, yes. Any advice/contribution 
> on optimisations there greatly appreciated, thanks.
> 
> maybe moving to SWC?
> https://github.com/ricardobcl/ServerWideClocks

I don’t think server wide clocks is the way to go for a library like riak_dt. 
It imposes too much in terms of the system. I also think ServerWideClocks, 
while very interesting and promising, have some pretty hard unanswered 
questions still. For riak_dt I tried not to impose a system model, so maybe 
every actor is a replica, or maybe there is a server, or set of servers. But 
this does need documenting.

> 
> would be good in systems where you have a lot of nodes going in and out in 
> the topology.

How do serverwideclocks help with high membership churn?

> 
> For me a must-have would be clear documentation on how to use riak_dt, what 
> to share with the client, how/when to merge etc... It would maybe attract 
> more users too. Going through the code is quite painful :)

I agree, this is something that is required. Things like when to use deferred 
operations and a context etc are all absent. I hope to spend some time on this 
soon. But yes. I will add that to the list of things that are needed to bring 
some love to riak_dt.

Cheers

Russell

> 
> - benoit
>  
> 
> On 17 May 2016, at 09:05, Sargun Dhillon  wrote:
> 
> > Is the plan to keep using riak_dt_vclock? If so, I might contribute
> > some optimizations for large numbers of actor entries (1000s).
> >
> > On Thu, Apr 28, 2016 at 12:55 AM, Russell Brown  
> > wrote:
> >> Hi,
> >> Riak DT[1] is in need of some love. I know that some of you on this list 
> >> (Sargun, are you here? Heinz?) have expressed opinions on the work that 
> >> needs doing. Here is my short list, I would love to hear opinions on 
> >> priority, and any additions to this list:
> >>
> >> 1. merger smaller map branch
> >> 2. deltas
> >> 3. new data types (we have a range register and a 
> >> Multi-Value-Register to add; any more?)
> >> 4. Internal state as records or maps (but not these messy tuples)
> >> 5. update to rebar3
> >> 6. update to latest erlang
> >>
> >> I’m pretty sure there is plenty more. Would greatly appreciate your 
> >> feedback.
> >>
> >> Many thanks
> >>
> >> Russell
> >>
> >> [1] https://github.com/basho/riak_dt/
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





Re: Riak DT refresh

2016-05-07 Thread Russell Brown
In erlang? Might save myself the trouble and wait if so.

On 7 May 2016, at 13:12, Christopher Meiklejohn 
 wrote:

> We are also using Riak DT as a base.  However, we are doing a greenfield 
> implementation of the new delta work in Lasp and hope to have an open-source 
> library of these implementations available soon.
> 
> - Christopher 
> 
> On Saturday, May 7, 2016, DeadZen  wrote:
> +1
> 
> On Fri, May 6, 2016 at 8:17 PM, Sargun Dhillon  wrote:
> > We're using riak_dt in anger in our product. We are already using it
> > with rebar3, and Erlang 18.3 through some super messy patches.
> >
> > I would love to see a register that takes the logical clock, and
> > timestamp for resolution, rather than just a straightup timestamp. My
> > biggest ask though is delta-CRDTs. Delta-CRDTs with a decent
> > anti-entropy algorithm would allow our system to scale from 1000s of
> > keys to 100ks of keys.
> >
> > Nonetheless, Riak_dt is an awesome library, and we're probably
> > planning on contributing back to it soonish, especially if any of the
> > other comments on this thread are things we need, and Basho doesn't.
> >
> > On Thu, Apr 28, 2016 at 12:55 AM, Russell Brown  
> > wrote:
> >> Hi,
> >> Riak DT[1] is in need of some love. I know that some of you on this list 
> >> (Sargun, are you here? Heinz?) have expressed opinions on the work that 
> >> needs doing. Here is my short list, I would love to hear opinions on 
> >> priority, and any additions to this list:
> >>
> >> 1. merge smaller map branch
> >> 2. deltas
> >> 3. new data types (we have a range register and a 
> >> Multi-Value-Register to add; any more?)
> >> 4. Internal state as records or maps (but not these messy tuples)
> >> 5. update to rebar3
> >> 6. update to latest erlang
> >>
> >> I’m pretty sure there is plenty more. Would greatly appreciate your 
> >> feedback.
> >>
> >> Many thanks
> >>
> >> Russell
> >>
> >> [1] https://github.com/basho/riak_dt/
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: How to increase Riak write performance for sequential alpha-numeric keys

2016-05-05 Thread Russell Brown
On 5 May 2016, at 15:11, alexc155  wrote:

> Hi,
> 
> Thanks for your reply.
> 
> I don't think that write_once is going to work for us as we have to
> periodically update the data (although if we remove the data before
> re-inserting it, would that work?)
> 
> Why does read-before-write slow down new writes so much?

Interesting. What version of riak is this, please?

> 
> Some new information we've found - it seems that if we write the data and
> then update it, we get fast speeds too. It's just the initial write of the
> data that is slow.
> 
> So why is writing sequential keys so much slower than updating them or
> writing non-sequential keys?
> 
> 
> 
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/How-to-increase-Riak-write-performance-for-sequential-alpha-numeric-keys-tp4034219p4034225.html
> Sent from the Riak Users mailing list archive at Nabble.com.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: How to increase Riak write performance for sequential alpha-numeric keys

2016-05-05 Thread Russell Brown

On 5 May 2016, at 14:42, Sanket Agrawal  wrote:

> 
> On Thu, May 5, 2016 at 9:28 AM, Russell Brown  wrote:
> CRDTs are all about mutation, why would you use a CRDT for immutable data? I 
> think write-once is what you need.
> 
> No particular reason - most likely ignorance on my part, I think :) Basho 
> documentation mentioned modeling the data as CRDT if possible. That is the 
> open question for me - if the data is immutable, is write-once type with 
> key-value better than CRDT from general performance and scalability 
> perspective?
> 

Yes. Write-once will perform better than CRDTs. You pay for the 
auto-merging-magic of CRDTs. If you don’t mutate the data you can’t get 
siblings, and don’t need CRDTs. I guess the docs need to be clearer about when 
to pick CRDTs.
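
For completeness, the write_once property referred to in this thread is set when creating a bucket type; a minimal sketch, assuming a type name of `w1` (the name is an arbitrary example):

```shell
# Create a bucket type with the write_once property, which skips
# Riak's usual read-before-write on PUT, then activate it.
riak-admin bucket-type create w1 '{"props":{"write_once":true}}'
riak-admin bucket-type activate w1
```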




Re: How to increase Riak write performance for sequential alpha-numeric keys

2016-05-05 Thread Russell Brown
CRDTs are all about mutation, why would you use a CRDT for immutable data? I 
think write-once is what you need.

On 5 May 2016, at 14:14, Sanket Agrawal  wrote:

> Two questions about write-once bucket:
> - is it useful from performance-perspective for immutable CRDTs as well? 
> - If the objects are immutable, is map CRDT generally performant, compared to 
> equivalent JSON with write-once property? Let us say map has 1-9 registers 
> with about 2KB size total in string/number.
> 
> It seems to me there is no way to enforce key check before writing CRDTs (to 
> enforce true immutability by making sure the new key doesn't exist already) 
> as it is for key-value. We could tell Riak not to insert a key if it already 
> exists for key-value objects. I don't know of any such mechanism for CRDTs.
> 
> I have immutable CRDTs with map type, allow_mult true, and lww false. I 
> looked into "write-once" property before but couldn't figure out how to truly 
> enforce it for CRDT.
> 
> 
> 
> 
> On Thu, May 5, 2016 at 8:51 AM, Matthew Von-Maszewski  
> wrote:
> Alex,
> 
> The successor to "last_write_wins" is the "write_once" bucket type.  You can 
> read about its characteristics and limitations here:
> 
>http://docs.basho.com/riak/kv/2.1.4/developing/app-guide/write-once/
> 
> This bucket type eliminates Riak's typical read-before-write operation.  
> Your experience with better performance by reversing the keys suggests to me 
> that this bucket type might be what you need.
> 
> Also, I would be willing to review your general environment and particularly 
> leveldb's actions.  I would need you to run "riak-debug" on one of the 
> servers then post the tar file someplace private such as dropbox.  There 
> might be other insights I can share based upon leveldb's actions and your 
> physical server configuration.
> 
> Matthew
> 
> 
> 
>> On May 5, 2016, at 8:32 AM, alexc155  wrote:
>> 
>> We're using Riak as a simple key value store and we're having write 
>> performance problems which we think is due to the format of our keys which 
>> we can't easily change because they're tied into different parts of the 
>> business and systems.
>> 
>> 
>> We're not trying to do anything complicated with Riak: No solr, secondary 
>> indexes or map reducing - just simple keys to strings of around 10Kb of JSON.
>> 
>> 
>> We've got upwards of 3 billion records to store so we've opted for LevelDb 
>> as the backend.
>> 
>> 
>> It's a 3 node cluster running on 3 dedicated Ubuntu VMs each with 16 cpu 
>> cores and 12GB memory backed by SSDs on a 10Gb network.
>> 
>> 
>> Using basho bench we know that it's capable of speeds upwards of 5000 rows 
>> per sec when using randomised keys, but the problem comes when we use our 
>> actual data.
>> 
>> 
>> The keys are formatted using the following pattern:
>> 
>> 
>> USC~1930~1~1~001
>> USC~1930~1~1~002
>> USC~1930~1~1~003
>> USC~1930~1~1~004
>> 
>> 
>> Most of the long key stays the same with numbers at the end going up. (The 
>> "~" are changeable - we can set them to whatever character. They're just 
>> delimiters in the keys)
>> 
>> 
>> Using these keys, write performance is a tenth of the speed at 400 rows per 
>> sec.
>> 
>> 
>> We don't need to worry about different versions of the data so we've set the 
>> following in our bucket type:
>> 
>> 
>> "allow_mult": false
>> "last_write_wins": true
>> "DW": 0
>> "n_val": 2
>> "w": 1
>> "r": 1
>> "basic_quorum": false
>> 
>> 
>> On the riak servers we've set the ulimit to:
>> 
>> 
>> riak soft nofile 32768
>> riak hard nofile 65536
>> 
>> 
>> and other settings like this:
>> 
>> 
>> ring_size = 128
>> protobuf.backlog = 1024
>> anti_entropy = passive
>> 
>> 
>> We're using the v2 .net client from basho to do the putting which runs in an 
>> API on 3 machines.
>> 
>> 
>> We've checked all the usual bottlenecks: CPU, memory, network IO, disk IO 
>> and throttles on the Riak servers and windows API servers.
>> 
>> 
>> As a kicker, if we reverse the keys e.g.
>> 
>> 
>> 100~1~1~0391~CSU
>> speed goes up to over 3000 rows, but that's a dirty kludge.
>> 
>> 
>> Can anyone explain why Riak doesn't like sequential alpha-numeric keys and 
>> what we can change to improve performance?
>> 
>> 
>> Thanks!
>> 
>> 
>> View this message in context: How to increase Riak write performance for 
>> sequential alpha-numeric keys
>> Sent from the Riak Users mailing list archive at Nabble.com.
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinf
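
The key-reversal kludge described in this thread is simple to apply client-side; a toy sketch (not part of any Riak client API):

```python
def spread_key(key: str) -> str:
    """Reverse a sequential key so its fast-changing suffix comes first.

    Consecutive writes then land in different regions of leveldb's
    sorted keyspace instead of all hitting one hot spot.
    """
    return key[::-1]

# The example key from the thread:
print(spread_key("USC~1930~1~1~001"))  # -> 100~1~1~0391~CSU
```

The trade-off, as noted in the thread, is losing cheap range scans over the natural key prefix.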

Riak DT refresh

2016-04-28 Thread Russell Brown
Hi,
Riak DT[1] is in need of some love. I know that some of you on this list 
(Sargun, are you here? Heinz?) have expressed opinions on the work that needs 
doing. Here is my short list, I would love to hear opinions on priority, and 
any additions to this list:

1. merge smaller map branch
2. deltas
3. new data types (we have a range register and a Multi-Value-Register to add; 
any more?)
4. Internal state as records or maps (but not these messy tuples)
5. update to rebar3
6. update to latest erlang

I’m pretty sure there is plenty more. Would greatly appreciate your feedback.

Many thanks

Russell

[1] https://github.com/basho/riak_dt/


Re: Using $bucket index for listing keys

2016-03-11 Thread Russell Brown
But it still requires ordered keys to do pagination. The $bucket index is only 
available if you’re using 2i, which requires leveldb.

On 11 Mar 2016, at 16:32, Oleksiy Krivoshey  wrote:

> I know that 2i requires leveldb, but I'm not using my custom index, I'm using 
> $bucket, I thought $bucket index is some kind of special internal index?
> 
> On 11 March 2016 at 18:28, Russell Brown  wrote:
> You can’t…can you? I mean, 2i requires level. It requires ordered keys. Which 
> explains your problem, but you should have failed a lot earlier.
> 
> On 11 Mar 2016, at 16:26, Oleksiy Krivoshey  wrote:
> 
> > Hi Magnus,
> >
> > The bucket type has the following properties:
> >
> > '{"props":{"backend":"fs_chunks","allow_mult":"false","r":1,"notfound_ok":"false","basic_quorum":"false"}}'
> >
> > fs_chunks backend is configured as part of riak_kv_multi_backend:
> >
> > {<<"fs_chunks">>, riak_kv_bitcask_backend, [
> > {data_root, "/var/lib/riak/fs_chunks"}
> > ]},
> >
> > Objects stored are chunks of binary data (ContentType=binary/octet-stream) 
> > with a maximum size of 256Kb. Each object also has a single key/value pair 
> > in user metadata (usermeta).
> >
> > I'm not able to replicate this on my local single node setup, it only 
> > happens on a production 5 node cluster.
> >
> >
> > On 11 March 2016 at 15:53, Magnus Kessler  wrote:
> > Hi Oleksiy,
> >
> > Could you please share the bucket or bucket-type properties for that small 
> > bucket? If you open an issue on github, please add the properties there, 
> > too.
> >
> > Many Thanks,
> >
> > On 11 March 2016 at 13:46, Oleksiy Krivoshey  wrote:
> > I got the recursive behavior with other, larger buckets but I had no 
> > logging so when I enabled debugging this was the first bucket to replicate 
> > the problem. I have a lot of buckets of the same type, some have many 
> > thousands keys some are small. My task is to iterate the keys (once only) 
> > of all buckets. Either with 2i or with Yokozuna.
> > On Fri, Mar 11, 2016 at 15:32 Russell Brown  wrote:
> > Not the answer, but why pagination for 200 keys? Why pay the cost of doing the 
> > query 20 times vs once?
> >
> > On 11 Mar 2016, at 13:28, Oleksiy Krivoshey  wrote:
> >
> > > Unfortunately there are just 200 keys in that bucket. So with larger 
> > > max_results I just get all the keys without continuation. I'll try to 
> > > replicate this with a bigger bucket.
> > > On Fri, Mar 11, 2016 at 15:21 Russell Brown  wrote:
> > > That seems very wrong. Can you do me a favour and try with a larger 
> > > max_results. I remember a bug with small results set, I thought it was 
> > > fixed, I’m looking into the past issues, but can you try 
> > > “max_results=1000” or something, and let me know what you see?
> > >
> > > On 11 Mar 2016, at 13:03, Oleksiy Krivoshey  wrote:
> > >
> > > > Here it is without the `value` part of request:
> > > >
> > > > curl 
> > > > 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
> > > >
> > > > {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
> > > >
> > > > On 11 March 2016 at 14:58, Oleksiy Krivoshey  wrote:
> > > > I'm actually using PB interface, but I can replicate the problem with 
> > > > HTTP as in my previous email. Request with '&continuation=' returns 
> > > > the result set with the same continuation .
> > > >
> > > > On 11 March 2016 at 14:55, Magnus Kessler  wrote:
> > > > Hi Oleksiy,
> > > >
> > > > How are you performing your 2i-based key listing? Querying with 
> > > > pagination as shown in the documentation[0] sh

Re: Using $bucket index for listing keys

2016-03-11 Thread Russell Brown
You can’t…can you? I mean, 2i requires leveldb. It requires ordered keys. That 
explains your problem, but you should have failed a lot earlier.

On 11 Mar 2016, at 16:26, Oleksiy Krivoshey  wrote:

> Hi Magnus,
> 
> The bucket type has the following properties:
> 
> '{"props":{"backend":"fs_chunks","allow_mult":"false","r":1,"notfound_ok":"false","basic_quorum":"false"}}'
> 
> fs_chunks backend is configured as part of riak_kv_multi_backend:
> 
> {<<"fs_chunks">>, riak_kv_bitcask_backend, [
> {data_root, "/var/lib/riak/fs_chunks"}
> ]},
> 
> Objects stored are chunks of binary data (ContentType=binary/octet-stream) 
> with a maximum size of 256Kb. Each object also has a single key/value pair in 
> user metadata (usermeta).
> 
> I'm not able to replicate this on my local single node setup, it only happens 
> on a production 5 node cluster.
> 
> 
> On 11 March 2016 at 15:53, Magnus Kessler  wrote:
> Hi Oleksiy,
> 
> Could you please share the bucket or bucket-type properties for that small 
> bucket? If you open an issue on github, please add the properties there, too.
> 
> Many Thanks,
> 
> On 11 March 2016 at 13:46, Oleksiy Krivoshey  wrote:
> I got the recursive behavior with other, larger buckets but I had no logging 
> so when I enabled debugging this was the first bucket to replicate the 
> problem. I have a lot of buckets of the same type, some have many thousands 
> keys some are small. My task is to iterate the keys (once only) of all 
> buckets. Either with 2i or with Yokozuna. 
> On Fri, Mar 11, 2016 at 15:32 Russell Brown  wrote:
> Not the answer, but why pagination for 200 keys? Why pay the cost of doing the 
> query 20 times vs once?
> 
> On 11 Mar 2016, at 13:28, Oleksiy Krivoshey  wrote:
> 
> > Unfortunately there are just 200 keys in that bucket. So with larger 
> > max_results I just get all the keys without continuation. I'll try to 
> > replicate this with a bigger bucket.
> > On Fri, Mar 11, 2016 at 15:21 Russell Brown  wrote:
> > That seems very wrong. Can you do me a favour and try with a larger 
> > max_results. I remember a bug with small results set, I thought it was 
> > fixed, I’m looking into the past issues, but can you try “max_results=1000” 
> > or something, and let me know what you see?
> >
> > On 11 Mar 2016, at 13:03, Oleksiy Krivoshey  wrote:
> >
> > > Here it is without the `value` part of request:
> > >
> > > curl 
> > > 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
> > >
> > > {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
> > >
> > > On 11 March 2016 at 14:58, Oleksiy Krivoshey  wrote:
> > > I'm actually using PB interface, but I can replicate the problem with 
> > > HTTP as in my previous email. Request with '&continuation=' returns 
> > > the result set with the same continuation .
> > >
> > > On 11 March 2016 at 14:55, Magnus Kessler  wrote:
> > > Hi Oleksiy,
> > >
> > > How are you performing your 2i-based key listing? Querying with 
> > > pagination as shown in the documentation[0] should work.
> > >
> > > As an example here is the HTTP invocation:
> > >
> > > curl 
> > > "https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10&continuation=g20CNTM=";
> > >
> > > Once the end of the key list is reached, the server returns an empty keys 
> > > list and no further continuation value.
> > >
> > > Please let me know if this works for you.
> > >
> > > Kind Regards,
> > >
> > > Magnus
> > >
> > >
> > > [0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying
> > >
> > > On 11 March 2016 at 10:06, Oleksiy Krivoshey  wrote:
> > >

Re: Using $bucket index for listing keys

2016-03-11 Thread Russell Brown
I’ve never seen this before. I’m pretty sure our tests would catch it if it 
were a general issue. 

Would you mind opening an issue on github for riak? If possible any information 
you can give that can help us repro would be great.

Thanks

Russell

On 11 Mar 2016, at 13:46, Oleksiy Krivoshey  wrote:

> I got the recursive behavior with other, larger buckets but I had no logging 
> so when I enabled debugging this was the first bucket to replicate the 
> problem. I have a lot of buckets of the same type, some have many thousands 
> keys some are small. My task is to iterate the keys (once only) of all 
> buckets. Either with 2i or with Yokozuna. 
> On Fri, Mar 11, 2016 at 15:32 Russell Brown  wrote:
> Not the answer, but why pagination for 200 keys? Why pay the cost of doing the 
> query 20 times vs once?
> 
> On 11 Mar 2016, at 13:28, Oleksiy Krivoshey  wrote:
> 
> > Unfortunately there are just 200 keys in that bucket. So with larger 
> > max_results I just get all the keys without continuation. I'll try to 
> > replicate this with a bigger bucket.
> > On Fri, Mar 11, 2016 at 15:21 Russell Brown  wrote:
> > That seems very wrong. Can you do me a favour and try with a larger 
> > max_results. I remember a bug with small results set, I thought it was 
> > fixed, I’m looking into the past issues, but can you try “max_results=1000” 
> > or something, and let me know what you see?
> >
> > On 11 Mar 2016, at 13:03, Oleksiy Krivoshey  wrote:
> >
> > > Here it is without the `value` part of request:
> > >
> > > curl 
> > > 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
> > >
> > > {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
> > >
> > > On 11 March 2016 at 14:58, Oleksiy Krivoshey  wrote:
> > > I'm actually using PB interface, but I can replicate the problem with 
> > > HTTP as in my previous email. Request with '&continuation=' returns 
> > > the result set with the same continuation .
> > >
> > > On 11 March 2016 at 14:55, Magnus Kessler  wrote:
> > > Hi Oleksiy,
> > >
> > > How are you performing your 2i-based key listing? Querying with 
> > > pagination as shown in the documentation[0] should work.
> > >
> > > As an example here is the HTTP invocation:
> > >
> > > curl 
> > > "https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10&continuation=g20CNTM=";
> > >
> > > Once the end of the key list is reached, the server returns an empty keys 
> > > list and no further continuation value.
> > >
> > > Please let me know if this works for you.
> > >
> > > Kind Regards,
> > >
> > > Magnus
> > >
> > >
> > > [0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying
> > >
> > > On 11 March 2016 at 10:06, Oleksiy Krivoshey  wrote:
> > > Anyone?
> > >
> > > On 4 March 2016 at 19:11, Oleksiy Krivoshey  wrote:
> > > I have a bucket with ~200 keys in it and I wanted to iterate them with 
> > > the help of $bucket index and 2i request, however I'm facing the 
> > > recursive behaviour, for example I send the following 2i request:
> > >
> > > {
> > > bucket: 'BUCKET_NAME',
> > > type: 'BUCKET_TYPE',
> > > index: '$bucket',
> > > key: 'BUCKET_NAME',
> > > qtype: 0,
> > > max_results: 10,
> > > continuation: ''
> > > }
> > >
> > > I receive 10 keys and continuation '', I then repeat the request with 
> > > continuation '' and at this point I can receive a reply with 
> > > continuation '' or '' or even '' and its going in never 
> > > ending recursion.
> > >
> > > I'm running this on a 5 node 2.1.3 cluster.
> > >
> > > What I'm doing wrong? Or is this not supported at all?
> > >
> > > Thanks!
> > >
> > >
> > > ___
> > > riak-users mailing list
> > > riak-users@lists.basho.com
> > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> > >
> > >
> > >
> > >
> > > --
> > > Magnus Kessler
> > > Client Services Engineer
> > > Basho Technologies Limited
> > >
> > > Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
> > >
> > >
> > > ___
> > > riak-users mailing list
> > > riak-users@lists.basho.com
> > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> 




Re: Using $bucket index for listing keys

2016-03-11 Thread Russell Brown
Not the answer, but why pagination for 200 keys? Why pay the cost of doing the query 
20 times vs once?

On 11 Mar 2016, at 13:28, Oleksiy Krivoshey  wrote:

> Unfortunately there are just 200 keys in that bucket. So with larger 
> max_results I just get all the keys without continuation. I'll try to 
> replicate this with a bigger bucket. 
> On Fri, Mar 11, 2016 at 15:21 Russell Brown  wrote:
> That seems very wrong. Can you do me a favour and try with a larger 
> max_results. I remember a bug with small results set, I thought it was fixed, 
> I’m looking into the past issues, but can you try “max_results=1000” or 
> something, and let me know what you see?
> 
> On 11 Mar 2016, at 13:03, Oleksiy Krivoshey  wrote:
> 
> > Here it is without the `value` part of request:
> >
> > curl 
> > 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
> >
> > {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
> >
> > On 11 March 2016 at 14:58, Oleksiy Krivoshey  wrote:
> > I'm actually using PB interface, but I can replicate the problem with HTTP 
> > as in my previous email. Request with '&continuation=' returns the 
> > result set with the same continuation .
> >
> > On 11 March 2016 at 14:55, Magnus Kessler  wrote:
> > Hi Oleksiy,
> >
> > How are you performing your 2i-based key listing? Querying with pagination 
> > as shown in the documentation[0] should work.
> >
> > As an example here is the HTTP invocation:
> >
> > curl 
> > "https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10&continuation=g20CNTM=";
> >
> > Once the end of the key list is reached, the server returns an empty keys 
> > list and no further continuation value.
> >
> > Please let me know if this works for you.
> >
> > Kind Regards,
> >
> > Magnus
> >
> >
> > [0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying
> >
> > On 11 March 2016 at 10:06, Oleksiy Krivoshey  wrote:
> > Anyone?
> >
> > On 4 March 2016 at 19:11, Oleksiy Krivoshey  wrote:
> > I have a bucket with ~200 keys in it and I wanted to iterate them with the 
> > help of $bucket index and 2i request, however I'm facing the recursive 
> > behaviour, for example I send the following 2i request:
> >
> > {
> > bucket: 'BUCKET_NAME',
> > type: 'BUCKET_TYPE',
> > index: '$bucket',
> > key: 'BUCKET_NAME',
> > qtype: 0,
> > max_results: 10,
> > continuation: ''
> > }
> >
> > I receive 10 keys and continuation '', I then repeat the request with 
> > continuation '' and at this point I can receive a reply with 
> > continuation '' or '' or even '' and its going in never ending 
> > recursion.
> >
> > I'm running this on a 5 node 2.1.3 cluster.
> >
> > What I'm doing wrong? Or is this not supported at all?
> >
> > Thanks!
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> >
> >
> >
> > --
> > Magnus Kessler
> > Client Services Engineer
> > Basho Technologies Limited
> >
> > Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 




Re: Using $bucket index for listing keys

2016-03-11 Thread Russell Brown
That seems very wrong. Can you do me a favour and try with a larger max_results? 
I remember a bug with small result sets; I thought it was fixed. I’m looking into 
past issues, but can you try “max_results=1000” or something, and let me know what 
you see?

On 11 Mar 2016, at 13:03, Oleksiy Krivoshey  wrote:

> Here it is without the `value` part of request:
> 
> curl 
> 'http://127.0.0.1:8098/types/fs_chunks/buckets/0r0e5wahrhsgpolk9stbnrqmp77fjjye.chunks/index/$bucket/_?max_results=10&continuation=g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU='
> 
> {"keys":["4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:0","4rpG2PwRTs3YqasGGYrhACBvZqTg7mQW:2","FSEky50kr2TLkBuo1JKv6sphINYwnJfV:1","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:0","RToMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
> 
> On 11 March 2016 at 14:58, Oleksiy Krivoshey  wrote:
> I'm actually using PB interface, but I can replicate the problem with HTTP as 
> in my previous email. Request with '&continuation=' returns the result 
> set with the same continuation . 
> 
> On 11 March 2016 at 14:55, Magnus Kessler  wrote:
> Hi Oleksiy,
> 
> How are you performing your 2i-based key listing? Querying with pagination as 
> shown in the documentation[0] should work.
> 
> As an example here is the HTTP invocation:
> 
> curl 
> "https://localhost:8098/types/default/buckets/test/index/\$bucket/_?max_results=10&continuation=g20CNTM=";
> 
> Once the end of the key list is reached, the server returns an empty keys 
> list and no further continuation value.
> 
> Please let me know if this works for you.
> 
> Kind Regards,
> 
> Magnus
> 
> 
> [0]: http://docs.basho.com/riak/latest/dev/using/2i/#Querying
> 
> On 11 March 2016 at 10:06, Oleksiy Krivoshey  wrote:
> Anyone?
> 
> On 4 March 2016 at 19:11, Oleksiy Krivoshey  wrote:
> I have a bucket with ~200 keys in it and I wanted to iterate them with the 
> help of $bucket index and 2i request, however I'm facing the recursive 
> behaviour, for example I send the following 2i request:
> 
> { 
> bucket: 'BUCKET_NAME', 
> type: 'BUCKET_TYPE', 
> index: '$bucket', 
> key: 'BUCKET_NAME', 
> qtype: 0, 
> max_results: 10, 
> continuation: ''
> }
> 
> I receive 10 keys and continuation '', I then repeat the request with 
> continuation '' and at this point I can receive a reply with continuation 
> '' or '' or even '', and it's going into never-ending recursion. 
> 
> I'm running this on a 5 node 2.1.3 cluster.
> 
> What I'm doing wrong? Or is this not supported at all?
> 
> Thanks!
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 
> 
> -- 
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
> 
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
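
A defensive way to drive the paginated $bucket query discussed in this thread is to remember the continuations already seen; a sketch where `client.secondary_index_query` is a hypothetical stand-in for a real Riak client call:

```python
def iterate_bucket_keys(client, bucket_type, bucket, max_results=1000):
    """Yield every key in a bucket via the special $bucket 2i index,
    following continuation tokens until the server stops returning one."""
    continuation = None
    seen = set()  # guard against a continuation token repeating forever
    while True:
        keys, continuation = client.secondary_index_query(
            bucket_type=bucket_type, bucket=bucket, index='$bucket',
            key=bucket, max_results=max_results, continuation=continuation)
        yield from keys
        if not continuation or continuation in seen:
            break  # done, or stuck in the looping bug reported above
        seen.add(continuation)
```

Tracking previously seen continuations turns the never-ending recursion reported above into a detectable stop condition instead of an infinite loop.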


Re: 2i indexes and keys request inconsistencies

2016-03-09 Thread Russell Brown
I wonder if you have some keys that are hanging around on one of the N 
partitions but are deleted elsewhere (and the tombstones reaped?). 2i uses 
coverage, which is essentially r=1 on a covering set of vnodes. But you’d think 
read-repair/AAE would bring convergence, so it is perplexing. I wonder if we 
can get a look at the “phantom” keys, either via an r=1 GET or some 
attach-to-console, get-directly-from-the-backend magic.
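
An r=1 GET, as suggested, can be issued over Riak's HTTP interface; a sketch that just builds the URL (the host, bucket type, bucket, and key are placeholder assumptions):

```shell
# Placeholders: adjust NODE/BTYPE/BUCKET/KEY for the real cluster.
NODE="http://127.0.0.1:8098"
BTYPE="default"
BUCKET="mybucket"
KEY="phantomkey"
# r=1 asks only a single replica to answer, so it can surface a value
# that a quorum read plus read-repair would otherwise hide.
URL="${NODE}/types/${BTYPE}/buckets/${BUCKET}/keys/${KEY}?r=1"
echo "$URL"
# Then fetch it with: curl -i "$URL"
```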


On 9 Mar 2016, at 09:06, Alexander Popov  wrote:

> @Matthew 
> 
> No, DB activity was very low at this time, and the keys returned by these 
> queries were deleted long ago (some of them in Dec 2015).
> 
> I hit this issue while running a maintenance task which touches all keys in 
> the DB: list all keys with a /keys?keys=true query, then read, upgrade, save.
> 
> We have some logic that depends on 2i index results (counting the number of 
> related keys). But if it returns phantom keys,
> I cannot trust this data and need to double-check by getting each object, 
> which is a more expensive operation.
> 
> 
> 
> 
> On Tue, Mar 8, 2016 at 10:21 PM, Matthew Von-Maszewski  
> wrote:
> Is the database being actively modified during your queries?  
> 
> Queries can lock down a "snapshot" within leveldb.  The query operation can 
> return keys that existed at the time of the snapshot, but have been 
> subsequently deleted by normal operations.
> 
> In such a case, the query is correct in giving you the key and the 404 
> afterward is also correct.  They represent two different versions of the 
> database over time.
> 
> Not sure if this is a valid scenario for you or not.
> 
> Matthew
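[Editorial note: the snapshot behaviour Matthew describes can be modelled with plain dicts; this is a toy illustration of snapshot isolation, not leveldb or Riak code.]

```python
live = {"k1": "v1", "k2": "v2"}

# A leveldb-style query pins a snapshot of the data as of query start.
snapshot = dict(live)

# A normal delete lands on the live store while the query is still running.
del live["k2"]

# The query (reading the snapshot) still reports k2...
assert "k2" in snapshot
# ...but a follow-up GET against the live store is a correct 404.
assert live.get("k2") is None
```

Both answers are correct; they just describe the database at two different points in time.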
> 
> 
>> On Mar 8, 2016, at 1:22 PM, Alexander Popov  wrote:
>> 
>> Noticed that sometimes 2i queries and all-keys requests return extra 
>> records, ~2% of all records.
>> 
>> When I fetch those items with a GET request afterwards, it returns 404, and 
>> after that the key stops appearing in 2i and keys requests.
>> 
>> Is this normal, or is my database corrupted?
>> 
> 
> 




Re: Continuous HTTP POSTs to Riak

2016-03-05 Thread Russell Brown
And you pass the vclock back after a GET with the next POST? Unless we get a 
look at the vclocks, it's hard to say why or where you have concurrency. 
But since concurrency is ultimately unavoidable with Riak, why the concern? 
The main question is: can your application/data model handle it?
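[Editorial note: the read-then-write cycle Russell is asking about can be sketched with a toy store. This is not the Riak client API, just an illustration of why a write that doesn't carry the causal context (vclock) from a prior read has to be treated as concurrent and kept as a sibling.]

```python
class TinyStore:
    """Toy store: a write must carry the causal context (version) from the
    read it is based on; otherwise it is concurrent and kept as a sibling."""
    def __init__(self):
        self.siblings = []   # list of (value, version-at-write)
        self.version = 0

    def get(self):
        # Return the current value(s) plus the context to pass back.
        return [v for v, _ in self.siblings], self.version

    def put(self, value, context=None):
        if context == self.version:
            self.siblings = [(value, self.version)]      # supersedes old value
        else:
            self.siblings.append((value, self.version))  # concurrent: sibling
        self.version += 1

s = TinyStore()
s.put("first")                 # initial write, no context yet
_, ctx = s.get()               # the read returns the causal context
s.put("second", context=ctx)   # write based on that read: no sibling
values, _ = s.get()
assert values == ["second"]
```

With `allow_mult=false` Riak would pick one sibling instead of keeping both, which is exactly where sequential-looking POSTs without context can silently lose writes.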

On 5 Mar 2016, at 19:04, Qiang Cao  wrote:

> Thanks, Russell! I do a GET immediately after a POST is done. I use apache 
> httpclient, which handles requests synchronously. On the client, POSTs and 
> GETs are sent out sequentially.
> 
> On Sat, Mar 5, 2016 at 1:57 PM, Russell Brown  wrote:
> 
> On 5 Mar 2016, at 18:43, Qiang Cao  wrote:
> 
> > Just curious. The POSTs are sent out sequentially and a quorum is set up on 
> > Riak. I wonder how would it happen that Riak still considers the POST 
> > requests concurrent?
> Did you read the result of POST 1 before sending POST 2? If not, and you 
> don’t send the causal context, Riak has to view them as concurrent.
> 
> How does the quorum work here?
> 
> Thanks,
> Qiang


Re: Continuous HTTP POSTs to Riak

2016-03-05 Thread Russell Brown

On 5 Mar 2016, at 18:43, Qiang Cao  wrote:

> Just curious. The POSTs are sent out sequentially and a quorum is set up on 
> Riak. I wonder how would it happen that Riak still considers the POST 
> requests concurrent?
Did you read the result of POST 1 before sending POST 2? If not, and you don’t 
send the causal context, Riak has to view them as concurrent.

> 
> -Qiang
> 
> On Fri, Mar 4, 2016 at 12:27 PM, Qiang Cao  wrote:
> This worked!  Thank you, Vitaly!
> 
> 
> On Fri, Mar 4, 2016 at 3:18 AM, Vitaly E <13vitam...@gmail.com> wrote:
> Hi Qiang,
> 
> Since you are running with allow_mult=false, make sure the clocks of your 
> Riak nodes are synchronized. If they are out of sync, newer values may get 
> overridden by older ones on read, depending on which node a request hits 
> first. Of course this won't cover 100% of cases, because perfect clock 
> synchronization is impossible.
> 
> Also, setting notfound_ok to "false" may help if you encounter not-founds 
> for keys you are sure have been written.
> 
> Good luck!
> Vitaly
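[Editorial note: Vitaly's notfound_ok point can be illustrated with a toy quorum read. This is a deliberate simplification of Riak's actual get FSM, not real client code: with notfound_ok=true, a not-found reply counts toward r, so a read can answer 404 before a slower replica's value arrives.]

```python
def quorum_read(replies, r=2, notfound_ok=True):
    """Toy model: 'replies' are per-vnode results in arrival order,
    each either a value or None (not found)."""
    notfound_votes = 0
    for reply in replies:
        if reply is not None:
            return reply               # a real value satisfies the read
        if notfound_ok:
            notfound_votes += 1        # not-found counts toward r...
            if notfound_votes >= r:
                return None            # ...so we answer 404 early
    return None                        # exhausted replies: genuine 404

# One slow (or not-yet-written) replica plus two not-founds:
replies = [None, None, "stored-value"]
assert quorum_read(replies, notfound_ok=True) is None        # premature 404
assert quorum_read(replies, notfound_ok=False) == "stored-value"
```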
> 




Re: Get all keys from a bucket

2016-02-29 Thread Russell Brown
Use “max_results” to set a limit on how many results you receive, as documented 
here:

http://docs.basho.com/riak/latest/dev/using/2i/#Querying

pass the parameter “stream=true” to stream.

Do you want the object bodies back too? There is a super-secret interface in PB 
to achieve this; have a look at 
https://github.com/basho/riak-erlang-client/blob/master/src/riakc_pb_socket.erl#L.
 It at least illustrates the use of the query parameters for a $bucket index 
with streaming, pagination, etc.

Russell
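[Editorial note: on the HTTP side, the pagination and streaming parameters combine roughly as below. The path shape (/buckets/&lt;bucket&gt;/index/$bucket/&lt;bucket&gt;) follows the documented 2i endpoint, but treat the exact URL as an assumption and check it against your Riak version.]

```python
from urllib.parse import urlencode

def bucket_index_url(host, bucket, max_results, continuation=None, stream=False):
    """Build an HTTP URL for a $bucket 2i query with pagination/streaming.
    Assumed endpoint shape; verify against your Riak version's HTTP API."""
    params = {"max_results": max_results}
    if stream:
        params["stream"] = "true"
    if continuation:
        params["continuation"] = continuation   # opaque value from prior page
    return "http://%s/buckets/%s/index/$bucket/%s?%s" % (
        host, bucket, bucket, urlencode(params))

url = bucket_index_url("127.0.0.1:8098", "users", 100, stream=True)
assert "max_results=100" in url and "stream=true" in url
```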

On 26 Feb 2016, at 17:17, Markus Geck  wrote:

> 
> Anyone?
> 
> Saturday, January 30, 2016 1:42 AM +03:00 from Markus Geck 
> :
> 
> Do you have an example how to stream them? The url I've posted in my initial 
> mail explains how to use that index, but not how to stream the results. 
> Unfortunately accessing the keys that way overloads the node.
> 
> 
> Friday, January 29, 2016 2:06 PM UTC from Russell Brown 
> :
> 
> With leveldb you can use the special $bucket index. You can also stream the 
> keys and paginate them, meaning you can get them in smaller lumps; hopefully 
> this will appear faster and avoid the timeout you're seeing.
> 
> 
> On 29 Jan 2016, at 14:03, Markus Geck  wrote:
> 
>> Yes, sorry I forgot to mention that.
>> 
>> 
>> Monday, January 25, 2016 10:10 AM UTC from Russell Brown 
>> :
>> 
>> Hi Markus,
>> Are you using leveldb backend?
>> 
>> Russell
>> 
>> On 22 Jan 2016, at 19:05, Markus Geck  wrote:
>> 
>> > Hello,
>> > 
>> > is there any way to get all keys from a bucket?
>> > 
>> > I've already tried this guide: 
>> > http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html But 
>> > Riak always goes unresponsive with a huge server load.
>> > 
>> > and "GET /buckets/bucket/keys?keys=stream" returns a timeout error.
>> > 
>> > Is there any other way?
>> 
>> 
> 
> 




Re: Java Client - Maps - fetch only one entry

2016-02-25 Thread Russell Brown
Not yet. I think that would be cool too.

Right now, a map is just a Riak object.

I guess with Riak Search indexing though, you could get something similar, 
maybe?

On 25 Feb 2016, at 19:40, Cosmin Marginean  wrote:

> On 25 Feb 2016, at 19:26, Cosmin Marginean  wrote:
> 
> Hi,
> 
> I couldn’t find this anywhere in the docs: is there a mechanism in Riak to 
> fetch only one Register (or a specific entry) from a map?
> We have a use case where we have a map and need to only get the value for a 
> key in the map, rather than the whole map, and I would rather not transfer 
> the entire thing on the client.
> 
> Cheers
> Cos
> 
> 




Re: Re[2]: Get all keys from a bucket

2016-01-29 Thread Russell Brown
With leveldb you can use the special $bucket index. You can also stream the 
keys and paginate them, meaning you can get them in smaller lumps; hopefully 
this will appear faster and avoid the timeout you're seeing.


On 29 Jan 2016, at 14:03, Markus Geck  wrote:

> Yes, sorry I forgot to mention that.
> 
> 
> Monday, January 25, 2016 10:10 AM UTC from Russell Brown 
> :
> 
> Hi Markus,
> Are you using leveldb backend?
> 
> Russell
> 
> On 22 Jan 2016, at 19:05, Markus Geck  wrote:
> 
> > Hello,
> > 
> > is there any way to get all keys from a bucket?
> > 
> > I've already tried this guide: 
> > http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html But 
> > Riak always goes unresponsive with a huge server load.
> > 
> > and "GET /buckets/bucket/keys?keys=stream" returns a timeout error.
> > 
> > Is there any other way?
> 
> 


Re: Get all keys from a bucket

2016-01-25 Thread Russell Brown
Hi Markus,
Are you using leveldb backend?

Russell

On 22 Jan 2016, at 19:05, Markus Geck  wrote:

> Hello,
> 
> is there any way to get all keys from a bucket?
> 
> I've already tried this guide: 
> http://www.paperplanes.de/2011/12/13/list-all-of-the-riak-keys.html But Riak 
> always goes unresponsive with a huge server load.
> 
> and "GET /buckets/bucket/keys?keys=stream" returns a timeout error.
> 
> Is there any other way?



