Dear Riak Team,
My team and I use Riak as the database for our production system, running a
cluster of 5 nodes.
While running in production, we hit a critical bug: updating a document
sometimes fails.
My colleagues and I debugged it and identified an issue with the following
scenario:
+ fetch document
Originally I suspected the context which allows Riak to resolve conflicts was
not present in your data, but I see it in your map structure. Thanks for
supplying such a detailed description.
How fast is your turnaround time between an update and a fetch? Even if the
cluster is healthy it’s not i
What operation are you performing? It looks like the map is a single level map
of last-write-wins registers. Are you updating a value? Is there a chance that
the time on the node handling the update is behind the value in the
lww-register?
Have you tried using the `modify_type` operation in riakc?
Dear John and Russell Brown,
* How fast is your turnaround time between an update and a fetch?
The turnaround time between an update and a fetch is about 1 second.
While debugging, my team and I adjusted HAProxy with the following scenarios:
Scenario 1: round robin across the 5 nodes of the cluster
We meet
Dear Russell,
> What operation are you performing? What is the update you perform? Do you
> set a register value, add a register, remove a register?
I used riakc_map:update to update values in the map. I do the following steps:
- Get the FetchData map with fetch_type
- Extract key, value, context from FetchData
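In rough Erlang, the flow is like this (the bucket type, bucket, key, and the
value below are placeholders, not our real ones):

    %% Rough sketch of our update flow; names and values are placeholders.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),

    %% 1. Fetch the existing map (the causal context comes back with it)
    {ok, Fetched} = riakc_pb_socket:fetch_type(Pid,
                        {<<"maps">>, <<"bucket-name">>}, <<"key">>),

    %% 2. Assign new values to the registers on top of the fetched map
    Updated = riakc_map:update({<<"updated_time_dt">>, register},
                  fun(R) -> riakc_register:set(<<"2017-02-06T17:22:39Z">>, R) end,
                  Fetched),

    %% 3. Send the resulting operation (context included) back to Riak
    ok = riakc_pb_socket:update_type(Pid,
             {<<"maps">>, <<"bucket-name">>}, <<"key">>,
             riakc_map:to_op(Updated)).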
Dear Russell,
> Can you run riakc_map:to_op(Map). and show me the output of that, please?
The following is the output of riakc_map:to_op(Map):
{map,
 {update,
  [{update, {<<"updated_time_dt">>, register}, {assign, <<"2017-02-06T17:22:39Z">>}},
   {update, {<<"updated_by_id">>, register},
    {assign, <<"accounta25
So you’re updating all those registers in one go? Out of interest, what
happens if you update a single register at a time?
Dear Russell,
Yes, I updated all registers in one go.
I have not yet tried updating a single register at a time.
Let me try and see. But I wonder: does updating everything in one go have any
effect on how the Riak cluster resolves conflicts?
Just tr
Dear Russell,
Let me try your suggestion of using a new, empty key and modify_type to
update a single register.
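Something roughly like this, I think (the bucket type, bucket, key, and
register name below are only placeholders for the test):

    %% Sketch of the planned test; names are placeholders.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),

    %% modify_type fetches the current value (or starts from an empty map
    %% because of the create option), applies the fun, and writes the result
    %% back together with the right context.
    ok = riakc_pb_socket:modify_type(Pid,
             fun(Map) ->
                 riakc_map:update({<<"updated_by_id">>, register},
                     fun(R) -> riakc_register:set(<<"account-test">>, R) end,
                     Map)
             end,
             {<<"maps">>, <<"bucket-name">>}, <<"new-empty-key">>,
             [create]).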
I will report back with my test results.
Best regards,
Hue Tran
Speaking of timings:
ring_members : ['riak-node1@64.137.190.244','riak-node2@64.137.247.82',
'riak-node3@64.137.162.64','riak-node4@64.137.161.229',
'riak-node5@64.137.217.73']
Are these nodes in the same local area network?
On Thu, Feb 9, 2017 at 12:49 PM, my hue wrote:
> Dear Russel,
>
> I di
Why are they public?
The questions about your IP addresses are good ones: you’re likely to run into
more trouble when a Riak cluster is spread across multiple networks, and from a
security standpoint I would recommend against exposing Riak KV to an untrusted
network, even if its security features are enabled.
Would
Hi All,
These nodes are not in a local area network because our hosting provider
doesn't provide local IPs. This is only our DEV environment. If there are
problems with the cluster, I will set up a local environment to try.
Do you think that is the reason for our issues?
Best Regards,
Yes. For a number of reasons a single Riak cluster is not designed to run
over a WAN (Riak EE is specifically designed to connect two or more
separate clusters over a WAN or LAN.)
I would like to see the results of experimenting with pr and pw as per my
earlier message in case there is a genuine bug lurking somewhere, but yes,
running inside a single network will make Riak much happier.
-John
Dear John,
I performed the test with the following scenario:
* Old cluster of 5 nodes, whose nodes do not belong to the same local
network. I performed the test with:
- pw and pr = 0
Bucket properties:
{"props":{"name":"bucket-name","active":true,"allow_mult":true,"backend":"bitcask_mult","ba
It definitely sounds as if your nodes are having problems talking to each
other, as suspected. Putting them on a single network is a significant
improvement.
The default values for pw and pr are 0; Riak by default prefers to always be
available, even if the cluster is in a degraded state. This
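If you want to experiment with stricter settings, the Erlang client accepts
r/pr and w/pw options per request; roughly like this (bucket and key are
placeholders, and Updated is the map built with riakc_map:update as before):

    %% Example only: read with a primary-read quorum, write with a
    %% primary-write quorum.
    {ok, Fetched} = riakc_pb_socket:fetch_type(Pid,
                        {<<"maps">>, <<"bucket-name">>}, <<"key">>,
                        [{r, quorum}, {pr, quorum}]),
    ok = riakc_pb_socket:update_type(Pid,
             {<<"maps">>, <<"bucket-name">>}, <<"key">>,
             riakc_map:to_op(Updated),
             [{w, quorum}, {pw, quorum}]).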
Hi John,
We are working on the suggestions you provided; we will send our report
shortly.
Best
Dao