Hi Timo,
...So I stopped AAE on all nodes (with riak attach), removed the AAE
folders on all the nodes. And then restarted them one-by-one, so they
all started with a clean AAE state. Then about a day later the cluster
was finally in a normal state.
I don't understand the difference between what
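(For reference, a minimal sketch of that reset procedure, assuming the default
data path; adjust /var/lib/riak to your install's platform_data_dir:)

    # on each node, disable AAE from an attached console
    $ riak attach
    > riak_kv_entropy_manager:disable().
    > riak_kv_entropy_manager:cancel_exchanges().
    # detach with Ctrl-D (Ctrl-C would kill the node)

    # then restart the node with a clean AAE state
    $ riak stop
    $ rm -rf /var/lib/riak/anti_entropy/*
    $ riak start

(Doing this one node at a time keeps the cluster available; AAE then rebuilds
its hash trees from the live key data.)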
Well, the anti-entropy folders on each of my machines hold ~120G. It's quite
a lot!
I have ~600G of data per server and a cluster of 6 servers with leveldb.
Just for comparison's sake, what about you?
Someone from Basho, can you please advise on this one?
Best regards! :)
On 8 April 2014 11:02,
I also have 6 servers. Each server has about 2TB of data. So maybe my
anti_entropy dir size is “normal”.
Kind regards,
Timo
p.s. I’m using the multi_backend as I have some data only in memory
(riak_kv_memory_backend); all data on disk is in riak_kv_eleveldb_backend.
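(A minimal app.config sketch of that kind of multi_backend setup; the backend
names and data_root path here are placeholders:)

    {riak_kv, [
        {storage_backend, riak_kv_multi_backend},
        %% buckets with no explicit backend property land in leveldb
        {multi_backend_default, <<"eleveldb_backend">>},
        {multi_backend, [
            {<<"eleveldb_backend">>, riak_kv_eleveldb_backend,
                [{data_root, "/var/lib/riak/leveldb"}]},
            {<<"memory_backend">>, riak_kv_memory_backend, []}
        ]}
    ]}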
So Basho, to summarize:
I upgraded to the latest 1.4.8 version without removing the anti-entropy
data dir, because at the time that note wasn't yet in the Release Notes
of 1.4.8.
A few days later, I did it: stopped AAE via riak attach, then restarted
all the nodes one by one, removing the
Thanks again Matthew, you've been very helpful!
Maybe you can give me some kind of advice on this issue I've been having
since I upgraded to 1.4.8.
Since the upgrade, my anti-entropy data has been growing a lot and has
only stabilised at very high values... Right now my cluster has 6 machines,
each
AAE is broken and brain dead in releases 1.4.3 through 1.4.7. That might be
your problem.
I have a two billion key data set building now. I will forward node disk usage
when available.
Matthew
Sent from my iPhone
On Apr 8, 2014, at 6:51 AM, Edgar Veiga edgarmve...@gmail.com wrote:
Argh. Missed where you said you had upgraded. Ok, I will proceed with getting
you comparison numbers.
Sent from my iPhone
Thanks a lot Matthew!
A little more info: I've gathered a sample of the contents of the
anti-entropy data on one of my machines:
- 44 folders with names matching the folder names in the leveldb dir
(e.g. 393920363186844927172086927568060657641638068224/)
- each folder has 5 files
I'm about to have two data centers bi-directionally replicating and I'm looking
to confirm that Riak Multi Data Center fullsync Replication never deletes
during replication.
The MDC Architecture Page says that the two clusters stream missing
objects/updates back and forth.
It does not state
That makes sense. I do a lot, and I really mean a LOT, of updates per key,
maybe thousands a day! The cluster experiences far more updates to existing
keys than insertions of new keys.
The hash trees will rebuild during the next weekend (normally it takes
about two days to complete the
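(If the rebuilds themselves are the bottleneck, their pacing is tunable in
the riak_kv section of app.config. A sketch with the 1.4-era knobs; the
values are examples, not recommendations:)

    {riak_kv, [
        %% max one tree build per node per hour: {builds, per_ms}
        {anti_entropy_build_limit, {1, 3600000}},
        %% rebuild each hash tree weekly (in ms)
        {anti_entropy_expire, 604800000},
        %% concurrent exchanges/builds per node
        {anti_entropy_concurrency, 2}
    ]}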
Edgar,
The test I have running currently has reached 1 billion keys. It is running
against a single node with N=1. It has 42G of AAE data. Here is my
extrapolation to compare with your numbers:
You have ~2.5 billion keys. I assume you are running N=3 (the default). AAE
therefore is actually
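(The archive cuts Matthew off here. Purely as a back-of-envelope reading of
the numbers he gives, one way the extrapolation could go:)

    42 GB / 1.0e9 keys at N=1      ->  ~42 GB per billion replicas
    2.5e9 keys x 3 replicas        ->  7.5e9 replicas tracked by AAE
    7.5 x 42 GB                    ->  ~315 GB expected cluster-wide
    315 GB / 6 nodes               ->  ~52 GB per node

(On that rough estimate, the ~120 GB per node reported earlier in the thread
would be well above expectation, which fits the broken-AAE explanation.)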
Hi Ray,
Deletion is replicated by both real-time and full-sync MDC - specifically,
the tombstone representing the deletion will be replicated. The delete_mode
setting affects how long a tombstone remains alive for MDC to replicate
it (3 seconds by default). If only full-sync MDC is used, the
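(For reference, delete_mode lives in the riak_kv section of app.config and
takes either a time in milliseconds or the atoms keep / immediate; a sketch:)

    {riak_kv, [
        %% keep tombstones for 10s instead of the 3s default, e.g. to give
        %% MDC more time to replicate them; {delete_mode, keep} never reaps
        {delete_mode, 10000}
    ]}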
Hi Luke:
Thanks, that's very clear.
I'll adjust our configuration accordingly.
--Ray
- Original Message -
From: Luke Bakken lbak...@basho.com
To: Ray Cote rgac...@appropriatesolutions.com
Cc: riak-users riak-users@lists.basho.com
Sent: Tuesday, April 8, 2014 12:20:49 PM
Lee,
That looks like a legitimate permissions issue during the multipart upload
initiation. Perhaps check the Riak CS logs for any errors or clues and verify
you do have proper ACL permissions for the bucket you are attempting to upload
to. We don't use that function in our testing so there
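(A quick way to do both checks, assuming default Riak CS log locations and
an s3cmd configured against the CS endpoint; both are assumptions:)

    # scan recent CS logs for permission/ACL errors around the failed initiate
    grep -iE "acl|access denied" /var/log/riak-cs/console.log | tail -20

    # confirm bucket ownership and grants
    s3cmd info s3://your-bucket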
Edgar,
Today we disclosed a new feature for Riak's leveldb, Tiered Storage. The
details are here:
https://github.com/basho/leveldb/wiki/mv-tiered-options
This feature might give you another option in managing your storage volume.
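(From that wiki page, the feature is driven by a few eleveldb settings in
app.config; roughly, with made-up mount points:)

    {eleveldb, [
        %% levels below this stay on the fast array; this level and above
        %% move to the cheaper/slower volume
        {tiered_slow_level, 4},
        {tiered_fast_prefix, "/mnt/fast_ssd"},
        {tiered_slow_prefix, "/mnt/slow_sata"}
    ]}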
Matthew
On Apr 8, 2014, at 11:07 AM, Edgar Veiga
Thanks Matthew!
Today this situation has become unsustainable. On two of the machines I
have an anti-entropy dir of 250G... It just keeps growing and growing, and
I'm almost reaching the max size of the disks.
Maybe I'll just turn off AAE in the cluster, remove all the data in the
anti-entropy
No. I do not see a problem with your plan. But ...
I would prefer to see you add servers to your cluster. Scalability is one of
Riak's fundamental characteristics. As your database needs grow, we grow with
you … just add another server and migrate some of the vnodes there.
I obviously
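(For completeness, growing the cluster is the usual join/plan/commit dance,
run from any existing node; the node name here is a placeholder:)

    riak-admin cluster join riak@newnode.example.com
    riak-admin cluster plan      # review the proposed partition transfers
    riak-admin cluster commit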
I'll wait a few more days to see if the AAE maybe stabilises, and only after
that will I make a decision regarding this.
Expanding the cluster was on the roadmap, but not right now :)
I've attached a few screenshots; you can clearly observe the evolution of
one of the machines after the anti-entropy data