Update PB ip in runtime

2016-04-11 Thread Edgar Veiga
Hi everyone! Quick question, is it possible to update the PB ip:port entry in runtime without restarting the node? Best regards! -- *Edgar Veiga* __ *BySide* *edgar.ve...@byside.com * http://www.byside.com Rua Visconde Bóbeda, 70 r/c 4000-108 Porto

Update PB ip in runtime

2016-04-06 Thread Edgar Veiga
Hi everyone! Quick question, is it possible to update the PB entry in runtime without restarting the node? Best regards! ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

Re: Memory usage

2015-02-13 Thread Edgar Veiga
_info:289 An outbound handoff of partition riak_kv_vnode 742168800207099138150308704113737470919028244480 was terminated for reason: noproc Please, can anyone give me a hand with this? I'm starting to get worried about this behaviour. Tell me if you need more info! Thanks and best regards, Edgar

Memory usage

2015-02-10 Thread Edgar Veiga
Hi all! I have a riak cluster, working smoothly in production for about one year, with the following characteristics: - Version 1.4.12 - 6 nodes - leveldb backend - replication (n) = 3 ~ 3 billion keys ~ 1.2Tb per node - AAE disabled Two days ago I've upgraded all of the 6 nodes from riak

Re: Adding nodes to cluster

2015-02-09 Thread Edgar Veiga
suggests that some tuning may be needed.. Best regards, Edgar On 9 February 2015 at 14:54, Christopher Meiklejohn wrote: > > > On Feb 6, 2015, at 1:53 AM, Edgar Veiga wrote: > > > > It is expected that the total amount of data per node lowers quite a > lot, correct? I

Re: Adding nodes to cluster

2015-02-05 Thread Edgar Veiga
n Sat, Jan 24, 2015 at 9:49 PM, Edgar Veiga wrote: > Yeah, after sending the email I realized both! :) > Thanks! Have a nice weekend > On 24 January 2015 at 21:46, Sargun Dhillon wrote: >> 1) Potentially re-enable AAE after migration. As your cluster gets >> bigger, the likelih

Re: Adding nodes to cluster

2015-01-24 Thread Edgar Veiga
ence only becomes scarier in light of this. Losing data > != awesome. > > 6) There shouldn't be any problems, but for safe measures you should > probably upgrade the old ones before the migration. > > > > On Sat, Jan 24, 2015 at 1:31 PM, Edgar Veiga > wrote:

Re: Adding nodes to cluster

2015-01-24 Thread Edgar Veiga
Sargun, Regarding 1) - AAE is disabled. We had problems with it and there are a lot of threads here in the mailing list regarding this. AAE won't stop using more and more disk space and the only solution was disabling it! Since then the cluster has been pretty stable... Regarding 6) Can you or an

Re: Adding nodes to cluster

2015-01-24 Thread Edgar Veiga
Hi Alexander! Thanks for the reply. Ring actual size: 256; Total amount of data on cluster: ~6.6TB (~1.1TB per node) Best regards, Edgar On 24 January 2015 at 20:42, Alexander Sicular wrote: > I would probably add them all in one go so you have one vnode migration > plan that gets executed. Wh
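For context, the numbers quoted in this reply are self-consistent; a quick sketch of the per-node arithmetic (assuming partitions and data are spread evenly, which a real ring only approximates):

```python
# Capacity math for a 6-node cluster with ring_creation_size = 256,
# assuming an even spread of partitions and data across nodes.
ring_size = 256
nodes = 6
total_data_tb = 6.6

vnodes_per_node = ring_size / nodes       # ~42.7 vnodes on each node
data_per_node_tb = total_data_tb / nodes  # ~1.1 TB, matching the thread
print(round(vnodes_per_node, 1), round(data_per_node_tb, 1))
```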

Re: RIAK 1.4.6 - Mass key deletion

2014-04-10 Thread Edgar Veiga
es. Your throughput should not be impacted. But this is > not something I have personally measured/validated. > > Matthew > > > On Apr 10, 2014, at 7:33 AM, Edgar Veiga wrote: > > Hi Matthew! > > I have a possibility of moving the data of anti-entropy directory to a >

Re: RIAK 1.4.6 - Mass key deletion

2014-04-10 Thread Edgar Veiga
store the anti-entropy data? Best regards! On 8 April 2014 23:58, Edgar Veiga wrote: > I'll wait a few more days, see if the AAE maybe "stabilises" and only > after that make a decision regarding this. > The cluster expanding was on the roadmap, but not right now :)

Re: RIAK 1.4.6 - Mass key deletion

2014-04-08 Thread Edgar Veiga
extra servers on Ebay. > > Matthew > > > On Apr 8, 2014, at 6:31 PM, Edgar Veiga wrote: > > Thanks Matthew! > > Today this situation has become unsustainable, In two of the machines I > have an anti-entropy dir of 250G... It just keeps growing and growing and > I'

Re: RIAK 1.4.6 - Mass key deletion

2014-04-08 Thread Edgar Veiga
.com/basho/leveldb/wiki/mv-tiered-options > > This feature might give you another option in managing your storage > volume. > > > Matthew > > On Apr 8, 2014, at 11:07 AM, Edgar Veiga wrote: > > It makes sense, I do a lot, and I really mean a LOT of updates per key,

Re: RIAK 1.4.6 - Mass key deletion

2014-04-08 Thread Edgar Veiga
hen you > should see AAE space usage dropping dramatically. > > Matthew > > > On Apr 8, 2014, at 9:31 AM, Edgar Veiga wrote: > > Thanks a lot Matthew! > > A little bit of more info, I've gathered a sample of the contents of

Re: RIAK 1.4.6 - Mass key deletion

2014-04-08 Thread Edgar Veiga
13:24, Matthew Von-Maszewski wrote: > Argh. Missed where you said you had upgraded. Ok, I will proceed with > getting you comparison numbers. > > Sent from my iPhone > > On Apr 8, 2014, at 6:51 AM, Edgar Veiga wrote: > > Thanks again Matthew, you've been very helpful!

Re: RIAK 1.4.6 - Mass key deletion

2014-04-08 Thread Edgar Veiga
deletion discussion. I made changes > recently to the aggressive delete code. The second section of the > following (updated) web page discusses the adjustments: > > https://github.com/basho/leveldb/wiki/Mv-aggressive-delete > > Matthew > > > On Apr 6, 2014, at 4:29 P

Re: Update to 1.4.8

2014-04-08 Thread Edgar Veiga
So Basho, to summarize: I upgraded to the latest 1.4.8 version without removing the anti-entropy data dir because at the time that note wasn't yet in the Release Notes of 1.4.8. A few days later, I did it: stopped AAE via riak attach, restarted all the nodes one by one removing the an

Re: Update to 1.4.8

2014-04-08 Thread Edgar Veiga
Well, my anti-entropy folders on each machine have ~120G, it's quite a lot! I have ~600G of data per server and a cluster of 6 servers with leveldb. Just for comparison, what about you? Someone from Basho, can you please advise on this one? Best regards! :) On 8 April 2014 11:02, Timo

Re: Update to 1.4.8

2014-04-08 Thread Edgar Veiga
Hi Timo, "...So I stopped AAE on all nodes (with riak attach), removed the AAE folders on all the nodes. And then restarted them one-by-one, so they all started with a clean AAE state. Then about a day later the cluster was finally in a normal state." I don't understand the difference between wha

Re: RIAK 1.4.6 - Mass key deletion

2014-04-06 Thread Edgar Veiga
t allows it to also be > removed if and only if there is no chance for an identical key to exist on > a higher level. > > I apologize that I cannot give you a more useful answer. 2.0 is on the > horizon. > > Matthew > > > On Apr 6, 2014, at 7:04 AM, Edgar Veiga wrote:

Re: RIAK 1.4.6 - Mass key deletion

2014-04-06 Thread Edgar Veiga
see any problem with this? I no longer need those millions of values that are living in the cluster... When the version 2.0 of riak runs stable I'll do the update and only then delete those keys! Best regards On 18 February 2014 16:32, Edgar Veiga wrote: > Ok, thanks a lot Matthew. >

Re: Riak node down after ssd failure

2014-03-18 Thread Edgar Veiga
dename distinct from that of the last > one, and force-replace the new node for the old, dead one. > On Tue, Mar 18, 2014 at 1:03 PM, Edgar Veiga wrote: >> Hello all! >> >> I have a 6 machine cluster with leveldb as backend, using riak 1.4.8 >> version. >> >

Riak node down after ssd failure

2014-03-18 Thread Edgar Veiga
Hello all! I have a 6 machine cluster with leveldb as backend, using riak 1.4.8 version. Today, the ssd of one of the machines has failed and is not recoverable, so I have already ordered a new one that should arrive by Thursday! In the meanwhile what should I do? I have no backup of the old ss

Re: Update to 1.4.8

2014-03-06 Thread Edgar Veiga
e notes after I've started the upgrade process... Best regards! On 6 March 2014 03:08, Scott Lystig Fritchie wrote: > Edgar Veiga wrote: > > ev> Is this normal? > > Yes. One or more of your vnodes can't keep up with the workload > generated by AAE repair

Update to 1.4.8

2014-03-03 Thread Edgar Veiga
Hi all, I have a cluster of 6 servers, with level-db as backend. Previously I was with 1.4.6 version of riak. I've updated to 1.4.8 and since then I have the output log of all of the nodes flooded with messages like this: 2014-03-03 08:52:21.619 [info] <0.776.0>@riak_kv_entropy_manager:perhaps_l

1.4.8 - AAE regression fixed

2014-02-27 Thread Edgar Veiga
On the Release Notes, there's a new section recommending the deletion of all previous AAE info before upgrading to 1.4.8. What are the risks (if any) of not doing this (the deletion) besides wasting resources? Best Regards

Re: RIAK 1.4.6 - Mass key deletion

2014-02-18 Thread Edgar Veiga
atthew > > > > On Feb 18, 2014, at 11:10 AM, Edgar Veiga wrote: > > The only/main purpose is to free disk space.. > > I was a little bit concerned regarding this operation, but now with your > feedback I'm tending not to do anything, I can't risk the

Re: RIAK 1.4.6 - Mass key deletion

2014-02-18 Thread Edgar Veiga
ed to carefully throttle your delete rate or > the overhead will likely impact your production throughput. > > We have new code to help quicken the actual purge of deleted data in Riak > 2.0. But that release is not quite ready for production usage. > > > What do you hope to

Re: RIAK 1.4.6 - Mass key deletion

2014-02-18 Thread Edgar Veiga
Sorry, forgot that info! It's leveldb. Best regards On 18 February 2014 15:27, Matthew Von-Maszewski wrote: > Which Riak backend are you using: bitcask, leveldb, multi? > > Matthew > > > On Feb 18, 2014, at 10:17 AM, Edgar Veiga wrote: > > > Hi all! > >

RIAK 1.4.6 - Mass key deletion

2014-02-18 Thread Edgar Veiga
Hi all! I have a fairly trivial question regarding mass deletion on a riak cluster, but firstly let me give you just some context. My cluster is running with riak 1.4.6 on 6 machines with a ring of 256 nodes and 1Tb ssd disks. I need to execute a massive object deletion on a bucket, I'm talking o
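The advice given later in this thread is to carefully throttle the delete rate so the purge does not impact production throughput. A hedged sketch of such a loop; the client object and its delete method are hypothetical stand-ins, not a real Riak client API:

```python
import time

def throttled_delete(client, bucket, keys, batch_size=500, pause_s=1.0):
    """Delete keys in batches, pausing between batches so delete
    traffic does not starve production reads and writes."""
    deleted = 0
    for i in range(0, len(keys), batch_size):
        for key in keys[i:i + batch_size]:
            client.delete(bucket, key)  # hypothetical client method
            deleted += 1
        time.sleep(pause_s)
    return deleted
```

Batch size and pause would need tuning against observed leveldb compaction load; the numbers above are illustrative only.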

Re: Level-db cluster

2014-02-11 Thread Edgar Veiga
here doesn't seem to be any real advantage to lowering the thread > count. > > Thanks for raising the issue. > > -John > > > On Feb 3, 2014, at 10:51 AM, Edgar Veiga wrote: > > Hi all, > > I have a 6 machines cluster with a ring of 256 nodes with levelDB a

Level-db cluster

2014-02-03 Thread Edgar Veiga
Hi all, I have a 6 machines cluster with a ring of 256 nodes with levelDB as backend. I've seen that recently in the documentation, this has appeared: If using LevelDB as the storage backend (which maintains its own I/O thread pool), the number of async threads in Riak's default pool can be decr
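The documentation note quoted here refers to the Erlang VM's async I/O thread pool, which Riak sets to 64 by default in vm.args; since eleveldb maintains its own I/O threads, the pool can be lowered when leveldb is the backend. A hedged sketch of the change (the value 16 is illustrative, not from the thread):

```
## vm.args excerpt -- illustrative value only; eleveldb runs its own
## I/O thread pool, so Riak's default of +A 64 can be reduced
+A 16
```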

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
ere (although I clearly > misunderstood). There is a tradeoff between speed and > consistency/reliability, and the whole application has to take advantage of > the extra consistency and reliability for it to make sense. > > Sorry again, > Jason Campbell > > - Original Mes

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
Here's a (bad) mockup of the solution: https://cloudup.com/cOMhcPry38U Hope that this time I've made myself a little more clear :) Regards On 30 January 2014 23:04, Edgar Veiga wrote: > Yes Eric, I understood :) > > > On 30 January 2014 23:00, Eric Redmond wrote: &

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
Yes Eric, I understood :) On 30 January 2014 23:00, Eric Redmond wrote: > For clarity, I was responding to Jason's assertion that Riak shouldn't be > used as a cache, not to your specific issue, Edgar. > > Eric > > On Jan 30, 2014, at 2:54 PM, Edgar Veiga wrote:

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
used for > all sorts of uses, but I believe in the right tool for the right job. > Unless there is something I don't understand, Riak is probably the wrong > tool. It will work, but there is other software that will work much better. > > I hope this helps, > Jason Campbell

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
gards, On 30 January 2014 10:46, Russell Brown wrote: > > On 30 Jan 2014, at 10:37, Edgar Veiga wrote: > > Also, > > Using last_write_wins = true, do I need to always send the vclock while on > a PUT request? In the official documention it says that riak will look only

Re: last_write_wins

2014-01-30 Thread Edgar Veiga
Also, Using last_write_wins = true, do I need to always send the vclock while on a PUT request? In the official documentation it says that riak will look only at the timestamp of the requests. Best regards, On 29 January 2014 10:29, Edgar Veiga wrote: > Hi Russell, > > No, it doesn

Re: last_write_wins

2014-01-29 Thread Edgar Veiga
Hi Russell, No, it doesn't depend. It's always a new value. Best regards On 29 January 2014 10:10, Russell Brown wrote: > > On 29 Jan 2014, at 09:57, Edgar Veiga wrote: > > tl;dr > > If I guarantee that the same key is only written with a 5 second interva

Re: last_write_wins

2014-01-29 Thread Edgar Veiga
tl;dr If I guarantee that the same key is only written with a 5 second interval, is last_write_wins=true profitable? On 27 January 2014 23:25, Edgar Veiga wrote: > Hi there everyone! > > I would like to know, if my current application is a good use case to set > last_write_

last_write_wins

2014-01-27 Thread Edgar Veiga
Hi there everyone! I would like to know, if my current application is a good use case to set last_write_wins to true. Basically I have a cluster of node.js workers reading and writing to riak. Each node.js worker is responsible for a set of keys, so I can guarantee some kind of non distributed ca
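As background for the question: with last_write_wins=true a replica simply keeps the write carrying the newest timestamp. A minimal sketch of that resolution rule (not Riak's actual code), which is only safe when each key has a single writer and writes are well separated in time:

```python
def lww_resolve(siblings):
    """Keep the value with the newest timestamp, as last_write_wins
    does. `siblings` is a list of (timestamp, value) pairs."""
    return max(siblings, key=lambda tv: tv[0])[1]

# With one writer per key and writes spaced well apart (the 5-second
# guarantee discussed later in this thread), timestamps are totally
# ordered and the outcome is deterministic.
assert lww_resolve([(100, "old"), (105, "new")]) == "new"
```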

Re: anti_entropy_expire

2014-01-03 Thread Edgar Veiga
etween {473846233978378680511350941857232385279071879168,'riak@192.168.20.112'} and {479555224749202520035584085735030365824602865664,'riak@192.168.20.107'} I have few but consistent lines like this (every two hours, during this process). Best regards. On 2 January 2014 10:05, Edgar Veiga wrote:

Re: anti_entropy_expire

2014-01-02 Thread Edgar Veiga
{anti_entropy_leveldb_opts, [{write_buffer_size, 4194304}, > {use_bloomfilter, true}, > {max_open_files, 20}]}, > > This might not solve your specific problem, but it will certainly improve your AAE

Re: anti_entropy_expire

2013-12-31 Thread Edgar Veiga
Hey guys! Nothing on this one? Btw: Happy new year :) On 27 December 2013 22:35, Edgar Veiga wrote: > This is a du -hs * of the riak folder: > > 44G anti_entropy > 1.1M kv_vnode > 252G leveldb > 124K ring > > It's a 6 machine cluster, so ~1512G of levelDB.
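The arithmetic in this message checks out; a quick sketch of the per-node and cluster-wide numbers from the du output:

```python
# Disk-usage figures from the du output: 44G of anti_entropy next to
# 252G of leveldb data on each of the 6 nodes.
aae_g, leveldb_g, nodes = 44, 252, 6

total_leveldb_g = leveldb_g * nodes   # 1512G across the cluster, as stated
aae_fraction = aae_g / leveldb_g      # AAE overhead relative to K/V data
print(total_leveldb_g, round(aae_fraction * 100, 1))  # ~17.5% overhead
```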

Re: anti_entropy_expire

2013-12-27 Thread Edgar Veiga
be nice to have > that info available. > > Matthew > > P.S. Unrelated to your question: Riak 1.4.4 is available for download. > It has a couple of nice bug fixes for leveldb. > > > On Dec 27, 2013, at 2:08 PM, Edgar Veiga wrote: > > Ok, thanks for confirming! > &

Re: anti_entropy_expire

2013-12-27 Thread Edgar Veiga
re are options available in app.config to control how often this occurs > and how many vnodes rehash at once: defaults are every 7 days and two > vnodes per server at a time. > > Matthew Von-Maszewski > > > On Dec 27, 2013, at 13:50, Edgar Veiga wrote: > > Hi! > >
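For reference, the two defaults Matthew mentions (rebuild every 7 days, two vnodes per server at a time) correspond to riak_kv settings in app.config; a hedged fragment showing those defaults:

```erlang
{riak_kv, [
    %% how long before an AAE hash tree is expired and rebuilt
    %% from the on-disk K/V data (milliseconds; 604800000 = 7 days)
    {anti_entropy_expire, 604800000},
    %% how many vnodes may build/exchange trees at once per node
    {anti_entropy_concurrency, 2}
]}
```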

anti_entropy_expire

2013-12-27 Thread Edgar Veiga
rom the on-disk K/V data. Can you confirm me that this may be the root of the "problem" and if it's normal for the action to last for two days? I'm using riak 1.4.2 on 6 machines, with centOS. The backend is levelDB. Best Regards, Edgar Veiga

Re: Riak cluster all nodes down

2013-09-20 Thread Edgar Veiga
> https://github.com/basho/riak_kv/issues/666 will track adding some > validation code to protect against similar incidents. > > Jon > > > On Fri, Sep 20, 2013 at 8:59 AM, Edgar Veiga wrote: > >> Problem solved. >> >> The n_val = "3" caused the crash! I

Re: Riak cluster all nodes down

2013-09-20 Thread Edgar Veiga
Problem solved. The n_val = "3" caused the crash! I had a window of time while starting a node to send a new PUT command and restore the correct value. Best regards, thanks Jon On 20 September 2013 15:42, Edgar Veiga wrote: > Yes I did, via CURL command: > > curl -v -XP
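The root cause here is a type issue in the JSON payload: bucket props must carry n_val as a JSON number, not a string, otherwise Erlang receives the binary <<"3">> rather than an integer. A small illustration (the payloads are reconstructed, not copied from the thread):

```python
import json

# What was sent: n_val as a JSON string, which Erlang decodes to the
# binary <<"3">> -- not a valid replication factor.
bad = json.loads('{"props": {"n_val": "3"}}')
# What Riak expects: a JSON number, decoded to the integer 3.
good = json.loads('{"props": {"n_val": 3}}')

print(type(bad["props"]["n_val"]).__name__)   # str
print(type(good["props"]["n_val"]).__name__)  # int
```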

Riak cluster all nodes down

2013-09-20 Thread Edgar Veiga
Hello everyone, Please lend me a hand here... I'm running a riak cluster of 6 machines (version 1.4.1). Suddenly all the nodes in the cluster went down and they are refusing to go up again. It keeps crashing all the time, this is just a sample of what I get when starting a node: 2013-09-20 1

Re: Riak cluster all nodes down

2013-09-20 Thread Edgar Veiga
2013 15:32, Jon Meredith wrote: > Looks like the nval is set to a binary <<"3">> rather than an integer. > Have you changed it recently and how? > > On Sep 20, 2013, at 8:25 AM, Edgar Veiga wrote: > > Hello everyone, > > Please lend me a hand here.

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
r and 0=numbers ): xxx___0x_00_000_000xx We are using the php serialize native function! Best regards On 10 July 2013 11:43, damien krotkine wrote: > > > > On 10 July 2013 11:03, Edgar Veiga wrote: > >> Hi Guido. >> >> T

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Guido, we're not using Java and that won't be an option. The technology stack is php and/or node.js Thanks anyway :) Best regards On 10 July 2013 10:35, Edgar Veiga wrote: > Hi Damien, > > We have ~11 keys and we are using ~2TB of disk space. > (The average

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
benchmark different > compressions algorithms on the application side. > > > [1]: https://github.com/Sereal/Sereal/wiki/Sereal-Comparison-Graphs > [2]: https://github.com/tobyink/php-sereal/tree/master/PHP > > On 10 July 2013 10:49, Edgar Veiga wrote: > >> Hello all

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
ido Medina wrote: > > Then you are better off with Bitcask, that will be the fastest in your > case (no 2i, no searches, no M/R) > > HTH, > > Guido. > > On 10/07/13 09:49, Edgar Veiga wrote: > > Hello all! > > I have a couple of questions that I would like to

Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
evelDB compression. Do you think we should compress our objects on the client? Best regards, Edgar Veiga