On July 15, 2013 02:16:47 PM Gabriel Littman wrote:
> Hi,
>
> Another possibility is to create a new bucket for each test run.
>
> Gabe
>
Also a good idea, but not easily implemented against my current library. I've
got a solution using random additions to keys that works for now.
My only pr
On 16/07/13 07:16, Gabriel Littman wrote:
Another possibility is to create a new bucket for each test run.
+1
I think using a unique bucket name is definitely the only sane way to go.
It also prevents you from having weird issues if you run two tests
simultaneously (as can easily happen once y
Mark, Dave, thank you for your help.
Also any feedback and pull requests are welcome.
2013/7/16 Igor Karymov
> We have two different unlinked Riak clusters.
> One is used for statistic aggregation across the system. This one heavily
> abuses Riak's secondary index range query feature, so we have t
We have two different unlinked Riak clusters.
One is used for statistic aggregation across the system. This one heavily
abuses Riak's secondary index range query feature, so we have to grow it
with caution.
The other is used as our main storage. Both clusters have different
backends, different nodes cou
Hello Igor -
You can submit pull requests against the docs repo:
https://github.com/basho/basho_docs
Cheers,
Dave
On Jul 15, 2013, at 6:00 PM, Igor Karymov wrote:
> Unfortunately this library is not included in the "community projects" wiki
> page list and I have no idea how to fix this. May
Hi Igor,
On Mon, Jul 15, 2013 at 3:00 PM, Igor Karymov wrote:
> Unfortunately this library is not included in the "community projects" wiki
> page list and I have no idea how to fix this. Maybe somebody from the Basho
> guys can help us.
>
>
I just went ahead and opened a pull request to add it the
This is what we do for CorrugatedIron integration testing. Test buckets
typically get a test name + a UUID, which makes it interesting when I try to
verify data via curl while I'm debugging. But it also keeps me from
polluting my test buckets with the output of other failed tests.
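The naming scheme Jeremiah describes (test name plus a UUID) can be sketched in a few lines; the function name below is hypothetical, and the stdlib `uuid` module does the work:

```python
import uuid

def unique_bucket(test_name: str) -> str:
    """Build a bucket name of the form <test_name>-<uuid4 hex>.

    Each test run then writes into a fresh bucket, so it never sees
    tombstones or leftovers from earlier (or concurrently running) tests.
    """
    return f"{test_name}-{uuid.uuid4().hex}"
```

Since the UUID is embedded in the bucket name, you can still inspect a specific run's data via curl as long as you note the generated name.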
---
Jeremiah P
Unfortunately this library is not included in the "community projects" wiki
page list and I have no idea how to fix this. Maybe somebody from the Basho
guys can help us.
2013/7/16 Konstantin Kalin
> Thank you for sharing it. I built a similar application already (pool +
> multi-node connections) after
Thank you for sharing it. I built a similar application already (pool +
multi-node connections) after I got confirmation that there is no
"official way" :) Luckily it wasn't hard at all and fit into the existing
prototype nicely (mostly because it's an in-house app).
Thank you,
Konstantin.
On Mon
Hi. Maybe this will be useful for you:
https://github.com/unisontech/uriak_pool
We had to write our own solution because poolboy was not able to
handle Riak nodes' ups and downs gracefully.
Our library also provides an interface wrapper and multi-cluster configuration.
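uriak_pool itself is Erlang, but the core idea (a pool that keeps handing out connections while individual nodes go up and down) is easy to illustrate. Here is a toy, language-neutral sketch in Python; every name is hypothetical and real pools would add health probes and per-connection state:

```python
import itertools

class NodePool:
    """Round-robin over several Riak nodes, skipping nodes marked down.

    A node going away doesn't break checkouts as long as at least one
    node remains up; callers mark nodes down/up as they observe failures.
    """

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.down = set()
        self._rr = itertools.cycle(range(len(self.nodes)))

    def mark_down(self, node):
        self.down.add(node)

    def mark_up(self, node):
        self.down.discard(node)

    def checkout(self):
        # Try each node at most once per checkout attempt.
        for _ in range(len(self.nodes)):
            node = self.nodes[next(self._rr)]
            if node not in self.down:
                return node
        raise RuntimeError("no riak nodes available")
```

The point of the sketch is the failure-masking loop in `checkout`; a plain poolboy-style pool has no such notion of a member being temporarily unavailable.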
2013/7/11 Sean Cribbs
>
Hi,
Another possibility is to create a new bucket for each test run.
Gabe
On Mon, Jul 15, 2013 at 12:06 PM, Matthew Dawson wrote:
> Hi Seth,
> On July 14, 2013 02:25:02 PM Seth Bunce wrote:
> > I had a similar problem. I mitigated the problem by appending a
> > current timestamp (in nanoseconds)
I've tested this using the PBC interface and a build from the source branch
as well.
The timeout occurs while waiting for any response over protobufs,
specifically while waiting for the message size.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
Hi everyone. First post, if I leave anything out just let me know.
I have been using Vagrant in testing Yokozuna with 1.3.0 (the official
0.7.0 "release") and it runs swimmingly. When 1.4 was released and someone
pointed me to the YZ integration branch, I decided to give it a go.
I realize that Y
Hi everyone,
We'd sincerely appreciate it if you could take some time to complete
our second quarterly Riak community survey:
http://riak.polldaddy.com/s/community-survey
Same as last time, all contributors to the survey will be provided
Basho swag, as well as discounted RICON tickets. One lucky
Hi Seth,
On July 14, 2013 02:25:02 PM Seth Bunce wrote:
> I had a similar problem. I mitigated the problem by appending a
> current timestamp (in nanoseconds) to the keys I'm using in my tests.
> This way I don't have to worry about waiting for Riak to reap
> tombstones after 3 seconds and I don't
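Seth's timestamp-suffix trick can be sketched with the stdlib alone; the helper name is hypothetical, and `time.time_ns()` needs Python 3.7+:

```python
import time

def unique_key(base: str) -> str:
    """Append the current time in nanoseconds to a base key.

    Each test run then writes fresh keys, so there is no need to wait
    for Riak to reap tombstones from a previous run's deletes.
    """
    return f"{base}-{time.time_ns()}"
```

Unlike the per-run UUID bucket approach, this keeps everything in one bucket, at the cost of that bucket slowly accumulating dead test keys.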
On July 15, 2013 09:55:05 AM Andrew Thompson wrote:
> See this old post:
>
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-June/004601
> .html
>
> Effectively, you should be doing a GET before ANY PUT, and using the
> deletedvclock option if you're using PB or checking for the
It's unlikely to be that problem since I don't have any test data in my system
yet. My "batch functions" are just running in the background doing what they
normally do (i.e. running a mapreduce as well as querying all keys from a
specific bucket). However, since there is no actual data yet, I'
Thanks for the quick answers, Jared & Kelly.
+zdbbl 16384 is working OK for our production cluster (5 nodes), but in
development some large 2i queries (a list of 4M+ keys) give me
timeouts, so I asked the question, because there are memory implications.
I will try 32768 at our dev cluster (4 n
Guido,
So higher +zdbbl numbers will allow for larger message buffers, but at the
expense of memory usage. We've actually added better wording in the
default vm.args file for 1.4+
https://github.com/basho/riak/blob/master/rel/files/vm.args#L40
So I guess the best answer is trial & error. We add
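For reference, this is the shape of the setting in vm.args; the value is in kilobytes, so the 16384 used above is a 16 MB distribution buffer busy limit (the comment wording here is illustrative, not a quote of the shipped file):

```text
## Erlang distribution buffer busy limit, in kilobytes.
+zdbbl 16384
```

Raising it trades memory for fewer busy-buffer stalls on large inter-node messages, which is why trial and error per cluster is the honest answer.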
Hi Guido. The docs section you referenced is for RiakCS. We do recommend a
substantially higher value for zdbbl when using RiakCS because the object
data being stored is divided into 1MB chunks which is still a relatively
large object size for Riak to handle. Riak used by itself may not need the
va
Hi,
We had an issue in 1.4 where 2i operations were timing out; after going
through support we were advised to use "+zdbbl 16384". The
"Configuring Riak" docs strongly suggest (unless the doc needs to be
re-phrased) that it should be higher:
*Source:*
http://docs.basho.com/riakcs/latest/
Thanks Jared
I'm aware of the limitations of a 3-node cluster. If I understand it correctly,
there are some corner cases where certain copies for some vnodes can land
on the same physical node. But I would
assume there is no case where all 3 copies (for N=3) land on the
same physical node. Hence I
See this old post:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-June/004601.html
Effectively, you should be doing a GET before ANY PUT, and using the
deletedvclock option if you're using PB or checking for the
X-Riak-VClock header on the 404 response. If you get back a tombsto
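The GET-before-PUT decision Andrew describes reduces to a small piece of logic. Over HTTP the vector clock arrives in the `X-Riak-VClock` header, even on a tombstone's 404; this hypothetical helper only shows that decision, not the HTTP calls themselves:

```python
def vclock_for_put(status: int, headers: dict):
    """Given the status and headers of the GET done just before a PUT,
    return the vclock to attach to the PUT, or None if Riak has never
    seen the key.

    A 404 that still carries X-Riak-VClock is a tombstone: reusing its
    vclock lets the PUT causally supersede the delete instead of
    racing it.
    """
    vclock = headers.get("X-Riak-VClock")
    if status == 200 or (status == 404 and vclock is not None):
        return vclock
    return None
```

With protocol buffers the equivalent is requesting `deletedvclock` on the GET so the tombstone's vclock is returned to the client.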
Hi Deyan,
When running mapreduce jobs, reduce phases often end up being the bottleneck.
This is especially true when all input data needs to be gathered on the
coordinating node before it can be executed, as is the case if the
reduce_phase_only_1 flag is enabled. Having this flag set will cause
Riak 1.4 introduced a timeout setting that defaults to 60,000 milliseconds
(60 seconds). If you need to read for longer periods of time, you'll need to
increase the timeout. You can set it on the messages being sent to Riak.
Is this, perhaps, what you're running into?
---
Jeremiah Peschka - Founder, B