Hey All,
First time on the list, so go easy. I've been working with the
python-riak-client for a while, and one of the biggest blockers has been its
lack of connection pooling. Until recently the fork maintained at
https://github.com/bretthoerner/riak-python-client has been sufficient for
my needs
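For readers unfamiliar with the idea, a connection pool just keeps a fixed set of reusable connections so each request doesn't pay the connect cost. A minimal, generic sketch (not the client's actual API; the `factory` callable is a placeholder for whatever creates a real connection):

```python
import queue

class ConnectionPool:
    """Minimal thread-safe pool of reusable connections (illustrative)."""

    def __init__(self, factory, size=5):
        # factory: zero-argument callable that creates a new connection
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Block until a connection is free; raises queue.Empty on timeout
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection so another caller can reuse it
        self._pool.put(conn)
```

A real pool would also need to detect and replace dead connections, but the acquire/release cycle above is the core of it.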
Charlies,
Upgrading Riak's version or switching the storage backend is typically
best done by rolling in the new servers and then rolling out the old ones.
This takes a while but keeps your downtime at ~0. After the old servers are
removed you should be able to start using Riak's newest power
might be the most sane default for long-term or short-term connections.
Thanks again for replying, I'll see what happens when I try this with
multiple live nodes, and get back with more thoughts.
On Mon, Jan 23, 2012 at 7:08 PM, Greg Stein wrote:
> On Fri, Jan 20, 2012 at 15:53, Michael
ta is indirectly returned from the previously failed node, b) all
other connections have failed. The goals are twofold: 1) decrease the
connection time to that of a single node when possible, 2) increase the
likelihood that a connection will succeed (since failures waste time).
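Those two goals can be sketched as "try the last known-good node first, then the rest." This is only an illustration of the strategy described above, not the client's code; `connect` is a hypothetical callable that raises on failure:

```python
import random

class NoNodesAvailable(Exception):
    pass

def connect_with_failover(nodes, connect, last_good=None):
    """Try the last known-good node first, then the others in random order.

    `connect` takes a node address, returns a connection, and raises
    OSError on failure (placeholder signature for illustration).
    """
    ordered = list(nodes)
    random.shuffle(ordered)
    if last_good in ordered:
        # Goal 1: a healthy preferred node costs one connection attempt
        ordered.remove(last_good)
        ordered.insert(0, last_good)
    for node in ordered:
        try:
            # Goal 2: each fallback raises the odds that some attempt succeeds
            return node, connect(node)
        except OSError:
            continue  # node down: fall through to the next one
    raise NoNodesAvailable("all %d nodes failed" % len(nodes))
```

Remembering the returned `node` as the next call's `last_good` keeps traffic pinned to a working server until it fails.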
On Tue, Jan 24, 2012 at 9:51 AM
I think I know the answer to this but would using a mix of 1.0.X and 1.1.0
together cause the same MapReduce issue?
On Fri, Feb 24, 2012 at 4:56 PM, Jon Meredith wrote:
> Issues have been reported by users and customers with the MapReduce
> subsystem in Riak 1.1.0.
>
> 1) MapReduce fails in cl
Andrey / Shuhao,
I had been working on a similar project to riakkit. It's not nearly as
complete, but it has some nice features like lazy loading and basically the
same bucket/client defined class structure.
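By "bucket/client defined class structure" I mean the declarative style where a document class names its bucket and fields. A rough sketch of the pattern (hypothetical names; this is neither riakkit's API nor my project's exact code):

```python
class Field:
    """Marker for a declared document attribute (illustrative)."""
    def __init__(self, default=None):
        self.default = default

class Document:
    """Base class that fills in declared Field attributes on init."""
    bucket = None  # subclasses name their Riak bucket here

    def __init__(self, **kwargs):
        for name in dir(type(self)):
            attr = getattr(type(self), name)
            if isinstance(attr, Field):
                # Use the passed-in value, falling back to the default
                setattr(self, name, kwargs.get(name, attr.default))

class User(Document):
    bucket = "users"
    name = Field()
    age = Field(default=0)
```

Lazy loading would then hook into attribute access so sibling objects are only fetched from Riak when first touched.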
I have similar concerns as Andrey about riakkit, its current beta's PEP 8
compliance being a big one, bu
Phil,
It's a pretty well-known issue. If you don't mind an older version of the
riak client, there is a fork on GitHub that does support this:
https://github.com/bretthoerner/riak-python-client/branches.
Your other option is to manage it in your app or subclass the transport
object.
Some people use h
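Subclassing or wrapping the transport object could look roughly like the following. The names here (`PooledTransport`, `request`, the factory) are placeholders for illustration, not the riak client's real transport API:

```python
class PooledTransport:
    """Wrap a transport factory so each request retries once on failure."""

    def __init__(self, base_transport_factory):
        self._factory = base_transport_factory
        self._transport = base_transport_factory()

    def request(self, *args, **kwargs):
        try:
            return self._transport.request(*args, **kwargs)
        except ConnectionError:
            # Connection died: rebuild the underlying transport, retry once
            self._transport = self._factory()
            return self._transport.request(*args, **kwargs)
```

Managing it in the app instead just means doing this same catch-and-reconnect at the call sites that talk to Riak.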
[apologies for the delay on this email sent to armon only first]
I'm having similar issues on a testing cluster for 1.1.1rc1. 1 out of 4
nodes is failing multiple times and not restarting well; there are around
100 pending transfers. Only this node is failing. I've stopped
pointing traff
> to avoid printing out handoff starting messages until handoff succeeds. In
> 1.1.0/1.1.1 when handoff concurrency is exceeded you may see repeats of
> 'Starting handoff' messages if the destination node denies the transfer due
> to hitting the limit.
>
> Cheers, Jo
If you're running multiple clients, use an LB on each and fail over if the
local LB is down.
On Mon, Jun 25, 2012 at 11:41 AM, Swinney, Austin wrote:
> I use the Amazon Elastic Load Balancer (ELB) on my ec2 riak cluster. I
> understand the concerns of LB fail, but for me, using Riak is largely
David,
The way you're doing it is correct. As far as I can tell, populate is
mostly meant for internal use, similar to set_siblings, which has this
comment in the code:
Set the array of siblings - used internally
Not sure why they aren't using the convention of prefixing internal methods
with an underscore (_).
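For anyone newer to Python, the convention in question looks like this (names invented for illustration; this is not the riak client's code):

```python
class Record:
    def set_data(self, data):
        # Public API: callers are expected to use this
        self._data = data

    def _set_siblings(self, siblings):
        # The leading underscore signals "internal use only" by convention.
        # Python does not enforce it, but readers and linters respect it,
        # which is clearer than a "used internally" docstring comment.
        self._siblings = siblings
```

That way the docstring note about internal use becomes visible at every call site, not just in the source.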
NET driver. I would rather not take the time
> to learn python and how it interfaces with SQL Server if possible. Any
> suggestions?
>
>
> *From:* riak-users
> [mailto:riak-users-boun...@lists.basho.com]
> *On Behalf Of *Michael Clemmons
> *Sent
OK, so I went through with a quick find-and-replace and updated the filenames
and references a few days ago. I didn't get a chance to finish building the doc
files, but the tests passed and I was able to use Riak locally without issue.
Gonna look around and see how Basho's done Sphinx integration since this
chan
Manipulating the vclock client-side could, in theory, be used to affect what
data is stored. I wouldn't say this is a large problem, but I would think
about what's being stored and whether being able to, say, force a revert is
worthwhile.
On Jan 17, 2013 6:07 PM, "Brian Picciano" wrote:
> A web app that we'