Oh, sorry I misunderstood. I thought Celery workers were threads within the
same Python interpreter. By all means have one client per Python process,
but see if you can keep it around between tasks. If the throughput on your
task queue is high enough, you should be able to make use of the benefit of
Hi Sean,
I would indeed like to take advantage of the pooling features of the new
client. Sharing objects across Celery workers isn't something I'd ever
really looked at before, but it seems the only real way to share across
workers (i.e., processes) is to use memcache or similar. Which makes sense.
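As a rough sketch of the "one client per worker process, kept around between
tasks" approach described above: with Celery's default prefork pool, creating
the client lazily inside the child process avoids sharing a socket across the
fork, and every task that process runs then reuses the same client (and its
connection pool). The module, bucket, and connection details below are made up
for the example, assuming the 2.0-era Python client API.

# tasks.py -- illustrative only; names and settings are hypothetical
from celery import Celery
import riak

app = Celery('tasks', broker='amqp://localhost')

_client = None

def get_client():
    # Created on first use inside each worker process (i.e. after the
    # prefork fork), then reused by every task that process executes.
    global _client
    if _client is None:
        _client = riak.RiakClient(nodes=[
            {'host': '127.0.0.1', 'pb_port': 8087},
        ])
    return _client

@app.task
def store_event(key, payload):
    # Reuses the process-wide client instead of building a new one per task.
    bucket = get_client().bucket('events')
    bucket.new(key, data=payload).store()
    return key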
Hey Basho peeps,
Looks like you might have signed the latest Riak release with a new
certificate (or something) - Apt is reporting that the key from
http://apt.basho.com/gpg/basho.apt.key is incorrect this morning.
> apt-get update
W: GPG error: http://apt.basho.com precise Release: The following
s
Shane,
This bug was fixed in Riak 1.4.x in this pull request
https://github.com/basho/riak_pipe/pull/73, which was backported to 1.3.2
in https://github.com/basho/riak_pipe/pull/74. I'm not an expert on
the issue itself, so I'll have to ask whether those messages are safe to
ignore.
Deyan,
What Riak version are you running? There was a corruption issue discovered
and fixed in the 1.4.0 release.
https://github.com/basho/riak/blob/riak-1.4.0/RELEASE-NOTES.md#issues--prs-resolved
https://github.com/basho/merge_index/pull/30
As for fixing it, you'll want to delete the buffer file
Hey Mark,
~ 613GB
ls -lhtra /var/lib/riak/backups/all_nodes.20130725.bak
-rw-r--r-- 1 riak riak 613G Jul 25 20:47 /var/lib/riak/backups/all_nodes.20130725.bak
On Tue, Aug 6, 2013 at 2:24 PM, Mark Phillips wrote:
> Hi Justin,
>
> For starters, how much data are you restoring?
>
> Mark
>
>
> On
Hi Justin,
For starters, how much data are you restoring?
Mark
On Mon, Aug 5, 2013 at 2:46 PM, Justin wrote:
> Hello all,
>
> Is there any way to determine progress/percent complete?
>
> This has been running for 3 days now. I figured it would finish over the
> weekend but it hasn't.
>
> # ri
Thanks Chris! Hopefully it's just a rookie mistake...
Paul Ingalls
Founder & CEO Fanzo
p...@fanzo.me
@paulingalls
http://www.linkedin.com/in/paulingalls
On Aug 6, 2013, at 10:28 AM, Chris Meiklejohn wrote:
> Hi Paul,
>
> I just wanted to let you know that I'm currently investigating this and
Hi Paul,
I just wanted to let you know that I'm currently investigating this and
will get back to you shortly.
I've opened the following issue for tracking it:
https://github.com/basho/riak_control/issues/128
In addition, in retrospect I've realized that using the "incompatible"
language is a b
Hi Matt,
I've submitted a PR to fix the instructions.
https://github.com/basho/basho_docs/pull/534
On Mon, Aug 5, 2013 at 8:26 PM, Sean Cribbs wrote:
> Yes, that page went out before I had a chance to review it. I'll try to
> get the installation instructions patched this week.
>
>
> On Mon, A
11 + 4 + 16, so 31.
18 bytes there are the actual data, so that can't go away. Since the
allocation sizes are going to be word aligned, the least overhead
there is going to be from word-aligning the entire structure, i.e. where
(key_len + bucket_len + 2 + 18) % 8 == 0, but that sort of
optimization on
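To make that alignment arithmetic concrete, here is a small sketch of rounding
an allocation up to an 8-byte word boundary; the 2 + 18 fixed bytes follow the
message above, the rest is just illustration:

WORD = 8           # assumed allocation granularity (8-byte words)
FIXED = 2 + 18     # fixed per-object bytes cited above

def padded_size(key_len, bucket_len):
    # Round the structure size up to the next word boundary.
    raw = key_len + bucket_len + FIXED
    return (raw + WORD - 1) // WORD * WORD

def alignment_waste(key_len, bucket_len):
    # Zero exactly when (key_len + bucket_len + 2 + 18) % 8 == 0.
    return padded_size(key_len, bucket_len) - (key_len + bucket_len + FIXED)

# e.g. a 10-byte bucket and 6-byte key: 36 raw bytes, padded to 40, 4 wasted.
print(padded_size(6, 10), alignment_waste(6, 10))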
Hi Matt,
You are correct about the Decaying class, it is a wrapper for an
exponentially-decaying error rate. The idea is, you configure your client
to connect to multiple Riak nodes, and if one goes down or is restarted,
the possibility of its selection for new connections can be automatically
red
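Purely as an illustration of the idea (a generic sketch of an
exponentially-decaying error rate, not the client's actual Decaying class),
the mechanism could look something like this:

import math
import time

class DecayingErrorRate:
    # A score that grows when errors are recorded and decays exponentially
    # toward zero as time passes without new errors.

    def __init__(self, half_life=10.0):
        # half_life: seconds for the score to halve when no errors arrive
        self.rate = math.log(2) / half_life
        self.value = 0.0
        self.stamp = time.time()

    def _decay(self):
        now = time.time()
        self.value *= math.exp(-self.rate * (now - self.stamp))
        self.stamp = now

    def record_error(self, weight=1.0):
        self._decay()
        self.value += weight

    def current(self):
        self._decay()
        return self.value

# A client can then prefer the node whose error score is currently lowest,
# so a node that went down is avoided at first and gradually tried again
# as its score decays.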
So if you succeed with all your patches, the memory overhead will
decrease by 22 (= 16 + 4 + 2) bytes, am I right?
On 5 August 2013 16:38, Evan Vigil-McClanahan wrote:
> Before I'd done the research, I too thought that the overheads were
> much lower, near to what the calculator said, but not t
The ring state looks OK; the ring does not look polluted with random state. The
strange thing is why the get_fsm process <0.83.0> has a 100M+ heap. It would be
interesting to figure out what's on that heap, which you can learn from the
crash dump.
Perhaps you can load the crash dump into the Cras
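The Crashdump Viewer mentioned there is the usual tool for this; just as a
rough alternative sketch, you can also scan the dump text directly for the
largest heaps. This assumes the plain-text erl_crash.dump layout where each
process section starts with an "=proc:<pid>" line and carries a
"Stack+heap: <words>" field; treat those details as assumptions to verify
against your own dump.

# find_big_heaps.py -- rough sketch, not a substitute for the Crashdump Viewer
import sys

WORD_SIZE = 8  # bytes per word on a 64-bit VM (assumption)

def biggest_heaps(path, top=10):
    pid, heaps = None, []
    with open(path, errors='replace') as dump:
        for line in dump:
            if line.startswith('=proc:'):
                pid = line[len('=proc:'):].strip()
            elif pid and line.startswith('Stack+heap:'):
                heaps.append((int(line.split(':', 1)[1]) * WORD_SIZE, pid))
    return sorted(heaps, reverse=True)[:top]

if __name__ == '__main__':
    for size_bytes, pid in biggest_heaps(sys.argv[1]):
        print('%s\t%.1f MB' % (pid, size_bytes / 1024.0 / 1024.0))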