On Apr 17, 2015, at 16:19, Andrew Stone ast...@basho.com wrote:
Hi Jonathan,
Sorry for the late reply. It looks like riak_ensemble still thinks that
those old nodes are part of the cluster. Did you remove them with
'riak-admin cluster leave' ? If so they should have been removed from the
root ensemble also, and the machines shouldn't have actually left the
Thanks for all the work Seth!
On Sat, Oct 18, 2014 at 11:42 AM, Seth Thomas stho...@opscode.com wrote:
An update for the Riak CS Chef cookbook has been released, bringing
support for Riak CS 1.5.1 along with updates to dependencies and some bug
fixes.
You can grab it from GitHub [1] or the
Hi Charles,
AFAIK we haven't ever tested Riak CS with the MapR connector. However, if
MapR works with S3, you should just have to change the IP to point to a
load balancer in front of your local Riak CS cluster. I'm unaware of how to
change that setting in MapR though. It seems like a question
Hi Toby,
We've seen this scenario before. It occurs because riak-cs stores bucket
information in 2 places on disk:
1) Inside the user record (for bucket permissions)
2) Inside a global list of buckets, since each bucket must be unique
What has happened most likely is that the bucket is no
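The failure mode described above can be sketched in a few lines. This is a hypothetical model, not riak-cs's actual code; the record shapes and helper names are made up for illustration:

```python
# Hypothetical sketch (not riak-cs's actual code) of the two places a
# bucket is recorded, and how a partial delete leaves them out of sync.

user_record = {"name": "toby", "buckets": {"photos"}}   # 1) inside the user record
global_buckets = {"photos": "toby"}                      # 2) global uniqueness list

def create_bucket(user, bucket):
    # Creation must consult the global list, since bucket names are unique.
    owner = global_buckets.get(bucket)
    if owner is not None and owner != user["name"]:
        raise ValueError("bucket name already owned")
    global_buckets[bucket] = user["name"]
    user["buckets"].add(bucket)

def partial_delete(user, bucket):
    # Simulates a delete that updates the user record but never cleans
    # up the global list -- the inconsistent state described above.
    user["buckets"].discard(bucket)

partial_delete(user_record, "photos")
print("photos" in user_record["buckets"])   # False: the user no longer sees it
print("photos" in global_buckets)           # True: but the name is still claimed

alice = {"name": "alice", "buckets": set()}
try:
    create_bucket(alice, "photos")
except ValueError as e:
    print(e)   # bucket name already owned
```

With the global entry left behind, no other user can claim the name even though its owner no longer lists it.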
Oops. Didn't reply to the list. Sorry for the dupe, Matt.
Hi Matt,
My guess is that this has to do with a fairly recent change to the cluster
join mechanism.
Try attaching to the erlang shell with riak attach and running the
following command:
riak_ensemble_manager:enable().
In the future a
Think of an object with thousands of siblings. That's an object that has 1
copy of the data for each sibling. That object could be on the order of
100s of megabytes. Every time an object is read off disk and returned to the
client, 100 MB is being transferred. Furthermore, leveldb must rewrite the
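The arithmetic above is easy to make concrete. Sizes here are assumed for illustration, not measured:

```python
# Back-of-the-envelope sketch of sibling explosion: each sibling keeps
# its own full copy of the value, so the stored object -- and every
# read of it -- scales linearly with the sibling count.

value_size = 100 * 1024    # assume ~100 KB per sibling's copy of the data
siblings = 1000            # "thousands of siblings", as described above

object_size = value_size * siblings          # bytes read off disk per fetch
print(object_size // (1024 * 1024), "MB transferred per read")
# prints: 97 MB transferred per read
```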
Hi Georgio,
There are many possible ways to do something like this. Riak CS in
particular chunks large files into immutable data blocks, and has manifests
pointing to those blocks to track versions of files. Manifests and blocks
are each stored in their own riak object. There are some tricks
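The chunk-and-manifest idea can be sketched as follows. The 1 MB block size matches Riak CS's default, but the key scheme and record shapes here are made up for illustration, not Riak CS's actual format:

```python
import hashlib

# Hypothetical sketch of splitting a large file into immutable blocks
# with a manifest pointing at them (illustrative, not Riak CS's code).

BLOCK_SIZE = 1024 * 1024  # 1 MB, Riak CS's default block size

def store_file(data: bytes):
    blocks = {}                                   # each block -> its own object
    manifest = {"size": len(data), "blocks": []}  # manifest points at the blocks
    for off in range(0, len(data), BLOCK_SIZE):
        chunk = data[off:off + BLOCK_SIZE]
        key = hashlib.sha1(chunk).hexdigest()     # immutable, content-addressed
        blocks[key] = chunk
        manifest["blocks"].append(key)
    return manifest, blocks

manifest, blocks = store_file(b"x" * (2 * BLOCK_SIZE + 100))
print(len(manifest["blocks"]))   # prints: 3
```

Because blocks are immutable, a new version of a file only needs a new manifest; unchanged blocks can be referenced as-is.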
./configure --prefix=/home/dave
make
make install
On Fri, Oct 11, 2013 at 3:10 PM, Dave King djk...@gmail.com wrote:
That would be great if installing Erlang didn't require sudo...
Dave
On Fri, Oct 11, 2013 at 11:50 AM, Jon Meredith jmered...@basho.com wrote:
You should be able to
Good to hear. :)
On Wed, Sep 18, 2013 at 11:52 PM, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
Hi Andrew,
Thanks for that -- so far things are looking stable after making that
change.
-T
On 19/09/13 13:27, Andrew Stone wrote:
Hi Toby,
Can you try raising
Hi Thomas,
Yes, you can change the n_val in the default bucket properties of the Riak
cluster. That's how CS determines how many replicas to store. However,
please keep in mind that reducing the replicas can reduce your availability.
-Andrew
On Wed, Aug 21, 2013 at 5:27 AM, Thomas Dunham
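In the app.config of that era, the default replica count lived under riak_core's default_bucket_props. A fragment only, with the stock value shown and all other settings omitted:

```erlang
%% app.config (Riak 1.x era) -- fragment only
{riak_core, [
    %% number of replicas kept for each object; CS reads this default
    {default_bucket_props, [{n_val, 3}]}
]}
```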
Hi Andre,
The blocks are going to be spread across some subset of that 100 servers.
Since Riak CS stores the chunks inside Riak, they are hashed based on the
primary key. Currently there is no way to co-locate chunks in Riak CS. You
can read more about how Riak manages storage here:
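The scattering described above follows from hashing each block's key onto the ring. This is a simplified stand-in for Riak's consistent hashing; the key format and modulo placement are illustrative only:

```python
import hashlib

# Simplified stand-in for Riak's key placement: each object key is
# hashed onto the ring, so consecutive blocks of one file land on
# unrelated partitions (hence no co-location).

NUM_PARTITIONS = 64  # Riak's default ring size

def partition_for(key: str) -> int:
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % NUM_PARTITIONS

# Five consecutive blocks of the same file scatter across the ring:
placements = [partition_for(f"myfile:block:{i}") for i in range(5)]
print(placements)
```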
, Rahul Bongirwar bongirwar.rahul...@gmail.com wrote:
Hi,
All processes are running on my node (riak, riak-cs, stanchion) but I am
still getting the same error.
I tried this on at least 3 different nodes but am facing the same problem.
Thanks,
Rahul
On Thu, Jun 13, 2013 at 9:36 PM, Andrew Stone ast
after all the configuration is done. So I am not able to create the admin
user either.
Thanks,
Rahul
On Thu, Jun 20, 2013 at 5:25 PM, Andrew Stone ast...@basho.com wrote:
Hi Rahul, I'm bringing this back on list in case anyone else has this
issue.
Did you create an admin user as in step 4 listed here
Hi Rahul,
That error message is misleading. In general it can be treated as a 500
error. My guess is that you do not have stanchion running. Stanchion
provides a serialization layer for creating users and buckets. See
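The role of that serialization layer can be sketched in miniature. This is a hypothetical model of the idea, not stanchion's implementation; the class and names are made up:

```python
# Hypothetical sketch of a serialization point: all bucket/user creation
# requests funnel through one place that handles them one at a time, so
# two concurrent requests cannot both claim the same unique name.

class Serializer:
    def __init__(self):
        self.taken = set()

    def create(self, name: str) -> bool:
        # Requests are processed serially, so check-then-set is safe here.
        if name in self.taken:
            return False
        self.taken.add(name)
        return True

stanchion = Serializer()
print(stanchion.create("photos"))   # True  (first claim wins)
print(stanchion.create("photos"))   # False (duplicate rejected)
```

Without such a single point of serialization, two nodes could each check the global list, see the name free, and both create it.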
Hi,
I've been reading up on Riak search and am very pleased to see the new
index repair command in 1.2. However, it seems that in order to use it you
must be able to somehow detect when there are inconsistencies in your
search indexes across vnodes. While it's likely the inconsistencies would
be
Hi Stefan,
You need to configure Riak to listen on the right interface. You are trying
to hit it from 127.0.0.1, which is only available from the local machine.
If you set web_ip to 0.0.0.0 in app.config for riak_core it will listen on
all interfaces. Then you can try to hit it with a curl
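A fragment of what that app.config change might look like; the port shown is Riak's usual HTTP default, so adjust to your install:

```erlang
%% app.config -- fragment only
{riak_core, [
    {web_ip, "0.0.0.0"},   %% listen on all interfaces, not just localhost
    {web_port, 8098}       %% Riak's default HTTP port
]}
```

After restarting the node, something like `curl -i http://<node-ip>:8098/ping` from another machine should get a response instead of a connection refusal.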
like nginx.
-alexander
On 2010-12-01, Andrew Stone andrew.j.ston...@gmail.com wrote: