Hi,
Putting Varnish in front of Riak CS nodes seems like a good idea, if the
load concentrates on a small number of videos whose total size fits in
Varnish's memory. Yet I'm not sure how Varnish works in caching.
Putting Varnish between Riak CS and Riak is not a good idea because
they communicate with
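For the first setup (Varnish in front of Riak CS), a minimal VCL sketch for a newer Varnish (4+, so VCL 4.0 syntax); the backend address and the 10-minute TTL are assumptions for illustration, not values from this thread:

```vcl
# default.vcl sketch -- cache successful GETs from a Riak CS node.
# Host, port, and TTL below are illustrative assumptions.
vcl 4.0;

backend riak_cs {
    .host = "127.0.0.1";   # Riak CS listener (assumed)
    .port = "8080";
}

sub vcl_backend_response {
    # Cache successful object reads; everything else falls through
    # to Varnish's defaults.
    if (bereq.method == "GET" && beresp.status == 200) {
        set beresp.ttl = 10m;
    }
}
```

Whether this helps depends on the working set actually fitting in Varnish's cache, as noted above.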
I have a problem installing Riak; I am on a 2013 OSX machine.
git clone https://github.com/basho/riak
cd riak
make rel
cd linking; make export
cc -o prlink.o -c -m32 -Wall -fno-common -pthread -O2 -fPIC -UDEBUG
-DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
Here's another thread with a similar issue:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html
On Tue, Aug 13, 2013 at 7:58 AM, Federico Mosca feg...@gmail.com wrote:
I have a problem installing Riak; I am on a 2013 OSX machine.
git clone
Hi Riak people,
I'm in the process of adding a new node to an aging (1-node) cluster. I
would like to know the preferred incremental upgrade path to get
all my nodes on the latest Riak version. The best scenario would also have
the least downtime. The old node is at Riak version 1.2.1.
I saw it, but how can I remove all the Erlang installations?
The ln -s does not work for me.
2013/8/13 Bhuwan Chawla bhu...@zanthos.com
Here's another thread with a similar issue:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html
On Tue, Aug 13, 2013 at 7:58 AM,
From http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it looks
like you should upgrade to 1.3.2 and then 1.4.1
Depending on how badly you need the extra capacity, it would probably be better
to start by upgrading all nodes and then adding the new one.
--
Jeremiah Peschka -
Having done a similar upgrade, a gotcha to keep in mind:
Note for Secondary Index users
If you use Riak's Secondary Indexes and are upgrading from a version prior
to Riak version 1.3.1, you need to reformat the indexes using the
riak-admin reformat-indexes command
On Tue, Aug 13, 2013 at 8:36
Same here, except that Riak 1.3.2 did that for me automatically. As
Jeremiah mentioned, you should first go to 1.3.2 on all nodes. On each node,
the first time Riak starts it will take some time upgrading the 2i
index storage format; if you see any weirdness then execute
riak-admin
Hi Federico,
Are you building Riak for production or as a development/test environment?
Can I ask why you aren't using the precompiled tarball? [0]
If you are looking for a quick dev/test environment, there is an OSX devrel
[1] launcher on github [2]. We use it all the time to get a devrel
Apologies, I forgot to send the URL to the devrel launcher repo. Here it
is:
https://github.com/basho/riak-dev-cluster
On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree t...@basho.com wrote:
Hi Federico,
Are you building Riak for production or as a development/test environment?
Can I ask why
Hi,
How do I change the filesystem where the Riak CS buckets are stored? Changing
the data_root values in storage_backend is not working as specified in the
FAQ
(http://docs.basho.com/riakcs/latest/cookbooks/faqs/riak-cs/#is-it-possible-to-specify-a-file-system-where-my-r).
When I change
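For context, the data_root settings that FAQ refers to live inside the multi-backend definition in the riak_kv section of /etc/riak/app.config. A sketch based on the Riak CS docs; the /newdisk paths are illustrative, not from this thread:

```erlang
%% app.config excerpt (sketch) -- both backends must point at the
%% new filesystem, and Riak must be restarted afterwards.
{riak_kv, [
    {storage_backend, riak_cs_kv_multi_backend},
    {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
    {multi_backend_default, be_default},
    {multi_backend, [
        {be_default, riak_kv_eleveldb_backend, [
            {data_root, "/newdisk/riak/leveldb"}   %% illustrative path
        ]},
        {be_blocks, riak_kv_bitcask_backend, [
            {data_root, "/newdisk/riak/bitcask"}   %% illustrative path
        ]}
    ]}
]}
```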
Hi All,
On behalf of Basho, I'm excited to announce that Riak CS 1.4.0 is now
official. Riak CS is Basho's open source cloud storage software.
The biggest feature additions are support for the Swift API and Keystone
authentication, which enables CS to be a drop-in storage replacement for
OpenStack
Also, in theory, if you have at least 5 nodes in the cluster, one node
down at a time doesn't stop your cluster from working properly.
You could do the following node by node which I have done several times:
1. Stop Riak on the upgrading node and in another node mark the
upgrading node as
I also followed [1];
anyway, I had two versions of Erlang.
2013/8/13 Todd Tyree t...@basho.com
Apologies, I forgot to send the URL to the devrel launcher repo. Here it
is:
https://github.com/basho/riak-dev-cluster
On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree t...@basho.com wrote:
Hi
Louis-Philippe et al:
You can follow the rolling upgrade procedure to upgrade a node from 1.2 to
1.4.x directly. The note in the instructions only concerns upgrading from 1.0
to 1.4.
No need to stop at 1.3.2.
Thanks,
Charlie Voiselle
On Aug 13, 2013, at 9:23 AM, Guido Medina
Hi!
OS - Debian 6 2.6.32-5-amd64
gcc - version 4.7.2 (GCC)
boost version 1.51
make
...
error: #error Threading support unavaliable: it has been explicitly
disabled with BOOST_DISABLE_THREADS
...
How to solve that error?
Thanks
Hello,
I need to decide what database we will choose for our project. Certainly, we
need only 2 physical nodes (active-standby). Riak is good for us, because it
is Erlang-based, like our project. But it's known that a Riak cluster should
have at least five nodes. I have some problems with my
Hi Hector,
This is what happens, after changing the directories in riak_kv section on
/etc/riak/app.config:
# riak restart
ok
# stanchion restart
ok
# riak-cs start
riak-cs failed to start within 15 seconds,
see the output of 'riak-cs console' for more information.
If you want to
Dilip,
Can you restart Riak with a riak stop then riak start? If this fails a riak
ping, can you please attach a riak console output.
--
John White
On August 13, 2013 at 7:25:51 PM, dilip kumar (dilip_nuta...@yahoo.co.in)
wrote:
Hi Hector,
This is what happens, after changing the directories in
Hi guys,
I am setting up a new Riak cluster and I was wondering if there is any
drawback of increasing the LevelDB blocksize from 4K to 64K. The reason is
that we have all of the values way bigger than 4K and I guess from the
performance point of view it would make sense to increase the block
Istvan,
block_size is not a size, it is a threshold. Data is never split across
blocks. A single block contains one or more key/value pairs. leveldb starts a
new block only when the total size of all key/values in the current block
exceeds the threshold.
You must set block_size to a
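Matthew's threshold behavior can be sketched in a few lines. This is a simplified model for intuition, not leveldb's actual code:

```python
# Simplified model of leveldb block packing: a block is closed only
# once its accumulated size exceeds the block_size threshold, so a
# single large value is never split across blocks.
def pack_blocks(value_sizes, block_size):
    blocks, current, current_bytes = [], [], 0
    for size in value_sizes:
        current.append(size)
        current_bytes += size
        if current_bytes > block_size:  # threshold exceeded: close block
            blocks.append(current)
            current, current_bytes = [], 0
    if current:
        blocks.append(current)
    return blocks

# With a 4K threshold, a 64K value still lands whole in one block:
print(pack_blocks([65536], 4096))              # [[65536]]
print(pack_blocks([1000, 1000, 3000, 500], 4096))  # [[1000, 1000, 3000], [500]]
```

The takeaway: values bigger than block_size are not harmed by a small threshold; the threshold only controls when a block of accumulated key/value pairs is closed.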
An interesting hybrid that I'm coming around to seems to be using a Unix
release - OmniOS has an AMI, for instance - and ZFS. With a large-enough
store, I can run without EBS on my nodes, and have a single ZFS backup
instance with a huge amount of slow-EBS storage for accepting ZFS snapshots.
I'm
That *does* sound like an interesting way to do it. Kinda
best-of-both-worlds, depending on your backup schemes and whatnot. I'm
definitely curious to hear about how it works out for you.
-B.
On Tue, Aug 13, 2013 at 4:03 PM, Dave Martorana d...@flyclops.com wrote:
An interesting hybrid that
Hi Matthew,
Thank you for the explanation.
I am experimenting with different block size and making sure I have at
least 100G data on disk for the tests.
I.
On Tue, Aug 13, 2013 at 12:11 PM, Matthew Von-Maszewski
matth...@basho.comwrote:
Istvan,
block_size is not a size, it is a
Brady Wetherington wrote:
First off - I know 5 instances is the magic number of instances to have.
If I understand the thinking here, it's that at the default redundancy
level ('n'?) of 3, it is most likely to start getting me some scaling
(e.g., performance just that of a single node), and
** The following is copied from Basho's leveldb wiki page:
https://github.com/basho/leveldb/wiki/Riak-tuning-1
Summary:
leveldb has a higher read and write throughput in Riak if the Erlang scheduler
count is limited to half the number of CPU cores. Tests have demonstrated
improvements of
It seems Riak does not like the leveldb block_size being changed to 64k.
App config:
app.config: {sst_block_size, 65536},
basho_bench logs:
18:04:38.010 [info]
When you say CPU does that mean logical CPU core? Or is this actually
referring to physical CPU cores?
E.g. On my laptop with 4 physical cores + HyperThreading, should I set +S
to +S 4:4
You hint that it doesn't matter, but I just wanted to trick you into
explicitly saying something.
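For reference, the scheduler count is an Erlang VM flag set in vm.args; the 4:4 value below is the one from the question above, and whether to count logical or physical cores is exactly what's being asked:

```
## /etc/riak/vm.args excerpt (illustrative)
## +S Schedulers:SchedulersOnline -- per the leveldb tuning note,
## set this to half the number of CPU cores.
+S 4:4
```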
---
On August 13, 2013 10:20:48 PM Brady Wetherington wrote:
One thing that I *think* I've figured out is that the number of
replicas you can lose and stay up is actually n-w for writes, and n-r for
reads -
So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
means
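Brady's n-w / n-r arithmetic can be checked directly. This is plain quorum arithmetic, not Riak code:

```python
# An operation needing a quorum of q out of n replicas can tolerate
# losing n - q replicas and still succeed.
def tolerable_losses(n, quorum):
    return n - quorum

n, r, w = 3, 2, 2
print(tolerable_losses(n, w))  # 1: writes survive losing 1 replica
print(tolerable_losses(n, r))  # 1: reads survive losing 1 replica
# So losing two replicas at once (e.g. an AZ failure taking out 2 of
# the 3) breaks both r=2 reads and w=2 writes for keys on those replicas.
```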