Riak Crash

2013-08-09 Thread Lucas Cooper
My entire cluster crashed early this morning. I'm not entirely sure why;
it seems to be something with the indexer or Protocol Buffers (or my use
of them). Here are the various logs: http://egf.me/logs.tar.xz

Any idea what's up? All this crashing is kinda putting me off Riak ._.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[no subject]

2013-08-09 Thread Jeremy Ong
Hi Riak users,

I'm wondering what the best approach to this is. The scenario is that
I have mounted a new drive to the machine and want to have the node
leverage that drive to save data as opposed to the mount point it is
currently writing to.

My current plan is to start a second instance of riak (by changing the
name in the vm.args file) and having it join the cluster, followed by
removing the previous node. Then to repeat this for all nodes for
which the upgrade applies while monitoring the dashboard to see if the
handoffs are occurring appropriately.
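Roughly, per node, something like this (a hypothetical sketch; the node
names are placeholders and the commands assume the Riak 1.4-era
riak-admin cluster CLI):

```shell
# On the machine with the new drive, after giving the second instance a
# fresh -name in vm.args and pointing its data dirs at the new mount:
riak start

# From the new node: join the cluster, then stage the old node's removal.
riak-admin cluster join riak_old@host1
riak-admin cluster leave riak_old@host1
riak-admin cluster plan      # review the staged changes
riak-admin cluster commit    # apply them

# Watch partition handoff drain off the departing node:
riak-admin transfers
```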

Is there an easier or better way, and are there any potential
issues with this approach?

Thanks,
Jeremy



Re:

2013-08-09 Thread Jeremiah Peschka
You *should* be able to

1) stop the node
2) change platform_data_dir and ring_state_dir in the app.config file
3) move data to the new platform_data_dir
4) move ring state to the new ring_state_dir
5) restart the node

This should avoid having to re-write all of your data one node at a time.
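As a rough shell sketch of those steps (the paths, node name, and
backend directory here are assumptions; the data subdirectory is
bitcask or leveldb depending on your storage backend):

```shell
# 1) stop the node
riak stop

# 2) point app.config at the new mount, e.g. in the riak_core section:
#      {platform_data_dir, "/mnt/newdisk/riak"},
#      {ring_state_dir,    "/mnt/newdisk/riak/ring"},
mkdir -p /mnt/newdisk/riak

# 3) + 4) move the backend data and the ring state to the new dirs
mv /var/lib/riak/bitcask /mnt/newdisk/riak/      # or leveldb, per backend
mv /var/lib/riak/ring    /mnt/newdisk/riak/ring
chown -R riak:riak /mnt/newdisk/riak

# 5) restart and wait for KV to come back up
riak start
riak-admin wait-for-service riak_kv riak@127.0.0.1
```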

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop


On Fri, Aug 9, 2013 at 7:23 PM, Jeremy Ong  wrote:

> Hi Riak users,
>
> I'm wondering what the best approach to this is. The scenario is that
> I have mounted a new drive to the machine and want to have the node
> leverage that drive to save data as opposed to the mount point it is
> currently writing to.
>
> My current plan is to start a second instance of riak (by changing the
> name in the vm.args file) and having it join the cluster, followed by
> removing the previous node. Then to repeat this for all nodes for
> which the upgrade applies while monitoring the dashboard to see if the
> handoffs are occurring appropriately.
>
> Is there an easier or better way, and are there any potential
> issues with this approach?
>
> Thanks,
> Jeremy
>