Re: Riak Crash
It's happened again. The crash and error logs look pretty much the same.

On Sat, Aug 10, 2013 at 8:17 AM, Lucas Cooper bobobo1...@gmail.com wrote: I had a crash of an entire cluster early this morning. I'm not entirely sure why; it seems to be something with the indexer or Protocol Buffers (or my use of them). Here are the various logs: http://egf.me/logs.tar.xz Any idea what's up? All this crashing is kinda putting me off Riak ._.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Riak Crash
And another one... I thought the point of Erlang was that this kind of thing wasn't meant to happen ._.
Re: Riak Crash
I won't have time to look at these logs in more detail until later, but are you sure you're not getting OOM-killed? Usually there is some sign of what happened when a node went down in console.log as well, but it just sort of ends. Check the syslog for OS messages; you might find something there.
Re: Riak Crash
Yes, Search indexing is enabled.

On 10/08/2013 11:27 PM, Russell Brown russell.br...@mac.com wrote: Are you using riak search?
Re: Riak Crash
Are you using riak search?
Re: Riak Crash
Hey Lucas,

Like Evan, I haven't had time to dig into the logs. Just so we know what the baseline is here:

* How many nodes of Riak?
* What version of Riak?
* What type of hardware?
* What's your open files limit [1]?

Mark

[1] http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/
Re:
Just a quick follow-up: I did this exact approach and things went smoothly as expected. I had forgotten that stopping a node does not mark the node as down, indicating that vnodes should be handed off. One last step I would add is that the log directory also needs to be changed in the vm.args file, as that is where the erl_crash.dump file gets written (if the log dir is moved as well).

Thanks,
Jeremy

On Fri, Aug 9, 2013 at 7:57 PM, Jeremiah Peschka jeremiah.pesc...@gmail.com wrote:

You *should* be able to:

1) stop the node
2) change platform_data_dir and ring_state_dir in the app.config file
3) move data to the new platform_data_dir
4) move ring state to the new ring_state_dir
5) restart the node

This should avoid having to re-write all of your data one node at a time.

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Fri, Aug 9, 2013 at 7:23 PM, Jeremy Ong jer...@quarkgames.com wrote:

Hi Riak users,

I'm wondering what the best approach to this is. The scenario is that I have mounted a new drive to the machine and want the node to use that drive to store data instead of the mount point it is currently writing to.

My current plan is to start a second instance of Riak (by changing the name in the vm.args file) and have it join the cluster, followed by removing the previous node. Then I'd repeat this for every node the upgrade applies to, while monitoring the dashboard to see that the handoffs occur appropriately.

Is there an easier or better way, and are there any potential issues with this approach?

Thanks,
Jeremy
Java client - conflict resolver on both fetch() and store()?
Hi,

I've just rolled up my sleeves and have started to make my application more robust with conflict resolution. I am currently using a @RiakVClock in my POJO (I need to think more about whether the read/modify/write approach is preferable or whether I'd have to rearchitect things).

I read in the Riak Handbook the recommendation that conflicts are best resolved on read - not write - however the example App.java snippet on the Storing data in Riak page (https://github.com/basho/riak-java-client/wiki/Storing-data-in-riak#appjava) in the Java client's doco uses a resolver on both the store() and fetch() operations. Indeed, if I don't specify my conflict resolver in my store(), things blow up (in my unit test, mind - I'm still getting my head around the whole area, so my test may be a bit contrived). However, when I use it in both places, my conflicts are being resolved twice. Is this anticipated?

My store is:

bucket.store(record).returnBody(true).withoutFetch().withResolver(myConflictResolver);

and my fetch is:

bucket.fetch(id, Record.class).withResolver(myConflictResolver).execute();

The order of operations in my test is:

1. Store new record
2. Fetch the record as firstRecord
3. Fetch the record as secondRecord
4. Modify a field on firstRecord and secondRecord
5. Save firstRecord
6. Save secondRecord - this invokes my resolver with two siblings
7. Read record - this also invokes my resolver with the two siblings

Am I missing something? Or is this what's supposed to happen? I'm not too worried - the double-handling is hardly that intensive - but I'm keen to get it right.

Thanks in advance,
Matt
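For reference, the resolver the client invokes in both places is just a pure function from a collection of siblings to a single value. Here is a minimal, self-contained sketch of one common rule (last write wins); the Record class and its timestamp field are hypothetical stand-ins, not Matt's actual POJO or the riak-java-client API:

```java
import java.util.Arrays;
import java.util.Collection;

// Hypothetical domain object; the real POJO is not shown in the thread.
class Record {
    final String value;
    final long modified; // last-modified timestamp, used as the merge rule

    Record(String value, long modified) {
        this.value = value;
        this.modified = modified;
    }
}

public class LastWriteWinsResolver {
    // Shaped like a resolve(Collection<T>) method: given all siblings,
    // return the single surviving value (null when there are none).
    static Record resolve(Collection<Record> siblings) {
        Record winner = null;
        for (Record r : siblings) {
            if (winner == null || r.modified > winner.modified) {
                winner = r;
            }
        }
        return winner;
    }

    public static void main(String[] args) {
        Collection<Record> siblings = Arrays.asList(
                new Record("first edit", 100L),
                new Record("second edit", 200L));
        // The newer sibling wins.
        System.out.println(resolve(siblings).value);
    }
}
```

Because the same function runs wherever siblings are encountered, it being called on both store() and fetch() is harmless as long as it is deterministic.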
Re: Java client - conflict resolver on both fetch() and store()?
I'm sure Roach will correct me if I'm off base, but I believe the store operation does a fetch and resolve before writing. I think the ideal way to do that is to create a Mutation<T> (T being your POJO) as well, in which case it's less of a store and more of a fetch-modify-write. The way to skip the fetch/modify is to use the withoutFetch() option on the operation builder.

--
Sean Cribbs
s...@basho.com
Software Engineer
Basho Technologies, Inc.
http://basho.com/
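Sean's point - that a default store is really fetch-resolve-mutate-write, and withoutFetch() skips the first two steps - can be sketched without the client library. Everything below (the sibling list, the method names) is a toy model for illustration, not the riak-java-client API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BinaryOperator;
import java.util.function.UnaryOperator;

// Toy model of one key's stored siblings; purely illustrative.
public class StoreFlowSketch {
    static List<String> siblings = new ArrayList<>();

    // Default store path: fetch existing siblings, resolve them down to one
    // value, apply the mutation, then write the result back. The resolver
    // runs here, before the write - which is why it fires on store().
    static void storeWithFetch(UnaryOperator<String> mutation,
                               BinaryOperator<String> resolver) {
        String current = siblings.stream().reduce(resolver).orElse("");
        siblings.clear();
        siblings.add(mutation.apply(current));
    }

    // withoutFetch() path: write the value directly. Concurrent writers can
    // now create siblings, which get resolved on a later fetch instead.
    static void storeWithoutFetch(String value) {
        siblings.add(value);
    }

    public static void main(String[] args) {
        storeWithoutFetch("a");
        storeWithoutFetch("b"); // two siblings now exist
        // Resolve (lexicographically-greater wins), then mutate by appending "!".
        storeWithFetch(s -> s + "!", (x, y) -> x.compareTo(y) > 0 ? x : y);
        System.out.println(siblings); // prints [b!]
    }
}
```

This also shows why, with withoutFetch() plus a vector clock, resolution simply shifts entirely onto subsequent reads.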
Re: Java client - conflict resolver on both fetch() and store()?
Thanks Sean. I understand that the fetch-modify-write is the approach that I'm *not* taking in this case, as I am using withoutFetch() and setting a @RiakVClock in my POJO. I just need to weigh up whether the ideal way is sufficiently better to rejig bits of my code - but I think that's a different issue :)

M

--
Matt Painter
m...@deity.co.nz
+64 21 115 9378