Thought so. Thanks Aaron !
On Thu, May 9, 2013 at 6:09 PM, aaron morton <aa...@thelastpickle.com> wrote:

> This raises an important question, where does Cassandra get the time
> information from ?
>
> http://docs.oracle.com/javase/6/docs/api/java/lang/System.html
> Normally milliseconds; not sure if 1.0.12 may use nanoTime(), which is
> less reliable on some VMs.
>
> and is it required (I know it is highly highly advisable to) to keep
> clocks in sync, any suggestions/best practices on how to keep the
> clocks in sync ?
>
> http://en.wikipedia.org/wiki/Network_Time_Protocol
>
> Hope that helps.
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/05/2013, at 9:16 AM, srmore <comom...@gmail.com> wrote:
>
>> Thanks Rob !
>>
>> Tried the steps; that did not work, however I was able to resolve the
>> problem by syncing the clocks. The thing that confuses me is that the
>> FAQ says "Before 0.7.6, this can also be caused by cluster system
>> clocks being substantially out of sync with each other". The version
>> I am using was 1.0.12.
>>
>> This raises an important question, where does Cassandra get the time
>> information from ? and is it required (I know it is highly highly
>> advisable to) to keep clocks in sync, any suggestions/best practices
>> on how to keep the clocks in sync ?
>>
>> /srm
>>
>> On Thu, May 9, 2013 at 1:58 PM, Robert Coli <rc...@eventbrite.com> wrote:
>>
>>> On Wed, May 8, 2013 at 5:40 PM, srmore <comom...@gmail.com> wrote:
>>>
>>>> After running the commands, I get back to the same issue. Cannot
>>>> afford to lose the data so I guess this is the only option for me.
>>>> And unfortunately I am using 1.0.12 (cannot upgrade as of now). Any
>>>> ideas on what might be happening or any pointers will be greatly
>>>> appreciated.
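Since "how do I keep the clocks in sync" comes up often on this list, here is a rough dry-run sketch of checking NTP health across a ring. The hostnames, the ssh wrapper, and the service name are all assumptions for illustration, not a recommendation for any particular setup; `say` only prints each command, so nothing is executed as written.

```shell
# Dry-run sketch: verify NTP is steering the clock on each node.
# Hostnames are hypothetical; "say" echoes the command instead of running it.
# Replace the echo with "$@" to actually execute.
say() { echo "+ $*"; }

for host in cass1 cass2 cass3; do
  # ntpq -pn lists peers with their current offset; large or growing
  # offsets mean the node is drifting
  say ssh "$host" ntpq -pn
  # make sure ntpd is actually running, so the clock is corrected
  # continuously rather than stepped occasionally
  say ssh "$host" service ntpd status
done
```

Continuous correction matters for Cassandra because writes are ordered by client/coordinator timestamps; a node stepped backwards by a large one-shot correction can silently lose the "last write wins" race.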
>>> If you can afford downtime on the cluster, the solution to this
>>> problem with the highest chance of success is:
>>>
>>> 1) dump the existing schema from a good node
>>> 2) nodetool drain on all nodes
>>> 3) stop cluster
>>> 4) move schema and migration CF tables out of the way on all nodes
>>> 5) start cluster
>>> 6) re-load schema, being careful to explicitly check for schema
>>>    agreement on all nodes between schema modifying statements
>>>
>>> In many/most cases of schema disagreement, people try the FAQ
>>> approach and it doesn't work, and they end up being forced to do the
>>> above anyway. In general, if you can tolerate the downtime, you
>>> should save yourself the effort and just do the above process.
>>>
>>> =Rob
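For anyone finding this thread later, Rob's six steps might look roughly like the following on a 1.0.x install. This is a hedged sketch only: the data directory, hostnames, backup path, and init-script name are assumptions, and `run` merely echoes each command so nothing here executes as written. Verify the commands against your own install before removing the echo.

```shell
# Dry-run sketch of the schema-reset procedure described above (Cassandra
# 1.0.x, where the schema lives in the system keyspace's Schema and
# Migrations column families). Paths/hosts are assumptions.
run() { echo "+ $*"; }              # replace the echo with "$@" to execute

DATA=/var/lib/cassandra/data        # assumed data directory

# 1) dump the existing schema from a good node (capture the output to a file)
run cassandra-cli -h goodnode       # then run: show schema;
# 2) flush memtables and stop accepting writes, on every node
run nodetool -h node1 drain
# 3) stop the whole cluster
run service cassandra stop
# 4) move the Schema and Migrations sstables aside, on every node
run mkdir -p /var/tmp/schema-backup
run mv "$DATA/system/Schema"* "$DATA/system/Migrations"* /var/tmp/schema-backup/
# 5) start the cluster
run service cassandra start
# 6) re-load the dumped schema one statement at a time, checking that
#    "describe cluster" reports a single schema version between statements
run cassandra-cli -h node1 -f schema.txt
```

Draining before the stop matters: it flushes memtables and empties the commitlog, so no pending schema mutations get replayed when the nodes come back up in step 5.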