cassandra node is not starting
Hi,

During restart, Cassandra is failing to start. The error is:

ERROR [main] 2012-01-01 05:03:42,903 AbstractCassandraDaemon.java (line 354) Exception encountered during startup
java.lang.AssertionError: attempted to delete non-existing file AttractionUserIdx.AttractionUserIdx_09partition_idx-h-1-Data.db
        at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:49)
        at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:44)
        at org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:133)
        at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:355)
        at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:402)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:174)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:337)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)

Can someone tell me how to recover from that?

Thanks,
Michael
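The assertion fires while startup scrub tries to clean up the components of one SSTable generation and finds part of the set already gone. One common recovery (an assumption to verify on your version, and only after taking a backup) is to move the remaining component files of that generation out of the data directory so the scrub no longer tries to delete them. A minimal sketch of that file shuffle; the data-directory path and generation prefix are placeholders:

```java
import java.io.File;

public class MoveOrphanedSstable {
    // Move every component file of one SSTable generation (e.g. "...-h-1-*")
    // into a backup directory so Cassandra's startup scrub no longer tries to
    // delete a half-missing set. Paths and prefix below are placeholders.
    static int moveGeneration(File dataDir, String generationPrefix, File backupDir) {
        backupDir.mkdirs();
        int moved = 0;
        File[] files = dataDir.listFiles();
        if (files == null) return 0;
        for (File f : files) {
            if (f.isFile() && f.getName().startsWith(generationPrefix)) {
                if (f.renameTo(new File(backupDir, f.getName()))) moved++;
            }
        }
        return moved;
    }

    public static void main(String[] args) {
        // Hypothetical keyspace data directory; adjust for your installation.
        File dataDir = new File(args.length > 0 ? args[0] : "/var/lib/cassandra/data/MyKeyspace");
        File backup = new File(dataDir, "orphaned-backup");
        int n = moveGeneration(dataDir, "AttractionUserIdx.AttractionUserIdx_09partition_idx-h-1-", backup);
        System.out.println("moved " + n + " component files");
    }
}
```

Since this is a secondary-index SSTable, the index should be rebuilt from the base data once the node is up again.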
Re: How to convert start_token,end_token to real key value?
A token is an MD5 hash (a one-way hash). You cannot compute the key given a token. You can, however, compute the MD5 hash of your keys and compare them with tokens.

-Naren

On Sat, Dec 31, 2011 at 2:07 PM, ravikumar visweswara <talk2had...@gmail.com> wrote:

> Hello All,
> I have a requirement to copy data from Cassandra to Hadoop from/to a specific key. This is supported in 1.0.0, but I am using Cassandra version 0.7.1 and Hadoop version 20.2. In my MapReduce job (InputFormat class) I have an object of TokenRange. I need to filter certain ranges based on some exclusion rules; I have a readable key range to include. Could someone help me on how to convert start_token and end_token to a readable format and compare them with my input key range? I know that 1.0.0 has better capabilities to specify key ranges in Hadoop MapReduce, but for now I will have to work with 0.7.1.
> Thanks and Regards
> Ravi

--
Narendra Sharma
Software Engineer
http://www.aeris.com
http://www.persistentsys.com
http://narendrasharma.blogspot.com/
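Naren's suggestion can be sketched in code. With RandomPartitioner, a key's token is (as I understand Cassandra 0.7's behavior; verify against your version) the absolute value of the key's MD5 digest read as a BigInteger, so you can hash each input key and test it against a TokenRange yourself. The range-membership convention here, (start, end] with wrap-around, is an assumption to check:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class KeyToToken {
    // RandomPartitioner-style token: absolute value of the key's MD5 digest
    // interpreted as a BigInteger (assumption based on 0.7-era behavior).
    static BigInteger token(byte[] key) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(key);
        return new BigInteger(digest).abs();
    }

    // A token range (start, end] covers a token t when start < t <= end,
    // wrapping around the ring when start >= end.
    static boolean inRange(BigInteger t, BigInteger start, BigInteger end) {
        if (start.compareTo(end) < 0)
            return t.compareTo(start) > 0 && t.compareTo(end) <= 0;
        return t.compareTo(start) > 0 || t.compareTo(end) <= 0; // wrapping range
    }

    public static void main(String[] args) throws Exception {
        BigInteger t = token("mykey".getBytes("UTF-8"));
        System.out.println("token(mykey) = " + t);
    }
}
```

This lets you decide, per TokenRange, whether any of your known keys hash into it, even though you cannot go the other way from token to key.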
Composite column names: How much space do they occupy ?
I am storing composite column names which are made up of two integer components. However, I am shocked after seeing the storage overhead of these. I just tried out a composite name (with a single integer component):

Composite composite = new Composite();
composite.addComponent(-165376575, is);
System.out.println(CS.toByteBuffer(composite).array().length); // the result is 256

After writing and then reading back this composite column from Cassandra:

System.out.println(CS.toByteBuffer(readColumn.getName()).array().length); // the result is 91

How much is the storage overhead really? I am quite sure I'm making a mistake in reading the actual values.
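The likely mistake is measuring with array().length: a ByteBuffer's array() returns its whole backing array, which can be much larger than the bytes actually written, so 256 and 91 are buffer capacities, not serialized sizes; remaining() gives the real length. A minimal sketch, assuming (to be verified against Hector's encoding) that a composite component is serialized as a 2-byte length, the value bytes, and a 1-byte end-of-component marker, so one 4-byte int costs 7 bytes:

```java
import java.nio.ByteBuffer;

public class BufferLengthDemo {
    // array().length reports the capacity of the buffer's backing array, not
    // the number of serialized bytes; remaining() is the right measure.
    // Encoding assumption (verify against Hector): 2-byte component length,
    // the 4-byte int value, then a 1-byte end-of-component marker = 7 bytes.
    static ByteBuffer encodeIntComponent(int value) {
        ByteBuffer buf = ByteBuffer.allocate(256); // oversized scratch buffer
        buf.putShort((short) 4);   // component length
        buf.putInt(value);         // the component value itself
        buf.put((byte) 0);         // end-of-component marker
        buf.flip();                // limit = bytes written, position = 0
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = encodeIntComponent(-165376575);
        System.out.println("array().length = " + buf.array().length); // 256
        System.out.println("remaining()    = " + buf.remaining());    // 7
    }
}
```

Under that assumption, a two-int composite name would occupy about 14 bytes on the wire, not hundreds.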
Re: How to convert start_token,end_token to real key value?
Thank you Naren. If key k1 < k2 (lexicographically), will md5(k1) < md5(k2)?

- R

On Sun, Jan 1, 2012 at 7:07 PM, Narendra Sharma <narendra.sha...@gmail.com> wrote:

> A token is an MD5 hash (a one-way hash). You cannot compute the key given a token. You can, however, compute the MD5 hash of your keys and compare them with tokens.
> -Naren

--
Narendra Sharma
Software Engineer
http://www.aeris.com
http://www.persistentsys.com
http://narendrasharma.blogspot.com/
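The answer to this question is no: MD5 is not order-preserving, so k1 < k2 lexicographically implies nothing about how md5(k1) and md5(k2) compare, which is exactly why RandomPartitioner cannot serve ordered key-range scans. A small sketch that hashes the already-sorted keys "a" through "z" and checks whether their tokens come out in the same order:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class Md5OrderDemo {
    // RandomPartitioner-style token (assumption based on 0.7-era behavior):
    // absolute value of the MD5 digest read as a BigInteger.
    static BigInteger token(String key) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(key.getBytes("UTF-8"));
        return new BigInteger(digest).abs();
    }

    // Hash the lexicographically sorted keys "a".."z" and report whether the
    // resulting tokens are also monotonically increasing. In practice MD5
    // scatters them, so token order does not follow key order.
    static boolean tokensSortedLikeKeys() throws Exception {
        BigInteger prev = null;
        boolean monotonic = true;
        for (char c = 'a'; c <= 'z'; c++) {
            BigInteger t = token(String.valueOf(c));
            if (prev != null && t.compareTo(prev) < 0) monotonic = false;
            prev = t;
        }
        return monotonic;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("token order matches key order: " + tokensSortedLikeKeys());
    }
}
```

So comparing md5(k1) with md5(k2) only tells you where each key lands on the ring, not their relative key order; to filter a readable key range you must hash each candidate key individually, as Naren suggested.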