[jira] [Updated] (CASSANDRA-5574) Add trigger examples
[ https://issues.apache.org/jira/browse/CASSANDRA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vijay updated CASSANDRA-5574:
Attachment: 0001-CASSANDRA-5574.patch

Add trigger examples

Key: CASSANDRA-5574
URL: https://issues.apache.org/jira/browse/CASSANDRA-5574
Project: Cassandra
Issue Type: Test
Reporter: Vijay
Assignee: Vijay
Priority: Trivial
Attachments: 0001-CASSANDRA-5574.patch

Since CASSANDRA-1311 is committed, we need some example code to show the power and usage of triggers, similar to the ones in the examples directory.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5574) Add trigger examples
[ https://issues.apache.org/jira/browse/CASSANDRA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vijay updated CASSANDRA-5574:
Reviewer: jbellis
[jira] [Updated] (CASSANDRA-5574) Add trigger examples
[ https://issues.apache.org/jira/browse/CASSANDRA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vijay updated CASSANDRA-5574:
Attachment: (was: 0001-CASSANDRA-5574.patch)
[jira] [Updated] (CASSANDRA-5574) Add trigger examples
[ https://issues.apache.org/jira/browse/CASSANDRA-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vijay updated CASSANDRA-5574:
Attachment: 0001-CASSANDRA-5574.patch
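The patch itself is attached rather than quoted, so as a hedged illustration of what a trigger example might look like, here is a self-contained sketch of the trigger pattern. ITrigger and RowMutation below are local stand-ins loosely modeled on the 2.0-era trigger API (the real types live in org.apache.cassandra.triggers and org.apache.cassandra.db), and AuditTrigger is a hypothetical example, not the one in the patch.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

// Self-contained sketch of the trigger pattern: given an update, a trigger
// returns extra mutations to apply along with it.
public class AuditTriggerSketch {
    // Stand-in for a mutation the trigger asks Cassandra to apply in
    // addition to the original update.
    static class RowMutation {
        final String table;
        final String key;
        RowMutation(String table, String key) { this.table = table; this.key = key; }
    }

    // Stand-in for the trigger contract.
    interface ITrigger {
        Collection<RowMutation> augment(String key, Map<String, String> update);
    }

    // Example trigger: mirror every updated column into an "audit" table.
    static class AuditTrigger implements ITrigger {
        @Override
        public Collection<RowMutation> augment(String key, Map<String, String> update) {
            List<RowMutation> extra = new ArrayList<>();
            for (String column : update.keySet())
                extra.add(new RowMutation("audit", key + ":" + column));
            return extra;
        }
    }

    public static void main(String[] args) {
        Collection<RowMutation> out = new AuditTrigger().augment("user1", Map.of("name", "vijay"));
        System.out.println(out.size()); // one audit mutation per updated column
    }
}
```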
[jira] [Commented] (CASSANDRA-5383) Windows 7 deleting/renaming files problem
[ https://issues.apache.org/jira/browse/CASSANDRA-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672893#comment-13672893 ]

Marcus Eriksson commented on CASSANDRA-5383:
I'm stumped (and can't really debug since I don't own a Windows machine). Anyone with Windows skills want to take a look?

Windows 7 deleting/renaming files problem

Key: CASSANDRA-5383
URL: https://issues.apache.org/jira/browse/CASSANDRA-5383
Project: Cassandra
Issue Type: Bug
Components: Tests
Affects Versions: 2.0
Reporter: Ryan McGuire
Assignee: Marcus Eriksson
Fix For: 2.0.1
Attachments: 0001-CASSANDRA-5383-v2.patch, 0001-use-Java7-apis-for-deleting-and-moving-files-and-cre.patch, 5383_patch_v2_system.log, test_log.5383.patch_v2.log.txt

Two unit tests are failing on Windows 7 due to errors in renaming/deleting files: org.apache.cassandra.db.ColumnFamilyStoreTest:
{code}
[junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
[junit] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.904 sec
[junit]
[junit] - Standard Error -
[junit] ERROR 13:06:46,058 Unable to delete build\test\cassandra\data\Keyspace1\Indexed2\Keyspace1-Indexed2.birthdate_index-ja-1-Data.db (it will be removed on server restart; we'll also retry after GC)
[junit] ERROR 13:06:48,508 Fatal exception in thread Thread[NonPeriodicTasks:1,5,main]
[junit] java.lang.RuntimeException: Tried to hard link to file that does not exist build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-7-Statistics.db
[junit] at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:72)
[junit] at org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1057)
[junit] at org.apache.cassandra.db.DataTracker$1.run(DataTracker.java:168)
[junit] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
[junit] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
[junit] at java.util.concurrent.FutureTask.run(FutureTask.java:138)
[junit] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
[junit] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
[junit] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
[junit] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit] - ---
[junit] Testcase: testSliceByNamesCommandOldMetatada(org.apache.cassandra.db.ColumnFamilyStoreTest): Caused an ERROR
[junit] Failed to rename build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp to build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
[junit] java.lang.RuntimeException: Failed to rename build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp to build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
[junit] at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
[junit] at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:122)
[junit] at org.apache.cassandra.db.compaction.LeveledManifest.mutateLevel(LeveledManifest.java:575)
[junit] at org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:589)
[junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetatada(ColumnFamilyStoreTest.java:885)
[junit]
[junit]
[junit] Testcase: testRemoveUnifinishedCompactionLeftovers(org.apache.cassandra.db.ColumnFamilyStoreTest): Caused an ERROR
[junit] java.io.IOException: Failed to delete c:\Users\Ryan\git\cassandra\build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
[junit] FSWriteError in build\test\cassandra\data\Keyspace1\Standard3\Keyspace1-Standard3-ja-2-Data.db
[junit] at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:112)
[junit] at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:103)
[junit] at org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:139)
[junit] at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:507)
[junit] at
{code}
[jira] [Updated] (CASSANDRA-4476) Support 2ndary index queries with only non-EQ clauses
[ https://issues.apache.org/jira/browse/CASSANDRA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-4476:
Fix Version/s: (was: 2.0) 2.1

Pushing this to 2.1 - won't have time to fix now, prod issue overflow.

Support 2ndary index queries with only non-EQ clauses

Key: CASSANDRA-4476
URL: https://issues.apache.org/jira/browse/CASSANDRA-4476
Project: Cassandra
Issue Type: Improvement
Components: API, Core
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor
Fix For: 2.1

Currently, a query that uses 2ndary indexes must have at least one EQ clause (on an indexed column). Given that indexed CFs are local (and use a LocalPartitioner that orders the rows by the type of the indexed column), we should extend 2ndary indexes to allow querying indexed columns even when no EQ clause is provided. As far as I can tell, the main problem to solve for this is to update KeysSearcher.highestSelectivityPredicate(), i.e. how do we estimate the selectivity of non-EQ clauses? I note however that if we can do that estimate reasonably accurately, this might provide better performance even for index queries that have both EQ and non-EQ clauses, because some non-EQ clauses may have a much better selectivity than EQ ones (say you index both the user country and birth date; for SELECT * FROM users WHERE country = 'US' AND birthdate > 'Jan 2009' AND birthdate < 'July 2009', you'd better use the birthdate index first).
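The open question in the ticket is how to estimate selectivity for non-EQ clauses; the country/birthdate example shows why the most selective clause should win regardless of operator. A minimal sketch of that selection step (the Clause type and the row-count estimates are hypothetical inputs, not the real KeysSearcher code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch: pick the clause expected to match the fewest rows, whether or not
// it is an EQ clause. The hard part the ticket discusses -- producing the
// estimates for non-EQ clauses -- is assumed to have happened already.
public class SelectivitySketch {
    static class Clause {
        final String column;
        final String op;
        final long estimatedRows; // estimated number of matching rows
        Clause(String column, String op, long estimatedRows) {
            this.column = column;
            this.op = op;
            this.estimatedRows = estimatedRows;
        }
    }

    // The most selective clause is the one with the smallest estimate.
    static Clause mostSelective(List<Clause> clauses) {
        return Collections.min(clauses, Comparator.comparingLong(c -> c.estimatedRows));
    }

    public static void main(String[] args) {
        // Sylvain's example: the non-EQ birthdate range beats country = 'US'.
        List<Clause> clauses = Arrays.asList(
                new Clause("country", "=", 50_000_000L),
                new Clause("birthdate", "range", 400_000L));
        System.out.println(mostSelective(clauses).column); // birthdate
    }
}
```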
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673060#comment-13673060 ]

Kévin LOVATO commented on CASSANDRA-5424:
We just applied 1.2.5 on our cluster and the repair hanging is fixed, but the -pr is still not working as expected. Our cluster has two datacenters, let's call them dc1 and dc2. We created a Keyspace Test_Replication with replication factor { dc1: 3 } (no info for dc2) and ran a nodetool repair Test_Replication (that used to hang) on dc2, and it exited saying there was nothing to do (which is OK). Then we changed the replication factor to { dc1: 3, dc2: 3 } and started a nodetool repair -pr Test_Replication on cassandra11@dc2, which output this:
{code}
user@cassandra11:~$ nodetool repair -pr Test_Replication
[2013-06-03 13:54:53,948] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 13:54:53,985] Repair session 676c00f0-cc44-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 13:54:53,985] Repair command #1 finished
{code}
But even after flushing the Keyspace, there was no data on the server.
We then ran a full repair:
{code}
user@cassandra11:~$ nodetool repair Test_Replication
[2013-06-03 14:01:56,679] Starting repair command #2, repairing 6 ranges for keyspace Test_Replication
[2013-06-03 14:01:57,260] Repair session 63632d70-cc45-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 14:01:57,260] Repair session 63650230-cc45-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035243,113427455640312821154458202477256070484] finished
[2013-06-03 14:01:57,260] Repair session 6385d0a0-cc45-11e2-bfd5-3d9212e452cc for range (1,56713727820156410577229101238628035242] finished
[2013-06-03 14:01:57,260] Repair session 639f7320-cc45-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035242,56713727820156410577229101238628035243] finished
[2013-06-03 14:01:57,260] Repair session 63af51a0-cc45-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070484,113427455640312821154458202477256070485] finished
[2013-06-03 14:01:57,295] Repair session 63b12660-cc45-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070485,0] finished
[2013-06-03 14:01:57,295] Repair command #2 finished
{code}
After which we could find the data on dc2 as expected. So it seems that -pr is still not working as expected, or maybe we're doing/understanding something wrong.
nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

Key: CASSANDRA-5424
URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
Project: Cassandra
Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
Fix For: 1.2.5
Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt

Commands follow, but the TL;DR of it: range (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 node and the .236 node until I run a repair, no -pr, on .38. It seems like primary range calculation doesn't take schema into account, but deciding who to ask for merkle trees from does.
{noformat}
Address        DC         Rack   Status  State   Load       Owns    Token
                                                                    127605887595351923798765477786913079296
10.72.111.225  Cassandra  rack1  Up      Normal  455.87 KB  25.00%  0
10.2.29.38     Analytics  rack1  Up      Normal  40.74 MB   25.00%  42535295865117307932921825928971026432
10.46.113.236  Analytics  rack1  Up      Normal  20.65 MB   50.00%  127605887595351923798765477786913079296

create keyspace Keyspace1 with placement_strategy = 'NetworkTopologyStrategy' and strategy_options = {Analytics : 2} and durable_writes = true;
---
# nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
[2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for keyspace Keyspace1
[2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e for range (0,42535295865117307932921825928971026432] finished
[2013-04-03 15:47:00,881] Repair command #1 finished
root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will sync a1/10.2.29.38, /10.46.113.236 on range (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from /10.46.113.236
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from a1/10.2.29.38
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully synced
INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed successfully
root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
root@ip-10-72-111-225:/home/ubuntu#
---
# nodetool -h 10.46.113.236 repair -pr Keyspace1 Standard1
[2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for keyspace Keyspace1
[2013-04-03 15:48:02,032] Repair session dcb91540-9c75-11e2--a839ee2ccbef for range
{noformat}
[jira] [Comment Edited] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673060#comment-13673060 ]

Kévin LOVATO edited comment on CASSANDRA-5424 at 6/3/13 12:21 PM, adding:
(I was not sure if I should open a new ticket or comment this one so please let me know if I should move it)
[jira] [Commented] (CASSANDRA-5426) Redesign repair messages
[ https://issues.apache.org/jira/browse/CASSANDRA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673105#comment-13673105 ]

Jason Brown commented on CASSANDRA-5426:
In StreamingRepairTask.initiateStreaming(), there's this block:
{code}
try
{
    ...
    StreamOut.transferSSTables(outsession, sstables, request.ranges, OperationType.AES);
    // request ranges from the remote node
    StreamIn.requestRanges(request.dst, desc.keyspace, Collections.singleton(cfstore), request.ranges, this, OperationType.AES);
}
catch (Exception e) ...
{code}
Is there any value in putting the StreamIn.requestRanges() in a separate try block and not (immediately) failing if StreamOut has a problem? Then we could potentially make some forward progress (for the StreamIn) even if StreamOut fails. I'll note that 1.2 has the same try/catch as Yuki's new work, so it has not changed in that regard.

Redesign repair messages

Key: CASSANDRA-5426
URL: https://issues.apache.org/jira/browse/CASSANDRA-5426
Project: Cassandra
Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
Labels: repair
Fix For: 2.0

Many people have been reporting 'repair hang' when something goes wrong. Two major causes of hang are 1) validation failure and 2) streaming failure. Currently, when those failures happen, the failed node does not respond back to the repair initiator. The goal of this ticket is to redesign the message flows around repair so that repair never hangs.
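Jason's suggestion, restated as a runnable model: put the inbound request under its own try block so the StreamIn can make forward progress even when StreamOut fails. The Runnable stand-ins below replace the real StreamOut.transferSSTables(...) and StreamIn.requestRanges(...) calls, so this only demonstrates the control flow, not the streaming itself.

```java
import java.util.ArrayList;
import java.util.List;

// Run the outbound and inbound streaming steps under separate try blocks,
// so a failure of the first does not prevent the second from being attempted.
public class SplitTrySketch {
    static List<String> runIndependently(Runnable streamOut, Runnable streamIn) {
        List<String> completed = new ArrayList<>();
        try {
            streamOut.run();
            completed.add("streamOut");
        } catch (RuntimeException e) {
            // log and fall through: the inbound request can still proceed
        }
        try {
            streamIn.run();
            completed.add("streamIn");
        } catch (RuntimeException e) {
            // log; only now has the whole task failed
        }
        return completed;
    }

    public static void main(String[] args) {
        // Outbound fails, inbound still makes forward progress.
        List<String> done = runIndependently(
                () -> { throw new RuntimeException("StreamOut failed"); },
                () -> { /* stand-in for a successful StreamIn request */ });
        System.out.println(done); // [streamIn]
    }
}
```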
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673125#comment-13673125 ]

Jonathan Ellis commented on CASSANDRA-5424:
What *should* happen is that if you repair -pr on each node in dc2, then you will repair the full token space. But for a single node, YMMV. In particular, it's quite possible that this is correct:
bq. Repair session 676c00f0-cc44-11e2-bfd5-3d9212e452cc for range (0,1] finished
Note the tiny range involved. (This indicates that your dc2 tokens are not balanced, btw.)
[jira] [Commented] (CASSANDRA-5426) Redesign repair messages
[ https://issues.apache.org/jira/browse/CASSANDRA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673145#comment-13673145 ]

Yuki Morishita commented on CASSANDRA-5426:
[~jasobrown] Actually, I think that try/catch block is redundant. Streaming does not run on the same thread as StreamingRepairTask does, and exceptions should be handled at IStreamCallback's onError method (which is empty in current 1.2). I'm trying to overhaul the streaming API for 2.0 (CASSANDRA-5286) and it should have more fine-grained control over streaming.
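Yuki's point is that the exception never reaches the task's try/catch because streaming runs on another thread; the only route back is the callback. A self-contained model of that (StreamCallback is a local stand-in for IStreamCallback; the comment above notes the real 1.2 onError body was empty, which is why such failures went unnoticed):

```java
// The streaming work runs on a different thread, so a try/catch in the
// submitting task can never observe its exception; a callback must carry it.
public class CallbackSketch {
    interface StreamCallback {
        void onError(Throwable t);
    }

    // Returns which mechanism observed the streaming failure.
    static String whoSawTheError() {
        final boolean[] viaCallback = { false };
        StreamCallback cb = t -> viaCallback[0] = true; // a non-empty onError
        Thread streamer = new Thread(() -> {
            try {
                throw new RuntimeException("streaming failed");
            } catch (RuntimeException e) {
                cb.onError(e); // the only way to report back to the task
            }
        });
        try {
            streamer.start();
            streamer.join(); // the task-level try/catch surrounds this...
        } catch (RuntimeException | InterruptedException e) {
            return "trycatch"; // ...but never fires for the streamer's exception
        }
        return viaCallback[0] ? "callback" : "nobody";
    }

    public static void main(String[] args) {
        System.out.println(whoSawTheError()); // callback
    }
}
```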
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673163#comment-13673163 ]

Jonathan Ellis commented on CASSANDRA-5424:
bq. This indicates that your dc2 tokens are not balanced, btw
Hmm. Actually I don't see how repair could generate only a single range in a 2-DC setup and NTS. Can you post your ring?
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673176#comment-13673176 ] Jeremiah Jordan commented on CASSANDRA-5424: With the following replication: {noformat} { dc1: 3, dc2: 3 } {noformat} And the following ring: {noformat} node dc token n0 dc1 0 n1 dc2 1 {noformat} That is the expected output from nodetool -h n1 repair -pr. Do a nodetool -h n0 repair -pr and n1 will get a bunch of data. -pr only repairs from current token to previous token, if you don't have any data with a token of 1, then repair -pr won't do much for repairing n1. nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's -- Key: CASSANDRA-5424 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.7 Reporter: Jeremiah Jordan Assignee: Yuki Morishita Priority: Critical Fix For: 1.2.5 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's Commands follow, but the TL;DR of it, range (127605887595351923798765477786913079296,0] doesn't get repaired between .38 node and .236 node until I run a repair, no -pr, on .38 It seems like primary arnge calculation doesn't take schema into account, but deciding who to ask for merkle tree's from does. 
{noformat}
Address        DC         Rack   Status  State   Load       Owns    Token
                                                                    127605887595351923798765477786913079296
10.72.111.225  Cassandra  rack1  Up      Normal  455.87 KB  25.00%  0
10.2.29.38     Analytics  rack1  Up      Normal  40.74 MB   25.00%  42535295865117307932921825928971026432
10.46.113.236  Analytics  rack1  Up      Normal  20.65 MB   50.00%  127605887595351923798765477786913079296

create keyspace Keyspace1
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {Analytics : 2}
  and durable_writes = true;

---
# nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
[2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for keyspace Keyspace1
[2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e for range (0,42535295865117307932921825928971026432] finished
[2013-04-03 15:47:00,881] Repair command #1 finished

root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
 INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will sync a1/10.2.29.38, /10.46.113.236 on range (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
 INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from /10.46.113.236
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from a1/10.2.29.38
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully synced
 INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed successfully

root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
 INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed merkle tree to /10.2.29.38 for (Keyspace1,Standard1)

root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
root@ip-10-72-111-225:/home/ubuntu#

---
# nodetool -h 10.46.113.236 repair -pr Keyspace1 Standard1
[2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for keyspace Keyspace1
[2013-04-03 15:48:02,032] Repair session dcb91540-9c75-11e2--a839ee2ccbef for range (42535295865117307932921825928971026432,127605887595351923798765477786913079296] finished
[2013-04-03 15:48:02,033] Repair command #1 finished
{noformat}
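Jeremiah's point is that -pr only touches the node's primary range: the interval from the previous token on the ring (exclusive) up to the node's own token (inclusive). A minimal sketch of that calculation (plain Python, not Cassandra's actual TokenMetadata code; the n0/n1 names and tokens follow his example):

```python
def primary_range(tokens, own_token):
    """Primary range of a node: (previous_token, own_token] on the sorted ring."""
    ring = sorted(tokens)
    idx = ring.index(own_token)
    return (ring[idx - 1], own_token)  # idx 0 wraps to the last token on the ring

# Two-node ring from the comment: n0 (dc1) at token 0, n1 (dc2) at token 1.
print(primary_range([0, 1], 1))  # n1's primary range covers only (0, 1]
print(primary_range([0, 1], 0))  # n0 gets the wrapping range (1, 0]
```

So with almost no data at token 1, `repair -pr` on n1 has almost nothing to do, exactly as described.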
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673203#comment-13673203 ] Jonathan Ellis commented on CASSANDRA-5424: --- I should have said, 2-DC setup, NTS, and replicas in both DC.
[jira] [Comment Edited] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673203#comment-13673203 ] Jonathan Ellis edited comment on CASSANDRA-5424 at 6/3/13 3:29 PM: --- I should have said, 2-DC setup, NTS, and replicas in both DC. And more than one node in each DC. In any case, I do see the problem now. Working on a fix. was (Author: jbellis): I should have said, 2-DC setup, NTS, and replicas in both DC.
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673235#comment-13673235 ] Jeremiah Jordan commented on CASSANDRA-5424: If there is a problem, glad you found it, but I don't see how multiple nodes changes the fact that the primary range of n1 is only (0,1] if both DC's have replicas.
[jira] [Commented] (CASSANDRA-5376) CQL3: IN clause on last key not working when schema includes set,list or map
[ https://issues.apache.org/jira/browse/CASSANDRA-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673241#comment-13673241 ] Tilani Gunawardena commented on CASSANDRA-5376: --- how to apply patch ? CQL3: IN clause on last key not working when schema includes set, list or map Key: CASSANDRA-5376 URL: https://issues.apache.org/jira/browse/CASSANDRA-5376 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.3 Reporter: Christiaan Willemsen Assignee: Sylvain Lebresne Priority: Minor Fix For: 1.2.4 Attachments: 5376.txt This is an exception to the fix of https://issues.apache.org/jira/browse/CASSANDRA-5230. Looks like any schema using map, list or set won't work with IN clauses on the last key (in this example c). Schema: {code}
CREATE TABLE foo2 (
  key text,
  c bigint,
  v text,
  x set<text>,
  PRIMARY KEY (key, c)
);
{code} Query: {code}select * from foo2 where key = 'foo' and c in (1,3,4);{code} This will lead to an assertion error on the nodes: {code}
java.lang.AssertionError
  at org.apache.cassandra.cql3.statements.SelectStatement.buildBound(SelectStatement.java:540)
  at org.apache.cassandra.cql3.statements.SelectStatement.getRequestedBound(SelectStatement.java:568)
  at org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:308)
  at org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:219)
  at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:132)
  at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:62)
  at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
  at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:143)
  at org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1726)
  at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4074)
  at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4062)
  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
  at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
  at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:722)
{code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5607) don't 2xsort data directories if you only have 1 (common case)
[ https://issues.apache.org/jira/browse/CASSANDRA-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673271#comment-13673271 ] Jonathan Ellis commented on CASSANDRA-5607: --- This isn't on a hot-code path so I'd rather leave the code simpler and special-case-free tbh. don't 2xsort data directories if you only have 1 (common case) -- Key: CASSANDRA-5607 URL: https://issues.apache.org/jira/browse/CASSANDRA-5607 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 2.0 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 2.0 Attachments: 5607.txt getLocationCapableOfSize() sorts candidate directories by freespace, then again by load, even if there are 0 or 1 candidates.
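The proposed patch amounts to an early return before the two sorts. A hedged sketch of that shape (illustrative Python, not the actual Java in getLocationCapableOfSize(); the `free` and `load` fields are made-up stand-ins for the real directory metrics):

```python
def pick_location(candidates):
    """Sketch of the proposed guard: skip both sorts for 0 or 1 candidates."""
    if len(candidates) <= 1:  # common case: a single data directory
        return candidates[0] if candidates else None
    # The double sort the ticket describes: by free space, then by load.
    # The second sort is stable, so free space ends up breaking load ties.
    by_free = sorted(candidates, key=lambda d: d["free"], reverse=True)
    return sorted(by_free, key=lambda d: d["load"])[0]

print(pick_location([{"path": "/data1", "free": 10, "load": 5}]))
```

Jonathan's objection stands either way: the guard only saves two sorts of a tiny list on a cold path.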
[jira] [Created] (CASSANDRA-5608) Primary range repair still isn't quite NTS-aware
Jonathan Ellis created CASSANDRA-5608: --- Summary: Primary range repair still isn't quite NTS-aware Key: CASSANDRA-5608 URL: https://issues.apache.org/jira/browse/CASSANDRA-5608 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Consider the case of a four-node cluster, with nodes A and C in DC1, and nodes B and D in DC2. TokenMetadata will break this into ranges of (A-B], (B-C], (C-D], (D-A]. If we have a single copy of a keyspace stored in DC1 only (none in DC2), then the current code correctly calculates that node A is responsible for ranges (C-D], (D-A]. But, if we add a copy in DC2, then we only calculate (D-A] as primary range. This is a bug; we should not care what copies are in other datacenters, when computing what to repair in the local one.
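The four-node scenario can be sketched as follows (illustrative Python, not the TokenMetadata code): when a keyspace lives in DC1 only, each ring range belongs to the first DC1 node at or after the range's end token, which is how A ends up with both (C-D] and (D-A]:

```python
tokens = ["A", "B", "C", "D"]  # clockwise ring order, one token per node
dc = {"A": "DC1", "B": "DC2", "C": "DC1", "D": "DC2"}

def dc1_owner(range_end):
    """First node in DC1 at or after the token that ends the range."""
    start = tokens.index(range_end)
    for step in range(len(tokens)):
        node = tokens[(start + step) % len(tokens)]
        if dc[node] == "DC1":
            return node

for lo, hi in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    print(f"({lo}-{hi}] -> {dc1_owner(hi)}")
# (A-B] and (B-C] go to C; (C-D] and (D-A] go to A
```

The bug report is that the primary-range calculation stops attributing (C-D] to A once replicas appear in DC2, even though DC2 membership shouldn't matter for what A repairs locally.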
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673245#comment-13673245 ] Kévin LOVATO commented on CASSANDRA-5424: --- You were right to say that I need to run the repair -pr on the three nodes, because I only have one row (it's a test) in the CF, so I guess I had to run the repair -pr on the node in charge of this key. But I restarted my test and did the repair on all three nodes, and it didn't work either; here's the output: {code}
user@cassandra11:~$ nodetool repair -pr Test_Replication
[2013-06-03 13:54:53,948] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 13:54:53,985] Repair session 676c00f0-cc44-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 13:54:53,985] Repair command #1 finished
{code} {code}
user@cassandra12:~$ nodetool repair -pr Test_Replication
[2013-06-03 17:33:17,844] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 17:33:17,866] Repair session e9f38c50-cc62-11e2-af47-db8ca926a9c5 for range (56713727820156410577229101238628035242,56713727820156410577229101238628035243] finished
[2013-06-03 17:33:17,866] Repair command #1 finished
{code} {code}
user@cassandra13:~$ nodetool repair -pr Test_Replication
[2013-06-03 17:33:29,689] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 17:33:29,712] Repair session f102f3a0-cc62-11e2-ae98-39da3e693be3 for range (113427455640312821154458202477256070484,113427455640312821154458202477256070485] finished
[2013-06-03 17:33:29,712] Repair command #1 finished
{code} The data is still not copied to the new datacenter, and I don't understand why the repair is made for those ranges (a range of 1??); it could be a problem of unbalanced cluster as you suggested, but we distributed the tokens as advised (+1 on the nodes of the new datacenter) as you can see in the following nodetool status: {code}
user@cassandra13:~$ nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Owns   Host ID                               Token                                    Rack
UN  cassandra01  102 GB    33.3%  fa7672f5-77f0-4b41-b9d1-13bf63c39122  0                                        RC1
UN  cassandra02  88.73 GB  33.3%  c799df22-0873-4a99-a901-5ef5b00b7b1e  56713727820156410577229101238628035242   RC1
UN  cassandra03  50.86 GB  33.3%  5b9c6bc4-7ec7-417d-b92d-c5daa787201b  113427455640312821154458202477256070484  RC1
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Owns   Host ID                               Token                                    Rack
UN  cassandra11  51.21 GB  0.0%   7b610455-3fd2-48a3-9315-895a4609be42  1                                        RC2
UN  cassandra12  45.02 GB  0.0%   8553f2c0-851c-4af2-93ee-2854c96de45a  56713727820156410577229101238628035243   RC2
UN  cassandra13  36.8 GB   0.0%   7f537660-9128-4c13-872a-6e026104f30e  113427455640312821154458202477256070485  RC2
{code} Furthermore the full repair works, as you can see in this log: {code}
user@cassandra11:~$ nodetool repair Test_Replication
[2013-06-03 17:44:07,570] Starting repair command #5, repairing 6 ranges for keyspace Test_Replication
[2013-06-03 17:44:07,903] Repair session 6d37b720-cc64-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 17:44:07,903] Repair session 6d3a0110-cc64-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035243,113427455640312821154458202477256070484] finished
[2013-06-03 17:44:07,903] Repair session 6d4d6200-cc64-11e2-bfd5-3d9212e452cc for range (1,56713727820156410577229101238628035242] finished
[2013-06-03 17:44:07,903] Repair session 6d581060-cc64-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035242,56713727820156410577229101238628035243] finished
[2013-06-03 17:44:07,903] Repair session 6d5ea010-cc64-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070484,113427455640312821154458202477256070485] finished
[2013-06-03 17:44:07,934] Repair session 6d604dc0-cc64-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070485,0] finished
[2013-06-03 17:44:07,934] Repair command #5 finished {code} I hope this information can help, please let me know if you think it's a configuration issue, in which case I would talk to the mailing list.
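The puzzling "range of 1" falls straight out of the token assignment: each DC2 token is one greater than a DC1 token, so every DC2 node's primary range (previous_token, own_token] spans exactly one token. A quick check in plain Python (T is the second DC1 token from the status output above; the third DC1 token happens to be exactly 2*T):

```python
# Tokens from the nodetool status output: DC1 at 0, T, 2T; DC2 at each +1.
T = 56713727820156410577229101238628035242
ring = sorted([0, 1, T, T + 1, 2 * T, 2 * T + 1])

widths = {}
for i, tok in enumerate(ring[1:], start=1):  # skip the wrapping range
    widths[tok] = tok - ring[i - 1]

# DC2 tokens (1, T+1, 2T+1) each get a primary range of width 1:
print([widths[t] for t in (1, T + 1, 2 * T + 1)])  # → [1, 1, 1]
```

So with -pr, the DC2 nodes repair almost nothing, which is consistent with the three one-range repair sessions shown in the logs.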
[jira] [Commented] (CASSANDRA-5376) CQL3: IN clause on last key not working when schema includes set,list or map
[ https://issues.apache.org/jira/browse/CASSANDRA-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673251#comment-13673251 ] Aleksey Yeschenko commented on CASSANDRA-5376: --- bq. how to apply patch ? It's been already applied to 1.2.4. Also, make sure to read Sylvain's comment.
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673260#comment-13673260 ] Jeremiah Jordan commented on CASSANDRA-5424: [~alprema] you need to run it on all 6 nodes. repair -pr only repairs the primary range; whenever you use repair -pr you must run repair on every node which owns the data for the KS you are repairing. If the KS is only in DC1, that is 3 nodes; if it is in DC1 and DC2, that is 6 nodes.
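Jeremiah's rule in script form: with replicas in both datacenters, a full -pr pass means one run per node that owns the data. A sketch assuming the six host names from Kévin's status output (the echo makes it a dry run; drop it to actually kick off the repairs):

```shell
# One 'repair -pr' per node that owns data for the keyspace (all 6 here).
for host in cassandra01 cassandra02 cassandra03 cassandra11 cassandra12 cassandra13; do
    echo nodetool -h "$host" repair -pr Test_Replication
done
```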
[jira] [Resolved] (CASSANDRA-5608) Primary range repair still isn't quite NTS-aware
[ https://issues.apache.org/jira/browse/CASSANDRA-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-5608. --- Resolution: Invalid Fix Version/s: (was: 1.2.6) Reviewer: (was: yukim) Assignee: (was: Jonathan Ellis) The right way to use -pr is still to repair everywhere the data exists; if we made -pr affect everything in the DC regardless of other replicas, then repairing the full cluster would repair each range 1x for each DC, which is not what we want Primary range repair still isn't quite NTS-aware -- Key: CASSANDRA-5608 URL: https://issues.apache.org/jira/browse/CASSANDRA-5608 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Jonathan Ellis Consider the case of a four node cluster, with nodes A and C in DC1, and nodes B and D in DC2. TokenMetadata will break this into ranges of (A-B], (B-C], (C-D], (D-A]. If we have a single copy of a keyspace stored in DC1 only (none in DC2), then the current code correctly calculates that node A is responsible for ranges (C-D], (D-A]. But, if we add a copy in DC2, then we only calculate (D-A] as primary range. This is a bug; we should not care what copies are in other datacenters, when computing what to repair in the local one. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
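The rationale in the resolution can be sketched with a toy ring (a minimal model for illustration, not Cassandra's TokenMetadata code; tokens and node labels are made up): with -pr each node repairs only the range ending at its own token, so running -pr on every node covers the whole ring exactly once, whereas a per-DC notion of "primary" would cover each range once per DC.

```python
# Toy model of -pr: each token is "primary" for the range (previous_token, token].
# Running -pr on all nodes covers the full ring exactly once -- which is why
# -pr must be run everywhere the data lives, not per-DC.

def primary_ranges(tokens):
    """Map each token to the range (previous_token, token] it is primary for."""
    ordered = sorted(tokens)
    return {t: (ordered[i - 1], t) for i, t in enumerate(ordered)}

ring = [0, 100, 200, 300]  # hypothetical tokens for four nodes
ranges = primary_ranges(ring)

# The union of all primary ranges is the whole ring, each range exactly once:
assert sorted(ranges.values()) == [(0, 100), (100, 200), (200, 300), (300, 0)]
```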
[jira] [Comment Edited] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673245#comment-13673245 ] Kévin LOVATO edited comment on CASSANDRA-5424 at 6/3/13 4:09 PM: - *[EDIT] I didn't see your latest posts before posting, but I hope the extra data can help anyway* You were right to say that I need to run the repair -pr on the three nodes, because I only have one row (it's a test) in the CF, so I guess I had to run the repair -pr on the node in charge of this key. But I restarted my test and did the repair on all three nodes, and it didn't work either; here's the output:
{code}
user@cassandra11:~$ nodetool repair -pr Test_Replication
[2013-06-03 13:54:53,948] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 13:54:53,985] Repair session 676c00f0-cc44-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 13:54:53,985] Repair command #1 finished
{code}
{code}
user@cassandra12:~$ nodetool repair -pr Test_Replication
[2013-06-03 17:33:17,844] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 17:33:17,866] Repair session e9f38c50-cc62-11e2-af47-db8ca926a9c5 for range (56713727820156410577229101238628035242,56713727820156410577229101238628035243] finished
[2013-06-03 17:33:17,866] Repair command #1 finished
{code}
{code}
user@cassandra13:~$ nodetool repair -pr Test_Replication
[2013-06-03 17:33:29,689] Starting repair command #1, repairing 1 ranges for keyspace Test_Replication
[2013-06-03 17:33:29,712] Repair session f102f3a0-cc62-11e2-ae98-39da3e693be3 for range (113427455640312821154458202477256070484,113427455640312821154458202477256070485] finished
[2013-06-03 17:33:29,712] Repair command #1 finished
{code}
The data is still not copied to the new datacenter, and I don't understand why the repair is made for those ranges (a range of 1??). It could be a problem of an unbalanced cluster as you suggested, but we distributed the tokens as advised (+1 on the nodes of the new datacenter), as you can see in the following nodetool status:
{code}
user@cassandra13:~$ nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Owns   Host ID                               Token                                    Rack
UN  cassandra01  102 GB    33.3%  fa7672f5-77f0-4b41-b9d1-13bf63c39122  0                                        RC1
UN  cassandra02  88.73 GB  33.3%  c799df22-0873-4a99-a901-5ef5b00b7b1e  56713727820156410577229101238628035242   RC1
UN  cassandra03  50.86 GB  33.3%  5b9c6bc4-7ec7-417d-b92d-c5daa787201b  113427455640312821154458202477256070484  RC1
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Owns   Host ID                               Token                                    Rack
UN  cassandra11  51.21 GB  0.0%   7b610455-3fd2-48a3-9315-895a4609be42  1                                        RC2
UN  cassandra12  45.02 GB  0.0%   8553f2c0-851c-4af2-93ee-2854c96de45a  56713727820156410577229101238628035243   RC2
UN  cassandra13  36.8 GB   0.0%   7f537660-9128-4c13-872a-6e026104f30e  113427455640312821154458202477256070485  RC2
{code}
Furthermore, the full repair works, as you can see in this log:
{code}
user@cassandra11:~$ nodetool repair Test_Replication
[2013-06-03 17:44:07,570] Starting repair command #5, repairing 6 ranges for keyspace Test_Replication
[2013-06-03 17:44:07,903] Repair session 6d37b720-cc64-11e2-bfd5-3d9212e452cc for range (0,1] finished
[2013-06-03 17:44:07,903] Repair session 6d3a0110-cc64-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035243,113427455640312821154458202477256070484] finished
[2013-06-03 17:44:07,903] Repair session 6d4d6200-cc64-11e2-bfd5-3d9212e452cc for range (1,56713727820156410577229101238628035242] finished
[2013-06-03 17:44:07,903] Repair session 6d581060-cc64-11e2-bfd5-3d9212e452cc for range (56713727820156410577229101238628035242,56713727820156410577229101238628035243] finished
[2013-06-03 17:44:07,903] Repair session 6d5ea010-cc64-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070484,113427455640312821154458202477256070485] finished
[2013-06-03 17:44:07,934] Repair session 6d604dc0-cc64-11e2-bfd5-3d9212e452cc for range (113427455640312821154458202477256070485,0] finished
[2013-06-03 17:44:07,934] Repair command #5 finished
{code}
I hope this information can help; please let me know if you think it's a configuration issue, in which case I would talk to the mailing list.
was (Author: alprema): *[EDIT] I didn't see your latest posts before posting, but I hope the extra data can help* You were right to say that I need to run the repair -pr on the three
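The puzzling "range of 1" in the -pr output above falls out of the token layout: with the DC2 tokens offset by +1 from their DC1 twins, each DC2 node's primary range is the width-1 sliver between the two. A small sketch (illustrative tokens 0/100/200 and 1/101/201 standing in for the 39-digit ones in the logs; not Cassandra code):

```python
# Toy primary-range calculation: each token owns (previous_token, token].
# With "+1" offset tokens in the second DC, those nodes are primary only
# for one-unit slivers -- matching the (0,1]-style ranges in the repair log.

def primary_ranges(tokens):
    ordered = sorted(tokens)
    return {t: (ordered[i - 1], t) for i, t in enumerate(ordered)}

dc1 = [0, 100, 200]  # stand-ins for cassandra01..03
dc2 = [1, 101, 201]  # "+1" offset nodes, stand-ins for cassandra11..13
ranges = primary_ranges(dc1 + dc2)

# Each dc2 node is primary only for the width-1 sliver after its dc1 twin:
assert ranges[1] == (0, 1)
assert ranges[101] == (100, 101)
```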
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673261#comment-13673261 ] Jonathan Ellis commented on CASSANDRA-5424: --- I was right the first time; this is correct behavior. Quoting from CASSANDRA-5608: bq. The right way to use -pr is still to repair everywhere the data exists; if we made -pr affect everything in the DC regardless of other replicas, then repairing the full cluster would repair each range 1x for each DC, which is not what we want nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's -- Key: CASSANDRA-5424 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.7 Reporter: Jeremiah Jordan Assignee: Yuki Morishita Priority: Critical Fix For: 1.2.5 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's Commands follow, but the TL;DR of it: range (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 node and the .236 node until I run a repair, no -pr, on .38. It seems like primary range calculation doesn't take schema into account, but deciding who to ask for merkle trees from does. 
{noformat}
Address        DC         Rack   Status  State   Load       Owns    Token
                                                                    127605887595351923798765477786913079296
10.72.111.225  Cassandra  rack1  Up      Normal  455.87 KB  25.00%  0
10.2.29.38     Analytics  rack1  Up      Normal  40.74 MB   25.00%  42535295865117307932921825928971026432
10.46.113.236  Analytics  rack1  Up      Normal  20.65 MB   50.00%  127605887595351923798765477786913079296

create keyspace Keyspace1
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {Analytics : 2}
  and durable_writes = true;

---

# nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
[2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for keyspace Keyspace1
[2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e for range (0,42535295865117307932921825928971026432] finished
[2013-04-03 15:47:00,881] Repair command #1 finished

root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will sync a1/10.2.29.38, /10.46.113.236 on range (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from /10.46.113.236
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree for Standard1 from a1/10.2.29.38
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully synced
INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed successfully

root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed merkle tree to /10.2.29.38 for (Keyspace1,Standard1)

root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e /var/log/cassandra/system.log
root@ip-10-72-111-225:/home/ubuntu#

---

# nodetool -h 10.46.113.236 repair -pr Keyspace1 Standard1
[2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for
[jira] [Updated] (CASSANDRA-5608) Primary range repair still isn't quite NTS-aware
[ https://issues.apache.org/jira/browse/CASSANDRA-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5608: -- Description: Consider the case of a four node cluster, with nodes A and C in DC1, and nodes B and D in DC2. TokenMetadata will break this into ranges of (A-B], (B-C], (C-D], (D-A]. If we have a single copy of a keyspace stored in DC1 only (none in DC2), then the current code correctly calculates that node A is responsible for ranges (C-D], (D-A]. But, if we add a copy in DC2, then we only calculate (D-A] as primary range. This is a bug; we should not care what copies are in other datacenters, when computing what to repair in the local one. was: Consider the case of a four node cluster, with nodes A and C in DC1, and nodes B and D in DC2. TokenMetadata will break this into ranges of (A-B], (B-C], (C-D], (D-A]. If we have a single copy of a keyspace stored in DC1 only (none in DC2), then the current code correctly calculates that node A is responsible for ranges (C-D], (D-A]. But, if we add a copy in DC1, then we only calculate (D-A] as primary range. This is a bug; we should not care what copies are in other datacenters, when computing what to repair in the local one. Primary range repair still isn't quite NTS-aware -- Key: CASSANDRA-5608 URL: https://issues.apache.org/jira/browse/CASSANDRA-5608 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.5 Reporter: Jonathan Ellis Assignee: Jonathan Ellis Fix For: 1.2.6 Consider the case of a four node cluster, with nodes A and C in DC1, and nodes B and D in DC2. TokenMetadata will break this into ranges of (A-B], (B-C], (C-D], (D-A]. If we have a single copy of a keyspace stored in DC1 only (none in DC2), then the current code correctly calculates that node A is responsible for ranges (C-D], (D-A]. But, if we add a copy in DC2, then we only calculate (D-A] as primary range. 
This is a bug; we should not care what copies are in other datacenters, when computing what to repair in the local one. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
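The DC-local primary-range computation this description argues for (the ticket was later resolved Invalid, per the comment above) can be sketched as follows. This is an illustrative model, not Cassandra's actual TokenMetadata code: a range is assigned to the first token of the keyspace's DC at or clockwise-after the range's end.

```python
from bisect import bisect_left

# Toy DC-aware primary-range assignment for the four-node example:
# tokens A=0, C=200 in DC1; B=100, D=300 in DC2.
ring = [0, 100, 200, 300]
dc1 = [0, 200]

def dc_primary(range_end, dc_tokens):
    """First DC-local token at or clockwise-after the range end owns it."""
    i = bisect_left(dc_tokens, range_end)
    return dc_tokens[i % len(dc_tokens)]

ranges = [(ring[i - 1], t) for i, t in enumerate(ring)]  # (D-A], (A-B], (B-C], (C-D]
a_ranges = [r for r in ranges if dc_primary(r[1], dc1) == 0]

# Node A (token 0) is DC-locally primary for (C-D] and (D-A],
# regardless of what replicas exist in DC2:
assert sorted(a_ranges) == [(200, 300), (300, 0)]
```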
[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673327#comment-13673327 ] Kévin LOVATO commented on CASSANDRA-5424: - I redid the same test (creating the keyspace with data, then changing its replication factor so it's replicated in DC2, then repairing) and it turns out that if you don't run a repair on DC2 before changing the replication factor, the repair -pr works fine -_-. Anyway, your solution worked; thank you for your help, and sorry I polluted JIRA with my questions. nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's -- Key: CASSANDRA-5424 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.7 Reporter: Jeremiah Jordan Assignee: Yuki Morishita Priority: Critical Fix For: 1.2.5 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's Commands follow, but the TL;DR of it: range (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 node and the .236 node until I run a repair, no -pr, on .38. It seems like primary range calculation doesn't take schema into account, but deciding who to ask for merkle trees from does. 
[jira] [Comment Edited] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's
[ https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673327#comment-13673327 ] Kévin LOVATO edited comment on CASSANDRA-5424 at 6/3/13 5:15 PM: - I redid the same test (creating the keyspace with data, then changing its replication factor so it's replicated in DC2, then repairing) and it turns out that if you don't run a repair on DC2 before changing the replication factor, the repair -pr works fine \-_\-. Anyway, your solution worked, thank you for your help and sorry I polluted JIRA with my questions. was (Author: alprema): I redid the same test (creating the keyspace with data, then changing its replication factor so it's replicated in DC2, then repairing) and it turns out that if you don't run a repair on DC2 before changing the replication factor, the repair -pr works fine -_-. Anyway, your solution worked, thank you for your help and sorry I polluted JIRA with my questions. nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's -- Key: CASSANDRA-5424 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.7 Reporter: Jeremiah Jordan Assignee: Yuki Morishita Priority: Critical Fix For: 1.2.5 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's Commands follow, but the TL;DR of it: range (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 node and the .236 node until I run a repair, no -pr, on .38. It seems like primary range calculation doesn't take schema into account, but deciding who to ask for merkle trees from does. 
[jira] [Commented] (CASSANDRA-5498) Possible NPE on EACH_QUORUM writes
[ https://issues.apache.org/jira/browse/CASSANDRA-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673353#comment-13673353 ] Jeremiah Jordan commented on CASSANDRA-5498: I don't think an exception is being caught and passed to the client. I think the connection closes so org.apache.thrift.TApplicationException gets thrown. [~jasobrown] have you done any more debug on what is causing this? Possible NPE on EACH_QUORUM writes -- Key: CASSANDRA-5498 URL: https://issues.apache.org/jira/browse/CASSANDRA-5498 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.10 Reporter: Jason Brown Assignee: Jason Brown Priority: Minor Labels: each_quorum, ec2 Fix For: 1.2.6 Attachments: 5498-v1.patch, 5498-v2.patch When upgrading from 1.0 to 1.1, we observed that DatacenterSyncWriteResponseHandler.assureSufficientLiveNodes() can throw an NPE if one of the writeEndpoints has a DC that is not listed in the keyspace while one of the nodes is down. We observed this while running in EC2, and using the Ec2Snitch. The exception typically was was brief, but a certain segment of writes (using EACH_QUORUM) failed during that time. This ticket will address the NPE in DSWRH, while a followup ticket will be created once we get to the bottom of the incorrect DC being reported from Ec2Snitch. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
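The failure mode the description reports can be illustrated with a minimal analogue (this is not the actual Java in DatacenterSyncWriteResponseHandler; DC names and counts below are made up): per-DC response counters are keyed by the DCs named in the keyspace's strategy_options, so an endpoint whose snitch reports an unlisted DC hits a missing key, with Python's KeyError standing in for the Java NPE.

```python
# Minimal analogue of the reported NPE: counters exist only for DCs the
# keyspace knows about, so a write endpoint in an unlisted DC (e.g. a DC
# misreported by the snitch) blows up instead of being counted.
strategy_options = {"DC1": 3, "DC2": 3}        # DCs named in the keyspace
responses = {dc: 0 for dc in strategy_options}  # per-DC ack counters

def record_endpoint(endpoint_dc):
    responses[endpoint_dc] += 1                 # KeyError if DC is unlisted

record_endpoint("DC1")                          # fine: DC is configured
try:
    record_endpoint("us-east")                  # snitch-reported DC not in keyspace
except KeyError as unlisted:
    print("unknown DC:", unlisted)
```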
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673382#comment-13673382 ] Mike Schrag commented on CASSANDRA-4421: I just noticed that if executeQuery times out, you just lose rows (as with all the other failure conditions): the exception doesn't actually bubble up to the runtime, so the consumer can't respond to the failure, and you end up with a null result, which I believe will make the page appear to be done. It would be much better to either support retry here, or throw the exception all the way up so that clients can retry on their own. I haven't checked the new round of patches to see if this behaves the same way. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421.txt Hello, I faced a bug while writing composite column values and the subsequent validation on the server side. This is the setup for reproduction: 1. Create a keyspace: create keyspace test with strategy_class = 'SimpleStrategy' and strategy_options:replication_factor = 1; 2. Create a cf via CQL (3.0): create table test1 ( a int, b int, c int, primary key (a, b) ); If I have a look at the schema in the CLI, I notice that there is no column metadata for columns not part of the primary key. 
create column family test1 with column_type = 'Standard' and comparator = 'CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type)' and default_validation_class = 'UTF8Type' and key_validation_class = 'Int32Type' and read_repair_chance = 0.1 and dclocal_read_repair_chance = 0.0 and gc_grace = 864000 and min_compaction_threshold = 4 and max_compaction_threshold = 32 and replicate_on_write = true and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' and caching = 'KEYS_ONLY' and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'}; Please notice the default validation class: UTF8Type. Now I would like to insert a value > 127 via the Cassandra client (no CQL; part of MR jobs). Have a look at the attachment. Batch mutate fails: InvalidRequestException(why:(String didn't validate.) [test][test1][1:c] failed validation) A validator for the column value is fetched in ThriftValidation::validateColumnData, which always returns the default validator, which is UTF8Type as described above (the ColumnDefinition for the given column name c is always null). In UTF8Type there is a check: if (b > 127) return false; Anyway, maybe I'm doing something wrong, but I used CQL 3.0 for table creation. I assigned data types to all columns, but I cannot set values for a composite column because the default validation class is used. I think the schema should know the correct validator even for composite columns. The usage of the default validation class does not make sense. Best Regards Bert Passek -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
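The failure mode Mike describes can be illustrated with a small Python sketch (function names are hypothetical; the actual record reader is Java): a page fetcher that swallows a timeout and returns None makes the caller believe the page is exhausted, whereas retrying and then re-raising lets the client recover or re-read the split itself.

```python
class PageTimeout(Exception):
    """Stand-in for the timeout the record reader's executeQuery can hit."""

def fetch_page_swallowing(fetch):
    # Behavior as described in the comment: the timeout is swallowed and
    # the caller sees None, i.e. "no more rows" -- silent data loss.
    try:
        return fetch()
    except PageTimeout:
        return None

def fetch_page_bubbling(fetch, retries=3):
    # Suggested behavior: retry a few times, then let the exception
    # bubble up so the client can respond (e.g. re-read the split).
    for attempt in range(retries):
        try:
            return fetch()
        except PageTimeout:
            if attempt == retries - 1:
                raise
```

The difference matters for Hadoop jobs: a None result looks identical to a completed page, so no retry logic can ever fire downstream.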
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673427#comment-13673427 ] Mike Schrag commented on CASSANDRA-4421: Note that to recover from some of those failures, you have to reconnect the ColumnFamilyRecordReader. It would be helpful to have the connection code moved into a connect() method and to make close() null out the client, so you can cycle the underlying connection in the event of a more traumatic failure case. Ideally, the iterators and tokens would all be left alone so that you can pick up where you left off. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
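The refactor suggested above can be sketched in a few lines of Python (a hypothetical shape, not the actual ColumnFamilyRecordReader API): connection setup lives in connect(), close() nulls out the client, and iterator/token state is kept outside the connection so a caller can cycle the connection and pick up where it left off.

```python
class ReconnectingReader:
    """Illustrative sketch: connection lifecycle separated from progress state."""

    def __init__(self, client_factory):
        self._client_factory = client_factory
        self._client = None
        self.token = None  # iterator/token progress survives reconnects

    def connect(self):
        # Idempotent: only builds a client if we don't already have one.
        if self._client is None:
            self._client = self._client_factory()

    def close(self):
        # Null out the client so the next connect() builds a fresh one.
        self._client = None

    def cycle_connection(self):
        # Recovery path for a "traumatic" failure: new connection, same progress.
        self.close()
        self.connect()
```

Because `token` lives on the reader rather than the connection, reconnecting does not restart the scan from the beginning of the split.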
[jira] [Commented] (CASSANDRA-5498) Possible NPE on EACH_QUORUM writes
[ https://issues.apache.org/jira/browse/CASSANDRA-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673426#comment-13673426 ] Jason Brown commented on CASSANDRA-5498: [~jjordan] working on it now on #cassandra-dev IRC. My suspicion is a problem with Gossiper.addSavedEndpoint(), which clears out the endpoint's previous data from the endpointStateMap when a node with a greater messaging version attempts to connect. That then causes the downstream effect in DSWRH when it requests the DC data from the Ec2Snitch, which gets it from Gossiper.endpointStateMap. Here's the server-side stacktrace: {code}ERROR [RPC-Thread:150339] 2013-05-08 17:29:55,048 Cassandra.java (line 3462) Internal error processing batch_mutate java.lang.NullPointerException at org.apache.cassandra.service.DatacenterSyncWriteResponseHandler.assureSufficientLiveNodes(DatacenterSyncWriteResponseHandler.java:109) at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:253) at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:194) at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:639) at org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:590) at org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:598) at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3454) at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889) at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631) at org.apache.cassandra.thrift.CustomTHsHaServer$Invocation.run(CustomTHsHaServer.java:105) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662){code} Possible NPE on EACH_QUORUM writes -- Key: CASSANDRA-5498 URL: https://issues.apache.org/jira/browse/CASSANDRA-5498 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.10 Reporter: Jason Brown Assignee: Jason Brown Priority: Minor Labels: each_quorum, ec2 Fix For: 1.2.6 Attachments: 5498-v1.patch, 5498-v2.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
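A simplified Python model of the per-datacenter accounting (the real code is Java, in DatacenterSyncWriteResponseHandler; the class and method names below are illustrative) shows where the NPE comes from: counters exist only for DCs declared in the keyspace strategy, so an endpoint whose snitch-reported DC is missing dereferences nothing — a KeyError here, a null-counter NPE in the Java code.

```python
class DatacenterSyncModel:
    def __init__(self, strategy_dcs, dc_of):
        # Counters are created only for DCs listed in the keyspace strategy.
        self.responses = {dc: 0 for dc in strategy_dcs}
        self.dc_of = dc_of  # snitch lookup: endpoint -> datacenter name

    def count_endpoints(self, endpoints):
        # Analogue of the buggy path: an endpoint in an undeclared DC
        # raises KeyError (the Java code NPEs on a missing counter).
        for ep in endpoints:
            self.responses[self.dc_of(ep)] += 1

    def count_endpoints_defensive(self, endpoints):
        # One defensive option: skip (or separately track) unknown DCs
        # instead of blowing up the whole EACH_QUORUM write.
        for ep in endpoints:
            dc = self.dc_of(ep)
            if dc in self.responses:
                self.responses[dc] += 1
```

This also matches the comment above: if gossip state for an endpoint is cleared, the snitch can report a DC the strategy has never heard of.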
[jira] [Resolved] (CASSANDRA-5607) don't 2xsort data directories if you only have 1 (common case)
[ https://issues.apache.org/jira/browse/CASSANDRA-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Brosius resolved CASSANDRA-5607. - Resolution: Won't Fix don't 2xsort data directories if you only have 1 (common case) -- Key: CASSANDRA-5607 URL: https://issues.apache.org/jira/browse/CASSANDRA-5607 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 2.0 Reporter: Dave Brosius Assignee: Dave Brosius Priority: Trivial Fix For: 2.0 Attachments: 5607.txt getLocationCapableOfSize() sorts candidate directories by freespace, then again by load, even if there are 0 or 1 candidates. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
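The micro-optimization the ticket proposed (and that was ultimately resolved Won't Fix) is easy to sketch: short-circuit before any sorting when there are zero or one candidate directories, which is the common single-data-directory deployment. The dict field names below are hypothetical, not Cassandra's actual data structures.

```python
def pick_data_directory(candidates):
    # Common case: nothing to sort.
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0]
    # General case: prefer the most free space, break ties on lowest load,
    # using one keyed selection instead of two full sort passes.
    return min(candidates, key=lambda d: (-d["free"], d["load"]))
```

The saving is tiny per call, which is presumably why the issue was closed rather than committed.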
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673454#comment-13673454 ] Mike Schrag commented on CASSANDRA-4421: Ugh .. Kind of nasty. AbstractIterator (which RowIterator extends) poisons itself on failure without any apparent way to recover. It gets stuck in State.FAILED and you can't reset it. That seems overly aggressive. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
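The poisoning Mike describes can be reproduced in a miniature Python analogue of Guava's AbstractIterator state machine (a sketch, not the real Guava code): the state is pre-set to FAILED before computeNext() runs and only cleared on success, so once computeNext() throws, every subsequent call raises and there is no way back.

```python
class MiniAbstractIterator:
    READY, NOT_READY, DONE, FAILED = "READY", "NOT_READY", "DONE", "FAILED"

    def __init__(self, compute_next):
        # compute_next returns the next value, raises StopIteration when
        # exhausted, or raises anything else on error.
        self._compute_next = compute_next
        self._state = self.NOT_READY
        self._next = None

    def has_next(self):
        if self._state == self.FAILED:
            # Poisoned forever: no reset path exists.
            raise RuntimeError("iterator already failed")
        if self._state == self.READY:
            return True
        if self._state == self.DONE:
            return False
        self._state = self.FAILED  # pre-poison; cleared only on success
        try:
            self._next = self._compute_next()
        except StopIteration:
            self._state = self.DONE
            return False
        self._state = self.READY
        return True

    def next(self):
        if not self.has_next():
            raise StopIteration
        self._state = self.NOT_READY
        return self._next
```

This is why a failed split can't simply be resumed in place: the wrapping iterator has to be thrown away and rebuilt.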
[jira] [Updated] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin B. updated CASSANDRA-4421: Attachment: 4421-8-cb.txt Attached is patch 4421-8-cb containing the changes from jbellis's branch. It applies onto cassandra-1.2 cleanly (11eb352). Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin B. updated CASSANDRA-4421: Attachment: (was: 4421-8-cb.txt) Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin B. updated CASSANDRA-4421: Attachment: 4421-8-cb.txt Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5609) Create a dtest for CASSANDRA-5225
Brandon Williams created CASSANDRA-5609: --- Summary: Create a dtest for CASSANDRA-5225 Key: CASSANDRA-5609 URL: https://issues.apache.org/jira/browse/CASSANDRA-5609 Project: Cassandra Issue Type: Test Reporter: Brandon Williams Priority: Minor As the title suggests. A small complication is that the test will need to ensure it reduces column_index_size_in_kb and then writes more columns than that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-5609) Create a dtest for CASSANDRA-5225
[ https://issues.apache.org/jira/browse/CASSANDRA-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams reassigned CASSANDRA-5609: --- Assignee: Daniel Meyer Create a dtest for CASSANDRA-5225 - Key: CASSANDRA-5609 URL: https://issues.apache.org/jira/browse/CASSANDRA-5609 Project: Cassandra Issue Type: Test Reporter: Brandon Williams Assignee: Daniel Meyer Priority: Minor -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5610) ORDER BY desc breaks cqlsh COPY
Jeremiah Jordan created CASSANDRA-5610: -- Summary: ORDER BY desc breaks cqlsh COPY Key: CASSANDRA-5610 URL: https://issues.apache.org/jira/browse/CASSANDRA-5610 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jeremiah Jordan Assignee: Aleksey Yeschenko Priority: Minor If you have a reversed text field, COPY chokes on it because the type is 'org.apache.cassandra.db.marshal.ReversedType(text)', not just 'text', so the strings don't get quoted in the generated CQL. {noformat} def do_import_row(self, columns, nullval, layout, row): rowmap = {} for name, value in zip(columns, row): if value != nullval: type = layout.get_column(name).cqltype.cql_parameterized_type() if type in ('ascii', 'text', 'timestamp', 'inet'): rowmap[name] = self.cql_protect_value(value) else: rowmap[name] = value else: rowmap[name] = 'null' return self.do_import_insert(layout, rowmap) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
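One way to handle this is to unwrap the ReversedType marker before deciding whether a value needs quoting, so a reversed text column is still treated as text. This is a hedged sketch only — the exact wrapper spelling is illustrative, and the fix actually committed for CASSANDRA-5610 (5610.txt) may differ in detail:

```python
def base_cql_type(type_name):
    # Strip a ReversedType(...) wrapper, if present, so the quoting check
    # below sees the underlying type. Wrapper spelling is an assumption.
    prefix = "org.apache.cassandra.db.marshal.ReversedType("
    if type_name.startswith(prefix) and type_name.endswith(")"):
        return type_name[len(prefix):-1]
    return type_name

def needs_quoting(type_name):
    # Mirrors the type check in do_import_row above, applied to the
    # unwrapped type so reversed text/ascii/etc. columns get quoted.
    return base_cql_type(type_name) in ("ascii", "text", "timestamp", "inet")
```

With this, a reversed clustering column round-trips through COPY with its string values properly quoted.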
[jira] [Commented] (CASSANDRA-5576) CREATE/DROP TRIGGER in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673552#comment-13673552 ] Aleksey Yeschenko commented on CASSANDRA-5576: -- Was there an issue with any of the suggestions in https://issues.apache.org/jira/browse/CASSANDRA-5576?focusedCommentId=13669381&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13669381? CQL is part of the public API; we *try* not to change it too often, and to keep it consistent. One issue with {noformat} CREATE TRIGGER index_logger ON Keyspace1.Standard1 EXECUTE ('org.apache.cassandra.triggers.InvertedIndex', 'org.apache.cassandra.triggers.LogColumnUpdates'); {noformat} is that it closes the door to future parametrization of trigger classes (which is not planned for 2.0, but will probably happen eventually). Another is that I just see no value in being able to bundle several trigger classes under one name - what exactly does it buy us? So, ideally, I'd rather see {noformat} CREATE TRIGGER indexer ON Keyspace1.Standard1 WITH options = {'class': 'org.apache.cassandra.triggers.InvertedIndex'} {noformat} (Use map syntax, even if we are only going to support one option - 'class' - for now. You can look at the CREATE CUSTOM INDEX implementation to see what exactly I mean). We also *don't* want to serialize anything as JSON in schema columns anymore (this is what CASSANDRA-5578 was about). The original plan was to use a set<text> field for the triggers, but now that we can attach names to them, and potentially want to be able to parametrize them, neither a set<text> nor a map<text, text> will do. So we need a separate schema table - I listed one example schema in the comment before this one. 
CREATE/DROP TRIGGER in CQL -- Key: CASSANDRA-5576 URL: https://issues.apache.org/jira/browse/CASSANDRA-5576 Project: Cassandra Issue Type: Bug Components: API, Core Reporter: Jonathan Ellis Assignee: Vijay Fix For: 2.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673576#comment-13673576 ] Mike Schrag commented on CASSANDRA-4421: Yeah, recycling the RowIterator is probably too complicated. However, I do think the exception should bubble up. I can then catch it and reread the split at the app level to pick back up where I left off. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5610) ORDER BY desc breaks cqlsh COPY
[ https://issues.apache.org/jira/browse/CASSANDRA-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5610: - Attachment: 5610.txt ORDER BY desc breaks cqlsh COPY --- Key: CASSANDRA-5610 URL: https://issues.apache.org/jira/browse/CASSANDRA-5610 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jeremiah Jordan Assignee: Aleksey Yeschenko Priority: Minor Attachments: 5610.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5610) ORDER BY desc breaks cqlsh COPY
[ https://issues.apache.org/jira/browse/CASSANDRA-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673646#comment-13673646 ] Brandon Williams commented on CASSANDRA-5610: - +1 ORDER BY desc breaks cqlsh COPY --- Key: CASSANDRA-5610 URL: https://issues.apache.org/jira/browse/CASSANDRA-5610 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jeremiah Jordan Assignee: Aleksey Yeschenko Priority: Minor Labels: cqlsh Fix For: 1.2.6 Attachments: 5610.txt
git commit: cqlsh: fix COPY FROM with ReversedType
Updated Branches: refs/heads/cassandra-1.2 11eb35291 -> 46273c4dd

cqlsh: fix COPY FROM with ReversedType

patch by Aleksey Yeschenko; reviewed by Brandon Williams for CASSANDRA-5610

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46273c4d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46273c4d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46273c4d

Branch: refs/heads/cassandra-1.2
Commit: 46273c4dd4de28e596eb2c1eb272f6da60b06d57
Parents: 11eb352
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 00:58:31 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 00:58:31 2013 +0300
--
 CHANGES.txt |    1 +
 bin/cqlsh   |    8 +++++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/46273c4d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6e05a51..09e9119 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * have BulkLoader ignore snapshots directories (CASSANDRA-5587)
  * fix SnitchProperties logging context (CASSANDRA-5602)
  * Expose whether jna is enabled and memory is locked via JMX (CASSANDRA-5508)
+ * cqlsh: fix COPY FROM with ReversedType (CASSANDRA-5610)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/46273c4d/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 1abd078..dd4c00d 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -32,7 +32,7 @@ exit 1
 from __future__ import with_statement

 description = CQL Shell for Apache Cassandra
-version = 3.1.0
+version = 3.1.1

 from StringIO import StringIO
 from itertools import groupby
@@ -1681,8 +1681,10 @@ class Shell(cmd.Cmd):
         rowmap = {}
         for name, value in zip(columns, row):
             if value != nullval:
-                type = layout.get_column(name).cqltype.cql_parameterized_type()
-                if type in ('ascii', 'text', 'timestamp', 'inet'):
+                type = layout.get_column(name).cqltype
+                if issubclass(type, ReversedType):
+                    type = type.subtypes[0]
+                if type.cql_parameterized_type() in ('ascii', 'text', 'timestamp', 'inet'):
                     rowmap[name] = self.cql_protect_value(value)
                 else:
                     rowmap[name] = value
[1/2] git commit: cqlsh: fix COPY FROM with ReversedType
Updated Branches: refs/heads/trunk 9e8691c26 -> ccc87fb30

cqlsh: fix COPY FROM with ReversedType

patch by Aleksey Yeschenko; reviewed by Brandon Williams for CASSANDRA-5610

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46273c4d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46273c4d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46273c4d

Branch: refs/heads/trunk
Commit: 46273c4dd4de28e596eb2c1eb272f6da60b06d57
Parents: 11eb352
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 00:58:31 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 00:58:31 2013 +0300
--
 CHANGES.txt |    1 +
 bin/cqlsh   |    8 +++++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/46273c4d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6e05a51..09e9119 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * have BulkLoader ignore snapshots directories (CASSANDRA-5587)
  * fix SnitchProperties logging context (CASSANDRA-5602)
  * Expose whether jna is enabled and memory is locked via JMX (CASSANDRA-5508)
+ * cqlsh: fix COPY FROM with ReversedType (CASSANDRA-5610)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/46273c4d/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 1abd078..dd4c00d 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -32,7 +32,7 @@ exit 1
 from __future__ import with_statement

 description = CQL Shell for Apache Cassandra
-version = 3.1.0
+version = 3.1.1

 from StringIO import StringIO
 from itertools import groupby
@@ -1681,8 +1681,10 @@ class Shell(cmd.Cmd):
         rowmap = {}
         for name, value in zip(columns, row):
             if value != nullval:
-                type = layout.get_column(name).cqltype.cql_parameterized_type()
-                if type in ('ascii', 'text', 'timestamp', 'inet'):
+                type = layout.get_column(name).cqltype
+                if issubclass(type, ReversedType):
+                    type = type.subtypes[0]
+                if type.cql_parameterized_type() in ('ascii', 'text', 'timestamp', 'inet'):
                     rowmap[name] = self.cql_protect_value(value)
                 else:
                     rowmap[name] = value
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Conflicts: bin/cqlsh Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ccc87fb3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ccc87fb3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ccc87fb3 Branch: refs/heads/trunk Commit: ccc87fb30f9435d2d3aebf8658ab355c3f0c3257 Parents: 9e8691c 46273c4 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 4 01:00:14 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 4 01:00:14 2013 +0300 -- CHANGES.txt |1 + bin/cqlsh |6 -- 2 files changed, 5 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ccc87fb3/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ccc87fb3/bin/cqlsh --
[jira] [Commented] (CASSANDRA-5422) Native protocol sanity check
[ https://issues.apache.org/jira/browse/CASSANDRA-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673751#comment-13673751 ] Daniel Norberg commented on CASSANDRA-5422: --- Our corporate CLA has also been submitted. Native protocol sanity check Key: CASSANDRA-5422 URL: https://issues.apache.org/jira/browse/CASSANDRA-5422 Project: Cassandra Issue Type: Bug Components: API Reporter: Jonathan Ellis Assignee: Daniel Norberg Attachments: 5422-test.txt, ExecuteMessage Profiling - Call Tree.png, ExecuteMessage Profiling - Hot Spots.png With MutationStatement.execute turned into a no-op, I only get about 33k insert_prepared ops/s on my laptop. That is: this is an upper bound for our performance if Cassandra were infinitely fast, limited by netty handling the protocol + connections. This is up from about 13k/s with MS.execute running normally. ~40% overhead from netty seems awfully high to me, especially for insert_prepared where the return value is tiny. (I also used 4-byte column values to minimize that part as well.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
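The ~40% estimate follows from treating the no-op throughput as a measure of protocol-only cost; a quick sanity check of the arithmetic:

```python
# Throughputs from the profiling runs described above (ops/s).
noop_ops_per_s = 33_000.0    # MutationStatement.execute stubbed out: netty-only ceiling
normal_ops_per_s = 13_000.0  # full insert_prepared path

# Fraction of total per-op time spent on protocol + connection handling.
netty_fraction = (1.0 / noop_ops_per_s) / (1.0 / normal_ops_per_s)
# netty_fraction is about 0.39, i.e. roughly 40% overhead from netty
```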
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673854#comment-13673854 ] Alex Liu commented on CASSANDRA-4421: - I am fixing the infinite-loop issue in the newly merged code and making the example work. I will post the final merge later. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673909#comment-13673909 ] Alex Liu commented on CASSANDRA-4421: - [~mikeschrag] For any cql timeout or other uncaught exception, we have three choices: 1. Catch it in your client code, so you can handle it after the job is done. 2. Write it to the log, so you can check where it went wrong. 3. As you said, retry at the point where it fails. Choice 1 is a common solution, 2 is the easiest to implement, and 3 takes quite some work to make reliable and robust. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421.txt
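Option 3 above (retrying at the point of failure) amounts to wrapping the query call in a bounded retry loop; a generic sketch, not tied to any actual Hadoop or Cassandra API:

```python
import time

def with_retries(op, attempts=3, base_delay=1.0, retriable=(TimeoutError,)):
    """Call op(); on a retriable error, back off exponentially and retry.

    Re-raises after the last attempt so the caller (option 1) can still
    handle the failure inline instead of losing it to a log (option 2).
    """
    for attempt in range(attempts):
        try:
            return op()
        except retriable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_retries(lambda: session.execute(stmt))`, where `session` and `stmt` stand in for whatever client objects the job uses (hypothetical names).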
[jira] [Updated] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-4421: Attachment: 4421-8-je.txt 4421-8-je.txt is attached to merge the final code and some fixes on example on top of trunk commit 9e8691c26283f2532be3101486a8290ed5128c18 Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421-8-je.txt, 4421.txt
git commit: Set -Djava.awt.headless=true for ant test
Updated Branches: refs/heads/cassandra-1.1 2dd73d171 -> 99824496a

Set -Djava.awt.headless=true for ant test

Makes running tests less annoying with OS X/Java7

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99824496
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99824496
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99824496

Branch: refs/heads/cassandra-1.1
Commit: 99824496aa359fcbe5e71f4e54f2738f09524a87
Parents: 2dd73d1
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 04:42:07 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 04:42:07 2013 +0300
--
 build.xml |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/99824496/build.xml
--
diff --git a/build.xml b/build.xml
index 945fff7..8f87abd 100644
--- a/build.xml
+++ b/build.xml
@@ -1048,6 +1048,7 @@
         <jvmarg value="-Dstorage-config=${test.conf}"/>
         <jvmarg value="-Daccess.properties=${test.conf}/access.properties"/>
         <jvmarg value="-Dlog4j.configuration=log4j-junit.properties" />
+        <jvmarg value="-Djava.awt.headless=true"/>
         <jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.5.jar" />
         <jvmarg value="-ea"/>
         <optjvmargs/>
[1/2] git commit: Set -Djava.awt.headless=true for ant test
Updated Branches: refs/heads/cassandra-1.2 46273c4dd -> 15dfdef01

Set -Djava.awt.headless=true for ant test

Makes running tests less annoying with OS X/Java7

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99824496
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99824496
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99824496

Branch: refs/heads/cassandra-1.2
Commit: 99824496aa359fcbe5e71f4e54f2738f09524a87
Parents: 2dd73d1
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 04:42:07 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 04:42:07 2013 +0300
--
 build.xml |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/99824496/build.xml
--
diff --git a/build.xml b/build.xml
index 945fff7..8f87abd 100644
--- a/build.xml
+++ b/build.xml
@@ -1048,6 +1048,7 @@
         <jvmarg value="-Dstorage-config=${test.conf}"/>
         <jvmarg value="-Daccess.properties=${test.conf}/access.properties"/>
         <jvmarg value="-Dlog4j.configuration=log4j-junit.properties" />
+        <jvmarg value="-Djava.awt.headless=true"/>
         <jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.5.jar" />
         <jvmarg value="-ea"/>
         <optjvmargs/>
[2/2] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2
Merge branch 'cassandra-1.1' into cassandra-1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15dfdef0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15dfdef0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15dfdef0

Branch: refs/heads/cassandra-1.2
Commit: 15dfdef0170dad167040d236e63dd88d47920e8b
Parents: 46273c4 9982449
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 04:43:24 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 04:43:24 2013 +0300
--
 build.xml |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/15dfdef0/build.xml
--
diff --cc build.xml
index 35ac747,8f87abd..46ffbb1
--- a/build.xml
+++ b/build.xml
@@@ -1053,7 -1046,9 +1053,8 @@@
  <formatter type="xml" usefile="true"/>
  <formatter type="brief" usefile="false"/>
  <jvmarg value="-Dstorage-config=${test.conf}"/>
 -<jvmarg value="-Daccess.properties=${test.conf}/access.properties"/>
  <jvmarg value="-Dlog4j.configuration=log4j-junit.properties" />
+ <jvmarg value="-Djava.awt.headless=true"/>
  <jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.5.jar" />
  <jvmarg value="-ea"/>
  <optjvmargs/>
[2/3] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2
Merge branch 'cassandra-1.1' into cassandra-1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15dfdef0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15dfdef0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15dfdef0

Branch: refs/heads/trunk
Commit: 15dfdef0170dad167040d236e63dd88d47920e8b
Parents: 46273c4 9982449
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 04:43:24 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 04:43:24 2013 +0300
--
 build.xml |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/15dfdef0/build.xml
--
diff --cc build.xml
index 35ac747,8f87abd..46ffbb1
--- a/build.xml
+++ b/build.xml
@@@ -1053,7 -1046,9 +1053,8 @@@
  <formatter type="xml" usefile="true"/>
  <formatter type="brief" usefile="false"/>
  <jvmarg value="-Dstorage-config=${test.conf}"/>
 -<jvmarg value="-Daccess.properties=${test.conf}/access.properties"/>
  <jvmarg value="-Dlog4j.configuration=log4j-junit.properties" />
+ <jvmarg value="-Djava.awt.headless=true"/>
  <jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.5.jar" />
  <jvmarg value="-ea"/>
  <optjvmargs/>
[1/3] git commit: Set -Djava.awt.headless=true for ant test
Updated Branches: refs/heads/trunk ccc87fb30 -> db4c27e91

Set -Djava.awt.headless=true for ant test

Makes running tests less annoying with OS X/Java7

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99824496
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99824496
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99824496

Branch: refs/heads/trunk
Commit: 99824496aa359fcbe5e71f4e54f2738f09524a87
Parents: 2dd73d1
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Tue Jun 4 04:42:07 2013 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Tue Jun 4 04:42:07 2013 +0300
--
 build.xml |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/99824496/build.xml
--
diff --git a/build.xml b/build.xml
index 945fff7..8f87abd 100644
--- a/build.xml
+++ b/build.xml
@@ -1048,6 +1048,7 @@
         <jvmarg value="-Dstorage-config=${test.conf}"/>
         <jvmarg value="-Daccess.properties=${test.conf}/access.properties"/>
         <jvmarg value="-Dlog4j.configuration=log4j-junit.properties" />
+        <jvmarg value="-Djava.awt.headless=true"/>
         <jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.5.jar" />
         <jvmarg value="-ea"/>
         <optjvmargs/>
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/db4c27e9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/db4c27e9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/db4c27e9 Branch: refs/heads/trunk Commit: db4c27e91b3d9c9027615b9b1ac085524f3090ba Parents: ccc87fb 15dfdef Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jun 4 04:46:51 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Jun 4 04:46:51 2013 +0300 -- build.xml |1 + 1 files changed, 1 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/db4c27e9/build.xml --
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673942#comment-13673942 ] Mike Schrag commented on CASSANDRA-4421: Alex - I vote #1, definitely. I need to be able to handle these conditions inline, not after the job is done. If you implement #1, the user can choose #2 if s/he wants. If the library chooses #2 for you, you're just out-of-luck. Particularly in the case of a timeout, that's a relatively straightforward situation to resolve in many cases. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421-8-je.txt, 4421.txt
[jira] [Created] (CASSANDRA-5611) OS X puts an icon for Cassandra in the dock
Aleksey Yeschenko created CASSANDRA-5611: Summary: OS X puts an icon for Cassandra in the dock Key: CASSANDRA-5611 URL: https://issues.apache.org/jira/browse/CASSANDRA-5611 Project: Cassandra Issue Type: Improvement Affects Versions: 1.2.5, 1.1.12, 2.0 Environment: OS X, JKD 1.7 Reporter: Aleksey Yeschenko Priority: Trivial Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test put an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5611) OS X puts an icon for Cassandra in the dock
[ https://issues.apache.org/jira/browse/CASSANDRA-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5611: - Environment: OS X, JDK 1.7 (was: OS X, JKD 1.7) OS X puts an icon for Cassandra in the dock --- Key: CASSANDRA-5611 URL: https://issues.apache.org/jira/browse/CASSANDRA-5611 Project: Cassandra Issue Type: Improvement Affects Versions: 1.1.12, 1.2.5, 2.0 Environment: OS X, JDK 1.7 Reporter: Aleksey Yeschenko Priority: Trivial
[jira] [Updated] (CASSANDRA-5611) OS X puts an icon for Cassandra in the dock
[ https://issues.apache.org/jira/browse/CASSANDRA-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5611: - Description: Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test puts an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). was: Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test put an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). 
OS X puts an icon for Cassandra in the dock --- Key: CASSANDRA-5611 URL: https://issues.apache.org/jira/browse/CASSANDRA-5611 Project: Cassandra Issue Type: Improvement Affects Versions: 1.1.12, 1.2.5, 2.0 Environment: OS X, JDK 1.7 Reporter: Aleksey Yeschenko Priority: Trivial Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test puts an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5611) OS X puts an icon for Cassandra in the dock
[ https://issues.apache.org/jira/browse/CASSANDRA-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673949#comment-13673949 ] Jonathan Ellis commented on CASSANDRA-5611: --- bin/cassandra (with or w/o -f) does not give me a dock icon. Older version of Java? {noformat} $ java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode) {noformat} OS X puts an icon for Cassandra in the dock --- Key: CASSANDRA-5611 URL: https://issues.apache.org/jira/browse/CASSANDRA-5611 Project: Cassandra Issue Type: Improvement Affects Versions: 1.1.12, 1.2.5, 2.0 Environment: OS X, JDK 1.7 Reporter: Aleksey Yeschenko Priority: Trivial Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test puts an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
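The -Djava.awt.headless=true workaround mentioned in the ticket can be sketched in a few lines. This is only an illustration of the mechanism, not Cassandra code: setting the property before the AWT subsystem is touched makes the JVM report itself as headless, which is what keeps OS X from registering the process as a GUI app (and thus from giving it a dock icon).

```java
import java.awt.GraphicsEnvironment;

public class HeadlessCheck {
    public static void main(String[] args) {
        // Equivalent to passing -Djava.awt.headless=true on the command line;
        // it must be set before any AWT class is initialized to take effect.
        System.setProperty("java.awt.headless", "true");

        // With the property set, AWT runs headless and no dock icon appears.
        System.out.println("headless: " + GraphicsEnvironment.isHeadless());
        // prints "headless: true"
    }
}
```

The same effect is achieved in the ant build by adding the property to the test JVM args, which is what the ninja commit referenced above did.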
[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673950#comment-13673950 ] Jonathan Ellis commented on CASSANDRA-4421: --- +1 for #1. We'll want a patch against 1.2 as well as trunk, btw. Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421-8-je.txt, 4421.txt Hello, I faced a bug while writing composite column values and the following validation on server side. This is the setup for reproduction: 1. create a keyspace create keyspace test with strategy_class = 'SimpleStrategy' and strategy_options:replication_factor = 1; 2. create a cf via cql (3.0) create table test1 ( a int, b int, c int, primary key (a, b) ); If I have a look at the schema in cli I notice that there is no column metadata for columns not part of the primary key. 
create column family test1 with column_type = 'Standard' and comparator = 'CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type)' and default_validation_class = 'UTF8Type' and key_validation_class = 'Int32Type' and read_repair_chance = 0.1 and dclocal_read_repair_chance = 0.0 and gc_grace = 864000 and min_compaction_threshold = 4 and max_compaction_threshold = 32 and replicate_on_write = true and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' and caching = 'KEYS_ONLY' and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'}; Please notice the default validation class: UTF8Type Now I would like to insert a value > 127 via cassandra client (no cql, part of mr-jobs). Have a look at the attachment. Batch mutate fails: InvalidRequestException(why:(String didn't validate.) [test][test1][1:c] failed validation) A validator for the column value is fetched in ThriftValidation::validateColumnData, which always returns the default validator, which is UTF8Type as described above (the ColumnDefinition for the given column name c is always null). In UTF8Type there is a check for if (b > 127) return false; Anyway, maybe I'm doing something wrong, but I used cql 3.0 for table creation. I assigned data types to all columns, but I cannot set values for a composite column because the default validation class is used. I think the schema should know the correct validator even for composite columns. The usage of the default validation class does not make sense. Best Regards Bert Passek -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
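The validation failure described above can be reproduced outside Cassandra with a plain UTF-8 decode. This is a hedged sketch, not the actual UTF8Type code: the hypothetical helper isValidUtf8 stands in for the kind of check UTF8Type performs, and shows why a raw byte above 127 (e.g. an int value serialized without column metadata) fails string validation while plain ASCII passes.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class Utf8Validate {
    // Hypothetical stand-in for UTF8Type's validation: a byte sequence is
    // accepted only if it decodes as well-formed UTF-8. A lone byte with the
    // high bit set is not a complete UTF-8 sequence, so it is rejected.
    static boolean isValidUtf8(byte[] bytes) {
        try {
            // newDecoder() reports malformed input instead of replacing it
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidUtf8(new byte[] { 127 }));        // ASCII DEL: valid UTF-8
        System.out.println(isValidUtf8(new byte[] { (byte) 200 })); // lone high byte: invalid
        // prints "true" then "false"
    }
}
```

This matches the symptom in the report: without per-column metadata the default UTF8Type validator is applied to a non-string value, and any byte > 127 fails.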
[jira] [Commented] (CASSANDRA-5611) OS X puts an icon for Cassandra in the dock
[ https://issues.apache.org/jira/browse/CASSANDRA-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673955#comment-13673955 ] Aleksey Yeschenko commented on CASSANDRA-5611: -- maybe {noformat} ➤ java -version java version "1.7.0_21" Java(TM) SE Runtime Environment (build 1.7.0_21-b12) Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode) {noformat} OS X puts an icon for Cassandra in the dock --- Key: CASSANDRA-5611 URL: https://issues.apache.org/jira/browse/CASSANDRA-5611 Project: Cassandra Issue Type: Improvement Affects Versions: 1.1.12, 1.2.5, 2.0 Environment: OS X, JDK 1.7 Reporter: Aleksey Yeschenko Priority: Trivial Even when a Java program doesn't display any windows or other visible elements, if it accesses the AWT subsystem in some way (e.g., to do image processing internally), OS X will still put an icon for the Java program in the dock as if it were a GUI-based app. (When the program quits, the dock icon goes away as usual.) (more details at http://hints.macworld.com/article.php?story=20071208235352641) Can't remember when it started happening, but it wasn't always the case for Cassandra. Now launching Cassandra puts an icon in the dock, and, worse, running ant test puts an icon in the dock for each test, stealing focus, too. This is extremely annoying. I ninja-d a workaround (-Djava.awt.headless=true) for ant test in 99824496aa359fcbe5e71f4e54f2738f09524a87, but we should try and find the real source of this (my guess is that some dependency of ours is to blame). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5612) NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2
[ https://issues.apache.org/jira/browse/CASSANDRA-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5612: Attachment: upgrade_through_versions_test.py NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2 --- Key: CASSANDRA-5612 URL: https://issues.apache.org/jira/browse/CASSANDRA-5612 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.6 Reporter: Ryan McGuire Attachments: logs.tar.gz, upgrade_through_versions_test.py See the attached upgrade_through_versions_test.py upgrade_test_mixed(). Conceptually this method does the following: * Instantiates a 3 node 1.1.9 cluster * Writes some data * Shuts down node 1 and upgrades it to 1.2 (HEAD) * Brings the node1 back up, making the cluster a mixed version 1.1/1.2 * Brings down node2 and node3 and does the same upgrade making it all the same version. * At this point, I would run upgradesstables on each of the nodes, but there is already an error on node3 directly after its upgrade: {code} INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops) INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616) INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main] java.lang.NullPointerException at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511) at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445) at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355) at 
org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) {code} This error is repeatable, but inconsistent. Interestingly, it is always node3 with the error. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5612) NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2
Ryan McGuire created CASSANDRA-5612: --- Summary: NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2 Key: CASSANDRA-5612 URL: https://issues.apache.org/jira/browse/CASSANDRA-5612 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.6 Reporter: Ryan McGuire Attachments: logs.tar.gz, upgrade_through_versions_test.py See the attached upgrade_through_versions_test.py upgrade_test_mixed(). Conceptually this method does the following: * Instantiates a 3 node 1.1.9 cluster * Writes some data * Shuts down node 1 and upgrades it to 1.2 (HEAD) * Brings the node1 back up, making the cluster a mixed version 1.1/1.2 * Brings down node2 and node3 and does the same upgrade making it all the same version. * At this point, I would run upgradesstables on each of the nodes, but there is already an error on node3 directly after its upgrade: {code} INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops) INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616) INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main] java.lang.NullPointerException at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511) at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445) at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355) at org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) {code} This error is repeatable, but inconsistent. Interestingly, it is always node3 with the error. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5612) NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2
[ https://issues.apache.org/jira/browse/CASSANDRA-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5612: Attachment: logs.tar.gz NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2 --- Key: CASSANDRA-5612 URL: https://issues.apache.org/jira/browse/CASSANDRA-5612 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.6 Reporter: Ryan McGuire Attachments: logs.tar.gz, upgrade_through_versions_test.py See the attached upgrade_through_versions_test.py upgrade_test_mixed(). Conceptually this method does the following: * Instantiates a 3 node 1.1.9 cluster * Writes some data * Shuts down node 1 and upgrades it to 1.2 (HEAD) * Brings the node1 back up, making the cluster a mixed version 1.1/1.2 * Brings down node2 and node3 and does the same upgrade making it all the same version. * At this point, I would run upgradesstables on each of the nodes, but there is already an error on node3 directly after its upgrade: {code} INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops) INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616) INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main] java.lang.NullPointerException at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511) at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445) at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355) at 
org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) {code} This error is repeatable, but inconsistent. Interestingly, it is always node3 with the error. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5612) NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2
[ https://issues.apache.org/jira/browse/CASSANDRA-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire updated CASSANDRA-5612: Description: See the attached upgrade_through_versions_test.py upgrade_test_mixed(). Conceptually this method does the following: * Instantiates a 3 node 1.1.9 cluster * Writes some data * Shuts down node 1 and upgrades it to 1.2 (HEAD) * Brings the node1 back up, making the cluster a mixed version 1.1/1.2 * Brings down node2 and node3 and does the same upgrade making it all the same version. * At this point, I would run upgradesstables on each of the nodes, but there is already an error on node3 directly after its upgrade: {code} INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops) INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616) INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main] java.lang.NullPointerException at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511) at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445) at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355) at org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) {code} This error is repeatable, but inconsistent. Interestingly, it is always node3 with the error. was: See the attached upgrade_through_versions_test.py upgrade_test_mixed(). Conceptually this method does the following: * Instantiates a 3 node 1.1.9 cluster * Writes some data * Shuts down node 1 and upgrades it to 1.2 (HEAD) * Brings the node1 back up, making a the cluster a mixed version 1.1/1.2 * Brings down node2 and node3 and does the same upgrade making it all the same version. * At this point, I would run upgradesstables on each of the nodes, but there is already an error on node3 directly after it's upgrade: {code} INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops) INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616) INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main] java.lang.NullPointerException at org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511) at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445) at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355) at org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) {code} This error is repeatable, but inconsistent. Interestingly, it is always node3 with the error. NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2 --- Key: CASSANDRA-5612 URL: https://issues.apache.org/jira/browse/CASSANDRA-5612 Project: Cassandra Issue Type: Bug
[jira] [Commented] (CASSANDRA-5576) CREATE/DROP TRIGGER in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673996#comment-13673996 ] Vijay commented on CASSANDRA-5576: -- {quote} Was there an issue with any of the suggestions {quote} Didn't we just drop the options (Map<String, String>) style configuration on CF in CASSANDRA-4795? A future extension can do a simple match on the input parameter, which I think is not that bad... {quote} what exactly does it buy us? {quote} It allows the user to group things so they can drop and manage a set of triggers rather than individual ones... I am fine dropping that and having a one-to-one relationship (name to class names). {quote} We also don't want to serialize anything as JSON in schema columns anymore {quote} Well, it was not just a simple type specification in the table definition; the announce method has to change, let me spend more time troubleshooting. Hope everything else is alright. CREATE/DROP TRIGGER in CQL -- Key: CASSANDRA-5576 URL: https://issues.apache.org/jira/browse/CASSANDRA-5576 Project: Cassandra Issue Type: Bug Components: API, Core Reporter: Jonathan Ellis Assignee: Vijay Fix For: 2.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-4421: Attachment: 4421-9-je.txt 4421-9-je.txt patch is attached on top of trunk commit 9e8691c26283f2532be3101486a8290ed5128c18 to add exception handling. It retries three times for TimedOutException and UnavailableException; any other exception is thrown back to the client as an IOException with the original cause throwable, so the client side can catch and handle it. See this link for an example: http://stackoverflow.com/questions/14920236/how-to-prevent-hadoop-job-to-fail-on-corrupted-input-file Support cql3 table definitions in Hadoop InputFormat Key: CASSANDRA-4421 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421 Project: Cassandra Issue Type: Improvement Components: API Affects Versions: 1.1.0 Environment: Debian Squeeze Reporter: bert Passek Labels: cql3 Fix For: 1.2.6 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421-4.txt, 4421-5.txt, 4421-6.cb.txt, 4421-6-je.txt, 4421-7-je.txt, 4421-8-cb.txt, 4421-8-je.txt, 4421-9-je.txt, 4421.txt Hello, I faced a bug while writing composite column values and the following validation on server side. This is the setup for reproduction: 1. create a keyspace create keyspace test with strategy_class = 'SimpleStrategy' and strategy_options:replication_factor = 1; 2. create a cf via cql (3.0) create table test1 ( a int, b int, c int, primary key (a, b) ); If I have a look at the schema in cli I notice that there is no column metadata for columns not part of the primary key. 
create column family test1 with column_type = 'Standard' and comparator = 'CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type)' and default_validation_class = 'UTF8Type' and key_validation_class = 'Int32Type' and read_repair_chance = 0.1 and dclocal_read_repair_chance = 0.0 and gc_grace = 864000 and min_compaction_threshold = 4 and max_compaction_threshold = 32 and replicate_on_write = true and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' and caching = 'KEYS_ONLY' and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'}; Please notice the default validation class: UTF8Type Now I would like to insert a value > 127 via cassandra client (no cql, part of mr-jobs). Have a look at the attachment. Batch mutate fails: InvalidRequestException(why:(String didn't validate.) [test][test1][1:c] failed validation) A validator for the column value is fetched in ThriftValidation::validateColumnData, which always returns the default validator, which is UTF8Type as described above (the ColumnDefinition for the given column name c is always null). In UTF8Type there is a check for if (b > 127) return false; Anyway, maybe I'm doing something wrong, but I used cql 3.0 for table creation. I assigned data types to all columns, but I cannot set values for a composite column because the default validation class is used. I think the schema should know the correct validator even for composite columns. The usage of the default validation class does not make sense. Best Regards Bert Passek -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
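The retry policy Alex Liu describes for the 4421-9-je.txt patch can be sketched as follows. This is a hedged illustration, not the patch itself: the hypothetical TransientException stands in for Thrift's TimedOutException/UnavailableException, and callWithRetries models "retry up to three times on transient failures, wrap anything else in an IOException carrying the original cause so the Hadoop client can catch and handle it."

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryingCall {
    // Hypothetical stand-in for Thrift's TimedOutException / UnavailableException.
    static class TransientException extends Exception {}

    // Retry transient failures up to maxAttempts times; wrap any other
    // exception in an IOException so the caller sees the original cause.
    static <T> T callWithRetries(Callable<T> call, int maxAttempts) throws IOException {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (TransientException e) {
                last = e; // transient failure: try again
            } catch (Exception e) {
                throw new IOException(e); // permanent failure: surface with cause
            }
        }
        throw new IOException("retries exhausted", last);
    }

    public static void main(String[] args) throws IOException {
        // Succeeds immediately, so no retries are needed.
        System.out.println(callWithRetries(() -> "ok", 3));
        // prints "ok"
    }
}
```

Wrapping in IOException rather than letting the exception propagate raw is what lets a Hadoop job decide per-record whether to skip or fail, as in the Stack Overflow example linked in the comment.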