[1/2] git commit: fix bad merge

2014-04-29 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 10373762e - 212698501


fix bad merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a4664328
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a4664328
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a4664328

Branch: refs/heads/trunk
Commit: a4664328c2be71a39301b1fe92eb9fb4e2d4755b
Parents: bcd1411
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Apr 29 20:39:24 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Apr 29 20:39:24 2014 -0400

--
 src/java/org/apache/cassandra/tracing/Tracing.java | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4664328/src/java/org/apache/cassandra/tracing/Tracing.java
--
diff --git a/src/java/org/apache/cassandra/tracing/Tracing.java b/src/java/org/apache/cassandra/tracing/Tracing.java
index 31cc1ab..f650d16 100644
--- a/src/java/org/apache/cassandra/tracing/Tracing.java
+++ b/src/java/org/apache/cassandra/tracing/Tracing.java
@@ -206,16 +206,10 @@ public class Tracing
                 ColumnFamily cf = ArrayBackedSortedColumns.factory.create(cfMeta);
                 addColumn(cf, buildName(cfMeta, "coordinator"), FBUtilities.getBroadcastAddress());
                 addParameterColumns(cf, parameters);
-<<<<<<< HEAD
-                addColumn(cf, buildName(cfMeta, "request"), request);
-                addColumn(cf, buildName(cfMeta, "started_at"), started_at);
-                mutateWithCatch(new Mutation(TRACE_KS, sessionIdBytes, cf));
-=======
                 addColumn(cf, buildName(cfMeta, bytes("request")), request);
                 addColumn(cf, buildName(cfMeta, bytes("started_at")), started_at);
                 addParameterColumns(cf, parameters);
-                mutateWithCatch(new RowMutation(TRACE_KS, sessionIdBytes, cf));
->>>>>>> cassandra-2.0
+                mutateWithCatch(new Mutation(TRACE_KS, sessionIdBytes, cf));
 }
 });
 }
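
Not part of the commit above, but as an illustration of the failure mode it cleans up: the bad merge left Git conflict markers and a duplicated block in Tracing.java. A small standalone check along these lines (the src/java path and class name are my own, not from the repository) can flag committed conflict markers:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    /** Scans a source tree for leftover merge-conflict markers. */
    public class ConflictMarkerCheck
    {
        public static void main(String[] args) throws IOException
        {
            Path root = Paths.get(args.length > 0 ? args[0] : "src/java");
            try (Stream<Path> files = Files.walk(root))
            {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(ConflictMarkerCheck::check);
            }
        }

        private static void check(Path file)
        {
            try
            {
                int lineNo = 0;
                for (String line : Files.readAllLines(file, StandardCharsets.UTF_8))
                {
                    lineNo++;
                    // A line starting with any of these markers almost certainly means
                    // a merge was committed with its conflicts unresolved.
                    if (line.startsWith("<<<<<<<") || line.startsWith("=======") || line.startsWith(">>>>>>>"))
                        System.out.println(file + ":" + lineNo + ": leftover conflict marker: " + line);
                }
            }
            catch (IOException e)
            {
                System.err.println("Could not read " + file + ": " + e);
            }
        }
    }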



[jira] [Updated] (CASSANDRA-6762) 2.1-beta1 missing cassandra-driver-core jar in -bin.tar.gz

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6762:


Labels: qa-resolved  (was: )

 2.1-beta1 missing cassandra-driver-core jar in -bin.tar.gz
 --

 Key: CASSANDRA-6762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6762
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1 beta2

 Attachments: 
 0001-Add-cassandra-driver-core-jar-to-artifacts-and-deb.patch, 
 debian_stress_jars.diff


 {noformat}
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ tar tzvf build/apache-cassandra-2.1.0-beta1-SNAPSHOT-bin.tar.gz |grep 'tools/lib'
 drwxr-xr-x 0/0   0 2014-02-24 18:53 apache-cassandra-2.1.0-beta1-SNAPSHOT/tools/lib/
 -rw-r--r-- 0/0  257908 2014-02-24 18:53 apache-cassandra-2.1.0-beta1-SNAPSHOT/tools/lib/stress.jar
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ ls -l tools/lib/
 total 504
 -rw-r--r-- 1 mshuler mshuler 515357 Feb 24 17:14 cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar
 (cassandra-2.1)mshuler@hana:~/git/cassandra$
 {noformat}
 It looks like once this jar is copied from tools/lib/ to build/tools/lib it should be included in the tar.gz (correct me if that's wrong).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6553:


Labels: qa-resolved  (was: )

 Benchmark counter improvements (counters++)
 ---

 Key: CASSANDRA-6553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6553
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Russ Hatch
  Labels: qa-resolved
 Fix For: 2.1 beta2

 Attachments: 6553.txt, 6553.uber.quorum.bdplab.read.png, 
 6553.uber.quorum.bdplab.write.png, high_cl_one.png, high_cl_quorum.png, 
 logs.tar.gz, low_cl_one.png, low_cl_quorum.png, tracing.txt, uber_cl_one.png, 
 uber_cl_quorum.png


 Benchmark the difference in performance between CASSANDRA-6504 and trunk.
 * Updating totally unrelated counters (different partitions)
 * Updating the same counters a lot (same cells in the same partition)
 * Different cells in the same few partitions (hot counter partition)
 benchmark: 
 https://github.com/apache/cassandra/tree/1218bcacba7edefaf56cf8440d0aea5794c89a1e
  (old counters)
 compared to: 
 https://github.com/apache/cassandra/tree/714c423360c36da2a2b365efaf9c5c4f623ed133
  (new counters)
 So far, the above changes should only affect the write path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6987) sstablesplit fails in 2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6987:


Labels: qa-resolved  (was: )

 sstablesplit fails in 2.1
 -

 Key: CASSANDRA-6987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6987
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Testing/Jessie
 Oracle JDK 1.7.0_51
 c*-2.1 branch, commit 5ebadc11e36749e6479f9aba19406db3aacdaf41
Reporter: Michael Shuler
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 beta2

 Attachments: 6987.txt


 sstablesplit dtest began failing in 2.1 at 
 http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/95/ triggered by 
 http://cassci.datastax.com/job/cassandra-2.1/186/
 repro:
 {noformat}
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/cassandra > /dev/null
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress write n=100
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Connected to cluster: Test Cluster
 Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
 Sleeping 2s...
 Running WRITE with 50 threads  for 100 iterations
 ops   ,op/s,   key/s,mean, med, .95, .99,.999, max,   time,   stderr
 26836 ,   26830,   26830, 2.0, 1.1, 4.0,20.8,   131.4,   207.4,1.0,  0.0
 64002 ,   36236,   36236, 1.4, 0.8, 4.2,13.8,41.3,   234.8,2.0,  0.0
 105604,   38188,   38188, 1.3, 0.8, 3.2,10.6,78.4,   93.7,3.1,  0.10546
 156179,   36750,   36750, 1.4, 0.9, 2.9, 8.8,   117.0,   139.8,4.5,  0.08482
 202092,   40487,   40487, 1.2, 0.9, 2.9, 7.3,45.6,   122.5,5.6,  0.07231
 246947,   40583,   40583, 1.2, 0.8, 3.0, 7.6,98.2,   152.1,6.7,  0.07056
 290186,   39867,   39867, 1.3, 0.8, 2.6, 8.9,   113.3,   126.4,7.8,  0.06391
 331609,   40155,   40155, 1.2, 0.8, 3.1, 8.7,99.1,   124.9,8.8,  0.05731
 371813,   38742,   38742, 1.3, 0.8, 3.1, 9.2,   117.2,   123.9,9.9,  0.05153
 416853,   40024,   40024, 1.2, 0.8, 3.2, 8.1,70.4,   119.8,   11.0,  0.04634
 458389,   39045,   39045, 1.3, 0.8, 3.2, 9.1,   106.4,   135.9,   12.1,  0.04236
 511323,   36513,   36513, 1.4, 0.8, 3.3, 9.2,   120.2,   161.0,   13.5,  0.03883
 549872,   34296,   34296, 1.5, 0.9, 3.4,11.5,   106.7,   132.7,   14.6,  0.03678
 589405,   34535,   34535, 1.4, 0.9, 2.9,10.6,   106.2,   147.9,   15.8,  0.03607
 633225,   39472,   39472, 1.3, 0.8, 3.0, 7.6,   106.3,   125.1,   16.9,  0.03374
 672751,   38251,   38251, 1.3, 0.8, 3.0, 8.0,94.7,   157.5,   17.9,  0.03193
 714762,   38047,   38047, 1.3, 0.8, 3.0, 9.3,   102.6,   167.8,   19.0,  0.03001
 756629,   38080,   38080, 1.3, 0.8, 3.2, 8.8,   101.7,   117.4,   20.1,  0.02847
 802981,   38955,   38955, 1.3, 0.8, 3.0, 9.1,   105.2,   164.6,   21.3,  0.02708
 847262,   38817,   38817, 1.3, 0.7, 3.2, 9.8,   112.1,   137.4,   22.5,  0.02581
 887639,   38403,   38403, 1.3, 0.8, 2.9,10.0,99.1,   147.8,   23.5,  0.02470
 929362,   35056,   35056, 1.4, 0.8, 3.3,11.5,   111.8,   149.3,   24.7,  0.02360
 980996,   38247,   38247, 1.3, 0.8, 3.5, 8.3,78.8,   129.0,   26.1,  0.02338
 100   ,   39379,   39379, 1.2, 0.9, 3.1, 9.0,29.4,   83.8,   26.5,  0.02238
 Results:
 real op rate  : 37673
 adjusted op rate stderr   : 0
 key rate  : 37673
 latency mean  : 1.3
 latency median: 0.8
 latency 95th percentile   : 3.2
 latency 99th percentile   : 10.4
 latency 99.9th percentile : 92.1
 latency max   : 234.8
 Total operation time  : 00:00:26
 END
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/nodetool compact Keyspace1
 (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/sstablesplit /var/lib/cassandra/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-2-Data.db
 Exception in thread "main" java.lang.AssertionError
  at org.apache.cassandra.db.Keyspace.openWithoutSSTables(Keyspace.java:104)
  at org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:108)
 {noformat}
 There are no errors in system.log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6943) UpdateFunction.abortEarly can cause BTree.update to leave its Builder in a bad state, which affects future operations

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6943:


Labels: qa-resolved  (was: )

 UpdateFunction.abortEarly can cause BTree.update to leave its Builder in a 
 bad state, which affects future operations
 -

 Key: CASSANDRA-6943
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6943
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 beta2

 Attachments: 6943.2.txt, 6943.txt, node4.log, node4_jstack.log, 
 node5.log, node5_jstack.log, node6.log, node6_jstack.log, node7.log, 
 node7_jstack.log, screenshot.png, screenshot2.png, stress_jstack.log


 Running performance scenarios I have seen this characteristic drop in 
 performance happen several times. A similar effect was reproduced in another 
 test cluster. Operations eventually come to a standstill.
 !screenshot.png!
 !screenshot2.png!
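 To make the summary line concrete, here is an illustration only (plain Java, not Cassandra's BTree code) of how an update that aborts early can leave a shared builder holding partial state that silently corrupts later operations unless it is reset:
 {code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/**
 * Illustration only: a reusable builder whose update step can "abort early",
 * analogous in spirit to the failure mode this ticket describes. If an abort
 * does not clear the partially-applied state, the next caller of the shared
 * builder inherits it.
 */
public class ReusableBuilder
{
    private final List<Integer> scratch = new ArrayList<>();

    /** Applies updates; returns null if the abort predicate fires mid-way. */
    public List<Integer> update(List<Integer> updates, Predicate<Integer> abortEarly)
    {
        try
        {
            for (Integer v : updates)
            {
                if (abortEarly.test(v))
                    return null;          // aborted: scratch now holds partial state
                scratch.add(v);
            }
            return new ArrayList<>(scratch);
        }
        finally
        {
            // The important part: always reset shared state, even on the abort path.
            // Comment this line out to see the second result below polluted by the abort.
            scratch.clear();
        }
    }

    public static void main(String[] args)
    {
        ReusableBuilder builder = new ReusableBuilder();
        System.out.println(builder.update(List.of(1, 2, 3), v -> v == 3));   // null (aborted)
        System.out.println(builder.update(List.of(7, 8), v -> false));       // [7, 8] only if state was reset
    }
}
 {code}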



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7037) thrift_hsha_test.py dtest hangs in 2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7037:


Labels: qa-resolved  (was: )

 thrift_hsha_test.py dtest hangs in 2.1
 --

 Key: CASSANDRA-7037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7037
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
  Labels: qa-resolved

 system.log from node1:
 {noformat}
 INFO  [main] 2014-04-14 19:18:53,829 CassandraDaemon.java:102 - Hostname: 
 buildbot-ccm
 INFO  [main] 2014-04-14 19:18:53,868 YamlConfigurationLoader.java:80 - 
 Loading settings from file:/tmp/dtest-pRNmjg/test/node1/conf/cassandra.yaml
 INFO  [main] 2014-04-14 19:18:54,031 YamlConfigurationLoader.java:123 - Node 
 configuration:[authenticator=AllowAllAuthenticator; 
 authorizer=AllowAllAuthorizer; auto_bootstrap=false; auto_snapshot=true; 
 batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; 
 client_encryption_options=REDACTED; cluster_name=test; 
 column_index_size_in_kb=64; 
 commitlog_directory=/tmp/dtest-pRNmjg/test/node1/commitlogs; 
 commitlog_segment_size_in_mb=32; commitlog_sync=periodic; 
 commitlog_sync_period_in_ms=1; compaction_preheat_key_cache=true; 
 compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; 
 concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; 
 counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
 cross_node_timeout=false; 
 data_file_directories=[/tmp/dtest-pRNmjg/test/node1/data]; 
 disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; 
 dynamic_snitch_reset_interval_in_ms=60; 
 dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=SimpleSnitch; 
 flush_directory=/tmp/dtest-pRNmjg/test/node1/flush; 
 hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
 in_memory_compaction_limit_in_mb=64; incremental_backups=false; 
 index_summary_capacity_in_mb=null; 
 index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; 
 internode_compression=all; key_cache_save_period=14400; 
 key_cache_size_in_mb=null; listen_address=127.0.0.1; 
 max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
 memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=0.4; 
 native_transport_port=9042; num_tokens=256; 
 partitioner=org.apache.cassandra.dht.Murmur3Partitioner; 
 permissions_validity_in_ms=2000; phi_convict_threshold=5; 
 preheat_kernel_page_cache=false; range_request_timeout_in_ms=1; 
 read_request_timeout_in_ms=1; 
 request_scheduler=org.apache.cassandra.scheduler.NoScheduler; 
 request_timeout_in_ms=1; row_cache_save_period=0; row_cache_size_in_mb=0; 
 rpc_address=127.0.0.1; rpc_keepalive=true; rpc_port=9160; 
 rpc_server_type=hsha; 
 saved_caches_directory=/tmp/dtest-pRNmjg/test/node1/saved_caches; 
 seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, 
 parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=REDACTED; 
 snapshot_before_compaction=false; ssl_storage_port=7001; 
 start_native_transport=true; start_rpc=true; storage_port=7000; 
 thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=10; 
 tombstone_warn_threshold=1000; trickle_fsync=false; 
 trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=1; 
 write_request_timeout_in_ms=1]
 INFO  [main] 2014-04-14 19:18:54,339 DatabaseDescriptor.java:197 - 
 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
 INFO  [main] 2014-04-14 19:18:54,351 DatabaseDescriptor.java:285 - Global 
 memtable on-heap threshold is enabled at 124MB
 INFO  [main] 2014-04-14 19:18:54,352 DatabaseDescriptor.java:289 - Global 
 memtable off-heap threshold is enabled at 124MB
 INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:113 - JVM 
 vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.7.0_51
 INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:141 - Heap size: 
 523501568/523501568
 INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:143 - Code Cache 
 Non-heap memory: init = 2555904(2496K) used = 686464(670K) committed = 
 2555904(2496K) max = 50331648(49152K)
 INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - Par Eden 
 Space Heap memory: init = 107479040(104960K) used = 71013720(69349K) 
 committed = 107479040(104960K) max = 107479040(104960K)
 INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - Par Survivor 
 Space Heap memory: init = 13369344(13056K) used = 0(0K) committed = 
 13369344(13056K) max = 13369344(13056K)
 INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - CMS Old Gen 
 Heap memory: init = 402653184(393216K) used = 0(0K) committed = 
 402653184(393216K) max = 402653184(393216K)
 INFO  

[jira] [Updated] (CASSANDRA-6883) stress read fails with IOException Data returned was not validated

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6883:


Labels: qa-resolved  (was: )

 stress read fails with IOException Data returned was not validated
 

 Key: CASSANDRA-6883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6883
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: ccm 3 node cluster, java 1.7.0_51
 cassandra 2.1 branch -- 615612f61566
Reporter: Russ Hatch
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 beta2

 Attachments: 6883.txt


 I'm working to do some basic testing of read/write with the new stress. 
 First, I populate data using write like so:
 {noformat}
 rhatch@whatup:~/git/cstar/cassandra$ tools/bin/cassandra-stress write n=100 CL=ONE -key dist=FIXED\(1\) -col n=UNIFORM\(1..100\) -schema replication\(factor=3\) -rate threads=50
 {noformat}
 Then I attempt a read test, and this happens:
 {noformat}
 rhatch@whatup:~/git/cstar/cassandra$ tools/bin/cassandra-stress read n=100 CL=ONE -key dist=FIXED\(1\) -col n=UNIFORM\(1..100\) -schema replication\(factor=3\) -rate threads=50
 Warming up READ with 5 iterations...
 INFO  19:38:14 New Cassandra host /127.0.0.3 added
 INFO  19:38:14 New Cassandra host /127.0.0.2 added
 Connected to cluster: test_stress
 Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
 Datatacenter: datacenter1; Host: /127.0.0.2; Rack: rack1
 Datatacenter: datacenter1; Host: /127.0.0.3; Rack: rack1
 java.io.IOException: Operation [11055] x0 key 01 (0x30303030303030303031) Data returned was not validated
   at org.apache.cassandra.stress.Operation.error(Operation.java:298)
   at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:276)
   at org.apache.cassandra.stress.operations.ThriftReader.run(ThriftReader.java:46)
   at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:304)
 Sleeping 2s...
 Running READ with 50 threads  for 100 iterations
 ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,.999, max,   time,   stderr
 11287 ,   11286,   11286,   11286, 4.7, 3.8,11.9,21.8,34.0,52.9,1.0,  0.0
 23263 ,   11764,   11764,   11764, 4.3, 3.5,10.4,18.1,29.8,51.8,2.0,  0.0
 35300 ,   11889,   11889,   11889, 4.2, 3.7, 9.6,15.3,27.3,40.0,3.0,  0.01467
 47239 ,   11737,   11737,   11737, 4.3, 3.7,10.1,17.0,26.9,47.7,4.0,  0.01289
 59140 ,   11729,   11729,   11729, 4.3, 3.8, 9.6,15.1,25.7,47.9,5.1,  0.00979
 java.io.IOException: Operation [66434] x0 key 01 (0x30303030303030303031) Data returned was not validated
   at org.apache.cassandra.stress.Operation.error(Operation.java:298)
   at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:276)
   at org.apache.cassandra.stress.operations.ThriftReader.run(ThriftReader.java:46)
   at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:304)
 66584 ,   11952,   11952,   11952, 4.2, 3.6, 9.9,15.7,24.7,64.9,5.7,  0.00788
 FAILURE
 {noformat}
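 For context on what "Data returned was not validated" means, a conceptual sketch (not cassandra-stress's actual implementation) of read-side validation: regenerate the expected payload deterministically from the key and compare it with what the cluster returned:
 {code}
import java.util.Arrays;
import java.util.Random;

/**
 * Conceptual sketch only: a reader that regenerates the value it expects for a
 * key from a deterministic seed and compares it to what the cluster returned.
 * A mismatch is reported the same way the error above reads: the data
 * "was not validated".
 */
public class ReadValidationSketch
{
    /** Deterministically derive the payload the writer would have produced for this key. */
    static byte[] expectedValue(String key, int length)
    {
        byte[] value = new byte[length];
        new Random(key.hashCode()).nextBytes(value); // same key -> same bytes on write and read side
        return value;
    }

    /** Returns true if the bytes read back match the regenerated expectation. */
    static boolean validate(String key, byte[] returned)
    {
        return returned != null && Arrays.equals(returned, expectedValue(key, returned.length));
    }

    public static void main(String[] args)
    {
        String key = "0000000001"; // the hex key in the error above decodes to this ASCII string
        byte[] written = expectedValue(key, 34);

        System.out.println("round-trip ok:  " + validate(key, written));
        byte[] corrupted = written.clone();
        corrupted[0] ^= 1;
        System.out.println("corrupted data: " + validate(key, corrupted)); // false -> "not validated"
    }
}
 {code}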



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4681) SlabAllocator spends a lot of time in Thread.yield

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-4681:


Flagged: Impediment

 SlabAllocator spends a lot of time in Thread.yield
 --

 Key: CASSANDRA-4681
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4681
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: OEL Linux
Reporter: Oleg Kibirev
Assignee: Jonathan Ellis
Priority: Minor
  Labels: performance
 Attachments: 4681-v3.txt, 4691-short-circuit.txt, 
 4691-v3-rebased.txt, SlabAllocator.java, SlabAllocator.java.list, 
 slab-list.patch


 When profiling high volume inserts into Cassandra running on a host with fast 
 SSD and CPU, Thread.yield() invoked by SlabAllocator appeared as the top item 
 in CPU samples. The fix is to return a regular byte buffer if current slab is 
 being initialized by another thread. So instead of:
 if (oldOffset == UNINITIALIZED)
 {
     // The region doesn't have its data allocated yet.
     // Since we found this in currentRegion, we know that whoever
     // CAS-ed it there is allocating it right now. So spin-loop
     // shouldn't spin long!
     Thread.yield();
     continue;
 }
 do:
 if (oldOffset == UNINITIALIZED)
     return ByteBuffer.allocate(size);
 I achieved 4x speed up in my (admittedly specialized) benchmark by using an 
 optimized version of SlabAllocator attached. Since this code is in the 
 critical path, even doing excessive atomic instructions or allocating 
 unneeded extra ByteBuffer instances has a measurable effect on performance
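 A standalone sketch of the proposed short-circuit (this is not the attached SlabAllocator.java; the region size and class names are assumptions): a thread that finds the current slab still being initialized hands out a plain heap ByteBuffer instead of yielding in a spin loop:
 {code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

/** Standalone sketch of the idea in this ticket, not the attached allocator. */
public class SlabSketch
{
    static final int UNINITIALIZED = -1;
    static final int REGION_SIZE = 1 << 20; // 1 MiB slabs (assumed size)

    static class Region
    {
        final AtomicInteger nextOffset = new AtomicInteger(UNINITIALIZED);
        volatile ByteBuffer data;

        void init()
        {
            data = ByteBuffer.allocate(REGION_SIZE);
            nextOffset.set(0);
        }
    }

    final AtomicReference<Region> currentRegion = new AtomicReference<>(new Region());

    ByteBuffer allocate(int size)
    {
        while (true)
        {
            Region region = currentRegion.get();
            int oldOffset = region.nextOffset.get();

            // The short-circuit: another thread CAS-ed this region in and is still
            // allocating its backing buffer; don't spin, just fall back to the heap.
            if (oldOffset == UNINITIALIZED)
                return ByteBuffer.allocate(size);

            int newOffset = oldOffset + size;
            if (newOffset > REGION_SIZE)
            {
                // Region exhausted: swap in a fresh one and retry.
                Region fresh = new Region();
                if (currentRegion.compareAndSet(region, fresh))
                    fresh.init();
                continue;
            }
            if (region.nextOffset.compareAndSet(oldOffset, newOffset))
            {
                ByteBuffer dup = region.data.duplicate();
                dup.position(oldOffset);
                dup.limit(newOffset);
                return dup.slice(); // a [oldOffset, newOffset) window of the slab
            }
        }
    }

    public static void main(String[] args)
    {
        SlabSketch allocator = new SlabSketch();
        allocator.currentRegion.get().init(); // make the first region ready for use
        System.out.println("allocated " + allocator.allocate(128).capacity() + " bytes from slab");
    }
}
 {code}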



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4681) SlabAllocator spends a lot of time in Thread.yield

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-4681:


Flagged:   (was: Impediment)

 SlabAllocator spends a lot of time in Thread.yield
 --

 Key: CASSANDRA-4681
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4681
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: OEL Linux
Reporter: Oleg Kibirev
Assignee: Jonathan Ellis
Priority: Minor
  Labels: performance
 Attachments: 4681-v3.txt, 4691-short-circuit.txt, 
 4691-v3-rebased.txt, SlabAllocator.java, SlabAllocator.java.list, 
 slab-list.patch


 When profiling high volume inserts into Cassandra running on a host with fast 
 SSD and CPU, Thread.yield() invoked by SlabAllocator appeared as the top item 
 in CPU samples. The fix is to return a regular byte buffer if current slab is 
 being initialized by another thread. So instead of:
 if (oldOffset == UNINITIALIZED)
 {
     // The region doesn't have its data allocated yet.
     // Since we found this in currentRegion, we know that whoever
     // CAS-ed it there is allocating it right now. So spin-loop
     // shouldn't spin long!
     Thread.yield();
     continue;
 }
 do:
 if (oldOffset == UNINITIALIZED)
     return ByteBuffer.allocate(size);
 I achieved 4x speed up in my (admittedly specialized) benchmark by using an 
 optimized version of SlabAllocator attached. Since this code is in the 
 critical path, even doing excessive atomic instructions or allocating 
 unneeded extra ByteBuffer instances has a measurable effect on performance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4681) SlabAllocator spends a lot of time in Thread.yield

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-4681:


Labels: performance qa-resolved  (was: performance)

 SlabAllocator spends a lot of time in Thread.yield
 --

 Key: CASSANDRA-4681
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4681
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: OEL Linux
Reporter: Oleg Kibirev
Assignee: Jonathan Ellis
Priority: Minor
  Labels: performance, qa-resolved
 Attachments: 4681-v3.txt, 4691-short-circuit.txt, 
 4691-v3-rebased.txt, SlabAllocator.java, SlabAllocator.java.list, 
 slab-list.patch


 When profiling high volume inserts into Cassandra running on a host with fast 
 SSD and CPU, Thread.yield() invoked by SlabAllocator appeared as the top item 
 in CPU samples. The fix is to return a regular byte buffer if current slab is 
 being initialized by another thread. So instead of:
 if (oldOffset == UNINITIALIZED)
 {
     // The region doesn't have its data allocated yet.
     // Since we found this in currentRegion, we know that whoever
     // CAS-ed it there is allocating it right now. So spin-loop
     // shouldn't spin long!
     Thread.yield();
     continue;
 }
 do:
 if (oldOffset == UNINITIALIZED)
     return ByteBuffer.allocate(size);
 I achieved 4x speed up in my (admittedly specialized) benchmark by using an 
 optimized version of SlabAllocator attached. Since this code is in the 
 critical path, even doing excessive atomic instructions or allocating 
 unneeded extra ByteBuffer instances has a measurable effect on performance



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6600) Huge read latency with LOCAL_ONE when < RF nodes are up

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6600:


Labels: qa-resolved  (was: )

 Huge read latency with LOCAL_ONE when < RF nodes are up
 ---

 Key: CASSANDRA-6600
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6600
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Duncan Sands
Assignee: Michael Shuler
  Labels: qa-resolved

 I recently upgraded a multi data centre cluster from 1.2.12 to 2.0.4.
 In one data centre there are 3 nodes with an RF of 3.  Clients are reading 
 from these nodes using CQL3 and LOCAL_ONE.  At one point during the upgrade 1 
 node was down, so less than RF nodes were up.  Read latency went from < 1ms 
 to > 1 second.  Once all nodes were up, read latency went back down to < 1ms. 
  If I stop a node then read latency shoots back up again.
 This is not due to my client, as I was able to reproduce this as follows.  
 With all RF nodes up:
   connect to a node using cqlsh
   set the consistency level to LOCAL_ONE
   use cqlsh to read a few values from a random table - it completes instantly
   bring down one of the other nodes in the same data centre
   do the same query again in cqlsh.  It times out with "Request did not 
 complete within rpc_timeout."
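 For completeness, a minimal client-side sketch of the kind of LOCAL_ONE read described above, assuming the bundled DataStax Java driver (2.0 series); the contact point, keyspace and table names are placeholders, not taken from the ticket:
 {code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

/** Sketch of a per-statement LOCAL_ONE read via the Java driver. */
public class LocalOneRead
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect();
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM some_keyspace.some_table LIMIT 5");
            stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE); // only one local replica needs to answer
            for (Row row : session.execute(stmt))
                System.out.println(row);
        }
        finally
        {
            cluster.close();
        }
    }
}
 {code}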



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5326) Automatically report on test coverage on commit

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5326:


Labels: qa-resolved  (was: )

 Automatically report on test coverage on commit
 ---

 Key: CASSANDRA-5326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5326
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire
  Labels: qa-resolved
 Attachments: trunk.cobertura-reports.patch


 We need a test coverage report that is always up to date with trunk.
 This should include coverage from the unit tests as well as cassandra-dtest.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6321) Unit Tests Failing in trunk

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6321:


Labels: qa-resolved  (was: )

 Unit Tests Failing in trunk
 ---

 Key: CASSANDRA-6321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6321
 Project: Cassandra
  Issue Type: Test
  Components: Tests
 Environment: Debian Wheezy, Oracle Java 1.7.0_25 
 (buildbot.datastax.com VM)
Reporter: Michael Shuler
Priority: Minor
  Labels: qa-resolved

 Many unit tests are failing in trunk [0] due to a timeout, but there are 
 relatively few in the cassandra-2.0 branch [1].
 I bisected the first test failure, CacheProviderTest, to commit:
 {code}
 ((6c9efb0...)|BISECTING)mshuler@mana:~/DataStax/repos/cassandra$ git bisect 
 good
 a552b305f3d1b17e394744b18efd7f40599f3c2e is the first bad commit
 commit a552b305f3d1b17e394744b18efd7f40599f3c2e
 Author: Sylvain Lebresne sylv...@datastax.com
 Date:   Thu Sep 19 09:20:13 2013 +0200
 Add user-defined types to CQL3
 
 patch by slebresne; reviewed by iamaleskey for CASSANDRA-5590
 :100644 100644 e55c8f79d66e91e896dc5ef1f0b7567b03e3910b 
 4231b0b1a42a2d243fd6bcd9367ab7299f6ce680 M  CHANGES.txt
 :04 04 b838b9b052f19eafe2f927188f8c46a48feae01e 
 c44978f3a5e2c69333fb7036a32b5f8a81f5d28a M  src
 {code}
 It is possible that the large number of subsequent failures may be due to 
 some of the substantial changes in the addition of user types (? that's a 
 guess).  I will see if I can bisect and identify some other individual test 
 failures, but wanted to get this general ticket started.
 Example trunk unit test:
 [0] 
 http://buildbot.datastax.com:8020/builders/trunk/builds/2531/steps/shell/logs/stdio
 Example 2.0 unit test:
 [1] 
 http://buildbot.datastax.com:8020/builders/cassandra-2.0/builds/340/steps/shell/logs/stdio
 Thanks!
 Michael Shuler



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6654) Droppable tombstones are not being removed from LCS table despite being above 20%

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6654:


Labels: qa-resolved  (was: )

 Droppable tombstones are not being removed from LCS table despite being above 
 20%
 -

 Key: CASSANDRA-6654
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6654
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.13 VNodes with murmur3
Reporter: Keith Wright
Assignee: Marcus Eriksson
  Labels: qa-resolved
 Attachments: Screen Shot 2014-02-05 at 9.38.20 AM.png, dtrlog.txt, 
 repro3.py


 JMX is showing that one of our CQL3 LCS tables has a droppable tombstone 
 ratio above 20% and increasing (currently at 28%).  Compactions are not 
 falling behind and we are using the OOTB setting for this feature so I would 
 expect not to go above 20% (will attach screen shot from JMX).   Table 
 description:
 CREATE TABLE global_user (
   user_id timeuuid,
   app_id int,
   type text,
   name text,
   extra_param map<text, text>,
   last timestamp,
   paid boolean,
   sku_time map<text, timestamp>,
   values map<timestamp, float>,
   PRIMARY KEY (user_id, app_id, type, name)
 ) WITH
   bloom_filter_fp_chance=0.10 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=86400 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'sstable_size_in_mb': '160', 'class': 'LeveledCompactionStrategy'} AND
   compression={'chunk_length_kb': '8', 'crc_check_chance': '0.1', 'sstable_compression': 'LZ4Compressor'};
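 As a back-of-the-envelope illustration of the 20% check discussed here (not Cassandra's compaction code), the droppable-tombstone ratio is simply droppable tombstones over total cells, compared against the default 0.20 threshold:
 {code}
/**
 * Illustrative only: a tombstone counts as "droppable" once it is older than
 * gc_grace_seconds, and a tombstone-driven single-SSTable compaction is worth
 * considering when the estimated droppable ratio exceeds the threshold
 * (0.20 out of the box, as referenced in this ticket).
 */
public class DroppableTombstoneRatio
{
    static final double TOMBSTONE_THRESHOLD = 0.20; // OOTB threshold referenced above
    static final int GC_GRACE_SECONDS = 86400;      // from the table definition above

    /** Ratio of tombstones already past gc_grace to all cells in the SSTable. */
    static double droppableRatio(long droppableTombstones, long totalCells)
    {
        return totalCells == 0 ? 0.0 : (double) droppableTombstones / totalCells;
    }

    static boolean worthCompactingForTombstones(long droppableTombstones, long totalCells)
    {
        return droppableRatio(droppableTombstones, totalCells) > TOMBSTONE_THRESHOLD;
    }

    public static void main(String[] args)
    {
        long total = 1_000_000;
        long droppable = 280_000; // ~28%, the ratio reported via JMX in this ticket
        System.out.printf("ratio=%.2f, candidate=%b%n",
                          droppableRatio(droppable, total),
                          worthCompactingForTombstones(droppable, total));
    }
}
 {code}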



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5568) Invalid tracing info for execute_prepared_cql3_query

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5568:


Labels: qa-resolved  (was: )

 Invalid tracing info for execute_prepared_cql3_query
 

 Key: CASSANDRA-5568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5568
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Windows 8 x64
 Java HotSpot(TM) 64-Bit Server VM/1.7.0_11
Reporter: Pierre Chalamet
Assignee: Ryan McGuire
Priority: Minor
  Labels: qa-resolved
 Attachments: 5568-logs.tar.gz, 5568_test.py


 When using trace_next_query() and then execute_prepared_cql3_query(), it looks 
 like the tracing info is invalid (the number of sessions/events and the column 
 values are wrong).
 How to reproduce:
 {code}
 create keyspace Tests with replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 create table Tests.stresstest (strid varchar,intid int, primary key (strid))
 {code}
 and then executing the following prepared query 50,000 times:
 {code}
 insert into Tests.stresstest (intid, strid) values (?, ?)
 {code}
 produces the following results:
 {code}
 localhost select count(*) from Tests.stresstest
 ++
 | count  |
 ++
 | 5  |
 ++
 localhost select count(*) from system_traces.events
 ++
 | count  |
 ++
 | 20832  |
 ++
 localhost select count(*) from system_traces.sessions
 ++
 | count  |
 ++
 | 26717  |
 ++
 localhost select * from system_traces.sessions limit 10
 +--+-+--++-+---+
 | sessionid| coordinator | duration | parameters | request | startedat |
 +==+=+==++=+===+
 | 9aefc263-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:16:55 PM  |
 +--+-+--++-+---+
 | 9ce0bcf7-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:16:59 PM  |
 +--+-+--++-+---+
 | 9dbe4ba4-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:17:00 PM  |
 +--+-+--++-+---+
 | 9d4d3a54-bcdb-11e2-8c60-fb495ee6a12c | | 44   | | |   |
 +--+-+--++-+---+
 | 9a790bc1-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:16:55 PM  |
 +--+-+--++-+---+
 | 9c992c98-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:16:58 PM  |
 +--+-+--++-+---+
 | 9e27e2f6-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   | 53   | | execute_prepared_cql3_query | 5/14/2013 9:17:01 PM  |
 +--+-+--++-+---+
 | 9b172074-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   |  | | execute_prepared_cql3_query | 5/14/2013 9:16:56 PM  |
 +--+-+--++-+---+
 | 9a7cdc53-bcdb-11e2-8c60-fb495ee6a12c | 127.0.0.1   | 53   | | 

[jira] [Updated] (CASSANDRA-5223) Secondary index doesn't get repaired

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5223:


Labels: qa-resolved  (was: )

 Secondary index doesn't get repaired
 

 Key: CASSANDRA-5223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5223
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
 Environment: Ubuntu dtest, laptop with 8G RAM
Reporter: Alexei Bakanov
Assignee: Ryan McGuire
  Labels: qa-resolved
 Attachments: secondary_index_repair_test.py, 
 secondary_index_repair_test_2.py, secondary_index_repair_test_3.py


 Looks like secondary indexes don't get repaired by any of the Cassandra 
 repair mechanisms for NetworkTopologyStrategy. SimpleStrategy however works 
 fine.
 The issue started as a mail to Cassandra userlist: 
 http://article.gmane.org/gmane.comp.db.cassandra.user/30909
 I stripped the reproduction recipe down and made a d-test which inserts two rows 
 into a cluster of two nodes, where the first node goes down after the first 
 insert and comes back up after the second insert. The secondary index doesn't get 
 repaired by hinted handoff, read repair, or the manual 'repair' 
 operation which the d-test triggers before reading the data.
 Moreover, the second row is not visible in the index unless I do a CL.ALL read 
 or drop/create the index.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6519) cqlsh hangs indefinitely when dropping table

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6519:


Labels: qa-resolved  (was: )

 cqlsh hangs indefinitely when dropping table
 

 Key: CASSANDRA-6519
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6519
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: C* from trunk -- cassandra-2.0.3-709-g486f079
 java 1.7.0_45 (on linux 64 bit)
 [cqlsh 4.1.0 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 3 node cluster built on my machine using ccm
Reporter: Russ Hatch
Assignee: Russ Hatch
  Labels: qa-resolved

 Using cqlsh, I issue a drop statement for a table and it hangs indefinitely 
 (running cassandra-2.0.3-709-g486f079 from trunk).
 Here's the statement:
 cqlsh:taskapp> drop table user_task;
 Here's the full setup I used:
 {noformat}
 ccm create test_cluster
 ccm populate -n 3
 ccm start
 ccm node1 cqlsh
 CREATE KEYSPACE taskapp WITH replication = {
   'class': 'SimpleStrategy',
   'replication_factor': '3'
 };
 use taskapp;
 create table user (
 user_id timeuuid PRIMARY KEY,
 first_name text,
 last_name text,
 email text
 );
 create table user_task (
 task_id timeuuid PRIMARY KEY,
 user_id timeuuid,
 task_order int,
 task_description text,
 is_complete boolean,
 is_top_level boolean,
 subtask_ids list<timeuuid>
 );
 {noformat}
 and then the statement which hangs:
 drop table user_task;
 I also checked that all 3 nodes have the same schema version uuid, using 
 these queries someone shared with me:
 {noformat}
 cqlsh:taskapp> SELECT rpc_address, schema_version FROM system.peers
... ;
  rpc_address | schema_version
 -+--
127.0.0.3 | 6e782241-91e9-3cfa-88c0-88f445a573c1
127.0.0.2 | 6e782241-91e9-3cfa-88c0-88f445a573c1
 (2 rows)
 cqlsh:taskapp> SELECT schema_version FROM system.local WHERE key='local';
  schema_version
 --
  6e782241-91e9-3cfa-88c0-88f445a573c1
 (1 rows)
 {noformat}
 I checked the logs for all 3 nodes, which I think were normal. Node1 (used in 
 the cqlsh session) showed this message:
 {noformat}
 INFO  [Thrift:3] 2013-12-20 14:29:23,200 MigrationManager.java:289 - Drop 
 ColumnFamily 'taskapp/user_task'
 {noformat}
 The other node logs showed no activity that looked related to the attempted 
 drop statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5360) Disable Thread Biased Locking in the JVM

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5360:


Labels: qa-resolved  (was: )

 Disable Thread Biased Locking in the JVM
 

 Key: CASSANDRA-5360
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5360
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.12, 1.1.10, 1.2.3
Reporter: amorton
Assignee: amorton
Priority: Minor
  Labels: qa-resolved
 Attachments: 5360.txt, cassandra-biased-locking-tests.txt


 Biased Locking 
 (https://blogs.oracle.com/dave/entry/biased_locking_in_hotspot) is enabled by 
 default since JVM 6 and is designed to optimise applications where locks are 
 normally used by a single thread. This is the opposite of how Cassandra's worker 
 pools work. Disabling Biased Locking (-XX:-UseBiasedLocking) has yielded 
 improvements of 5% to 10% in throughput and reduced JVM pauses. 
 Details follow.
 h1. Application Pausing 
 The following was observed on a 16 core EC2 SSD instance...
 {noformat}
 Heap after GC invocations=32 (full 0):
  par new generation   total 1024000K, used 59799K [0x0006fae0, 
 0x000745e0, 0x000745e0)
   eden space 819200K,   0% used [0x0006fae0, 0x0006fae0, 
 0x00072ce0)
   from space 204800K,  29% used [0x00072ce0, 0x000730865ff8, 
 0x00073960)
   to   space 204800K,   0% used [0x00073960, 0x00073960, 
 0x000745e0)
  concurrent mark-sweep generation total 2965504K, used 416618K 
 [0x000745e0, 0x0007fae0, 0x0007fae0)
  concurrent-mark-sweep perm gen total 22592K, used 22578K 
 [0x0007fae0, 0x0007fc41, 0x0008)
 }
 Total time for which application threads were stopped: 0.0175680 seconds
 Total time for which application threads were stopped: 0.0008680 seconds
 Total time for which application threads were stopped: 0.0004030 seconds
 Total time for which application threads were stopped: 0.0006460 seconds
 Total time for which application threads were stopped: 0.0009210 seconds
 Total time for which application threads were stopped: 0.0007250 seconds
 Total time for which application threads were stopped: 0.0016340 seconds
 Total time for which application threads were stopped: 0.0005570 seconds
 Total time for which application threads were stopped: 0.0007270 seconds
 Total time for which application threads were stopped: 0.0010170 seconds
 Total time for which application threads were stopped: 0.0006240 seconds
 Total time for which application threads were stopped: 0.0013250 seconds
 {Heap before GC invocations=32 (full 0):
  par new generation   total 1024000K, used 878999K [0x0006fae0, 
 0x000745e0, 0x000745e0)
   eden space 819200K, 100% used [0x0006fae0, 0x00072ce0, 
 0x00072ce0)
   from space 204800K,  29% used [0x00072ce0, 0x000730865ff8, 
 0x00073960)
   to   space 204800K,   0% used [0x00073960, 0x00073960, 
 0x000745e0)
  concurrent mark-sweep generation total 2965504K, used 416618K 
 [0x000745e0, 0x0007fae0, 0x0007fae0)
  concurrent-mark-sweep perm gen total 22784K, used 22591K 
 [0x0007fae0, 0x0007fc44, 0x0008)
 2013-03-15T21:21:17.015+: 1038.849: [GC Before GC:
 Statistics for BinaryTreeDictionary:
 {noformat}
 The extra "were stopped" lines were annoying me, and the JVM Performance book 
 offered this explanation:
 bq. If there happens to be additional safepoints between garbage collections, 
 the output will show "Application time:" and "Total time for which application 
 threads were stopped:" messages for each safepoint that occurs between garbage 
 collections.
 h1. Safepoints 
 Safepoints are times when the JVM pauses all application threads to run a 
 single VM thread that needs to know the heap is not going to change. GC is 
 one cause, others are (from Java Performance):
 bq. There many other safepoints, such as biased locking revocation, thread 
 stack dumps, thread suspension or stopping (i.e., java.lang.Thread.stop() 
 method), and numerous inspection and modification operations requested 
 through JVMTI.
 On my MBP (corei7, 16Gb, ssd) I ran cassandra with the stress test with 
 -XX:-PrintGCApplicationConcurrentTime and -XX:-PrintSafepointStatistics which 
 outputs information when the JVM exits. The biased-locking-tests.txt 
 attachment shows that stress took  1m 23 seconds to complete and the 
 safepoint statistics show most of the pauses were to revoke biased locks. 
 A second test was run (both with a clean data dir) with biased locking 
 disabled that took 1 minute 18 seconds. The  safepoint stats did not 
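 A tiny harness (not the ticket's attachment) for reproducing the comparison: run it once with default JVM options and once with -XX:-UseBiasedLocking; on the JDK 7-era HotSpot discussed above, adding -XX:+PrintSafepointStatistics also shows how many safepoints went to revoking biased locks:
 {code}
import java.util.concurrent.CyclicBarrier;

/**
 * Minimal comparison harness:
 *   java BiasedLockingHarness
 *   java -XX:-UseBiasedLocking BiasedLockingHarness
 */
public class BiasedLockingHarness
{
    private static final Object lock = new Object();
    private static long counter;

    public static void main(String[] args) throws Exception
    {
        int threads = Runtime.getRuntime().availableProcessors();
        long iterationsPerThread = 2_000_000L;
        CyclicBarrier start = new CyclicBarrier(threads + 1);

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++)
        {
            workers[i] = new Thread(() -> {
                try { start.await(); } catch (Exception e) { throw new RuntimeException(e); }
                for (long j = 0; j < iterationsPerThread; j++)
                {
                    // Synchronized work shared across a pool of threads -- the opposite
                    // of the single-thread pattern biased locking optimises for.
                    synchronized (lock) { counter++; }
                }
            });
            workers[i].start();
        }

        long begin = System.nanoTime();
        start.await();                 // release all workers together
        for (Thread t : workers)
            t.join();
        long elapsedMs = (System.nanoTime() - begin) / 1_000_000;
        System.out.println(threads + " threads, counter=" + counter + ", took " + elapsedMs + " ms");
    }
}
 {code}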

[jira] [Updated] (CASSANDRA-5323) Revisit disabled dtests

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5323:


Labels: qa-resolved  (was: )

 Revisit disabled dtests
 ---

 Key: CASSANDRA-5323
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5323
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Michael Shuler
  Labels: qa-resolved

 The following dtests are disabled in buildbot. If they can be re-enabled, 
 great; if they can't, can they be fixed? 
 upgrade|decommission|sstable_gen|global_row|putget_2dc|cql3_insert



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5325) Report generator for stress testing two branches

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5325:


Labels: qa-resolved  (was: )

 Report generator for stress testing two branches
 

 Key: CASSANDRA-5325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5325
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire
  Labels: qa-resolved

 We need a simple and automatic way of reporting/charting the performance 
 differences between two different branches of C* using cassandra-stress. 
 * Bootstrap appropriate java and cassandra onto a set of nodes
 * Create cluster out of those nodes
 * Run cassandra-stress write
 * Allow compaction to settle
 * Run cassandra-stress read
 * Gather statistics and chart 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6457) Bisect unit test failures on trunk (2.1)

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6457:


Labels: qa-resolved  (was: )

 Bisect unit test failures on trunk (2.1)
 

 Key: CASSANDRA-6457
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6457
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
  Labels: qa-resolved
 Attachments: tmp.patch


 Identify and bisect tests failing in trunk (2.1).
 BlacklistingCompactionsTest - CASSANDRA-6414



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5612) NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5612:


Labels: qa-resolved  (was: )

 NPE when upgrading a mixed version 1.1/1.2 cluster fully to 1.2
 ---

 Key: CASSANDRA-5612
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5612
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Ryan McGuire
Assignee: Aleksey Yeschenko
  Labels: qa-resolved
 Attachments: logs.tar.gz, upgrade_through_versions_test.py


 See the attached upgrade_through_versions_test.py upgrade_test_mixed().
 Conceptually this method does the following:
 * Instantiates a 3 node 1.1.9 cluster
 * Writes some data
 * Shuts down node 1 and upgrades it to 1.2 (HEAD)
 * Brings the node1 back up, making the cluster a mixed version 1.1/1.2
 * Brings down node2 and node3 and does the same upgrade making it all the 
 same version.
 * At this point, I would run upgradesstables on each of the nodes, but there 
 is already an error on node3 directly after its upgrade:
 {code}
 INFO [FlushWriter:1] 2013-06-03 22:49:46,543 Memtable.java (line 461) Writing Memtable-peers@1023263314(237/237 serialized/live bytes, 14 ops)
  INFO [FlushWriter:1] 2013-06-03 22:49:46,556 Memtable.java (line 495) Completed flushing /tmp/dtest-YqMtHN/test/node3/data/system/peers/system-peers-ic-2-Data.db (291 bytes) for commitlog position ReplayPosition(segmentId=1370314185862, position=58616)
  INFO [GossipStage:1] 2013-06-03 22:49:46,568 StorageService.java (line 1330) Node /127.0.0.2 state jump to normal
 ERROR [MigrationStage:1] 2013-06-03 22:49:46,655 CassandraDaemon.java (line 192) Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.DefsTable.addColumnFamily(DefsTable.java:511)
 at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:445)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
 at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:55)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
 at java.util.concurrent.FutureTask.run(FutureTask.java:166)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 {code}
 This error is repeatable, but inconsistent. Interestingly, it is always node3 
 with the error.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6536) SStable gets corrupted after keyspace drop and recreation

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6536:


Labels: qa-resolved  (was: )

 SStable gets corrupted after keyspace drop and recreation
 -

 Key: CASSANDRA-6536
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6536
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.12 & 1.2.13
Reporter: Dominic Letz
Assignee: Russ Hatch
  Labels: qa-resolved
 Attachments: 6536.py


 ERROR [ReadStage:41] 2014-01-02 14:27:00,629 CassandraDaemon.java (line 191) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
 Corrupt (negative) value length encountered
 When running a test like this the SECOND TIME:
 DROP KEYSPACE testspace;
 CREATE KEYSPACE testspace with REPLICATION = {'class':'SimpleStrategy', 
 'replication_factor':1} AND durable_writes = false;
 USE testspace;
 CREATE TABLE testtable (id text PRIMARY KEY, group text) WITH compression = 
 {'sstable_compression':'LZ4Compressor'};
 CREATE INDEX testindex ON testtable (group);
 INSERT INTO testtable (id, group) VALUES ('1', 'beta');
 INSERT INTO testtable (id, group) VALUES ('2', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('3', 'delta');
 INSERT INTO testtable (id, group) VALUES ('4', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('5', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('6', 'beta');
 INSERT INTO testtable (id, group) VALUES ('7', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('8', 'delta');
 INSERT INTO testtable (id, group) VALUES ('9', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('00010', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('00011', 'beta');
 INSERT INTO testtable (id, group) VALUES ('00012', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('00013', 'delta');
 INSERT INTO testtable (id, group) VALUES ('00014', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('00015', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('00016', 'beta');
 INSERT INTO testtable (id, group) VALUES ('00017', 'gamma');
 ... 
 INSERT INTO testtable (id, group) VALUES ('10', 'alpha');
 SELECT COUNT(*) FROM testspace.testtable WHERE group = 'alpha' LIMIT 11;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5724) Timeouts for slice/rangeslice queries while some nodes versions are lower than 1.2 and some higher.

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5724:


Labels: qa-resolved  (was: )

 Timeouts for slice/rangeslice queries while some nodes versions are lower 
 than 1.2 and some higher.
 ---

 Key: CASSANDRA-5724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5724
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Or Sher
Assignee: Ryan McGuire
  Labels: qa-resolved

 When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range 
 slice queries executed against a 1.2.* node fail due to a timeout exception:
 [default@orTestKS] list orTestCF;
 Using default limit of 100
 Using default column limit of 100
 null
 TimedOutException()
   at 
 org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
   at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
 It seems this issue is caused by the new parameter in 1.2.*, 
 internode_compression, which is set to all by default.
 It seems that setting this parameter to none solves the problem.
 I think the question is whether Cassandra should somehow support nodes with 
 different configurations for this parameter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5586) Remove cli usage from dtests

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5586:


Labels: qa-resolved  (was: )

 Remove cli usage from dtests
 

 Key: CASSANDRA-5586
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5586
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Assignee: Ryan McGuire
Priority: Minor
  Labels: qa-resolved

 The dtests in some situations fork the cli.  With the cli essentially 
 stagnant now, there's no need to do this when the same thing can be 
 accomplished with a thrift or cql call. (ccm's convenience api for invoking 
 the cli could probably also be removed at this point)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5321) Fix the dtest for upgrading a cluster

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5321:


Labels: qa-resolved  (was: )

 Fix the dtest for upgrading a cluster
 -

 Key: CASSANDRA-5321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5321
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire
  Labels: qa-resolved

 Fix the upgrade test, have it perform a 1.1-1.2 upgrade (and forget 
 everything else), and perhaps do some things that would be valid when the 
 cluster is still mixed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7003) cqlsh_tests test_eat_glass dtest fails on 2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7003:


Labels: qa-resolved  (was: )

 cqlsh_tests test_eat_glass dtest fails on 2.1
 -

 Key: CASSANDRA-7003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7003
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
  Labels: qa-resolved

 {noformat}
 Traceback (most recent call last):
   File /home/mshuler/git/cassandra-dtest/cqlsh_tests.py, line 276, in 
 test_eat_glass
 self.assertEquals(output.count('Можам да јадам стакло, а не ме штета.'), 
 16)
 AssertionError: 0 != 16
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5658) TracingStage frequently times out

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5658:


Labels: qa-resolved  (was: )

 TracingStage frequently times out
 -

 Key: CASSANDRA-5658
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5658
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4, 1.2.6, 2.0 beta 1
Reporter: Ryan McGuire
  Labels: qa-resolved
 Attachments: 5658-logs.tar.gz, trace_bug.cql, trace_bug.py, 
 trace_bug_cqlsh.py


 I am seeing frequent timeout errors when doing programmatic traces via 
 trace_next_query()
 {code}
 ERROR [TracingStage:1] 2013-06-18 19:10:20,669 CassandraDaemon.java (line 
 196) Exception in thread Thread[TracingStage:1,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
 received only 0 responses.
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: org.apache.cassandra.exceptions.WriteTimeoutException: Operation 
 timed out - received only 0 responses.
 at 
 org.apache.cassandra.service.AbstractWriteResponseHandler.get(AbstractWriteResponseHandler.java:81)
 at 
 org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:454)
 at 
 org.apache.cassandra.tracing.TraceState$1.runMayThrow(TraceState.java:100)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 3 more
 {code}
 Attached is the sample code which produced this error and the logs. The error 
 occurs directly after the INSERT statement.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5937) check read performance between 1.2.5 and 1.2.8

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5937:


Labels: qa-resolved  (was: )

 check read performance between 1.2.5 and 1.2.8
 --

 Key: CASSANDRA-5937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5937
 Project: Cassandra
  Issue Type: Test
Reporter: Chris Burroughs
Assignee: Ryan McGuire
Priority: Minor
  Labels: qa-resolved

 We upgraded from 1.2.5 to 1.2.8 on 08-02 and saw a significant increase in 
 read latency:
  * median: < 1ms to > 1ms
  * 75p: ~ 1 ms to > 3.5 ms
  * 95p: ~ 3 ms to ~ 8 ms
 Cluster is a 2 DC cluster with about 20 nodes per DC (using 
 GossipingPropertyFileSnitch).  All queries are for one Keyspace filled with 
 skinny rows so we have Row cache enabled and the Key cache disabled.  The row 
 cache hit is close to 70%, which makes the magnitude of the new median/75th 
 hard to understand.
 I have not been able to demonstrate a regression with stress on either a 
 single- or dozen-node cluster.  For internal reasons involving client-side 
 problems we have not tried rolling back, so I'm not positive it's a Cassandra 
 code problem (as opposed to something else we did/configured).
 Thanks for checking it out!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6482) Add junitreport to ant test target

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6482:


Labels: qa-resolved  (was: )

 Add junitreport to ant test target
 --

 Key: CASSANDRA-6482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6482
 Project: Cassandra
  Issue Type: Improvement
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
  Labels: qa-resolved

 Adding junitreport XML output for the unit tests will allow detailed 
 reporting and historical tracking in Jenkins.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5487) Promote row-level tombstones to index file

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5487:


Labels: qa-resolved  (was: )

 Promote row-level tombstones to index file
 --

 Key: CASSANDRA-5487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5487
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
  Labels: qa-resolved
 Attachments: 5487.txt


 The idea behind promoted indexes (CASSANDRA-2319) was we could skip a seek to 
 the row header by keeping the column index in the index file.  But, we skip 
 writing the row-level tombstone to the index file unless it also has some 
 column data.  So unless we read the tombstone from the data file (where it is 
 guaranteed to exist) we can return incorrect results.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7004) cqlsh_tests test_simple_insert dtest fails on 2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7004:


Labels: qa-resolved  (was: )

 cqlsh_tests test_simple_insert dtest fails on 2.1
 -

 Key: CASSANDRA-7004
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7004
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Ryan McGuire
  Labels: qa-resolved

 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 cqlsh_tests.py:TestCqlsh.test_simple_insert
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 test_simple_insert (cqlsh_tests.TestCqlsh) ... cluster ccm directory: 
 /tmp/dtest-aCgjRr
 (EE) Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
 error(111, 'ECONNREFUSED')})
 ERROR
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/cqlsh_tests.py", line 34, in 
 test_simple_insert
 cursor.execute("select id, value from simple.simple");
   File "/home/mshuler/git/cassandra-dbapi2/cql/cursor.py", line 80, in execute
 response = self.get_response(prepared_q, cl)
   File "/home/mshuler/git/cassandra-dbapi2/cql/thrifteries.py", line 78, in 
 get_response
 return self.handle_cql_execution_errors(doquery, compressed_q, compress, 
 cl)
   File "/home/mshuler/git/cassandra-dbapi2/cql/thrifteries.py", line 100, in 
 handle_cql_execution_errors
 raise cql.ProgrammingError("Bad Request: %s" % ire.why)
 ProgrammingError: Bad Request: Keyspace simple does not exist
 --
 Ran 1 test in 4.278s
 FAILED (errors=1)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5848) Write some CAS related dtests

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5848:


Labels: qa-resolved  (was: )

 Write some CAS related dtests
 -

 Key: CASSANDRA-5848
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5848
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire
  Labels: qa-resolved

 Write some distributed tests using CAS.
 See CASSANDRA-5443 for CQL examples.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4981) Error when starting a node with vnodes while counter-add operations underway

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-4981:


Labels: qa-resolved  (was: )

 Error when starting a node with vnodes while counter-add operations underway
 

 Key: CASSANDRA-4981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4981
 Project: Cassandra
  Issue Type: Bug
 Environment: 2-node cluster on ec2, ubuntu, cassandra-1.2.0 commit 
 a32eb9f7d2f2868e8154d178e96e045859e1d855
Reporter: Tyler Patterson
Assignee: Ryan McGuire
Priority: Minor
  Labels: qa-resolved
 Attachments: system.log


 Start both nodes, start stress on one node like this: cassandra-stress 
 --replication-factor=2 --operation=COUNTER_ADD
 While that is running: On the other node, kill cassandra, wait for nodetool 
 status to show the node as down, and restart cassandra. I sometimes have to 
 kill and restart cassandra several times to get the problem to happen.
 I get this error several times in the log:
 {code}
 ERROR 15:39:33,198 Exception in thread Thread[MutationStage:16,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:748)
   at 
 org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:762)
   at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:95)
   at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2426)
   at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:396)
   at 
 org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:755)
   at 
 org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:53)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5843) SStableReader should enforce through code that no one creates multiple objects of it for same sstable.

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5843:


Labels: qa-resolved  (was: )

 SStableReader should enforce through code that no one creates multiple 
 objects of it for same sstable. 
 ---

 Key: CASSANDRA-5843
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5843
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.12
Reporter: sankalp kohli
Priority: Minor
  Labels: qa-resolved
 Attachments: 5843.txt


 Currently this is not enforced in the code. 
 In the View, we hold these readers in a set, but the SSTableReader equals 
 method is not implemented, which makes the set meaningless. 
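For context, a minimal Java sketch (not the actual SSTableReader code; the Reader class and the path below are made up) of why holding readers in a Set is only meaningful when equality is defined by the underlying sstable:
{code}
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical stand-in for a reader identified by the sstable it wraps.
public class ReaderSetDemo
{
    static final class Reader
    {
        final String path;
        Reader(String path) { this.path = path; }

        // Identity is the underlying sstable path, so a Set can deduplicate
        // two Reader instances opened against the same file.
        @Override
        public boolean equals(Object o)
        {
            return o instanceof Reader && ((Reader) o).path.equals(path);
        }

        @Override
        public int hashCode()
        {
            return Objects.hashCode(path);
        }
    }

    public static void main(String[] args)
    {
        Set<Reader> view = new HashSet<>();
        view.add(new Reader("/data/ks/cf/ks-cf-ja-1-Data.db"));
        view.add(new Reader("/data/ks/cf/ks-cf-ja-1-Data.db"));
        // Prints 1. Delete equals/hashCode above and it prints 2: the set no
        // longer recognizes the two readers as the same sstable.
        System.out.println(view.size());
    }
}
{code}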



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5789) Data not fully replicated with 2 nodes and replication factor 2

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5789:


Labels: qa-resolved  (was: )

 Data not fully replicated with 2 nodes and replication factor 2
 ---

 Key: CASSANDRA-5789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5789
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.2, 1.2.6
 Environment: Official Datastax Cassandra 1.2.6, running on Linux RHEL 
 6.2.  I've seen the same behavior with Cassandra 1.2.2.
 Sun Java 1.7.0_10-b18 64-bit
 Java heap settings: -Xms8192M -Xmx8192M -Xmn2048M
Reporter: James Lee
Assignee: Russ Hatch
  Labels: qa-resolved
 Attachments: 5789.py, CassBugRepro.py, CassTestData.py


 I'm seeing a problem with a 2-node Cassandra test deployment, where it seems 
 that data isn't being replicated among the nodes as I would expect.
 The setup and test is as follows:
 - Two Cassandra nodes in the cluster (they each have themselves and the other 
 node as seeds in cassandra.yaml).
 - Create 40 keyspaces, each with simple replication strategy and 
 replication factor 2.
 - Populate 125,000 rows into each keyspace, using a pycassa client with a 
 connection pool pointed at both nodes.  These are populated with writes using 
 consistency level of 1.
 - Wait until nodetool on each node reports that there are no hinted handoffs 
 outstanding (see output below).
 - Do random reads of the rows in the keyspaces, again using a pycassa client 
 with a connection pool pointed at both nodes.  These are read using 
 consistency level 1.
 I'm finding that the vast majority of reads are successful, but a small 
 proportion (~0.1%) are returned as Not Found.  If I manually try to look up 
 those keys using cassandra-cli, I see that they are returned when querying 
 one of the nodes, but not when querying the other.  So it seems like some of 
 the rows have simply not been replicated, even though the write for these 
 rows was reported to the client as successful.
 If I reduce the rate at which the test tool initially writes data into the 
 database then I don't see any failed reads, so this seems like a load-related 
 issue.  My understanding is that if all writes were successful and there are 
 no pending hinted handoffs, then the data should be fully-replicated and 
 reads should return it (even with read and write consistency of 1).
 Here's the output from nodetool on the two nodes:
 comet-mvs01:/dsc-cassandra-1.2.6# ./bin/nodetool tpstats
 Pool NameActive   Pending  Completed   Blocked  All 
 time blocked
 ReadStage 0 0  2 0
  0
 RequestResponseStage  0 0 878494 0
  0
 MutationStage 0 02869107 0
  0
 ReadRepairStage   0 0  0 0
  0
 ReplicateOnWriteStage 0 0  0 0
  0
 GossipStage   0 0   2208 0
  0
 AntiEntropyStage  0 0  0 0
  0
 MigrationStage0 0994 0
  0
 MemtablePostFlusher   0 0   4399 0
  0
 FlushWriter   0 0   2264 0
556
 MiscStage 0 0  0 0
  0
 commitlog_archiver0 0  0 0
  0
 InternalResponseStage 0 0153 0
  0
 HintedHandoff 0 0  2 0
  0
 Message type   Dropped
 RANGE_SLICE  0
 READ_REPAIR  0
 BINARY   0
 READ 0
 MUTATION 87655
 _TRACE   0
 REQUEST_RESPONSE 0
 comet-mvs02:/dsc-cassandra-1.2.6# ./bin/nodetool tpstats
 Pool NameActive   Pending  Completed   Blocked  All 
 time blocked
 ReadStage 0 0868 0
  0
 RequestResponseStage  0 03919665 0
  0
 MutationStage 0 08177325 0
  0
 ReadRepairStage   0 0113 0
  0
 ReplicateOnWriteStage 0 0  0 0
  0
 GossipStage

[jira] [Updated] (CASSANDRA-6365) Bisect unit test failures on 2.0 branch

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6365:


Labels: qa-resolved  (was: )

 Bisect unit test failures on 2.0 branch
 ---

 Key: CASSANDRA-6365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6365
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Jonathan Ellis
Assignee: Michael Shuler
  Labels: qa-resolved
 Attachments: 2.0.1-utest.txt, C-2.0.1_tag_utests.txt


 Unit tests pass in 2.0.1.
 They do not in 2.0.2.
 Let's find where the failures were introduced.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7008) upgrade_supercolumns_test dtest failing in 2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7008:


Labels: qa-resolved  (was: )

 upgrade_supercolumns_test dtest failing in 2.1
 --

 Key: CASSANDRA-7008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7008
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Ryan McGuire
  Labels: qa-resolved

 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 upgrade_supercolumns_test.py 
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 upgrade_with_index_creation_test (upgrade_supercolumns_test.TestSCUpgrade) 
 ... cluster ccm directory: /tmp/dtest-UWLi7s
 ERROR
 ==
 ERROR: upgrade_with_index_creation_test 
 (upgrade_supercolumns_test.TestSCUpgrade)
 --
 Traceback (most recent call last):
   File "/home/mshuler/git/cassandra-dtest/upgrade_supercolumns_test.py", line 
 37, in upgrade_with_index_creation_test
 node1.start(wait_other_notice=True)
   File "/home/mshuler/git/ccm/ccmlib/node.py", line 427, in start
 raise NodeError("Error starting node %s" % self.name, process)
 NodeError: Error starting node node1
 --
 Ran 1 test in 69.320s
 FAILED (errors=1)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7114) Paging dtests

2014-04-29 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-7114:
---

 Summary: Paging dtests
 Key: CASSANDRA-7114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7114
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Russ Hatch


[Test plan is 
here|https://github.com/riptano/cassandra-test-plans/wiki/Paging-Tests]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7114) Paging dtests

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7114:


Fix Version/s: 2.1 beta1
   Labels: qa-resolved  (was: )

 Paging dtests
 -

 Key: CASSANDRA-7114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7114
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Russ Hatch
  Labels: qa-resolved
 Fix For: 2.1 beta1


 [Test plan is 
 here|https://github.com/riptano/cassandra-test-plans/wiki/Paging-Tests]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: fix compile

2014-04-29 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 212698501 - 1b51bec14


fix compile


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b51bec1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b51bec1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b51bec1

Branch: refs/heads/trunk
Commit: 1b51bec1455265de050fa85f1a2f90c26d28f591
Parents: 2126985
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Apr 29 21:07:09 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Apr 29 21:07:09 2014 -0400

--
 .../unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b51bec1/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java 
b/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java
index 131a755..968fb93 100644
--- a/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java
+++ b/test/unit/org/apache/cassandra/db/ArrayBackedSortedColumnsTest.java
@@ -223,7 +223,7 @@ public class ArrayBackedSortedColumnsTest extends 
SchemaLoader
 int[] values = new int[]{ 1, 2, 3, 5, 9, 15, 21, 22 };
 
 for (int i = 0; i < values.length; ++i)
-map.addColumn(new Cell(type.makeCellName(values[i])));
+map.addColumn(new BufferCell(type.makeCellName(values[i])));
 
 SearchIterator<CellName, Cell> iter = map.searchIterator();
 for (int i = 0 ; i < values.length ; i++)



[jira] [Resolved] (CASSANDRA-7114) Paging dtests

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire resolved CASSANDRA-7114.
-

Resolution: Fixed

 Paging dtests
 -

 Key: CASSANDRA-7114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7114
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Russ Hatch
  Labels: qa-resolved
 Fix For: 2.1 beta1


 [Test plan is 
 here|https://github.com/riptano/cassandra-test-plans/wiki/Paging-Tests]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: fix comment

2014-04-29 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1b51bec14 - e02d5b354


fix comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e02d5b35
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e02d5b35
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e02d5b35

Branch: refs/heads/trunk
Commit: e02d5b354fb25f37580ad1c96453d225104f1a8e
Parents: 1b51bec
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Apr 29 20:07:07 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Apr 29 20:07:07 2014 -0500

--
 test/unit/org/apache/cassandra/db/ScrubTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e02d5b35/test/unit/org/apache/cassandra/db/ScrubTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ScrubTest.java 
b/test/unit/org/apache/cassandra/db/ScrubTest.java
index 220e2a4..7dc7c5c 100644
--- a/test/unit/org/apache/cassandra/db/ScrubTest.java
+++ b/test/unit/org/apache/cassandra/db/ScrubTest.java
@@ -187,7 +187,7 @@ public class ScrubTest extends SchemaLoader
  * The test also assumes an ordered partitioner.
  *
 ColumnFamily cf = 
ArrayBackedSortedColumns.factory.create(cfs.metadata);
-cf.addColumn(new Cell(ByteBufferUtil.bytes("someName"), 
ByteBufferUtil.bytes("someValue"), 0L));
+cf.addColumn(new BufferCell(ByteBufferUtil.bytes("someName"), 
ByteBufferUtil.bytes("someValue"), 0L));
 
 SSTableWriter writer = new SSTableWriter(cfs.getTempSSTablePath(new 
File(System.getProperty("corrupt-sstable-root"))),
  
cfs.metadata.getIndexInterval(),



[jira] [Commented] (CASSANDRA-7114) Paging dtests

2014-04-29 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13985024#comment-13985024
 ] 

Ryan McGuire commented on CASSANDRA-7114:
-

Paging tests are here : 
https://github.com/riptano/cassandra-dtest-jython/blob/master/paging_test.py

Results here: http://cassci.datastax.com/job/cassandra-2.1_paging_dtest/

 Paging dtests
 -

 Key: CASSANDRA-7114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7114
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Russ Hatch
  Labels: qa-resolved
 Fix For: 2.1 beta1


 [Test plan is 
 here|https://github.com/riptano/cassandra-test-plans/wiki/Paging-Tests]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix StorageProxy#syncWriteToBatchlog()

2014-04-29 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a4664328c - 02ffaff69


Fix StorageProxy#syncWriteToBatchlog()


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02ffaff6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02ffaff6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02ffaff6

Branch: refs/heads/cassandra-2.1
Commit: 02ffaff69df86eab4f3f026732466b9eb1578314
Parents: a466432
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 30 03:11:24 2014 +0200
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 30 03:11:24 2014 +0200

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/02ffaff6/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 269d68f..1a41aaa 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -619,14 +619,15 @@ public class StorageProxy implements StorageProxyMBean
 }
 else if (targetVersion == MessagingService.current_version)
 {
-MessagingService.instance().sendRR(message, target, handler);
+MessagingService.instance().sendRR(message, target, handler, 
false);
 }
 else
 {
 
MessagingService.instance().sendRR(BatchlogManager.getBatchlogMutationFor(mutations,
 uuid, targetVersion)
   
.createMessage(),
target,
-   handler);
+   handler,
+   false);
 }
 }
 



[jira] [Commented] (CASSANDRA-7068) AssertionError when running putget_test

2014-04-29 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13985027#comment-13985027
 ] 

Aleksey Yeschenko commented on CASSANDRA-7068:
--

Fixed in 02ffaff69df86eab4f3f026732466b9eb1578314. 2.1 had two extra places 
that needed to be updated relative to 2.0.

 AssertionError when running putget_test
 ---

 Key: CASSANDRA-7068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7068
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Aleksey Yeschenko

 running the putget_test like so:
 {code}
 nosetests2 -x -s -v putget_test.py:TestPutGet.non_local_read_test
 {code}
 Yields this error in the logs on cassandra-2.0:
 {code}
 ERROR [Thrift:1] 2014-04-22 14:25:37,584 CassandraDaemon.java (line 198) 
 Exception in thread Thread[Thrift:1,5,main]
 java.lang.AssertionError
 at 
 org.apache.cassandra.net.MessagingService.addCallback(MessagingService.java:542)
 at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:595)
 at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:579)
 at 
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:817)
 at 
 org.apache.cassandra.service.StorageProxy$2.apply(StorageProxy.java:119)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:693)
 at 
 org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:465)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:535)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:542)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:526)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1959)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 On cassandra-2.1 I don't get any errors in the logs, but the test doesn't run; 
 instead I get a 'TSocket read 0 bytes' error. 
 The test on 1.2 is fine.
 After bisecting, it appears that a common commit 
 3a73e392fa424bff5378d4bb72117cfa28f9b0b7 is the cause.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-29 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d30b7451
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d30b7451
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d30b7451

Branch: refs/heads/trunk
Commit: d30b745154da9daf0fd91dfc758a6b39e69f4a16
Parents: e02d5b3 02ffaff
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 30 03:14:11 2014 +0200
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 30 03:14:11 2014 +0200

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--




[1/2] git commit: Fix StorageProxy#syncWriteToBatchlog()

2014-04-29 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e02d5b354 - d30b74515


Fix StorageProxy#syncWriteToBatchlog()


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02ffaff6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02ffaff6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02ffaff6

Branch: refs/heads/trunk
Commit: 02ffaff69df86eab4f3f026732466b9eb1578314
Parents: a466432
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 30 03:11:24 2014 +0200
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 30 03:11:24 2014 +0200

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/02ffaff6/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 269d68f..1a41aaa 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -619,14 +619,15 @@ public class StorageProxy implements StorageProxyMBean
 }
 else if (targetVersion == MessagingService.current_version)
 {
-MessagingService.instance().sendRR(message, target, handler);
+MessagingService.instance().sendRR(message, target, handler, 
false);
 }
 else
 {
 
MessagingService.instance().sendRR(BatchlogManager.getBatchlogMutationFor(mutations,
 uuid, targetVersion)
   
.createMessage(),
target,
-   handler);
+   handler,
+   false);
 }
 }
 



[jira] [Resolved] (CASSANDRA-7068) AssertionError when running putget_test

2014-04-29 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-7068.
--

Resolution: Fixed

 AssertionError when running putget_test
 ---

 Key: CASSANDRA-7068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7068
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Aleksey Yeschenko

 running the putget_test like so:
 {code}
 nosetests2 -x -s -v putget_test.py:TestPutGet.non_local_read_test
 {code}
 Yields this error in the logs on cassandra-2.0:
 {code}
 ERROR [Thrift:1] 2014-04-22 14:25:37,584 CassandraDaemon.java (line 198) 
 Exception in thread Thread[Thrift:1,5,main]
 java.lang.AssertionError
 at 
 org.apache.cassandra.net.MessagingService.addCallback(MessagingService.java:542)
 at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:595)
 at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:579)
 at 
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:817)
 at 
 org.apache.cassandra.service.StorageProxy$2.apply(StorageProxy.java:119)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:693)
 at 
 org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:465)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:535)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:542)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:526)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1959)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 On cassandra-2.1 I don't get any errors in the logs, but the test doesn't run; 
 instead I get a 'TSocket read 0 bytes' error. 
 The test on 1.2 is fine.
 After bisecting, it appears that a common commit 
 3a73e392fa424bff5378d4bb72117cfa28f9b0b7 is the cause.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6714) Fix replaying old (1.2) commitlog in Cassandra 2.0

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6714:


Labels: qa-resolved  (was: )

 Fix replaying old (1.2) commitlog in Cassandra 2.0
 --

 Key: CASSANDRA-6714
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6714
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Old node: Cassandra 1.2.15 (DSE)
 New node: Cassandra 2.0.5.1 (DSE)
Reporter: Piotr Kołaczkowski
Assignee: Aleksey Yeschenko
  Labels: qa-resolved
 Fix For: 2.0.6, 2.1 beta1

 Attachments: 6714.txt


 Our docs, and code, both explicitly say that you should drain a node before 
 upgrading to a new major release.
 If you don't do what the docs explicitly tell you to do, however, Cassandra 
 won't scream at you. Also, we *do* currently have logic to replay 1.2 
 commitlog in 2.0, but it seems to be slightly broken, unfortunately.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5549) Remove Table.switchLock

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-5549:


Labels: qa-resolved  (was: performance)

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 beta1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).
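For readers without the CASSANDRA-5422 background, here is a rough sketch of the pattern the ticket wants to remove; it is a simplification, not Cassandra's actual code, and the memtable stand-in is just a concurrent queue:
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Writers share the "read" side of the lock so they can run concurrently;
// a memtable switch takes the "write" side for exclusive access. Even with
// no switch in flight, every single write pays the lock acquire/release cost.
public class SwitchLockSketch
{
    private final ReentrantReadWriteLock switchLock = new ReentrantReadWriteLock();
    private volatile Queue<String> memtable = new ConcurrentLinkedQueue<>();

    public void write(String value)
    {
        switchLock.readLock().lock();   // shared: many writers at once
        try
        {
            memtable.add(value);        // the "memtable" itself is thread-safe
        }
        finally
        {
            switchLock.readLock().unlock();
        }
    }

    public Queue<String> switchMemtable()
    {
        switchLock.writeLock().lock();  // exclusive: briefly blocks all writers
        try
        {
            Queue<String> old = memtable;
            memtable = new ConcurrentLinkedQueue<>();
            return old;                 // old contents can now be flushed
        }
        finally
        {
            switchLock.writeLock().unlock();
        }
    }
}
{code}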



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6530) Fix logback configuration in scripts and debian packaging for trunk/2.1

2014-04-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6530:


Labels: qa-resolved  (was: )

 Fix logback configuration in scripts and debian packaging for trunk/2.1
 ---

 Key: CASSANDRA-6530
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6530
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1 beta1

 Attachments: logback_configurations_final.patch, 
 logback_configurations_final2.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-7036) counter_tests.py:TestCounters.upgrade_test dtest hangs in 2.0 and 2.1

2014-04-29 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-7036:


Assignee: Aleksey Yeschenko

 counter_tests.py:TestCounters.upgrade_test dtest hangs in 2.0 and 2.1
 -

 Key: CASSANDRA-7036
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7036
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Aleksey Yeschenko
 Fix For: 2.1 beta2






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6861) Optimise our Netty 4 integration

2014-04-29 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13985064#comment-13985064
 ] 

T Jake Luciani commented on CASSANDRA-6861:
---

From just a cursory check, I see the following read performance:

  - mode thrift: 23530
  - mode thrift cql3 prepared: 21169
  - mode native cql3 prepared: *14142*

 Optimise our Netty 4 integration
 

 Key: CASSANDRA-6861
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6861
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 2.1 beta2


 Now that we've upgraded to Netty 4, we're generating a lot of garbage that could 
 be avoided, so we should probably stop that. It should be reasonably easy to 
 hook into Netty's pooled buffers, returning them to the pool once a given 
 message is completed.
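As a hedged illustration of the idea (assuming Netty 4 on the classpath; this is not the actual integration work tracked here), the allocate-from-pool-and-release pattern looks roughly like this:
{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

// Take a buffer from Netty's pooled allocator and release() it once the
// message it backs is done, so the memory returns to the pool instead of
// becoming garbage for the collector.
public class PooledBufferSketch
{
    public static void main(String[] args)
    {
        ByteBuf frame = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        try
        {
            frame.writeBytes("response bytes".getBytes());
            // ... use the buffer (in a real server the pipeline would write it out) ...
        }
        finally
        {
            frame.release(); // refcount drops to 0 -> buffer goes back to the pool
        }
    }
}
{code}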



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7115) Partitioned Column Family (Table) based on Column Keys (Sorta TTLed Table)

2014-04-29 Thread Haebin Na (JIRA)
Haebin Na created CASSANDRA-7115:


 Summary: Partitioned Column Family (Table) based on Column Keys 
(Sorta TTLed Table)
 Key: CASSANDRA-7115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7115
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Haebin Na
Priority: Minor


We need a better solution to expire columns than TTLed columns.

If you set TTL 6 months for a column in a frequently updated(deleted, yes, this 
is anti-pattern) wide row, it is not likely to be deleted since the row would 
be highly fragmented.

In order to solve the problem above, I suggest partitioning column family 
(table) with column key (column1) as partition key.

It is like a set of column families (tables) which share the same structure and 
cover certain range of columns per CF. This means that a row is 
deterministically fragmented by column key.

If you use timestamp like column key, then you would be able to truncate 
specific partition (a sub-table or CF with specific range) if it is older than 
certain age easily without worrying about zombie tombstones. 

It is not optimal to have many column families, yet even with small set like by 
biyearly or quarterly, we could achieve whole lot more efficient than TTLed 
columns.

What do you think?
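A hedged illustration of the deterministic fragmentation being proposed (nothing below is an existing Cassandra feature; the quarterly bucket is the kind of value applications already compute client-side as an extra partition component today):
{code}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

// Every timestamp in the same quarter maps to the same bucket, so all columns
// for that quarter land together and a whole bucket older than, say, six
// months can be dropped in one operation instead of waiting for per-column
// TTL tombstones to compact away.
public class TimeBucket
{
    /** Maps an epoch-millisecond timestamp to a label such as "2014Q2". */
    static String quarterBucket(long epochMillis)
    {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC);
        int quarter = (t.getMonthValue() - 1) / 3 + 1;
        return t.getYear() + "Q" + quarter;
    }

    public static void main(String[] args)
    {
        System.out.println(quarterBucket(System.currentTimeMillis()));
    }
}
{code}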






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7115) Column Family (Table) partitioning with column keys as partition keys (Sorta TTLed Table)

2014-04-29 Thread Haebin Na (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haebin Na updated CASSANDRA-7115:
-

Description: 
We need a better solution to expire columns than TTLed columns.

If you set TTL 6 months for a column in a frequently updated(deleted, yes, this 
is anti-pattern) wide row, it is not likely to be deleted since the row would 
be highly fragmented.

In order to solve the problem above, I suggest partitioning column family 
(table) with column key (column1) as partition key.

It is like a set of column families (tables) which share the same structure and 
cover certain range of columns per CF. This means that a row is 
deterministically fragmented by column key.

If you use timestamp like column key, then you would be able to truncate 
specific partition (a sub-table or CF with specific range) if it is older than 
certain age easily without worrying about zombie tombstones. 

It is not optimal to have many column families, yet even with small set like by 
biyearly or quarterly, it could be whole lot more efficient than TTLed columns.

What do you think?




  was:
We need a better solution to expire columns than TTLed columns.

If you set TTL 6 months for a column in a frequently updated(deleted, yes, this 
is anti-pattern) wide row, it is not likely to be deleted since the row would 
be highly fragmented.

In order to solve the problem above, I suggest partitioning column family 
(table) with column key (column1) as partition key.

It is like a set of column families (tables) which share the same structure and 
cover certain range of columns per CF. This means that a row is 
deterministically fragmented by column key.

If you use timestamp like column key, then you would be able to truncate 
specific partition (a sub-table or CF with specific range) if it is older than 
certain age easily without worrying about zombie tombstones. 

It is not optimal to have many column families, yet even with small set like by 
biyearly or quarterly, we could achieve whole lot more efficient than TTLed 
columns.

What do you think?




Summary: Column Family (Table) partitioning with column keys as 
partition keys (Sorta TTLed Table)  (was: Partitioned Column Family (Table) 
based on Column Keys (Sorta TTLed Table))

 Column Family (Table) partitioning with column keys as partition keys (Sorta 
 TTLed Table)
 -

 Key: CASSANDRA-7115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7115
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Haebin Na
Priority: Minor
  Labels: features

 We need a better solution to expire columns than TTLed columns.
 If you set TTL 6 months for a column in a frequently updated(deleted, yes, 
 this is anti-pattern) wide row, it is not likely to be deleted since the row 
 would be highly fragmented.
 In order to solve the problem above, I suggest partitioning column family 
 (table) with column key (column1) as partition key.
 It is like a set of column families (tables) which share the same structure 
 and cover certain range of columns per CF. This means that a row is 
 deterministically fragmented by column key.
 If you use timestamp like column key, then you would be able to truncate 
 specific partition (a sub-table or CF with specific range) if it is older 
 than certain age easily without worrying about zombie tombstones. 
 It is not optimal to have many column families, yet even with small set like 
 by biyearly or quarterly, it could be whole lot more efficient than TTLed 
 columns.
 What do you think?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: fix equals to compare this to the parm, rather than this to this

2014-04-29 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 02ffaff69 - 6b5b7f519


fix equals to compare this to the parm, rather than this to this


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b5b7f51
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b5b7f51
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b5b7f51

Branch: refs/heads/cassandra-2.1
Commit: 6b5b7f5193c63d19fe644c708efe100f6f520fa1
Parents: 02ffaff
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Apr 29 23:04:20 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Apr 29 23:04:48 2014 -0400

--
 src/java/org/apache/cassandra/db/NativeCounterCell.java  | 2 +-
 src/java/org/apache/cassandra/db/NativeExpiringCell.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b5b7f51/src/java/org/apache/cassandra/db/NativeCounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/NativeCounterCell.java 
b/src/java/org/apache/cassandra/db/NativeCounterCell.java
index abcf598..2828e13 100644
--- a/src/java/org/apache/cassandra/db/NativeCounterCell.java
+++ b/src/java/org/apache/cassandra/db/NativeCounterCell.java
@@ -180,7 +180,7 @@ public class NativeCounterCell extends NativeCell 
implements CounterCell
 
 public boolean equals(Cell cell)
 {
-return cell instanceof CounterCell && equals((CounterCell) this);
+return cell instanceof CounterCell && equals((CounterCell) cell);
 }
 
 public boolean equals(CounterCell cell)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b5b7f51/src/java/org/apache/cassandra/db/NativeExpiringCell.java
--
diff --git a/src/java/org/apache/cassandra/db/NativeExpiringCell.java 
b/src/java/org/apache/cassandra/db/NativeExpiringCell.java
index 5ac0e81..0822fbd 100644
--- a/src/java/org/apache/cassandra/db/NativeExpiringCell.java
+++ b/src/java/org/apache/cassandra/db/NativeExpiringCell.java
@@ -130,7 +130,7 @@ public class NativeExpiringCell extends NativeCell 
implements ExpiringCell
 
 public boolean equals(Cell cell)
 {
-return cell instanceof ExpiringCell && equals((ExpiringCell) this);
+return cell instanceof ExpiringCell && equals((ExpiringCell) cell);
 }
 
 protected boolean equals(ExpiringCell cell)
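
The one-character diff is easy to misread, so here is a tiny self-contained demo (not Cassandra code) of the failure mode: casting and passing this instead of the parameter makes the comparison always succeed for any argument of the right runtime type.
{code}
public class SelfEqualsTrap
{
    final int value;
    SelfEqualsTrap(int value) { this.value = value; }

    // Mirrors the pre-fix code: compares this to this, so it returns true
    // for ANY argument that passes the instanceof check.
    boolean buggyEquals(Object o)
    {
        return o instanceof SelfEqualsTrap && sameValue((SelfEqualsTrap) this);
    }

    // Mirrors the fix: compares this to the actual parameter.
    boolean fixedEquals(Object o)
    {
        return o instanceof SelfEqualsTrap && sameValue((SelfEqualsTrap) o);
    }

    boolean sameValue(SelfEqualsTrap other)
    {
        return other.value == value;
    }

    public static void main(String[] args)
    {
        SelfEqualsTrap a = new SelfEqualsTrap(1), b = new SelfEqualsTrap(2);
        System.out.println(a.buggyEquals(b)); // true  -- wrong, 1 != 2
        System.out.println(a.fixedEquals(b)); // false -- correct
    }
}
{code}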



[1/2] git commit: fix equals to compare this to the parm, rather than this to this

2014-04-29 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk d30b74515 - c06ba25a5


fix equals to compare this to the parm, rather than this to this


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b5b7f51
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b5b7f51
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b5b7f51

Branch: refs/heads/trunk
Commit: 6b5b7f5193c63d19fe644c708efe100f6f520fa1
Parents: 02ffaff
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Apr 29 23:04:20 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Apr 29 23:04:48 2014 -0400

--
 src/java/org/apache/cassandra/db/NativeCounterCell.java  | 2 +-
 src/java/org/apache/cassandra/db/NativeExpiringCell.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b5b7f51/src/java/org/apache/cassandra/db/NativeCounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/NativeCounterCell.java 
b/src/java/org/apache/cassandra/db/NativeCounterCell.java
index abcf598..2828e13 100644
--- a/src/java/org/apache/cassandra/db/NativeCounterCell.java
+++ b/src/java/org/apache/cassandra/db/NativeCounterCell.java
@@ -180,7 +180,7 @@ public class NativeCounterCell extends NativeCell 
implements CounterCell
 
 public boolean equals(Cell cell)
 {
-return cell instanceof CounterCell && equals((CounterCell) this);
+return cell instanceof CounterCell && equals((CounterCell) cell);
 }
 
 public boolean equals(CounterCell cell)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b5b7f51/src/java/org/apache/cassandra/db/NativeExpiringCell.java
--
diff --git a/src/java/org/apache/cassandra/db/NativeExpiringCell.java 
b/src/java/org/apache/cassandra/db/NativeExpiringCell.java
index 5ac0e81..0822fbd 100644
--- a/src/java/org/apache/cassandra/db/NativeExpiringCell.java
+++ b/src/java/org/apache/cassandra/db/NativeExpiringCell.java
@@ -130,7 +130,7 @@ public class NativeExpiringCell extends NativeCell 
implements ExpiringCell
 
 public boolean equals(Cell cell)
 {
-return cell instanceof ExpiringCell && equals((ExpiringCell) this);
+return cell instanceof ExpiringCell && equals((ExpiringCell) cell);
 }
 
 protected boolean equals(ExpiringCell cell)



[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-29 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c06ba25a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c06ba25a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c06ba25a

Branch: refs/heads/trunk
Commit: c06ba25a51256324c68e958fec3298fc2f58983b
Parents: d30b745 6b5b7f5
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Apr 29 23:05:35 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Apr 29 23:05:35 2014 -0400

--
 src/java/org/apache/cassandra/db/NativeCounterCell.java  | 2 +-
 src/java/org/apache/cassandra/db/NativeExpiringCell.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--




[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-29 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Attachment: (was: cassandra-1.2-6831.patch)

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17, 2.0.8, 2.1 beta2

 Attachments: 6831-2.0-v2.txt


 If a COMPACT STORAGE table is altered using the CLI all information about the 
 column names reverts to the initial key, column1, column2 namings.  
 Additionally, the changes in the columns name will not take effect until the 
 Cassandra service is restarted.  This means that the clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE an error stating 
 the column already exists will be raised.  The only way to get it back is to 
 ALTER TABLE and change the comment or something, which will bring back all 
 the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-29 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Attachment: (was: cassandra-2.0-6831.patch)

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17, 2.0.8, 2.1 beta2

 Attachments: 6831-2.0-v2.txt


 If a COMPACT STORAGE table is altered using the CLI all information about the 
 column names reverts to the initial key, column1, column2 namings.  
 Additionally, the changes in the columns name will not take effect until the 
 Cassandra service is restarted.  This means that the clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE an error stating 
 the column already exists will be raised.  The only way to get it back is to 
 ALTER TABLE and change the comment or something, which will bring back all 
 the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-29 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Attachment: 6831-1.2.patch
6831-2.1.patch

Attaching the new patch for 1.2 (adjusted as per the comments), and a patch for 2.1 
based on Sylvain's patch.
I give up trying to commit those changes; I don't have enough git powers to do 
that.

I'm able to apply them and merge the branches locally, but {{git pull}} becomes 
a nightmare after that. I don't have the guts to push them into the wild.

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17, 2.0.8, 2.1 beta2

 Attachments: 6831-1.2.patch, 6831-2.0-v2.txt, 6831-2.1.patch


 If a COMPACT STORAGE table is altered using the CLI all information about the 
 column names reverts to the initial key, column1, column2 namings.  
 Additionally, the changes in the columns name will not take effect until the 
 Cassandra service is restarted.  This means that the clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE an error stating 
 the column already exists will be raised.  The only way to get it back is to 
 ALTER TABLE and change the comment or something, which will bring back all 
 the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6559) cqlsh should warn about ALLOW FILTERING

2014-04-29 Thread Matt Stump (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13985165#comment-13985165
 ] 

Matt Stump commented on CASSANDRA-6559:
---

Allow filtering should be disabled by default and only allowed after the user 
explicitly enables the behavior in cassandra.yaml. It's a major source of 
escalations: 3 customers in the past 2 weeks. People either aren't heeding or 
reading the docs/warnings, or think it doesn't apply to them. 

 cqlsh should warn about ALLOW FILTERING
 ---

 Key: CASSANDRA-6559
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6559
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tupshin Harper
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.8


 ALLOW FILTERING can be a convenience for preliminary exploration of your 
 data, and can be useful for batch jobs, but it is such an anti-pattern for 
 regular production queries that cqlsh should provide an explicit warning 
 whenever such a query is performed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


<    1   2