[jira] [Comment Edited] (CASSANDRA-12415) dtest failure in replace_address_test.TestReplaceAddress.replace_stopped_node_test

2016-08-10 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415434#comment-15415434
 ] 

Jim Witschey edited comment on CASSANDRA-12415 at 8/10/16 3:17 PM:
---

+1 to moving to larger instances. Filed 
https://github.com/riptano/cassandra-dtest/pull/1204


was (Author: mambocab):
+1 to moving to larger instances. 

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_stopped_node_test
> --
>
> Key: CASSANDRA-12415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12415
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/31/testReport/replace_address_test/TestReplaceAddress/replace_stopped_node_test
> Node4 just doesn't start up in time. Maybe bump up the timeout, or move it to 
> a large test?





[jira] [Commented] (CASSANDRA-12415) dtest failure in replace_address_test.TestReplaceAddress.replace_stopped_node_test

2016-08-10 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415434#comment-15415434
 ] 

Jim Witschey commented on CASSANDRA-12415:
--

+1 to moving to larger instances. 

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_stopped_node_test
> --
>
> Key: CASSANDRA-12415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12415
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/31/testReport/replace_address_test/TestReplaceAddress/replace_stopped_node_test
> Node4 just doesn't start up in time. Maybe bump up the timeout, or move it to 
> a large test?





[jira] [Assigned] (CASSANDRA-12424) Assertion failure in ViewUpdateGenerator

2016-08-10 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian reassigned CASSANDRA-12424:
--

Assignee: Carl Yeksigian

> Assertion failure in ViewUpdateGenerator
> 
>
> Key: CASSANDRA-12424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12424
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Keith Wansbrough
>Assignee: Carl Yeksigian
> Attachments: cassandra.log
>
>
> Using released apache-cassandra-3.7.0, we have managed to get a node into a 
> state where it won't start up. The exception is {{java.lang.AssertionError: 
> We shouldn't have got there is the base row had no associated entry}} and it 
> appears in 
> ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455).
> I still have the offending node; what diags/data would be useful for 
> diagnosis? I've attached the full cassandra.log. In summary, cassandra.log 
> contains multiple instances of the following when replaying the commit log on 
> startup, leading ultimately to failure to start up.
> {code}
> ERROR 15:24:17 Unknown exception caught while attempting to update 
> MaterializedView! edison.scs_subscriber
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  15:24:17 Uncaught exception on thread 
> Thread[SharedPool-Worker-4,5,main]: {}
> {code}
> and ultimately 
> {code}
> ERROR 15:24:18 Exception encountered during startup
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> {code}





[jira] [Created] (CASSANDRA-12430) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_token_boundaries

2016-08-10 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12430:


 Summary: dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_token_boundaries
 Key: CASSANDRA-12430
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12430
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.0_dtest/787/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_writing_with_token_boundaries

Failed on CassCI build cassandra-3.0_dtest build #787

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
1022, in test_writing_with_token_boundaries
self._test_writing_with_token_boundaries(1, None, 200)
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
1059, in _test_writing_with_token_boundaries
self.assertItemsEqual(csv_values, result)
  File "/usr/lib/python2.7/unittest/case.py", line 901, in assertItemsEqual
self.fail(msg)
  File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
raise self.failureException(msg)
"Element counts were not equal:\nFirst has 0, Second has 1:  ('130', 
-4364617663693876050L)\nFirst has 0, Second has 1:  ('1504', 
-4313346088993828066L)\nFirst has 0, Second has 1:  ('1657', 
-4298243044528711865L)\nFirst has 0, Second has 1:  ('1908', 
-4357762565998238890L)\nFirst has 0, Second has 1:  ('196', 
-4311842292754600676L)\nFirst has 0, Second has 1:  ('2069', 
-4364398944370882217L)\nFirst has 0, Second has 1:  ('2840', 
-4341639477649832153L)\nFirst has 0, Second has 1:  ('2887', 
-4318016824479819783L)\nFirst has 0, Second has 1:  ('2899', 
-4302748366908469185L)\nFirst has 0, Second has 1:  ('2928', 
-4320094196758787736L)\nFirst has 0, Second has 1:  ('2985', 
-4314356124534988584L)\nFirst has 0, Second has 1:  ('3684', 
-4338074463992249966L)\nFirst has 0, Second has 1:  ('371', 
-4314424123257001171L)\nFirst has 0, Second has 1:  ('3726', 
-4327342039280507889L)\nFirst has 0, Second has 1:  ('3767', 
-4314615789624913427L)\nFirst has 0, Second has 1:  ('3837', 
-4345782419910891107L)\nFirst has 0, Second has 1:  ('3917', 
-4288469607605675346L)\nFirst has 0, Second has 1:  ('4023', 
-4327319429102869913L)\nFirst has 0, Second has 1:  ('4340', 
-4364719196309290555L)\nFirst has 0, Second has 1:  ('4775', 
-4334399295585005795L)\nFirst has 0, Second has 1:  ('480', 
-4297721626756162038L)\nFirst has 0, Second has 1:  ('4927', 
-4363012199808638126L)\nFirst has 0, Second has 1:  ('5227', 
-4322405738833807588L)\nFirst has 0, Second has 1:  ('564', 
-4294201317243228473L)\nFirst has 0, Second has 1:  ('585', 
-435900129350319L)\nFirst has 0, Second has 1:  ('5869', 
-4350305245827564608L)\nFirst has 0, Second has 1:  ('6907', 
-4350623491924194304L)\nFirst has 0, Second has 1:  ('709', 
-4304008865600291097L)\nFirst has 0, Second has 1:  ('7415', 
-4315752378065264743L)\nFirst has 0, Second has 1:  ('7476', 
-4300546270541034340L)\nFirst has 0, Second has 1:  ('7805', 
-4344641724309508742L)\nFirst has 0, Second has 1:  ('7922', 
-4363605089028496367L)\nFirst has 0, Second has 1:  ('8026', 
-4319008002233878821L)\nFirst has 0, Second has 1:  ('8180', 
-4361912691055780971L)\nFirst has 0, Second has 1:  ('8371', 
-4309172311179179912L)\nFirst has 0, Second has 1:  ('8988', 
-432609343783541L)\nFirst has 0, Second has 1:  ('9492', 
-4347264403260361686L)\nFirst has 0, Second has 1:  ('9783', 
-4329297319597600121L)\nFirst has 0, Second has 1:  ('9911', 
-4320490295580904236L)\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/tmp/dtest-2v8O54\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\ndtest: DEBUG: Exporting to csv file: /tmp/tmpDS47Vq\ndtest: DEBUG: 
Exporting to csv file: /tmp/tmpQvnl6e\ndtest: DEBUG: Exporting to csv file: 
/tmp/tmpfEEiAz\ndtest: DEBUG: Exporting to csv file: /tmp/tmpc7T8av\ndtest: 
DEBUG: Exporting to csv file: /tmp/tmplxBTNa\ndtest: DEBUG: Exporting to csv 
file: /tmp/tmpb5VYGq\n- >> end captured logging << 
-"
{code}





[jira] [Updated] (CASSANDRA-12246) Cassandra v2.2 to v3.0.9 upgrade failed

2016-08-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Lété updated CASSANDRA-12246:

Attachment: schema_keyspaces.db
schema_columns.db
schema_columnfamilies.db

These are the snapshots.

> Cassandra v2.2 to v3.0.9 upgrade failed
> ---
>
> Key: CASSANDRA-12246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12246
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04, DSC 2.2 > DSC 3.0
>Reporter: Thomas Lété
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.9
>
> Attachments: describe.txt, schema_columnfamilies.db, 
> schema_columnfamilies.txt, schema_columns.db, schema_keyspaces.db, 
> system.schema_columns.txt, system.schema_keyspaces.txt
>
>
> Hi,
> I'm trying to upgrade our Cassandra cluster, which was created using 
> OpsCenter.
> Now that OpsCenter has dropped support for DataStax Community, we upgraded 
> manually.
> Unfortunately, the Schema Migrator seems to hang somewhere during startup.
> Here is the log I get:
> {code:title=debug.log}
> INFO  [main] 2016-07-20 15:34:49,381 SystemKeyspace.java:1283 - Detected 
> version upgrade from 2.2.7 to 3.0.8, snapshotting system keyspace
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,383 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> compaction_history
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,389 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> hints
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,389 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_aggregates
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,392 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> IndexInfo
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,393 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_columnfamilies
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,395 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_triggers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,398 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> size_estimates
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,401 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_functions
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,403 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> paxos
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,404 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> views_builds_in_progress
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,404 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> built_views
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,405 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> peer_events
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,405 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> range_xfers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,406 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> peers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,408 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> batches
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,408 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_keyspaces
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,410 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_usertypes
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,413 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> local
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,415 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> sstable_activity
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> available_ranges
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> batchlog
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_columns
> WARN  [main] 2016-07-20 15:34:49,634 CompressionParams.java:382 - The 
> sstable_compression option has been deprecated. You should use class instead
> WARN  [main] 2016-07-20 15:34:49,654 CompressionParams.java:333 - The 
> chunk_length_kb option has been deprecated. You should use chunk_length_in_kb 
> instead

[jira] [Resolved] (CASSANDRA-12196) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_1_x_To_indev_3_x.bootstrap_test

2016-08-10 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12196.
-
Resolution: Fixed

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_1_x_To_indev_3_x.bootstrap_test
> --
>
> Key: CASSANDRA-12196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12196
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_1_x_To_indev_3_x/bootstrap_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 707, in bootstrap_test
> self.upgrade_scenario(after_upgrade_call=(self._bootstrap_new_node,))
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 383, in upgrade_scenario
> call()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 688, in _bootstrap_new_node
> nnode.start(use_jna=True, wait_other_notice=True, 
> wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 634, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 481, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 449, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "13 Jul 2016 02:23:05 [node2] Missing: ['127.0.0.4.* now UP']:\nINFO  
> [HANDSHAKE-/127.0.0.4] 2016-07-13 02:21:00,2.\nSee system.log for 
> remainder
> {code}





[jira] [Created] (CASSANDRA-12428) dtest failure in topology_test.TestTopology.simple_decommission_test

2016-08-10 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12428:
-

 Summary: dtest failure in 
topology_test.TestTopology.simple_decommission_test
 Key: CASSANDRA-12428
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12428
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node2.log, node3.log

example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest/499/testReport/topology_test/TestTopology/simple_decommission_test

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 358, in run
self.tearDown()
  File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
raise AssertionError('Unexpected error in log, see stdout')
"Unexpected error in log, see stdout
{code}

{code}
Standard Output

Unexpected error in node2 log, error: 
ERROR [OptionalTasks:1] 2016-08-09 22:19:17,578 CassandraDaemon.java:231 - 
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.AssertionError: -1798176113661253264 not found in 
-9176030984652505006, -871714249145979, -8567082690920363685, 
-7728355195270516929, -7671560790707332672, -6815296744215479977, 
-6611548514765694876, -6137228431100324821, -5871381962314776798, 
-5709026171638534111, -5696874364498510312, -4663855838820854356, 
-3304329091857535864, -3251864206536309230, -3188788124715894197, 
-2549476409976316844, -2423479156112489442, -2389574204458609132, 
-2160965082438649456, -2046105283339446875, -1622678693166245335, 
-1421783322562475411, -503110248141412377, -256005860529123222, 
-229477804731423425, -144610334523764289, -64851179421923626, 
127314057436704028, 313816817127566322, 376139846959091135, 561504311435506912, 
858207556605072954, 1261151151588160011, 1454126256475083217, 
1618377671275204279, 2317929712453820894, 2560612758275508783, 
2587728682790085050, 2848178890309615427, 2885660694771463522, 
3140716395155672330, 3178980457497133951, 3591038406660159757, 
3766734787881223437, 3769457468208792646, 3824534990286253644, 
5183723622628782738, 5314317607985127226, 584580052753930, 
6235156095343170404, 6242029497543352525, 6281404742986921776, 
6589819833145109726, 6821551756387826137, 6889949766088620327, 
7754073703959464783, 7756209389182352710, 7952201212324370303, 
8053856175511744133, 8081402847785658462, 8227459864244671435, 
8350507973899452057, 8826283221671184683, 912045907067355
at 
org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:717)
 ~[main/:na]
at 
org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:661)
 ~[main/:na]
at 
org.apache.cassandra.db.SizeEstimatesRecorder.run(SizeEstimatesRecorder.java:69)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_80]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
[na:1.7.0_80]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
 [na:1.7.0_80]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_80]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
{code}





[jira] [Created] (CASSANDRA-12429) dtest failure in compaction_test.TestCompaction_with_DateTieredCompactionStrategy.bloomfilter_size_test

2016-08-10 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12429:


 Summary: dtest failure in 
compaction_test.TestCompaction_with_DateTieredCompactionStrategy.bloomfilter_size_test
 Key: CASSANDRA-12429
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12429
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest/669/testReport/compaction_test/TestCompaction_with_DateTieredCompactionStrategy/bloomfilter_size_test

Failed on CassCI build cassandra-2.2_dtest build #669
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/compaction_test.py", line 147, in 
bloomfilter_size_test
self.assertLessEqual(bfSize, size_factor * max_bf_size)
  File "/usr/lib/python2.7/unittest/case.py", line 936, in assertLessEqual
self.fail(self._formatMessage(msg, standardMsg))
  File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
raise self.failureException(msg)
"125456 not less than or equal to 0\n >> begin captured 
logging << \ndtest: DEBUG: cluster ccm directory: 
/tmp/dtest-XJ96MC\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'start_rpc': 'true'}\ndtest: DEBUG: bloom filter size is: 
125456\ndtest: DEBUG: size factor = 0\n- >> end captured 
logging << -"
{code}





[jira] [Updated] (CASSANDRA-12427) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-08-10 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12427:

Assignee: Sean McCarthy  (was: DS Test Eng)

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-12427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12427
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/669/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.2_dtest build #669
> {code}
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 319, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 96, in 
> _start_cluster
> cluster.start(wait_for_binary_proto=True, wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 414, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-an_vc5\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {code}





[jira] [Created] (CASSANDRA-12427) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-08-10 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12427:


 Summary: dtest failure in 
consistency_test.TestAvailability.test_network_topology_strategy
 Key: CASSANDRA-12427
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12427
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest/669/testReport/consistency_test/TestAvailability/test_network_topology_strategy

Failed on CassCI build cassandra-2.2_dtest build #669
{code}

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/consistency_test.py", line 319, in 
test_network_topology_strategy
self._start_cluster()
  File "/home/automaton/cassandra-dtest/consistency_test.py", line 96, in 
_start_cluster
cluster.start(wait_for_binary_proto=True, wait_other_notice=True)
  File "/home/automaton/ccm/ccmlib/cluster.py", line 414, in start
raise NodeError("Error starting {0}.".format(node.name), p)
"Error starting node9.\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/tmp/dtest-an_vc5\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\n- >> end captured logging << -"
{code}





[jira] [Commented] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-10 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415273#comment-15415273
 ] 

ZhaoYang commented on CASSANDRA-12420:
--

[~thobbs] Hi, I understand that in order to fix this bug we need to change 
QueryState, and that may be a breaking change. But this bug may cause an 
application-server OOM, so we would like it to be fixed in 2.1.x.

dtest: https://github.com/riptano/cassandra-dtest/pull/1199 

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.16
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the correct 
> number of rows.
> The table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we deduplicate the keys on the client side, but it's better to fix 
> this inside Cassandra.
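
For reference, a minimal, self-contained sketch of the reproduction above, 
assuming the DataStax Java driver 2.1.x from the Environment field and an 
existing ks.cf table with that layout (the contact point and class name are 
illustrative, not from the ticket):

{code}
import java.util.Arrays;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

public class DuplicateInKeyRepro
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build())
        {
            Session session = cluster.connect();

            // Same statement as in the description: key 1 duplicated in the IN
            // list, fetch size (5) smaller than the matching row count.
            Select select = QueryBuilder.select().from("ks", "cf");
            select.where().and(QueryBuilder.eq("a", 1));
            select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
            select.setFetchSize(5);

            // On affected versions this iteration keeps requesting pages forever.
            ResultSet rs = session.execute(select);
            for (Row row : rs)
                System.out.println(row);
        }
    }
}
{code}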





[jira] [Commented] (CASSANDRA-12426) Writing a null value into a dense table results into a no-op

2016-08-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415271#comment-15415271
 ] 

Aleksey Yeschenko commented on CASSANDRA-12426:
---

I feel like changing it in 3.0 would break existing behaviour that some users 
might be relying on. I'd say leave it be.

And we'll soon have CASSANDRA-10857 in, for those who want to switch the 
behaviour.

> Writing a null value into a dense table results into a no-op 
> -
>
> Key: CASSANDRA-12426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12426
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>
> Inserting into the dense table doesn't seem to create a live partition:
> {code}
> cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'datacenter1': '1' };
> cqlsh> use test ;
> cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
> (partition, key) ) WITH COMPACT STORAGE;
> cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
> cqlsh:test> select * from a;
>  partition | key | owner
> ---+-+
> {code}
> (same behaviour on 2.2)





[jira] [Commented] (CASSANDRA-12426) Writing a null value into a dense table results into a no-op

2016-08-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415269#comment-15415269
 ] 

Alex Petrov commented on CASSANDRA-12426:
-

So the behaviour is correct: after the insert, we cannot differentiate between 
a partition that was inserted with null and one that never existed? Should it 
be the same for 3.x, or should it return {{'a', 'b', null}} in 3.0?

> Writing a null value into a dense table results into a no-op 
> -
>
> Key: CASSANDRA-12426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12426
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>
> Inserting into the dense table doesn't seem to create a live partition:
> {code}
> cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'datacenter1': '1' };
> cqlsh> use test ;
> cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
> (partition, key) ) WITH COMPACT STORAGE;
> cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
> cqlsh:test> select * from a;
>  partition | key | owner
> ---+-+
> {code}
> (same behaviour on 2.2)





[jira] [Updated] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-10 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12420:
-
Attachment: CASSANDRA-12420.patch

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.16
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the correct 
> number of rows.
> The table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we deduplicate the keys on the client side, but it's better to fix 
> this inside Cassandra.





[jira] [Commented] (CASSANDRA-12060) Different failure format for failed LWT between 2.x and 3.x

2016-08-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415266#comment-15415266
 ] 

Alex Petrov commented on CASSANDRA-12060:
-

Talked with [~slebresne] offline, and he suggested fixing the behaviour for all 
branches as follows: 

bq. we want x == null to mean "the container of x exists but x itself is null"

  * If the partition exists but the column is null, an {{= null}} LWT will succeed 
  * If the partition does not exist, an {{= null}} LWT will fail

I've made the required changes, although in the end this is blocked by 
[CASSANDRA-12426]: LWTs on dense tables that previously worked (by accident) no 
longer work, since {{SELECT}} queries return no partition. 
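
To make the intended semantics concrete, a hedged sketch with the DataStax Java 
driver, reusing the table from the issue description (the connection details 
are assumed, and the expected results reflect the proposal above, not current 
behaviour):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class LwtNullSemanticsSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build())
        {
            Session session = cluster.connect();
            // Partition a = 2 exists and s2 is null: the condition should succeed.
            System.out.println(session.execute(
                "DELETE s1 FROM test.testtable WHERE a = 2 IF s2 = null").wasApplied()); // true
            // Partition a = 5 does not exist: the condition should fail.
            System.out.println(session.execute(
                "DELETE s1 FROM test.testtable WHERE a = 5 IF s2 = null").wasApplied()); // false
        }
    }
}
{code}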

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12060-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12060-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12060-trunk-dtest/]|
|[2.2|https://github.com/ifesdjeen/cassandra/tree/12060-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12060-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12060-2.2-dtest/]|



> Different failure format for failed LWT between 2.x and 3.x
> ---
>
> Key: CASSANDRA-12060
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12060
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> When executing following CQL commands: 
> {code}
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'datacenter1': '1' };
> USE test;
> CREATE TABLE testtable (a int, b int, s1 int static, s2 int static, v int, 
> PRIMARY KEY (a, b));
> INSERT INTO testtable (a,b,s1,s2,v) VALUES (2,2,2,null,2);
> DELETE s1 FROM testtable WHERE a = 2 IF s2 IN (10,20,30);
> {code}
> The output is different between {{2.x}} and {{3.x}}:
> 2.x:
> {code}
> cqlsh:test> DELETE s1 FROM testtable WHERE a = 2 IF s2 = 5;
>  [applied] | s2
> ---+--
>  False | null
> {code}
> 3.x:
> {code}
> cqlsh:test> DELETE s1 FROM testtable WHERE a = 2 IF s2 = 5;
>  [applied]
> ---
>  False
> {code}
> {{2.x}} would although return same result if executed on a partition that 
> does not exist at all:
> {code}
> cqlsh:test> DELETE s1 FROM testtable WHERE a = 5 IF s2 = 5;
>  [applied]
> ---
>  False
> {code}
> It _might_ be related to static column LWTs, as I could not reproduce the same 
> behaviour with non-static column LWTs. The most recent change was 
> [CASSANDRA-10532], which enabled LWT operations on static columns with 
> partition keys only. -Another possible relation is [CASSANDRA-9842], which 
> removed the distinction between a {{null}} column and a non-existing row.- 
> (struck through since the same happens on pre-[CASSANDRA-9842] code.)





[jira] [Updated] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-10 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12420:
-
Status: Patch Available  (was: Open)

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.16
>
>
> This can be easily reproduced when the fetch size is smaller than the correct 
> number of rows.
> The table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we deduplicate the keys on the client side, but it's better to fix 
> this inside Cassandra.





[jira] [Commented] (CASSANDRA-12426) Writing a null value into a dense table results into a no-op

2016-08-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415248#comment-15415248
 ] 

Aleksey Yeschenko commented on CASSANDRA-12426:
---

It's not supposed to. We are not using row markers for {{COMPACT STORAGE}} 
tables (in 2.x).
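
To illustrate the row-marker point, a small hedged sketch with the Java driver, 
reusing the schema from the description (connection details assumed): since a 
2.x dense table has no row marker, only a non-null column value can keep the 
row alive.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DenseNullInsertSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build())
        {
            Session session = cluster.connect("test");
            // All non-key columns null: nothing keeps the row alive, so no row.
            session.execute("INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null)");
            System.out.println(session.execute("SELECT * FROM a").all()); // []
            // A non-null value acts as the live cell, so the row now appears.
            session.execute("INSERT INTO a (partition, key, owner) VALUES ('a', 'b', 'x')");
            System.out.println(session.execute("SELECT * FROM a").all()); // one row
        }
    }
}
{code}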

> Writing a null value into a dense table results into a no-op 
> -
>
> Key: CASSANDRA-12426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12426
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>
> Inserting into the dense table doesn't seem to create a live partition:
> {code}
> cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'datacenter1': '1' };
> cqlsh> use test ;
> cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
> (partition, key) ) WITH COMPACT STORAGE;
> cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
> cqlsh:test> select * from a;
>  partition | key | owner
> ---+-+
> {code}
> (same behaviour on 2.2)





[jira] [Updated] (CASSANDRA-12426) Writing a null value into a dense table results into a no-op

2016-08-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12426:

Description: 
Inserting into the dense table doesn't seem to create a live partition:

{code}
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
'NetworkTopologyStrategy', 'datacenter1': '1' };
cqlsh> use test ;
cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
(partition, key) ) WITH COMPACT STORAGE;
cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
cqlsh:test> select * from a;

 partition | key | owner
---+-+
{code}

(same behaviour on 2.2)

  was:
{code}
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
'NetworkTopologyStrategy', 'datacenter1': '1' };
cqlsh> use test ;
cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
(partition, key) ) WITH COMPACT STORAGE;
cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
cqlsh:test> select * from a;

 partition | key | owner
---+-+
{code}


> Writing a null value into a dense table results into a no-op 
> -
>
> Key: CASSANDRA-12426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12426
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>
> Inserting into the dense table doesn't seem to create a live partition:
> {code}
> cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'datacenter1': '1' };
> cqlsh> use test ;
> cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
> (partition, key) ) WITH COMPACT STORAGE;
> cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
> cqlsh:test> select * from a;
>  partition | key | owner
> ---+-+
> {code}
> (same behaviour on 2.2)





[jira] [Updated] (CASSANDRA-12425) Log at DEBUG when running unit tests

2016-08-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12425:

Fix Version/s: 3.x
   3.0.x
   2.2.x
   Status: Patch Available  (was: Open)

The patch is against 3.0, but it should probably go into 2.2+.

> Log at DEBUG when running unit tests
> 
>
> Key: CASSANDRA-12425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12425
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> patch [here|https://github.com/krummas/cassandra/commits/marcuse/debuglogtest]
> sample run 
> [here|http://cassci.datastax.com/job/krummas-marcuse-debuglogtest-testall/3/] 
> - looks like the logs are 13MB gzipped vs about 1.5MB with only INFO logging





[jira] [Created] (CASSANDRA-12426) Writing a null value into a dense table results into a no-op

2016-08-10 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12426:
---

 Summary: Writing a null value into a dense table results into a 
no-op 
 Key: CASSANDRA-12426
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12426
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Petrov


{code}
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 
'NetworkTopologyStrategy', 'datacenter1': '1' };
cqlsh> use test ;
cqlsh:test> CREATE TABLE a (partition text, key text, owner text, PRIMARY KEY 
(partition, key) ) WITH COMPACT STORAGE;
cqlsh:test> INSERT INTO a (partition, key, owner) VALUES ('a', 'b', null);
cqlsh:test> select * from a;

 partition | key | owner
---+-+
{code}





[jira] [Created] (CASSANDRA-12425) Log at DEBUG when running unit tests

2016-08-10 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-12425:
---

 Summary: Log at DEBUG when running unit tests
 Key: CASSANDRA-12425
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12425
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor


patch [here|https://github.com/krummas/cassandra/commits/marcuse/debuglogtest]

sample run 
[here|http://cassci.datastax.com/job/krummas-marcuse-debuglogtest-testall/3/] - 
looks like the logs are 13MB gzipped vs about 1.5MB with only INFO logging





[jira] [Commented] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2016-08-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415098#comment-15415098
 ] 

Marcus Eriksson commented on CASSANDRA-12100:
-

LGTM, +1

> Compactions are stuck after TRUNCATE
> 
>
> Key: CASSANDRA-12100
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12100
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Stefano Ortolani
>Assignee: Stefania
> Fix For: 3.0.x
>
> Attachments: node3_jstack.log
>
>
> Hi,
> since the upgrade to C* 3.0.7 I see compaction tasks getting stuck when 
> truncating the column family. I verified this on all nodes of the cluster.
> Pending compactions seem to disappear after restarting the node.
> {noformat}
> root@node10:~# nodetool -h localhost compactionstats
> pending tasks: 6
>  id   compaction type  
> keyspacetable   completed  totalunit   progress
>24e1ad30-3cac-11e6-870d-5de740693258Compaction  
> schema  table_1   0   57558382   bytes  0.00%
>2be2e3b0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_2   0   65063705   bytes  0.00%
>54de38f0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_3   0 187031   bytes  0.00%
>31926ce0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_4   0   42951119   bytes  0.00%
>3911ad00-3cac-11e6-870d-5de740693258Compaction  
> schema  table_5   0   25918949   bytes  0.00%
>3e6a8ab0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_6   0   65466210   bytes  0.00%
> Active compaction remaining time :   0h00m15s
> {noformat}





[jira] [Updated] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2016-08-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12100:

Status: Ready to Commit  (was: Patch Available)

> Compactions are stuck after TRUNCATE
> 
>
> Key: CASSANDRA-12100
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12100
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Stefano Ortolani
>Assignee: Stefania
> Fix For: 3.0.x
>
> Attachments: node3_jstack.log
>
>
> Hi,
> since the upgrade to C* 3.0.7 I see compaction tasks getting stuck when 
> truncating the column family. I verified this on all nodes of the cluster.
> Pending compactions seem to disappear after restarting the node.
> {noformat}
> root@node10:~# nodetool -h localhost compactionstats
> pending tasks: 6
>  id   compaction type  
> keyspacetable   completed  totalunit   progress
>24e1ad30-3cac-11e6-870d-5de740693258Compaction  
> schema  table_1   0   57558382   bytes  0.00%
>2be2e3b0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_2   0   65063705   bytes  0.00%
>54de38f0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_3   0 187031   bytes  0.00%
>31926ce0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_4   0   42951119   bytes  0.00%
>3911ad00-3cac-11e6-870d-5de740693258Compaction  
> schema  table_5   0   25918949   bytes  0.00%
>3e6a8ab0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_6   0   65466210   bytes  0.00%
> Active compaction remaining time :   0h00m15s
> {noformat}





[jira] [Updated] (CASSANDRA-12424) Assertion failure in ViewUpdateGenerator

2016-08-10 Thread Keith Wansbrough (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wansbrough updated CASSANDRA-12424:
-
Attachment: cassandra.log

> Assertion failure in ViewUpdateGenerator
> 
>
> Key: CASSANDRA-12424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12424
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Keith Wansbrough
> Attachments: cassandra.log
>
>
> Using released apache-cassandra-3.7.0, we have managed to get a node into a 
> state where it won't start up. The exception is {{java.lang.AssertionError: 
> We shouldn't have got there is the base row had no associated entry}} and it 
> appears in 
> ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455).
> I still have the offending node; what diags/data would be useful for 
> diagnosis? I've attached the full cassandra.log. In summary, cassandra.log 
> contains multiple instances of the following when replaying the commit log on 
> startup, leading ultimately to failure to start up.
> {code}
> ERROR 15:24:17 Unknown exception caught while attempting to update 
> MaterializedView! edison.scs_subscriber
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  15:24:17 Uncaught exception on thread 
> Thread[SharedPool-Worker-4,5,main]: {}
> {code}
> and ultimately 
> {code}
> ERROR 15:24:18 Exception encountered during startup
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> {code}





[jira] [Created] (CASSANDRA-12424) Assertion failure in ViewUpdateGenerator

2016-08-10 Thread Keith Wansbrough (JIRA)
Keith Wansbrough created CASSANDRA-12424:


 Summary: Assertion failure in ViewUpdateGenerator
 Key: CASSANDRA-12424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12424
 Project: Cassandra
  Issue Type: Bug
Reporter: Keith Wansbrough


Using released apache-cassandra-3.7.0, we have managed to get a node into a 
state where it won't start up. The exception is {{java.lang.AssertionError: We 
shouldn't have got there is the base row had no associated entry}} and it 
appears in 
ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455).

I still have the offending node; what diags/data would be useful for diagnosis? 
I've attached the full cassandra.log. In summary, cassandra.log contains 
multiple instances of the following when replaying the commit log on startup, 
leading ultimately to failure to start up.

{code}
ERROR 15:24:17 Unknown exception caught while attempting to update 
MaterializedView! edison.scs_subscriber
java.lang.AssertionError: We shouldn't have got there is the base row had no 
associated entry
at 
org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
 ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
[apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
 [apache-cassandra-3.7.0.jar:3.7.0]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[apache-cassandra-3.7.0.jar:3.7.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_91]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.7.0.jar:3.7.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
WARN  15:24:17 Uncaught exception on thread Thread[SharedPool-Worker-4,5,main]: 
{}
{code}

and ultimately 

{code}
ERROR 15:24:18 Exception encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.AssertionError: We shouldn't have got there is the base row had no 
associated entry
{code}






[jira] [Commented] (CASSANDRA-12039) Add an index callback to be notified post bootstrap and before joining the ring

2016-08-10 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415034#comment-15415034
 ] 

Sergio Bossa commented on CASSANDRA-12039:
--

I've rebased both the C* and dtest patches, but the dtests keep failing in 
strange ways, e.g.: 
https://cassci.datastax.com/view/Dev/view/sbtourist/job/sbtourist-CASSANDRA-12039-3.0-dtest/15/testReport/junit/secondary_indexes_test/TestPreJoinCallback/manual_join_test/

I believe it's some sort of instability in the test run rather than in the test 
itself. Any suggestions on how to move forward?

> Add an index callback to be notified post bootstrap and before joining the 
> ring
> ---
>
> Key: CASSANDRA-12039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12039
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sergio Bossa
>Assignee: Sergio Bossa
>
> Custom index implementations might need to be notified when the node finishes 
> bootstrapping in order to execute some blocking tasks before the node itself 
> goes into NORMAL state.
> This is a proposal to add such functionality, which should roughly require 
> the following:
> 1) Add a {{getPostBootstrapTask}} callback to the {{Index}} interface.
> 2) Add an {{executePostBootstrapBlockingTasks}} method to 
> {{SecondaryIndexManager}} calling into the previously mentioned callback.
> 3) Hook that into {{StorageService#joinTokenRing}}.
> Thoughts?
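
A rough sketch of one possible shape for the hooks proposed above; the names 
come from the numbered list, while the signatures, types, and wiring are 
assumptions rather than a committed API:

{code}
import java.util.concurrent.Callable;

// 1) A callback on the Index interface returning a blocking post-bootstrap
//    task, or null when the implementation needs none (hypothetical shape).
public interface Index
{
    Callable<?> getPostBootstrapTask();
}

// 2) SecondaryIndexManager runs every registered task; 3) StorageService#joinTokenRing
//    would call this after bootstrap, before the node goes into NORMAL state.
class SecondaryIndexManager
{
    public void executePostBootstrapBlockingTasks(Iterable<Index> indexes)
    {
        for (Index index : indexes)
        {
            Callable<?> task = index.getPostBootstrapTask();
            if (task == null)
                continue;
            try
            {
                task.call(); // block until the index is ready to serve reads
            }
            catch (Exception e)
            {
                throw new RuntimeException("Post-bootstrap index task failed", e);
            }
        }
    }
}
{code}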





[jira] [Updated] (CASSANDRA-12246) Cassandra v2.2 to v3.0.9 upgrade failed

2016-08-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Lété updated CASSANDRA-12246:

Attachment: system.schema_keyspaces.txt
system.schema_columns.txt
schema_columnfamilies.txt
describe.txt

I'll give you the snapshot as soon as possible

> Cassandra v2.2 to v3.0.9 upgrade failed
> ---
>
> Key: CASSANDRA-12246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12246
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04, DSC 2.2 > DSC 3.0
>Reporter: Thomas Lété
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.9
>
> Attachments: describe.txt, schema_columnfamilies.txt, 
> system.schema_columns.txt, system.schema_keyspaces.txt
>
>
> Hi,
> I'm trying to upgrade our Cassandra cluster, which was created using 
> OpsCenter.
> Now that OpsCenter has dropped support for DataStax Community, we upgraded 
> manually.
> Unfortunately, the Schema Migrator seems to hang somewhere during startup.
> Here is the log I get:
> {code:title=debug.log}
> INFO  [main] 2016-07-20 15:34:49,381 SystemKeyspace.java:1283 - Detected 
> version upgrade from 2.2.7 to 3.0.8, snapshotting system keyspace
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,383 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> compaction_history
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,389 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> hints
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,389 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_aggregates
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,392 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> IndexInfo
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,393 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_columnfamilies
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,395 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_triggers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,398 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> size_estimates
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,401 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_functions
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,403 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> paxos
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,404 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> views_builds_in_progress
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,404 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> built_views
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,405 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> peer_events
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,405 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> range_xfers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,406 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> peers
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,408 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> batches
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,408 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_keyspaces
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,410 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_usertypes
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,413 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> local
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,415 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> sstable_activity
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> available_ranges
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> batchlog
> DEBUG [MemtablePostFlush:1] 2016-07-20 15:34:49,418 
> ColumnFamilyStore.java:898 - forceFlush requested but everything is clean in 
> schema_columns
> WARN  [main] 2016-07-20 15:34:49,634 CompressionParams.java:382 - The 
> sstable_compression option has been deprecated. You should use class instead
> WARN  [main] 2016-07-20 15:34:49,654 CompressionParams.java:333 - The 
> chunk_length_kb option has been deprecated. Yo

[jira] [Commented] (CASSANDRA-12403) Slow query detecting

2016-08-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414982#comment-15414982
 ] 

Stefania commented on CASSANDRA-12403:
--

Hello Hoshii-san,

Thank you for the patch. The approach is correct; you understood exactly what 
I meant in my previous comment.

I created a branch with the initial commit and added some suggestions in a 
follow-up commit. I've also started the tests; the results are still pending:

|trunk|[patch|https://github.com/stef1927/cassandra/commits/12403]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12403-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12403-dtest/]|

The suggestions can be summarized as follows:

* {{logging_slow_query_threshold_in_ms}} in cassandra.yaml was renamed to 
{{query_log_timeout_in_ms}}, and the default value was increased from 200 to 
500 milliseconds; I know this is very high, but I prefer to be on the safe 
side, WDYT?
* The methods associated with retrieving the configuration property have been 
renamed for consistency with the new yaml property name.
* With the initial version of the patch, the same query could have been added 
to the slow queries queue several times. I've changed {{MonitorableImpl}} so 
that we log a slow query only once it has completed. This has the advantage 
that we have the accurate total time for the query, and that we do not log 
queries that time out twice.
* Given the change above, it was then possible to refactor {{MonitorTask}} 
further, which I did. There was also a problem in that the slow queries queue 
was bounded, but we did not report in the log message that some slow query 
logs were missing due to a full queue (via {{...}} in the message); the 
refactoring should have fixed this. I've also added some comments (unrelated 
to this patch; they should already have been there).
* I think we can log both slow and timed-out queries with the same scheduling 
task.
* The unit tests have been changed to reflect the points above.
* In {{MonitorTask}} the summary log message for slow queries was downgraded 
from WARN to INFO, and the logger was wrapped with {{NoSpamLogger}}, which 
logs the summary message at most once every 5 minutes (see the sketch at the 
end of this comment). {{NoSpamLogger}} is also used for the summary log 
message for timed-out queries, although that one stays at WARN level. 
Detailed logs will still be written every time at DEBUG level in debug.log.

Finally, I've created some distributed tests 
[here|https://github.com/stef1927/cassandra-dtest/commits/12403]. You can run 
them to see what the log messages will look like. Here is a sample:

{code}
INFO  [ScheduledTasks:1] 2016-08-10 16:56:50,209 NoSpamLogger.java:91 - Some 
operations were slow, details available at debug level (debug.log)
DEBUG [ScheduledTasks:1] 2016-08-10 16:56:50,211 MonitoringTask.java:173 - 3 
operations were slow in the last 4998 msecs:
<...>, time 3026 msec - slow timeout 30 msec
<...>, was slow 2 times: avg/min/max 330/325/335 msec - slow timeout 30 msec
<... 0 LIMIT 5000>, time 1449 msec - slow timeout 30 msec
{code}

The INFO message appears in system.log and debug.log, but no more than once 
every 5 minutes. The DEBUG log appears every 5 seconds (configurable) if there 
is at least one slow query.

Could you review the follow-up commit and let us know what you think? Please 
don't hesitate to raise any concerns, especially regarding the logging 
choices, since they are intended for operators.
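
For readers unfamiliar with {{NoSpamLogger}}, here is a self-contained toy 
sketch of the rate-limiting idea behind it (an illustration of the mechanism 
only; all names below are made up, and the real {{NoSpamLogger}} API differs):

{code:title=RateLimitedLog.java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Toy rate-limited logger: emits a statement at most once per interval and
// silently drops it otherwise.
final class RateLimitedLog
{
    private final long minIntervalNanos;
    private final AtomicLong nextLogNanos = new AtomicLong(System.nanoTime());

    RateLimitedLog(long minInterval, TimeUnit unit)
    {
        this.minIntervalNanos = unit.toNanos(minInterval);
    }

    /** Prints only if the interval has elapsed since the last emission. */
    boolean maybeLog(String message)
    {
        long now = System.nanoTime();
        long next = nextLogNanos.get();
        if (now - next < 0 || !nextLogNanos.compareAndSet(next, now + minIntervalNanos))
            return false; // suppressed, or another thread logged first
        System.out.println(message);
        return true;
    }
}

// Usage: however often slow operations are detected, the summary line
// appears at most once every 5 minutes.
// RateLimitedLog summary = new RateLimitedLog(5, TimeUnit.MINUTES);
// summary.maybeLog("Some operations were slow, details available at debug level (debug.log)");
{code}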

> Slow query detecting
> 
>
> Key: CASSANDRA-12403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12403
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Shogo Hoshii
>Assignee: Shogo Hoshii
> Attachments: sample.log, slow_query.patch, slow_query.patch
>
>
> Hello,
> In Cassandra production environments, users sometimes build anti-pattern 
> tables and issue queries in inefficient ways.
> So I would like to suggest a feature that enables logging of slow queries.
> The feature can help Cassandra operators identify bad query patterns.
> Operators can then give advice about queries and data modelling to users who 
> don't know Cassandra very well.
> This ticket is related to CASSANDRA-6226, and I focus on detecting bad query 
> patterns, not aborting them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-08-10 Thread Tomasz Grabiec (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414919#comment-15414919
 ] 

Tomasz Grabiec commented on CASSANDRA-12423:


The problem could be that org.apache.cassandra.db.LegacyLayout#decodeBound 
decodes the upper bound of a range tombstone with eoc=0 the same way as one 
with eoc=1 (INCL_END_BOUND), so both cover all cells prefixed by the short 
key. In 2.1.9, by contrast, composite ordering sorts an eoc=0 bound before 
the cells it prefixes.
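
A toy illustration of that ordering rule (simplified composites as string 
lists; this is not the actual {{Composite}}/{{LegacyLayout}} code, just the 
comparison semantics described above):

{code:title=ToyComposite.java}
import java.util.Arrays;
import java.util.List;

// Toy model of 2.1-style composite comparison. A bound is a prefix plus an
// end-of-component (eoc) flag; when the bound is a strict prefix of a cell
// name, eoc decides which side of the prefixed cells it sorts on:
//   eoc <= 0 : the bound sorts BEFORE the cells it prefixes
//   eoc  > 0 : the bound sorts AFTER them (inclusive end)
// Decoding an eoc=0 end bound as INCL_END_BOUND therefore makes a tombstone
// cover cells that 2.1 ordering would have left outside it.
final class ToyComposite
{
    static int compare(List<String> bound, int eoc, List<String> cellName)
    {
        int n = Math.min(bound.size(), cellName.size());
        for (int i = 0; i < n; i++)
        {
            int cmp = bound.get(i).compareTo(cellName.get(i));
            if (cmp != 0)
                return cmp;
        }
        if (bound.size() < cellName.size())   // strict prefix: eoc decides
            return eoc > 0 ? 1 : -1;
        if (bound.size() > cellName.size())
            return 1;
        return Integer.signum(eoc);           // same length: eoc breaks the tie
    }

    public static void main(String[] args)
    {
        List<String> end = Arrays.asList("asd");         // tombstone end bound
        List<String> cell = Arrays.asList("asd", "asd"); // cell "asd:asd"
        System.out.println(compare(end, 0, cell)); // -1: eoc=0 end excludes the cell
        System.out.println(compare(end, 1, cell)); //  1: eoc=1 end covers the cell
    }
}
{code}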

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-08-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414913#comment-15414913
 ] 

Benedict commented on CASSANDRA-11363:
--

This blocking behaviour and the default queue limit were carried forward from 
the prior code, so I'm afraid I don't have any insights. It may be that the 
increased baseline performance of 2.1 permits worse outlier states to 
accumulate if the user exploits it.

The old code used the JBoss MemoryAwareExecutorService, but estimated the size 
of each request as 1. A value of 128 does seem very small for users performing 
very small operations, but conversely a few large reads could destroy the box, 
so we will have complaints whatever we pick. Perhaps configuring this 
parameter should be explicitly called out in whatever best-practices docs we 
have.

Ideally, this limit would be removed entirely and better dynamic constraints 
applied instead - I think we already have some tickets for keeping the number 
of requests at a coordinator constrained. If that were dealt with (for all 
request types), this limit could simply go away.
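
To make the queue-limit discussion concrete, a self-contained toy sketch of 
the blocking-on-full-queue behaviour (an illustration only, not Cassandra's 
actual NTR executor; the bound of 128 mirrors the default discussed above):

{code:title=BoundedStage.java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Toy bounded request stage: producers block when the queue already holds
// maxQueued tasks, which is what the "All time blocked" column in tpstats
// counts for Native-Transport-Requests.
final class BoundedStage
{
    private final BlockingQueue<Runnable> queue;
    private final AtomicLong totalBlocked = new AtomicLong();

    BoundedStage(int maxQueued) // e.g. 128
    {
        this.queue = new ArrayBlockingQueue<>(maxQueued);
    }

    void submit(Runnable task) throws InterruptedException
    {
        // Fast path: room in the queue.
        if (queue.offer(task))
            return;
        // Queue full: the submitting (client-connection) thread blocks here.
        totalBlocked.incrementAndGet();
        queue.put(task);
    }

    Runnable take() throws InterruptedException
    {
        return queue.take(); // worker threads drain the queue
    }

    long blockedCount()
    {
        return totalBlocked.get();
    }
}
{code}

With a bound of 128 and thousands of cheap requests per second, many 
connection threads can end up parked in put() at once, which is consistent 
with the high "All time blocked" count in the tpstats output above.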

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: T Jake Luciani
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, 
> max_queued_ntr_property.txt, thread-queue-2.1.txt
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC

[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-08-10 Thread Tomasz Grabiec (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomasz Grabiec updated CASSANDRA-12423:
---
Summary: Cells missing from compact storage table after upgrading from 
2.1.9 to 3.7  (was: Cells missing from compact storage table after after 
upgrading from 2.1.9 to 3.7)

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12423) Cells missing form compact storage table after after upgrading from 2.1.9 to 3.7

2016-08-10 Thread Tomasz Grabiec (JIRA)
Tomasz Grabiec created CASSANDRA-12423:
--

 Summary: Cells missing form compact storage table after after 
upgrading from 2.1.9 to 3.7
 Key: CASSANDRA-12423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
 Project: Cassandra
  Issue Type: Bug
Reporter: Tomasz Grabiec


Schema:
{code}
create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
c2)) with compact storage and compression = {'sstable_compression': ''};
{code}

sstable2json before upgrading:
{code}
[
{"key": "1",
 "cells": [["","0",1470761440040513],
   ["a","asd",2470761440040513,"t",1470764842],
   ["asd:","0",1470761451368658],
   ["asd:asd","0",1470761449416613]]}
]
{code}

Query result with 2.1.9:
{code}
cqlsh> select * from ks1.test;

 id | c1  | c2   | v
+-+--+---
  1 | | null | 0
  1 | asd |  | 0
  1 | asd |  asd | 0

(3 rows)
{code}

Query result with 3.7:
{code}
cqlsh> select * from ks1.test;

 id | 6331 | 6332 | v
+--+--+---
  1 |  | null | 0

(1 rows)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after after upgrading from 2.1.9 to 3.7

2016-08-10 Thread Tomasz Grabiec (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomasz Grabiec updated CASSANDRA-12423:
---
Summary: Cells missing from compact storage table after after upgrading 
from 2.1.9 to 3.7  (was: Cells missing form compact storage table after after 
upgrading from 2.1.9 to 3.7)

> Cells missing from compact storage table after after upgrading from 2.1.9 to 
> 3.7
> 
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-08-10 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414891#comment-15414891
 ] 

Benjamin Lerer edited comment on CASSANDRA-12249 at 8/10/16 8:17 AM:
-

The patch looks good to me, and the failing tests seem to be caused by other 
problems (one obvious problem is that the alias error message was changed in 
the code but not in the tests).

+1

Thans for taking over [~thobbs]



was (Author: blerer):
The patch looks good to me and the failling tests seems to be caused by some 
other problems (one problem is obviously the fact that the alias error message 
has been changed in the code but not in the tests).

+1


> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.8, 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-08-10 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414891#comment-15414891
 ] 

Benjamin Lerer edited comment on CASSANDRA-12249 at 8/10/16 8:17 AM:
-

The patch looks good to me, and the failing tests seem to be caused by other 
problems (one obvious problem is that the alias error message was changed in 
the code but not in the tests).

+1

Thanks for taking over [~thobbs]



was (Author: blerer):
The patch looks good to me and the failling tests seems to be caused by some 
other problems (one problem is obviously the fact that the alias error message 
has been changed in the code but not in the tests).

+1

Thans for taking over [~thobbs]


> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.8, 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-08-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12249:
---
Fix Version/s: 3.9

> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.8, 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-08-10 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414891#comment-15414891
 ] 

Benjamin Lerer commented on CASSANDRA-12249:


The patch looks good to me, and the failing tests seem to be caused by other 
problems (one obvious problem is that the alias error message was changed in 
the code but not in the tests).

+1


> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-08-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12249:
---
Status: Ready to Commit  (was: Patch Available)

> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12253) Fix exceptions when enabling gossip on proxy nodes.

2016-08-10 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414847#comment-15414847
 ] 

Joel Knighton commented on CASSANDRA-12253:
---

It occurs to me that this still isn't sufficient. This patch would avoid the 
NPE, but we set the status to shutdown on stop gossiping, so when we start 
gossiping again, we'll broadcast this shutdown status and repeatedly remove 
and re-add the endpoint to gossip.

The best ideas I have here are to add a status for coordinator-only nodes, or 
to add a way to force the removal of a state from the local EndpointState on 
startGossiping, so that we can remove the shutdown state and go back to 
gossiping no status. I don't like either of them very much.

Any brilliant ideas, [~brandon.williams]?

> Fix exceptions when enabling gossip on proxy nodes.
> ---
>
> Key: CASSANDRA-12253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12253
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 0001-for-proxy-node-not-set-gossip-tokens.patch, 
> 0002-for-proxy-node-not-set-gossip-tokens.patch
>
>
> We have a tier of Cassandra nodes running with join_ring=false flag, which we 
> call proxy nodes, and they will never join the ring.
> The problem is that sometimes we need to disable and enable the gossip on 
> those nodes, and `nodetool enablegossip` throws exceptions when we do that:
> {code}
> java.lang.AssertionError
> at 
> org.apache.cassandra.service.StorageService.getLocalTokens(StorageService.java:2213)
> at 
> org.apache.cassandra.service.StorageService.startGossiping(StorageService.java:371)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
> at sun.rmi.transport.Transport$1.run(Transport.java:177)
> at sun.rmi.transport.Transport$1.run(Transport.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (CASSANDRA-12366) Fix compaction throttle

2016-08-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414837#comment-15414837
 ] 

Marcus Eriksson commented on CASSANDRA-12366:
-

+1

created CASSANDRA-12422 for further cleanup

> Fix compaction throttle
> ---
>
> Key: CASSANDRA-12366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12366
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.x
>
>
> Compaction throttling is broken in the following ways:
>   * It throttles bytes read after being decompressed
>   * Compaction creates multiple scanners which share the rate limiter causing 
> too much throttling
>   * It bears no resemblance to the reported compaction time remaining 
> calculation (Bytes of source sstables processed since start of compaction)
> To fix this we need to simplify the throttling to be only at the 
> CompactionIterator level.
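
For illustration, a minimal sketch of throttling at a single point on source 
bytes (assumptions: Guava's RateLimiter and a made-up ThrottledCompactionFeed 
wrapper; this shows the general direction described above, not the actual 
patch):

{code:title=ThrottledCompactionFeed.java}
import com.google.common.util.concurrent.RateLimiter;

// Sketch: throttle once, at the merged-iterator level, on source bytes
// processed - rather than per-scanner on decompressed bytes. A single
// limiter with a single consumer avoids the over-throttling described above.
final class ThrottledCompactionFeed
{
    private final RateLimiter limiter;   // permits = bytes per second
    private long lastReportedBytes = 0;

    ThrottledCompactionFeed(double bytesPerSecond)
    {
        this.limiter = RateLimiter.create(bytesPerSecond);
    }

    /**
     * Called after each partition is produced, with the total source
     * (on-disk, compressed) bytes processed so far - the same quantity
     * used for the compaction time-remaining estimate.
     */
    void onProgress(long sourceBytesProcessed)
    {
        long delta = sourceBytesProcessed - lastReportedBytes;
        lastReportedBytes = sourceBytesProcessed;
        if (delta > 0)
            limiter.acquire((int) Math.min(delta, Integer.MAX_VALUE));
    }
}
{code}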



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12422) Clean up the SSTableReader#getScanner API

2016-08-10 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-12422:
---

 Summary: Clean up the SSTableReader#getScanner API
 Key: CASSANDRA-12422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12422
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor
 Fix For: 4.0


After CASSANDRA-12366 we only call the various getScanner methods in 
SSTableReader with null as the rate limiter - we should remove this parameter.

Targeting 4.0, as we probably shouldn't change the API in 3.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12366) Fix compaction throttle

2016-08-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12366:

Status: Ready to Commit  (was: Patch Available)

> Fix compaction throttle
> ---
>
> Key: CASSANDRA-12366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12366
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.x
>
>
> Compaction throttling is broken in the following ways:
>   * It throttles bytes read after being decompressed
>   * Compaction creates multiple scanners which share the rate limiter causing 
> too much throttling
>   * It bears no resemblance to the reported compaction time remaining 
> calculation (Bytes of source sstables processed since start of compaction)
> To fix this we need to simplify the throttling to be only at the 
> CompactionIterator level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

