[jira] [Commented] (CASSANDRA-9749) CommitLogReplayer continues startup after encountering errors

2015-08-19 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704365#comment-14704365
 ] 

Branimir Lambov commented on CASSANDRA-9749:


Test fix for 2.2 pushed 
[here|https://github.com/blambov/cassandra/tree/9749-2.2-testfix]. 
[testall|http://cassci.datastax.com/job/blambov-9749-2.2-testfix-testall/] 
[dtest|http://cassci.datastax.com/job/blambov-9749-2.2-testfix-dtest/]

> CommitLogReplayer continues startup after encountering errors
> -
>
> Key: CASSANDRA-9749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9749
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Branimir Lambov
> Fix For: 2.2.1, 3.0 beta 1
>
> Attachments: 9749-coverage.tgz
>
>
> There are a few places where the commit log recovery method either skips 
> sections or just returns when it encounters errors.
> Specifically if it can't read the header here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L298
> Or if there are compressor problems here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L314
>  and here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L366
> Whether these are user-fixable or not, I think we should require more direct 
> user intervention (i.e. fix what's wrong, or remove the bad file and restart) 
> since we're basically losing data.
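For illustration, here is a minimal sketch of the policy being proposed: replay fails fast on an unreadable segment unless the operator explicitly opts in to skipping. This is Python pseudocode of the behaviour, not Cassandra's actual CommitLogReplayer, and the `skip_corrupt` flag name is hypothetical.

```python
class CorruptSegmentError(Exception):
    """Raised when a commit log segment cannot be read safely."""

def replay_segments(segments, skip_corrupt=False):
    # segments: list of (name, header_ok) pairs; header_ok stands in for
    # "the segment header parsed and checksummed correctly"
    replayed = []
    for name, header_ok in segments:
        if not header_ok:
            if skip_corrupt:
                continue  # old behaviour: the section is silently lost
            raise CorruptSegmentError(
                "unreadable header in %s; fix or remove the file and restart" % name)
        replayed.append(name)
    return replayed
```

With `skip_corrupt=False` as the default, startup aborts and the operator must repair or delete the bad segment before restarting; the behaviour described in the ticket corresponds to `skip_corrupt=True`.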



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9670) Cannot run CQL scripts on Windows AND having error Ubuntu Linux

2015-08-19 Thread Sanjay Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704280#comment-14704280
 ] 

Sanjay Patel commented on CASSANDRA-9670:
-

Carl, any plan for the analysis? Thanks 

> Cannot run CQL scripts on Windows AND having error Ubuntu Linux
> ---
>
> Key: CASSANDRA-9670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9670
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: - Apache Cassandra 2.1.7 and 2.1.8 in Ubuntu
> - DataStax Community Edition  on Windows 7, 64 Bit and Ubuntu 
>Reporter: Sanjay Patel
>Assignee: Carl Yeksigian
>  Labels: cqlsh
> Fix For: 2.1.x
>
> Attachments: cities.cql, germany_cities.cql, germany_cities.cql, 
> india_cities.csv, india_states.csv, sp_setup.cql
>
>
> After installing 2.1.6 and 2.1.7 it is not possible to execute CQL 
> scripts that were earlier executed successfully in Windows + Linux 
> environments.
> I have tried installing the latest version of Python 2 and executing again, 
> but I get the same error.
> Attaching cities.cql for reference.
> ---
> {code}
> cqlsh> source 'shoppoint_setup.cql' ;
> shoppoint_setup.cql:16:InvalidRequest: code=2200 [Invalid query] 
> message="Keyspace 'shopping' does not exist"
> shoppoint_setup.cql:647:'ascii' codec can't decode byte 0xc3 in position 57: 
> ordinal not in range(128)
> cities.cql:9:'ascii' codec can't decode byte 0xc3 in position 51: ordinal not 
> in range(128)
> cities.cql:14:
> Error starting import process:
> cities.cql:14:Can't pickle : it's not found as thread.lock
> cities.cql:14:can only join a started process
> cities.cql:16:
> Error starting import process:
> cities.cql:16:Can't pickle : it's not found as thread.lock
> cities.cql:16:can only join a started process
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 380, in 
> main
> prepare(preparation_data)
>   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 489, in 
> prepare
> Traceback (most recent call last):
>   File "", line 1, in 
> file, path_name, etc = imp.find_module(main_name, dirs)
> ImportError: No module named cqlsh
>   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 380, in 
> main
> prepare(preparation_data)
>   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 489, in 
> prepare
> file, path_name, etc = imp.find_module(main_name, dirs)
> ImportError: No module named cqlsh
> shoppoint_setup.cql:663:'ascii' codec can't decode byte 0xc3 in position 18: 
> ordinal not in range(128)
> ipcache.cql:28:ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> I:\var\lib\cassandra\data\syste
> m\schema_columns-296e9c049bec3085827dc17d3df2122a\system-schema_columns-ka-300-Data.db
>  (The process cannot access the file because it is being used by another 
> process)">
> ccavn_bulkupdate.cql:75:ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> I:\var\lib\cassandra\d
> ata\system\schema_columns-296e9c049bec3085827dc17d3df2122a\system-schema_columns-tmplink-ka-339-Data.db
>  (The process cannot access the file because it is being used by another 
> process)">
> shoppoint_setup.cql:680:'ascii' codec can't decode byte 0xe2 in position 14: 
> ordinal not in range(128){code}
> -
> In one of our Ubuntu development environments we see similar errors.
> -
> {code}
> shoppoint_setup.cql:647:'ascii' codec can't decode byte 0xc3 in position 57: 
> ordinal not in range(128)
> cities.cql:9:'ascii' codec can't decode byte 0xc3 in position 51: ordinal not 
> in range(128)
> (corresponding line) COPY cities (city,country_code,state,isactive) FROM 
> 'testdata/india_cities.csv' ;
> [19:53:18] j.basu: shoppoint_setup.cql:663:'ascii' codec can't decode byte 
> 0xc3 in position 18: ordinal not in range(128)
> {code}
> 
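The `'ascii' codec can't decode byte 0xc3` errors quoted above are the classic Python 2 symptom of UTF-8 input being decoded with the default ASCII codec: 0xc3 opens a two-byte UTF-8 sequence, as in German city names. A minimal reproduction of the underlying decode failure, runnable under Python 3:

```python
# 0xc3 0xbc is the UTF-8 encoding of 'ü', as in 'München'; cqlsh on
# Python 2 decoded script files with the default ASCII codec.
data = "München".encode("utf-8")

try:
    data.decode("ascii")
    failed = False
except UnicodeDecodeError as e:
    failed = True
    # message reads like: 'ascii' codec can't decode byte 0xc3 in position 1
    assert "0xc3" in str(e)

assert failed
assert data.decode("utf-8") == "München"
```

Decoding the script file as UTF-8 instead of ASCII avoids the error entirely, which is why the same files worked in environments where the codec defaulted differently.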





[jira] [Commented] (CASSANDRA-9414) Windows utest 2.2: org.apache.cassandra.db.CommitLogTest.testDeleteIfNotDirty intermittent failure

2015-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704072#comment-14704072
 ] 

Paulo Motta commented on CASSANDRA-9414:


Sometimes I get a variant of this error even with dtest:

{noformat}
==
ERROR: split_test (sstablesplit_test.TestSSTableSplit)
--
Traceback (most recent call last):
  File "C:\Users\Paulo\Repositories\cassandra-dtest\dtest.py", line 542, in 
tearDown
self._cleanup_cluster()
  File "C:\Users\Paulo\Repositories\cassandra-dtest\dtest.py", line 239, in 
_cleanup_cluster
self.cluster.remove()
  File "c:\users\paulo\repositories\ccm\ccmlib\cluster.py", line 223, in remove
common.rmdirs(self.get_path())
  File "c:\users\paulo\repositories\ccm\ccmlib\common.py", line 156, in rmdirs
shutil.rmtree(u"?\\" + path)
  File "C:\tools\python2\lib\shutil.py", line 247, in rmtree
rmtree(fullname, ignore_errors, onerror)
  File "C:\tools\python2\lib\shutil.py", line 247, in rmtree
rmtree(fullname, ignore_errors, onerror)
  File "C:\tools\python2\lib\shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
  File "C:\tools\python2\lib\shutil.py", line 250, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 
u'?\\c:\\users\\paulo\\appdata\\local\\temp\\dtest-6dz1zm\\test\\node1\\commitlogs\\CommitLog-6-1440042394664.log'
{noformat}
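A common workaround for this class of `Access is denied` failures during teardown is an error handler on `shutil.rmtree` that clears permission/read-only bits and retries the failed call (on Windows this is often combined with a short sleep, since handles from another process are released asynchronously). A hedged sketch of the pattern, not the actual ccm code; the helper name is made up:

```python
import os
import shutil
import stat
import sys
import tempfile

def force_rmtree(path):
    """rmtree that makes the offending entry (and its parent) writable
    and retries the failed operation once."""
    def handler(func, p, exc):
        os.chmod(os.path.dirname(p), stat.S_IRWXU)
        os.chmod(p, stat.S_IRWXU)
        func(p)  # retry the call that failed (os.unlink, os.rmdir, ...)
    if sys.version_info >= (3, 12):
        shutil.rmtree(path, onexc=handler)   # onerror is deprecated in 3.12+
    else:
        shutil.rmtree(path, onerror=handler)

# demonstrate on a tree whose subdirectory has had its write bit removed
base = tempfile.mkdtemp()
sub = os.path.join(base, "commitlogs")
os.mkdir(sub)
open(os.path.join(sub, "CommitLog-6-1.log"), "w").close()
os.chmod(sub, stat.S_IRUSR | stat.S_IXUSR)  # plain rmtree would fail here
force_rmtree(base)
assert not os.path.exists(base)
```

This resolves permission-style failures; it does not help when the file is genuinely held open by another process, which is the sharing-violation case the Windows tickets describe.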

> Windows utest 2.2: org.apache.cassandra.db.CommitLogTest.testDeleteIfNotDirty 
> intermittent failure
> --
>
> Key: CASSANDRA-9414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9414
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 2.2.x
>
>
> Failure is intermittent enough that bisect is proving to be more hassle than 
> it's worth. Seems pretty consistent in CI.
> {noformat}
> [junit] Testcase: 
> testDeleteIfNotDirty(org.apache.cassandra.db.CommitLogTest):  Caused an 
> ERROR
> [junit] java.nio.file.AccessDeniedException: 
> build\test\cassandra\commitlog;0\CommitLog-5-1431965988394.log
> [junit] FSWriteError in 
> build\test\cassandra\commitlog;0\CommitLog-5-1431965988394.log
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131)
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:148)
> [junit] at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:360)
> [junit] at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:166)
> [junit] at 
> org.apache.cassandra.db.commitlog.CommitLog.startUnsafe(CommitLog.java:416)
> [junit] at 
> org.apache.cassandra.db.commitlog.CommitLog.resetUnsafe(CommitLog.java:389)
> [junit] at 
> org.apache.cassandra.db.CommitLogTest.testDeleteIfNotDirty(CommitLogTest.java:178)
> [junit] Caused by: java.nio.file.AccessDeniedException: 
> build\test\cassandra\commitlog;0\CommitLog-5-1431965988394.log
> [junit] at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> [junit] at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> [junit] at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> [junit] at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
> [junit] at java.nio.file.Files.delete(Files.java:1126)
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
> {noformat}





[jira] [Commented] (CASSANDRA-9893) Fix upgrade tests from #9704 that are still failing

2015-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704060#comment-14704060
 ] 

Blake Eggleston commented on CASSANDRA-9893:


I've merged Tyler's 8099-backwards-compat branch with the current dtest master 
and run the tests in the upgrade_tests folder locally against the tip of 
cassandra-3.0 (CASSANDRA_DIR) and cassandra-2.1 (OLD_CASSANDRA_DIR) with 
Stupp's fix for CASSANDRA-8220 on OS X. I'm getting 7 failures, and 9 other 
tests that reproducibly throw various timeout-themed errors.

These are the tests that are failing:
cql_tests:TestCQL.collection_indexing_test
cql_tests:TestCQL.composite_index_collections_test
cql_tests:TestCQL.edge_2i_on_complex_pk_test
cql_tests:TestCQL.indexed_with_eq_test
cql_tests:TestCQL.large_count_test
cql_tests:TestCQL.static_columns_with_distinct_test
paging_test:TestPagingWithDeletions.test_failure_threshold_deletions

these are the ones with timeout/no host errors:
cql_tests:TestCQL.batch_test
cql_tests:TestCQL.noncomposite_static_cf_test
cql_tests:TestCQL.static_cf_test
cql_tests:TestCQL.test_v2_protocol_IN_with_tuples
paging_test:TestPagingDatasetChanges.test_row_TTL_expiry_during_paging

and these had Truncate related timeouts:
paging_test:TestPagingData.test_paging_a_single_wide_row
paging_test:TestPagingData.test_paging_across_multi_wide_rows
paging_test:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging
paging_test:TestPagingSize.test_undefined_page_size_default

with some more info about the failures here:
https://gist.github.com/bdeggleston/cb29e277468a5861d33e
https://gist.github.com/bdeggleston/ead704e3f62d55adf5f0

This is mainly a checklist for myself, since 8099-backwards-compat support on 
cassci isn't there yet. I'm going to focus on the failures first, then take a 
closer look at the errors.

> Fix upgrade tests from #9704 that are still failing
> ---
>
> Key: CASSANDRA-9893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9893
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 2
>
>
> The first thing to do on this ticket would be to commit Tyler's branch 
> (https://github.com/thobbs/cassandra-dtest/tree/8099-backwards-compat) to the 
> dtests so cassci runs them. I've had to make a few minor modifications to get 
> them running locally, so someone with access to cassci should do it and make 
> sure it runs properly.
> Once we have that, we should fix any test that isn't passing. I ran the 
> tests locally and had 8 failures. For 2 of them, it sounds plausible that 
> they'll get fixed by the patch for CASSANDRA-9775, though that's just a guess. 
>  The rest were tests that timed out without a particular error in the log, 
> and when running some of them individually, they passed.  So we'll have to 
> see if it's just my machine being overly slow when running them all.





[jira] [Assigned] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-8630:
---

Assignee: Stefania  (was: Benedict)

> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Stefania
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.) a lot 
> of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous byte-by-byte 
> reads and writes. 
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods 
> gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% 
> faster on uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)
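The byte-by-byte overhead described above is easy to picture: a default `DataInput.readLong` issues eight single-byte `read()` calls, each of which can bottom out in a syscall, whereas one bulk read fetches all eight bytes at once. An illustrative Python sketch of the two access patterns (not the patch itself):

```python
import io
import struct

buf = struct.pack(">q", 123456789)  # one big-endian 64-bit long

def read_long_bytewise(stream):
    # default DataInput-style decoding: eight one-byte reads, shifted together
    value = 0
    for _ in range(8):
        value = (value << 8) | stream.read(1)[0]
    return value

def read_long_bulk(stream):
    # the optimized pattern: one 8-byte read, decoded in place
    return struct.unpack(">q", stream.read(8))[0]

assert read_long_bytewise(io.BytesIO(buf)) == 123456789
assert read_long_bulk(io.BytesIO(buf)) == 123456789
```

Both decode the same value; the difference is one read call versus eight, which is where the reported speedup comes from when the calls cross into the OS.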





[jira] [Resolved] (CASSANDRA-10129) Windows utest 2.2: RecoveryManagerTest.testRecoverPITUnordered failure

2015-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-10129.
-
Resolution: Duplicate

> Windows utest 2.2: RecoveryManagerTest.testRecoverPITUnordered failure
> --
>
> Key: CASSANDRA-10129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10129
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Paulo Motta
>  Labels: Windows
>
> {noformat}
> FSWriteError in 
> build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:359)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:167)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.startUnsafe(CommitLog.java:441)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.resetUnsafe(CommitLog.java:414)
>   at 
> org.apache.cassandra.db.RecoveryManagerTest.testRecoverPITUnordered(RecoveryManagerTest.java:166)
> Caused by: java.nio.file.AccessDeniedException: 
> build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126)
> {noformat}
> Failure started with build #89 but reverting CASSANDRA-9749 doesn't resolve 
> it; I can reproduce the error locally even after revert/rebuild.
> I've bashed my head against trying to get the CommitLogTests to behave on 
> Windows enough times that I think we could use a new set of eyes on them.
> Assigning to Paulo and taking review.





[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704048#comment-14704048
 ] 

Ariel Weisberg commented on CASSANDRA-8630:
---

bq. You would think so. But take a look at its floorEntry implementation, which 
we would need to make use of. I'm terribly disappointed whenever I look beneath 
the hood of Guava.
It's pretty crazy. Technically it doesn't copy the entire thing if you follow 
it the entire way through. But yeah (jackie)



> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Benedict
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.) a lot 
> of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous byte-by-byte 
> reads and writes. 
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods 
> gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% 
> faster on uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)





[jira] [Commented] (CASSANDRA-10129) Windows utest 2.2: RecoveryManagerTest.testRecoverPITUnordered failure

2015-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704047#comment-14704047
 ] 

Paulo Motta commented on CASSANDRA-10129:
-

Tests are passing again, both locally and on 
[CI|http://cassci.datastax.com/view/win32/job/cassandra-2.2_utest_win32/lastCompletedBuild/testReport/org.apache.cassandra.db/RecoveryManagerTest/history/].
 Closing as a duplicate of 
[CASSANDRA-9414|https://issues.apache.org/jira/browse/CASSANDRA-9414].

> Windows utest 2.2: RecoveryManagerTest.testRecoverPITUnordered failure
> --
>
> Key: CASSANDRA-10129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10129
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Paulo Motta
>  Labels: Windows
>
> {noformat}
> FSWriteError in 
> build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:359)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:167)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.startUnsafe(CommitLog.java:441)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.resetUnsafe(CommitLog.java:414)
>   at 
> org.apache.cassandra.db.RecoveryManagerTest.testRecoverPITUnordered(RecoveryManagerTest.java:166)
> Caused by: java.nio.file.AccessDeniedException: 
> build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126)
> {noformat}
> Failure started with build #89 but reverting CASSANDRA-9749 doesn't resolve 
> it; I can reproduce the error locally even after revert/rebuild.
> I've bashed my head against trying to get the CommitLogTests to behave on 
> Windows enough times that I think we could use a new set of eyes on them.
> Assigning to Paulo and taking review.





[2/3] cassandra git commit: Use the same repairedAt timestamp within incremental repair session

2015-08-19 Thread yukim
Use the same repairedAt timestamp within incremental repair session

patch by prmg; reviewed by yukim for CASSANDRA-9111


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/13172bd9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/13172bd9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/13172bd9

Branch: refs/heads/trunk
Commit: 13172bd993f86d44245e7140898c03db1a47073a
Parents: 4cc2b67
Author: prmg 
Authored: Wed Aug 19 18:12:36 2015 -0500
Committer: Yuki Morishita 
Committed: Wed Aug 19 18:12:36 2015 -0500

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/repair/RepairMessageVerbHandler.java |  3 ++-
 .../apache/cassandra/repair/messages/PrepareMessage.java  | 10 --
 .../org/apache/cassandra/service/ActiveRepairService.java |  9 +
 .../db/compaction/LeveledCompactionStrategyTest.java  |  2 +-
 .../org/apache/cassandra/repair/LocalSyncTaskTest.java|  2 +-
 6 files changed, 18 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0d17235..cea8c73 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -17,6 +17,7 @@
  * Replace usage of Adler32 with CRC32 (CASSANDRA-8684)
  * Fix migration to new format from 2.1 SSTable (CASSANDRA-10006)
  * SequentialWriter should extend BufferedDataOutputStreamPlus (CASSANDRA-9500)
+ * Use the same repairedAt timestamp within incremental repair session 
(CASSANDRA-9111)
 Merged from 2.2:
  * Fix histogram overflow exception (CASSANDRA-9973)
  * Route gossip messages over dedicated socket (CASSANDRA-9237)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java 
b/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
index 28a3bf5..942d902 100644
--- a/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
+++ b/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
@@ -72,7 +72,8 @@ public class RepairMessageVerbHandler implements 
IVerbHandler
 
ActiveRepairService.instance.registerParentRepairSession(prepareMessage.parentRepairSession,
 columnFamilyStores,
 prepareMessage.ranges,
-prepareMessage.isIncremental);
+prepareMessage.isIncremental,
+prepareMessage.timestamp);
 MessagingService.instance().sendReply(new 
MessageOut(MessagingService.Verb.INTERNAL_RESPONSE), id, message.from);
 break;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
--
diff --git a/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java 
b/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
index cd1b99d..0cd73db 100644
--- a/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
+++ b/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
@@ -40,14 +40,16 @@ public class PrepareMessage extends RepairMessage
 
 public final UUID parentRepairSession;
 public final boolean isIncremental;
+public final long timestamp;
 
-public PrepareMessage(UUID parentRepairSession, List cfIds, 
Collection> ranges, boolean isIncremental)
+public PrepareMessage(UUID parentRepairSession, List cfIds, 
Collection> ranges, boolean isIncremental, long timestamp)
 {
 super(Type.PREPARE_MESSAGE, null);
 this.parentRepairSession = parentRepairSession;
 this.cfIds = cfIds;
 this.ranges = ranges;
 this.isIncremental = isIncremental;
+this.timestamp = timestamp;
 }
 
 public static class PrepareMessageSerializer implements 
MessageSerializer
@@ -65,6 +67,7 @@ public class PrepareMessage extends RepairMessage
 Range.tokenSerializer.serialize(r, out, version);
 }
 out.writeBoolean(message.isIncremental);
+out.writeLong(message.timestamp);
 }
 
 public PrepareMessage deserialize(DataInputPlus in, int version) 
throws IOException
@@ -79,7 +82,8 @@ public class PrepareMessage extends RepairMessage
 for (int i = 0; i < rangeCount; i++)
 ranges.add((Range) 
Range.tokenSerializer.deserialize(in, MessagingService.globalPartitioner(), 
version));
 boo
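The idea of the patch: the coordinator picks one repairedAt timestamp and ships it in the prepare message, so every participant in the incremental repair session stamps its sstables identically. A minimal Python sketch of the extended serialization round-trip, with an illustrative field layout rather than Cassandra's actual wire format:

```python
import struct

def serialize_prepare(is_incremental, repaired_at):
    # mirrors out.writeBoolean(message.isIncremental) followed by the
    # newly added out.writeLong(message.timestamp) in the diff above
    return struct.pack(">?q", is_incremental, repaired_at)

def deserialize_prepare(buf):
    is_incremental, repaired_at = struct.unpack(">?q", buf)
    return is_incremental, repaired_at

wire = serialize_prepare(True, 1440028356000)
assert deserialize_prepare(wire) == (True, 1440028356000)
```

Because the timestamp travels inside the prepare message, no participant derives its own value, which is what keeps repairedAt consistent across the session.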

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-19 Thread yukim
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/110e803e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/110e803e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/110e803e

Branch: refs/heads/trunk
Commit: 110e803edd877de7ce3bf457430b197e0d214b83
Parents: 6cad04b 13172bd
Author: Yuki Morishita 
Authored: Wed Aug 19 19:02:21 2015 -0500
Committer: Yuki Morishita 
Committed: Wed Aug 19 19:02:21 2015 -0500

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/repair/RepairMessageVerbHandler.java |  3 ++-
 .../apache/cassandra/repair/messages/PrepareMessage.java  | 10 --
 .../org/apache/cassandra/service/ActiveRepairService.java |  9 +
 .../db/compaction/LeveledCompactionStrategyTest.java  |  2 +-
 .../org/apache/cassandra/repair/LocalSyncTaskTest.java|  2 +-
 6 files changed, 18 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/110e803e/CHANGES.txt
--



[1/3] cassandra git commit: Use the same repairedAt timestamp within incremental repair session

2015-08-19 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 4cc2b67df -> 13172bd99
  refs/heads/trunk 6cad04b22 -> 110e803ed


Use the same repairedAt timestamp within incremental repair session

patch by prmg; reviewed by yukim for CASSANDRA-9111


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/13172bd9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/13172bd9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/13172bd9

Branch: refs/heads/cassandra-3.0
Commit: 13172bd993f86d44245e7140898c03db1a47073a
Parents: 4cc2b67
Author: prmg 
Authored: Wed Aug 19 18:12:36 2015 -0500
Committer: Yuki Morishita 
Committed: Wed Aug 19 18:12:36 2015 -0500

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/repair/RepairMessageVerbHandler.java |  3 ++-
 .../apache/cassandra/repair/messages/PrepareMessage.java  | 10 --
 .../org/apache/cassandra/service/ActiveRepairService.java |  9 +
 .../db/compaction/LeveledCompactionStrategyTest.java  |  2 +-
 .../org/apache/cassandra/repair/LocalSyncTaskTest.java|  2 +-
 6 files changed, 18 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0d17235..cea8c73 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -17,6 +17,7 @@
  * Replace usage of Adler32 with CRC32 (CASSANDRA-8684)
  * Fix migration to new format from 2.1 SSTable (CASSANDRA-10006)
  * SequentialWriter should extend BufferedDataOutputStreamPlus (CASSANDRA-9500)
+ * Use the same repairedAt timestamp within incremental repair session 
(CASSANDRA-9111)
 Merged from 2.2:
  * Fix histogram overflow exception (CASSANDRA-9973)
  * Route gossip messages over dedicated socket (CASSANDRA-9237)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java b/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
index 28a3bf5..942d902 100644
--- a/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
+++ b/src/java/org/apache/cassandra/repair/RepairMessageVerbHandler.java
@@ -72,7 +72,8 @@ public class RepairMessageVerbHandler implements IVerbHandler<RepairMessage>
 
                 ActiveRepairService.instance.registerParentRepairSession(prepareMessage.parentRepairSession,
                                                                          columnFamilyStores,
                                                                          prepareMessage.ranges,
-                                                                         prepareMessage.isIncremental);
+                                                                         prepareMessage.isIncremental,
+                                                                         prepareMessage.timestamp);
                 MessagingService.instance().sendReply(new MessageOut(MessagingService.Verb.INTERNAL_RESPONSE), id, message.from);
                 break;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/13172bd9/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
--
diff --git a/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java b/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
index cd1b99d..0cd73db 100644
--- a/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
+++ b/src/java/org/apache/cassandra/repair/messages/PrepareMessage.java
@@ -40,14 +40,16 @@ public class PrepareMessage extends RepairMessage
 
     public final UUID parentRepairSession;
     public final boolean isIncremental;
+    public final long timestamp;
 
-    public PrepareMessage(UUID parentRepairSession, List<UUID> cfIds, Collection<Range<Token>> ranges, boolean isIncremental)
+    public PrepareMessage(UUID parentRepairSession, List<UUID> cfIds, Collection<Range<Token>> ranges, boolean isIncremental, long timestamp)
     {
         super(Type.PREPARE_MESSAGE, null);
         this.parentRepairSession = parentRepairSession;
         this.cfIds = cfIds;
         this.ranges = ranges;
         this.isIncremental = isIncremental;
+        this.timestamp = timestamp;
     }
 
     public static class PrepareMessageSerializer implements MessageSerializer<PrepareMessage>
@@ -65,6 +67,7 @@ public class PrepareMessage extends RepairMessage
                 Range.tokenSerializer.serialize(r, out, version);
             }
             out.writeBoolean(message.isIncremental);
+            out.writeLong(message.timestamp);
         }
 
         public PrepareMessage deserialize(DataInputPlus in, int version) throws IOException
@@ -79,7 +82,8 @@ public class PrepareMessage extends RepairMessage
             for (int i = 0; i < rangeCount; i++)
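The hunk above appends the new {{timestamp}} long as the last field written by {{serialize()}}, which means the (truncated) {{deserialize()}} side must read it last as well. A minimal sketch of that write/read symmetry, using hypothetical stand-in types rather than the real Cassandra classes:

```java
import java.io.*;
import java.util.UUID;

// Sketch only: illustrates why a field appended at the end of serialize()
// must be read in the same position by deserialize(). The class and field
// names mirror PrepareMessage but this is not the real implementation.
public class PrepareMessageSketch {
    final UUID parentRepairSession;
    final boolean isIncremental;
    final long timestamp;

    PrepareMessageSketch(UUID session, boolean isIncremental, long timestamp) {
        this.parentRepairSession = session;
        this.isIncremental = isIncremental;
        this.timestamp = timestamp;
    }

    void serialize(DataOutput out) throws IOException {
        out.writeLong(parentRepairSession.getMostSignificantBits());
        out.writeLong(parentRepairSession.getLeastSignificantBits());
        out.writeBoolean(isIncremental);
        out.writeLong(timestamp);          // new field, appended last
    }

    static PrepareMessageSketch deserialize(DataInput in) throws IOException {
        UUID session = new UUID(in.readLong(), in.readLong());
        boolean incremental = in.readBoolean();
        long ts = in.readLong();           // mirrors the write order exactly
        return new PrepareMessageSketch(session, incremental, ts);
    }

    public static void main(String[] args) throws IOException {
        PrepareMessageSketch m = new PrepareMessageSketch(UUID.randomUUID(), true, 1439999999L);
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        m.serialize(new DataOutputStream(buf));
        PrepareMessageSketch r =
            deserialize(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(r.timestamp == m.timestamp
                && r.isIncremental == m.isIncremental
                && r.parentRepairSession.equals(m.parentRepairSession));
    }
}
```

Round-tripping a message through a byte buffer prints {{true}}; mismatched write/read order would corrupt every field after the divergence point.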

[jira] [Commented] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704001#comment-14704001
 ] 

Aleksey Yeschenko commented on CASSANDRA-10136:
---

This is the same issue previously hit in CASSANDRA-10123: 3.0 fails to read 2.2-created hints sstables. I suspect it is not specific to hints; other {{COMPACT STORAGE}} tables with multiple clustering columns may be affected as well.
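The stack traces below all bottom out in {{Serializers.clusteringPrefixSerializer}} throwing {{UnsupportedOperationException}}. As a rough illustration (hypothetical names and layout enum, not the real {{Serializers}} code), the failure pattern is a serializer lookup whose default branch rejects an on-disk layout the new version has no handler for:

```java
// Illustrative sketch of the failure mode: a serializer factory with no
// branch for a legacy sstable layout throws UnsupportedOperationException,
// which aborts the compaction that the hints migration waits on at startup.
// Layout names here are invented for the example.
public class ClusteringSerializerSketch {
    enum Layout { MODERN, LEGACY_COMPACT_MULTI_CLUSTERING }

    static String serializerFor(Layout layout) {
        switch (layout) {
            case MODERN:
                return "modern-clustering-serializer";
            default:
                // a 3.0-era reader with no handler for a 2.2 layout
                throw new UnsupportedOperationException(
                        "unsupported legacy layout: " + layout);
        }
    }

    public static void main(String[] args) {
        System.out.println(serializerFor(Layout.MODERN));
        try {
            serializerFor(Layout.LEGACY_COMPACT_MULTI_CLUSTERING);
        } catch (UnsupportedOperationException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

In the real traces the exception propagates out of the compaction task as an {{ExecutionException}}, which is why startup fails in {{LegacyHintsMigrator.forceCompaction}} rather than at the serializer itself.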

> startup error after upgrade to 3.0
> --
>
> Key: CASSANDRA-10136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10136
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
>
> Encountering this error after a node is upgraded to 3.0 HEAD.
> This is a rolling upgrade test, where a second node (of three) has been 
> upgraded to 3.0.
> {noformat}
> ERROR [main] 2015-08-19 17:30:16,153 CassandraDaemon.java:635 - Exception 
> encountered during startup
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:119)
>  ~[main/:na]
> at 
> org.apache.cassandra.hints.LegacyHintsMigrator.compactLegacyHints(LegacyHintsMigrator.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrate(LegacyHintsMigrator.java:92)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:281) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:622) 
> [main/:na]
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> ~[na:1.8.0_45]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:115)
>  ~[main/:na]
> ... 5 common frames omitted
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:150)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:286)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:439)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$5.hasNext(UnfilteredPartitionIterators.java:234)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.prepareNext(WrappingUnfilteredPartitionIterator.java:71)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.hasNext(WrappingUnfilteredPartitionIterator.java:55)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:66)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:212)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:179)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:80)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.runMayThrow(CompactionManager.java:638)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 

[jira] [Updated] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10136:
--
Assignee: Sylvain Lebresne

> startup error after upgrade to 3.0
> --
>
> Key: CASSANDRA-10136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10136
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>

[jira] [Updated] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10136:
--
Assignee: (was: Aleksey Yeschenko)

> startup error after upgrade to 3.0
> --
>
> Key: CASSANDRA-10136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10136
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
> Fix For: 3.0 beta 2
>

[jira] [Assigned] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-10136:
-

Assignee: Aleksey Yeschenko

> startup error after upgrade to 3.0
> --
>
> Key: CASSANDRA-10136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10136
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Aleksey Yeschenko
> Fix For: 3.0 beta 2
>

[jira] [Updated] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-10136:
---
Description: 
Encountering this error after a node is upgraded to 3.0 HEAD.

This is a rolling upgrade test, where a second node (of three) has been 
upgraded to 3.0.


[jira] [Resolved] (CASSANDRA-10123) UnsupportedOperationException after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-10123.

Resolution: Cannot Reproduce

> UnsupportedOperationException after upgrade to 3.0
> --
>
> Key: CASSANDRA-10123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10123
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrading from 2.2 HEAD to 3.0 HEAD triggers these exceptions:
> {noformat}
> ERROR [HintedHandoff:1] 2015-08-18 12:34:02,193 CassandraDaemon.java:191 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.HintedHandOffManager.compact(HintedHandOffManager.java:281)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:535)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> ~[na:1.8.0_45]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.db.HintedHandOffManager.compact(HintedHandOffManager.java:277)
>  ~[main/:na]
> ... 4 common frames omitted
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:150)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:286)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:439)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$5.hasNext(UnfilteredPartitionIterators.java:234)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.prepareNext(WrappingUnfilteredPartitionIterator.java:71)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.hasNext(WrappingUnfilteredPartitionIterator.java:55)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:66)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:212)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:179)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:80)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.runMayThrow(CompactionManager.java:638)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
> ... 3 common frames omitted
> {noformat}
> This occurs during a rolling upgrade where one node is on 3.0 and 2 nodes 
> still remain on 2.2. Nodes are running java8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-10122.

Resolution: Cannot Reproduce

> AssertionError after upgrade to 3.0
> ---
>
> Key: CASSANDRA-10122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10122
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrade tests are encountering this exception after upgrade from 2.2 HEAD to 
> 3.0 HEAD:
> {noformat}
> ERROR [SharedPool-Worker-4] 2015-08-18 12:33:57,858 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xa5ba2c7a, 
> /127.0.0.1:55048 => /127.0.0.1:9042]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {noformat}
>

[jira] [Commented] (CASSANDRA-10123) UnsupportedOperationException after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703981#comment-14703981
 ] 

Russ Hatch commented on CASSANDRA-10123:


This seemed to be happening in conjunction with CASSANDRA-10122, which doesn't 
seem to be occurring anymore. Resolving for now; will reopen if needed.

> UnsupportedOperationException after upgrade to 3.0
> --
>
> Key: CASSANDRA-10123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10123
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrading from 2.2 HEAD to 3.0 HEAD triggers these exceptions:
> {noformat}
> ERROR [HintedHandoff:1] 2015-08-18 12:34:02,193 CassandraDaemon.java:191 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.HintedHandOffManager.compact(HintedHandOffManager.java:281)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:535)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.UnsupportedOperationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> ~[na:1.8.0_45]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.db.HintedHandOffManager.compact(HintedHandOffManager.java:277)
>  ~[main/:na]
> ... 4 common frames omitted
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:150)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:286)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:439)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$5.hasNext(UnfilteredPartitionIterators.java:234)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.prepareNext(WrappingUnfilteredPartitionIterator.java:71)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.hasNext(WrappingUnfilteredPartitionIterator.java:55)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:66)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:212)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:179)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:80)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.runMayThrow(CompactionManager.java:638)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
> ... 3 common frames omitted
> {noformat}
> This occurs during a r

[jira] [Commented] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703978#comment-14703978
 ] 

Russ Hatch commented on CASSANDRA-10122:


[~slebresne] Sorry for the delay. I'm not seeing this issue anymore, so I will 
resolve for now. If I can figure out what is going on, I will reopen.

> AssertionError after upgrade to 3.0
> ---
>
> Key: CASSANDRA-10122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10122
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrade tests are encountering this exception after upgrade from 2.2 HEAD to 
> 3.0 HEAD:
> {noformat}
> ERROR [SharedPool-Worker-4] 2015-08-18 12:33:57,858 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xa5ba2c7a, 
> /127.0.0.1:55048 => /127.0.0.1:9042]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]

[jira] [Updated] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-10136:
---
Description: 
Encountering this error after a node is upgraded to 3.0.

This is a rolling upgrade test, where a second node (of three) has been 
upgraded to 3.0.

{noformat}
ERROR [main] 2015-08-19 17:30:16,153 CassandraDaemon.java:635 - Exception 
encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.UnsupportedOperationException
at 
org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:119)
 ~[main/:na]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.compactLegacyHints(LegacyHintsMigrator.java:108)
 ~[main/:na]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.migrate(LegacyHintsMigrator.java:92)
 ~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:281) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:622) 
[main/:na]
Caused by: java.util.concurrent.ExecutionException: 
java.lang.UnsupportedOperationException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[na:1.8.0_45]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:115)
 ~[main/:na]
... 5 common frames omitted
Caused by: java.lang.UnsupportedOperationException: null
at 
org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)
 ~[main/:na]
at 
org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:150)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:286)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
 ~[main/:na]
at 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:439)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$5.hasNext(UnfilteredPartitionIterators.java:234)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.prepareNext(WrappingUnfilteredPartitionIterator.java:71)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.hasNext(WrappingUnfilteredPartitionIterator.java:55)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:66)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:212)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:179)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:80)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionManager$9.runMayThrow(CompactionManager.java:638)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
ERROR [CompactionExecutor:1] 2015-08-19 17:30:16,153 CassandraDaemon.java:192 - 
Exception in thread Thread[CompactionExecutor:1,1,main]
java.lang.UnsupportedOperationException: null
at 
org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)

[jira] [Created] (CASSANDRA-10136) startup error after upgrade to 3.0

2015-08-19 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-10136:
--

 Summary: startup error after upgrade to 3.0
 Key: CASSANDRA-10136
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10136
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
 Fix For: 3.0 beta 2


Encountering this error after a node is upgraded to 3.0.

This is a rolling upgrade test, where a second node (of three) has been 
upgraded to 3.0.

{noformat}
ERROR [main] 2015-08-19 17:30:16,153 CassandraDaemon.java:635 - Exception 
encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.UnsupportedOperationException
at 
org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:119)
 ~[main/:na]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.compactLegacyHints(LegacyHintsMigrator.java:108)
 ~[main/:na]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.migrate(LegacyHintsMigrator.java:92)
 ~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:281) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:622) 
[main/:na]
Caused by: java.util.concurrent.ExecutionException: 
java.lang.UnsupportedOperationException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
~[na:1.8.0_45]
at 
org.apache.cassandra.hints.LegacyHintsMigrator.forceCompaction(LegacyHintsMigrator.java:115)
 ~[main/:na]
... 5 common frames omitted
Caused by: java.lang.UnsupportedOperationException: null
at 
org.apache.cassandra.db.Serializers.clusteringPrefixSerializer(Serializers.java:52)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:171)
 ~[main/:na]
at 
org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:150)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:286)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:260)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:240)
 ~[main/:na]
at 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:439)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$5.hasNext(UnfilteredPartitionIterators.java:234)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.prepareNext(WrappingUnfilteredPartitionIterator.java:71)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.WrappingUnfilteredPartitionIterator.hasNext(WrappingUnfilteredPartitionIterator.java:55)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:66)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:212)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:179)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:80)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionManager$9.runMayThrow(CompactionManager.java:638)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
ERROR [CompactionExecutor:1] 2015-08-19 17:30:16,153 CassandraDaemon.java:192 - 
Exception in thread Thread[CompactionExecutor:1,1,main]
java.lang.UnsupportedOperationException: null
at 
org.apache.cassandra.db.Serializers.clusteringPrefixSerial

[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703958#comment-14703958
 ] 

Benedict commented on CASSANDRA-8630:
-

bq. I don't see how an array of pairs can be less indirection than a map, or 
result in less boxing unless there are parallel arrays

Right, which is the standard approach for this kind of thing in Java.

bq. There might be something to not remapping entire files every 50 megabytes 
as part of early opening, but it's definitely better as a separate task. It's 
also not clear whether it's going to be faster or just feel better.

We've had a few weird kernel-level memory interactions reported, and I cannot 
shake the feeling this was related. We never tracked down the cause, but also 
did not have follow-up, so it's also quite possible it was an environmental 
issue. 

However, either way, if we're rewriting it right now (which to some extent we 
have to if we're eliminating the current ugliness of multiple readers, 
"potential boundaries" etc - cleanliness scope creep, I'll admit, but when 
refactoring a bunch of classes I don't think we should miss an opportunity to 
remove dead and complicating concepts, such as the need for Iterators of 
multiple FDI, that only makes sense for MFDI) we may as well do it correctly. 
If it's noticeably more work, then sure let's leave it. But if we're changing 
the behaviour, I don't think it is worth artificially reimplementing it the 
obviously worse way (regardless of how much worse).

bq. ImmutableSortedMap (or is it navigable?) might split the difference between 
the two approaches.

You would think so. But take a look at its {{floorEntry}} implementation, which 
we would need to make use of. I'm terribly disappointed whenever I look beneath 
the hood of Guava.
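The parallel-arrays approach discussed above, and the floorEntry-style lookup it has to support, can be sketched roughly as follows. {{FloorIndex}} and its fields are hypothetical names for illustration, not code from the patch; the point is that two sorted primitive arrays plus Arrays.binarySearch give a floor lookup with no per-entry boxing or node indirection.

```java
import java.util.Arrays;

/** Sketch of a boxing-free floorEntry-style lookup over parallel sorted arrays. */
public final class FloorIndex {
    private final long[] keys;   // sorted ascending
    private final long[] values; // values[i] belongs to keys[i]

    FloorIndex(long[] keys, long[] values) {
        this.keys = keys;
        this.values = values;
    }

    /** Returns the value for the greatest key <= target, or -1 if none. */
    long floorValue(long target) {
        int i = Arrays.binarySearch(keys, target);
        if (i < 0)
            i = -i - 2; // insertion point minus one = greatest smaller key
        return i < 0 ? -1 : values[i];
    }

    public static void main(String[] args) {
        FloorIndex idx = new FloorIndex(new long[] {0, 100, 200}, new long[] {10, 11, 12});
        System.out.println(idx.floorValue(150)); // greatest key <= 150 is 100 -> prints 11
        System.out.println(idx.floorValue(-5));  // no key <= -5 -> prints -1
    }
}
```

Compared with TreeMap.floorEntry or Guava's ImmutableSortedMap, the lookup touches two contiguous arrays and allocates nothing.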

bq. In SSTableReader you are adding and removing fields from files. What are 
the cross version compatibility issues with that?

This has been discussed already, I think?

> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Benedict
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous byte-by-byte 
> reads and writes.
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods either way 
> gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster 
> on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction.
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)
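As a rough illustration of the byte-by-byte cost described in the report above (the helper names here are illustrative, not the patch's actual methods): the first method mimics what a naive readLong amounts to, eight single-byte reads, while the second decodes from an already-filled buffer in one step.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class ReadLongDemo {
    // What a naive readLong does: eight single-byte read() calls, each a
    // virtual call (and, on an unbuffered stream, potentially a syscall).
    static long readLongByteByByte(InputStream in) throws IOException {
        long v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | (in.read() & 0xFF);
        return v;
    }

    // The patch's idea, roughly: fill a buffer once, then decode in memory.
    static long readLongBulk(byte[] src, int offset) {
        return ByteBuffer.wrap(src, offset, 8).getLong(); // big-endian, like DataInput
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = ByteBuffer.allocate(8).putLong(0x0123456789ABCDEFL).array();
        long a = readLongByteByByte(new ByteArrayInputStream(bytes));
        long b = readLongBulk(bytes, 0);
        System.out.println(a == b); // prints true
    }
}
```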



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-8630:
---

Assignee: Benedict  (was: Stefania)

> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Benedict
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous byte-by-byte 
> reads and writes.
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods either way 
> gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster 
> on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction.
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. non-compaction tasks.)





[jira] [Commented] (CASSANDRA-10132) sstablerepairedset throws exception while loading metadata

2015-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703915#comment-14703915
 ] 

Yuki Morishita commented on CASSANDRA-10132:


Since BufferPool accesses DatabaseDescriptor, we need to set 
{{Config.setClientMode(true)}} in offline tools.
patch here: https://github.com/yukim/cassandra/tree/10132

incremental_repair_test.sstable_repairedset_test can still fail due to 
CASSANDRA-6230 though.

> sstablerepairedset throws exception while loading metadata
> --
>
> Key: CASSANDRA-10132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10132
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
> Fix For: 3.0.0 rc1
>
>
> {{sstablerepairedset}} displays exception trying to load schema through 
> DatabaseDescriptor.
> {code}
> $ ./tools/bin/sstablerepairedset --really-set --is-repaired 
> ~/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Data.db
> 14:42:36.714 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Mutating 
> /home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Statistics.db
>  to repairedAt time 1440013248000
> 14:42:36.721 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for 
> /home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big
> Exception in thread "main" java.lang.ExceptionInInitializerError
> at 
> org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:110)
> at 
> org.apache.cassandra.utils.memory.BufferPool.(BufferPool.java:51)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58)
> at 
> org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.(RandomAccessReader.java:89)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
> at 
> org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
> URI in variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the 
> file with [file:///] for local files and [file:///] for remote files. 
> If you are executing this from an external tool, it needs to set 
> Config.setClientMode(true) to avoid loading configuration.
> at 
> org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:78)
> at 
> org.apache.cassandra.config.YamlConfigurationLoader.(YamlConfigurationLoader.java:92)
> ... 10 more
> {code}
> MetadataSerializer uses RandomAccessReader, which allocates its buffer through 
> BufferPool. BufferPool gets its settings from DatabaseDescriptor, which won't 
> work in an offline tool.





[jira] [Assigned] (CASSANDRA-10108) Windows dtest 3.0: sstablesplit_test.py:TestSSTableSplit.split_test fails

2015-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-10108:
---

Assignee: Paulo Motta

> Windows dtest 3.0: sstablesplit_test.py:TestSSTableSplit.split_test fails
> -
>
> Key: CASSANDRA-10108
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10108
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Paulo Motta
>  Labels: Windows
> Fix For: 3.0.x
>
>
> Locally:
> {noformat}
> -- ma-28-big-Data.db-
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/supercsv/prefs/CsvPreference$Builder
> at org.apache.cassandra.config.Config.(Config.java:240)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:105)
> at 
> org.apache.cassandra.service.StorageService.getPartitioner(StorageService.java:220)
> at 
> org.apache.cassandra.service.StorageService.(StorageService.java:206)
> at 
> org.apache.cassandra.service.StorageService.(StorageService.java:211)
> at 
> org.apache.cassandra.schema.LegacySchemaTables.getSchemaPartitionsForTable(LegacySchemaTables.java:295)
> at 
> org.apache.cassandra.schema.LegacySchemaTables.readSchemaFromSystemTables(LegacySchemaTables.java:210)
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:108)
> at 
> org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:58)
> Caused by: java.lang.ClassNotFoundException: 
> org.supercsv.prefs.CsvPreference$Builder
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 9 more
> Number of sstables after split: 1. expected 21.0
> {noformat}
> on CI:
> {noformat}
> 21.0 not less than or equal to 2
> and
> [node1 ERROR] Exception calling "CompareTo" with "1" argument(s): "Object 
> must be of type 
> String."
> At D:\temp\dtest-i3xwjx\test\node1\conf\cassandra-env.ps1:336 char:9
> + if ($env:JVM_VERSION.CompareTo("1.8.0_40" -eq -1))
> + ~
> + CategoryInfo  : NotSpecified: (:) [], MethodInvocationException
> + FullyQualifiedErrorId : ArgumentException
> -- ma-28-big-Data.db-
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/lastCompletedBuild/testReport/sstablesplit_test/TestSSTableSplit/split_test/history/]
> Env: both CI and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703597#comment-14703597
 ] 

Ariel Weisberg edited comment on CASSANDRA-8630 at 8/19/15 10:52 PM:
-

Yes, you should rebase to 3.0. We port changes forward (I learned this recently 
myself).

* MemoryInputStream.available() can overflow in the addition 
buffer.remaining() + Ints.saturatedCast(memRemaining()). Do the addition first 
and then the saturating cast.
* Why does RandomAccessReader accept a builder and a parameter for initializing 
the buffer? Seems like we lose the bonus of a builder allowing a constant 
signature.
* A nit in initializeBuffer: it does firstSegment.value().slice(), which 
implies you want a subset of the buffer; duplicate() makes it obvious there is 
no such concern.
* I think there is a place for unit tests stressing the 2 gigabyte boundaries. 
That means testing available()/length()/remaining() style methods as well as 
being able to read and seek with instances of these things that are larger than 
2g. Doing it with the actual file-based ones seems bad, but maybe you could 
intercept those to work with memory so they run fast, or ingloriously mock 
their inputs.
* For rate limiting, is your current solution to consume buffer-size bytes from 
the limiter at a time for both mmap reads and standard reads? And you 
accomplish this by slicing the buffer and then updating the position? I don't 
see you setting the limit before slicing.
* I thought NIODataInputStream had methods for reading into ByteBuffers, but I 
was wrong. It's kind of thorny to add one to RebufferingInputStream so I think 
you did the right thing putting it in FileSegmentedInputStream even though it's 
an odd concern to have in that class. Unless you have a better idea.
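The saturating-add point above can be sketched as follows. This is a minimal 
standalone illustration, not the patch itself: Guava's Ints.saturatedCast is 
replaced by a hand-rolled equivalent so the snippet is self-contained, and the 
method names are assumptions mirroring the discussion.

```java
public class SaturatedAvailable {
    // Equivalent of Guava's Ints.saturatedCast: clamp a long into int range.
    static int saturatedCast(long value) {
        if (value > Integer.MAX_VALUE) return Integer.MAX_VALUE;
        if (value < Integer.MIN_VALUE) return Integer.MIN_VALUE;
        return (int) value;
    }

    // Do the addition in long arithmetic first, then saturate once, so the
    // sum of buffer.remaining() and memRemaining() cannot overflow int.
    static int available(int bufferRemaining, long memRemaining) {
        return saturatedCast(bufferRemaining + memRemaining);
    }

    public static void main(String[] args) {
        // Near the 2 GB boundary the naive int-typed sum would go negative;
        // here it clamps to Integer.MAX_VALUE instead.
        System.out.println(available(Integer.MAX_VALUE, 1024L));
    }
}
```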

Stefania, is your rework of segment handling still in progress? IOW, should I 
hold off until you are done?

[~benedict] In what scenario would we not want to map the file with as few 2 
gigabyte buffers as possible?

-I am still digesting the segments/boundaries/mapping issues.-
Sort of digested it.

* In SSTableReader you are adding and removing fields from files. What are the 
cross-version compatibility issues with that?
* There might be something to not remapping entire files every 50 megabytes as 
part of early opening, but it's definitely better as a separate task. It's also 
not clear whether it's going to be faster or just feel better.
* I don't see how an array of pairs can be less indirection than a map, or 
result in less boxing, unless there are parallel arrays: one with primitive 
offsets and another with the ByteBuffers.
* ImmutableSortedMap (or is it navigable?) might split the difference between 
the two approaches.
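A hypothetical sketch of the parallel-arrays idea: a primitive long[] of 
offsets searched directly with binary search, with the ByteBuffers alongside, 
so lookups involve no boxing of the key. The class and method names are 
illustrative, not from the patch.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class SegmentIndex {
    // Parallel arrays: offsets[i] is the file position where segments[i] begins.
    final long[] offsets;
    final ByteBuffer[] segments;

    public SegmentIndex(long[] offsets, ByteBuffer[] segments) {
        this.offsets = offsets;
        this.segments = segments;
    }

    // Locate the segment covering a file position with a primitive binary
    // search -- no boxing of the long key, unlike a Map<Long, ByteBuffer>.
    public int segmentIndexFor(long position) {
        int idx = Arrays.binarySearch(offsets, position);
        return idx >= 0 ? idx : -idx - 2; // insertion point - 1 = preceding segment
    }

    public ByteBuffer segmentFor(long position) {
        return segments[segmentIndexFor(position)];
    }
}
```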






was (Author: aweisberg):
Yes you should rebase to 3.0. We port changes forward (I learned this recently 
myself).

* MemoryInputStream.available() can wrap the addition between 
buffer.remaining() + Ints.saturatedCast(memRemaining()). Do the addition and 
then the saturating cast.
* Why does RandomAccessReader accept a builder and a parameter for initializing 
the buffer? Seems like we lose the bonus of a builder a builder allowing a 
constant signature.
* A nit in initializeBuffer, it does firstSegment.value().slice() which implies 
you want a subset of the buffer? duplicate() makes it obvious there is no such 
concern.
* I think there is a place for unit tests stressing the 2 gigabyte boundaries. 
That means testing available()/length()/remaining() style methods as well as 
being able to read and seek with instances of these things that are larger than 
2g. Doing it with the actual file based ones seems bad, but maybe you could 
intercept those to work with memory so they run fast or ingloriously mock their 
inputs.
* For rate limiting is your current solution to consume buffer size bytes from 
the limiter at a time for both mmap reads and standard? And you accomplish this 
by slicing the buffer then updating the position? I don't see you setting the 
limit before slicing?
* I thought NIODataInputStream had methods for reading into ByteBuffers, but 
was wrong. It's kind of thorny to add one to RebufferingInputStream so I think 
you did the right thing putting it in FileSegmentedInputStream even though it's 
an odd concern to have in that class. Unless you have a better idea.

Stefania is your rework of segment handling still in progress? IOW should I 
hold off until you are done.

[~benedict] In what scenario would we not want to map the file with as few 2 
gigabyte buffers as possible?

I am still digesting the segments/boundaries/mapping issues.




> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Proj

[jira] [Commented] (CASSANDRA-10135) Quoting changed for username in GRANT statement

2015-08-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703854#comment-14703854
 ] 

Aleksey Yeschenko commented on CASSANDRA-10135:
---

cc [~beobal]

> Quoting changed for username in GRANT statement
> ---
>
> Key: CASSANDRA-10135
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10135
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: cassandra 2.2.0
>Reporter: Bernhard K. Weisshuhn
>Priority: Minor
>
> We may have uncovered an undocumented API change between Cassandra 2.1.x and 
> 2.2.0.
> When granting permissions to a username containing special characters, 2.1.x 
> needed single quotes around the username and refused doubles.
> 2.2.0 needs doubles and refuses singles.
> Working example for 2.1.x:
> {code:sql}
> GRANT SELECT ON ALL KEYSPACES TO 
> 'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
> {code}
> Enclosing the username in double quotes instead of singles fails with the 
> following error message:
> {quote}
> cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
> "vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797";
> SyntaxException:  message="line 1:33 mismatched input 
> 'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797' 
> expecting set null (...SELECT ON ALL KEYSPACES TO 
> ["vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-144001779]...)">
> {quote}
> Singles fail in 2.2.0:
> {quote}
> cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
> 'vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308';
> SyntaxException:  message="line 1:33 no viable alternative at input 
> 'vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308' 
> (...SELECT ON ALL KEYSPACES TO 
> ['vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-144001930]...)">
> {quote}
> ... whereas double quotes succeed:
> {code:sql}
> GRANT SELECT ON ALL KEYSPACES TO 
> "vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308";
> {code}
> If this is a deliberate change, I don't think it is reflected in the 
> documentation. I am tempted to consider this a bug introduced with the role 
> additions.
> Motivation for this report: 
> https://github.com/hashicorp/vault/pull/545#issuecomment-132634630





[jira] [Commented] (CASSANDRA-10135) Quoting changed for username in GRANT statement

2015-08-19 Thread Bernhard K. Weisshuhn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703788#comment-14703788
 ] 

Bernhard K. Weisshuhn commented on CASSANDRA-10135:
---

Sorry, I had to edit the description because of a copy-and-paste failure.






[jira] [Updated] (CASSANDRA-10135) Quoting changed for username in GRANT statement

2015-08-19 Thread Bernhard K. Weisshuhn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernhard K. Weisshuhn updated CASSANDRA-10135:
--
Description: 
We may have uncovered an undocumented API change between Cassandra 2.1.x and 
2.2.0.
When granting permissions to a username containing special characters, 2.1.x 
needed single quotes around the username and refused doubles.
2.2.0 needs doubles and refuses singles.

Working example for 2.1.x:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
{code}

Enclosing the username in double quotes instead of singles fails with the 
following error message:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
"vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797";
SyntaxException: 
{quote}

Singles fail in 2.2.0:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308';
SyntaxException: 
{quote}

... whereas double quotes succeed:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
"vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308";
{code}

If this is a deliberate change, I don't think it is reflected in the 
documentation. I am tempted to consider this a bug introduced with the role 
additions.

Motivation for this report: 
https://github.com/hashicorp/vault/pull/545#issuecomment-132634630

  was:
We may have uncovered an undocumented api change between cassandra 2.1.x and 
2.2.0.
When granting permissions to a username containing special characters, 2.1.x 
needed single quotes around the username and refused doubles.
2.2.0 needs doubles and refuses singles.

Working example for 2.1.x:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
{code}

Enclosing the username in double quotes instead of singles fails with the 
following error message:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
SyntaxException: 
{quote}

Singles fail in 2.2.0:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308';
SyntaxException: 
{quote}

... whereas double quotes succeed:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
"vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308";
{code}

If this is a deliberate change, I don't think it is reflected in the 
documentation. I am temped to consider this a bug introduced with the role 
additions.

Motivation for this report: 
https://github.com/hashicorp/vault/pull/545#issuecomment-132634630



[jira] [Created] (CASSANDRA-10135) Quoting changed for username in GRANT statement

2015-08-19 Thread Bernhard K. Weisshuhn (JIRA)
Bernhard K. Weisshuhn created CASSANDRA-10135:
-

 Summary: Quoting changed for username in GRANT statement
 Key: CASSANDRA-10135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10135
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: cassandra 2.2.0
Reporter: Bernhard K. Weisshuhn
Priority: Minor


We may have uncovered an undocumented API change between Cassandra 2.1.x and 
2.2.0.
When granting permissions to a username containing special characters, 2.1.x 
needed single quotes around the username and refused doubles.
2.2.0 needs doubles and refuses singles.

Working example for 2.1.x:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
{code}

Enclosing the username in double quotes instead of singles fails with the 
following error message:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-79840dbb-917e-ed90-38e0-578226e6c1c6-1440017797';
SyntaxException: 
{quote}

Singles fail in 2.2.0:

{quote}
cassandra@cqlsh> GRANT SELECT ON ALL KEYSPACES TO 
'vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308';
SyntaxException: 
{quote}

... whereas double quotes succeed:

{code:sql}
GRANT SELECT ON ALL KEYSPACES TO 
"vault-readonly-root-e04e7a84-a7ba-d84f-f3c0-1e50e7590179-1440019308";
{code}

If this is a deliberate change, I don't think it is reflected in the 
documentation. I am tempted to consider this a bug introduced with the role 
additions.

Motivation for this report: 
https://github.com/hashicorp/vault/pull/545#issuecomment-132634630





[jira] [Updated] (CASSANDRA-10132) sstablerepairedset throws exception while loading metadata

2015-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-10132:
---
Description: 
{{sstablerepairedset}} displays exception trying to load schema through 
DatabaseDescriptor.

{code}
$ ./tools/bin/sstablerepairedset --really-set --is-repaired 
~/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Data.db
14:42:36.714 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Mutating 
/home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Statistics.db
 to repairedAt time 1440013248000
14:42:36.721 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for 
/home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:110)
at 
org.apache.cassandra.utils.memory.BufferPool.(BufferPool.java:51)
at 
org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
at 
org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.(RandomAccessReader.java:89)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
at 
org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
URI in variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the 
file with [file:///] for local files and [file:///] for remote files. 
If you are executing this from an external tool, it needs to set 
Config.setClientMode(true) to avoid loading configuration.
at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:78)
at 
org.apache.cassandra.config.YamlConfigurationLoader.(YamlConfigurationLoader.java:92)
... 10 more
{code}

MetadataSerializer uses RandomAccessReader, which allocates its buffer through 
BufferPool. BufferPool gets its settings from DatabaseDescriptor, which won't 
work in an offline tool.

  was:
{{sstablerepairedset}} displays exception trying to load schema through 
DatabaseDescriptor.

{code}
$ ./tools/bin/sstablerepairedset --really-set --is-repaired 
~/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Data.db
14:42:36.714 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Mutating 
/home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Statistics.db
 to repairedAt time 1440013248000
14:42:36.721 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for 
/home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-bigException
 in thread "main" java.lang.ExceptionInInitializerError
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:110)
at 
org.apache.cassandra.utils.memory.BufferPool.(BufferPool.java:51)
at 
org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
at 
org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.(RandomAccessReader.java:89)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
at 
org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
URI in variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the 
file with [file:///] for local files and [file:///] for remote files. 
If you are executing this from an external tool, it needs to set 
Config.setClientMode(true) to avoid loading configuration.
at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:78)
at 
org.apache.cassandra.config.YamlConfigurationLoader.(YamlConfigurationLoader.java:92)
... 10 m

[1/2] cassandra git commit: Fix Coverity-flagged CASSANDRA-6230 issues

2015-08-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk b1c7f5955 -> 6cad04b22


Fix Coverity-flagged CASSANDRA-6230 issues


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4cc2b67d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4cc2b67d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4cc2b67d

Branch: refs/heads/trunk
Commit: 4cc2b67df369bc3e9587b9e4864b8058ca78cdf7
Parents: 51bc7f8
Author: Aleksey Yeschenko 
Authored: Thu Aug 20 00:30:30 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Thu Aug 20 00:30:53 2015 +0300

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 src/java/org/apache/cassandra/hints/HintsWriter.java | 7 ---
 src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index b3bc4d2..01455ac 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1527,7 +1527,7 @@ public class DatabaseDescriptor
 
 public static long getMaxHintsFileSize()
 {
-return conf.max_hints_file_size_in_mb * 1024 * 1024;
+return conf.max_hints_file_size_in_mb * 1024L * 1024L;
 }
 
 public static boolean isIncrementalBackupsEnabled()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/hints/HintsWriter.java
--
diff --git a/src/java/org/apache/cassandra/hints/HintsWriter.java 
b/src/java/org/apache/cassandra/hints/HintsWriter.java
index 300d9cc..5cadd35 100644
--- a/src/java/org/apache/cassandra/hints/HintsWriter.java
+++ b/src/java/org/apache/cassandra/hints/HintsWriter.java
@@ -22,6 +22,7 @@ import java.io.IOException;
 import java.io.OutputStream;
 import java.nio.ByteBuffer;
 import java.nio.channels.FileChannel;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.StandardOpenOption;
 import java.util.zip.CRC32;
@@ -98,7 +99,7 @@ final class HintsWriter implements AutoCloseable
 File checksumFile = new File(directory, descriptor.checksumFileName());
 try (OutputStream out = Files.newOutputStream(checksumFile.toPath()))
 {
-out.write(Integer.toHexString((int) 
globalCRC.getValue()).getBytes());
+out.write(Integer.toHexString((int) 
globalCRC.getValue()).getBytes(StandardCharsets.UTF_8));
 }
 catch (IOException e)
 {
@@ -255,7 +256,7 @@ final class HintsWriter implements AutoCloseable
 
 private void maybeFsync()
 {
-if (position() >= lastSyncPosition + 
DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 1024)
+if (position() >= lastSyncPosition + 
DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 1024L)
 fsync();
 }
 
@@ -265,7 +266,7 @@ final class HintsWriter implements AutoCloseable
 
 // don't skip page cache for tiny files, on the assumption that if 
they are tiny, the target node is probably
 // alive, and if so, the file will be closed and dispatched 
shortly (within a minute), and the file will be dropped.
-if (position >= DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 
1024)
+if (position >= DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 
1024L)
 CLibrary.trySkipCache(fd, 0, position - (position % 
PAGE_SIZE), file.getPath());
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
--
diff --git a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java 
b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
index 082e307..196f184 100644
--- a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
+++ b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
@@ -71,7 +71,7 @@ public final class LegacyHintsMigrator
 int meanCellCount = legacyHintsTable.getMeanColumns();
 double meanPartitionSize = legacyHintsTable.getMeanPartitionSize();
 
-if (meanCellCount != 0 || meanPartitionSize != 0)
+if (meanCellCount != 0 && meanPartitionSize != 0)
 {
 int avgHintSize = (int) meanPartitionSize / meanCellCount;
 size = Math.max(2, Math.min(
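The DatabaseDescriptor hunk above fixes a 32-bit overflow: with a large 
max_hints_file_size_in_mb, the product is computed in int arithmetic and wraps 
before being widened to long. A minimal standalone illustration (the parameter 
name mirrors the config field; the surrounding class is illustrative):

```java
public class HintsSizeOverflow {
    // The pre-patch form: the product is computed in int, wrapping for any
    // configured size of 2048 MB (2^31 bytes) or more, then widened to long.
    static long brokenMaxSize(int maxHintsFileSizeInMb) {
        return maxHintsFileSizeInMb * 1024 * 1024;
    }

    // The patched form: promote to long before multiplying.
    static long fixedMaxSize(int maxHintsFileSizeInMb) {
        return maxHintsFileSizeInMb * 1024L * 1024L;
    }

    public static void main(String[] args) {
        // 4096 MB: the int product is exactly 2^32, which wraps to 0.
        System.out.println(brokenMaxSize(4096) + " vs " + fixedMaxSize(4096));
    }
}
```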

[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-19 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6cad04b2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6cad04b2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6cad04b2

Branch: refs/heads/trunk
Commit: 6cad04b220833e185e05aa229684ebbdc23f4212
Parents: b1c7f59 4cc2b67
Author: Aleksey Yeschenko 
Authored: Thu Aug 20 00:31:27 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Thu Aug 20 00:31:27 2015 +0300

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 src/java/org/apache/cassandra/hints/HintsWriter.java | 7 ---
 src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cad04b2/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



cassandra git commit: Fix Coverity-flagged CASSANDRA-6230 issues

2015-08-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 51bc7f87d -> 4cc2b67df


Fix Coverity-flagged CASSANDRA-6230 issues


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4cc2b67d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4cc2b67d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4cc2b67d

Branch: refs/heads/cassandra-3.0
Commit: 4cc2b67df369bc3e9587b9e4864b8058ca78cdf7
Parents: 51bc7f8
Author: Aleksey Yeschenko 
Authored: Thu Aug 20 00:30:30 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Thu Aug 20 00:30:53 2015 +0300

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 src/java/org/apache/cassandra/hints/HintsWriter.java | 7 ---
 src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index b3bc4d2..01455ac 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1527,7 +1527,7 @@ public class DatabaseDescriptor
 
 public static long getMaxHintsFileSize()
 {
-return conf.max_hints_file_size_in_mb * 1024 * 1024;
+return conf.max_hints_file_size_in_mb * 1024L * 1024L;
 }
 
 public static boolean isIncrementalBackupsEnabled()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/hints/HintsWriter.java
--
diff --git a/src/java/org/apache/cassandra/hints/HintsWriter.java 
b/src/java/org/apache/cassandra/hints/HintsWriter.java
index 300d9cc..5cadd35 100644
--- a/src/java/org/apache/cassandra/hints/HintsWriter.java
+++ b/src/java/org/apache/cassandra/hints/HintsWriter.java
@@ -22,6 +22,7 @@ import java.io.IOException;
 import java.io.OutputStream;
 import java.nio.ByteBuffer;
 import java.nio.channels.FileChannel;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.StandardOpenOption;
 import java.util.zip.CRC32;
@@ -98,7 +99,7 @@ final class HintsWriter implements AutoCloseable
 File checksumFile = new File(directory, descriptor.checksumFileName());
 try (OutputStream out = Files.newOutputStream(checksumFile.toPath()))
 {
-out.write(Integer.toHexString((int) 
globalCRC.getValue()).getBytes());
+out.write(Integer.toHexString((int) 
globalCRC.getValue()).getBytes(StandardCharsets.UTF_8));
 }
 catch (IOException e)
 {
@@ -255,7 +256,7 @@ final class HintsWriter implements AutoCloseable
 
 private void maybeFsync()
 {
-if (position() >= lastSyncPosition + 
DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 1024)
+if (position() >= lastSyncPosition + 
DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 1024L)
 fsync();
 }
 
@@ -265,7 +266,7 @@ final class HintsWriter implements AutoCloseable
 
 // don't skip page cache for tiny files, on the assumption that if 
they are tiny, the target node is probably
 // alive, and if so, the file will be closed and dispatched 
shortly (within a minute), and the file will be dropped.
-if (position >= DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 
1024)
+if (position >= DatabaseDescriptor.getTrickleFsyncIntervalInKb() * 
1024L)
 CLibrary.trySkipCache(fd, 0, position - (position % 
PAGE_SIZE), file.getPath());
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cc2b67d/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
--
diff --git a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java 
b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
index 082e307..196f184 100644
--- a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
+++ b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
@@ -71,7 +71,7 @@ public final class LegacyHintsMigrator
 int meanCellCount = legacyHintsTable.getMeanColumns();
 double meanPartitionSize = legacyHintsTable.getMeanPartitionSize();
 
-if (meanCellCount != 0 || meanPartitionSize != 0)
+if (meanCellCount != 0 && meanPartitionSize != 0)
 {
 int avgHintSize = (int) meanPartitionSize / meanCellCount;
 size = Math.
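The LegacyHintsMigrator hunk above changes || to &&. A minimal sketch of why 
the original condition could divide by zero; the method name and fallback value 
are illustrative, not from the patch:

```java
public class MeanHintSize {
    // With ||, meanCellCount == 0 and meanPartitionSize != 0 still enters the
    // branch and performs an int division by zero; && requires both to be
    // non-zero before dividing.
    static int avgHintSize(int meanCellCount, double meanPartitionSize) {
        if (meanCellCount != 0 && meanPartitionSize != 0)
            return (int) meanPartitionSize / meanCellCount;
        return -1; // illustrative fallback for the "no data" case
    }

    public static void main(String[] args) {
        // Under the old || condition this call would throw ArithmeticException.
        System.out.println(avgHintSize(0, 1024.0));
    }
}
```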

[jira] [Commented] (CASSANDRA-10125) ReadFailure is thrown instead of ReadTimeout for range queries

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703762#comment-14703762
 ] 

Ariel Weisberg commented on CASSANDRA-10125:


+1 I think I understand it and it looks like it does what you say it does. 
Tests also pass.

Is the plan to keep using two verbs to drive the different timeouts?

> ReadFailure is thrown instead of ReadTimeout for range queries
> --
>
> Key: CASSANDRA-10125
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10125
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
>
> CASSANDRA-8099 merged the way single partition and range read messages were 
> handled and switched to using the same verb ({{Verb.READ}}) for both, 
> effectively deprecating {{Verb.RANGE_SLICE}}. Unfortunately, we are relying 
> on having 2 different verbs for timeouts. More precisely, when adding a 
> callback in the expiring map of {{MessagingService}}, we use the timeout from 
> the {{Verb}}. As a consequence, it's currently set with the single partition 
> read timeout (5s) even for range queries (which have a 10s timeout).  And 
> when a callback expires, it is notified as a failure to the callback (which 
> is debatable imo but a separate issue), which means range queries will 
> generally send a ReadFailure (after 5s) instead of a ReadTimeout (since they 
> do wait 10s before sending those).
> That is the reason for at least the failure of {{nosetests 
> replace_address_test:TestReplaceAddress.replace_first_boot_test}} dtest (the 
> test has 3 nodes, kills one and expects a timeout at CL.THREE but gets a 
> failure instead).
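
The mechanism described above can be sketched as follows. This is a hypothetical simplification, not Cassandra's actual {{MessagingService}} code: the expiring map takes the callback's lifetime from the verb, so a range query routed over {{Verb.READ}} silently inherits the 5s single-partition timeout instead of the 10s range timeout, and expiry is then reported to the callback as a failure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VerbTimeoutDemo {
    enum Verb {
        READ(5000), RANGE_SLICE(10000);
        final long timeoutMs;
        Verb(long t) { timeoutMs = t; }
    }

    static final Map<Integer, Long> expiries = new ConcurrentHashMap<>();

    static void registerCallback(int messageId, Verb verb, long nowMs) {
        // lifetime comes from the verb, not from the kind of query being served
        expiries.put(messageId, nowMs + verb.timeoutMs);
    }

    public static void main(String[] args) {
        registerCallback(1, Verb.READ, 0);        // range query sent as READ
        registerCallback(2, Verb.RANGE_SLICE, 0); // timeout it should have had
        System.out.println(expiries.get(1));      // 5000  -> expires too early
        System.out.println(expiries.get(2));      // 10000
    }
}
```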



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9893) Fix upgrade tests from #9704 that are still failing

2015-08-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703759#comment-14703759
 ] 

Jim Witschey commented on CASSANDRA-9893:
-

[~slebresne] CassCI has been a bit of a mess today, but Philip got these tests 
running here:

http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/trunk_dtest_backwards_compat/

> Fix upgrade tests from #9704 that are still failing
> ---
>
> Key: CASSANDRA-9893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9893
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 2
>
>
> The first thing to do on this ticket would be to commit Tyler's branch 
> (https://github.com/thobbs/cassandra-dtest/tree/8099-backwards-compat) to the 
> dtests so cassci runs them. I've had to do a few minor modifications to have 
> them run locally, so someone with access to cassci should do it and make sure 
> it runs properly.
> Once we have that, we should fix any test that isn't passing. I've run the 
> tests locally and had 8 failures. For 2 of them, it sounds plausible that 
> they'll get fixed by the patch of CASSANDRA-9775, though that's just a guess. 
> The rest were tests that timed out without a particular error in the log, 
> and running some of them individually, they passed.  So we'll have to see if 
> it's just my machine being overly slow when running them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9901) Make AbstractType.isByteOrderComparable abstract

2015-08-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703710#comment-14703710
 ] 

Benedict commented on CASSANDRA-9901:
-

Patch available [here|https://github.com/belliottsmith/cassandra/tree/9901]

* This has kept the {{isByteOrderComparable}} flag in the 
{{ClusteringComparator}} as I hadn't factored in the extra cost of performing a 
{{List}} lookup in cases where we can avoid it
* In AbstractType we also extract the isByteOrderComparable into a boolean flag 
for one less level of indirection when performing this test
* We directly call {{compareCustom}} in {{ClusteringComparator}}, but 
everywhere else we rely on the final implementation of {{compare()}} 

> Make AbstractType.isByteOrderComparable abstract
> 
>
> Key: CASSANDRA-9901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9901
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0 beta 2
>
>
> I can't recall _precisely_ what was agreed at the NGCC, but I'm reasonably 
> sure we agreed to make this method abstract, put some javadoc explaining we 
> may require fields to yield true in the near future, and potentially log a 
> warning on startup if a user-defined type returns false.
> This should make it into 3.0, IMO, so that we can look into migrating to 
> byte-order comparable types in the post-3.0 world.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10127) Make naming for secondary indexes consistent

2015-08-19 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston reassigned CASSANDRA-10127:
---

Assignee: Blake Eggleston  (was: Sam Tunnicliffe)

> Make naming for secondary indexes consistent
> 
>
> Key: CASSANDRA-10127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10127
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 2
>
>
> We have a longstanding mismatch between the name of an index as defined in 
> schema and what gets returned from {{SecondaryIndex#getIndexName()}}, which 
> for the builtin index impls is the name of the underlying index CFS, of the 
> form {{<table>.<index>}}.
> This mismatch causes a number of UI inconsistencies:
> {code}nodetool rebuild_index <keyspace> <table> <index>{code}
> {{<index>}} must be qualified, i.e. include the redundant table name as without 
> it, the rebuild silently fails
> {{system.IndexInfo}} (which is also exposed over JMX) uses the form 
> {{<table>.<index>}}
> {code}cqlsh> describe index [<keyspace>.]<index>{code}
> here, qualifying {{<index>}} with the base table name is an error.
> Generally, anything CQL related uses the index name directly, whereas anything 
> concerned with building or rebuilding requires the version based on an 
> underlying backing table name. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10068) Batchlog replay fails with exception after a node is decommissioned

2015-08-19 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston reassigned CASSANDRA-10068:
---

Assignee: Blake Eggleston  (was: Marcus Eriksson)

> Batchlog replay fails with exception after a node is decommissioned
> ---
>
> Key: CASSANDRA-10068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10068
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test of materialized views that 
> crashes and decommissions nodes throughout the test.
> At the conclusion of the test, a batchlog replay is initiated through 
> nodetool and hits the following assertion due to a missing host ID: 
> https://github.com/apache/cassandra/blob/3413e557b95d9448b0311954e9b4f53eaf4758cd/src/java/org/apache/cassandra/service/StorageProxy.java#L1197
> A nodetool status on the node with failed batchlog replay shows the following 
> entry for the decommissioned node:
> DN  10.0.0.5  ?  256  ?   null  rack1
> On the unaffected nodes, there is no entry for the decommissioned node as 
> expected.
> There are occasional hits of the same assertions for logs in other nodes; it 
> looks like the issue might occasionally resolve itself, but one node seems to 
> have the errant null entry indefinitely.
> In logs for the nodes, this possibly unrelated exception also appears:
> java.lang.RuntimeException: Trying to get the view natural endpoint on a 
> non-data replica
>   at 
> org.apache.cassandra.db.view.MaterializedViewUtils.getViewNaturalEndpoint(MaterializedViewUtils.java:91)
>  ~[apache-cassandra-3.0.0-alpha1-SNAPSHOT.jar:3.0.0-alpha1-SNAPSHOT]
> I have a running cluster with the issue on my machine; it is also repeatable.
> Nothing stands out in the logs of the decommissioned node (n4) for me. The 
> logs of each node in the cluster are attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10134) Always require replace_address to replace existing address

2015-08-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703702#comment-14703702
 ] 

Tyler Hobbs commented on CASSANDRA-10134:
-

Here's a dtest PR that reproduces the problem: 
https://github.com/riptano/cassandra-dtest/pull/484

> Always require replace_address to replace existing address
> --
>
> Key: CASSANDRA-10134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Normally, when a node is started from a clean state with the same address as 
> an existing down node, it will fail to start with an error like this:
> {noformat}
> ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
> encountered during startup
> java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
>   at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
> [main/:na]
> {noformat}
> However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
> list, it will not throw this error and will start normally.  The new node 
> then takes over the host ID of the old node (even if the tokens are 
> different), and the only message you will see is a warning in the other 
> nodes' logs:
> {noformat}
> logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, 
> hostId);
> {noformat}
> This could cause an operator to accidentally wipe out the token information 
> for a down node without replacing it.  To fix this, we should check for an 
> endpoint collision even if {{auto_bootstrap}} is false or the node is a seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10134) Always require replace_address to replace existing address

2015-08-19 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-10134:
---

 Summary: Always require replace_address to replace existing address
 Key: CASSANDRA-10134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
 Fix For: 3.x, 2.1.x, 2.2.x


Normally, when a node is started from a clean state with the same address as an 
existing down node, it will fail to start with an error like this:

{noformat}
ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
encountered during startup
java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
cancelling join. Use cassandra.replace_address if you want to replace this node.
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:720) 
~[main/:na]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
[main/:na]
{noformat}

However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
list, it will not throw this error and will start normally.  The new node then 
takes over the host ID of the old node (even if the tokens are different), and 
the only message you will see is a warning in the other nodes' logs:

{noformat}
logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, hostId);
{noformat}

This could cause an operator to accidentally wipe out the token information for 
a down node without replacing it.  To fix this, we should check for an endpoint 
collision even if {{auto_bootstrap}} is false or the node is a seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10105) Windows dtest 3.0: TestOfflineTools failures

2015-08-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10105:
--
Fix Version/s: (was: 3.0.x)

> Windows dtest 3.0: TestOfflineTools failures
> 
>
> Key: CASSANDRA-10105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10105
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
>
> offline_tools_test.py:TestOfflineTools.sstablelevelreset_test
> offline_tools_test.py:TestOfflineTools.sstableofflinerelevel_test
> Both tests fail with the following:
> {noformat}
> Traceback (most recent call last):
>   File "c:\src\cassandra-dtest\dtest.py", line 532, in tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR [main] 2015-08-17 
> 15:55:05,060 NoSpamLogger.java:97 - This platform does not support atomic 
> directory streams (SecureDirectoryStream); race conditions when loading 
> sstable files could occurr']
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/jmx_test/TestJMX/netstats_test/history/]
> Env: ci and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10107) Windows dtest 3.0: TestScrub and TestScrubIndexes failures

2015-08-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10107:
--
Fix Version/s: (was: 3.0.x)

> Windows dtest 3.0: TestScrub and TestScrubIndexes failures
> --
>
> Key: CASSANDRA-10107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10107
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
>
> scrub_test.py:TestScrub.test_standalone_scrub
> scrub_test.py:TestScrub.test_standalone_scrub_essential_files_only
> scrub_test.py:TestScrubIndexes.test_standalone_scrub
> Somewhat different messages between CI and local, but consistent on env. 
> Locally, I see:
> {noformat}
> dtest: DEBUG: ERROR 20:41:20 This platform does not support atomic directory 
> streams (SecureDirectoryStream); race conditions when loading sstable files 
> could occurr
> {noformat}
> Consistently fails, both on CI and locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10133) Make updates to system_auth invalidate permission_validity cache

2015-08-19 Thread Darla Baker (JIRA)
Darla Baker created CASSANDRA-10133:
---

 Summary: Make updates to system_auth invalidate 
permission_validity cache
 Key: CASSANDRA-10133
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10133
 Project: Cassandra
  Issue Type: New Feature
Reporter: Darla Baker


Currently a change to system_auth (password, add/remove user) will not take 
effect until the permission_validity_in_ms time expires or the nodes in the 
cluster are recycled.  In larger clusters this setting can be rather long so 
changes don't take effect as quickly as desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10132) sstablerepairedset throws exception while loading metadata

2015-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-10132:
---
Description: 
{{sstablerepairedset}} displays exception trying to load schema through 
DatabaseDescriptor.

{code}
$ ./tools/bin/sstablerepairedset --really-set --is-repaired 
~/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Data.db
14:42:36.714 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Mutating 
/home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Statistics.db
 to repairedAt time 1440013248000
14:42:36.721 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for /home/yuki/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:110)
        at org.apache.cassandra.utils.memory.BufferPool.<clinit>(BufferPool.java:51)
        at org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
        at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
        at org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.<init>(RandomAccessReader.java:89)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
        at org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the file with [file:///] for local files and [file:///] for remote files. If you are executing this from an external tool, it needs to set Config.setClientMode(true) to avoid loading configuration.
        at org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:78)
        at org.apache.cassandra.config.YamlConfigurationLoader.<clinit>(YamlConfigurationLoader.java:92)
        ... 10 more
{code}

MetadataSerializer uses RandomAccessReader which allocates buffer through 
BufferPool. BufferPool gets its settings from DatabaseDescriptor and it won't 
work in offline tool.

  was:
{{sstablerepairedset}} displays exception trying to load schema through 
DatabaseDescriptor.

{code}
11:54:47.633 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for /tmp/dtest-m5aJsz/test/node2/data/keyspace1/standard1-bcad849046a311e5a62251843b245f21/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:110)
        at org.apache.cassandra.utils.memory.BufferPool.<clinit>(BufferPool.java:51)
        at org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
        at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
        at org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.<init>(RandomAccessReader.java:89)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
        at org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
{code}

MetadataSerializer uses RandomAccessReader which allocates buffer through 
BufferPool. BufferPool gets its settings from DatabaseDescriptor and it won't 
work in offline tool.


> sstablerepairedset throws exception while loading metadata
> --
>
> Key: CASSANDRA-10132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10132
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
> Fix For: 3.0.0 rc1
>
>
> {{sstablerepairedset}} displays exception trying to load schema through 
> DatabaseDescriptor.
> {code}
> $ ./tools/bin/sstablerepairedset --really-set --is-repaired 
> ~/.ccm/3.0/node1/data/keyspace1/standard1-2c0b226046aa11e596f58106a0d438e8/ma-1-big-Data.db
> 14:42:36.714 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Mutating 
> /home/yuk

[jira] [Created] (CASSANDRA-10132) sstablerepairedset throws exception while loading metadata

2015-08-19 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-10132:
--

 Summary: sstablerepairedset throws exception while loading metadata
 Key: CASSANDRA-10132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10132
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 3.0.0 rc1


{{sstablerepairedset}} displays exception trying to load schema through 
DatabaseDescriptor.

{code}
11:54:47.633 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for /tmp/dtest-m5aJsz/test/node2/data/keyspace1/standard1-bcad849046a311e5a62251843b245f21/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:110)
        at org.apache.cassandra.utils.memory.BufferPool.<clinit>(BufferPool.java:51)
        at org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
        at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
        at org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.<init>(RandomAccessReader.java:89)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
        at org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
{code}

MetadataSerializer uses RandomAccessReader which allocates buffer through 
BufferPool. BufferPool gets its settings from DatabaseDescriptor and it won't 
work in offline tool.
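
The failure mode described above can be illustrated without any Cassandra classes. This is a hypothetical, minimal reconstruction (the class and method names here are invented for the example): a class whose static initializer eagerly loads configuration throws {{ExceptionInInitializerError}} the first time anything touches it, even in a standalone tool that never needed that configuration.

```java
public class StaticInitPitfall {
    // Stand-in for BufferPool: its static initializer runs at class-load time
    // and, like DatabaseDescriptor, fails when no cassandra.config is set.
    static class BufferPoolLike {
        static final int SIZE_KB = loadFromConfig(); // executed in <clinit>
        static int loadFromConfig() {
            throw new RuntimeException("Expecting URI in variable: [cassandra.config]");
        }
    }

    public static void main(String[] args) {
        try {
            System.out.println(BufferPoolLike.SIZE_KB); // triggers <clinit>
        } catch (ExceptionInInitializerError e) {
            // the original exception is wrapped, just like in the trace above
            System.out.println("caught: " + e.getCause().getMessage());
        }
    }
}
```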



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10103) Windows dtest 3.0: incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails

2015-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703629#comment-14703629
 ] 

Yuki Morishita commented on CASSANDRA-10103:


{code}
11:54:47.633 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for /tmp/dtest-m5aJsz/test/node2/data/keyspace1/standard1-bcad849046a311e5a62251843b245f21/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
        at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:110)
        at org.apache.cassandra.utils.memory.BufferPool.<clinit>(BufferPool.java:51)
        at org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
        at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
        at org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.<init>(RandomAccessReader.java:89)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
        at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
        at org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
{code}

I think this is a legit bug. I will create ticket.

> Windows dtest 3.0: 
> incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails
> --
>
> Key: CASSANDRA-10103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10103
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> {noformat}
> File "D:\Python27\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-3.0_dtest_win32\cassandra-dtest\incremental_repair_test.py",
>  line 165, in sstable_repairedset_test
> self.assertGreaterEqual(len(uniquematches), 2)
>   File "D:\Python27\lib\unittest\case.py", line 948, in assertGreaterEqual
> self.fail(self._formatMessage(msg, standardMsg))
>   File "D:\Python27\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> '0 not greater than or equal to 2\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-pq7lpx\ndtest: DEBUG: []\n- >> end 
> captured logging << -'
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_disabled_test/history/]
> Env: both CI and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10131) consistently sort DCs in nodetool:status

2015-08-19 Thread Chris Burroughs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Burroughs updated CASSANDRA-10131:

Attachment: j10131-2.1-v1.txt

> consistently sort DCs in nodetool:status
> 
>
> Key: CASSANDRA-10131
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10131
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Burroughs
>Assignee: Chris Burroughs
>Priority: Minor
> Attachments: j10131-2.1-v1.txt
>
>
> It's kind of annoying that the order flip flops as I look at different 
> clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10131) consistently sort DCs in nodetool:status

2015-08-19 Thread Chris Burroughs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Burroughs reassigned CASSANDRA-10131:
---

Assignee: Chris Burroughs

> consistently sort DCs in nodetool:status
> 
>
> Key: CASSANDRA-10131
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10131
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Burroughs
>Assignee: Chris Burroughs
>Priority: Minor
>
> It's kind of annoying that the order flip flops as I look at different 
> clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10131) consistently sort DCs in nodetool:status

2015-08-19 Thread Chris Burroughs (JIRA)
Chris Burroughs created CASSANDRA-10131:
---

 Summary: consistently sort DCs in nodetool:status
 Key: CASSANDRA-10131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Burroughs
Priority: Minor


It's kind of annoying that the order flip flops as I look at different clusters.
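
One possible fix, sketched here as an assumption rather than the attached patch's actual approach: group endpoints by datacenter in a {{TreeMap}} instead of a hash-ordered map, so the DC sections print in the same sorted order for every cluster and every run.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class SortedDcStatus {
    public static void main(String[] args) {
        // TreeMap iterates keys in natural (sorted) order,
        // independent of insertion order
        Map<String, List<String>> byDc = new TreeMap<>();
        byDc.put("dc2", List.of("10.0.0.2"));
        byDc.put("dc1", List.of("10.0.0.1"));
        System.out.println(byDc.keySet()); // [dc1, dc2]
    }
}
```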



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9669) If sstable flushes complete out of order, on restart we can fail to replay necessary commit log records

2015-08-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703619#comment-14703619
 ] 

Benedict commented on CASSANDRA-9669:
-

I've updated the patch to include a unit test, and to fix two more problems. 
One typo, and the {{shouldReplay}} logic was still incorrect. Whenever I 
interface with ReplayPosition I have a strong urge to rewrite it. It is 
terribly counterintuitive, but I guess that's what we have unit tests for.

Long story short, the ranges are inclusive-start, exclusive-end, the inverse of 
what you expected. This confusion stems from the fact we use points to 
represent ranges (i.e. commit log entries) and points that demarcate ranges 
(those that have been persisted), which doesn't really make sense. But during 
replay, and on a write, the {{ReplayPosition}} for a record is the position in 
the segment _directly following_ its serialization location, i.e. it is the 
exclusive upper bound of its bytes. It is, in effect, represented by the one 
number that falls outside of its on disk representation.

Anyway, it's good for a proper review now. I accidentally collapsed the most 
recent follow up commit with the one I uploaded this morning, but mostly this 
was just the unit test, plus those two items (one removed {{!}}, and the 
inverted bounds checking)
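
The convention described above can be condensed into a sketch (a hypothetical simplification of {{ReplayPosition}} to a single long offset, not the real class): a record's position is the exclusive upper bound of its bytes, persisted ranges are inclusive-start, exclusive-end, and a record needs replay exactly when its end lies beyond what was flushed.

```java
public class ReplayRangeDemo {
    // persistedUpTo demarcates the half-open range [0, persistedUpTo)
    // already on disk; recordEnd is the exclusive upper bound of the
    // record's bytes in the segment.
    static boolean shouldReplay(long recordEnd, long persistedUpTo) {
        return recordEnd > persistedUpTo;
    }

    public static void main(String[] args) {
        System.out.println(shouldReplay(100, 100)); // false: fully persisted
        System.out.println(shouldReplay(101, 100)); // true: last byte not flushed
    }
}
```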

> If sstable flushes complete out of order, on restart we can fail to replay 
> necessary commit log records
> ---
>
> Key: CASSANDRA-9669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Critical
>  Labels: correctness
> Fix For: 3.x, 2.1.x, 2.2.x, 3.0.x
>
>
> While {{postFlushExecutor}} ensures it never expires CL entries out-of-order, 
> on restart we simply take the maximum replay position of any sstable on disk, 
> and ignore anything prior. 
> It is quite possible for there to be two flushes triggered for a given table, 
> and for the second to finish first by virtue of containing a much smaller 
> quantity of live data (or perhaps the disk is just under less pressure). If 
> we crash before the first sstable has been written, then on restart the data 
> it would have represented will disappear, since we will not replay the CL 
> records.
> This looks to be a bug present since time immemorial, and also seems pretty 
> serious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703606#comment-14703606
 ] 

Benedict edited comment on CASSANDRA-8630 at 8/19/15 7:26 PM:
--

bq. In what scenario would we not want to map the file with as few 2 gigabyte 
buffers as possible?

During early opening we currently remap our buffers every interval-, meaning 
for a 2Gb buffer by default we will map it 20 times (plus once every 2Gb)-. 
This is not horrible, but I would prefer if - at least during reopening - we 
only mapped once, and each time we reopened/extended the size of the file, we 
just mapped the bit that wasn't previously mapped. Once we cross a 2Gb boundary 
(or we are opening the final copy of the file) we should certainly remap into 
contiguous 2Gb chunks.

edit: currently we actually map it much more than this; we map each 2Gb range 
every 50Mb, so for a 100Gb file we might map several thousand times. So 
whatever we do will be a dramatic improvement, but I generally am on a mission 
to sanitise the code base, and while we're here we might as well do it right.
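The chunking constraint Benedict describes can be sketched with plain NIO: the JVM caps a single mapping at Integer.MAX_VALUE bytes, so a large file has to be covered by a list of contiguous mappings, and on extension only the not-yet-covered tail needs a new map call. A minimal sketch under that assumption (class and method names are illustrative, not Cassandra's actual code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class ChunkedMapper {
    // A single MappedByteBuffer cannot exceed Integer.MAX_VALUE bytes,
    // so split the file at 2GB boundaries.
    static final long CHUNK = Integer.MAX_VALUE;

    // Map the whole file as a list of contiguous, maximally-sized chunks.
    static List<MappedByteBuffer> mapWhole(Path path) throws IOException {
        List<MappedByteBuffer> maps = new ArrayList<>();
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            long size = ch.size();
            for (long pos = 0; pos < size; pos += CHUNK)
                maps.add(ch.map(FileChannel.MapMode.READ_ONLY, pos,
                                Math.min(CHUNK, size - pos)));
        }
        return maps;
    }
}
```

During incremental reopening, the same idea would map only the range from the previously mapped end to the new file length instead of remapping from zero.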


was (Author: benedict):
bq. In what scenario would we not want to map the file with as few 2 gigabyte 
buffers as possible?

During early opening we currently remap our buffers every interval, meaning for 
a 2Gb buffer by default we will map it 20 times (plus once every 2Gb). This is 
not horrible, but I would prefer if - at least during reopening - we only 
mapped once, and each time we reopened/extended the size of the file, we just 
mapped the bit that wasn't previously mapped. Once we cross a 2Gb boundary (or 
we are opening the final copy of the file) we should certainly remap into 
contiguous 2Gb chunks.


> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Stefania
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous 
> byte-by-byte reads and writes. 
> This makes a lot of syscalls as well.
> A quick microbenchmark shows that simply reimplementing these methods gives 
> an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster 
> on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. tasks not related to 
> compaction.)





[jira] [Commented] (CASSANDRA-9738) Migrate key-cache to be fully off-heap

2015-08-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703613#comment-14703613
 ] 

Robert Stupp commented on CASSANDRA-9738:
-

Ouch, yes, names can contain UTF8 chars.
[Pushed a 
commit|https://github.com/snazy/cassandra/commit/2e2d572ea0c30c2ef1bba49df9a6667f7b51fc4a#diff-df8196b2b182d7e311c455c5d6115f80]
 as _a demo_ of what the lambda approach could look like. writeUTF could be nice 
- but readUTF would require two method refs (one for reading the length and one 
for reading the bytes).
read/writeUTF in java.io also use an unsigned short for serialization.
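The unsigned-short framing mentioned above can be sketched as follows. Note this uses standard UTF-8 rather than the modified UTF-8 that java.io's read/writeUTF actually emit, and the class and method names are illustrative, not Cassandra's:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class UtfDemo {
    // Write a string prefixed by its byte length as an unsigned 16-bit value,
    // mirroring the framing java.io.DataOutputStream.writeUTF uses.
    static byte[] write(String s) throws IOException {
        byte[] utf = s.getBytes(StandardCharsets.UTF_8);
        if (utf.length > 0xFFFF)
            throw new IOException("string too long for unsigned-short length prefix");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeShort(utf.length); // unsigned 16-bit length
        out.write(utf);
        return bos.toByteArray();
    }

    static String read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int len = in.readUnsignedShort(); // first read: the length
        byte[] utf = new byte[len];
        in.readFully(utf);                // second read: the bytes
        return new String(utf, StandardCharsets.UTF_8);
    }
}
```

The two reads in read() correspond to the two method refs the lambda approach would need: one for the length, one for the payload.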

Hm - Mrs. cassci seems to be annoyed... ("Slave went offline during the build")

> Migrate key-cache to be fully off-heap
> --
>
> Key: CASSANDRA-9738
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9738
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0 beta 2
>
>
> Key cache still uses a concurrent map on-heap. This could go to off-heap and 
> feels doable now after CASSANDRA-8099.
> Evaluation should be done in advance based on a POC to prove that pure 
> off-heap counter cache buys a performance and/or gc-pressure improvement.
> In theory, elimination of on-heap management of the map should buy us some 
> benefit.





[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703606#comment-14703606
 ] 

Benedict commented on CASSANDRA-8630:
-

bq. In what scenario would we not want to map the file with as few 2 gigabyte 
buffers as possible?

During early opening we currently remap our buffers every interval, meaning for 
a 2Gb buffer by default we will map it 20 times (plus once every 2Gb). This is 
not horrible, but I would prefer if - at least during reopening - we only 
mapped once, and each time we reopened/extended the size of the file, we just 
mapped the bit that wasn't previously mapped. Once we cross a 2Gb boundary (or 
we are opening the final copy of the file) we should certainly remap into 
contiguous 2Gb chunks.


> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Stefania
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous 
> byte-by-byte reads and writes. 
> This makes a lot of syscalls as well.
> A quick microbenchmark shows that simply reimplementing these methods gives 
> an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster 
> on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. tasks not related to 
> compaction.)





[jira] [Commented] (CASSANDRA-10121) Fix *NEW* failing pig unit tests

2015-08-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703603#comment-14703603
 ] 

Aleksey Yeschenko commented on CASSANDRA-10121:
---

Thanks. Built a jar from java-driver {{java875}} branch and committed to 
cassandra-3.0 as {{51bc7f87d708abb66db73976e5a60df8e53c7c4a}}.

> Fix *NEW* failing pig unit tests
> 
>
> Key: CASSANDRA-10121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10121
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 1
>
>
> The latest CASSANDRA-6717 commit with the new driver deterministically broke 
> the pig tests, and the issue is schema-related now.
> See 
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_testall/lastCompletedBuild/testReport/
> Ideally we want a fix before beta1.





[1/2] cassandra git commit: Fix post-6717 driver issue with indexes

2015-08-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2c3167722 -> b1c7f5955


Fix post-6717 driver issue with indexes

patch by Blake Eggleston; reviewed by Aleksey Yeschenko for
CASSANDRA-10121


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51bc7f87
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51bc7f87
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51bc7f87

Branch: refs/heads/trunk
Commit: 51bc7f87d708abb66db73976e5a60df8e53c7c4a
Parents: cb8ff5d
Author: Blake Eggleston 
Authored: Wed Aug 19 22:15:51 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Wed Aug 19 22:17:05 2015 +0300

--
 ...core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar | Bin 2204619 -> 0 bytes
 ...core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar | Bin 0 -> 2209194 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51bc7f87/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar
--
diff --git a/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar 
b/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar
deleted file mode 100644
index 14354bd..000
Binary files 
a/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/51bc7f87/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar
--
diff --git a/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar 
b/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar
new file mode 100644
index 000..f930cc6
Binary files /dev/null and 
b/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar differ



[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-19 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1c7f595
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1c7f595
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1c7f595

Branch: refs/heads/trunk
Commit: b1c7f59554fccc03cf7b3976dfa7c7d42de15154
Parents: 2c31677 51bc7f8
Author: Aleksey Yeschenko 
Authored: Wed Aug 19 22:17:45 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Wed Aug 19 22:17:45 2015 +0300

--
 ...core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar | Bin 2204619 -> 0 bytes
 ...core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar | Bin 0 -> 2209194 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)
--




cassandra git commit: Fix post-6717 driver issue with indexes

2015-08-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 cb8ff5d3a -> 51bc7f87d


Fix post-6717 driver issue with indexes

patch by Blake Eggleston; reviewed by Aleksey Yeschenko for
CASSANDRA-10121


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51bc7f87
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51bc7f87
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51bc7f87

Branch: refs/heads/cassandra-3.0
Commit: 51bc7f87d708abb66db73976e5a60df8e53c7c4a
Parents: cb8ff5d
Author: Blake Eggleston 
Authored: Wed Aug 19 22:15:51 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Wed Aug 19 22:17:05 2015 +0300

--
 ...core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar | Bin 2204619 -> 0 bytes
 ...core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar | Bin 0 -> 2209194 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51bc7f87/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar
--
diff --git a/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar 
b/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar
deleted file mode 100644
index 14354bd..000
Binary files 
a/lib/cassandra-driver-core-3.0.0-alpha2-188d996-SNAPSHOT-shaded.jar and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/51bc7f87/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar
--
diff --git a/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar 
b/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar
new file mode 100644
index 000..f930cc6
Binary files /dev/null and 
b/lib/cassandra-driver-core-3.0.0-alpha2-ae1e256-SNAPSHOT-shaded.jar differ



[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703597#comment-14703597
 ] 

Ariel Weisberg commented on CASSANDRA-8630:
---

Yes you should rebase to 3.0. We port changes forward (I learned this recently 
myself).

* MemoryInputStream.available() can wrap the addition between 
buffer.remaining() + Ints.saturatedCast(memRemaining()). Do the addition and 
then the saturating cast.
* Why does RandomAccessReader accept both a builder and a parameter for 
initializing the buffer? Seems like we lose the bonus of a builder allowing a 
constant signature.
* A nit in initializeBuffer, it does firstSegment.value().slice() which implies 
you want a subset of the buffer? duplicate() makes it obvious there is no such 
concern.
* I think there is a place for unit tests stressing the 2 gigabyte boundaries. 
That means testing available()/length()/remaining() style methods as well as 
being able to read and seek with instances of these things that are larger than 
2g. Doing it with the actual file based ones seems bad, but maybe you could 
intercept those to work with memory so they run fast or ingloriously mock their 
inputs.
* For rate limiting is your current solution to consume buffer size bytes from 
the limiter at a time for both mmap reads and standard? And you accomplish this 
by slicing the buffer then updating the position? I don't see you setting the 
limit before slicing?
* I thought NIODataInputStream had methods for reading into ByteBuffers, but 
was wrong. It's kind of thorny to add one to RebufferingInputStream so I think 
you did the right thing putting it in FileSegmentedInputStream even though it's 
an odd concern to have in that class. Unless you have a better idea.
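The available() overflow in the first bullet comes down to operation order: saturate each operand first and the int addition can still wrap, whereas adding in long arithmetic and saturating the result cannot. A hypothetical sketch without Guava, using Math.min as the saturating cast (names are illustrative, not the actual MemoryInputStream code):

```java
public class SaturatedAdd {
    // Correct order: add in long arithmetic, then saturate the result.
    // bufferRemaining is at most Integer.MAX_VALUE and memRemaining is clamped
    // to Integer.MAX_VALUE before the add, so the long sum cannot overflow.
    static int availableSaturated(int bufferRemaining, long memRemaining) {
        long total = bufferRemaining + Math.min(memRemaining, Integer.MAX_VALUE);
        return (int) Math.min(total, Integer.MAX_VALUE);
    }

    // Buggy order for contrast: casting the long operand to int first leaves
    // an int + int addition that silently wraps past Integer.MAX_VALUE.
    static int availableWrapping(int bufferRemaining, long memRemaining) {
        return bufferRemaining + (int) Math.min(memRemaining, Integer.MAX_VALUE);
    }
}
```

With bufferRemaining = 10 and memRemaining = Integer.MAX_VALUE, the wrapping version goes negative while the saturated one pins at Integer.MAX_VALUE, which is exactly the failure mode the review comment flags.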

Stefania, is your rework of segment handling still in progress? IOW, should I 
hold off until you are done?

[~benedict] In what scenario would we not want to map the file with as few 2 
gigabyte buffers as possible?

I am still digesting the segments/boundaries/mapping issues.




> Faster sequential IO (on compaction, streaming, etc)
> 
>
> Key: CASSANDRA-8630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, Tools
>Reporter: Oleg Anastasyev
>Assignee: Stefania
>  Labels: compaction, performance
> Fix For: 3.x
>
> Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
> flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz, 
> mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a 
> lot of CPU is lost in calls to RAF's int read() and DataOutputStream's 
> write(int).
> This is because the default implementations of readShort, readLong, etc., as 
> well as their matching write* methods, are implemented as numerous 
> byte-by-byte reads and writes. 
> This makes a lot of syscalls as well.
> A quick microbenchmark shows that simply reimplementing these methods gives 
> an 8x speed increase.
> The attached patch implements the RandomAccessReader.read and 
> SequentialWriter.write methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and 
> ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
> list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster 
> on uncompressed sstables and 15% faster for compressed ones.
> A deployment to production shows much less CPU load for compaction. 
> (I attached a CPU load graph from one of our production clusters; orange is 
> niced CPU load, i.e. compaction; yellow is user, i.e. tasks not related to 
> compaction.)





[jira] [Assigned] (CASSANDRA-10106) Windows dtest 3.0: TestRepair multiple failures

2015-08-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-10106:
---

Assignee: Joshua McKenzie

> Windows dtest 3.0: TestRepair multiple failures
> ---
>
> Key: CASSANDRA-10106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10106
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> repair_test.py:TestRepair.dc_repair_test
> repair_test.py:TestRepair.local_dc_repair_test
> repair_test.py:TestRepair.simple_parallel_repair_test
> repair_test.py:TestRepair.simple_sequential_repair_test
> All failing w/the following error:
> {noformat}
> File "D:\Python27\lib\unittest\case.py", line 358, in run
> self.tearDown()
>   File 
> "D:\jenkins\workspace\cassandra-3.0_dtest_win32\cassandra-dtest\dtest.py", 
> line 532, in tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> "Unexpected error in node3 node log: ['ERROR [STREAM-IN-/127.0.0.1] 
> 2015-08-17 00:41:09,426 StreamSession.java:520 - [Stream 
> #a69fc140-4478-11e5-a8ae-4f8718583077] Streaming error occurred 
> java.io.IOException: An existing connection was forcibly closed by the remote 
> host \\tat sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_45] 
> \\tat sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) 
> ~[na:1.8.0_45] \\tat sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
> ~[na:1.8.0_45] \\tat sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_45] 
> \\tat sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
> ~[na:1.8.0_45] \\tat 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:53)
>  ~[main/:na] \\tat 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[main/:na] \\tat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_45]']\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-3kmbjb\ndtest: DEBUG: Starting cluster..\ndtest: DEBUG: 
> Inserting data...\ndtest: DEBUG: Checking data on node3...\ndtest: DEBUG: 
> Checking data on node1...\ndtest: DEBUG: Checking data on node2...\ndtest: 
> DEBUG: starting repair...\ndtest: DEBUG: Repair time: 5.3782098\ndtest: 
> DEBUG: removing ccm cluster test at: d:\\temp\\dtest-3kmbjb\ndtest: DEBUG: 
> clearing ssl stores from [d:\\temp\\dtest-3kmbjb] 
> directory\n- >> end captured logging << 
> -"
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/repair_test/TestRepair/dc_repair_test/history/]
> Env: ci and local





[jira] [Commented] (CASSANDRA-10103) Windows dtest 3.0: incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails

2015-08-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703578#comment-14703578
 ] 

Joshua McKenzie commented on CASSANDRA-10103:
-

I'm trying to compare output from this test on Windows to output on Linux, and 
both are failing with multiple (different) errors.

Windows:
{noformat}
SUCCESS: The process with PID 13120 has been terminated.
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:114)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
at 
org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
Started: node2 with pid: 17468
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:114)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
at 
org.apache.cassandra.tools.SSTableMetadataViewer.main(SSTableMetadataViewer.java:51)
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:114)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
at 
org.apache.cassandra.tools.SSTableMetadataViewer.main(SSTableMetadataViewer.java:51)
...
FAIL: sstable_repairedset_test (incremental_repair_test.TestIncRepair)
--
Traceback (most recent call last):
  File "c:\src\cassandra-dtest\incremental_repair_test.py", line 170, in 
sstable_repairedset_test
self.assertGreaterEqual(len(uniquematches), 2)
AssertionError: 0 not greater than or equal to 2
{noformat}

Linux:
{noformat}
11:54:47.633 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for 
/tmp/dtest-m5aJsz/test/node2/data/keyspace1/standard1-bcad849046a311e5a62251843b245f21/ma-1-big
Exception in thread "main" java.lang.ExceptionInInitializerError
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:123)
at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:110)
at 
org.apache.cassandra.utils.memory.BufferPool.(BufferPool.java:51)
at 
org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:76)
at 
org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.util.RandomAccessReader$RandomAccessReaderWithChannel.(RandomAccessReader.java:89)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:108)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:91)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateRepairedAt(MetadataSerializer.java:143)
at 
org.apache.cassandra.tools.SSTableRepairedAtSetter.main(SSTableRepairedAtSetter.java:86)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
URI in variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the 
file with [file:///] for local files and [file:///] for remote files. 
If you are executing this from an external tool, it needs to set 
Config.setClientMode(true) to avoid loading configuration.
...
ERROR: sstable_repairedset_test (incremental_repair_test.TestIncRepair)
--
Traceback (most recent call last):
  File "/home/cassandra/src/cassandra-dtest/dtest.py", line 532, in tearDown
raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
errors))
AssertionError: Unexpected error in node2 node log: ['ERROR [HintsDispatcher:1] 
2015-08-19 11:55:35,403 HintsDispatchExecutor.java:186 - Failed to delete hints 
file 5040aad1-6b54-4382-9828-b232880a8222-1440010511636-1.hints']
{noformat}

dtests on 3.0 are in a fairly bad way right now. I'm going to sit on this until 
the Linux failures get ironed out and then revisit the Windows side; trying to 
track down both underlying test failures and env/OS-specific failures 
simultaneously is unproductive.

> Windows dtest 3.0: 
> incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails
> --
>
> Key: CASSANDRA-10103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10103
> Project: Cassandra

[jira] [Comment Edited] (CASSANDRA-10068) Batchlog replay fails with exception after a node is decommissioned

2015-08-19 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703510#comment-14703510
 ] 

Joel Knighton edited comment on CASSANDRA-10068 at 8/19/15 6:27 PM:


I haven't had any luck repro-ing this with a dtest - the timing issues are too 
difficult.

I've narrowed down the cause slightly (maybe?) through watching Jepsen tests 
that reproduce the issue.

The null gossip entries are present in nodes that crash at a particular time 
(seems to be quite late) in the decommission of the node. When started (after 
the decommission has finished without an error present), they have the null 
entry. A restart of the affected node removes this null entry.

Hope this helps.

EDIT: I should add that I'm quite confident the view natural endpoint warning 
is unrelated. I think I've seen it before any decommissions/crashes. I'll open 
up a JIRA for that issue and link your patch.


was (Author: jkni):
I haven't had any luck repro-ing this with a dtest - the timing issues are too 
difficult.

I've narrowed down the cause slightly (maybe?) through watching Jepsen tests 
that reproduce the issue.

The null gossip entries are present in nodes that crash at a particular time 
(seems to be quite late) in the decommission of the node. When started (after 
the decommission has finished without an error present), they have the null 
entry. A restart removes this null entry.

Hope this helps.

EDIT: I should add that I'm quite confident the view natural endpoint warning 
is unrelated. I think I've seen it before any decommissions/crashes. I'll open 
up a JIRA for that issue and link your patch.

> Batchlog replay fails with exception after a node is decommissioned
> ---
>
> Key: CASSANDRA-10068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10068
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Marcus Eriksson
> Fix For: 3.0 beta 2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test of materialized views that 
> crashes and decommissions nodes throughout the test.
> At the conclusion of the test, a batchlog replay is initiated through 
> nodetool and hits the following assertion due to a missing host ID: 
> https://github.com/apache/cassandra/blob/3413e557b95d9448b0311954e9b4f53eaf4758cd/src/java/org/apache/cassandra/service/StorageProxy.java#L1197
> A nodetool status on the node with failed batchlog replay shows the following 
> entry for the decommissioned node:
> DN  10.0.0.5  ?  256  ?   null
>   rack1
> On the unaffected nodes, there is no entry for the decommissioned node as 
> expected.
> There are occasional hits of the same assertions for logs in other nodes; it 
> looks like the issue might occasionally resolve itself, but one node seems to 
> have the errant null entry indefinitely.
> In logs for the nodes, this possibly unrelated exception also appears:
> java.lang.RuntimeException: Trying to get the view natural endpoint on a 
> non-data replica
>   at 
> org.apache.cassandra.db.view.MaterializedViewUtils.getViewNaturalEndpoint(MaterializedViewUtils.java:91)
>  ~[apache-cassandra-3.0.0-alpha1-SNAPSHOT.jar:3.0.0-alpha1-SNAPSHOT]
> I have a running cluster with the issue on my machine; it is also repeatable.
> Nothing stands out in the logs of the decommissioned node (n4) for me. The 
> logs of each node in the cluster are attached.





[jira] [Comment Edited] (CASSANDRA-10068) Batchlog replay fails with exception after a node is decommissioned

2015-08-19 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703510#comment-14703510
 ] 

Joel Knighton edited comment on CASSANDRA-10068 at 8/19/15 6:24 PM:


I haven't had any luck repro-ing this with a dtest - the timing issues are too 
difficult.

I've narrowed down the cause slightly (maybe?) through watching Jepsen tests 
that reproduce the issue.

The null gossip entries are present in nodes that crash at a particular time 
(seems to be quite late) in the decommission of the node. When started (after 
the decommission has finished without an error present), they have the null 
entry. A restart removes this null entry.

Hope this helps.

EDIT: I should add that I'm quite confident the view natural endpoint warning 
is unrelated. I think I've seen it before any decommissions/crashes. I'll open 
up a JIRA for that issue and link your patch.


was (Author: jkni):
I haven't had any luck repro-ing this with a dtest - the timing issues are too 
difficult.

I've narrowed down the cause slightly (maybe?) through watching Jepsen tests 
that reproduce the issue.

The null gossip entries are present in nodes that crash at a particular time 
(seems to be quite late) in the decommission of the node. When started (after 
the decommission has finished without an error present), they have the null 
entry. A restart removes this null entry.

Hope this helps.

> Batchlog replay fails with exception after a node is decommissioned
> ---
>
> Key: CASSANDRA-10068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10068
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Marcus Eriksson
> Fix For: 3.0 beta 2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test of materialized views that 
> crashes and decommissions nodes throughout the test.
> At the conclusion of the test, a batchlog replay is initiated through 
> nodetool and hits the following assertion due to a missing host ID: 
> https://github.com/apache/cassandra/blob/3413e557b95d9448b0311954e9b4f53eaf4758cd/src/java/org/apache/cassandra/service/StorageProxy.java#L1197
> A nodetool status on the node with failed batchlog replay shows the following 
> entry for the decommissioned node:
> DN  10.0.0.5  ?  256  ?   null
>   rack1
> On the unaffected nodes, there is no entry for the decommissioned node as 
> expected.
> There are occasional hits of the same assertions for logs in other nodes; it 
> looks like the issue might occasionally resolve itself, but one node seems to 
> have the errant null entry indefinitely.
> In logs for the nodes, this possibly unrelated exception also appears:
> java.lang.RuntimeException: Trying to get the view natural endpoint on a 
> non-data replica
>   at 
> org.apache.cassandra.db.view.MaterializedViewUtils.getViewNaturalEndpoint(MaterializedViewUtils.java:91)
>  ~[apache-cassandra-3.0.0-alpha1-SNAPSHOT.jar:3.0.0-alpha1-SNAPSHOT]
> I have a running cluster with the issue on my machine; it is also repeatable.
> Nothing stands out in the logs of the decommissioned node (n4) for me. The 
> logs of each node in the cluster are attached.





[jira] [Updated] (CASSANDRA-10083) Revert AutoSavingCache.IStreamFactory to return OutputStream

2015-08-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10083:
---
Reviewer: Benedict

> Revert AutoSavingCache.IStreamFactory to return OutputStream
> 
>
> Key: CASSANDRA-10083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10083
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Attachments: 
> 0002-reverting-autosaving-cache-stream-factory-to-use-out.patch
>
>
> CASSANDRA-9265 changed the AutoSavingCache.IStreamFactory to return a 
> SequentialWriter, instead of an OutputStream. This makes implementing custom 
> factories much less straightforward / clean.
> The attached patch reverts the change; however, it depends on CASSANDRA-10082 
> to work properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10121) Fix *NEW* failing pig unit tests

2015-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703511#comment-14703511
 ] 

Blake Eggleston commented on CASSANDRA-10121:
-

here's the patch with the updated driver: 
https://github.com/bdeggleston/cassandra/tree/10121-2

> Fix *NEW* failing pig unit tests
> 
>
> Key: CASSANDRA-10121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10121
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Blake Eggleston
> Fix For: 3.0 beta 1
>
>
> The latest CASSANDRA-6717 commit with the new driver deterministically broke 
> the pig tests, and the issue is schema-related now.
> See 
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_testall/lastCompletedBuild/testReport/
> Ideally we want a fix before beta1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10130) Node failure during MV/2i update after streaming can have incomplete MV/2i when restarted

2015-08-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10130:
---
Issue Type: Bug  (was: Improvement)

> Node failure during MV/2i update after streaming can have incomplete MV/2i 
> when restarted
> -
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.0 rc1
>
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10130) Node failure during MV/2i update after streaming can have incomplete MV/2i when restarted

2015-08-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10130:
---
Fix Version/s: 3.0.0 rc1

> Node failure during MV/2i update after streaming can have incomplete MV/2i 
> when restarted
> -
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.0 rc1
>
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10130) Node failure during MV/2i update after streaming can have incomplete MV/2i when restarted

2015-08-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10130:
---
Assignee: Carl Yeksigian

> Node failure during MV/2i update after streaming can have incomplete MV/2i 
> when restarted
> -
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.0.0 rc1
>
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10068) Batchlog replay fails with exception after a node is decommissioned

2015-08-19 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703510#comment-14703510
 ] 

Joel Knighton commented on CASSANDRA-10068:
---

I haven't had any luck repro-ing this with a dtest - the timing issues are too 
difficult.

I've narrowed down the cause slightly (maybe?) through watching Jepsen tests 
that reproduce the issue.

The null gossip entries are present on nodes that crash at a particular point 
(seemingly quite late) in the decommission of the node. When started again (after 
the decommission has finished without an error present), they have the null 
entry. A restart removes this null entry.

Hope this helps.

> Batchlog replay fails with exception after a node is decommissioned
> ---
>
> Key: CASSANDRA-10068
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10068
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Marcus Eriksson
> Fix For: 3.0 beta 2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test of materialized views that 
> crashes and decommissions nodes throughout the test.
> At the conclusion of the test, a batchlog replay is initiated through 
> nodetool and hits the following assertion due to a missing host ID: 
> https://github.com/apache/cassandra/blob/3413e557b95d9448b0311954e9b4f53eaf4758cd/src/java/org/apache/cassandra/service/StorageProxy.java#L1197
> A nodetool status on the node with failed batchlog replay shows the following 
> entry for the decommissioned node:
> DN  10.0.0.5  ?  256  ?   null  rack1
> On the unaffected nodes, there is no entry for the decommissioned node as 
> expected.
> There are occasional hits of the same assertions for logs in other nodes; it 
> looks like the issue might occasionally resolve itself, but one node seems to 
> have the errant null entry indefinitely.
> In logs for the nodes, this possibly unrelated exception also appears:
> java.lang.RuntimeException: Trying to get the view natural endpoint on a 
> non-data replica
>   at 
> org.apache.cassandra.db.view.MaterializedViewUtils.getViewNaturalEndpoint(MaterializedViewUtils.java:91)
>  ~[apache-cassandra-3.0.0-alpha1-SNAPSHOT.jar:3.0.0-alpha1-SNAPSHOT]
> I have a running cluster with the issue on my machine; it is also repeatable.
> Nothing stands out in the logs of the decommissioned node (n4) for me. The 
> logs of each node in the cluster are attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10103) Windows dtest 3.0: incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails

2015-08-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-10103:
---

Assignee: Joshua McKenzie

> Windows dtest 3.0: 
> incremental_repair_test.py:TestIncRepair.sstable_repairedset_test fails
> --
>
> Key: CASSANDRA-10103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10103
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> {noformat}
> File "D:\Python27\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-3.0_dtest_win32\cassandra-dtest\incremental_repair_test.py",
>  line 165, in sstable_repairedset_test
> self.assertGreaterEqual(len(uniquematches), 2)
>   File "D:\Python27\lib\unittest\case.py", line 948, in assertGreaterEqual
> self.fail(self._formatMessage(msg, standardMsg))
>   File "D:\Python27\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> '0 not greater than or equal to 2\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-pq7lpx\ndtest: DEBUG: []\n- >> end 
> captured logging << -'
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_disabled_test/history/]
> Env: both CI and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10107) Windows dtest 3.0: TestScrub and TestScrubIndexes failures

2015-08-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-10107.
-
Resolution: Duplicate

Resolving as duplicate of CASSANDRA-10109. Same error message, should be 
resolved there.

> Windows dtest 3.0: TestScrub and TestScrubIndexes failures
> --
>
> Key: CASSANDRA-10107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10107
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Windows dtest 3.0: TestScrub / TestScrubIndexes failures
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> scrub_test.py:TestScrub.test_standalone_scrub
> scrub_test.py:TestScrub.test_standalone_scrub_essential_files_only
> scrub_test.py:TestScrubIndexes.test_standalone_scrub
> Somewhat different messages between CI and local, but consistent on env. 
> Locally, I see:
> {noformat}
> dtest: DEBUG: ERROR 20:41:20 This platform does not support atomic directory 
> streams (SecureDirectoryStream); race conditions when loading sstable files 
> could occurr
> {noformat}
> Consistently fails, both on CI and locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10105) Windows dtest 3.0: TestOfflineTools failures

2015-08-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-10105.
-
Resolution: Duplicate

> Windows dtest 3.0: TestOfflineTools failures
> 
>
> Key: CASSANDRA-10105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10105
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> offline_tools_test.py:TestOfflineTools.sstablelevelreset_test
> offline_tools_test.py:TestOfflineTools.sstableofflinerelevel_test
> Both tests fail with the following:
> {noformat}
> Traceback (most recent call last):
>   File "c:\src\cassandra-dtest\dtest.py", line 532, in tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR [main] 2015-08-17 
> 15:55:05,060 NoSpamLogger.java:97 - This platform does not support atomic 
> directory streams (SecureDirectoryStream); race conditions when loading 
> sstable files could occurr']
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/jmx_test/TestJMX/netstats_test/history/]
> Env: ci and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10105) Windows dtest 3.0: TestOfflineTools failures

2015-08-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703484#comment-14703484
 ] 

Joshua McKenzie commented on CASSANDRA-10105:
-

Closing as duplicate of CASSANDRA-10109 as it's the same issue and 
[~stefania_alborghetti] will be addressing it there.

> Windows dtest 3.0: TestOfflineTools failures
> 
>
> Key: CASSANDRA-10105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10105
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0.x
>
>
> offline_tools_test.py:TestOfflineTools.sstablelevelreset_test
> offline_tools_test.py:TestOfflineTools.sstableofflinerelevel_test
> Both tests fail with the following:
> {noformat}
> Traceback (most recent call last):
>   File "c:\src\cassandra-dtest\dtest.py", line 532, in tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR [main] 2015-08-17 
> 15:55:05,060 NoSpamLogger.java:97 - This platform does not support atomic 
> directory streams (SecureDirectoryStream); race conditions when loading 
> sstable files could occurr']
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/jmx_test/TestJMX/netstats_test/history/]
> Env: ci and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9738) Migrate key-cache to be fully off-heap

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703452#comment-14703452
 ] 

Ariel Weisberg commented on CASSANDRA-9738:
---

In OHCKeyCache, when calculating the length of strings you are using 
String.length(), which returns the number of UTF-16 code units - a wrong answer 
for the byte length if you encode using UTF-8.

We went to some length to come up with an "optimal" no-garbage string writing 
function for BufferedDataOutputStream (you can see it as a static method in 
UnbufferedDataOutputStream). It would be great if we could do the same thing 
here and not allocate byte arrays for the encoded names. Would it be crazy for 
it to take a lambda that describes how to write the generated byte array to some 
output? Then we could use the same code everywhere. [~benedict] what do you 
think?

Since you are using a short length prefix, what is your level of confidence that 
it will always be enough? How does it fail if it isn't?

The 2i path testing is much appreciated. The utests didn't seem to make it into 
cassci.
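To illustrate the point about String.length() (a generic sketch, not Cassandra code): it counts UTF-16 code units, which diverges from the UTF-8 encoded byte length as soon as any non-ASCII character is present.

```java
import java.nio.charset.StandardCharsets;

public class Utf8LengthDemo {
    public static void main(String[] args) {
        // 'é' (U+00E9) is one char in UTF-16 but two bytes in UTF-8.
        String s = "caf\u00e9";
        int charCount = s.length();                                // UTF-16 code units
        int utf8Bytes = s.getBytes(StandardCharsets.UTF_8).length; // encoded size
        System.out.println(charCount + " chars, " + utf8Bytes + " UTF-8 bytes");
        // -> 4 chars, 5 UTF-8 bytes
    }
}
```

Sizing a length prefix from String.length() would therefore under-count here by one byte.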

> Migrate key-cache to be fully off-heap
> --
>
> Key: CASSANDRA-9738
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9738
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0 beta 2
>
>
> Key cache still uses a concurrent map on-heap. This could go to off-heap and 
> feels doable now after CASSANDRA-8099.
> Evaluation should be done in advance based on a POC to prove that pure 
> off-heap counter cache buys a performance and/or gc-pressure improvement.
> In theory, elimination of on-heap management of the map should buy us some 
> benefit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10043) A NullPointerException is thrown if the column name is unknown for an IN relation

2015-08-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703449#comment-14703449
 ] 

Robert Stupp commented on CASSANDRA-10043:
--

+1

> A NullPointerException is thrown if the column name is unknown for an IN 
> relation
> -
>
> Key: CASSANDRA-10043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10043
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Attachments: 10043-2.2.txt, 10043-3.0.txt
>
>
> {code}
> cqlsh:test> create table newTable (a int, b int, c int, primary key(a, b));
> cqlsh:test> select * from newTable where d in (1, 2);
> ServerError: <ErrorMessage code=0000 [Server error] message="java.lang.NullPointerException">
> {code}
> The problem seems to occur only for {{IN}} restrictions 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10083) Revert AutoSavingCache.IStreamFactory to return OutputStream

2015-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703442#comment-14703442
 ] 

Blake Eggleston commented on CASSANDRA-10083:
-

here are the patches for 2.2 & 3.0, with the 10082 changes rolled into the 
patch:

https://github.com/bdeggleston/cassandra/tree/10083-2-squashed
https://github.com/bdeggleston/cassandra/tree/10083-2-squashed-3.0

> Revert AutoSavingCache.IStreamFactory to return OutputStream
> 
>
> Key: CASSANDRA-10083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10083
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Attachments: 
> 0002-reverting-autosaving-cache-stream-factory-to-use-out.patch
>
>
> CASSANDRA-9265 changed the AutoSavingCache.IStreamFactory to return a 
> SequentialWriter, instead of an OutputStream. This makes implementing custom 
> factories much less straightforward / clean.
> The attached patch reverts the change; however, it depends on CASSANDRA-10082 
> to work properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10082) Transactional classes shouldn't also implement streams, channels, etc

2015-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703441#comment-14703441
 ] 

Blake Eggleston commented on CASSANDRA-10082:
-

bq. I'd say this version should just be folded into your other commit that 
requires the change

Given planned refactoring for 3.x, that seems like the most reasonable thing to 
do. I'll roll your alternative into 10083.

> Transactional classes shouldn't also implement streams, channels, etc
> -
>
> Key: CASSANDRA-10082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10082
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Attachments: 
> 0001-replacing-SequentialWriter-OutputStream-extension-wi.patch, 10082-2.txt
>
>
> Since the close method on the Transactional interface means "abort if commit 
> hasn't been called", mixing Transactional and AutoCloseable interfaces where 
> close means "we're done here" is pretty much never the right thing to do. 
> The only class that does this is SequentialWriter. It's not used in a way 
> that causes a problem, but it's still a potential hazard for future 
> development.
> The attached patch replaces the SequentialWriter OutputStream implementation 
> with a wrapper class that implements the expected behavior on close, and adds 
> a warning to the Transactional interface. It also adds a unit test that 
> demonstrates the problem without the fix.
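A minimal sketch of the hazard described above (class and method names are invented for illustration, not the actual Cassandra interfaces): when close() means "abort unless commit() was called", a try-with-resources block that forgets to commit silently discards the work, while a wrapper with conventional close semantics finishes it.

```java
public class TransactionalCloseDemo {
    // Hypothetical Transactional contract: close() aborts if commit() was never called.
    interface Transactional extends AutoCloseable {
        void commit();
        @Override void close();
    }

    static class Writer implements Transactional {
        boolean committed, aborted;
        public void commit() { committed = true; }
        public void close()  { if (!committed) aborted = true; }
    }

    // Wrapper restoring conventional semantics: close == finish (commit) + close.
    static class FinishOnClose implements AutoCloseable {
        final Writer w;
        FinishOnClose(Writer w) { this.w = w; }
        public void close() { w.commit(); w.close(); }
    }

    public static void main(String[] args) {
        Writer plain = new Writer();
        try (Writer w = plain) { /* forgot to call commit() */ }
        System.out.println(plain.aborted);     // true: work silently thrown away

        Writer wrapped = new Writer();
        try (FinishOnClose f = new FinishOnClose(wrapped)) { /* no commit needed */ }
        System.out.println(wrapped.committed); // true: wrapper finished the work
    }
}
```

This is the mismatch the patch's wrapper class is meant to guard against.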



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: ninja: remove extra print output from nodetool.bat

2015-08-19 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk ceae5acd1 -> 2c3167722


ninja: remove extra print output from nodetool.bat


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb8ff5d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb8ff5d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb8ff5d3

Branch: refs/heads/trunk
Commit: cb8ff5d3a9dcfff3ee8c44ebc818b46692294283
Parents: 43b18d1
Author: Joshua McKenzie 
Authored: Wed Aug 19 13:42:07 2015 -0400
Committer: Joshua McKenzie 
Committed: Wed Aug 19 13:42:07 2015 -0400

--
 bin/nodetool.bat | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb8ff5d3/bin/nodetool.bat
--
diff --git a/bin/nodetool.bat b/bin/nodetool.bat
index 416aca5..1d3c4e5 100644
--- a/bin/nodetool.bat
+++ b/bin/nodetool.bat
@@ -26,7 +26,6 @@ if NOT DEFINED JAVA_HOME goto :err
 set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.logdir="%CASSANDRA_HOME%\logs"
 set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.storagedir="%CASSANDRA_HOME%\data"
 
-echo Starting NodeTool
 "%JAVA_HOME%\bin\java" -cp %CASSANDRA_CLASSPATH% %CASSANDRA_PARAMS% -Dlogback.configurationFile=logback-tools.xml org.apache.cassandra.tools.NodeTool %*
 goto finally
 



[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-19 Thread jmckenzie
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c316772
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c316772
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c316772

Branch: refs/heads/trunk
Commit: 2c31677228626c4c00a38841944fd662bd23b2f0
Parents: ceae5ac cb8ff5d
Author: Joshua McKenzie 
Authored: Wed Aug 19 13:42:17 2015 -0400
Committer: Joshua McKenzie 
Committed: Wed Aug 19 13:42:17 2015 -0400

--
 bin/nodetool.bat | 1 -
 1 file changed, 1 deletion(-)
--




cassandra git commit: ninja: remove extra print output from nodetool.bat

2015-08-19 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 43b18d180 -> cb8ff5d3a


ninja: remove extra print output from nodetool.bat


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb8ff5d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb8ff5d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb8ff5d3

Branch: refs/heads/cassandra-3.0
Commit: cb8ff5d3a9dcfff3ee8c44ebc818b46692294283
Parents: 43b18d1
Author: Joshua McKenzie 
Authored: Wed Aug 19 13:42:07 2015 -0400
Committer: Joshua McKenzie 
Committed: Wed Aug 19 13:42:07 2015 -0400

--
 bin/nodetool.bat | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb8ff5d3/bin/nodetool.bat
--
diff --git a/bin/nodetool.bat b/bin/nodetool.bat
index 416aca5..1d3c4e5 100644
--- a/bin/nodetool.bat
+++ b/bin/nodetool.bat
@@ -26,7 +26,6 @@ if NOT DEFINED JAVA_HOME goto :err
 set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.logdir="%CASSANDRA_HOME%\logs"
 set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.storagedir="%CASSANDRA_HOME%\data"
 
-echo Starting NodeTool
 "%JAVA_HOME%\bin\java" -cp %CASSANDRA_CLASSPATH% %CASSANDRA_PARAMS% -Dlogback.configurationFile=logback-tools.xml org.apache.cassandra.tools.NodeTool %*
 goto finally
 



[jira] [Created] (CASSANDRA-10130) Node failure during MV/2i update after streaming can have incomplete MV/2i when restarted

2015-08-19 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-10130:
--

 Summary: Node failure during MV/2i update after streaming can have 
incomplete MV/2i when restarted
 Key: CASSANDRA-10130
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Priority: Minor


Since the MV/2i update happens after SSTables are received, a node failure 
during the MV/2i update can leave the received SSTables live when the node is 
restarted, while the MV/2i are only partially up to date.

We can add some kind of tracking mechanism to automatically rebuild at startup, 
or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10007) Repeated rows in paged result

2015-08-19 Thread Steve Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703413#comment-14703413
 ] 

Steve Wang commented on CASSANDRA-10007:


Hi Benjamin! Yes, I'm still able to reproduce this error. What I do is start a 
3-node cluster in CCM and then run the paging-test.py script that Adam attached 
in this ticket. The first two values are as expected, but the next few values, 
which should be 100, oscillate between 104 and 103.

> Repeated rows in paged result
> -
>
> Key: CASSANDRA-10007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Holmberg
>Assignee: Benjamin Lerer
>  Labels: client-impacting
> Fix For: 3.x
>
> Attachments: paging-test.py
>
>
> We noticed an anomaly in paged results while testing against 3.0.0-alpha1. It 
> seems that unbounded selects can return rows repeated at page boundaries. 
> Furthermore, the number of repeated rows seems to dither in count across 
> consecutive runs of the same query.
> Does not reproduce on 2.2.0 and earlier.
> I also noted that this behavior only manifests on multi-node clusters.
> The attached script shows this behavior when run against 3.0.0-alpha1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-19 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ceae5acd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ceae5acd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ceae5acd

Branch: refs/heads/trunk
Commit: ceae5acd131359ab1f762a68da171ac4c2c4519a
Parents: cabdb03 43b18d1
Author: Sam Tunnicliffe 
Authored: Wed Aug 19 18:12:28 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Wed Aug 19 18:12:28 2015 +0100

--
 build.xml | 2 ++
 1 file changed, 2 insertions(+)
--




[1/3] cassandra git commit: Re-enable brief output for junit logs from CASSANDRA-9528

2015-08-19 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 96d41f0e0 -> 43b18d180
  refs/heads/trunk cabdb03a9 -> ceae5acd1


Re-enable brief output for junit logs from CASSANDRA-9528


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43b18d18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43b18d18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43b18d18

Branch: refs/heads/cassandra-3.0
Commit: 43b18d1802d90fcfaf608754c25de1ee487d8282
Parents: 96d41f0
Author: Sam Tunnicliffe 
Authored: Wed Aug 19 18:11:51 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Wed Aug 19 18:11:51 2015 +0100

--
 build.xml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/43b18d18/build.xml
--
diff --git a/build.xml b/build.xml
index 8df738c..7ed90d3 100644
--- a/build.xml
+++ b/build.xml
@@ -1216,6 +1216,8 @@
 
 
 
+
+

 
   



[2/3] cassandra git commit: Re-enable brief output for junit logs from CASSANDRA-9528

2015-08-19 Thread samt
Re-enable brief output for junit logs from CASSANDRA-9528


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43b18d18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43b18d18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43b18d18

Branch: refs/heads/trunk
Commit: 43b18d1802d90fcfaf608754c25de1ee487d8282
Parents: 96d41f0
Author: Sam Tunnicliffe 
Authored: Wed Aug 19 18:11:51 2015 +0100
Committer: Sam Tunnicliffe 
Committed: Wed Aug 19 18:11:51 2015 +0100

--
 build.xml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/43b18d18/build.xml
--
diff --git a/build.xml b/build.xml
index 8df738c..7ed90d3 100644
--- a/build.xml
+++ b/build.xml
@@ -1216,6 +1216,8 @@
 
 
 
+
+

 
   



[jira] [Commented] (CASSANDRA-9898) cqlsh crashes if it load a utf-8 file.

2015-08-19 Thread Yasuharu Goto (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703391#comment-14703391
 ] 

Yasuharu Goto commented on CASSANDRA-9898:
--

Ping [~carlyeks].
What should I do as a next step?

> cqlsh crashes if it load a utf-8 file.
> --
>
> Key: CASSANDRA-9898
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9898
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux, os x yosemite.
>Reporter: Yasuharu Goto
>Assignee: Yasuharu Goto
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x
>
> Attachments: cassandra-2.1-9898.txt, cassandra-2.2-9898.txt
>
>
> cqlsh crashes when it load a cql script file encoded in utf-8.
> This is a reproduction procedure.
> {noformat}
> $cat ./test.cql
> // 日本語のコメント
> use system;
> select * from system.peers;
> $cqlsh --version
> cqlsh 5.0.1
> $cqlsh -f ./test.cql
> Traceback (most recent call last):
>   File "./cqlsh", line 2459, in 
> main(*read_options(sys.argv[1:], os.environ))
>   File "./cqlsh", line 2451, in main
> shell.cmdloop()
>   File "./cqlsh", line 940, in cmdloop
> line = self.get_input_line(self.prompt)
>   File "./cqlsh", line 909, in get_input_line
> self.lastcmd = self.stdin.readline()
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py",
>  line 675, in readline
> return self.reader.readline(size)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py",
>  line 530, in readline
> data = self.read(readsize, firstline=True)
>   File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py",
>  line 477, in read
> newchars, decodedbytes = self.decode(data, self.errors)
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 3: 
> ordinal not in range(128)
> {noformat}
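The failure above is an encoding mismatch: the script's UTF-8 bytes are decoded with a default ASCII codec. The same mismatch can be sketched in Java (a generic illustration of the failure mode, not the cqlsh fix):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class DecodeMismatchDemo {
    public static void main(String[] args) {
        byte[] utf8 = "\u65e5\u672c\u8a9e".getBytes(StandardCharsets.UTF_8); // "日本語"
        try {
            // Strict ASCII decoding rejects the multi-byte sequences, analogous
            // to Python 2's default 'ascii' codec raising UnicodeDecodeError.
            StandardCharsets.US_ASCII.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(utf8));
            System.out.println("decoded as ASCII");
        } catch (CharacterCodingException e) {
            System.out.println("ASCII decode failed");
        }
        // Decoding with the encoding the bytes were actually written in succeeds.
        System.out.println(new String(utf8, StandardCharsets.UTF_8));
    }
}
```

The attached patches take the equivalent approach of reading the input with an explicit UTF-8 codec instead of the platform default.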



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10100) Windows dtest 3.0: commitlog_test.py:TestCommitLog.stop_failure_policy_test fails

2015-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703346#comment-14703346
 ] 

Paulo Motta commented on CASSANDRA-10100:
-

Waiting for review on [cassandra-dtest 
PR|https://github.com/riptano/cassandra-dtest/pull/479].

> Windows dtest 3.0: commitlog_test.py:TestCommitLog.stop_failure_policy_test 
> fails
> -
>
> Key: CASSANDRA-10100
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10100
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Paulo Motta
>  Labels: Windows
> Fix For: 3.0.x
>
>
> {noformat}
> FAIL: stop_failure_policy_test (commitlog_test.TestCommitLog)
> --
> Traceback (most recent call last):
>   File "c:\src\cassandra-dtest\commitlog_test.py", line 258, in 
> stop_failure_policy_test
> self.assertTrue(failure, "Cannot find the commitlog failure message in 
> logs")
> AssertionError: Cannot find the commitlog failure message in logs
>  >> begin captured logging << 
> {noformat}
> Failure history: 
> [consistent|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest_win32/17/testReport/junit/commitlog_test/TestCommitLog/small_segment_size_test/history/]
> Env: Both CI and local



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7771) Allow multiple 2ndary index on the same column

2015-08-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1470#comment-1470
 ] 

Sam Tunnicliffe commented on CASSANDRA-7771:


Allowing multiple indexes on a single column causes a dtest failure (it was 
previously expecting the creation of a second index on a map column to be 
rejected). PR for that 
[here|https://github.com/riptano/cassandra-dtest/pull/481]

> Allow multiple 2ndary index on the same column
> --
>
> Key: CASSANDRA-7771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7771
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sylvain Lebresne
>Assignee: Sam Tunnicliffe
>  Labels: client-impacting
> Fix For: 3.0 beta 1
>
>
> Currently, the code assumes that we'll only have one 2ndary index per column. 
> This has been reasonable so far but stops being so with CASSANDRA-6382 (you 
> might want to index multiple fields of the same UDT column) and 
> CASSANDRA-7458 (you may want to have one "normal" index and multiple 
> functional indexes for the same column). So we should consider removing that 
> assumption in the code, which mainly concerns 2 places:
> # in the schema: each ColumnDefinition only has info for one index. This 
> part should probably be tackled in CASSANDRA-6717, so I'm marking this issue 
> as a follow-up of CASSANDRA-6717.
> # in the 2ndary index API: this is the part I'm suggesting we fix in this 
> issue.





[jira] [Commented] (CASSANDRA-7771) Allow multiple 2ndary index on the same column

2015-08-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703315#comment-14703315
 ] 

Sam Tunnicliffe commented on CASSANDRA-7771:


This is ready for review. Unfortunately, it's tightly coupled to 
CASSANDRA-9459, so it needs to be looked at in that broader context. The branch 
is [here|https://github.com/beobal/cassandra/tree/9459-wip] where most of the 
relevant stuff is in 
{code}
commit fe9b0fe7cdaaf9a23bd56586a074637efb187ef6
Author: Sam Tunnicliffe 
Date:   Tue Aug 18 20:36:56 2015 +0100

Support for multiple indexes on a single column
{code}

> Allow multiple 2ndary index on the same column
> --
>
> Key: CASSANDRA-7771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7771
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sylvain Lebresne
>Assignee: Sam Tunnicliffe
>  Labels: client-impacting
> Fix For: 3.0 beta 1
>
>
> Currently, the code assumes that we'll only have one 2ndary index per column. 
> This has been reasonable so far but stops being so with CASSANDRA-6382 (you 
> might want to index multiple fields of the same UDT column) and 
> CASSANDRA-7458 (you may want to have one "normal" index and multiple 
> functional indexes for the same column). So we should consider removing that 
> assumption in the code, which mainly concerns 2 places:
> # in the schema: each ColumnDefinition only has info for one index. This 
> part should probably be tackled in CASSANDRA-6717, so I'm marking this issue 
> as a follow-up of CASSANDRA-6717.
> # in the 2ndary index API: this is the part I'm suggesting we fix in this 
> issue.





[jira] [Updated] (CASSANDRA-9459) SecondaryIndex API redesign

2015-08-19 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9459:
---
Reviewer: Sylvain Lebresne  (was: Sergio Bossa)

> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.





[jira] [Commented] (CASSANDRA-9459) SecondaryIndex API redesign

2015-08-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703314#comment-14703314
 ] 

Sam Tunnicliffe commented on CASSANDRA-9459:


I've pushed some further commits which I think make this ready for another 
round of review (pinging [~slebresne])

Outstanding issues (some of which may be split out to follow up tickets):

* I'd particularly like some feedback on the way that 
{{SIM.WriteTimeTransaction#onUpdated}} and {{Index#updateRow}} handle Memtable 
updates. Rather than pass the existing, update and reconciled versions of the 
row to every registered Index, and have each of them potentially perform a very 
similar set of diff operations, the diffing is done once (in the transaction) 
and only the deltas are passed to the {{Indexes}} - one row containing the 
subset of the existing row that is now gone, another with the subset of the 
merged row that was added in the operation. This feels a bit counter-intuitive 
and writing the javadoc on {{Index#updateRow}} was tough, so I'm a bit 
concerned that this is going to be tricky for implementors to work with (but I 
don't want to duplicate the diffing if we can avoid it).
* Provide a better API for (re)building indexes. The current approach assumes 
that indexes should always be built from the merged view of data in SSTables, 
but this may not always be the case. That said, this is true for most existing 
implementations, and so is optimised to perform only a single pass through the 
data. I don't want to prohibit that optimisation, so some further thought is 
required.
* Lookups in {{SecondaryIndexManager}} could certainly be improved with some 
better datastructures, rather than always resorting to a scan through the 
entire collection of registered indexes.
* There is a mismatch between the name of an index as stored in schema and 
the value returned from Index.getIndexName, which for the builtin index impls 
is the name of the underlying index CFS. This leaks into a number of places, 
notably around (re)building indexes. I've opened CASSANDRA-10127 for this.
* I'm not entirely happy with the way we validate restrictions using 
{{Index.supportsExpression}}. It seems a bit blunt, but I haven't been able to 
come up with anything better yet.
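
The delta handling described in the first bullet can be sketched roughly as follows. This is only an illustration of the shape under discussion: the {{Index}} interface and the plain column-to-value maps are stand-ins for the API being designed, not the committed interfaces.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for the Index API under discussion; a "row" is
// modelled as a simple column -> value map rather than the engine's types.
interface Index {
    // Receives only deltas: the subset of the existing row that is now
    // gone, and the subset of the merged row that was added.
    void updateRow(Map<String, String> gone, Map<String, String> added);
}

public class WriteTimeTransactionSketch {
    // The diff between the existing and merged rows is computed once,
    // then only the deltas are handed to every registered index.
    static void onUpdated(Map<String, String> existing,
                          Map<String, String> merged,
                          List<Index> indexes) {
        Map<String, String> gone = new HashMap<>();
        for (Map.Entry<String, String> e : existing.entrySet())
            if (!e.getValue().equals(merged.get(e.getKey())))
                gone.put(e.getKey(), e.getValue());
        Map<String, String> added = new HashMap<>();
        for (Map.Entry<String, String> e : merged.entrySet())
            if (!e.getValue().equals(existing.get(e.getKey())))
                added.put(e.getKey(), e.getValue());
        for (Index index : indexes)
            index.updateRow(gone, added);
    }

    public static void main(String[] args) {
        Index printer = (gone, added) ->
                System.out.println("gone=" + gone + " added=" + added);
        onUpdated(Map.of("a", "1", "b", "2"),
                  Map.of("a", "1", "b", "3", "c", "4"),
                  List.of(printer));
    }
}
```

The upside is that the potentially expensive diff runs once per update rather than once per registered index; the cost, as noted above, is that implementors see deltas instead of the full existing/update/merged rows.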


I've avoided squashing the [wip 
branch|https://github.com/beobal/cassandra/tree/9459-wip] since people have 
already commented on that, but I have had to rebase it several times, so 
although the commit history has been overwritten, it's remained more or less 
semantically consistent.


> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.





[jira] [Comment Edited] (CASSANDRA-9459) SecondaryIndex API redesign

2015-08-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703305#comment-14703305
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-9459 at 8/19/15 4:38 PM:
-

[~sbtourist] in response to your comments (sorry for the delay) :

bq. It seems we've lost CASSANDRA-9196.

This was necessary because of the fact that each Index defined in schema was 
automatically registered with {{SecondaryIndexManager}}. So even if a 
particular custom index would not participate in any indexing or search 
activity on a certain node, due to external configuration or whatnot, its mere 
presence would mean that whenever new SSTables were loaded we would perform an 
expensive, and possibly pointless iteration through them. This shouldn't happen 
anymore, as the decision whether to register an index is now the responsibility 
of the index itself, so it can make that choice based on whatever criteria is 
necessary.


bq. It would be useful to distinguish between a cleanup and a compaction at the 
Indexer level, as indexes not backed by CFs will probably do nothing during 
compaction.

{{SecondaryIndexManager.TransactionType}} now allows impls to distinguish 
between {{WRITE_TIME}}, {{COMPACTION}} and {{CLEANUP}} transactions.

bq. Cells#reconcile doesn't call Indexer#updateCell in case of counters, but 
what if a third-party implementation wants to index them?

Indexes are not supported on counter columns directly. That said, the latest 
version changes the way updates are collected by {{WriteTimeTransaction}} with 
the effect that counter columns will be present in the Rows supplied to 
registered indexers.

bq. SIM#indexPartition seems to miss invoking Indexer#finish.

Thanks, good catch.

On the subsequent comment regarding CASSANDRA-8717, I haven't had a chance yet 
but I'll dig further into that shortly.




was (Author: beobal):
@sbtourist in response to your comments (sorry for the delay) :

bq. It seems we've lost CASSANDRA-9196.

This was necessary because of the fact that each Index defined in schema was 
automatically registered with {{SecondaryIndexManager}}. So even if a 
particular custom index would not participate in any indexing or search 
activity on a certain node, due to external configuration or whatnot, its mere 
presence would mean that whenever new SSTables were loaded we would perform an 
expensive, and possibly pointless iteration through them. This shouldn't happen 
anymore, as the decision whether to register an index is now the responsibility 
of the index itself, so it can make that choice based on whatever criteria is 
necessary.


bq. It would be useful to distinguish between a cleanup and a compaction at the 
Indexer level, as indexes not backed by CFs will probably do nothing during 
compaction.

{{SecondaryIndexManager.TransactionType}} now allows impls to distinguish 
between {{WRITE_TIME}}, {{COMPACTION}} and {{CLEANUP}} transactions.

bq. Cells#reconcile doesn't call Indexer#updateCell in case of counters, but 
what if a third-party implementation wants to index them?

Indexes are not supported on counter columns directly. That said, the latest 
version changes the way updates are collected by {{WriteTimeTransaction}} with 
the effect that counter columns will be present in the Rows supplied to 
registered indexers.

bq. SIM#indexPartition seems to miss invoking Indexer#finish.

Thanks, good catch.

On the subsequent comment regarding CASSANDRA-8717, I haven't had a chance yet 
but I'll dig further into that shortly.



> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.

[jira] [Commented] (CASSANDRA-9459) SecondaryIndex API redesign

2015-08-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703305#comment-14703305
 ] 

Sam Tunnicliffe commented on CASSANDRA-9459:


@sbtourist in response to your comments (sorry for the delay) :

bq. It seems we've lost CASSANDRA-9196.

This was necessary because of the fact that each Index defined in schema was 
automatically registered with {{SecondaryIndexManager}}. So even if a 
particular custom index would not participate in any indexing or search 
activity on a certain node, due to external configuration or whatnot, its mere 
presence would mean that whenever new SSTables were loaded we would perform an 
expensive, and possibly pointless iteration through them. This shouldn't happen 
anymore, as the decision whether to register an index is now the responsibility 
of the index itself, so it can make that choice based on whatever criteria is 
necessary.


bq. It would be useful to distinguish between a cleanup and a compaction at the 
Indexer level, as indexes not backed by CFs will probably do nothing during 
compaction.

{{SecondaryIndexManager.TransactionType}} now allows impls to distinguish 
between {{WRITE_TIME}}, {{COMPACTION}} and {{CLEANUP}} transactions.

bq. Cells#reconcile doesn't call Indexer#updateCell in case of counters, but 
what if a third-party implementation wants to index them?

Indexes are not supported on counter columns directly. That said, the latest 
version changes the way updates are collected by {{WriteTimeTransaction}} with 
the effect that counter columns will be present in the Rows supplied to 
registered indexers.

bq. SIM#indexPartition seems to miss invoking Indexer#finish.

Thanks, good catch.

On the subsequent comment regarding CASSANDRA-8717, I haven't had a chance yet 
but I'll dig further into that shortly.



> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.





[jira] [Commented] (CASSANDRA-9749) CommitLogReplayer continues startup after encountering errors

2015-08-19 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703274#comment-14703274
 ] 

Branimir Lambov commented on CASSANDRA-9749:


The failures are caused by the change of behaviour from CASSANDRA-8515. I will 
make a patch that fixes the test expectations shortly.

> CommitLogReplayer continues startup after encountering errors
> -
>
> Key: CASSANDRA-9749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9749
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Branimir Lambov
> Fix For: 2.2.1, 3.0 beta 1
>
> Attachments: 9749-coverage.tgz
>
>
> There are a few places where the commit log recovery method either skips 
> sections or just returns when it encounters errors.
> Specifically if it can't read the header here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L298
> Or if there are compressor problems here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L314
>  and here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L366
> Whether these are user-fixable or not, I think we should require more direct 
> user intervention (ie: fix what's wrong, or remove the bad file and restart) 
> since we're basically losing data.
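
The stricter policy the description asks for can be sketched minimally as below. {{readHeaderStrict}} and {{HeaderReader}} are hypothetical helper names, not the actual {{CommitLogReplayer}} internals; only the error-handling shape matters here.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical helper names; the real CommitLogReplayer is far more
// involved. The point is only the error-handling shape.
public class StrictReplaySketch {
    interface HeaderReader {
        Object read(File segment) throws IOException;
    }

    // Old behaviour: log the error and return, silently skipping the
    // segment (and losing its data). Stricter behaviour: abort startup
    // so the operator can fix what's wrong, or remove the bad file,
    // and restart.
    static Object readHeaderStrict(File segment, HeaderReader reader) {
        try {
            return reader.read(segment);
        } catch (IOException e) {
            throw new IllegalStateException(
                    "Unrecoverable error reading commit log header in "
                    + segment + "; fix or remove the file and restart", e);
        }
    }

    public static void main(String[] args) {
        try {
            readHeaderStrict(new File("CommitLog-5-1.log"),
                    segment -> { throw new IOException("corrupt header"); });
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```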





[jira] [Updated] (CASSANDRA-9673) Improve batchlog write path

2015-08-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9673:
--
Labels: performance  (was: )

> Improve batchlog write path
> ---
>
> Key: CASSANDRA-9673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9673
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Stefania
>  Labels: performance
> Fix For: 3.0 beta 2
>
> Attachments: 9673_001.tar.gz, 9673_004.tar.gz, 
> gc_times_first_node_patched_004.png, gc_times_first_node_trunk_004.png
>
>
> Currently we allocate an on-heap {{ByteBuffer}} to serialize the batched 
> mutations into, before sending it to a distant node, generating unnecessary 
> garbage (potentially a lot of it).
> With materialized views using the batchlog, it would be nice to optimise the 
> write path:
> - introduce a new verb ({{Batch}})
> - introduce a new message ({{BatchMessage}}) that would encapsulate the 
> mutations, expiration, and creation time (similar to {{HintMessage}} in 
> CASSANDRA-6230)
> - have MS serialize it directly instead of relying on an intermediate buffer
> To avoid merely shifting the temp buffer to the receiving side(s) we should 
> change the structure of the batchlog table to use a list or a map of 
> individual mutations.
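
The allocation being complained about, and the proposed direct serialization, can be sketched side by side. Mutations are modelled as opaque byte arrays and the method names are illustrative; the real code runs {{Mutation}} serializers against the messaging service's stream.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.List;

// Mutations are modelled as opaque byte[] payloads; the real code runs
// Mutation serializers. Method names are illustrative only.
public class BatchlogWriteSketch {
    // Current shape: size the batch, allocate an intermediate on-heap
    // buffer, serialize into it, then hand the buffer to messaging.
    // The buffer becomes garbage after the send.
    static ByteBuffer serializeWithTempBuffer(List<byte[]> mutations) {
        int size = 4;
        for (byte[] m : mutations)
            size += 4 + m.length;
        ByteBuffer buf = ByteBuffer.allocate(size);
        buf.putInt(mutations.size());
        for (byte[] m : mutations) {
            buf.putInt(m.length);
            buf.put(m);
        }
        buf.flip();
        return buf;
    }

    // Proposed shape: a dedicated BatchMessage serializes straight into
    // the connection's output stream, so no intermediate buffer exists.
    // (A ByteArrayOutputStream stands in for the socket stream here.)
    static byte[] serializeDirect(List<byte[]> mutations) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeInt(out, mutations.size());
        for (byte[] m : mutations) {
            writeInt(out, m.length);
            out.write(m, 0, m.length);
        }
        return out.toByteArray();
    }

    private static void writeInt(ByteArrayOutputStream out, int v) {
        // big-endian, matching ByteBuffer's default byte order
        out.write(v >>> 24);
        out.write(v >>> 16);
        out.write(v >>> 8);
        out.write(v);
    }
}
```

Both shapes emit identical bytes; only the allocation pattern differs, which is where the per-batch garbage comes from.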





[jira] [Created] (CASSANDRA-10129) Windows utest 2.2: RecoveryManagerTest.testRecoverPITUnordered failure

2015-08-19 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-10129:
---

 Summary: Windows utest 2.2: 
RecoveryManagerTest.testRecoverPITUnordered failure
 Key: CASSANDRA-10129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10129
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Paulo Motta


{noformat}
FSWriteError in build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149)
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:359)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:167)
at 
org.apache.cassandra.db.commitlog.CommitLog.startUnsafe(CommitLog.java:441)
at 
org.apache.cassandra.db.commitlog.CommitLog.resetUnsafe(CommitLog.java:414)
at 
org.apache.cassandra.db.RecoveryManagerTest.testRecoverPITUnordered(RecoveryManagerTest.java:166)
Caused by: java.nio.file.AccessDeniedException: 
build\test\cassandra\commitlog;84\CommitLog-5-1439989060300.log
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126)
{noformat}

Failure started with build #89 but reverting CASSANDRA-9749 doesn't resolve it; 
I can reproduce the error locally even after revert/rebuild.

I've bashed my head against trying to get the CommitLogTests to behave on 
Windows enough times that I think we could use a new set of eyes on them.

Assigning to Paulo and taking review.





[jira] [Commented] (CASSANDRA-9749) CommitLogReplayer continues startup after encountering errors

2015-08-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703266#comment-14703266
 ] 

Ariel Weisberg commented on CASSANDRA-9749:
---

We did. 
http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-9749-2.2-testall/5/#showFailuresLink
 
Not sure what the story is. Maybe something changed in between?

> CommitLogReplayer continues startup after encountering errors
> -
>
> Key: CASSANDRA-9749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9749
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Branimir Lambov
> Fix For: 2.2.1, 3.0 beta 1
>
> Attachments: 9749-coverage.tgz
>
>
> There are a few places where the commit log recovery method either skips 
> sections or just returns when it encounters errors.
> Specifically if it can't read the header here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L298
> Or if there are compressor problems here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L314
>  and here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L366
> Whether these are user-fixable or not, I think we should require more direct 
> user intervention (ie: fix what's wrong, or remove the bad file and restart) 
> since we're basically losing data.





[jira] [Resolved] (CASSANDRA-9716) system_auth_ks_is_alterable_test dtest fails on trunk

2015-08-19 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-9716.
-
Resolution: Fixed

> system_auth_ks_is_alterable_test dtest fails on trunk
> -
>
> Key: CASSANDRA-9716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9716
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Sam Tunnicliffe
>Priority: Blocker
>
> On cassci and locally, the test logs an NPE from 
> {{o.a.c.cql3.UntypedResultSet$Row.getBoolean}}: [cassci 
> run|http://cassci.datastax.com/view/trunk/job/trunk_dtest/296/testReport/junit/auth_test/TestAuth/system_auth_ks_is_alterable_test/]
> EDIT: the command to run just this test is {{CASSANDRA_VERSION=git:trunk 
> nosetests auth_test.py:TestAuth.system_auth_ks_is_alterable_test}}





[jira] [Reopened] (CASSANDRA-9749) CommitLogReplayer continues startup after encountering errors

2015-08-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reopened CASSANDRA-9749:


This commit breaks CommitLogTest on both Windows and Linux.

[Error 
example|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_utest/lastCompletedBuild/testReport/org.apache.cassandra.db.commitlog/CommitLogTest/testRecoveryWithIdMismatch/]

A revert of this commit fixes that. Did we not confirm with CI on this before 
committing?

> CommitLogReplayer continues startup after encountering errors
> -
>
> Key: CASSANDRA-9749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9749
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Branimir Lambov
> Fix For: 2.2.1, 3.0 beta 1
>
> Attachments: 9749-coverage.tgz
>
>
> There are a few places where the commit log recovery method either skips 
> sections or just returns when it encounters errors.
> Specifically if it can't read the header here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L298
> Or if there are compressor problems here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L314
>  and here: 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L366
> Whether these are user-fixable or not, I think we should require more direct 
> user intervention (ie: fix what's wrong, or remove the bad file and restart) 
> since we're basically losing data.




