[jira] [Created] (CASSANDRA-10698) Static column performance with DISTINCT

2015-11-13 Thread Brice Figureau (JIRA)
Brice Figureau created CASSANDRA-10698:
--

 Summary: Static column performance with DISTINCT
 Key: CASSANDRA-10698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10698
 Project: Cassandra
  Issue Type: Bug
  Components: core
 Environment: Linux, cassandra 2.1.11
Reporter: Brice Figureau
 Attachments: bug-slow-distinct.tar.gz

As described on the mailing list, with the following schema:

{code:sql}
CREATE TABLE akka.messages (
persistence_id text,
partition_nr bigint,
sequence_nr bigint,
message blob,
used boolean static,
PRIMARY KEY ((persistence_id, partition_nr), sequence_nr)
) WITH CLUSTERING ORDER BY (sequence_nr ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 216000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}

The following query:
{code:sql}
SELECT used from akka.messages WHERE
  persistence_id = 'player-SW11f03e20b8802000' AND
  partition_nr = 0;
{code}

is quite slow, and as slow as the {{distinct}} version:
{code:sql}
SELECT DISTINCT used from akka.messages WHERE
  persistence_id = 'player-SW11f03e20b8802000' AND
  partition_nr = 0;
{code}

As shown by this trace from a small, unloaded 3-node cluster:
{noformat}
 activity | timestamp | source | source_elapsed
----------+-----------+--------+---------------
 Execute CQL3 query | 2015-11-13 11:04:41.771000 | 192.168.168.12 | 0
 Parsing SELECT DISTINCT used [SharedPool-Worker-1] | 2015-11-13 11:04:41.771000 | 192.168.168.12 | 78
 READ message received from /192.168.168.12 [MessagingService-Incoming-/192.168.168.12] | 2015-11-13 11:04:41.772000 | 192.168.168.29 | 22
 Preparing statement [SharedPool-Worker-1] | 2015-11-13 11:04:41.772000 | 192.168.168.12 | 271
 Executing single-partition query on messages [SharedPool-Worker-4] | 2015-11-13 11:04:41.772000 | 192.168.168.29 | 424
 reading data from /192.168.168.29 [SharedPool-Worker-1] | 2015-11-13 11:04:41.772000 | 192.168.168.12 | 642
 Acquiring sstable references [SharedPool-Worker-4] | 2015-11-13 11:04:41.772000 | 192.168.168.29 | 445
 Sending READ message to /192.168.168.29 [MessagingService-Outgoing-/192.168.168.29] | 2015-11-13 11:04:41.772000 | 192.168.168.12 | 738
 Merging memtable tombstones [SharedPool-Worker-4] | 2015-11-13 11:04:41.772000 | 192.168.168.29 | 476
 Key cache hit for sstable 1126 [SharedPool-Worker-4] | 2015-11-13 11:04:41.773000 | 192.168.168.29 | 560
 Seeking to partition beginning in data file [SharedPool-Worker-4] | 2015-11-13 11:04:41.773000 | 192.168.168.29 | 592
 Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones [SharedPool-Worker-4] | 2015-11-13 11:04:41.773000 | 192.168.168.29 | 960
 Merging data from memtables and 1 sstables [SharedPool-Worker-4] | 2015-11-13 11:04:41.774000 | 192.168.168.29 | 971
 Read 1 live and 0 tombstone cells [SharedPool-Worker-4] | 2015-11-13 11:04:47.301000 | 192.168.168.29 | 529270
 Enqueuing response to /192.168.168.12 [SharedPool-Worker-4] | 2015-11-13 11:04:47.316000 | 192.168.168.29 | 544885
 Sending REQUEST_RESPONSE message to /192.168.168.12 [MessagingService-Outgoing-/192.168.168.12] | 2015-11-13 11:04:47.317000 | 192.168.168.29 | 545042
 REQUEST_RESPONSE message received from /192.168.168.29 [MessagingService-Incoming-/192.168.168.29] | 2015-11-13 11:04:47.429000 | 192.168.168.12 | 657918
{noformat}
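For completeness, a minimal sketch of how a trace like the one above can be
collected programmatically. It assumes the DataStax Java driver (2.1.x) on the
classpath and reuses one of the node addresses from the trace as contact point;
it is only an illustration, not part of the attached reproduction
(bug-slow-distinct.tar.gz):

{code:java}
import com.datastax.driver.core.*;

public class TraceDistinct
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("192.168.168.12").build();
             Session session = cluster.connect())
        {
            Statement stmt = new SimpleStatement(
                "SELECT DISTINCT used FROM akka.messages " +
                "WHERE persistence_id = 'player-SW11f03e20b8802000' AND partition_nr = 0")
                .enableTracing();

            ResultSet rs = session.execute(stmt);
            QueryTrace trace = rs.getExecutionInfo().getQueryTrace();

            System.out.println("total duration (us): " + trace.getDurationMicros());
            for (QueryTrace.Event event : trace.getEvents())
                System.out.println(event); // activity, source and source_elapsed
        }
    }
}
{code}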

[jira] [Commented] (CASSANDRA-10280) Make DTCS work well with old data

2015-11-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003823#comment-15003823
 ] 

Marcus Eriksson commented on CASSANDRA-10280:
-

pushed a new commit to 
https://github.com/krummas/cassandra/commits/marcuse/10280 which deprecates 
max_sstable_age and defaults it to 1000 years

> Make DTCS work well with old data
> -
>
> Key: CASSANDRA-10280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10280
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.x
>
>
> Operational tasks become incredibly expensive if you keep around a long 
> timespan of data with DTCS - with default settings and 1 year of data, the 
> oldest window covers about 180 days. Bootstrapping a node with vnodes with 
> this data layout will force cassandra to compact very many sstables in this 
> window.
> We should probably put a cap on how big the biggest windows can get. We could 
> probably default this to something sane based on max_sstable_age (ie, say we 
> can reasonably handle 1000 sstables per node, then we can calculate how big 
> the windows should be to allow that)
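As an aside, the "about 180 days" figure in the description follows from DTCS's
geometric window growth. A small illustrative sketch, assuming a one-hour base
window and windows that grow by the default min_threshold of 4 (assumptions for
the illustration, not values quoted in the ticket):

{code:java}
public class DtcsWindowGrowth
{
    public static void main(String[] args)
    {
        long windowSeconds = 3600; // assumed one-hour base window
        int minThreshold = 4;      // assumed default min_threshold

        // Each tier's window is min_threshold times the previous one, so with a
        // year of data the oldest window ends up around 4^6 hours, i.e. ~170 days.
        for (int tier = 0; windowSeconds <= 365L * 24 * 3600; tier++)
        {
            System.out.printf("tier %d: %d hours (~%.1f days)%n",
                              tier, windowSeconds / 3600, windowSeconds / 86400.0);
            windowSeconds *= minThreshold;
        }
    }
}
{code}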



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10585) SSTablesPerReadHistogram seems wrong when row cache hit happened

2015-11-13 Thread Ivan Burmistrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003635#comment-15003635
 ] 

Ivan Burmistrov edited comment on CASSANDRA-10585 at 11/13/15 11:44 AM:


I prepared patches for versions 2.1, 2.2 and 3.0 (and it is not hard to prepare 
a patch for trunk).

An important note for the 2.2 and 3.0 versions:
SSTablePerReadHistogram is now an EstimatedHistogram in these versions.
For this histogram implementation, zero values have almost no effect.
That is not good, because it is important to know if, for example, we read 0.1 
SSTables per read on average.
For example, we may want to know whether the 
[CASSANDRA-2498|https://issues.apache.org/jira/browse/CASSANDRA-2498] or 
[CASSANDRA-5514|https://issues.apache.org/jira/browse/CASSANDRA-5514] 
optimization works for some table.
EstimatedHistogram returns only integer values, which makes this scenario 
impossible, while it was possible in versions 2.1 and below.
So in the patches for 2.2 and 3.0 I switched SSTablesPerReadHistogram to an 
ExponentiallyDecayingHistogram implementation.


was (Author: isburmistrov):
I have prepared patches to versions 2.1, 2.2 and 3.0 (and it is not hard to 
prepare patch for trunk).

Important comment for 2.2 and 3.0 versions.
SSTablePerReadHistogram in these versions is EstimatedHistogram now.
But for this implementation of histogram zero values make almost no effect.  
It seems not good, because it is important to know if, for example, we read 0.1 
SSTables per read at average. 
For example, we want to know does 
[CASSANDRA-2498|https://issues.apache.org/jira/browse/CASSANDRA-2498] or 
[CASSANDRA-5514|https://issues.apache.org/jira/browse/CASSANDRA-5514] 
optimization works for some table. 
EstimatedHistogram returns only integer values and make this scenario 
impossible, while it was possible in versions 2.1 and below.
So in patches for 2.2 and 3.0 I switched SSTablesPerReadHistogram to 
ExponentiallyDecayingHistogram implementation.
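As a side note, a minimal sketch of why a decaying-reservoir histogram makes the
0.1-SSTables-per-read case visible. It assumes the Codahale/Dropwizard metrics
library that Cassandra's metrics build on, and it only illustrates the behaviour
discussed in the comment above; it is not the patch itself:

{code:java}
import com.codahale.metrics.ExponentiallyDecayingReservoir;
import com.codahale.metrics.Histogram;

public class SSTablesPerReadExample
{
    public static void main(String[] args)
    {
        Histogram sstablesPerRead = new Histogram(new ExponentiallyDecayingReservoir());

        // 9 reads served entirely from the row cache (0 sstables touched)
        // and 1 read that hit a single sstable.
        for (int i = 0; i < 9; i++)
            sstablesPerRead.update(0);
        sstablesPerRead.update(1);

        // The reservoir records the zeros, so the mean comes out around 0.1
        // sstables per read instead of being rounded away.
        System.out.println(sstablesPerRead.getSnapshot().getMean());
    }
}
{code}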

> SSTablesPerReadHistogram seems wrong when row cache hit happened
> ---
>
> Key: CASSANDRA-10585
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10585
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ivan Burmistrov
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
> Attachments: SSTablePerReadHistogram_RowCache-cassandra-2_1.patch, 
> SSTablePerReadHistogram_RowCache-cassandra-2_2.patch, 
> SSTablePerReadHistogram_RowCache-cassandra-3_0.patch
>
>
> The SSTablePerReadHistogram metric currently does not consider the case when a 
> row has been read from the row cache.
> And so, this metric will show big values even when almost all requests are 
> served by the row cache (and without touching SSTables, of course).
> So, it seems that the correct behavior is to consider that if we read a row 
> from the row cache then we read zero SSTables for that request.
> The patch is in the attachment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8755) Replace trivial uses of String.replace/replaceAll/split with StringUtils methods

2015-11-13 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003895#comment-15003895
 ] 

Robert Stupp commented on CASSANDRA-8755:
-

[~al_shopov] can you point me to some branches on github (forked from 
https://github.com/apache/cassandra)? That's much easier to review and we can 
spawn CI from them.

> Replace trivial uses of String.replace/replaceAll/split with StringUtils 
> methods
> 
>
> Key: CASSANDRA-8755
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8755
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jaroslav Kamenik
>Priority: Trivial
>  Labels: lhf
> Attachments: 8755.tar.gz, trunk-8755.patch, trunk-8755.txt
>
>
> There are places in the code where those regex-based methods are used with 
> plain, non-regexp strings, so the StringUtils alternatives should be faster.
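To make the point concrete, a small illustrative example (assuming commons-lang3
on the classpath; the actual call sites to change are in the attached patches,
not here):

{code:java}
import org.apache.commons.lang3.StringUtils;

public class PlainStringOps
{
    public static void main(String[] args)
    {
        // String.split/replaceAll treat their argument as a regular expression,
        // so a plain separator like "." has to be escaped and is processed as a regex.
        String[] viaRegexSplit   = "a.b.c".split("\\.");
        String   viaRegexReplace = "a.b.c".replaceAll("\\.", "-");

        // The StringUtils variants work on literal characters/strings directly.
        String[] viaUtilsSplit   = StringUtils.split("a.b.c", '.');
        String   viaUtilsReplace = StringUtils.replace("a.b.c", ".", "-");

        System.out.println(viaRegexSplit.length + " " + viaUtilsSplit.length);
        System.out.println(viaRegexReplace + " " + viaUtilsReplace);
    }
}
{code}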



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10534:

Component/s: Local Write-Read Paths

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems that this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also, a quick look through the 
> code did not reveal any fsync calls for it. Moreover, I suspect the commit 
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
> has caused the regression, as it removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
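(As an aside, a minimal, hypothetical sketch of what forcing a file channel to
disk before closing looks like with plain java.nio; this is not Cassandra's
actual writer code, only an illustration of the effect of the removed call:)

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class FsyncBeforeClose
{
    public static void main(String[] args) throws IOException
    {
        Path path = Paths.get("CompressionInfo.db.example"); // hypothetical file name
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE))
        {
            channel.write(ByteBuffer.wrap("metadata".getBytes(StandardCharsets.UTF_8)));
            // force(true) flushes both the file contents and its metadata to the
            // storage device, so a hard reboot right after close() cannot leave a
            // 0-byte file behind.
            channel.force(true);
        }
    }
}
{code}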
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/
> total 60
> -rw-r--r-- 1 cassandra cassandra 0 Oct 15 09:31 
> system-sstable_activity-ka-1-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra  9740 Oct 15 09:31 
> system-sstable_activity-ka-1-Data.db
> 

[jira] [Commented] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-11-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004089#comment-15004089
 ] 

Aleksey Yeschenko commented on CASSANDRA-10250:
---

The test LGTM (although we'll need more comprehensive coverage for 
CASSANDRA-9424).

The issue itself will not become less of an issue with CASSANDRA-9425, but will 
still flap (though less than before) until CASSANDRA-10699, and the latter is 
very likely to require a major-major version bump and a completely new protocol, 
so it will have to wait until 4.0.

> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at time across 20 tables
> - Add 7 columns one at time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> Outcome is random. Majority of the failures are dropped columns still being 
> present, but new columns and indexes have been observed to be incorrect.  The 
> logs are don’t have exceptions and the columns/indexes that are incorrect 
> don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on 
> each node shows the same schema id on all nodes.
> Attached is a python script extracted from the dtest.  Running against a 
> local 3 node cluster will reproduce the issue (with enough runs – fails ~20% 
> on my machine).
> Also attached are the node logs from a run where a dropped column 
> (alter_me_7 table, column s1) is still present.  Checking the system_schema 
> tables for this case shows the s1 column in both the columns and drop_columns 
> tables.
> This has been flapping on cassci on versions 2+ and doesn’t seem to be 
> related to changes in 3.0.  More testing needs to be done though.
> //cc [~enigmacurry]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10681) make index building pluggable via IndexBuildTask

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10681:
--
Labels: sasi  (was: )

> make index building pluggable via IndexBuildTask
> 
>
> Key: CASSANDRA-10681
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10681
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>Priority: Minor
>  Labels: sasi
> Fix For: 3.x
>
>
> Currently index building assumes one and only one way to build all of the 
> indexes - through SecondaryIndexBuilder - which merges all of the sstables 
> together, collates columns etc. This works fine for built-in indexes but not 
> for SASI, since it attaches to every SSTable individually. We need an 
> "IndexBuildTask" interface (based on CompactionInfo.Holder) to be returned from 
> Index on demand, giving SI interface implementers the power to decide how the 
> build should work. This might be less efficient for CassandraIndex, since it 
> effectively means that collation will have to be done multiple times on the 
> same data, but it is nevertheless a good compromise for a clean interface to 
> the outside world.
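For readers unfamiliar with the proposal, a purely hypothetical sketch of the
rough shape such an interface could take. The name IndexBuildTask comes from the
ticket, but the methods below are illustrative assumptions, not the API under
review, and it assumes the Cassandra classes on the classpath:

{code:java}
import org.apache.cassandra.db.compaction.CompactionInfo;

// Hypothetical sketch only - the concrete interface is what this ticket will define.
public interface IndexBuildTask
{
    /**
     * Run the build. The index implementation decides whether it scans each
     * SSTable individually (SASI-style) or performs a merged, collated pass
     * (the existing SecondaryIndexBuilder behaviour).
     */
    void build();

    /**
     * Progress reporting, modelled on CompactionInfo.Holder so that the task
     * can be surfaced alongside compactions.
     */
    CompactionInfo.Holder progress();
}
{code}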



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10661) Integrate SASI to Cassandra

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10661:
--
Labels: sasi  (was: )

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10676) AssertionError in CompactionExecutor

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10676:
--
Fix Version/s: (was: 2.1.9)

> AssertionError in CompactionExecutor
> 
>
> Key: CASSANDRA-10676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10676
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>
> {code}
> ERROR [CompactionExecutor:33329] 2015-11-09 08:16:22,759 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[CompactionExecutor:33329,1,main]
> java.lang.AssertionError: 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-888705-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:279)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> ^C
> root@db1:~# tail -f /var/log/cassandra/system.log
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10676) AssertionError in CompactionExecutor

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10676:
--
Reproduced In: 2.1.9

> AssertionError in CompactionExecutor
> 
>
> Key: CASSANDRA-10676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10676
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>
> {code}
> ERROR [CompactionExecutor:33329] 2015-11-09 08:16:22,759 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[CompactionExecutor:33329,1,main]
> java.lang.AssertionError: 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-888705-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:279)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> ^C
> root@db1:~# tail -f /var/log/cassandra/system.log
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8786) NullPointerException in ColumnDefinition.hasIndexOption

2015-11-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-8786:
--

Assignee: Sam Tunnicliffe  (was: Aleksey Yeschenko)

> NullPointerException in ColumnDefinition.hasIndexOption
> ---
>
> Key: CASSANDRA-8786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8786
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.2
>Reporter: Mathijs Vogelzang
>Assignee: Sam Tunnicliffe
> Fix For: 2.1.5
>
> Attachments: 8786.txt
>
>
> We have a Cassandra cluster that we've been using through many upgrades, and 
> thus most of our column families have originally been created by Thrift. We 
> are on Cassandra 2.1.2 now.
> We've now ported most of our code to use CQL, and our code occasionally tries 
> to recreate tables with "IF NOT EXISTS" to work properly on development / 
> testing environments.
> When we issue the CQL statement "CREATE INDEX IF NOT EXISTS index ON 
> "tableName" (accountId)" (this index does exist on that table already), we 
> get a {{DriverInternalError: An unexpected error occurred server side on 
> cass_host/xx.xxx.xxx.xxx:9042: java.lang.NullPointerException}}
> The error on the server is:
> {noformat}
>  java.lang.NullPointerException: null
> at 
> org.apache.cassandra.config.ColumnDefinition.hasIndexOption(ColumnDefinition.java:489)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:87)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:224)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> {noformat}
> This happens every time we run this CQL statement. We've tried to reproduce 
> it in a test Cassandra cluster by creating the table according to the exact 
> "DESCRIBE TABLE" specification, but then the NullPointerException doesn't 
> happen upon the CREATE INDEX one. So it seems that the tables on our 
> production cluster (that were originally created through Thrift) are still 
> subtly different schema-wise than a freshly created table according to the 
> same creation statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8786) NullPointerException in ColumnDefinition.hasIndexOption

2015-11-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8786:
---
Assignee: Aleksey Yeschenko  (was: Sam Tunnicliffe)

> NullPointerException in ColumnDefinition.hasIndexOption
> ---
>
> Key: CASSANDRA-8786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8786
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.2
>Reporter: Mathijs Vogelzang
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.5
>
> Attachments: 8786.txt
>
>
> We have a Cassandra cluster that we've been using through many upgrades, and 
> thus most of our column families have originally been created by Thrift. We 
> are on Cassandra 2.1.2 now.
> We've now ported most of our code to use CQL, and our code occasionally tries 
> to recreate tables with "IF NOT EXISTS" to work properly on development / 
> testing environments.
> When we issue the CQL statement "CREATE INDEX IF NOT EXISTS index ON 
> "tableName" (accountId)" (this index does exist on that table already), we 
> get a {{DriverInternalError: An unexpected error occurred server side on 
> cass_host/xx.xxx.xxx.xxx:9042: java.lang.NullPointerException}}
> The error on the server is:
> {noformat}
>  java.lang.NullPointerException: null
> at 
> org.apache.cassandra.config.ColumnDefinition.hasIndexOption(ColumnDefinition.java:489)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:87)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:224)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> {noformat}
> This happens every time we run this CQL statement. We've tried to reproduce 
> it in a test Cassandra cluster by creating the table according to the exact 
> "DESCRIBE TABLE" specification, but then the NullPointerException doesn't 
> happen upon the CREATE INDEX one. So it seems that the tables on our 
> production cluster (that were originally created through Thrift) are still 
> subtly different schema-wise than a freshly created table according to the 
> same creation statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9418) Fix dtests on 2.2 branch on Windows

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9418:
---
Component/s: Testing

> Fix dtests on 2.2 branch on Windows
> ---
>
> Key: CASSANDRA-9418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9418
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows, docs-impacting
> Fix For: 2.2.x
>
> Attachments: 9418_tz_formatting.txt
>
>
> There's a variety of infrastructural failures within dtest with regard to 
> Windows that are causing tests to fail and those failures to cascade.
> Error: failure to delete commit log after a test / ccm cluster is stopped:
> {noformat}
> Traceback (most recent call last):
>   File "C:\src\cassandra-dtest\dtest.py", line 452, in tearDown
> self._cleanup_cluster()
>   File "C:\src\cassandra-dtest\dtest.py", line 172, in _cleanup_cluster
> self.cluster.remove()
>   File "build\bdist.win-amd64\egg\ccmlib\cluster.py", line 212, in remove
> shutil.rmtree(self.get_path())
>   File "C:\Python27\lib\shutil.py", line 247, in rmtree
> rmtree(fullname, ignore_errors, onerror)
>   File "C:\Python27\lib\shutil.py", line 247, in rmtree
> rmtree(fullname, ignore_errors, onerror)
>   File "C:\Python27\lib\shutil.py", line 252, in rmtree
> onerror(os.remove, fullname, sys.exc_info())
>   File "C:\Python27\lib\shutil.py", line 250, in rmtree
> os.remove(fullname)
> WindowsError: [Error 5] Access is denied: 
> 'c:\\temp\\dtest-4rxq2i\\test\\node1\\commitlogs\\CommitLog-5-1431969131917.log'
> {noformat}
> Cascading error: implication is that tests aren't shutting down correctly and 
> subsequent tests cannot start:
> {noformat}
> 06:00:20 ERROR: test_incr_decr_super_remove (thrift_tests.TestMutations)
> 06:00:20 
> --
> 06:00:20 Traceback (most recent call last):
> 06:00:20   File 
> "D:\jenkins\workspace\trunk_dtest_win32\cassandra-dtest\thrift_tests.py", 
> line 55, in setUp
> 06:00:20 cluster.start()
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\cluster.py", line 249, in 
> start
> 06:00:20 p = node.start(update_pid=False, jvm_args=jvm_args, 
> profile_options=profile_options)
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\node.py", line 457, in start
> 06:00:20 common.check_socket_available(itf)
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\common.py", line 341, in 
> check_socket_available
> 06:00:20 raise UnavailableSocketError("Inet address %s:%s is not 
> available: %s" % (addr, port, msg))
> 06:00:20 UnavailableSocketError: Inet address 127.0.0.1:9042 is not 
> available: [Errno 10013] An attempt was made to access a socket in a way 
> forbidden by its access permissions
> 06:00:20  >> begin captured logging << 
> 
> 06:00:20 dtest: DEBUG: removing ccm cluster test at: d:\temp\dtest-a5iny5
> 06:00:20 dtest: DEBUG: cluster ccm directory: d:\temp\dtest-dalzcy
> 06:00:20 - >> end captured logging << 
> -
> {noformat}
> I've also seen (and am debugging) an error where a node just fails to start 
> via ccm.
> I'll update this ticket with PR's to dtest or other observations of interest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10690:
--
Component/s: Local Write-Read Paths

> Secondary index does not process deletes unless columns are specified
> -
>
> Key: CASSANDRA-10690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
>
> The new secondary index API does not notify indexes of single-row or slice 
> deletions unless specific columns are deleted.  I believe the problem is that 
> in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
> {{index.indexes(update.columns())}}.  When no columns are specified in the 
> deletion, {{update.columns()}} is empty, which causes all indexes to be 
> skipped.
> I think the correct fix is to do something like this in the 
> {{ModificationStatement}} constructor:
> {code}
> if (type == StatementType.DELETE && modifiedColumns.isEmpty())
> modifiedColumns = cfm.partitionColumns();
> {code}
> However, I'm not sure if that may have unintended side-effects.  What do you 
> think, [~slebresne]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10681) make index building pluggable via IndexBuildTask

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10681:
--
Component/s: (was: index)
 Local Write-Read Paths

> make index building pluggable via IndexBuildTask
> 
>
> Key: CASSANDRA-10681
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10681
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>Priority: Minor
> Fix For: 3.x
>
>
> Currently index building assumes one and only one way to build all of the 
> indexes - through SecondaryIndexBuilder - which merges all of the sstables 
> together, collates columns etc. This works fine for built-in indexes but not 
> for SASI, since it attaches to every SSTable individually. We need an 
> "IndexBuildTask" interface (based on CompactionInfo.Holder) to be returned from 
> Index on demand, giving SI interface implementers the power to decide how the 
> build should work. This might be less efficient for CassandraIndex, since it 
> effectively means that collation will have to be done multiple times on the 
> same data, but it is nevertheless a good compromise for a clean interface to 
> the outside world.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10690:
--
Component/s: (was: index)

> Secondary index does not process deletes unless columns are specified
> -
>
> Key: CASSANDRA-10690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
>
> The new secondary index API does not notify indexes of single-row or slice 
> deletions unless specific columns are deleted.  I believe the problem is that 
> in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
> {{index.indexes(update.columns())}}.  When no columns are specified in the 
> deletion, {{update.columns()}} is empty, which causes all indexes to be 
> skipped.
> I think the correct fix is to do something like this in the 
> {{ModificationStatement}} constructor:
> {code}
> if (type == StatementType.DELETE && modifiedColumns.isEmpty())
> modifiedColumns = cfm.partitionColumns();
> {code}
> However, I'm not sure if that may have unintended side-effects.  What do you 
> think, [~slebresne]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10694) Deletion info is dropped on updated rows when notifying secondary index

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10694:
--
Component/s: Local Write-Read Paths

> Deletion info is dropped on updated rows when notifying secondary index
> ---
>
> Key: CASSANDRA-10694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
> Attachments: index-deletion.patch
>
>
> In {{SecondaryIndexManager.onUpdated()}}, we fail to copy the 
> {{DeletionInfo}} from the existing and new rows before notifying the index of 
> the update.  This leads the index to believe a new, live row has been 
> inserted instead of a single-row deletion.  It looks like this has been a 
> problem since 3.0.0-beta1.
> I've attached a simple patch that fixes the issue.  I'm working on a full 
> patch with tests, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10694) Deletion info is dropped on updated rows when notifying secondary index

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10694:
--
Component/s: (was: index)

> Deletion info is dropped on updated rows when notifying secondary index
> ---
>
> Key: CASSANDRA-10694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
> Attachments: index-deletion.patch
>
>
> In {{SecondaryIndexManager.onUpdated()}}, we fail to copy the 
> {{DeletionInfo}} from the existing and new rows before notifying the index of 
> the update.  This leads the index to believe a new, live row has been 
> inserted instead of a single-row deletion.  It looks like this has been a 
> problem since 3.0.0-beta1.
> I've attached a simple patch that fixes the issue.  I'm working on a full 
> patch with tests, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10388) Windows dtest 3.0: SSL dtests are failing

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10388:
--
Component/s: Testing

> Windows dtest 3.0: SSL dtests are failing
> -
>
> Key: CASSANDRA-10388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10388
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Joel Knighton
>
> The dtests 
> {{native_transport_ssl_test.NativeTransportSSL.connect_to_ssl_test}} and 
> {{native_transport_ssl_test.NativeTransportSSL.use_custom_ssl_port_test}} are 
> failing on windows, but not linux.
> Stacktrace is
> {code}
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-3.0_dtest_win32\cassandra-dtest\native_transport_ssl_test.py",
>  line 32, in connect_to_ssl_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7071) Buffer cache metrics in OpsCenter

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7071:
-
Fix Version/s: (was: 3.x)

> Buffer cache metrics in OpsCenter
> -
>
> Key: CASSANDRA-7071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7071
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jay Patel
>
> It's currently very difficult to understand how the buffer cache is being 
> used by Cassandra. Unlike the key and row cache, for which there are hit rate 
> metrics pulled by the datastax agent and visible in opscenter, there are no 
> such metrics around the buffer cache. This would be immensely useful in a 
> production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7085) Specialized query filters for CQL3

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7085:
-
Fix Version/s: (was: 3.x)

> Specialized query filters for CQL3
> --
>
> Key: CASSANDRA-7085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7085
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>  Labels: cql, perfomance
>
> The semantics of CQL make it so that the current {{NamesQueryFilter}} and 
> {{SliceQueryFilter}} are not always as efficient as they could be. Namely, when 
> a {{SELECT}} only selects a handful of columns, we still have to query all the 
> columns of the selected rows to distinguish between 'live row but with no data 
> for the queried columns' and 'no row' (see CASSANDRA-6588 for more details).
> We can solve that, however, by adding new filters (name and slice) specialized 
> for CQL. The new name filter would be a list of row prefixes + a list of CQL 
> column names (instead of one list of cell names). The slice filter would 
> still take a ColumnSlice[] but would add the list of column names we care 
> about for each row.
> The new sstable readers that go with those filters would use the list of 
> column names to filter out all the cells we don't care about, so we don't 
> have to ship those back to the coordinator to skip them there, yet would know 
> to still return the row marker when necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10592:
--
Reviewer:   (was: Benedict)

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 3.1, 2.2.x
>
>
> CORRECTION-
> It turns out the exception occurs when running a read using a thrift jdbc 
> driver. Once you have loaded the data with stress below, run 
> SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool - 
> http://www.aquafold.com/aquadatastudio_downloads.html
>  
> The exception:
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
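(An aside on the JDK-level failure at the bottom of the trace: ByteBuffer.allocate
throws IllegalArgumentException whenever it is handed a negative capacity, which is
what an int size computation that overflows will produce. A minimal, hypothetical
illustration of that JDK behaviour, not the actual DataOutputBuffer.reallocate code:)

{code:java}
import java.nio.ByteBuffer;

public class NegativeCapacityExample
{
    public static void main(String[] args)
    {
        // A doubling-style growth calculation that overflows int goes negative...
        int currentCapacity = 1_500_000_000;
        int newCapacity = currentCapacity * 2; // overflows to -1294967296

        // ...and ByteBuffer.allocate rejects negative capacities with
        // IllegalArgumentException, matching the bottom frame of the trace above.
        ByteBuffer buffer = ByteBuffer.allocate(newCapacity);
        System.out.println(buffer.capacity());
    }
}
{code}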
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml UPDATED!
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
> keyspace_definition: |
>   CREATE KEYSPACE autogeneratedtest WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1};
> # 

[jira] [Commented] (CASSANDRA-9188) cqlsh does not display properly the modified UDTs

2015-11-13 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004251#comment-15004251
 ] 

Adam Holmberg commented on CASSANDRA-9188:
--

The pertinent change is in any version based on the 3.0rc1 branch. 3.0 should 
be covered as of last week (confirmed using the scenario from the description 
in 3.0 GA). I'm not sure what versions are bundled for other branches.

> cqlsh does not display properly the modified UDTs
> -
>
> Key: CASSANDRA-9188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9188
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.x
>
>
> The problem can be reproduced as follow:
> {code}
> cqlsh:test2> create type myType (a int);
> cqlsh:test2> create table myTable (a int primary key, b frozen<myType>);
> cqlsh:test2> insert into myTable (a, b) values (1, {a: 1});
> cqlsh:test2> select * from myTable;
>  a | b
> ---+
>  1 | {a: 1}
> (1 rows)
> cqlsh:test2> alter type myType add b int;
> cqlsh:test2> insert into myTable (a, b) values (2, {a: 2, b :2});
> cqlsh:test2> select * from myTable;
>  a | b
> ---+
>  1 | {a: 1}
>  2 | {a: 2}
> (2 rows)
> {code}
> If {{cqlsh}} is then restarted it will display the data properly.
> {code}
> cqlsh:test2> select * from mytable;
>  a | b
> ---+-
>  1 | {a: 1, b: null}
>  2 |{a: 2, b: 2}
> (2 rows)
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10594) Inconsistent permissions results return

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10594:
--
Issue Type: Improvement  (was: Bug)

> Inconsistent permissions results return
> ---
>
> Key: CASSANDRA-10594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10594
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Adam Holmberg
>Assignee: Sam Tunnicliffe
>Priority: Minor
>
> The server returns inconsistent results when listing permissions, depending 
> on whether a user is configured.
> *Observed with Cassandra 3.0:*
> Only super user configured:
> {code}
> cassandra@cqlsh> list all;
>  role | resource | permissions
> --+--+-
> (0 rows)
> {code}
> VOID result type is returned (meaning no result meta is returned and cqlsh 
> must use the table meta to determine columns)
> With one user configured, no grants:
> {code}
> cassandra@cqlsh> create user holmberg with password 'tmp';
> cassandra@cqlsh> list all;
> results meta: system_auth permissions 4
>  role  | username  | resource| permission
> ---+---+-+
>  cassandra | cassandra |  |  ALTER
>  cassandra | cassandra |  |   DROP
>  cassandra | cassandra |  |  AUTHORIZE
> (3 rows)
> {code}
> Now a ROWS result message is returned with the cassandra super user grants. 
> Dropping the regular user causes the VOID message to be returned again.
> *Slightly different behavior on 2.2 branch:* VOID message with no result meta 
> is returned, even if regular user is configured, until permissions are added 
> to that user.
> *Expected:*
> It would be nice if the query always resulted in a ROWS result, even if there 
> are no explicit permissions defined. This would provide the correct result 
> metadata even if there are no rows.
> Additionally, it is strange that the 'cassandra' super user only appears in 
> the results when another user is configured. I would expect it to always 
> appear, or never.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10538:

Component/s: Local Write-Read Paths

> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.1, 3.1
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolE
> {code}
> We should not have an assertion for a condition that can legitimately occur 
> when the disk is full; we should throw a runtime exception instead.
> I would also like to understand exactly what triggered the assertion. 
> {{LifecycleTransaction}} can throw at the beginning of the commit method if 
> it cannot write the record to disk, in which case all we have to do is ensure 
> we update the records in memory after writing 

[jira] [Updated] (CASSANDRA-8743) NFS doesn't behave on Windows

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8743:
---
Component/s: Streaming and Messaging
 Local Write-Read Paths

> NFS doesn't behave on Windows
> -
>
> Key: CASSANDRA-8743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Tamar Nirenberg
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: docker-system.log
>
>
> Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
> the ring:
> ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
> Failed creating a merkle tree for [repair 
> #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 on ATTRIBUTES/LINKS, 
> (11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
>  /10.1.234.63 (see log for details)
> ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 
> - Exception in thread Thread[ValidationExecutor:2,1,main]
> org.apache.cassandra.io.FSWriteError: 
> java.nio.file.DirectoryNotEmptyException: 
> /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
> at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_71]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_71]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_71]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> Caused by: java.nio.file.DirectoryNotEmptyException: 
> /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
> at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242) 
> ~[na:1.7.0_71]
> at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>  ~[na:1.7.0_71]
> at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
> at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> ... 10 common frames omitted
> ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:383 
> - Stopping gossiper
> WARN  [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:291 
> - Stopping gossip by operator request
> INFO  [ValidationExecutor:2] 2015-01-22 11:48:14,829 Gossiper.java:1318 - 
> Announcing shutdown



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10639) Commitlog compression test fails on Windows

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10639:

Component/s: Local Write-Read Paths

> Commitlog compression test fails on Windows
> ---
>
> Key: CASSANDRA-10639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10639
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
> Fix For: 3.1
>
>
> {{commitlog_test.py:TestCommitLog.test_compression_error}} fails on Windows 
> under CassCI. It fails in a number of different ways. Here, it looks like 
> reading the CRC fails:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/commitlog_test/TestCommitLog/test_compression_error/
> Here, I believe it fails when trying to validate the CRC header:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/99/testReport/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L497
> Here's another failure where the header has a {{Q}} written in it instead of 
> a closing brace:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/91/testReport/junit/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L513
> [~bdeggleston] Do I remember correctly that you wrote this test? Can you take 
> this on?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10012) Deadlock when session streaming is retried after exception

2015-11-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004095#comment-15004095
 ] 

Yuki Morishita commented on CASSANDRA-10012:


[~pauloricardomg] updated branches.

||branch||testall||dtest||
|[10012-2.2|https://github.com/yukim/cassandra/tree/10012-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10012-2.2-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10012-2.2-dtest/lastCompletedBuild/testReport/]|
|[10012-3.0|https://github.com/yukim/cassandra/tree/10012-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10012-3.0-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10012-3.0-dtest/lastCompletedBuild/testReport/]|

CASSANDRA-10448 can be caused by this, but I'm not sure.

> Deadlock when session streaming is retried after exception
> --
>
> Key: CASSANDRA-10012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10012
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Moos
>Assignee: Chris Moos
> Fix For: 2.2.x
>
> Attachments: CASSANDRA-10012.patch
>
>
> This patch ensures that the CompressedInputStream thread is cleaned up 
> properly (for example, if an Exception occurs and a session stream is 
> retried).
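
For reference, a generic, self-contained sketch of the cleanup pattern the description refers to (illustrative only, not the actual {{CompressedInputStream}} patch):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A stream that owns a background decompression thread must stop that thread
// in close(), so a retried session does not leave the old worker blocked.
final class BackgroundDecompressingStreamSketch extends InputStream
{
    private final ExecutorService decompressor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "stream-decompressor");
        t.setDaemon(true);
        return t;
    });
    private volatile boolean closed;

    @Override
    public int read() throws IOException
    {
        if (closed)
            throw new IOException("stream closed");
        return -1; // real code would return bytes handed over by the worker thread
    }

    @Override
    public void close()
    {
        closed = true;
        decompressor.shutdownNow(); // interrupt the worker so a retry can start cleanly
    }
}
{code}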



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7555) Support copy and link for commitlog archiving without forking the jvm

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7555:
---
Component/s: Local Write-Read Paths

> Support copy and link for commitlog archiving without forking the jvm
> -
>
> Key: CASSANDRA-7555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Nick Bailey
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 3.x
>
>
> Right now for commitlog archiving the user specifies a command to run and c* 
> forks the jvm to run that command. The most common operations will be either 
> copy or link (hard or soft). Since we can do all of these operations without 
> forking the jvm, which is very expensive, we should have special cases for 
> those.
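
For illustration, a minimal in-process sketch of the copy/link operations using java.nio.file only (the class, mode switch and paths are hypothetical, not the proposed Cassandra API):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// The three common archive operations done in-process instead of forking cp/ln.
public final class CommitLogArchiveSketch
{
    public static void archive(Path segment, Path archiveDir, String mode) throws IOException
    {
        Path target = archiveDir.resolve(segment.getFileName());
        switch (mode)
        {
            case "copy":
                Files.copy(segment, target, StandardCopyOption.REPLACE_EXISTING);
                break;
            case "hardlink":
                Files.createLink(target, segment);          // same filesystem only
                break;
            case "symlink":
                Files.createSymbolicLink(target, segment);
                break;
            default:
                throw new IllegalArgumentException("unknown mode: " + mode);
        }
    }

    public static void main(String[] args) throws IOException
    {
        // Hypothetical paths, for illustration only.
        archive(Paths.get("/var/lib/cassandra/commitlog/CommitLog-5-1.log"),
                Paths.get("/backup/commitlog"),
                "hardlink");
    }
}
{code}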



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10413) Replaying materialized view updates from commitlog after node decommission crashes Cassandra

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10413:
--
Component/s: Coordination

> Replaying materialized view updates from commitlog after node decommission 
> crashes Cassandra
> 
>
> Key: CASSANDRA-10413
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10413
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Critical
> Fix For: 3.0.0 rc2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test, runnable as
> {code}
> lein with-profile +trunk test :only cassandra.mv-test/mv-crash-subset-decommission
> {code}
> This test crashes/restarts nodes while decommissioning nodes. These actions 
> are not coordinated.
> In [10164|https://issues.apache.org/jira/browse/CASSANDRA-10164], we 
> introduced a change to re-apply materialized view updates on commitlog replay.
> Some nodes, upon restart, will crash in commitlog replay. They throw the 
> "Trying to get the view natural endpoint on a non-data replica" runtime 
> exception in getViewNaturalEndpoint. I added logging to 
> getViewNaturalEndpoint to show the results of 
> replicationStrategy.getNaturalEndpoints for the baseToken and viewToken.
> It can be seen that these problems occur when the baseEndpoints and 
> viewEndpoints are identical but do not contain the broadcast address of the 
> local node.
> For example, a node at 10.0.0.5 crashes on replay of a write whose base token 
> and view token replicas are both [10.0.0.2, 10.0.0.4, 10.0.0.6]. It seems we 
> try to guard against this by considering pendingEndpoints for the viewToken, 
> but this does not appear to be sufficient.
> I've attached the system.logs for a test run with added logging. In the 
> attached logs, n1 is at 10.0.0.2, n2 is at 10.0.0.3, and so on. 10.0.0.6/n5 
> is the decommissioned node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10310) Support type casting in selection clause

2015-11-13 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004128#comment-15004128
 ] 

Robert Stupp commented on CASSANDRA-10310:
--

Overall the patch looks really good!

Just a couple of really minor/trivial things (except the first one). I'd be 
fine with fixing them on commit.

* {{CastFcts}} and {{CastFcsTest}} are missing the copyright header
* {{StrBuilder}} in {{WithCast.toString}} can be replaced with a simple string 
concatenation
* {{Selectable}} line 314 needs reformat
* Should {{CastFcsTest}} read {{CastFctsTest}} ?
* {{CastFcts.all}} can use method references for the 
to-byte/short/int/long/float/double conversions in the for-loop (e.g. 
{{Number::byteValue}}). It *feels* easier for the JVM to use method refs 
instead of lambdas - but I’m unsure whether it really is (a tiny sketch follows 
the snippet below).
* {{CQL.textile}} might need a sentence stating that we strictly rely on Java’s 
semantics for the conversions. E.g. the double value {{1}} will be converted to 
the string value {{"1.0"}} and not just to {{"1"}} (as Python, for example, does).
* maybe also add the following snippet to {{testCastsWithReverseOrder}} method 
(or any other method). Just wanted to see whether nested casts and casts with 
UDFs work.
{code}
assertRows(execute("SELECT CAST(CAST(a AS tinyint) AS smallint), " +
                   "CAST(CAST(b AS tinyint) AS smallint), " +
                   "CAST(CAST(c AS tinyint) AS smallint) FROM %s"),
           row((short) 1, (short) 2, (short) 6));

assertRows(execute("SELECT CAST(CAST(CAST(a AS tinyint) AS double) AS text), " +
                   "CAST(CAST(CAST(b AS tinyint) AS double) AS text), " +
                   "CAST(CAST(CAST(c AS tinyint) AS double) AS text) FROM %s"),
           row("1.0", "2.0", "6.0"));

String f = createFunction(KEYSPACE, "int",
                          "CREATE FUNCTION %s(val int) " +
                          "RETURNS NULL ON NULL INPUT " +
                          "RETURNS double " +
                          "LANGUAGE java " +
                          "AS 'return (double)val;'");

assertRows(execute("SELECT " + f + "(CAST(b AS int)) FROM %s"),
           row((double) 2));

assertRows(execute("SELECT CAST(" + f + "(CAST(b AS int)) AS text) FROM %s"),
           row("2.0"));
{code}
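
On the method-reference point above, a tiny self-contained sketch (the {{Function}} shape is only an illustration, not the actual {{CastFcts}} signatures):

{code:java}
import java.util.function.Function;

public class MethodRefSketch
{
    public static void main(String[] args)
    {
        // The same conversion expressed as a lambda and as method references.
        Function<Number, Byte>  toByteLambda = n -> n.byteValue();
        Function<Number, Byte>  toByteRef    = Number::byteValue;
        Function<Number, Short> toShortRef   = Number::shortValue;
        Function<Number, Long>  toLongRef    = Number::longValue;

        System.out.println(toByteLambda.apply(42) + " " + toByteRef.apply(42)
                           + " " + toShortRef.apply(42) + " " + toLongRef.apply(42));
    }
}
{code}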


> Support type casting in selection clause
> 
>
> Key: CASSANDRA-10310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10310
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Benjamin Lerer
>  Labels: cql, patch
> Attachments: cassandra-2.2-10310.txt, cassandra-3.0-10310.txt
>
>
> When selecting an avg() of int values, the type of the avg value returned is 
> an int as well, meaning it's rounded off to an incorrect answer.  This is 
> both incorrect and inconsistent with other databases.
> Example:
> {quote}
> cqlsh:test> select * from monkey where id = 1;
>  id | i | v
> ----+---+---
>   1 | 1 | 1
>   1 | 2 | 1
>   1 | 3 | 2
> (3 rows)
> cqlsh:test> select avg(v) from monkey where id = 1;
>  system.avg(v)
> ---------------
>  1
> (1 rows)
> {quote}
> I tried avg() with MySQL, here's the result:
> {quote}
> mysql> create table blah ( id int primary key, v int );
> Query OK, 0 rows affected (0.15 sec)
> mysql> insert into blah set id = 1, v = 1;
> Query OK, 1 row affected (0.02 sec)
> mysql> insert into blah set id = 1, v = 1;
> ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
> mysql> insert into blah set id = 2, v = 1;
> Query OK, 1 row affected (0.01 sec)
> mysql> insert into blah set id = 3, v = 2;
> Query OK, 1 row affected (0.01 sec)
> mysql> select avg(v) from blah;
> +--------+
> | avg(v) |
> +--------+
> | 1.3333 |
> +--------+
> 1 row in set (0.00 sec)
> {quote}
> I created a new table using the above query. The result:
> {quote}
> mysql> create table foo as select avg(v) as a from blah;
> Query OK, 1 row affected, 1 warning (0.04 sec)
> Records: 1  Duplicates: 0  Warnings: 1
> mysql> desc foo;
> +-------+---------------+------+-----+---------+-------+
> | Field | Type          | Null | Key | Default | Extra |
> +-------+---------------+------+-----+---------+-------+
> | a     | decimal(14,4) | YES  |     | NULL    |       |
> +-------+---------------+------+-----+---------+-------+
> 1 row in set (0.01 sec)
> {quote}
> It works the same way in postgres, and to my knowledge, every RDBMS.
> Broken in 2.2, 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10665) Many tests in concurrent_schema_changes_test are failing

2015-11-13 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004204#comment-15004204
 ] 

Jim Witschey commented on CASSANDRA-10665:
--

dtest PR here:

https://github.com/riptano/cassandra-dtest/pull/659

Pending review.

> Many tests in concurrent_schema_changes_test are failing
> 
>
> Key: CASSANDRA-10665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10665
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 3.x
>
>
> On the [last build at the time of this 
> writing|http://cassci.datastax.com/job/cassandra-3.0_dtest/335/], we have the 
> following failures:
> * {{create_lots_of_alters_concurrently_test}}
> * {{create_lots_of_schema_churn_with_node_down_test}}
> * {{create_lots_of_schema_churn_test}}
> but I seem to remember that other tests in {{concurrent_schema_changes_test}} 
> are sometimes failing as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10593) Unintended interactions between commitlog archiving and commitlog recycling

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-10593.
---
   Resolution: Won't Fix
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 2.1.x)

Closing as Won't Fix, as recycling's already been removed in 2.2+, and there is 
a workaround for 2.1.

> Unintended interactions between commitlog archiving and commitlog recycling
> ---
>
> Key: CASSANDRA-10593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10593
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>
> Currently the comments in commitlog_archiving.properties suggest using either 
> cp or ln for the archive_command.  
> Using ln is problematic because commitlog recycling marks segments as 
> recycled once the corresponding memtables are flushed and Cassandra will no 
> longer replay them. This means it's only possible to do PITR on any records 
> that were written since the last flush.
> Using cp works, and this is currently how OpsCenter does PITR; however, 
> [~brandon.williams] has pointed out this could have some performance impact 
> because of the additional I/O overhead of copying the commitlog segments.
> Starting in 2.1, we can disable commit log recycling in cassandra.yaml so I 
> thought this would allow me to do PITR without the extra overhead of using 
> cp.  However, when I disable commitlog recycling and try to do a PITR, 
> Cassandra blows up when trying to replay the restored commit logs:
> {code}
> ERROR 16:56:42  Exception encountered during startup
> java.lang.IllegalStateException: Cannot safely construct descriptor for 
> segment, as name and header descriptors do not match ((4,1445878452545) vs 
> (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log
>   at 
> org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207)
>  ~[cassandra-all-2.1.9.791.jar:2.1.9.791]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) 
> ~[cassandra-all-2.1.9.791.jar:2.1.9.791]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352) 
> ~[cassandra-all-2.1.9.791.jar:2.1.9.791]
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) 
> ~[dse-core-4.8.0.jar:4.8.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  ~[cassandra-all-2.1.9.791.jar:2.1.9.791]
>   at com.datastax.bdp.DseModule.main(DseModule.java:75) 
> [dse-core-4.8.0.jar:4.8.0]
> java.lang.IllegalStateException: Cannot safely construct descriptor for 
> segment, as name and header descriptors do not match ((4,1445878452545) vs 
> (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log
>   at 
> org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:207)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:352)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>   at com.datastax.bdp.DseModule.main(DseModule.java:75)
> Exception encountered during startup: Cannot safely construct descriptor for 
> segment, as name and header descriptors do not match ((4,1445878452545) vs 
> (4,1445876822565)): /opt/dse/backup/CommitLog-4-1445876822565.log
> INFO  16:56:42  DSE shutting down...
> INFO  16:56:42  All plugins are stopped.
> ERROR 16:56:42  Exception in thread Thread[Thread-2,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1403)
>  ~[cassandra-all-2.1.9.791.jar:2.1.9.791]
>   at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:196) 
> ~[dse-core-4.8.0.jar:4.8.0]
>   at com.datastax.bdp.server.DseDaemon.preStop(DseDaemon.java:426) 
> ~[dse-core-4.8.0.jar:4.8.0]
>   at com.datastax.bdp.server.DseDaemon.safeStop(DseDaemon.java:436) 
> ~[dse-core-4.8.0.jar:4.8.0]
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:676) 
> ~[dse-core-4.8.0.jar:4.8.0]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31]
> {code}
> For the sake of completeness, I also tested using cp for the archive_command 
> and commitlog recycling disabled, and PITR works as expected, but this of 
> course defeats the point.
> It would be good to have some guidance on what is supported here. If ln isn't 
> expected to work at all, it shouldn't be documented as an acceptable option 
> for the archive_command in commitlog_archiving.properties.  If it should work 
> 

[jira] [Updated] (CASSANDRA-10486) Expose tokens of bootstrapping nodes in JMX

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10486:
--
Issue Type: Improvement  (was: Bug)

> Expose tokens of bootstrapping nodes in JMX
> ---
>
> Key: CASSANDRA-10486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10486
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Nick Bailey
>Priority: Minor
> Fix For: 2.2.x
>
>
> Currently you can get a list of bootstrapping nodes from JMX, but the only 
> way to get the tokens of those bootstrapping nodes is to string parse info 
> from the failure detector. This is fragile and can easily break when changes 
> like https://issues.apache.org/jira/browse/CASSANDRA-10330 happen.
> We should have a clean way of knowing the tokens of bootstrapping nodes.
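
As an illustration of the kind of "clean way" being requested, a minimal MBean-style sketch (the interface name and method are invented for illustration, not an existing Cassandra MBean):

{code:java}
import java.util.List;
import java.util.Map;

// Expose bootstrapping endpoints together with their tokens directly,
// instead of forcing clients to string-parse failure-detector output.
public interface BootstrappingTokensMBeanSketch
{
    /** endpoint address -> tokens claimed by that bootstrapping node */
    Map<String, List<String>> getBootstrappingTokens();
}
{code}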



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10486) Expose tokens of bootstrapping nodes in JMX

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10486:
--
Component/s: Observability

> Expose tokens of bootstrapping nodes in JMX
> ---
>
> Key: CASSANDRA-10486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10486
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Nick Bailey
>Priority: Minor
> Fix For: 2.2.x
>
>
> Currently you can get a list of bootstrapping nodes from JMX, but the only 
> way to get the tokens of those bootstrapping nodes is to string parse info 
> from the failure detector. This is fragile and can easily break when changes 
> like https://issues.apache.org/jira/browse/CASSANDRA-10330 happen.
> We should have a clean way of knowing the tokens of bootstrapping nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-11-13 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004289#comment-15004289
 ] 

Joel Knighton commented on CASSANDRA-10111:
---

This can occur because we only check for cluster name mismatches in the 
{{GossipDigestSynVerbHandler}}.  In the original design of Cassandra, this was 
sufficient, since we always replied to the {{listen_address}}.

Since we now reply to the {{broadcast_address}}, the 
{{GossipDigestAckVerbHandler}} and the {{GossipDigestAck2VerbHandler}} also 
need to check {{clusterId}} for mismatches. {{GossipDigestAck}} and 
{{GossipDigestAck2}} don't contain {{clusterId}} currently, so we need to bump 
the {{MessagingService}} version to accommodate the addition of this field.
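
A self-contained sketch of the guard described above (illustrative only, not the committed handler code):

{code:java}
// Once ack/ack2 messages carry the cluster name, the handler can drop
// anything coming from a different cluster, mirroring the existing syn check.
final class ClusterNameGuardSketch
{
    private final String localClusterName;

    ClusterNameGuardSketch(String localClusterName)
    {
        this.localClusterName = localClusterName;
    }

    /** @return true if the message may be applied, false if it must be dropped */
    boolean accept(String senderClusterName, String senderAddress)
    {
        if (!localClusterName.equals(senderClusterName))
        {
            System.err.printf("ClusterName mismatch from %s (%s != %s), dropping gossip message%n",
                              senderAddress, senderClusterName, localClusterName);
            return false;
        }
        return true;
    }

    public static void main(String[] args)
    {
        ClusterNameGuardSketch guard = new ClusterNameGuardSketch("Cluster A");
        System.out.println(guard.accept("Cluster A", "10.0.0.2"));  // true
        System.out.println(guard.accept("Cluster B", "10.0.0.9"));  // false, logged and dropped
    }
}
{code}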

The reason this metadata contamination is unidirectional is as follows:
1. New node sends {{GossipDigestSyn}} asking for all info.
2. Node from cluster A replies to cluster B node with shared broadcast address, 
adding info for all nodes from cluster A and asking for no info.
3. Cluster B node doesn't share cluster B data since it hasn't been requested.

All subsequent direct gossiping between the two clusters is blocked by the 
{{GossipDigestSynVerbHandler}}.

I have a working fix for this; we need to decide when a {{MessagingService}} 
bump will occur.

Thanks for the report!

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
> Environment: 2.0.x
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip
> Fix For: 2.1.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two DC cluster
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with a broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry).  Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes.  The first reference to cluster A nodes in cluster B logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'.  Cluster B however still tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done.  However, since 
> the consequences of crazy merged clusters are really bad (the reason there is 
> the name mismatch check in the first place) I think the hole is reasonable to 
> plug.  I'm not sure exactly what the code path is that skips the check in 
> GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10268) Improve incremental repair tests

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10268:
--
Issue Type: Improvement  (was: Bug)

> Improve incremental repair tests
> 
>
> Key: CASSANDRA-10268
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10268
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>
> Incremental repairs were broken for a while due to CASSANDRA-10265 - and none 
> of the tests in incremental_repair_tests.py caught that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10200) NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10200:
--
Reviewer: Aleksey Yeschenko

> NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient
> ---
>
> Key: CASSANDRA-10200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10200
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Minor
> Fix For: 3.1, 3.0.x
>
>
> The method is much more complicated than it needs to be and creates too many 
> maps and sets. The code is easy to simplify if we use the known number of 
> racks and nodes per datacentre to choose what to do in advance.
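
To illustrate the idea only (a self-contained sketch of the general approach, not Cassandra's actual {{NetworkTopologyStrategy}} code):

{code:java}
import java.util.*;

// If the number of distinct racks in the datacentre is known up front, replicas
// can be chosen in a single pass over the ring: prefer unseen racks, accept any
// rack once every rack is covered, and keep skipped endpoints as a fallback.
final class RackAwareSelectionSketch
{
    static List<String> pickReplicas(List<String> ringOrder,     // endpoints in ring order
                                     Map<String, String> rackOf, // endpoint -> rack name
                                     int rf,
                                     int totalRacks)
    {
        List<String> chosen = new ArrayList<>(rf);
        List<String> sameRackFallback = new ArrayList<>();
        Set<String> racksUsed = new HashSet<>();

        for (String endpoint : ringOrder)
        {
            if (chosen.size() == rf)
                return chosen;
            if (racksUsed.add(rackOf.get(endpoint)) || racksUsed.size() == totalRacks)
                chosen.add(endpoint);                // new rack, or all racks already covered
            else
                sameRackFallback.add(endpoint);      // remember for the fallback pass
        }

        for (String endpoint : sameRackFallback)
        {
            if (chosen.size() == rf)
                break;
            chosen.add(endpoint);
        }
        return chosen;
    }

    public static void main(String[] args)
    {
        // Hypothetical 4-node DC with 2 racks, RF=3.
        Map<String, String> rackOf = Map.of("10.0.0.1", "r1", "10.0.0.2", "r1",
                                            "10.0.0.3", "r2", "10.0.0.4", "r2");
        System.out.println(pickReplicas(List.of("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"),
                                        rackOf, 3, 2));
        // -> [10.0.0.1, 10.0.0.3, 10.0.0.4]: one endpoint per unseen rack first,
        //    then any endpoint once all racks are represented.
    }
}
{code}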



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9453) NullPointerException on gossip state change during startup

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-9453:
-
Component/s: Distributed Metadata

> NullPointerException on gossip state change during startup
> --
>
> Key: CASSANDRA-9453
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9453
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: logs.tar.gz
>
>
> In the {{consistency_test.TestConsistency.short_read_reversed_test}} dtest 
> where nodes are restarted one-by-one, one of the nodes logged a 
> NullPointerException during startup:
> {noformat}
> INFO  [HANDSHAKE-/127.0.0.3] 2015-05-21 13:48:16,724 
> OutboundTcpConnection.java:489 - Handshaking version with /127.0.0.3
> INFO  [main] 2015-05-21 13:48:16,725 StorageService.java:1862 - Node 
> /127.0.0.2 state jump to normal
> INFO  [main] 2015-05-21 13:48:16,757 CassandraDaemon.java:517 - Waiting for 
> gossip to settle before accepting client requests...
> INFO  [GossipStage:1] 2015-05-21 13:48:16,776 Gossiper.java:995 - Node 
> /127.0.0.1 has restarted, now UP
> INFO  [CompactionExecutor:1] 2015-05-21 13:48:16,780 CompactionTask.java:225 
> - Compacted (085b4380-ffc0-11e4-b28a-efe71ca64a4e) 4 sstables to 
> [/mnt/tmp/dtest-FLOZYC/test/node2/data/system/local-7ad54392bcdd35a684174e047860b377/la-10-big,]
>  to level=0.  1,783 bytes to 1,217 (~68% of original) in 75ms = 0.015475MB/s. 
>  0 total partitions merged to 1.  Partition merge counts were {4:1, }
> INFO  [GossipStage:2] 2015-05-21 13:48:16,786 Gossiper.java:995 - Node 
> /127.0.0.3 has restarted, now UP
> INFO  [HANDSHAKE-/127.0.0.1] 2015-05-21 13:48:16,788 
> OutboundTcpConnection.java:489 - Handshaking version with /127.0.0.1
> ERROR [GossipStage:1] 2015-05-21 13:48:16,790 CassandraDaemon.java:154 - 
> Exception in thread Thread[GossipStage:1,5,main]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.StorageService.getApplicationStateValue(StorageService.java:1723)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.getTokensFor(StorageService.java:1796)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1850)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1621)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.onJoin(StorageService.java:2308) 
> ~[main/:na]
> at 
> org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:1017) 
> ~[main/:na]
> at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1098) 
> ~[main/:na]
> at 
> org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:49)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {noformat}
> I've attached the logs for the three nodes.  Node 2 was the one with the 
> error.
> This error was on the trunk dtests, but I assume 2.2 is affected at a 
> minimum, so I set the fix version for 2.2.x.  Please check 2.0 and 2.1 for 
> the same potential problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9814) test_scrub_collections_table in scrub_test.py fails; removes sstables

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9814:
-
Fix Version/s: (was: 3.x)

> test_scrub_collections_table in scrub_test.py fails; removes sstables
> -
>
> Key: CASSANDRA-9814
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9814
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shawn Kumar
>Priority: Blocker
> Attachments: node1.log, out.txt
>
>
> The test creates an index on a table with collections and attempts to scrub. 
> The error occurs after the scrub where somehow all relevant sstables are 
> removed, and an assertion in get_sstables fails (due to there not being any 
> sstables). Logs indicate a set of errors under CompactionExecutor related to 
> not being able to read rows. Attached is the test output (out.txt) and the 
> relevant log. I should note that my attempts to replicate this manually weren't 
> successful, so it's possible it could be a test issue (though I don't see 
> why). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10665) Many tests in concurrent_schema_changes_test are failing

2015-11-13 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004224#comment-15004224
 ] 

Jim Witschey commented on CASSANDRA-10665:
--

dtest PR merged. Is there anything else to address for this ticket, or is all 
the work remaining dealt with in CASSANDRA-10699?

> Many tests in concurrent_schema_changes_test are failing
> 
>
> Key: CASSANDRA-10665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10665
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 3.x
>
>
> On the [last build at the time of this 
> writing|http://cassci.datastax.com/job/cassandra-3.0_dtest/335/], we have the 
> following failures:
> * {{create_lots_of_alters_concurrently_test}}
> * {{create_lots_of_schema_churn_with_node_down_test}}
> * {{create_lots_of_schema_churn_test}}
> but I seem to remember that other tests in {{concurrent_schema_changes_test}} 
> are sometimes failing as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9258) Range movement causes CPU & performance impact

2015-11-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004303#comment-15004303
 ] 

Aleksey Yeschenko commented on CASSANDRA-9258:
--

[~dikanggu] Any progress on this? We want to deal with it soon-ish.

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10269) Run new upgrade tests on supported upgrade paths

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-10269.
---
   Resolution: Fixed
Fix Version/s: 3.0.0

> Run new upgrade tests on supported upgrade paths
> 
>
> Key: CASSANDRA-10269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10269
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
> Fix For: 3.0.0
>
>
> The upgrade dtests for 8099 backwards compatibility (originally dealt with in 
> [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/471] and [this 
> JIRA ticket|https://issues.apache.org/jira/browse/CASSANDRA-9893]) need to be 
> run with upgrades over the following upgrade paths:
> - 2.1 -> 3.0
> - 2.2 -> 3.0
> - 3.0 -> trunk
> There are a number of ways we could manage this. We could run the tests as 
> part of the normal dtest jobs in 2.1, 2.2, and 3.0, and select what version 
> to upgrade to based on the version of the test. We could also refactor the 
> new upgrade tests to use the upgrade machinery in the existing upgrade tests.
> [~philipthompson] [~rhatch] Do you have opinions? Can you think of other 
> options?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10460) Fix materialized_views_test.py:TestMaterializedViews.complex_mv_select_statements_test

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10460:
--
Component/s: Testing

> Fix 
> materialized_views_test.py:TestMaterializedViews.complex_mv_select_statements_test
> --
>
> Key: CASSANDRA-10460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10460
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Joel Knighton
> Fix For: 3.0.0 rc2
>
>
> This test:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/materialized_views_test/TestMaterializedViews/complex_mv_select_statements_test/
> fails on CassCI and when I run it manually on OpenStack. It's been failing 
> for a while:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/materialized_views_test/TestMaterializedViews/complex_mv_select_statements_test/history/
> Assigning to [~carlyeks] for triage; feel free to reassign.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10423) Paxos/LWT failures when moving node

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10423:
--
Assignee: (was: Ryan McGuire)

> Paxos/LWT failures when moving node
> ---
>
> Key: CASSANDRA-10423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10423
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra version: 2.0.14
> Java-driver version: 2.0.11
>Reporter: Roger Schildmeijer
> Fix For: 2.1.x
>
>
> While moving a node (nodetool move ) we noticed that lwt started 
> failing for some (~50%) requests. The java-driver (version 2.0.11) returned 
> com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
> during write query at consistency SERIAL (7 replica were required but only 0 
> acknowledged the write). The cluster was not under heavy load.
> I noticed that the failed lwt requests all took just above 1s. That 
> information and the WriteTimeoutException could indicate that this happens:
> https://github.com/apache/cassandra/blob/cassandra-2.0.14/src/java/org/apache/cassandra/service/StorageProxy.java#L268
> I can't explain why though. Why would there be more cas contention just 
> because a node is moving?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10423) Paxos/LWT failures when moving node

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10423:
--
Fix Version/s: 2.1.x

> Paxos/LWT failures when moving node
> ---
>
> Key: CASSANDRA-10423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10423
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra version: 2.0.14
> Java-driver version: 2.0.11
>Reporter: Roger Schildmeijer
> Fix For: 2.1.x
>
>
> While moving a node (nodetool move ) we noticed that lwt started 
> failing for some (~50%) requests. The java-driver (version 2.0.11) returned 
> com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
> during write query at consistency SERIAL (7 replica were required but only 0 
> acknowledged the write). The cluster was not under heavy load.
> I noticed that the failed lwt requests all took just above 1s. That 
> information and the WriteTimeoutException could indicate that this happens:
> https://github.com/apache/cassandra/blob/cassandra-2.0.14/src/java/org/apache/cassandra/service/StorageProxy.java#L268
> I can't explain why though. Why would there be more cas contention just 
> because a node is moving?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10420) Cassandra server should throw meaningfull exception when thrift_framed_transport_size_in_mb reached

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10420:
--
Priority: Minor  (was: Major)

> Cassandra server should throw meaningfull exception when 
> thrift_framed_transport_size_in_mb reached
> ---
>
> Key: CASSANDRA-10420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10420
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuan Yao
>Priority: Minor
>
> In Cassandra's configuration, we set "thrift_framed_transport_size_in_mb" to 
> 15.
> When sending data larger than this threshold, a java.net.SocketException: 
> Connection reset is thrown from the server. This exception doesn't deliver a 
> meaningful message, so the client side can't detect what's wrong with the 
> request.
> Please throw a meaningful exception to the client in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10420) Cassandra server should throw meaningfull exception when thrift_framed_transport_size_in_mb reached

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10420:
--
Issue Type: Improvement  (was: Bug)

> Cassandra server should throw meaningfull exception when 
> thrift_framed_transport_size_in_mb reached
> ---
>
> Key: CASSANDRA-10420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10420
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuan Yao
>
> In Cassandra's configuration, we set "thrift_framed_transport_size_in_mb" to 
> 15.
> When sending data larger than this threshold, a java.net.SocketException: 
> Connection reset is thrown from the server. This exception doesn't deliver a 
> meaningful message, so the client side can't detect what's wrong with the 
> request.
> Please throw a meaningful exception to the client in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10425) Autoselect GC settings depending on system memory

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10425:
--
Issue Type: Improvement  (was: Bug)

> Autoselect GC settings depending on system memory
> -
>
> Key: CASSANDRA-10425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10425
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jonathan Shook
>
> 1) Make GC settings modular within cassandra-env.
> 2) For systems with 32GB or less of RAM, use the classic CMS with the 
> established default settings.
> 3) For systems with 48GB or more of RAM, use G1 with a heap of half the 
> system RAM or 32GB, whichever is lower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-11-13 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004193#comment-15004193
 ] 

Philip Thompson commented on CASSANDRA-10538:
-

I'm unaware of a pre-existing problem that would cause those errors, and 
nothing sticks out in that jenkins result as an obvious cause.

> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.1, 3.1
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolE
> {code}
> We should not have an assertion for a condition that can legitimately occur 
> when the disk is full; we should throw a runtime exception instead.
> I would also like to understand exactly what triggered the assertion. 
> {{LifecycleTransaction}} can throw at the beginning of the commit 

[jira] [Commented] (CASSANDRA-10607) Using reserved keyword for Type field crashes cqlsh

2015-11-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004205#comment-15004205
 ] 

Aleksey Yeschenko commented on CASSANDRA-10607:
---

[~aholmber] Does our bundled driver version have the commit? (in 3.0)

> Using reserved keyword for Type field crashes cqlsh
> ---
>
> Key: CASSANDRA-10607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10607
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Mac OS X El Capitan, Java 1.8.0_25-b17, fresh install of 
> apache-cassandra-2.2.1-bin.tar.gz
>Reporter: Mike Prince
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: create.cql
>
>
> 1) From a fresh cassandra node start, start cqlsh and execute:
> {code}
> CREATE KEYSPACE foospace
> WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
> USE foospace;
> CREATE TYPE Foo(
>   "from" text
> );
> CREATE TABLE Bar(
>   id text PRIMARY KEY,
>   foo frozen<Foo>
> );
> {code}
> 2) {{select * from bar;}}
> {code}
> Traceback (most recent call last):
>   File "bin/cqlsh.py", line 1166, in perform_simple_statement
> rows = future.result(self.session.default_timeout)
>   File 
> "/Users/mike/mobido/servers/apache-cassandra-2.2.1/bin/../lib/cassandra-driver-internal-only-2.6.0c2.post.zip/cassandra-driver-2.6.0c2.post/cassandra/cluster.py",
>  line 3296, in result
> raise self._final_exception
> ValueError: Type names and field names cannot be a keyword: 'from'
> {code}
> 3) Exit cqlsh and try to run cqlsh again, this error occurs:
> {code}
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> ValueError("Don't know how to parse type string 
> u'org.apache.cassandra.db.marshal.UserType(foospace,666f6f,66726f6d:org.apache.cassandra.db.marshal.UTF8Type)':
>  Type names and field names cannot be a keyword: 'from'",)})
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10563) Integrate new upgrade test into dtest upgrade suite

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10563:
--
Issue Type: Improvement  (was: Bug)

> Integrate new upgrade test into dtest upgrade suite
> ---
>
> Key: CASSANDRA-10563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10563
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
> Fix For: 3.1
>
>
> This is a follow-up ticket for CASSANDRA-10360, specifically [~slebresne]'s 
> comment here:
> https://issues.apache.org/jira/browse/CASSANDRA-10360?focusedCommentId=14966539=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14966539
> These tests should be incorporated into the [{{upgrade_tests}} in 
> dtest|https://github.com/riptano/cassandra-dtest/tree/master/upgrade_tests]. 
> I'll take this on; [~nutbunnies] is also a good person for it, but I'll 
> likely get to it first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10340) Stress should exit with non-zero status after failure

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10340:
--
Issue Type: Improvement  (was: Bug)

> Stress should exit with non-zero status after failure
> -
>
> Key: CASSANDRA-10340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10340
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: stress
>
> Currently, stress always exits with a success status, even after a failure. 
> In order to be able to rely on the stress exit status during dtests it would 
> be nice if it exited with a non-zero status after failures.
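
A tiny sketch of the requested behaviour (illustrative only, not the stress tool's actual code):

{code:java}
// Track whether any operation failed and surface that through the exit code.
public final class ExitStatusSketch
{
    private static volatile boolean failed = false;

    static void onOperationError(Throwable t)
    {
        failed = true;
        System.err.println("operation failed: " + t);
    }

    public static void main(String[] args)
    {
        // ... run the workload here, calling onOperationError(...) on failures ...
        System.exit(failed ? 1 : 0);   // non-zero lets dtests detect the failure
    }
}
{code}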



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10018) Stats for several pools removed from nodetool tpstats output

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10018:
--
Component/s: Observability

> Stats for several pools removed from nodetool tpstats output 
> -
>
> Key: CASSANDRA-10018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10018
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Sam Tunnicliffe
>
> With CASSANDRA-5657, the output of nodetool tpstats changed to only include 
> threadpool info for actual Stages. There are a number of
> JMX enabled thread pool executors which we used to include in tpstats and 
> that are still in use but no longer show up.
> Before CASSANDRA-5657
> {noformat}
> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
> MutationStage                     0         0              0         0                 0
> ReadStage                         0         0              0         0                 0
> RequestResponseStage              0         0              0         0                 0
> ReadRepairStage                   0         0              0         0                 0
> CounterMutationStage              0         0              0         0                 0
> MiscStage                         0         0              0         0                 0
> HintedHandoff                     0         1              0         0                 0
> GossipStage                       0         0              0         0                 0
> CacheCleanupExecutor              0         0              0         0                 0
> InternalResponseStage             0         0              0         0                 0
> CommitLogArchiver                 0         0              0         0                 0
> CompactionExecutor                0         0             48         0                 0
> ValidationExecutor                0         0              0         0                 0
> MigrationStage                    0         0              2         0                 0
> AntiEntropyStage                  0         0              0         0                 0
> PendingRangeCalculator            0         0              1         0                 0
> Sampler                           0         0              0         0                 0
> MemtableFlushWriter               0         0             14         0                 0
> MemtablePostFlush                 0         0             20         0                 0
> MemtableReclaimMemory             0         0             14         0                 0
> 
> Message type           Dropped
> READ                         0
> RANGE_SLICE                  0
> _TRACE                       0
> MUTATION                     0
> COUNTER_MUTATION             0
> BINARY                       0
> REQUEST_RESPONSE             0
> PAGED_RANGE                  0
> READ_REPAIR                  0
> {noformat}
> After CASSANDRA-5657
> {noformat}
> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
> ReadStage                         0         0              0         0                 0
> MutationStage                     0         0              0         0                 0
> CounterMutationStage              0         0              0         0                 0
> GossipStage                       0         0              0         0                 0
> RequestResponseStage              0         0              0         0                 0
> AntiEntropyStage                  0         0              0         0                 0
> MigrationStage                    0         0              2         0                 0
> MiscStage                         0         0              0         0                 0
> InternalResponseStage             0         0              0         0                 0
> ReadRepairStage                   0         0              0         0                 0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {noformat}





[3/6] cassandra git commit: Fix assertion in LogFile when disk is full

2015-11-13 Thread jmckenzie
Fix assertion in LogFile when disk is full

Patch by stefania; reviewed by aweisberg for CASSANDRA-10538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32239272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32239272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32239272

Branch: refs/heads/trunk
Commit: 32239272924e3bf8053aa51adc86d83ceeb39268
Parents: cb102da
Author: Stefania Alborghetti 
Authored: Fri Nov 13 10:02:10 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:02:10 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index 4318f9c..8657869 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -274,11 +274,12 @@ final class LogFile
 
 private boolean addRecord(LogRecord record)
 {
-if (!records.add(record))
+if (records.contains(record))
 return false;
 
 replicas.append(record);
-return true;
+
+return records.add(record);
 }
 
 void remove(Type type, SSTable table)
@@ -286,8 +287,8 @@ final class LogFile
 LogRecord record = makeRecord(type, table);
 assert records.contains(record) : String.format("[%s] is not tracked 
by %s", record, id);
 
-records.remove(record);
 deleteRecordFiles(record);
+records.remove(record);
 }
 
 boolean contains(Type type, SSTable table)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
index 8b82207..ce76165 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
@@ -367,14 +367,12 @@ class LogTransaction extends 
Transactional.AbstractTransactional implements Tran
 
 protected Throwable doCommit(Throwable accumulate)
 {
-txnFile.commit();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::commit));
 }
 
 protected Throwable doAbort(Throwable accumulate)
 {
-txnFile.abort();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::abort));
 }
 
 protected void doPrepare() { }
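
For readers skimming the diff above, a minimal, self-contained sketch of the
error-accumulation idea behind the doCommit/doAbort change follows. It uses a
simplified local helper rather than Cassandra's actual
org.apache.cassandra.utils.Throwables, so the names and signatures here are
illustrative only.

{code:java}
// Hedged sketch: run an action and, if it throws, chain the failure onto an
// already-accumulated Throwable instead of letting it escape before the
// remaining completion steps run. The helper below is a simplification, not
// Cassandra's real Throwables utility.
public class AccumulateExample
{
    interface Action { void run() throws Exception; }

    static Throwable perform(Throwable accumulate, Action action)
    {
        try
        {
            action.run();
        }
        catch (Throwable t)
        {
            if (accumulate == null)
                accumulate = t;
            else
                accumulate.addSuppressed(t);
        }
        return accumulate;
    }

    public static void main(String[] args)
    {
        Throwable acc = null;
        acc = perform(acc, () -> { throw new RuntimeException("commit failed"); });
        acc = perform(acc, () -> System.out.println("later cleanup still runs"));
        System.out.println("accumulated failure: " + acc);
    }
}
{code}

The point of the pattern is that a commit or abort that fails (for example
because the disk is full) no longer throws out of the method before the
remaining completion work runs; the failure is carried along and surfaced at
the end instead.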



[jira] [Resolved] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-10250.
---
   Resolution: Duplicate
Reproduced In: 3.0 beta 1, 2.2.1, 2.1.9  (was: 2.1.9, 2.2.1, 3.0 beta 1)

> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at time across 20 tables
> - Add 7 columns one at time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of the failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don't have exceptions, and the columns/indexes that are 
> incorrect don't seem to follow a pattern.  Running {{nodetool describecluster}} 
> on each node shows the same schema id on all nodes.
> Attached is a python script extracted from the dtest.  Running it against a 
> local 3-node cluster will reproduce the issue (with enough runs – fails ~20% 
> on my machine).
> Also attached are the node logs from a run where a dropped column 
> (alter_me_7 table, column s1) is still present.  Checking the system_schema 
> tables for this case shows the s1 column in both the columns and drop_columns 
> tables.
> This has been flapping on cassci on versions 2+ and doesn’t seem to be 
> related to changes in 3.0.  More testing needs to be done though.
> //cc [~enigmacurry]





[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88892af9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88892af9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88892af9

Branch: refs/heads/cassandra-3.1
Commit: 88892af934af23fa15571e165c79446d3ef1bd43
Parents: f92f69c 3223927
Author: Joshua McKenzie 
Authored: Fri Nov 13 10:03:45 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:03:45 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--




[jira] [Updated] (CASSANDRA-10661) Integrate SASI to Cassandra

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10661:
--
Component/s: (was: index)
 Local Write-Read Paths

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it's currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of 
> the things related to the integration of SASI into the mainline Cassandra 3.x 
> release, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues].





[jira] [Updated] (CASSANDRA-10231) Null status entries on nodes that crash during decommission of a different node

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10231:
--
Component/s: Distributed Metadata

> Null status entries on nodes that crash during decommission of a different 
> node
> ---
>
> Key: CASSANDRA-10231
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10231
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Joel Knighton
> Fix For: 3.0.0 rc2
>
> Attachments: n1.log, n2.log, n3.log, n4.log, n5.log
>
>
> This issue is reproducible through a Jepsen test of materialized views that 
> crashes and decommissions nodes throughout the test.
> In a 5 node cluster, if a node crashes at a certain point (unknown) during 
> the decommission of a different node, it may start with a null entry for the 
> decommissioned node like so:
> DN 10.0.0.5 ? 256 ? null rack1
> This entry does not get updated/cleared by gossip. This entry is removed upon 
> a restart of the affected node.
> This issue is further detailed in ticket 
> [10068|https://issues.apache.org/jira/browse/CASSANDRA-10068].





[jira] [Updated] (CASSANDRA-10517) Make sure all unit tests run on CassCI

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10517:
--
Component/s: Testing

> Make sure all unit tests run on CassCI
> --
>
> Key: CASSANDRA-10517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10517
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Joel Knighton
>  Labels: triage
>
> It seems that some Windows unit tests sometimes aren't run on CassCI, and 
> there's no error reporting for this. For instance, this test was introduced 
> around the time build #38 would have happened, but has only run in builds 
> #50-3 and #64:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_utest_win32/lastCompletedBuild/testReport/org.apache.cassandra.cql3/ViewTest/testPrimaryKeyIsNotNull/history/





[jira] [Updated] (CASSANDRA-8323) Adapt UDF code after JAVA-502

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8323:
-
Fix Version/s: (was: 3.x)

> Adapt UDF code after JAVA-502
> -
>
> Key: CASSANDRA-8323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8323
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>
> In CASSANDRA-7563, support for user-types, tuple-types and collections is 
> added to C* using the Java Driver.
> The code in C* requires access to some functionality which is currently 
> accessed using reflection/invoke-dynamic.
> This ticket is about providing better/direct access to that functionality.
> I'll provide patches for the Java Driver + C*.





[jira] [Resolved] (CASSANDRA-9356) SSTableRewriterTest fails infrequently

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-9356.

   Resolution: Cannot Reproduce
Fix Version/s: (was: 2.2.x)
   (was: 2.1.x)

Looks like this test is no longer flaky on either 2.2 or 3.0. If it comes up in 
the future, feel free to re-open this.

> SSTableRewriterTest fails infrequently
> --
>
> Key: CASSANDRA-9356
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9356
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>  Labels: test-failure
> Attachments: system.log.gz
>
>
> This used to complain about a timeout. I am not seeing that anymore. What I 
> see is one test case failing, or, the one time it reproduced on my laptop, a 
> bunch of them. I am seeing different assertions fail in different tests now.
> http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9528-testall/6/testReport/junit/org.apache.cassandra.io.sstable/SSTableRewriterTest/testAbort2/





[jira] [Updated] (CASSANDRA-9556) Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9556:
-
Issue Type: Improvement  (was: Bug)

> Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)
> 
>
> Key: CASSANDRA-9556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremy Hanna
>Assignee: ZhaoYang
>  Labels: stress
> Attachments: cassandra-2.1-9556.txt, trunk-9556.txt
>
>
> Currently you can't define a data model with decimal types and use Cassandra 
> stress with it.  Also, I imagine that holds true with other newer data types 
> such as the new date and time types.  Besides that, now that data models are 
> including user defined types, we should allow users to create those 
> structures with stress as well.  Perhaps we could split out the UDTs into a 
> different ticket if it holds the other types up.





[jira] [Resolved] (CASSANDRA-10285) Compaction running indefinitely on system.hints

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-10285.
---
Resolution: Cannot Reproduce

Feel free to reopen if you can reproduce.

> Compaction running indefinitely on system.hints 
> 
>
> Key: CASSANDRA-10285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10285
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alan Boudreault
>Assignee: Marcus Eriksson
>
> During my hints storage benchmarks, I've experienced an issue using C* 2.2. 
> The hints were never replayed. After a while (more than 24H ...), I noticed 
> that there were still compactions running on system.hints and that new ones 
> were being triggered every 10-20 minutes.
> To reproduce, we create a cluster of 2 nodes with RF=2 and generate hints by 
> shutting down node2.
> {code}
> ccm create --install-dir=`pwd` -n 2 local && ccm start
> ccm node1 stress -- write n=50M -rate threads=300 -port jmx=7198 -errors 
> ignore -schema replication\(factor=2\)
> # wait 5-6 seconds to get the schema creation propagated on all nodes, then 
> in another window, stop node 2
> ccm node2 stop
> # wait until the initial 50M writes are finished, then bring node2 back up 
> and write another 50M keys.
> ccm node2 start
> ccm node1 stress -- write n=50M -rate threads=300 -port jmx=7198 -errors 
> ignore -schema replication\(factor=2\)
> # You should get the initial compaction finished after 15-20 minutes. You can 
> set the mb throughput to 0 to get that done faster. 
> # Monitor node1: the hints will never be replayed and you should see 
> compactions happening indefinitely.
> {code}
> //cc [~krummas] [~yukim]





[jira] [Updated] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10111:
--
  Environment: (was: 2.0.x)
Reproduced In: 2.0.15, 2.1.12, 2.2.4, 3.0.1  (was: 2.0.15)
Fix Version/s: (was: 2.1.x)

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip
>
> Setup:
>  * Two clusters: A & B
>  * Both are two DC cluster
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with a broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry).  Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes.  The first reference to cluster A nodes in cluster B's logs 
> is when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'.  Cluster B however tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done.  However, since 
> the consequences of crazy merged clusters are really bad (the reason the name 
> mismatch check exists in the first place), I think the hole is reasonable to 
> plug.  I'm not sure exactly what the code path is that skips the check in 
> GossipDigestSynVerbHandler.





[08/19] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36394542
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36394542
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36394542

Branch: refs/heads/cassandra-2.2
Commit: 36394542e6e350b49396ed1b09fecdaba76aece8
Parents: 9f02182 7e056fa
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:17 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:17 2015 -0500

--

--




[16/19] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb102da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb102da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb102da9

Branch: refs/heads/cassandra-3.1
Commit: cb102da9e1ca9899e7e1f0808cb418425d2c7448
Parents: ed424bd 73a730f
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:55 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:55 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb102da9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--



[03/19] cassandra git commit: Fix CompressionInfo not being synced on close

2015-11-13 Thread jmckenzie
Fix CompressionInfo not being synced on close

Patch by stefania; reviewed by aweisberg for CASSANDRA-10534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e056fa2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e056fa2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e056fa2

Branch: refs/heads/cassandra-2.2
Commit: 7e056fa27047a868660ff796734dcbd485e1b29a
Parents: e291382
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:56:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:56:08 2015 -0500

--
 .../cassandra/io/compress/CompressionMetadata.java   | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e056fa2/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1dc2df3 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -401,14 +401,19 @@ public class CompressionMetadata
 
 public void close(long dataLength, int chunks) throws IOException
 {
+FileOutputStream fos = null;
 DataOutputStream out = null;
 try
 {
-   out = new DataOutputStream(new BufferedOutputStream(new 
FileOutputStream(filePath)));
-   assert chunks == count;
-   writeHeader(out, dataLength, chunks);
+fos = new FileOutputStream(filePath);
+out = new DataOutputStream(new BufferedOutputStream(fos));
+assert chunks == count;
+writeHeader(out, dataLength, chunks);
 for (int i = 0 ; i < count ; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 finally
 {
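
For context, the core of this fix is the explicit flush followed by a file
descriptor sync before the stream is closed. A minimal, self-contained sketch
of that pattern follows; the file name and offsets array are made up for
illustration and this is not the actual CompressionMetadata code.

{code:java}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Hedged sketch of the flush-then-sync pattern: push buffered bytes into the
// OS, then ask the OS to persist them to the device before close() returns.
public class SyncOnCloseExample
{
    public static void writeOffsets(String filePath, long[] offsets) throws IOException
    {
        try (FileOutputStream fos = new FileOutputStream(filePath);
             DataOutputStream out = new DataOutputStream(new BufferedOutputStream(fos)))
        {
            for (long offset : offsets)
                out.writeLong(offset);

            out.flush();          // drain the BufferedOutputStream into the file
            fos.getFD().sync();   // force the bytes to stable storage
        }
    }

    public static void main(String[] args) throws IOException
    {
        writeOffsets("offsets.bin", new long[]{ 0L, 65536L, 131072L });
    }
}
{code}

Without the getFD().sync() call, the written bytes may still be sitting in the
OS page cache when close() returns, so a crash shortly afterwards could leave a
truncated or empty file on disk.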



[19/19] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da4f7f15
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da4f7f15
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da4f7f15

Branch: refs/heads/trunk
Commit: da4f7f15a912e0117af81f9a48fdba14693b1ae0
Parents: 9c5bdb2 f92f69c
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:58:10 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:58:10 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--




[13/19] cassandra git commit: 10534 2.2 patch

2015-11-13 Thread jmckenzie
10534 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73a730f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73a730f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73a730f9

Branch: refs/heads/trunk
Commit: 73a730f926d25a7d4f693507937b8565b701259c
Parents: 3639454
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:57:42 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:42 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73a730f9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 23a9f3e..e5d470c 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -345,11 +345,15 @@ public class CompressionMetadata
 }
 
 // flush the data to disk
-try (DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(new FileOutputStream(filePath
+try (FileOutputStream fos = new FileOutputStream(filePath);
+ DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(fos)))
 {
 writeHeader(out, dataLength, count);
-for (int i = 0 ; i < count ; i++)
+for (int i = 0; i < count; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 catch (IOException e)
 {



[04/19] cassandra git commit: Fix CompressionInfo not being synced on close

2015-11-13 Thread jmckenzie
Fix CompressionInfo not being synced on close

Patch by stefania; reviewed by aweisberg for CASSANDRA-10534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e056fa2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e056fa2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e056fa2

Branch: refs/heads/cassandra-3.0
Commit: 7e056fa27047a868660ff796734dcbd485e1b29a
Parents: e291382
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:56:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:56:08 2015 -0500

--
 .../cassandra/io/compress/CompressionMetadata.java   | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e056fa2/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1dc2df3 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -401,14 +401,19 @@ public class CompressionMetadata
 
 public void close(long dataLength, int chunks) throws IOException
 {
+FileOutputStream fos = null;
 DataOutputStream out = null;
 try
 {
-   out = new DataOutputStream(new BufferedOutputStream(new 
FileOutputStream(filePath)));
-   assert chunks == count;
-   writeHeader(out, dataLength, chunks);
+fos = new FileOutputStream(filePath);
+out = new DataOutputStream(new BufferedOutputStream(fos));
+assert chunks == count;
+writeHeader(out, dataLength, chunks);
 for (int i = 0 ; i < count ; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 finally
 {



[10/19] cassandra git commit: 10534 2.2 patch

2015-11-13 Thread jmckenzie
10534 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73a730f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73a730f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73a730f9

Branch: refs/heads/cassandra-3.0
Commit: 73a730f926d25a7d4f693507937b8565b701259c
Parents: 3639454
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:57:42 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:42 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73a730f9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 23a9f3e..e5d470c 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -345,11 +345,15 @@ public class CompressionMetadata
 }
 
 // flush the data to disk
-try (DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(new FileOutputStream(filePath
+try (FileOutputStream fos = new FileOutputStream(filePath);
+ DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(fos)))
 {
 writeHeader(out, dataLength, count);
-for (int i = 0 ; i < count ; i++)
+for (int i = 0; i < count; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 catch (IOException e)
 {



[02/19] cassandra git commit: Fix CompressionInfo not being synced on close

2015-11-13 Thread jmckenzie
Fix CompressionInfo not being synced on close

Patch by stefania; reviewed by aweisberg for CASSANDRA-10534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e056fa2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e056fa2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e056fa2

Branch: refs/heads/cassandra-3.1
Commit: 7e056fa27047a868660ff796734dcbd485e1b29a
Parents: e291382
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:56:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:56:08 2015 -0500

--
 .../cassandra/io/compress/CompressionMetadata.java   | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e056fa2/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1dc2df3 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -401,14 +401,19 @@ public class CompressionMetadata
 
 public void close(long dataLength, int chunks) throws IOException
 {
+FileOutputStream fos = null;
 DataOutputStream out = null;
 try
 {
-   out = new DataOutputStream(new BufferedOutputStream(new 
FileOutputStream(filePath)));
-   assert chunks == count;
-   writeHeader(out, dataLength, chunks);
+fos = new FileOutputStream(filePath);
+out = new DataOutputStream(new BufferedOutputStream(fos));
+assert chunks == count;
+writeHeader(out, dataLength, chunks);
 for (int i = 0 ; i < count ; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 finally
 {



[15/19] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb102da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb102da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb102da9

Branch: refs/heads/cassandra-3.0
Commit: cb102da9e1ca9899e7e1f0808cb418425d2c7448
Parents: ed424bd 73a730f
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:55 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:55 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb102da9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--



[14/19] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb102da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb102da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb102da9

Branch: refs/heads/trunk
Commit: cb102da9e1ca9899e7e1f0808cb418425d2c7448
Parents: ed424bd 73a730f
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:55 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:55 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb102da9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--



[06/19] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36394542
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36394542
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36394542

Branch: refs/heads/cassandra-3.0
Commit: 36394542e6e350b49396ed1b09fecdaba76aece8
Parents: 9f02182 7e056fa
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:17 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:17 2015 -0500

--

--




[18/19] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f92f69c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f92f69c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f92f69c4

Branch: refs/heads/cassandra-3.1
Commit: f92f69c45eb7e23b6151eae6be6e3b30681c73d8
Parents: d404362 cb102da
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:58:04 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:58:04 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--




[07/19] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36394542
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36394542
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36394542

Branch: refs/heads/trunk
Commit: 36394542e6e350b49396ed1b09fecdaba76aece8
Parents: 9f02182 7e056fa
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:57:17 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:17 2015 -0500

--

--




[11/19] cassandra git commit: 10534 2.2 patch

2015-11-13 Thread jmckenzie
10534 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73a730f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73a730f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73a730f9

Branch: refs/heads/cassandra-3.1
Commit: 73a730f926d25a7d4f693507937b8565b701259c
Parents: 3639454
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:57:42 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:42 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73a730f9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 23a9f3e..e5d470c 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -345,11 +345,15 @@ public class CompressionMetadata
 }
 
 // flush the data to disk
-try (DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(new FileOutputStream(filePath
+try (FileOutputStream fos = new FileOutputStream(filePath);
+ DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(fos)))
 {
 writeHeader(out, dataLength, count);
-for (int i = 0 ; i < count ; i++)
+for (int i = 0; i < count; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 catch (IOException e)
 {



[01/19] cassandra git commit: Fix CompressionInfo not being synced on close

2015-11-13 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e291382fd -> 7e056fa27
  refs/heads/cassandra-2.2 9f021823f -> 73a730f92
  refs/heads/cassandra-3.0 ed424bdd6 -> cb102da9e
  refs/heads/cassandra-3.1 d404362ad -> f92f69c45
  refs/heads/trunk 9c5bdb261 -> da4f7f15a


Fix CompressionInfo not being synced on close

Patch by stefania; reviewed by aweisberg for CASSANDRA-10534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e056fa2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e056fa2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e056fa2

Branch: refs/heads/cassandra-2.1
Commit: 7e056fa27047a868660ff796734dcbd485e1b29a
Parents: e291382
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:56:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:56:08 2015 -0500

--
 .../cassandra/io/compress/CompressionMetadata.java   | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e056fa2/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1dc2df3 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -401,14 +401,19 @@ public class CompressionMetadata
 
 public void close(long dataLength, int chunks) throws IOException
 {
+FileOutputStream fos = null;
 DataOutputStream out = null;
 try
 {
-   out = new DataOutputStream(new BufferedOutputStream(new 
FileOutputStream(filePath)));
-   assert chunks == count;
-   writeHeader(out, dataLength, chunks);
+fos = new FileOutputStream(filePath);
+out = new DataOutputStream(new BufferedOutputStream(fos));
+assert chunks == count;
+writeHeader(out, dataLength, chunks);
 for (int i = 0 ; i < count ; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 finally
 {



[12/19] cassandra git commit: 10534 2.2 patch

2015-11-13 Thread jmckenzie
10534 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73a730f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73a730f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73a730f9

Branch: refs/heads/cassandra-2.2
Commit: 73a730f926d25a7d4f693507937b8565b701259c
Parents: 3639454
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:57:42 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:57:42 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73a730f9/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 23a9f3e..e5d470c 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -345,11 +345,15 @@ public class CompressionMetadata
 }
 
 // flush the data to disk
-try (DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(new FileOutputStream(filePath
+try (FileOutputStream fos = new FileOutputStream(filePath);
+ DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(fos)))
 {
 writeHeader(out, dataLength, count);
-for (int i = 0 ; i < count ; i++)
+for (int i = 0; i < count; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 catch (IOException e)
 {



[05/19] cassandra git commit: Fix CompressionInfo not being synced on close

2015-11-13 Thread jmckenzie
Fix CompressionInfo not being synced on close

Patch by stefania; reviewed by aweisberg for CASSANDRA-10534


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e056fa2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e056fa2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e056fa2

Branch: refs/heads/trunk
Commit: 7e056fa27047a868660ff796734dcbd485e1b29a
Parents: e291382
Author: Stefania Alborghetti 
Authored: Fri Nov 13 09:56:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:56:08 2015 -0500

--
 .../cassandra/io/compress/CompressionMetadata.java   | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e056fa2/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1dc2df3 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -401,14 +401,19 @@ public class CompressionMetadata
 
 public void close(long dataLength, int chunks) throws IOException
 {
+FileOutputStream fos = null;
 DataOutputStream out = null;
 try
 {
-   out = new DataOutputStream(new BufferedOutputStream(new 
FileOutputStream(filePath)));
-   assert chunks == count;
-   writeHeader(out, dataLength, chunks);
+fos = new FileOutputStream(filePath);
+out = new DataOutputStream(new BufferedOutputStream(fos));
+assert chunks == count;
+writeHeader(out, dataLength, chunks);
 for (int i = 0 ; i < count ; i++)
 out.writeLong(offsets.getLong(i * 8L));
+
+out.flush();
+fos.getFD().sync();
 }
 finally
 {



[1/6] cassandra git commit: Fix assertion in LogFile when disk is full

2015-11-13 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 cb102da9e -> 322392729
  refs/heads/cassandra-3.1 f92f69c45 -> 88892af93
  refs/heads/trunk da4f7f15a -> 34822301c


Fix assertion in LogFile when disk is full

Patch by stefania; reviewed by aweisberg for CASSANDRA-10538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32239272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32239272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32239272

Branch: refs/heads/cassandra-3.0
Commit: 32239272924e3bf8053aa51adc86d83ceeb39268
Parents: cb102da
Author: Stefania Alborghetti 
Authored: Fri Nov 13 10:02:10 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:02:10 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index 4318f9c..8657869 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -274,11 +274,12 @@ final class LogFile
 
 private boolean addRecord(LogRecord record)
 {
-if (!records.add(record))
+if (records.contains(record))
 return false;
 
 replicas.append(record);
-return true;
+
+return records.add(record);
 }
 
 void remove(Type type, SSTable table)
@@ -286,8 +287,8 @@ final class LogFile
 LogRecord record = makeRecord(type, table);
 assert records.contains(record) : String.format("[%s] is not tracked 
by %s", record, id);
 
-records.remove(record);
 deleteRecordFiles(record);
+records.remove(record);
 }
 
 boolean contains(Type type, SSTable table)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
index 8b82207..ce76165 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
@@ -367,14 +367,12 @@ class LogTransaction extends 
Transactional.AbstractTransactional implements Tran
 
 protected Throwable doCommit(Throwable accumulate)
 {
-txnFile.commit();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::commit));
 }
 
 protected Throwable doAbort(Throwable accumulate)
 {
-txnFile.abort();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::abort));
 }
 
 protected void doPrepare() { }



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88892af9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88892af9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88892af9

Branch: refs/heads/trunk
Commit: 88892af934af23fa15571e165c79446d3ef1bd43
Parents: f92f69c 3223927
Author: Joshua McKenzie 
Authored: Fri Nov 13 10:03:45 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:03:45 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--




[6/6] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34822301
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34822301
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34822301

Branch: refs/heads/trunk
Commit: 34822301c2b12253eda5465d46fe47f61655dba3
Parents: da4f7f1 88892af
Author: Joshua McKenzie 
Authored: Fri Nov 13 10:03:50 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:03:50 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--




[2/6] cassandra git commit: Fix assertion in LogFile when disk is full

2015-11-13 Thread jmckenzie
Fix assertion in LogFile when disk is full

Patch by stefania; reviewed by aweisberg for CASSANDRA-10538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32239272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32239272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32239272

Branch: refs/heads/cassandra-3.1
Commit: 32239272924e3bf8053aa51adc86d83ceeb39268
Parents: cb102da
Author: Stefania Alborghetti 
Authored: Fri Nov 13 10:02:10 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 10:02:10 2015 -0500

--
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 7 ---
 .../org/apache/cassandra/db/lifecycle/LogTransaction.java | 6 ++
 2 files changed, 6 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index 4318f9c..8657869 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -274,11 +274,12 @@ final class LogFile
 
 private boolean addRecord(LogRecord record)
 {
-if (!records.add(record))
+if (records.contains(record))
 return false;
 
 replicas.append(record);
-return true;
+
+return records.add(record);
 }
 
 void remove(Type type, SSTable table)
@@ -286,8 +287,8 @@ final class LogFile
 LogRecord record = makeRecord(type, table);
 assert records.contains(record) : String.format("[%s] is not tracked 
by %s", record, id);
 
-records.remove(record);
 deleteRecordFiles(record);
+records.remove(record);
 }
 
 boolean contains(Type type, SSTable table)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32239272/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
index 8b82207..ce76165 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
@@ -367,14 +367,12 @@ class LogTransaction extends 
Transactional.AbstractTransactional implements Tran
 
 protected Throwable doCommit(Throwable accumulate)
 {
-txnFile.commit();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::commit));
 }
 
 protected Throwable doAbort(Throwable accumulate)
 {
-txnFile.abort();
-return complete(accumulate);
+return complete(Throwables.perform(accumulate, txnFile::abort));
 }
 
 protected void doPrepare() { }



[jira] [Updated] (CASSANDRA-9501) ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata failing

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9501:
---
Component/s: Local Write-Read Paths

> ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata failing
> 
>
> Key: CASSANDRA-9501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9501
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: test
> Fix For: 2.2.x
>
> Attachments: 9501_v1.txt, 9501_v2.txt
>
>
> {noformat}
> [junit] Testcase: 
> testSliceByNamesCommandOldMetadata(org.apache.cassandra.db.ColumnFamilyStoreTest):
> FAILED
> [junit] null
> [junit] junit.framework.AssertionFailedError
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:171)
> [junit] at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166)
> [junit] at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.rename(SSTableWriter.java:266)
> [junit] at 
> org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:766)
> [junit] at 
> org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata(ColumnFamilyStoreTest.java:1125)
> {noformat}





[jira] [Updated] (CASSANDRA-9428) Implement hints compression

2015-11-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9428:
---
Component/s: Coordination

> Implement hints compression
> ---
>
> Key: CASSANDRA-9428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9428
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination
>Reporter: Aleksey Yeschenko
>Assignee: Joshua McKenzie
> Fix For: 3.0.x
>
>
> CASSANDRA-6230 is being implemented with compression in mind, but compression 
> itself is not going to be implemented by the original ticket.
> Adding it on top should be relatively straightforward, and important, since 
> there are several users in the wild that use the compression interface for 
> encryption purposes. DSE is one of them (but isn't the only one). Losing 
> encryption capabilities would be a regression.





[jira] [Commented] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-11-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004093#comment-15004093
 ] 

Aleksey Yeschenko commented on CASSANDRA-10250:
---

Closing the issue itself as Duplicate of the mentioned two tickets.

bq. I think it's good to include this in dtest now, even though it will fail. 
We are working on a better reporting mechanism with multiple views, one of 
which would be to hide "known" failures like this.

If that mechanism is not ready yet, I would much prefer the test to be 
excluded for now, as our #1 current goal is to get dtest to 100% passing, and 
this isn't helping.

> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at time across 20 tables
> - Add 7 columns one at time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of the failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don't have exceptions, and the columns/indexes that are 
> incorrect don't seem to follow a pattern.  Running {{nodetool describecluster}} 
> on each node shows the same schema id on all nodes.
> Attached is a python script extracted from the dtest.  Running it against a 
> local 3-node cluster will reproduce the issue (with enough runs – fails ~20% 
> on my machine).
> Also attached are the node logs from a run where a dropped column 
> (alter_me_7 table, column s1) is still present.  Checking the system_schema 
> tables for this case shows the s1 column in both the columns and drop_columns 
> tables.
> This has been flapping on cassci on versions 2+ and doesn’t seem to be 
> related to changes in 3.0.  More testing needs to be done though.
> //cc [~enigmacurry]





[jira] [Updated] (CASSANDRA-10645) sstableverify_test dtest fails on Windows

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10645:
--
Component/s: Testing

> sstableverify_test dtest fails on Windows
> -
>
> Key: CASSANDRA-10645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10645
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Joel Knighton
>
> {{offline_tools_test.py:TestOfflineTools.sstableverify_test}} fails on CassCI 
> on C* 3.0 on Windows
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/junit/offline_tools_test/TestOfflineTools/sstableverify_test/history/
> It fails consistently, but not always after checking the same number of 
> sstables. For instance:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/junit/offline_tools_test/TestOfflineTools/sstableverify_test/
> vs.
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/93/testReport/junit/offline_tools_test/TestOfflineTools/sstableverify_test/
> I'm not sure if that's significant, but just in case, I thought it was worth 
> noting.
> Doesn't look like anyone's worked on this test particularly recently; 
> [~JoshuaMcKenzie], can you please find an assignee for this?





[jira] [Updated] (CASSANDRA-10517) Make sure all unit tests run on CassCI

2015-11-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10517:
--
Fix Version/s: (was: 3.0.0)

> Make sure all unit tests run on CassCI
> --
>
> Key: CASSANDRA-10517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10517
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Joel Knighton
>  Labels: triage
>
> It seems that some Windows unit tests sometimes aren't run on CassCI, and 
> there's no error reporting for this. For instance, this test was introduced 
> around the time build #38 would have happened, but has only run in builds 
> #50-3 and #64:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_utest_win32/lastCompletedBuild/testReport/org.apache.cassandra.cql3/ViewTest/testPrimaryKeyIsNotNull/history/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10574) Add the option to skip compaction for hints cf when doing the handoff.

2015-11-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004122#comment-15004122
 ] 

Aleksey Yeschenko commented on CASSANDRA-10574:
---

The only reason we compact them at all is to clear out the tombstones. I'm not 
sure this is a workaround we want to commit, as it doesn't fix the issue 
fundamentally (and with the flag enabled you risk tombstone-overwhelming 
failures when reading hints).

The true fix is the new hints implementation in 3.0 (which, I know, doesn't 
help you much right now, sorry).

I'm afraid you'll have to apply the patch to your C* locally if you want to run 
with it (again, sorry).

> Add the option to skip compaction for hints cf when doing the handoff.
> --
>
> Key: CASSANDRA-10574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: hints.patch
>
>
> In our production env, we stopped gossip for about 1 hour on around 1/3 of the 
> nodes; when we re-enabled gossip, it took a very long time for the other 2/3 of 
> the nodes to complete hints handoff.
> I found that most of the time is spent on the compaction run before each hints 
> handoff, while in most cases there is only one sstable, so the compaction is 
> really not necessary.
> So I added an option to skip that compaction, which allows the nodes to finish 
> hints handoff very quickly (a toy sketch of the skip decision follows this 
> description).
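
A toy sketch of the decision the patch is after, assuming a flag named {{skip_compaction_option}} and combining it with the single-sstable observation above (this is not the attached hints.patch, just an illustration):

{code}
# Toy model, not the actual hints.patch: decide whether compacting the hints CF
# before a handoff is worth it. Skips when the operator opts out, or when there
# is only one live sstable and compaction is therefore not really necessary.
def should_compact_hints(live_sstable_count, skip_compaction_option=False):
    if skip_compaction_option:
        return False                  # the opt-out flag the ticket proposes
    return live_sstable_count > 1     # single sstable: nothing to merge

print(should_compact_hints(1))                               # False
print(should_compact_hints(4))                               # True
print(should_compact_hints(4, skip_compaction_option=True))  # False
{code}

As the comment above points out, skipping the compaction outright leaves hint tombstones to accumulate, which is the trade-off an operator accepts when turning such a flag on.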



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10630) NullPointerException in DeletionInfo.isDeleted

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10630:
--
Fix Version/s: (was: 2.1.x)

> NullPointerException in DeletionInfo.isDeleted
> --
>
> Key: CASSANDRA-10630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10630
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Linux 64 bits
>Reporter: Brice Figureau
>
> The following CQL query:
> {code:sql}
> select count(*) from messages;
> {code}
> sometimes produces the following stack trace:
> {code}
> ERROR [SharedPool-Worker-1] 2015-10-31 16:32:22,606 ErrorMessage.java:251 - 
> Unexpected exception during request
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.db.DeletionInfo.isDeleted(DeletionInfo.java:138) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:102)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:39)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:260) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:122)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [apache-cassandra-2.1.11.jar:2.1.11]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_51]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [apache-cassandra-2.1.11.jar:2.1.11]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.11.jar:2.1.11]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}
> After taking a snapshot on all participating nodes, the query apparently 
> starts to succeed.
> The table schema is:
> {code:sql}
> CREATE TABLE akka.messages (
> persistence_id text,
> partition_nr bigint,
> sequence_nr bigint,
> message blob,
> used boolean static,
> PRIMARY KEY ((persistence_id, partition_nr), sequence_nr)
> ) WITH CLUSTERING ORDER BY (sequence_nr ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> (this is from the [Akka Cassandra Persistence journal 
> plugin|https://github.com/krasserm/akka-persistence-cassandra])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9358) RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList times out periodically

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9358:
-
Fix Version/s: (was: 2.1.x)

> RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList times out 
> periodically
> 
>
> Key: CASSANDRA-9358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9358
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
> Attachments: system.log
>
>
> It took about 6 loops over this test to get it to time out:
> {noformat}
> [junit] Testsuite: org.apache.cassandra.db.RecoveryManagerTruncateTest
> [junit] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 9.262 sec
> [junit] 
> [junit] Testcase: 
> testTruncatePointInTimeReplayList(org.apache.cassandra.db.RecoveryManagerTruncateTest):
>FAILED
> [junit] 
> [junit] junit.framework.AssertionFailedError: 
> [junit] at 
> org.apache.cassandra.db.RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList(RecoveryManagerTruncateTest.java:159)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.db.RecoveryManagerTruncateTest FAILED
> {noformat}
> The system.log from the timed-out run is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-4458) PerRowSecondaryIndex will call buildIndexAsync multiple times for the same index

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4458:
-
Fix Version/s: (was: 3.x)

> PerRowSecondaryIndex will call buildIndexAsync multiple times for the same 
> index
> 
>
> Key: CASSANDRA-4458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4458
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jonathan Ellis
>Assignee: Sam Tunnicliffe
>Priority: Minor
>
> Mailing list thread: 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg04624.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9092:
-
Fix Version/s: (was: 2.1.x)

> Nodes in DC2 die during and after huge write workload
> -
>
> Key: CASSANDRA-9092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
>Reporter: Sergey Maznichenko
>Assignee: Sam Tunnicliffe
> Attachments: cassandra_crash1.txt
>
>
> Hello,
> We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
> Each node is a VM with 8 CPUs and 32GB RAM.
> During a significant workload (loading several million blobs of ~3.5MB each), 1 
> node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
> Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after starting. 
> I see many files for the system.hints table, and the error appears 2-3 minutes 
> after the system.hints auto compaction starts.
> "Stops" means: "ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
> CassandraDaemon.java:153 - Exception in thread 
> Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space"
> ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError: Java heap space
> The full error listing is attached in cassandra_crash1.txt.
> The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8941) Test Coverage for CASSANDRA-8786

2015-11-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-8941:
--

Assignee: Sam Tunnicliffe  (was: Philip Thompson)

> Test Coverage for CASSANDRA-8786
> 
>
> Key: CASSANDRA-8941
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8941
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 2.1.x
>
>
> We don't currently have a test to reproduce the issue from CASSANDRA-8786. 
> It would be good to track down exactly what circumstances cause it and add 
> some test coverage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10200) NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10200:
--
Issue Type: Improvement  (was: Bug)

> NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient
> ---
>
> Key: CASSANDRA-10200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10200
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Minor
>
> The method is much more complicated than it needs to be and creates too many 
> maps and sets. The code is easy to simplify if we use the known number of 
> racks and nodes per datacentre to choose what to do in advance (a simplified 
> sketch of the idea follows below).
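
For illustration, here is a small, self-contained model of the single-pass selection the description hints at; it is not the actual NetworkTopologyStrategy code, and the data structures and names are assumptions:

{code}
# Simplified model, not Cassandra's implementation: walk the ring once, prefer
# endpoints on racks not yet used in their DC, stop as soon as every DC has its
# replication factor, and fall back to already-used racks only when a DC has
# fewer racks than replicas.
from collections import defaultdict

def natural_endpoints(ring, topology, rf_per_dc):
    """ring: endpoints in token order from the key's token.
    topology: endpoint -> (dc, rack).  rf_per_dc: dc -> replication factor."""
    replicas = []
    placed = defaultdict(int)       # dc -> replicas chosen so far
    racks_used = defaultdict(set)   # dc -> racks already holding a replica
    skipped = defaultdict(list)     # dc -> candidates on already-used racks

    for endpoint in ring:
        if all(placed[dc] >= rf for dc, rf in rf_per_dc.items()):
            break                   # every DC satisfied: no need to walk further
        dc, rack = topology[endpoint]
        if placed[dc] >= rf_per_dc.get(dc, 0):
            continue                # this DC already has all its replicas
        if rack in racks_used[dc]:
            skipped[dc].append(endpoint)
            continue
        replicas.append(endpoint)
        placed[dc] += 1
        racks_used[dc].add(rack)

    for dc, rf in rf_per_dc.items():          # rack-poor DCs reuse racks, in ring order
        for endpoint in skipped[dc]:
            if placed[dc] >= rf:
                break
            replicas.append(endpoint)
            placed[dc] += 1
    return replicas

ring = ['n1', 'n2', 'n3', 'n4', 'n5', 'n6']
topology = {'n1': ('dc1', 'r1'), 'n2': ('dc2', 'r1'), 'n3': ('dc1', 'r1'),
            'n4': ('dc2', 'r2'), 'n5': ('dc1', 'r2'), 'n6': ('dc2', 'r2')}
print(natural_endpoints(ring, topology, {'dc1': 2, 'dc2': 2}))  # ['n1', 'n2', 'n4', 'n5']
{code}

The simplification the ticket suggests goes further: with the number of racks and nodes per datacentre known in advance, each DC can decide up front whether the rack-distinct path or the fallback path applies, rather than discovering it during the walk.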



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10200) NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient

2015-11-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10200:
--
Fix Version/s: 3.0.x
   3.1

> NetworkTopologyStrategy.calculateNaturalEndpoints is rather inefficient
> ---
>
> Key: CASSANDRA-10200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10200
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Minor
> Fix For: 3.1, 3.0.x
>
>
> The method is much more complicated than it needs to be and creates too many 
> maps and sets. The code is easy to simplify if we use the known number of 
> racks and nodes per datacentre to choose what to do in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[17/19] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-13 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f92f69c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f92f69c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f92f69c4

Branch: refs/heads/trunk
Commit: f92f69c45eb7e23b6151eae6be6e3b30681c73d8
Parents: d404362 cb102da
Author: Joshua McKenzie 
Authored: Fri Nov 13 09:58:04 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Nov 13 09:58:04 2015 -0500

--
 .../apache/cassandra/io/compress/CompressionMetadata.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--



