[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394179#comment-14394179
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

So I must have had some dump saved by some early development branch then. 
Thanks for the clarification.

> Add data sizing to a system table
> -
>
> Key: CASSANDRA-7688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jeremiah Jordan
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.5
>
> Attachments: 7688.txt
>
>
> Currently you can't implement something similar to describe_splits_ex purely 
> from a native protocol driver.  
> https://datastax-oss.atlassian.net/browse/JAVA-312 is open to make it easy to 
> expose ownership information to a client in the java-driver.  But you still 
> need the data sizing part to get splits of a given size.  We should add the 
> sizing information to a system table so that native clients can get to it.





[jira] [Comment Edited] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394179#comment-14394179
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-7688 at 4/3/15 8:03 AM:
---

So I must have had a dump saved by an early development branch then. Thanks for 
the clarification.


was (Author: pkolaczk):
So I must have had some dump saved by some early development branch then. 
Thanks for the clarification.

> Add data sizing to a system table
> -
>
> Key: CASSANDRA-7688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jeremiah Jordan
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.5
>
> Attachments: 7688.txt
>
>
> Currently you can't implement something similar to describe_splits_ex purely 
> from a native protocol driver.  
> https://datastax-oss.atlassian.net/browse/JAVA-312 is open to make it easy to 
> expose ownership information to a client in the java-driver.  But you still 
> need the data sizing part to get splits of a given size.  We should add the 
> sizing information to a system table so that native clients can get to it.





[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394193#comment-14394193
 ] 

Stefania commented on CASSANDRA-8893:
-

Benedict, take a look at the attached patch and let me know if this is what you 
had in mind. The entry point is ChannelProxy, which wraps a file channel in a 
ref-counted way and ensures that only thread-safe operations are accessible. It 
also translates IO exceptions into unchecked exceptions.

The channel proxy is shared by Builder, SegmentedFile and RandomAccessReader 
instances.

In the Builder we can receive different file paths in the complete methods, in 
which case we close the old channel and create a new one. This is the part I 
was not entirely sure about.

The remaining changes are either mechanical (passing the channel around) or 
fixes to remove leaks of the channel, mostly in the unit tests.
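
(For readers following along: a minimal sketch of the sharing pattern described 
above -- one FileChannel owned by a reference-counted proxy -- using 
hypothetical names; the real ChannelProxy from the patch appears in a commit 
later in this digest.)

{code}
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: one shared FileChannel, closed when the last user releases it.
final class SharedChannel
{
    private final FileChannel channel;
    private final AtomicInteger refs = new AtomicInteger(1);

    SharedChannel(File file) throws IOException
    {
        channel = FileChannel.open(file.toPath(), StandardOpenOption.READ);
    }

    // Each new reader takes a copy that participates in the reference count.
    SharedChannel sharedCopy()
    {
        refs.incrementAndGet();
        return this;
    }

    // Positional reads are thread-safe on FileChannel, so all readers can share one channel.
    int read(ByteBuffer dst, long position)
    {
        try
        {
            return channel.read(dst, position);
        }
        catch (IOException e)
        {
            throw new RuntimeException(e); // checked -> unchecked, as described above
        }
    }

    void release()
    {
        if (refs.decrementAndGet() == 0)
        {
            try { channel.close(); }
            catch (IOException e) { throw new RuntimeException(e); }
        }
    }
}
{code}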

> RandomAccessReader should share its FileChannel with all instances (via 
> SegmentedFile)
> --
>
> Key: CASSANDRA-8893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0
>
>
> There's no good reason to open a FileChannel for each 
> (Compressed)?RandomAccessReader, and this would simplify 
> RandomAccessReader to just a thin wrapper.





[jira] [Commented] (CASSANDRA-7557) User permissions for UDFs

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394192#comment-14394192
 ] 

Sylvain Lebresne commented on CASSANDRA-7557:
-

bq.  The only alternative I could come up with was to defer execution of 
terminal functions depending on the configured {{IAuthorizer}}

Alternatively, we could defer execution of functions to statement execution 
unconditionally. Executing functions at preparation time when all terms are 
terminal is just a minor optimization that was done because it was easy to do, 
but in practice it's unlikely to be terribly useful: for a non-prepared 
statement, executing at preparation time or at execution time makes no 
difference at all, and for a prepared statement, not only are function calls 
with only terminal terms probably not that common, but if you really care about 
optimizing the call, it's easy enough to compute the function client side 
before preparation.
So honestly, if that minor optimization becomes a pain to preserve, and it does 
seem so here (I would even argue that doing permission checking at preparation 
time is always a bad idea, because if the permission is revoked after 
preparation, a user would expect further executions to be rejected), I submit 
that we should just get rid of it and simplify the code accordingly.

> User permissions for UDFs
> -
>
> Key: CASSANDRA-7557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7557
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
>  Labels: client-impacting, cql, udf
> Fix For: 3.0
>
>
> We probably want some new permissions for user defined functions.  Most 
> RDBMSes split function permissions roughly into {{EXECUTE}} and 
> {{CREATE}}/{{ALTER}}/{{DROP}} permissions.





[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394195#comment-14394195
 ] 

Stefania commented on CASSANDRA-8893:
-

This patch fixes the third point of CASSANDRA-8952.

> RandomAccessReader should share its FileChannel with all instances (via 
> SegmentedFile)
> --
>
> Key: CASSANDRA-8893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0
>
>
> There's no good reason to open a FileChannel for each 
> (Compressed)?RandomAccessReader, and this would simplify 
> RandomAccessReader to just a thin wrapper.





[jira] [Commented] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394196#comment-14394196
 ] 

Stefania commented on CASSANDRA-8952:
-

The third point will be fixed by CASSANDRA-8893.

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to grab either fds or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time, there's 
> no reason to keep them. The less RandomAccessFile usage in the code base, the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.





[jira] [Commented] (CASSANDRA-9037) Terminal UDFs evaluated at prepare time throw protocol version error

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394198#comment-14394198
 ] 

Sylvain Lebresne commented on CASSANDRA-9037:
-

Fyi, as I said on CASSANDRA-7557, I suggest we just get rid of function 
execution at prepare time entirely. The short version is that imo it's starting 
to add way more complexity than it's worth as an optimization.

> Terminal UDFs evaluated at prepare time throw protocol version error
> 
>
> Key: CASSANDRA-9037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9037
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0
>
>
> When a pure function with only terminal arguments (or with no arguments) is 
> used in a where clause, it's executed at prepare time and 
> {{Server.CURRENT_VERSION}} passed as the protocol version for serialization 
> purposes. For native functions, this isn't a problem, but UDFs use classes in 
> the bundled java-driver-core jar for (de)serialization of args and return 
> values. When {{Server.CURRENT_VERSION}} is greater than the highest version 
> supported by the bundled java driver the execution fails with the following 
> exception:
> {noformat}
> ERROR [SharedPool-Worker-1] 2015-03-24 18:10:59,391 QueryMessage.java:132 - 
> Unexpected error during query
> org.apache.cassandra.exceptions.FunctionExecutionException: execution of 
> 'ks.overloaded[text]' failed: java.lang.IllegalArgumentException: No protocol 
> version matching integer version 4
> at 
> org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.udf.gen.Cksoverloaded_1.execute(Cksoverloaded_1.java)
>  ~[na:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall.executeInternal(FunctionCall.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall.access$200(FunctionCall.java:34)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall$Raw.execute(FunctionCall.java:176)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:161)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.SingleColumnRelation.toTerm(SingleColumnRelation.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:143)
>  ~[main/:na]
> at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:127) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:126)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:787)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:740)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:488)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:246) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:475)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:371)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_71]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> Caused by: java.lang.IllegalArgumentException: No protocol version matching 
> integer version 4
> at 
> com.datastax.driver.core.ProtocolVersion.fromInt(Pro

[jira] [Commented] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania commented on CASSANDRA-8952:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with NIO calls in CLibrary.getfd(String 
path), correct?
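
(As context, a minimal sketch of the kind of replacement under discussion -- a 
truncate going through FileChannel.open instead of a throwaway 
RandomAccessFile; illustrative only, not the actual patch.)

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public final class TruncateExample
{
    // Before: new RandomAccessFile(path, "rw").getChannel().truncate(size),
    // creating a transient RandomAccessFile just to reach its channel.
    // After: open the channel directly via NIO.
    public static void truncate(String path, long size)
    {
        Path file = Paths.get(path);
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE))
        {
            channel.truncate(size);
        }
        catch (IOException e)
        {
            throw new RuntimeException(e);
        }
    }
}
{code}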

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to grab either fds or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time, there's 
> no reason to keep them. The less RandomAccessFile usage in the code base, the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.





[jira] [Commented] (CASSANDRA-9106) disable secondary indexes by default

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394213#comment-14394213
 ] 

Sylvain Lebresne commented on CASSANDRA-9106:
-

I generally agree that it's too easy to misuse, so I'm in favor of trying to 
make it less so, and not allowing them by default does sound like it goes in 
that direction. I'm definitely not in favor of using the yaml to deal with 
that: if we do decide to disable them by default, then I think we should simply 
make that "capacity" not enabled by default in the context of CASSANDRA-8303.

> disable secondary indexes by default
> 
>
> Key: CASSANDRA-9106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9106
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
> Fix For: 3.0
>
>
> This feature is misused constantly.  Can we disable it by default, and 
> provide a yaml config to explicitly enable it?  Along with a massive warning 
> about how they aren't there for performance, maybe with a link to 
> documentation that explains why?  





[jira] [Commented] (CASSANDRA-8979) MerkleTree mismatch for deleted and non-existing rows

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394218#comment-14394218
 ] 

Sylvain Lebresne commented on CASSANDRA-8979:
-

To avoid any confusion, I never suggested we wouldn't do this in a minor 
version, just that we basically add what the last patches from 
[~spo...@gmail.com] add. So [~yukim], if you go ahead and commit those last 
patches, I'm good with closing this.

> MerkleTree mismatch for deleted and non-existing rows
> -
>
> Key: CASSANDRA-8979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.1.5
>
> Attachments: 8979-AvoidBufferAllocation-2.0_patch.txt, 
> 8979-LazilyCompactedRow-2.0.txt, 8979-RevertPrecompactedRow-2.0.txt, 
> cassandra-2.0-8979-lazyrow_patch.txt, cassandra-2.0-8979-validator_patch.txt, 
> cassandra-2.0-8979-validatortest_patch.txt, 
> cassandra-2.1-8979-lazyrow_patch.txt, cassandra-2.1-8979-validator_patch.txt
>
>
> Validation compaction will currently create different hashes for rows that 
> have been deleted compared to nodes that have not seen the rows at all or 
> have already compacted them away. 
> In case this sounds familiar to you, see CASSANDRA-4905 which was supposed to 
> prevent hashing of expired tombstones. This still seems to be in place, but 
> does not address the issue completely. Or there was a change in 2.0 that 
> rendered the patch ineffective. 
> The problem is that rowHash() in the Validator will return a new hash in any 
> case, whether or not the PrecompactedRow actually updated the digest. This 
> leads to the case where a purged PrecompactedRow does not change the digest, 
> yet we end up with a different tree compared to not having rowHash called at 
> all (such as when the row doesn't exist anymore).
> As an implication, repair jobs will constantly detect mismatches between 
> older sstables containing purgeable rows and nodes that have already 
> compacted these rows away. After transferring the reported ranges, the newly 
> created sstables will immediately get deleted again during the following 
> compaction. This will happen again on every repair run until the sstable with 
> the purgeable row finally gets compacted.
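
(For intuition, a self-contained sketch of the mismatch described above, with a 
plain MessageDigest standing in for the Merkle tree hashing -- hypothetical and 
simplified, not the actual Validator code.)

{code}
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class PurgedRowMismatch
{
    public static void main(String[] args) throws NoSuchAlgorithmException
    {
        // Node A still has the fully-purged row on disk: the row contributes
        // nothing to its own digest, but rowHash() still folds a value in.
        MessageDigest treeA = MessageDigest.getInstance("MD5");
        MessageDigest emptyRowDigest = MessageDigest.getInstance("MD5");
        treeA.update(emptyRowDigest.digest()); // hashing "nothing" still mutates the tree

        // Node B already compacted the row away: rowHash() is never called.
        MessageDigest treeB = MessageDigest.getInstance("MD5");

        // The trees differ, so repair reports a mismatch for identical data.
        System.out.println(MessageDigest.isEqual(treeA.digest(), treeB.digest())); // false
    }
}
{code}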





[jira] [Comment Edited] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania edited comment on CASSANDRA-8952 at 4/3/15 9:18 AM:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?


was (Author: stefania):
Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with NIO calls in CLibrary.getfd(String 
path), correct?

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to grab either fds or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time, there's 
> no reason to keep them. The less RandomAccessFile usage in the code base, the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.





[jira] [Commented] (CASSANDRA-8915) Improve MergeIterator performance

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394244#comment-14394244
 ] 

Stefania commented on CASSANDRA-8915:
-

In case you guys have not seen it yet, please check the changes proposed by 
CASSANDRA-8180, specifically this comment here: 
https://issues.apache.org/jira/browse/CASSANDRA-8180?focusedCommentId=14381674&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14381674.

The idea is that there will be two types of candidates: a greedy one that knows 
its exact first value, as is the case right now, and a lazy one that gets 
compared based on a less accurate lower bound. Once a lazy candidate is picked, 
only then does it access its iterator to determine the exact first value, which 
could be much higher than the initial lower bound.

The way I implemented this on top of the present merge iterator is to add the 
lazy candidate back to the priority queue after it has calculated its accurate 
first value. It's not very elegant, however, and it is kind of wasteful.

If it is too complex to merge both approaches into one algorithm, we can always 
specialize a separate merge iterator implementation that supports lazy 
candidates.
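
(A sketch of that re-insertion approach, using hypothetical names and a plain 
PriorityQueue standing in for the merge heap -- not the actual patch.)

{code}
import java.util.PriorityQueue;
import java.util.function.LongSupplier;

// A lazy candidate first competes with a cheap lower bound; when it reaches
// the top, it computes its exact first value and is re-inserted, since the
// exact value may be much higher than the bound (but never lower).
final class LazyCandidate implements Comparable<LazyCandidate>
{
    long key;                      // lower bound until resolved, then the exact first value
    boolean resolved;
    final LongSupplier firstValue; // the expensive lookup into the underlying iterator

    LazyCandidate(long lowerBound, LongSupplier firstValue)
    {
        this.key = lowerBound;
        this.firstValue = firstValue;
    }

    public int compareTo(LazyCandidate other)
    {
        return Long.compare(key, other.key);
    }

    // Pop the true smallest candidate, resolving lazy ones along the way.
    static LazyCandidate next(PriorityQueue<LazyCandidate> queue)
    {
        while (true)
        {
            LazyCandidate top = queue.poll();
            if (top == null || top.resolved)
                return top;
            top.key = top.firstValue.getAsLong(); // may jump well above the bound
            top.resolved = true;
            queue.add(top);                       // the wasteful re-insertion noted above
        }
    }
}
{code}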

> Improve MergeIterator performance
> -
>
> Key: CASSANDRA-8915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8915
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Minor
>
> The implementation of {{MergeIterator}} uses a priority queue and applies a 
> pair of {{poll}}+{{add}} operations for every item in the resulting sequence. 
> This is quite inefficient as {{poll}} necessarily applies at least {{log N}} 
> comparisons (up to {{2log N}}), and {{add}} often requires another {{log N}}, 
> for example in the case where the inputs largely don't overlap (where {{N}} 
> is the number of iterators being merged).
> This can easily be replaced with a simple custom structure that can perform 
> replacement of the top of the queue in a single step, which will very often 
> complete after a couple of comparisons and in the worst case scenarios will 
> match the complexity of the current implementation.
> This should significantly improve merge performance for iterators with 
> limited overlap (e.g. levelled compaction).
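
(To make the ticket's proposal concrete: a hedged sketch of single-step top 
replacement on a binary min-heap; the actual structure in any patch may 
differ.)

{code}
// Hypothetical, simplified: a min-heap over the head values of N merged
// iterators, where advancing the winner replaces the root in place instead
// of a poll() followed by an add().
final class ReplaceTopHeap
{
    private final long[] heap;
    private final int size;

    ReplaceTopHeap(long[] headValues)
    {
        heap = headValues.clone();
        size = heap.length;
        for (int i = size / 2 - 1; i >= 0; i--)
            siftDown(i);
    }

    long top()
    {
        return heap[0];
    }

    // Single-step replacement: when inputs barely overlap, the new value is
    // usually still the smallest and this returns after two comparisons; the
    // worst case matches the log N cost of the current poll()+add() pair.
    void replaceTop(long newValue)
    {
        heap[0] = newValue;
        siftDown(0);
    }

    private void siftDown(int i)
    {
        while (true)
        {
            int left = 2 * i + 1, right = left + 1, smallest = i;
            if (left < size && heap[left] < heap[smallest])
                smallest = left;
            if (right < size && heap[right] < heap[smallest])
                smallest = right;
            if (smallest == i)
                return;
            long tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
            i = smallest;
        }
    }
}
{code}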





[jira] [Comment Edited] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania edited comment on CASSANDRA-8952 at 4/3/15 9:37 AM:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?

Have a quick look here for the first two points:

https://github.com/stef1927/cassandra/commits/8952


was (Author: stefania):
Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to grab either fds or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time, there's 
> no reason to keep them. The less RandomAccessFile usage in the code base, the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.





[jira] [Commented] (CASSANDRA-8915) Improve MergeIterator performance

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394250#comment-14394250
 ] 

Benedict commented on CASSANDRA-8915:
-

I perhaps should have commented when I first saw the link. It should be quite 
viable to merge the behaviours: the Candidate just needs a flag indicating 
whether its value is "real" or not, and the merge simply discards the not-real 
values it encounters.

> Improve MergeIterator performance
> -
>
> Key: CASSANDRA-8915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8915
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Minor
>
> The implementation of {{MergeIterator}} uses a priority queue and applies a 
> pair of {{poll}}+{{add}} operations for every item in the resulting sequence. 
> This is quite inefficient as {{poll}} necessarily applies at least {{log N}} 
> comparisons (up to {{2log N}}), and {{add}} often requires another {{log N}}, 
> for example in the case where the inputs largely don't overlap (where {{N}} 
> is the number of iterators being merged).
> This can easily be replaced with a simple custom structure that can perform 
> replacement of the top of the queue in a single step, which will very often 
> complete after a couple of comparisons and in the worst case scenarios will 
> match the complexity of the current implementation.
> This should significantly improve merge performance for iterators with 
> limited overlap (e.g. levelled compaction).





[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394294#comment-14394294
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

Will there be a command to manually refresh the statistics of a table from CQL 
(like "ANALYZE TABLE ...")?
I need a way to trigger this in an integration test, and I don't want to wait 
until the stats are refreshed automatically after the update interval...
1. create table
2. add data
3. analyze (?)
4. check stats
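
(A sketch of what such an integration test might look like with the 
java-driver; the system.size_estimates table name, the pre-existing keyspace 
ks, and the absence of a refresh command are assumptions here, not confirmed by 
this thread -- step 3 remains the open question.)

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public final class SizeEstimatesTest
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // 1. create table, 2. add data (assumes keyspace ks already exists)
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (k int PRIMARY KEY, v text)");
            for (int i = 0; i < 10000; i++)
                session.execute("INSERT INTO ks.t (k, v) VALUES (" + i + ", 'x')");

            // 3. analyze (?) -- the open question above: without a manual
            //    refresh command, a test has to wait out the update interval

            // 4. check stats (assumed table/column names, not confirmed here)
            ResultSet rs = session.execute(
                "SELECT * FROM system.size_estimates WHERE keyspace_name = 'ks'");
            for (Row row : rs)
                System.out.println(row);
        }
    }
}
{code}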


> Add data sizing to a system table
> -
>
> Key: CASSANDRA-7688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jeremiah Jordan
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.5
>
> Attachments: 7688.txt
>
>
> Currently you can't implement something similar to describe_splits_ex purely 
> from a native protocol driver.  
> https://datastax-oss.atlassian.net/browse/JAVA-312 is open to make it easy to 
> expose ownership information to a client in the java-driver.  But you still 
> need the data sizing part to get splits of a given size.  We should add the 
> sizing information to a system table so that native clients can get to it.





cassandra git commit: Share file handles between all instances of a SegmentedFile

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 868457de2 -> cf925bdfa


Share file handles between all instances of a SegmentedFile

patch by stefania; reviewed by benedict for CASSANDRA-8893


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf925bdf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf925bdf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf925bdf

Branch: refs/heads/trunk
Commit: cf925bdfa2f211784eb22d2b98b7176e551dda69
Parents: 868457d
Author: Stefania Alborghetti 
Authored: Fri Apr 3 11:43:30 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Fri Apr 3 11:43:30 2015 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/io/util/ChannelProxy.java  | 182 +++
 .../cassandra/io/RandomAccessReaderTest.java| 234 +++
 3 files changed, 417 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf925bdf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bda5bb7..d049640 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Share file handles between all instances of a SegmentedFile (CASSANDRA-8893)
  * Make it possible to major compact LCS (CASSANDRA-7272)
  * Make FunctionExecutionException extend RequestExecutionException
(CASSANDRA-9055)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf925bdf/src/java/org/apache/cassandra/io/util/ChannelProxy.java
--
diff --git a/src/java/org/apache/cassandra/io/util/ChannelProxy.java 
b/src/java/org/apache/cassandra/io/util/ChannelProxy.java
new file mode 100644
index 000..79954a5
--- /dev/null
+++ b/src/java/org/apache/cassandra/io/util/ChannelProxy.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.WritableByteChannel;
+import java.nio.file.StandardOpenOption;
+
+import org.apache.cassandra.io.FSReadError;
+import org.apache.cassandra.utils.CLibrary;
+import org.apache.cassandra.utils.concurrent.RefCounted;
+import org.apache.cassandra.utils.concurrent.SharedCloseableImpl;
+
+/**
+ * A proxy of a FileChannel that:
+ *
+ * - implements reference counting
+ * - exports only thread safe FileChannel operations
+ * - wraps IO exceptions into runtime exceptions
+ *
+ * Tested by RandomAccessReaderTest.
+ */
+public final class ChannelProxy extends SharedCloseableImpl
+{
+private final String filePath;
+private final FileChannel channel;
+
+public static FileChannel openChannel(File file)
+{
+try
+{
+return FileChannel.open(file.toPath(), StandardOpenOption.READ);
+}
+catch (IOException e)
+{
+throw new RuntimeException(e);
+}
+}
+
+public ChannelProxy(String path)
+{
+this (new File(path));
+}
+
+public ChannelProxy(File file)
+{
+this(file.getAbsolutePath(), openChannel(file));
+}
+
+public ChannelProxy(String filePath, FileChannel channel)
+{
+super(new Cleanup(filePath, channel));
+
+this.filePath = filePath;
+this.channel = channel;
+}
+
+public ChannelProxy(ChannelProxy copy)
+{
+super(copy);
+
+this.filePath = copy.filePath;
+this.channel = copy.channel;
+}
+
+private final static class Cleanup implements RefCounted.Tidy
+{
+final String filePath;
+final FileChannel channel;
+
+protected Cleanup(String filePath, FileChannel channel)
+{
+this.filePath = filePath;
+this.channel = channel;
+}
+
+public String name()
+{
+return filePath;
+}
+
+pu

[jira] [Updated] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8893:

Reviewer: Benedict

> RandomAccessReader should share its FileChannel with all instances (via 
> SegmentedFile)
> --
>
> Key: CASSANDRA-8893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0
>
>
> There's no good reason to open a FileChannel for each 
> (Compressed)?RandomAccessReader, and this would simplify 
> RandomAccessReader to just a thin wrapper.





cassandra git commit: Share file handles between all instances of a SegmentedFile

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk cf925bdfa -> 4e29b7a9a


Share file handles between all instances of a SegmentedFile

patch by stefania; reviewed by benedict for CASSANDRA-8893


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e29b7a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e29b7a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e29b7a9

Branch: refs/heads/trunk
Commit: 4e29b7a9a4736e7e70757dc514849c5af7e2d7d1
Parents: cf925bd
Author: Stefania Alborghetti 
Authored: Fri Apr 3 12:32:42 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Fri Apr 3 12:32:42 2015 +0100

--
 .../compress/CompressedRandomAccessReader.java  |  26 ++--
 .../io/compress/CompressedThrottledReader.java  |  10 +-
 .../io/sstable/format/SSTableReader.java|  84 ++--
 .../io/sstable/format/big/BigTableWriter.java   |  13 +-
 .../io/util/BufferedPoolingSegmentedFile.java   |  14 +-
 .../io/util/BufferedSegmentedFile.java  |  24 ++--
 .../io/util/CompressedPoolingSegmentedFile.java |  20 +--
 .../io/util/CompressedSegmentedFile.java|  20 +--
 .../cassandra/io/util/MmappedSegmentedFile.java |  65 +++---
 .../cassandra/io/util/PoolingSegmentedFile.java |  22 ++--
 .../cassandra/io/util/RandomAccessReader.java   | 128 ++-
 .../apache/cassandra/io/util/SegmentedFile.java |  74 ---
 .../cassandra/io/util/ThrottledReader.java  |   9 +-
 .../compress/CompressedStreamWriter.java|  14 +-
 .../apache/cassandra/db/RangeTombstoneTest.java |  27 ++--
 .../unit/org/apache/cassandra/db/ScrubTest.java |  17 +--
 .../org/apache/cassandra/db/VerifyTest.java |   3 +-
 .../db/compaction/AntiCompactionTest.java   |  36 +++---
 .../db/compaction/CompactionsTest.java  |   4 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  |   5 +-
 .../CompressedRandomAccessReaderTest.java   |  22 +++-
 .../CompressedSequentialWriterTest.java |  10 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  21 +--
 .../io/sstable/SSTableScannerTest.java  |  28 ++--
 .../cassandra/io/sstable/SSTableUtils.java  |  20 +--
 .../io/util/BufferedRandomAccessFileTest.java   |  11 +-
 .../cassandra/io/util/DataOutputTest.java   |  14 +-
 .../apache/cassandra/io/util/MemoryTest.java|   1 +
 28 files changed, 377 insertions(+), 365 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e29b7a9/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index b1b4dd4..1b3cd06 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -33,10 +33,7 @@ import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSReadError;
 import org.apache.cassandra.io.sstable.CorruptSSTableException;
-import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
-import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.io.util.PoolingSegmentedFile;
-import org.apache.cassandra.io.util.RandomAccessReader;
+import org.apache.cassandra.io.util.*;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -47,15 +44,15 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 {
 private static final boolean useMmap = 
DatabaseDescriptor.getDiskAccessMode() == Config.DiskAccessMode.mmap;
 
-public static CompressedRandomAccessReader open(String dataFilePath, 
CompressionMetadata metadata)
+public static CompressedRandomAccessReader open(ChannelProxy channel, 
CompressionMetadata metadata)
 {
-return open(dataFilePath, metadata, null);
+return open(channel, metadata, null);
 }
-public static CompressedRandomAccessReader open(String path, 
CompressionMetadata metadata, CompressedPoolingSegmentedFile owner)
+public static CompressedRandomAccessReader open(ChannelProxy channel, 
CompressionMetadata metadata, CompressedPoolingSegmentedFile owner)
 {
 try
 {
-return new CompressedRandomAccessReader(path, metadata, owner);
+return new CompressedRandomAccessReader(channel, metadata, owner);
 }
 catch (FileNotFoundException e)
 {
@@ -78,9 +75,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // raw checksum bytes
 private ByteBuffer checksumBytes;
 
-protected CompressedRandomAccessReade

[jira] [Commented] (CASSANDRA-8820) Broken package dependency in Debian repository

2015-04-03 Thread Stephan Wienczny (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394316#comment-14394316
 ] 

Stephan Wienczny commented on CASSANDRA-8820:
-

The reason is that the "Packages" file doesn't refer to the new version:

  Package: cassandra
  Version: 2.1.4
  ...

  Package: cassandra-tools
  Version: 2.1.3
  ...

The 2.1.4 cassandra-tools package itself is available:

http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/cassandra-tools_2.1.4_all.deb

So the release process has a problem: the "Packages" file is not updated 
correctly.

> Broken package dependency in Debian repository
> --
>
> Key: CASSANDRA-8820
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8820
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 14.04 LTS amd64
>Reporter: Terry Moschou
>Assignee: T Jake Luciani
>
> The Apache Debian package repository currently has unmet dependencies.
> Configured repos:
> deb http://www.apache.org/dist/cassandra/debian 21x main
> deb-src http://www.apache.org/dist/cassandra/debian 21x main
> Problem file:
> cassandra/dists/21x/main/binary-amd64/Packages
> $ sudo apt-get update && sudo apt-get install cassandra-tools
> ...(omitted)
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra-tools : Depends: cassandra (= 2.1.2) but it is not going to be 
> installed
> E: Unable to correct problems, you have held broken packages.





cassandra git commit: follow up to CASSANDRA-8670: providing small improvements to performance of writeUTF; and improving safety of DataOutputBuffer when size is known upfront

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4e29b7a9a -> c2ecfe7b7


follow up to CASSANDRA-8670:
providing small improvements to performance of writeUTF; and
improving safety of DataOutputBuffer when size is known upfront

patch by ariel and benedict for CASSANDRA-8670


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2ecfe7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2ecfe7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2ecfe7b

Branch: refs/heads/trunk
Commit: c2ecfe7b7bffbced652b4da9dcf4ca263d345695
Parents: 4e29b7a
Author: Ariel Weisberg 
Authored: Fri Apr 3 12:29:17 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Fri Apr 3 12:33:29 2015 +0100

--
 .../cassandra/db/commitlog/CommitLog.java   |  5 +-
 .../cassandra/db/marshal/CompositeType.java |  3 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  4 +-
 .../io/util/DataOutputBufferFixed.java  | 65 
 .../cassandra/service/pager/PagingState.java|  3 +-
 .../streaming/messages/StreamInitMessage.java   |  3 +-
 .../org/apache/cassandra/utils/FBUtilities.java |  3 +-
 .../io/util/BufferedDataOutputStreamTest.java   | 39 
 8 files changed, 117 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
index 7fa7575..cf38d44 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
@@ -29,10 +29,10 @@ import com.google.common.annotations.VisibleForTesting;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.commons.lang3.StringUtils;
 
 import com.github.tjake.ICRC32;
+
 import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.ParameterizedClass;
@@ -41,6 +41,7 @@ import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.compress.ICompressor;
 import org.apache.cassandra.io.util.BufferedDataOutputStreamPlus;
+import org.apache.cassandra.io.util.DataOutputBufferFixed;
 import org.apache.cassandra.metrics.CommitLogMetrics;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
@@ -251,7 +252,7 @@ public class CommitLog implements CommitLogMBean
 {
 ICRC32 checksum = CRC32Factory.instance.create();
 final ByteBuffer buffer = alloc.getBuffer();
-BufferedDataOutputStreamPlus dos = new 
BufferedDataOutputStreamPlus(null, buffer);
+BufferedDataOutputStreamPlus dos = new 
DataOutputBufferFixed(buffer);
 
 // checksummed length
 dos.writeInt((int) size);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/CompositeType.java 
b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
index 9ee9fb3..1bc772d 100644
--- a/src/java/org/apache/cassandra/db/marshal/CompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
@@ -32,6 +32,7 @@ import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.cql3.Operator;
 import org.apache.cassandra.io.util.DataOutputBuffer;
+import org.apache.cassandra.io.util.DataOutputBufferFixed;
 import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
@@ -403,7 +404,7 @@ public class CompositeType extends AbstractCompositeType
 {
 try
 {
-DataOutputBuffer out = new DataOutputBuffer(serializedSize);
+DataOutputBuffer out = new 
DataOutputBufferFixed(serializedSize);
 if (isStatic)
 out.writeShort(STATIC_MARKER);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index f4f46a1..5669a8d 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/

[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/23c84b16/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index 06234cd,000..a761e6a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@@ -1,2117 -1,0 +1,2127 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format;
 +
 +import java.io.*;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +import java.util.concurrent.*;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.concurrent.atomic.AtomicLong;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Ordering;
 +import com.google.common.primitives.Longs;
 +import com.google.common.util.concurrent.RateLimiter;
 +
 +import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
 +import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;
 +import com.clearspring.analytics.stream.cardinality.ICardinality;
 +import org.apache.cassandra.cache.CachingOptions;
 +import org.apache.cassandra.cache.InstrumentingCache;
 +import org.apache.cassandra.cache.KeyCacheKey;
 +import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 +import org.apache.cassandra.concurrent.ScheduledExecutors;
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.db.commitlog.ReplayPosition;
 +import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.dht.*;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.sstable.*;
 +import org.apache.cassandra.io.sstable.metadata.*;
 +import org.apache.cassandra.io.util.*;
 +import org.apache.cassandra.metrics.RestorableMeter;
 +import org.apache.cassandra.metrics.StorageMetrics;
 +import org.apache.cassandra.service.ActiveRepairService;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.*;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +import org.apache.cassandra.utils.concurrent.Ref;
 +import org.apache.cassandra.utils.concurrent.SelfRefCounted;
 +
 +import static 
org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR;
 +
 +/**
 + * An SSTableReader can be constructed in a number of places, but typically 
is either
 + * read from disk at startup, or constructed from a flushed memtable, or 
after compaction
 + * to replace some existing sstables. However once created, an sstablereader 
may also be modified.
 + *
 + * A reader's OpenReason describes its current stage in its lifecycle, as 
follows:
 + *
 + * NORMAL
 + * From:   None=> Reader has been read from disk, either at 
startup or from a flushed memtable
 + * EARLY   => Reader is the final result of a compaction
 + * MOVED_START => Reader WAS being compacted, but this failed and 
it has been restored to NORMAL status
 + *
 + * EARLY
 + * From:   None=> Reader is a compaction replacement that is 
either incomplete and has been opened
 + *to represent its partial result status, or has 
been finished but the compaction
 + *it is a part of has not yet completed fully
 + * EARLY   => Same as from None, only it is not the first 
time it has been
 + *
 + * MOVED_START
 + * From:   NORMAL  => Reader is being compacted. This compaction has 
not finished, but the compaction result
 + *is either partially or fully opened, to either 
parti

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23c84b16
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23c84b16
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23c84b16

Branch: refs/heads/trunk
Commit: 23c84b169febc59d3d2927bdc6389104d7d869e7
Parents: c2ecfe7 345455d
Author: Benedict Elliott Smith 
Authored: Fri Apr 3 12:58:07 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Fri Apr 3 12:58:07 2015 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/format/SSTableReader.java| 24 ++--
 2 files changed, 18 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23c84b16/CHANGES.txt
--
diff --cc CHANGES.txt
index d049640,9ddb9c9..e8cb20b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,94 -1,5 +1,95 @@@
 +3.0
 + * Share file handles between all instances of a SegmentedFile 
(CASSANDRA-8893)
 + * Make it possible to major compact LCS (CASSANDRA-7272)
 + * Make FunctionExecutionException extend RequestExecutionException
 +   (CASSANDRA-9055)
 + * Add support for SELECT JSON, INSERT JSON syntax and new toJson(), 
fromJson()
 +   functions (CASSANDRA-7970)
 + * Optimise max purgeable timestamp calculation in compaction (CASSANDRA-8920)
 + * Constrain internode message buffer sizes, and improve IO class hierarchy 
(CASSANDRA-8670) 
 + * New tool added to validate all sstables in a node (CASSANDRA-5791)
 + * Push notification when tracing completes for an operation (CASSANDRA-7807)
 + * Delay "node up" and "node added" notifications until native protocol 
server is started (CASSANDRA-8236)
 + * Compressed Commit Log (CASSANDRA-6809)
 + * Optimise IntervalTree (CASSANDRA-8988)
 + * Add a key-value payload for third party usage (CASSANDRA-8553)
 + * Bump metrics-reporter-config dependency for metrics 3.0 (CASSANDRA-8149)
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + 

[1/3] cassandra git commit: Do not load read meters for offline operations

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk c2ecfe7b7 -> 23c84b169


Do not load read meters for offline operations

patch by benedict; reviewed by tyler for CASSANDRA-9082


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/345455de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/345455de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/345455de

Branch: refs/heads/trunk
Commit: 345455dee2b154e5a9b10a7a615bcc0c7092775d
Parents: 49d64c2
Author: Benedict Elliott Smith 
Authored: Fri Apr 3 12:53:45 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Fri Apr 3 12:53:45 2015 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableReader.java | 24 ++--
 2 files changed, 18 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/345455de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b1499c1..9ddb9c9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.5
+ * Do not load read meter for offline operations (CASSANDRA-9082)
  * cqlsh: Make CompositeType data readable (CASSANDRA-8919)
  * cqlsh: Fix display of triggers (CASSANDRA-9081)
  * Fix NullPointerException when deleting or setting an element by index on

http://git-wip-us.apache.org/repos/asf/cassandra/blob/345455de/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 8fd7b85..c73d4a1 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -378,6 +378,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 components, CFMetaData metadata) throws IOException
 {
 return open(descriptor, components, metadata, StorageService.getPartitioner(), false);
@@ -434,7 +435,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 sstables)
@@ -2010,9 +2011,9 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>

[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394352#comment-14394352
 ] 

Sergey Maznichenko commented on CASSANDRA-9092:
---

Should I provide any additional information from the failed node? I want to 
delete all hints and run repair on this node.

> Nodes in DC2 die during and after huge write workload
> -
>
> Key: CASSANDRA-9092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
>Reporter: Sergey Maznichenko
>Assignee: Sam Tunnicliffe
> Fix For: 2.1.5
>
> Attachments: cassandra_crash1.txt
>
>
> Hello,
> We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
> Each node is a VM with 8 CPUs and 32GB RAM.
> During a significant workload (loading several million blobs of ~3.5MB each), 1 
> node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
> Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after starting. 
> I see many files in the system.hints table, and the error appears 2-3 minutes after 
> starting system.hints auto compaction.
> "Stops" means "ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
> CassandraDaemon.java:153 - Exception in thread 
> Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space"
> ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError: Java heap space
> Full errors listing attached in cassandra_crash1.txt
> The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394368#comment-14394368
 ] 

Sam Tunnicliffe commented on CASSANDRA-9092:


What consistency level are you writing at? 
How are your clients performing the writes, thrift or native protocol?
How do your clients balance requests? Are they simply sending them round-robin 
or using token-aware routing? Are you writing to only one DC or to both?
Are there errors or warnings in the logs of the nodes which don't fail? 

Also, I don't think the schema you posted is complete, as the primary key 
includes a {{chunk}} column that is not in the table definition.

If this is not your regular workload (i.e. it's a periodic bulk load) and you 
expect the normal usage pattern to be different, disabling hinted handoff 
temporarily may be a reasonable workaround for you, provided you aren't relying 
on CL.ANY and your clients handle {{UnavailableException}} sanely. You'll also 
need to run repair after the load completes. 
If that isn't an option, bumping the delivery threads and opening the throttle 
might prevent a huge hints buildup if you have sufficient bandwidth and CPU, 
but I doubt it will help much, as the nodes or network are clearly already 
overwhelmed; otherwise there wouldn't be so many hints being written in the 
first place. 
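
As a rough sketch of the first option, using the stock cassandra.yaml setting 
and standard nodetool commands (run on every node, then repair once the load 
is done):

{code}
# cassandra.yaml: stop new hints from being created for the duration of the load
hinted_handoff_enabled: false

# or toggle it at runtime without a restart
nodetool disablehandoff
# ... run the bulk load ...
nodetool enablehandoff
nodetool repair
{code}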

> Nodes in DC2 die during and after huge write workload
> -
>
> Key: CASSANDRA-9092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
>Reporter: Sergey Maznichenko
>Assignee: Sam Tunnicliffe
> Fix For: 2.1.5
>
> Attachments: cassandra_crash1.txt
>
>
> Hello,
> We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
> Each node is a VM with 8 CPUs and 32GB RAM.
> During a significant workload (loading several million blobs of ~3.5MB each), 1 
> node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
> Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after starting. 
> I see many files in the system.hints table, and the error appears 2-3 minutes after 
> starting system.hints auto compaction.
> "Stops" means "ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
> CassandraDaemon.java:153 - Exception in thread 
> Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space"
> ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError: Java heap space
> Full errors listing attached in cassandra_crash1.txt
> The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-9112:
---

 Summary: Remove ternary construction of SegmentedFile.Builder in 
readers
 Key: CASSANDRA-9112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 3.0


Self explanatory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9112:

Attachment: 9112.txt

> Remove ternary construction of SegmentedFile.Builder in readers
> ---
>
> Key: CASSANDRA-9112
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Trivial
> Fix For: 3.0
>
> Attachments: 9112.txt
>
>
> Self explanatory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9111) SSTables originated from the same incremental repair session have different repairedAt timestamps

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9111:
---
Reviewer: Yuki Morishita

> SSTables originated from the same incremental repair session have different 
> repairedAt timestamps
> -
>
> Key: CASSANDRA-9111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: prmg
> Attachments: CASSANDRA-9111-v0.txt
>
>
> CASSANDRA-7168 optimizes QUORUM reads by skipping incrementally repaired 
> SSTables on other replicas that were repaired on or before the maximum 
> repairedAt timestamp of the coordinating replica's SSTables for the query 
> partition.
> One assumption of that optimization is that SSTables originating from the same 
> repair session on different nodes will have the same repairedAt timestamp, 
> since the objective is to skip reading SSTables originating in the same repair 
> session (or before).
> However, currently, each node independently timestamps SSTables originating 
> from the same repair session, so they almost never have the same timestamp.
> Steps to reproduce the problem:
> {code}
> ccm create test
> ccm populate -n 3
> ccm start
> ccm node1 cqlsh;
> {code}
> {code:sql}
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 3};
> CREATE TABLE foo.bar ( key int, col int, PRIMARY KEY (key) ) ;
> INSERT INTO foo.bar (key, col) VALUES (1, 1);
> exit;
> {code}
> {code}
> ccm node1 flush;
> ccm node2 flush;
> ccm node3 flush;
> nodetool -h 127.0.0.1 -p 7100 repair -par -inc foo bar
> [2015-04-02 21:56:07,726] Starting repair command #1, repairing 3 ranges for 
> keyspace foo (parallelism=PARALLEL, full=false)
> [2015-04-02 21:56:07,816] Repair session 3655b670-d99c-11e4-b250-9107aba35569 
> for range (3074457345618258602,-9223372036854775808] finished
> [2015-04-02 21:56:07,816] Repair session 365a4a50-d99c-11e4-b250-9107aba35569 
> for range (-9223372036854775808,-3074457345618258603] finished
> [2015-04-02 21:56:07,818] Repair session 365bf800-d99c-11e4-b250-9107aba35569 
> for range (-3074457345618258603,3074457345618258602] finished
> [2015-04-02 21:56:07,995] Repair command #1 finished
> sstablemetadata 
> ~/.ccm/test/node1/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  
> ~/.ccm/test/node2/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  
> ~/.ccm/test/node3/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  | grep Repaired
> Repaired at: 1428023050318
> Repaired at: 1428023050322
> Repaired at: 1428023050340
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9111) SSTables originated from the same incremental repair session have different repairedAt timestamps

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394394#comment-14394394
 ] 

Philip Thompson commented on CASSANDRA-9111:


Thanks for the patch! The file you contributed seems to have some odd 
characters in it; did you create it via the steps described here: 
http://wiki.apache.org/cassandra/HowToContribute ?

> SSTables originated from the same incremental repair session have different 
> repairedAt timestamps
> -
>
> Key: CASSANDRA-9111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: prmg
> Attachments: CASSANDRA-9111-v0.txt
>
>
> CASSANDRA-7168 optimizes QUORUM reads by skipping incrementally repaired 
> SSTables on other replicas that were repaired on or before the maximum 
> repairedAt timestamp of the coordinating replica's SSTables for the query 
> partition.
> One assumption of that optimization is that SSTables originating from the same 
> repair session on different nodes will have the same repairedAt timestamp, 
> since the objective is to skip reading SSTables originating in the same repair 
> session (or before).
> However, currently, each node independently timestamps SSTables originating 
> from the same repair session, so they almost never have the same timestamp.
> Steps to reproduce the problem:
> {code}
> ccm create test
> ccm populate -n 3
> ccm start
> ccm node1 cqlsh;
> {code}
> {code:sql}
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 3};
> CREATE TABLE foo.bar ( key int, col int, PRIMARY KEY (key) ) ;
> INSERT INTO foo.bar (key, col) VALUES (1, 1);
> exit;
> {code}
> {code}
> ccm node1 flush;
> ccm node2 flush;
> ccm node3 flush;
> nodetool -h 127.0.0.1 -p 7100 repair -par -inc foo bar
> [2015-04-02 21:56:07,726] Starting repair command #1, repairing 3 ranges for 
> keyspace foo (parallelism=PARALLEL, full=false)
> [2015-04-02 21:56:07,816] Repair session 3655b670-d99c-11e4-b250-9107aba35569 
> for range (3074457345618258602,-9223372036854775808] finished
> [2015-04-02 21:56:07,816] Repair session 365a4a50-d99c-11e4-b250-9107aba35569 
> for range (-9223372036854775808,-3074457345618258603] finished
> [2015-04-02 21:56:07,818] Repair session 365bf800-d99c-11e4-b250-9107aba35569 
> for range (-3074457345618258603,3074457345618258602] finished
> [2015-04-02 21:56:07,995] Repair command #1 finished
> sstablemetadata 
> ~/.ccm/test/node1/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  
> ~/.ccm/test/node2/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  
> ~/.ccm/test/node3/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
>  | grep Repaired
> Repaired at: 1428023050318
> Repaired at: 1428023050322
> Repaired at: 1428023050340
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9110) Bounded/RingBuffer CQL Collections

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9110:
---
Fix Version/s: 3.1

> Bounded/RingBuffer CQL Collections
> --
>
> Key: CASSANDRA-9110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Jim Plush
>Priority: Minor
> Fix For: 3.1
>
>
> Feature Request:
> I've had frequent use cases for bounded and RingBuffer based collections. 
> For example: 
> I want to store the first 100 times I've seen this thing.
> I want to store the last 100 times I've seen this thing.
> Currently that means having to do application-level READ/WRITE operations, and 
> we like to keep some of our high-scale apps write-only where possible. 
> While probably expensive for exactly N items, an approximation should be good 
> enough for most applications, where N in our example could be 100 or 102; that 
> could even be made tunable on the type or table. 
> For the RingBuffer example, consider that I only want to store the last N login 
> attempts for a user. Once item N+1 comes in, it issues a delete for the oldest 
> one in the collection, or waits until compaction to drop the overflow data, as 
> long as the CQL returns the right bounds.
> A potential implementation idea, given that the row key would live on a single 
> node, would be to have an LRU-based counter cache (tunable in MB in the yaml 
> settings) that keeps a current count of how many items are already in the 
> collection for that row key. If > the bound, toss. 
> something akin to:
> CREATE TABLE users (
>   user_id text PRIMARY KEY,
>   first_name text,
>   first_logins set<timestamp>,
>   last_logins set<timestamp>
> );
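
A minimal client-side illustration of the LRU counter-cache idea above (all 
names hypothetical; bounded by entry count rather than MB for brevity):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the suggested LRU-based counter cache: it tracks an
// approximate element count per row key and rejects appends once the bound is
// reached. Illustrative only - not actual Cassandra code.
public class BoundedCollectionGuard
{
    private final int bound; // e.g. 100 logins per user
    private final Map<String, Integer> counts;

    public BoundedCollectionGuard(final int maxTrackedKeys, int bound)
    {
        this.bound = bound;
        // access-ordered map evicts least-recently-used counters, standing in
        // for the MB-tunable cache described above
        this.counts = new LinkedHashMap<String, Integer>(16, 0.75f, true)
        {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest)
            {
                return size() > maxTrackedKeys;
            }
        };
    }

    // returns true if the element should be written, false if it is tossed
    public synchronized boolean tryAppend(String rowKey)
    {
        Integer current = counts.get(rowKey);
        if (current == null)
            current = 0;
        if (current >= bound)
            return false; // collection already at its bound: toss
        counts.put(rowKey, current + 1);
        return true;
    }
}
{code}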



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-9112:
---
Reviewer: Jeremiah Jordan

> Remove ternary construction of SegmentedFile.Builder in readers
> ---
>
> Key: CASSANDRA-9112
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Trivial
> Fix For: 3.0
>
> Attachments: 9112.txt
>
>
> Self explanatory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394414#comment-14394414
 ] 

Jeremiah Jordan commented on CASSANDRA-9112:


LGTM +1

> Remove ternary construction of SegmentedFile.Builder in readers
> ---
>
> Key: CASSANDRA-9112
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Trivial
> Fix For: 3.0
>
> Attachments: 9112.txt
>
>
> Self explanatory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r8516 - in /release/cassandra/debian/dists/21x: InRelease Release Release.gpg main/binary-amd64/Packages main/binary-amd64/Packages.gz main/binary-i386/Packages main/binary-i386/Packages.g

2015-04-03 Thread jake
Author: jake
Date: Fri Apr  3 13:29:23 2015
New Revision: 8516

Log:
fix package bug

Modified:
release/cassandra/debian/dists/21x/InRelease
release/cassandra/debian/dists/21x/Release
release/cassandra/debian/dists/21x/Release.gpg
release/cassandra/debian/dists/21x/main/binary-amd64/Packages
release/cassandra/debian/dists/21x/main/binary-amd64/Packages.gz
release/cassandra/debian/dists/21x/main/binary-i386/Packages
release/cassandra/debian/dists/21x/main/binary-i386/Packages.gz

Modified: release/cassandra/debian/dists/21x/InRelease
==
--- release/cassandra/debian/dists/21x/InRelease (original)
+++ release/cassandra/debian/dists/21x/InRelease Fri Apr  3 13:29:23 2015
@@ -4,36 +4,36 @@ Hash: SHA1
 Origin: Unofficial Cassandra Packages
 Label: Unofficial Cassandra Packages
 Codename: 21x
-Date: Wed, 01 Apr 2015 11:44:52 UTC
+Date: Wed, 01 Apr 2015 11:45:00 UTC
 Architectures: i386 amd64
 Components: main
 Description: Cassandra APT Repository
 MD5Sum:
- b1c307004ca85da9dd821968d65c482b 1502 main/binary-i386/Packages
- d58314e1e5cb9efc3e225f0019c5ad96 704 main/binary-i386/Packages.gz
+ a5b062cc4b7a210a6cf16f1c5712b1c2 1502 main/binary-i386/Packages
+ ba56df615ca153bf679ff5b2c15227eb 693 main/binary-i386/Packages.gz
  3214ca2c0fb908da64d0c82cedf49fec 148 main/binary-i386/Release
- b1c307004ca85da9dd821968d65c482b 1502 main/binary-amd64/Packages
- d58314e1e5cb9efc3e225f0019c5ad96 704 main/binary-amd64/Packages.gz
+ a5b062cc4b7a210a6cf16f1c5712b1c2 1502 main/binary-amd64/Packages
+ ba56df615ca153bf679ff5b2c15227eb 693 main/binary-amd64/Packages.gz
  37f612d4cf34c75e4ee8424f6fa711ea 149 main/binary-amd64/Release
  a90ad69dfa172a759ab115310b9f55ce 1415 main/source/Sources
  f8265ce4f056fa59d089b87bdb5569d8 748 main/source/Sources.gz
  5aa47f19dcecef15ff3b353cf85bfb29 150 main/source/Release
 SHA1:
- ea40d9b6dbc454ef68499a3ad000c6fa9d60fc1d 1502 main/binary-i386/Packages
- bfae1354416365c5b2980560022df3f1b754efe5 704 main/binary-i386/Packages.gz
+ 4359daeab2cbcfd1396006787fbfd039d1043504 1502 main/binary-i386/Packages
+ a92ded808503292df2f2e5a73799050d003ad5b0 693 main/binary-i386/Packages.gz
  f49b67a648ccf6c6e4f96e003f61b089df0f54ed 148 main/binary-i386/Release
- ea40d9b6dbc454ef68499a3ad000c6fa9d60fc1d 1502 main/binary-amd64/Packages
- bfae1354416365c5b2980560022df3f1b754efe5 704 main/binary-amd64/Packages.gz
+ 4359daeab2cbcfd1396006787fbfd039d1043504 1502 main/binary-amd64/Packages
+ a92ded808503292df2f2e5a73799050d003ad5b0 693 main/binary-amd64/Packages.gz
  ea578b501d3ecec83c2200c2bcebf172fa3efe98 149 main/binary-amd64/Release
  34edb321c792bdbf14f2c96695e16cbe8fae4f48 1415 main/source/Sources
  2a2678f4a168bfc939408ef8e48bd56e3b61bc8d 748 main/source/Sources.gz
  268c44a9381bbfbe1ca79f1a6836b205ecb07f79 150 main/source/Release
 SHA256:
- f9dfb6bada6545321dce8fd36b8320eab45b0b0ea2e962b2036a03104f6daa41 1502 
main/binary-i386/Packages
- 3c9960712d409770d5b642757b0ccd7d8b00024e71b4cbc0e3c0e16e1769beb3 704 
main/binary-i386/Packages.gz
+ 2ebf4269255d48db91b21432207e65b788543b87b31619b93bfbf5fd9d5c08a3 1502 
main/binary-i386/Packages
+ 1b63d236f3e8d958f96cc77c45c51bd0660ae5a803fa6c7a3f94bd5c95afa176 693 
main/binary-i386/Packages.gz
  b9b8bd71a706df21065102d61b38de8513d0b796301e0b2df845e340347f06c7 148 
main/binary-i386/Release
- f9dfb6bada6545321dce8fd36b8320eab45b0b0ea2e962b2036a03104f6daa41 1502 
main/binary-amd64/Packages
- 3c9960712d409770d5b642757b0ccd7d8b00024e71b4cbc0e3c0e16e1769beb3 704 
main/binary-amd64/Packages.gz
+ 2ebf4269255d48db91b21432207e65b788543b87b31619b93bfbf5fd9d5c08a3 1502 
main/binary-amd64/Packages
+ 1b63d236f3e8d958f96cc77c45c51bd0660ae5a803fa6c7a3f94bd5c95afa176 693 
main/binary-amd64/Packages.gz
  b29a345c4907522e3b159abe8f5886fc7eb54bce9d0d6ecbaded5445c3fab4a2 149 
main/binary-amd64/Release
  f295e59fe311432901f72395b685361e15acfec166f89b944872fd16d4180824 1415 
main/source/Sources
  ba3b17e4fed14f0fad7b3b8dc25b3de464921ccaa0513b9e2cb7ec8b52389055 748 
main/source/Sources.gz
@@ -41,17 +41,17 @@ SHA256:
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 
-iQIcBAEBAgAGBQJVG9o4AAoJEHSdbuwDU7EsjLYQAJVCh4Yl0aBgkg8XYx23JEFj
-idRAeFu8NsjgY/+NRubqiXixL+HQkXq+3JCWr0NF7V8upFOYJMiON9OA+stbMd/Q
-uYhZRvxJwvMlcj4gDXJCB1jn300gi2/4NYsSekR4cNnEzz41b87jskz/hyng3ab5
-x+FOVl6tUJrgCAjW8BIGTqBsww0hj4d5OtqgA4CFXjrQ0xQdnHTdAU47scanV/6X
-fL+Y5EMLliO7VKwRNPgPaOKro+rEb2/s5QsScvKsmH1scp/LAphzQWu+mfShYynu
-+As7YGLTbJItQ818yYVPVsi2vaGCZoWze0t+BQlUwyjZx+pqrKdTHkiB+jmWWLHm
-GKHibNkB8erR8bsvHYMIDl5FaZEPO8/vxVfvmeQgHv0TKsAEpI0zSnRRTiWVQzWM
-RkxMKjz8L8y2Kt7qXoJiEVLvYcZVZ/kKc5Th/4IQtdx/TMKAQietNPERi+sIp/RV
-fWTbG+9V38Con9YDlxlIef/PjhVSvPaulwcW+evCOW7LWR9V1xjWyfr5v0eGkOnT
-br8l/s3WLBYY9AsFS+JzhA9KMe1k8yfJFidR6Qaa8tXtilvGtvD0yhrGLuVUzrEh
-cyxrbmCVTkIstNUTnxZxz4rVZQWas912D7yd7Y7QkphTEXS2xKPrs2LWkGcVeZwu
-HEeJBCWO2rSCJun7ImSw
-=RGFt
+iQIcBAEBAgAGBQJVG9o/AAoJEHSdbuwDU7EsfPIQAKI8iuIcCXjJRGSXNry16ob/
+SjPBx1NGG3nOV+V5nc367H

[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394420#comment-14394420
 ] 

Stefania commented on CASSANDRA-8893:
-

Thank you for committing, [~benedict]; your nits look excellent.

Let me tackle CASSANDRA-8894 and CASSANDRA-8897 next; I also have a few more 
small tickets on the side. I can certainly look into CASSANDRA-7066 afterwards. 
What is the deadline for 3.0?

> RandomAccessReader should share its FileChannel with all instances (via 
> SegmentedFile)
> --
>
> Key: CASSANDRA-8893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0
>
>
> There's no good reason to open a FileChannel for each 
> \(Compressed\)\?RandomAccessReader, and this would simplify 
> RandomAccessReader to just a thin wrapper.
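
A minimal sketch of the sharing idea (hypothetical names with plain reference 
counting; illustrative only, not the attached patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicInteger;

// One shared, ref-counted FileChannel per file; each reader keeps only its
// own buffer and position, making it the thin wrapper described above.
public class SharedChannel
{
    private final FileChannel channel;
    private final AtomicInteger references = new AtomicInteger(1);

    public SharedChannel(String path) throws IOException
    {
        channel = FileChannel.open(Paths.get(path), StandardOpenOption.READ);
    }

    public SharedChannel sharedCopy()
    {
        references.incrementAndGet();
        return this;
    }

    public void release() throws IOException
    {
        // close the underlying channel only when the last reader lets go
        if (references.decrementAndGet() == 0)
            channel.close();
    }

    // positional reads never touch the channel's shared position, so
    // concurrent readers need no locking
    public int read(ByteBuffer buffer, long position) throws IOException
    {
        return channel.read(buffer, position);
    }
}
{code}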



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8820) Broken package dependency in Debian repository

2015-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-8820.
---
Resolution: Fixed

I found the issue in my deploy script. I was copying the repo before I added 
tools, so it was always one version behind on tools. I've fixed the script and 
fixed the repo.

Thanks for picking this up! 

> Broken package dependency in Debian repository
> --
>
> Key: CASSANDRA-8820
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8820
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 14.04 LTS amd64
>Reporter: Terry Moschou
>Assignee: T Jake Luciani
>
> The Apache Debian package repository currently has unmet dependencies.
> Configured repos:
> deb http://www.apache.org/dist/cassandra/debian 21x main
> deb-src http://www.apache.org/dist/cassandra/debian 21x main
> Problem file:
> cassandra/dists/21x/main/binary-amd64/Packages
> $ sudo apt-get update && sudo apt-get install cassandra-tools
> ...(omitted)
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra-tools : Depends: cassandra (= 2.1.2) but it is not going to be 
> installed
> E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9037) Terminal UDFs evaluated at prepare time throw protocol version error

2015-04-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394433#comment-14394433
 ] 

Sam Tunnicliffe commented on CASSANDRA-9037:


WFM. I've pushed a new version of the branch which simply removes that 
prepare-time execution.

> Terminal UDFs evaluated at prepare time throw protocol version error
> 
>
> Key: CASSANDRA-9037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9037
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0
>
>
> When a pure function with only terminal arguments (or with no arguments) is 
> used in a where clause, it's executed at prepare time and 
> {{Server.CURRENT_VERSION}} is passed as the protocol version for serialization 
> purposes. For native functions, this isn't a problem, but UDFs use classes in 
> the bundled java-driver-core jar for (de)serialization of args and return 
> values. When {{Server.CURRENT_VERSION}} is greater than the highest version 
> supported by the bundled java driver the execution fails with the following 
> exception:
> {noformat}
> ERROR [SharedPool-Worker-1] 2015-03-24 18:10:59,391 QueryMessage.java:132 - 
> Unexpected error during query
> org.apache.cassandra.exceptions.FunctionExecutionException: execution of 
> 'ks.overloaded[text]' failed: java.lang.IllegalArgumentException: No protocol 
> version matching integer version 4
> at 
> org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.udf.gen.Cksoverloaded_1.execute(Cksoverloaded_1.java)
>  ~[na:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall.executeInternal(FunctionCall.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall.access$200(FunctionCall.java:34)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall$Raw.execute(FunctionCall.java:176)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:161)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.SingleColumnRelation.toTerm(SingleColumnRelation.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:143)
>  ~[main/:na]
> at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:127) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:126)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:787)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:740)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:488)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:246) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:475)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:371)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_71]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> Caused by: java.lang.IllegalArgumentException: No protocol version matching 
> integer version 4
> at 
> com.datastax.driver.core.ProtocolVersion.fromInt(ProtocolVersion.java:89) 
> ~[cassandra-driver-core-2.1.2.jar:na]
> at 
> org.apache.cassandra.cql3.functions.UDFunction.compos

[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394432#comment-14394432
 ] 

Benedict commented on CASSANDRA-8893:
-

There is no hard deadline, but I think we're aiming for release at the end of 
May. The real question is when we impose feature freeze, which is somewhat 
dependent on the commit of CASSANDRA-8099. Small but important commits like 
CASSANDRA-7066 probably have a commit window (IMO only) of a week or two after 
CASSANDRA-8099, or the end of April, whichever is sooner. So we have time.

> RandomAccessReader should share its FileChannel with all instances (via 
> SegmentedFile)
> --
>
> Key: CASSANDRA-8893
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.0
>
>
> There's no good reason to open a FileChannel for each 
> \(Compressed\)\?RandomAccessReader, and this would simplify 
> RandomAccessReader to just a thin wrapper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392398#comment-14392398
 ] 

Sergey Maznichenko edited comment on CASSANDRA-9092 at 4/3/15 1:39 PM:
---

Java heap is selected automatically in cassandra-env.sh. I tried to set 
MAX_HEAP_SIZE="8G", NEW_HEAP_SIZE="800M", but it didn't help.

nodetool disableautocompaction - didn't help; compactions continue after 
restarting the node.
nodetool truncatehints - didn't help; it showed a message like 'cannot stop 
running hint compaction'.

One of the nodes had ~24000 files in system\hints-...; I stopped the node and 
deleted them, which helped, and the node has been running for about 10 hours. 
Another node has 18154 files in system\hints-... (~1.1TB) and has the same 
problem; I'm leaving it for experiments.

Workload: 20-40 processes on application servers, each one loading files into 
blobs (one big table); each file is about 3.5MB, and the key is a UUID.

CREATE KEYSPACE filespace WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'}  AND durable_writes = true;

CREATE TABLE filespace.filestorage (
key text,
chunk text,
value blob,
PRIMARY KEY (key, chunk)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (chunk ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

nodetool status filespace
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load     Tokens  Owns (effective)  Host ID                               Rack
UN  10.X.X.12   4.82 TB  256     28.0%             25cefe6a-a9b1-4b30-839d-46ed5f4736cc  RAC1
UN  10.X.X.13   3.98 TB  256     22.9%             ef439686-1e8f-4b31-9c42-f49ff7a8b537  RAC1
UN  10.X.X.10   4.52 TB  256     26.1%             a11f52a6-1bff-4b47-bfa9-628a55a058dc  RAC1
UN  10.X.X.11   4.01 TB  256     23.1%             0f454fa7-5cdf-45b3-bf2d-729ab7bd9e52  RAC1
Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load     Tokens  Owns (effective)  Host ID                               Rack
UN  10.X.X.137  4.64 TB  256     22.6%             e184cc42-7cd9-4e2e-bd0d-55a6a62f69dd  RAC1
UN  10.X.X.136  1.25 TB  256     27.2%             c8360341-83e0-4778-b2d4-3966f083151b  RAC1
DN  10.X.X.139  4.81 TB  256     25.8%             1f434cfe-6952-4d41-8fc5-780a18e64963  RAC1
UN  10.X.X.138  3.69 TB  256     24.4%             b7467041-05d9-409f-a59a-438d0a29f6a7  RAC1

I need some workaround to prevent this situation with hints. 

Now we use the default values for:

hinted_handoff_enabled: 'true'
max_hints_delivery_threads: 2
max_hint_window_in_ms: 1080
hinted_handoff_throttle_in_kb: 1024

Should I disable hints, or increase the number of threads and the throughput?

For example:

hinted_handoff_enabled: 'true'
max_hints_delivery_threads: 20
max_hint_window_in_ms: 10800
hinted_handoff_throttle_in_kb: 10240



was (Author: msb):
Java heap is selected automatically in cassandra-env.sh. I tried to set 
MAX_HEAP_SIZE="8G", NEW_HEAP_SIZE="800M", but it didn't help.

nodetool disableautocompaction - didn't help, compactions continue after 
restart node.
nodetool truncatehints - didn't help, it showed message like 'cannot stop 
running hint compaction'.

One of nodes had ~24000 files in system\hints-..., I stepped node and deleted 
them, it helps and node is running about 10 hours. Other node has 18154 files 
in system\hints-... (~1.1TB) and has the same problem, I leave it for 
experiments.

Workload: 20-40 processes on application servers, each one performs loading 
files in blobs (one big table), size of each file is about 3.5MB, key - UUID.

CREATE KEYSPACE filespace WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'}  AND durable_writes = true;

CREATE TABLE filespace.filestorage (
key text,
filename text,
value blob,
PRIMARY KEY (key, chunk)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (chunk ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compr

[jira] [Updated] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7066:

Assignee: Stefania  (was: Benedict)

> Simplify (and unify) cleanup of compaction leftovers
> 
>
> Key: CASSANDRA-7066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
>Priority: Minor
>  Labels: compaction
> Fix For: 3.0
>
>
> Currently we manage a list of in-progress compactions in a system table, 
> which we use to clean up incomplete compactions when we're done. The problem 
> with this is that 1) it's a bit clunky (and leaves us in positions where we 
> can unnecessarily clean up completed files, or conversely not clean up files 
> that have been superseded); and 2) it's only used for regular compaction - 
> no other compaction types are guarded in the same way, so it can result in 
> duplication if we fail before deleting the replacements.
> I'd like to see each sstable store in its metadata its direct ancestors, and 
> on startup we simply delete any sstables that occur in the union of all 
> ancestor sets. This way, as soon as we finish writing, we're capable of 
> cleaning up any leftovers, so we never get duplication. It's also much easier 
> to reason about.
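
A small sketch of that startup cleanup (the interface is an invented stand-in 
for the sstable metadata accessors):

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

// Any sstable whose id appears in the union of all recorded ancestor sets has
// been superseded by a completed compaction and can be deleted on startup.
public class LeftoverCleanup
{
    public interface SSTableMeta
    {
        UUID id();
        Set<UUID> directAncestors();
        void delete();
    }

    public static void cleanupOnStartup(List<SSTableMeta> sstables)
    {
        Set<UUID> ancestors = new HashSet<UUID>();
        for (SSTableMeta sstable : sstables)
            ancestors.addAll(sstable.directAncestors());

        for (SSTableMeta sstable : sstables)
            if (ancestors.contains(sstable.id()))
                sstable.delete();
    }
}
{code}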



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394434#comment-14394434
 ] 

Benedict commented on CASSANDRA-7066:
-

Assigning to [~Stefania] in the hope there is time before 3.0 once your other 
tickets are cleaned up.

> Simplify (and unify) cleanup of compaction leftovers
> 
>
> Key: CASSANDRA-7066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
>Priority: Minor
>  Labels: compaction
> Fix For: 3.0
>
>
> Currently we manage a list of in-progress compactions in a system table, 
> which we use to clean up incomplete compactions when we're done. The problem 
> with this is that 1) it's a bit clunky (and leaves us in positions where we 
> can unnecessarily clean up completed files, or conversely not clean up files 
> that have been superseded); and 2) it's only used for regular compaction - 
> no other compaction types are guarded in the same way, so it can result in 
> duplication if we fail before deleting the replacements.
> I'd like to see each sstable store in its metadata its direct ancestors, and 
> on startup we simply delete any sstables that occur in the union of all 
> ancestor sets. This way, as soon as we finish writing, we're capable of 
> cleaning up any leftovers, so we never get duplication. It's also much easier 
> to reason about.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8933) Short reads can return deleted results

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8933:
---
Assignee: Sylvain Lebresne

> Short reads can return deleted results
> --
>
> Key: CASSANDRA-8933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8933
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>
> The current code for short read protection does not handle all cases. 
> Currently, we retry only if a node returned the requested number of 
> results but we have fewer results than that post-reconciliation, because this 
> means the node in question may have more results it hadn't sent due to the 
> limit.
> Consider however 3 nodes A, B, C (RF=3), and following sequence of operations 
> (all done at QUORUM):
> # we write 1 and 2 in a partition: all nodes get it.
> # we delete 1: only A and C get it.
> # we delete 2: only B and C get it.
> # we read the first row in the partition (so with a LIMIT 1) and A and B 
> answer first.
> At the last step, A will return the tombstone for 1 and the value 2, while B 
> will return just 1. So post-reconciliation, we'll return 2 (since A returned 
> it and we have no tombstone for it), while we should return nothing. This is 
> a short read situation: B stopped at 1 because it was asked for only 1 result, 
> but that result didn't make it into the final result and we need further 
> results from it. However, because 1 result is requested and we have 1 result 
> post-reconciliation, the short read retry won't kick in.
> In practice, the short read check should be generalized: if any node X 
> returns the requested number of results but any of those results gets skipped 
> post-reconciliation, we might have a short read. Basically, enforcing the 
> limit replica-side is optimistic and assumes that all results of that replica 
> will be used, and as soon as that assumption fails we should get back more 
> results.
> Implementing that generalized condition can probably be done in 
> RowDataResolver.scheduleRepairs by using the repair to know if a node has had 
> some of its results skipped by reconciliation, but we want to know whether a 
> full CQL row has been skipped or not, so this will probably force us to add 
> some recounting.
> I'll note that I've fixed this problem on my branch for CASSANDRA-8099 (where 
> this is both simpler and somewhat more efficient since short reads don't 
> retry full queries there), so if we decide this is too risky to fix in 2.1, we 
> can possibly just mark this as a duplicate of CASSANDRA-8099.
> Lastly, it shouldn't be too hard to extend our current short read dtests to 
> test for that case, but I haven't taken the time to do so yet 
> ([~philipthompson] do you think you can have a look at adding such test at 
> some point?).
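
A sketch of the generalized condition described above (names invented; row 
keys stand in for whole CQL rows):

{code:java}
import java.util.List;
import java.util.Set;

// A replica may still be holding back rows iff it returned exactly `limit`
// rows AND at least one of those rows was dropped during reconciliation,
// regardless of how many rows survive post-reconciliation.
public class ShortReadCheck
{
    public interface ReplicaResponse
    {
        List<String> returnedRowKeys();
    }

    public static boolean mayBeShortRead(ReplicaResponse response, int limit, Set<String> reconciledRowKeys)
    {
        if (response.returnedRowKeys().size() < limit)
            return false; // replica was not truncated by the limit

        for (String rowKey : response.returnedRowKeys())
            if (!reconciledRowKeys.contains(rowKey))
                return true; // one of its rows was skipped: fetch more from it

        return false;
    }
}
{code}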



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8736) core dump on creating keyspaces

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8736.

Resolution: Not a Problem

Please re-open if you can still reproduce with a newer python version.

> core dump on creating keyspaces 
> 
>
> Key: CASSANDRA-8736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Linux
>Reporter: mike
>Priority: Minor
>
> After we upgraded C* to 2.1.2 with a new installation, creating keyspaces 
> sometimes failed. Is there a patch available, or will it be fixed in the 
> 2.1.3 release? Thanks.
> {noformat}
> $CASSANDRA_DIR/bin/cqlsh ${cassandraIp} -f 
> ${dbDefaultDataDir}/${DATA_SCRIPT}”.
> *** glibc detected *** python: corrupted double-linked list: 
> 0x02dc95f0 ***
> === Backtrace: =
> /lib64/libc.so.6(+0x76166)[0x7fd1c43b8166]
> /lib64/libc.so.6(+0x78ef4)[0x7fd1c43baef4]
> /usr/lib64/libpython2.6.so.1.0(+0x7f6f7)[0x7fd1c4ffd6f7]
> /usr/lib64/libpython2.6.so.1.0(+0xa1bb0)[0x7fd1c501fbb0]
> /usr/lib64/libpython2.6.so.1.0(+0x555bb)[0x7fd1c4fd35bb]
> /usr/lib64/libpython2.6.so.1.0(+0x6d132)[0x7fd1c4feb132]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x597)[0x7fd1c505e407]
> /usr/lib64/libpython2.6.so.1.0(+0x6eead)[0x7fd1c4fecead]
> /usr/lib64/libpython2.6.so.1.0(PyObject_Call+0x53)[0x7fd1c4fc2303]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x3cd0)[0x7fd1c505b5b0]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x5304)[0x7fd1c505cbe4]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x5304)[0x7fd1c505cbe4]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x63ef)[0x7fd1c505dccf]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x63ef)[0x7fd1c505dccf]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x63ef)[0x7fd1c505dccf]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x5304)[0x7fd1c505cbe4]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x5304)[0x7fd1c505cbe4]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(+0x6eead)[0x7fd1c4fecead]
> /usr/lib64/libpython2.6.so.1.0(PyObject_Call+0x53)[0x7fd1c4fc2303]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x3cd0)[0x7fd1c505b5b0]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x63ef)[0x7fd1c505dccf]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalFrameEx+0x63ef)[0x7fd1c505dccf]
> /usr/lib64/libpython2.6.so.1.0(PyEval_EvalCodeEx+0x927)[0x7fd1c505e797]
> /usr/lib64/libpython2.6.so.1.0(+0x6edb0)[0x7fd1c4fecdb0]
> /usr/lib64/libpython2.6.so.1.0(PyObject_Call+0x53)[0x7fd1c4fc2303]
> /usr/lib64/libpython2.6.so.1.0(+0x5970f)[0x7fd1c4fd770f]
> /usr/lib64/libpython2.6.so.1.0(PyObject_Call+0x53)[0x7fd1c4fc2303]
> /usr/lib64/libpython2.6.so.1.0(PyEval_CallObjectWithKeywords+0x43)[0x7fd1c5056dd3]
> /usr/lib64/libpython2.6.so.1.0(+0x10bf2a)[0x7fd1c5089f2a]
> /lib64/libpthread.so.0(+0x79d1)[0x7fd1c4d689d1]
> /lib64/libc.so.6(clone+0x6d)[0x7fd1c442ab6d]
> === Memory map: 
> 0040-00401000 r-xp  08:06 450964 
> /usr/bin/python
> 0060-00601000 rw-p  08:06 450964 
> /usr/bin/python
> 024ba000-02f29000 rw-p  00:00 0  
> [heap]
> 7fd1b000-7fd1b0172000 rw-p  00:00 0
> 7fd1b0172000-7fd1b400 ---p  00:00 0
> 7fd1b400-7fd1b4021000 rw-p  00:00 0
> 7fd1b4021000-7fd1b800 ---p  00:00 0
> 7fd1b800-7fd1b8021000 rw-p  00:00 0
> 7fd1b8021000-7fd1bc00 ---p  00:00 0
> 7fd1bc1d-7fd1bc1e6000 r-xp  08:0a 917506 
> /lib64/libgcc_s-4.4.6-20120305.so.1
> 7fd1bc1e6000-7fd1bc3e5000 ---p 00016000 08:0a 917506 
> /lib64/libgcc_s-4.4.6-20120305.so.1
> 7fd1bc3e5000-7fd1bc3e6000 rw-p 00015000 08:0a 917506 
> /lib64/libgcc_s-4.4.6-20120305.so.1
> 7fd1bc3e6000-7fd1bc3e7000 ---p  00:00 0
> 7fd1bc3e7000-7fd1bcde7000 rw-p  00:00 0
> 7fd1bcde7000-7fd1bcde8000 ---p  00:00 0
> 7fd1bcde8000-7fd1bd7e8000 rw-p  00:00 0
> 7fd1bd7e8000-7fd1bd7e9000 ---p  00:00 0
> 7fd1bd7e9000-7fd1be1e9000 rw-p  00:00 0
> 7fd1be1e9000-7fd1be1ed000 r-xp  08:06 180906 
> /usr/lib64/python2.6/lib-dynload/selectmodule.so
> 7fd1be1ed000-7fd1be3ed000 ---p 4000 08:06 180906 
> /usr/lib64/python2.6/lib-dynload/sele

[jira] [Updated] (CASSANDRA-8029) BufferUnderflowException when writing a null value to a UDT field

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8029:
---
Fix Version/s: 2.1.5

> BufferUnderflowException when writing a null value to a UDT field
> -
>
> Key: CASSANDRA-8029
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8029
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04. Cassandra 2.1.0. Single node.
>Reporter: Cory Snyder
> Fix For: 2.1.5
>
> Attachments: schema.txt
>
>
> The schema that was being used when this error was produced is attached.
> Whenever I write a null value to any of the id, name, or description fields 
> for a LOAD_BALANCER_POOL udt in the pools set of the LOAD_BALANCER_SERVICE 
> table or to the name, enabled, description, or ip_addresses fields of the 
> LOAD_BALANCER_VIRTUAL_SERVER table in the virtual_servers set of the 
> LOAD_BALANCER_SERVICE table, I get the following error from cqlsh:
> {code}
>  message="java.nio.BufferUnderflowException">
> {code}
> When doing the same from the Java Datastax driver, this seems to succeed on 
> the first write but fail with a timeout exception on all subsequent writes 
> until either the table is truncated or the pools and virtual_servers 
> collections are both written as empty sets.
> Having null values in other UDT fields in the hierarchy doesn't seem to cause 
> any issues.
> When I restart Cassandra after having these errors, Cassandra fails to start 
> and throws the following error when trying to replay the commit logs:
> {code}
> ERROR [main] 2014-09-30 13:43:04,183 CassandraDaemon.java:474 - Exception 
> encountered during startup
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.nio.BufferUnderflowException
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:411) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:400) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:426)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:95)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:137) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:117) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:296) 
> [apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
>  [apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
> [apache-cassandra-2.1.0.jar:2.1.0]
> Caused by: java.util.concurrent.ExecutionException: 
> java.nio.BufferUnderflowException
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.get(AbstractTracingAwareExecutorService.java:198)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:407) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> ... 8 common frames omitted
> Caused by: java.nio.BufferUnderflowException: null
> at java.nio.Buffer.nextGetIndex(Buffer.java:506) ~[na:1.8.0_20]
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
> ~[na:1.8.0_20]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readCollectionSize(CollectionSerializer.java:85)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.marshal.ListType.compareListOrSet(ListType.java:96) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at org.apache.cassandra.db.marshal.SetType.compare(SetType.java:77) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at org.apache.cassandra.db.marshal.SetType.compare(SetType.java:29) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.marshal.TupleType.compare(TupleType.java:95) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.marshal.TupleType.compare(TupleType.java:38) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.marshal.ColumnToCollectionType.compareCollectionMembers(ColumnToCollectionType.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.composites.CompoundSparseCellNameType$WithCollection.compare(CompoundSparseCellNameType.java:292)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.composit

[jira] [Updated] (CASSANDRA-7973) cqlsh connect error "member_descriptor' object is not callable"

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7973:
---
Reproduced In: 2.1.0
Fix Version/s: 2.1.5

> cqlsh connect error "member_descriptor' object is not callable"
> ---
>
> Key: CASSANDRA-7973
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7973
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.0
>Reporter: Digant Modha
>Priority: Minor
>  Labels: cqlsh, lhf
> Fix For: 2.1.5
>
>
> When using cqlsh (Cassandra 2.1.0) with SSL and Python 2.6.9, I get Connection 
> error: ('Unable to connect to any servers', {...: 
> TypeError("'member_descriptor' object is not callable",)}) 
> I am able to connect from another machine using Python 2.7.5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6335) Hints broken for nodes that change broadcast address

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-6335:
---
Assignee: Ryan McGuire

> Hints broken for nodes that change broadcast address
> 
>
> Key: CASSANDRA-6335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6335
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Rick Branson
>Assignee: Ryan McGuire
>
> When a node changes its broadcast address, the transition process works 
> properly, but hints that are destined for it can't be delivered because of 
> the address change. It produces an exception:
> java.lang.AssertionError: Missing host ID for 10.1.60.22
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:598)
> at 
> org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:567)
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1679)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6335) Hints broken for nodes that change broadcast address

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394460#comment-14394460
 ] 

Philip Thompson commented on CASSANDRA-6335:


Assigning to Ryan to have someone in test create a dtest to try to reproduce 
this.

> Hints broken for nodes that change broadcast address
> 
>
> Key: CASSANDRA-6335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6335
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Rick Branson
>
> When a node changes its broadcast address, the transition process works 
> properly, but hints that are destined for it can't be delivered because of 
> the address change. It produces an exception:
> java.lang.AssertionError: Missing host ID for 10.1.60.22
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:598)
> at 
> org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:567)
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1679)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7537) Updates and partition tombstones are not given the same timestamp in a CAS batch

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7537:
---
Fix Version/s: 2.0.15

> Updates and partition tombstones are not given the same timestamp in a CAS 
> batch
> 
>
> Key: CASSANDRA-7537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7537
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Nicolas Favre-Felix
> Fix For: 2.0.15
>
>
> Create a table with one partition and 2 CQL rows:
> {code}
> CREATE TABLE t1 (
> k text,
> c text,
> v text,
> PRIMARY KEY(k,c)
> );
> BEGIN BATCH
> INSERT INTO t1 (k,c,v) VALUES ('x','1','1');
> INSERT INTO t1 (k,c,v) VALUES ('x','2','2');
> APPLY BATCH;
> {code}
> CAS-delete the full partition based on the expected value of a single column:
> {code}
> cqlsh:ks1> SELECT * FROM t1 WHERE k='x';
>  k | c | v
> ---+---+---
>  x | 1 | 1
>  x | 2 | 2
> (2 rows)
> cqlsh:ks1> BEGIN BATCH
>... UPDATE t1 SET v = '0' WHERE k = 'x' AND c = '1' IF v = '1';
>... DELETE FROM t1 WHERE k = 'x';
>... APPLY BATCH;
>  [applied]
> ---
>   True
> cqlsh:ks1> SELECT * FROM t1 WHERE k='x';
>  k | c | v
> ---+---+---
>  x | 1 | 0
> (1 rows)
> {code}
> sstable2json reports that the updated column has a timestamp 1 greater than 
> the partition delete:
> {code}
> {"key": "78","metadata": {"deletionInfo": 
> {"markedForDeleteAt":1405097039224999,"localDeletionTime":1405097039}},"columns":
>  [["1:v","0",1405097039225000]]}
> {code}
> All mutations in a CAS batch should be applied with the same timestamp.
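
For illustration: conditional (CAS) batches do not accept {{USING TIMESTAMP}}, so 
clients cannot pin a timestamp themselves and must rely on the server assigning 
one, which is why this bug matters. On a *non-conditional* batch, a single 
client-supplied timestamp shows the intended semantics; a minimal CQL sketch 
against the table above (timestamp value reused from the sstable2json output, 
otherwise hypothetical):

{code}
BEGIN BATCH USING TIMESTAMP 1405097039224999
  UPDATE t1 SET v = '0' WHERE k = 'x' AND c = '1';
  DELETE FROM t1 WHERE k = 'x';
APPLY BATCH;
{code}

With a single shared timestamp the partition tombstone suppresses the update, 
which is the behaviour the reporter expects from the CAS path as well.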



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394474#comment-14394474
 ] 

Sergey Maznichenko commented on CASSANDRA-9092:
---

Consistency ONE. Clients use the DataStax Java driver.
We are writing only to DC1.

In the logs of the nodes that don't fail, we see errors and warnings during the 
load:

INFO  [SharedPool-Worker-5] 2015-03-31 15:48:52,534 Message.java:532 - 
Unexpected exception during request; channel = [id: 0x48b3ad12, 
/10.77.81.33:56581 :> /10.XX.XX.10:9042]
java.io.IOException: Error while read(...): Connection reset by peer
at io.netty.channel.epoll.Native.readAddress(Native Method) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]

ERROR [Thrift:15] 2015-03-31 11:54:35,163 CustomTThreadPoolServer.java:221 - 
Error occurred during processing of message.
java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
received only 2 responses.
at org.apache.cassandra.auth.Auth.selectUser(Auth.java:317) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:125) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.service.ClientState.login(ClientState.java:171) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1493) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3579)
 ~[apache-cassandra-thrift-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3563)
 ~[apache-cassandra-thrift-2.1.2.jar:2.1.2]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:202)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
[na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
[na:1.7.0_71]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]
Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
timed out - received only 2 responses.
at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:103) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:144)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1263) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1184) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:262)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.auth.Auth.selectUser(Auth.java:306) 
~[apache-cassandra-2.1.2.jar:2.1.2]
... 11 common frames omitted

I've changed the schema definition.
It's a periodic workload, so I will disable hinted handoff temporarily. I also 
disabled compaction for filespace.filestorage because it takes a long time and 
gives <1% efficiency.

My hints parameters now:
hinted_handoff_enabled: 'true'
max_hints_delivery_threads: 4
max_hint_window_in_ms: 1080
hinted_handoff_throttle_in_kb: 10240

I suppose Cassandra should do some kind of partial compaction if system.hints 
is big, or clean out old hints before compaction. Do you have any ideas about 
the necessary changes in 2.1.5? 
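
For reference, hinted handoff can also be toggled at runtime rather than through 
cassandra.yaml; assuming a 2.1 nodetool, the relevant commands are:

{noformat}
nodetool disablehandoff   # stop storing new hints on this node
nodetool enablehandoff    # re-enable once the bulk load is done
nodetool truncatehints    # drop all hints stored locally
{noformat}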


> Nodes in DC2 die during and after huge write workload
> -

[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-04-03 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9449a701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9449a701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9449a701

Branch: refs/heads/trunk
Commit: 9449a70162879884745a950666bfae0969b2608f
Parents: 49d64c2 2e6492a
Author: Yuki Morishita 
Authored: Fri Apr 3 09:19:31 2015 -0500
Committer: Yuki Morishita 
Committed: Fri Apr 3 09:19:31 2015 -0500

--
 .../cassandra/db/compaction/LazilyCompactedRow.java   | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9449a701/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 88c87a4,9962d3f..56a4ede
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@@ -156,12 -141,14 +156,18 @@@ public class LazilyCompactedRow extend
  // blindly updating everything wouldn't be correct
  DataOutputBuffer out = new DataOutputBuffer();
  
++// initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
++indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, 
key.getKey(), out);
++
  try
  {
  
DeletionTime.serializer.serialize(emptyColumnFamily.deletionInfo().getTopLevelDeletion(),
 out);
 +
  // do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
- if (emptyColumnFamily.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+ // - digest for non-empty rows needs to be updated with deletion 
in any case to match digest with versions before patch
+ // - empty rows must not update digest in case of LIVE delete 
status to avoid mismatches with non-existing rows
+ //   this will however introduce in return a digest mismatch for 
versions before patch (which would update digest in any case)
 -if (iter.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
++if (merger.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
  {
  digest.update(out.getData(), 0, out.getLength());
  }
@@@ -171,10 -158,10 +177,8 @@@
  throw new AssertionError(e);
  }
  
--// initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
- indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, 
key.getKey(), out);
 -indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, key.key, 
out);
 -while (iter.hasNext())
 -iter.next().updateDigest(digest);
 +while (merger.hasNext())
 +merger.next().updateDigest(digest);
  close();
  }
  



[1/6] cassandra git commit: Digest will now always be updated for non-empty rows

2015-04-03 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 c8ab96d17 -> 2e6492a18
  refs/heads/cassandra-2.1 49d64c23b -> 9449a7016
  refs/heads/trunk 23c84b169 -> 51908e240


Digest will now always be updated for non-empty rows

Also reverted PreCompactedRow.

Another follow up on CASSANDRA-8979


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e6492a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e6492a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e6492a1

Branch: refs/heads/cassandra-2.0
Commit: 2e6492a1839040d5b417ab934490c453c92896d7
Parents: c8ab96d
Author: Stefan Podkowinski 
Authored: Thu Apr 2 12:21:20 2015 +0200
Committer: Yuki Morishita 
Committed: Fri Apr 3 08:21:05 2015 -0500

--
 .../db/compaction/LazilyCompactedRow.java   |  9 +++--
 .../db/compaction/PrecompactedRow.java  | 21 
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index b562ba5..9962d3f 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -134,6 +134,9 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 assert !closed;
 
+// create merge iterator for reduced rows
+Iterator iter = iterator();
+
 // no special-case for rows.size == 1, we're actually skipping some 
bytes here so just
 // blindly updating everything wouldn't be correct
 DataOutputBuffer out = new DataOutputBuffer();
@@ -142,7 +145,10 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 
DeletionTime.serializer.serialize(emptyColumnFamily.deletionInfo().getTopLevelDeletion(),
 out);
 // do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (emptyColumnFamily.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+// - digest for non-empty rows needs to be updated with deletion 
in any case to match digest with versions before patch
+// - empty rows must not update digest in case of LIVE delete 
status to avoid mismatches with non-existing rows
+//   this will however introduce in return a digest mismatch for 
versions before patch (which would update digest in any case)
+if (iter.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
 {
 digest.update(out.getData(), 0, out.getLength());
 }
@@ -154,7 +160,6 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 
 // initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
 indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, key.key, 
out);
-Iterator iter = iterator();
 while (iter.hasNext())
 iter.next().updateDigest(digest);
 close();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
index f41e073..db72847 100644
--- a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
@@ -157,20 +157,15 @@ public class PrecompactedRow extends AbstractCompactedRow
 if (compactedCf == null)
 return;
 
-// do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (compactedCf.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+DataOutputBuffer buffer = new DataOutputBuffer();
+try
 {
-DataOutputBuffer buffer = new DataOutputBuffer();
-try
-{
-
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(),
 buffer);
-
-digest.update(buffer.getData(), 0, buffer.getLength());
-}
-catch (IOException e)
-{
-throw new RuntimeException(e);
-}
+
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(), buffer);

[2/6] cassandra git commit: Digest will now always be updated for non-empty rows

2015-04-03 Thread yukim
Digest will now always be updated for non-empty rows

Also reverted PreCompactedRow.

Another follow up on CASSANDRA-8979


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e6492a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e6492a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e6492a1

Branch: refs/heads/cassandra-2.1
Commit: 2e6492a1839040d5b417ab934490c453c92896d7
Parents: c8ab96d
Author: Stefan Podkowinski 
Authored: Thu Apr 2 12:21:20 2015 +0200
Committer: Yuki Morishita 
Committed: Fri Apr 3 08:21:05 2015 -0500

--
 .../db/compaction/LazilyCompactedRow.java   |  9 +++--
 .../db/compaction/PrecompactedRow.java  | 21 
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index b562ba5..9962d3f 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -134,6 +134,9 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 assert !closed;
 
+// create merge iterator for reduced rows
+Iterator iter = iterator();
+
 // no special-case for rows.size == 1, we're actually skipping some 
bytes here so just
 // blindly updating everything wouldn't be correct
 DataOutputBuffer out = new DataOutputBuffer();
@@ -142,7 +145,10 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 
DeletionTime.serializer.serialize(emptyColumnFamily.deletionInfo().getTopLevelDeletion(),
 out);
 // do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (emptyColumnFamily.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+// - digest for non-empty rows needs to be updated with deletion 
in any case to match digest with versions before patch
+// - empty rows must not update digest in case of LIVE delete 
status to avoid mismatches with non-existing rows
+//   this will however introduce in return a digest mismatch for 
versions before patch (which would update digest in any case)
+if (iter.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
 {
 digest.update(out.getData(), 0, out.getLength());
 }
@@ -154,7 +160,6 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 
 // initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
 indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, key.key, 
out);
-Iterator iter = iterator();
 while (iter.hasNext())
 iter.next().updateDigest(digest);
 close();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
index f41e073..db72847 100644
--- a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
@@ -157,20 +157,15 @@ public class PrecompactedRow extends AbstractCompactedRow
 if (compactedCf == null)
 return;
 
-// do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (compactedCf.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+DataOutputBuffer buffer = new DataOutputBuffer();
+try
 {
-DataOutputBuffer buffer = new DataOutputBuffer();
-try
-{
-
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(),
 buffer);
-
-digest.update(buffer.getData(), 0, buffer.getLength());
-}
-catch (IOException e)
-{
-throw new RuntimeException(e);
-}
+
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(),
 buffer);
+digest.update(buffer.getData(), 0, buffer.getLength());
+}
+catch (IOException e)
+{
+throw new RuntimeException(e);

[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-04-03 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9449a701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9449a701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9449a701

Branch: refs/heads/cassandra-2.1
Commit: 9449a70162879884745a950666bfae0969b2608f
Parents: 49d64c2 2e6492a
Author: Yuki Morishita 
Authored: Fri Apr 3 09:19:31 2015 -0500
Committer: Yuki Morishita 
Committed: Fri Apr 3 09:19:31 2015 -0500

--
 .../cassandra/db/compaction/LazilyCompactedRow.java   | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9449a701/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 88c87a4,9962d3f..56a4ede
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@@ -156,12 -141,14 +156,18 @@@ public class LazilyCompactedRow extend
  // blindly updating everything wouldn't be correct
  DataOutputBuffer out = new DataOutputBuffer();
  
++// initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
++indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, 
key.getKey(), out);
++
  try
  {
  
DeletionTime.serializer.serialize(emptyColumnFamily.deletionInfo().getTopLevelDeletion(),
 out);
 +
  // do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
- if (emptyColumnFamily.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+ // - digest for non-empty rows needs to be updated with deletion 
in any case to match digest with versions before patch
+ // - empty rows must not update digest in case of LIVE delete 
status to avoid mismatches with non-existing rows
+ //   this will however introduce in return a digest mismatch for 
versions before patch (which would update digest in any case)
 -if (iter.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
++if (merger.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
  {
  digest.update(out.getData(), 0, out.getLength());
  }
@@@ -171,10 -158,10 +177,8 @@@
  throw new AssertionError(e);
  }
  
--// initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
- indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, 
key.getKey(), out);
 -indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, key.key, 
out);
 -while (iter.hasNext())
 -iter.next().updateDigest(digest);
 +while (merger.hasNext())
 +merger.next().updateDigest(digest);
  close();
  }
  



[3/6] cassandra git commit: Digest will now always be updated for non-empty rows

2015-04-03 Thread yukim
Digest will now always be updated for non-empty rows

Also reverted PreCompactedRow.

Another follow up on CASSANDRA-8979


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e6492a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e6492a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e6492a1

Branch: refs/heads/trunk
Commit: 2e6492a1839040d5b417ab934490c453c92896d7
Parents: c8ab96d
Author: Stefan Podkowinski 
Authored: Thu Apr 2 12:21:20 2015 +0200
Committer: Yuki Morishita 
Committed: Fri Apr 3 08:21:05 2015 -0500

--
 .../db/compaction/LazilyCompactedRow.java   |  9 +++--
 .../db/compaction/PrecompactedRow.java  | 21 
 2 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index b562ba5..9962d3f 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -134,6 +134,9 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 assert !closed;
 
+// create merge iterator for reduced rows
+Iterator iter = iterator();
+
 // no special-case for rows.size == 1, we're actually skipping some 
bytes here so just
 // blindly updating everything wouldn't be correct
 DataOutputBuffer out = new DataOutputBuffer();
@@ -142,7 +145,10 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 {
 
DeletionTime.serializer.serialize(emptyColumnFamily.deletionInfo().getTopLevelDeletion(),
 out);
 // do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (emptyColumnFamily.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+// - digest for non-empty rows needs to be updated with deletion 
in any case to match digest with versions before patch
+// - empty rows must not update digest in case of LIVE delete 
status to avoid mismatches with non-existing rows
+//   this will however introduce in return a digest mismatch for 
versions before patch (which would update digest in any case)
+if (iter.hasNext() || 
emptyColumnFamily.deletionInfo().getTopLevelDeletion() != DeletionTime.LIVE)
 {
 digest.update(out.getData(), 0, out.getLength());
 }
@@ -154,7 +160,6 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 
 // initialize indexBuilder for the benefit of its tombstoneTracker, 
used by our reducing iterator
 indexBuilder = new ColumnIndex.Builder(emptyColumnFamily, key.key, 
out);
-Iterator iter = iterator();
 while (iter.hasNext())
 iter.next().updateDigest(digest);
 close();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e6492a1/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
index f41e073..db72847 100644
--- a/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/PrecompactedRow.java
@@ -157,20 +157,15 @@ public class PrecompactedRow extends AbstractCompactedRow
 if (compactedCf == null)
 return;
 
-// do not update digest in case of missing or purged row level 
tombstones, see CASSANDRA-8979
-if (compactedCf.deletionInfo().getTopLevelDeletion() != 
DeletionTime.LIVE)
+DataOutputBuffer buffer = new DataOutputBuffer();
+try
 {
-DataOutputBuffer buffer = new DataOutputBuffer();
-try
-{
-
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(),
 buffer);
-
-digest.update(buffer.getData(), 0, buffer.getLength());
-}
-catch (IOException e)
-{
-throw new RuntimeException(e);
-}
+
DeletionTime.serializer.serialize(compactedCf.deletionInfo().getTopLevelDeletion(),
 buffer);
+digest.update(buffer.getData(), 0, buffer.getLength());
+}
+catch (IOException e)
+{
+throw new RuntimeException(e);

[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51908e24
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51908e24
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51908e24

Branch: refs/heads/trunk
Commit: 51908e240c97ac6fa228950fbdea2eb790345525
Parents: 23c84b1 9449a70
Author: Yuki Morishita 
Authored: Fri Apr 3 09:19:51 2015 -0500
Committer: Yuki Morishita 
Committed: Fri Apr 3 09:19:51 2015 -0500

--
 .../cassandra/db/compaction/LazilyCompactedRow.java   | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51908e24/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--



[jira] [Resolved] (CASSANDRA-8979) MerkleTree mismatch for deleted and non-existing rows

2015-04-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-8979.
---
   Resolution: Fixed
Fix Version/s: 2.0.15

Thanks, committed the follow-ups.

> MerkleTree mismatch for deleted and non-existing rows
> -
>
> Key: CASSANDRA-8979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8979-AvoidBufferAllocation-2.0_patch.txt, 
> 8979-LazilyCompactedRow-2.0.txt, 8979-RevertPrecompactedRow-2.0.txt, 
> cassandra-2.0-8979-lazyrow_patch.txt, cassandra-2.0-8979-validator_patch.txt, 
> cassandra-2.0-8979-validatortest_patch.txt, 
> cassandra-2.1-8979-lazyrow_patch.txt, cassandra-2.1-8979-validator_patch.txt
>
>
> Validation compaction will currently create different hashes for rows that 
> have been deleted compared to nodes that have not seen the rows at all or 
> have already compacted them away. 
> In case this sounds familiar to you, see CASSANDRA-4905, which was supposed to 
> prevent hashing of expired tombstones. That still seems to be in place, but 
> does not address the issue completely; or there was a change in 2.0 that 
> rendered the patch ineffective. 
> The problem is that rowHash() in the Validator will return a new hash in any 
> case, whether the PrecompactedRow actually updated the digest or not. This 
> leads to the situation where a purged PrecompactedRow will not change the 
> digest, but we end up with a different tree compared to not having rowHash 
> called at all (such as when the row doesn't exist in the first place).
> As an implication, repair jobs will constantly detect mismatches between 
> older sstables containing purgeable rows and nodes that have already compacted 
> these rows. After transferring the reported ranges, the newly created sstables 
> will immediately get deleted again during the following compaction. This will 
> happen on every repair run until the sstable with the purgeable row 
> finally gets compacted. 
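
As a self-contained illustration of why hashing nothing differs from not 
hashing at all (plain JDK code, not the actual Validator):

{code}
import java.security.MessageDigest;

public class EmptyDigestDemo
{
    public static void main(String[] args) throws Exception
    {
        // The digest of zero input bytes is a fixed, non-zero constant.
        byte[] emptyHash = MessageDigest.getInstance("MD5").digest();
        StringBuilder hex = new StringBuilder();
        for (byte b : emptyHash)
            hex.append(String.format("%02x", b));
        // Prints d41d8cd98f00b204e9800998ecf8427e; mixing this constant into a
        // Merkle tree is not the same as adding nothing at all, which mirrors
        // the purged-row mismatch described above.
        System.out.println(hex);
    }
}
{code}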



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6565) New node refuses to join the ring.

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394485#comment-14394485
 ] 

Philip Thompson commented on CASSANDRA-6565:


For future users: please open new tickets rather than commenting here. Failing 
to bootstrap is not a uniform problem across deployments.

> New node refuses to join the ring.
> --
>
> Key: CASSANDRA-6565
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6565
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shao-Chuan Wang
>
> We have 30 nodes in one DC, 25 nodes in another. We are running 2.0.1.
> Two nodes are joining the ring, but one of them failed:
> WARN [STREAM-IN-/10.4.197.53] 2014-01-09 19:41:40,418 StreamResultFuture.java 
> (line 209) [Stream #e515d6e0-795d-11e3-b74a-b72892248056] Stream failed
> ERROR [main] 2014-01-09 19:41:40,418 CassandraDaemon.java (line 459) 
> Exception encountered during startup
> java.lang.RuntimeException: Error during boostrap: Stream failed
> at 
> org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:86)
> at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:901)
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:670)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:529)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:428)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:343)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:442)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:485)
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210)
> at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:185)
> at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:321)
> at 
> org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:501) 
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:376)
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:293)
> at java.lang.Thread.run(Thread.java:744)
> ERROR [StorageServiceShutdownHook] 2014-01-09 19:41:40,428 
> CassandraDaemon.java (line 185) Exception in thread 
> Thread[StorageServiceShutdownHook,5,main]
> java.lang.NullPointerException
> at 
> org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:312)
> at 
> org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:361)
> at 
> org.apache.cassandra.service.StorageService.access$000(StorageService.java:96)
> at 
> org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:494)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-7491) Incorrect thrift-server dependency in 2.0 poms

2015-04-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe resolved CASSANDRA-7491.

   Resolution: Fixed
Reproduced In: 2.0.9, 2.0.8  (was: 2.0.8, 2.0.9)

CASSANDRA-7594 brought the generated pom.xml back in line with the jar in lib/

> Incorrect thrift-server dependency in 2.0 poms
> --
>
> Key: CASSANDRA-7491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7491
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Sam Tunnicliffe
> Fix For: 2.0.15
>
>
> On the 2.0 branch we recently replaced thrift-server-0.3.3.jar with 
> thrift-server-internal-only-0.3.3.jar (commit says CASSANDRA-6545, but I 
> don't think that's right), but didn't update the generated pom that gets 
> deployed to mvn central. The upshot is that the poms on maven central for 
> 2.0.8 & 2.0.9 specify their dependencies incorrectly. So any project pulling 
> in those versions of cassandra-all as a dependency will incorrectly include 
> the old jar.
> However, on 2.1 & trunk the internal-only jar was subsequently replaced by 
> thrift-server-0.3.5.jar (CASSANDRA-6285), which *is* available in mvn 
> central. build.xml has also been updated correctly on these branches.
> [~xedin], is there any reason for not switching 2.0 to 
> thrift-server-0.3.5.jar?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9113) Improve error message when bootstrap fails

2015-04-03 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-9113:
--

 Summary: Improve error message when bootstrap fails
 Key: CASSANDRA-9113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9113
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Philip Thompson
 Fix For: 3.1


Currently when bootstrap fails, users see a {{RuntimeException: Stream failed}} 
with a long stack trace. This typically brings them to IRC, the mailing list, 
or jira. However, most of the time it is not due to a C* server failure, but to 
network or machine issues.

While there are probably improvements that could be made to the resiliency of 
streaming, it would be nice if, assuming no server errors are detected, users 
were shown a less traumatic error message instead of the RuntimeException, one 
that includes or points to documentation on how to resolve a failed bootstrap 
stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8905:
---
Fix Version/s: 2.0.15

> IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
> ---
>
> Key: CASSANDRA-8905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
> Fix For: 2.0.15
>
>
> After upgrading from 1.2.18 to 2.0.12, I've started to get exceptions like:
> {noformat}
> ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
> java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
> at java.util.ArrayList.(ArrayList.java:142)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I've identified which sstable is causing this; it's an -ic- format sstable, 
> i.e. something written before the upgrade. I can reproduce it with 
> forceUserDefinedCompaction.
> Running upgradesstables also causes the same exception. 
> Scrub helps, but skips a row as incorrect. 
> I can share the sstable privately if it helps.
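
Since online scrub helped here, the offline variant may also be worth trying; 
assuming the standard tools layout and a stopped node, it is invoked as:

{noformat}
sstablescrub <keyspace> <table>
{noformat}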



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394511#comment-14394511
 ] 

Philip Thompson commented on CASSANDRA-8905:


[~krummas], if scrubbing solved the issue, do we consider this a problem?

> IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
> ---
>
> Key: CASSANDRA-8905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
> Fix For: 2.0.15
>
>
> After upgrading from 1.2.18 to 2.0.12, I've started to get exceptions like:
> {noformat}
> ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
> java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
> at java.util.ArrayList.(ArrayList.java:142)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I've identified which sstable is causing this; it's an -ic- format sstable, 
> i.e. something written before the upgrade. I can reproduce it with 
> forceUserDefinedCompaction.
> Running upgradesstables also causes the same exception. 
> Scrub helps, but skips a row as incorrect. 
> I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394526#comment-14394526
 ] 

Philip Thompson commented on CASSANDRA-8589:


[~slebresne], would you like this on your backlog? Or should I assign it to 
Benjamin, Tyler, or Carl?

> Reconciliation in presence of tombstone might yield stale data
> --
>
> Key: CASSANDRA-8589
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>
> Consider 3 replicas A, B, C (so RF=3) and suppose that we do the following 
> sequence of actions at {{QUORUM}}, where I indicate the replicas acknowledging 
> each operation (and let's assume that a replica that doesn't ack is a replica 
> that doesn't get the update):
> {noformat}
> CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
> INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
> DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
> UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
> SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
> {noformat}
> Every operation has achieved quorum, but on the last read, A will respond 
> {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
> consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
> {{0->0, 2->3}}).
> Put another way, if we have a limit, every replica honors that limit, but 
> since tombstones can "suppress" results from other nodes, we may have some 
> cells for which we actually don't get a quorum of responses (even though we 
> globally have a quorum of replica responses).
> In practice, this probably occurs rather rarely, so the "simpler" fix is 
> probably to do something similar to the "short reads protection": detect when 
> this could have happened (based on how replica responses are reconciled) and do 
> an additional request in that case. That detection will have potential false 
> positives, but I suspect we can be precise enough that those false positives 
> will be very, very rare (we should nonetheless track how often this code gets 
> triggered, and if we see that it's more often than we think, we could 
> proactively bump user limits internally to reduce those occurrences).
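
The scenario can be simulated in a few lines of plain Java; the sketch below is 
a hypothetical model of timestamp-based reconciliation under a limit, not 
Cassandra's actual read path, and it reproduces the incorrect {{0->0, 2->2}} 
answer described above:

{code}
import java.util.Map;
import java.util.TreeMap;

public class ReconcileDemo
{
    public static void main(String[] args)
    {
        // t -> { value (null means tombstone), timestamp }
        Map<Integer, Object[]> a = new TreeMap<>(); // A saw the DELETE of t=1 but not the UPDATE of t=2
        a.put(0, new Object[]{ "0", 1L });
        a.put(1, new Object[]{ null, 4L });         // tombstone
        a.put(2, new Object[]{ "2", 3L });
        Map<Integer, Object[]> b = new TreeMap<>(); // B honored LIMIT 2, so its page stops at t=1
        b.put(0, new Object[]{ "0", 1L });
        b.put(1, new Object[]{ "1", 2L });

        // Reconcile cell by cell, keeping the highest timestamp.
        Map<Integer, Object[]> merged = new TreeMap<>(a);
        for (Map.Entry<Integer, Object[]> e : b.entrySet())
            merged.merge(e.getKey(), e.getValue(),
                         (x, y) -> ((Long) x[1]) >= ((Long) y[1]) ? x : y);

        // Prints t=0 v=0 and t=2 v=2, although the correct value for t=2 is 3,
        // held only by the uncontacted replica C: no true quorum for that cell.
        for (Map.Entry<Integer, Object[]> e : merged.entrySet())
            if (e.getValue()[0] != null)
                System.out.println("t=" + e.getKey() + " v=" + e.getValue()[0]);
    }
}
{code}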



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9113) Improve error message when bootstrap fails

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9113:
---
Priority: Minor  (was: Major)

> Improve error message when bootstrap fails
> --
>
> Key: CASSANDRA-9113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9113
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Philip Thompson
>Priority: Minor
> Fix For: 3.1
>
>
> Currently when bootstrap fails, users see a {{RuntimeException: Stream 
> failed}} with a long stack trace. This typically brings them to IRC, the 
> mailing list, or jira. However, most of the time it is not due to a C* 
> server failure, but to network or machine issues.
> While there are probably improvements that could be made to the resiliency 
> of streaming, it would be nice if, assuming no server errors are detected, 
> users were shown a less traumatic error message instead of the 
> RuntimeException, one that includes or points to documentation on how to 
> resolve a failed bootstrap stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394530#comment-14394530
 ] 

Sylvain Lebresne commented on CASSANDRA-8589:
-

It would actually be nice to start by ensuring we can reproduce it through a 
dtest. It shouldn't be too hard to write one, and there's no point in chasing a 
complex solution if, as with CASSANDRA-8933, something I forgot about in the 
code makes this a non-problem. Also, CASSANDRA-8099 should actually solve this, 
so if that's confirmed by said reproduction dtest, maybe we're good with fixing 
it in 3.0 only.

> Reconciliation in presence of tombstone might yield stale data
> --
>
> Key: CASSANDRA-8589
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>
> Consider 3 replicas A, B, C (so RF=3) and suppose that we do the following 
> sequence of actions at {{QUORUM}}, where I indicate the replicas acknowledging 
> each operation (and let's assume that a replica that doesn't ack is a replica 
> that doesn't get the update):
> {noformat}
> CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
> INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
> DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
> UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
> SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
> {noformat}
> Every operation has achieved quorum, but on the last read, A will respond 
> {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
> consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
> {{0->0, 2->3}}).
> Put another way, if we have a limit, every replica honors that limit, but 
> since tombstones can "suppress" results from other nodes, we may have some 
> cells for which we actually don't get a quorum of responses (even though we 
> globally have a quorum of replica responses).
> In practice, this probably occurs rather rarely, so the "simpler" fix is 
> probably to do something similar to the "short reads protection": detect when 
> this could have happened (based on how replica responses are reconciled) and do 
> an additional request in that case. That detection will have potential false 
> positives, but I suspect we can be precise enough that those false positives 
> will be very, very rare (we should nonetheless track how often this code gets 
> triggered, and if we see that it's more often than we think, we could 
> proactively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8589:
---
   Tester: Ryan McGuire
Fix Version/s: 3.0

> Reconciliation in presence of tombstone might yield stale data
> --
>
> Key: CASSANDRA-8589
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 3.0
>
>
> Consider 3 replicas A, B, C (so RF=3) and suppose that we do the following 
> sequence of actions at {{QUORUM}}, where I indicate the replicas acknowledging 
> each operation (and let's assume that a replica that doesn't ack is a replica 
> that doesn't get the update):
> {noformat}
> CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
> INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
> DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
> UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
> SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
> {noformat}
> Every operation has achieved quorum, but on the last read, A will respond 
> {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
> consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
> {{0->0, 2->3}}).
> Put another way, if we have a limit, every replica honors that limit, but 
> since tombstones can "suppress" results from other nodes, we may have some 
> cells for which we actually don't get a quorum of responses (even though we 
> globally have a quorum of replica responses).
> In practice, this probably occurs rather rarely, so the "simpler" fix is 
> probably to do something similar to the "short reads protection": detect when 
> this could have happened (based on how replica responses are reconciled) and do 
> an additional request in that case. That detection will have potential false 
> positives, but I suspect we can be precise enough that those false positives 
> will be very, very rare (we should nonetheless track how often this code gets 
> triggered, and if we see that it's more often than we think, we could 
> proactively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394536#comment-14394536
 ] 

Philip Thompson commented on CASSANDRA-8589:


Okay, I've set Ryan as tester; he'll forward it along.

> Reconciliation in presence of tombstone might yield stale data
> --
>
> Key: CASSANDRA-8589
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 3.0
>
>
> Consider 3 replicas A, B, C (so RF=3) and suppose that we do the following 
> sequence of actions at {{QUORUM}}, where I indicate the replicas acknowledging 
> each operation (and let's assume that a replica that doesn't ack is a replica 
> that doesn't get the update):
> {noformat}
> CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
> INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
> INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
> DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
> UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
> SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
> {noformat}
> Every operation has achieved quorum, but on the last read, A will respond 
> {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
> consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
> {{0->0, 2->3}}).
> Put another way, if we have a limit, every replica honors that limit, but 
> since tombstones can "suppress" results from other nodes, we may have some 
> cells for which we actually don't get a quorum of responses (even though we 
> globally have a quorum of replica responses).
> In practice, this probably occurs rather rarely, so the "simpler" fix is 
> probably to do something similar to the "short reads protection": detect when 
> this could have happened (based on how replica responses are reconciled) and do 
> an additional request in that case. That detection will have potential false 
> positives, but I suspect we can be precise enough that those false positives 
> will be very, very rare (we should nonetheless track how often this code gets 
> triggered, and if we see that it's more often than we think, we could 
> proactively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394571#comment-14394571
 ] 

Sam Tunnicliffe commented on CASSANDRA-9092:



Really, I think the answer is likely to be that your cluster is 
underpowered for this particular workload and the build-up of hints is a 
symptom of that. Setting {{hinted_handoff_enabled: false}} during the load will 
obviously stop that build-up, but you're still going to see failures if the 
nodes can't keep up with the workload. 

One thing that puzzles me is that you say you only write to nodes in DC1, but 
you're seeing the hints build up in DC2. Hints are only written on the 
coordinator, so I suspect that somehow writes are being sent to all the nodes. 
Do you see hints being written on the DC1 nodes too?
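
(A quick way to check, assuming cqlsh access: run {{SELECT count(*) FROM 
system.hints;}} on a DC1 node; a non-zero count there would confirm that DC1 
coordinators are storing hints as well.)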

Regarding hints storage, the plan is to stop writing them to a system table and 
instead use a log file. Obviously, this will remove a lot of overhead (i.e. no 
compaction required), so it will be much more efficient (see CASSANDRA-6230). As 
for 2.1, as far as I'm aware there's nothing planned at the moment, and any 
large or invasive changes are not likely to make it into 2.1 this late in the 
lifetime of the release.  

Finally, I notice that you have authentication enabled, as that second timeout 
occurs while C* is verifying the supplied credentials. That particular 
stacktrace indicates a thrift connection, whereas the first one is from a 
native CQL client. So I have two questions related to that:
 * Do you have multiple clients connecting (could be management tools like 
OpsCenter Agent)? 
 * Is that error, the one related to Auth, repeated frequently or are there 
more of the netty connection errors?


> Nodes in DC2 die during and after huge write workload
> -
>
> Key: CASSANDRA-9092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
>Reporter: Sergey Maznichenko
>Assignee: Sam Tunnicliffe
> Fix For: 2.1.5
>
> Attachments: cassandra_crash1.txt
>
>
> Hello,
> We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
> Each node is a VM with 8 CPUs and 32GB RAM.
> During a significant workload (loading several million blobs, ~3.5MB each), 1 
> node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
> Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after starting. 
> I see many files in the system.hints table, and the error appears 2-3 minutes 
> after system.hints auto-compaction starts.
> By "stops" I mean "ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
> CassandraDaemon.java:153 - Exception in thread 
> Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space"
> ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError: Java heap space
> Full errors listing attached in cassandra_crash1.txt
> The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8952:
---
Reviewer: Joshua McKenzie

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to either grab fd's or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time there's 
> no reason to keep them. The less RandomAccessFile usage in the code-base the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394627#comment-14394627
 ] 

Sergey Maznichenko commented on CASSANDRA-9092:
---

We have OpsCenter Agent. Such errors repeat 1-2 times per hour during the data 
load. In DC1 we don't have any hints right now.
I guess traffic can go to all nodes because of client settings; I will check 
it.
I tried to run 'nodetool repair' from the node in DC2, and after a 30-hour 
delay I got a bunch of errors in the console, like:

[2015-04-02 19:32:14,352] Repair session 6ff4f071-d94d-11e4-9257-f7b14a924a15 
for range (-3563451573336693456,-3535530477916720868] failed with error 
java.io.IOException: Cannot proceed on repair because a neighbor (/10.XX.XX.11) 
is dead: session failed

but 'nodetool status' reports that all nodes are live and I can see successful 
communication between nodes in their logs. It's strange...


> Nodes in DC2 die during and after huge write workload
> -
>
> Key: CASSANDRA-9092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
>Reporter: Sergey Maznichenko
>Assignee: Sam Tunnicliffe
> Fix For: 2.1.5
>
> Attachments: cassandra_crash1.txt
>
>
> Hello,
> We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
> Each node is a VM with 8 CPUs and 32GB RAM.
> During a significant workload (loading several million blobs, ~3.5MB each), 1 
> node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
> Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after start. 
> I see many files in the system.hints table, and the error appears 2-3 minutes 
> after system.hints auto compaction starts.
> "Stops" means: "ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
> CassandraDaemon.java:153 - Exception in thread 
> Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space"
> ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError: Java heap space
> Full errors listing attached in cassandra_crash1.txt
> The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394649#comment-14394649
 ] 

Marcus Eriksson commented on CASSANDRA-8905:


[~philipthompson] no, then I assume it is a corrupt sstable

> IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
> ---
>
> Key: CASSANDRA-8905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
> Fix For: 2.0.15
>
>
> After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
> {noformat}
> ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
> java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
> at java.util.ArrayList.<init>(ArrayList.java:142)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I've identified which sstable is causing this: it's an -ic- format sstable, 
> i.e. something written before the upgrade. I can reproduce it with 
> forceUserDefinedCompaction.
> Running upgradesstables also causes the same exception. 
> Scrub helps, but skips a row as incorrect. 
> I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394656#comment-14394656
 ] 

Jonathan Ellis commented on CASSANDRA-7066:
---

bq.  if we're compacting multiple files into one, we write that the new file(s) 
are "in progress", then when they're done, we write a new log file saying we're 
swapping these files (as a checkpoint), then clear the "in progress" log file 
and write that we're "deleting" the old files, followed by immediately 
promoting the new ones and deleting our "swapping" log entry

Since all writes are idempotent now, I think we are okay simplifying this to

... write that the new file(s) are "in progress", then when they're done, we 
clear the "in progress" log file and delete the old files.  If the process dies 
in between those two steps (very rare, deletes are fast), we have some extra 
redundant data left but correctness is preserved.
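
To make the ordering concrete, here is a rough sketch of that simplified 
sequence (the types and helpers below are illustrative, not the actual patch):

{code}
import java.util.Set;

// Illustrative sketch only; these types stand in for the real machinery.
interface Deletable { void delete(); }

class CompactionFinisher
{
    // inProgressLog was written (listing the new files) before compaction began
    static void finish(Deletable inProgressLog, Set<? extends Deletable> oldTables)
    {
        // 1. writing is complete: clear the "in progress" record first...
        inProgressLog.delete();

        // 2. ...then delete the replaced files. If the process dies between
        //    1 and 2, nothing marks the new files as in-progress anymore, so a
        //    restart merely finds some redundant-but-consistent old data.
        for (Deletable old : oldTables)
            old.delete();
    }
}
{code}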

> Simplify (and unify) cleanup of compaction leftovers
> 
>
> Key: CASSANDRA-7066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
>Priority: Minor
>  Labels: compaction
> Fix For: 3.0
>
>
> Currently we manage a list of in-progress compactions in a system table, 
> which we use to cleanup incomplete compactions when we're done. The problem 
> with this is that 1) it's a bit clunky (and leaves us in positions where we 
> can unnecessarily cleanup completed files, or conversely not cleanup files 
> that have been superceded); and 2) it's only used for a regular compaction - 
> no other compaction types are guarded in the same way, so can result in 
> duplication if we fail before deleting the replacements.
> I'd like to see each sstable store in its metadata its direct ancestors, and 
> on startup we simply delete any sstables that occur in the union of all 
> ancestor sets. This way as soon as we finish writing we're capable of 
> cleaning up any leftovers, so we never get duplication. It's also much easier 
> to reason about.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8085) Make PasswordAuthenticator number of hashing rounds configurable

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8085:
---
Fix Version/s: 2.0.15

> Make PasswordAuthenticator number of hashing rounds configurable
> 
>
> Key: CASSANDRA-8085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8085
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8085-2.0.txt, 8085-2.1.txt, 8085-3.0.txt
>
>
> Running 2^10 rounds of bcrypt can take a while.  In environments (like PHP) 
> where connections are not typically long-lived, authenticating can add 
> substantial overhead.  On IRC, one user saw the time to connect, 
> authenticate, and execute a query jump from 5ms to 150ms with authentication 
> enabled ([debug logs|http://pastebin.com/bSUufbr0]).
> CASSANDRA-7715 is a more complete fix for this, but in the meantime (and even 
> after 7715), this is a good option.
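
For a sense of the cost curve: bcrypt work is exponential in the log-rounds 
parameter, which is easy to see with a minimal jBCrypt sketch (an illustrative 
timing harness, not Cassandra code):

{code}
import org.mindrot.jbcrypt.BCrypt;

public class BcryptCost
{
    public static void main(String[] args)
    {
        // 2^4 rounds vs the current hard-coded 2^10 rounds
        for (int logRounds : new int[]{ 4, 10 })
        {
            long start = System.nanoTime();
            String hash = BCrypt.hashpw("secret", BCrypt.gensalt(logRounds));
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println("log_rounds=" + logRounds + ": " + ms + "ms, check="
                               + BCrypt.checkpw("secret", hash));
        }
    }
}
{code}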



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Remove transient RAF usage

2015-04-03 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 51908e240 -> 75409a185


Remove transient RAF usage

Patch by stefania; reviewed by jmckenzie for CASSANDRA-8952


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/75409a18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/75409a18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/75409a18

Branch: refs/heads/trunk
Commit: 75409a185d97c566430ab6e6cfd823ceb80ff40b
Parents: 51908e2
Author: Stefania Alborghetti 
Authored: Fri Apr 3 11:37:28 2015 -0500
Committer: Joshua McKenzie 
Committed: Fri Apr 3 11:37:28 2015 -0500

--
 .../org/apache/cassandra/io/util/FileUtils.java | 27 ++
 .../org/apache/cassandra/utils/CLibrary.java| 25 +++--
 .../apache/cassandra/io/util/FileUtilsTest.java | 55 
 .../apache/cassandra/utils/CLibraryTest.java| 37 +
 4 files changed, 104 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/src/java/org/apache/cassandra/io/util/FileUtils.java
--
diff --git a/src/java/org/apache/cassandra/io/util/FileUtils.java 
b/src/java/org/apache/cassandra/io/util/FileUtils.java
index ef9d23b..8007039 100644
--- a/src/java/org/apache/cassandra/io/util/FileUtils.java
+++ b/src/java/org/apache/cassandra/io/util/FileUtils.java
@@ -19,10 +19,8 @@ package org.apache.cassandra.io.util;
 
 import java.io.*;
 import java.nio.ByteBuffer;
-import java.nio.file.AtomicMoveNotSupportedException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.StandardCopyOption;
+import java.nio.channels.FileChannel;
+import java.nio.file.*;
 import java.text.DecimalFormat;
 import java.util.Arrays;
 
@@ -185,28 +183,13 @@ public class FileUtils
 }
 public static void truncate(String path, long size)
 {
-RandomAccessFile file;
-
-try
-{
-file = new RandomAccessFile(path, "rw");
-}
-catch (FileNotFoundException e)
-{
-throw new RuntimeException(e);
-}
-
-try
+try(FileChannel channel = FileChannel.open(Paths.get(path), 
StandardOpenOption.READ, StandardOpenOption.WRITE))
 {
-file.getChannel().truncate(size);
+channel.truncate(size);
 }
 catch (IOException e)
 {
-throw new FSWriteError(e, path);
-}
-finally
-{
-closeQuietly(file);
+throw new RuntimeException(e);
 }
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/src/java/org/apache/cassandra/utils/CLibrary.java
--
diff --git a/src/java/org/apache/cassandra/utils/CLibrary.java 
b/src/java/org/apache/cassandra/utils/CLibrary.java
index 25f7e5a..fed314b 100644
--- a/src/java/org/apache/cassandra/utils/CLibrary.java
+++ b/src/java/org/apache/cassandra/utils/CLibrary.java
@@ -18,9 +18,12 @@
 package org.apache.cassandra.utils;
 
 import java.io.FileDescriptor;
+import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.lang.reflect.Field;
 import java.nio.channels.FileChannel;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -316,29 +319,15 @@ public final class CLibrary
 
 public static int getfd(String path)
 {
-RandomAccessFile file = null;
-try
+try(FileChannel channel = FileChannel.open(Paths.get(path), 
StandardOpenOption.READ))
 {
-file = new RandomAccessFile(path, "r");
-return getfd(file.getFD());
+return getfd(channel);
 }
-catch (Throwable t)
+catch (IOException e)
 {
-JVMStabilityInspector.inspectThrowable(t);
+JVMStabilityInspector.inspectThrowable(e);
 // ignore
 return -1;
 }
-finally
-{
-try
-{
-if (file != null)
-file.close();
-}
-catch (Throwable t)
-{
-// ignore
-}
-}
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java 
b/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java
new file mode 100644
index 000..7110504
--- /dev/null
+++ b/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache 

[jira] [Updated] (CASSANDRA-8056) nodetool snapshot -cf -t does not work on multiple tables of the same keyspace

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8056:
---
Fix Version/s: 2.0.15

> nodetool snapshot  -cf  -t  does not work on 
> multiple tables of the same keyspace
> --
>
> Key: CASSANDRA-8056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8056
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cassandra 2.0.6 debian wheezy and squeeze
>Reporter: Esha Pathak
>Priority: Trivial
>  Labels: lhf
> Fix For: 2.0.15, 2.1.5
>
> Attachments: CASSANDRA-8056.txt
>
>
> 
> keyspace thing has tables: thing:user, thing:object, thing:user_details
> steps to reproduce:
> 1. nodetool snapshot thing --column-family user --tag tagname
>   Requested creating snapshot for: thing and table: user
>   Snapshot directory: tagname
> 2.nodetool snapshot thing --column-family object --tag tagname
> Requested creating snapshot for: thing and table: object
> Exception in thread "main" java.io.IOException: Snapshot tagname already 
> exists.
>   at 
> org.apache.cassandra.service.StorageService.takeColumnFamilySnapshot(StorageService.java:2274)
>   at sun.reflect.GeneratedMethodAccessor129.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394661#comment-14394661
 ] 

Philip Thompson edited comment on CASSANDRA-8905 at 4/3/15 4:40 PM:


If this was fixed by scrub, it was most likely a corrupted sstable.


was (Author: philipthompson):
Fixed by scrub. Probably corrupted sstable.

> IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
> ---
>
> Key: CASSANDRA-8905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
> Fix For: 2.0.15
>
>
> After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
> {noformat}
> ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
> java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
> at java.util.ArrayList.<init>(ArrayList.java:142)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I've identified which sstable is causing this: it's an -ic- format sstable, 
> i.e. something written before the upgrade. I can reproduce it with 
> forceUserDefinedCompaction.
> Running upgradesstables also causes the same exception. 
> Scrub helps, but skips a row as incorrect. 
> I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8905.

Resolution: Not a Problem

Fixed by scrub. Probably corrupted sstable.

> IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
> ---
>
> Key: CASSANDRA-8905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
> Fix For: 2.0.15
>
>
> After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
> {noformat}
> ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
> java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
> at java.util.ArrayList.<init>(ArrayList.java:142)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
> at 
> org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I've identified which sstable is causing this: it's an -ic- format sstable, 
> i.e. something written before the upgrade. I can reproduce it with 
> forceUserDefinedCompaction.
> Running upgradesstables also causes the same exception. 
> Scrub helps, but skips a row as incorrect. 
> I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8808) CQLSSTableWriter: close does not work + more than one table throws ex

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8808:
---
Fix Version/s: 2.0.15

> CQLSSTableWriter: close does not work + more than one table throws ex
> -
>
> Key: CASSANDRA-8808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8808
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sebastian YEPES FERNANDEZ
>Assignee: Benjamin Lerer
>  Labels: cql
> Fix For: 2.0.15, 2.1.5
>
> Attachments: CASSANDRA-8808-2.0-V2.txt, CASSANDRA-8808-2.0.txt, 
> CASSANDRA-8808-2.1-V2.txt, CASSANDRA-8808-2.1.txt, 
> CASSANDRA-8808-trunk-V2.txt, CASSANDRA-8808-trunk.txt
>
>
> I have encountered the following two issues:
>  - When closing the CQLSSTableWriter, it just hangs the process and does 
> nothing. (https://issues.apache.org/jira/browse/CASSANDRA-8281)
>  - Writing to more than one table throws an exception. 
> (https://issues.apache.org/jira/browse/CASSANDRA-8251)
> These issues can be reproduced with the following code:
> {code:title=test.java|borderStyle=solid}
> import org.apache.cassandra.config.Config;
> import org.apache.cassandra.io.sstable.CQLSSTableWriter;
> public class Test { // class wrapper added so the snippet compiles; the original omitted it
> public static void main(String[] args) {
>   Config.setClientMode(true);
>   CQLSSTableWriter w1 = CQLSSTableWriter.builder()
> .inDirectory("/tmp/kspc/t1")
> .forTable("CREATE TABLE kspc.t1 ( id  int, PRIMARY KEY (id));")
> .using("INSERT INTO kspc.t1 (id) VALUES ( ? );")
> .build();
>   CQLSSTableWriter w2 = CQLSSTableWriter.builder()
> .inDirectory("/tmp/kspc/t2")
> .forTable("CREATE TABLE kspc.t2 ( id  int, PRIMARY KEY (id));")
> .using("INSERT INTO kspc.t2 (id) VALUES ( ? );")
> .build();
>   try {
> w1.addRow(1);
> w2.addRow(1);
> w1.close();
> w2.close();
>   } catch (Exception e) {
> System.out.println(e);
>   }
> }
> }
> {code}
> {code:title=The error|borderStyle=solid}
> Exception in thread "main" java.lang.ExceptionInInitializerError
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:324)
> at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:277)
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:119)
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:96)
> at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:101)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.rawAddRow(CQLSSTableWriter.java:226)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.addRow(CQLSSTableWriter.java:145)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.addRow(CQLSSTableWriter.java:120)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:189)
> at 
> org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
> at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
> at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
> at 
> com.allthingsmonitoring.utils.BulkDataLoader.main(BulkDataLoader.groovy:415)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.cassandra.config.DatabaseDescriptor.getFlushWriters(DatabaseDescriptor.java:1053)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:85)
> ... 18 more
> {code}
> I have just tested this in the cassandra-2.1 branch and the issue still 
> persists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8086:
---
Fix Version/s: 2.0.15

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-2.1.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number (~40,000) of clients hitting this cluster. A client normally 
> connects to 4 cassandra instances. Some event (we think it was a schema change 
> on the server side) triggered the clients to establish connections to all 
> cassandra instances of the local DC. This brought the server to its knees. The 
> client connections failed and the clients attempted re-connections. 
> Cassandra should protect itself from such an attack by clients. Do we have any 
> knobs to control the maximum number of connections? If not, we need to add 
> that knob.
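
A cap like that can be enforced where connections are registered; a rough Netty 
sketch (the handler name and wiring are made up for illustration, this is not 
the attached patch):

{code}
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: drop new native-protocol connections above a fixed cap.
@ChannelHandler.Sharable
public class ConnectionLimitHandler extends ChannelInboundHandlerAdapter
{
    private final AtomicLong count = new AtomicLong();
    private final long maxConnections; // e.g. a new cassandra.yaml knob

    public ConnectionLimitHandler(long maxConnections)
    {
        this.maxConnections = maxConnections;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception
    {
        if (count.incrementAndGet() > maxConnections)
            ctx.close();               // over the cap: refuse this connection
        else
            ctx.fireChannelActive();   // under the cap: continue the pipeline
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception
    {
        count.decrementAndGet();       // close() above also lands here, keeping the count honest
        ctx.fireChannelInactive();
    }
}
{code}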



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8909) Replication Strategy creation errors are lost in try/catch

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8909:
---
Fix Version/s: 2.0.15

> Replication Strategy creation errors are lost in try/catch
> --
>
> Key: CASSANDRA-8909
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8909
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alan Boudreault
>Assignee: Alan Boudreault
>Priority: Trivial
> Fix For: 2.0.15, 2.1.5
>
> Attachments: replication-strategy-exception-2.0.patch
>
>
> I was initially executing a bad cassandra-stress command  and was getting 
> this error:
> {code}
> Unable to create stress keyspace: Error constructing replication strategy 
> class
> {code}
> with the following command:
> {code}
> cassandra-stress -o insert --replication-strategy NetworkTopologyStrategy 
> --strategy-properties dc1:1,dc2:1 --replication-factor 1
> {code}
> After digging in the code, I noticed that the error displayed was not the one 
> thrown by the replication strategy code and that the try/catch block could be 
> improved. Basically, Constructor.newInstance can throw an 
> InvocationTargetException, which provides a better error report.
> I think this improvement can also be done in 2.1 (not tested yet). If my 
> attached patch is acceptable, I will test and provide the right version for 
> 2.1 and trunk.
> With the patch, I can see the proper error when executing my bad command:
> {code}
> Unable to create stress keyspace: replication_factor is an option for 
> SimpleStrategy, not NetworkTopologyStrategy
> {code}
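
In other words, the useful message lives in the *target* exception. A 
self-contained illustration of the unwrapping (the BadStrategy class below is a 
stand-in, not Cassandra code):

{code}
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class UnwrapExample
{
    static class BadStrategy
    {
        BadStrategy(String opt)
        {
            throw new IllegalArgumentException(
                "replication_factor is an option for SimpleStrategy, not NetworkTopologyStrategy");
        }
    }

    public static void main(String[] args) throws Exception
    {
        Constructor<BadStrategy> ctor = BadStrategy.class.getDeclaredConstructor(String.class);
        try
        {
            ctor.newInstance("replication_factor");
        }
        catch (InvocationTargetException e)
        {
            // e.getMessage() is useless here; the constructor's real complaint
            // is carried by the target exception
            System.out.println("Unable to create keyspace: " + e.getTargetException().getMessage());
        }
    }
}
{code}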



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8613) Regression in mixed single and multi-column relation support

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8613:
---
Fix Version/s: 2.0.15

> Regression in mixed single and multi-column relation support
> 
>
> Key: CASSANDRA-8613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8613
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Benjamin Lerer
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8613-2.0-v2.txt, 8613-2.1-v2.txt, 8613-trunk-v2.txt, 
> CASSANDRA-8613-2.0.txt, CASSANDRA-8613-2.1.txt, CASSANDRA-8613-trunk.txt
>
>
> In 2.0.6 through 2.0.8, a query like the following was supported:
> {noformat}
> SELECT * FROM mytable WHERE clustering_0 = ? AND (clustering_1, clustering_2) 
> > (?, ?)
> {noformat}
> However, after CASSANDRA-6875, you'll get the following error:
> {noformat}
> Clustering columns may not be skipped in multi-column relations. They should 
> appear in the PRIMARY KEY order. Got (c, d) > (0, 0)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-8952.

Resolution: Fixed

In retrospect, linking to line #'s on trunk in a ticket isn't useful.  Changes 
look good and cover the few places I had concerns about w/regards to Windows.

Committed w/1 nit: added copyright header to the 2 new test files.

> Remove transient RandomAccessFile usage
> ---
>
> Key: CASSANDRA-8952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joshua McKenzie
>Assignee: Stefania
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
>
> There are a few places within the code base where we use a RandomAccessFile 
> transiently to either grab fd's or channels for other operations. This is 
> prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
> - while these usages don't appear to be causing issues at this time there's 
> no reason to keep them. The less RandomAccessFile usage in the code-base the 
> more stable we'll be on Windows.
> [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
> * Used to getFD, have FileChannel version
> [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
> * Used to get file channel for channel truncate call. Only use is in index 
> file close so channel truncation down-only is acceptable.
> [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
> * Used to get file channel for mapping.
> Keeping these in a single ticket as all three should be fairly trivial 
> refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7816:
---
Fix Version/s: 2.0.15

> Duplicate DOWN/UP Events Pushed with Native Protocol
> 
>
> Key: CASSANDRA-7816
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Michael Penick
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 7816-v2.0.txt, tcpdump_repeating_status_change.txt, 
> trunk-7816.txt
>
>
> Added "MOVED_NODE" as a possible type of topology change and also specified 
> that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8734) Expose commit log archive status

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8734:
---
Fix Version/s: 2.0.15

> Expose commit log archive status
> 
>
> Key: CASSANDRA-8734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8734
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Philip S Doctor
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8734-cassandra-2.0.txt, 8734-cassandra-2.1.txt
>
>
> The operational procedure to modify commit log archiving is to edit 
> commitlog_archiving.properties and then perform a restart.  However, this has 
> troublesome edge cases:
> 1) It is possible for people to modify commitlog_archiving.properties but 
> then not perform a restart
> 2) It is possible for people to modify commitlog_archiving.properties only on 
> some nodes
> 3) It is possible for people to have modified the file and restarted, but then 
> later add more nodes without the correct modifications.
> For these reasons, it is operationally useful to be able to audit the 
> commit log archive state of a node.  Simply parsing 
> commitlog_archiving.properties is insufficient due to #1.  
> I would suggest that exposing this via either a system table or JMX would be 
> useful.
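
If JMX is the route taken, the surface could be as small as the following 
(a hypothetical interface, not the attached patch):

{code}
// Hypothetical MBean for auditing the archiving config the daemon actually loaded.
public interface CommitLogArchiverMBean
{
    String getArchiveCommand();      // empty string would mean archiving is off
    String getRestoreCommand();
    String getRestoreDirectories();
}
{code}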



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7712) temporary files need to be cleaned by unit tests

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7712:
---
Fix Version/s: 2.0.15

> temporary files need to be cleaned by unit tests
> 
>
> Key: CASSANDRA-7712
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7712
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 7712-hung-CliTest_system.log.gz, 7712-v2.txt, 
> 7712-v3.txt, 7712_workaround.txt, CASSANDRA-7712_apache_cassandra_2.0.txt
>
>
> There are many unit test temporary files left behind after test runs. In the 
> case of CI servers, I have seen >70,000 files accumulate in /tmp over a 
> period of time. Each unit test should make an effort to remove its temporary 
> files when the test is completed.
> My current unit test cleanup block:
> {noformat}
> # clean up after unit tests..
> rm -rf  /tmp/140*-0 /tmp/CFWith* /tmp/Counter1* /tmp/DescriptorTest* 
> /tmp/Keyspace1* \
> /tmp/KeyStreamingTransferTestSpace* /tmp/SSTableExportTest* 
> /tmp/SSTableImportTest* \
> /tmp/Standard1* /tmp/Statistics.db* /tmp/StreamingTransferTest* 
> /tmp/ValuesWithQuotes* \
> /tmp/cassandra* /tmp/jna-* /tmp/ks-cf-ib-1-* /tmp/lengthtest* 
> /tmp/liblz4-java*.so /tmp/readtest* \
> /tmp/set_length_during_read_mode* /tmp/set_negative_length* 
> /tmp/snappy-*.so
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8950) NullPointerException in nodetool getendpoints with non-existent keyspace or table

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8950:
---
Fix Version/s: 2.0.15

> NullPointerException in nodetool getendpoints with non-existent keyspace or 
> table
> -
>
> Key: CASSANDRA-8950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8950
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8950-2.0.txt, 8950-2.1.txt
>
>
> If {{nodetool getendpoints}} is run with a non-existent keyspace or table, a 
> NullPointerException will occur:
> {noformat}
> ~/cassandra $ bin/nodetool getendpoints badkeyspace badtable mykey
> error: null
> -- StackTrace --
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2914)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8559) OOM caused by large tombstone warning.

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8559:
---
Fix Version/s: 2.0.15

> OOM caused by large tombstone warning.
> --
>
> Key: CASSANDRA-8559
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8559
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.0.11 / 2.1
>Reporter: Dominic Letz
>Assignee: Aleksey Yeschenko
>  Labels: tombstone
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8559.txt, Selection_048.png, cassandra-2.0.11-8559.txt, 
> stacktrace.log
>
>
> When running with a high number of tombstones, the error message generation 
> from CASSANDRA-6117 can lead to an out-of-memory situation with the default 
> setting.
> Attached is a heap dump viewed in VisualVM showing how this construct created 
> two 777MB strings to print the error message for a read query and then crashed 
> with an OOM.
> {code}
> if (respectTombstoneThresholds() && columnCounter.ignored() > 
> DatabaseDescriptor.getTombstoneWarnThreshold())
> {
> StringBuilder sb = new StringBuilder();
> CellNameType type = container.metadata().comparator;
> for (ColumnSlice sl : slices)
> {
> assert sl != null;
> sb.append('[');
> sb.append(type.getString(sl.start));
> sb.append('-');
> sb.append(type.getString(sl.finish));
> sb.append(']');
> }
> logger.warn("Read {} live and {} tombstoned cells in {}.{} (see 
> tombstone_warn_threshold). {} columns was requested, slices={}, delInfo={}",
> columnCounter.live(), columnCounter.ignored(), 
> container.metadata().ksName, container.metadata().cfName, count, sb, 
> container.deletionInfo());
> }
> {code}
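
One obvious mitigation is to stop building the diagnostic string once it 
reaches a fixed cap, so the warning itself can never allocate hundreds of MB. 
A self-contained illustration of that idea (not the committed fix):

{code}
import java.util.Arrays;
import java.util.List;

public class BoundedWarning
{
    static final int MAX_WARN_CHARS = 1024; // hypothetical cap

    // Renders at most MAX_WARN_CHARS of slice descriptions, then truncates.
    static String describeSlices(List<String> slices)
    {
        StringBuilder sb = new StringBuilder();
        for (String slice : slices)
        {
            if (sb.length() + slice.length() + 2 > MAX_WARN_CHARS)
            {
                sb.append("... (truncated)");
                break;
            }
            sb.append('[').append(slice).append(']');
        }
        return sb.toString();
    }

    public static void main(String[] args)
    {
        System.out.println(describeSlices(Arrays.asList("a-b", "c-d")));
    }
}
{code}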



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8934) COPY command has inherent 128KB field size limit

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8934:
---
Fix Version/s: 2.0.15

> COPY command has inherent 128KB field size limit
> 
>
> Key: CASSANDRA-8934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter:  Brian Hess
>Assignee: Philip Thompson
>  Labels: cqlsh, docs-impacting
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8934-2.0.txt, 8934-2.1.txt
>
>
> In using the COPY command as follows:
> {{cqlsh -e "COPY test.test1mb(pkey, ccol, data) FROM 
> 'in/data1MB/data1MB_9.csv'"}}
> the following error is thrown:
> {{:1:field larger than field limit (131072)}}
> The data file contains a field that is greater than 128KB (it's more like 
> almost 1MB).
> A work-around (thanks to [~jjordan] and [~thobbs]) is to modify the cqlsh 
> script and add the line
> {{csv.field_size_limit(10)}}
> anywhere after the line
> {{import csv}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: (was: 2.1.5)

> cassandra-stress does not honour consistency level (cl) parameter when used 
> in combination with user command
> 
>
> Key: CASSANDRA-8948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Andreas Flinck
>Assignee: T Jake Luciani
> Fix For: 2.1.5
>
> Attachments: 8948.txt
>
>
> The stress test tool does not honour the "cl" parameter when used in 
> combination with the "user" command. The consistency level will be the 
> default, ONE, no matter what is set by "cl=".
> It works fine with the "write" command.
> How to reproduce:
> 1. Create a suitable yaml-file to use in test
> 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
> no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
> 3. Observe that cl=ONE in trace logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8949) CompressedSequentialWriter.resetAndTruncate can lose data

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8949:
---
Fix Version/s: 2.0.15

> CompressedSequentialWriter.resetAndTruncate can lose data
> -
>
> Key: CASSANDRA-8949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8949
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 2.0.15, 2.1.5
>
>
> If the FileMark passed into this method fully fills the buffer, a subsequent 
> call to write will reBuffer and drop the data currently in the buffer. We 
> need to mark the buffer contents as dirty in resetAndTruncate to prevent this 
> - see CASSANDRA-8709 notes for details.
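
A toy model of the failure mode and the one-line fix (nothing below is 
Cassandra's actual writer; the field names are merely suggestive):

{code}
// Toy buffered writer: reBuffer() discards the buffer unless it is marked dirty.
class MiniBufferedWriter
{
    byte[] buffer = new byte[4];
    int validBytes = 0;
    boolean isDirty = false; // "buffer holds data not yet flushed to disk"

    void resetAndTruncate(byte[] restored, int markedBytes)
    {
        System.arraycopy(restored, 0, buffer, 0, markedBytes);
        validBytes = markedBytes;
        isDirty = true; // THE FIX: without this, a mark that exactly fills the
                        // buffer lets the next write() rebuffer and silently
                        // drop the contents we just restored
    }

    void write(byte b)
    {
        if (validBytes == buffer.length)
            reBuffer();
        buffer[validBytes++] = b;
    }

    void reBuffer()
    {
        if (isDirty) { /* flush buffer to disk */ }
        validBytes = 0;
        isDirty = false;
    }
}
{code}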



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: 2.0.15

> cassandra-stress does not honour consistency level (cl) parameter when used 
> in combination with user command
> 
>
> Key: CASSANDRA-8948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Andreas Flinck
>Assignee: T Jake Luciani
> Fix For: 2.1.5
>
> Attachments: 8948.txt
>
>
> The stress test tool does not honour the "cl" parameter when used in 
> combination with the "user" command. The consistency level will be the 
> default, ONE, no matter what is set by "cl=".
> It works fine with the "write" command.
> How to reproduce:
> 1. Create a suitable yaml-file to use in test
> 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
> no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
> 3. Observe that cl=ONE in trace logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: (was: 2.0.15)
   2.1.5

> cassandra-stress does not honour consistency level (cl) parameter when used 
> in combination with user command
> 
>
> Key: CASSANDRA-8948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Andreas Flinck
>Assignee: T Jake Luciani
> Fix For: 2.1.5
>
> Attachments: 8948.txt
>
>
> The stress test tool does not honour the "cl" parameter when used in 
> combination with the "user" command. The consistency level will be the 
> default, ONE, no matter what is set by "cl=".
> It works fine with the "write" command.
> How to reproduce:
> 1. Create a suitable yaml-file to use in test
> 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
> no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
> 3. Observe that cl=ONE in trace logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9027) Error processing org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9027:
---
Fix Version/s: 2.0.15

> Error processing 
> org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-<ip address>
> -
>
> Key: CASSANDRA-9027
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9027
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
>Assignee: Erik Forsberg
> Fix For: 2.0.15, 2.1.5
>
> Attachments: cassandra-2.0-9027.txt, cassandra-2.0-9027.txt
>
>
> Getting some of these on 2.0.13:
> {noformat}
>  WARN [MutationStage:92] 2015-03-24 08:57:20,204 JmxReporter.java (line 397) 
> Error processing 
> org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-2001:4c28:1:413:0:1:4:1
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newCounter(MetricsRegistry.java:115)
> at com.yammer.metrics.Metrics.newCounter(Metrics.java:108)
> at 
> org.apache.cassandra.metrics.HintedHandoffMetrics$2.load(HintedHandoffMetrics.java:58)
> at 
> org.apache.cassandra.metrics.HintedHandoffMetrics$2.load(HintedHandoffMetrics.java:55)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4812)
> at 
> org.apache.cassandra.metrics.HintedHandoffMetrics.incrCreatedHints(HintedHandoffMetrics.java:64)
> at 
> org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:124)
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:957)
> at 
> org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:927)
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2069)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Seems to be about the same as CASSANDRA-5298.
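
For reference, the usual remedy is to quote the value part of the ObjectName; a minimal sketch, assuming nothing about the attached patch (class and method names here are illustrative):

{code}
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class QuotedHintMetricName
{
    // Quoting the value part makes characters such as ':' (from an IPv6
    // address) legal inside a JMX ObjectName.
    static ObjectName nameFor(String peer) throws MalformedObjectNameException
    {
        return new ObjectName("org.apache.cassandra.metrics:type=HintedHandOffManager,name="
                              + ObjectName.quote("Hints_created-" + peer));
    }

    public static void main(String[] args) throws Exception
    {
        // Prints a valid ObjectName for the peer from the stack trace above.
        System.out.println(nameFor("2001:4c28:1:413:0:1:4:1"));
    }
}
{code}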



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7533) Let MAX_OUTSTANDING_REPLAY_COUNT be configurable

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7533:
---
Fix Version/s: 2.0.15

> Let MAX_OUTSTANDING_REPLAY_COUNT be configurable
> 
>
> Key: CASSANDRA-7533
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7533
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 0001-CASSANDRA-7533.txt
>
>
> There are some workloads where commit log replay will run into contention 
> issues with multiple things updating the same partition.  Through some 
> testing it was found that lowering CommitLogReplayer.java 
> MAX_OUTSTANDING_REPLAY_COUNT can help with this issue.
> The calculations added in CASSANDRA-6655 are one such place things get 
> bottlenecked.
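
A minimal sketch of how such a constant is typically made tunable via a system property; the property name and default below are assumptions, not necessarily what the attached patch uses:

{code}
public class CommitLogReplayConfig
{
    // Hypothetical property name and default; the actual patch may differ.
    public static final int MAX_OUTSTANDING_REPLAY_COUNT =
        Integer.getInteger("cassandra.commitlog_max_outstanding_replay_count", 1024);

    public static void main(String[] args)
    {
        // Override at startup with -Dcassandra.commitlog_max_outstanding_replay_count=128
        System.out.println(MAX_OUTSTANDING_REPLAY_COUNT);
    }
}
{code}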



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394673#comment-14394673
 ] 

Benedict commented on CASSANDRA-7066:
-

Even better. It hadn't occurred to me the current code was all due to the lack 
of idempotency; I assumed there was just concern about leaving a large amount 
of data around. There _is_ still the risk that this could be a prohibitive 
danger on some systems (say, you have a multi-TB file that's just been 
compacted). So to offer one further alternative that is perhaps only slightly 
more complicated and retains the safety (sketched in code after the list): 

* create two log files, A and B; both log _each other_; file A also logs the 
new file(s) as they're created; file B also logs the old file(s)
* once done, delete file A; then delete the old files; then delete file B
* if we find file A, we delete its contents (including file B); if we find only 
file B, we delete its contents
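
A rough sketch of that sequence, with hypothetical file layouts and helpers (not the actual implementation):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class TwoLogCleanup
{
    // Commit: A logs B plus the new files; B logs A plus the old files.
    static void commit(Path logA, Path logB, List<String> newFiles, List<String> oldFiles) throws IOException
    {
        write(logA, logB, newFiles);
        write(logB, logA, oldFiles);
        Files.delete(logA);               // point of no return: the new files are live
        for (String old : oldFiles)
            Files.deleteIfExists(Paths.get(old));
        Files.delete(logB);
    }

    // Recovery on startup: whichever log survives says exactly what to delete.
    static void recover(Path logA, Path logB) throws IOException
    {
        if (Files.exists(logA))           // died before commit: drop the new files and B
            deleteContents(logA);
        else if (Files.exists(logB))      // died mid-delete: finish dropping the old files
            deleteContents(logB);
    }

    private static void write(Path log, Path otherLog, List<String> files) throws IOException
    {
        Files.write(log, (otherLog + "\n" + String.join("\n", files)).getBytes(StandardCharsets.UTF_8));
    }

    private static void deleteContents(Path log) throws IOException
    {
        for (String line : Files.readAllLines(log))
            Files.deleteIfExists(Paths.get(line));
        Files.delete(log);
    }
}
{code}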

> Simplify (and unify) cleanup of compaction leftovers
> 
>
> Key: CASSANDRA-7066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
>Priority: Minor
>  Labels: compaction
> Fix For: 3.0
>
>
> Currently we manage a list of in-progress compactions in a system table, 
> which we use to clean up incomplete compactions when we're done. The problem 
> with this is that 1) it's a bit clunky (and leaves us in positions where we 
> can unnecessarily clean up completed files, or conversely not clean up files 
> that have been superseded); and 2) it's only used for a regular compaction - 
> no other compaction types are guarded in the same way, so they can result in 
> duplication if we fail before deleting the replacements.
> I'd like to see each sstable store its direct ancestors in its metadata, and 
> on startup we simply delete any sstables that occur in the union of all 
> ancestor sets. This way, as soon as we finish writing, we're capable of 
> cleaning up any leftovers, so we never get duplication. It's also much easier 
> to reason about.
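
A minimal sketch of the startup pass described above, assuming the ancestor generations have already been read from sstable metadata (the types here are illustrative):

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AncestorCleanup
{
    // Each live sstable's generation mapped to its direct ancestors. Anything
    // that appears in the union of all ancestor sets was compacted away, so
    // its file can be deleted on startup.
    static Set<Integer> generationsToDelete(Map<Integer, Set<Integer>> ancestors)
    {
        Set<Integer> union = new HashSet<>();
        for (Set<Integer> set : ancestors.values())
            union.addAll(set);
        union.retainAll(ancestors.keySet()); // only delete files still on disk
        return union;
    }
}
{code}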



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8516) NEW_NODE topology event emitted instead of MOVED_NODE by moving node

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8516:
---
Fix Version/s: 2.0.15

> NEW_NODE topology event emitted instead of MOVED_NODE by moving node
> 
>
> Key: CASSANDRA-8516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8516
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8516-v2.1-a.txt, 8516-v2.1-b.txt, 
> cassandra_8516_dtest.txt
>
>
> As discovered in CASSANDRA-8373, when you move a node in a single-node 
> cluster, a {{NEW_NODE}} event is generated instead of a {{MOVED_NODE}} event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9032) Reduce logging level for MigrationTask abort due to down node from ERROR to INFO

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9032:
---
Fix Version/s: 2.0.15

> Reduce logging level for MigrationTask abort due to down node from ERROR to 
> INFO
> 
>
> Key: CASSANDRA-9032
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9032
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 9032.txt
>
>
> A lot of the dtests are failing during Jenkins runs due to the following 
> error message in the logs:
> {noformat}
> "ERROR [MigrationStage:1] 2015-03-24 20:02:03,464 MigrationTask.java:62 - 
> Can't send migration request: node /127.0.0.3 is down.\n"]
> {noformat}
> This log message happens when a schema pull is scheduled, but the target 
> endpoint is down when the scheduled task actually runs.  The failing dtests 
> generally stop a node as part of the test, which results in this.
> I believe the log message should be moved from ERROR to INFO (or perhaps even 
> DEBUG).  This isn't an unexpected type of problem (nodes go down all the 
> time), and it's not actionable by the user.  This would also have the nice 
> side effect of fixing the dtests.
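
The change itself is essentially a one-line log-level switch; a sketch using SLF4J, with the message text taken from the log above (class and method names are illustrative):

{code}
import java.net.InetAddress;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MigrationTaskSketch
{
    private static final Logger logger = LoggerFactory.getLogger(MigrationTaskSketch.class);

    void reportDownEndpoint(InetAddress endpoint)
    {
        // Previously logger.error(...): a down node is routine and not
        // actionable by the user, so INFO (or DEBUG) is the better fit.
        logger.info("Can't send migration request: node {} is down.", endpoint);
    }
}
{code}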



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9036) "disk full" when running cleanup (on a far from full disk)

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9036:
---
Fix Version/s: 2.0.15

> "disk full" when running cleanup (on a far from full disk)
> --
>
> Key: CASSANDRA-9036
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9036
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Erik Forsberg
>Assignee: Robert Stupp
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 9036-2.0.txt, 9036-2.1.txt, 9036-3.0.txt
>
>
> I'm trying to run cleanup, but get this:
> {noformat}
>  INFO [CompactionExecutor:18] 2015-03-25 10:29:16,355 CompactionManager.java 
> (line 564) Cleaning up 
> SSTableReader(path='/cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db')
> ERROR [CompactionExecutor:18] 2015-03-25 10:29:16,664 CassandraDaemon.java 
> (line 199) Exception in thread Thread[CompactionExecutor:18,1,main]
> java.io.IOException: disk full
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:567)
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:63)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:281)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:225)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Now that's odd, since:
> * Disk has some 680G left
> * The sstable it's trying to cleanup is far less than 680G:
> {noformat}
> # ls -lh *4345750*
> -rw-r--r-- 1 cassandra cassandra  64M Mar 21 04:42 
> production-Data_daily-jb-4345750-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra 219G Mar 21 04:42 
> production-Data_daily-jb-4345750-Data.db
> -rw-r--r-- 1 cassandra cassandra 503M Mar 21 04:42 
> production-Data_daily-jb-4345750-Filter.db
> -rw-r--r-- 1 cassandra cassandra  42G Mar 21 04:42 
> production-Data_daily-jb-4345750-Index.db
> -rw-r--r-- 1 cassandra cassandra 5.9K Mar 21 04:42 
> production-Data_daily-jb-4345750-Statistics.db
> -rw-r--r-- 1 cassandra cassandra  81M Mar 21 04:42 
> production-Data_daily-jb-4345750-Summary.db
> -rw-r--r-- 1 cassandra cassandra   79 Mar 21 04:42 
> production-Data_daily-jb-4345750-TOC.txt
> {noformat}
> Sure, it's large, but it's not 680G. 
> No other compactions are running on that server. I'm getting this on 12 / 56 
> servers right now. 
> Could it be some bug in the calculation of the expected size of the new 
> sstable, perhaps? 
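
For context, cleanup estimates the size of the sstable it is about to write and aborts up front if that estimate exceeds the free space; a sketch of that kind of guard (illustrative, not the actual code) shows where an over-estimate would surface as "disk full" on a disk with room to spare:

{code}
import java.io.File;
import java.io.IOException;

class CleanupSpaceCheck
{
    // If the estimate (e.g. derived from uncompressed sizes) exceeds the
    // usable space, the operation fails before writing anything.
    static void checkEnoughSpace(File dataDirectory, long estimatedWriteSize) throws IOException
    {
        if (estimatedWriteSize > dataDirectory.getUsableSpace())
            throw new IOException("disk full");
    }
}
{code}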



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8360) In DTCS, always compact SSTables in the same time window, even if they are fewer than min_threshold

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8360:
---
Fix Version/s: 2.0.15

> In DTCS, always compact SSTables in the same time window, even if they are 
> fewer than min_threshold
> ---
>
> Key: CASSANDRA-8360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8360
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Björn Hegerfors
>Assignee: Björn Hegerfors
>Priority: Minor
> Fix For: 2.0.15, 2.1.5
>
> Attachments: cassandra-2.0-CASSANDRA-8360.txt
>
>
> DTCS uses min_threshold to decide how many time windows of the same size 
> need to accumulate before merging into a larger window. The age of an SSTable 
> is determined as its min timestamp, and it always falls into exactly one of 
> the time windows. If multiple SSTables fall into the same window, DTCS 
> considers compacting them, but if they are fewer than min_threshold, it 
> decides not to do it.
> When would more than one but fewer than min_threshold SSTables end up in the 
> same time window (other than the current one), you might ask? In the current 
> state, DTCS can spill some extra SSTables into bigger windows when the 
> previous window wasn't fully compacted, which happens all the time when the 
> latest window stops being the current one. Also, repairs and hints can put 
> new SSTables in old windows.
> I think, and [~jjordan] agreed in a comment on CASSANDRA-6602, that DTCS 
> should ignore min_threshold and compact SSTables in the same window 
> regardless of how few they are. I guess max_threshold should still be 
> respected.
> [~jjordan] suggested that this should apply to all windows but the current 
> window, where all the new SSTables end up. That could make sense. I'm not 
> clear on whether compacting many SSTables at once is more cost-efficient 
> when it comes to the very newest and smallest SSTables. Maybe compacting 
> as soon as 2 SSTables are seen is fine if the initial window size is small 
> enough? I guess the opposite could be the case too: that the very newest 
> SSTables should be compacted very many at a time?
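
A sketch of the proposed selection rule (illustrative, not the DTCS code): in any window other than the current one, two SSTables are enough to trigger a compaction, still capped at max_threshold:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;

class WindowSelection
{
    // windows: window start -> sstables (any type T) whose min timestamp falls in it
    static <T> List<T> candidates(NavigableMap<Long, List<T>> windows, long currentWindow, int maxThreshold)
    {
        for (Map.Entry<Long, List<T>> e : windows.entrySet())
        {
            List<T> bucket = e.getValue();
            // min_threshold is ignored for old windows: two sstables suffice.
            if (e.getKey() != currentWindow && bucket.size() >= 2)
                return new ArrayList<>(bucket.subList(0, Math.min(bucket.size(), maxThreshold)));
        }
        return new ArrayList<>();
    }
}
{code}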



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8740) java.lang.AssertionError when reading saved cache

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8740:
---
Fix Version/s: 2.0.15

> java.lang.AssertionError when reading saved cache
> -
>
> Key: CASSANDRA-8740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: OEL 6.5, DSE 4.6.0, Cassandra 2.0.11.83
>Reporter: Nikolai Grigoriev
>Assignee: Dave Brosius
> Fix For: 2.0.15, 2.1.5
>
> Attachments: 8740.txt
>
>
> I have started seeing it recently. Not sure from which version, but now it 
> happens relatively often on some of my nodes.
> {code}
>  INFO [main] 2015-02-04 18:18:09,253 ColumnFamilyStore.java (line 249) 
> Initializing duo_xxx
>  INFO [main] 2015-02-04 18:18:09,254 AutoSavingCache.java (line 114) reading 
> saved cache /var/lib/cassandra/saved_caches/duo_xxx-RowCach
> e-b.db
> ERROR [main] 2015-02-04 18:18:09,256 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.AssertionError
> at 
> org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:41)
> at 
> org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:37)
> at 
> org.apache.cassandra.cache.SerializingCache.serialize(SerializingCache.java:118)
> at 
> org.apache.cassandra.cache.SerializingCache.put(SerializingCache.java:177)
> at 
> org.apache.cassandra.cache.InstrumentingCache.put(InstrumentingCache.java:44)
> at 
> org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:130)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.initRowCache(ColumnFamilyStore.java:592)
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:119)
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:92)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:305)
> at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:419)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
> at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:659)
>  INFO [Thread-2] 2015-02-04 18:18:09,259 DseDaemon.java (line 505) DSE 
> shutting down...
> ERROR [Thread-2] 2015-02-04 18:18:09,279 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.AssertionError
> at 
> org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1274)
> at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:171)
> at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:506)
> at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:408)
>  INFO [main] 2015-02-04 18:18:49,144 CassandraDaemon.java (line 135) Logging 
> initialized
>  INFO [main] 2015-02-04 18:18:49,169 DseDaemon.java (line 382) DSE version: 
> 4.6.0
> {code}
> Cassandra version: 2.0.11.83 (DSE 4.6.0)
> Looks like similar issues were reported and fixed in the past - like 
> CASSANDRA-6325.
> Maybe I am missing something, but I think that Cassandra should not crash and 
> stop at startup if it cannot read a saved cache. A failure here does not make 
> the node inoperable and does not necessarily indicate severe data corruption. 
> I applied a small change to my cluster config, restarted it, and 30% of my 
> nodes did not start because of that. Of course the solution is simple, but it 
> requires going to every node that failed to start, wiping the cache, and 
> starting it again.
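
A sketch of the suggested behaviour, with hypothetical names (the point is only that an unreadable saved cache is recoverable by starting cold):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SavedCacheLoading
{
    private static final Logger logger = LoggerFactory.getLogger(SavedCacheLoading.class);

    // Hypothetical wrapper around the existing load: any failure to read a
    // saved cache becomes a warning and the node starts with a cold cache,
    // instead of startup aborting on an AssertionError.
    static void loadSavedQuietly(Runnable loadSaved, String cachePath)
    {
        try
        {
            loadSaved.run();
        }
        catch (Throwable t)
        {
            logger.warn("Could not read saved cache {}; starting with a cold cache", cachePath, t);
        }
    }
}
{code}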



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8978) CQLSSTableWriter causes ArrayIndexOutOfBoundsException

2015-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8978:
--
Attachment: 8978-2.1-v2.txt
test-8978.txt

The test that I added had been failing for me when I posted this patch, but I 
can't get it to fail anymore. I'm attaching a new test instead (test-8978.txt), 
which does fail on 2.1.

The issue is that {{UpdateStatement}} has a {{ColumnFamily}} to which it applies 
the modification. When we hit the size that we are targeting in 
{{ABSC.addColumn}}, we replace the current column family with a new one and 
send the previous one to the writer thread. Since the update statement doesn't 
hold the new column family, it continues to write columns to the old one, which 
should no longer be modified.

The change that I made moves the replacement of the column family to a point 
after the update statement is complete.
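
A minimal sketch of the race and the fix, with illustrative names (a plain list stands in for the {{ColumnFamily}}):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class BufferedWriterSketch
{
    private List<String> current = new ArrayList<>();
    private final int targetSize;

    BufferedWriterSketch(int targetSize) { this.targetSize = targetSize; }

    // Buggy shape: swapping the buffer in the middle of applying a statement
    // means the statement keeps a reference to the retired buffer and writes
    // into it while the writer thread is flushing it.
    //
    // Fixed shape (below): apply the whole statement first, then check the
    // size and swap, so no statement ever straddles two buffers.
    void apply(Consumer<List<String>> statement, Consumer<List<String>> flush)
    {
        statement.accept(current);    // the statement sees one stable buffer
        if (current.size() >= targetSize)
        {
            flush.accept(current);    // hand the full buffer to the writer thread
            current = new ArrayList<>();
        }
    }
}
{code}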

> CQLSSTableWriter causes ArrayIndexOutOfBoundsException
> --
>
> Key: CASSANDRA-8978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 3.8.0-42-generic #62~precise1-Ubuntu SMP Wed Jun 4 
> 22:04:18 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_20"
> Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
>Reporter: Thomas Borg Salling
>Assignee: Carl Yeksigian
> Fix For: 2.1.5
>
> Attachments: 8978-2.1-v2.txt, 8978-2.1.txt, test-8978.txt
>
>
> On long-running jobs with CQLSSTableWriter preparing sstables for later bulk 
> load via sstableloader, I occasionally get the sporadic error shown below.
> I can run the exact same job again, and it will succeed or fail with the 
> same error at another location in the input stream. The error appears to 
> occur "randomly": with the same input it may occur never, early, or late in 
> the run, with no apparent logic or system.
> I use five instances of CQLSSTableWriter in the application (to write 
> redundantly to five different tables). But these instances do not exist at 
> the same time, and thus are never used concurrently.
> {code}
> 09:26:33.582 [main] INFO  d.dma.ais.store.FileSSTableConverter - Finished 
> processing directory, 369582175 packets was converted from /nas1/
> Exception in thread "main" java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at dk.dma.commons.app.CliCommandList$1.execute(CliCommandList.java:50)
> at dk.dma.commons.app.CliCommandList.invoke(CliCommandList.java:80)
> at dk.dma.ais.store.Main.main(Main.java:34)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 297868
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.append(ArrayBackedSortedColumns.java:196)
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.appendOrReconcile(ArrayBackedSortedColumns.java:191)
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.sortCells(ArrayBackedSortedColumns.java:176)
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.maybeSortCells(ArrayBackedSortedColumns.java:125)
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.access$1100(ArrayBackedSortedColumns.java:44)
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns$CellCollection.iterator(ArrayBackedSortedColumns.java:622)
> at 
> org.apache.cassandra.db.ColumnFamily.iterator(ColumnFamily.java:476)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:129)
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215){code}
> So far I have overcome this problem by simply retrying with another run of 
> the application in an attempt to generate the sstables. But this is a rather 
> time-consuming and shaky approach, and I feel a bit uneasy relying on the 
> produced sstables, though their contents appear to be correct when I sample 
> them with cqlsh 'select' after loading into Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9114) cqlsh: Formatting of map contents broken

2015-04-03 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-9114:
--

 Summary: cqlsh: Formatting of map contents broken
 Key: CASSANDRA-9114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 2.1.5


In CASSANDRA-9081, we upgraded the bundled python driver to version 2.5.0.  
This upgrade changed the class that's used for map collections, and we failed 
to add a new formatting adaptor for the new class.

This was causing the {{cqlsh_tests.TestCqlsh.test_eat_glass}} dtest to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

