[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394179#comment-14394179
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

So I must have had some dump saved by some early development branch then. 
Thanks for the clarification.

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.5

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394179#comment-14394179
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-7688 at 4/3/15 8:03 AM:
---

So I must have had a dump saved by an early development branch then. Thanks for 
the clarification.


was (Author: pkolaczk):
So I must have had some dump saved by some early development branch then. 
Thanks for the clarification.

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.5

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394193#comment-14394193
 ] 

Stefania commented on CASSANDRA-8893:
-

Benedict, take a look at the attached patch and let me know if this is what you 
had in mind. The entry point is ChannelProxy, which wraps a file channel in a 
ref-counted way and ensures that only thread-safe operations are accessible. It 
also translates IO exceptions into unchecked exceptions.

The channel proxy is shared by Builder, SegmentedFile and RandomAccessReader 
instances.

In the Builder we can receive different file paths in the complete methods, in 
which case we close the old channel and create a new one. This is the part I 
was not entirely sure about.

The remaining changes are either mechanical to pass the channel around, or 
fixes to remove leaks of the channel, mostly in the unit tests.
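The ref-counting idea described above can be sketched as follows. This is an illustrative reconstruction based only on this comment, not the committed patch: the class name `ChannelProxy` comes from the comment, but the method names (`sharedCopy`, `read`, `size`) and the exact shape are assumptions.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one FileChannel shared by many readers, exposing only thread-safe
// positional operations and rethrowing IO errors as unchecked exceptions.
final class ChannelProxy implements AutoCloseable
{
    private final FileChannel channel;
    private final AtomicInteger refs = new AtomicInteger(1);

    ChannelProxy(String path)
    {
        try
        {
            channel = FileChannel.open(Paths.get(path), StandardOpenOption.READ);
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }

    // Each additional reader takes a reference before using the channel.
    ChannelProxy sharedCopy()
    {
        refs.incrementAndGet();
        return this;
    }

    // Positional read: does not touch the channel's shared position, so it is
    // safe for concurrent use by multiple reader instances.
    int read(ByteBuffer dst, long position)
    {
        try
        {
            return channel.read(dst, position);
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }

    long size()
    {
        try
        {
            return channel.size();
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }

    // The underlying channel is closed only when the last reference is released.
    @Override
    public void close()
    {
        if (refs.decrementAndGet() == 0)
        {
            try
            {
                channel.close();
            }
            catch (IOException e)
            {
                throw new UncheckedIOException(e);
            }
        }
    }
}
```

Sharing then amounts to calling `sharedCopy()` per SegmentedFile or reader and closing each copy independently; only the last close releases the OS handle.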

 RandomAccessReader should share its FileChannel with all instances (via 
 SegmentedFile)
 --

 Key: CASSANDRA-8893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.0


 There's no good reason to open a FileChannel for each 
 \(Compressed\)\?RandomAccessReader, and this would simplify 
 RandomAccessReader to just a thin wrapper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9106) disable secondary indexes by default

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394213#comment-14394213
 ] 

Sylvain Lebresne commented on CASSANDRA-9106:
-

I generally agree that it's too easy to misuse, so I'm in favor of trying to 
make it less so, and not allowing them by default does sound like it goes in 
that direction. I'm definitely not in favor of using the yaml to deal with 
that: if we do decide to disable them by default, then I think we should simply 
make that capability not enabled by default in the context of CASSANDRA-8303.

 disable secondary indexes by default
 

 Key: CASSANDRA-9106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9106
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jon Haddad
 Fix For: 3.0


 This feature is misused constantly.  Can we disable it by default, and 
 provide a yaml config to explicitly enable it?  Along with a massive warning 
 about how they aren't there for performance, maybe with a link to 
 documentation that explains why?  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7557) User permissions for UDFs

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394192#comment-14394192
 ] 

Sylvain Lebresne commented on CASSANDRA-7557:
-

bq.  The only alternative I could come up with was to defer execution of 
terminal functions depending on the configured {{IAuthorizer}}

Alternatively, we could defer execution of functions to statement execution 
unconditionally. Executing functions at preparation time when all terms are 
terminal is just a minor optimization that was done because it was easy to do, 
but in practice it's unlikely to be terribly useful: for non-prepared 
statements, executing at preparation time or execution time makes no difference 
at all, and for prepared statements, not only are function calls with only 
terminal terms probably not that common, but if you really care about 
optimizing the call, it's easy enough to compute the function client side 
before preparation.
So honestly, if that minor optimization becomes a pain to preserve, and it does 
seem so with this (I would even argue that doing permission checking at 
preparation time is always a bad idea, because if the permission is revoked 
after preparation, a user would expect further executions to be rejected), I 
submit that we should just get rid of it and simplify the code accordingly.

 User permissions for UDFs
 -

 Key: CASSANDRA-7557
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7557
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Tyler Hobbs
Assignee: Sam Tunnicliffe
  Labels: client-impacting, cql, udf
 Fix For: 3.0


 We probably want some new permissions for user defined functions.  Most 
 RDBMSes split function permissions roughly into {{EXECUTE}} and 
 {{CREATE}}/{{ALTER}}/{{DROP}} permissions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394195#comment-14394195
 ] 

Stefania commented on CASSANDRA-8893:
-

This patch fixes the third point of CASSANDRA-8952.

 RandomAccessReader should share its FileChannel with all instances (via 
 SegmentedFile)
 --

 Key: CASSANDRA-8893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.0


 There's no good reason to open a FileChannel for each 
 \(Compressed\)\?RandomAccessReader, and this would simplify 
 RandomAccessReader to just a thin wrapper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394196#comment-14394196
 ] 

Stefania commented on CASSANDRA-8952:
-

The third point will be fixed by CASSANDRA-8893.

 Remove transient RandomAccessFile usage
 ---

 Key: CASSANDRA-8952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Stefania
Priority: Minor
  Labels: Windows
 Fix For: 3.0


 There are a few places within the code base where we use a RandomAccessFile 
 transiently to either grab fd's or channels for other operations. This is 
 prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
 - while these usages don't appear to be causing issues at this time there's 
 no reason to keep them. The less RandomAccessFile usage in the code-base the 
 more stable we'll be on Windows.
 [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
 * Used to getFD, have FileChannel version
 [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
 * Used to get file channel for channel truncate call. Only use is in index 
 file close so channel truncation down-only is acceptable.
 [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
 * Used to get file channel for mapping.
 Keeping these in a single ticket as all three should be fairly trivial 
 refactors.
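The refactor the ticket describes can be sketched like this, using `FileUtils.truncate` as the example. This is a hedged illustration, not the actual patch: the class and method below are hypothetical stand-ins, and the point is simply replacing the transient `RandomAccessFile` with a direct NIO `FileChannel.open`.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public final class TruncateExample
{
    // before (the transient-RAF pattern the ticket removes):
    //   try (RandomAccessFile raf = new RandomAccessFile(path, "rw"))
    //   {
    //       raf.getChannel().truncate(size);
    //   }
    static void truncate(String path, long size) throws IOException
    {
        // after: open the channel directly, no RandomAccessFile handle involved
        try (FileChannel channel = FileChannel.open(Paths.get(path), StandardOpenOption.WRITE))
        {
            // FileChannel.truncate never grows a file, which matches the
            // "down-only" truncation the ticket notes is acceptable here.
            channel.truncate(size);
        }
    }
}
```

The same substitution applies to the fd-grabbing and mmap cases: anywhere a `RandomAccessFile` existed only to reach a `FileChannel`, `FileChannel.open` avoids the extra handle that risks access violations on Windows.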



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania commented on CASSANDRA-8952:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with NIO calls in CLibrary.getfd(String 
path), correct?

 Remove transient RandomAccessFile usage
 ---

 Key: CASSANDRA-8952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Stefania
Priority: Minor
  Labels: Windows
 Fix For: 3.0


 There are a few places within the code base where we use a RandomAccessFile 
 transiently to either grab fd's or channels for other operations. This is 
 prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
 - while these usages don't appear to be causing issues at this time there's 
 no reason to keep them. The less RandomAccessFile usage in the code-base the 
 more stable we'll be on Windows.
 [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
 * Used to getFD, have FileChannel version
 [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
 * Used to get file channel for channel truncate call. Only use is in index 
 file close so channel truncation down-only is acceptable.
 [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
 * Used to get file channel for mapping.
 Keeping these in a single ticket as all three should be fairly trivial 
 refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8979) MerkleTree mismatch for deleted and non-existing rows

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394218#comment-14394218
 ] 

Sylvain Lebresne commented on CASSANDRA-8979:
-

To avoid any confusion, I never suggested we wouldn't do this in a minor 
version, just that we basically add what the last patches from 
[~spo...@gmail.com] add. So [~yukim], if you go ahead and commit those last 
patches, I'm good with closing this.

 MerkleTree mismatch for deleted and non-existing rows
 -

 Key: CASSANDRA-8979
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8979
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Stefan Podkowinski
Assignee: Stefan Podkowinski
 Fix For: 2.1.5

 Attachments: 8979-AvoidBufferAllocation-2.0_patch.txt, 
 8979-LazilyCompactedRow-2.0.txt, 8979-RevertPrecompactedRow-2.0.txt, 
 cassandra-2.0-8979-lazyrow_patch.txt, cassandra-2.0-8979-validator_patch.txt, 
 cassandra-2.0-8979-validatortest_patch.txt, 
 cassandra-2.1-8979-lazyrow_patch.txt, cassandra-2.1-8979-validator_patch.txt


 Validation compaction will currently create different hashes for rows that 
 have been deleted compared to nodes that have not seen the rows at all or 
 have already compacted them away. 
 In case this sounds familiar to you, see CASSANDRA-4905 which was supposed to 
 prevent hashing of expired tombstones. This still seems to be in place, but 
 does not address the issue completely. Or there was a change in 2.0 that 
 rendered the patch ineffective. 
 The problem is that rowHash() in the Validator will return a new hash in any 
 case, whether the PrecompactedRow did actually update the digest or not. This 
 will lead to the case that a purged, PrecompactedRow will not change the 
 digest, but we end up with a different tree compared to not having rowHash 
 called at all (such as in case the row already doesn't exist).
 As an implication, repair jobs will constantly detect mismatches between 
 older sstables containing purgable rows and nodes that have already compacted 
 these rows. After transfering the reported ranges, the newly created sstables 
 will immediately get deleted again during the following compaction. This will 
 happen for each repair run over again until the sstable with the purgable row 
 finally gets compacted. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9037) Terminal UDFs evaluated at prepare time throw protocol version error

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394198#comment-14394198
 ] 

Sylvain Lebresne commented on CASSANDRA-9037:
-

FYI, as I suggested in CASSANDRA-7557, I think we should just get rid of 
function execution at prepare time entirely. The short version is that, IMO, 
it's starting to add way more complexity than it's worth as an optimization.

 Terminal UDFs evaluated at prepare time throw protocol version error
 

 Key: CASSANDRA-9037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9037
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0


 When a pure function with only terminal arguments (or with no arguments) is 
 used in a where clause, it's executed at prepare time and 
 {{Server.CURRENT_VERSION}} passed as the protocol version for serialization 
 purposes. For native functions, this isn't a problem, but UDFs use classes in 
 the bundled java-driver-core jar for (de)serialization of args and return 
 values. When {{Server.CURRENT_VERSION}} is greater than the highest version 
 supported by the bundled java driver the execution fails with the following 
 exception:
 {noformat}
 ERROR [SharedPool-Worker-1] 2015-03-24 18:10:59,391 QueryMessage.java:132 - 
 Unexpected error during query
 org.apache.cassandra.exceptions.FunctionExecutionException: execution of 
 'ks.overloaded[text]' failed: java.lang.IllegalArgumentException: No protocol 
 version matching integer version 4
 at 
 org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.udf.gen.Cksoverloaded_1.execute(Cksoverloaded_1.java)
  ~[na:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall.executeInternal(FunctionCall.java:78)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall.access$200(FunctionCall.java:34)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall$Raw.execute(FunctionCall.java:176)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:161)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.SingleColumnRelation.toTerm(SingleColumnRelation.java:108)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:143)
  ~[main/:na]
 at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:127) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.restrictions.StatementRestrictions.init(StatementRestrictions.java:126)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:787)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:740)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:488)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:246) 
 ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:475)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:371)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_71]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.lang.IllegalArgumentException: No protocol version matching 
 integer version 4
 at 
 com.datastax.driver.core.ProtocolVersion.fromInt(ProtocolVersion.java:89) 
 ~[cassandra-driver-core-2.1.2.jar:na]
 at 
 

[jira] [Comment Edited] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania edited comment on CASSANDRA-8952 at 4/3/15 9:37 AM:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?

Have a quick look here for the first two points:

https://github.com/stef1927/cassandra/commits/8952


was (Author: stefania):
Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?

 Remove transient RandomAccessFile usage
 ---

 Key: CASSANDRA-8952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Stefania
Priority: Minor
  Labels: Windows
 Fix For: 3.0


 There are a few places within the code base where we use a RandomAccessFile 
 transiently to either grab fd's or channels for other operations. This is 
 prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
 - while these usages don't appear to be causing issues at this time there's 
 no reason to keep them. The less RandomAccessFile usage in the code-base the 
 more stable we'll be on Windows.
 [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
 * Used to getFD, have FileChannel version
 [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
 * Used to get file channel for channel truncate call. Only use is in index 
 file close so channel truncation down-only is acceptable.
 [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
 * Used to get file channel for mapping.
 Keeping these in a single ticket as all three should be fairly trivial 
 refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8820) Broken package dependency in Debian repository

2015-04-03 Thread Stephan Wienczny (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394316#comment-14394316
 ] 

Stephan Wienczny commented on CASSANDRA-8820:
-

The reason is that the Packages file does not refer to the new version:

  Package: cassandra
  Version: 2.1.4
  ...

  Package: cassandra-tools
  Version: 2.1.3
  ...

cassandra-tools is available:

http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/cassandra-tools_2.1.4_all.deb

So the release process has a problem: the Packages file is not updated 
correctly.

 Broken package dependency in Debian repository
 --

 Key: CASSANDRA-8820
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8820
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: Ubuntu 14.04 LTS amd64
Reporter: Terry Moschou
Assignee: T Jake Luciani

 The Apache Debian package repository currently has unmet dependencies.
 Configured repos:
 deb http://www.apache.org/dist/cassandra/debian 21x main
 deb-src http://www.apache.org/dist/cassandra/debian 21x main
 Problem file:
 cassandra/dists/21x/main/binary-amd64/Packages
 $ sudo apt-get update && sudo apt-get install cassandra-tools
 ...(omitted)
 Reading state information... Done
 Some packages could not be installed. This may mean that you have
 requested an impossible situation or if you are using the unstable
 distribution that some required packages have not yet been created
 or been moved out of Incoming.
 The following information may help to resolve the situation:
 The following packages have unmet dependencies:
  cassandra-tools : Depends: cassandra (= 2.1.2) but it is not going to be 
 installed
 E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394202#comment-14394202
 ] 

Stefania edited comment on CASSANDRA-8952 at 4/3/15 9:18 AM:
-

Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with a FileChannel in CLibrary.getfd(String 
path), correct?


was (Author: stefania):
Regarding the first point, I only found dropPageCache() in SegmentedFile. We 
need to replace the transient RAF with NIO calls in CLibrary.getfd(String 
path), correct?

 Remove transient RandomAccessFile usage
 ---

 Key: CASSANDRA-8952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Stefania
Priority: Minor
  Labels: Windows
 Fix For: 3.0


 There are a few places within the code base where we use a RandomAccessFile 
 transiently to either grab fd's or channels for other operations. This is 
 prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
 - while these usages don't appear to be causing issues at this time there's 
 no reason to keep them. The less RandomAccessFile usage in the code-base the 
 more stable we'll be on Windows.
 [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
 * Used to getFD, have FileChannel version
 [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
 * Used to get file channel for channel truncate call. Only use is in index 
 file close so channel truncation down-only is acceptable.
 [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
 * Used to get file channel for mapping.
 Keeping these in a single ticket as all three should be fairly trivial 
 refactors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8915) Improve MergeIterator performance

2015-04-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394244#comment-14394244
 ] 

Stefania commented on CASSANDRA-8915:
-

In case you guys have not seen it yet, please check the changes proposed by 
CASSANDRA-8180, specifically this comment here: 
https://issues.apache.org/jira/browse/CASSANDRA-8180?focusedCommentId=14381674&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14381674.

The idea is that there will be two types of candidates: a greedy one that knows 
its first value, as is the case right now, and a lazy one that is compared 
based on a less accurate lower bound. What this means is that once a lazy 
candidate is picked, only then will it access the iterator to determine the 
exact first value, which could be much higher than the initial lower bound. 

The way I implemented this with the present merge iterator is to add the lazy 
candidate back to the priority queue after it has calculated its accurate first 
value. It's not very elegant, however, and it is somewhat wasteful.

If it is too complex to merge both approaches into one algorithm, we can always 
specialize a separate merge iterator implementation to support lazy candidates.
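The re-insertion scheme described above can be sketched as follows. This is an illustrative assumption of the mechanics, not code from CASSANDRA-8180 or this ticket: the `Candidate` class, its fields, and the `next` loop are all hypothetical, with `realKey` standing in for "advance the underlying iterator to its actual first value".

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch: greedy candidates are ordered by their real first value; lazy ones
// by a cheap lower bound, materializing the real value only once polled.
final class Candidate
{
    int key;           // real first value if exact, otherwise a lower bound
    boolean exact;     // greedy candidates start exact; lazy ones do not
    final int realKey; // stands in for advancing the underlying iterator

    Candidate(int lowerBound, int realKey, boolean exact)
    {
        this.key = exact ? realKey : lowerBound;
        this.realKey = realKey;
        this.exact = exact;
    }
}

final class LazyMergeSketch
{
    static PriorityQueue<Candidate> queue()
    {
        return new PriorityQueue<>(Comparator.comparingInt((Candidate c) -> c.key));
    }

    // Pop the next smallest value, materializing lazy candidates on demand and
    // pushing them back with their accurate key (the "not very elegant" step).
    static int next(PriorityQueue<Candidate> queue)
    {
        while (true)
        {
            Candidate c = queue.poll();
            if (c.exact)
                return c.key;
            c.key = c.realKey; // only now touch the underlying iterator
            c.exact = true;
            queue.add(c);      // re-insert: the real value may sort much later
        }
    }
}
```

When a lazy candidate's real value turns out much higher than its lower bound, the re-insert pushes it back down the queue, which is exactly the wasteful extra `log N` work the comment complains about.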

 Improve MergeIterator performance
 -

 Key: CASSANDRA-8915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8915
 Project: Cassandra
  Issue Type: Improvement
Reporter: Branimir Lambov
Assignee: Branimir Lambov
Priority: Minor

 The implementation of {{MergeIterator}} uses a priority queue and applies a 
 pair of {{poll}}+{{add}} operations for every item in the resulting sequence. 
 This is quite inefficient as {{poll}} necessarily applies at least {{log N}} 
 comparisons (up to {{2log N}}), and {{add}} often requires another {{log N}}, 
 for example in the case where the inputs largely don't overlap (where {{N}} 
 is the number of iterators being merged).
 This can easily be replaced with a simple custom structure that can perform 
 replacement of the top of the queue in a single step, which will very often 
 complete after a couple of comparisons and in the worst case scenarios will 
 match the complexity of the current implementation.
 This should significantly improve merge performance for iterators with 
 limited overlap (e.g. levelled compaction).
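A minimal sketch of the single-step replacement the description proposes, under the assumption that it behaves like a binary min-heap with a `replaceTop` operation (the class and method names here are illustrative, not Cassandra's):

```java
// Sketch: a binary min-heap whose root can be replaced in place, so advancing
// the winning iterator costs one sift-down (often one or two comparisons)
// instead of a poll() followed by an add().
final class ReplacingHeap
{
    private final int[] heap;
    private int size;

    ReplacingHeap(int capacity) { heap = new int[capacity]; }

    void add(int v)
    {
        int i = size++;
        heap[i] = v;
        while (i > 0 && heap[(i - 1) / 2] > heap[i]) // sift up
        {
            int p = (i - 1) / 2;
            int t = heap[p]; heap[p] = heap[i]; heap[i] = t;
            i = p;
        }
    }

    int top() { return heap[0]; }

    // The single-step operation: overwrite the root and restore the heap
    // property, rather than removing the root and re-adding a new element.
    void replaceTop(int v)
    {
        heap[0] = v;
        int i = 0;
        while (true) // sift down
        {
            int l = 2 * i + 1, r = l + 1, smallest = i;
            if (l < size && heap[l] < heap[smallest]) smallest = l;
            if (r < size && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) return; // settles early when inputs don't overlap
            int t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
            i = smallest;
        }
    }
}
```

With non-overlapping inputs the replacement value usually stays smaller than both children, so `replaceTop` returns after the first comparison pair, versus the `log N` to `2 log N` comparisons of poll+add.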



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Share file handles between all instances of a SegmentedFile

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk cf925bdfa -> 4e29b7a9a


Share file handles between all instances of a SegmentedFile

patch by stefania; reviewed by benedict for CASSANDRA-8893


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e29b7a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e29b7a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e29b7a9

Branch: refs/heads/trunk
Commit: 4e29b7a9a4736e7e70757dc514849c5af7e2d7d1
Parents: cf925bd
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri Apr 3 12:32:42 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 12:32:42 2015 +0100

--
 .../compress/CompressedRandomAccessReader.java  |  26 ++--
 .../io/compress/CompressedThrottledReader.java  |  10 +-
 .../io/sstable/format/SSTableReader.java|  84 ++--
 .../io/sstable/format/big/BigTableWriter.java   |  13 +-
 .../io/util/BufferedPoolingSegmentedFile.java   |  14 +-
 .../io/util/BufferedSegmentedFile.java  |  24 ++--
 .../io/util/CompressedPoolingSegmentedFile.java |  20 +--
 .../io/util/CompressedSegmentedFile.java|  20 +--
 .../cassandra/io/util/MmappedSegmentedFile.java |  65 +++---
 .../cassandra/io/util/PoolingSegmentedFile.java |  22 ++--
 .../cassandra/io/util/RandomAccessReader.java   | 128 ++-
 .../apache/cassandra/io/util/SegmentedFile.java |  74 ---
 .../cassandra/io/util/ThrottledReader.java  |   9 +-
 .../compress/CompressedStreamWriter.java|  14 +-
 .../apache/cassandra/db/RangeTombstoneTest.java |  27 ++--
 .../unit/org/apache/cassandra/db/ScrubTest.java |  17 +--
 .../org/apache/cassandra/db/VerifyTest.java |   3 +-
 .../db/compaction/AntiCompactionTest.java   |  36 +++---
 .../db/compaction/CompactionsTest.java  |   4 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  |   5 +-
 .../CompressedRandomAccessReaderTest.java   |  22 +++-
 .../CompressedSequentialWriterTest.java |  10 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  21 +--
 .../io/sstable/SSTableScannerTest.java  |  28 ++--
 .../cassandra/io/sstable/SSTableUtils.java  |  20 +--
 .../io/util/BufferedRandomAccessFileTest.java   |  11 +-
 .../cassandra/io/util/DataOutputTest.java   |  14 +-
 .../apache/cassandra/io/util/MemoryTest.java|   1 +
 28 files changed, 377 insertions(+), 365 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e29b7a9/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index b1b4dd4..1b3cd06 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -33,10 +33,7 @@ import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSReadError;
 import org.apache.cassandra.io.sstable.CorruptSSTableException;
-import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
-import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.io.util.PoolingSegmentedFile;
-import org.apache.cassandra.io.util.RandomAccessReader;
+import org.apache.cassandra.io.util.*;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -47,15 +44,15 @@ public class CompressedRandomAccessReader extends RandomAccessReader
 {
     private static final boolean useMmap = DatabaseDescriptor.getDiskAccessMode() == Config.DiskAccessMode.mmap;
 
-    public static CompressedRandomAccessReader open(String dataFilePath, CompressionMetadata metadata)
+    public static CompressedRandomAccessReader open(ChannelProxy channel, CompressionMetadata metadata)
     {
-        return open(dataFilePath, metadata, null);
+        return open(channel, metadata, null);
     }
-    public static CompressedRandomAccessReader open(String path, CompressionMetadata metadata, CompressedPoolingSegmentedFile owner)
+    public static CompressedRandomAccessReader open(ChannelProxy channel, CompressionMetadata metadata, CompressedPoolingSegmentedFile owner)
     {
         try
         {
-            return new CompressedRandomAccessReader(path, metadata, owner);
+            return new CompressedRandomAccessReader(channel, metadata, owner);
         }
         catch (FileNotFoundException e)
         {
@@ -78,9 +75,9 @@ public class CompressedRandomAccessReader extends RandomAccessReader
 // raw checksum bytes
 private ByteBuffer 

[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/23c84b16/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index 06234cd,000..a761e6a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@@ -1,2117 -1,0 +1,2127 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format;
 +
 +import java.io.*;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +import java.util.concurrent.*;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.concurrent.atomic.AtomicLong;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Ordering;
 +import com.google.common.primitives.Longs;
 +import com.google.common.util.concurrent.RateLimiter;
 +
 +import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
 +import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;
 +import com.clearspring.analytics.stream.cardinality.ICardinality;
 +import org.apache.cassandra.cache.CachingOptions;
 +import org.apache.cassandra.cache.InstrumentingCache;
 +import org.apache.cassandra.cache.KeyCacheKey;
 +import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 +import org.apache.cassandra.concurrent.ScheduledExecutors;
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.db.commitlog.ReplayPosition;
 +import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.dht.*;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.sstable.*;
 +import org.apache.cassandra.io.sstable.metadata.*;
 +import org.apache.cassandra.io.util.*;
 +import org.apache.cassandra.metrics.RestorableMeter;
 +import org.apache.cassandra.metrics.StorageMetrics;
 +import org.apache.cassandra.service.ActiveRepairService;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.*;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +import org.apache.cassandra.utils.concurrent.Ref;
 +import org.apache.cassandra.utils.concurrent.SelfRefCounted;
 +
 +import static org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR;
 +
 +/**
 + * An SSTableReader can be constructed in a number of places, but typically is either
 + * read from disk at startup, or constructed from a flushed memtable, or after compaction
 + * to replace some existing sstables. However once created, an sstablereader may also be modified.
 + *
 + * A reader's OpenReason describes its current stage in its lifecycle, as follows:
 + *
 + * NORMAL
 + * From:   None        => Reader has been read from disk, either at startup or from a flushed memtable
 + *         EARLY       => Reader is the final result of a compaction
 + *         MOVED_START => Reader WAS being compacted, but this failed and it has been restored to NORMAL status
 + *
 + * EARLY
 + * From:   None        => Reader is a compaction replacement that is either incomplete and has been opened
 + *                        to represent its partial result status, or has been finished but the compaction
 + *                        it is a part of has not yet completed fully
 + *         EARLY       => Same as from None, only it is not the first time it has been
 + *
 + * MOVED_START
 + * From:   NORMAL      => Reader is being compacted. This compaction has not finished, but the compaction result
 + *                        is either partially or fully opened, to either partially or 

[jira] [Commented] (CASSANDRA-9111) SSTables originated from the same incremental repair session have different repairedAt timestamps

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394394#comment-14394394
 ] 

Philip Thompson commented on CASSANDRA-9111:


Thanks for the patch! The file you contributed seems to have some odd 
characters in it. Did you create it via the steps described here: 
http://wiki.apache.org/cassandra/HowToContribute ?

 SSTables originated from the same incremental repair session have different 
 repairedAt timestamps
 -

 Key: CASSANDRA-9111
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9111
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: prmg
 Attachments: CASSANDRA-9111-v0.txt


 CASSANDRA-7168 optimizes QUORUM reads by skipping incrementally repaired 
 SSTables on other replicas that were repaired on or before the maximum 
 repairedAt timestamp of the coordinating replica's SSTables for the query 
 partition.
 One assumption of that optimization is that SSTables originated from the same 
 repair session in different nodes will have the same repairedAt timestamp, 
 since the objective is to skip reading SSTables originated in the same repair 
 session (or before).
 However, currently, each node independently timestamps SSTables originating 
 from the same repair session, so they almost never have the same timestamp.
 Steps to reproduce the problem:
 {code}
 ccm create test
 ccm populate -n 3
 ccm start
 ccm node1 cqlsh;
 {code}
 {code:sql}
 CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 3};
 CREATE TABLE foo.bar ( key int, col int, PRIMARY KEY (key) ) ;
 INSERT INTO foo.bar (key, col) VALUES (1, 1);
 exit;
 {code}
 {code}
 ccm node1 flush;
 ccm node2 flush;
 ccm node3 flush;
 nodetool -h 127.0.0.1 -p 7100 repair -par -inc foo bar
 [2015-04-02 21:56:07,726] Starting repair command #1, repairing 3 ranges for 
 keyspace foo (parallelism=PARALLEL, full=false)
 [2015-04-02 21:56:07,816] Repair session 3655b670-d99c-11e4-b250-9107aba35569 
 for range (3074457345618258602,-9223372036854775808] finished
 [2015-04-02 21:56:07,816] Repair session 365a4a50-d99c-11e4-b250-9107aba35569 
 for range (-9223372036854775808,-3074457345618258603] finished
 [2015-04-02 21:56:07,818] Repair session 365bf800-d99c-11e4-b250-9107aba35569 
 for range (-3074457345618258603,3074457345618258602] finished
 [2015-04-02 21:56:07,995] Repair command #1 finished
 sstablemetadata 
 ~/.ccm/test/node1/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
  
 ~/.ccm/test/node2/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
  
 ~/.ccm/test/node3/data/foo/bar-377b5540d99d11e49cc09107aba35569/foo-bar-ka-1-Statistics.db
  | grep Repaired
 Repaired at: 1428023050318
 Repaired at: 1428023050322
 Repaired at: 1428023050340
 {code}
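For context, the skip rule this report depends on can be sketched as a simple predicate. The timestamps below are taken from the sstablemetadata output above; the comparison is an illustrative reconstruction of the idea, not the actual read-path code:

```java
public class RepairedAtSkipSketch {
    public static void main(String[] args) {
        // node1's sstable from the report above: the coordinator's max repairedAt.
        long coordinatorMax = 1428023050318L;
        // node2's and node3's sstables, written by the *same* repair session.
        long[] replicaRepairedAt = {1428023050322L, 1428023050340L};
        for (long repairedAt : replicaRepairedAt) {
            // The optimization may skip sstables repaired at or before coordinatorMax.
            boolean skipped = repairedAt <= coordinatorMax;
            System.out.println(skipped); // prints false both times
        }
    }
}
```

Because each node stamps its own repairedAt, neither replica sstable qualifies for the skip, even though all three files came out of a single repair session.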



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: follow up to CASSANDRA-8670: providing small improvements to performance of writeUTF; and improving safety of DataOutputBuffer when size is known upfront

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4e29b7a9a -> c2ecfe7b7


follow up to CASSANDRA-8670:
providing small improvements to performance of writeUTF; and
improving safety of DataOutputBuffer when size is known upfront

patch by ariel and benedict for CASSANDRA-8670
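The "safety when size is known upfront" idea can be illustrated with a small hypothetical sketch (this is not the actual DataOutputBufferFixed class): a fixed-capacity buffer fails fast on overflow instead of silently growing, so a miscalculated serialized size surfaces immediately.

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class FixedBufferSketch {
    // Hypothetical stand-in for a fixed-size output buffer: writes past the
    // declared capacity throw instead of triggering a hidden reallocation.
    static final class FixedOutput {
        private final ByteBuffer buffer;
        FixedOutput(int size) { buffer = ByteBuffer.allocate(size); }
        void writeInt(int v) { buffer.putInt(v); } // throws BufferOverflowException when full
    }

    public static void main(String[] args) {
        FixedOutput out = new FixedOutput(8); // room for exactly two ints
        out.writeInt(1);
        out.writeInt(2);
        try {
            out.writeInt(3); // size was computed wrong: fail fast here
            System.out.println("grew silently");
        } catch (BufferOverflowException e) {
            System.out.println("overflow detected");
        }
    }
}
```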


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2ecfe7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2ecfe7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2ecfe7b

Branch: refs/heads/trunk
Commit: c2ecfe7b7bffbced652b4da9dcf4ca263d345695
Parents: 4e29b7a
Author: Ariel Weisberg ariel.wesib...@datastax.com
Authored: Fri Apr 3 12:29:17 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 12:33:29 2015 +0100

--
 .../cassandra/db/commitlog/CommitLog.java   |  5 +-
 .../cassandra/db/marshal/CompositeType.java |  3 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  4 +-
 .../io/util/DataOutputBufferFixed.java  | 65 
 .../cassandra/service/pager/PagingState.java|  3 +-
 .../streaming/messages/StreamInitMessage.java   |  3 +-
 .../org/apache/cassandra/utils/FBUtilities.java |  3 +-
 .../io/util/BufferedDataOutputStreamTest.java   | 39 
 8 files changed, 117 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
index 7fa7575..cf38d44 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
@@ -29,10 +29,10 @@ import com.google.common.annotations.VisibleForTesting;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.commons.lang3.StringUtils;
 
 import com.github.tjake.ICRC32;
+
 import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.ParameterizedClass;
@@ -41,6 +41,7 @@ import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.compress.ICompressor;
 import org.apache.cassandra.io.util.BufferedDataOutputStreamPlus;
+import org.apache.cassandra.io.util.DataOutputBufferFixed;
 import org.apache.cassandra.metrics.CommitLogMetrics;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
@@ -251,7 +252,7 @@ public class CommitLog implements CommitLogMBean
 {
 ICRC32 checksum = CRC32Factory.instance.create();
 final ByteBuffer buffer = alloc.getBuffer();
-        BufferedDataOutputStreamPlus dos = new BufferedDataOutputStreamPlus(null, buffer);
+        BufferedDataOutputStreamPlus dos = new DataOutputBufferFixed(buffer);
 
 // checksummed length
 dos.writeInt((int) size);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/CompositeType.java b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
index 9ee9fb3..1bc772d 100644
--- a/src/java/org/apache/cassandra/db/marshal/CompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
@@ -32,6 +32,7 @@ import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.cql3.Operator;
 import org.apache.cassandra.io.util.DataOutputBuffer;
+import org.apache.cassandra.io.util.DataOutputBufferFixed;
 import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
@@ -403,7 +404,7 @@ public class CompositeType extends AbstractCompositeType
 {
 try
 {
-            DataOutputBuffer out = new DataOutputBuffer(serializedSize);
+            DataOutputBuffer out = new DataOutputBufferFixed(serializedSize);
 if (isStatic)
 out.writeShort(STATIC_MARKER);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ecfe7b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index f4f46a1..5669a8d 100644
--- 

[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394368#comment-14394368
 ] 

Sam Tunnicliffe commented on CASSANDRA-9092:


What consistency level are you writing at? 
How are your clients performing the writes, thrift or native protocol?
How do your clients balance requests? Are they simply sending them round robin 
or using token aware routing? Are you writing in only one DC or to both?
Are there errors or warnings in the logs of the nodes which don't fail? 

Also, I don't think the schema you posted is complete as the primary key 
includes a {{chunk}} column not in the table definition.

If this is not your regular workload (i.e. it's a periodic bulk load) and you 
expect the normal usage pattern to be different, disabling hinted handoff 
temporarily may be a reasonable workaround for you, provided you aren't relying 
on CL.ANY and your clients handle {{UnavailableException}} sanely. You'll also 
need to run repair after the load completes. 
If that isn't an option, bumping the delivery threads and opening the throttle 
might prevent a huge hints buildup if you have sufficient bandwidth and CPU, 
but I doubt it will help much as the nodes or network are clearly already 
overwhelmed otherwise there wouldn't be so many hints being written in the 
first place. 
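For reference, the knobs mentioned above live in cassandra.yaml; a sketch of the relevant 2.1-era settings (values are illustrative, not recommendations):

```yaml
# Disable hinted handoff entirely (the temporary workaround discussed above).
hinted_handoff_enabled: false

# Or, if hints must stay on, widen the delivery pipe instead:
max_hints_delivery_threads: 8        # default is 2
hinted_handoff_throttle_in_kb: 4096  # per-delivery throttle; default is 1024
```

`nodetool disablehandoff` / `enablehandoff` toggle the same behavior at runtime without a restart.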

 Nodes in DC2 die during and after huge write workload
 -

 Key: CASSANDRA-9092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
 java version 1.7.0_71
 Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
Reporter: Sergey Maznichenko
Assignee: Sam Tunnicliffe
 Fix For: 2.1.5

 Attachments: cassandra_crash1.txt


 Hello,
 We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
 Node is VM 8 CPU, 32GB RAM
 During a significant workload (loading several million blobs, ~3.5MB each), 1 
 node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
 Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after start. 
 I see many files in the system.hints table, and the error appears 2-3 minutes after 
 starting the system.hints auto compaction.
 Stops, means ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:1,1,main]
 java.lang.OutOfMemoryError: Java heap space
 ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
 Exception in thread Thread[HintedHandoff:1,1,main]
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.OutOfMemoryError: Java heap space
 Full errors listing attached in cassandra_crash1.txt
 The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9112:

Attachment: 9112.txt

 Remove ternary construction of SegmentedFile.Builder in readers
 ---

 Key: CASSANDRA-9112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 3.0

 Attachments: 9112.txt


 Self explanatory



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9111) SSTables originated from the same incremental repair session have different repairedAt timestamps

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9111:
---
Reviewer: Yuki Morishita

 SSTables originated from the same incremental repair session have different 
 repairedAt timestamps
 -

 Key: CASSANDRA-9111
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9111
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: prmg
 Attachments: CASSANDRA-9111-v0.txt





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23c84b16
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23c84b16
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23c84b16

Branch: refs/heads/trunk
Commit: 23c84b169febc59d3d2927bdc6389104d7d869e7
Parents: c2ecfe7 345455d
Author: Benedict Elliott Smith bened...@apache.org
Authored: Fri Apr 3 12:58:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 12:58:07 2015 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/format/SSTableReader.java| 24 ++--
 2 files changed, 18 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23c84b16/CHANGES.txt
--
diff --cc CHANGES.txt
index d049640,9ddb9c9..e8cb20b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,94 -1,5 +1,95 @@@
 +3.0
 + * Share file handles between all instances of a SegmentedFile (CASSANDRA-8893)
 + * Make it possible to major compact LCS (CASSANDRA-7272)
 + * Make FunctionExecutionException extend RequestExecutionException
 +   (CASSANDRA-9055)
 + * Add support for SELECT JSON, INSERT JSON syntax and new toJson(), fromJson()
 +   functions (CASSANDRA-7970)
 + * Optimise max purgeable timestamp calculation in compaction (CASSANDRA-8920)
 + * Constrain internode message buffer sizes, and improve IO class hierarchy (CASSANDRA-8670) 
 + * New tool added to validate all sstables in a node (CASSANDRA-5791)
 + * Push notification when tracing completes for an operation (CASSANDRA-7807)
 + * Delay "node up" and "node added" notifications until native protocol server is started (CASSANDRA-8236)
 + * Compressed Commit Log (CASSANDRA-6809)
 + * Optimise IntervalTree (CASSANDRA-8988)
 + * Add a key-value payload for third party usage (CASSANDRA-8553)
 + * Bump metrics-reporter-config dependency for metrics 3.0 (CASSANDRA-8149)
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings 

[1/3] cassandra git commit: Do not load read meters for offline operations

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk c2ecfe7b7 -> 23c84b169


Do not load read meters for offline operations

patch by benedict; reviewed by tyler for CASSANDRA-9082


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/345455de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/345455de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/345455de

Branch: refs/heads/trunk
Commit: 345455dee2b154e5a9b10a7a615bcc0c7092775d
Parents: 49d64c2
Author: Benedict Elliott Smith bened...@apache.org
Authored: Fri Apr 3 12:53:45 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 12:53:45 2015 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableReader.java | 24 ++--
 2 files changed, 18 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/345455de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b1499c1..9ddb9c9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.5
+ * Do not load read meter for offline operations (CASSANDRA-9082)
  * cqlsh: Make CompositeType data readable (CASSANDRA-8919)
  * cqlsh: Fix display of triggers (CASSANDRA-9081)
  * Fix NullPointerException when deleting or setting an element by index on

http://git-wip-us.apache.org/repos/asf/cassandra/blob/345455de/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 8fd7b85..c73d4a1 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -378,6 +378,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         return open(descriptor, components, metadata, partitioner, true);
     }
 
+    // use only for offline or Standalone operations
     public static SSTableReader openNoValidation(Descriptor descriptor, Set<Component> components, CFMetaData metadata) throws IOException
     {
         return open(descriptor, components, metadata, StorageService.getPartitioner(), false);
@@ -434,7 +435,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         sstable.ifile = ibuilder.complete(sstable.descriptor.filenameFor(Component.PRIMARY_INDEX));
         sstable.dfile = dbuilder.complete(sstable.descriptor.filenameFor(Component.DATA));
         sstable.bf = FilterFactory.AlwaysPresent;
-        sstable.setup();
+        sstable.setup(true);
         return sstable;
     }
 
@@ -478,7 +479,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         sstable.load(validationMetadata);
         logger.debug("INDEX LOAD TIME for {}: {} ms.", descriptor, TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
 
-        sstable.setup();
+        sstable.setup(!validate);
         if (validate)
             sstable.validate();
 
@@ -599,7 +600,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         this.dfile = dfile;
         this.indexSummary = indexSummary;
         this.bf = bloomFilter;
-        this.setup();
+        this.setup(false);
     }
 
     public static long getTotalBytes(Iterable<SSTableReader> sstables)
@@ -2010,9 +2011,9 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         return selfRef.ref();
     }
 
-    void setup()
+    void setup(boolean isOffline)
     {
-        tidy.setup(this);
+        tidy.setup(this, isOffline);
         this.readMeter = tidy.global.readMeter;
     }
 
@@ -2059,7 +2060,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
 
         private boolean setup;
 
-        void setup(SSTableReader reader)
+        void setup(SSTableReader reader, boolean isOffline)
         {
             this.setup = true;
             this.bf = reader.bf;
@@ -2070,6 +2071,8 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
             this.typeRef = DescriptorTypeTidy.get(reader);
             this.type = typeRef.get();
             this.global = type.globalRef.get();
+            if (!isOffline)
+                global.ensureReadMeter();
         }
 
         InstanceTidier(Descriptor descriptor, CFMetaData metadata)
@@ -2212,7 +2215,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         private RestorableMeter readMeter;
         // the scheduled persistence of the readMeter, that we 

[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394352#comment-14394352
 ] 

Sergey Maznichenko commented on CASSANDRA-9092:
---

Should I provide any additional information from the failed node? I want to 
delete all hints and run repair on this node.

 Nodes in DC2 die during and after huge write workload
 -

 Key: CASSANDRA-9092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
 java version 1.7.0_71
 Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
Reporter: Sergey Maznichenko
Assignee: Sam Tunnicliffe
 Fix For: 2.1.5

 Attachments: cassandra_crash1.txt





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9112) Remove ternary construction of SegmentedFile.Builder in readers

2015-04-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-9112:
---

 Summary: Remove ternary construction of SegmentedFile.Builder in readers
 Key: CASSANDRA-9112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9112
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 3.0


Self-explanatory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9110) Bounded/RingBuffer CQL Collections

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9110:
---
Fix Version/s: 3.1

 Bounded/RingBuffer CQL Collections
 --

 Key: CASSANDRA-9110
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9110
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jim Plush
Priority: Minor
 Fix For: 3.1


 Feature Request:
 I've had frequent use cases for bounded and RingBuffer based collections. 
 For example: 
 I want to store the first 100 times I've seen this thing.
 I want to store the last 100 times I've seen this thing.
 Currently that means having to do application-level READ/WRITE operations, and 
 we like to keep some of our high-scale apps write-only where possible. 
 While probably expensive for exactly N items, an approximation should be good 
 enough for most applications, where N in our example could be 100 or 102, or 
 even tunable on the type or table. 
 For the RingBuffer example, consider I only want to store the last N login 
 attempts for a user. Once N+1 comes in it issues a delete for the oldest one 
 in the collection, or waits until compaction to drop the overflow data as 
 long as the CQL returns the right bounds.
 A potential implementation idea, given the rowkey would live on a single node, 
 would be to have an LRU-based counter cache (tunable in the yaml settings in 
 MB) that keeps a current count of how many items are already in the 
 collection for that rowkey. If greater than the bound, toss. 
 something akin to:
 CREATE TABLE users (
   user_id text PRIMARY KEY,
   first_name text,
   first_logins set<text, 100, oldest>,
   last_logins set<text, 100, newest>
 );
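The requested "last N" semantics can be sketched at the application level. This is a purely illustrative model of the retention behaviour described above (evict the oldest entry once capacity is reached, which would map to a DELETE of the oldest row in the proposed CQL feature); it is not an existing Cassandra or CQL facility:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Application-level sketch of "last N" (RingBuffer) retention: once capacity
// is reached, each new entry evicts the oldest. Illustrative only; in the
// proposed CQL feature the eviction would become a DELETE of the oldest row
// (or be dropped at compaction).
public class LastN<T> {
    private final ArrayDeque<T> buf = new ArrayDeque<>();
    private final int capacity;

    public LastN(int capacity) { this.capacity = capacity; }

    public void add(T item) {
        if (buf.size() == capacity)
            buf.removeFirst(); // evict the oldest entry
        buf.addLast(item);
    }

    public List<T> snapshot() { return new ArrayList<>(buf); }

    public static void main(String[] args) {
        LastN<String> logins = new LastN<>(3);
        for (String t : new String[] { "t1", "t2", "t3", "t4", "t5" })
            logins.add(t);
        System.out.println(logins.snapshot()); // keeps only the last 3
    }
}
```

A "first N" variant would simply drop new entries once full instead of evicting the oldest.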



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9113) Improve error message when bootstrap fails

2015-04-03 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-9113:
--

 Summary: Improve error message when bootstrap fails
 Key: CASSANDRA-9113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9113
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Philip Thompson
 Fix For: 3.1


Currently when bootstrap fails, users see a {{RuntimeException: Stream failed}} 
with a long stack trace. This typically brings them to IRC, the mailing list, 
or jira. However, most of the time, it is not due to a C* server failure, but 
network or machine issues.

While there are probably improvements that could be made to the resiliency of 
streaming, it would be nice if, assuming no server errors are detected, users 
were shown a less traumatic error message instead of the RuntimeException, one 
that includes or points to documentation on how to solve a failed bootstrap 
stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8905:
---
Fix Version/s: 2.0.15

 IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
 ---

 Key: CASSANDRA-8905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
 Fix For: 2.0.15


 After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
 {noformat}
 ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
 java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
 at java.util.ArrayList.<init>(ArrayList.java:142)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I've identified which sstable is causing this: it's an -ic- format sstable, 
 i.e. something written before the upgrade. I can repeat the error with 
 forceUserDefinedCompaction.
 Running upgradesstables also causes the same exception. 
 Scrub helps, but skips a row as incorrect. 
 I can share the sstable privately if it helps.
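For context on the exception itself: `ArrayList` throws exactly this message when handed a negative initial capacity, and -2147483648 is `Integer.MIN_VALUE`, the kind of value a corrupt or misread on-disk length field can decode to. A minimal reproduction of the JDK behaviour (illustrative only; it does not model the sstable read path in `SuperColumns$SCIterator`):

```java
import java.util.ArrayList;

// Minimal reproduction of "Illegal Capacity": ArrayList rejects a negative
// initial capacity with an IllegalArgumentException carrying this message.
public class IllegalCapacityDemo {
    public static void main(String[] args) {
        try {
            new ArrayList<Object>(Integer.MIN_VALUE);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Illegal Capacity: -2147483648
        }
    }
}
```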



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394511#comment-14394511
 ] 

Philip Thompson commented on CASSANDRA-8905:


[~krummas], if scrubbing solved the issue, do we consider this a problem?

 IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
 ---

 Key: CASSANDRA-8905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
 Fix For: 2.0.15


 After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
 {noformat}
 ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
 java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
 at java.util.ArrayList.<init>(ArrayList.java:142)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I've identified which sstable is causing this: it's an -ic- format sstable, 
 i.e. something written before the upgrade. I can repeat the error with 
 forceUserDefinedCompaction.
 Running upgradesstables also causes the same exception. 
 Scrub helps, but skips a row as incorrect. 
 I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394526#comment-14394526
 ] 

Philip Thompson commented on CASSANDRA-8589:


[~slebresne], would you like this on your backlog? Or should I assign it to 
Benjamin, Tyler, or Carl?

 Reconciliation in presence of tombstone might yield stale data
 --

 Key: CASSANDRA-8589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne

 Consider 3 replica A, B, C (so RF=3) and consider that we do the following 
 sequence of actions at {{QUORUM}} where I indicate the replicas acknowledging 
 each operation (and let's assume that a replica that doesn't ack is a replica 
 that doesn't get the update):
 {noformat}
 CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
 INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
 DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
 UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
 SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
 {noformat}
 Every operation has achieved quorum, but on the last read, A will respond 
 {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
 consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
 {{0->0, 2->3}}).
 Put another way, if we have a limit, every replica honors that limit but 
 since tombstones can suppress results from other nodes, we may have some 
 cells for which we actually don't get a quorum of response (even though we 
 globally have a quorum of replica responses).
 In practice, this probably occurs rather rarely and so the simpler fix is 
 probably to do something similar to the short reads protection: detect when 
 this could have happened (based on how replica responses are reconciled) and do 
 an additional request in that case. That detection will have potential false 
 positives but I suspect we can be precise enough that those false positives 
 will be very very rare (we should nonetheless track how often this code gets 
 triggered and if we see that it's more often than we think, we could 
 pro-actively bump user limits internally to reduce those occurrences).
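To make the failure concrete, the sequence above can be simulated with a toy coordinator merge. This is an illustrative model with made-up timestamps, not Cassandra's actual read path: each replica applies LIMIT 2 locally (returning tombstones but not counting them as live rows), then the coordinator merges cells by highest timestamp.

```java
import java.util.*;

// Toy model of the quorum read above. Purely illustrative, not Cassandra code:
// replica A saw the DELETE of t=1 but not the UPDATE of t=2; replica B saw
// the UPDATE but not the DELETE. Each applies LIMIT 2 before responding.
public class TombstoneLimitDemo {
    // value == null models a tombstone
    public record Cell(Integer value, long timestamp) {}

    // Return the first rows up to n *live* rows; tombstones are sent along.
    public static Map<Integer, Cell> limit(TreeMap<Integer, Cell> replica, int n) {
        Map<Integer, Cell> out = new TreeMap<>();
        int live = 0;
        for (Map.Entry<Integer, Cell> e : replica.entrySet()) {
            if (live == n) break;
            out.put(e.getKey(), e.getValue());
            if (e.getValue().value() != null) live++;
        }
        return out;
    }

    // Coordinator-side reconciliation: highest timestamp wins per cell.
    public static TreeMap<Integer, Cell> merge(List<Map<Integer, Cell>> responses) {
        TreeMap<Integer, Cell> merged = new TreeMap<>();
        for (Map<Integer, Cell> r : responses)
            for (Map.Entry<Integer, Cell> e : r.entrySet())
                merged.merge(e.getKey(), e.getValue(),
                             (a, b) -> a.timestamp() >= b.timestamp() ? a : b);
        return merged;
    }

    public static TreeMap<Integer, Cell> demo() {
        TreeMap<Integer, Cell> a = new TreeMap<>(Map.of(   // saw DELETE t=1
                0, new Cell(0, 1), 1, new Cell(null, 4), 2, new Cell(2, 3)));
        TreeMap<Integer, Cell> b = new TreeMap<>(Map.of(   // saw UPDATE t=2
                0, new Cell(0, 1), 1, new Cell(1, 2), 2, new Cell(3, 5)));
        return merge(List.of(limit(a, 2), limit(b, 2)));
    }

    public static void main(String[] args) {
        // A's tombstone suppresses B's t=1 row, and B's LIMIT cut off its
        // t=2 cell, so the merged t=2 value is the stale 2 instead of 3.
        System.out.println(demo());
    }
}
```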



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9113) Improve error message when bootstrap fails

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9113:
---
Priority: Minor  (was: Major)

 Improve error message when bootstrap fails
 --

 Key: CASSANDRA-9113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9113
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Philip Thompson
Priority: Minor
 Fix For: 3.1


 Currently when bootstrap fails, users see a {{RuntimeException: Stream 
 failed}} with a long stack trace. This typically brings them to IRC, the 
 mailing list, or jira. However, most of the time, it is not due to a C* 
 server failure, but network or machine issues.
 While there are probably improvements that could be made to the resiliency of 
 streaming, it would be nice if, assuming no server errors are detected, users 
 were shown a less traumatic error message instead of the RuntimeException, 
 one that includes or points to documentation on how to solve a failed 
 bootstrap stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394530#comment-14394530
 ] 

Sylvain Lebresne commented on CASSANDRA-8589:
-

It would actually be nice to start by ensuring we can reproduce this through a 
dtest. It shouldn't be too hard to write one, and there is no point in chasing a 
complex solution if, as with CASSANDRA-8933, something I forgot about in the 
code makes this a non-problem. Also, CASSANDRA-8099 should actually solve this, 
so if that's confirmed by said reproduction dtest, maybe we're good with fixing 
in 3.0 only.

 Reconciliation in presence of tombstone might yield stale data
 --

 Key: CASSANDRA-8589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne

 Consider 3 replica A, B, C (so RF=3) and consider that we do the following 
 sequence of actions at {{QUORUM}} where I indicate the replicas acknowledging 
 each operation (and let's assume that a replica that doesn't ack is a replica 
 that doesn't get the update):
 {noformat}
 CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
 INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
 DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
 UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
 SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
 {noformat}
 Every operation has achieved quorum, but on the last read, A will respond 
 {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
 consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
 {{0->0, 2->3}}).
 Put another way, if we have a limit, every replica honors that limit but 
 since tombstones can suppress results from other nodes, we may have some 
 cells for which we actually don't get a quorum of response (even though we 
 globally have a quorum of replica responses).
 In practice, this probably occurs rather rarely and so the simpler fix is 
 probably to do something similar to the short reads protection: detect when 
 this could have happened (based on how replica responses are reconciled) and do 
 an additional request in that case. That detection will have potential false 
 positives but I suspect we can be precise enough that those false positives 
 will be very very rare (we should nonetheless track how often this code gets 
 triggered and if we see that it's more often than we think, we could 
 pro-actively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8589:
---
   Tester: Ryan McGuire
Fix Version/s: 3.0

 Reconciliation in presence of tombstone might yield stale data
 --

 Key: CASSANDRA-8589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 3.0


 Consider 3 replica A, B, C (so RF=3) and consider that we do the following 
 sequence of actions at {{QUORUM}} where I indicate the replicas acknowledging 
 each operation (and let's assume that a replica that doesn't ack is a replica 
 that doesn't get the update):
 {noformat}
 CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
 INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
 DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
 UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
 SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
 {noformat}
 Every operation has achieved quorum, but on the last read, A will respond 
 {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
 consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
 {{0->0, 2->3}}).
 Put another way, if we have a limit, every replica honors that limit but 
 since tombstones can suppress results from other nodes, we may have some 
 cells for which we actually don't get a quorum of response (even though we 
 globally have a quorum of replica responses).
 In practice, this probably occurs rather rarely and so the simpler fix is 
 probably to do something similar to the short reads protection: detect when 
 this could have happened (based on how replica responses are reconciled) and do 
 an additional request in that case. That detection will have potential false 
 positives but I suspect we can be precise enough that those false positives 
 will be very very rare (we should nonetheless track how often this code gets 
 triggered and if we see that it's more often than we think, we could 
 pro-actively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8915) Improve MergeIterator performance

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394250#comment-14394250
 ] 

Benedict commented on CASSANDRA-8915:
-

I perhaps should have commented when I first saw the link. It should be quite 
viable to merge the behaviours; the Candidate just needs a flag indicating 
whether its value is real, and to discard the non-real values it encounters.

 Improve MergeIterator performance
 -

 Key: CASSANDRA-8915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8915
 Project: Cassandra
  Issue Type: Improvement
Reporter: Branimir Lambov
Assignee: Branimir Lambov
Priority: Minor

 The implementation of {{MergeIterator}} uses a priority queue and applies a 
 pair of {{poll}}+{{add}} operations for every item in the resulting sequence. 
 This is quite inefficient as {{poll}} necessarily applies at least {{log N}} 
 comparisons (up to {{2log N}}), and {{add}} often requires another {{log N}}, 
 for example in the case where the inputs largely don't overlap (where {{N}} 
 is the number of iterators being merged).
 This can easily be replaced with a simple custom structure that can perform 
 replacement of the top of the queue in a single step, which will very often 
 complete after a couple of comparisons and in the worst case scenarios will 
 match the complexity of the current implementation.
 This should significantly improve merge performance for iterators with 
 limited overlap (e.g. levelled compaction).
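The single-step replacement the ticket proposes can be illustrated with a plain binary min-heap: overwrite the root and sift down once, instead of a full {{poll}} followed by {{add}}. This is a sketch under stated assumptions, not Cassandra's implementation (the real structure would hold iterator candidates, not ints):

```java
import java.util.Arrays;

// Sketch of "replace top" on a binary min-heap. When the replacement value
// is still the smallest (inputs with limited overlap), siftDown exits after
// a couple of comparisons, versus ~2 log N for poll() + add().
public final class ReplaceTopHeap {
    private final int[] heap;
    private final int size;

    public ReplaceTopHeap(int[] items) {
        heap = Arrays.copyOf(items, items.length);
        size = items.length;
        for (int i = size / 2 - 1; i >= 0; i--) siftDown(i); // heapify
    }

    public int top() { return heap[0]; }

    // Replace the minimum and restore heap order in one sift-down pass.
    public void replaceTop(int value) {
        heap[0] = value;
        siftDown(0);
    }

    private void siftDown(int i) {
        while (true) {
            int l = 2 * i + 1, r = l + 1, smallest = i;
            if (l < size && heap[l] < heap[smallest]) smallest = l;
            if (r < size && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) return;
            int tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
            i = smallest;
        }
    }

    public static void main(String[] args) {
        ReplaceTopHeap h = new ReplaceTopHeap(new int[] { 5, 9, 7 });
        h.replaceTop(6);             // 6 is still below both children, so the
        System.out.println(h.top()); // heap is restored after two comparisons
    }
}
```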



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394294#comment-14394294
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

Will there be a command to manually refresh statistics of a table from CQL 
(like ANALYZE TABLE ...)?
I need a way to trigger this in an integration test and I don't want to wait 
until it automatically refreshes it after the update interval...
1. create table
2. add data
3. analyze (?)
4. check stats


 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.5

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Share file handles between all instances of a SegmentedFile

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 868457de2 -> cf925bdfa


Share file handles between all instances of a SegmentedFile

patch by stefania; reviewed by benedict for CASSANDRA-8893


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf925bdf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf925bdf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf925bdf

Branch: refs/heads/trunk
Commit: cf925bdfa2f211784eb22d2b98b7176e551dda69
Parents: 868457d
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri Apr 3 11:43:30 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 11:43:30 2015 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/io/util/ChannelProxy.java  | 182 +++
 .../cassandra/io/RandomAccessReaderTest.java| 234 +++
 3 files changed, 417 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf925bdf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bda5bb7..d049640 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Share file handles between all instances of a SegmentedFile (CASSANDRA-8893)
  * Make it possible to major compact LCS (CASSANDRA-7272)
  * Make FunctionExecutionException extend RequestExecutionException
(CASSANDRA-9055)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf925bdf/src/java/org/apache/cassandra/io/util/ChannelProxy.java
--
diff --git a/src/java/org/apache/cassandra/io/util/ChannelProxy.java 
b/src/java/org/apache/cassandra/io/util/ChannelProxy.java
new file mode 100644
index 000..79954a5
--- /dev/null
+++ b/src/java/org/apache/cassandra/io/util/ChannelProxy.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.channels.WritableByteChannel;
+import java.nio.file.StandardOpenOption;
+
+import org.apache.cassandra.io.FSReadError;
+import org.apache.cassandra.utils.CLibrary;
+import org.apache.cassandra.utils.concurrent.RefCounted;
+import org.apache.cassandra.utils.concurrent.SharedCloseableImpl;
+
+/**
+ * A proxy of a FileChannel that:
+ *
+ * - implements reference counting
+ * - exports only thread safe FileChannel operations
+ * - wraps IO exceptions into runtime exceptions
+ *
+ * Tested by RandomAccessReaderTest.
+ */
+public final class ChannelProxy extends SharedCloseableImpl
+{
+    private final String filePath;
+    private final FileChannel channel;
+
+    public static FileChannel openChannel(File file)
+    {
+        try
+        {
+            return FileChannel.open(file.toPath(), StandardOpenOption.READ);
+        }
+        catch (IOException e)
+        {
+            throw new RuntimeException(e);
+        }
+    }
+
+    public ChannelProxy(String path)
+    {
+        this(new File(path));
+    }
+
+    public ChannelProxy(File file)
+    {
+        this(file.getAbsolutePath(), openChannel(file));
+    }
+
+    public ChannelProxy(String filePath, FileChannel channel)
+    {
+        super(new Cleanup(filePath, channel));
+
+        this.filePath = filePath;
+        this.channel = channel;
+    }
+
+    public ChannelProxy(ChannelProxy copy)
+    {
+        super(copy);
+
+        this.filePath = copy.filePath;
+        this.channel = copy.channel;
+    }
+
+    private final static class Cleanup implements RefCounted.Tidy
+    {
+        final String filePath;
+        final FileChannel channel;
+
+        protected Cleanup(String filePath, FileChannel channel)
+        {
+            this.filePath = filePath;
+            this.channel = channel;
+        }
+
+        public String name()
+        {
+  

[jira] [Updated] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8893:

Reviewer: Benedict

 RandomAccessReader should share its FileChannel with all instances (via 
 SegmentedFile)
 --

 Key: CASSANDRA-8893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.0


 There's no good reason to open a FileChannel for each 
 \(Compressed\)\?RandomAccessReader, and this would simplify 
 RandomAccessReader to just a thin wrapper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394536#comment-14394536
 ] 

Philip Thompson commented on CASSANDRA-8589:


Okay, I've set Ryan as tester, he'll forward it along.

 Reconciliation in presence of tombstone might yield stale data
 --

 Key: CASSANDRA-8589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 3.0


 Consider 3 replica A, B, C (so RF=3) and consider that we do the following 
 sequence of actions at {{QUORUM}} where I indicate the replicas acknowledging 
 each operation (and let's assume that a replica that doesn't ack is a replica 
 that doesn't get the update):
 {noformat}
 CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))
 INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
 INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C
 DELETE FROM test WHERE k='k' AND t=1; // acked by A and C
 UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C
 SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
 {noformat}
 Every operation has achieved quorum, but on the last read, A will respond 
 {{0->0, tombstone 1, 2->2}} and B will respond {{0->0, 1->1}}. As a 
 consequence we'll answer {{0->0, 2->2}} which is incorrect (we should respond 
 {{0->0, 2->3}}).
 Put another way, if we have a limit, every replica honors that limit but 
 since tombstones can suppress results from other nodes, we may have some 
 cells for which we actually don't get a quorum of response (even though we 
 globally have a quorum of replica responses).
 In practice, this probably occurs rather rarely and so the simpler fix is 
 probably to do something similar to the short reads protection: detect when 
 this could have happened (based on how replica responses are reconciled) and do 
 an additional request in that case. That detection will have potential false 
 positives but I suspect we can be precise enough that those false positives 
 will be very very rare (we should nonetheless track how often this code gets 
 triggered and if we see that it's more often than we think, we could 
 pro-actively bump user limits internally to reduce those occurrences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: (was: 2.1.5)

 cassandra-stress does not honour consistency level (cl) parameter when used 
 in combination with user command
 

 Key: CASSANDRA-8948
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Andreas Flinck
Assignee: T Jake Luciani
 Fix For: 2.1.5

 Attachments: 8948.txt


 The stress test tool does not honour cl parameter when used in combination 
 with the user command. Consistency level will be default ONE no matter what 
 is set by cl=.
 Works fine with write command.
 How to reproduce:
 1. Create a suitable yaml-file to use in test
 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
 no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
 3. Observe that cl=ONE in trace logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8949) CompressedSequentialWriter.resetAndTruncate can lose data

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8949:
---
Fix Version/s: 2.0.15

 CompressedSequentialWriter.resetAndTruncate can lose data
 -

 Key: CASSANDRA-8949
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8949
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Critical
 Fix For: 2.0.15, 2.1.5


 If the FileMark passed into this method fully fills the buffer, a subsequent 
 call to write will reBuffer and drop the data currently in the buffer. We 
 need to mark the buffer contents as dirty in resetAndTruncate to prevent this 
 - see CASSANDRA-8709 notes for details.





[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: 2.0.15

 cassandra-stress does not honour consistency level (cl) parameter when used 
 in combination with user command
 

 Key: CASSANDRA-8948
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Andreas Flinck
Assignee: T Jake Luciani
 Fix For: 2.1.5

 Attachments: 8948.txt


 The stress test tool does not honour the cl parameter when used in 
 combination with the user command: the consistency level defaults to ONE no 
 matter what is set by cl=.
 It works fine with the write command.
 How to reproduce:
 1. Create a suitable yaml-file to use in test
 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
 no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
 3. Observe that cl=ONE in trace logs





[jira] [Updated] (CASSANDRA-8934) COPY command has inherent 128KB field size limit

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8934:
---
Fix Version/s: 2.0.15

 COPY command has inherent 128KB field size limit
 

 Key: CASSANDRA-8934
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8934
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter:  Brian Hess
Assignee: Philip Thompson
  Labels: cqlsh, docs-impacting
 Fix For: 2.0.15, 2.1.5

 Attachments: 8934-2.0.txt, 8934-2.1.txt


 In using the COPY command as follows:
 {{cqlsh -e "COPY test.test1mb(pkey, ccol, data) FROM 
 'in/data1MB/data1MB_9.csv'"}}
 the following error is thrown:
 {{stdin:1:field larger than field limit (131072)}}
 The data file contains a field that is greater than 128KB (it's more like 
 almost 1MB).
 A work-around (thanks to [~jjordan] and [~thobbs]) is to modify the cqlsh 
 script and add the line
 {{csv.field_size_limit(10)}}
 anywhere after the line
 {{import csv}}





[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-04-03 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394926#comment-14394926
 ] 

Aleksey Yeschenko commented on CASSANDRA-7688:
--

There most definitely won't be a separate CQL command just for that, but when 
we switch this to a virtual table implementation (when we have those) it might 
be as simple as {{UPDATE}}ing a boolean field in that table to trigger recalc.

We could temporarily add a JMX method. Or you could set the interval to be 
really low for now, and add some sleep.

I know it's a bit ugly, but it's just an interim measure.

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.5

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.





[jira] [Updated] (CASSANDRA-7712) temporary files need to be cleaned by unit tests

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7712:
---
Fix Version/s: 2.0.15

 temporary files need to be cleaned by unit tests
 

 Key: CASSANDRA-7712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7712
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
  Labels: bootcamp, lhf
 Fix For: 2.0.15, 2.1.5

 Attachments: 7712-hung-CliTest_system.log.gz, 7712-v2.txt, 
 7712-v3.txt, 7712_workaround.txt, CASSANDRA-7712_apache_cassandra_2.0.txt


 There are many unit test temporary files left behind after test runs. In the 
 case of CI servers, I have seen 70,000 files accumulate in /tmp over a 
 period of time. Each unit test should make an effort to remove its temporary 
 files when the test is completed.
 My current unit test cleanup block:
 {noformat}
 # clean up after unit tests..
 rm -rf  /tmp/140*-0 /tmp/CFWith* /tmp/Counter1* /tmp/DescriptorTest* 
 /tmp/Keyspace1* \
 /tmp/KeyStreamingTransferTestSpace* /tmp/SSTableExportTest* 
 /tmp/SSTableImportTest* \
 /tmp/Standard1* /tmp/Statistics.db* /tmp/StreamingTransferTest* 
 /tmp/ValuesWithQuotes* \
 /tmp/cassandra* /tmp/jna-* /tmp/ks-cf-ib-1-* /tmp/lengthtest* 
 /tmp/liblz4-java*.so /tmp/readtest* \
 /tmp/set_length_during_read_mode* /tmp/set_negative_length* 
 /tmp/snappy-*.so
 {noformat}
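The shell cleanup above treats the symptom; inside the tests themselves, one way to make each test responsible for its own files is to route temp-file creation through a tracking helper that deletes everything it created when the test finishes. A minimal Java sketch of that discipline (the class and helper names are illustrative, not Cassandra's actual test utilities):

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TempFileHygiene {
    // Every temp file a test creates is recorded here so cleanup can remove it.
    static final List<File> created = new ArrayList<>();

    static File tempFile(String prefix) throws IOException {
        File f = File.createTempFile(prefix, ".tmp");
        f.deleteOnExit();   // JVM-exit safety net in case cleanup() is skipped
        created.add(f);
        return f;
    }

    // Called from a test teardown (e.g. an @After method) to remove leftovers.
    static void cleanup() {
        for (File f : created)
            f.delete();
        created.clear();
    }

    public static void main(String[] args) throws IOException {
        File f = tempFile("Keyspace1");
        System.out.println(f.exists()); // file exists while the "test" runs
        cleanup();
        System.out.println(f.exists()); // gone after teardown
    }
}
```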





[jira] [Updated] (CASSANDRA-8950) NullPointerException in nodetool getendpoints with non-existent keyspace or table

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8950:
---
Fix Version/s: 2.0.15

 NullPointerException in nodetool getendpoints with non-existent keyspace or 
 table
 -

 Key: CASSANDRA-8950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8950
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 8950-2.0.txt, 8950-2.1.txt


 If {{nodetool getendpoints}} is run with a non-existent keyspace or table, a 
 NullPointerException will occur:
 {noformat}
 ~/cassandra $ bin/nodetool getendpoints badkeyspace badtable mykey
 error: null
 -- StackTrace --
 java.lang.NullPointerException
   at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2914)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}
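One way to avoid this class of NPE is to validate the keyspace and table names before resolving endpoints, failing with a clear message instead. A toy Java sketch of that guard, using a plain map in place of Cassandra's real schema objects (all names here are illustrative, not the committed fix):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GetEndpointsGuard {
    // Validate inputs up front so callers get a useful error, not an NPE.
    static List<String> getNaturalEndpoints(Map<String, Set<String>> schema,
                                            String keyspace, String table) {
        Set<String> tables = schema.get(keyspace);
        if (tables == null)
            throw new IllegalArgumentException("Unknown keyspace: " + keyspace);
        if (!tables.contains(table))
            throw new IllegalArgumentException("Unknown table: " + keyspace + "." + table);
        return Arrays.asList("127.0.0.1"); // placeholder for the real endpoint lookup
    }

    public static void main(String[] args) {
        Map<String, Set<String>> schema = new HashMap<>();
        schema.put("ks", new HashSet<>(Collections.singletonList("t")));
        try {
            getNaturalEndpoints(schema, "badkeyspace", "badtable");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```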





[jira] [Updated] (CASSANDRA-8559) OOM caused by large tombstone warning.

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8559:
---
Fix Version/s: 2.0.15

 OOM caused by large tombstone warning.
 --

 Key: CASSANDRA-8559
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8559
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.0.11 / 2.1
Reporter: Dominic Letz
Assignee: Aleksey Yeschenko
  Labels: tombstone
 Fix For: 2.0.15, 2.1.5

 Attachments: 8559.txt, Selection_048.png, cassandra-2.0.11-8559.txt, 
 stacktrace.log


 When running with high amount of tombstones the error message generation from 
 CASSANDRA-6117 can lead to out of memory situation with the default setting.
 Attached a heapdump viewed in visualvm showing how this construct created two 
 777mb strings to print the error message for a read query and then crashed 
 OOM.
 {code}
 if (respectTombstoneThresholds() && columnCounter.ignored() > 
 DatabaseDescriptor.getTombstoneWarnThreshold())
 {
 StringBuilder sb = new StringBuilder();
 CellNameType type = container.metadata().comparator;
 for (ColumnSlice sl : slices)
 {
 assert sl != null;
 sb.append('[');
 sb.append(type.getString(sl.start));
 sb.append('-');
 sb.append(type.getString(sl.finish));
 sb.append(']');
 }
 logger.warn("Read {} live and {} tombstoned cells in {}.{} (see 
 tombstone_warn_threshold). {} columns was requested, slices={}, delInfo={}",
 columnCounter.live(), columnCounter.ignored(), 
 container.metadata().ksName, container.metadata().cfName, count, sb, 
 container.deletionInfo());
 }
 {code}
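Independent of fixing the threshold logic, a defensive mitigation is to cap the slice-description string at a fixed length so the warning can never allocate hundreds of megabytes. A hedged Java sketch of that idea (the method, cap, and slice representation are illustrative, not the committed fix):

```java
public class CappedSliceDescription {
    // Build the "[start-finish]" description but stop once it reaches maxLen,
    // so a pathological number of slices cannot blow up the heap.
    static String describeSlices(String[][] slices, int maxLen) {
        StringBuilder sb = new StringBuilder();
        for (String[] sl : slices) {
            if (sb.length() >= maxLen) {
                sb.append("...");   // signal truncation instead of growing forever
                break;
            }
            sb.append('[').append(sl[0]).append('-').append(sl[1]).append(']');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[][] slices = { {"a", "b"}, {"c", "d"} };
        System.out.println(describeSlices(slices, 1024));
    }
}
```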





[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394805#comment-14394805
 ] 

Jonathan Ellis commented on CASSANDRA-7066:
---

I like it.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to cleanup incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely not clean up files 
 that have been superseded); and 2) it's only used for regular compaction - 
 no other compaction types are guarded in the same way, so they can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store in its metadata its direct ancestors, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
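The proposed startup cleanup boils down to a set computation: take the union of every live sstable's ancestor set, and delete any live sstable whose generation appears in that union. A toy Java model of the idea, with generation numbers standing in for sstable files (illustrative only, not Cassandra's actual code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AncestorCleanup {
    // Each sstable's metadata records the generations it was compacted from
    // ("ancestors"). Any on-disk sstable that appears in the union of all
    // ancestor sets is a compaction leftover and can be deleted on startup.
    static Set<Integer> leftovers(Map<Integer, Set<Integer>> ancestorsByGeneration) {
        Set<Integer> union = new HashSet<>();
        for (Set<Integer> ancestors : ancestorsByGeneration.values())
            union.addAll(ancestors);
        // Only generations that are still on disk need deleting.
        union.retainAll(ancestorsByGeneration.keySet());
        return union;
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> live = new HashMap<>();
        live.put(5, new HashSet<>(Arrays.asList(1, 2))); // 5 was compacted from 1 and 2
        live.put(1, new HashSet<>());                    // 1 is still on disk: a leftover
        System.out.println(leftovers(live));
    }
}
```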





[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394656#comment-14394656
 ] 

Jonathan Ellis commented on CASSANDRA-7066:
---

bq.  if we're compacting multiple files into one, we write that the new file(s) 
are in progress, then when they're done, we write a new log file saying we're 
swapping these files (as a checkpoint), then clear the in progress log file 
and write that we're deleting the old files, followed by immediately 
promoting the new ones and deleting our swapping log entry

Since all writes are idempotent now I think we are okay simplifying this to

... write that the new file(s) are in progress, then when they're done, we 
clear the in progress log file and delete the old files.  If the process dies 
in between those two steps (very rare, deletes are fast), we have some extra 
redundant data left but correctness is preserved.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to cleanup incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely not clean up files 
 that have been superseded); and 2) it's only used for regular compaction - 
 no other compaction types are guarded in the same way, so they can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store in its metadata its direct ancestors, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.





[jira] [Resolved] (CASSANDRA-8952) Remove transient RandomAccessFile usage

2015-04-03 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-8952.

Resolution: Fixed

In retrospect, linking to line #'s on trunk in a ticket isn't useful.  Changes 
look good and cover the few places I had concerns about w/regards to Windows.

Committed w/1 nit: added copyright header to the 2 new test files.

 Remove transient RandomAccessFile usage
 ---

 Key: CASSANDRA-8952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Stefania
Priority: Minor
  Labels: Windows
 Fix For: 3.0


 There are a few places within the code base where we use a RandomAccessFile 
 transiently to either grab fd's or channels for other operations. This is 
 prone to access violations on Windows (see CASSANDRA-4050 and CASSANDRA-8709) 
 - while these usages don't appear to be causing issues at this time there's 
 no reason to keep them. The less RandomAccessFile usage in the code-base the 
 more stable we'll be on Windows.
 [SSTableReader.dropPageCache|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L2021]
 * Used to getFD, have FileChannel version
 [FileUtils.truncate|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/FileUtils.java#L188]
 * Used to get file channel for channel truncate call. Only use is in index 
 file close so channel truncation down-only is acceptable.
 [MMappedSegmentedFile.createSegments|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/util/MmappedSegmentedFile.java#L196]
 * Used to get file channel for mapping.
 Keeping these in a single ticket as all three should be fairly trivial 
 refactors.
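The shape of each refactor is the same: obtain a FileChannel directly through NIO instead of going through a transient RandomAccessFile. A sketch of what the FileUtils.truncate case could look like (illustrative, not the committed patch):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelTruncate {
    // Truncate via a FileChannel opened directly, avoiding the transient
    // RandomAccessFile that can trigger access violations on Windows.
    static void truncate(Path path, long size) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            ch.truncate(size);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("trunc", ".db");
        Files.write(p, new byte[100]);  // 100-byte file
        truncate(p, 40);                // truncate down to 40 bytes
        System.out.println(Files.size(p));
        Files.delete(p);
    }
}
```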





[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7816:
---
Fix Version/s: 2.0.15

 Duplicate DOWN/UP Events Pushed with Native Protocol
 

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Michael Penick
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 7816-v2.0.txt, tcpdump_repeating_status_change.txt, 
 trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.





[jira] [Updated] (CASSANDRA-8734) Expose commit log archive status

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8734:
---
Fix Version/s: 2.0.15

 Expose commit log archive status
 

 Key: CASSANDRA-8734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8734
 Project: Cassandra
  Issue Type: New Feature
  Components: Config
Reporter: Philip S Doctor
Assignee: Chris Lohfink
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 8734-cassandra-2.0.txt, 8734-cassandra-2.1.txt


 The operational procedure to modify commit log archiving is to edit 
 commitlog_archiving.properties and then perform a restart.  However this has 
 troublesome edge cases:
 1) It is possible for people to modify commitlog_archiving.properties but 
 then not perform a restart
 2) It is possible for people to modify commitlog_archiving.properties only on 
 some nodes
 3) It is possible for people to modify the file and restart, but then later 
 add more nodes without the correct modifications.
 Because of these reasons, it is operationally useful to be able to audit the 
 commit log archive state of a node.  Simply parsing 
 commitlog_archiving.properties is insufficient due to #1.  
 I would suggest that exposing this via a system table or JMX would be useful.





[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2341e945
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2341e945
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2341e945

Branch: refs/heads/trunk
Commit: 2341e945b950afd631faaad9189e61191d2cc2fe
Parents: 7d68ced e4072cf
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Apr 3 20:58:37 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:58:37 2015 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/tools/SSTableOfflineRelevel.java | 14 --
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2341e945/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2341e945/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
--



[jira] [Updated] (CASSANDRA-8056) nodetool snapshot keyspace -cf table -t sametagname does not work on multiple tables of the same keyspace

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8056:
---
Fix Version/s: 2.0.15

 nodetool snapshot keyspace -cf table -t sametagname does not work on 
 multiple tables of the same keyspace
 --

 Key: CASSANDRA-8056
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8056
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Cassandra 2.0.6 debian wheezy and squeeze
Reporter: Esha Pathak
Priority: Trivial
  Labels: lhf
 Fix For: 2.0.15, 2.1.5

 Attachments: CASSANDRA-8056.txt


 Scenario: keyspace thing has tables thing:user, thing:object, thing:user_details
 Steps to reproduce:
 1. nodetool snapshot thing --column-family user --tag tagname
   Requested creating snapshot for: thing and table: user
   Snapshot directory: tagname
 2. nodetool snapshot thing --column-family object --tag tagname
 Requested creating snapshot for: thing and table: object
 Exception in thread "main" java.io.IOException: Snapshot tagname already 
 exists.
   at 
 org.apache.cassandra.service.StorageService.takeColumnFamilySnapshot(StorageService.java:2274)
   at sun.reflect.GeneratedMethodAccessor129.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)





[jira] [Comment Edited] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394661#comment-14394661
 ] 

Philip Thompson edited comment on CASSANDRA-8905 at 4/3/15 4:40 PM:


If this was fixed by scrub, it was most likely a corrupted sstable.


was (Author: philipthompson):
Fixed by scrub. Probably corrupted sstable.

 IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
 ---

 Key: CASSANDRA-8905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
 Fix For: 2.0.15


 After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
 {noformat}
 ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
 java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
 at java.util.ArrayList.&lt;init&gt;(ArrayList.java:142)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.&lt;init&gt;(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I've identified which sstable is causing this, it's an -ic- format sstable, 
 i.e. something written before the upgrade. I can repeat with 
 forceUserDefinedCompaction.
 Running upgradesstables also causes the same exception. 
 Scrub helps, but skips a row as incorrect. 
 I can share the sstable privately if it helps.





[jira] [Updated] (CASSANDRA-9036) disk full when running cleanup (on a far from full disk)

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9036:
---
Fix Version/s: 2.0.15

 disk full when running cleanup (on a far from full disk)
 --

 Key: CASSANDRA-9036
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9036
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
Assignee: Robert Stupp
 Fix For: 2.0.15, 2.1.5

 Attachments: 9036-2.0.txt, 9036-2.1.txt, 9036-3.0.txt


 I'm trying to run cleanup, but get this:
 {noformat}
  INFO [CompactionExecutor:18] 2015-03-25 10:29:16,355 CompactionManager.java 
 (line 564) Cleaning up 
 SSTableReader(path='/cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db')
 ERROR [CompactionExecutor:18] 2015-03-25 10:29:16,664 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:18,1,main]
 java.io.IOException: disk full
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:567)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:63)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:281)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:225)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Now that's odd, since:
 * Disk has some 680G left
 * The sstable it's trying to cleanup is far less than 680G:
 {noformat}
 # ls -lh *4345750*
 -rw-r--r-- 1 cassandra cassandra  64M Mar 21 04:42 
 production-Data_daily-jb-4345750-CompressionInfo.db
 -rw-r--r-- 1 cassandra cassandra 219G Mar 21 04:42 
 production-Data_daily-jb-4345750-Data.db
 -rw-r--r-- 1 cassandra cassandra 503M Mar 21 04:42 
 production-Data_daily-jb-4345750-Filter.db
 -rw-r--r-- 1 cassandra cassandra  42G Mar 21 04:42 
 production-Data_daily-jb-4345750-Index.db
 -rw-r--r-- 1 cassandra cassandra 5.9K Mar 21 04:42 
 production-Data_daily-jb-4345750-Statistics.db
 -rw-r--r-- 1 cassandra cassandra  81M Mar 21 04:42 
 production-Data_daily-jb-4345750-Summary.db
 -rw-r--r-- 1 cassandra cassandra   79 Mar 21 04:42 
 production-Data_daily-jb-4345750-TOC.txt
 {noformat}
 Sure, it's large, but it's not 680G. 
 No other compactions are running on that server. I'm getting this on 12 / 56 
 servers right now. 
 Could it be some bug in the calculation of the expected size of the new 
 sstable, perhaps? 
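For reference, the pre-flight check being questioned here amounts to comparing an estimated output size against free disk space; if the estimate is computed from the wrong inputs (or overflows), cleanup aborts with "disk full" even when there is plenty of room. A toy Java illustration of that comparison, using the sizes from this report (the estimate logic is hypothetical, not Cassandra's actual calculation):

```java
public class CleanupSpaceCheck {
    // Estimate the cleaned sstable's size as a fraction of the input and
    // compare it against free space. If the estimate is wrong (overflow,
    // summing unrelated files, etc.), this check rejects valid cleanups.
    static boolean hasRoom(long inputBytes, double keepRatio, long freeBytes) {
        long estimated = (long) (inputBytes * keepRatio);
        return estimated <= freeBytes;
    }

    public static void main(String[] args) {
        long input = 219L * 1024 * 1024 * 1024; // 219G Data.db from the report
        long free  = 680L * 1024 * 1024 * 1024; // 680G free on the disk
        // Even keeping 100% of the data, a 219G output fits in 680G.
        System.out.println(hasRoom(input, 1.0, free));
    }
}
```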





[jira] [Commented] (CASSANDRA-8584) Add strerror output on failed trySkipCache calls

2015-04-03 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394779#comment-14394779
 ] 

Joshua McKenzie commented on CASSANDRA-8584:


Removed NoSpamLogger in deference to CASSANDRA-9029. A quick run against 
2.1-HEAD w/this patch gives:
{noformat}
grep trySkipCache 8584_utest.txt | wc -l
432
{noformat}

I'll either wait until 9029's in or track down the source of the failing 
trySkipCache calls and create another ticket for that. I'd prefer to have a 
clean slate w/regards to our page-cache prompting before committing this.

 Add strerror output on failed trySkipCache calls
 

 Key: CASSANDRA-8584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8584
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Ariel Weisberg
Priority: Trivial
 Fix For: 2.1.5

 Attachments: 8584_v1.txt, NoSpamLogger.java, nospamlogger.txt


 Since trySkipCache returns an errno directly, rather than returning -1 and
 setting errno like our other CLibrary calls, it's thread-safe, and we could
 print more helpful information when we fail to prompt the kernel to skip the
 page cache.  That system call should always succeed unless we have an invalid
 fd, since the kernel is free to ignore the advice.
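The idea in the description, that posix_fadvise reports failure by returning an error code directly, which strerror can turn into readable text, can be sketched with Python's stdlib bindings. This mirrors the concept only (Unix-only; it is not Cassandra's Java CLibrary code, and `try_skip_cache` is an invented name):

```python
import errno
import os

def try_skip_cache(fd):
    """Advise the kernel to drop the page cache for fd.  On failure,
    return a message including the strerror() text for the error code;
    on success, return None.  Sketch of the ticket's idea only."""
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        return None
    except OSError as e:
        # e.errno carries the code the kernel returned; strerror makes
        # it human-readable, which is the improvement the ticket wants.
        return "fadvise failed: %s (errno %d)" % (os.strerror(e.errno), e.errno)

# An invalid fd should always fail, and now the log line says why:
msg = try_skip_cache(-1)
```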



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Don't include tmp files in offline relevel

2015-04-03 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7d68cedec -> 2341e945b


Don't include tmp files in offline relevel

Patch by marcuse; reviewed by carlyeks for CASSANDRA-9088


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67038a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67038a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67038a32

Branch: refs/heads/trunk
Commit: 67038a32e118c6a8a0a9de50c8c099b85ccd7b07
Parents: 2e6492a
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Apr 1 17:28:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:53:23 2015 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/tools/SSTableOfflineRelevel.java  | 16 +---
 2 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c569de5..28f79f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.15:
+ * Don't include tmp files when doing offline relevel (CASSANDRA-9088)
  * Use the proper CAS WriteType when finishing a previous round during Paxos
preparation (CASSANDRA-8672)
  * Avoid race in cancelling compactions (CASSANDRA-9070)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java 
b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
index 3fb2f7a..6293faa 100644
--- a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
+++ b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
@@ -28,6 +28,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import com.google.common.base.Throwables;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.DecoratedKey;
@@ -93,12 +95,20 @@ public class SSTableOfflineRelevel
         Keyspace.openWithoutSSTables(keyspace);
         Directories directories = Directories.create(keyspace, columnfamily);
         Set<SSTableReader> sstables = new HashSet<>();
-        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().list().entrySet())
+        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().skipTemporary(true).list().entrySet())
         {
             if (sstable.getKey() != null)
             {
-                SSTableReader reader = SSTableReader.open(sstable.getKey());
-                sstables.add(reader);
+                try
+                {
+                    SSTableReader reader = SSTableReader.open(sstable.getKey());
+                    sstables.add(reader);
+                }
+                catch (Throwable t)
+                {
+                    out.println("Couldn't open sstable: " + sstable.getKey().filenameFor(Component.DATA));
+                    Throwables.propagate(t);
+                }
             }
         }
         if (sstables.isEmpty())



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-04-03 Thread marcuse
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4072cf0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4072cf0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4072cf0

Branch: refs/heads/trunk
Commit: e4072cf09f92385eac1a64525e8ec8b2624d94cd
Parents: 9449a70 67038a3
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Apr 3 20:58:15 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:58:15 2015 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/tools/SSTableOfflineRelevel.java | 14 --
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4072cf0/CHANGES.txt
--
diff --cc CHANGES.txt
index b1499c1,28f79f9..5cd914a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,73 -1,5 +1,74 @@@
 -2.0.15:
 +2.1.5
 + * cqlsh: Make CompositeType data readable (CASSANDRA-8919)
 + * cqlsh: Fix display of triggers (CASSANDRA-9081)
 + * Fix NullPointerException when deleting or setting an element by index on
 +   a null list collection (CASSANDRA-9077)
 + * Buffer bloom filter serialization (CASSANDRA-9066)
 + * Fix anti-compaction target bloom filter size (CASSANDRA-9060)
 + * Make FROZEN and TUPLE unreserved keywords in CQL (CASSANDRA-9047)
 + * Prevent AssertionError from SizeEstimatesRecorder (CASSANDRA-9034)
 + * Avoid overwriting index summaries for sstables with an older format that
 +   does not support downsampling; rebuild summaries on startup when this
 +   is detected (CASSANDRA-8993)
 + * Fix potential data loss in CompressedSequentialWriter (CASSANDRA-8949)
 + * Make PasswordAuthenticator number of hashing rounds configurable (CASSANDRA-8085)
 + * Fix AssertionError when binding nested collections in DELETE (CASSANDRA-8900)
 + * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
 + * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
 + * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
 + * IndexSummary effectiveIndexInterval is now a guideline, not a rule (CASSANDRA-8993)
 + * Use correct bounds for page cache eviction of compressed files (CASSANDRA-8746)
 + * SSTableScanner enforces its bounds (CASSANDRA-8946)
 + * Cleanup cell equality (CASSANDRA-8947)
 + * Introduce intra-cluster message coalescing (CASSANDRA-8692)
 + * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
 + * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)
 + * Don't set clientMode in SSTableLoader (CASSANDRA-8238)
 + * Fix SSTableRewriter with disabled early open (CASSANDRA-8535)
 + * Allow invalidating permissions and cache time (CASSANDRA-8722)
 + * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
 +   are executed (CASSANDRA-8418)
 + * Fix cassandra-stress so it respects the CL passed in user mode (CASSANDRA-8948)
 + * Fix rare NPE in ColumnDefinition#hasIndexOption() (CASSANDRA-8786)
 + * cassandra-stress reports per-operation statistics, plus misc (CASSANDRA-8769)
 + * Add SimpleDate (cql date) and Time (cql time) types (CASSANDRA-7523)
 + * Use long for key count in cfstats (CASSANDRA-8913)
 + * Make SSTableRewriter.abort() more robust to failure (CASSANDRA-8832)
 + * Remove cold_reads_to_omit from STCS (CASSANDRA-8860)
 + * Make EstimatedHistogram#percentile() use ceil instead of floor (CASSANDRA-8883)
 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
 + * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366)
 + * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
 + * Fix parallelism adjustment in range and secondary index queries
 +   when the first fetch does not satisfy the limit (CASSANDRA-8856)
 + * Check if the filtered sstables is non-empty in STCS (CASSANDRA-8843)
 + * Upgrade java-driver used for cassandra-stress (CASSANDRA-8842)
 + * Fix CommitLog.forceRecycleAllSegments() memory access error (CASSANDRA-8812)
 + * Improve assertions in Memory (CASSANDRA-8792)
 + * Fix SSTableRewriter cleanup (CASSANDRA-8802)
 + * Introduce SafeMemory for CompressionMetadata.Writer (CASSANDRA-8758)
 + * 'nodetool info' prints exception against older node (CASSANDRA-8796)
 + * Ensure SSTableReader.last corresponds exactly with the file end (CASSANDRA-8750)
 + * Make SSTableWriter.openEarly more robust and obvious (CASSANDRA-8747)
 + * Enforce SSTableReader.first/last (CASSANDRA-8744)
 + * Cleanup 

[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-04-03 Thread marcuse
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4072cf0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4072cf0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4072cf0

Branch: refs/heads/cassandra-2.1
Commit: e4072cf09f92385eac1a64525e8ec8b2624d94cd
Parents: 9449a70 67038a3
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Apr 3 20:58:15 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:58:15 2015 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/tools/SSTableOfflineRelevel.java | 14 --
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4072cf0/CHANGES.txt
--
diff --cc CHANGES.txt
index b1499c1,28f79f9..5cd914a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,73 -1,5 +1,74 @@@
 -2.0.15:
 +2.1.5
 + * cqlsh: Make CompositeType data readable (CASSANDRA-8919)
 + * cqlsh: Fix display of triggers (CASSANDRA-9081)
 + * Fix NullPointerException when deleting or setting an element by index on
 +   a null list collection (CASSANDRA-9077)
 + * Buffer bloom filter serialization (CASSANDRA-9066)
 + * Fix anti-compaction target bloom filter size (CASSANDRA-9060)
 + * Make FROZEN and TUPLE unreserved keywords in CQL (CASSANDRA-9047)
 + * Prevent AssertionError from SizeEstimatesRecorder (CASSANDRA-9034)
 + * Avoid overwriting index summaries for sstables with an older format that
 +   does not support downsampling; rebuild summaries on startup when this
 +   is detected (CASSANDRA-8993)
 + * Fix potential data loss in CompressedSequentialWriter (CASSANDRA-8949)
 + * Make PasswordAuthenticator number of hashing rounds configurable (CASSANDRA-8085)
 + * Fix AssertionError when binding nested collections in DELETE (CASSANDRA-8900)
 + * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
 + * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
 + * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
 + * IndexSummary effectiveIndexInterval is now a guideline, not a rule (CASSANDRA-8993)
 + * Use correct bounds for page cache eviction of compressed files (CASSANDRA-8746)
 + * SSTableScanner enforces its bounds (CASSANDRA-8946)
 + * Cleanup cell equality (CASSANDRA-8947)
 + * Introduce intra-cluster message coalescing (CASSANDRA-8692)
 + * DatabaseDescriptor throws NPE when rpc_interface is used (CASSANDRA-8839)
 + * Don't check if an sstable is live for offline compactions (CASSANDRA-8841)
 + * Don't set clientMode in SSTableLoader (CASSANDRA-8238)
 + * Fix SSTableRewriter with disabled early open (CASSANDRA-8535)
 + * Allow invalidating permissions and cache time (CASSANDRA-8722)
 + * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
 +   are executed (CASSANDRA-8418)
 + * Fix cassandra-stress so it respects the CL passed in user mode (CASSANDRA-8948)
 + * Fix rare NPE in ColumnDefinition#hasIndexOption() (CASSANDRA-8786)
 + * cassandra-stress reports per-operation statistics, plus misc (CASSANDRA-8769)
 + * Add SimpleDate (cql date) and Time (cql time) types (CASSANDRA-7523)
 + * Use long for key count in cfstats (CASSANDRA-8913)
 + * Make SSTableRewriter.abort() more robust to failure (CASSANDRA-8832)
 + * Remove cold_reads_to_omit from STCS (CASSANDRA-8860)
 + * Make EstimatedHistogram#percentile() use ceil instead of floor (CASSANDRA-8883)
 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
 + * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366)
 + * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
 + * Fix parallelism adjustment in range and secondary index queries
 +   when the first fetch does not satisfy the limit (CASSANDRA-8856)
 + * Check if the filtered sstables is non-empty in STCS (CASSANDRA-8843)
 + * Upgrade java-driver used for cassandra-stress (CASSANDRA-8842)
 + * Fix CommitLog.forceRecycleAllSegments() memory access error (CASSANDRA-8812)
 + * Improve assertions in Memory (CASSANDRA-8792)
 + * Fix SSTableRewriter cleanup (CASSANDRA-8802)
 + * Introduce SafeMemory for CompressionMetadata.Writer (CASSANDRA-8758)
 + * 'nodetool info' prints exception against older node (CASSANDRA-8796)
 + * Ensure SSTableReader.last corresponds exactly with the file end (CASSANDRA-8750)
 + * Make SSTableWriter.openEarly more robust and obvious (CASSANDRA-8747)
 + * Enforce SSTableReader.first/last (CASSANDRA-8744)
 + * Cleanup 

cassandra git commit: Don't include tmp files in offline relevel

2015-04-03 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 2e6492a18 -> 67038a32e


Don't include tmp files in offline relevel

Patch by marcuse; reviewed by carlyeks for CASSANDRA-9088


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67038a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67038a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67038a32

Branch: refs/heads/cassandra-2.0
Commit: 67038a32e118c6a8a0a9de50c8c099b85ccd7b07
Parents: 2e6492a
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Apr 1 17:28:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:53:23 2015 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/tools/SSTableOfflineRelevel.java  | 16 +---
 2 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c569de5..28f79f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.15:
+ * Don't include tmp files when doing offline relevel (CASSANDRA-9088)
  * Use the proper CAS WriteType when finishing a previous round during Paxos
preparation (CASSANDRA-8672)
  * Avoid race in cancelling compactions (CASSANDRA-9070)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java 
b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
index 3fb2f7a..6293faa 100644
--- a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
+++ b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
@@ -28,6 +28,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import com.google.common.base.Throwables;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.DecoratedKey;
@@ -93,12 +95,20 @@ public class SSTableOfflineRelevel
         Keyspace.openWithoutSSTables(keyspace);
         Directories directories = Directories.create(keyspace, columnfamily);
         Set<SSTableReader> sstables = new HashSet<>();
-        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().list().entrySet())
+        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().skipTemporary(true).list().entrySet())
         {
             if (sstable.getKey() != null)
             {
-                SSTableReader reader = SSTableReader.open(sstable.getKey());
-                sstables.add(reader);
+                try
+                {
+                    SSTableReader reader = SSTableReader.open(sstable.getKey());
+                    sstables.add(reader);
+                }
+                catch (Throwable t)
+                {
+                    out.println("Couldn't open sstable: " + sstable.getKey().filenameFor(Component.DATA));
+                    Throwables.propagate(t);
+                }
             }
         }
         if (sstables.isEmpty())



[1/2] cassandra git commit: Don't include tmp files in offline relevel

2015-04-03 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 9449a7016 -> e4072cf09


Don't include tmp files in offline relevel

Patch by marcuse; reviewed by carlyeks for CASSANDRA-9088


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67038a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67038a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67038a32

Branch: refs/heads/cassandra-2.1
Commit: 67038a32e118c6a8a0a9de50c8c099b85ccd7b07
Parents: 2e6492a
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Apr 1 17:28:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Apr 3 20:53:23 2015 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/tools/SSTableOfflineRelevel.java  | 16 +---
 2 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c569de5..28f79f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.15:
+ * Don't include tmp files when doing offline relevel (CASSANDRA-9088)
  * Use the proper CAS WriteType when finishing a previous round during Paxos
preparation (CASSANDRA-8672)
  * Avoid race in cancelling compactions (CASSANDRA-9070)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/67038a32/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java 
b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
index 3fb2f7a..6293faa 100644
--- a/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
+++ b/src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java
@@ -28,6 +28,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import com.google.common.base.Throwables;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.DecoratedKey;
@@ -93,12 +95,20 @@ public class SSTableOfflineRelevel
         Keyspace.openWithoutSSTables(keyspace);
         Directories directories = Directories.create(keyspace, columnfamily);
         Set<SSTableReader> sstables = new HashSet<>();
-        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().list().entrySet())
+        for (Map.Entry<Descriptor, Set<Component>> sstable : directories.sstableLister().skipTemporary(true).list().entrySet())
        {
             if (sstable.getKey() != null)
             {
-                SSTableReader reader = SSTableReader.open(sstable.getKey());
-                sstables.add(reader);
+                try
+                {
+                    SSTableReader reader = SSTableReader.open(sstable.getKey());
+                    sstables.add(reader);
+                }
+                catch (Throwable t)
+                {
+                    out.println("Couldn't open sstable: " + sstable.getKey().filenameFor(Component.DATA));
+                    Throwables.propagate(t);
+                }
             }
         }
         if (sstables.isEmpty())



[jira] [Commented] (CASSANDRA-9114) cqlsh: Formatting of map contents broken

2015-04-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394929#comment-14394929
 ] 

Philip Thompson commented on CASSANDRA-9114:


+1

 cqlsh: Formatting of map contents broken
 

 Key: CASSANDRA-9114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
  Labels: cqlsh
 Fix For: 2.1.5

 Attachments: 9114-2.1.txt


 In CASSANDRA-9081, we upgraded the bundled python driver to version 2.5.0.  
 This upgrade changed the class that's used for map collections, and we failed 
 to add a new formatting adaptor for the new class.
 This was causing the {{cqlsh_tests.TestCqlsh.test_eat_glass}} dtest to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394649#comment-14394649
 ] 

Marcus Eriksson commented on CASSANDRA-8905:


[~philipthompson] no, then I assume it is a corrupt sstable

 IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
 ---

 Key: CASSANDRA-8905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
 Fix For: 2.0.15


 After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
 {noformat}
 ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
 java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
     at java.util.ArrayList.<init>(ArrayList.java:142)
     at org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
     at org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
     at org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
     at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
     at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
     at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
     at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
     at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
     at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
     at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
     at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
     at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
     at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
     at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
     at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
     at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
     at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I've identified which sstable is causing this: it's an -ic- format sstable,
 i.e. one written before the upgrade. I can reproduce the error with
 forceUserDefinedCompaction.
 Running upgradesstables also causes the same exception.
 Scrub helps, but skips a row as incorrect.
 I can share the sstable privately if that helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8085) Make PasswordAuthenticator number of hashing rounds configurable

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8085:
---
Fix Version/s: 2.0.15

 Make PasswordAuthenticator number of hashing rounds configurable
 

 Key: CASSANDRA-8085
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8085
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Sam Tunnicliffe
 Fix For: 2.0.15, 2.1.5

 Attachments: 8085-2.0.txt, 8085-2.1.txt, 8085-3.0.txt


 Running 2^10 rounds of bcrypt can take a while.  In environments (like PHP) 
 where connections are not typically long-lived, authenticating can add 
 substantial overhead.  On IRC, one user saw the time to connect, 
 authenticate, and execute a query jump from 5ms to 150ms with authentication 
 enabled ([debug logs|http://pastebin.com/bSUufbr0]).
 CASSANDRA-7715 is a more complete fix for this, but in the meantime (and even 
 after 7715), this is a good option.
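The cost scaling behind this request can be illustrated with a stdlib key-derivation function. PBKDF2 is used here purely as a stand-in for bcrypt, since both expose a tunable work factor (bcrypt's cost parameter doubles the work per increment, while PBKDF2 scales linearly in its iteration count); the function names are invented for the sketch:

```python
import hashlib
import os
import time

def hash_password(password, iterations):
    """Hash with a tunable work factor, analogous to what
    PasswordAuthenticator does with bcrypt rounds (PBKDF2 stands in
    for bcrypt here)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def time_hash(iterations):
    # Wall-clock cost of a single authentication-style hash.
    start = time.perf_counter()
    hash_password("s3cret", iterations)
    return time.perf_counter() - start

# More iterations means proportionally more CPU per authentication,
# which is exactly the per-connection overhead the ticket wants to
# make configurable for short-lived clients.
cheap, expensive = time_hash(1_000), time_hash(200_000)
```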



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8740) java.lang.AssertionError when reading saved cache

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8740:
---
Fix Version/s: 2.0.15

 java.lang.AssertionError when reading saved cache
 -

 Key: CASSANDRA-8740
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8740
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: OEL 6.5, DSE 4.6.0, Cassandra 2.0.11.83
Reporter: Nikolai Grigoriev
Assignee: Dave Brosius
 Fix For: 2.0.15, 2.1.5

 Attachments: 8740.txt


 I have started seeing it recently. I'm not sure since which version, but it
 now happens relatively often on some of my nodes.
 {code}
  INFO [main] 2015-02-04 18:18:09,253 ColumnFamilyStore.java (line 249) Initializing duo_xxx
  INFO [main] 2015-02-04 18:18:09,254 AutoSavingCache.java (line 114) reading saved cache /var/lib/cassandra/saved_caches/duo_xxx-RowCache-b.db
 ERROR [main] 2015-02-04 18:18:09,256 CassandraDaemon.java (line 513) Exception encountered during startup
 java.lang.AssertionError
     at org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:41)
     at org.apache.cassandra.cache.SerializingCacheProvider$RowCacheSerializer.serialize(SerializingCacheProvider.java:37)
     at org.apache.cassandra.cache.SerializingCache.serialize(SerializingCache.java:118)
     at org.apache.cassandra.cache.SerializingCache.put(SerializingCache.java:177)
     at org.apache.cassandra.cache.InstrumentingCache.put(InstrumentingCache.java:44)
     at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:130)
     at org.apache.cassandra.db.ColumnFamilyStore.initRowCache(ColumnFamilyStore.java:592)
     at org.apache.cassandra.db.Keyspace.open(Keyspace.java:119)
     at org.apache.cassandra.db.Keyspace.open(Keyspace.java:92)
     at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:305)
     at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:419)
     at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
     at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:659)
  INFO [Thread-2] 2015-02-04 18:18:09,259 DseDaemon.java (line 505) DSE shutting down...
 ERROR [Thread-2] 2015-02-04 18:18:09,279 CassandraDaemon.java (line 199) Exception in thread Thread[Thread-2,5,main]
 java.lang.AssertionError
     at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1274)
     at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:171)
     at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:506)
     at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:408)
  INFO [main] 2015-02-04 18:18:49,144 CassandraDaemon.java (line 135) Logging initialized
  INFO [main] 2015-02-04 18:18:49,169 DseDaemon.java (line 382) DSE version: 4.6.0
 {code}
 Cassandra version: 2.0.11.83 (DSE 4.6.0)
 Looks like similar issues were reported and fixed in the past - like 
 CASSANDRA-6325.
 Maybe I am missing something, but I think Cassandra should not crash at
 startup just because it cannot read a saved cache. A bad saved cache does not
 make the node inoperable and does not necessarily indicate severe data
 corruption. I applied a small configuration change to my cluster, restarted
 it, and 30% of my nodes did not start because of this. The fix is simple, but
 it requires going to every node that failed to start, wiping the cache, and
 starting it again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8360) In DTCS, always compact SSTables in the same time window, even if they are fewer than min_threshold

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8360:
---
Fix Version/s: 2.0.15

 In DTCS, always compact SSTables in the same time window, even if they are 
 fewer than min_threshold
 ---

 Key: CASSANDRA-8360
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8360
 Project: Cassandra
  Issue Type: Improvement
Reporter: Björn Hegerfors
Assignee: Björn Hegerfors
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: cassandra-2.0-CASSANDRA-8360.txt


 DTCS uses min_threshold to decide how many time windows of the same size that 
 need to accumulate before merging into a larger window. The age of an SSTable 
 is determined as its min timestamp, and it always falls into exactly one of 
 the time windows. If multiple SSTables fall into the same window, DTCS 
 considers compacting them, but if they are fewer than min_threshold, it 
 decides not to do it.
 You might ask: when do more than one but fewer than min_threshold SSTables
 end up in the same time window (other than the current one)? In its current
 state, DTCS can spill extra SSTables into bigger windows whenever the
 previous window wasn't fully compacted, which happens all the time when the
 latest window stops being the current one. Repairs and hints can also put new
 SSTables into old windows.
 I think, and [~jjordan] agreed in a comment on CASSANDRA-6602, that DTCS 
 should ignore min_threshold and compact tables in the same windows regardless 
 of how few they are. I guess max_threshold should still be respected.
 [~jjordan] suggested that this should apply to all windows but the current 
 window, where all the new SSTables end up. That could make sense. I'm not 
 clear on whether compacting many SSTables at once is more cost efficient or 
 not, when it comes to the very newest and smallest SSTables. Maybe compacting 
 as soon as 2 SSTables are seen is fine if the initial window size is small 
 enough? I guess the opposite could be the case too; that the very newest 
 SSTables should be compacted very many at a time?
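A toy model of the proposed rule, using fixed-size windows for simplicity (real DTCS windows grow over time; the function name and structure here are illustrative, not DTCS's actual implementation):

```python
from collections import defaultdict

def compaction_candidates(min_timestamps, window_size, max_threshold=32):
    """Bucket sstables by the window containing their min timestamp
    and, per the proposal, select every window holding more than one
    sstable, ignoring min_threshold but still capping each compaction
    at max_threshold sstables."""
    windows = defaultdict(list)
    for ts in min_timestamps:
        windows[ts // window_size].append(ts)
    return [sorted(bucket)[:max_threshold]
            for bucket in windows.values()
            if len(bucket) > 1]

# Two sstables land in window 0 and three in window 1; both buckets
# qualify even though both are below a typical min_threshold of 4,
# while the lone sstable in window 2 is left alone:
buckets = compaction_candidates([0, 5, 12, 13, 14, 27], window_size=10)
```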



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7032) Improve vnode allocation

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394730#comment-14394730
 ] 

Benedict commented on CASSANDRA-7032:
-

One other thing to consider optimising for: hash bit distribution. A lot of 
algorithmic optimisations can be made if we can assume each node has an 
approximately uniform distribution of hash bits. We should introduce some 
scoring based on this. e.g. try the best N candidates by our current 
evaluation, and select the one that delivers the best resulting hash bit 
distribution.

 Improve vnode allocation
 

 Key: CASSANDRA-7032
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7032
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Branimir Lambov
  Labels: performance, vnodes
 Fix For: 3.0

 Attachments: TestVNodeAllocation.java, TestVNodeAllocation.java, 
 TestVNodeAllocation.java, TestVNodeAllocation.java, TestVNodeAllocation.java, 
 TestVNodeAllocation.java


 It's been known for a little while that random vnode allocation causes 
 hotspots of ownership. It should be possible to improve dramatically on this 
 with deterministic allocation. I have quickly thrown together a simple greedy 
 algorithm that allocates vnodes efficiently, and will repair hotspots in a 
 randomly allocated cluster gradually as more nodes are added, and also 
 ensures that token ranges are fairly evenly spread between nodes (somewhat 
 tunably so). The allocation still permits slight discrepancies in ownership, 
 but it is bound by the inverse of the size of the cluster (as opposed to 
 random allocation, which strangely gets worse as the cluster size increases). 
 I'm sure there is a decent dynamic programming solution to this that would be 
 even better.
 If on joining the ring a new node were to CAS a shared table where a 
 canonical allocation of token ranges lives after running this (or a similar) 
 algorithm, we could then get guaranteed bounds on the ownership distribution 
 in a cluster. This will also help for CASSANDRA-6696.
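As a toy sketch of the greedy idea (a small integer ring with made-up names, not the attached TestVNodeAllocation.java): for each candidate token, tentatively add it to the ring and keep the one that minimizes the largest resulting ownership share.

```java
import java.util.List;
import java.util.TreeSet;

public class GreedyTokens {
    // Ownership of a token = distance to the previous token on the ring,
    // as a fraction of the ring size (a small integer ring for clarity).
    static double maxOwnership(TreeSet<Integer> ring, int ringSize) {
        double max = 0;
        int prev = ring.last();
        for (int t : ring) {
            int span = Math.floorMod(t - prev, ringSize);  // wraps around 0
            max = Math.max(max, (double) span / ringSize);
            prev = t;
        }
        return max;
    }

    // Greedily pick the candidate that minimizes the max ownership share.
    static int pickToken(TreeSet<Integer> ring, List<Integer> candidates, int ringSize) {
        int best = candidates.get(0);
        double bestScore = Double.MAX_VALUE;
        for (int c : candidates) {
            ring.add(c);                                  // tentative placement
            double score = maxOwnership(ring, ringSize);
            ring.remove(c);
            if (score < bestScore) { bestScore = score; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hotspot: the range wrapping from 50 back to 0 owns half the ring.
        TreeSet<Integer> ring = new TreeSet<>(List.of(0, 10, 50));
        System.out.println(pickToken(ring, List.of(5, 75), 100)); // prints 75
    }
}
```

Splitting the large 50..0 range (candidate 75) beats subdividing an already small one (candidate 5), which is the sense in which the greedy choice repairs hotspots as nodes are added.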



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7032) Improve vnode allocation

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394730#comment-14394730
 ] 

Benedict edited comment on CASSANDRA-7032 at 4/3/15 6:22 PM:
-

One other thing to consider optimising for: hash bit distribution. A lot of 
algorithmic optimisations can be made if we can assume each node has an 
approximately uniform distribution of hash bits. We should introduce some 
scoring based on this. e.g. try the best N candidates by our current 
evaluation, and select the one that delivers the best resulting hash bit 
distribution. I've filed CASSANDRA-9115 as a follow up ticket to investigate 
this.

We can file a ticket for the anti-clumping if we decide this is too onerous to 
introduce now (although I think it just requires a little bit of thought to 
ensure the filtering is helpful and safe, since implementing the filtration 
should be relatively easy)


was (Author: benedict):
One other thing to consider optimising for: hash bit distribution. A lot of 
algorithmic optimisations can be made if we can assume each node has an 
approximately uniform distribution of hash bits. We should introduce some 
scoring based on this. e.g. try the best N candidates by our current 
evaluation, and select the one that delivers the best resulting hash bit 
distribution.

 Improve vnode allocation
 

 Key: CASSANDRA-7032
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7032
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Branimir Lambov
  Labels: performance, vnodes
 Fix For: 3.0

 Attachments: TestVNodeAllocation.java, TestVNodeAllocation.java, 
 TestVNodeAllocation.java, TestVNodeAllocation.java, TestVNodeAllocation.java, 
 TestVNodeAllocation.java


 It's been known for a little while that random vnode allocation causes 
 hotspots of ownership. It should be possible to improve dramatically on this 
 with deterministic allocation. I have quickly thrown together a simple greedy 
 algorithm that allocates vnodes efficiently, and will repair hotspots in a 
 randomly allocated cluster gradually as more nodes are added, and also 
 ensures that token ranges are fairly evenly spread between nodes (somewhat 
 tunably so). The allocation still permits slight discrepancies in ownership, 
 but it is bound by the inverse of the size of the cluster (as opposed to 
 random allocation, which strangely gets worse as the cluster size increases). 
 I'm sure there is a decent dynamic programming solution to this that would be 
 even better.
 If on joining the ring a new node were to CAS a shared table where a 
 canonical allocation of token ranges lives after running this (or a similar) 
 algorithm, we could then get guaranteed bounds on the ownership distribution 
 in a cluster. This will also help for CASSANDRA-6696.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394673#comment-14394673
 ] 

Benedict commented on CASSANDRA-7066:
-

Even better. It hadn't occurred to me the current code was all due to the lack 
of idempotency; I assumed there was just concern about leaving a large amount 
of data around. There _is_ still the risk that this could be a prohibitive 
danger on some systems (say, you have a multi-TB file that's just been 
compacted). So to offer one further alternative that is perhaps only slightly 
more complicated and retains the safety: 

* create two log files: A and B; both log _each other_; file A also logs the 
new file(s) as they're created; file B also logs the old file(s)
* once done, delete file A; then delete the old files; then delete file B
* if we find file A, we delete its contents (including file B); if we find 
file B only, we delete its contents
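Under the stated scheme, startup recovery might look like this sketch (hypothetical file layout and names, not Cassandra's actual code): finding log A means the compaction never committed, so the new files are rolled back; finding only log B means it committed, so the old files are removed.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.List;

public class LeftoverCleanup {
    // If log A exists, the compaction never committed: delete everything A
    // lists (the new files and log B), then A itself. If only log B exists,
    // the commit happened: delete the old files it lists, then B itself.
    static void recover(Path logA, Path logB) {
        try {
            if (Files.exists(logA)) {
                deleteListed(logA);
                Files.deleteIfExists(logA);
            } else if (Files.exists(logB)) {
                deleteListed(logB);
                Files.deleteIfExists(logB);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static void deleteListed(Path log) throws IOException {
        for (String line : Files.readAllLines(log))
            Files.deleteIfExists(Paths.get(line.trim()));
    }

    // Simulate a crash after both logs are written but before the commit
    // point (deletion of log A); returns "oldExists newExists".
    static String demo() {
        try {
            Path dir = Files.createTempDirectory("cleanup");
            Path oldFile = Files.createFile(dir.resolve("old-1-Data.db"));
            Path newFile = Files.createFile(dir.resolve("new-2-Data.db"));
            Path logA = dir.resolve("A.log"), logB = dir.resolve("B.log");
            Files.write(logB, List.of(oldFile.toString()));                  // B: old files
            Files.write(logA, List.of(newFile.toString(), logB.toString())); // A: new files + B
            recover(logA, logB);   // A still present -> roll back the new file
            return Files.exists(oldFile) + " " + Files.exists(newFile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());   // prints "true false"
    }
}
```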

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to cleanup incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily cleanup completed files, or conversely not cleanup files 
 that have been superseded); and 2) it's only used for a regular compaction - 
 no other compaction types are guarded in the same way, so can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store in its metadata its direct ancestors, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
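The startup rule described could be sketched like this (hypothetical metadata shape with integer generations; real SSTable metadata differs, and the record syntax needs Java 16+): any sstable whose generation appears in the union of all ancestor sets has been superseded by a finished compaction and is deleted.

```java
import java.util.*;

public class AncestorCleanup {
    // Each sstable records the generations it was compacted from.
    record SSTable(int generation, Set<Integer> ancestors) {}

    // Keep only sstables whose generation is NOT in the union of all
    // ancestor sets; the rest are leftovers from finished compactions.
    static List<SSTable> survivors(List<SSTable> all) {
        Set<Integer> replaced = new HashSet<>();
        for (SSTable t : all)
            replaced.addAll(t.ancestors());
        List<SSTable> keep = new ArrayList<>();
        for (SSTable t : all)
            if (!replaced.contains(t.generation()))
                keep.add(t);
        return keep;
    }

    public static void main(String[] args) {
        // Generations 1 and 2 were compacted into 3; 1 is gone, 2 lingers.
        List<SSTable> onDisk = List.of(
                new SSTable(2, Set.of()),
                new SSTable(3, Set.of(1, 2)));
        for (SSTable t : survivors(onDisk))
            System.out.println(t.generation());   // prints 3
    }
}
```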



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9032) Reduce logging level for MigrationTask abort due to down node from ERROR to INFO

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9032:
---
Fix Version/s: 2.0.15

 Reduce logging level for MigrationTask abort due to down node from ERROR to 
 INFO
 

 Key: CASSANDRA-9032
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9032
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 9032.txt


 A lot of the dtests are failing during Jenkins runs due to the following 
 error message in the logs:
 {noformat}
 ERROR [MigrationStage:1] 2015-03-24 20:02:03,464 MigrationTask.java:62 - 
 Can't send migration request: node /127.0.0.3 is down.\n]
 {noformat}
 This log message happens when a schema pull is scheduled, but the target 
 endpoint is down when the scheduled task actually runs.  The failing dtests 
 generally stop a node as part of the test, which results in this.
 I believe the log message should be moved from ERROR to INFO (or perhaps even 
 DEBUG).  This isn't an unexpected type of problem (nodes go down all the 
 time), and it's not actionable by the user.  This would also have the nice 
 side effect of fixing the dtests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8516) NEW_NODE topology event emitted instead of MOVED_NODE by moving node

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8516:
---
Fix Version/s: 2.0.15

 NEW_NODE topology event emitted instead of MOVED_NODE by moving node
 

 Key: CASSANDRA-8516
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8516
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 8516-v2.1-a.txt, 8516-v2.1-b.txt, 
 cassandra_8516_dtest.txt


 As discovered in CASSANDRA-8373, when you move a node in a single-node 
 cluster, a {{NEW_NODE}} event is generated instead of a {{MOVED_NODE}} event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8905) IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12

2015-04-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8905.

Resolution: Not a Problem

Fixed by scrub. Probably corrupted sstable.

 IllegalArgumentException in compaction of -ic- file after upgrade to 2.0.12
 ---

 Key: CASSANDRA-8905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8905
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
 Fix For: 2.0.15


 After upgrade from 1.2.18 to 2.0.12, I've started to get exceptions like:
 {noformat}
 ERROR [CompactionExecutor:1149] 2015-03-04 11:48:46,045 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1149,1,main]
 java.lang.IllegalArgumentException: Illegal Capacity: -2147483648
 at java.util.ArrayList.&lt;init&gt;(ArrayList.java:142)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:182)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:194)
 at 
 org.apache.cassandra.db.SuperColumns$SCIterator.next(SuperColumns.java:138)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.&lt;init&gt;(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 I've identified which sstable is causing this, it's an -ic- format sstable, 
 i.e. something written before the upgrade. I can repeat with 
 forceUserDefinedCompaction.
 Running upgradesstables also causes the same exception. 
 Scrub helps, but skips a row as incorrect. 
 I can share the sstable privately if it helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8808) CQLSSTableWriter: close does not work + more than one table throws ex

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8808:
---
Fix Version/s: 2.0.15

 CQLSSTableWriter: close does not work + more than one table throws ex
 -

 Key: CASSANDRA-8808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sebastian YEPES FERNANDEZ
Assignee: Benjamin Lerer
  Labels: cql
 Fix For: 2.0.15, 2.1.5

 Attachments: CASSANDRA-8808-2.0-V2.txt, CASSANDRA-8808-2.0.txt, 
 CASSANDRA-8808-2.1-V2.txt, CASSANDRA-8808-2.1.txt, 
 CASSANDRA-8808-trunk-V2.txt, CASSANDRA-8808-trunk.txt


 I have encountered the following two issues:
  - When closing the CQLSSTableWriter it just hangs the process and does 
 nothing. (https://issues.apache.org/jira/browse/CASSANDRA-8281)
  - Writing to more than one table throws an exception. 
 (https://issues.apache.org/jira/browse/CASSANDRA-8251)
 These issue can be reproduced with the following code:
 {code:title=test.java|borderStyle=solid}
 import org.apache.cassandra.config.Config;
 import org.apache.cassandra.io.sstable.CQLSSTableWriter;
 public static void main(String[] args) {
   Config.setClientMode(true);
   CQLSSTableWriter w1 = CQLSSTableWriter.builder()
 .inDirectory("/tmp/kspc/t1")
 .forTable("CREATE TABLE kspc.t1 ( id int, PRIMARY KEY (id));")
 .using("INSERT INTO kspc.t1 (id) VALUES ( ? );")
 .build();
   CQLSSTableWriter w2 = CQLSSTableWriter.builder()
 .inDirectory("/tmp/kspc/t2")
 .forTable("CREATE TABLE kspc.t2 ( id int, PRIMARY KEY (id));")
 .using("INSERT INTO kspc.t2 (id) VALUES ( ? );")
 .build();
   try {
 w1.addRow(1);
 w2.addRow(1);
 w1.close();
 w2.close();
   } catch (Exception e) {
 System.out.println(e);
   }
 }
 {code}
 {code:title=The error|borderStyle=solid}
 Exception in thread "main" java.lang.ExceptionInInitializerError
 at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:324)
 at org.apache.cassandra.db.Keyspace.&lt;init&gt;(Keyspace.java:277)
 at org.apache.cassandra.db.Keyspace.open(Keyspace.java:119)
 at org.apache.cassandra.db.Keyspace.open(Keyspace.java:96)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:101)
 at 
 org.apache.cassandra.io.sstable.CQLSSTableWriter.rawAddRow(CQLSSTableWriter.java:226)
 at 
 org.apache.cassandra.io.sstable.CQLSSTableWriter.addRow(CQLSSTableWriter.java:145)
 at 
 org.apache.cassandra.io.sstable.CQLSSTableWriter.addRow(CQLSSTableWriter.java:120)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:189)
 at 
 org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
 at 
 org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
 at 
 org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
 at 
 org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
 at 
 com.allthingsmonitoring.utils.BulkDataLoader.main(BulkDataLoader.groovy:415)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.config.DatabaseDescriptor.getFlushWriters(DatabaseDescriptor.java:1053)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.clinit(ColumnFamilyStore.java:85)
 ... 18 more
 {code}
 I have just tested the in the cassandra-2.1 branch and the issue still 
 persists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8086:
---
Fix Version/s: 2.0.15

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.0.15, 2.1.5

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-2.1.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs. We have a 
 large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
 connects to 4 cassandra instances. Some event (we think it is a schema change 
 on server side) triggered the client to establish connections to all 
 cassandra instances of local DC. This brought the server to its knees. The 
 client connections failed and client attempted re-connections. 
 Cassandra should protect itself from such attacks by clients. Do we have any 
 knobs to control the maximum number of connections? If not, we need to add 
 that knob.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8909) Replication Strategy creation errors are lost in try/catch

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8909:
---
Fix Version/s: 2.0.15

 Replication Strategy creation errors are lost in try/catch
 --

 Key: CASSANDRA-8909
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8909
 Project: Cassandra
  Issue Type: Improvement
Reporter: Alan Boudreault
Assignee: Alan Boudreault
Priority: Trivial
 Fix For: 2.0.15, 2.1.5

 Attachments: replication-strategy-exception-2.0.patch


 I was initially executing a bad cassandra-stress command and was getting 
 this error:
 {code}
 Unable to create stress keyspace: Error constructing replication strategy 
 class
 {code}
 with the following command:
 {code}
 cassandra-stress -o insert --replication-strategy NetworkTopologyStrategy 
 --strategy-properties dc1:1,dc2:1 --replication-factor 1
 {code}
 After digging in the code, I noticed that the error displayed was not the one 
 thrown by the replication strategy code and that the try/catch block could be 
 improved. Basically, Constructor.newInstance can throw an 
 InvocationTargetException, which provides a better error report.
 I think this improvement can also be done in 2.1 (not tested yet). If my 
 attached patch is acceptable, I will test and provide the right version for 
 2.1 and trunk.
 With the patch, I can see the proper error when executing my bad command:
 {code}
 Unable to create stress keyspace: replication_factor is an option for 
 SimpleStrategy, not NetworkTopologyStrategy
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8613) Regression in mixed single and multi-column relation support

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8613:
---
Fix Version/s: 2.0.15

 Regression in mixed single and multi-column relation support
 

 Key: CASSANDRA-8613
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8613
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Benjamin Lerer
 Fix For: 2.0.15, 2.1.5

 Attachments: 8613-2.0-v2.txt, 8613-2.1-v2.txt, 8613-trunk-v2.txt, 
 CASSANDRA-8613-2.0.txt, CASSANDRA-8613-2.1.txt, CASSANDRA-8613-trunk.txt


 In 2.0.6 through 2.0.8, a query like the following was supported:
 {noformat}
 SELECT * FROM mytable WHERE clustering_0 = ? AND (clustering_1, clustering_2) 
 &gt; (?, ?)
 {noformat}
 However, after CASSANDRA-6875, you'll get the following error:
 {noformat}
 Clustering columns may not be skipped in multi-column relations. They should 
 appear in the PRIMARY KEY order. Got (c, d) &gt; (0, 0)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Don't execute any functions at prepare-time

2015-04-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 75409a185 -> 7d68cedec


Don't execute any functions at prepare-time

Patch by Sam Tunnicliffe; reviewed by Tyler Hobbs for CASSANDRA-9037


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d68cede
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d68cede
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d68cede

Branch: refs/heads/trunk
Commit: 7d68cedecd537535e48074c38686ccc76f9c
Parents: 75409a1
Author: Sam Tunnicliffe s...@beobal.com
Authored: Fri Apr 3 12:51:24 2015 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 3 12:51:24 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/functions/FunctionCall.java  | 22 +---
 2 files changed, 2 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d68cede/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e8cb20b..92c09b1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Don't execute any functions at prepare-time (CASSANDRA-9037)
  * Share file handles between all instances of a SegmentedFile (CASSANDRA-8893)
  * Make it possible to major compact LCS (CASSANDRA-7272)
  * Make FunctionExecutionException extend RequestExecutionException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d68cede/src/java/org/apache/cassandra/cql3/functions/FunctionCall.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/FunctionCall.java 
b/src/java/org/apache/cassandra/cql3/functions/FunctionCall.java
index 90ebaaf..a3bd669 100644
--- a/src/java/org/apache/cassandra/cql3/functions/FunctionCall.java
+++ b/src/java/org/apache/cassandra/cql3/functions/FunctionCall.java
@@ -140,33 +140,13 @@ public class FunctionCall extends Term.NonTerminal
 fun, 
fun.argTypes().size(), terms.size()));
 
 List&lt;Term&gt; parameters = new ArrayList&lt;&gt;(terms.size());
-boolean allTerminal = true;
 for (int i = 0; i &lt; terms.size(); i++)
 {
 Term t = terms.get(i).prepare(keyspace, 
Functions.makeArgSpec(receiver.ksName, receiver.cfName, scalarFun, i));
-if (t instanceof NonTerminal)
-allTerminal = false;
 parameters.add(t);
 }
 
-// If all parameters are terminal and the function is pure, we can
-// evaluate it now, otherwise we'd have to wait execution time
-return allTerminal &amp;&amp; scalarFun.isPure()
-? makeTerminal(scalarFun, execute(scalarFun, parameters), 
QueryOptions.DEFAULT.getProtocolVersion())
-: new FunctionCall(scalarFun, parameters);
-}
-
-// All parameters must be terminal
-private static ByteBuffer execute(ScalarFunction fun, List&lt;Term&gt; 
parameters) throws InvalidRequestException
-{
-List&lt;ByteBuffer&gt; buffers = new ArrayList&lt;&gt;(parameters.size());
-for (Term t : parameters)
-{
-assert t instanceof Term.Terminal;
-
buffers.add(((Term.Terminal)t).get(QueryOptions.DEFAULT.getProtocolVersion()));
-}
-
-return executeInternal(Server.CURRENT_VERSION, fun, buffers);
+return new FunctionCall(scalarFun, parameters);
 }
 
 public AssignmentTestable.TestResult testAssignment(String keyspace, 
ColumnSpecification receiver)



[jira] [Commented] (CASSANDRA-9092) Nodes in DC2 die during and after huge write workload

2015-04-03 Thread Sergey Maznichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394627#comment-14394627
 ] 

Sergey Maznichenko commented on CASSANDRA-9092:
---

We have OpsCenter Agent. Such errors repeat 1-2 times per hour while data is 
being loaded. In DC1 we currently don't have any hints.
I suspect traffic can go to all nodes because of client settings; I will check 
it.
I tried to perform 'nodetool repair' from the node in DC2 and, after a 30-hour 
delay, I got a bunch of errors in the console, like:

[2015-04-02 19:32:14,352] Repair session 6ff4f071-d94d-11e4-9257-f7b14a924a15 
for range (-3563451573336693456,-3535530477916720868] failed with error 
java.io.IOException: Cannot proceed on repair because a neighbor (/10.XX.XX.11) 
is dead: session failed

but 'nodetool status' reports that all nodes are live and I can see successful 
communication between nodes in their logs. It's strange...


 Nodes in DC2 die during and after huge write workload
 -

 Key: CASSANDRA-9092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9092
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.2 64-bit, Cassandra 2.1.2, 
 java version 1.7.0_71
 Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
Reporter: Sergey Maznichenko
Assignee: Sam Tunnicliffe
 Fix For: 2.1.5

 Attachments: cassandra_crash1.txt


 Hello,
 We have Cassandra 2.1.2 with 8 nodes, 4 in DC1 and 4 in DC2.
 Node is VM 8 CPU, 32GB RAM
 During a significant workload (loading several million blobs, ~3.5MB each), 1 
 node in DC2 stops, and after some time the next 2 nodes in DC2 also stop.
 Now, 2 of the nodes in DC2 do not work and stop 5-10 minutes after starting. 
 I see many files in the system.hints table, and the error appears 2-3 minutes 
 after system.hints auto compaction starts.
 By stops I mean: ERROR [CompactionExecutor:1] 2015-04-01 23:33:44,456 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:1,1,main]
 java.lang.OutOfMemoryError: Java heap space
 ERROR [HintedHandoff:1] 2015-04-01 23:33:44,456 CassandraDaemon.java:153 - 
 Exception in thread Thread[HintedHandoff:1,1,main]
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.OutOfMemoryError: Java heap space
 Full errors listing attached in cassandra_crash1.txt
 The problem exists only in DC2. We have 1GbE between DC1 and DC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Remove transient RAF usage

2015-04-03 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 51908e240 -> 75409a185


Remove transient RAF usage

Patch by stefania; reviewed by jmckenzie for CASSANDRA-8952


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/75409a18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/75409a18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/75409a18

Branch: refs/heads/trunk
Commit: 75409a185d97c566430ab6e6cfd823ceb80ff40b
Parents: 51908e2
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri Apr 3 11:37:28 2015 -0500
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Fri Apr 3 11:37:28 2015 -0500

--
 .../org/apache/cassandra/io/util/FileUtils.java | 27 ++
 .../org/apache/cassandra/utils/CLibrary.java| 25 +++--
 .../apache/cassandra/io/util/FileUtilsTest.java | 55 
 .../apache/cassandra/utils/CLibraryTest.java| 37 +
 4 files changed, 104 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/src/java/org/apache/cassandra/io/util/FileUtils.java
--
diff --git a/src/java/org/apache/cassandra/io/util/FileUtils.java 
b/src/java/org/apache/cassandra/io/util/FileUtils.java
index ef9d23b..8007039 100644
--- a/src/java/org/apache/cassandra/io/util/FileUtils.java
+++ b/src/java/org/apache/cassandra/io/util/FileUtils.java
@@ -19,10 +19,8 @@ package org.apache.cassandra.io.util;
 
 import java.io.*;
 import java.nio.ByteBuffer;
-import java.nio.file.AtomicMoveNotSupportedException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.StandardCopyOption;
+import java.nio.channels.FileChannel;
+import java.nio.file.*;
 import java.text.DecimalFormat;
 import java.util.Arrays;
 
@@ -185,28 +183,13 @@ public class FileUtils
 }
 public static void truncate(String path, long size)
 {
-RandomAccessFile file;
-
-try
-{
-file = new RandomAccessFile(path, "rw");
-}
-catch (FileNotFoundException e)
-{
-throw new RuntimeException(e);
-}
-
-try
+try(FileChannel channel = FileChannel.open(Paths.get(path), 
StandardOpenOption.READ, StandardOpenOption.WRITE))
 {
-file.getChannel().truncate(size);
+channel.truncate(size);
 }
 catch (IOException e)
 {
-throw new FSWriteError(e, path);
-}
-finally
-{
-closeQuietly(file);
+throw new RuntimeException(e);
 }
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/src/java/org/apache/cassandra/utils/CLibrary.java
--
diff --git a/src/java/org/apache/cassandra/utils/CLibrary.java 
b/src/java/org/apache/cassandra/utils/CLibrary.java
index 25f7e5a..fed314b 100644
--- a/src/java/org/apache/cassandra/utils/CLibrary.java
+++ b/src/java/org/apache/cassandra/utils/CLibrary.java
@@ -18,9 +18,12 @@
 package org.apache.cassandra.utils;
 
 import java.io.FileDescriptor;
+import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.lang.reflect.Field;
 import java.nio.channels.FileChannel;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -316,29 +319,15 @@ public final class CLibrary
 
 public static int getfd(String path)
 {
-RandomAccessFile file = null;
-try
+try(FileChannel channel = FileChannel.open(Paths.get(path), 
StandardOpenOption.READ))
 {
-file = new RandomAccessFile(path, "r");
-return getfd(file.getFD());
+return getfd(channel);
 }
-catch (Throwable t)
+catch (IOException e)
 {
-JVMStabilityInspector.inspectThrowable(t);
+JVMStabilityInspector.inspectThrowable(e);
 // ignore
 return -1;
 }
-finally
-{
-try
-{
-if (file != null)
-file.close();
-}
-catch (Throwable t)
-{
-// ignore
-}
-}
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/75409a18/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java 
b/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java
new file mode 100644
index 000..7110504
--- /dev/null
+++ b/test/unit/org/apache/cassandra/io/util/FileUtilsTest.java

[jira] [Updated] (CASSANDRA-7533) Let MAX_OUTSTANDING_REPLAY_COUNT be configurable

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7533:
---
Fix Version/s: 2.0.15

 Let MAX_OUTSTANDING_REPLAY_COUNT be configurable
 

 Key: CASSANDRA-7533
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7533
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Jeremiah Jordan
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 0001-CASSANDRA-7533.txt


 There are some workloads where commit log replay will run into contention 
 issues with multiple things updating the same partition.  Through some 
 testing it was found that lowering CommitLogReplayer.java 
 MAX_OUTSTANDING_REPLAY_COUNT can help with this issue.
 The calculations added in CASSANDRA-6655 are one such place things get 
 bottlenecked.
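Conceptually, MAX_OUTSTANDING_REPLAY_COUNT bounds the number of in-flight mutations during replay, like a counting semaphore. A hedged sketch of that pattern (names hypothetical; not the actual CommitLogReplayer code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedReplayDemo
{
    // Submits `mutations` tasks, never letting more than `maxOutstanding`
    // run concurrently; returns how many were applied.
    public static int replay(int mutations, int maxOutstanding)
    {
        Semaphore permits = new Semaphore(maxOutstanding);
        AtomicInteger applied = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try
        {
            for (int i = 0; i < mutations; i++)
            {
                permits.acquire(); // blocks once maxOutstanding replays are in flight
                pool.execute(() -> {
                    applied.incrementAndGet(); // stand-in for Mutation.apply()
                    permits.release();
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        return applied.get();
    }

    public static void main(String[] args)
    {
        System.out.println(replay(1000, 32)); // 1000
    }
}
```

Lowering the permit count trades replay throughput for less contention on hot partitions, which is the tuning knob this ticket exposes.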



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9114) cqlsh: Formatting of map contents broken

2015-04-03 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-9114:
--

 Summary: cqlsh: Formatting of map contents broken
 Key: CASSANDRA-9114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 2.1.5


In CASSANDRA-9081, we upgraded the bundled python driver to version 2.5.0.  
This upgrade changed the class that's used for map collections, and we failed 
to add a new formatting adaptor for the new class.

This was causing the {{cqlsh_tests.TestCqlsh.test_eat_glass}} dtest to fail.





[jira] [Updated] (CASSANDRA-9114) cqlsh: Formatting of map contents broken

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9114:
---
Attachment: 9114-2.1.txt

The attached patch adds an adaptor and fixes the failing dtest.

 cqlsh: Formatting of map contents broken
 

 Key: CASSANDRA-9114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
  Labels: cqlsh
 Fix For: 2.1.5

 Attachments: 9114-2.1.txt


 In CASSANDRA-9081, we upgraded the bundled python driver to version 2.5.0.  
 This upgrade changed the class that's used for map collections, and we failed 
 to add a new formatting adaptor for the new class.
 This was causing the {{cqlsh_tests.TestCqlsh.test_eat_glass}} dtest to fail.





[jira] [Updated] (CASSANDRA-9037) Terminal UDFs evaluated at prepare time throw protocol version error

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9037:
---
Attachment: 9037-final.txt

 Terminal UDFs evaluated at prepare time throw protocol version error
 

 Key: CASSANDRA-9037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9037
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0

 Attachments: 9037-final.txt


 When a pure function with only terminal arguments (or with no arguments) is 
 used in a WHERE clause, it's executed at prepare time, and 
 {{Server.CURRENT_VERSION}} is passed as the protocol version for serialization 
 purposes. For native functions, this isn't a problem, but UDFs use classes in 
 the bundled java-driver-core jar for (de)serialization of args and return 
 values. When {{Server.CURRENT_VERSION}} is greater than the highest version 
 supported by the bundled java driver the execution fails with the following 
 exception:
 {noformat}
 ERROR [SharedPool-Worker-1] 2015-03-24 18:10:59,391 QueryMessage.java:132 - 
 Unexpected error during query
 org.apache.cassandra.exceptions.FunctionExecutionException: execution of 
 'ks.overloaded[text]' failed: java.lang.IllegalArgumentException: No protocol 
 version matching integer version 4
 at 
 org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.udf.gen.Cksoverloaded_1.execute(Cksoverloaded_1.java)
  ~[na:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall.executeInternal(FunctionCall.java:78)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall.access$200(FunctionCall.java:34)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall$Raw.execute(FunctionCall.java:176)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:161)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.SingleColumnRelation.toTerm(SingleColumnRelation.java:108)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:143)
  ~[main/:na]
 at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:127) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.restrictions.StatementRestrictions.init(StatementRestrictions.java:126)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:787)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:740)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:488)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:246) 
 ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:475)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:371)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_71]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.lang.IllegalArgumentException: No protocol version matching 
 integer version 4
 at 
 com.datastax.driver.core.ProtocolVersion.fromInt(ProtocolVersion.java:89) 
 ~[cassandra-driver-core-2.1.2.jar:na]
 at 
 org.apache.cassandra.cql3.functions.UDFunction.compose(UDFunction.java:177) 
 ~[main/:na]
 ... 25 common frames omitted
 {noformat}
 This is currently the case on trunk following the bump of 
 {{Server.CURRENT_VERSION}} to 4 by CASSANDRA-7660.
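The failure mode is an int-to-enum mapping where the wire version is newer than anything the enum knows. A hypothetical sketch (the enum and clamping helper below are illustrations, not the driver's actual API):

```java
public class VersionClampDemo
{
    enum ProtocolVersion
    {
        V1, V2, V3; // this side knows up to 3; the server already sends 4

        static ProtocolVersion fromInt(int v)
        {
            // Mirrors the strict lookup that produced the exception above.
            if (v < 1 || v > values().length)
                throw new IllegalArgumentException("No protocol version matching integer version " + v);
            return values()[v - 1];
        }

        // One possible mitigation: fall back to the newest supported version.
        static ProtocolVersion fromIntClamped(int v)
        {
            return values()[Math.min(Math.max(v, 1), values().length) - 1];
        }
    }

    public static void main(String[] args)
    {
        try { ProtocolVersion.fromInt(4); }
        catch (IllegalArgumentException e) { System.out.println(e.getMessage()); }
        System.out.println(ProtocolVersion.fromIntClamped(4)); // V3
    }
}
```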


[jira] [Created] (CASSANDRA-9115) Improve vnode allocation hash bit distribution

2015-04-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-9115:
---

 Summary: Improve vnode allocation hash bit distribution
 Key: CASSANDRA-9115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9115
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Branimir Lambov


Following on from CASSANDRA-7032, we should explore whether it is possible to 
further improve the vnode allocation strategy to consider hash bit distribution 
for the selected vnodes, so that our hash-based data structures can ensure good 
behaviour.





[jira] [Updated] (CASSANDRA-9027) Error processing org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-IPv6 address

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9027:
---
Fix Version/s: 2.0.15

 Error processing 
 org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-IPv6
  address
 -

 Key: CASSANDRA-9027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9027
 Project: Cassandra
  Issue Type: Bug
Reporter: Erik Forsberg
Assignee: Erik Forsberg
 Fix For: 2.0.15, 2.1.5

 Attachments: cassandra-2.0-9027.txt, cassandra-2.0-9027.txt


 Getting some of these on 2.0.13:
 {noformat}
  WARN [MutationStage:92] 2015-03-24 08:57:20,204 JmxReporter.java (line 397) 
 Error processing 
 org.apache.cassandra.metrics:type=HintedHandOffManager,name=Hints_created-2001:4c28:1:413:0:1:4:1
 javax.management.MalformedObjectNameException: Invalid character ':' in value 
 part of property
 at javax.management.ObjectName.construct(ObjectName.java:618)
 at javax.management.ObjectName.init(ObjectName.java:1382)
 at 
 com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
 at 
 com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
 at 
 com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
 at 
 com.yammer.metrics.core.MetricsRegistry.newCounter(MetricsRegistry.java:115)
 at com.yammer.metrics.Metrics.newCounter(Metrics.java:108)
 at 
 org.apache.cassandra.metrics.HintedHandoffMetrics$2.load(HintedHandoffMetrics.java:58)
 at 
 org.apache.cassandra.metrics.HintedHandoffMetrics$2.load(HintedHandoffMetrics.java:55)
 at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
 at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
 at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
 at 
 com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
 at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
 at 
 com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4812)
 at 
 org.apache.cassandra.metrics.HintedHandoffMetrics.incrCreatedHints(HintedHandoffMetrics.java:64)
 at 
 org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:124)
 at 
 org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:957)
 at 
 org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:927)
 at 
 org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2069)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Seems to be about the same as CASSANDRA-5298.
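One common way to make such names legal is to quote the offending property value with `ObjectName.quote`, which wraps it so characters like the ':' in an IPv6 address are allowed. A sketch of that approach (an illustration, not the attached patch):

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class JmxNameDemo
{
    // Builds a metric ObjectName, quoting the value when it contains
    // characters (like ':' in an IPv6 address) that are illegal unquoted.
    public static ObjectName metricName(String type, String name)
    {
        String value = name.matches("[^:\",=*?]+") ? name : ObjectName.quote(name);
        try
        {
            return new ObjectName("org.apache.cassandra.metrics:type=" + type + ",name=" + value);
        }
        catch (MalformedObjectNameException e)
        {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args)
    {
        // Unquoted, the IPv6-derived value would throw MalformedObjectNameException.
        System.out.println(metricName("HintedHandOffManager",
                                      "Hints_created-2001:4c28:1:413:0:1:4:1"));
    }
}
```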





[jira] [Updated] (CASSANDRA-8948) cassandra-stress does not honour consistency level (cl) parameter when used in combination with user command

2015-04-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8948:
---
Fix Version/s: (was: 2.0.15)
   2.1.5

 cassandra-stress does not honour consistency level (cl) parameter when used 
 in combination with user command
 

 Key: CASSANDRA-8948
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8948
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Andreas Flinck
Assignee: T Jake Luciani
 Fix For: 2.1.5

 Attachments: 8948.txt


 The stress test tool does not honour cl parameter when used in combination 
 with the user command. Consistency level will be default ONE no matter what 
 is set by cl=.
 Works fine with write command.
 How to reproduce:
 1. Create a suitable yaml-file to use in test
 2. Run e.g. {code}./cassandra-stress user profile=./file.yaml cl=ALL 
 no-warmup duration=10s  ops\(insert=1\) -rate threads=4 -port jmx=7100{code}
 3. Observe that cl=ONE in trace logs





[jira] [Updated] (CASSANDRA-8978) CQLSSTableWriter causes ArrayIndexOutOfBoundsException

2015-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8978:
--
Attachment: 8978-2.1-v2.txt
test-8978.txt

The test that I added had been failing for me when I posted this patch, but I 
can't get it to anymore. I'm attaching a new test instead (test-8978.txt), 
which does fail on 2.1.

The issue is that {{UpdateStatement}} has a {{ColumnFamily}} to which it applies 
the modification. When we hit the target size in {{ABSC.addColumn}}, we replace 
the current column family with a new one and send the previous one to the writer 
thread. Since the update statement doesn't hold the new column family, it 
continues to write columns to the old one, which should no longer be modified.

The change I made moves the column family replacement to a point where the 
update statement is complete.
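The lost-update hazard described above can be reproduced in miniature: a caller caches a reference to a buffer, the buffer is swapped out from under it, and later writes land in the retired buffer. A hypothetical sketch (not the actual ABSC/UpdateStatement code):

```java
import java.util.ArrayList;
import java.util.List;

public class BufferSwapDemo
{
    List<String> current = new ArrayList<>();
    final List<List<String>> flushed = new ArrayList<>();

    // Mimics handing the full buffer to the writer thread and
    // installing a fresh one mid-statement.
    void maybeSwap(int threshold)
    {
        if (current.size() >= threshold)
        {
            flushed.add(current);        // old buffer now belongs to the writer
            current = new ArrayList<>(); // fresh buffer the statement never sees
        }
    }

    public static void main(String[] args)
    {
        BufferSwapDemo writer = new BufferSwapDemo();
        List<String> statementView = writer.current; // the statement caches a reference
        statementView.add("c1");
        statementView.add("c2");
        writer.maybeSwap(2);     // swap happens while the statement is mid-write
        statementView.add("c3"); // lands in the already-flushed buffer: the bug
        System.out.println(writer.current.size()); // 0 -- c3 went to the old buffer
    }
}
```

Deferring the swap until the statement finishes (as the patch does) means the cached reference is never stale while writes are still arriving.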

 CQLSSTableWriter causes ArrayIndexOutOfBoundsException
 --

 Key: CASSANDRA-8978
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8978
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 3.8.0-42-generic #62~precise1-Ubuntu SMP Wed Jun 4 
 22:04:18 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.8.0_20
 Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
 Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
Reporter: Thomas Borg Salling
Assignee: Carl Yeksigian
 Fix For: 2.1.5

 Attachments: 8978-2.1-v2.txt, 8978-2.1.txt, test-8978.txt


 On long-running jobs with CQLSSTableWriter preparing sstables for later bulk 
 load via sstableloader, occasionally I get the sporadic error shown below.
 I can run the exact same job again - and it will succeed or fail with the 
 same error at another location in the input stream. The error appears to 
 occur randomly - with the same input it may occur never, early or late in 
 the run with no apparent logic or system.
 I use five instances of CQLSSTableWriter in the application (to write 
 redundantly to five different tables). But these instances do not exist at 
 the same time, and thus are never used concurrently.
 {code}
 09:26:33.582 [main] INFO  d.dma.ais.store.FileSSTableConverter - Finished 
 processing directory, 369582175 packets was converted from /nas1/
 Exception in thread main java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at dk.dma.commons.app.CliCommandList$1.execute(CliCommandList.java:50)
 at dk.dma.commons.app.CliCommandList.invoke(CliCommandList.java:80)
 at dk.dma.ais.store.Main.main(Main.java:34)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 297868
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.append(ArrayBackedSortedColumns.java:196)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.appendOrReconcile(ArrayBackedSortedColumns.java:191)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.sortCells(ArrayBackedSortedColumns.java:176)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.maybeSortCells(ArrayBackedSortedColumns.java:125)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.access$1100(ArrayBackedSortedColumns.java:44)
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns$CellCollection.iterator(ArrayBackedSortedColumns.java:622)
 at 
 org.apache.cassandra.db.ColumnFamily.iterator(ColumnFamily.java:476)
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:129)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218)
 at 
 org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215){code}
 So far I have overcome this problem by simply retrying with another run of the 
 application to generate the sstables. But this is a rather time-consuming and 
 shaky approach - and I feel a bit uneasy relying on the produced sstables, 
 though their contents appear to be correct when I sample them with a cqlsh 
 'select' after loading into Cassandra.





[jira] [Commented] (CASSANDRA-9106) disable secondary indexes by default

2015-04-03 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394808#comment-14394808
 ] 

Jon Haddad commented on CASSANDRA-9106:
---

[~slebresne] Agreed - I was not aware of CASSANDRA-8303.  Much better to 
include it there than in the yaml.

 disable secondary indexes by default
 

 Key: CASSANDRA-9106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9106
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jon Haddad
 Fix For: 3.0


 This feature is misused constantly.  Can we disable it by default, and 
 provide a yaml config to explicitly enable it?  Along with a massive warning 
 about how they aren't there for performance, maybe with a link to 
 documentation that explains why?  





[jira] [Commented] (CASSANDRA-9099) Validation compaction not working for parallel repair

2015-04-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394904#comment-14394904
 ] 

Marcus Eriksson commented on CASSANDRA-9099:


+1

 Validation compaction not working for parallel repair
 -

 Key: CASSANDRA-9099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9099
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 3.0

 Attachments: 0001-Fix-wrong-check-when-validating-in-parallel.patch


 Because the boundary check is inverted, we are validating the wrong SSTables.
 This is only for trunk.





[jira] [Commented] (CASSANDRA-9029) Add support for rate limiting log statements

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394984#comment-14394984
 ] 

Benedict commented on CASSANDRA-9029:
-

I'm not sure we need all of the myriad accessors (so many different ways of 
providing the timing interval, the logger, etc). I would personally pare it 
back to just the NoSpamStatement. I particularly don't see a great deal of 
benefit to the provision of the current time, since the clock semantics may not 
match, and we shouldn't be logging in code _that_ performance sensitive. But no 
super strong feelings, and it looks like it works. +1
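For context, the core of a per-statement rate limiter is small. A hedged sketch of the NoSpamStatement idea (an illustration of the technique, not the patch under review):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class NoSpamDemo
{
    // One statement = one independent rate limit.
    static final class NoSpamStatement
    {
        private final long intervalNanos;
        // MIN_VALUE so the first call always logs (nanoTime may be negative).
        private final AtomicLong nextAllowed = new AtomicLong(Long.MIN_VALUE);

        NoSpamStatement(long interval, TimeUnit unit)
        {
            this.intervalNanos = unit.toNanos(interval);
        }

        // Returns true (and "logs") at most once per interval; the CAS makes
        // exactly one racing thread win each window.
        boolean maybeLog(String message)
        {
            long now = System.nanoTime();
            long next = nextAllowed.get();
            if (now < next || !nextAllowed.compareAndSet(next, now + intervalNanos))
                return false;
            System.out.println(message);
            return true;
        }
    }

    public static void main(String[] args)
    {
        NoSpamStatement stmt = new NoSpamStatement(1, TimeUnit.MINUTES);
        int logged = 0;
        for (int i = 0; i < 10_000; i++)
            if (stmt.maybeLog("something noisy happened"))
                logged++;
        System.out.println(logged); // 1 -- the other 9,999 calls were suppressed
    }
}
```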

 Add support for rate limiting log statements
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg







[jira] [Comment Edited] (CASSANDRA-9029) Add support for rate limiting log statements

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394984#comment-14394984
 ] 

Benedict edited comment on CASSANDRA-9029 at 4/3/15 9:08 PM:
-

I'm not sure we need all of the myriad accessors (so many different ways of 
providing the timing interval, the logger, etc). I would personally pare it 
back to just the NoSpamStatement. I particularly don't see a great deal of 
benefit to the provision of the current time, since the clock semantics may not 
match, and we shouldn't be logging in code _that_ performance sensitive. But no 
super strong feelings, and it looks like it works. +1

(Since I'm no doubt committing, make whatever tweaks you feel necessary post my 
comments, and I'll commit the result)


was (Author: benedict):
I'm not sure we need all of the myriad accessors (so many different ways of 
providing the timing interval, the logger, etc). I would personally pare it 
back to just the NoSpamStatement. I particularly don't see a great deal of 
benefit to the provision of the current time, since the clock semantics may not 
match, and we shouldn't be logging in code _that_ performance sensitive. But no 
super strong feelings, and it looks like it works. +1

 Add support for rate limiting log statements
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg







[jira] [Assigned] (CASSANDRA-9084) Do not generate line number in logs

2015-04-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-9084:
---

Assignee: Benedict

 Do not generate line number in logs
 ---

 Key: CASSANDRA-9084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9084
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
Reporter: Andrey
Assignee: Benedict
Priority: Minor

 According to logback documentation 
 (http://logback.qos.ch/manual/layouts.html):
 {code}
 Generating the line number information is not particularly fast. Thus, its 
 use should be avoided unless execution speed is not an issue.
 {code}





cassandra git commit: Remove line numbers from default logback.xml

2015-04-03 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 57e1a2654 -> 77c40dfc4


Remove line numbers from default logback.xml

ninja patch by benedict for CASSANDRA-9084


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77c40dfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77c40dfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77c40dfc

Branch: refs/heads/trunk
Commit: 77c40dfc4910654fa0bad5be030d33d487cf2105
Parents: 57e1a26
Author: Benedict Elliott Smith bened...@apache.org
Authored: Fri Apr 3 23:22:57 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Apr 3 23:22:57 2015 +0100

--
 CHANGES.txt  | 1 +
 conf/logback.xml | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/77c40dfc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f351666..9449386 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Remove line number generation from default logback.xml
  * Don't execute any functions at prepare-time (CASSANDRA-9037)
  * Share file handles between all instances of a SegmentedFile (CASSANDRA-8893)
  * Make it possible to major compact LCS (CASSANDRA-7272)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77c40dfc/conf/logback.xml
--
diff --git a/conf/logback.xml b/conf/logback.xml
index e170d41..1c94a2c 100644
--- a/conf/logback.xml
+++ b/conf/logback.xml
@@ -31,7 +31,7 @@
   <maxFileSize>20MB</maxFileSize>
 </triggeringPolicy>
 <encoder>
-  <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
+  <pattern>%-5level [%thread] %date{ISO8601} %F: %msg%n</pattern>
   <!-- old-style log format
   <pattern>%5level [%thread] %date{ISO8601} %F (line %L) %msg%n</pattern>
   -->



[jira] [Updated] (CASSANDRA-9116) Indexes lost on upgrading to 2.1.4

2015-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9116:
--
Assignee: Sam Overton

 Indexes lost on upgrading to 2.1.4
 --

 Key: CASSANDRA-9116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mark Dewey
Assignee: Sam Overton
Priority: Blocker
 Fix For: 2.1.5


 How to reproduce:
 # Create a 2.0.12 cluster
 # Create the following keyspace/table (or something similar, it's primarily 
 the indexes that matter to this case afaict)
 {noformat}
 CREATE KEYSPACE tshirts WITH replication = {'class': 
 'NetworkTopologyStrategy', 'datacenter1': '1'}  AND durable_writes = true;
 CREATE TABLE tshirts.tshirtorders (
 store text,
 order_time timestamp,
 order_number uuid,
 color text,
 qty int,
 size text,
 PRIMARY KEY (store, order_time, order_number)
 ) WITH CLUSTERING ORDER BY (order_time ASC, order_number ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 CREATE INDEX color ON tshirts.tshirtorders (color);
 CREATE INDEX size ON tshirts.tshirtorders (size);
 {noformat}
 # Load it with data
 # Stop the node (one node cluster is enough to replicate)
 # Upgrade the node to 2.1.4
 # Start the node
 # Optional: Run nodetool upgradesstables
 # Run the following queries:
 {noformat}
 SELECT * FROM tshirts.tshirtorders WHERE store = 'store 65';
 SELECT store, color, qty, size FROM tshirts.tshirtorders WHERE store = 'store 
 65' AND color = 'red';
 {noformat}
 No rows will be returned by the indexed query.
 Sample output:
 {noformat}
 cqlsh SELECT * FROM tshirts.tshirtorders WHERE store = 'store 65';
  store| order_time   | order_number | 
 color  | qty  | size
 --+--+--++--+--
  store 65 | 2000-01-03 18:20:20+ | 457e60e6-da39-11e4-add3-42010af08298 | 
red | 1295 |M
  store 65 | 2000-01-04 01:29:21+ | 45947304-da39-11e4-add3-42010af08298 | 
   grey | 2805 |M
  store 65 | 2000-01-04 19:55:51+ | 45d69220-da39-11e4-add3-42010af08298 | 
  brown | 3380 | XXXL
  store 65 | 2000-01-04 22:45:07+ | 45e16894-da39-11e4-add3-42010af08298 | 
 yellow | 7000 |  XXL
  store 65 | 2000-01-05 17:09:56+ | 46083bd6-da39-11e4-add3-42010af08298 | 
 purple | 2440 |S
  store 65 | 2000-01-05 19:16:48+ | 460cadd8-da39-11e4-add3-42010af08298 | 
  green | 5690 |L
  store 65 | 2000-01-06 00:26:06+ | 461ccdbc-da39-11e4-add3-42010af08298 | 
  brown | 9890 |P
  store 65 | 2000-01-06 11:35:11+ | 4633aa00-da39-11e4-add3-42010af08298 | 
  black | 9350 |P
  store 65 | 2000-01-07 06:07:20+ | 4658e0ea-da39-11e4-add3-42010af08298 | 
  black | 1300 |S
  store 65 | 2000-01-07 06:47:40+ | 465be93e-da39-11e4-add3-42010af08298 | 
 purple | 9630 |   XL
  store 65 | 2000-01-09 12:42:38+ | 46bafdd4-da39-11e4-add3-42010af08298 | 
 purple | 1470 |M
  store 65 | 2000-01-09 19:07:35+ | 46c43e08-da39-11e4-add3-42010af08298 | 
   pink | 6005 |S
  store 65 | 2000-01-10 04:47:56+ | 46d4b170-da39-11e4-add3-42010af08298 | 
red |  345 |   XL
  store 65 | 2000-01-10 20:25:44+ | 46ef7d52-da39-11e4-add3-42010af08298 | 
   pink |  420 |  XXL
  store 65 | 2000-01-11 00:55:27+ | 46f7a84c-da39-11e4-add3-42010af08298 | 
 purple | 9045 |S
  store 65 | 2000-01-11 17:54:25+ | 4724ea00-da39-11e4-add3-42010af08298 | 
  green | 5030 |  XXL
  store 65 | 2000-01-12 08:21:15+ | 473c0370-da39-11e4-add3-42010af08298 | 
  white | 2860 |   XL
  store 65 | 2000-01-12 17:09:19+ | 47497d2a-da39-11e4-add3-42010af08298 | 
red | 6425 |L
  store 65 | 2000-01-14 07:27:37+ | 478662a8-da39-11e4-add3-42010af08298 | 
   pink |  330 | XXXL
  store 65 | 2000-01-14 11:31:38+ | 478b43cc-da39-11e4-add3-42010af08298 | 
   pink | 3335 |  XXL
  store 65 | 2000-01-14 18:55:59+ | 47955a24-da39-11e4-add3-42010af08298 | 
 yellow |  500 |P
  store 65 | 2000-01-15 01:59:52+ | 479f0c5e-da39-11e4-add3-42010af08298 | 
red | 8415 |   XL
  store 65 | 

[jira] [Commented] (CASSANDRA-9116) Indexes lost on upgrading to 2.1.4

2015-04-03 Thread Mark Dewey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395096#comment-14395096
 ] 

Mark Dewey commented on CASSANDRA-9116:
---

In tests against a three-node cluster, when two nodes are on 2.1.4 and one node 
is on 2.0.12, the following exception shows up in the system.log:
{noformat}
ERROR [Thread-18] 2015-04-03 16:02:04,366 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-18,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:247)
at 
org.apache.cassandra.db.RangeSliceCommandSerializer.deserialize(RangeSliceCommand.java:156)
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:149)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:131)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
{noformat}

 Indexes lost on upgrading to 2.1.4
 --

 Key: CASSANDRA-9116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mark Dewey
Assignee: Sam Tunnicliffe
Priority: Blocker
 Fix For: 2.1.5


 How to reproduce:
 # Create a 2.0.12 cluster
 # Create the following keyspace/table (or something similar, it's primarily 
 the indexes that matter to this case afaict)
 {noformat}
 CREATE KEYSPACE tshirts WITH replication = {'class': 
 'NetworkTopologyStrategy', 'datacenter1': '1'}  AND durable_writes = true;
 CREATE TABLE tshirts.tshirtorders (
 store text,
 order_time timestamp,
 order_number uuid,
 color text,
 qty int,
 size text,
 PRIMARY KEY (store, order_time, order_number)
 ) WITH CLUSTERING ORDER BY (order_time ASC, order_number ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 CREATE INDEX color ON tshirts.tshirtorders (color);
 CREATE INDEX size ON tshirts.tshirtorders (size);
 {noformat}
 # Load it with data
 # Stop the node (one node cluster is enough to replicate)
 # Upgrade the node to 2.1.4
 # Start the node
 # Optional: Run nodetool upgradesstables
 # Run the following queries:
 {noformat}
 SELECT * FROM tshirts.tshirtorders WHERE store = 'store 65';
 SELECT store, color, qty, size FROM tshirts.tshirtorders WHERE store = 'store 
 65' AND color = 'red';
 {noformat}
 No rows will be returned by the indexed query.
 Sample output:
 {noformat}
 cqlsh> SELECT * FROM tshirts.tshirtorders WHERE store = 'store 65';
 
  store    | order_time               | order_number                         | color  | qty  | size
 ----------+--------------------------+--------------------------------------+--------+------+------
  store 65 | 2000-01-03 18:20:20+0000 | 457e60e6-da39-11e4-add3-42010af08298 |    red | 1295 |    M
  store 65 | 2000-01-04 01:29:21+0000 | 45947304-da39-11e4-add3-42010af08298 |   grey | 2805 |    M
  store 65 | 2000-01-04 19:55:51+0000 | 45d69220-da39-11e4-add3-42010af08298 |  brown | 3380 | XXXL
  store 65 | 2000-01-04 22:45:07+0000 | 45e16894-da39-11e4-add3-42010af08298 | yellow | 7000 |  XXL
  store 65 | 2000-01-05 17:09:56+0000 | 46083bd6-da39-11e4-add3-42010af08298 | purple | 2440 |    S
  store 65 | 2000-01-05 19:16:48+0000 | 460cadd8-da39-11e4-add3-42010af08298 |  green | 5690 |    L
  store 65 | 2000-01-06 00:26:06+0000 | 461ccdbc-da39-11e4-add3-42010af08298 |  brown | 9890 |    P
  store 65 | 2000-01-06 11:35:11+0000 | 4633aa00-da39-11e4-add3-42010af08298 |  black | 9350 |    P
  store 65 | 2000-01-07 06:07:20+0000 | 4658e0ea-da39-11e4-add3-42010af08298 |  black | 1300 |    S
  store 65 | 2000-01-07 06:47:40+0000 | 465be93e-da39-11e4-add3-42010af08298 | purple | 9630 |   XL
  store 65 | 2000-01-09 12:42:38+0000 | 46bafdd4-da39-11e4-add3-42010af08298 | purple | 1470 |    M
  store 65 | 2000-01-09 19:07:35+0000 | 46c43e08-da39-11e4-add3-42010af08298 |   pink | 6005 |    S
  store 65 | 2000-01-10 04:47:56+0000 | 46d4b170-da39-11e4-add3-42010af08298 |    red |  345 |   XL

[jira] [Commented] (CASSANDRA-9117) LEAK DETECTED during repair, startup

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395154#comment-14395154
 ] 

Benedict commented on CASSANDRA-9117:
-

I was looking into a flood of LEAK DETECTED errors earlier today on trunk, but 
since they don't affect 2.1 I was hoping to wait until CASSANDRA-8984 is 
committed; I fully expect that to fix them if they're bugs anywhere in the main 
innards of resource management.

 LEAK DETECTED during repair, startup
 

 Key: CASSANDRA-9117
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9117
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Benedict
 Fix For: 3.0

 Attachments: node1.log, node2.log.gz


 When running the 
 {{incremental_repair_test.TestIncRepair.multiple_repair_test}} dtest, the 
 following error logs show up:
 {noformat}
 ERROR [Reference-Reaper:1] 2015-04-03 15:48:25,491 Ref.java:181 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@83f047e) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1631580268:Memory@[7f354800bdc0..7f354800bde8) was not released before the reference was garbage collected
 ERROR [Reference-Reaper:1] 2015-04-03 15:48:25,493 Ref.java:181 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@50bc8f67) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@191552666:Memory@[7f354800ba90..7f354800bdb0) was not released before the reference was garbage collected
 ERROR [Reference-Reaper:1] 2015-04-03 15:48:25,493 Ref.java:181 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@7fd10877) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1954741807:Memory@[7f3548101190..7f3548101194) was not released before the reference was garbage collected
 ERROR [Reference-Reaper:1] 2015-04-03 15:48:25,494 Ref.java:181 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@578550ac) to class org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1903393047:[[OffHeapBitSet]] was not released before the reference was garbage collected
 {noformat}
 The test is being run against trunk (commit {{1dff098e}}).  I've attached a 
 DEBUG-level log from the test run.





[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395089#comment-14395089
 ] 

Benedict commented on CASSANDRA-7066:
-

Just an implementation note: we need to ensure we sync the directory file 
descriptor after each log-file creation/deletion action, since these actions 
need a happens-before relation.
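
A minimal sketch of that pattern in Python (illustrative only; Cassandra's actual implementation is Java): after writing and fsyncing a new file, also fsync the parent directory's file descriptor, so the directory entry itself is durable before any action that depends on the file's existence. The helper name is hypothetical.

```python
import os
import tempfile

def create_file_durably(dir_path, name, data):
    """Write a file, fsync it, then fsync the parent directory so the
    new directory entry is durable -- establishing the happens-before
    relation between the file's existence and any later action."""
    path = os.path.join(dir_path, name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)                 # flush the file's contents
    finally:
        os.close(fd)
    # On POSIX systems a directory can be opened read-only and fsynced,
    # which flushes the creation (or deletion) of entries within it.
    dir_fd = os.open(dir_path, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
    return path

tmp = tempfile.mkdtemp()
log = create_file_durably(tmp, "txn.log", b"BEGIN\n")
print(os.path.basename(log))  # -> txn.log
```

Without the directory fsync, a crash could leave the file's contents on disk but the directory entry missing (or vice versa), breaking the ordering the log protocol relies on.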

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to cleanup incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely fail to clean up 
 files that have been superseded); and 2) it's only used for regular compaction - 
 no other compaction types are guarded in the same way, so they can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store in its metadata its direct ancestors, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
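
The startup cleanup described above can be sketched as follows (a toy Python model under the stated assumption that each sstable's metadata records the names of its direct ancestors): any on-disk sstable that appears in the union of all ancestor sets was an input to a completed compaction and is redundant.

```python
def sstables_to_delete(sstables):
    """Given a mapping of live sstable name -> set of direct-ancestor
    names (as recorded in each sstable's metadata), return the sstables
    that were inputs to a completed compaction and are now redundant."""
    superseded = set()
    for ancestors in sstables.values():
        superseded |= ancestors          # union of all ancestor sets
    # Only delete ancestors that are actually still on disk.
    return superseded & set(sstables)

# sstable-c was compacted from a and b; a is still on disk, b already gone.
live = {
    "sstable-a": set(),
    "sstable-c": {"sstable-a", "sstable-b"},
}
print(sorted(sstables_to_delete(live)))  # -> ['sstable-a']
```

The appeal of the scheme is exactly this simplicity: no separate system table to keep consistent, just a set intersection over metadata that already travels with each sstable.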





[jira] [Commented] (CASSANDRA-9071) CQLSSTableWriter gives java.lang.AssertionError: Empty partition

2015-04-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395195#comment-14395195
 ] 

Benedict commented on CASSANDRA-9071:
-

Looking at the code, it appears that the bug that could have been introduced 
there has been fixed. I suspect you may simply be providing zero mutations for a 
partition. Can you post the code you're using to interact with it?
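
The hypothesis of zero mutations for a partition can be illustrated with a toy Python model of a sorted sstable writer (not Cassandra's actual CQLSSTableWriter code): starting a new partition flushes the previous one, and flushing a partition that gathered no rows trips an "Empty partition" assertion, just like the one in the stack trace.

```python
class SimpleSSTableWriter:
    """Toy model of a sorted sstable writer: starting a new partition
    flushes the previous one, and flushing a partition with no rows
    raises, mirroring the 'Empty partition' assertion."""
    def __init__(self):
        self.flushed = []
        self._key = None
        self._rows = []

    def new_partition(self, key):
        self._flush()
        self._key, self._rows = key, []

    def add_row(self, row):
        self._rows.append(row)

    def _flush(self):
        if self._key is not None:
            assert self._rows, "Empty partition"
            self.flushed.append((self._key, self._rows))

    def close(self):
        self._flush()

w = SimpleSSTableWriter()
w.new_partition("store 65")
w.add_row({"qty": 1295})
w.new_partition("store 66")   # no rows added before the next flush...
try:
    w.close()                 # ...so close() trips the assertion
except AssertionError as e:
    print(e)                  # -> Empty partition
```

If the loading code can emit a partition key without any accompanying rows (for example, after filtering), guarding against that case is the first thing worth checking.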

 CQLSSTableWriter gives java.lang.AssertionError: Empty partition
 

 Key: CASSANDRA-9071
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9071
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: java 7 / 8
 cassandra 2.1.3 snapshot build locally with last commit 
 https://github.com/apache/cassandra/commit/6ee4b0989d9a3ae3e704918622024fa57fdf63e7
 macos Yosemite 10.10.2
Reporter: Ajit Joglekar
Assignee: Benedict
 Fix For: 2.1.5


 I am always getting the following error:
 Exception in thread main java.lang.AssertionError: Empty partition
 at 
 org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:228)
 It happens at a certain point that seems to be repeatable. The only issue is 
 that I am converting 400 million records into multiple SSTables, so creating a 
 small test case is a challenge.
 The last comment from Benedict looks relevant here: 
 https://issues.apache.org/jira/browse/CASSANDRA-8619
 Is there a workaround or quick fix that I can try out locally?
 Thanks,
 -Ajit





[1/2] cassandra git commit: cqlsh: fix formatting of map keys and values

2015-04-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2341e945b -> adcb8a439


cqlsh: fix formatting of map keys and values

Patch by Tyler Hobbs; reviewed by Philip Thompson for CASSANDRA-9114


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7162293
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7162293
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7162293

Branch: refs/heads/trunk
Commit: f7162293d2d61319be25aad49e76546ab335b9ff
Parents: e4072cf
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 3 15:17:20 2015 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 3 15:17:20 2015 -0500

--
 pylib/cqlshlib/formatting.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7162293/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index ac12fe6..868ec28 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -264,6 +264,7 @@ def format_value_map(val, encoding, colormap, date_time_format, float_precision,
 return FormattedValue(bval, coloredval, displaywidth)
 formatter_for('OrderedDict')(format_value_map)
 formatter_for('OrderedMap')(format_value_map)
+formatter_for('OrderedMapSerializedKey')(format_value_map)
 
 
 def format_value_utype(val, encoding, colormap, date_time_format, float_precision, nullval, **_):
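
The one-line fix above registers the existing map formatter under one more type name. The `formatter_for` registry pattern used by cqlshlib can be sketched like this (a simplified stand-in, not the actual cqlshlib code): formatters are looked up by the value's type name, so an alias registration reuses the same function for a new type.

```python
_formatters = {}

def formatter_for(type_name):
    """Decorator-style registrar: maps a type name to the function that
    renders values of that type; look-up happens by type(val).__name__."""
    def register(func):
        _formatters[type_name] = func
        return func
    return register

@formatter_for('dict')
def format_value_map(val):
    return '{' + ', '.join('%r: %r' % kv for kv in sorted(val.items())) + '}'

# Registering another type name reuses the same function -- the shape of
# the CASSANDRA-9114 fix, which added 'OrderedMapSerializedKey'.
formatter_for('OrderedDict')(format_value_map)

def format_value(val):
    return _formatters[type(val).__name__](val)

print(format_value({'size': 'M'}))  # -> {'size': 'M'}
```

The bug fixed here was simply a missing registration: values of an unregistered type name fell through to a default formatter instead of the map renderer.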



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-04-03 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/adcb8a43
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/adcb8a43
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/adcb8a43

Branch: refs/heads/trunk
Commit: adcb8a439a4826db6d5db1672c882525b60c66cc
Parents: 2341e94 f716229
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 3 15:18:17 2015 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 3 15:18:17 2015 -0500

--
 pylib/cqlshlib/formatting.py | 1 +
 1 file changed, 1 insertion(+)
--



