[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375856#comment-14375856
 ] 

Aleksey Yeschenko commented on CASSANDRA-6809:
--

+1 to committing as is.

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-23 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375860#comment-14375860
 ] 

Branimir Lambov commented on CASSANDRA-8988:


+1

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-23 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375717#comment-14375717
 ] 

Branimir Lambov commented on CASSANDRA-8988:


Method names are the best form of documentation there is; not using them well 
is a missed opportunity.

My concrete suggestions:
- compare2: compareAsymmetric (sorry I didn't realize this is not restricted to 
intervals)
- find2: binaryBoundarySearch ('binary' lets people know its complexity and 
that it needs a sorted array immediately)
- the arguments: searchIn and searchFor are good ones
- inclusivei: acceptsEqual (or maybe lowerIncludesEqual). 'lt' could be called 
similarly ({{if (c  acceptEqual)}} is not the clearest line of code, but it 
does help understand what's going on).
- returni: selectBoundary. Might be worth renaming i, j to lower, upper as well.

Actually I also preferred the original explicit version of the code. You can go 
back to it if you want; the downside with it is that it sort of itches to be 
improved and people will waste time trying to do just that.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375771#comment-14375771
 ] 

Benedict edited comment on CASSANDRA-6936 at 3/23/15 11:43 AM:
---

So, the more often I think of future storage changes, the more this becomes a 
pain and a headache. I would like to reassess the possibility of making 
everything byte-order comparable. How widely deployed are custom AbstractType 
implementations where the comparator makes a difference? Because it seems 
dropping support for just this (and having the user define an ASC/DESC order on 
the fields for maps/sets/tables within a UDT instead, for instance) would give 
us the ability to deliver it universally.

As far as I am aware, we're the only database that hamstrings ourselves with 
this limitation (or permittance). I would like to byte-prefix compress our 
index file (because as standard it takes up a significant proportion of the 
data it indexes unnecessarily, inflating the number of disk accesses and 
reducing the effective capacity of the key cache), but this isn't possible 
without a majority of fields supporting this. Even then, if we have special 
casing for those that do not, this is a headache and code complexity. It also 
pollutes the icache and branch predictors (not just with the inflation of 
variances, but in the logic to select between them). This is not to be 
understated: it's surprising how many icache misses you can get on a simple 
in-memory stress workload, which is underrepresentative of the variation for a 
normal deployment. vtune rates our utilisation of chips pretty poorly, and this 
is a major contributor. The same is true for optimising merges (we get 
significantly better algorithmic complexity with much fewer changes if the 
comparable fields are byte-prefix comparable), and for compressing clustering 
columns in data files on disk. I am certain I will encounter more scenarios 
before long.

I think the cumulative performance wins here would be really _very_ 
significant, for all workloads (compaction, disk reads and in-memory reads all 
have significant wins from this change).

CASSANDRA-8099, CASSANDRA-8731, CASSANDRA-8906 and CASSANDRA-8915 all help, but 
none will help as significantly - and each adds its own complexity, whereas 
this would _simplify_, which I think is important (for us as well as the CPU)


was (Author: benedict):
So, the more often I think of future storage changes, the more this becomes a 
pain and a headache. I would like to reassess the possibility of making 
everything byte-order comparable. How widely deployed are custom AbstractType 
implementations where the comparator makes a difference? Because it seems 
dropping support for just this (and having the user define an ASC/DESC order on 
the fields for maps/sets/tables within a UDT instead, for instance) would give 
us the ability to deliver it universally.

As far as I am aware, we're the only database that hamstrings ourselves with 
this limitation (or permittance). I would like to byte-prefix compress our 
index file (because as standard it takes up a significant proportion of the 
data it indexes unnecessarily, inflating the number of disk accesses and 
reducing the effective capacity of the key cache), but this isn't possible 
without a majority of fields supporting this. Even then, if we have special 
casing for those that do not, this is a headache and code complexity. It also 
pollutes the icache and branch predictors (not just with the inflation of 
variances, but in the logic to select between them). This is not to be 
understated: it's surprising how many icache misses you can get on a simple 
in-memory stress workload, which is underrepresentative of the variation for a 
normal deployment. vtune rates our utilisation of chips pretty poorly, and this 
is a major contributor. The same is true for optimising merges (we get 
significantly better algorithmic complexity with much fewer changes if the 
comparable fields are byte-prefix comparable), and for compressing clustering 
columns in data files on disk. I am certain I will encounter more scenarios 
before long.

I think the cumulative performance wins here would be really _very_ 
significant, for all workloads (compaction, disk reads and in-memory reads all 
have significant wins from this change).

 Make all byte representations of types comparable by their unsigned byte 
 representation only
 

 Key: CASSANDRA-6936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
  Labels: performance
 Fix For: 3.0


 This could be a painful change, but is necessary for 

[jira] [Resolved] (CASSANDRA-6173) Unable to delete multiple entries using In clause on clustering part of compound key

2015-03-23 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-6173.
---
Resolution: Duplicate

This issue will be fixed as part of CASSANDRA-6237

 Unable to delete multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6173
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6173
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Ashot Golovenko
Assignee: Benjamin Lerer
Priority: Minor

 I have the following table:
 CREATE TABLE user_relation (
 u1 bigint,
 u2 bigint,
 mf int,
 i boolean,
 PRIMARY KEY (u1, u2));
 And I'm trying to delete two entries using In clause on clustering part of 
 compound key and I fail to do so:
 cqlsh:bm DELETE from user_relation WHERE u1 = 755349113 and u2 in 
 (13404014120, 12537242743);
 Bad Request: Invalid operator IN for PRIMARY KEY part u2
 Although the select statement works just fine:
 cqlsh:bm select * from user_relation WHERE u1 = 755349113 and u2 in 
 (13404014120, 12537242743);
  u1| u2  | i| mf
 ---+-+--+
  755349113 | 12537242743 | null | 27
  755349113 | 13404014120 | null |  0
 (2 rows)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8397) Support UPDATE with IN requirement for clustering key

2015-03-23 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-8397.
---
Resolution: Duplicate

The fix for CASSANDRA-6237 will also fix this issue.

 Support UPDATE with IN requirement for clustering key
 -

 Key: CASSANDRA-8397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
 Project: Cassandra
  Issue Type: Wish
Reporter: Jens Rantil
Assignee: Benjamin Lerer
Priority: Minor

 {noformat}
 CREATE TABLE events (
 userid uuid,
 id timeuuid,
 content text,
 type text,
 PRIMARY KEY (userid, id)
 )
 # Add data
 cqlsh:mykeyspace UPDATE events SET content='Hello' WHERE 
 userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
 (046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
 code=2200 [Invalid query] message=Invalid operator IN for PRIMARY KEY part 
 id
 {noformat}
 I was surprised this doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375849#comment-14375849
 ] 

Benedict commented on CASSANDRA-8988:
-

OK, pushed a final version

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8397) Support UPDATE with IN requirement for clustering key

2015-03-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375700#comment-14375700
 ] 

Benjamin Lerer commented on CASSANDRA-8397:
---

[~ztyx] For an {{IN}} on a clustering key the code will create only one 
{{Mutation}} so the commit log will insert only one item.

 Support UPDATE with IN requirement for clustering key
 -

 Key: CASSANDRA-8397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
 Project: Cassandra
  Issue Type: Wish
Reporter: Jens Rantil
Assignee: Benjamin Lerer
Priority: Minor

 {noformat}
 CREATE TABLE events (
 userid uuid,
 id timeuuid,
 content text,
 type text,
 PRIMARY KEY (userid, id)
 )
 # Add data
 cqlsh:mykeyspace UPDATE events SET content='Hello' WHERE 
 userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
 (046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
 code=2200 [Invalid query] message=Invalid operator IN for PRIMARY KEY part 
 id
 {noformat}
 I was surprised this doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6922) Investigate if we can drop ByteOrderedPartitioner and OrderPreservingPartitioner in 3.0

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375698#comment-14375698
 ] 

Benedict commented on CASSANDRA-6922:
-

New storage formats would also be materially simplified if we can assume a hash 
distribution of data. For locality there has been talk (not sure if a ticket 
has been filed) of supporting token computation against only a prefix of the 
partition key, which would support most use cases. The problem we have with 
maintaining both is that they are pretty (by definition) orthogonal approaches, 
and so we make a lot of decisions in favour of hashed at the expense of BOP, 
but it lingering still creates a number of headaches and less optimal 
decisions. I'm +1 deprecating, and slating for removal.

 Investigate if we can drop ByteOrderedPartitioner and 
 OrderPreservingPartitioner in 3.0
 ---

 Key: CASSANDRA-6922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6922
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 3.0


 We would need to add deprecation warnings in 2.1, rewrite a lot of unit 
 tests, and perhaps provide tools/guidelines to migrate an existing data set 
 to Murmur3Partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375750#comment-14375750
 ] 

Benedict commented on CASSANDRA-8988:
-

bq. find2: binaryBoundarySearch ('binary' lets people know its complexity and 
that it needs a sorted array immediately)

Boundary confuses me, whereas the argument Op makes it quite clear what you're 
asking for, and we would have to call it binaryBoundarySearchAsymmetric which 
is much too verbose. We could call it binarySearchAsymmetric() but I really 
don't think either this or compareAsymmetric offer more than compare2 since 
they have their type info to convey this information. Succinctness has its own 
rewards, and there is a tension between the two. That's my last statement on it 
though, since I don't feel dramatically strongly, and if you're still convinced 
I'll bow to your judgement.

bq. Might be worth renaming i, j to lower, upper as well.

Took me a while to figure out why this sentence confused me (and I think would 
make the code less clear): lower/higher already have a meaning in the Op, and 
it is different to the meaning you're ascribing here. By convention i = j in 
algorithms (unless they span the same range, or a different dataset, in which 
case it is just an order of declaration), so it conveys the same information 
without any potential for confusion. Alternatively lb/ub are clearer (to me).

bq. inclusivei: acceptsEqual (or maybe lowerIncludesEqual). 'lt' could be 
called similarly (if (c  acceptEqual) is not the clearest line of code, but it 
does help understand what's going on).

How about strictnessOfLessThan? I don't think acceptsEqual is more descriptive 
than inclusivei, but {{if (c  strictnessOfLessThan)}} seems to convey the 
right meaning.

bq. returni: selectBoundary. 

That works for me.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree

2015-03-23 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375790#comment-14375790
 ] 

Branimir Lambov commented on CASSANDRA-8988:


bq. the argument Op makes it quite clear what you're asking for

Agreed, let's use binarySearchAsymmetric.

bq. By convention i = j in algorithms

I don't think this is a widely used convention, it is probably very clear to 
you but isn't to me. lb/ub sounds good.

bq. strictnessOfLessThan

Sounds good.

 Optimise IntervalTree
 -

 Key: CASSANDRA-8988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0

 Attachments: 8988.txt


 We perform a lot of unnecessary comparisons in 
 IntervalTree.IntervalNode.searchInternal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE

2015-03-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376019#comment-14376019
 ] 

Benjamin Lerer commented on CASSANDRA-8900:
---

+1
[~iamaleksey] can you commit?

 AssertionError when binding nested collection in a DELETE
 -

 Key: CASSANDRA-8900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8900
 Project: Cassandra
  Issue Type: Bug
Reporter: Olivier Michallat
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.4


 Running this with the Java driver:
 {code}
 session.execute(create table if not exists foo2(k int primary key, m 
 mapfrozenlistint, int););
 PreparedStatement pst = session.prepare(delete m[?] from foo2 where k = 1);
 session.execute(pst.bind(ImmutableList.of(1)));
 {code}
 Produces a server error. Server-side stack trace:
 {code}
 ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0xf9e92e61, 
 /127.0.0.1:58163 = /127.0.0.1:9042]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_60]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
 {code}
 A simple statement (i.e. QUERY message with values) produces the same result:
 {code}
 session.execute(delete m[?] from foo2 where k = 1, ImmutableList.of(1));
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8880) Add metrics to monitor the amount of tombstones created

2015-03-23 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-8880:
--
Assignee: (was: Lyuben Todorov)

 Add metrics to monitor the amount of tombstones created
 ---

 Key: CASSANDRA-8880
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8880
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Priority: Minor
  Labels: metrics
 Fix For: 2.1.4

 Attachments: cassandra-2.1-8880.patch


 AFAIK there's currently no way to monitor the amount of tombstones created on 
 a Cassandra node. CASSANDRA-6057 has made it possible for users to figure out 
 how many tombstones are scanned at read time, but in write mostly workloads, 
 it may not be possible to realize if some inappropriate queries are 
 generating too many tombstones.
 Therefore the following additional metrics should be added:
 * {{writtenCells}}: amount of cells that have been written
 * {{writtenTombstoneCells}}: amount of tombstone cells that have been written
 Alternatively these could be exposed as a single gauge such as 
 {{writtenTombstoneCellsRatio}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Allow scrub for secondary index

2015-03-23 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk bee474626 - 540e9cf75


Allow scrub for secondary index

patch by Stefania Alborghetti; reviewed by yukim for CASSANDRA-5174


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/540e9cf7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/540e9cf7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/540e9cf7

Branch: refs/heads/trunk
Commit: 540e9cf75243888f878760d5488dda3a0bcfdc86
Parents: bee4746
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Mon Mar 23 10:15:52 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Mon Mar 23 10:15:52 2015 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  65 -
 .../org/apache/cassandra/db/DataTracker.java|   7 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |  78 +-
 .../db/compaction/CompactionManager.java|  23 +-
 .../cassandra/db/compaction/Scrubber.java   |  21 +-
 .../AbstractSimplePerColumnSecondaryIndex.java  |   3 +-
 .../cassandra/db/index/SecondaryIndex.java  |   3 +-
 .../db/index/SecondaryIndexManager.java |   9 +
 .../io/sstable/format/SSTableReader.java|   4 +-
 .../io/sstable/format/SSTableWriter.java|  13 +-
 .../cassandra/service/StorageService.java   |  69 +-
 .../org/apache/cassandra/tools/NodeTool.java|   5 +-
 .../cassandra/tools/StandaloneScrubber.java |  33 ++-
 .../cassandra/tools/StandaloneSplitter.java |   2 +-
 .../cassandra/tools/StandaloneUpgrader.java |   2 +-
 test/unit/org/apache/cassandra/Util.java|  10 +
 .../apache/cassandra/db/RangeTombstoneTest.java |   1 -
 .../unit/org/apache/cassandra/db/ScrubTest.java | 237 ---
 .../db/index/PerRowSecondaryIndexTest.java  |   1 -
 20 files changed, 432 insertions(+), 155 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/540e9cf7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 68df3e6..c136c52 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -71,6 +71,7 @@
  * Evaluate MurmurHash of Token once per query (CASSANDRA-7096)
  * Generalize progress reporting (CASSANDRA-8901)
  * Resumable bootstrap streaming (CASSANDRA-8838)
+ * Allow scrub for secondary index (CASSANDRA-5174)
 
 2.1.4
  * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/540e9cf7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4795b88..7f1bf98 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -315,9 +315,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.info(Initializing {}.{}, keyspace.getName(), name);
 
 // scan for sstables corresponding to this cf and load them
-data = new DataTracker(this);
+data = new DataTracker(this, loadSSTables);
 
-if (loadSSTables)
+if (data.loadsstables)
 {
 Directories.SSTableLister sstableFiles = 
directories.sstableLister().skipTemporary(true);
 CollectionSSTableReader sstables = 
SSTableReader.openAll(sstableFiles.list().entrySet(), metadata, 
this.partitioner);
@@ -431,12 +431,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 return createColumnFamilyStore(keyspace, columnFamily, 
StorageService.getPartitioner(), 
Schema.instance.getCFMetaData(keyspace.getName(), columnFamily), loadSSTables);
 }
 
-public static ColumnFamilyStore createColumnFamilyStore(Keyspace keyspace, 
String columnFamily, IPartitioner partitioner, CFMetaData metadata)
-{
-return createColumnFamilyStore(keyspace, columnFamily, partitioner, 
metadata, true);
-}
-
-private static synchronized ColumnFamilyStore 
createColumnFamilyStore(Keyspace keyspace,
+public static synchronized ColumnFamilyStore 
createColumnFamilyStore(Keyspace keyspace,
  
String columnFamily,
  
IPartitioner partitioner,
  
CFMetaData metadata,
@@ -764,6 +759,11 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.info(Done loading load new SSTables for {}/{}, 
keyspace.getName(), name);
 }
 
+  

[jira] [Commented] (CASSANDRA-8880) Add metrics to monitor the amount of tombstones created

2015-03-23 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376051#comment-14376051
 ] 

Chris Lohfink commented on CASSANDRA-8880:
--

Could just count the number of range tombstones in the columnFamily and 
increment that. Can change metric to writtenTombstones which would meet 
expectations a little more.

 Add metrics to monitor the amount of tombstones created
 ---

 Key: CASSANDRA-8880
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8880
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Priority: Minor
  Labels: metrics
 Fix For: 2.1.4

 Attachments: cassandra-2.1-8880.patch


 AFAIK there's currently no way to monitor the amount of tombstones created on 
 a Cassandra node. CASSANDRA-6057 has made it possible for users to figure out 
 how many tombstones are scanned at read time, but in write mostly workloads, 
 it may not be possible to realize if some inappropriate queries are 
 generating too many tombstones.
 Therefore the following additional metrics should be added:
 * {{writtenCells}}: amount of cells that have been written
 * {{writtenTombstoneCells}}: amount of tombstone cells that have been written
 Alternatively these could be exposed as a single gauge such as 
 {{writtenTombstoneCellsRatio}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9017) Let QueryHandler provide preparedStatementsCount

2015-03-23 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-9017:
---

 Summary: Let QueryHandler provide preparedStatementsCount
 Key: CASSANDRA-9017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9017
 Project: Cassandra
  Issue Type: Wish
Reporter: Robert Stupp
Priority: Minor


{{CQLMetrics.init()}} uses {{QueryProcessor.preparedStatementsCount();}} but 
calls to {{prepareStatement}} usually go though {{QueryHandler}}. So 
{{preparedStatementsCount()}} would logically belong to {{QueryHandler}} and 
not {{QueryProcessor}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9010) Identify missing test coverage by documented functionality

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375973#comment-14375973
 ] 

Philip Thompson edited comment on CASSANDRA-9010 at 3/23/15 2:38 PM:
-

https://docs.google.com/spreadsheets/d/1KgKIHgFxL0nGJqjfRE0kCpuWyjFpOHUtirGIRH1so-M/edit?usp=sharing


was (Author: philipthompson):
https://docs.google.com/a/datastax.com/spreadsheets/d/1KgKIHgFxL0nGJqjfRE0kCpuWyjFpOHUtirGIRH1so-M/edit#gid=0

 Identify missing test coverage by documented functionality
 --

 Key: CASSANDRA-9010
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9010
 Project: Cassandra
  Issue Type: Task
Reporter: Ariel Weisberg
Assignee: Philip Thompson
  Labels: monthly-release

 The output of this is a spreadsheet that is already being worked on.
 [~philipthompson] you can assign to me once you are done and I can do another 
 pass and review things like unit test coverage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9010) Identify missing test coverage by documented functionality

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9010:
---
Assignee: Ariel Weisberg  (was: Philip Thompson)

 Identify missing test coverage by documented functionality
 --

 Key: CASSANDRA-9010
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9010
 Project: Cassandra
  Issue Type: Task
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
  Labels: monthly-release

 The output of this is a spreadsheet that is already being worked on.
 [~philipthompson] you can assign to me once you are done and I can do another 
 pass and review things like unit test coverage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9010) Identify missing test coverage by documented functionality

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375973#comment-14375973
 ] 

Philip Thompson commented on CASSANDRA-9010:


https://docs.google.com/a/datastax.com/spreadsheets/d/1KgKIHgFxL0nGJqjfRE0kCpuWyjFpOHUtirGIRH1so-M/edit#gid=0

 Identify missing test coverage by documented functionality
 --

 Key: CASSANDRA-9010
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9010
 Project: Cassandra
  Issue Type: Task
Reporter: Ariel Weisberg
Assignee: Philip Thompson
  Labels: monthly-release

 The output of this is a spreadsheet that is already being worked on.
 [~philipthompson] you can assign to me once you are done and I can do another 
 pass and review things like unit test coverage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-23 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375918#comment-14375918
 ] 

Alan Boudreault commented on CASSANDRA-6809:


[~benedict] Adding a note to retest my simple dtest with the latest branch this 
week. No objection to see this committed right now though.

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8411) Cassandra stress tool fails with NotStrictlyPositiveException on example profiles

2015-03-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375937#comment-14375937
 ] 

T Jake Luciani commented on CASSANDRA-8411:
---

+1

 Cassandra stress tool fails with NotStrictlyPositiveException on example 
 profiles
 -

 Key: CASSANDRA-8411
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8411
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux Centos
Reporter: Igor Meltser
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4

 Attachments: 8411.txt


 Trying to run stress tool with provided profile fails:
 dsc-cassandra-2.1.2/tools $ ./bin/cassandra-stress user n=1 
 profile=cqlstress-example.yaml ops\(insert=1\) -node 
 INFO  06:21:35 Using data-center name 'datacenter1' for 
 DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
 datacenter name with DCAwareRoundRobinPolicy constructor)
 Connected to cluster: Benchmark Cluster
 INFO  06:21:35 New Cassandra host /:9042 added
 Datatacenter: datacenter1; Host: /.; Rack: rack1
 Datatacenter: datacenter1; Host: /; Rack: rack1
 Datatacenter: datacenter1; Host: /; Rack: rack1
 INFO  06:21:35 New Cassandra host /:9042 added
 INFO  06:21:35 New Cassandra host /:9042 added
 Created schema. Sleeping 3s for propagation.
 Exception in thread main 
 org.apache.commons.math3.exception.NotStrictlyPositiveException: standard 
 deviation (0)
 at 
 org.apache.commons.math3.distribution.NormalDistribution.init(NormalDistribution.java:108)
 at 
 org.apache.cassandra.stress.settings.OptionDistribution$GaussianFactory.get(OptionDistribution.java:418)
 at 
 org.apache.cassandra.stress.generate.SeedManager.init(SeedManager.java:59)
 at 
 org.apache.cassandra.stress.settings.SettingsCommandUser.getFactory(SettingsCommandUser.java:78)
 at org.apache.cassandra.stress.StressAction.run(StressAction.java:61)
 at org.apache.cassandra.stress.Stress.main(Stress.java:109)
 The tool is 2.1.2 version, but the tested Cassandra is 2.0.8 version



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8949) CompressedSequentialWriter.resetAndTruncate can lose data

2015-03-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375926#comment-14375926
 ] 

T Jake Luciani commented on CASSANDRA-8949:
---

+1

 CompressedSequentialWriter.resetAndTruncate can lose data
 -

 Key: CASSANDRA-8949
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8949
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Critical
 Fix For: 2.0.14


 If the FileMark passed into this method fully fills the buffer, a subsequent 
 call to write will reBuffer and drop the data currently in the buffer. We 
 need to mark the buffer contents as dirty in resetAndTruncate to prevent this 
 - see CASSANDRA-8709 notes for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9018) Dropped keyspace is not collected

2015-03-23 Thread Maxim Podkolzine (JIRA)
Maxim Podkolzine created CASSANDRA-9018:
---

 Summary: Dropped keyspace is not collected
 Key: CASSANDRA-9018
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9018
 Project: Cassandra
  Issue Type: Bug
Reporter: Maxim Podkolzine
 Attachments: cassandra-log.zip

As far as I understand when a keyspace is dropped, the data is marked as 
tombstone. We expect that after the grace period (all tables are created with 
gc_grace_seconds=7200), this data is automatically removed during the 
compaction process, which means that keyspace no longer takes any space on disk.

This is not happening (not after 2 or 24 hours). The log keeps saying No files 
to compact for user defined compaction, keyspace files remain on disk. It's 
not clear whether Cassandra is still waiting for certain event, or decided not 
to collect the data.

Is there any setting that I missed? Any clues to figure out from the log, 
what's the current state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-03-23 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-8561:
--
Attachment: cassandra-trunk-head-1427125869-8561.diff
cassandra-2.1-head-1427124485-8561.diff

Attaching patch for trunk and cassandra-2.1, should apply cleanly on 23 Mar 
15:50 GMT.

 Tombstone log warning does not log partition key
 

 Key: CASSANDRA-8561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Datastax DSE 4.5
Reporter: Jens Rantil
Assignee: Lyuben Todorov
  Labels: logging
 Fix For: 2.1.4

 Attachments: cassandra-2.1-8561.diff, 
 cassandra-2.1-head-1427124485-8561.diff, 
 cassandra-trunk-head-1427125869-8561.diff


 AFAIK, the tombstone warning in system.log does not contain the primary key. 
 See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
 Including it would help a lot in diagnosing why the (CQL) row has so many 
 tombstones.
 Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7533) Let MAX_OUTSTANDING_REPLAY_COUNT be configurable

2015-03-23 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reassigned CASSANDRA-7533:
--

Assignee: Jeremiah Jordan  (was: Branimir Lambov)

 Let MAX_OUTSTANDING_REPLAY_COUNT be configurable
 

 Key: CASSANDRA-7533
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7533
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Jeremiah Jordan
Priority: Minor
 Fix For: 2.0.14


 There are some workloads where commit log replay will run into contention 
 issues with multiple things updating the same partition.  Through some 
 testing it was found that lowering CommitLogReplayer.java 
 MAX_OUTSTANDING_REPLAY_COUNT can help with this issue.
 The calculations added in CASSANDRA-6655 are one such place things get 
 bottlenecked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375771#comment-14375771
 ] 

Benedict commented on CASSANDRA-6936:
-

So, the more often I think of future storage changes, the more this becomes a 
pain and a headache. I would like to reassess the possibility of making 
everything byte-order comparable. How widely deployed are custom AbstractType 
implementations where the comparator makes a difference? Because it seems 
dropping support for just this (and having the user define an ASC/DESC order on 
the fields for maps/sets/tables within a UDT instead, for instance) would give 
us the ability to deliver it universally.

As far as I am aware, we're the only database that hamstrings ourselves with 
this limitation (or permittance). I would like to byte-prefix compress our 
index file (because as standard it takes up a significant proportion of the 
data it indexes unnecessarily, inflating the number of disk accesses and 
reducing the effective capacity of the key cache), but this isn't possible 
without a majority of fields supporting this. Even then, if we have special 
casing for those that do not, this is a headache and code complexity. It also 
pollutes the icache and branch predictors (not just with the inflation of 
variances, but in the logic to select between them). This is not to be 
understated: it's surprising how many icache misses you can get on a simple 
in-memory stress workload, which is underrepresentative of the variation for a 
normal deployment. vtune rates our utilisation of chips pretty poorly, and this 
is a major contributor. The same is true for optimising merges (we get 
significantly better algorithmic complexity with much fewer changes if the 
comparable fields are byte-prefix comparable), and for compressing clustering 
columns in data files on disk. I am certain I will encounter more scenarios 
before long.

I think the cumulative performance wins here would be really _very_ 
significant, for all workloads (compaction, disk reads and in-memory reads all 
have significant wins from this change).

 Make all byte representations of types comparable by their unsigned byte 
 representation only
 

 Key: CASSANDRA-6936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
  Labels: performance
 Fix For: 3.0


 This could be a painful change, but is necessary for implementing a 
 trie-based index, and settling for less would be suboptimal; it also should 
 make comparisons cheaper all-round, and since comparison operations are 
 pretty much the majority of C*'s business, this should be easily felt (see 
 CASSANDRA-6553 and CASSANDRA-6934 for an example of some minor changes with 
 major performance impacts). No copying/special casing/slicing should mean 
 fewer opportunities to introduce performance regressions as well.
 Since I have slated for 3.0 a lot of non-backwards-compatible sstable 
 changes, hopefully this shouldn't be too much more of a burden.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375447#comment-14375447
 ] 

xiangdong Huang commented on CASSANDRA-8581:


Yeah, the follow is the differences.

diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count. I find that ReducerToFilesystem is 
 correct but when I use ReducerToCassandra, the program will call loadYaml().
 The reason is that the program catch a exception at line 196 of 
 ColumnFamilyRecoderWriter.java. 
 Then it check why the exception occur, then it loadYaml to check if the disk 
 is broken...
 However, the exception is NullPointerException. because the client is not 
 initialized.
  
 So we need a check to judge whether the client is null. 
 (
 The exception, original code and fixed code are in the attachments.
 )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangdong Huang updated CASSANDRA-8581:
---
Comment: was deleted

(was: Yeah, the follow is the differences.

diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {)

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count. I find that ReducerToFilesystem is 
 correct but when I use ReducerToCassandra, the program will call loadYaml().
 The reason is that the program catch a exception at line 196 of 
 ColumnFamilyRecoderWriter.java. 
 Then it check why the exception occur, then it loadYaml to check if the disk 
 is broken...
 However, the exception is NullPointerException. because the client is not 
 initialized.
  
 So we need a check to judge whether the client is null. 
 (
 The exception, original code and fixed code are in the attachments.
 )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375448#comment-14375448
 ] 

xiangdong Huang commented on CASSANDRA-8581:


Ok, here is the differences:

diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {


 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count. I find that ReducerToFilesystem is 
 correct but when I use ReducerToCassandra, the program will call loadYaml().
 The reason is that the program catch a exception at line 196 of 
 ColumnFamilyRecoderWriter.java. 
 Then it check why the exception occur, then it loadYaml to check if the disk 
 is broken...
 However, the exception is NullPointerException. because the client is not 
 initialized.
  
 So we need a check to judge whether the client is null. 
 (
 The exception, original code and fixed code are in the attachments.
 )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375448#comment-14375448
 ] 

xiangdong Huang edited comment on CASSANDRA-8581 at 3/23/15 6:24 AM:
-

Ok, here is the differences:
(I find that the format will be changed in this website. So I update the 
difference in the Attachment)
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {



was (Author: jixuan1989):
Ok, here is the differences:

diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {


 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 2015-01-08 
 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count. I find that 

[jira] [Comment Edited] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375448#comment-14375448
 ] 

xiangdong Huang edited comment on CASSANDRA-8581 at 3/23/15 6:25 AM:
-

Ok, here is the differences:
(I find that the format will be changed in this website. So I update the 
difference in the Attachment difference)
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {



was (Author: jixuan1989):
Ok, here is the differences:
(I find that the format will be changed in this website. So I update the 
difference in the Attachment)
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
index 6823342..68c8714 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
@@ -187,21 +187,23 @@ final class ColumnFamilyRecordWriter extends 
AbstractColumnFamilyRecordWriterBy
 while (true)
 {
 // send the mutation to the last-used endpoint.  first 
time through, this will NPE harmlessly.
-try
-{
-client.batch_mutate(batch, consistencyLevel);
-break;
-}
-catch (Exception e)
-{
-closeInternal();
-if (!iter.hasNext())
-{
-lastException = new IOException(e);
-break outer;
-}
+if(client!=null){
+   try
+   {
+   client.batch_mutate(batch, consistencyLevel);
+   break;
+   }
+   catch (Exception e)
+   {
+   e.printStackTrace();
+   closeInternal();
+   if (!iter.hasNext())
+   {
+   lastException = new IOException(e);
+   break outer;
+   }
+   }
 }
-
 // attempt to connect to a different endpoint
 try
 {


 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: difference, 屏幕快照 2015-01-08 下午7.59.29.png, 

[jira] [Updated] (CASSANDRA-8581) Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter

2015-03-23 Thread xiangdong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangdong Huang updated CASSANDRA-8581:
---
Attachment: difference

my patch to fix NullPointer exception

 Null pointer in cassandra.hadoop.ColumnFamilyRecoderWriter
 --

 Key: CASSANDRA-8581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8581
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: xiangdong Huang
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 2.1.4

 Attachments: difference, 屏幕快照 2015-01-08 下午7.59.29.png, 屏幕快照 
 2015-01-08 下午8.01.15.png, 屏幕快照 2015-01-08 下午8.07.23.png


 When I run examples/hadoop_word_count. I find that ReducerToFilesystem is 
 correct but when I use ReducerToCassandra, the program will call loadYaml().
 The reason is that the program catch a exception at line 196 of 
 ColumnFamilyRecoderWriter.java. 
 Then it check why the exception occur, then it loadYaml to check if the disk 
 is broken...
 However, the exception is NullPointerException. because the client is not 
 initialized.
  
 So we need a check to judge whether the client is null. 
 (
 The exception, original code and fixed code are in the attachments.
 )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9024) cleanup nodetool command clears all the data of some columnfamily

2015-03-23 Thread Chetana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377276#comment-14377276
 ] 

Chetana  commented on CASSANDRA-9024:
-

[root@dl360x2861 tmp]#  /usr/share/cassandra/bin/nodetool -h localhost -u vs 
-pw EvaiKiO1@2 -p 7199 cleanup
Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
cast to org.apache.cassandra.io.sstable.SSTableScanner
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImp
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:5
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTranspor
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTable 
   canner
at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompact
at org.apache.cassandra.db.compaction.CompactionManager.access$400(Compa
at org.apache.cassandra.db.compaction.CompactionManager$5.perform(Compac
at org.apache.cassandra.db.compaction.CompactionManager$2.call(Compactio
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more


 cleanup nodetool command clears all the data of some columnfamily 
 --

 Key: CASSANDRA-9024
 URL: 

[jira] [Commented] (CASSANDRA-9024) cleanup nodetool command clears all the data of some columnfamily

2015-03-23 Thread Chetana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377284#comment-14377284
 ] 

Chetana  commented on CASSANDRA-9024:
-

table structure:

Keyspace: vs:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
Options: [DC1:3]
  Column Families:
 ColumnFamily: vshash
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.0
  DC Local Read repair chance: 0.0
  Populate IO Cache on flush: false
  Replicate on write: true
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: 0.01
  Index interval: 128
  Speculative Retry: 99.0PERCENTILE
  Built indexes: []
  Column Metadata:
Column Name: TX_KEY
  Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Compaction Strategy: 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.LZ4Compressor


This column family has just one row 

RowKey: TX_KEY
= (name=c:TX_KEY, 
value=er8798r790tdfjdkdjfgkjdlkfgjd,heryieuwyroieyotieourtiuer8787, 
u54u43y5i4unsdbfdngjgh, timestamp=1417504310368001)

This row gets completely lost when cleanup command is run. and this occurs very 
frequently after scalling from 3 to 5  nodes.


 cleanup nodetool command clears all the data of some columnfamily 
 --

 Key: CASSANDRA-9024
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9024
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux
Reporter: Chetana 
 Fix For: 2.0.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9025) cleanup command throws exception :Error occurred during cleanup java.util.concurrent.ExecutionException: java.lang.ClassCastException:

2015-03-23 Thread Chetana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377278#comment-14377278
 ] 

Chetana  commented on CASSANDRA-9025:
-

[root@dl360x2861 tmp]#  /usr/share/cassandra/bin/nodetool -h localhost -u vs 
-pw EvaiKiO1@2 -p 7199 cleanup
Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
cast to org.apache.cassandra.io.sstable.SSTableScanner
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImp
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:5
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTranspor
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTable 
   canner
at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompact
at org.apache.cassandra.db.compaction.CompactionManager.access$400(Compa
at org.apache.cassandra.db.compaction.CompactionManager$5.perform(Compac
at org.apache.cassandra.db.compaction.CompactionManager$2.call(Compactio
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more


 cleanup command throws exception :Error occurred during cleanup 
 java.util.concurrent.ExecutionException: java.lang.ClassCastException:
 

[jira] [Comment Edited] (CASSANDRA-9025) cleanup command throws exception :Error occurred during cleanup java.util.concurrent.ExecutionException: java.lang.ClassCastException:

2015-03-23 Thread Chetana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377278#comment-14377278
 ] 

Chetana  edited comment on CASSANDRA-9025 at 3/24/15 5:25 AM:
--

[root@dl360x2861 tmp]#  /usr/share/cassandra/bin/nodetool -h localhost -u user1 
-pw xyz -p 7199 cleanup
Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
cast to org.apache.cassandra.io.sstable.SSTableScanner
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImp
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:5
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTranspor
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTable 
   canner
at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompact
at org.apache.cassandra.db.compaction.CompactionManager.access$400(Compa
at org.apache.cassandra.db.compaction.CompactionManager$5.perform(Compac
at org.apache.cassandra.db.compaction.CompactionManager$2.call(Compactio
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more



was (Author: chetanadixit):
[root@dl360x2861 tmp]#  /usr/share/cassandra/bin/nodetool -h localhost -u vs 
-pw EvaiKiO1@2 -p 7199 cleanup
Error occurred during 

[jira] [Created] (CASSANDRA-9025) cleanup command throws exception :Error occurred during cleanup java.util.concurrent.ExecutionException: java.lang.ClassCastException:

2015-03-23 Thread Chetana (JIRA)
Chetana  created CASSANDRA-9025:
---

 Summary: cleanup command throws exception :Error occurred during 
cleanup java.util.concurrent.ExecutionException: java.lang.ClassCastException:
 Key: CASSANDRA-9025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9025
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux
Reporter: Chetana 
 Fix For: 2.0.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9024) cleanup nodetool command clears all the data of some columnfamily

2015-03-23 Thread Chetana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetana  updated CASSANDRA-9024:

Comment: was deleted

(was: [root@dl360x2861 tmp]#  /usr/share/cassandra/bin/nodetool -h localhost -u 
vs -pw EvaiKiO1@2 -p 7199 cleanup
Error occurred during cleanup
java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
cast to org.apache.cassandra.io.sstable.SSTableScanner
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
at 
org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImp
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:5
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTranspor
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassCastException: 
org.apache.cassandra.io.sstable.SSTable 
   canner
at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompact
at org.apache.cassandra.db.compaction.CompactionManager.access$400(Compa
at org.apache.cassandra.db.compaction.CompactionManager$5.perform(Compac
at org.apache.cassandra.db.compaction.CompactionManager$2.call(Compactio
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more
)

 cleanup nodetool command clears all the data of some columnfamily 
 --

 Key: CASSANDRA-9024
 URL: 

[jira] [Commented] (CASSANDRA-9018) Dropped keyspace is not collected

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376591#comment-14376591
 ] 

Philip Thompson commented on CASSANDRA-9018:


What version are you using? Are you sure the data files are not just being kept 
around in snapshot form, or were incrementally backed up? 

 Dropped keyspace is not collected
 -

 Key: CASSANDRA-9018
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9018
 Project: Cassandra
  Issue Type: Bug
Reporter: Maxim Podkolzine
 Attachments: cassandra-log.zip


 As far as I understand when a keyspace is dropped, the data is marked as 
 tombstone. We expect that after the grace period (all tables are created with 
 gc_grace_seconds=7200), this data is automatically removed during the 
 compaction process, which means that keyspace no longer takes any space on 
 disk.
 This is not happening (not after 2 or 24 hours). The log keeps saying No 
 files to compact for user defined compaction, keyspace files remain on disk. 
 It's not clear whether Cassandra is still waiting for certain event, or 
 decided not to collect the data.
 Is there any setting that I missed? Any clues to figure out from the log, 
 what's the current state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376696#comment-14376696
 ] 

Ariel Weisberg edited comment on CASSANDRA-9019 at 3/23/15 9:36 PM:


My only concern is that the string returned suppresses information about the 
exception. The return value may not be a great place to barf an exception 
anyways. Logging to the log at info level also might be too verbose. This is 
where having a rate limited logger can be nice.

I here there is one coming as part of CASSANDRA-8584.


was (Author: aweisberg):
My only concern is that the string returned suppresses information about the 
exception. The return value may not be a great place to barf an exception 
anyways. Logging to the log at info level also might be too verbose. This is 
where having a rate limited logger can be nice.

 GCInspector detected GC before ThreadPools are initialized
 --

 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0

 Attachments: 9019.txt


 While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, 
 I ran into the following exception:
 {code}
 java.lang.RuntimeException: Error reading: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
 at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
 at 
 org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
 at 
 sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
 at 
 sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy3.getValue(Unknown Source)
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
 ... 5 more
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 7 more
 
 {code}
 Dtest didn't preserve the logs, which implies that this wasn't in the 
 system.log, but printed to stderr somehow, it's unclear with all the piping 
 dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE

2015-03-23 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8900:
--
Attachment: CASSANDRA-8900.txt

The patch made from Stefania branch.

 AssertionError when binding nested collection in a DELETE
 -

 Key: CASSANDRA-8900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8900
 Project: Cassandra
  Issue Type: Bug
Reporter: Olivier Michallat
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.4

 Attachments: CASSANDRA-8900.txt


 Running this with the Java driver:
 {code}
 session.execute(create table if not exists foo2(k int primary key, m 
 mapfrozenlistint, int););
 PreparedStatement pst = session.prepare(delete m[?] from foo2 where k = 1);
 session.execute(pst.bind(ImmutableList.of(1)));
 {code}
 Produces a server error. Server-side stack trace:
 {code}
 ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0xf9e92e61, 
 /127.0.0.1:58163 = /127.0.0.1:9042]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_60]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
 {code}
 A simple statement (i.e. QUERY message with values) produces the same result:
 {code}
 session.execute(delete m[?] from foo2 where k = 1, ImmutableList.of(1));
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7533) Let MAX_OUTSTANDING_REPLAY_COUNT be configurable

2015-03-23 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-7533:
---
 Reviewer: Brandon Williams
Reproduced In: 2.0.9, 2.0.6  (was: 2.0.6, 2.0.9)

 Let MAX_OUTSTANDING_REPLAY_COUNT be configurable
 

 Key: CASSANDRA-7533
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7533
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Jeremiah Jordan
Priority: Minor
 Fix For: 2.0.14

 Attachments: 0001-CASSANDRA-7533.txt


 There are some workloads where commit log replay will run into contention 
 issues with multiple things updating the same partition.  Through some 
 testing it was found that lowering CommitLogReplayer.java 
 MAX_OUTSTANDING_REPLAY_COUNT can help with this issue.
 The calculations added in CASSANDRA-6655 are one such place things get 
 bottlenecked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7533) Let MAX_OUTSTANDING_REPLAY_COUNT be configurable

2015-03-23 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-7533:
---
Attachment: 0001-CASSANDRA-7533.txt

Have had a couple more people hitting this, so I think we should put something 
in for it.

Here is a simple fix to add 
-Dcassandra.commitlog_max_outstanding_replay_count=X to allow a user to change 
it from the default.

 Let MAX_OUTSTANDING_REPLAY_COUNT be configurable
 

 Key: CASSANDRA-7533
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7533
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Jeremiah Jordan
Priority: Minor
 Fix For: 2.0.14

 Attachments: 0001-CASSANDRA-7533.txt


 There are some workloads where commit log replay will run into contention 
 issues with multiple things updating the same partition.  Through some 
 testing it was found that lowering CommitLogReplayer.java 
 MAX_OUTSTANDING_REPLAY_COUNT can help with this issue.
 The calculations added in CASSANDRA-6655 are one such place things get 
 bottlenecked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix AssertionError when binding nested collections in DELETE

2015-03-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ff14d7ab8 - bcf0ec681


Fix AssertionError when binding nested collections in DELETE

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-8900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bcf0ec68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bcf0ec68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bcf0ec68

Branch: refs/heads/cassandra-2.1
Commit: bcf0ec681912e891d66a7b0ba28ff3d41ab3e304
Parents: ff14d7a
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Wed Mar 11 15:09:21 2015 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 23 18:54:11 2015 +0300

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |   4 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   3 +-
 .../org/apache/cassandra/cql3/Operation.java|   2 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  25 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 260 +++
 6 files changed, 286 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 924bdcf..0e75973 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Fix AssertionError when binding nested collections in DELETE 
(CASSANDRA-8900)
  * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
  * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index 26d5de2..fc81900 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -494,10 +494,8 @@ public abstract class Lists
 if (index == null)
 throw new InvalidRequestException(Invalid null value for list 
index);
 
-assert index instanceof Constants.Value;
-
 ListCell existingList = params.getPrefetchedList(rowKey, 
column.name);
-int idx = ByteBufferUtil.toInt(((Constants.Value)index).bytes);
+int idx = ByteBufferUtil.toInt(index.get(params.options));
 if (idx  0 || idx = existingList.size())
 throw new InvalidRequestException(String.format(List index %d 
out of bound, list has size %d, idx, existingList.size()));
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Maps.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Maps.java 
b/src/java/org/apache/cassandra/cql3/Maps.java
index 5b58833..8a64663 100644
--- a/src/java/org/apache/cassandra/cql3/Maps.java
+++ b/src/java/org/apache/cassandra/cql3/Maps.java
@@ -378,9 +378,8 @@ public abstract class Maps
 Term.Terminal key = t.bind(params.options);
 if (key == null)
 throw new InvalidRequestException(Invalid null map key);
-assert key instanceof Constants.Value;
 
-CellName cellName = cf.getComparator().create(prefix, column, 
((Constants.Value)key).bytes);
+CellName cellName = cf.getComparator().create(prefix, column, 
key.get(params.options));
 cf.addColumn(params.makeTombstone(cellName));
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Operation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Operation.java 
b/src/java/org/apache/cassandra/cql3/Operation.java
index 816acb2..ac25e29 100644
--- a/src/java/org/apache/cassandra/cql3/Operation.java
+++ b/src/java/org/apache/cassandra/cql3/Operation.java
@@ -414,7 +414,7 @@ public abstract class Operation
 return new Lists.DiscarderByIndex(receiver, idx);
 case SET:
 Term elt = element.prepare(keyspace, 
Sets.valueSpecOf(receiver));
-return new Sets.Discarder(receiver, elt);
+return new Sets.ElementDiscarder(receiver, elt);
 case MAP:
 Term key = element.prepare(keyspace, 
Maps.keySpecOf(receiver));
 return new Maps.DiscarderByKey(receiver, key);


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-23 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1faca1cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1faca1cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1faca1cb

Branch: refs/heads/trunk
Commit: 1faca1cb588435b4b04f7137ee045f7d823b1e1b
Parents: 540e9cf bcf0ec6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 23 19:14:01 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 23 19:14:01 2015 +0300

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |   4 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   3 +-
 .../org/apache/cassandra/cql3/Operation.java|   2 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  25 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 260 +++
 6 files changed, 286 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1faca1cb/CHANGES.txt
--
diff --cc CHANGES.txt
index c136c52,0e75973..e10c476
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,79 -1,5 +1,80 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 

[1/2] cassandra git commit: Fix AssertionError when binding nested collections in DELETE

2015-03-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 540e9cf75 - 1faca1cb5


Fix AssertionError when binding nested collections in DELETE

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for
CASSANDRA-8900


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bcf0ec68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bcf0ec68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bcf0ec68

Branch: refs/heads/trunk
Commit: bcf0ec681912e891d66a7b0ba28ff3d41ab3e304
Parents: ff14d7a
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Wed Mar 11 15:09:21 2015 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 23 18:54:11 2015 +0300

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |   4 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   3 +-
 .../org/apache/cassandra/cql3/Operation.java|   2 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  25 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 260 +++
 6 files changed, 286 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 924bdcf..0e75973 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Fix AssertionError when binding nested collections in DELETE 
(CASSANDRA-8900)
  * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
  * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index 26d5de2..fc81900 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -494,10 +494,8 @@ public abstract class Lists
 if (index == null)
 throw new InvalidRequestException(Invalid null value for list 
index);
 
-assert index instanceof Constants.Value;
-
 ListCell existingList = params.getPrefetchedList(rowKey, 
column.name);
-int idx = ByteBufferUtil.toInt(((Constants.Value)index).bytes);
+int idx = ByteBufferUtil.toInt(index.get(params.options));
 if (idx  0 || idx = existingList.size())
 throw new InvalidRequestException(String.format(List index %d 
out of bound, list has size %d, idx, existingList.size()));
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Maps.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Maps.java 
b/src/java/org/apache/cassandra/cql3/Maps.java
index 5b58833..8a64663 100644
--- a/src/java/org/apache/cassandra/cql3/Maps.java
+++ b/src/java/org/apache/cassandra/cql3/Maps.java
@@ -378,9 +378,8 @@ public abstract class Maps
 Term.Terminal key = t.bind(params.options);
 if (key == null)
 throw new InvalidRequestException(Invalid null map key);
-assert key instanceof Constants.Value;
 
-CellName cellName = cf.getComparator().create(prefix, column, 
((Constants.Value)key).bytes);
+CellName cellName = cf.getComparator().create(prefix, column, 
key.get(params.options));
 cf.addColumn(params.makeTombstone(cellName));
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcf0ec68/src/java/org/apache/cassandra/cql3/Operation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Operation.java 
b/src/java/org/apache/cassandra/cql3/Operation.java
index 816acb2..ac25e29 100644
--- a/src/java/org/apache/cassandra/cql3/Operation.java
+++ b/src/java/org/apache/cassandra/cql3/Operation.java
@@ -414,7 +414,7 @@ public abstract class Operation
 return new Lists.DiscarderByIndex(receiver, idx);
 case SET:
 Term elt = element.prepare(keyspace, 
Sets.valueSpecOf(receiver));
-return new Sets.Discarder(receiver, elt);
+return new Sets.ElementDiscarder(receiver, elt);
 case MAP:
 Term key = element.prepare(keyspace, 
Maps.keySpecOf(receiver));
 return new Maps.DiscarderByKey(receiver, key);


[jira] [Created] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-9019:
--

 Summary: GCInspector detected GC before ThreadPools are initialized
 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0


While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, I 
ran into the following exception:
{code}


java.lang.RuntimeException: Error reading: 
org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
at 
org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
at 
org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
at 
sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
at 
sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
Caused by: java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy3.getValue(Unknown Source)
at 
org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
... 5 more
Caused by: javax.management.InstanceNotFoundException: 
org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
... 7 more


{code}

Dtest didn't preserve the logs, which implies that this wasn't in the 
system.log, but printed to stderr somehow, it's unclear with all the piping 
dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376205#comment-14376205
 ] 

Yuki Morishita commented on CASSANDRA-9019:
---

My guess is that GCInspector detected long GC before StageManager initialized 
stages in its static initializer.

Possible solution is to defer {{GCInspector.register()}} after 
{{StorageService.instance.initServer()}} or do explicit StageManger static 
initialization before registering GCInspector in {{CassandraDaemon#setup()}}.

I wonder why it was not logged in system.log, since we added default exception 
handler before registering GCInspector.

 GCInspector detected GC before ThreadPools are initialized
 --

 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0


 While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, 
 I ran into the following exception:
 {code}
 java.lang.RuntimeException: Error reading: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
 at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
 at 
 org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
 at 
 sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
 at 
 sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy3.getValue(Unknown Source)
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
 ... 5 more
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 7 more
 
 {code}
 Dtest didn't preserve the logs, which implies that this wasn't in the 
 system.log, but printed to stderr somehow, it's unclear with all the piping 
 dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376221#comment-14376221
 ] 

Philip Thompson commented on CASSANDRA-9019:


If you look at the recent trunk dtests on jenkins, 
http://cassci.datastax.com/job/CTOOL_trunk_dtest/11/console , it appears this 
is occurring MUCH more frequently there.

 GCInspector detected GC before ThreadPools are initialized
 --

 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0


 While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, 
 I ran into the following exception:
 {code}
 java.lang.RuntimeException: Error reading: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
 at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
 at 
 org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
 at 
 sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
 at 
 sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy3.getValue(Unknown Source)
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
 ... 5 more
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 7 more
 
 {code}
 Dtest didn't preserve the logs, which implies that this wasn't in the 
 system.log, but printed to stderr somehow, it's unclear with all the piping 
 dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9022) Node Cleanup deletes all its data after a new node joined the cluster

2015-03-23 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-9022:
---

Assignee: Benedict

 Node Cleanup deletes all its data after a new node joined the cluster
 -

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9022) Node Cleanup deletes all its data

2015-03-23 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9022:
---
Summary: Node Cleanup deletes all its data   (was: Node Cleanup deletes all 
data )

 Node Cleanup deletes all its data 
 --

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Priority: Critical
 Fix For: 2.1.4

 Attachments: bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376465#comment-14376465
 ] 

Benedict commented on CASSANDRA-6936:
-

Right. There are three possibilities here: 

1) do nothing
2) make all *common* fields behave this way
3) make all fields behave this way

If we deliver 2 we're likely to get a significant chunk of any performance 
benefit, but at the cost of code simplicity. 3 should give us a smidgen more 
benefit but with simpler code (which in turn may let us squeeze more out of it, 
as the code becomes less brittle and easier to test, so we can push it a little 
further). There's also an orthogonal discussion of a perhaps weakening of the 
requirements for this ticket to just binary prefix comparable, or even _byte_ 
prefix comparable, rather than _unsigned binary_ prefix comparable. If any such 
relaxation makes it appreciable easier and less ugly.

I just want us to investigate as open mindedly as possible how viable going the 
whole hog is, and where the ugliness, deprecation or user pain points might be. 
It's possible it's a no-go, but I think we may have aborted too early, given 
the significant upsides.

 Make all byte representations of types comparable by their unsigned byte 
 representation only
 

 Key: CASSANDRA-6936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
  Labels: performance
 Fix For: 3.0


 This could be a painful change, but is necessary for implementing a 
 trie-based index, and settling for less would be suboptimal; it also should 
 make comparisons cheaper all-round, and since comparison operations are 
 pretty much the majority of C*'s business, this should be easily felt (see 
 CASSANDRA-6553 and CASSANDRA-6934 for an example of some minor changes with 
 major performance impacts). No copying/special casing/slicing should mean 
 fewer opportunities to introduce performance regressions as well.
 Since I have slated for 3.0 a lot of non-backwards-compatible sstable 
 changes, hopefully this shouldn't be too much more of a burden.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9021:

Attachment: 9021.txt

OK, so it was a really stupid mistake; the runOnClose now depends on 
ifile/dfile which are closed prior to it being run. Simple patch attached to 
reorder these operations.

 AssertionError and Leak detected during sstable compaction
 --

 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
 - 20-node stress cluster, GCE n1-standard-2
 - 10-node receiver cluster ingesting data, GCE n1-standard-8 
 Platform:
 - Ubuntu 12.0.4 x86_64
 Versions:
 - DSE 4.7.0
 - Cassandra 2.1.3.304
 - Java 1.7.0_45
 DSE Configuration:
 - Xms7540M 
 - Xmx7540M 
 - Xmn800M
 - Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
 - Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
 - ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
 - XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 - XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
 - XX:+HeapDumpOnOutOfMemoryError -Xss256k 
 - XX:StringTableSize=103 -XX:+UseParNewGC 
 - XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 - XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 - XX:CMSInitiatingOccupancyFraction=75 
 - XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Fix For: 2.1.4

 Attachments: 9021.txt, system.log


 After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
 errors during what looks like sstable compaction.
 system.log snippets (full log attached):
 {code}
 ...
 INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/requests_ks/timeline-   
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
 bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 
 total   partitions merged to 805.  Partition merge counts were {1:627, 
 2:178, }
 INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/system/ 
 compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
   426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
 partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
 ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 
 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.AssertionError: null
  at 
 org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier$1.run(SSTableReader.java:2036)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
  at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 ...
 INFO  [MemtableFlushWriter:50] 2015-03-23 02:47:29,465  Memtable.java:378 - 
 Completed flushing /mnt/cass_data_disks/data1/requests_ks/timeline-   
9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-188-Data.db 
 (16311981 bytes) for commitlog position 
 ReplayPosition(segmentId=1427071574495, position=4523631)
 ERROR [Reference-Reaper:1] 2015-03-23 02:47:33,987  Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@2f59b10) to class 
 

[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2015-03-23 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376550#comment-14376550
 ] 

sankalp kohli commented on CASSANDRA-6157:
--

Look at Jonathan comment. 
Requires some string parsing in Config, but is backwards compatible without 
adding a two-layer approach of turn it on, but turn it back off sometimes.

 Selectively Disable hinted handoff for a data center
 

 Key: CASSANDRA-6157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.6

 Attachments: super-csv-2.1.0.jar, trunk-6157-v2.diff, 
 trunk-6157-v3.diff, trunk-6157-v4.diff, trunk-6157-v5.diff, trunk-6157.txt


 Cassandra supports disabling the hints or reducing the window for hints. 
 It would be helpful to have a switch which stops hints to a down data center 
 but continue hints to other DCs.
 This is helpful during data center fail over as hints will put more 
 unnecessary pressure on the DC taking double traffic. Also since now 
 Cassandra is under reduced reduncany, we don't want to disable hints within 
 the DC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9021:
--
Reviewer: Joshua McKenzie

 AssertionError and Leak detected during sstable compaction
 --

 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
 - 20-node stress cluster, GCE n1-standard-2
 - 10-node receiver cluster ingesting data, GCE n1-standard-8 
 Platform:
 - Ubuntu 12.0.4 x86_64
 Versions:
 - DSE 4.7.0
 - Cassandra 2.1.3.304
 - Java 1.7.0_45
 DSE Configuration:
 - Xms7540M 
 - Xmx7540M 
 - Xmn800M
 - Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
 - Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
 - ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
 - XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 - XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
 - XX:+HeapDumpOnOutOfMemoryError -Xss256k 
 - XX:StringTableSize=103 -XX:+UseParNewGC 
 - XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 - XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 - XX:CMSInitiatingOccupancyFraction=75 
 - XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Fix For: 2.1.4

 Attachments: 9021.txt, system.log


 After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
 errors during what looks like sstable compaction.
 system.log snippets (full log attached):
 {code}
 ...
 INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/requests_ks/timeline-   
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
 bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 
 total   partitions merged to 805.  Partition merge counts were {1:627, 
 2:178, }
 INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/system/ 
 compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
   426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
 partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
 ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 
 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.AssertionError: null
  at 
 org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier$1.run(SSTableReader.java:2036)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
  at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 ...
 INFO  [MemtableFlushWriter:50] 2015-03-23 02:47:29,465  Memtable.java:378 - 
 Completed flushing /mnt/cass_data_disks/data1/requests_ks/timeline-   
9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-188-Data.db 
 (16311981 bytes) for commitlog position 
 ReplayPosition(segmentId=1427071574495, position=4523631)
 ERROR [Reference-Reaper:1] 2015-03-23 02:47:33,987  Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@2f59b10) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1251424500:/mnt/cass_data_disks/data1/requests_ks/timeline-9500fe40d0f611e495675d5ea01541b5/
 requests_ks-timeline-ka-149 was not released before the reference was 
 garbage collected
 INFO  [Service Thread] 2015-03-23 

[jira] [Updated] (CASSANDRA-9022) Node Cleanup deletes all its data after a new node joined the cluster

2015-03-23 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9022:
---
Summary: Node Cleanup deletes all its data after a new node joined the 
cluster  (was: Node Cleanup deletes all its data )

 Node Cleanup deletes all its data after a new node joined the cluster
 -

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Priority: Critical
 Fix For: 2.1.4

 Attachments: bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376500#comment-14376500
 ] 

Benedict commented on CASSANDRA-9021:
-

Hmm. So this isn't as simple as I first hoped. We could fix it quite 
trivially, but the exception seems to suggest the file is _empty_ - and we 
shouldn't have any empty sstables. Is this easily reproduced? Could you provide 
me with the files that it's throwing this exception for?

 AssertionError and Leak detected during sstable compaction
 --

 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
 - 20-node stress cluster, GCE n1-standard-2
 - 10-node receiver cluster ingesting data, GCE n1-standard-8 
 Platform:
 - Ubuntu 12.0.4 x86_64
 Versions:
 - DSE 4.7.0
 - Cassandra 2.1.3.304
 - Java 1.7.0_45
 DSE Configuration:
 - Xms7540M 
 - Xmx7540M 
 - Xmn800M
 - Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
 - Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
 - ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
 - XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 - XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
 - XX:+HeapDumpOnOutOfMemoryError -Xss256k 
 - XX:StringTableSize=103 -XX:+UseParNewGC 
 - XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 - XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 - XX:CMSInitiatingOccupancyFraction=75 
 - XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Fix For: 2.1.4

 Attachments: system.log


 After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
 errors during what looks like sstable compaction.
 system.log snippets (full log attached):
 {code}
 ...
 INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/requests_ks/timeline-   
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
 bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 
 total   partitions merged to 805.  Partition merge counts were {1:627, 
 2:178, }
 INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/system/ 
 compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
   426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
 partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
 ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 
 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.AssertionError: null
  at 
 org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier$1.run(SSTableReader.java:2036)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
  at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 ...
 INFO  [MemtableFlushWriter:50] 2015-03-23 02:47:29,465  Memtable.java:378 - 
 Completed flushing /mnt/cass_data_disks/data1/requests_ks/timeline-   
9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-188-Data.db 
 (16311981 bytes) for commitlog position 
 ReplayPosition(segmentId=1427071574495, position=4523631)
 ERROR [Reference-Reaper:1] 2015-03-23 02:47:33,987  Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@2f59b10) to class 
 

[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2015-03-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376568#comment-14376568
 ] 

Aleksey Yeschenko commented on CASSANDRA-6157:
--

I've seen it. Should've noticed this a year ago, but probably was on vacation. 
Sorry about that. I also believe that Jonathan commented on the particular 
approach, and I agree that having an override map is a bit of an overkill. One 
global on/off switch plus a list of blacklist DCs to consider when the global 
switch is on should be more than enough.

Putting CSV in YAML is conceptually ugly, and so is mixing types here. Plus, 
the solution as it is stands now supports selective enablement vs. selective 
disablement (a blacklist), contrary to the goal of the ticket.

 Selectively Disable hinted handoff for a data center
 

 Key: CASSANDRA-6157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.6

 Attachments: super-csv-2.1.0.jar, trunk-6157-v2.diff, 
 trunk-6157-v3.diff, trunk-6157-v4.diff, trunk-6157-v5.diff, trunk-6157.txt


 Cassandra supports disabling the hints or reducing the window for hints. 
 It would be helpful to have a switch which stops hints to a down data center 
 but continue hints to other DCs.
 This is helpful during data center fail over as hints will put more 
 unnecessary pressure on the DC taking double traffic. Also since now 
 Cassandra is under reduced reduncany, we don't want to disable hints within 
 the DC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7807) Push notification when tracing completes for an operation

2015-03-23 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375618#comment-14375618
 ] 

Stefania commented on CASSANDRA-7807:
-

Alright, here is my best attempt

General comments:
* Existing events ({{TOPOLOGY_CHANGE, STATUS_CHANGE, SCHEMA_CHANGE}}) are more 
generic than {{TRACE_FINISHED}} perhaps to align with the existing events we 
should use {{TRACING_CHANGE}} with an additional argument, STOPPED, and does 
it make sense at all to also have STARTED?
* {{QueryState.traceNextQuery()}} will return true also when probabilistic 
tracing kicks in, so sessions that did not enable tracing might receive a trace 
notification, is this what you wanted? If not, we should only pass the 
connection to the tracing state if {{Message.isTracingRequested()}} is true 
but, I wonder, would it be really bad if all clients received these 
notifications regardless of whether they requested tracing? Can't the driver 
just ignore the tracing session ids?
* Making {{TraceState}} communicate directly with the connection tracker and 
the cast at ln 148 of {{TraceState.java}} 
(https://github.com/apache/cassandra/blob/28c78721a8e10b987cd71d0382ad0c481e0c8d2a/src/java/org/apache/cassandra/tracing/TraceState.java#L148)
 is a bit hacky and cannot really be extended. If we really must limit the 
notifications to the channel (see point above), can we do better by introducing 
a generic listener or leveraging the existing {{TraceState}} listeners?
* I'm pretty new to Netty, is it safe to compare the channels using the == 
operator?
https://github.com/apache/cassandra/blob/28c78721a8e10b987cd71d0382ad0c481e0c8d2a/src/java/org/apache/cassandra/transport/Server.java#L241

Testing:
* What other tests did you do besides {{TracingFinishedTest}}? I am not sure 
this covers the event serialization and deserialization?
* Some further unit tests suggestions:
** Multiple threads, is feasible
** Check only correct channel receives notification (if still applicable)

Trivial stuff:
* Extra semi-column at 
https://github.com/apache/cassandra/blob/28c78721a8e10b987cd71d0382ad0c481e0c8d2a/test/unit/org/apache/cassandra/tracing/TracingFinishedTest.java#L62

 Push notification when tracing completes for an operation
 -

 Key: CASSANDRA-7807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7807
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Tyler Hobbs
Assignee: Robert Stupp
Priority: Minor
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 7807.txt


 Tracing is an asynchronous operation, and drivers currently poll to determine 
 when the trace is complete (in a loop with sleeps).  Instead, the server 
 could push a notification to the driver when the trace completes.
 I'm guessing that most of the work for this will be around pushing 
 notifications to a single connection instead of all connections that have 
 registered listeners for a particular event type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8592) Add WriteFailureException

2015-03-23 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375571#comment-14375571
 ] 

Stefania commented on CASSANDRA-8592:
-

Hi [~thobbs]

bq. Since the dtest depends on having CURRENT_VERSION set to VERSION_4, go 
ahead and update CURRENT_VERSION in your branch.

During the code review of CASSANDRA-7807, I noticed Robert had to change 
{{CQLTester}} to use the highest {{ProtocolVersion}} available rather than 
{{Server.CURRENT_VERSION}}. Is it that perhaps we should have increased 
{{ProtocolVersion}} too?

 Add WriteFailureException
 -

 Key: CASSANDRA-8592
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8592
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tyler Hobbs
Assignee: Stefania
  Labels: client-impacting
 Fix For: 3.0


 Similar to what CASSANDRA-7886 did for reads, we should add a 
 WriteFailureException and have replicas signal a failure while handling a 
 write to the coordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6237) Allow range deletions in CQL

2015-03-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375649#comment-14375649
 ] 

Benjamin Lerer commented on CASSANDRA-6237:
---

The current version of the patch can be found 
[here|https://github.com/apache/cassandra/compare/trunk...blerer:CASSANDRA-6237].
The first part of the patch is a refactoring of {{ModificationStatement}} and 
its two sub-classes {{UpdateStatement}} and {{DeleteStatement}}. The 
refactoring replaces the code used to build the restrictions in those classes 
to reuse {{StatementRestrictions}}.

The second part of the patch add to {{UPDATE}} and {{DELETE}} queries support 
for:
* {{IN}} restrictions on any partition key component
* {{IN}} restictions  on any clustering key 
* {{EQ}} and {{IN}} multi column restrictions on the clustering keys (mixed or 
not with single column restrictions).

Side remark: {{IN}} restrictions are still not supported for conditional 
updates or deletes 

The third part of the patch add support for range deletion of entire rows.

In this third part, I run into the issue that the serialization of 
{{RangeTombstone}} does not support empty composite for the min bound. This 
cause problem for queries like: {{DELETE * FROM partitionKey =  AND 
clusteringColumn   ?;}}.

According to Sylvain this problem will be fixed as part of CASSANDRA-8099. So I 
will wait for it to be committed before delivering the final patch for this 
ticket.

 Allow range deletions in CQL
 

 Key: CASSANDRA-6237
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6237
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql, docs
 Fix For: 3.0

 Attachments: CASSANDRA-6237.txt


 We uses RangeTombstones internally in a number of places, but we could expose 
 more directly too. Typically, given a table like:
 {noformat}
 CREATE TABLE events (
 id text,
 created_at timestamp,
 content text,
 PRIMARY KEY (id, created_at)
 )
 {noformat}
 we could allow queries like:
 {noformat}
 DELETE FROM events WHERE id='someEvent' AND created_at  'Jan 3, 2013';
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-23 Thread marcuse
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6633421d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6633421d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6633421d

Branch: refs/heads/trunk
Commit: 6633421ddeca4bb535f22a251605a04f0f870487
Parents: fb3d9fd fd0bdef
Author: Marcus Eriksson marc...@apache.org
Authored: Mon Mar 23 09:11:02 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:11:02 2015 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/LazilyCompactedRow.java   | 31 ++--
 2 files changed, 23 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6633421d/CHANGES.txt
--
diff --cc CHANGES.txt
index 9501253,25b0a06..4c3ef62
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation 

[1/2] cassandra git commit: Only calculate maxPurgableTimestamp if we know there are tombstones

2015-03-23 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk fb3d9fd85 - 6633421dd


Only calculate maxPurgableTimestamp if we know there are tombstones

Patch by marcuse; reviewed by Sylvain Lebresne for CASSANDRA-8914


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd0bdef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd0bdef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd0bdef5

Branch: refs/heads/trunk
Commit: fd0bdef5abb9c7fb4318b7c4c989cb90d352a19b
Parents: 6c69f9a
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Mar 6 17:34:28 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:07:42 2015 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/LazilyCompactedRow.java   | 31 ++--
 2 files changed, 23 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd0bdef5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 03f7e1c..25b0a06 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
  * IndexSummary effectiveIndexInterval is now a guideline, not a rule 
(CASSANDRA-8993)
  * Use correct bounds for page cache eviction of compressed files 
(CASSANDRA-8746)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd0bdef5/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index cfdbd17..f61225a 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -50,7 +50,8 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 {
 private final List? extends OnDiskAtomIterator rows;
 private final CompactionController controller;
-private final long maxPurgeableTimestamp;
+private boolean hasCalculatedMaxPurgeableTimestamp = false;
+private long maxPurgeableTimestamp;
 private final ColumnFamily emptyColumnFamily;
 private ColumnStats columnStats;
 private boolean closed;
@@ -77,19 +78,29 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 maxRowTombstone = rowTombstone;
 }
 
-// tombstones with a localDeletionTime before this can be purged.  
This is the minimum timestamp for any sstable
-// containing `key` outside of the set of sstables involved in this 
compaction.
-maxPurgeableTimestamp = controller.maxPurgeableTimestamp(key);
-
 emptyColumnFamily = 
ArrayBackedSortedColumns.factory.create(controller.cfs.metadata);
 emptyColumnFamily.delete(maxRowTombstone);
-if (maxRowTombstone.markedForDeleteAt  maxPurgeableTimestamp)
+if (!maxRowTombstone.isLive()  maxRowTombstone.markedForDeleteAt  
getMaxPurgeableTimestamp())
 emptyColumnFamily.purgeTombstones(controller.gcBefore);
 
 reducer = new Reducer();
 merger = Iterators.filter(MergeIterator.get(rows, 
emptyColumnFamily.getComparator().onDiskAtomComparator(), reducer), 
Predicates.notNull());
 }
 
+/**
+ * tombstones with a localDeletionTime before this can be purged.  This is 
the minimum timestamp for any sstable
+ * containing `key` outside of the set of sstables involved in this 
compaction.
+ */
+private long getMaxPurgeableTimestamp()
+{
+if (!hasCalculatedMaxPurgeableTimestamp)
+{
+hasCalculatedMaxPurgeableTimestamp = true;
+maxPurgeableTimestamp = controller.maxPurgeableTimestamp(key);
+}
+return maxPurgeableTimestamp;
+}
+
 private static void removeDeleted(ColumnFamily cf, boolean shouldPurge, 
DecoratedKey key, CompactionController controller)
 {
 // We should only purge cell tombstones if shouldPurge is true, but 
regardless, it's still ok to remove cells that
@@ -251,7 +262,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 RangeTombstone t = tombstone;
 tombstone = null;
 
-if (t.timestamp()  maxPurgeableTimestamp  
t.data.isGcAble(controller.gcBefore))
+if (t.timestamp()  getMaxPurgeableTimestamp()  
t.data.isGcAble(controller.gcBefore))
 {
 indexBuilder.tombstoneTracker().update(t, true);
 return null;
@@ -269,11 +280,13 @@ public 

cassandra git commit: Only calculate maxPurgableTimestamp if we know there are tombstones

2015-03-23 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 6c69f9a7c - fd0bdef5a


Only calculate maxPurgableTimestamp if we know there are tombstones

Patch by marcuse; reviewed by Sylvain Lebresne for CASSANDRA-8914


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd0bdef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd0bdef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd0bdef5

Branch: refs/heads/cassandra-2.1
Commit: fd0bdef5abb9c7fb4318b7c4c989cb90d352a19b
Parents: 6c69f9a
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Mar 6 17:34:28 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:07:42 2015 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/LazilyCompactedRow.java   | 31 ++--
 2 files changed, 23 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd0bdef5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 03f7e1c..25b0a06 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
  * IndexSummary effectiveIndexInterval is now a guideline, not a rule 
(CASSANDRA-8993)
  * Use correct bounds for page cache eviction of compressed files 
(CASSANDRA-8746)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd0bdef5/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index cfdbd17..f61225a 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -50,7 +50,8 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 {
 private final List? extends OnDiskAtomIterator rows;
 private final CompactionController controller;
-private final long maxPurgeableTimestamp;
+private boolean hasCalculatedMaxPurgeableTimestamp = false;
+private long maxPurgeableTimestamp;
 private final ColumnFamily emptyColumnFamily;
 private ColumnStats columnStats;
 private boolean closed;
@@ -77,19 +78,29 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 maxRowTombstone = rowTombstone;
 }
 
-// tombstones with a localDeletionTime before this can be purged.  
This is the minimum timestamp for any sstable
-// containing `key` outside of the set of sstables involved in this 
compaction.
-maxPurgeableTimestamp = controller.maxPurgeableTimestamp(key);
-
 emptyColumnFamily = 
ArrayBackedSortedColumns.factory.create(controller.cfs.metadata);
 emptyColumnFamily.delete(maxRowTombstone);
-if (maxRowTombstone.markedForDeleteAt  maxPurgeableTimestamp)
+if (!maxRowTombstone.isLive()  maxRowTombstone.markedForDeleteAt  
getMaxPurgeableTimestamp())
 emptyColumnFamily.purgeTombstones(controller.gcBefore);
 
 reducer = new Reducer();
 merger = Iterators.filter(MergeIterator.get(rows, 
emptyColumnFamily.getComparator().onDiskAtomComparator(), reducer), 
Predicates.notNull());
 }
 
+/**
+ * tombstones with a localDeletionTime before this can be purged.  This is 
the minimum timestamp for any sstable
+ * containing `key` outside of the set of sstables involved in this 
compaction.
+ */
+private long getMaxPurgeableTimestamp()
+{
+if (!hasCalculatedMaxPurgeableTimestamp)
+{
+hasCalculatedMaxPurgeableTimestamp = true;
+maxPurgeableTimestamp = controller.maxPurgeableTimestamp(key);
+}
+return maxPurgeableTimestamp;
+}
+
 private static void removeDeleted(ColumnFamily cf, boolean shouldPurge, 
DecoratedKey key, CompactionController controller)
 {
 // We should only purge cell tombstones if shouldPurge is true, but 
regardless, it's still ok to remove cells that
@@ -251,7 +262,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 RangeTombstone t = tombstone;
 tombstone = null;
 
-if (t.timestamp()  maxPurgeableTimestamp  
t.data.isGcAble(controller.gcBefore))
+if (t.timestamp()  getMaxPurgeableTimestamp()  
t.data.isGcAble(controller.gcBefore))
 {
 indexBuilder.tombstoneTracker().update(t, true);
 return null;
@@ -269,11 

[jira] [Commented] (CASSANDRA-8592) Add WriteFailureException

2015-03-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375578#comment-14375578
 ] 

Robert Stupp commented on CASSANDRA-8592:
-

That's just because the Java Driver doesn't support native protocol v4 yet.

 Add WriteFailureException
 -

 Key: CASSANDRA-8592
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8592
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tyler Hobbs
Assignee: Stefania
  Labels: client-impacting
 Fix For: 3.0


 Similar to what CASSANDRA-7886 did for reads, we should add a 
 WriteFailureException and have replicas signal a failure while handling a 
 write to the coordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8553) Add a key-value payload for third party usage

2015-03-23 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8553:

Attachment: 8553-v3.txt

Attached v3 with the review comments worked in.

 Add a key-value payload for third party usage
 -

 Key: CASSANDRA-8553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sergio Bossa
Assignee: Robert Stupp
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 8553-v2.txt, 8553-v3.txt, 8553.txt


 An useful improvement would be to include a generic key-value payload, so 
 that developers implementing a custom {{QueryHandler}} could leverage that to 
 move custom data back and forth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Check for overlap with non-early opened files in LCS

2015-03-23 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 fd0bdef5a - ff14d7ab8


Check for overlap with non-early opened files in LCS

patch by marcuse; reviewed by carlyeks for CASSANDRA-8739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff14d7ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff14d7ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff14d7ab

Branch: refs/heads/cassandra-2.1
Commit: ff14d7ab8d66c7665d0a5d6c87fb8efac35b0d13
Parents: fd0bdef
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Feb 5 13:43:31 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:24:38 2015 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/LeveledManifest.java| 16 +++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff14d7ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 25b0a06..924bdcf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
  * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
  * IndexSummary effectiveIndexInterval is now a guideline, not a rule 
(CASSANDRA-8993)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff14d7ab/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index c076a64..ecebfe0 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -534,7 +534,7 @@ public class LeveledManifest
 
 if (level == 0)
 {
-SetSSTableReader compactingL0 = 
ImmutableSet.copyOf(Iterables.filter(getLevel(0), Predicates.in(compacting)));
+SetSSTableReader compactingL0 = getCompacting(0);
 
 RowPosition lastCompactingKey = null;
 RowPosition firstCompactingKey = null;
@@ -595,6 +595,8 @@ public class LeveledManifest
 SetSSTableReader l1overlapping = overlapping(candidates, 
getLevel(1));
 if (Sets.intersection(l1overlapping, compacting).size()  0)
 return Collections.emptyList();
+if (!overlapping(candidates, compactingL0).isEmpty())
+return Collections.emptyList();
 candidates = Sets.union(candidates, l1overlapping);
 }
 if (candidates.size()  2)
@@ -632,6 +634,18 @@ public class LeveledManifest
 return Collections.emptyList();
 }
 
+private SetSSTableReader getCompacting(int level)
+{
+SetSSTableReader sstables = new HashSet();
+SetSSTableReader levelSSTables = new HashSet(getLevel(level));
+for (SSTableReader sstable : cfs.getDataTracker().getCompacting())
+{
+if (levelSSTables.contains(sstable))
+sstables.add(sstable);
+}
+return sstables;
+}
+
 private ListSSTableReader ageSortedSSTables(CollectionSSTableReader 
candidates)
 {
 ListSSTableReader ageSortedCandidates = new 
ArrayListSSTableReader(candidates);



[1/2] cassandra git commit: Check for overlap with non-early opened files in LCS

2015-03-23 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6633421dd - bee474626


Check for overlap with non-early opened files in LCS

patch by marcuse; reviewed by carlyeks for CASSANDRA-8739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff14d7ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff14d7ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff14d7ab

Branch: refs/heads/trunk
Commit: ff14d7ab8d66c7665d0a5d6c87fb8efac35b0d13
Parents: fd0bdef
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Feb 5 13:43:31 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:24:38 2015 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/LeveledManifest.java| 16 +++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff14d7ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 25b0a06..924bdcf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Check for overlap with non-early sstables in LCS (CASSANDRA-8739)
  * Only calculate max purgable timestamp if we have to (CASSANDRA-8914)
  * (cqlsh) Greatly improve performance of COPY FROM (CASSANDRA-8225)
  * IndexSummary effectiveIndexInterval is now a guideline, not a rule 
(CASSANDRA-8993)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff14d7ab/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index c076a64..ecebfe0 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -534,7 +534,7 @@ public class LeveledManifest
 
 if (level == 0)
 {
-SetSSTableReader compactingL0 = 
ImmutableSet.copyOf(Iterables.filter(getLevel(0), Predicates.in(compacting)));
+SetSSTableReader compactingL0 = getCompacting(0);
 
 RowPosition lastCompactingKey = null;
 RowPosition firstCompactingKey = null;
@@ -595,6 +595,8 @@ public class LeveledManifest
 SetSSTableReader l1overlapping = overlapping(candidates, 
getLevel(1));
 if (Sets.intersection(l1overlapping, compacting).size()  0)
 return Collections.emptyList();
+if (!overlapping(candidates, compactingL0).isEmpty())
+return Collections.emptyList();
 candidates = Sets.union(candidates, l1overlapping);
 }
 if (candidates.size()  2)
@@ -632,6 +634,18 @@ public class LeveledManifest
 return Collections.emptyList();
 }
 
+private SetSSTableReader getCompacting(int level)
+{
+SetSSTableReader sstables = new HashSet();
+SetSSTableReader levelSSTables = new HashSet(getLevel(level));
+for (SSTableReader sstable : cfs.getDataTracker().getCompacting())
+{
+if (levelSSTables.contains(sstable))
+sstables.add(sstable);
+}
+return sstables;
+}
+
 private ListSSTableReader ageSortedSSTables(CollectionSSTableReader 
candidates)
 {
 ListSSTableReader ageSortedCandidates = new 
ArrayListSSTableReader(candidates);



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-23 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bee47462
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bee47462
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bee47462

Branch: refs/heads/trunk
Commit: bee474626f71f7abe38e82b4fcdf3abdabadd29a
Parents: 6633421 ff14d7a
Author: Marcus Eriksson marc...@apache.org
Authored: Mon Mar 23 09:28:26 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon Mar 23 09:28:26 2015 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/db/compaction/LeveledManifest.java| 16 +++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bee47462/CHANGES.txt
--
diff --cc CHANGES.txt
index 4c3ef62,924bdcf..68df3e6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + 

[jira] [Created] (CASSANDRA-9016) QueryHandler interface misses parseStatement()

2015-03-23 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-9016:
---

 Summary: QueryHandler interface misses parseStatement()
 Key: CASSANDRA-9016
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9016
 Project: Cassandra
  Issue Type: Wish
Reporter: Robert Stupp
Priority: Minor


{{QueryHandler}} is missing the method {{parseStatement()}} as implemented by 
{{QueryProcessor}}. This is a bit strange when looking into the code in 
{{BatchMessage.execute}} which uses {{QueryProcessor.parseStatement}} for 
{{String}}s but {{QueryHandler}} for prepared statements:

{code}
if (query instanceof String)
{
p = QueryProcessor.parseStatement((String)query, state);
}
else
{
p = handler.getPrepared((MD5Digest)query);
if (p == null)
throw new 
PreparedQueryNotFoundException((MD5Digest)query);
}
{code}

Note: this is not an issue right now, more a note that handling in 
{{BatchMessage}} is different.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8773) cassandra-stress should validate its results in user mode

2015-03-23 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8773:

Fix Version/s: 2.1.4

 cassandra-stress should validate its results in user mode
 ---

 Key: CASSANDRA-8773
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8773
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Benedict
Assignee: Benedict
  Labels: stress
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8137) Prepared statement size overflow error

2015-03-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375617#comment-14375617
 ] 

Benjamin Lerer commented on CASSANDRA-8137:
---

[~kishkaru] could you check with 2.1.3? As the problem does not seems easy to 
reproduce I would prefer to be sure that it is still there before starting to 
investigate.

 Prepared statement size overflow error
 --

 Key: CASSANDRA-8137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master
Reporter: Kishan Karunaratne
Assignee: Benjamin Lerer
 Fix For: 2.1.4


 When using C* 2.1.0 and Ruby-driver master, I get the following error when 
 running the Ruby duration test (which prepares a lot of statements, in many 
 threads):
 {noformat}
 Prepared statement of size 4451848 bytes is larger than allowed maximum of 
 2027520 bytes.
 Prepared statement of size 4434568 bytes is larger than allowed maximum of 
 2027520 bytes.
 {noformat}
 They usually occur in batches of 1, but sometimes in multiples as seen above. 
  It happens occasionally, around 20% of the time when running the code.  
 Unfortunately I don't have a stacktrace as the error isn't recorded in the 
 system log. 
 This is my schema, and the offending prepare statement:
 {noformat}
 @session.execute(CREATE TABLE duration_test.ints (
 key INT,
 copy INT,
 value INT,
 PRIMARY KEY (key, copy))
 )
 {noformat}
 {noformat}
 select = @session.prepare(SELECT * FROM ints WHERE key=?)
 {noformat}
 Now, I notice that if I explicitly specify the keyspace in the prepare, I 
 don't get the error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8938) Full Row Scan does not count towards Reads

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8938:
---
Assignee: Marcus Eriksson

 Full Row Scan does not count towards Reads
 --

 Key: CASSANDRA-8938
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8938
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Tools
 Environment: Unix, Cassandra 2.0.3
Reporter: Amit Singh Chowdhery
Assignee: Marcus Eriksson
Priority: Minor
  Labels: none

 When a CQL SELECT statement is executed with WHERE clause, Read Count is 
 incremented in cfstats of the column family. But, when a full row scan is 
 done using SELECT statement without WHERE clause, Read Count is not 
 incremented. 
 Similarly, when using Size Tiered Compaction, if we do a full row scan using 
 Hector RangeslicesQuery, Read Count is not incremented in cfstats, Cassandra 
 still considers all sstables as cold and does not trigger compaction for 
 them. If we fire MultigetSliceQuery, Read Count is incremented and sstables 
 becomes hot, triggering compaction of these sstables. 
 Expected Behavior:
 1. Read Count must be incremented by number of rows read during a full row 
 scan done using CQL SELECT statement or Hector RangeslicesQuery.
 2. Size Tiered compaction must consider all sstables as Hot after a full row 
 scan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9022) Node Cleanup deletes all its data after a new node joined the cluster

2015-03-23 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376706#comment-14376706
 ] 

Alan Boudreault commented on CASSANDRA-9022:


Yep. This patch seems to fix the issue. Ran my test case twice and everything 
looks good. I will finish my dtest for this later or tomorrow morning. Thanks! 

 Node Cleanup deletes all its data after a new node joined the cluster
 -

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: 9022.txt, bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8917) Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8917:
---
Reproduced In: 2.1.3
Fix Version/s: 2.1.4

 Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions
 -

 Key: CASSANDRA-8917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8917
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.0.9, Centos 6.5, Java 1.7.0_72, spring data 
 cassandra 1.1.1, cassandra java driver 2.0.9
Reporter: Gary Ogden
 Fix For: 2.1.4

 Attachments: b_output.log, jersey_error.log, node1-cassandra.yaml, 
 node1-system.log, node2-cassandra.yaml, node2-system.log, 
 node3-cassandra.yaml, node3-system.log


 We have java apps running on glassfish that read/write to our 3 node cluster 
 running on 2.0.9. 
 we have the CL set to quorum for all reads and writes.
 When we started to upgrade the first node and did the sstable upgrade on that 
 node, we started getting this error on reads and writes:
 com.datastax.driver.core.exceptions.UnavailableException: Not enough replica 
 available for query at consistency QUORUM (2 required but only 1 alive)
 How is that possible when we have 3 nodes total, and there was 2 that were up 
 and it's saying we can't get the required CL?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8803) Implement transitional mode in C* that will accept both encrypted and non-encrypted client traffic

2015-03-23 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8803:

Fix Version/s: 2.0.14

 Implement transitional mode in C* that will accept both encrypted and 
 non-encrypted client traffic
 --

 Key: CASSANDRA-8803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8803
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vishy Kasar
 Fix For: 2.0.14


 We have some non-secure clusters taking live traffic in production from 
 active clients. We want to enable client to node encryption on these 
 clusters. Once we set the client_encryption_options enabled to true in yaml 
 and bounce a cassandra node in the ring, the existing clients that do not do 
 SSL will fail to connect to that node.
 There does not seem to be a good way to roll this change with out taking an 
 outage. Can we implement a transitional mode in C* that will accept both 
 encrypted and non-encrypted client traffic? We would enable this during 
 transition and turn it off after both server and client start talking SSL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376696#comment-14376696
 ] 

Ariel Weisberg commented on CASSANDRA-9019:
---

My only concern is that the string returned suppresses information about the 
exception. The return value may not be a great place to barf an exception 
anyways. Logging to the log at info level also might be too verbose. This is 
where having a rate limited logger can be nice.

 GCInspector detected GC before ThreadPools are initialized
 --

 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0

 Attachments: 9019.txt


 While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, 
 I ran into the following exception:
 {code}
 java.lang.RuntimeException: Error reading: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
 at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
 at 
 org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
 at 
 sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
 at 
 sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy3.getValue(Unknown Source)
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
 ... 5 more
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 7 more
 
 {code}
 Dtest didn't preserve the logs, which implies that this wasn't in the 
 system.log, but printed to stderr somehow, it's unclear with all the piping 
 dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9022) Node Cleanup deletes all its data after a new node joined the cluster

2015-03-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9022:
--
Reviewer: Sylvain Lebresne

 Node Cleanup deletes all its data after a new node joined the cluster
 -

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: 9022.txt, bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8794) AntiEntropySessions doesn't show up until after a repair

2015-03-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376751#comment-14376751
 ] 

Yuki Morishita commented on CASSANDRA-8794:
---

I'm sure we don't change that behavior for a long time. It can be pre-1.2 or 
even further before.
And AntiEntropySessions is removed on 3.0. Repair will run on thread pool 
allocated to each repair invocation.

I think this issue can be superceded by CASSANDRA-8076 or CASSANDRA-5839.

 AntiEntropySessions doesn't show up until after a repair
 

 Key: CASSANDRA-8794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8794
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday
Assignee: Yuki Morishita

 The metric AntiEntropySessions for internal thread pools doesn't actually 
 show up as an mbean until after a repair is run.  This should actually be 
 displayed before.  This also, keeps any cluster that doesn't need repairing 
 from displaying stats for AntiEntropySessions.  The lack of the mbean's 
 existence until after the repair will cause problem for various monitoring 
 tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8858) Avoid not doing anticompaction on compacted away sstables

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8858:
---
Issue Type: Improvement  (was: Bug)

 Avoid not doing anticompaction on compacted away sstables
 -

 Key: CASSANDRA-8858
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8858
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 3.0


 Currently, if an sstable is compacted away during repair, we will not 
 anticompact it, meaning we will do too much work when we run the next repair.
 There are a few ways to solve this:
 1. track where the compacted sstables end up (ie, we compact and sstable 1,2  
 that are being repaired into sstable 3, we can anticompact sstable 3 once 
 repair is done). Note that this would force us to not compact newly flushed 
 sstables with the ones that existed when we started repair.
 2. don't do compactions at all among the sstables we repair (essentially just 
 mark the as compacting when we start validating and keep them that way 
 throughout the repair)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8984) Introduce Transactional API for behaviours that can corrupt system state

2015-03-23 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8984:
---
Attachment: 8984_windows_timeout.txt

 Introduce Transactional API for behaviours that can corrupt system state
 

 Key: CASSANDRA-8984
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8984
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4

 Attachments: 8984_windows_timeout.txt


 As a penultimate (and probably final for 2.1, if we agree to introduce it 
 there) round of changes to the internals managing sstable writing, I've 
 introduced a new API called Transactional that I hope will make it much 
 easier to write correct behaviour. As things stand we conflate a lot of 
 behaviours into methods like close - the recent changes unpicked some of 
 these, but didn't go far enough. My proposal here introduces an interface 
 designed to support four actions (on top of their normal function):
 * prepareToCommit
 * commit
 * abort
 * cleanup
 In normal operation, once we have finished constructing a state change we 
 call prepareToCommit; once all such state changes are prepared, we call 
 commit. If at any point everything fails, abort is called. In _either_ case, 
 cleanup is called at the very last.
 These transactional objects are all AutoCloseable, with the behaviour being 
 to rollback any changes unless commit has completed successfully.
 The changes are actually less invasive than it might sound, since we did 
 recently introduce abort in some places, as well as have commit like methods. 
 This simply formalises the behaviour, and makes it consistent between all 
 objects that interact in this way. Much of the code change is boilerplate, 
 such as moving an object into a try-declaration, although the change is still 
 non-trivial. What it _does_ do is eliminate a _lot_ of special casing that we 
 have had since 2.1 was released. The data tracker API changes and compaction 
 leftover cleanups should finish the job with making this much easier to 
 reason about, but this change I think is worthwhile considering for 2.1, 
 since we've just overhauled this entire area (and not released these 
 changes), and this change is essentially just the finishing touches, so the 
 risk is minimal and the potential gains reasonably significant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8984) Introduce Transactional API for behaviours that can corrupt system state

2015-03-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376703#comment-14376703
 ] 

Joshua McKenzie commented on CASSANDRA-8984:


All unit tests are timing out on Windows - attaching a sample of one to this 
ticket. I've reviewed most of the patch and will leave feedback for what I have 
thus far - I still need to digest the changes to SSTableWriter as they're 
fairly extensive.

Comments:
* I'm not too keen on prepareToCommit being part of the usage pattern without 
actually being present in the interface but I can see why you went that route. 
I don't have any alternative suggestions unfortunately.
* The passing around of Throwables to merge deep in the stack (Transactional, 
SafeMemory*, Ref* etc) is a little cludgy and has some pretty deeply nested 
tag-along variables. Not sure why we can't just return a Throwable up that 
stack and merge at the top level when we know we might need to merge rather 
than passing the Throwable all the way down from the top to merge at the 
bottom...?
* The StateManager abstraction and the Transactional Interface seem to be a 
poor fit to several of the implementers. Having 2/3 of the methods resolve to 
the noOpTransition seems like we're conflating the idea of classes that have 
resources that need to be cleaned up with classes that have a set of state 
transitions they go through and a logical abort process.
* With regard to the StateManager requiring a beginTransition / 
rejectedTransition combo and specific completeTransition - we're trading one 
set of manually managed states for another. Still error-prone and has quite a 
bit of duplication where it's implemented (rather than noOpTransition)
* autoclose seems to have some redundant assignment - we switch on state and if 
it's COMMITTED, we set state = COMMITTED, ABORTED we set it to ABORTED.
* Consider renaming StateManager.autoclose(). Something like 'finalize()' might 
be more accurate, as 'state.autoclose()' describes the context in which it's 
called rather than what it's doing.

nits:
* Inconsistent prepareForCommit vs. prepareToCommit in comment in Transactional
* Unused Logger added to IndexSummary - was this intentional?
* You left a comment in SSTRW.prepareToCommitAndMaybeThrow that should be 
removed:
{noformat}
// No early open to finalize and replace
{noformat}

In general this patch and the recent trend in our code-base on the 2.1+ 
branches makes me uneasy. Moving the state tracking logic from within the SSTRW 
and SSTW into their own abstraction helps separate our concerns and increase 
modularity at the cost of increased complexity w/regards to the depth of the 
type system and object interaction, similarly to the introduction of the 
formalized ref-counting infrastructure. Each additional step we've take to 
shore up our stability w/regards to SSTable lifecycles is increasing our net 
complexity and the contrast between where we started and where we are now is 
pretty striking. Now, that's not to say that I prefer the alternative of being 
back where we started with regards to having an error-prone brittle interface 
for ref-counting for instance, but in general I'm left feeling wary when I see 
more wide-spread changes in the same vein particularly as we're approaching a 
.4 release on 2.1.

Note: When I refer to wide-spread changes, I have no hard and fast rule as to 
what qualifies however this change touches [many 
files|https://github.com/belliottsmith/cassandra/commit/16d92cc5926d54667609fb8300f3c573bea5c89f].

As we're not attempting to address any current pain-point with this ticket, 
there's the outstanding potential missed close() w/regards to commit(), and 
this commit rewrites some of the hotter areas in SSTRW w/regards to recent 
errors, I'm of the opinion this change is better targeted towards 3.0 similarly 
to CASSANDRA-8568. This would give us more time to beef up our testing 
infrastructure to better test changes in these portions of the code-base that 
are historically vulnerable to races w/regards to state and would also give the 
rest of the developers working on the code-base more time to get familiar with 
these changes rather than having them out in the wild immediately.

So all that being said, on the whole I believe this approach is a net 
improvement and once we get the details hammered out I believe this will be 
harder to get wrong compared to our previous implementation. The operations 
it's modifying are subtle enough that I don't feel like 2.1 is the right place 
for it at this time though (this could be the whole Once bitten, twice shy 
problem though...)

I'll dig into SSTW and update the ticket when I've gone through that.

 Introduce Transactional API for behaviours that can corrupt system state
 

 Key: CASSANDRA-8984
 

[jira] [Commented] (CASSANDRA-8938) Full Row Scan does not count towards Reads

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376597#comment-14376597
 ] 

Philip Thompson commented on CASSANDRA-8938:


[~krummas], is it correct behavior for STCS to still consider all sstables as 
cold after full scans?

 Full Row Scan does not count towards Reads
 --

 Key: CASSANDRA-8938
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8938
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Tools
 Environment: Unix, Cassandra 2.0.3
Reporter: Amit Singh Chowdhery
Assignee: Marcus Eriksson
Priority: Minor
  Labels: none

 When a CQL SELECT statement is executed with WHERE clause, Read Count is 
 incremented in cfstats of the column family. But, when a full row scan is 
 done using SELECT statement without WHERE clause, Read Count is not 
 incremented. 
 Similarly, when using Size Tiered Compaction, if we do a full row scan using 
 Hector RangeslicesQuery, Read Count is not incremented in cfstats, Cassandra 
 still considers all sstables as cold and does not trigger compaction for 
 them. If we fire MultigetSliceQuery, Read Count is incremented and sstables 
 becomes hot, triggering compaction of these sstables. 
 Expected Behavior:
 1. Read Count must be incremented by number of rows read during a full row 
 scan done using CQL SELECT statement or Hector RangeslicesQuery.
 2. Size Tiered compaction must consider all sstables as Hot after a full row 
 scan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8874) running out of FD, and causing clients hang when dropping a keyspace with many CF with many sstables

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376609#comment-14376609
 ] 

Philip Thompson commented on CASSANDRA-8874:


Could you attach the system log of one of the affected nodes?

 running out of FD, and causing clients hang when dropping a keyspace with 
 many CF with many sstables
 

 Key: CASSANDRA-8874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8874
 Project: Cassandra
  Issue Type: Bug
Reporter: Jackson Chung
 Fix For: 2.0.14


 we already set number of file descriptors to 10 for c* usage, and 
 confirmed that from /proc/$cass_pid/limits
 we have 16 nodes, 2 DC, each node stores about 600GB to 1TB data; ec2, i2-2xl 
 instances, raid0 the 2 disks
 we use both hector and datastax drivers, and there are many clients 
 connecting to the cluster.
 1 day we dropped a keyspace (that our app no longer use), which has a good 
 amount of CFs, with some of them use leveledbcompaction and have some good 
 amount of sstables... and our app went down. CPU/load avg were high and we 
 couldn't even ssh to them. We have to force a reboot, and restart 2 of the 
 C*, that was filled (hundreds of thousands) of errors of too many open files
 C* 2.0.11
 {noformat}$ grep -ic caused by.*too many open file system.log.*
 system.log.1:0
 system.log.10:18659
 system.log.11:17539
 system.log.12:18941
 system.log.13:18936
 system.log.14:18601
 system.log.15:18933
 system.log.16:18937
 system.log.17:18954
 system.log.18:18892
 system.log.19:18942
 system.log.2:0
 system.log.20:18977
 system.log.21:18977
 system.log.22:18852
 system.log.23:18978
 system.log.24:18978
 system.log.25:18978
 system.log.26:18978
 system.log.27:18978
 system.log.28:18978
 system.log.29:18978
 system.log.3:654
 system.log.30:18978
 system.log.31:18978
 system.log.32:18978
 system.log.33:18977
 system.log.34:18978
 system.log.35:18978
 system.log.36:17943
 system.log.37:18867
 system.log.38:15082
 system.log.39:17766
 system.log.4:17932
 system.log.40:18029
 system.log.41:18890
 system.log.42:18048
 system.log.43:18812
 system.log.44:18787
 system.log.45:18962
 system.log.46:18978
 system.log.47:18978
 system.log.48:18978
 system.log.49:18978
 system.log.5:15284
 system.log.50:18978
 system.log.6:17180
 system.log.7:17286
 system.log.8:18651
 system.log.9:17720
 {noformat}
 all the logs are from that day..



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8794) AntiEntropySessions doesn't show up until after a repair

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376637#comment-14376637
 ] 

Philip Thompson commented on CASSANDRA-8794:


What version is this affecting, 2.0.X?

 AntiEntropySessions doesn't show up until after a repair
 

 Key: CASSANDRA-8794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8794
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday
Assignee: Yuki Morishita

 The metric AntiEntropySessions for internal thread pools doesn't actually 
 show up as an mbean until after a repair is run.  This should actually be 
 displayed before.  This also, keeps any cluster that doesn't need repairing 
 from displaying stats for AntiEntropySessions.  The lack of the mbean's 
 existence until after the repair will cause problem for various monitoring 
 tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8794) AntiEntropySessions doesn't show up until after a repair

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8794:
---
Assignee: Yuki Morishita

 AntiEntropySessions doesn't show up until after a repair
 

 Key: CASSANDRA-8794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8794
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday
Assignee: Yuki Morishita

 The metric AntiEntropySessions for internal thread pools doesn't actually 
 show up as an mbean until after a repair is run.  This should actually be 
 displayed before.  This also, keeps any cluster that doesn't need repairing 
 from displaying stats for AntiEntropySessions.  The lack of the mbean's 
 existence until after the repair will cause problem for various monitoring 
 tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8794) AntiEntropySessions doesn't show up until after a repair

2015-03-23 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376711#comment-14376711
 ] 

Chris Lohfink commented on CASSANDRA-8794:
--

i've seen it in 2.0.x, .7 for sure but I don't know if it goes further back.

 AntiEntropySessions doesn't show up until after a repair
 

 Key: CASSANDRA-8794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8794
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday
Assignee: Yuki Morishita

 The metric AntiEntropySessions for internal thread pools doesn't actually 
 show up as an mbean until after a repair is run.  This should actually be 
 displayed before.  This also, keeps any cluster that doesn't need repairing 
 from displaying stats for AntiEntropySessions.  The lack of the mbean's 
 existence until after the repair will cause problem for various monitoring 
 tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9022) Node Cleanup deletes all its data after a new node joined the cluster

2015-03-23 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9022:

Attachment: 9022.txt

OK, so I think this is a very simple bug, which would have been fixed by 
CASSANDRA-8946 if we had rolled it out to both constructors. Since only the 
other constructor is covered by extensive unit tests, I'm attaching a patch 
that shares the behaviour between each. The simple likely explanation is that 
the normalize call yields a minimum() bound for the RHS of a range, which would 
cause nothing to be returned for that interval. At the time of writing it I 
didn't realise the minimum() bound was used for the RHS max (which is actually 
an unnecessary complication for Range, since it is an inclusive RHS, so we 
could consider changing that to avoid anyone else making the mistake).

[~aboudreault]: could you try out with this patch?

 Node Cleanup deletes all its data after a new node joined the cluster
 -

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4

 Attachments: 9022.txt, bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6922) Investigate if we can drop ByteOrderedPartitioner and OrderPreservingPartitioner in 3.0

2015-03-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376755#comment-14376755
 ] 

Jason Brown commented on CASSANDRA-6922:


bq. Curious what the legitimate use cases are ...

A long time ago, and I'm not remembering so well, there was a project back at 
netflix where the users wanted to map a set of their middle-ware services to an 
explicit range of the database. They wanted to be able to read the entire range 
into their app's memory for metrics calculation and such. Also, as per 
[~xedin]'s earlier comment, it looks like Titan makes use of BOP. I wonder if 
this is still true? Poking at the current github code for Titan, I do see some 
some references to BOP, but not sure if it's legacy or still used or 


 Investigate if we can drop ByteOrderedPartitioner and 
 OrderPreservingPartitioner in 3.0
 ---

 Key: CASSANDRA-6922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6922
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 3.0


 We would need to add deprecation warnings in 2.1, rewrite a lot of unit 
 tests, and perhaps provide tools/guidelines to migrate an existing data set 
 to Murmur3Partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8670) Large columns + NIO memory pooling causes excessive direct memory usage

2015-03-23 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-8670:
--
Attachment: largecolumn_test.py

Took way longer than I would have liked, but here is an alternative 
implementation of DataInputStream and DataOutputStreamPlus that wraps a 
WritableByteChannel and does any necessary buffering.  

[Implementation available on  
github.|https://github.com/apache/cassandra/compare/trunk...aweisberg:C-8670?expand=1]

I am also attaching a dtest validates that almost no direct byte buffer memory 
is allocated even when using large columns. To check how much is allocated I 
used reflection on java.nio.Bits and have GCInspector supply it along with the 
other metrics it supplies.

To make it easy to test for I added a -D flag for testing that has Netty not 
pool memory and prefer non-direct byte buffers.

The only other place where I think we might run into this issue is streaming. 
That operates on the input/output streams from sockets. With streaming you 
don't connect to as many nodes, and if the thread that is used for streaming is 
released once streaming completes is shouldn't be a problem.


 Large columns + NIO memory pooling causes excessive direct memory usage
 ---

 Key: CASSANDRA-8670
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8670
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0

 Attachments: largecolumn_test.py


 If you provide a large byte array to NIO and ask it to populate the byte 
 array from a socket it will allocate a thread local byte buffer that is the 
 size of the requested read no matter how large it is. Old IO wraps new IO for 
 sockets (but not files) so old IO is effected as well.
 Even If you are using Buffered{Input | Output}Stream you can end up passing a 
 large byte array to NIO. The byte array read method will pass the array to 
 NIO directly if it is larger than the internal buffer.  
 Passing large cells between nodes as part of intra-cluster messaging can 
 cause the NIO pooled buffers to quickly reach a high watermark and stay 
 there. This ends up costing 2x the largest cell size because there is a 
 buffer for input and output since they are different threads. This is further 
 multiplied by the number of nodes in the cluster - 1 since each has a 
 dedicated thread pair with separate thread locals.
 Anecdotally it appears that the cost is doubled beyond that although it isn't 
 clear why. Possibly the control connections or possibly there is some way in 
 which multiple 
 Need a workload in CI that tests the advertised limits of cells on a cluster. 
 It would be reasonable to ratchet down the max direct memory for the test to 
 trigger failures if a memory pooling issue is introduced. I don't think we 
 need to test concurrently pulling in a lot of them, but it should at least 
 work serially.
 The obvious fix to address this issue would be to read in smaller chunks when 
 dealing with large values. I think small should still be relatively large (4 
 megabytes) so that code that is reading from a disk can amortize the cost of 
 a seek. It can be hard to tell what the underlying thing being read from is 
 going to be in some of the contexts where we might choose to implement 
 switching to reading chunks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-03-23 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376213#comment-14376213
 ] 

Dave Brosius commented on CASSANDRA-8561:
-

any concerns about limiting the size of logging in the case where keys are 
massive?

 Tombstone log warning does not log partition key
 

 Key: CASSANDRA-8561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Datastax DSE 4.5
Reporter: Jens Rantil
Assignee: Lyuben Todorov
  Labels: logging
 Fix For: 2.1.4

 Attachments: cassandra-2.1-8561.diff, 
 cassandra-2.1-head-1427124485-8561.diff, 
 cassandra-trunk-head-1427125869-8561.diff


 AFAIK, the tombstone warning in system.log does not contain the primary key. 
 See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
 Including it would help a lot in diagnosing why the (CQL) row has so many 
 tombstones.
 Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6922) Investigate if we can drop ByteOrderedPartitioner and OrderPreservingPartitioner in 3.0

2015-03-23 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376237#comment-14376237
 ] 

Jonathan Ellis commented on CASSANDRA-6922:
---

I'd be in favor of deprecating (and removing from the yaml comments) if only so 
that people don't shoot themselves in the foot by accident.  

Curious what the legitimate use cases are that [~jasobrown] has in mind.

 Investigate if we can drop ByteOrderedPartitioner and 
 OrderPreservingPartitioner in 3.0
 ---

 Key: CASSANDRA-6922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6922
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 3.0


 We would need to add deprecation warnings in 2.1, rewrite a lot of unit 
 tests, and perhaps provide tools/guidelines to migrate an existing data set 
 to Murmur3Partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9019) GCInspector detected GC before ThreadPools are initialized

2015-03-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376217#comment-14376217
 ] 

Philip Thompson commented on CASSANDRA-9019:


It's possible it was just an error in dtest, and it really was in system.log. 

 GCInspector detected GC before ThreadPools are initialized
 --

 Key: CASSANDRA-9019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9019
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Yuki Morishita
 Fix For: 3.0


 While running the dtest {{one_all_test (consistency_test.TestConsistency)}}, 
 I ran into the following exception:
 {code}
 java.lang.RuntimeException: Error reading: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:134)
 at org.apache.cassandra.utils.StatusLogger.log(StatusLogger.java:55)
 at 
 org.apache.cassandra.service.GCInspector.handleNotification(GCInspector.java:147)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1754)
 at 
 sun.management.NotificationEmitterSupport.sendNotification(NotificationEmitterSupport.java:156)
 at 
 sun.management.GarbageCollectorImpl.createGCNotification(GarbageCollectorImpl.java:150)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at com.sun.proxy.$Proxy3.getValue(Unknown Source)
 at 
 org.apache.cassandra.metrics.ThreadPoolMetrics.getJmxMetric(ThreadPoolMetrics.java:123)
 ... 5 more
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=ActiveTasks
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 ... 7 more
 
 {code}
 Dtest didn't preserve the logs, which implies that this wasn't in the 
 system.log, but printed to stderr somehow, it's unclear with all the piping 
 dtest and ccm do. I have yet to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9020) java.lang.AssertionError on node startup

2015-03-23 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-9020:
--

 Summary: java.lang.AssertionError on node startup
 Key: CASSANDRA-9020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9020
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Sam Tunnicliffe
 Fix For: 3.0
 Attachments: node1.log, node2.log, node3.log

Occasionally when running dtests, I will see this exception when starting a 
test:
{code}
ERROR [main] 2015-03-23 13:23:25,719 CassandraDaemon.java:612 - Exception 
encountered during startup
java.lang.AssertionError: 
org.apache.cassandra.exceptions.InvalidRequestException: unconfigured table 
roles
at 
org.apache.cassandra.auth.CassandraRoleManager.prepare(CassandraRoleManager.java:415)
 ~[main/:na]
at 
org.apache.cassandra.auth.CassandraRoleManager.setup(CassandraRoleManager.java:127)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:897)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:829)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:576) 
~[main/:na]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:464) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:357) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:492) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:599) 
[main/:na]
Caused by: org.apache.cassandra.exceptions.InvalidRequestException: 
unconfigured table roles
at 
org.apache.cassandra.thrift.ThriftValidation.validateColumnFamily(ThriftValidation.java:115)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:733)
 ~[main/:na]
at 
org.apache.cassandra.auth.CassandraRoleManager.prepare(CassandraRoleManager.java:411)
 ~[main/:na]
... 8 common frames omitted
{code}

Most recently it occurred on {{test_paging_across_multi_wide_rows 
(paging_test.TestPagingData)}}, though I have seen it in other tests. It does 
not reproduce consistently. I have attached the system.log file of an affected 
node, node1.log.   The other two log files belong to other nodes in the 
cluster, in case that helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Rocco Varela (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rocco Varela updated CASSANDRA-9021:

Environment: 
Cluster setup:
- 20-node stress cluster, GCE n1-standard-2
- 10-node receiver cluster ingesting data, GCE n1-standard-8 

Platform:
- Ubuntu 12.0.4 x86_64

Versions:
- DSE 4.7.0
- Cassandra 2.1.3.304
- Java 1.7.0_45

DSE Configuration:
- Xms7540M 
- Xmx7540M 
- Xmn800M
- Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
- Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
- ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
- XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
- XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
- XX:+HeapDumpOnOutOfMemoryError -Xss256k 
- XX:StringTableSize=103 -XX:+UseParNewGC 
- XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
- XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
- XX:CMSInitiatingOccupancyFraction=75 
- XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB

  was:
Cluster setup:
- 20-node stress cluster, GCE n1-standard-2
- 10-node receiver cluster ingesting data, GCE n1-standard-8 

Versions:
- DSE 4.7.0
- Cassandra 2.1.3.304

DSE Configuration:
- Xms7540M 
- Xmx7540M 
- Xmn800M
- Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
- Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
- ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
- XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
- XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
- XX:+HeapDumpOnOutOfMemoryError -Xss256k 
- XX:StringTableSize=103 -XX:+UseParNewGC 
- XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
- XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
- XX:CMSInitiatingOccupancyFraction=75 
- XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB


 AssertionError and Leak detected during sstable compaction
 --

 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
 - 20-node stress cluster, GCE n1-standard-2
 - 10-node receiver cluster ingesting data, GCE n1-standard-8 
 Platform:
 - Ubuntu 12.0.4 x86_64
 Versions:
 - DSE 4.7.0
 - Cassandra 2.1.3.304
 - Java 1.7.0_45
 DSE Configuration:
 - Xms7540M 
 - Xmx7540M 
 - Xmn800M
 - Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
 - Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
 - ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
 - XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 - XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
 - XX:+HeapDumpOnOutOfMemoryError -Xss256k 
 - XX:StringTableSize=103 -XX:+UseParNewGC 
 - XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 - XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 - XX:CMSInitiatingOccupancyFraction=75 
 - XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Attachments: system.log


 After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
 errors during what looks like sstable compaction.
 system.log snippets (full log attached):
 {code}
 ...
 INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/requests_ks/timeline-   
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
 bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 
 total   partitions merged to 805.  Partition merge counts were {1:627, 
 2:178, }
 INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/system/ 
 compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
   426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
 partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
 ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 
 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.AssertionError: null
  at 
 org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  

[jira] [Updated] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9021:
---
Fix Version/s: 2.1.4

 AssertionError and Leak detected during sstable compaction
 --

 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
 - 20-node stress cluster, GCE n1-standard-2
 - 10-node receiver cluster ingesting data, GCE n1-standard-8 
 Platform:
 - Ubuntu 12.0.4 x86_64
 Versions:
 - DSE 4.7.0
 - Cassandra 2.1.3.304
 - Java 1.7.0_45
 DSE Configuration:
 - Xms7540M 
 - Xmx7540M 
 - Xmn800M
 - Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
 - Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
 - ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
 - XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 - XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
 - XX:+HeapDumpOnOutOfMemoryError -Xss256k 
 - XX:StringTableSize=103 -XX:+UseParNewGC 
 - XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 - XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 - XX:CMSInitiatingOccupancyFraction=75 
 - XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Fix For: 2.1.4

 Attachments: system.log


 After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
 errors during what looks like sstable compaction.
 system.log snippets (full log attached):
 {code}
 ...
 INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/requests_ks/timeline-   
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
 bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 
 total   partitions merged to 805.  Partition merge counts were {1:627, 
 2:178, }
 INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  
 CompactionTask.java:267 - Compacted 4 sstables to 
 [/mnt/cass_data_disks/data1/system/ 
 compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
   426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
 partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
 ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 
 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.AssertionError: null
  at 
 org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier$1.run(SSTableReader.java:2036)
  ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
  at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 ...
 INFO  [MemtableFlushWriter:50] 2015-03-23 02:47:29,465  Memtable.java:378 - 
 Completed flushing /mnt/cass_data_disks/data1/requests_ks/timeline-   
9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-188-Data.db 
 (16311981 bytes) for commitlog position 
 ReplayPosition(segmentId=1427071574495, position=4523631)
 ERROR [Reference-Reaper:1] 2015-03-23 02:47:33,987  Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@2f59b10) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1251424500:/mnt/cass_data_disks/data1/requests_ks/timeline-9500fe40d0f611e495675d5ea01541b5/
 requests_ks-timeline-ka-149 was not released before the reference was 
 garbage collected
 INFO  [Service Thread] 2015-03-23 02:47:40,158  

[jira] [Created] (CASSANDRA-9021) AssertionError and Leak detected during sstable compaction

2015-03-23 Thread Rocco Varela (JIRA)
Rocco Varela created CASSANDRA-9021:
---

 Summary: AssertionError and Leak detected during sstable compaction
 Key: CASSANDRA-9021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9021
 Project: Cassandra
  Issue Type: Bug
 Environment: Cluster setup:
- 20-node stress cluster, GCE n1-standard-2
- 10-node receiver cluster ingesting data, GCE n1-standard-8 

Versions:
- DSE 4.7.0
- Cassandra 2.1.3.304

DSE Configuration:
- Xms7540M 
- Xmx7540M 
- Xmn800M
- Ddse.system_cpu_cores=8 -Ddse.system_memory_in_mb=30161 
- Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader 
- ea -javaagent:/usr/local/lib/dse/resources/cassandra/lib/jamm-0.3.0.jar 
- XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
- XX:ThreadPriorityPolicy=42 -Xms7540M -Xmx7540M -Xmn800M 
- XX:+HeapDumpOnOutOfMemoryError -Xss256k 
- XX:StringTableSize=103 -XX:+UseParNewGC 
- XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
- XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
- XX:CMSInitiatingOccupancyFraction=75 
- XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB
Reporter: Rocco Varela
Assignee: Benedict
 Attachments: system.log

After ~3 hours of data ingestion we see assertion errors and 'LEAK DETECTED' 
errors during what looks like sstable compaction.


system.log snippets (full log attached):
{code}
...
INFO  [CompactionExecutor:12] 2015-03-23 02:45:51,770  CompactionTask.java:267 
- Compacted 4 sstables to [/mnt/cass_data_disks/data1/requests_ks/timeline- 
  9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-185,].  65,916,594 
bytes to 66,159,512 (~100% of original) in 26,554ms = 2.376087MB/s.  983 total  
 partitions merged to 805.  Partition merge counts were {1:627, 2:178, }
INFO  [CompactionExecutor:11] 2015-03-23 02:45:51,837  CompactionTask.java:267 
- Compacted 4 sstables to [/mnt/cass_data_disks/data1/system/   
  
compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-119,].
  426 bytes to 42 (~9% of original) in 82ms = 0.000488MB/s.  5  total 
partitions merged to 1.  Partition merge counts were {1:1, 2:2, }
ERROR [NonPeriodicTasks:1] 2015-03-23 02:45:52,251  CassandraDaemon.java:167 - 
Exception in thread Thread[NonPeriodicTasks:1,5,main]
java.lang.AssertionError: null
 at 
org.apache.cassandra.io.compress.CompressionMetadata$Chunk.init(CompressionMetadata.java:438)
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
 at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
 at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.dropPageCache(CompressedPoolingSegmentedFile.java:80)
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
 at org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:923) 
~[cassandra-all-2.1.3.304.jar:2.1.3.304]
 at 
org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier$1.run(SSTableReader.java:2036)
 ~[cassandra-all-2.1.3.304.jar:2.1.3.304]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_45]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_45]
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
 ~[na:1.7.0_45]
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
 ~[na:1.7.0_45]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_45]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
...
INFO  [MemtableFlushWriter:50] 2015-03-23 02:47:29,465  Memtable.java:378 - 
Completed flushing /mnt/cass_data_disks/data1/requests_ks/timeline- 
 9500fe40d0f611e495675d5ea01541b5/requests_ks-timeline-ka-188-Data.db 
(16311981 bytes) for commitlog position ReplayPosition(segmentId=1427071574495, 
position=4523631)
ERROR [Reference-Reaper:1] 2015-03-23 02:47:33,987  Ref.java:181 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@2f59b10) 
to class 
org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1251424500:/mnt/cass_data_disks/data1/requests_ks/timeline-9500fe40d0f611e495675d5ea01541b5/
requests_ks-timeline-ka-149 was not released before the reference was 
garbage collected
INFO  [Service Thread] 2015-03-23 02:47:40,158  GCInspector.java:142 - 
ConcurrentMarkSweep GC in 12247ms.  CMS Old Gen: 5318987136 - 457655168; CMS 
Perm Gen:   44731264 - 44699160; Par Eden Space: 8597912 - 418006664; Par 
Survivor Space: 71865728 - 59679584
...
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9022) Node Cleanup deletes all data

2015-03-23 Thread Alan Boudreault (JIRA)
Alan Boudreault created CASSANDRA-9022:
--

 Summary: Node Cleanup deletes all data 
 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Priority: Critical
 Fix For: 2.1.4
 Attachments: bisect.sh, results_cassandra_2.1.3.txt, 
results_cassandra_2.1_branch.txt

I try to add a node in my cluster and doing some cleanup deleted all my data on 
a node. This makes the cluster totally broken since all next read seem to not 
be able to validate the data. Even a repair on the problematic node doesn't fix 
the issue.  I've attached the bisect script used and the output results of the 
procedure.

Procedure to reproduce:
{code}
ccm stop  ccm remove
ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
ccm start
ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
threads=50
ccm node1 nodetool status
ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
ccm node3 start
ccm node1 nodetool status
ccm node3 repair
ccm node3 nodetool status
ccm node1 nodetool cleanup
ccm node2 nodetool cleanup
ccm node3 nodetool cleanup
ccm node1 nodetool status
ccm node1 repair
ccm node1 stress -- read n=100 ## CRASH Data returned was not validated ?!?
{code}

bisec script output:
{code}
$ git bisect start cassandra-2.1 cassandra-2.1.3
$ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
...
4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
Author: Benedict Elliott Smith bened...@apache.org
Date:   Wed Feb 11 15:49:43 2015 +

Enforce SSTableReader.first/last

patch by benedict; reviewed by yukim for CASSANDRA-8744

:100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
:04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
bisect run success
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9022) Node Cleanup deletes all data

2015-03-23 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376401#comment-14376401
 ] 

Alan Boudreault commented on CASSANDRA-9022:


//cc [~benedict] [~krummas]

 Node Cleanup deletes all data 
 --

 Key: CASSANDRA-9022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9022
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Priority: Critical
 Fix For: 2.1.4

 Attachments: bisect.sh, results_cassandra_2.1.3.txt, 
 results_cassandra_2.1_branch.txt


 I try to add a node in my cluster and doing some cleanup deleted all my data 
 on a node. This makes the cluster totally broken since all next read seem to 
 not be able to validate the data. Even a repair on the problematic node 
 doesn't fix the issue.  I've attached the bisect script used and the output 
 results of the procedure.
 Procedure to reproduce:
 {code}
 ccm stop  ccm remove
 ccm create -n 2 --install-dir=path/to/cassandra-2.1/branch demo
 ccm start
 ccm node1 stress -- write n=100 -schema replication\(factor=2\) -rate 
 threads=50
 ccm node1 nodetool status
 ccm add -i 127.0.0.3 -j 7400 node3 # no auto-boostrap
 ccm node3 start
 ccm node1 nodetool status
 ccm node3 repair
 ccm node3 nodetool status
 ccm node1 nodetool cleanup
 ccm node2 nodetool cleanup
 ccm node3 nodetool cleanup
 ccm node1 nodetool status
 ccm node1 repair
 ccm node1 stress -- read n=100 ## CRASH Data returned was not validated 
 ?!?
 {code}
 bisec script output:
 {code}
 $ git bisect start cassandra-2.1 cassandra-2.1.3
 $ git bisect run ~/dev/cstar/cleanup_issue/bisect.sh
 ...
 4b05b204acfa60ecad5672c7e6068eb47b21397a is the first bad commit
 commit 4b05b204acfa60ecad5672c7e6068eb47b21397a
 Author: Benedict Elliott Smith bened...@apache.org
 Date:   Wed Feb 11 15:49:43 2015 +
 Enforce SSTableReader.first/last
 
 patch by benedict; reviewed by yukim for CASSANDRA-8744
 :100644 100644 3f0463731e624cbe273dcb3951b2055fa5d9e1a2 
 b2f894eb22b9102d410f1eabeb3e11d26727fbd3 M  CHANGES.txt
 :04 04 51ac2a6cd39bd2377c2e1ed6693ef789ab65a26c 
 79fa2501f4155a64dca2bbdcc9e578008e4e425a M  src
 bisect run success
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: bump metrics-reporter-config dependency

2015-03-23 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1faca1cb5 - a672097db


bump metrics-reporter-config dependency

patch by tjake; reviewed by cburroughs for CASSANDRA-8149


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a672097d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a672097d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a672097d

Branch: refs/heads/trunk
Commit: a672097db4cdddc78baa16de6990c45bfc2e25aa
Parents: 1faca1c
Author: T Jake Luciani j...@apache.org
Authored: Mon Mar 23 13:44:30 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Mon Mar 23 13:44:30 2015 -0400

--
 CHANGES.txt |   1 +
 build.xml   |   2 +-
 lib/licenses/reporter-config-2.1.0.txt  | 177 ---
 lib/licenses/reporter-config-base-3.0.0.txt | 177 +++
 lib/licenses/reporter-config3-3.0.0.txt | 177 +++
 lib/reporter-config-2.1.0.jar   | Bin 22291 - 0 bytes
 lib/reporter-config-base-3.0.0.jar  | Bin 0 - 23633 bytes
 lib/reporter-config3-3.0.0.jar  | Bin 0 - 14379 bytes
 .../cassandra/service/CassandraDaemon.java  |   4 +-
 9 files changed, 358 insertions(+), 180 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a672097d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e10c476..c2944c8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Bump metrics-reporter-config dependency for metrics 3.0 (CASSANDRA-8149)
  * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
  * Add WriteFailureException to native protocol, notify coordinator of
write failures (CASSANDRA-8592)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a672097d/build.xml
--
diff --git a/build.xml b/build.xml
index 09e67d3..7f046b4 100644
--- a/build.xml
+++ b/build.xml
@@ -372,7 +372,7 @@
   dependency groupId=org.apache.cassandra 
artifactId=cassandra-all version=${version} /
   dependency groupId=org.apache.cassandra 
artifactId=cassandra-thrift version=${version} /
   dependency groupId=io.dropwizard.metrics 
artifactId=metrics-core version=3.1.0 /
-  dependency groupId=com.addthis.metrics 
artifactId=reporter-config version=2.1.0 /
+  dependency groupId=com.addthis.metrics 
artifactId=reporter-config version=3.0.0 /
   dependency groupId=org.mindrot artifactId=jbcrypt 
version=0.3m /
   dependency groupId=io.airlift artifactId=airline version=0.6 
/
   dependency groupId=io.netty artifactId=netty-all 
version=4.0.23.Final /

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a672097d/lib/licenses/reporter-config-2.1.0.txt
--
diff --git a/lib/licenses/reporter-config-2.1.0.txt 
b/lib/licenses/reporter-config-2.1.0.txt
deleted file mode 100644
index 430d42b..000
--- a/lib/licenses/reporter-config-2.1.0.txt
+++ /dev/null
@@ -1,177 +0,0 @@
-
-  Apache License
-Version 2.0, January 2004
- http://www.apache.org/licenses/
-
-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-1. Definitions.
-
-   License shall mean the terms and conditions for use, reproduction,
-   and distribution as defined by Sections 1 through 9 of this document.
-
-   Licensor shall mean the copyright owner or entity authorized by
-   the copyright owner that is granting the License.
-
-   Legal Entity shall mean the union of the acting entity and all
-   other entities that control, are controlled by, or are under common
-   control with that entity. For the purposes of this definition,
-   control means (i) the power, direct or indirect, to cause the
-   direction or management of such entity, whether by contract or
-   otherwise, or (ii) ownership of fifty percent (50%) or more of the
-   outstanding shares, or (iii) beneficial ownership of such entity.
-
-   You (or Your) shall mean an individual or Legal Entity
-   exercising permissions granted by this License.
-
-   Source form shall mean the preferred form for making modifications,
-   including but not limited to software source code, documentation
-   source, and configuration files.
-
-   Object form shall mean any form resulting from mechanical
-   transformation or translation of a Source form, including but
-   not limited to compiled object code, generated documentation,
-   and conversions to other media types.
-
-   Work shall mean the work of authorship, 

  1   2   >