[jira] [Updated] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-11 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-12777:
--
Description: 
The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation where there are multiple replicas per DC.

In our production environment, most clusters have only one replica per DC. In that 
case, the algorithm does not work well: it always tries to split token ranges in 
half, so the ownership of the "min" node can drop to roughly 60% of the average.

So for the single-replica case, I'm working on a new algorithm, based on Branimir's 
previous commit, that splits token ranges by "some" percentage instead of always by 
half. This way, we get a very small variation in ownership across nodes.
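As a rough illustration of the idea only (this is not the actual patch, and the names 
below, {{WeightedSplitSketch}} and {{allocate}}, are hypothetical): instead of always 
halving the largest range, cap how much of it the new unit takes at the ideal per-unit 
share. The sketch treats ownership abstractly as fractions of the ring rather than 
doing real token arithmetic.
{code}
// Hypothetical sketch, not the CASSANDRA-12777 patch: split the largest range so the
// new unit never takes more than the ideal per-unit share, instead of always halving.
import java.util.PriorityQueue;

public final class WeightedSplitSketch
{
    public static double[] allocate(int units)
    {
        double ideal = 1.0 / units;                      // target ownership per unit
        PriorityQueue<Double> shares = new PriorityQueue<>((a, b) -> Double.compare(b, a));
        shares.add(1.0);                                 // whole ring owned by one unit
        while (shares.size() < units)
        {
            double largest = shares.poll();
            double take = Math.min(ideal, largest / 2);  // weighted split instead of plain halving
            shares.add(largest - take);
            shares.add(take);
        }
        return shares.stream().mapToDouble(Double::doubleValue).toArray();
    }

    public static void main(String[] args)
    {
        for (double share : allocate(16))
            System.out.printf("%.4f%n", share);
    }
}
{code}
With 3 units, plain halving yields shares of 0.50/0.25/0.25 (min/avg = 0.75), while the 
capped split yields three equal shares, which is the kind of tight spread the ticket is 
after.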

  was:
The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation that there are multiple replicas per DC.

In our production environment, most cluster only has one replica, in this case, 
the algorithm does work perfectly. It always tries to split token ranges by 
half, so that the ownership of "min" node could go as low as ~60% compared to 
avg.

So for single replica case, I'm working on a new algorithm, which is based on 
Branimir's previous commit, to split token ranges by "some" percentage, instead 
of always by half. In this way, we can get a very small variation of the 
ownership among different nodes.


> Optimize the vnode allocation for single replica per DC
> ---
>
> Key: CASSANDRA-12777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
> for the situation where there are multiple replicas per DC.
> In our production environment, most clusters have only one replica per DC. In 
> that case, the algorithm does not work well: it always tries to split token 
> ranges in half, so the ownership of the "min" node can drop to roughly 60% of 
> the average.
> So for the single-replica case, I'm working on a new algorithm, based on 
> Branimir's previous commit, that splits token ranges by "some" percentage 
> instead of always by half. This way, we get a very small variation in 
> ownership across nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-11 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567784#comment-15567784
 ] 

Dikang Gu commented on CASSANDRA-12777:
---

I have a draft patch; here are some sample results:
{code}
4 vnode, 250 nodes, max 1.11 min 0.89 stddev 0.0734
16 vnode, 250 nodes, max 1.04 min 0.97 stddev 0.0179
64 vnode, 250 nodes, max 1.01 min 0.99 stddev 0.0044
{code}

I'll clean it up a bit and send it out for review.
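For reference, the quoted numbers (max and min per-node ownership relative to the 
average, plus the standard deviation of that ratio) can be computed from per-node 
ownership fractions with something like the sketch below; the names 
({{OwnershipStats}}, {{report}}) are illustrative and not taken from the patch.
{code}
// Illustrative only: max/min/stddev of ownership relative to the average,
// given each node's fraction of the ring (assumed non-empty and positive).
import java.util.Map;

final class OwnershipStats
{
    static void report(Map<String, Double> ownershipByNode)
    {
        double avg = ownershipByNode.values().stream()
                                    .mapToDouble(Double::doubleValue).average().orElse(1.0);
        double max = ownershipByNode.values().stream().mapToDouble(d -> d / avg).max().orElse(0);
        double min = ownershipByNode.values().stream().mapToDouble(d -> d / avg).min().orElse(0);
        double variance = ownershipByNode.values().stream()
                                         .mapToDouble(d -> (d / avg - 1.0) * (d / avg - 1.0))
                                         .average().orElse(0);
        System.out.printf("max %.2f min %.2f stddev %.4f%n", max, min, Math.sqrt(variance));
    }
}
{code}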

> Optimize the vnode allocation for single replica per DC
> ---
>
> Key: CASSANDRA-12777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
> for the situation where there are multiple replicas per DC.
> In our production environment, most clusters have only one replica per DC. In 
> that case, the algorithm does not work well: it always tries to split token 
> ranges in half, so the ownership of the "min" node can drop to roughly 60% of 
> the average.
> So for the single-replica case, I'm working on a new algorithm, based on 
> Branimir's previous commit, that splits token ranges by "some" percentage 
> instead of always by half. This way, we get a very small variation in 
> ownership across nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Merge branch 'cassandra-3.X' into trunk [Forced Update!]

2016-10-11 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6e9c3db56 -> 0b82c4fc6 (forced update)


Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b82c4fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b82c4fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b82c4fc

Branch: refs/heads/trunk
Commit: 0b82c4fc6f6b1fc8a6cb8f9e5a6c00f739dd5e44
Parents: 8e6a58c b25d903
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:27:20 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 23:28:48 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b82c4fc/CHANGES.txt
--
diff --cc CHANGES.txt
index 57ff13c,c59459c..cfc46ad
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -125,6 -119,6 +125,7 @@@ Merged from 2.2
   * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
   * Fail repair on non-existing table (CASSANDRA-12279)
   * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
++ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
  3.8, 3.9



[jira] [Created] (CASSANDRA-12777) Optimize the vnode allocation for single replica per DC

2016-10-11 Thread Dikang Gu (JIRA)
Dikang Gu created CASSANDRA-12777:
-

 Summary: Optimize the vnode allocation for single replica per DC
 Key: CASSANDRA-12777
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12777
 Project: Cassandra
  Issue Type: Improvement
Reporter: Dikang Gu
Assignee: Dikang Gu
 Fix For: 3.x


The new vnode allocation algorithm introduced in CASSANDRA-7032 is optimized 
for the situation where there are multiple replicas per DC.

In our production environment, most clusters have only one replica per DC. In that 
case, the algorithm does not work well: it always tries to split token ranges in 
half, so the ownership of the "min" node can drop to roughly 60% of the average.

So for the single-replica case, I'm working on a new algorithm, based on Branimir's 
previous commit, that splits token ranges by "some" percentage instead of always by 
half. This way, we get a very small variation in ownership across nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567769#comment-15567769
 ] 

Jeff Jirsa commented on CASSANDRA-12296:


{quote}
is not true, at least from my testing with rebuilds... I couldn't force this 
error message to occur with repair, but maybe I'm missing something.
{quote}

It can't be hit with repair because that code block requires 
{{strat.getReplicationFactor() == 1}}, and in that case there would be nothing to 
repair.

The case I was imagining was bootstrap-related, which has a similar error message 
but actually lives in {{getAllRangesWithStrictSourcesFor}} rather than 
{{getRangeFetchMap}}. So I withdraw my comment and insert foot firmly into mouth: 
I can't see any way to trigger this with NTS, so perhaps "switch to NTS" is the 
right fix. 

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12182) redundant StatusLogger print out when both dropped message and long GC event happen

2016-10-11 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567694#comment-15567694
 ] 

Kurt Greaves commented on CASSANDRA-12182:
--

You can set the logging level for StatusLogger to WARN to avoid those messages:
{code}nodetool setlogginglevel org.apache.cassandra.utils.StatusLogger WARN{code}

Or you can set the equivalent in logback.xml. Maybe INFO is noisy, but I think 
replacing the log messages with a "StatusLogger is busy" message would somewhat 
defeat the purpose.
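For reference, the logback.xml equivalent would presumably be a logger entry along 
these lines (exact file location and surrounding configuration depend on the install, 
so treat this as a sketch rather than a drop-in config):
{code}
<!-- presumed logback.xml equivalent of the nodetool command above -->
<logger name="org.apache.cassandra.utils.StatusLogger" level="WARN"/>
{code}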

> redundant StatusLogger print out when both dropped message and long GC event 
> happen
> ---
>
> Key: CASSANDRA-12182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wei Deng
>Priority: Minor
>  Labels: lhf
>
> I was stress testing a C* 3.0 environment, and it appears that when the machine 
> is running low on CPU, HINT and MUTATION messages start to get dropped, the GC 
> can hit some really long-running pauses, and I'd get redundant log entries in 
> system.log like the following:
> {noformat}
> WARN  [Service Thread] 2016-07-12 22:48:45,748  GCInspector.java:282 - G1 Young Generation GC in 522ms.  G1 Eden Space: 68157440 -> 0; G1 Old Gen: 3376113224 -> 3468387912; G1 Survivor Space: 24117248 -> 0;
> INFO  [Service Thread] 2016-07-12 22:48:45,763  StatusLogger.java:52 - Pool Name   Active   Pending   Completed   Blocked   All Time Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,775  MessagingService.java:983 - MUTATION messages were dropped in last 5000 ms: 419 for internal timeout and 0 for cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  MessagingService.java:983 - HINT messages were dropped in last 5000 ms: 89 for internal timeout and 0 for cross node timeout
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,776  StatusLogger.java:52 - Pool Name   Active   Pending   Completed   Blocked   All Time Blocked
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - MutationStage   32   4194   32997234   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,798  StatusLogger.java:56 - ViewMutationStage   0   0   0   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,799  StatusLogger.java:56 - ReadStage   0   0   940   0   0
> INFO  [Service Thread] 2016-07-12 22:48:45,800  StatusLogger.java:56 - MutationStage   32   4363   32997333   0   0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - ViewMutationStage   0   0   0   0   0
> INFO  [Service Thread] 2016-07-12 22:48:45,801  StatusLogger.java:56 - ReadStage   0   0   940   0   0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - RequestResponseStage   0   0   11094437   0   0
> INFO  [Service Thread] 2016-07-12 22:48:45,802  StatusLogger.java:56 - ReadRepairStage   0   0   5   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,803  StatusLogger.java:56 - RequestResponseStage   4   0   11094509   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,807  StatusLogger.java:56 - ReadRepairStage   0   0   5   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,808  StatusLogger.java:56 - CounterMutationStage   0   0   0   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,809  StatusLogger.java:56 - MiscStage   0   0   0   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,809  StatusLogger.java:56 - CompactionExecutor262   1234   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,810  StatusLogger.java:56 - MemtableReclaimMemory   0   0   79   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,810  StatusLogger.java:56 - PendingRangeCalculator   0   0   3   0   0
> INFO  [ScheduledTasks:1] 2016-07-12 22:48:45,819  StatusLogger.java:56 - GossipStage   0   0   5214   0   0
> INFO  

[jira] [Updated] (CASSANDRA-12776) when memtable flush Statistics thisOffHeap error

2016-10-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12776:
---
Component/s: (was: Tools)

> when memtable flush Statistics thisOffHeap error
> 
>
> Key: CASSANDRA-12776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12776
> Project: Cassandra
>  Issue Type: Bug
>Reporter: 翟玉勇
>Priority: Trivial
>  Labels: lhf
>
> {code}
> if (largest != null)
> {
>     float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
>     float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
>     float flushingOnHeap = Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
>     float flushingOffHeap = Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
>     float thisOnHeap = largest.getAllocator().onHeap().ownershipRatio();
>     float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
>     logger.info("Flushing largest {} to free up room. Used total: {}, live: {}, flushing: {}, this: {}",
>                 largest.cfs, ratio(usedOnHeap, usedOffHeap), ratio(liveOnHeap, liveOffHeap),
>                 ratio(flushingOnHeap, flushingOffHeap), ratio(thisOnHeap, thisOffHeap));
>     largest.cfs.switchMemtableIfCurrent(largest);
> }
> {code}
> Should:
> {code}
> float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
> {code}
> Be:
> {{offHeap().ownershipRatio();}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12776) when memtable flush Statistics thisOffHeap error

2016-10-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12776:
---
Labels: lhf  (was: )

> when memtable flush Statistics thisOffHeap error
> 
>
> Key: CASSANDRA-12776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12776
> Project: Cassandra
>  Issue Type: Bug
>Reporter: 翟玉勇
>Priority: Trivial
>  Labels: lhf
>
> {code}
> if (largest != null)
> {
>     float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
>     float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
>     float flushingOnHeap = Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
>     float flushingOffHeap = Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
>     float thisOnHeap = largest.getAllocator().onHeap().ownershipRatio();
>     float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
>     logger.info("Flushing largest {} to free up room. Used total: {}, live: {}, flushing: {}, this: {}",
>                 largest.cfs, ratio(usedOnHeap, usedOffHeap), ratio(liveOnHeap, liveOffHeap),
>                 ratio(flushingOnHeap, flushingOffHeap), ratio(thisOnHeap, thisOffHeap));
>     largest.cfs.switchMemtableIfCurrent(largest);
> }
> {code}
> Should:
> {code}
> float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
> {code}
> Be:
> {{offHeap().ownershipRatio();}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12776) when memtable flush Statistics thisOffHeap error

2016-10-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12776:
---
Description: 
{code}
if (largest != null)
{
    float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
    float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
    float flushingOnHeap = Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
    float flushingOffHeap = Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
    float thisOnHeap = largest.getAllocator().onHeap().ownershipRatio();
    float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
    logger.info("Flushing largest {} to free up room. Used total: {}, live: {}, flushing: {}, this: {}",
                largest.cfs, ratio(usedOnHeap, usedOffHeap), ratio(liveOnHeap, liveOffHeap),
                ratio(flushingOnHeap, flushingOffHeap), ratio(thisOnHeap, thisOffHeap));
    largest.cfs.switchMemtableIfCurrent(largest);
}
{code}

Should:

{code}
float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
{code}

Be:

{{offHeap().ownershipRatio();}}
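That is, per the report above, the corrected assignment would read:
{code}
// use the off-heap allocator when computing the off-heap ownership ratio
float thisOffHeap = largest.getAllocator().offHeap().ownershipRatio();
{code}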

  was:
if (largest != null)
{
float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
float flushingOnHeap = 
Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
float flushingOffHeap = 
Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
float thisOnHeap = 
largest.getAllocator().onHeap().ownershipRatio();
float thisOffHeap = 
largest.getAllocator().onHeap().ownershipRatio();
logger.info("Flushing largest {} to free up room. Used total: 
{}, live: {}, flushing: {}, this: {}",
largest.cfs, ratio(usedOnHeap, usedOffHeap), 
ratio(liveOnHeap, liveOffHeap),
ratio(flushingOnHeap, flushingOffHeap), 
ratio(thisOnHeap, thisOffHeap));
largest.cfs.switchMemtableIfCurrent(largest);
}


 float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
should offHeap().ownershipRatio();


> when memtable flush Statistics thisOffHeap error
> 
>
> Key: CASSANDRA-12776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12776
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: 翟玉勇
>Priority: Trivial
>
> {code}
> if (largest != null)
> {
>     float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
>     float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
>     float flushingOnHeap = Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
>     float flushingOffHeap = Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
>     float thisOnHeap = largest.getAllocator().onHeap().ownershipRatio();
>     float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
>     logger.info("Flushing largest {} to free up room. Used total: {}, live: {}, flushing: {}, this: {}",
>                 largest.cfs, ratio(usedOnHeap, usedOffHeap), ratio(liveOnHeap, liveOffHeap),
>                 ratio(flushingOnHeap, flushingOffHeap), ratio(thisOnHeap, thisOffHeap));
>     largest.cfs.switchMemtableIfCurrent(largest);
> }
> {code}
> Should:
> {code}
> float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
> {code}
> Be:
> {{offHeap().ownershipRatio();}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567560#comment-15567560
 ] 

Kurt Greaves commented on CASSANDRA-12296:
--

Yep, I didn't expect it to be correct. Just needed a starting point to get 
someone talking about it.

Also:
{quote}If you're running NTS with only one replica, the patch will advise you 
to consider NetworkTopologyStrategy{quote}
is not true, at least from my testing with rebuilds. This is because the 
replica will always be in the specified DC, whereas with SimpleStrategy the 
replica could potentially reside either in the current DC or another DC. When 
else would this error message be triggered? I couldn't force this error message 
to occur with repair, but maybe I'm missing something.

I can't see how SimpleStrategy would work if you take into account multiple 
datacentres. If a user has 3 datacentres then an RF of 2 wouldn't be adequate. 
It seems like there would be too many cases to cover to make a concise 
recommendation to people.


> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2016-10-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12700:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed in {{ff5c497d1fc553f3dcc57a5b0f1329d66082c1d3}}

Thanks [~rajesh_con] and [~beobal]


> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 2.2.x, 3.0.x, 3.x, 4.x
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. When trying to insert 2 million rows or more into the 
> database, we sometimes get a "NullPointerException". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it is Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.

[04/10] cassandra git commit: CASSANDRA-12700: Better handle invalid system roles table

2016-10-11 Thread jjirsa
CASSANDRA-12700: Better handle invalid system roles table

Patch by Jeff Jirsa; Reviewed by Sam Tunnicliffe for CASSANDRA-12700


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff5c497d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff5c497d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff5c497d

Branch: refs/heads/trunk
Commit: ff5c497d1fc553f3dcc57a5b0f1329d66082c1d3
Parents: 73b888d
Author: Jeff Jirsa 
Authored: Thu Sep 29 22:29:22 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:23:05 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6ee2ddc..ae9ef7a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
+ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
 2.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java 
b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
index 3a59581..dbae1ba 100644
--- a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
+++ b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
@@ -81,11 +81,23 @@ public class CassandraRoleManager implements IRoleManager
 {
 public Role apply(UntypedResultSet.Row row)
 {
-return new Role(row.getString("role"),
-row.getBoolean("is_superuser"),
-row.getBoolean("can_login"),
-row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
- : 
Collections.emptySet());
+try
+{
+return new Role(row.getString("role"),
+ row.getBoolean("is_superuser"),
+ row.getBoolean("can_login"),
+ row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
+  : 
Collections.emptySet());
+}
+// Failing to deserialize a boolean in is_superuser or can_login 
will throw an NPE
+catch (NullPointerException e)
+{
+logger.warn("An invalid value has been detected in the {} 
table for role {}. If you are " +
+"unable to login, you may need to disable 
authentication and confirm " +
+"that values in that table are accurate", 
AuthKeyspace.ROLES, row.getString("role"));
+throw new RuntimeException(String.format("Invalid metadata has 
been detected for role %s", row.getString("role")), e);
+}
+
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java 
b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
index dffecd6..0d6580e 100644
--- a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
@@ -30,7 +30,7 @@ public class BooleanSerializer implements 
TypeSerializer
 
 public Boolean deserialize(ByteBuffer bytes)
 {
-if (bytes.remaining() == 0)
+if (bytes == null || bytes.remaining() == 0)
 return null;
 
 byte value = bytes.get(bytes.position());



[10/10] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-11 Thread jjirsa
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e9c3db5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e9c3db5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e9c3db5

Branch: refs/heads/trunk
Commit: 6e9c3db56a6039f08f6748ea21b9691ac8752a4d
Parents: 8e6a58c b25d903
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:27:20 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:29:14 2016 -0700

--
 CHANGES.txt |  2 ++
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 20 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e9c3db5/CHANGES.txt
--
diff --cc CHANGES.txt
index 57ff13c,c59459c..ee109c7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,3 +1,12 @@@
 +4.0
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
++Merged from 2.2:
++ * Better handle invalid system roles table (CASSANDRA-12700)
 +
 +
  3.10
   * Check for hash conflicts in prepared statements (CASSANDRA-12733)
   * Exit query parsing upon first error (CASSANDRA-12598)



[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-11 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b25d9030
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b25d9030
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b25d9030

Branch: refs/heads/trunk
Commit: b25d9030a898bd867d55d8d7f564cdf4c332d08f
Parents: f9ae1b7 703b151
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:26:00 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:27:09 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25d9030/CHANGES.txt
--
diff --cc CHANGES.txt
index 2e9186f,81fb544..c59459c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -106,60 -42,12 +106,61 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
++ * Better handle invalid system roles table (CASSANDRA-12700)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
 + * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 
(CASSANDRA-11546)
 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578)
 + * Faster streaming (CASSANDRA-9766)
 + * Add prepared query parameter to trace for "Execute CQL3 prepared query" 
session (CASSANDRA-11425)
 + * Add repaired percentage metric (CASSANDRA-11503)
 + * Add Change-Data-Capture (CASSANDRA-8844)
 +Merged from 3.0:
 + * Fix paging for 2.x t

[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-11 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b25d9030
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b25d9030
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b25d9030

Branch: refs/heads/cassandra-3.X
Commit: b25d9030a898bd867d55d8d7f564cdf4c332d08f
Parents: f9ae1b7 703b151
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:26:00 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:27:09 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25d9030/CHANGES.txt
--
diff --cc CHANGES.txt
index 2e9186f,81fb544..c59459c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -106,60 -42,12 +106,61 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
++ * Better handle invalid system roles table (CASSANDRA-12700)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
 + * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 
(CASSANDRA-11546)
 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578)
 + * Faster streaming (CASSANDRA-9766)
 + * Add prepared query parameter to trace for "Execute CQL3 prepared query" 
session (CASSANDRA-11425)
 + * Add repaired percentage metric (CASSANDRA-11503)
 + * Add Change-Data-Capture (CASSANDRA-8844)
 +Merged from 3.0:
 + * Fix paging f

[03/10] cassandra git commit: CASSANDRA-12700: Better handle invalid system roles table

2016-10-11 Thread jjirsa
CASSANDRA-12700: Better handle invalid system roles table

Patch by Jeff Jirsa; Reviewed by Sam Tunnicliffe for CASSANDRA-12700


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff5c497d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff5c497d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff5c497d

Branch: refs/heads/cassandra-3.X
Commit: ff5c497d1fc553f3dcc57a5b0f1329d66082c1d3
Parents: 73b888d
Author: Jeff Jirsa 
Authored: Thu Sep 29 22:29:22 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:23:05 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6ee2ddc..ae9ef7a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
+ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
 2.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java 
b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
index 3a59581..dbae1ba 100644
--- a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
+++ b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
@@ -81,11 +81,23 @@ public class CassandraRoleManager implements IRoleManager
 {
 public Role apply(UntypedResultSet.Row row)
 {
-return new Role(row.getString("role"),
-row.getBoolean("is_superuser"),
-row.getBoolean("can_login"),
-row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
- : 
Collections.emptySet());
+try
+{
+return new Role(row.getString("role"),
+ row.getBoolean("is_superuser"),
+ row.getBoolean("can_login"),
+ row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
+  : 
Collections.emptySet());
+}
+// Failing to deserialize a boolean in is_superuser or can_login 
will throw an NPE
+catch (NullPointerException e)
+{
+logger.warn("An invalid value has been detected in the {} 
table for role {}. If you are " +
+"unable to login, you may need to disable 
authentication and confirm " +
+"that values in that table are accurate", 
AuthKeyspace.ROLES, row.getString("role"));
+throw new RuntimeException(String.format("Invalid metadata has 
been detected for role %s", row.getString("role")), e);
+}
+
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java 
b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
index dffecd6..0d6580e 100644
--- a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
@@ -30,7 +30,7 @@ public class BooleanSerializer implements 
TypeSerializer
 
 public Boolean deserialize(ByteBuffer bytes)
 {
-if (bytes.remaining() == 0)
+if (bytes == null || bytes.remaining() == 0)
 return null;
 
 byte value = bytes.get(bytes.position());



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread jjirsa
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/703b151b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/703b151b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/703b151b

Branch: refs/heads/cassandra-3.0
Commit: 703b151b151b7abcfe5b5aa26fe6349506c33da8
Parents: 74f1e0a ff5c497
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:24:07 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:25:33 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/CHANGES.txt
--
diff --cc CHANGES.txt
index 186a8d3,ae9ef7a..81fb544
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,6 -2,10 +17,7 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
+  * Better handle invalid system roles table (CASSANDRA-12700)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--



[02/10] cassandra git commit: CASSANDRA-12700: Better handle invalid system roles table

2016-10-11 Thread jjirsa
CASSANDRA-12700: Better handle invalid system roles table

Patch by Jeff Jirsa; Reviewed by Sam Tunnicliffe for CASSANDRA-12700


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff5c497d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff5c497d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff5c497d

Branch: refs/heads/cassandra-3.0
Commit: ff5c497d1fc553f3dcc57a5b0f1329d66082c1d3
Parents: 73b888d
Author: Jeff Jirsa 
Authored: Thu Sep 29 22:29:22 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:23:05 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6ee2ddc..ae9ef7a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
+ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
 2.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java 
b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
index 3a59581..dbae1ba 100644
--- a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
+++ b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
@@ -81,11 +81,23 @@ public class CassandraRoleManager implements IRoleManager
 {
 public Role apply(UntypedResultSet.Row row)
 {
-return new Role(row.getString("role"),
-row.getBoolean("is_superuser"),
-row.getBoolean("can_login"),
-row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
- : 
Collections.emptySet());
+try
+{
+return new Role(row.getString("role"),
+ row.getBoolean("is_superuser"),
+ row.getBoolean("can_login"),
+ row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
+  : 
Collections.emptySet());
+}
+// Failing to deserialize a boolean in is_superuser or can_login 
will throw an NPE
+catch (NullPointerException e)
+{
+logger.warn("An invalid value has been detected in the {} 
table for role {}. If you are " +
+"unable to login, you may need to disable 
authentication and confirm " +
+"that values in that table are accurate", 
AuthKeyspace.ROLES, row.getString("role"));
+throw new RuntimeException(String.format("Invalid metadata has 
been detected for role %s", row.getString("role")), e);
+}
+
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java 
b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
index dffecd6..0d6580e 100644
--- a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
@@ -30,7 +30,7 @@ public class BooleanSerializer implements 
TypeSerializer
 
 public Boolean deserialize(ByteBuffer bytes)
 {
-if (bytes.remaining() == 0)
+if (bytes == null || bytes.remaining() == 0)
 return null;
 
 byte value = bytes.get(bytes.position());



[01/10] cassandra git commit: CASSANDRA-12700: Better handle invalid system roles table

2016-10-11 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 73b888db1 -> ff5c497d1
  refs/heads/cassandra-3.0 74f1e0aaa -> 703b151b1
  refs/heads/cassandra-3.X f9ae1b7c6 -> b25d9030a
  refs/heads/trunk 8e6a58ccd -> 6e9c3db56


CASSANDRA-12700: Better handle invalid system roles table

Patch by Jeff Jirsa; Reviewed by Sam Tunnicliffe for CASSANDRA-12700


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff5c497d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff5c497d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff5c497d

Branch: refs/heads/cassandra-2.2
Commit: ff5c497d1fc553f3dcc57a5b0f1329d66082c1d3
Parents: 73b888d
Author: Jeff Jirsa 
Authored: Thu Sep 29 22:29:22 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:23:05 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6ee2ddc..ae9ef7a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,6 +2,7 @@
  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
+ * Better handle invalid system roles table (CASSANDRA-12700)
  
  
 2.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java 
b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
index 3a59581..dbae1ba 100644
--- a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
+++ b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
@@ -81,11 +81,23 @@ public class CassandraRoleManager implements IRoleManager
 {
 public Role apply(UntypedResultSet.Row row)
 {
-return new Role(row.getString("role"),
-row.getBoolean("is_superuser"),
-row.getBoolean("can_login"),
-row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
- : 
Collections.emptySet());
+try
+{
+return new Role(row.getString("role"),
+ row.getBoolean("is_superuser"),
+ row.getBoolean("can_login"),
+ row.has("member_of") ? row.getSet("member_of", 
UTF8Type.instance)
+  : 
Collections.emptySet());
+}
+// Failing to deserialize a boolean in is_superuser or can_login 
will throw an NPE
+catch (NullPointerException e)
+{
+logger.warn("An invalid value has been detected in the {} 
table for role {}. If you are " +
+"unable to login, you may need to disable 
authentication and confirm " +
+"that values in that table are accurate", 
AuthKeyspace.ROLES, row.getString("role"));
+throw new RuntimeException(String.format("Invalid metadata has 
been detected for role %s", row.getString("role")), e);
+}
+
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff5c497d/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java 
b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
index dffecd6..0d6580e 100644
--- a/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/BooleanSerializer.java
@@ -30,7 +30,7 @@ public class BooleanSerializer implements 
TypeSerializer
 
 public Boolean deserialize(ByteBuffer bytes)
 {
-if (bytes.remaining() == 0)
+if (bytes == null || bytes.remaining() == 0)
 return null;
 
 byte value = bytes.get(bytes.position());



[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread jjirsa
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/703b151b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/703b151b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/703b151b

Branch: refs/heads/cassandra-3.X
Commit: 703b151b151b7abcfe5b5aa26fe6349506c33da8
Parents: 74f1e0a ff5c497
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:24:07 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:25:33 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/CHANGES.txt
--
diff --cc CHANGES.txt
index 186a8d3,ae9ef7a..81fb544
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,6 -2,10 +17,7 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
+  * Better handle invalid system roles table (CASSANDRA-12700)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread jjirsa
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/703b151b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/703b151b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/703b151b

Branch: refs/heads/trunk
Commit: 703b151b151b7abcfe5b5aa26fe6349506c33da8
Parents: 74f1e0a ff5c497
Author: Jeff Jirsa 
Authored: Tue Oct 11 21:24:07 2016 -0700
Committer: Jeff Jirsa 
Committed: Tue Oct 11 21:25:33 2016 -0700

--
 CHANGES.txt |  1 +
 .../cassandra/auth/CassandraRoleManager.java| 22 +++-
 .../serializers/BooleanSerializer.java  |  2 +-
 3 files changed, 19 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/CHANGES.txt
--
diff --cc CHANGES.txt
index 186a8d3,ae9ef7a..81fb544
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,6 -2,10 +17,7 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
+  * Better handle invalid system roles table (CASSANDRA-12700)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/703b151b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
--



[jira] [Commented] (CASSANDRA-5988) Make hint TTL customizable

2016-10-11 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567501#comment-15567501
 ] 

sankalp kohli commented on CASSANDRA-5988:
--

[~iamaleksey] I could not find "cassandra.maxHintTTL" in 3.0.9. With the new 
hints implementation in 3.0, how can we change this?

> Make hint TTL customizable
> --
>
> Key: CASSANDRA-5988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5988
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Oleg Kibirev
>Assignee: Vishy Kasar
>  Labels: patch
> Fix For: 1.2.12, 2.0.3
>
> Attachments: 5988.txt
>
>
> Currently time to live for stored hints is hardcoded to be gc_grace_seconds. 
> This causes problems for applications using backdated deletes as a form of 
> optimistic locking. Hints for updates made to the same data on which delete 
> was attempted can persist for days, making it impossible to determine if 
> delete succeeded by doing read(ALL) after a reasonable delay. We need a way 
> to explicitly configure hint TTL, either through schema parameter or through 
> a yaml file.
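
For context, a minimal sketch of the capped-TTL behaviour discussed here: the property name cassandra.maxHintTTL comes from this ticket's fix, the cap-by-minimum logic is an assumption for illustration, and whether any of it still applies to the rewritten hints in 3.0 is exactly the open question above.

{code:java}
public class HintTtlSketch
{
    // hint TTL as described in this ticket: the table's gc_grace_seconds, optionally
    // capped by a system property (illustrative only, not the 3.0 hints code path)
    static int hintTtlSeconds(int gcGraceSeconds)
    {
        int maxHintTTL = Integer.getInteger("cassandra.maxHintTTL", Integer.MAX_VALUE);
        return Math.min(gcGraceSeconds, maxHintTTL);
    }

    public static void main(String[] args)
    {
        System.setProperty("cassandra.maxHintTTL", "3600");   // cap hints at one hour
        System.out.println(hintTtlSeconds(864000));           // 3600 rather than ten days
    }
}
{code}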



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12776) when memtable flush Statistics thisOffHeap error

2016-10-11 Thread JIRA
翟玉勇 created CASSANDRA-12776:
---

 Summary: when memtable flush Statistics thisOffHeap error
 Key: CASSANDRA-12776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12776
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: 翟玉勇
Priority: Trivial


{code:java}
if (largest != null)
{
    float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
    float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
    float flushingOnHeap = Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
    float flushingOffHeap = Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
    float thisOnHeap = largest.getAllocator().onHeap().ownershipRatio();
    float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();
    logger.info("Flushing largest {} to free up room. Used total: {}, live: {}, flushing: {}, this: {}",
                largest.cfs, ratio(usedOnHeap, usedOffHeap), ratio(liveOnHeap, liveOffHeap),
                ratio(flushingOnHeap, flushingOffHeap), ratio(thisOnHeap, thisOffHeap));
    largest.cfs.switchMemtableIfCurrent(largest);
}
{code}

The line {{float thisOffHeap = largest.getAllocator().onHeap().ownershipRatio();}} should call {{offHeap().ownershipRatio()}} instead.
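That is, the corrected assignment would presumably read:

{code:java}
// corrected: measure the memtable's off-heap ownership instead of reading on-heap twice
float thisOffHeap = largest.getAllocator().offHeap().ownershipRatio();
{code}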



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567449#comment-15567449
 ] 

Brandon Williams commented on CASSANDRA-12296:
--

Let's decide what the right fix is, then, since we tagged this lhf.

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567411#comment-15567411
 ] 

Jeff Jirsa commented on CASSANDRA-12296:


[~KurtG] - If you're running NTS with only one replica, the patch will advise 
you to consider {{NetworkTopologyStrategy}}. Further, {{SimpleStrategy}} with a 
higher RF would also be adequate to find a second replica for streaming - the 
datastax docs recommending {{NetworkTopologyStrategy}} before {{nodetool 
rebuild}} are specifically referencing adding a new DC. The error message above 
is not limited to ONLY adding a new DC, and could be hit in a number of other 
ways, where the recommendation to switch to NTS isn't necessary.

I agree the message is difficult to understand now, but "please switch to NTS" 
isn't the right fix here, either.




> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-12296:
-
Status: Patch Available  (was: Open)

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12754) Change cassandra.wait_for_tracing_events_timeout_secs default to -1 so C* doesn't wait on trace events to be written before responding to request by default

2016-10-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15567327#comment-15567327
 ] 

Stefania commented on CASSANDRA-12754:
--

The CI results are OK except for two failures in 3.X utests, but they don't 
seem related and they pass locally. I've relaunched one more run of utests for 
3.X.

> Change cassandra.wait_for_tracing_events_timeout_secs default to -1 so C* 
> doesn't wait on trace events to be written before responding to request by 
> default
> 
>
> Key: CASSANDRA-12754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12754
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Stefania
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> [CASSANDRA-11465] introduces a new system property 
> {{cassandra.wait_for_tracing_events_timeout_secs}} that controls whether or 
> not C* waits for events to be written before responding to client.   The 
> current default behavior is to wait up to 1 second and then respond and 
> timeout.  
> If using probabilistic tracing this can cause queries to be randomly delayed 
> up to 1 second.
> Changing the default to -1 (disabled) and enabling it explicitly in 
> {{cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test}}.
> Ideally it would be nice to be able to control this behavior on a per request 
> basis (which would require native protocol changes).
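
As a rough illustration only (not the actual patch), reading such a property so that -1 disables the wait could look like the following; the guard logic is an assumption:

{code:java}
public class TracingWaitTimeoutSketch
{
    // property name from this ticket; -1 (or any negative value in this sketch) means "do not wait"
    private static final long WAIT_SECS =
        Long.getLong("cassandra.wait_for_tracing_events_timeout_secs", -1L);

    public static void main(String[] args)
    {
        if (WAIT_SECS < 0)
            System.out.println("Responding immediately; not waiting for trace events to be written.");
        else
            System.out.println("Waiting up to " + WAIT_SECS + "s for trace events before responding.");
    }
}
{code}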



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12754) Change cassandra.wait_for_tracing_events_timeout_secs default to -1 so C* doesn't wait on trace events to be written before responding to request by default

2016-10-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12754:
-
Reproduced In: 3.0.9, 2.2.8  (was: 2.2.8, 3.0.9)
   Status: Patch Available  (was: Open)

> Change cassandra.wait_for_tracing_events_timeout_secs default to -1 so C* 
> doesn't wait on trace events to be written before responding to request by 
> default
> 
>
> Key: CASSANDRA-12754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12754
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Stefania
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> [CASSANDRA-11465] introduces a new system property 
> {{cassandra.wait_for_tracing_events_timeout_secs}} that controls whether or 
> not C* waits for events to be written before responding to client.   The 
> current default behavior is to wait up to 1 second and then respond and 
> timeout.  
> If using probabilistic tracing this can cause queries to be randomly delayed 
> up to 1 second.
> Changing the default to -1 (disabled) and enabling it explicitly in 
> {{cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test}}.
> Ideally it would be nice to be able to control this behavior on a per request 
> basis (which would require native protocol changes).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12296) system_auth can't be rebuilt by default

2016-10-11 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12296:
-
Attachment: 12296.patch

Attached a patch that just changes the message, removing the suggestion to use 
consistent.rangemovement=false and instead suggesting the use of 
NetworkTopologyStrategy.

The patch is against 3.0, as this isn't a critical bugfix; however, it looks 
like it would be very similar for 2.2/2.1.
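
For reference, a hedged sketch (via the Java driver) of the keyspace change such a message would point users at; the contact point, DC names and replication factors below are placeholders, not recommendations from this ticket:

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AlterSystemAuthReplication
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // switch system_auth from SimpleStrategy to NetworkTopologyStrategy across two DCs
            session.execute("ALTER KEYSPACE system_auth WITH replication = " +
                            "{'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3}");
            // a repair of system_auth is typically needed afterwards so the new replicas receive the data
        }
    }
}
{code}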

> system_auth can't be rebuilt by default
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Priority: Minor
>  Labels: lhf
> Attachments: 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[03/10] cassandra git commit: ninja: fixed typo in CHANGES.txt for CASSANDRA-12642

2016-10-11 Thread stefania
ninja: fixed typo in CHANGES.txt for CASSANDRA-12642


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73b888db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73b888db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73b888db

Branch: refs/heads/cassandra-3.X
Commit: 73b888db1a504b17fcd8250073f987cd6973f49c
Parents: c5fdb32
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:48:36 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:48:36 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73b888db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54425fa..6ee2ddc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,7 +6,7 @@
  
 2.2.8
  * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
- * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
  * Decrement pending range calculator jobs counter in finally block
   (CASSANDRA-12554)
  * Add local address entry in PropertyFileSnitch (CASSANDRA-11332)



[02/10] cassandra git commit: ninja: fixed typo in CHANGES.txt for CASSANDRA-12642

2016-10-11 Thread stefania
ninja: fixed typo in CHANGES.txt for CASSANDRA-12642


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73b888db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73b888db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73b888db

Branch: refs/heads/cassandra-3.0
Commit: 73b888db1a504b17fcd8250073f987cd6973f49c
Parents: c5fdb32
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:48:36 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:48:36 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73b888db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54425fa..6ee2ddc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,7 +6,7 @@
  
 2.2.8
  * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
- * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
  * Decrement pending range calculator jobs counter in finally block
   (CASSANDRA-12554)
  * Add local address entry in PropertyFileSnitch (CASSANDRA-11332)



[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-11 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f9ae1b7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f9ae1b7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f9ae1b7c

Branch: refs/heads/trunk
Commit: f9ae1b7c61454f53ee34a78d2388084c21479458
Parents: c7fb95c 74f1e0a
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:50:46 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:50:46 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f9ae1b7c/CHANGES.txt
--
diff --cc CHANGES.txt
index 5907c74,186a8d3..2e9186f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -106,60 -41,12 +106,60 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
-  * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
++ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 
(CASSANDRA-11546)
 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578)
 + * Faster streaming (CASSANDRA-9766)
 + * Add prepared query parameter to trace for "Execute CQL3 prepared query" 
session (CASSANDRA-11425)
 + * Add repaired percentage metric (CASSANDRA-11503)
 + * Add Change-Data-Capture (CASSANDRA-8844)
 +Merged from 3.0:
 + * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
   * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-1

[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/74f1e0aa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/74f1e0aa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/74f1e0aa

Branch: refs/heads/trunk
Commit: 74f1e0aaa84d20248c26e3b8850969934f54452a
Parents: 72c9eb2 73b888d
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:49:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:49:11 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/74f1e0aa/CHANGES.txt
--
diff --cc CHANGES.txt
index 9f7fff8,6ee2ddc..186a8d3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,68 -2,13 +17,68 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
-  * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+  * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block
(CASSANDRA-12554)
 +Merged from 2.1:
 + * Add system property to set the max number of native transport requests in 
queue (CASSANDRA-11363)
 +
 +
 +3.0.9
 + * Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound (CASSANDRA-12423)
 + * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
 + * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
 + * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)
 + * Fix legacy regex for temporary files from 2.2 (CASSANDRA-12565)
 + * Add option to state current gc_grace_seconds to tools/bin/sstablemetadata 
(CASSANDRA-12208)
 + * Fix file system race condition that may cause LogAwareFileLister to fail 
to classify files (CASSANDRA-11889)
 + * Fix file handle leaks due to simultaneous compaction/repair and
 +   listing snapshots, calculating snapshot sizes, or making schema
 +   changes (CASSANDRA-11594)
 + * Fix nodetool repair exits with 0 for some errors (CASSANDRA-12508)
 + * Do not shut down BatchlogManager twice during drain (CASSANDRA-12504)
 + * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
 + * Calculate last compacted key on startup (CASSANDRA-6216)
 + * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-12436)
 + * Fix potential resource leak in RMIServerSocketFactoryImpl (CASSANDRA-12331)
 + * Backport CASSANDRA-12002 (CASSANDRA-12177)
 + * Make sure compaction stats are updated when compaction is interrupted 
(CASSANDRA-12100)
 + * Fix potential bad messaging service message for paged range reads
 +   within mixed-version 3.x clusters (CASSANDRA-12249)
 + * Change commitlog and sstables to track dirty and clean intervals 
(CASSANDRA-11828)
 + * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
 + * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
 + * Fix upgrade of super columns on thrift (CASSANDRA-12335)
 + * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)
 + * Rerun ReplicationAwareTokenAllocatorTest on failure to avoid flakiness 
(CASSANDRA-12277)
 + * Exception when computing read-repair for range tombstones (CASSANDRA-12263)
 + * Lost counter writes in compact table and static columns (CASSANDRA-12219)
 + * AssertionError with MVs on updating a row that isn't indexed due to a null 
value (CASSANDRA-12247)
 + * Disable RR and speculative retry with EACH_QUORUM reads (CASSANDRA-11980)
 + * Add option to override compaction space check (CASSANDRA-12180)
 + * Faster startup by only scanning each directory for temporary files once 
(CASSANDRA-12114)
 + * Respond with v1/v2 protocol header when responding to driver that attempts
 +   to connect with too low of a protocol version (CASSANDRA-11464)
 + * NullPointerExpception when reading/compacting table (CASSANDRA-11988)
 + * Fix problem with undeleteable rows on upgrade to new sstable format 
(CASSANDRA-12144)
 + * Fix paging logic for deleted partitions with static columns 
(CASSANDRA-12107)
 + * Wait until the message is being send to decide which serializer must be 

[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/74f1e0aa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/74f1e0aa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/74f1e0aa

Branch: refs/heads/cassandra-3.X
Commit: 74f1e0aaa84d20248c26e3b8850969934f54452a
Parents: 72c9eb2 73b888d
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:49:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:49:11 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/74f1e0aa/CHANGES.txt
--
diff --cc CHANGES.txt
index 9f7fff8,6ee2ddc..186a8d3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,68 -2,13 +17,68 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
-  * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+  * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block
(CASSANDRA-12554)
 +Merged from 2.1:
 + * Add system property to set the max number of native transport requests in 
queue (CASSANDRA-11363)
 +
 +
 +3.0.9
 + * Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound (CASSANDRA-12423)
 + * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
 + * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
 + * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)
 + * Fix legacy regex for temporary files from 2.2 (CASSANDRA-12565)
 + * Add option to state current gc_grace_seconds to tools/bin/sstablemetadata 
(CASSANDRA-12208)
 + * Fix file system race condition that may cause LogAwareFileLister to fail 
to classify files (CASSANDRA-11889)
 + * Fix file handle leaks due to simultaneous compaction/repair and
 +   listing snapshots, calculating snapshot sizes, or making schema
 +   changes (CASSANDRA-11594)
 + * Fix nodetool repair exits with 0 for some errors (CASSANDRA-12508)
 + * Do not shut down BatchlogManager twice during drain (CASSANDRA-12504)
 + * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
 + * Calculate last compacted key on startup (CASSANDRA-6216)
 + * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-12436)
 + * Fix potential resource leak in RMIServerSocketFactoryImpl (CASSANDRA-12331)
 + * Backport CASSANDRA-12002 (CASSANDRA-12177)
 + * Make sure compaction stats are updated when compaction is interrupted 
(CASSANDRA-12100)
 + * Fix potential bad messaging service message for paged range reads
 +   within mixed-version 3.x clusters (CASSANDRA-12249)
 + * Change commitlog and sstables to track dirty and clean intervals 
(CASSANDRA-11828)
 + * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
 + * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
 + * Fix upgrade of super columns on thrift (CASSANDRA-12335)
 + * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)
 + * Rerun ReplicationAwareTokenAllocatorTest on failure to avoid flakiness 
(CASSANDRA-12277)
 + * Exception when computing read-repair for range tombstones (CASSANDRA-12263)
 + * Lost counter writes in compact table and static columns (CASSANDRA-12219)
 + * AssertionError with MVs on updating a row that isn't indexed due to a null 
value (CASSANDRA-12247)
 + * Disable RR and speculative retry with EACH_QUORUM reads (CASSANDRA-11980)
 + * Add option to override compaction space check (CASSANDRA-12180)
 + * Faster startup by only scanning each directory for temporary files once 
(CASSANDRA-12114)
 + * Respond with v1/v2 protocol header when responding to driver that attempts
 +   to connect with too low of a protocol version (CASSANDRA-11464)
 + * NullPointerExpception when reading/compacting table (CASSANDRA-11988)
 + * Fix problem with undeleteable rows on upgrade to new sstable format 
(CASSANDRA-12144)
 + * Fix paging logic for deleted partitions with static columns 
(CASSANDRA-12107)
 + * Wait until the message is being send to decide which serializer m

[10/10] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-11 Thread stefania
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e6a58cc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e6a58cc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e6a58cc

Branch: refs/heads/trunk
Commit: 8e6a58ccdae3e3b724044babe9cb132dc009c7dc
Parents: 338b3e6 f9ae1b7
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:51:08 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:51:08 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e6a58cc/CHANGES.txt
--



[01/10] cassandra git commit: ninja: fixed typo in CHANGES.txt for CASSANDRA-12642

2016-10-11 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 c5fdb32ce -> 73b888db1
  refs/heads/cassandra-3.0 72c9eb2dc -> 74f1e0aaa
  refs/heads/cassandra-3.X c7fb95c98 -> f9ae1b7c6
  refs/heads/trunk 338b3e643 -> 8e6a58ccd


ninja: fixed typo in CHANGES.txt for CASSANDRA-12642


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73b888db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73b888db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73b888db

Branch: refs/heads/cassandra-2.2
Commit: 73b888db1a504b17fcd8250073f987cd6973f49c
Parents: c5fdb32
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:48:36 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:48:36 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73b888db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54425fa..6ee2ddc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,7 +6,7 @@
  
 2.2.8
  * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
- * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
  * Decrement pending range calculator jobs counter in finally block
   (CASSANDRA-12554)
  * Add local address entry in PropertyFileSnitch (CASSANDRA-11332)



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-11 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f9ae1b7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f9ae1b7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f9ae1b7c

Branch: refs/heads/cassandra-3.X
Commit: f9ae1b7c61454f53ee34a78d2388084c21479458
Parents: c7fb95c 74f1e0a
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:50:46 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:50:46 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f9ae1b7c/CHANGES.txt
--
diff --cc CHANGES.txt
index 5907c74,186a8d3..2e9186f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -106,60 -41,12 +106,60 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
-  * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
++ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 
(CASSANDRA-11546)
 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578)
 + * Faster streaming (CASSANDRA-9766)
 + * Add prepared query parameter to trace for "Execute CQL3 prepared query" 
session (CASSANDRA-11425)
 + * Add repaired percentage metric (CASSANDRA-11503)
 + * Add Change-Data-Capture (CASSANDRA-8844)
 +Merged from 3.0:
 + * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
   * Fix clean interval not sent to commit log for empty memtable flush 
(CAS

[04/10] cassandra git commit: ninja: fixed typo in CHANGES.txt for CASSANDRA-12642

2016-10-11 Thread stefania
ninja: fixed typo in CHANGES.txt for CASSANDRA-12642


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73b888db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73b888db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73b888db

Branch: refs/heads/trunk
Commit: 73b888db1a504b17fcd8250073f987cd6973f49c
Parents: c5fdb32
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:48:36 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:48:36 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73b888db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54425fa..6ee2ddc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,7 +6,7 @@
  
 2.2.8
  * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
- * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+ * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
  * Decrement pending range calculator jobs counter in finally block
   (CASSANDRA-12554)
  * Add local address entry in PropertyFileSnitch (CASSANDRA-11332)



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-11 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/74f1e0aa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/74f1e0aa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/74f1e0aa

Branch: refs/heads/cassandra-3.0
Commit: 74f1e0aaa84d20248c26e3b8850969934f54452a
Parents: 72c9eb2 73b888d
Author: Stefania Alborghetti 
Authored: Wed Oct 12 09:49:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:49:11 2016 +0800

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/74f1e0aa/CHANGES.txt
--
diff --cc CHANGES.txt
index 9f7fff8,6ee2ddc..186a8d3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,68 -2,13 +17,68 @@@ Merged from 2.2
   * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
-  * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
+  * Fix authentication problem when invoking cqlsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block
(CASSANDRA-12554)
 +Merged from 2.1:
 + * Add system property to set the max number of native transport requests in 
queue (CASSANDRA-11363)
 +
 +
 +3.0.9
 + * Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound (CASSANDRA-12423)
 + * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
 + * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
 + * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)
 + * Fix legacy regex for temporary files from 2.2 (CASSANDRA-12565)
 + * Add option to state current gc_grace_seconds to tools/bin/sstablemetadata 
(CASSANDRA-12208)
 + * Fix file system race condition that may cause LogAwareFileLister to fail 
to classify files (CASSANDRA-11889)
 + * Fix file handle leaks due to simultaneous compaction/repair and
 +   listing snapshots, calculating snapshot sizes, or making schema
 +   changes (CASSANDRA-11594)
 + * Fix nodetool repair exits with 0 for some errors (CASSANDRA-12508)
 + * Do not shut down BatchlogManager twice during drain (CASSANDRA-12504)
 + * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
 + * Calculate last compacted key on startup (CASSANDRA-6216)
 + * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-12436)
 + * Fix potential resource leak in RMIServerSocketFactoryImpl (CASSANDRA-12331)
 + * Backport CASSANDRA-12002 (CASSANDRA-12177)
 + * Make sure compaction stats are updated when compaction is interrupted 
(CASSANDRA-12100)
 + * Fix potential bad messaging service message for paged range reads
 +   within mixed-version 3.x clusters (CASSANDRA-12249)
 + * Change commitlog and sstables to track dirty and clean intervals 
(CASSANDRA-11828)
 + * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
 + * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
 + * Fix upgrade of super columns on thrift (CASSANDRA-12335)
 + * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)
 + * Rerun ReplicationAwareTokenAllocatorTest on failure to avoid flakiness 
(CASSANDRA-12277)
 + * Exception when computing read-repair for range tombstones (CASSANDRA-12263)
 + * Lost counter writes in compact table and static columns (CASSANDRA-12219)
 + * AssertionError with MVs on updating a row that isn't indexed due to a null 
value (CASSANDRA-12247)
 + * Disable RR and speculative retry with EACH_QUORUM reads (CASSANDRA-11980)
 + * Add option to override compaction space check (CASSANDRA-12180)
 + * Faster startup by only scanning each directory for temporary files once 
(CASSANDRA-12114)
 + * Respond with v1/v2 protocol header when responding to driver that attempts
 +   to connect with too low of a protocol version (CASSANDRA-11464)
 + * NullPointerExpception when reading/compacting table (CASSANDRA-11988)
 + * Fix problem with undeleteable rows on upgrade to new sstable format 
(CASSANDRA-12144)
 + * Fix paging logic for deleted partitions with static columns 
(CASSANDRA-12107)
 + * Wait until the message is being send to decide which serializer m

[jira] [Updated] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-10-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12705:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

dtest results are clean, utests only have timeout failures, committed to trunk 
as 338b3e6438de321275122e09670c7567ea0c9820.

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is currently not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add column definition kind to dropped columns in schema

2016-10-11 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/trunk 231a93706 -> 338b3e643


Add column definition kind to dropped columns in schema

Patch by Stefania Alborghetti; reviewed by Aleksey Yeschenko for CASSANDRA-12705


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/338b3e64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/338b3e64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/338b3e64

Branch: refs/heads/trunk
Commit: 338b3e6438de321275122e09670c7567ea0c9820
Parents: 231a937
Author: Stefania Alborghetti 
Authored: Fri Sep 23 17:25:13 2016 +0800
Committer: Stefania Alborghetti 
Committed: Wed Oct 12 09:28:46 2016 +0800

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/config/CFMetaData.java  | 14 +++---
 .../org/apache/cassandra/config/ColumnDefinition.java |  5 -
 .../apache/cassandra/schema/LegacySchemaMigrator.java |  2 +-
 .../org/apache/cassandra/schema/SchemaKeyspace.java   | 11 +--
 5 files changed, 26 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/338b3e64/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7f5b7cb..69f7e42 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
  * Add (automate) Nodetool Documentation (CASSANDRA-12672)
  * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
  * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/338b3e64/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 76a3ead..a60700c 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -712,7 +712,7 @@ public final class CFMetaData
 // it means that it's a dropped column from before 3.0, and in that 
case using
 // BytesType is fine for what we'll be using it for, even if that's a 
hack.
AbstractType<?> type = dropped.type == null ? BytesType.instance : 
dropped.type;
-return isStatic
+return isStatic || dropped.kind == ColumnDefinition.Kind.STATIC
? ColumnDefinition.staticDef(this, name, type)
: ColumnDefinition.regularDef(this, name, type);
 }
@@ -994,7 +994,7 @@ public final class CFMetaData
  */
 public void recordColumnDrop(ColumnDefinition def, long timeMicros)
 {
-droppedColumns.put(def.name.bytes, new 
DroppedColumn(def.name.toString(), def.type, timeMicros));
+droppedColumns.put(def.name.bytes, new DroppedColumn(def, timeMicros));
 }
 
 public void renameColumn(ColumnIdentifier from, ColumnIdentifier to) 
throws InvalidRequestException
@@ -1379,11 +1379,19 @@ public final class CFMetaData
 // drop timestamp, in microseconds, yet with millisecond granularity
 public final long droppedTime;
 
-public DroppedColumn(String name, AbstractType<?> type, long 
droppedTime)
+public final ColumnDefinition.Kind kind;
+
+public DroppedColumn(ColumnDefinition def, long droppedTime)
+{
+this(def.name.toString(), def.type, droppedTime, def.kind);
+}
+
+public DroppedColumn(String name, AbstractType<?> type, long 
droppedTime, ColumnDefinition.Kind kind)
 {
 this.name = name;
 this.type = type;
 this.droppedTime = droppedTime;
+this.kind = kind;
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/338b3e64/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index 6044ee9..9e6d9ec 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -48,7 +48,7 @@ public class ColumnDefinition extends ColumnSpecification 
implements Selectable,
 ASC, DESC, NONE
 }
 
-/*
+/**
  * The type of CQL3 column this definition represents.
  * There is 4 main type of CQL3 columns: those parts of the partition key,
  * those parts of the clustering columns and amongst the others, regular 
and
@@ -56,6 +56,9 @@ public class ColumnDefinition extends ColumnSpecification 
impleme

[jira] [Created] (CASSANDRA-12775) CQLSH should be able to pin requests to a server

2016-10-11 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-12775:
--

 Summary: CQLSH should be able to pin requests to a server
 Key: CASSANDRA-12775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12775
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Haddad


If CASSANDRA-7296 is added, it would be very helpful to be able to ensure 
requests are sent to a specific machine for debugging purposes when using 
cqlsh. Something as simple as PIN and UNPIN commands for the host provided when 
starting cqlsh would be enough, with PIN optionally taking a new host to pin 
requests to.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11949) GC log directory should be created in startup scripts

2016-10-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-11949.

Resolution: Not A Problem

> GC log directory should be created in startup scripts
> -
>
> Key: CASSANDRA-11949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11949
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Mahdi Mohammadi
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In [CASSANDRA-10140], we enabled GC logging by default, since the overhead 
> was low and asking people providing diagnostics to restart can often make it 
> more difficult to diagnose problems.
> The default GC log path is set to {{$CASSANDRA_HOME/logs/gc.log}} in 
> {{cassandra-env.sh}}, a directory that is not present in a fresh 
> clone/install. Even if logback creates this directory later in startup, it is 
> not present when the JVM initiates GC logging, so GC logging will silently 
> fail for this first Cassandra run.
> I haven't tested this in Windows but suspect the same problem may occur. 
> Since lots of tooling around Cassandra won't create this directory, we should 
> instead consider attempting to create it in our startup scripts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566748#comment-15566748
 ] 

Michael Kjellman edited comment on CASSANDRA-9754 at 10/11/16 10:32 PM:


Attaching an initial set of very rough graphs showing the last 12 hours of 
stress/performance testing that's been running. I apologize ahead of time for 
some of the graphs: I wanted to include the average, p99.9th, and count for 
all key metrics, and in some cases the values overlapped and my graphing 
foo wasn't good enough to improve the readability. I'll take another pass when 
I get some time with the next round of performance testing. The "large" CQL 
partitions in all 3 clusters are currently (and were throughout the 
test) between ~6GB and ~12.5GB, although I'm planning on running the 
stress/performance tests in all 3 clusters until the "large" CQL partitions 
hit ~50GB. The load was started in all 3 clusters at the same time, with all 3 
totally empty at the start, using the same stress tool code that I wrote 
specifically to test Birch realistically after repeated attempts to generate 
a good workload with cassandra-stress came up short. Some details of the stress 
tool and the load that was generated for these graphs are below.

h3. There are three read-write workloads being run to generate the load during 
these tests.

I wrote the following two methods for my "simple-cassandra-stress" tool I threw 
together to generate keys that the worker-threads operate on. I'll refer to 
them below in terms of how the stress load is currently being generated. 

{code:java}
public static List<HashCode> generateRandomKeys(int number) {
    List<HashCode> keysToOperateOn = new ArrayList<>();
    HashFunction hf = Hashing.murmur3_128();
    for (int i = 0; i < number; i++) {
        HashCode hashedKey = hf.newHasher().putLong(RANDOM_THREAD_LOCAL.get().nextInt(30) + 1).hash();
        keysToOperateOn.add(hashedKey);
    }
    return keysToOperateOn;
}

public static List<HashCode> generateEvenlySpacedPredictableKeys(int number, int offset,
                                                                 String seed, Cluster cluster) throws InvalidParameterException {
    Set<TokenRange> tokenRanges = cluster.getMetadata().getTokenRanges();
    int numberOfKeysToGenerate = (number < tokenRanges.size()) ? tokenRanges.size() : number;

    Long[] tokens = new Long[numberOfKeysToGenerate];

    int pos = 0;

    int numberOfSplits = (number <= tokenRanges.size()) ? 1 : (number / tokenRanges.size()) + 1;
    for (TokenRange tokenRange : tokenRanges) {
        for (TokenRange splitTokenRange : tokenRange.splitEvenly(numberOfSplits)) {
            if (pos >= tokens.length)
                break;

            tokens[pos++] = (Long) splitTokenRange.getStart().getValue();
        }

        if (pos >= tokens.length)
            break;
    }

    HashCode[] randomKeys = new HashCode[tokens.length];
    int pendingRandomKeys = tokens.length;
    while (pendingRandomKeys > 0) {
        for (int i = offset; i < (offset + numberOfKeysToGenerate) * (number * 10); i++) {
            if (pendingRandomKeys <= 0)
                break;

            HashFunction hf = Hashing.murmur3_128();
            HashCode hashedKey = hf.newHasher().putString(seed, Charset.defaultCharset()).putInt(i).hash();

            for (int t = 0; t < tokens.length; t++) {
                if ((t + 1 == tokens.length && hashedKey.asLong() >= tokens[t])
                    || (hashedKey.asLong() >= tokens[t] && hashedKey.asLong() < tokens[t + 1])) {
                    if (randomKeys[t] == null) {
                        randomKeys[t] = hashedKey;
                        pendingRandomKeys--;
                    }

                    break;
                }
            }
        }
    }

    return Arrays.asList(randomKeys);
}
{code}

There are 12 Cassandra instances in each performance/stress cluster, running JDK 
1.8_u74 with the CMS collector (obviously simplified) and -Xms5G -Xmx5G -Xmn1G. 

The test keyspace is created with RF=3:
{code:SQL}
CREATE KEYSPACE IF NOT EXISTS test_keyspace WITH replication = {'class': 
'NetworkTopologyStrategy', 'datacenter1': 3}
{code}

Operations for test_keyspace.largeuuid1 generate a new key to insert and read 
from at the top of every iteration with generateRandomKeys(1). Each worker then 
generates 10,000 random mutations, each with the current timeuuid and a random 
value blob of between 30 bytes and 2KB. This is intended to get some more 
"normal" load on the cluster; a rough sketch of one such worker iteration is 
shown after the schema below.

{code:SQL}
CREATE TABLE IF NOT EXISTS test_keyspace.timeuuid1 (name text, col1 timeuuid, 
value blob, primary key(name, col1)) WITH compaction = { 
'class':'LeveledCompactionStrategy' }

"INSERT INTO test_keyspace.largeuuid1 (name, col1, value) VALUES (?, ?, ?)"
"SELECT * FROM test_keyspace.largeuuid1 WHERE name = ? and col1 = ?"
{code}
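
For anyone trying to follow along without the stress tool source, here is a rough 
sketch of what a single worker iteration for this workload looks like. It is not 
code from the Birch patch or from the actual tool; it just illustrates the loop 
described above, assuming the DataStax Java driver and the key-generation helpers 
shown earlier (the class, method, and constant names are made up for illustration).

{code:java}
import java.nio.ByteBuffer;
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.UUIDs;
import com.google.common.hash.HashCode;

public final class TimeuuidWorkloadSketch {
    // One worker iteration: take a key, then write and read back 10,000
    // (timeuuid, random blob) rows using the INSERT/SELECT statements shown above.
    static void runIteration(Session session, HashCode key) {
        PreparedStatement insert = session.prepare(
                "INSERT INTO test_keyspace.largeuuid1 (name, col1, value) VALUES (?, ?, ?)");
        PreparedStatement select = session.prepare(
                "SELECT * FROM test_keyspace.largeuuid1 WHERE name = ? and col1 = ?");

        for (int i = 0; i < 10_000; i++) {
            UUID now = UUIDs.timeBased();
            // Random value blob between 30 bytes and 2KB, as in the description above.
            byte[] payload = new byte[30 + ThreadLocalRandom.current().nextInt(2 * 1024 - 30 + 1)];
            ThreadLocalRandom.current().nextBytes(payload);
            session.execute(insert.bind(key.toString(), now, ByteBuffer.wrap(payload)));
            session.execute(select.bind(key.toString(), now));
        }
    }
}
{code}

A worker would call this as runIteration(session, generateRandomKeys(1).get(0)) at the 
top of each iteration.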

The second and third gener

[jira] [Commented] (CASSANDRA-11949) GC log directory should be created in startup scripts

2016-10-11 Thread Mahdi Mohammadi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566806#comment-15566806
 ] 

Mahdi Mohammadi commented on CASSANDRA-11949:
-

[~mshuler], [~JoshuaMcKenzie] Shall I close this ticket?

> GC log directory should be created in startup scripts
> -
>
> Key: CASSANDRA-11949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11949
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Mahdi Mohammadi
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> In [CASSANDRA-10140], we enabled GC logging by default, since the overhead 
> was low and asking people providing diagnostics to restart can often make it 
> more difficult to diagnose problems.
> The default GC log path is set to {{$CASSANDRA_HOME/logs/gc.log}} in 
> {{cassandra-env.sh}}, a directory that is not present in a fresh 
> clone/install. Even if logback creates this directory later in startup, it is 
> not present when the JVM initiates GC logging, so GC logging will silently 
> fail for this first Cassandra run.
> I haven't tested this in Windows but suspect the same problem may occur. 
> Since lots of tooling around Cassandra won't create this directory, we should 
> instead consider attempting to create it in our startup scripts.
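
For reference, the fix being asked for here is small; a minimal sketch of the kind of 
startup-script change the description suggests (the variable name and exact location, 
e.g. bin/cassandra or cassandra-env.sh, are illustrative, not the actual patch):

{noformat}
# Create the GC log directory before the JVM starts; the JVM will not create
# it itself, so GC logging silently fails on a fresh clone/install otherwise.
gc_log_dir="${CASSANDRA_HOME}/logs"
[ -d "$gc_log_dir" ] || mkdir -p "$gc_log_dir"
{noformat}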



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566748#comment-15566748
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Attaching an initial set of very rough graphs showing the last 12 hours of 
stress/performance testing that's been running. I apologize ahead of time for 
some of the graphs -- I wanted to include the average, p99.9th, and count for 
all key metrics and in some cases some of the values overlapped and my graphing 
foo wasn't good enough to improve the readability. I'll take another pass when 
I get some time with the next round of performance testing. The "large" CQL 
partitions in all 3 clusters are currently (and during the duration of the 
test) between ~6GB and ~12.5GB, although I'm planning on running the 
stress/performance tests in all 3 clusters until the "large" CQL partitions 
hits ~50GB. The load was started in all 3 clusters (where all 3 were totally 
empty at start) at the same time -- from the same stress tool code that I wrote 
specifically to realistically test Birch as after repeated attempts to generate 
a good workload with cassandra-stress I gave up. Some details of the stress 
tool and load that was being generated for these graphs is below.

h3. There are three read-write workloads being run to generate the load during 
these tests.

I wrote the following two methods for my "simple-cassandra-stress" tool I threw 
together to generate keys that the worker-threads operate on. I'll refer to 
them below in terms of how the stress load is currently being generated. 

{code:java}
public static List<HashCode> generateRandomKeys(int number) {
    List<HashCode> keysToOperateOn = new ArrayList<>();
    HashFunction hf = Hashing.murmur3_128();
    for (int i = 0; i < number; i++) {
        // Hash one of 30 possible longs, so workers keep operating on a small, fixed key set.
        HashCode hashedKey = hf.newHasher().putLong(RANDOM_THREAD_LOCAL.get().nextInt(30) + 1).hash();
        keysToOperateOn.add(hashedKey);
    }
    return keysToOperateOn;
}

public static List<HashCode> generateEvenlySpacedPredictableKeys(int number, int offset, String seed,
                                                                 Cluster cluster) throws InvalidParameterException {
    Set<TokenRange> tokenRanges = cluster.getMetadata().getTokenRanges();
    int numberOfKeysToGenerate = (number < tokenRanges.size()) ? tokenRanges.size() : number;

    Long[] tokens = new Long[numberOfKeysToGenerate];

    int pos = 0;

    // Take the start token of every (split) token range so the keys end up evenly spaced around the ring.
    int numberOfSplits = (number <= tokenRanges.size()) ? 1 : (number / tokenRanges.size()) + 1;
    for (TokenRange tokenRange : tokenRanges) {
        for (TokenRange splitTokenRange : tokenRange.splitEvenly(numberOfSplits)) {
            if (pos >= tokens.length)
                break;

            tokens[pos++] = (Long) splitTokenRange.getStart().getValue();
        }

        if (pos >= tokens.length)
            break;
    }

    // Hash seed + counter until every token range has been assigned one predictable key.
    HashCode[] randomKeys = new HashCode[tokens.length];
    int pendingRandomKeys = tokens.length;
    while (pendingRandomKeys > 0) {
        for (int i = offset; i < (offset + numberOfKeysToGenerate) * (number * 10); i++) {
            if (pendingRandomKeys <= 0)
                break;

            HashFunction hf = Hashing.murmur3_128();
            HashCode hashedKey = hf.newHasher().putString(seed, Charset.defaultCharset()).putInt(i).hash();

            for (int t = 0; t < tokens.length; t++) {
                if ((t + 1 == tokens.length && hashedKey.asLong() >= tokens[t])
                        || (hashedKey.asLong() >= tokens[t] && hashedKey.asLong() < tokens[t + 1])) {
                    if (randomKeys[t] == null) {
                        randomKeys[t] = hashedKey;
                        pendingRandomKeys--;
                    }

                    break;
                }
            }
        }
    }

    return Arrays.asList(randomKeys);
}
{code}

There are 12 Cassandra instances in each performance/stress cluster, running JDK 
1.8_u74 with the CMS collector (obviously simplified) and -Xms5G -Xmx5G -Xmn1G. 

The test keyspace is created with RF=3:
{code:SQL}
CREATE KEYSPACE IF NOT EXISTS test_keyspace WITH replication = {'class': 
'NetworkTopologyStrategy', 'datacenter1': 3}
{code}

Operations for test_keyspace.largeuuid1 generate a new key to insert and read 
from at the top of every iteration with generateRandomKeys(1). Each worker then 
generates 10,000 random mutations, each with the current timeuuid and a random 
value blob of between 30 bytes and 2KB. This is intended to get some more 
"normal" load on the cluster.

{code:SQL}
CREATE TABLE IF NOT EXISTS test_keyspace.timeuuid1 (name text, col1 timeuuid, 
value blob, primary key(name, col1)) WITH compaction = { 
'class':'LeveledCompactionStrategy' }

"INSERT INTO test_keyspace.largeuuid1 (name, col1, value) VALUES (?, ?, ?)"
"SELECT * FROM test_keyspace.largeuuid1 WHERE name = ? and col1 = ?"
{code}

The second and third generated workload attempt to stress the large row size 
e

[jira] [Updated] (CASSANDRA-12774) Expose dc in Unavailable exception errors

2016-10-11 Thread Andy Tolbert (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Tolbert updated CASSANDRA-12774:
-
Description: 
For protocol v5 or later, it could be useful if Unavailable optionally included 
the DC that could not meet the CL.  

For example, if a user has a keyspace with RF of { dc1: 3, dc2: 3 } and they 
make a query at {{EACH_QUORUM}} and not enough replicas are available in dc2, 
an {{UnavailableException}} will be sent to the client with X available and 2 
required, but we don't know which DC failed.  It looks like 
{{UnavailableException}} already has a constructor that takes in the DC (see 
[here|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/exceptions/UnavailableException.java#L33])
 so this could be feasible.

  was:
For protocol v5 or later, it could be useful if Unavailable optionally included 
the DC that could not meet the CL.  

For example. if a user has a keyspace with RF of { dc1: 3, and dc2: 3 } and 
they make a query at {{EACH_QUORUM}} and not enough replicas are available in 
dc2, an {{UnavailableException}} will be sent to the client with X available 
and 2 required, but we don't know which DC failed.  It looks like 
{{UnavailableException}} already has a constructor that takes in the DC (see 
[here|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/exceptions/UnavailableException.java#L33])
 so this could be feasible.


> Expose dc in Unavailable exception errors
> -
>
> Key: CASSANDRA-12774
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12774
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Priority: Minor
>
> For protocol v5 or later, it could be useful if Unavailable optionally 
> included the DC that could not meet the CL.  
> For example, if a user has a keyspace with RF of { dc1: 3, dc2: 3 } and they 
> make a query at {{EACH_QUORUM}} and not enough replicas are available in dc2, 
> an {{UnavailableException}} will be sent to the client with X available and 2 
> required, but we don't know which DC failed.  It looks like 
> {{UnavailableException}} already has a constructor that takes in the DC (see 
> [here|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/exceptions/UnavailableException.java#L33])
>  so this could be feasible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12774) Expose dc in Unavailable exception errors

2016-10-11 Thread Andy Tolbert (JIRA)
Andy Tolbert created CASSANDRA-12774:


 Summary: Expose dc in Unavailable exception errors
 Key: CASSANDRA-12774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12774
 Project: Cassandra
  Issue Type: Bug
Reporter: Andy Tolbert
Priority: Minor


For protocol v5 or later, it could be useful if Unavailable optionally included 
the DC that could not meet the CL.  

For example. if a user has a keyspace with RF of { dc1: 3, and dc2: 3 } and 
they make a query at {{EACH_QUORUM}} and not enough replicas are available in 
dc2, an {{UnavailableException}} will be sent to the client with X available 
and 2 required, but we don't know which DC failed.  It looks like 
{{UnavailableException}} already has a constructor that takes in the DC (see 
[here|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/exceptions/UnavailableException.java#L33])
 so this could be feasible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: perf_cluster_3_without_birch_read_latency_and_counts.png

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_read_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: (was: 
perf_cluster_3_without_birch_read_latency_and_counts.png)

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: perf_cluster_1_with_birch_write_latency_and_counts.png

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: (was: 
perf_cluster_1_with_birch_write_latency_and_counts.png)

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_read_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: perf_cluster_3_without_birch_write_latency_and_counts.png
perf_cluster_3_without_birch_read_latency_and_counts.png
perf_cluster_2_with_birch_write_latency_and_counts.png
perf_cluster_2_with_birch_read_latency_and_counts.png
perf_cluster_1_with_birch_write_latency_and_counts.png
perf_cluster_1_with_birch_read_latency_and_counts.png
gc_counts_without_birch.png
gc_counts_with_birch.png
gc_collection_times_without_birch.png
gc_collection_times_with_birch.png

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_read_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: (was: 9754_part2-v1.diff)

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12274) mx4j not work in 3.0.8

2016-10-11 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566605#comment-15566605
 ] 

Edward Ribeiro commented on CASSANDRA-12274:


Sorry for taking so long to look at this. :( It's +1 from me, but, more 
importantly, Jake has given his blessing already. ;)

> mx4j not work in 3.0.8
> --
>
> Key: CASSANDRA-12274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: suse 12
> java 1.8.0_60
> mx4j 3.0.2
>Reporter: Ilya
>Assignee: Robert Stupp
> Fix For: 3.0.x
>
> Attachments: mx4j-error-log.txt
>
>
> After updating from 2.1 to a 3.x version, the mx4j page comes back empty
> {code}
> $ curl -i cassandra1:8081
> HTTP/1.0 200 OK
> expires: now
> Server: MX4J-HTTPD/1.0
> Cache-Control: no-cache
> pragma: no-cache
> Content-Type: text/html
> {code}
> There are no errors in the log.
> logs:
> {code}
> ~ $ grep -i mx4j /local/apache-cassandra/logs/system.log | tail -2
> INFO  [main] 2016-07-22 13:48:00,352 CassandraDaemon.java:432 - JVM 
> Arguments: [-Xloggc:/local/apache-cassandra//logs/gc.log, 
> -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, 
> -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/local/tmp, -Xss256k, 
> -XX:StringTableSize=103, -XX:+AlwaysPreTouch, -XX:+UseTLAB, 
> -XX:+ResizeTLAB, -XX:+UseNUMA, -Djava.net.preferIPv4Stack=true, -Xms512M, 
> -Xmx1G, -XX:+UseG1GC, -XX:G1RSetUpdatingPauseTimePercent=5, 
> -XX:MaxGCPauseMillis=500, -XX:InitiatingHeapOccupancyPercent=25, 
> -XX:G1HeapRegionSize=32m, -XX:ParallelGCThreads=16, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, 
> -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, 
> -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, 
> -XX:CompileCommandFile=/local/apache-cassandra//conf/hotspot_compiler, 
> -javaagent:/local/apache-cassandra//lib/jamm-0.3.0.jar, 
> -Djava.rmi.server.hostname=cassandra1.d3, 
> -Dcom.sun.management.jmxremote.port=7199, 
> -Dcom.sun.management.jmxremote.rmi.port=7199, 
> -Dcom.sun.management.jmxremote.ssl=false, 
> -Dcom.sun.management.jmxremote.authenticate=false, 
> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password,
>  -Djava.library.path=/local/apache-cassandra//lib/sigar-bin, -Dmx4jport=8081, 
> -Dlogback.configurationFile=logback.xml, 
> -Dcassandra.logdir=/local/apache-cassandra//logs, 
> -Dcassandra.storagedir=/local/apache-cassandra//data, 
> -Dcassandra-pidfile=/local/apache-cassandra/run/cassandra.pid]
> INFO  [main] 2016-07-22 13:48:04,045 Mx4jTool.java:63 - mx4j successfuly 
> loaded
> {code}
> {code}
> ~ $ sudo lsof -i:8081
> COMMAND   PID  USER   FD   TYPEDEVICE SIZE/OFF NODE NAME
> java14489 cassandra   86u  IPv4 381043582  0t0  TCP 
> cassandra1.d3:sunproxyadmin (LISTEN)
> {code}
> I checked versions 3.0.8 and 3.5; the result is the same - it does not work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Attachment: (was: 9754_part1-v1.diff)

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/core destination

2016-10-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-12772:
-
Reviewer: Michael Shuler
  Status: Patch Available  (was: Open)

> [Debian] Allow user configuration of hprof/core destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if $cassandra_home were consistent and configurable in the 
> Debian init script, especially in the case where the /home partition is 
> smaller than the heap size, making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable user configuration. It would be nice for this 
> to be cherry-picked into all of 3.x.
> {quote}
> https://github.com/JustinVenus/cassandra/commit/3c7ecc1bb530fa8104320aedba470bc3f2065533
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12629) All Nodes Replication Strategy

2016-10-11 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566292#comment-15566292
 ] 

Ben Bromhead commented on CASSANDRA-12629:
--

Getting this committed is not super important since, as you mentioned, 
replication strategies are pluggable; I'm just keen to figure out what we are 
missing here. 

We are still not hugely comfortable with the default rf of the system_auth 
keyspace and the way in which authN/Z information is replicated as the current 
default is also pretty easy to shoot yourself in the foot (from what we have 
seen helping folks out). Also while maintaining a separate process to manage 
system_auth keyspace RF and repairs works... it is also somewhat unwieldy (this 
is our current approach). 

After doing some reading, particularly of 
https://issues.apache.org/jira/browse/CASSANDRA-826, my gut feeling is that 
replication of the authN/Z keyspace requires a more elegant solution than an 
"Everywhere" strategy would provide, and should be more in line with the way the 
schema keyspaces behave. Would such a discussion need a new ticket, or is there 
an existing one for it?
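
For illustration only, the "separate process" approach mentioned above usually boils 
down to re-running something like the following whenever a DC is added or the node 
count changes (the DC names and RF values here are hypothetical):

{code:SQL}
-- Bump system_auth replication to cover the new DC, then repair so the
-- authN/Z data actually lands on the new replicas.
ALTER KEYSPACE system_auth
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};
{code}

followed by {{nodetool repair system_auth}} on each node; this is exactly the kind of 
manual bookkeeping an "Everywhere"-style strategy would remove.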

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Attachments: 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12531) dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3

2016-10-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12531:

Assignee: Sam Tunnicliffe

> dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3
> --
>
> Key: CASSANDRA-12531
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12531
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Sam Tunnicliffe
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v3
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v4
> {code}
> Error Message
> ReadTimeout not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-swJYMH
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 90, in 
> test_tombstone_failure_v3
> self._perform_cql_statement(session, "SELECT value FROM tombstonefailure")
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 63, in 
> _perform_cql_statement
> session.execute(statement)
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> "ReadTimeout not raised\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-swJYMH\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12141) dtest failure in consistency_test.TestConsistency.short_read_reversed_test

2016-10-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12141:

Assignee: Alex Petrov

> dtest failure in consistency_test.TestConsistency.short_read_reversed_test
> --
>
> Key: CASSANDRA-12141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12141
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/280/testReport/consistency_test/TestConsistency/short_read_reversed_test
> Failed on CassCI build trunk_offheap_dtest #280
> {code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [epollEventLoopGroup-2-5] 2016-06-27 19:14:54,412 Slf4JLogger.java:176 
> - LEAK: ByteBuf.release() was not called before it's garbage-collected. 
> Enable advanced leak reporting to find out where the leak occurred. To enable 
> advanced leak reporting, specify the JVM option 
> '-Dio.netty.leakDetection.level=advanced' or call 
> ResourceLeakDetector.setLevel() See 
> http://netty.io/wiki/reference-counted-objects.html for more information.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566188#comment-15566188
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

There were some issues with my cherry-pick to my public GitHub branch. I started 
from scratch, squashed all 182 individual commits, rebased up to 2.1.16, and 
pushed to a new branch: 
https://github.com/mkjellman/cassandra/tree/CASSANDRA-9754-2.1-v2

The full squashed 2.1 based patch is 
https://github.com/mkjellman/cassandra/commit/b17f2c1317326fac7b6864a2fc61d7ee2580f740

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566183#comment-15566183
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Fixed: a single squashed commit of all 182 individual commits (wow, didn't realize 
it was that many), rebased up to 2.1.16, has just been pushed to a new branch: 
https://github.com/mkjellman/cassandra/tree/CASSANDRA-9754-2.1-v2

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-9754:

Comment: was deleted

(was: Yes, however something went wrong with the cherry-pick to the external 
github.com repo as caught by Jeff. I'm squashing all the changes now into a 
single commit and pushing a new branch up. Give me a few more moments.)

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566132#comment-15566132
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Yes, however something went wrong with the cherry-pick to the external 
github.com repo as caught by Jeff. I'm squashing all the changes now into a 
single commit and pushing a new branch up. Give me a few more moments.

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566131#comment-15566131
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Yes, however something went wrong with the cherry-pick to the external 
github.com repo as caught by Jeff. I'm squashing all the changes now into a 
single commit and pushing a new branch up. Give me a few more moments.

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/core destination

2016-10-11 Thread Justin Venus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Venus updated CASSANDRA-12772:
-
Summary: [Debian] Allow user configuration of hprof/core destination  (was: 
[Debian] Allow user configuration of hprof/conf destination)

> [Debian] Allow user configuration of hprof/core destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if $cassandra_home were consistent and configurable in the 
> Debian init script, especially in the case where the /home partition is 
> smaller than the heap size, making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable user configuration. It would be nice for this 
> to be cherry-picked into all of 3.x.
> {quote}
> https://github.com/JustinVenus/cassandra/commit/3c7ecc1bb530fa8104320aedba470bc3f2065533
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12773) cassandra-stress error for one way SSL

2016-10-11 Thread Jane Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566091#comment-15566091
 ] 

Jane Deng commented on CASSANDRA-12773:
---

To reproduce the error, I created a cluster with client-to-node SSL enabled and 
require_client_auth=false. The passwords of the keystore and truststore are 
different from the default password of "cassandra". 

I rebuilt cassandra with the change in SettingsTransport.java to bypass the 
problem:

{noformat}
if (options.keyStore.present())
{
    encOptions.keystore = options.keyStore.value();
    encOptions.keystore_password = options.keyStorePw.value();
}
else
{
    // mandatory for SSLFactory.createSSLContext(), see CASSANDRA-9325
    encOptions.keystore = encOptions.truststore;
    // my code
    encOptions.keystore_password = encOptions.truststore_password;
}
{noformat}

> cassandra-stress error for one way SSL 
> ---
>
> Key: CASSANDRA-12773
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12773
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jane Deng
>
> CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. 
> However, for one-way SSL (require_client_auth=false), there is no need to 
> pass keystore info into ssloptions. Cassandra-stress errored out:
> {noformat}
> java.lang.RuntimeException: java.io.IOException: Error creating the 
> initializing the SSL Context 
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
>  
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207)
>  
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) 
> at org.apache.cassandra.stress.Stress.main(Stress.java:117) 
> Caused by: java.io.IOException: Error creating the initializing the SSL 
> Context 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151)
>  
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128)
>  
> at 
> org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191)
>  
> ... 5 more 
> Caused by: java.io.IOException: Keystore was tampered with, or password was 
> incorrect 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) 
> at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) 
> at java.security.KeyStore.load(KeyStore.java:1445) 
> at 
> org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129)
>  
> ... 7 more 
> Caused by: java.security.UnrecoverableKeyException: Password verification 
> failed 
> at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) 
> ... 10 more
> {noformat}
> It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore is 
> assigned to the path of the truststore, but the password isn't taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12773) cassandra-stress error for one way SSL

2016-10-11 Thread Jane Deng (JIRA)
Jane Deng created CASSANDRA-12773:
-

 Summary: cassandra-stress error for one way SSL 
 Key: CASSANDRA-12773
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12773
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jane Deng


CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. 
However, for one-way SSL (require_client_auth=false), there is no need to pass 
keystore info into ssloptions. Cassandra-stress errored out:

{noformat}
java.lang.RuntimeException: java.io.IOException: Error creating the 
initializing the SSL Context 
at 
org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200)
 
at 
org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
 
at 
org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
 
at 
org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207)
 
at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) 
at org.apache.cassandra.stress.Stress.main(Stress.java:117) 
Caused by: java.io.IOException: Error creating the initializing the SSL Context 
at 
org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151) 
at 
org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128)
 
at 
org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191)
 
... 5 more 
Caused by: java.io.IOException: Keystore was tampered with, or password was 
incorrect 
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) 
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) 
at java.security.KeyStore.load(KeyStore.java:1445) 
at 
org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129) 
... 7 more 
Caused by: java.security.UnrecoverableKeyException: Password verification 
failed 
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) 
... 10 more
{noformat}

It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore is 
assigned to the path of the truststore, but the password isn't taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566048#comment-15566048
 ] 

Branimir Lambov commented on CASSANDRA-9754:


Is it now ready for review?

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad in endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10309) Avoid always looking up column type

2016-10-11 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566003#comment-15566003
 ] 

Carl Yeksigian commented on CASSANDRA-10309:


CASSANDRA-12443 removes the need to reload the sstables after they have been 
loaded - since there is no way for the types to change after the sstable has 
been loaded, we don't need to have a way to refresh the metadata, which 
simplifies this ticket.

> Avoid always looking up column type
> ---
>
> Key: CASSANDRA-10309
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10309
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Carl Yeksigian
>Priority: Minor
>  Labels: perfomance
> Fix For: 3.x
>
>
> Doing some read profiling I noticed we always seem to look up the type of a 
> column from the schema metadata when we have the type already in the column 
> class.
> This one simple change to SerializationHeader improves read performance 
> non-trivially.
> https://github.com/tjake/cassandra/commit/69b94c389b3f36aa035ac4619fd22d1f62ea80b2
> http://cstar.datastax.com/graph?stats=3fb1ced4-58c7-11e5-9faf-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=357.94&ymin=0&ymax=157416.6
> I assume we are looking this up to deal with schema changes. But I'm sure 
> there is a more performant way of doing this.
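
For what it is worth, the shape of the optimization described above can be shown with a 
deliberately generic toy sketch (this is not the linked SerializationHeader commit): use 
the type the column object already carries instead of going back to the schema map on 
every read.

{code:java}
// Generic illustration of the idea, not the linked commit.
import java.util.Map;

final class ColumnTypeLookupSketch
{
    static final class Column
    {
        final String name;
        final Class<?> type;   // the column object already knows its type
        Column(String name, Class<?> type) { this.name = name; this.type = type; }
    }

    // before: one schema-map lookup per cell read
    static Class<?> typeViaSchema(Map<String, Column> schema, Column column)
    {
        return schema.get(column.name).type;
    }

    // after: no lookup, just use the reference we already hold
    static Class<?> typeFromColumn(Column column)
    {
        return column.type;
    }
}
{code}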



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10309) Avoid always looking up column type

2016-10-11 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15566003#comment-15566003
 ] 

Carl Yeksigian edited comment on CASSANDRA-10309 at 10/11/16 5:12 PM:
--

CASSANDRA-12443 removes the need to reload the sstables after they have been 
loaded - since there is no way for the types to change after the sstable has 
been loaded, we don't need to have a way to refresh the metadata which 
simplifies this ticket.


was (Author: carlyeks):
CASSANDRA-12443 removes the need to reload the sstables after they have been 
loaded - since there is no way for the types to change after the sstable has 
been loaded, we don't need to have a way to refresh the metadata which 
simplifies this ticket.4

> Avoid always looking up column type
> ---
>
> Key: CASSANDRA-10309
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10309
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Carl Yeksigian
>Priority: Minor
>  Labels: perfomance
> Fix For: 3.x
>
>
> Doing some read profiling I noticed we always seem to look up the type of a 
> column from the schema metadata when we have the type already in the column 
> class.
> This one simple change to SerializationHeader improves read performance 
> non-trivially.
> https://github.com/tjake/cassandra/commit/69b94c389b3f36aa035ac4619fd22d1f62ea80b2
> http://cstar.datastax.com/graph?stats=3fb1ced4-58c7-11e5-9faf-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=357.94&ymin=0&ymax=157416.6
> I assume we are looking this up to deal with schema changes. But I'm sure 
> there is a more performant way of doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/conf destination

2016-10-11 Thread Justin Venus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Venus updated CASSANDRA-12772:
-
Description: 
It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.

I propose this patch to enable user configuration. It would be nice for this to 
be cherrypicked into all of 3.x  
{quote}
https://github.com/JustinVenus/cassandra/commit/3c7ecc1bb530fa8104320aedba470bc3f2065533
{quote}

  was:
It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.

I propose this patch to enable user configuration. It would be nice for this to 
be cherrypicked into all of 3.x  
{quote}
https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
{quote}


> [Debian] Allow user configuration of hprof/conf destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if the $cassandra_home was consistent and configurable in 
> the debian init script especially in the case where the /home partition is 
> smaller than the heap size making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable user configuration. It would be nice for this 
> to be cherrypicked into all of 3.x  
> {quote}
> https://github.com/JustinVenus/cassandra/commit/3c7ecc1bb530fa8104320aedba470bc3f2065533
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/conf destination

2016-10-11 Thread Justin Venus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Venus updated CASSANDRA-12772:
-
  Flags: Patch
Description: 
It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.

I propose this patch to enable the behavior and it would be nice for this to be 
cherrypicked into all of 3.x  
{quote}
https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
{quote}

  was:It would be nice if the $cassandra_home was consistent and configurable 
in the debian init script especially in the case where the /home partition is 
smaller than the heap size making core/heap dumps impossible to 
configure/capture.


> [Debian] Allow user configuration of hprof/conf destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if the $cassandra_home was consistent and configurable in 
> the debian init script especially in the case where the /home partition is 
> smaller than the heap size making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable the behavior and it would be nice for this to 
> be cherrypicked into all of 3.x  
> {quote}
> https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/conf destination

2016-10-11 Thread Justin Venus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Venus updated CASSANDRA-12772:
-
Description: 
It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.

I propose this patch to enable user configuration. It would be nice for this to 
be cherrypicked into all of 3.x  
{quote}
https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
{quote}

  was:
It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.

I propose this patch to enable the behavior and it would be nice for this to be 
cherrypicked into all of 3.x  
{quote}
https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
{quote}


> [Debian] Allow user configuration of hprof/conf destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if the $cassandra_home was consistent and configurable in 
> the debian init script especially in the case where the /home partition is 
> smaller than the heap size making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable user configuration. It would be nice for this 
> to be cherrypicked into all of 3.x  
> {quote}
> https://github.com/JustinVenus/cassandra/tree/CASSANDRA-12772/user_defined_cassandra_home
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/conf destination

2016-10-11 Thread Justin Venus (JIRA)
Justin Venus created CASSANDRA-12772:


 Summary: [Debian] Allow user configuration of hprof/conf 
destination
 Key: CASSANDRA-12772
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
 Project: Cassandra
  Issue Type: Improvement
Reporter: Justin Venus
Priority: Minor


It would be nice if the $cassandra_home was consistent and configurable in the 
debian init script especially in the case where the /home partition is smaller 
than the heap size making core/heap dumps impossible to configure/capture.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565876#comment-15565876
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Latest set of fixes pushed to 
https://github.com/mkjellman/cassandra/commit/5586be24f55a16887376cb244a7d1b1fa777927f

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is specially bad in endpoints with 
> large CQL partitions. If a CQL partition is say 6,4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-11 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565868#comment-15565868
 ] 

Alex Petrov commented on CASSANDRA-12756:
-

In {{3.8}}: [CASSANDRA-12144].

> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only show for rows written during a couple of hours on a specific date but 
> it shows for several partions and serveral clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> processes several months of data was re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> temperature map,
> PRIMARY KEY ((installation_id, node_id, time_bucket), gateway_time)
> ) WITH CLUSTERING ORDER BY (gateway_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> and the result from the SELECT is
> {code:title=cql output|borderStyle=solid}
> > select * from climate.climate_1510 where installation_id = 133235 and 
> > node_id = 35453983 and time_bucket = 189 and gateway_time > '2016-08-10 
> > 20:00:00' and gateway_time < '2016-08-10 21:00:00' ;
>  installation_id | node_id  | time_bucket | gateway_time | 
> humidity | temperature
> -+--+-+--+--+---
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
> {code}
> I've used Andrew Tolbert's sstable-tools to be able to dump the json for this 
> specific time and this is what I find. 
> {code:title=json dump|borderStyle=solid}
> [133235:35453983:189] Row[info=[ts=1470878906618000] ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> del(humidity)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [humidity[0]=51.0 ts=1470878906618000], 
> del(temperature)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470864506441999, localDeletion=1470864506 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470868106489000, localDeletion=1470868106 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470871706530999, localDeletion=1470871706 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470878906617999, localDeletion=1470878906 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> {code}
> From my understanding this should be impossible. Even if we have duplicates 
> in the sstables (which is normal) it should be filtered away before being 
> returned to the client.
> I'm happy to add details to this bug if anything is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-10-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565823#comment-15565823
 ] 

Michael Kjellman commented on CASSANDRA-9754:
-

Stable all night! My large test partitions have grown to ~12.5GB and it is just as stable 
-- latencies are unchanged. I'm so happy!!! ~7ms avg p99.9th and ~925 
microseconds average read latency. GC is basically non-existent -- and for what GC 
is happening, the instances are averaging a 111 microsecond ParNew collection 
-- almost NO CMS! Compaction is keeping up.

On the converse side, the control 2.1 cluster running the same load has 
instances OOMing left and right -- CMS is frequently running 250 ms 
collections, and ParNew is running 1.28 times a second on average with 75 ms 
average ParNew times. Horrible! And that's the average -- the upper percentiles are 
a mess, so I won't bore everyone. Read latencies are currently 380 ms average, 
with many 15 *second* read latencies in the p99.9.



> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of 2.0 cluster, I found that majority of the objects 
> are IndexInfo and its ByteBuffers. This is specially bad in endpoints with 
> large CQL partitions. If a CQL partition is say 6,4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This will create a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12462) NullPointerException in CompactionInfo.getId(CompactionInfo.java:65)

2016-10-11 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-12462:
---
Attachment: CASSANDRA-12462-v2.patch

Patch v2 attached.

> NullPointerException in CompactionInfo.getId(CompactionInfo.java:65)
> 
>
> Key: CASSANDRA-12462
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12462
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jonathan DePrizio
> Attachments: 
> 0001-Fix-NPE-when-running-nodetool-compactionstats.patch, 
> CASSANDRA-12462-v2.patch
>
>
> Note: The same trace is cited in the last comment of 
> https://issues.apache.org/jira/browse/CASSANDRA-11961
> I've noticed that some of my nodes in my 2.1 cluster have fallen way behind 
> on compactions, and have huge numbers (thousands) of uncompacted, tiny 
> SSTables (~30MB or so).
> In diagnosing the issue, I've found that "nodetool compactionstats" returns 
> the exception below.  Restarting cassandra on the node here causes the 
> pending tasks count to jump to ~2000.  Compactions run properly for about an 
> hour, until this exception occurs again.  Once it occurs, I see the pending 
> tasks value rapidly drop towards zero, but without any compactions actually 
> running (the logs show no compactions finishing).  It would seem that this is 
> causing compactions to fail on this node, which is leading to it running out 
> of space, etc.
> [redacted]# nodetool compactionstats
> xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms12G -Xmx12G 
> -Xmn1000M -Xss255k
> pending tasks: 5
> error: null
> -- StackTrace --
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.db.compaction.CompactionInfo.getId(CompactionInfo.java:65)
>   at 
> org.apache.cassandra.db.compaction.CompactionInfo.asMap(CompactionInfo.java:118)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.getCompactions(CompactionManager.java:1405)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.reflect.misc.Trampoline.invoke(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
>   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
>   at com.sun.jmx.mbeanserver.PerInterface.getAttribute(Unknown Source)
>   at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(Unknown Source)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(Unknown 
> Source)
>   at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(Unknown Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown 
> Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown 
> Source)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown 
> Source)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown 
> Source)
>   at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(Unknown 
> Source)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
>   at sun.rmi.transport.Transport$1.run(Unknown Source)
>   at sun.rmi.transport.Transport$1.run(Unknown Source)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Unknown Source)
>   at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
>   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown 
> Source)
>   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown 
> Source)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-11 Thread Andreas Wederbrand (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565728#comment-15565728
 ] 

Andreas Wederbrand commented on CASSANDRA-12756:


We already run 3.7; is this fix also in 3.8 or 3.9?
 

> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only show for rows written during a couple of hours on a specific date but 
> it shows for several partions and serveral clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> processes several months of data was re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> temperature map,
> PRIMARY KEY ((installation_id, node_id, time_bucket), gateway_time)
> ) WITH CLUSTERING ORDER BY (gateway_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> and the result from the SELECT is
> {code:title=cql output|borderStyle=solid}
> > select * from climate.climate_1510 where installation_id = 133235 and 
> > node_id = 35453983 and time_bucket = 189 and gateway_time > '2016-08-10 
> > 20:00:00' and gateway_time < '2016-08-10 21:00:00' ;
>  installation_id | node_id  | time_bucket | gateway_time | 
> humidity | temperature
> -+--+-+--+--+---
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
> {code}
> I've used Andrew Tolbert's sstable-tools to be able to dump the json for this 
> specific time and this is what I find. 
> {code:title=json dump|borderStyle=solid}
> [133235:35453983:189] Row[info=[ts=1470878906618000] ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> del(humidity)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [humidity[0]=51.0 ts=1470878906618000], 
> del(temperature)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470864506441999, localDeletion=1470864506 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470868106489000, localDeletion=1470868106 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470871706530999, localDeletion=1470871706 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470878906617999, localDeletion=1470878906 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> {code}
> From my understanding this should be impossible. Even if we have duplicates 
> in the sstables (which is normal) it should be filtered away before being 
> returned to the client.
> I'm happy to add details to this bug if anything is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-11 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565718#comment-15565718
 ] 

Jeremy Hanna commented on CASSANDRA-7296:
-

Given the use case, and that it really only applies to CL.ONE, it does look like 
the CL addition is the clearer/cleaner option.  It makes the rest 
of the driver options simpler to reason about because it makes the CL 
contract very clear regardless of the other options.  The driver changes appear 
to have the same level of intrusiveness, and the protocol would have to be 
updated in either case.

Is there a reason why a CL addition couldn't be done in this case - or in other 
words, do the edge cases of adding a CL outweigh the clarity of this function 
as a CL?
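
To make the option being weighed concrete, the "CL addition" route would amount to 
something like the sketch below. {{COORDINATOR_ONLY}} is purely hypothetical (it exists 
neither in the protocol nor in the drivers today), and the other levels are listed only 
for context:

{code:java}
// Hypothetical sketch of the proposal, not an existing API: a new level that
// is only satisfiable when the coordinator itself is a replica of the row.
public enum ConsistencyLevelSketch
{
    ANY, ONE, TWO, THREE, QUORUM, ALL,
    LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE,
    COORDINATOR_ONLY   // proposed: never distribute the read; fail if the coordinator does not own the row
}
{code}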

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12740) cqlsh copy tests hang in case of no answer from the server or driver

2016-10-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565550#comment-15565550
 ] 

Philip Thompson commented on CASSANDRA-12740:
-

Thanks so much for this fix! The issue was really bothering me.

> cqlsh copy tests hang in case of no answer from the server or driver
> 
>
> Key: CASSANDRA-12740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.0.10, 3.10
>
>
> -If we bundle the driver to cqlsh using the 3.6.0 tag or cassandra_test head, 
> some cqlsh copy tests hang, for example {{test_bulk_round_trip_blogposts}}. 
> See CASSANDRA-12736 and CASSANDRA-11534 for some sample failures.-
> If the driver fails to invoke a callback (either error or success), or if the 
> server never answers to the driver, then the copy parent process will wait 
> forever to receive an answer from child processes. We should put a cap to 
> this. We should also use a very high timeout rather than None, so that the 
> driver will notify us if there is no answer from the server.
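
The fix idea in the last sentence is simply a bounded wait. cqlsh copy itself is Python, 
but the pattern is the same in any language; a generic sketch (not the committed change, 
and the timeout value is just an example):

{code:java}
// Generic illustration of the fix idea: never block forever waiting for a
// child's answer, so a lost callback or a silent server surfaces as an error
// instead of a hang.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

final class BoundedWaitSketch
{
    static Object waitForChildAnswer(BlockingQueue<Object> answers) throws InterruptedException
    {
        // "a very high timeout rather than None"
        Object answer = answers.poll(5, TimeUnit.MINUTES);
        if (answer == null)
            throw new IllegalStateException("no answer from child process within the cap");
        return answer;
    }
}
{code}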



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-11 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12761:

Status: Patch Available  (was: Awaiting Feedback)

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-11 Thread Guy Bolton King (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565454#comment-15565454
 ] 

Guy Bolton King commented on CASSANDRA-12761:
-

Updated patch with requested changes attached.

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-11 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12761:

Attachment: 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-11 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12761:

Attachment: (was: 
0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch)

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Assignee: Guy Bolton King
>Priority: Trivial
> Fix For: 3.x, 4.x
>
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-11 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565432#comment-15565432
 ] 

Alex Petrov commented on CASSANDRA-12756:
-

Update to 3.0.8 and run scrub. If you do it in a test environment, you can also 
reassure yourself that no data loss occurs (the records get reconciled as they 
usually would be across sstables). If you upgrade nodes directly to 3.0.8, the issue 
should not occur.

Can we close it as a duplicate?

> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only show for rows written during a couple of hours on a specific date but 
> it shows for several partions and serveral clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> processes several months of data was re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> temperature map,
> PRIMARY KEY ((installation_id, node_id, time_bucket), gateway_time)
> ) WITH CLUSTERING ORDER BY (gateway_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> and the result from the SELECT is
> {code:title=cql output|borderStyle=solid}
> > select * from climate.climate_1510 where installation_id = 133235 and 
> > node_id = 35453983 and time_bucket = 189 and gateway_time > '2016-08-10 
> > 20:00:00' and gateway_time < '2016-08-10 21:00:00' ;
>  installation_id | node_id  | time_bucket | gateway_time | 
> humidity | temperature
> -+--+-+--+--+---
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
> {code}
> I've used Andrew Tolbert's sstable-tools to be able to dump the json for this 
> specific time and this is what I find. 
> {code:title=json dump|borderStyle=solid}
> [133235:35453983:189] Row[info=[ts=1470878906618000] ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> del(humidity)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [humidity[0]=51.0 ts=1470878906618000], 
> del(temperature)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470864506441999, localDeletion=1470864506 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470868106489000, localDeletion=1470868106 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470871706530999, localDeletion=1470871706 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470878906617999, localDeletion=1470878906 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> {code}
> From my understanding this should be impossible. Even if we have duplicates 
> in the sstables (which is normal) it should be filtered away before being 
> returned to the c

[jira] [Updated] (CASSANDRA-12733) Throw an exception if there is a prepared statement id hash conflict.

2016-10-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12733:
--
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.10
   Status: Resolved  (was: Ready to Commit)

> Throw an exception if there is a prepared statement id hash conflict.
> -
>
> Key: CASSANDRA-12733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Minor
> Fix For: 3.10
>
>
> I seriously doubt there is any chance of actually getting two prepared 
> statement strings that have the same MD5.  But there should probably be 
> checks in QueryProcessor.getStoredPreparedStatement that the query string of 
> the statement being prepared matches the query string of the ID returned from 
> the cache when one already exists there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12733) Throw an exception if there is a prepared statement id hash conflict.

2016-10-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565421#comment-15565421
 ] 

Aleksey Yeschenko commented on CASSANDRA-12733:
---

Committed as 
[c7fb95c98d2c370fcdb8e0389528ce6668f3a58c|https://github.com/apache/cassandra/commit/c7fb95c98d2c370fcdb8e0389528ce6668f3a58c]
 to 3.X and merged into trunk. For the record, I still find the ticket silly, 
but ¯\_(ツ)_/¯

> Throw an exception if there is a prepared statement id hash conflict.
> -
>
> Key: CASSANDRA-12733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Minor
> Fix For: 3.10
>
>
> I seriously doubt there is any chance of actually getting two prepared 
> statement strings that have the same MD5.  But there should probably be 
> checks in QueryProcessor.getStoredPreparedStatement that the query string of 
> the statement being prepared matches the query string of the ID returned from 
> the cache when one already exists there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-11 Thread aleksey
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/231a9370
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/231a9370
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/231a9370

Branch: refs/heads/trunk
Commit: 231a9370619c86a0661ee421572750ea0ab197bf
Parents: dffe270 c7fb95c
Author: Aleksey Yeschenko 
Authored: Tue Oct 11 14:33:55 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Oct 11 14:33:55 2016 +0100

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/cql3/QueryProcessor.java   | 16 ++--
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/231a9370/CHANGES.txt
--
diff --cc CHANGES.txt
index 1310795,5907c74..7f5b7cb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,5 +1,11 @@@
 +4.0
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 +
 +
  3.10
+  * Check for hash conflicts in prepared statements (CASSANDRA-12733)
   * Exit query parsing upon first error (CASSANDRA-12598)
   * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
   * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)



[2/3] cassandra git commit: Check for hash conflicts in prepared statements.

2016-10-11 Thread aleksey
Check for hash conflicts in prepared statements.

Patch by Jeremiah Jordan; reviewed by Alex Petrov for CASSANDRA-12733


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7fb95c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7fb95c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7fb95c9

Branch: refs/heads/trunk
Commit: c7fb95c98d2c370fcdb8e0389528ce6668f3a58c
Parents: 8309543
Author: Jeremiah D Jordan 
Authored: Thu Sep 29 13:55:17 2016 -0500
Committer: Aleksey Yeschenko 
Committed: Tue Oct 11 14:33:05 2016 +0100

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/cql3/QueryProcessor.java   | 16 ++--
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7fb95c9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2f517e0..5907c74 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Check for hash conflicts in prepared statements (CASSANDRA-12733)
  * Exit query parsing upon first error (CASSANDRA-12598)
  * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729)
  * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7fb95c9/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 5313a1a..1d5a024 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -57,6 +57,8 @@ import org.apache.cassandra.transport.Server;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.utils.*;
 
+import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
+
 public class QueryProcessor implements QueryHandler
 {
 public static final CassandraVersion CQL_VERSION = new 
CassandraVersion("3.4.3");
@@ -437,13 +439,23 @@ public class QueryProcessor implements QueryHandler
 {
 Integer thriftStatementId = computeThriftId(queryString, keyspace);
 ParsedStatement.Prepared existing = 
thriftPreparedStatements.get(thriftStatementId);
-return existing == null ? null : 
ResultMessage.Prepared.forThrift(thriftStatementId, existing.boundNames);
+if (existing == null)
+return null;
+
+checkTrue(queryString.equals(existing.rawCQLStatement),
+  String.format("MD5 hash collision: query with the same 
MD5 hash was already prepared. \n Existing: '%s'", existing.rawCQLStatement));
+return ResultMessage.Prepared.forThrift(thriftStatementId, 
existing.boundNames);
 }
 else
 {
 MD5Digest statementId = computeId(queryString, keyspace);
 ParsedStatement.Prepared existing = 
preparedStatements.get(statementId);
-return existing == null ? null : new 
ResultMessage.Prepared(statementId, existing);
+if (existing == null)
+return null;
+
+checkTrue(queryString.equals(existing.rawCQLStatement),
+  String.format("MD5 hash collision: query with the same 
MD5 hash was already prepared. \n Existing: '%s'", existing.rawCQLStatement));
+return new ResultMessage.Prepared(statementId, existing);
 }
 }
 



[1/3] cassandra git commit: Check for hash conflicts in prepared statements.

2016-10-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 830954304 -> c7fb95c98
  refs/heads/trunk dffe27077 -> 231a93706


Check for hash conflicts in prepared statements.

Patch by Jeremiah Jordan; reviewed by Alex Petrov for CASSANDRA-12733


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7fb95c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7fb95c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7fb95c9

Branch: refs/heads/cassandra-3.X
Commit: c7fb95c98d2c370fcdb8e0389528ce6668f3a58c
Parents: 8309543
Author: Jeremiah D Jordan 
Authored: Thu Sep 29 13:55:17 2016 -0500
Committer: Aleksey Yeschenko 
Committed: Tue Oct 11 14:33:05 2016 +0100

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/cql3/QueryProcessor.java   | 16 ++--
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7fb95c9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2f517e0..5907c74 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Check for hash conflicts in prepared statements (CASSANDRA-12733)
  * Exit query parsing upon first error (CASSANDRA-12598)
  * Fix cassandra-stress to use single seed in UUID generation (CASSANDRA-12729)
  * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7fb95c9/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index 5313a1a..1d5a024 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -57,6 +57,8 @@ import org.apache.cassandra.transport.Server;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.utils.*;
 
+import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
+
 public class QueryProcessor implements QueryHandler
 {
 public static final CassandraVersion CQL_VERSION = new 
CassandraVersion("3.4.3");
@@ -437,13 +439,23 @@ public class QueryProcessor implements QueryHandler
 {
 Integer thriftStatementId = computeThriftId(queryString, keyspace);
 ParsedStatement.Prepared existing = 
thriftPreparedStatements.get(thriftStatementId);
-return existing == null ? null : 
ResultMessage.Prepared.forThrift(thriftStatementId, existing.boundNames);
+if (existing == null)
+return null;
+
+checkTrue(queryString.equals(existing.rawCQLStatement),
+  String.format("MD5 hash collision: query with the same 
MD5 hash was already prepared. \n Existing: '%s'", existing.rawCQLStatement));
+return ResultMessage.Prepared.forThrift(thriftStatementId, 
existing.boundNames);
 }
 else
 {
 MD5Digest statementId = computeId(queryString, keyspace);
 ParsedStatement.Prepared existing = 
preparedStatements.get(statementId);
-return existing == null ? null : new 
ResultMessage.Prepared(statementId, existing);
+if (existing == null)
+return null;
+
+checkTrue(queryString.equals(existing.rawCQLStatement),
+  String.format("MD5 hash collision: query with the same 
MD5 hash was already prepared. \n Existing: '%s'", existing.rawCQLStatement));
+return new ResultMessage.Prepared(statementId, existing);
 }
 }
 



[jira] [Updated] (CASSANDRA-12454) Unable to start on IPv6-only node with local JMX

2016-10-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12454:

Status: Ready to Commit  (was: Patch Available)

> Unable to start on IPv6-only node with local JMX
> 
>
> Key: CASSANDRA-12454
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12454
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu Trusty, Oracle JDK 1.8.0_102-b14, IPv6-only host
>Reporter: Vadim Tsesko
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> A Cassandra node using *default* configuration is unable to start on 
> *IPv6-only* machine with the following error message:
> {code}
> ERROR [main] 2016-08-13 14:38:07,309 CassandraDaemon.java:731 - Bad URL path: 
> :0:0:0:0:0:1/jndi/rmi://0:0:0:0:0:0:0:1:7199/jmxrmi
> {code}
> The problem might be located in {{JMXServerUtils.createJMXServer()}} (I am 
> not sure, because there is no stack trace in {{system.log}}):
> {code:java}
> String urlTemplate = "service:jmx:rmi://%1$s/jndi/rmi://%1$s:%2$d/jmxrmi";
> ...
> String url = String.format(urlTemplate, (serverAddress != null ? 
> serverAddress.getHostAddress() : "0.0.0.0"), port);
> {code}
> IPv6 addresses must be surrounded by square brackets when passed to 
> {{JMXServiceURL}}.
> Disabling {{LOCAL_JMX}} mode in {{cassandra-env.sh}} (and enabling JMX 
> authentication) helps.
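
A minimal sketch of the bracketing the description asks for (the helper name is invented; 
this is not the committed {{JMXServerUtils}} change):

{code:java}
// Sketch only: wrap IPv6 literals in square brackets before substituting them
// into the JMX service URL template quoted above.
import java.net.InetAddress;

final class JmxUrlSketch
{
    static String bracketIfIpv6(InetAddress addr)
    {
        String host = addr.getHostAddress();
        return host.contains(":") ? "[" + host + "]" : host;   // 0:0:0:0:0:0:0:1 -> [0:0:0:0:0:0:0:1]
    }

    static String serviceUrl(InetAddress serverAddress, int port)
    {
        String host = serverAddress != null ? bracketIfIpv6(serverAddress) : "0.0.0.0";
        return String.format("service:jmx:rmi://%1$s/jndi/rmi://%1$s:%2$d/jmxrmi", host, port);
    }
}
{code}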



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2016-10-11 Thread Jean-Baptiste Le Duigou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Le Duigou updated CASSANDRA-12743:

Description: 
While running compaction I sometimes run into an error:
{noformat}
nodetool compact
error: null
-- StackTrace --
java.lang.AssertionError
at 
org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
at 
org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
at 
org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Why is that happening?
Is there any way to provide more details (e.g. which SSTable cannot be 
compacted)?

We are using Cassandra 2.2.7

  was:
While running compaction I run into an error sometimes :
{noformat}
nodetool compact
error: null
-- StackTrace --
java.lang.AssertionError
at 
org.apache.cassandra.io.compress.CompressionMetadata$Chunk.(CompressionMetadata.java:463)
at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
at 
org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.(CompressedPoolingSegmentedFile.java:38)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
at 
org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Why is that happening?
Is there anyway to provide more details (e.g. which SSTable cannot be 
compacted)?


> Assertion error while running compaction 
> 

[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2016-10-11 Thread Jean-Baptiste Le Duigou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Le Duigou updated CASSANDRA-12743:

Summary: Assertion error while running compaction   (was: Assertion error 
while runnning compaction )

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>
> While running compaction I run into an error sometimes :
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be compacted)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-10-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12629:
--
Status: Open  (was: Patch Available)

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user) until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that would 
> replicate it to all nodes as they join the cluster.  This would also remove 
> the need to update the replication factor for system_auth when adding nodes 
> to the cluster, in keeping with the recommendation of RF = number of nodes 
> (at least for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12629) All Nodes Replication Strategy

2016-10-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565394#comment-15565394
 ] 

Aleksey Yeschenko commented on CASSANDRA-12629:
---

It was and is a bad idea in the 'Enterprise version', and would be a bad idea 
in proper Cassandra, too. That said, there is a reason replication strategies 
are pluggable in Cassandra - you don't have to commit them to C* to be able to 
use them. Formally -1 on this, sorry.
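
For anyone who wants to go the pluggable route, a rough, untested sketch of such a strategy against the 3.x {{AbstractReplicationStrategy}} API is below. The package/class names are hypothetical and the signatures are from memory, so verify them against the interface in the version you run:

{code}
package com.example.replication; // hypothetical package, not part of C*

import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.cassandra.dht.Token;
import org.apache.cassandra.exceptions.ConfigurationException;
import org.apache.cassandra.locator.AbstractReplicationStrategy;
import org.apache.cassandra.locator.IEndpointSnitch;
import org.apache.cassandra.locator.TokenMetadata;

// Sketch of an "all nodes" strategy: every endpoint in the ring replicates
// every range, so the effective RF tracks the cluster size automatically.
public class AllNodesStrategy extends AbstractReplicationStrategy
{
    private final TokenMetadata metadata; // keep our own handle on the ring metadata

    public AllNodesStrategy(String keyspaceName, TokenMetadata tokenMetadata,
                            IEndpointSnitch snitch, Map<String, String> configOptions)
    {
        super(keyspaceName, tokenMetadata, snitch, configOptions);
        this.metadata = tokenMetadata;
    }

    @Override
    public List<InetAddress> calculateNaturalEndpoints(Token searchToken, TokenMetadata tokenMetadata)
    {
        // The search token is irrelevant here: every node owns every range.
        return allEndpoints(tokenMetadata);
    }

    @Override
    public int getReplicationFactor()
    {
        return allEndpoints(metadata).size();
    }

    @Override
    public void validateOptions() throws ConfigurationException
    {
        // No options to validate in this sketch.
    }

    // Walk the ring and collect each distinct endpoint once.
    private static List<InetAddress> allEndpoints(TokenMetadata tm)
    {
        List<InetAddress> endpoints = new ArrayList<>();
        for (Token token : tm.sortedTokens())
        {
            InetAddress endpoint = tm.getEndpoint(token);
            if (!endpoints.contains(endpoint))
                endpoints.add(endpoint);
        }
        return endpoints;
    }
}
{code}

If it compiles against the target version, the keyspace would then reference it by fully-qualified name in its replication options, e.g. {{'class': 'com.example.replication.AllNodesStrategy'}} (again, a hypothetical name).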

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Attachments: 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user) until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that would 
> replicate it to all nodes as they join the cluster.  This would also remove 
> the need to update the replication factor for system_auth when adding nodes 
> to the cluster, in keeping with the recommendation of RF = number of nodes 
> (at least for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12629) All Nodes Replication Strategy

2016-10-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-12629.
---
   Resolution: Not A Problem
Fix Version/s: (was: 3.x)

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Attachments: 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user) until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that would 
> replicate it to all nodes as they join the cluster.  This would also remove 
> the need to update the replication factor for system_auth when adding nodes 
> to the cluster, in keeping with the recommendation of RF = number of nodes 
> (at least for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

