[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261541#comment-15261541
 ] 

Chanh Le commented on CASSANDRA-10661:
--

[~xedin] Thanks, man. You made my day.

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to integrating SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261537#comment-15261537
 ] 

Pavel Yaskevich edited comment on CASSANDRA-10661 at 4/28/16 4:50 AM:
--

Hi [~giaosudau], the name of the index class is 
'org.apache.cassandra.index.sasi.SASIIndex'; you are most likely reading 
documentation specific to 2.0. The updated doc is 
https://github.com/apache/cassandra/blob/trunk/doc/SASI.md, which resides in 
the doc/ folder of the Apache Cassandra distribution. 

Edit: NonTokenizingAnalyzer is located in 'org.apache.cassandra.index.sasi' as 
well.


was (Author: xedin):
Hi [~giaosudau], the name of the index class is 
'org.apache.cassandra.index.sasi.SASIIndex'; you are most likely reading 
documentation specific to 2.0. The updated doc is 
https://github.com/apache/cassandra/blob/trunk/doc/SASI.md, which resides in 
the doc/ folder of the Apache Cassandra distribution. 

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to integrating SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261537#comment-15261537
 ] 

Pavel Yaskevich edited comment on CASSANDRA-10661 at 4/28/16 4:50 AM:
--

Hi [~giaosudau], the name of the index class is 
'org.apache.cassandra.index.sasi.SASIIndex'; you are most likely reading 
documentation specific to 2.0. The updated doc is 
https://github.com/apache/cassandra/blob/trunk/doc/SASI.md, which resides in 
the doc/ folder of the Apache Cassandra distribution. 

Edit: NonTokenizingAnalyzer is located in 
'org.apache.cassandra.index.sasi.analyzer' as well.


was (Author: xedin):
Hi [~giaosudau], the name of the index class is 
'org.apache.cassandra.index.sasi.SASIIndex'; you are most likely reading 
documentation specific to 2.0. The updated doc is 
https://github.com/apache/cassandra/blob/trunk/doc/SASI.md, which resides in 
the doc/ folder of the Apache Cassandra distribution. 

Edit: NonTokenizingAnalyzer is located in 'org.apache.cassandra.index.sasi' as 
well.

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to integrating SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261537#comment-15261537
 ] 

Pavel Yaskevich commented on CASSANDRA-10661:
-

Hi [~giaosudau], the name of the index class is 
'org.apache.cassandra.index.sasi.SASIIndex'; you are most likely reading 
documentation specific to 2.0. The updated doc is 
https://github.com/apache/cassandra/blob/trunk/doc/SASI.md, which resides in 
the doc/ folder of the Apache Cassandra distribution. 
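
With the table and column from your statement, the 3.x DDL would look roughly 
like this (a sketch showing only the options from your example):

{code}
CREATE CUSTOM INDEX ON bar (fname)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {
    'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
    'case_sensitive': 'false'
};
{code}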

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to integrating SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11452) Cache implementation using LIRS eviction for in-process page cache

2016-04-27 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261535#comment-15261535
 ] 

Ben Manes commented on CASSANDRA-11452:
---

Branimir, I assume you're referring to preferring the candidate on equality. It 
is probably my fault that Roy left it out, as I likely forgot to emphasize your 
observation. It does have a negative impact on the LIRS traces, such as halving 
the hit rate of glimpse (analytical) from 34% to 16%.
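
To make the tie-breaking point concrete, here is a schematic sketch of the 
admission decision in question (illustrative names only, not Caffeine or OHC 
code):

{code}
// Schematic sketch: a frequency-based filter only admits a new entry
// ("candidate") if its estimated popularity beats the entry it would
// displace ("victim"). The tie-breaking rule is the detail discussed above.
final class FrequencyAdmissionSketch
{
    private final boolean preferCandidateOnTie;

    FrequencyAdmissionSketch(boolean preferCandidateOnTie)
    {
        this.preferCandidateOnTie = preferCandidateOnTie;
    }

    boolean admit(int candidateFrequency, int victimFrequency)
    {
        if (candidateFrequency > victimFrequency)
            return true;   // clearly more popular: admit and evict the victim
        if (candidateFrequency < victimFrequency)
            return false;  // clearly less popular: keep the victim
        // Equal estimates: preferring the candidate is the variant that hurts
        // the LIRS traces (e.g. glimpse dropping from 34% to 16% hit rate).
        return preferCandidateOnTie;
    }
}
{code}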

Benedict, since I'm hesitant to start down the path of direct hash table 
access, it seems like a natural solution for OHC. There is always going to be a 
limit where being on-heap makes no sense, but it has been a nice place to 
explore algorithms. OHC uses a custom hash table, I believe because using CLHM 
with off-heap values had too much GC overhead in Cassandra's very large caches. 
I think the biggest win will come from leveraging what we've learned into 
improving OHC and the custom non-concurrent cache for Cassandra's 
thread-per-core redesign.

Does anyone know what our next steps are for moving CASSANDRA-10855 forward?

> Cache implementation using LIRS eviction for in-process page cache
> --
>
> Key: CASSANDRA-11452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11452
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>
> Following up from CASSANDRA-5863, to make best use of caching and to avoid 
> having to explicitly marking compaction accesses as non-cacheable, we need a 
> cache implementation that uses an eviction algorithm that can better handle 
> non-recurring accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261528#comment-15261528
 ] 

Chanh Le commented on CASSANDRA-10661:
--

Hi, I am using Cassandra 3.5 and I have a problem when creating an index with it.
CREATE CUSTOM INDEX ON bar (fname) USING 
'org.apache.cassandra.db.index.SSTableAttachedSecondaryIndex'
WITH OPTIONS = {
'analyzer_class':
'org.apache.cassandra.db.index.sasi.analyzer.NonTokenizingAnalyzer',
'case_sensitive': 'false'
};

It throws: unable to find custom indexer class 
'org.apache.cassandra.db.index.SSTableAttachedSecondaryIndex'



> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it currently targets 
> the 2.0 release. I want to make this an umbrella issue for all of the things 
> related to integrating SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11678) cassandra crash when enabling hints_compression

2016-04-27 Thread Weijian Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijian Lin updated CASSANDRA-11678:

Description: 
When I enable hints_compression and set the compression class to LZ4Compressor, 
Cassandra (v3.0.5, v3.5.0) will crash. Is that a bug, or is my configuration 
wrong?


*Exception in v3.5.0*

ERROR [HintsDispatcher:2] 2016-04-26 15:02:56,970
HintsDispatchExecutor.java:225 - Failed to dispatch hints file
abc4dda2-b551-427e-bb0b-e383d4a392e1-1461654138963-1.hints: file is
corrupted ({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
[apache-cassandra-3.5.0.jar:3.5.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_65]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_65]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
Caused by: java.io.EOFException: null
at
org.apache.cassandra.io.util.RebufferingInputStream.readByte(RebufferingInputStream.java:146)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.io.util.RebufferingInputStream.readPrimitiveSlowly(RebufferingInputStream.java:108)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.io.util.RebufferingInputStream.readInt(RebufferingInputStream.java:188)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:297)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:280)
~[apache-cassandra-3.5.0.jar:3.5.0]
... 15 common frames omitted



*Exception in v3.0.5*

ERROR [HintsDispatcher:2] 2016-04-26 15:54:46,294
HintsDispatchExecutor.java:225 - Failed to dispatch hints file
8603be13-6878-4de3-8bc3-a7a7146b0376-1461657251205-1.hints: file is
corrupted ({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
[apache-cassandra-3.0.5.jar:3.0.5]
at 

[jira] [Created] (CASSANDRA-11678) cassandra crash when enabling hints_compression

2016-04-27 Thread Weijian Lin (JIRA)
Weijian Lin created CASSANDRA-11678:
---

 Summary: cassandra crash when enabling hints_compression
 Key: CASSANDRA-11678
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11678
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Local Write-Read Paths
 Environment: Centos 7
Reporter: Weijian Lin
Priority: Critical


When I enable hints_compression and set the compression class to LZ4Compressor, 
Cassandra (v3.0.5, v3.5.0) will crash. Is that a bug, or is my configuration 
wrong?



*Exception in v3.5.0*

ERROR [HintsDispatcher:2] 2016-04-26 15:02:56,970
HintsDispatchExecutor.java:225 - Failed to dispatch hints file
abc4dda2-b551-427e-bb0b-e383d4a392e1-1461654138963-1.hints: file is
corrupted ({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:284)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:254)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
[apache-cassandra-3.5.0.jar:3.5.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_65]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_65]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
Caused by: java.io.EOFException: null
at
org.apache.cassandra.io.util.RebufferingInputStream.readByte(RebufferingInputStream.java:146)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.io.util.RebufferingInputStream.readPrimitiveSlowly(RebufferingInputStream.java:108)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.io.util.RebufferingInputStream.readInt(RebufferingInputStream.java:188)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:297)
~[apache-cassandra-3.5.0.jar:3.5.0]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:280)
~[apache-cassandra-3.5.0.jar:3.5.0]
... 15 common frames omitted



*Exception in v3.0.5*

ERROR [HintsDispatcher:2] 2016-04-26 15:54:46,294
HintsDispatchExecutor.java:225 - Failed to dispatch hints file
8603be13-6878-4de3-8bc3-a7a7146b0376-1461657251205-1.hints: file is
corrupted ({})
org.apache.cassandra.io.FSReadError: java.io.EOFException
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91)
~[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259)
[apache-cassandra-3.0.5.jar:3.0.5]
at
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
[apache-cassandra-3.0.5.jar:3.0.5]
at

[jira] [Comment Edited] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261396#comment-15261396
 ] 

Stefania edited comment on CASSANDRA-11676 at 4/28/16 2:04 AM:
---

It's failing at this line here:

{code}
table_meta = self.session.cluster.metadata.keyspaces[self.ks].tables[table_name]
{code}

My best guess is that the keyspace or table metadata may not have been received 
by the driver on the control connection yet, since they are created with 
cassandra-stress and not through the session opened in the test.

I've added some checks to refresh the metadata if either ks or table metadata 
are not available. I hope this is sufficient.

PR [here|https://github.com/riptano/cassandra-dtest/pull/961]


was (Author: stefania):
It's failing at this line here:

{code}
table_meta = self.session.cluster.metadata.keyspaces[self.ks].tables[table_name]
{code}

I think either the keyspace or table metadata may not have been received by the 
driver on the control connection yet, since they are created with 
cassandra-stress and not through the session opened in the test.

I've added some checks to refresh the metadata if either ks or table metadata 
are not available yet. I hope this is sufficient.

PR [here|https://github.com/riptano/cassandra-dtest/pull/961]

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> Failed on the most recent trunk-offheap job; example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11676:
-
Reviewer: Russ Hatch
  Status: Patch Available  (was: In Progress)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> Failed on the most recent trunk-offheap job; example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261396#comment-15261396
 ] 

Stefania commented on CASSANDRA-11676:
--

It's failing at this line here:

{code}
table_meta = self.session.cluster.metadata.keyspaces[self.ks].tables[table_name]
{code}

I think either the keyspace or table metadata may not have been received by the 
driver on the control connection yet, since they are created with 
cassandra-stress and not through the session opened in the test.

I've added some checks to refresh the metadata if either ks or table metadata 
are not available yet. I hope this is sufficient.

PR [here|https://github.com/riptano/cassandra-dtest/pull/961]
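
The check is roughly the following (a sketch assuming the DataStax Python 
driver's refresh_keyspace_metadata/refresh_table_metadata methods; the helper 
name is illustrative, see the PR for the actual change):

{code}
# Sketch: force the control connection to re-fetch schema metadata when the
# keyspace or table created by cassandra-stress is not yet visible.
def get_table_meta(session, ks, table_name):
    cluster = session.cluster
    if ks not in cluster.metadata.keyspaces:
        cluster.refresh_keyspace_metadata(ks)
    if table_name not in cluster.metadata.keyspaces[ks].tables:
        cluster.refresh_table_metadata(ks, table_name)
    return cluster.metadata.keyspaces[ks].tables[table_name]
{code}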

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> Failed on the most recent trunk-offheap job; example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11137:
-
Fix Version/s: 2.2.7
   3.0.6

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6, 3.0.6, 2.2.7
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261352#comment-15261352
 ] 

Stefania commented on CASSANDRA-11137:
--

Code and tests look good: all dtests passed and failing unit tests were either 
unrelated timeouts or also failing on unpatched branches.

Committed to 2.2 as 88f22b9692c6fdddf837556f13140d949afe0d28 and up-merged 
(with -s ours for trunk). Also added a section to NEWS.txt.

Test PR closed.

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6, 3.0.6, 2.2.7
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b360653f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b360653f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b360653f

Branch: refs/heads/cassandra-3.0
Commit: b360653fcfe07a7af66107ef9e55fdc9e33c1d0a
Parents: 7a2be8f 88f22b9
Author: Stefania Alborghetti 
Authored: Thu Apr 28 09:12:56 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:18:07 2016 +0800

--
 CHANGES.txt | 1 +
 NEWS.txt| 8 
 .../apache/cassandra/serializers/TimestampSerializer.java   | 9 ++---
 .../apache/cassandra/cql3/validation/entities/JsonTest.java | 6 --
 4 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b360653f/CHANGES.txt
--
diff --cc CHANGES.txt
index 8877fa9,91179b3..46206b1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,5 +1,19 @@@
 -2.2.7
 +3.0.6
 + * Don't require HEAP_NEW_SIZE to be set when using G1 (CASSANDRA-11600)
 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
 + * Ignore all LocalStrategy keyspaces for streaming and other related
 +   operations (CASSANDRA-11627)
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * JSON datetime formatting needs timezone (CASSANDRA-11137)
   * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
   * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
   * Add missing files to debian packages (CASSANDRA-11642)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b360653f/NEWS.txt
--
diff --cc NEWS.txt
index 1b982cb,a3ba0dd..d13f94f
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,7 -13,15 +13,15 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -2.2.7
++3.0.6
+ =
+ 
+ New features
+ 
 -- JSON timestamps are now in UTC and contain the timezone information, see
 -  CASSANDRA-11137 for more details.
++   - JSON timestamps are now in UTC and contain the timezone information, see
++ CASSANDRA-11137 for more details.
+ 
 -2.2.6
 +3.0.5
  =
  
  Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b360653f/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index fbd98d1,77a5df9..9bd9a8d
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,26 -96,16 +97,29 @@@ public class TimestampSerializer implem
  }
  };
  
 +private static final String UTC_FORMAT = dateStringPatterns[40];
 +private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new ThreadLocal<SimpleDateFormat>()
 +{
 +protected SimpleDateFormat initialValue()
 +{
 +SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +return sdf;
 +}
 +};
 +
+ private static final String TO_JSON_FORMAT = dateStringPatterns[19];
  private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
  {
  protected SimpleDateFormat initialValue()
  {
- return new SimpleDateFormat(dateStringPatterns[15]);
+ SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+ sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+ return sdf;
  }
  };
 +
 +
  
  public static final TimestampSerializer instance = new 
TimestampSerializer();
  


[1/6] cassandra git commit: JSON datetime formatting needs timezone (backported from trunk)

2016-04-27 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 e5c402780 -> 88f22b969
  refs/heads/cassandra-3.0 7a2be8fa4 -> b360653fc
  refs/heads/trunk 4254de17f -> 2bb1bfcb9


JSON datetime formatting needs timezone (backported from trunk)

patch by Alex Petrov; reviewed by Stefania Alborghetti for CASSANDRA-11137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88f22b96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88f22b96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88f22b96

Branch: refs/heads/cassandra-2.2
Commit: 88f22b9692c6fdddf837556f13140d949afe0d28
Parents: e5c4027
Author: Alex Petrov 
Authored: Thu Apr 28 08:59:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:10:18 2016 +0800

--
 CHANGES.txt  |  1 +
 NEWS.txt |  8 
 .../cassandra/serializers/TimestampSerializer.java   | 11 +++
 .../cassandra/cql3/validation/entities/JsonTest.java |  6 --
 4 files changed, 20 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3641816..91179b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * JSON datetime formatting needs timezone (CASSANDRA-11137)
  * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e8f4e66..a3ba0dd 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.7
+=
+
+New features
+
+- JSON timestamps are now in UTC and contain the timezone information, see
+  CASSANDRA-11137 for more details.
+
 2.2.6
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 78ee7e7..77a5df9 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -22,7 +22,7 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import java.nio.ByteBuffer;
 import java.text.SimpleDateFormat;
 import java.text.ParseException;
-import java.util.Date;
+import java.util.*;
 import java.util.regex.Pattern;
 
 import org.apache.commons.lang3.time.DateUtils;
@@ -48,11 +48,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 "yyyy-MM-dd HH:mm:ssX",
 "yyyy-MM-dd HH:mm:ssXX",
 "yyyy-MM-dd HH:mm:ssXXX",
-"yyyy-MM-dd HH:mm:ss.SSS",   // TO_JSON_FORMAT
+"yyyy-MM-dd HH:mm:ss.SSS",
 "yyyy-MM-dd HH:mm:ss.SSS z",
 "yyyy-MM-dd HH:mm:ss.SSS zz",
 "yyyy-MM-dd HH:mm:ss.SSS zzz",
-"yyyy-MM-dd HH:mm:ss.SSSX",
+"yyyy-MM-dd HH:mm:ss.SSSX", // TO_JSON_FORMAT
 "yyyy-MM-dd HH:mm:ss.SSSXX",
 "yyyy-MM-dd HH:mm:ss.SSSXXX",
 "yyyy-MM-dd'T'HH:mm",
@@ -96,11 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
+private static final String TO_JSON_FORMAT = dateStringPatterns[19];
 private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
 {
 protected SimpleDateFormat initialValue()
 {
-return new SimpleDateFormat(dateStringPatterns[15]);
+SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+return sdf;
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
index 2c471b0..824d436 100644
--- 

[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fc10676
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fc10676
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fc10676

Branch: refs/heads/trunk
Commit: 3fc106761ca2618f4a1af518ca7b51f182790976
Parents: 7a2be8f 88f22b9
Author: Stefania Alborghetti 
Authored: Thu Apr 28 09:12:56 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:13:41 2016 +0800

--
 CHANGES.txt | 1 +
 NEWS.txt| 8 
 .../apache/cassandra/serializers/TimestampSerializer.java   | 9 ++---
 .../apache/cassandra/cql3/validation/entities/JsonTest.java | 6 --
 4 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fc10676/CHANGES.txt
--
diff --cc CHANGES.txt
index 8877fa9,91179b3..46206b1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,5 +1,19 @@@
 -2.2.7
 +3.0.6
 + * Don't require HEAP_NEW_SIZE to be set when using G1 (CASSANDRA-11600)
 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
 + * Ignore all LocalStrategy keyspaces for streaming and other related
 +   operations (CASSANDRA-11627)
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * JSON datetime formatting needs timezone (CASSANDRA-11137)
   * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
   * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
   * Add missing files to debian packages (CASSANDRA-11642)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fc10676/NEWS.txt
--
diff --cc NEWS.txt
index 1b982cb,a3ba0dd..3dcb387
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,7 -13,15 +13,15 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -2.2.7
++3.0.6
+ =
+ 
+ New features
+ 
+ - JSON timestamps are now in UTC and contain the timezone information, see
+   CASSANDRA-11137 for more details.
+ 
 -2.2.6
 +3.0.5
  =
  
  Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fc10676/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index fbd98d1,77a5df9..9bd9a8d
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,26 -96,16 +97,29 @@@ public class TimestampSerializer implem
  }
  };
  
 +private static final String UTC_FORMAT = dateStringPatterns[40];
 +private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new ThreadLocal<SimpleDateFormat>()
 +{
 +protected SimpleDateFormat initialValue()
 +{
 +SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +return sdf;
 +}
 +};
 +
+ private static final String TO_JSON_FORMAT = dateStringPatterns[19];
  private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
  {
  protected SimpleDateFormat initialValue()
  {
- return new SimpleDateFormat(dateStringPatterns[15]);
+ SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+ sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+ return sdf;
  }
  };
 +
 +
  
  public static final TimestampSerializer instance = new 
TimestampSerializer();
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fc10676/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java

[2/6] cassandra git commit: JSON datetime formatting needs timezone (backported from trunk)

2016-04-27 Thread stefania
JSON datetime formatting needs timezone (backported from trunk)

patch by Alex Petrov; reviewed by Stefania Alborghetti for CASSANDRA-11137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88f22b96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88f22b96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88f22b96

Branch: refs/heads/cassandra-3.0
Commit: 88f22b9692c6fdddf837556f13140d949afe0d28
Parents: e5c4027
Author: Alex Petrov 
Authored: Thu Apr 28 08:59:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:10:18 2016 +0800

--
 CHANGES.txt  |  1 +
 NEWS.txt |  8 
 .../cassandra/serializers/TimestampSerializer.java   | 11 +++
 .../cassandra/cql3/validation/entities/JsonTest.java |  6 --
 4 files changed, 20 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3641816..91179b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * JSON datetime formatting needs timezone (CASSANDRA-11137)
  * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e8f4e66..a3ba0dd 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.7
+=
+
+New features
+
+- JSON timestamps are now in UTC and contain the timezone information, see
+  CASSANDRA-11137 for more details.
+
 2.2.6
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 78ee7e7..77a5df9 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -22,7 +22,7 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import java.nio.ByteBuffer;
 import java.text.SimpleDateFormat;
 import java.text.ParseException;
-import java.util.Date;
+import java.util.*;
 import java.util.regex.Pattern;
 
 import org.apache.commons.lang3.time.DateUtils;
@@ -48,11 +48,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 "yyyy-MM-dd HH:mm:ssX",
 "yyyy-MM-dd HH:mm:ssXX",
 "yyyy-MM-dd HH:mm:ssXXX",
-"yyyy-MM-dd HH:mm:ss.SSS",   // TO_JSON_FORMAT
+"yyyy-MM-dd HH:mm:ss.SSS",
 "yyyy-MM-dd HH:mm:ss.SSS z",
 "yyyy-MM-dd HH:mm:ss.SSS zz",
 "yyyy-MM-dd HH:mm:ss.SSS zzz",
-"yyyy-MM-dd HH:mm:ss.SSSX",
+"yyyy-MM-dd HH:mm:ss.SSSX", // TO_JSON_FORMAT
 "yyyy-MM-dd HH:mm:ss.SSSXX",
 "yyyy-MM-dd HH:mm:ss.SSSXXX",
 "yyyy-MM-dd'T'HH:mm",
@@ -96,11 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
+private static final String TO_JSON_FORMAT = dateStringPatterns[19];
 private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
 {
 protected SimpleDateFormat initialValue()
 {
-return new SimpleDateFormat(dateStringPatterns[15]);
+SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+return sdf;
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
index 2c471b0..824d436 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
@@ -618,8 +618,10 @@ public class 

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-27 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bb1bfcb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bb1bfcb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bb1bfcb

Branch: refs/heads/trunk
Commit: 2bb1bfcb9525b2f43a9f1297861662e528b3e96f
Parents: 4254de1 3fc1067
Author: Stefania Alborghetti 
Authored: Thu Apr 28 09:15:26 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:16:51 2016 +0800

--
 CHANGES.txt | 2 +-
 NEWS.txt| 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bb1bfcb/CHANGES.txt
--
diff --cc CHANGES.txt
index 6466310,46206b1..f78ea90
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,4 +1,63 @@@
 -3.0.6
 +3.6
 + * Always perform collision check before joining ring (CASSANDRA-10134)
 + * SSTableWriter output discrepancy (CASSANDRA-11646)
 + * Fix potential timeout in NativeTransportService.testConcurrentDestroys 
(CASSANDRA-10756)
 + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
-  * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Add support to rebuild from specific range (CASSANDRA-10406)
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in noetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   

[3/6] cassandra git commit: JSON datetime formatting needs timezone (backported from trunk)

2016-04-27 Thread stefania
JSON datetime formatting needs timezone (backported from trunk)

patch by Alex Petrov; reviewed by Stefania Alborghetti for CASSANDRA-11137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88f22b96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88f22b96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88f22b96

Branch: refs/heads/trunk
Commit: 88f22b9692c6fdddf837556f13140d949afe0d28
Parents: e5c4027
Author: Alex Petrov 
Authored: Thu Apr 28 08:59:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 28 09:10:18 2016 +0800

--
 CHANGES.txt  |  1 +
 NEWS.txt |  8 
 .../cassandra/serializers/TimestampSerializer.java   | 11 +++
 .../cassandra/cql3/validation/entities/JsonTest.java |  6 --
 4 files changed, 20 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3641816..91179b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * JSON datetime formatting needs timezone (CASSANDRA-11137)
  * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e8f4e66..a3ba0dd 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.7
+=
+
+New features
+
+- JSON timestamps are now in UTC and contain the timezone information, see
+  CASSANDRA-11137 for more details.
+
 2.2.6
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 78ee7e7..77a5df9 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -22,7 +22,7 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import java.nio.ByteBuffer;
 import java.text.SimpleDateFormat;
 import java.text.ParseException;
-import java.util.Date;
+import java.util.*;
 import java.util.regex.Pattern;
 
 import org.apache.commons.lang3.time.DateUtils;
@@ -48,11 +48,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 "yyyy-MM-dd HH:mm:ssX",
 "yyyy-MM-dd HH:mm:ssXX",
 "yyyy-MM-dd HH:mm:ssXXX",
-"yyyy-MM-dd HH:mm:ss.SSS",   // TO_JSON_FORMAT
+"yyyy-MM-dd HH:mm:ss.SSS",
 "yyyy-MM-dd HH:mm:ss.SSS z",
 "yyyy-MM-dd HH:mm:ss.SSS zz",
 "yyyy-MM-dd HH:mm:ss.SSS zzz",
-"yyyy-MM-dd HH:mm:ss.SSSX",
+"yyyy-MM-dd HH:mm:ss.SSSX", // TO_JSON_FORMAT
 "yyyy-MM-dd HH:mm:ss.SSSXX",
 "yyyy-MM-dd HH:mm:ss.SSSXXX",
 "yyyy-MM-dd'T'HH:mm",
@@ -96,11 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
+private static final String TO_JSON_FORMAT = dateStringPatterns[19];
 private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
 {
 protected SimpleDateFormat initialValue()
 {
-return new SimpleDateFormat(dateStringPatterns[15]);
+SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+return sdf;
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/88f22b96/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
index 2c471b0..824d436 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
@@ -618,8 +618,10 @@ public class JsonTest 

[jira] [Commented] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261327#comment-15261327
 ] 

Russ Hatch commented on CASSANDRA-11636:


100 test iterations without a single failure; this should be good to resolve.

> dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
> ---
>
> Key: CASSANDRA-11636
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11636
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test
> Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative
> {noformat}
> Error Message
> Problem stopping node node1
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Default role created by node1
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in 
> restart_node_doesnt_lose_auth_data_test
> self.cluster.stop()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop
> if not node.stop(wait, gently=gently):
>   File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop
> raise NodeError("Problem stopping node %s" % self.name)
> "Problem stopping node node1\n >> begin captured logging 
> << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Default role created by node1\n- >> 
> end captured logging << -"
> {noformat}
> This test was successful in the next build on a commit that does not appear 
> to be auth-related, and the test does not appear to be flappy. Looping over 
> the test, I have not gotten a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test

2016-04-27 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11636.

Resolution: Cannot Reproduce

> dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
> ---
>
> Key: CASSANDRA-11636
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11636
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test
> Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative
> {noformat}
> Error Message
> Problem stopping node node1
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Default role created by node1
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in 
> restart_node_doesnt_lose_auth_data_test
> self.cluster.stop()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop
> if not node.stop(wait, gently=gently):
>   File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop
> raise NodeError("Problem stopping node %s" % self.name)
> "Problem stopping node node1\n >> begin captured logging 
> << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Default role created by node1\n- >> 
> end captured logging << -"
> {noformat}
> This test was successful in the next build on a commit that does not appear 
> to be auth-related, and the test does not appear to be flappy. Looping over 
> the test, I have not gotten a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261320#comment-15261320
 ] 

Russ Hatch commented on CASSANDRA-11597:


I made a dtest PR with a "fix" of sorts for this test: 
https://github.com/riptano/cassandra-dtest/pull/960

I'm trying out a bulk run of the test to confirm it works as expected in the 
failure case, over here: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/86/

> dtest failure in 
> upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
> ---
>
> Key: CASSANDRA-11597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11597
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> Looks like a new flap. Example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test
> Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative
> {code}
> Error Message
> TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Upgrading to binary:2.0.17
> dtest: DEBUG: Shutting down node: node1
> dtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17
> dtest: DEBUG: Starting node1 on new version (binary:2.0.17)
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line 
> 215, in upgrade_with_counters_test
> client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 985, in add
> self.recv_add()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1013, in recv_add
> raise result.te
> "TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down 
> node: node1\ndtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new 
> version (binary:2.0.17)\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test

2016-04-27 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11597:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in 
> upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
> ---
>
> Key: CASSANDRA-11597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11597
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> Looks like a new flap. Example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test
> Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative
> {code}
> Error Message
> TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Upgrading to binary:2.0.17
> dtest: DEBUG: Shutting down node: node1
> dtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17
> dtest: DEBUG: Starting node1 on new version (binary:2.0.17)
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line 
> 215, in upgrade_with_counters_test
> client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 985, in add
> self.recv_add()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1013, in recv_add
> raise result.te
> "TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down 
> node: node1\ndtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new 
> version (binary:2.0.17)\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11676:


Assignee: Stefania  (was: DS Test Eng)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> failed on most recent trunk-offheap job example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261284#comment-15261284
 ] 

Stefania commented on CASSANDRA-11676:
--

It is not related to CASSANDRA-11675, I will take a look.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> failed on most recent trunk-offheap job example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest

2016-04-27 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania resolved CASSANDRA-11675.
--
   Resolution: Fixed
 Reviewer: Stefania
Fix Version/s: 3.6

> multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
> 
>
> Key: CASSANDRA-11675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11675
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.6
>
>
> these appear to be related, all failed on the same build (but appear to be 
> passing now).
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261283#comment-15261283
 ] 

Stefania commented on CASSANDRA-11675:
--

I merged the dtest PR for CASSANDRA-11631 a few minutes after committing, and 
that caused the intermittent failures.

I don't think CASSANDRA-11676 is related; it's the first time I have seen that 
failure.

> multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
> 
>
> Key: CASSANDRA-11675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11675
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.6
>
>
> these appear to be related, all failed on the same build (but appear to be 
> passing now).
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11677) Incredibly slow jolokia response times

2016-04-27 Thread Andrew Jorgensen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261231#comment-15261231
 ] 

Andrew Jorgensen commented on CASSANDRA-11677:
--

So this actually appears to be a jolokia problem. I was able to downgrade to 
jolokia 1.2.3, and now metrics are coming in fine and requests to that endpoint 
are down to only a couple of seconds. I am not sure what changed between 
jolokia 1.2.3 and 1.3.3, but it appears to be causing the issue.

> Incredibly slow jolokia response times
> --
>
> Key: CASSANDRA-11677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11677
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Jorgensen
>
> I am seeing some very slow jolokia request times on my Cassandra 3.0 cluster. 
> Specifically when running the following:
> {code}
> curl 127.0.0.1:8778/jolokia/list
> {code}
> on a slightly loaded cluster I am seeing request times of around 30-40 seconds, 
> and on a more heavily loaded cluster I am seeing request times around the 
> 2-minute mark. We are currently using jolokia 1.3.2 and v4 of the diamond 
> collector. I also have a Cassandra 1.1 cluster with the same load and number 
> of nodes, and running the same curl command there comes back in about 1 second.
> Is there anything I can do to help diagnose what is causing the slowdown, or 
> has anyone else experienced this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261228#comment-15261228
 ] 

Russ Hatch commented on CASSANDRA-11636:


trying a bulk run here: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/85/

> dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
> ---
>
> Key: CASSANDRA-11636
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11636
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test
> Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative
> {noformat}
> Error Message
> Problem stopping node node1
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Default role created by node1
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in 
> restart_node_doesnt_lose_auth_data_test
> self.cluster.stop()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop
> if not node.stop(wait, gently=gently):
>   File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop
> raise NodeError("Problem stopping node %s" % self.name)
> "Problem stopping node node1\n >> begin captured logging 
> << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Default role created by node1\n- >> 
> end captured logging << -"
> {noformat}
> This test was successful in the next build on a commit that does not appear 
> to be auth-related, and the test does not appear to be flappy. Looping over 
> the test, I have not gotten a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test

2016-04-27 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11636:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
> ---
>
> Key: CASSANDRA-11636
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11636
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test
> Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative
> {noformat}
> Error Message
> Problem stopping node node1
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Default role created by node1
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in 
> restart_node_doesnt_lose_auth_data_test
> self.cluster.stop()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop
> if not node.stop(wait, gently=gently):
>   File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop
> raise NodeError("Problem stopping node %s" % self.name)
> "Problem stopping node node1\n >> begin captured logging 
> << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Default role created by node1\n- >> 
> end captured logging << -"
> {noformat}
> This test was successful in the next build on a commit that does not appear 
> to be auth-related, and the test does not appear to be flappy. Looping over 
> the test, I have not gotten a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test

2016-04-27 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261213#comment-15261213
 ] 

Philip Thompson commented on CASSANDRA-11597:
-

I wish :(. After the upgrade to 2.0.17, it then undergoes an upgrade to 2.1. We 
won't be able to retire this until 2.1 is EOL.

> dtest failure in 
> upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
> ---
>
> Key: CASSANDRA-11597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11597
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> Looks like a new flap. Example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test
> Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative
> {code}
> Error Message
> TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Upgrading to binary:2.0.17
> dtest: DEBUG: Shutting down node: node1
> dtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17
> dtest: DEBUG: Starting node1 on new version (binary:2.0.17)
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line 
> 215, in upgrade_with_counters_test
> client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 985, in add
> self.recv_add()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1013, in recv_add
> raise result.te
> "TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down 
> node: node1\ndtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new 
> version (binary:2.0.17)\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261211#comment-15261211
 ] 

Russ Hatch commented on CASSANDRA-11597:


[~philipthompson] If I understand correctly, this test always starts on 1.2 and 
upgrades to 2.0.17. If another 2.0 release is unlikely, can we just retire this 
test?

> dtest failure in 
> upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
> ---
>
> Key: CASSANDRA-11597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11597
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> Looks like a new flap. Example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test
> Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative
> {code}
> Error Message
> TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Upgrading to binary:2.0.17
> dtest: DEBUG: Shutting down node: node1
> dtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17
> dtest: DEBUG: Starting node1 on new version (binary:2.0.17)
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line 
> 215, in upgrade_with_counters_test
> client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 985, in add
> self.recv_add()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1013, in recv_add
> raise result.te
> "TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down 
> node: node1\ndtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new 
> version (binary:2.0.17)\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11677) Incredibly slow jolokia response times

2016-04-27 Thread Andrew Jorgensen (JIRA)
Andrew Jorgensen created CASSANDRA-11677:


 Summary: Incredibly slow jolokia response times
 Key: CASSANDRA-11677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11677
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrew Jorgensen


I am seeing some very slow jolokia request times on my Cassandra 3.0 cluster. 
Specifically when running the following:

{code}
curl 127.0.0.1:8778/jolokia/list
{code}

on a slightly loaded cluster I am seeing request times of around 30-40 seconds, and 
on a more heavily loaded cluster I am seeing request times around the 2-minute 
mark. We are currently using jolokia 1.3.2 and v4 of the diamond collector. I 
also have a Cassandra 1.1 cluster with the same load and number of nodes, and 
running the same curl command there comes back in about 1 second.

Is there anything I can do to help diagnose what is causing the slowdown, or has 
anyone else experienced this?
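
One way to narrow this down is to time the equivalent work over plain JMX, since 
/jolokia/list essentially enumerates every registered MBean and its metadata. A 
minimal sketch follows, assuming the default Cassandra JMX port (7199) and no JMX 
authentication; the class name and output format are illustrative only:

{code}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxListTiming
{
    public static void main(String[] args) throws Exception
    {
        // Assumes the default Cassandra JMX port (7199) and no JMX authentication.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            long start = System.nanoTime();
            Set<ObjectName> names = mbs.queryNames(null, null);
            int described = 0;
            for (ObjectName name : names)
            {
                try
                {
                    // Fetching MBeanInfo for every bean approximates what /jolokia/list has to do.
                    mbs.getMBeanInfo(name);
                    described++;
                }
                catch (Exception e)
                {
                    // An MBean may be unregistered mid-enumeration; skip it.
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(described + " of " + names.size() + " mbeans described in " + elapsedMs + " ms");
        }
        finally
        {
            connector.close();
        }
    }
}
{code}

If this completes in seconds while /jolokia/list takes minutes, the slowdown is 
likely inside the agent rather than in the MBean server itself.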



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache

2016-04-27 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261121#comment-15261121
 ] 

Pavel Yaskevich commented on CASSANDRA-5863:


+1 on the changes, much more readable now. Maybe one more nit from my original 
comments: is there any way we can change ChunkCache#invalidatePosition so that, 
instead of doing instance-of checks and redirects to CachedRebufferer, it simply 
does invalidate(new Key(...))? Since ChunkReader is effectively stateless, maybe 
we could drop RebuffererFactory and use ChunkReader as the source of all 
Rebufferers. That way, IMHO, it is clearer that ChunkReader is the source of the 
data and does no buffering of its own; if buffering/caching is needed, it can 
produce a Rebufferer which manages the memory. WDYT?

Also, how do you want to proceed with this? After all of the changes, can you 
squash/rebase so I can push?
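
To make that concrete, here is a rough, self-contained sketch of the 
keyed-invalidation shape suggested above. The interface and class names are 
hypothetical stand-ins for illustration, not the actual Cassandra types:

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for illustration only; not the real Cassandra interface.
interface ChunkReader
{
    ByteBuffer readChunk(long position); // stateless: reads exactly one chunk
}

final class ChunkCacheSketch
{
    // The cache key is (reader identity, chunk position), so invalidation becomes
    // a plain key removal with no instance-of checks or rebufferer redirects.
    static final class Key
    {
        final ChunkReader reader;
        final long position;

        Key(ChunkReader reader, long position)
        {
            this.reader = reader;
            this.position = position;
        }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof Key))
                return false;
            Key other = (Key) o;
            return other.reader == reader && other.position == position;
        }

        @Override
        public int hashCode()
        {
            return 31 * System.identityHashCode(reader) + Long.hashCode(position);
        }
    }

    private final ConcurrentHashMap<Key, ByteBuffer> cache = new ConcurrentHashMap<>();

    // The ChunkReader itself is the source of data; the cache only manages memory.
    ByteBuffer rebuffer(ChunkReader reader, long position)
    {
        return cache.computeIfAbsent(new Key(reader, position),
                                     key -> key.reader.readChunk(key.position));
    }

    void invalidatePosition(ChunkReader reader, long position)
    {
        cache.remove(new Key(reader, position));
    }
}
{code}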



> In process (uncompressed) page cache
> 
>
> Key: CASSANDRA-5863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: T Jake Luciani
>Assignee: Branimir Lambov
>  Labels: performance
> Fix For: 3.x
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a 
> byte[], sends it to ICompressor, gets back another byte[] and verifies a 
> checksum.  
> This process is where the majority of time is spent in a read request.  
> Before compression, we would have zero-copy of data and could respond 
> directly from the page-cache.
> It would be useful to have some kind of Chunk cache that could speed up this 
> process for hot data, possibly off heap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11432) Counter values become under-counted when running repair.

2016-04-27 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15261023#comment-15261023
 ] 

Dikang Gu commented on CASSANDRA-11432:
---

[~iamaleksey], any ideas about this? Thanks!

> Counter values become under-counted when running repair.
> 
>
> Key: CASSANDRA-11432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11432
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dikang Gu
>Assignee: Aleksey Yeschenko
>
> We are experimenting with Counters in Cassandra 2.2.5. Our setup is that we have 
> 6 nodes across three different regions, and in each region the replication 
> factor is 2. Basically, each node holds a full copy of the data.
> We are writing to the cluster with CL = 2 and reading with CL = 1.
> While doing 30k/s counter increments/decrements per node, we are also double 
> writing to our mysql tier, so that we can measure the accuracy of the C* 
> counter compared to mysql.
> The experiment result was great at the beginning: the counter values in C* and 
> mysql were very close, with a difference of less than 0.1%.
> But when we start to run repair on one node, the counter value in C* becomes 
> much less than the value in mysql, and the difference grows to more than 1%.
> My question is: is it a known problem that counter values become under-counted 
> while repair is running? Should we avoid running repair for counter tables?
> Thanks.
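
For reference, a minimal sketch of the access pattern described above (counter 
increments written at CL TWO and read back at CL ONE), assuming the DataStax Java 
driver; the contact point, keyspace, table, and datacenter names are placeholders:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class CounterClSketch
{
    public static void main(String[] args)
    {
        // Contact point, keyspace, table and datacenter names are placeholders.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect();
            session.execute("CREATE KEYSPACE IF NOT EXISTS counter_ks WITH replication = "
                          + "{'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 2, 'dc3': 2}");
            session.execute("CREATE TABLE IF NOT EXISTS counter_ks.hits "
                          + "(id text PRIMARY KEY, hits counter)");

            // Increment at CL = TWO, matching the write path described above.
            session.execute(new SimpleStatement(
                    "UPDATE counter_ks.hits SET hits = hits + 1 WHERE id = ?", "key-1")
                    .setConsistencyLevel(ConsistencyLevel.TWO));

            // Read back at CL = ONE, matching the read path described above.
            Row row = session.execute(new SimpleStatement(
                    "SELECT hits FROM counter_ks.hits WHERE id = ?", "key-1")
                    .setConsistencyLevel(ConsistencyLevel.ONE)).one();
            System.out.println("hits = " + row.getLong("hits"));
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}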



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair

2016-04-27 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260982#comment-15260982
 ] 

T Jake Luciani commented on CASSANDRA-9766:
---

[testall | 
https://cassci.datastax.com/view/Dev/view/tjake/job/tjake-faster-streaming-testall/]
[dtest | 
https://cassci.datastax.com/view/Dev/view/tjake/job/tjake-faster-streaming-dtest/]

> Bootstrap outgoing streaming speeds are much slower than during repair
> --
>
> Key: CASSANDRA-9766
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9766
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.2. more details in the pdf attached 
>Reporter: Alexei K
>Assignee: T Jake Luciani
>  Labels: performance
> Fix For: 3.x
>
> Attachments: problem.pdf
>
>
> I have a cluster in the Amazon cloud; it's described in detail in the attachment. 
> What I've noticed is that during bootstrap we never go above 12MB/sec 
> transmission speeds, and those speeds also flatline, almost as if we're hitting 
> some sort of limit (this remains true for other tests that I've run); however, 
> during repair we see much higher, variable sending rates. I've provided network 
> charts in the attachment as well. Is there an explanation for this? Is something 
> wrong with my configuration, or is it a possible bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11673) (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11673:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup
> 
>
> Key: CASSANDRA-11673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11673
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Philip Thompson
>  Labels: dtest
>
> This test was originally waiting on CASSANDRA-11179, which I recently removed 
> the 'require' annotation from (since 11179 is committed). Not sure why it's 
> failing on 2.1 now; perhaps it didn't get committed.
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/339/testReport/bootstrap_test/TestBootstrap/test_cleanup
> Failed on CassCI build cassandra-2.1_offheap_dtest #339



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11674:

Status: Patch Available  (was: Open)

https://github.com/riptano/cassandra-dtest/pull/959

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-11674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11674
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Philip Thompson
>Priority: Minor
>  Labels: dtest
>
> single failure, but might be worth looking into to see if it repros at all.
> http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build cassandra-3.0_dtest #669



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11674:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-11674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11674
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Philip Thompson
>Priority: Minor
>  Labels: dtest
>
> single failure, but might be worth looking into to see if it repros at all.
> http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build cassandra-3.0_dtest #669



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260775#comment-15260775
 ] 

Russ Hatch commented on CASSANDRA-11676:


Seems like it could be related to CASSANDRA-11675, since it's in the same test 
module and started around the same time.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
> --
>
> Key: CASSANDRA-11676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> failed on most recent trunk-offheap job example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows
> Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260777#comment-15260777
 ] 

Russ Hatch commented on CASSANDRA-11675:


CASSANDRA-11676 may be related

> multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
> 
>
> Key: CASSANDRA-11675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11675
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> these appear to be related, all failed on the same build (but appear to be 
> passing now).
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows

2016-04-27 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11676:
--

 Summary: dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
 Key: CASSANDRA-11676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11676
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


failed on most recent trunk-offheap job example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows

Failed on CassCI build trunk_offheap_dtest #162



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11626) cqlsh fails and exists on non-ascii chars

2016-04-27 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259587#comment-15259587
 ] 

Wei Deng edited comment on CASSANDRA-11626 at 4/27/16 7:03 PM:
---

Yeah I don't think it's the same problem as CASSANDRA-11124. See the following 
using latest trunk build:

{noformat}
root@node0:~/cassandra-trunk# ~/cassandra-trunk/bin/cqlsh --encoding=utf-8 
--debug
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.6-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> ä
Invalid syntax at line 1, char 1
Traceback (most recent call last):
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2636, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2625, in main
shell.cmdloop()
  File "/root/cassandra-trunk/bin/cqlsh.py", line 1114, in cmdloop
if self.onecmd(self.statement.getvalue()):
  File "/root/cassandra-trunk/bin/cqlsh.py", line 1139, in onecmd
self.printerr('  %s' % statementline)
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2314, in printerr
self.writeresult(text, color, newline=newline, out=sys.stderr)
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2303, in writeresult
out.write(self.applycolor(str(text), color) + ('\n' if newline else ''))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: 
ordinal not in range(128)
{noformat}

This is easily reproducible on a number of C* 3.x versions (3.0.4 and 3.6).


was (Author: weideng):
Yeah I don't think it's the same problem as CASSANDRA-11124. See the following 
using latest trunk build:

{noformat}
root@node0:~/cassandra-trunk# ~/cassandra-trunk/bin/cqlsh --encoding=utf-8 
--debug
Using CQL driver: 
Using connect timeout: 5 seconds
Using 'utf-8' encoding
Using ssl: False
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.6-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> ä
Invalid syntax at line 1, char 1
Traceback (most recent call last):
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2636, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2625, in main
shell.cmdloop()
  File "/root/cassandra-trunk/bin/cqlsh.py", line 1114, in cmdloop
if self.onecmd(self.statement.getvalue()):
  File "/root/cassandra-trunk/bin/cqlsh.py", line 1139, in onecmd
self.printerr('  %s' % statementline)
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2314, in printerr
self.writeresult(text, color, newline=newline, out=sys.stderr)
  File "/root/cassandra-trunk/bin/cqlsh.py", line 2303, in writeresult
out.write(self.applycolor(str(text), color) + ('\n' if newline else ''))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: 
ordinal not in range(128)
{noformat}

This is easily reproducible on a number C* 3.x version (3.0.4 and 3.6).

> cqlsh fails and exists on non-ascii chars
> -
>
> Key: CASSANDRA-11626
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11626
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Priority: Minor
>
> Just seen on cqlsh on current trunk:
> To repro, copy {{ä}} (German umlaut) into cqlsh and press return.
> cqlsh errors out and immediately exits.
> {code}
> $ bin/cqlsh
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol 
> v3]
> Use HELP for help.
> cqlsh> ä
> Invalid syntax at line 1, char 1
> Traceback (most recent call last):
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2636, in 
> 
> main(*read_options(sys.argv[1:], os.environ))
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2625, in main
> shell.cmdloop()
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1114, in 
> cmdloop
> if self.onecmd(self.statement.getvalue()):
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1139, in onecmd
> self.printerr('  %s' % statementline)
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2314, in 
> printerr
> self.writeresult(text, color, newline=newline, out=sys.stderr)
>   File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2303, in 
> writeresult
> out.write(self.applycolor(str(text), color) + ('\n' if newline else ''))
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 
> 2: ordinal not in range(128)
> $ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest

2016-04-27 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11675:
--

 Summary: multiple dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
 Key: CASSANDRA-11675
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11675
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


these appear to be related, all failed on the same build (but appear to be 
passing now).

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/

http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest

2016-04-27 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260747#comment-15260747
 ] 

Russ Hatch commented on CASSANDRA-11675:


/cc [~Stefania]

> multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
> 
>
> Key: CASSANDRA-11675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11675
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> these appear to be related, all failed on the same build (but appear to be 
> passing now).
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/
> http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11666) dtest failure in topology_test.TestTopology.movement_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-11666.
-
Resolution: Duplicate

> dtest failure in topology_test.TestTopology.movement_test
> -
>
> Key: CASSANDRA-11666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11666
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/353/testReport/topology_test/TestTopology/movement_test
> Failed on CassCI build trunk_novnode_dtest #353



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11539:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in topology_test.TestTopology.movement_test
> -
>
> Key: CASSANDRA-11539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11539
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.x
>
>
> example failure:
> {noformat}
> Error Message
> values not within 16.00% of the max: (335.88, 404.31) ()
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-XGOyDd
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/topology_test.py", line 93, in 
> movement_test
> assert_almost_equal(sizes[1], sizes[2])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 75, in 
> assert_almost_equal
> assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within 
> %.2f%% of the max: %s (%s)" % (error * 100, args, error_message)
> "values not within 16.00% of the max: (335.88, 404.31) 
> ()\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-XGOyDd\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.5_novnode_dtest/22/testReport/topology_test/TestTopology/movement_test
> 
> I dug through this test's history on the trunk, 3.5, 3.0, and 2.2 branches. 
> It appears this test is stable and passing on 3.0 & 2.2 (which could be just 
> luck). On trunk & 3.5, however, this test has flapped a small number of times.
> The test's threshold is 16% and I found test failures in the 3.5 branch of 
> 16.2%, 16.9%, and 18.3%. In trunk I found 17.4% and 23.5% diff failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11665) dtest failure in topology_test.TestTopology.decommissioned_node_cant_rejoin_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11665:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in 
> topology_test.TestTopology.decommissioned_node_cant_rejoin_test
> 
>
> Key: CASSANDRA-11665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11665
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Philip Thompson
>  Labels: dtest
>
> intermittent failure, example failure:
> failed on trunk no-vnodes job
> "True is not false"
> http://cassci.datastax.com/job/trunk_novnode_dtest/351/testReport/topology_test/TestTopology/decommissioned_node_cant_rejoin_test
> Failed on CassCI build trunk_novnode_dtest #351



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9049) Run validation harness against a real cluster

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-9049.

Resolution: Fixed

A series of open source Jepsen tests written by Joel Knighton fulfill what we 
wanted here. https://github.com/riptano/jepsen

> Run validation harness against a real cluster
> -
>
> Key: CASSANDRA-9049
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9049
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>
> Currently we run against CCM nodes. We will get more useful data and feedback 
> if we run against real C* clusters, whether on dedicated hardware or 
> provisioned on a cloud.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8187) Create long-running Test Suite

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8187.

Resolution: Fixed

A series of open source Jepsen tests written by Joel Knighton fulfill what we 
wanted here. https://github.com/riptano/jepsen

> Create long-running Test Suite
> --
>
> Key: CASSANDRA-8187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8187
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>
> We need to start running tests that run for at least several hours. Our 
> current dtest suite is inadequate at catching data loss bugs and compaction 
> problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9007) Run stress nightly against trunk in a way that validates

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-9007.

Resolution: Fixed

A series of open source Jepsen tests written by Joel Knighton fulfill what we 
wanted here. https://github.com/riptano/jepsen

> Run stress nightly against trunk in a way that validates
> 
>
> Key: CASSANDRA-9007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9007
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Philip Thompson
>  Labels: monthly-release
>
> Stress has some very basic validation functionality when used without 
> workload profiles. It found a bug on trunk when I first ran it so it has 
> value even though the validation is basic.
> As a beachhead for the kind of blackbox validation that we are missing we can 
> start by running stress nightly or 24/7 in some rotation.
> There should be two jobs. One job has inverted success criteria (C* should 
> lose some data) and the job should only "pass" if the failure is detected. 
> This is just to prove that the harness reports failure if failure occurs.
> The other would be the real job that runs stress and parses the output for 
> reports of missing data.
> This job is the first pass and the basis of what we can point to when a 
> developer makes a change, implements a feature, or fixes a bug, and say "go add 
> validation to this job."
> Follow-on tickets to link to this:
> * Test multiple configurations
> * Get stress to validate more query functionality and APIs (counters, LWT, 
> batches)
> * Parse logs and fail tests on error level logs (great way to improve log 
> messages over time)
> * ?
> I am going to hold off on creating a ton of issues until we have a basic 
> version of the job running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11665) dtest failure in topology_test.TestTopology.decommissioned_node_cant_rejoin_test

2016-04-27 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11665:

Status: Patch Available  (was: Open)

https://github.com/riptano/cassandra-dtest/pull/958

> dtest failure in 
> topology_test.TestTopology.decommissioned_node_cant_rejoin_test
> 
>
> Key: CASSANDRA-11665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11665
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Philip Thompson
>  Labels: dtest
>
> intermittent failure, example failure:
> failed on trunk no-vnodes job
> "True is not false"
> http://cassci.datastax.com/job/trunk_novnode_dtest/351/testReport/topology_test/TestTopology/decommissioned_node_cant_rejoin_test
> Failed on CassCI build trunk_novnode_dtest #351



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-04-27 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11674:
--

 Summary: dtest failure in 
materialized_views_test.TestMaterializedViews.clustering_column_test
 Key: CASSANDRA-11674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11674
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng
Priority: Minor


single failure, but might be worth looking into to see if it repros at all.

http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test

Failed on CassCI build cassandra-3.0_dtest #669



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-27 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11363:
--
Reproduced In: 3.0.3, 2.1.13, 2.1.12  (was: 2.1.12, 2.1.13, 3.0.3)
   Status: Awaiting Feedback  (was: Open)

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there are between 600 and 800 client connections on each machine, and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances

2016-04-27 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-11647:

Status: Patch Available  (was: Open)

There were some new failures in testall/dtest that don't appear to be related 
to the patch, and that I wasn't able to reproduce locally (or on cassci, for 
that matter).

> Don't use static dataDirectories field in Directories instances
> ---
>
> Key: CASSANDRA-11647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11647
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 3.6
>
>
> Some of the changes to Directories by CASSANDRA-6696 use the static 
> {{dataDirectories}} field, instead of the instance field {{paths}}. This 
> complicates things for external code creating their own Directories instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances

2016-04-27 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256972#comment-15256972
 ] 

Blake Eggleston edited comment on CASSANDRA-11647 at 4/27/16 6:11 PM:
--

| *trunk* |
| [branch|https://github.com/bdeggleston/cassandra/tree/11647] |
| 
[dtests|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-dtest/4/]
 |
| 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-testall/3/]
 |


was (Author: bdeggleston):
| *trunk* |
| [branch|https://github.com/bdeggleston/cassandra/tree/11647] |
| 
[dtests|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-dtest/1/]
 |
| 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-testall/1/]
 |

> Don't use static dataDirectories field in Directories instances
> ---
>
> Key: CASSANDRA-11647
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11647
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 3.6
>
>
> Some of the changes to Directories by CASSANDRA-6696 use the static 
> {{dataDirectories}} field, instead of the instance field {{paths}}. This 
> complicates things for external code creating their own Directories instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-27 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260641#comment-15260641
 ] 

Paulo Motta commented on CASSANDRA-11363:
-

I haven't been able to reproduce this condition so far in a 2.1 
[cstar_perf|http://cstar.datastax.com/] cluster with the following spec: 1 
stress node, 3 Cassandra nodes, each node with 2x Intel(R) Xeon(R) CPU E5-2620 v2 
@ 2.10GHz (12 cores total), 64G RAM, 3 Samsung SSD 845DC EVO 240GB, mdadm RAID 0.

The test consisted of the following sequence of stress steps, each followed by 
{{nodetool tpstats}}:
* {{user 
profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml
 ops\(insert=1\) n=1M -rate threads=300}}
* {{user 
profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml
 ops\(simple1=1\) n=1M -rate threads=300}}
* {{user 
profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml
 ops\(range1=1\) n=1M -rate threads=300}}

At the end of 5 runs, the total number of blocked NTR threads was negligible (0 
for all runs, except one with 0.004% blocked). I will try running a larger 
mixed workload, ramping up the number of stress threads, and will also try it on 3.0.

Meanwhile, some JFR files, reproduction steps, or at least a more detailed 
description of the environment/workload needed to reproduce this would be greatly 
appreciated.
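
(Side note: the blocked Native-Transport-Requests counter that {{nodetool tpstats}} 
reports can also be read over JMX, which makes it easy to sample during a stress run. 
The sketch below is illustrative only; it assumes the default JMX port 7199 and the 
usual {{org.apache.cassandra.metrics}} ThreadPools MBean layout, so the exact object 
name should be verified against the node and version in question.)

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BlockedNtrCheck
{
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name for the Native-Transport-Requests pool; verify on the node.
            ObjectName blockedTasks = new ObjectName(
                "org.apache.cassandra.metrics:type=ThreadPools,path=transport," +
                "scope=Native-Transport-Requests,name=CurrentlyBlockedTasks");
            System.out.println("Currently blocked NTR tasks: " + mbs.getAttribute(blockedTasks, "Value"));
        }
        finally
        {
            connector.close();
        }
    }
}
{code}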

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there are between 600 and 800 client connections on each machine, and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 

[jira] [Created] (CASSANDRA-11673) (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup

2016-04-27 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11673:
--

 Summary: (2.1) dtest failure in 
bootstrap_test.TestBootstrap.test_cleanup
 Key: CASSANDRA-11673
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11673
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


This test was originally waiting on CASSANDRA-11179, from which I recently removed 
the 'require' annotation (since 11179 is committed). Not sure why it is failing 
on 2.1 now; perhaps the fix didn't get committed there.

http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/339/testReport/bootstrap_test/TestBootstrap/test_cleanup

Failed on CassCI build cassandra-2.1_offheap_dtest #339



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-27 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260535#comment-15260535
 ] 

Aleksey Yeschenko commented on CASSANDRA-11600:
---

Committed as 
[7a2be8fa4a539dde2553996d57df02453e213c2f|https://github.com/apache/cassandra/commit/7a2be8fa4a539dde2553996d57df02453e213c2f]
 to 3.0 and merged with trunk, thanks.

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.6
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-27 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11600:
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   3.0.6
   Status: Resolved  (was: Ready to Commit)

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.6
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-27 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4254de17
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4254de17
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4254de17

Branch: refs/heads/trunk
Commit: 4254de17f4416fbd032068f2223ba32c5e8d097b
Parents: 5c5cc54 7a2be8f
Author: Aleksey Yeschenko 
Authored: Wed Apr 27 18:26:22 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 18:26:22 2016 +0100

--
 CHANGES.txt|  1 +
 conf/cassandra-env.ps1 | 14 +--
 conf/cassandra-env.sh  | 58 ++---
 3 files changed, 37 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4254de17/CHANGES.txt
--
diff --cc CHANGES.txt
index 50ec72b,8877fa9..6466310
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,5 +1,65 @@@
 -3.0.6
 +3.6
 + * Always perform collision check before joining ring (CASSANDRA-10134)
 + * SSTableWriter output discrepancy (CASSANDRA-11646)
 + * Fix potential timeout in NativeTransportService.testConcurrentDestroys 
(CASSANDRA-10756)
 + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Add support to rebuild from specific range (CASSANDRA-10406)
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in nodetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + 

[1/3] cassandra git commit: Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-27 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 3079ae60d -> 7a2be8fa4
  refs/heads/trunk 5c5cc540f -> 4254de17f


Don't require HEAP_NEW_SIZE to be set when using G1

patch by Blake Eggleston; reviewed by Paulo Motta for CASSANDRA-11600


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a2be8fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a2be8fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a2be8fa

Branch: refs/heads/cassandra-3.0
Commit: 7a2be8fa4a539dde2553996d57df02453e213c2f
Parents: 3079ae6
Author: Blake Eggleston 
Authored: Wed Apr 27 18:25:04 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 18:25:04 2016 +0100

--
 CHANGES.txt|  1 +
 conf/cassandra-env.ps1 | 14 +--
 conf/cassandra-env.sh  | 58 ++---
 3 files changed, 37 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6b6bc1f..8877fa9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Don't require HEAP_NEW_SIZE to be set when using G1 (CASSANDRA-11600)
  * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
  * Ignore all LocalStrategy keyspaces for streaming and other related
operations (CASSANDRA-11627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index a322a4d..794189f 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -133,7 +133,7 @@ Function CalculateHeapSizes
 return
 }
 
-if (($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE))
+if ((($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE)) -and ($using_cms -eq $true))
 {
 echo "Please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs.  
Aborting startup."
 exit 1
@@ -327,12 +327,6 @@ Function SetCassandraEnvironment
 # times. If in doubt, and if you do not particularly want to tweak, go
 # 100 MB per physical CPU core.
 
-#$env:MAX_HEAP_SIZE="4096M"
-#$env:HEAP_NEWSIZE="800M"
-CalculateHeapSizes
-
-ParseJVMInfo
-
 #GC log path has to be defined here since it needs to find CASSANDRA_HOME
 $env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log"""
 
@@ -352,6 +346,12 @@ Function SetCassandraEnvironment
 $defined_xms = $env:JVM_OPTS -like '*Xms*'
 $using_cms = $env:JVM_OPTS -like '*UseConcMarkSweepGC*'
 
+#$env:MAX_HEAP_SIZE="4096M"
+#$env:HEAP_NEWSIZE="800M"
+CalculateHeapSizes
+
+ParseJVMInfo
+
 # We only set -Xms and -Xmx if they were not defined on jvm.options file
 # If defined, both Xmx and Xms should be defined together.
 if (($defined_xmx -eq $false) -and ($defined_xms -eq $false))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 83fe4c5..0ba0c4e 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -121,6 +121,31 @@ case "$jvm" in
 ;;
 esac
 
+#GC log path has to be defined here because it needs to access CASSANDRA_HOME
+JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log"
+
+# Here we create the arguments that will get passed to the jvm when
+# starting cassandra.
+
+# Read user-defined JVM options from jvm.options file
+JVM_OPTS_FILE=$CASSANDRA_CONF/jvm.options
+for opt in `grep "^-" $JVM_OPTS_FILE`
+do
+  JVM_OPTS="$JVM_OPTS $opt"
+done
+
+# Check what parameters were defined on jvm.options file to avoid conflicts
+echo $JVM_OPTS | grep -q Xmn
+DEFINED_XMN=$?
+echo $JVM_OPTS | grep -q Xmx
+DEFINED_XMX=$?
+echo $JVM_OPTS | grep -q Xms
+DEFINED_XMS=$?
+echo $JVM_OPTS | grep -q UseConcMarkSweepGC
+USING_CMS=$?
+echo $JVM_OPTS | grep -q UseG1GC
+USING_G1=$?
+
 # Override these to set the amount of memory to allocate to the JVM at
 # start-up. For production use you may wish to adjust this for your
 # environment. MAX_HEAP_SIZE is the total amount of memory dedicated
@@ -143,42 +168,17 @@ esac
 #export MALLOC_ARENA_MAX=4
 
 # only calculate the size if it's not set manually
-if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" ]; then
+if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" -o $USING_G1 -eq 0 
]; then
 calculate_heap_sizes
-else
-if [ "x$MAX_HEAP_SIZE" = "x" ] ||  [ 

[2/3] cassandra git commit: Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-27 Thread aleksey
Don't require HEAP_NEW_SIZE to be set when using G1

patch by Blake Eggleston; reviewed by Paulo Motta for CASSANDRA-11600


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a2be8fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a2be8fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a2be8fa

Branch: refs/heads/trunk
Commit: 7a2be8fa4a539dde2553996d57df02453e213c2f
Parents: 3079ae6
Author: Blake Eggleston 
Authored: Wed Apr 27 18:25:04 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 18:25:04 2016 +0100

--
 CHANGES.txt|  1 +
 conf/cassandra-env.ps1 | 14 +--
 conf/cassandra-env.sh  | 58 ++---
 3 files changed, 37 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6b6bc1f..8877fa9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Don't require HEAP_NEW_SIZE to be set when using G1 (CASSANDRA-11600)
  * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
  * Ignore all LocalStrategy keyspaces for streaming and other related
operations (CASSANDRA-11627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index a322a4d..794189f 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -133,7 +133,7 @@ Function CalculateHeapSizes
 return
 }
 
-if (($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE))
+if ((($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE 
-and $env:HEAP_NEWSIZE)) -and ($using_cms -eq $true))
 {
 echo "Please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs.  
Aborting startup."
 exit 1
@@ -327,12 +327,6 @@ Function SetCassandraEnvironment
 # times. If in doubt, and if you do not particularly want to tweak, go
 # 100 MB per physical CPU core.
 
-#$env:MAX_HEAP_SIZE="4096M"
-#$env:HEAP_NEWSIZE="800M"
-CalculateHeapSizes
-
-ParseJVMInfo
-
 #GC log path has to be defined here since it needs to find CASSANDRA_HOME
 $env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log"""
 
@@ -352,6 +346,12 @@ Function SetCassandraEnvironment
 $defined_xms = $env:JVM_OPTS -like '*Xms*'
 $using_cms = $env:JVM_OPTS -like '*UseConcMarkSweepGC*'
 
+#$env:MAX_HEAP_SIZE="4096M"
+#$env:HEAP_NEWSIZE="800M"
+CalculateHeapSizes
+
+ParseJVMInfo
+
 # We only set -Xms and -Xmx if they were not defined on jvm.options file
 # If defined, both Xmx and Xms should be defined together.
 if (($defined_xmx -eq $false) -and ($defined_xms -eq $false))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 83fe4c5..0ba0c4e 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -121,6 +121,31 @@ case "$jvm" in
 ;;
 esac
 
+#GC log path has to be defined here because it needs to access CASSANDRA_HOME
+JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log"
+
+# Here we create the arguments that will get passed to the jvm when
+# starting cassandra.
+
+# Read user-defined JVM options from jvm.options file
+JVM_OPTS_FILE=$CASSANDRA_CONF/jvm.options
+for opt in `grep "^-" $JVM_OPTS_FILE`
+do
+  JVM_OPTS="$JVM_OPTS $opt"
+done
+
+# Check what parameters were defined on jvm.options file to avoid conflicts
+echo $JVM_OPTS | grep -q Xmn
+DEFINED_XMN=$?
+echo $JVM_OPTS | grep -q Xmx
+DEFINED_XMX=$?
+echo $JVM_OPTS | grep -q Xms
+DEFINED_XMS=$?
+echo $JVM_OPTS | grep -q UseConcMarkSweepGC
+USING_CMS=$?
+echo $JVM_OPTS | grep -q UseG1GC
+USING_G1=$?
+
 # Override these to set the amount of memory to allocate to the JVM at
 # start-up. For production use you may wish to adjust this for your
 # environment. MAX_HEAP_SIZE is the total amount of memory dedicated
@@ -143,42 +168,17 @@ esac
 #export MALLOC_ARENA_MAX=4
 
 # only calculate the size if it's not set manually
-if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" ]; then
+if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" -o $USING_G1 -eq 0 
]; then
 calculate_heap_sizes
-else
-if [ "x$MAX_HEAP_SIZE" = "x" ] ||  [ "x$HEAP_NEWSIZE" = "x" ]; then
-echo "please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs (see 
cassandra-env.sh)"
-exit 

[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz

2016-04-27 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260507#comment-15260507
 ] 

Sam Tunnicliffe commented on CASSANDRA-10091:
-

Yes, JMX would be unavailable until the node has joined the ring, because it is 
only at that point that auth setup happens, which initializes the authenticator, 
authorizer and role manager (related: CASSANDRA-11381).

> Integrated JMX authn & authz
> 
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a JMX 
> server. This would give us the feature without changing the Cassandra code 
> itself. However, I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently built as a premain agent; 
> however, it would be great if we could put this in Cassandra instead.
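
For illustration, the standalone-connector-server approach described above can be 
sketched with nothing but the standard {{javax.management.remote}} API. This is a 
rough, hedged sketch rather than the patch for this ticket: 
{{CassandraPasswordVerifier}} is a hypothetical stand-in for whatever would delegate 
credential checks to Cassandra's internal {{IAuthenticator}}, and authorization/role 
mapping is left out entirely.

{code}
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.JMXAuthenticator;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;
import javax.security.auth.Subject;

public class JmxAuthSketch
{
    /** Hypothetical hook that would delegate to Cassandra's internal authenticator. */
    public interface CassandraPasswordVerifier
    {
        boolean verify(String username, String password);
    }

    public static JMXConnectorServer start(int port, CassandraPasswordVerifier verifier) throws Exception
    {
        LocateRegistry.createRegistry(port);

        // Reject the connection unless the supplied username/password pair checks out.
        JMXAuthenticator authenticator = credentials -> {
            String[] pair = (credentials instanceof String[]) ? (String[]) credentials : null;
            if (pair == null || pair.length != 2 || !verifier.verify(pair[0], pair[1]))
                throw new SecurityException("JMX authentication failed");
            return new Subject(); // principals/roles for authorization would be attached here
        };

        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnectorServer.AUTHENTICATOR, authenticator);

        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
        server.start();
        return server;
    }
}
{code}

A client would then authenticate by passing the usual {{jmx.remote.credentials}} 
username/password pair in its connection environment.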



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift

2016-04-27 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11502:
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   2.2.7
   3.0.6
   3.6
Reproduced In: 2.2.5, 2.1.13  (was: 2.1.13, 2.2.5)
   Status: Resolved  (was: Patch Available)

> Fix denseness and column metadata updates coming from Thrift
> 
>
> Key: CASSANDRA-11502
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11502
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.6, 3.0.6, 2.2.7
>
>
> It was 
> [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472]
>  that we'd be recalculating {{is_dense}} for table updates coming from Thrift 
> on every change. However, due to some oversight, {{is_dense}} can only go 
> from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will 
> not reset {{is_dense}} back to {{false}}.
> The recalculation fails because no matter what happens, we never remove the 
> auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table. 
> This ultimately leads to the issue on the 2.2 to 3.0 upgrade (see 
> CASSANDRA-11315).
> What we should do is remove the special-case for Thrift in 
> {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in 
> {{ThriftConversion::internalFromThrift}} to remove those columns when going 
> from dense to sparse.
> This is not enough to fix CASSANDRA-11315, however, as we need to handle 
> pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting 
> proper schema from {{DESCRIBE}} now and b) using the more efficient 
> {{SparseCellNameType}} when you add columns.
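
For illustration only, here is a minimal sketch of the behaviour described above; it 
is not Cassandra's actual schema code. {{ColumnSpec}}, {{Kind}} and the simplified 
denseness rule (dense iff no declared REGULAR columns) are hypothetical stand-ins; 
the point is only that the recalculated flag must be able to flip back to {{false}}, 
and that the auto-generated {{CLUSTERING}}/{{COMPACT_VALUE}} definitions must be 
dropped when that happens.

{code}
import java.util.List;
import java.util.stream.Collectors;

public final class DensenessSketch
{
    enum Kind { PARTITION_KEY, CLUSTERING, REGULAR, COMPACT_VALUE }

    public static final class ColumnSpec
    {
        final String name;
        final Kind kind;
        ColumnSpec(String name, Kind kind) { this.name = name; this.kind = kind; }
    }

    // Simplified rule assumed for this sketch: a table is dense iff it declares no REGULAR columns.
    static boolean recalculateIsDense(List<ColumnSpec> columns)
    {
        return columns.stream().noneMatch(c -> c.kind == Kind.REGULAR);
    }

    // When an update turns a previously dense table sparse, drop the auto-generated
    // CLUSTERING/COMPACT_VALUE definitions instead of keeping them around forever.
    static List<ColumnSpec> applyUpdate(List<ColumnSpec> updated, boolean wasDense)
    {
        if (wasDense && !recalculateIsDense(updated))
            return updated.stream()
                          .filter(c -> c.kind != Kind.CLUSTERING && c.kind != Kind.COMPACT_VALUE)
                          .collect(Collectors.toList());
        return updated;
    }
}
{code}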



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift

2016-04-27 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260482#comment-15260482
 ] 

Aleksey Yeschenko commented on CASSANDRA-11502:
---

Committed as 
[e5c40278001bf3a9582085a58941e5f4765f118c|https://github.com/apache/cassandra/commit/e5c40278001bf3a9582085a58941e5f4765f118c]
 to 2.2 and merged with 3.0 and trunk, thanks. Did some manual testing w/ 
cqlsh/nodetool to make sure sparse CFs w/ clustering columns don't pass 
{{isThriftCompatibleTest()}}, and it seems like we are all good.

> Fix denseness and column metadata updates coming from Thrift
> 
>
> Key: CASSANDRA-11502
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11502
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> It was 
> [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472]
>  that we'd be recalculating {{is_dense}} for table updates coming from Thrift 
> on every change. However, due to some oversight, {{is_dense}} can only go 
> from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will 
> not reset {{is_dense}} back to {{false}}.
> The recalculation fails because no matter what happens, we never remove the 
> auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table. 
> This ultimately leads to the issue on the 2.2 to 3.0 upgrade (see 
> CASSANDRA-11315).
> What we should do is remove the special-case for Thrift in 
> {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in 
> {{ThriftConversion::internalFromThrift}} to remove those columns when going 
> from dense to sparse.
> This is not enough to fix CASSANDRA-11315, however, as we need to handle 
> pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting 
> proper schema from {{DESCRIBE}} now and b) using the more efficient 
> {{SparseCellNameType}} when you add columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[5/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3079ae60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3079ae60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3079ae60

Branch: refs/heads/trunk
Commit: 3079ae60d29baec262a4b05d7082e88091299d26
Parents: 8bfe09f e5c4027
Author: Aleksey Yeschenko 
Authored: Wed Apr 27 17:55:27 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:57:59 2016 +0100

--
 CHANGES.txt   |  3 ++-
 .../cql3/statements/AlterTableStatement.java  |  2 +-
 .../cassandra/cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java |  2 +-
 .../cql3/statements/CreateTriggerStatement.java   |  2 +-
 .../cassandra/cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java |  2 +-
 .../org/apache/cassandra/schema/SchemaKeyspace.java   | 14 ++
 .../apache/cassandra/service/MigrationManager.java|  8 
 .../org/apache/cassandra/thrift/CassandraServer.java  |  2 +-
 test/unit/org/apache/cassandra/schema/DefsTest.java   | 14 +++---
 .../apache/cassandra/schema/SchemaKeyspaceTest.java   |  2 +-
 .../apache/cassandra/triggers/TriggersSchemaTest.java |  4 ++--
 13 files changed, 25 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/CHANGES.txt
--
diff --cc CHANGES.txt
index bc15d32,3641816..6b6bc1f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,27 -1,11 +1,28 @@@
 -2.2.7
 +3.0.6
 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
 + * Ignore all LocalStrategy keyspaces for streaming and other related
 +   operations (CASSANDRA-11627)
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
   * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
   * Add missing files to debian packages (CASSANDRA-11642)
   * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
   * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
-   report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
+report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
   * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  Merged from 2.1:
   * cqlsh COPY FROM fails for null values with non-prepared statements 
(CASSANDRA-11631)
   * Make cython optional in pylib/setup.py (CASSANDRA-11630)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 3515c6b,f4a7b39..381971f
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@@ -322,61 -284,8 +322,61 @@@ public class AlterTableStatement extend
  break;
  }
  
- MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly);
+ MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly);
 -return true;
 +
 +if (viewUpdates != null)
 +{
 +for (ViewDefinition viewUpdate : viewUpdates)
 +

[6/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread aleksey
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --cc src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index 6e9d44b,000..e3756ec
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@@ -1,1410 -1,0 +1,1400 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.schema;
 +
 +import java.nio.ByteBuffer;
 +import java.nio.charset.CharacterCodingException;
 +import java.security.MessageDigest;
 +import java.security.NoSuchAlgorithmException;
 +import java.util.*;
 +import java.util.concurrent.TimeUnit;
 +import java.util.stream.Collectors;
 +
 +import com.google.common.collect.ImmutableList;
 +import com.google.common.collect.MapDifference;
 +import com.google.common.collect.Maps;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.config.ColumnDefinition.ClusteringOrder;
 +import org.apache.cassandra.cql3.*;
 +import org.apache.cassandra.cql3.functions.*;
 +import org.apache.cassandra.cql3.statements.SelectStatement;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.marshal.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.view.View;
 +import org.apache.cassandra.exceptions.ConfigurationException;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.transport.Server;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.Pair;
 +
 +import static java.lang.String.format;
 +
 +import static java.util.stream.Collectors.toList;
 +import static org.apache.cassandra.cql3.QueryProcessor.executeInternal;
 +import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
 +import static org.apache.cassandra.schema.CQLTypeParser.parse;
 +
 +/**
 + * system_schema.* tables and methods for manipulating them.
 + */
 +public final class SchemaKeyspace
 +{
 +private SchemaKeyspace()
 +{
 +}
 +
 +private static final Logger logger = 
LoggerFactory.getLogger(SchemaKeyspace.class);
 +
 +private static final boolean FLUSH_SCHEMA_TABLES = 
Boolean.valueOf(System.getProperty("cassandra.test.flush_local_schema_changes", 
"true"));
 +
 +public static final String NAME = "system_schema";
 +
 +public static final String KEYSPACES = "keyspaces";
 +public static final String TABLES = "tables";
 +public static final String COLUMNS = "columns";
 +public static final String DROPPED_COLUMNS = "dropped_columns";
 +public static final String TRIGGERS = "triggers";
 +public static final String VIEWS = "views";
 +public static final String TYPES = "types";
 +public static final String FUNCTIONS = "functions";
 +public static final String AGGREGATES = "aggregates";
 +public static final String INDEXES = "indexes";
 +
 +public static final List ALL =
 +ImmutableList.of(KEYSPACES, TABLES, COLUMNS, DROPPED_COLUMNS, 
TRIGGERS, VIEWS, TYPES, FUNCTIONS, AGGREGATES, INDEXES);
 +
 +private static final CFMetaData Keyspaces =
 +compile(KEYSPACES,
 +"keyspace definitions",
 +"CREATE TABLE %s ("
 ++ "keyspace_name text,"
 ++ "durable_writes boolean,"
 ++ "replication frozen>,"
 ++ "PRIMARY KEY ((keyspace_name)))");
 +
 +private static final CFMetaData Tables =
 +compile(TABLES,
 +"table definitions",
 +"CREATE TABLE %s ("
 ++ "keyspace_name text,"
 ++ "table_name text,"
 ++ "bloom_filter_fp_chance double,"
 ++ "caching frozen>,"
 ++ "comment text,"
 ++ "compaction frozen>,"
 ++ "compression frozen

[4/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread aleksey
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --cc src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index 6e9d44b,000..e3756ec
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@@ -1,1410 -1,0 +1,1400 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.schema;
 +
 +import java.nio.ByteBuffer;
 +import java.nio.charset.CharacterCodingException;
 +import java.security.MessageDigest;
 +import java.security.NoSuchAlgorithmException;
 +import java.util.*;
 +import java.util.concurrent.TimeUnit;
 +import java.util.stream.Collectors;
 +
 +import com.google.common.collect.ImmutableList;
 +import com.google.common.collect.MapDifference;
 +import com.google.common.collect.Maps;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.config.ColumnDefinition.ClusteringOrder;
 +import org.apache.cassandra.cql3.*;
 +import org.apache.cassandra.cql3.functions.*;
 +import org.apache.cassandra.cql3.statements.SelectStatement;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.marshal.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.view.View;
 +import org.apache.cassandra.exceptions.ConfigurationException;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.transport.Server;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.Pair;
 +
 +import static java.lang.String.format;
 +
 +import static java.util.stream.Collectors.toList;
 +import static org.apache.cassandra.cql3.QueryProcessor.executeInternal;
 +import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
 +import static org.apache.cassandra.schema.CQLTypeParser.parse;
 +
 +/**
 + * system_schema.* tables and methods for manipulating them.
 + */
 +public final class SchemaKeyspace
 +{
 +private SchemaKeyspace()
 +{
 +}
 +
 +private static final Logger logger = 
LoggerFactory.getLogger(SchemaKeyspace.class);
 +
 +private static final boolean FLUSH_SCHEMA_TABLES = 
Boolean.valueOf(System.getProperty("cassandra.test.flush_local_schema_changes", 
"true"));
 +
 +public static final String NAME = "system_schema";
 +
 +public static final String KEYSPACES = "keyspaces";
 +public static final String TABLES = "tables";
 +public static final String COLUMNS = "columns";
 +public static final String DROPPED_COLUMNS = "dropped_columns";
 +public static final String TRIGGERS = "triggers";
 +public static final String VIEWS = "views";
 +public static final String TYPES = "types";
 +public static final String FUNCTIONS = "functions";
 +public static final String AGGREGATES = "aggregates";
 +public static final String INDEXES = "indexes";
 +
 +public static final List ALL =
 +ImmutableList.of(KEYSPACES, TABLES, COLUMNS, DROPPED_COLUMNS, 
TRIGGERS, VIEWS, TYPES, FUNCTIONS, AGGREGATES, INDEXES);
 +
 +private static final CFMetaData Keyspaces =
 +compile(KEYSPACES,
 +"keyspace definitions",
 +"CREATE TABLE %s ("
 ++ "keyspace_name text,"
 ++ "durable_writes boolean,"
 ++ "replication frozen>,"
 ++ "PRIMARY KEY ((keyspace_name)))");
 +
 +private static final CFMetaData Tables =
 +compile(TABLES,
 +"table definitions",
 +"CREATE TABLE %s ("
 ++ "keyspace_name text,"
 ++ "table_name text,"
 ++ "bloom_filter_fp_chance double,"
 ++ "caching frozen>,"
 ++ "comment text,"
 ++ "compaction frozen>,"
 ++ "compression frozen

[1/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables

2016-04-27 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 3db30aab9 -> e5c402780
  refs/heads/cassandra-3.0 8bfe09f46 -> 3079ae60d
  refs/heads/trunk 2bc5f0c61 -> 5c5cc540f


Fix is_dense recalculation for Thrift-updated tables

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-11502


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278

Branch: refs/heads/cassandra-2.2
Commit: e5c40278001bf3a9582085a58941e5f4765f118c
Parents: 3db30aa
Author: Aleksey Yeschenko 
Authored: Fri Apr 1 17:36:14 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:47:29 2016 +0100

--
 CHANGES.txt |  3 +-
 .../cql3/statements/AlterTableStatement.java|  2 +-
 .../cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java   |  2 +-
 .../cql3/statements/CreateTriggerStatement.java |  2 +-
 .../cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java   |  2 +-
 .../cassandra/schema/LegacySchemaTables.java| 10 +---
 .../cassandra/service/MigrationManager.java |  8 +--
 .../cassandra/thrift/CassandraServer.java   |  2 +-
 .../cassandra/thrift/ThriftConversion.java  | 24 +++-
 .../config/LegacySchemaTablesTest.java  | 60 +++-
 .../org/apache/cassandra/schema/DefsTest.java   | 14 ++---
 .../cassandra/triggers/TriggersSchemaTest.java  |  4 +-
 14 files changed, 103 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e8a301a..3641816 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,9 +1,10 @@
 2.2.7
+ * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)
  * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and
-  report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
+   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * cqlsh COPY FROM fails for null values with non-prepared statements 
(CASSANDRA-11631)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 63a53fa..f4a7b39 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -284,7 +284,7 @@ public class AlterTableStatement extends 
SchemaAlteringStatement
 break;
 }
 
-MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly);
 return true;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
index 6459e6b..9203cf9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
@@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends 
SchemaAlteringStatement
 for (ColumnDefinition def : copy.allColumns())
 modified |= updateDefinition(copy, def, toUpdate.keyspace, 
toUpdate.name, updated);
 if (modified)
-MigrationManager.announceColumnFamilyUpdate(copy, false, 
isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly);
 }
 
 // Other user types potentially using the updated type


[7/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-27 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3079ae60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3079ae60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3079ae60

Branch: refs/heads/cassandra-3.0
Commit: 3079ae60d29baec262a4b05d7082e88091299d26
Parents: 8bfe09f e5c4027
Author: Aleksey Yeschenko 
Authored: Wed Apr 27 17:55:27 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:57:59 2016 +0100

--
 CHANGES.txt   |  3 ++-
 .../cql3/statements/AlterTableStatement.java  |  2 +-
 .../cassandra/cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java |  2 +-
 .../cql3/statements/CreateTriggerStatement.java   |  2 +-
 .../cassandra/cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java |  2 +-
 .../org/apache/cassandra/schema/SchemaKeyspace.java   | 14 ++
 .../apache/cassandra/service/MigrationManager.java|  8 
 .../org/apache/cassandra/thrift/CassandraServer.java  |  2 +-
 test/unit/org/apache/cassandra/schema/DefsTest.java   | 14 +++---
 .../apache/cassandra/schema/SchemaKeyspaceTest.java   |  2 +-
 .../apache/cassandra/triggers/TriggersSchemaTest.java |  4 ++--
 13 files changed, 25 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/CHANGES.txt
--
diff --cc CHANGES.txt
index bc15d32,3641816..6b6bc1f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,27 -1,11 +1,28 @@@
 -2.2.7
 +3.0.6
 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
 + * Ignore all LocalStrategy keyspaces for streaming and other related
 +   operations (CASSANDRA-11627)
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
   * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
   * Add missing files to debian packages (CASSANDRA-11642)
   * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
   * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
-   report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
+report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
   * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  Merged from 2.1:
   * cqlsh COPY FROM fails for null values with non-prepared statements 
(CASSANDRA-11631)
   * Make cython optional in pylib/setup.py (CASSANDRA-11630)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 3515c6b,f4a7b39..381971f
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@@ -322,61 -284,8 +322,61 @@@ public class AlterTableStatement extend
  break;
  }
  
- MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly);
+ MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly);
 -return true;
 +
 +if (viewUpdates != null)
 +{
 +for (ViewDefinition viewUpdate : viewUpdates)
 +

[2/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables

2016-04-27 Thread aleksey
Fix is_dense recalculation for Thrift-updated tables

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-11502


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278

Branch: refs/heads/cassandra-3.0
Commit: e5c40278001bf3a9582085a58941e5f4765f118c
Parents: 3db30aa
Author: Aleksey Yeschenko 
Authored: Fri Apr 1 17:36:14 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:47:29 2016 +0100

--
 CHANGES.txt |  3 +-
 .../cql3/statements/AlterTableStatement.java|  2 +-
 .../cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java   |  2 +-
 .../cql3/statements/CreateTriggerStatement.java |  2 +-
 .../cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java   |  2 +-
 .../cassandra/schema/LegacySchemaTables.java| 10 +---
 .../cassandra/service/MigrationManager.java |  8 +--
 .../cassandra/thrift/CassandraServer.java   |  2 +-
 .../cassandra/thrift/ThriftConversion.java  | 24 +++-
 .../config/LegacySchemaTablesTest.java  | 60 +++-
 .../org/apache/cassandra/schema/DefsTest.java   | 14 ++---
 .../cassandra/triggers/TriggersSchemaTest.java  |  4 +-
 14 files changed, 103 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e8a301a..3641816 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,9 +1,10 @@
 2.2.7
+ * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)
  * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and
-  report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
+   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * cqlsh COPY FROM fails for null values with non-prepared statements 
(CASSANDRA-11631)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 63a53fa..f4a7b39 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -284,7 +284,7 @@ public class AlterTableStatement extends 
SchemaAlteringStatement
 break;
 }
 
-MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly);
 return true;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
index 6459e6b..9203cf9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
@@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends 
SchemaAlteringStatement
 for (ColumnDefinition def : copy.allColumns())
 modified |= updateDefinition(copy, def, toUpdate.keyspace, 
toUpdate.name, updated);
 if (modified)
-MigrationManager.announceColumnFamilyUpdate(copy, false, 
isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly);
 }
 
 // Other user types potentially using the updated type

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java 

[8/8] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-27 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5cc540
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5cc540
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5cc540

Branch: refs/heads/trunk
Commit: 5c5cc540facef9f8645a179e1467ad7edffbda48
Parents: 2bc5f0c 3079ae6
Author: Aleksey Yeschenko 
Authored: Wed Apr 27 17:58:12 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:58:12 2016 +0100

--
 CHANGES.txt   |  3 ++-
 .../cql3/statements/AlterTableStatement.java  |  2 +-
 .../cassandra/cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java |  2 +-
 .../cql3/statements/CreateTriggerStatement.java   |  2 +-
 .../cassandra/cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java |  2 +-
 .../org/apache/cassandra/schema/SchemaKeyspace.java   | 14 ++
 .../apache/cassandra/service/MigrationManager.java|  8 
 .../org/apache/cassandra/thrift/CassandraServer.java  |  2 +-
 test/unit/org/apache/cassandra/schema/DefsTest.java   | 14 +++---
 .../apache/cassandra/schema/SchemaKeyspaceTest.java   |  2 +-
 .../apache/cassandra/triggers/TriggersSchemaTest.java |  4 ++--
 13 files changed, 25 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/service/MigrationManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[3/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables

2016-04-27 Thread aleksey
Fix is_dense recalculation for Thrift-updated tables

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-11502


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278

Branch: refs/heads/trunk
Commit: e5c40278001bf3a9582085a58941e5f4765f118c
Parents: 3db30aa
Author: Aleksey Yeschenko 
Authored: Fri Apr 1 17:36:14 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Apr 27 17:47:29 2016 +0100

--
 CHANGES.txt |  3 +-
 .../cql3/statements/AlterTableStatement.java|  2 +-
 .../cql3/statements/AlterTypeStatement.java |  2 +-
 .../cql3/statements/CreateIndexStatement.java   |  2 +-
 .../cql3/statements/CreateTriggerStatement.java |  2 +-
 .../cql3/statements/DropIndexStatement.java |  2 +-
 .../cql3/statements/DropTriggerStatement.java   |  2 +-
 .../cassandra/schema/LegacySchemaTables.java| 10 +---
 .../cassandra/service/MigrationManager.java |  8 +--
 .../cassandra/thrift/CassandraServer.java   |  2 +-
 .../cassandra/thrift/ThriftConversion.java  | 24 +++-
 .../config/LegacySchemaTablesTest.java  | 60 +++-
 .../org/apache/cassandra/schema/DefsTest.java   | 14 ++---
 .../cassandra/triggers/TriggersSchemaTest.java  |  4 +-
 14 files changed, 103 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e8a301a..3641816 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,9 +1,10 @@
 2.2.7
+ * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
  * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)
  * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and
-  report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
+   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * cqlsh COPY FROM fails for null values with non-prepared statements 
(CASSANDRA-11631)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 63a53fa..f4a7b39 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -284,7 +284,7 @@ public class AlterTableStatement extends 
SchemaAlteringStatement
 break;
 }
 
-MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly);
 return true;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
index 6459e6b..9203cf9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java
@@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends 
SchemaAlteringStatement
 for (ColumnDefinition def : copy.allColumns())
 modified |= updateDefinition(copy, def, toUpdate.keyspace, 
toUpdate.name, updated);
 if (modified)
-MigrationManager.announceColumnFamilyUpdate(copy, false, 
isLocalOnly);
+MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly);
 }
 
 // Other user types potentially using the updated type

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java 

[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz

2016-04-27 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260458#comment-15260458
 ] 

Nick Bailey commented on CASSANDRA-10091:
-

I'm curious how this would behave during a bootstrap operation with auth enabled. Would JMX be unavailable until the relevant auth data had been streamed to the system_auth keyspace?

> Integrated JMX authn & authz
> 
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a jmx 
> server. This would give us the feature without changing the Cassandra code 
> itself. However I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently build as a premain, 
> however it would be great if we could put this in Cassandra instead.
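
For illustration only: a minimal, self-contained sketch of the approach described above, i.e. starting a JMX connector server whose credential checks go through a custom {{JMXAuthenticator}} instead of the usual password file. The port, the hard-coded credential check and the class name are placeholder assumptions for the example; this is not Cassandra's actual implementation (that is what this ticket tracks).

{code:java}
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.*;
import javax.security.auth.Subject;

public class CustomJmxServerSketch
{
    public static void main(String[] args) throws Exception
    {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Custom credential check; a real integration would validate against
        // Cassandra's internal auth instead of this hard-coded placeholder.
        JMXAuthenticator authenticator = credentials -> {
            String[] pair = (String[]) credentials;               // {user, password}
            if (!"cassandra".equals(pair[0]) || !"cassandra".equals(pair[1]))
                throw new SecurityException("Authentication failed");
            return new Subject(true,
                               Collections.singleton(new JMXPrincipal(pair[0])),
                               Collections.emptySet(),
                               Collections.emptySet());
        };

        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnectorServer.AUTHENTICATOR, authenticator);

        // Expose the platform MBeanServer over RMI on an example port.
        LocateRegistry.createRegistry(7199);
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnectorServer server =
            JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
        server.start();
    }
}
{code}

The same wiring could live in a {{premain}} method, as the description suggests, or in the daemon startup once integrated.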



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11514) trunk compaction performance regression

2016-04-27 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260406#comment-15260406
 ] 

Michael Shuler commented on CASSANDRA-11514:


I was unable to find a concrete method to bisect this: I attempted a good number of variations to find a way to call a commit "good" or "bad", but was unsuccessful. Those attempts are recorded on a private JIRA, [CSTAR-478|https://datastax.jira.com/browse/CSTAR-478], which I'm going to close, since I'm currently unsure of how to proceed.

> trunk compaction performance regression
> ---
>
> Key: CASSANDRA-11514
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11514
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cstar_perf
>Reporter: Michael Shuler
>  Labels: performance
> Fix For: 3.x
>
> Attachments: trunk-compaction_dtcs-op_rate.png, 
> trunk-compaction_lcs-op_rate.png
>
>
> It appears that a commit between Mar 29-30 has resulted in a drop in 
> compaction performance. I attempted to get a log list of commits to post 
> here, but
> {noformat}
> git log trunk@{2016-03-29}..trunk@{2016-03-31}
> {noformat}
> appears to be incomplete, since reading through {{git log}} I see netty and 
> och were upgraded during this time period.
> !trunk-compaction_dtcs-op_rate.png!
> !trunk-compaction_lcs-op_rate.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-3486) Node Tool command to stop repair

2016-04-27 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260388#comment-15260388
 ] 

Nick Bailey commented on CASSANDRA-3486:


bq. Do you think a blocking + timeout approach would be preferable?

Maybe. My goal in asking would be to know whether the repair needs to be canceled on other nodes or not. Right now you need to either run the abort on all nodes from the start, or run it on the coordinator and then check the participants to verify that it succeeded there as well.

bq. I personally think we should go this route of making repair more stateful

I agree, especially with the upcoming coordinated repairs in C*.

> Node Tool command to stop repair
> 
>
> Key: CASSANDRA-3486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3486
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: JVM
>Reporter: Vijay
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: repair
> Fix For: 2.1.x
>
> Attachments: 0001-stop-repair-3583.patch
>
>
> After CASSANDRA-1740, If the validation compaction is stopped then the repair 
> will hang. This ticket will allow users to kill the original repair.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10745) Deprecate PropertyFileSnitch

2016-04-27 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260368#comment-15260368
 ] 

Brandon Williams commented on CASSANDRA-10745:
--

I think if people want to continue using PFS, that's fine. I think the best step we can take here is making GPFS not PFS-compatible unless a -D flag is passed. That way we optimize for the new-cluster-with-GPFS case instead of the migration case, since the latter is likely in the minority now.

> Deprecate PropertyFileSnitch
> 
>
> Key: CASSANDRA-10745
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10745
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Distributed Metadata
>Reporter: Paulo Motta
>Priority: Minor
>
> Opening this ticket to discuss deprecating PropertyFileSnitch, since it's 
> error-prone and more snitch code to maintain (See CASSANDRA-10243). Migration 
> from existing cluster with PropertyFileSnitch to GossipingPropertyFileSnitch 
> is straightforward.
> Is there any useful use case that can be achieved only with 
> PropertyFileSnitch?
> If not objections, we would add deprecation warnings in 2.2.x, 3.0.x, 3.2 and 
> deprecate in 3.4 or 3.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed

2016-04-27 Thread Anastasia Osintseva (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260338#comment-15260338
 ] 

Anastasia Osintseva commented on CASSANDRA-11670:
-

I no longer get the "Mutation of Y bytes is too large for the maxiumum size of X" error, but I got this error again:
{noformat}
ERROR [main] 2016-04-27 17:32:24,714 StorageService.java:1300 - Error while 
waiting on bootstrap to complete. Bootstrap will have to be restarted.
java.util.concurrent.ExecutionException: 
org.apache.cassandra.streaming.StreamException: Stream failed
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
 ~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
 ~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
~[guava-18.0.jar:na]
at 
org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1295) 
[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:971)
 [apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:745) 
[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:610) 
[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:333) 
[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) 
[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
[apache-cassandra-3.0.5.jar:3.0.5]
Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
at 
org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) 
~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
 ~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 ~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) 
~[guava-18.0.jar:na]
at 
com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 ~[guava-18.0.jar:na]
at 
org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:186)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:430)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.streaming.StreamSession.maybeCompleted(StreamSession.java:707)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.streaming.StreamSession.taskCompleted(StreamSession.java:668)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:210)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_11]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_11]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_11]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_11]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_11]
{noformat}

> Error while waiting on bootstrap to complete. Bootstrap will have to be 
> restarted. Stream failed
> 
>
> Key: CASSANDRA-11670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11670
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Streaming and Messaging
>Reporter: Anastasia Osintseva
> Fix For: 3.0.5
>
>
> I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each 
> DC. One node has been added successfully after I had made scrubing. 
> Now I'm trying to add node to another DC, but get error: 
> org.apache.cassandra.streaming.StreamException: Stream failed. 
> After scrubing and repair I get the same error.  
> {noformat}
> ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - 
> Unknown exception caught while attempting to update 

[jira] [Updated] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-04-27 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11555:
-
Reviewer: Benjamin Lerer

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).
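
As a rough illustration of the sizing described above (not the actual patch): the default capacity is derived from the heap, and making it configurable essentially means letting an explicit setting override that derived value. The property name below is a made-up example, not a real Cassandra option.

{code:java}
public final class PreparedStatementCacheSizeSketch
{
    private PreparedStatementCacheSizeSketch() {}

    public static long capacityBytes()
    {
        // Default: roughly 1/256th of the JVM heap, as in QueryProcessor today.
        long derivedDefault = Runtime.getRuntime().maxMemory() / 256;

        // Hypothetical override; the real option name/location is up to the patch.
        return Long.getLong("example.prepared_statements_cache_size_bytes", derivedDefault);
    }

    public static void main(String[] args)
    {
        // With -Xmx8G this prints roughly 33554432 (~32 MiB) unless overridden.
        System.out.println(capacityBytes());
    }
}
{code}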



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift

2016-04-27 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260146#comment-15260146
 ] 

Aleksey Yeschenko commented on CASSANDRA-11502:
---

bq. I helped someone a few days ago with an upgrade problem and was able to do an update on a CQL table, but that was on some 2.0 version so it must have been on some version from before we introduced that.

Do you have that table schema handy? I might as well check if the check for 
that fails in 2.1+ and open a new ticket if so.

> Fix denseness and column metadata updates coming from Thrift
> 
>
> Key: CASSANDRA-11502
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11502
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> It was 
> [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472]
>  that we'd be recalculating {{is_dense}} for table updates coming from Thrift 
> on every change. However, due to some oversight, {{is_dense}} can only go 
> from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will 
> not reset {{is_dense}} back to {{false}}.
> The recalculation fails because no matter what happens, we never remove the 
> auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table.
> Which ultimately leads to the issue on 2.2 to 3.0 upgrade (see 
> CASSANDRA-11315).
> What we should do is remove the special-case for Thrift in 
> {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in 
> {{ThriftConversion::internalFromThrift}} to remove those columns when going 
> from dense to sparse.
> This is not enough to fix CASSANDRA-11315, however, as we need to handle 
> pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting 
> proper schema from {{DESCRIBE}} now and b) using the more efficient 
> {{SparseCellNameType}} when you add columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache

2016-04-27 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260127#comment-15260127
 ] 

Branimir Lambov commented on CASSANDRA-5863:


In the latest couple of updates I did some renaming:
- {{BufferlessRebufferer}} to {{ChunkReader}} with {{rebuffer}} to {{readChunk}}
- {{BaseRebufferer}} to {{ReaderFileProxy}}
- {{SharedRebufferer}} to {{RebuffererFactory}} with factory method
- {{ReaderCache}} to {{ChunkCache}}

and updated some of the documentation. Hopefully this reads better now?

Switched to Caffeine as planned in CASSANDRA-11452:
- [better cache 
efficiency|https://docs.google.com/spreadsheets/d/11VcYh8wiCbpVmeix10onalAS4phfREWcxE-RMPTM7cc/edit#gid=0]
 on CachingBench which includes compaction, scans and collation from multiple 
sstables
- [cstar_perf with everything served off 
cache|http://cstar.datastax.com/tests/id/b5963866-0b9a-11e6-a761-0256e416528f] 
shows equivalent performance, i.e. it does not degrade on heavy load
- [cstar_perf on smaller 
cache|http://cstar.datastax.com/tests/id/41b4c650-0c6d-11e6-bf41-0256e416528f] 
shows better hit rate even with uniformly random access patterns (48.8 vs 45.4% 
as reported by nodetool info)
- unlike LIRS, memory overheads are very controlled and specified 
[here|https://github.com/ben-manes/caffeine/wiki/Memory-overhead]: at most 112 
bytes per chunk including key, i.e. 0.2% for 64k chunks to 3% for 4k chunks.

And finally rebased to get dtest in sync:
|[code|https://github.com/blambov/cassandra/tree/5863-page-cache-caffeine-rebased]|[utest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-testall/]|[dtest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-dtest/]|
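
To make the Caffeine mechanics above concrete, here is a tiny standalone sketch of a weight-bounded chunk cache keyed by (file, position). It only illustrates {{maximumWeight}}/{{weigher}} eviction by buffer size; the key type, loader and names are placeholders, not the actual {{ChunkCache}}/{{ChunkReader}} code on the branch.

{code:java}
import java.nio.ByteBuffer;
import java.util.Objects;
import java.util.function.Function;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class ChunkCacheSketch
{
    // Placeholder key: a file path plus a chunk-aligned position within it.
    static final class Key
    {
        final String path;
        final long position;
        Key(String path, long position) { this.path = path; this.position = position; }
        @Override public boolean equals(Object o)
        {
            return o instanceof Key && ((Key) o).position == position && ((Key) o).path.equals(path);
        }
        @Override public int hashCode() { return Objects.hash(path, position); }
    }

    private final Cache<Key, ByteBuffer> cache;

    ChunkCacheSketch(long capacityBytes)
    {
        // Bound the cache by total buffer size rather than entry count.
        cache = Caffeine.newBuilder()
                        .maximumWeight(capacityBytes)
                        .weigher((Key k, ByteBuffer v) -> v.capacity())
                        .build();
    }

    // Return the cached chunk, or read and cache it if absent.
    ByteBuffer getOrRead(Key key, Function<Key, ByteBuffer> readChunk)
    {
        return cache.get(key, readChunk);
    }
}
{code}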

> In process (uncompressed) page cache
> 
>
> Key: CASSANDRA-5863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: T Jake Luciani
>Assignee: Branimir Lambov
>  Labels: performance
> Fix For: 3.x
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a 
> byte[], sends it to ICompressor, gets back another byte[] and verifies a 
> checksum.  
> This process is where the majority of time is spent in a read request.  
> Before compression, we would have zero-copy of data and could respond 
> directly from the page-cache.
> It would be useful to have some kind of Chunk cache that could speed up this 
> process for hot data, possibly off heap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift

2016-04-27 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260110#comment-15260110
 ] 

Sylvain Lebresne commented on CASSANDRA-11502:
--

bq. but I, instead, feel more paranoid about leaving it in

Fair enough, I'm good getting rid of it.

bq. I think we should be safe here b/c of the {{isThriftCompatible()}} guard in 
{{CassandraServer::system_update_column_family()}}.

You're right. I got confused because I helped someone a few days ago with an upgrade problem and was able to do an update on a CQL table, but that was on some 2.0 version, so it must have been from before we introduced that. It would still be great to double check, but +1 on the patch in any case.

> Fix denseness and column metadata updates coming from Thrift
> 
>
> Key: CASSANDRA-11502
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11502
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> It was 
> [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472]
>  that we'd be recalculating {{is_dense}} for table updates coming from Thrift 
> on every change. However, due to some oversight, {{is_dense}} can only go 
> from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will 
> not reset {{is_dense}} back to {{false}}.
> The recalculation fails because no matter what happens, we never remove the 
> auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table.
> Which ultimately leads to the issue on 2.2 to 3.0 upgrade (see 
> CASSANDRA-11315).
> What we should do is remove the special-case for Thrift in 
> {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in 
> {{ThriftConversion::internalFromThrift}} to remove those columns when going 
> from dense to sparse.
> This is not enough to fix CASSANDRA-11315, however, as we need to handle 
> pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting 
> proper schema from {{DESCRIBE}} now and b) using the more efficient 
> {{SparseCellNameType}} when you add columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11662) Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14

2016-04-27 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-11662.
---
   Resolution: Duplicate
Fix Version/s: (was: 2.1.x)

> Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14
> ---
>
> Key: CASSANDRA-11662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11662
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: cassandra server 2.1.5 and java jdk1.7.0_101-b14
>Reporter: William Boutin
>
> We have the Cassandra Server 2.1.5 running. When we applied java patch java 
> jdk1.7.0_101-b14, cassandra will not start. The cassandra log states 
> "Cassandra 2.0 and later require Java 7u25 or later".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed

2016-04-27 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260041#comment-15260041
 ] 

Paulo Motta commented on CASSANDRA-11670:
-

This is strange; can you double check that none of your nodes in any data center have a custom {{commitlog_segment_size_in_mb}} or {{max_mutation_size_in_kb}} configuration set?

Also, can you verify in {{system.log}} during node initialization that {{commitlog_segment_size_in_mb=128}} was picked up by the configuration when you changed it, and that {{max_mutation_size_in_kb=null}}? Maybe check that on the other nodes as well to see if you find any strange combination.

> Error while waiting on bootstrap to complete. Bootstrap will have to be 
> restarted. Stream failed
> 
>
> Key: CASSANDRA-11670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11670
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Streaming and Messaging
>Reporter: Anastasia Osintseva
> Fix For: 3.0.5
>
>
> I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each 
> DC. One node has been added successfully after I had made scrubing. 
> Now I'm trying to add node to another DC, but get error: 
> org.apache.cassandra.streaming.StreamException: Stream failed. 
> After scrubing and repair I get the same error.  
> {noformat}
> ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - 
> Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages
> java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large 
> for the maxiumum size of 33554432
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169)
>  [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_11]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_11]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_11]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_11]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11]
> ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 
> StreamReceiveTask.java:214 - Error applying streamed data: 
> java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large 
> for the maxiumum size of 33554432
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) 
> 

[jira] [Commented] (CASSANDRA-11662) Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14

2016-04-27 Thread William Boutin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15260003#comment-15260003
 ] 

William Boutin commented on CASSANDRA-11662:


Thank you for the replies. How do I close my duplicate request?


Billy S. Boutin 
Office Phone No. (913) 241-5574 
Cell Phone No. (732) 213-1368 
LYNC IM: william.bou...@ericsson.com 




> Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14
> ---
>
> Key: CASSANDRA-11662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11662
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: cassandra server 2.1.5 and java jdk1.7.0_101-b14
>Reporter: William Boutin
> Fix For: 2.1.x
>
>
> We have the Cassandra Server 2.1.5 running. When we applied java patch java 
> jdk1.7.0_101-b14, cassandra will not start. The cassandra log states 
> "Cassandra 2.0 and later require Java 7u25 or later".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259985#comment-15259985
 ] 

Aleksey Yeschenko commented on CASSANDRA-11137:
---

It is a bug, and something that should normally be backported. It's also 
potentially a breaking behaviour change for JSON consumers. That said, I think 
the benefits of fixing the bug outweigh that risk.

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?
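
A small sketch of the difference being discussed, assuming plain {{SimpleDateFormat}} patterns (the exact pattern Cassandra ends up using may differ): without a zone designator the rendered string is only meaningful relative to the server's timezone, while appending one and pinning the formatter to UTC makes the JSON value self-describing.

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimestampFormatDemo
{
    public static void main(String[] args)
    {
        Date ts = new Date();

        // No zone designator: ambiguous unless the reader knows the server's zone.
        SimpleDateFormat ambiguous = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");

        // 'Z' appends an RFC 822 offset (e.g. +0000); pin the formatter to UTC.
        SimpleDateFormat unambiguous = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSZ");
        unambiguous.setTimeZone(TimeZone.getTimeZone("UTC"));

        System.out.println(ambiguous.format(ts));    // e.g. 2016-01-04 16:05:47.123
        System.out.println(unambiguous.format(ts));  // e.g. 2016-01-04 15:05:47.123+0000
    }
}
{code}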



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2016-04-27 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259981#comment-15259981
 ] 

Benjamin Lerer commented on CASSANDRA-10783:


My plan is to review it as soon as possible.

> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException:  message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> [101]...)">
> {code}
>  It would be nice to allow literal value as parameter of UDF and UDA too.
>  I was thinking about an use-case for an UDA groupBy() function where the end 
> user can *inject* at runtime a literal value to select which aggregation he 
> want to display, something similar to GROUP BY ... HAVING 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259946#comment-15259946
 ] 

Stefania commented on CASSANDRA-11137:
--

I'm +1 on the dtest PR, assuming the test team is also OK with using ellipses 
to relax the output checks.

We shouldn't backport a patch just because of test limitations, but I've noticed that this ticket is classified as a bug, so back-porting it might be the correct thing to do after all. Do you agree that it should be back-ported to 2.2 and 3.0, [~iamaleksey] or [~slebresne]?

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column

2016-04-27 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259926#comment-15259926
 ] 

Chris Lohfink commented on CASSANDRA-11655:
---

Merged with trunk and (per CASSANDRA-11656) changed timestamps to always print a consistent ISO 8601 string. Added a {{-t}} option to print timestamps out like before.
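
For reference, the {{tstamp}} values in the dump output are microseconds since the epoch; below is a minimal sketch of turning one into an ISO 8601 string ({{Instant.toString()}} renders ISO 8601 in UTC). This only illustrates the conversion; the exact output format produced by sstabledump is whatever the patch defines.

{code:java}
import java.time.Instant;

public class MicrosToIso
{
    public static void main(String[] args)
    {
        long tstampMicros = 1461645231352208L;   // value from the dump below

        // Split microseconds into whole seconds plus the leftover as nanoseconds.
        Instant instant = Instant.ofEpochSecond(tstampMicros / 1_000_000L,
                                                (tstampMicros % 1_000_000L) * 1_000L);

        System.out.println(instant);             // 2016-04-26T04:33:51.352208Z
    }
}
{code}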

> sstabledump doesn't print out tombstone information for deleted collection 
> column
> -
>
> Key: CASSANDRA-11655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11655
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Wei Deng
>Assignee: Chris Lohfink
>  Labels: Tools
> Attachments: CASSANDRA-11655.patch, trunk-11655v2.patch
>
>
> Pretty trivial to reproduce.
> {noformat}
> echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
> echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, 
> val1_set_of_int set<int>, PRIMARY KEY (k, c) );" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int, val1_set_of_int) VALUES (1, 
> 'c1', 100, {1, 2, 3, 4, 5});" | cqlsh
> echo "delete val1_set_of_int from testks.testcf where k=1 and c='c1';" | cqlsh
> echo "select * from testks.testcf;" | cqlsh
> nodetool flush testks testcf
> {noformat}
> Now if you run sstabledump (even after taking the 
> [patch|https://github.com/yukim/cassandra/tree/11654-3.0] for 
> CASSANDRA-11654) against the newly generated SSTable like the following:
> {noformat}
> ~/cassandra-trunk/tools/bin/sstabledump ma-1-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "clustering" : [ "c1" ],
> "liveness_info" : { "tstamp" : 1461645231352208 },
> "cells" : [
>   { "name" : "val0_int", "value" : "100" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> You will see that the collection-level Deletion Info is nowhere to be found, 
> so you will not be able to know "markedForDeleteAt" or "localDeletionTime" 
> for this collection tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column

2016-04-27 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-11655:
--
Attachment: trunk-11655v2.patch

> sstabledump doesn't print out tombstone information for deleted collection 
> column
> -
>
> Key: CASSANDRA-11655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11655
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Wei Deng
>Assignee: Chris Lohfink
>  Labels: Tools
> Attachments: CASSANDRA-11655.patch, trunk-11655v2.patch
>
>
> Pretty trivial to reproduce.
> {noformat}
> echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
> echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, 
> val1_set_of_int set<int>, PRIMARY KEY (k, c) );" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int, val1_set_of_int) VALUES (1, 
> 'c1', 100, {1, 2, 3, 4, 5});" | cqlsh
> echo "delete val1_set_of_int from testks.testcf where k=1 and c='c1';" | cqlsh
> echo "select * from testks.testcf;" | cqlsh
> nodetool flush testks testcf
> {noformat}
> Now if you run sstabledump (even after taking the 
> [patch|https://github.com/yukim/cassandra/tree/11654-3.0] for 
> CASSANDRA-11654) against the newly generated SSTable like the following:
> {noformat}
> ~/cassandra-trunk/tools/bin/sstabledump ma-1-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "clustering" : [ "c1" ],
> "liveness_info" : { "tstamp" : 1461645231352208 },
> "cells" : [
>   { "name" : "val0_int", "value" : "100" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> You will see that the collection-level Deletion Info is nowhere to be found, 
> so you will not be able to know "markedForDeleteAt" or "localDeletionTime" 
> for this collection tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2016-04-27 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259922#comment-15259922
 ] 

DOAN DuyHai commented on CASSANDRA-10783:
-

OK, so it'll be in 3.8 then.

> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException:  message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> [101]...)">
> {code}
>  It would be nice to allow literal value as parameter of UDF and UDA too.
>  I was thinking about an use-case for an UDA groupBy() function where the end 
> user can *inject* at runtime a literal value to select which aggregation he 
> want to display, something similar to GROUP BY ... HAVING 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2016-04-27 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259895#comment-15259895
 ] 

Benjamin Lerer commented on CASSANDRA-10783:


Sorry guys, I underestimated the time that I needed for some other tasks. Taking into account that the code freeze for 3.6 is on Monday and that I still have several reviews with higher priority, I do not think that this ticket will make it.

> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException:  message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> [101]...)">
> {code}
>  It would be nice to allow literal value as parameter of UDF and UDA too.
>  I was thinking about an use-case for an UDA groupBy() function where the end 
> user can *inject* at runtime a literal value to select which aggregation he 
> want to display, something similar to GROUP BY ... HAVING 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11629) java.lang.UnsupportedOperationException when selecting rows with counters

2016-04-27 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11629:

Status: Patch Available  (was: Open)

Patch for {{3.0}} and {{trunk}}:

|[trunk|https://github.com/ifesdjeen/cassandra/tree/11629-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-trunk-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/11629-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-3.0-dtest/]|

I have also added paging tests with counter columns in 
[dtest|https://github.com/riptano/cassandra-dtest/pull/956].

The {{dtest}} failures on 3.0 are "known issues" that existed before the patch: [11650|https://issues.apache.org/jira/browse/CASSANDRA-11650] and [11127|https://issues.apache.org/jira/browse/CASSANDRA-11127]. Tests are passing locally.

> java.lang.UnsupportedOperationException when selecting rows with counters
> -
>
> Key: CASSANDRA-11629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11629
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04 LTS
> Cassandra 3.0.5 Community Edition
>Reporter: Arnd Hannemann
>Assignee: Alex Petrov
>  Labels: 3.0.5
> Fix For: 3.6, 3.0.x
>
>
> When selecting a non empty set of rows with counters a exception occurs:
> {code}
> WARN  [SharedPool-Worker-2] 2016-04-21 23:47:47,542 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.5.jar:3.0.5]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:172)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:202)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:169) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:619)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:258)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:246)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:236)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> 

[jira] [Updated] (CASSANDRA-11650) dtest failure in json_test.ToJsonSelectTests.complex_data_types_test

2016-04-27 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11650:

Status: Patch Available  (was: Open)

> dtest failure in json_test.ToJsonSelectTests.complex_data_types_test
> 
>
> Key: CASSANDRA-11650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11650
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/ToJsonSelectTests/complex_data_types_test
> Failed on CassCI build cassandra-2.2_dtest #585



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11650) dtest failure in json_test.ToJsonSelectTests.complex_data_types_test

2016-04-27 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259841#comment-15259841
 ] 

Alex Petrov commented on CASSANDRA-11650:
-

The problem was caused by [11137|https://issues.apache.org/jira/browse/CASSANDRA-11137]. I've opened a [PR to dtest|https://github.com/riptano/cassandra-dtest/pull/955] that fixes the inconsistencies between 3.0 (/2.2) and 3.6.

> dtest failure in json_test.ToJsonSelectTests.complex_data_types_test
> 
>
> Key: CASSANDRA-11650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11650
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/ToJsonSelectTests/complex_data_types_test
> Failed on CassCI build cassandra-2.2_dtest #585



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2016-04-27 Thread Ajeet Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259840#comment-15259840
 ] 

Ajeet Singh commented on CASSANDRA-10783:
-

Thanks, Robert Stupp and Benjamin Lerer.
It would be great if this were available in 3.6.

Signature of my UDF:
CREATE OR REPLACE FUNCTION spatial_keyspace.state_group_and_max( state   
map, type text, pkey int, level int)
CQL Query:
select  spatial_keyspace.group_and_count(quadkey, pkey, %level_bind_parameter%) 
from spatial_keyspace.businesspoints where longitude >= -179.98333 and 
longitude <=86 and latitude >= -179.98333 and latitude <= 86 LIMIT 10 ALLOW 
FILTERING;
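
For illustration, a minimal sketch of how the query above might read once literal UDF 
arguments are supported; the level value 15 is purely hypothetical and simply stands in 
for %level_bind_parameter%:
{code:sql}
-- Hypothetical literal zoom level (15) passed directly as the last UDF argument
SELECT spatial_keyspace.group_and_count(quadkey, pkey, 15)
FROM spatial_keyspace.businesspoints
WHERE longitude >= -179.98333 AND longitude <= 86
  AND latitude >= -179.98333 AND latitude <= 86
LIMIT 10 ALLOW FILTERING;
{code}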


> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException:  message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> [101]...)">
> {code}
>  It would be nice to allow literal values as parameters of UDFs and UDAs too.
>  I was thinking about a use case for a UDA groupBy() function where the end 
> user can *inject* a literal value at runtime to select which aggregation they 
> want to display, something similar to GROUP BY ... HAVING 
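
To make the use case above concrete, a purely speculative sketch: the {{groupBy()}} 
aggregate, the {{users}} table and the 'count'/'max' selector literals are all invented 
here, only to show a literal argument choosing the aggregation at query time.
{code:sql}
-- Hypothetical UDA whose second argument is a literal selecting the aggregation
SELECT groupBy(country, 'count') FROM users;
SELECT groupBy(country, 'max') FROM users;
{code}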



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov resolved CASSANDRA-11137.
-
Resolution: Fixed

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?
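
For reference, a sketch of the kind of timezone-qualified output the description asks 
for; the values and the offset notation are illustrative only and not taken from the 
committed patch:
{code}
cqlsh:events> select toJson(created_at) AS created_at from event_by_user_timestamp ;

 created_at
--------------------------------
 "2016-01-04 15:05:47.123+0000"

(1 rows)
{code}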



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259837#comment-15259837
 ] 

Alex Petrov edited comment on CASSANDRA-11137 at 4/27/16 9:19 AM:
--

I've also branched the changes to 2.2 and 3.0 (they merged mostly seamlessly), in case 
we would like to have backported versions: 

|[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]|

I've also [opened up a PR that fixes the 
tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track the 
progress in the corresponding [test team 
issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. 


was (Author: ifesdjeen):
I've also branched the changes to 2.2 and 3.0 (they merged mostly seamlessly)
|[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]|

I've also [opened up a PR that fixes the 
tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track the 
progress in the corresponding [test team 
issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. 

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-27 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15259837#comment-15259837
 ] 

Alex Petrov commented on CASSANDRA-11137:
-

I've also branched the changes to 2.2 and 3.0 (they merged mostly seamlessly)
|[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]|

I've also [opened up a PR that fixes the 
tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track the 
progress in the corresponding [test team 
issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. 

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

